Episode Transcript
S1 (00:18):
So here we are at the house. It is after RSA.
It's been a crazy week and a half. Um, what's
on your mind? What's the takeaway? We probably have several. Yeah.
S2 (00:30):
Yeah. I mean, uh, I feel like this week there
was a lot of fun, but also, you and I
had the opportunity to go to quite a few events,
and there was less FUD at those events, I feel like.
And some real innovation that we saw, uh, which surprisingly
was off the RSA show floor, I think. Right.
S1 (00:47):
Yeah. And it seems like that's like the big lesson
we learned is like, how do we figure out how
to do more of that next year?
S2 (00:52):
Yeah. Exactly.
S1 (00:53):
Yeah. And and less of like the I feel like,
how long have we been doing this? Like 15 years
or something. Yeah, yeah. And, um, at first it's like,
how do you get into the we're not going to
name vendors, but how do you get into the big
vendor conference that has, like the best music and the best.
S2 (01:12):
Big parties and stuff?
S1 (01:13):
Yeah, the big party.
S2 (01:14):
Yeah.
S1 (01:15):
And now we're just like, how can we see our
friends go to like, an event that's like smaller? Um,
I don't know. It's not necessarily that it's harder to
get into, but it seems like that's where the better
conversations happen.
S2 (01:30):
Yeah. I mean, well, I mean, RSA is like everybody
trying to grab like a mic, right? And scream as
loud as they can and get people to listen to whatever.
And the parties are a mechanism for that. The advertising
on the show floor of the booth and everything. But
I think that more and more as we go on,
there's satellite events which aren't at Moscone, and they're usually
(01:54):
held at like companies offices if they have the space or, um,
or like an Airbnb or like some kind of rental thing.
And uh, and they're very on, you know, specific topics.
They're not generalized. They're very specialized usually. And so we
went to quite a few of those this week. Um,
and that ended up being, at least for me, it
ended up being like like the kind of jewel of
(02:16):
the week, right? Like, I did a ton of speaking
and stuff, but, um, I learned the most, and I felt, uh,
I felt the most impressed and kind of, like, hopeful
from from those those events. Yeah.
S1 (02:26):
Yeah. Yeah. So we both did a bunch of, like,
talks and panels and stuff like that. Um, I went
to three of yours. Um, yeah, some really, really good stuff.
And that was, that was a pretty new talk that
you put together about the methodology stuff. That was really cool.
S2 (02:43):
Yeah.
S1 (02:44):
Yeah. And then, um, I would say that the, the
big thing was yesterday. Right?
S2 (02:50):
Yeah. Yeah.
S1 (02:51):
So that was like the funnest thing to do.
S2 (02:53):
Oh yeah. By far. So yesterday was the, uh, OpenAI
security research conference, their first ever one. Uh, and there's
about a hundred hundred people there, I would say. And, um,
talking about everything from, you know, protecting AI, AI agents
from attack to automating security workflows with AI. And this
(03:18):
was like no BS, right? I mean, this is like, yeah,
this is like, you know, academics talking about things that
were brand new, you know, new new methodologies, new ways
to train agents, new evals, new everything new. I mean,
we saw some new models dropped in that in that thing.
We can't really talk about them. But, um, yeah, we
saw some really cool stuff. And then, uh, also just
(03:38):
like it was kind of crazy who was in that room.
And we got to sit like a couple feet away
from Sam Altman and ask him questions. He did a Q&A. Um,
you know, Matt Knight, the CISO of OpenAI, was answering questions. Ian,
who's a good friend of ours, was opening questions. Yeah. Uh, so, yeah,
it was really, um, you know, open in feeling and
in kind of like crowd. Everybody was like, having good
(03:59):
conversations around it. And I felt like I learned a
ton from that, that content specifically.
S1 (04:04):
Yeah. More than, like, the whole week and a half before. Yeah.
S2 (04:07):
Yeah. It was. Yeah. I mean, I had a really
good one that was like secondary to that, which was
I went to Airbnb's off-site, um,
thing and uh, so they did it one day at
their offices where they brought in speakers and a mutual
acquaintance of ours. Keith spoke there on a panel about, um,
AI and security. And that one was interesting because you
(04:28):
get the... I felt like, at the OpenAI thing, you were at the cutting edge of research, and at
the Airbnb one, I felt like I was at the
cutting edge of implementation from a point of view of like, businesses. Right.
Because like at the academic level, at the OpenAI one, um,
and even the enterprises there, the people who were talking are at the cutting edge, and they're also well
(04:51):
funded and they're incentivized to do some really cool research. Right.
At the Airbnb one, it was more companies talking about
their implementation workflows. You know, how they were using AI
in the.
S1 (05:03):
Like on the ground?
S2 (05:03):
Yeah, on the ground. Yeah. So it was a different view. Um,
but I thought it was really cool. Like Adobe talked
about like their architecture for their agent based security vulnerability
management system. Um, you know, Google was there talking about
some stuff, and it was just it was it was
really cool. So.
S1 (05:19):
Yeah. So you you gave the talk at open AI
as well. So just, just give it like an overview
of like the talk like the methodology and stuff.
S2 (05:28):
Yeah. So I do this class called, uh, Attacking AI, which you've been to. And um, the whole class is basically my methodology for AI pentesting. And when I say AI pentesting, people are like, oh, you mean AI red teaming? And I actually don't. So what I find
at least visiting and talking to other experts, is that
(05:49):
AI red teaming has been around for a long time,
and they have cemented that term. And that is usually
about attacking the model, right? Like the model in place
can speak harm, can speak bias. It can tell you
how to cook meth, you know, and that is that
is stuff that happens in, you know, this one vertical
of attacking AI, which is the model, When you do
an AI pen test, you not only have to assess
(06:11):
the model and what it will say to your users,
but you have to assess the implementation of the model.
You have to assess, um, everything else that's hooked up
to like all of the DevSecOps tools that do logging
and observability and all this other stuff around it. Um,
and so it ends up being, uh, a hybrid web test,
an API test, and then also AI red teaming, and
(06:32):
then also now you have in order to get these
systems to do things for you, like agents that are
hooked up to tools and APIs and stuff like that,
you also have to get them to, um, accept prompt
injection through security gates, which is like classifiers and guardrails.
So the talk was basically our methodology at a high
level on the whole pen testing process, um, which has
(06:52):
seven steps. And then the second part was our prompt
injection taxonomy, which is really like a taxonomy to sneak
attacks through classifiers and guardrails. Um, and so we open
sourced a tool about a month ago. It's called the
Arcanum Prompt Injection taxonomy. And it goes through all of
these tips and tricks to do that. And so we
split it up into four levels, or
(07:15):
we call them four prompt injection primitives. One is, um,
your intent, like what you're trying to do to the
AI system, is it, you know, get it to do
some of those red teaming things like speak harm and
bias or is it like get it to leak its
system prompt, or do you want to jailbreak it entirely,
or do you want to do something completely different? Right.
And then we have three other sections. We have, uh, techniques. Um,
(07:35):
is one of our primitives, and techniques is how you execute the attack. There's framing, you can do narrative injection, you can do all this kind of crazy stuff. And
then we have evasions, which are, um, the idea of like,
it kind of feels like WAF bypass, right? Where we
do a lot of tricky encoding. But there's even more
in the prompt injection world than WAF bypass. And these
get you past the security products like, um, classifiers and guardrails.
(07:57):
And so we talked about we talked about all of
those and utilities we had made. And that was the talk.
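For a rough sense of what tagging a payload with those four primitives could look like, here is a minimal sketch; the class, field names, and example values are illustrative assumptions, not the actual Arcanum taxonomy schema:

```python
from dataclasses import dataclass

@dataclass
class InjectionAttempt:
    """One prompt-injection payload tagged with the four primitives
    described above. Field names are illustrative, not the Arcanum schema."""
    intent: str     # what you want the AI system to do (e.g. leak its system prompt)
    technique: str  # how the attack is framed (e.g. narrative injection)
    evasion: str    # how it slips past classifiers and guardrails (e.g. encoding tricks)
    utility: str    # any helper or tooling used to build or mutate the payload
    payload: str    # the actual text sent to the model

attempt = InjectionAttempt(
    intent="system_prompt_leak",
    technique="narrative_framing",
    evasion="base64_encoding",
    utility="manual",
    payload="...",  # payload text intentionally elided
)
```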
S1 (08:01):
So yeah, it's really good. And I've been watching all these and it's absolutely the best. Thanks, man. Yeah,
and it's like it's presented really well as well. Um,
and by the way, I'm going to talk to the
camera here for a second. So on the way up here,
we were like, we're just going to have a conversation. Yeah. Yeah.
You know what's hilarious? We sat down and became podcasters.
S2 (08:22):
I know right.
S1 (08:24):
We're like. We're like, all right, let's talk about the content.
And like, so I don't know how we break out
of that because it's just.
S2 (08:31):
I think we can I think we can.
S1 (08:32):
Well, we'll just let it run and hopefully we, uh, like, relax. Um, yeah.
So what else? Um, what else should we cover?
S2 (08:42):
So I had a whole bunch of notes on my phone,
and then my phone battery died. In fact, we were
we were trying to set up with, uh, my new
DJI setup, which I like, which Julia got me for Christmas.
A friend of ours, Ron Foster, recommended the whole stack,
like the DJI camera and DJI wireless mic, so I
could go to like conferences and have quick interviews with people.
And of course one couldn't set it up right. And
then two, um, phone ran out of battery.
S1 (09:05):
Almost has it set up. Phone dies.
S3 (09:07):
Almost had it set up. The thing dies.
S2 (09:09):
So. So we're going to come in recording in the
in the command center for you. But um, but yeah.
So I had some notes on my phone and, uh,
the notes are just like how at least I felt
on the floor. You got to walk the floor, right?
S1 (09:23):
Yeah, a little bit. Yeah.
S2 (09:24):
So we didn't sync up till later in the week, right. Like,
we mostly had our own stuff in the beginning and
then we synced up later. Um, but, uh, I had a note on my phone on how the floor was very funny this year. Right. And also, I
had these weird moments of, like, justification. It was like, um, like,
everyone last year said they were going to completely automate
all these security workflows. Right? And I distinctly remember, like,
(09:45):
writing these notes last year. And then this year I
walked the floor and everybody has changed their tune. It's like, oh,
AI-assisted, like, you know, power your people, scale your team,
you know? And it's.
S3 (09:55):
Like.
S1 (09:56):
Turns out that was harder than we thought.
S3 (09:57):
Yeah.
S2 (09:57):
Turns out turns out not possible. And I think I
had this conversation with you on the drive and it
was like even you and I have friends who are
starting companies who were trying to do that, they're trying
to automate a workflow. And we saw it at the
OpenAI thing, too. People are like, yeah, we're not close
to full automation yet, and we're very far away from
a place where the models can do full automation. Even
the people in the AIxCC competition who are going for
(10:21):
those cyber reasoning systems at Defcon, like they were like, yeah,
like a lot of this is still algorithmic. Um, you know,
tool based automation, but with like a lot of AI
glue at the I think they said like the framework level. Yeah.
And the, um, the framework and the organization levels are
the parts where AI actually, like, really helps them. But, um,
(10:42):
hearing that from or like seeing that on the floor
and then also hearing it from the people at OpenAI,
just like kind of cemented we're pretty far away from
fully autonomous systems anywhere. Um, you know, I was really
impressed by, I don't want to name any vendors or whatever, but, like, there were some demos at the OpenAI thing that
were pretty, pretty sick in the web testing world.
S1 (10:58):
Um, yeah.
S2 (10:59):
And so that was the one I was like, oh, okay.
Like like they're getting close. And I think you and I.
S1 (11:04):
At least in that one domain.
S2 (11:05):
In that one domain. Yeah. Web testing. And then you
and I were looking at some friends who were in
that room, and I think they were a little crestfallen
to see how far that one place had gotten with
autonomous testing.
S1 (11:14):
They were there live learning how far their competitors were.
S2 (11:17):
So.
S1 (11:17):
Bad. They're just like, oh, I'm excited. Oh, yeah.
S2 (11:20):
Yeah, yeah. Um, but that was that was cool. You know,
it was cool to, um, get to ask Q&A of, uh, Altman. Um, and, uh, uh,
you know, so we, you know, it wasn't like we
got one on one time with them, right? It was
just like, we just. We got. We were.
S1 (11:35):
In the.
S2 (11:35):
Front row, though. Yeah, I snagged a seat, so I
was I was.
S1 (11:37):
Yeah, that.
S2 (11:38):
Was good. I left lunch early to get a seat
right in front, but, um, so. Yeah. So we're, we're
in the front row. And one of the questions to, uh, Sam Altman, if you don't know, he's the head of OpenAI, the CEO, and one of the questions was like, what is the security thing that you worry about? Right? And Sam is
(11:58):
the CEO. He's the CEO, right. So he's not in
security every day. But he's a smart dude. Yeah, yeah. And, um.
And he was talking about, uh, he was like. Like
he starts launching into his answer, presupposing he's like, yeah.
So when we get these, like, you know, fully context-aware agents that have everything about my life, like, written down,
you know, and has all this information and can make
(12:19):
these really great intuitive decisions for me and stuff like that.
Like what happens when that gets hacked, your whole ethos,
your whole stack of who you are as a person,
what you like to use, what you like to watch,
what you like to hear, what you like to eat.
And so many things can be intuited from that as
well about you as a person. What happens when that leaks? Right.
And I'm just there and I'm like, hitting you. I'm like, Dan, Dan.
S1 (12:41):
Grabs my leg.
S2 (12:42):
Yeah, yeah. Like grabbing his leg because. Because you've been
talking about this for like ten years, right? Like, I
remember the first few blogs about the first iterations of, like,
kind of unified context or context about your life, right?
And then you do it with your tlos files as
a person, and you also do it for companies to
understand the ethos of the company. And Sam just launched
(13:03):
right into that. Yeah, and it was like presupposed. And
I'm like, you motherfucker. Like.
S1 (13:08):
Yeah, I was so excited. I'm like, can I start talking?
S2 (13:10):
Like, yeah, yeah that's great. Yeah, it was great. But
he also said the risk is that, you know, I guess in your analogy, the Telos file gets leaked
right somehow. And then people know, like what you're about
and how they can specifically adversarially market to you, influence you. Yeah.
You know, and if you're really open with your personal
(13:31):
assistant or, or your, um, you know, whatever ends up
collecting that information, you're really open to that, like some
of your, like, idiosyncrasies, some of your, like psychological stuff.
S1 (13:40):
Well, it's just like, you know, I think the most powerful,
powerful version of this is like, you have your, um,
your journal in there. Yeah. And you're just constantly complaining
about your mother in law, right? Yeah. And that gets hacked.
And so now my mother.
S2 (13:55):
In law's great, by.
S1 (13:56):
The way. And now now that becomes a, um, it
becomes like a an extortion email. Yeah, right. Someone could
just be like, do you want me to send this
to your mother in law or just send me, you know,
$20 or whatever? And it's like, that's worth it. I
don't want to. I don't want to have that fight
at dinner.
S2 (14:13):
Yeah, that's. I mean, $20.
S1 (14:14):
Seems like a good.
S2 (14:15):
Ransomware price, right? Like ransomware operators are listening. Like, I'll
pay 28 bucks. That's cool.
S1 (14:21):
Yeah, yeah. But but to your point, it's like your
entire personality. It's like your entire soul. So that's. Yeah,
that's a lot of content to lose.
S2 (14:29):
Yeah. One of the things we're talking about in the
car was how we saw so many. And it sucks.
But like friends, colleagues working for these places, hanging their
hats on these AI features or even companies that are
completely based around AI, and there's no there's no moat
for them, like it is going to be disrupted either
by because we then we went to the OpenAI thing,
(14:50):
saw what they're doing.
S1 (14:51):
Yeah. And destroying moats.
S2 (14:53):
Yeah. Destroying moats basically. Right. Like whole companies are going
to go down because they had this premise of, oh,
we'll use AI to do this thing. Now it's just
going to become part of the model, or they're going
to be just trampled by one of the big data
aggregators who already has all the data to make the
problem set easy to execute, I think, is how I
think of it.
S1 (15:11):
Yeah. Yeah, absolutely. So, so Jason talked about what he
was presenting. Um, so what I was presenting is like
this unified entity context. Yeah. Yeah. So it's like, if
it's an individual, you just get all that stuff that
you already talked about. And then if it's a company
you get all the way from the company goals all
the way to the security goals. But all the HR stuff,
(15:34):
like everything all into one bucket. And then from there
you just ask questions. Yeah. So if you ask HR questions,
is that HR software?
S2 (15:44):
I think it only works with the the data right.
Like yeah data about the people. Yeah.
S1 (15:48):
Yeah. It's like if you have HR data in there,
you have security data in there. And you ask security
questions and HR questions. You have HR software. So I
feel like software verticals just go away.
S2 (16:00):
I mean, I don't think they completely go away, but
I think yeah, a lot of them are are majorly disrupted. Right.
It's like and again, it'll be the big companies that
do this first because they already have that infrastructure like
Microsoft has the whole graph API about, you know, corporate
user data. They have security data. They have God. God
knows what other data. Right. So they're poised to move
(16:20):
quickly in some of these places that you and I
play in. And it's like they will get there first
because they have the the ability to grab everything. And yeah,
I think I think that for your talk, I mean,
you talked about like as a meta thing, you know,
like everybody is focused on, oh, we can build these
agents, or, you know, there are questions whether you're just using an API or actually using a planning agent or whatever.
(16:41):
It doesn't matter. Like whatever architecture you've chosen, use AI.
And that's the important part. Right. Like prompting it, getting, you know, RAG set up, like all this stuff.
And you had that one slide where it was like, okay,
so the agents are the big boxes and we really
care about them right now, right? And then they're attached
to data sets and they're kind of small. And then
you have the other slide. And then it's like, no,
actually the key piece is the data in the middle.
(17:03):
And the thing that validated it for me was, you weren't at this one. Not the OpenAI one. But next year we've got to go and present there, on this topic too, by the way.
S1 (17:10):
Oh, the Airbnb one.
S2 (17:11):
Yeah. Airbnb one. Airbnb one. Yeah. So, uh, the guy,
one of the guys from Microsoft, I can't remember his name,
but he was talking about kind of the same idea
as he calls it, like Golden Data Lake or something
like that. But it was the exact same thing. He's like, hey,
you know, like the models are going to get so
good that they are general software products themselves. Right? And
so it's that's not the thing you should be focusing
your development on. Your development or either systems architecture revamp
(17:34):
or whatever should be on collecting the contextual data. That's
going to help you answer the questions. Right. And so
in your slide you had like an image where the
agents were really big and they were the important question.
And then then it's like the next slide is like
actually the most important thing is the model. And the
agents just live on the side a small little thing.
S1 (17:49):
So yeah, yeah. So what I had was like cybersecurity with AI around it, versus AI in the middle and then cybersecurity and HR and productivity, yeah, the software verticals kind of like rotating around it.
S2 (18:06):
Yeah.
S1 (18:06):
Yeah, yeah. Because, yeah, I think you just collect all that data and ask the questions, and it answers them.
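A minimal sketch of that "one bucket of context, then just ask it questions" idea; the file name, model, and helper function are placeholders, not anything from the talk:

```python
# Sketch: one unified context file (HR + security + goals), then just ask questions.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the file path, model name, and questions are placeholders.
from openai import OpenAI

client = OpenAI()

def ask_company(question: str, context_path: str = "company_context.md") -> str:
    with open(context_path, encoding="utf-8") as f:
        context = f.read()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the company context provided."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

# Same bucket of data, different "software verticals":
print(ask_company("Which teams are most understaffed right now?"))      # the HR question
print(ask_company("Which assets are missing an owner for patching?"))   # the security question
```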
S2 (18:14):
Yeah. And I think, you know, on my phone I
had some notes too, that um, a common theme I saw.
We're going to talk a lot about AI. Sorry. I mean,
like a lot of, you know, I know people want
to hear about security stuff, too, but, um, a lot
of this stuff in architecture, of these systems, it was
it had to be more compartmentalized than, I think even
some of the architectures I was thinking of meaning that, um,
(18:36):
where I thought I could ask a bunch of questions
of an, you know, an AI model and stuff it
all into one masterful system prompt or something, or user prompt. Um, actually,
successful implementations are asking micro questions, kind of like microservices, right?
We are asking one question of the same data of
a data set. Wherever you take it in from the
user or from context somewhere else. And then that's one agent, right?
(19:00):
It's just a one question, one agent that's really good
at doing that one thing. And then you have tens
or hundreds of those because, you know, at least from
the people I was talking to, they're like, you know, like, uh,
don't try to stuff it all into, you know, like
one process because it can confuse, you know, different models
and stuff like that. So some of the ones I
saw had up to like ten agents to like action
(19:22):
one workflow, right? Asking individual questions and then stitching together
the output of that.
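A minimal sketch of the "one question, one agent" pattern described here: many small, scoped calls over the same finding, stitched together afterwards. The questions, model, and function names are illustrative:

```python
# Sketch: instead of one giant prompt, run one scoped micro-question per "agent"
# over the same finding and stitch the answers together afterwards.
# Assumes the OpenAI Python client; the model and questions are placeholders.
from openai import OpenAI

client = OpenAI()

MICRO_QUESTIONS = [
    "Is this finding exploitable from the internet? Answer yes or no, with one sentence of reasoning.",
    "Which team most likely owns the affected component?",
    "Suggest a one-line remediation.",
]

def micro_agent(question: str, finding: str) -> str:
    # One question, one agent: a single narrowly scoped call.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{question}\n\nFinding:\n{finding}"}],
    )
    return resp.choices[0].message.content

def triage(finding: str) -> dict:
    # Stitch the individual answers back into one record.
    return {q: micro_agent(q, finding) for q in MICRO_QUESTIONS}
```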
S1 (19:26):
So interesting. Yeah. Yeah I, I don't know I, I
feel slightly different there. I feel like all the data
in one place is good. And so you just ask
the questions. Um, but where I do see what you're saying.
The AIxCC stuff was fascinating. Oh, yeah. Oh,
(19:48):
you know, what I really loved about that was the
conversation of like, does the model matter more, or does
the architecture of the system matter more?
S2 (19:58):
So before we go into this, because that whole panel
and the two talks were awesome. Yeah, they were great.
But I don't think everybody knows what the AIxCC is.
S1 (20:05):
AIxCC, yeah, yeah, yeah.
S2 (20:07):
Go ahead. Okay. So the AIxCC is this competition run
by DARPA. Um, and the whole idea is that, uh,
you build what's called a cyber reasoning system. And so
there are both academic teams who are mostly CTF teams
from the Defcon CTF kind of ecosystem. And then there
are also companies who have come to
compete in this contest. And the goal is to build
(20:28):
a system that is AI enabled that can go from, um,
taking an open source project repo with a vulnerability in it,
finding the vulnerability through static analysis, building an exploit, testing
the exploit to see if it works in the wild,
patching the exploit and keeping the service up and running. Yeah.
So it's got to do both offense and defense. Um,
(20:50):
and they get scored on multiple different facets of that. And, um,
I think Dan Guido from Trail of Bits was telling us that night, they've been giving out prizes every year. So last year was the semifinal rounds and five teams made it to the semifinals. It's like three teams and two companies, of which Trail of Bits is one, and my alma mater, uh, Shellphish, is one. And then, um, now I think the grand prizes
(21:13):
will be, he said, like five, three, 2 million. So 5 million, 3 million, 2 million or something like that, it might be.
S1 (21:18):
Four, three.
S2 (21:19):
Four, three, two, or something like that. So first place gets 4 million,
you know, second place gets 2 million. And at the
end of the competition they will actually have to open
source their cyber reasoning systems too, which was a really
interesting conversation I had away from the table with with
some people. But okay, so that's that's leading up to it.
So yeah. So there's three talks around that competition. Uh,
there's a panel with a whole bunch of leaders or
(21:39):
people who were associated to the teams. There was, um,
a couple of people who just presented on their kind
of research around their cyber reasoning system. And then, um,
and then there was a talk by the, uh, by
one of DARPA representatives about why they built the competition
and stuff like that. So I just wanted to. Yeah, yeah.
S1 (21:56):
Yeah yeah, yeah. Perfect. Yeah. So the thing I've been
really super excited about is like these generalizations of architectures
specifically generalizing the scientific method. So I love the idea of,
like you have a collection of goals, you have a
testing engine that basically tests, you have a hypothesis, you
(22:18):
have problems, goals, and then the testing engine. And you
can actually combine the ideas as well, like mix them
and like try to find variations that are actually more effective.
But it's just this life cycle of here's a cool idea,
see if it works. Mhm. Um, and so what I
heard listening to them and we're not divulging anything because
(22:41):
they were all competitors. Yeah. On the panel. So nobody
was divulging anything secret.
S2 (22:46):
Well, except for that one guy. The one guy was like,
I don't know.
S1 (22:48):
He's like, screw it. Yeah, yeah, yeah, yeah. But I
just want to make sure we're not disclosing it. Yeah, yeah,
it's it's all open. Um, but but, um, basically the
idea that you could just, like, keep iterating on this
and you can kind of use it for anything. So
what I found is, um, really interesting was they basically
(23:09):
said they spend all their time fixing that system and
that the, the model getting smarter didn't necessarily help as
much as improving the system itself.
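A bare-bones sketch of that goals, hypothesis, testing-engine loop; every function here is a hypothetical stand-in (the real systems are far more involved), but it shows the iterate-and-keep-what-works shape being described:

```python
# Sketch of the goals -> hypothesis -> testing-engine loop: propose an idea,
# score it against a fixed problem set, keep whatever wins. All of these
# functions are hypothetical stand-ins for whatever the real teams built.
import random

def propose_idea(goals, previous_ideas):
    """Stand-in for a human or model proposing a new variation,
    sometimes by combining earlier ideas."""
    if previous_ideas and random.random() < 0.5:
        return f"combine({random.choice(previous_ideas)}, {random.choice(previous_ideas)})"
    return f"idea_for({random.choice(goals)})"

def testing_engine(idea, problems):
    """Stand-in for actually running the idea against known problems
    and returning a score (e.g. how many it solved)."""
    return random.random()  # placeholder score

def iterate(goals, problems, rounds=20):
    best, best_score, history = None, float("-inf"), []
    for _ in range(rounds):
        idea = propose_idea(goals, history)
        score = testing_engine(idea, problems)
        history.append(idea)
        if score > best_score:
            best, best_score = idea, score
    return best, best_score
```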
S2 (23:18):
That seemed to be what a few of the audience
questions were aimed at is like, well, you know, as
the model gets better, doesn't it make it better for you? Um,
and yeah, like, like you said, some of their answers
were no, uh, that the scaffolding that hooks everything together
was actually the important part that they needed to develop
more than. Yeah, um, the AI models, because they had
scoped the AI models to do certain things. I remember
(23:42):
one of the answers was like, um, you know, one
of the answers for one of the teams was like,
fast iteration of that model, the scientific method, is like, hey,
I think this is a cool idea for like a
single agent to work on, and we're just going to
test it with the limited data sets that we have
and the problems that we have and see if it works.
And he was saying that that was their key to
success is just like trying so many things, um, different
(24:04):
representations of code, different ways to, you know, um, you know,
like make exploits, like, you know, cut it into pieces,
do it all at once, like, and he's just trying
all kinds of stuff and like, you know, some panned out,
some didn't. Yeah. Um, you know, so there's still a
lot of learning to be there. There's also a lot
of talk about how the competition also has, um, or
(24:25):
at least in the first couple rounds, had a ton
of handicaps like they were. Yeah, they couldn't use like
they could only use, like $100 in tokens. Yeah. Only
certain models, um, and only so much output, uh, and context.
And so they were really, they really said that they
had designed those systems to work inside those constraints. And
now in the finals, they don't have any of those
constraints anymore. So it's like it kind of messed them up. Uh,
(24:47):
in systems design because they were like, if we would
have had no restrictions from the beginning, we would we
might have used different models for this, or we might
have done x, Y and Z, you know.
S1 (24:55):
Right, right. Because that was one of my questions of like, what?
Why are you building this ultra specific thing? It seems
less efficient. And they're like, that's because we had so
many limitations. A generic system would not have worked. Yeah.
You had to put all these weird constraints on it. Yeah.
Or because they had weird constraints on them?
S2 (25:13):
Yeah. Yeah. I mean, uh, some of the other things
there too, were that, uh, some of these security tests
end up being structured more like unit tests in software. Right?
And so, um, there are actually libraries out there that, uh,
that do unit testing really well. And people are like, yeah,
we just started using those for.
S1 (25:32):
Oh yeah. The guy mentioned. Yeah. Do you remember that one.
S2 (25:34):
It's like JS something like I don't know, I have
it in my notes, but yeah. Me too. Yeah, I'll
find it. Maybe put it in the show notes. Yeah.
But there was that. And then I thought interesting questions
like how much you know, how much are you spending
on building this system? And that was a really interesting question.
S1 (25:48):
Um, and nobody was building their own models.
S2 (25:50):
Yeah. So you can't. Yeah. It's an option now in
the finals to, to bring your own model. Uh, it
wasn't before. Um, but nobody's doing it because it's.
S1 (25:58):
Much.
S2 (25:59):
Too expensive. Too expensive. And then, you know, like the
consumers of, you know, the people who weren't competing in
the competition were like, why don't you use this one
off Hugging Face? And like, you don't understand, like, those
things suck. Like they were like all of the open
source models that deal with security data right now are horrible.
And we saw two talked about this week, one came out yesterday from Cisco. And then one we can't
(26:20):
talk about. Um, but it'll be out at some point.
And they are security trained models. And then we had
Gemini also talked about a couple of weeks ago. And,
you know, none of these are out. So they can't
use them on these teams yet. And so, um, you know,
maybe the next generation of security data trained models will
be good, but, um, they just have to work with
what they've got right now. And they said pretty much
(26:40):
it all sucks. There's also a recurring theme that evals suck right now for security. Um, for security-based, uh, training?
S1 (26:48):
Basically, yes. Joel presented on the difficulty of evals. Yeah.
A lot of people were like, evals suck. And a
lot of people were like, these are really, really hard. Yeah,
some people were like, don't. They don't mean what you
think they mean.
S2 (27:00):
Yeah, yeah.
S1 (27:01):
But, um, yeah. To me the eval piece is part
of that testing engine because like, how do you know
if the tests worked unless you have good evals. Um,
but there's like so much hacking of the evals going on.
S2 (27:13):
Yeah. I mean, that was in my notes, too. It's
just it's like there's people who are purposefully building in
either training or, um, logic into some of these things
to score really high on these.
S1 (27:25):
Yeah. They like post train with it.
S2 (27:26):
Yeah. Post train with it. And so they score really
high on these, uh, things that I would look at
as a consumer. Right. And I'm like, oh, okay. So
I'm going to go look at, you know, these, you know,
these evals on Hugging Face. You know, I forget the really popular one on Hugging Face, it's like the Chat Arena. And they'd be like, cool, how does it
score on Chat Arena? And it's like, okay, it's really high. Well,
that doesn't mean anything for a
(27:47):
domain specific application of an AI, right? Like, you may
not need a model that's really good on Chat Arena for something else. And so one of our longtime friends, Joel Parrish, he worked for me at Deadspin when he first started. I'll tell a funny story here.
So Matt Knight is the CISO of OpenAI, and I don't know Matt super well, I know him a little bit.
(28:08):
And I was like, hey, like, I, uh, I was actually one of the guys who worked with Joel at Deadspin, our first testing job together two decades ago. Right.
And then he worked for us at HP or with
us at HP. Then he went to Apple with you,
and then he went to OpenAI with Matt. And so
I told Matt, I said, I think I'm one of
his first LinkedIn recommendations. I don't know if he kept
it on his profile or not, but I called him
(28:30):
the Kobe Bryant of web hacking. Right? Yeah.
S1 (28:32):
Totally.
S2 (28:33):
Yeah, totally. Um, but he does everything now. Anyway, Joel gave a presentation at, uh, the RunSybil side event, which was also really good.
S1 (28:39):
It was.
S2 (28:40):
Great. The RunSybil side event. Um, RunSybil is a company that's doing, um, automated pen testing and, um, pen-tester-assist AI tools, basically, uh, led by a friend of ours, Ari. And so I spoke there, gave my talk there. But Joel was right before me. And Joel gave a talk called Your Evals Suck. Um,
(29:01):
and his thing was like, hey, look at all these
evals that, uh, people are using right now. They're written
in the 90s, like stack based overflows, right? Which is
not what we are facing in 2025. Right. And, uh,
he just showed like, multiple examples of, you know, some
of these evals are not even testing exploit generation or
web testing or anything like that. They're testing just like
(29:22):
code quality stuff, which is from the 90s. Right? And
it's like it's like, why are we benchmarking these security
models off of these crazy old evals? Um, and that
was that was the genesis of his talk. What I
was sad about, though, is you left for dinner one
talk after or one talk before it ended. And the
last talk was this guy who basically brought a humanoid
(29:43):
robot like, about this tall and had jailbroken it with
an exploit. He bought it from China and got it
to run around the room, just like screaming at the
top of its lungs with like, a video game track.
It was so cool. Like it was. It was amazing.
S1 (29:55):
Attacking communism. Yeah. As a Chinese robot. Yeah.
S2 (29:58):
So, okay, so out of context, it sounds bad, but
if you've ever played, um, the fallout games, um, there
is like this audio track in, uh, in fallout that, like,
the rhetoric in that game is anti-communism, right? And so, like,
there's like a robot that goes around or like a,
like a character, you know, some of the enforcers run
around, and all they do, 24/7, is just talk about anti-communism.
(30:21):
So if you if you pull down the audio track
for this game, there's a 20-minute robot rant
about anti-communism. And so he put it on the jailbroken
robot, and it's just running around spouting, like, anti-communist stuff. And then it was so funny, because it was talking about anti-communism, and it was like, possible defector. And then
it ran into this lady's table and spilled her drink
(30:42):
on her, and she thought it was so hilarious. Like,
she was like, this is crazy. It was it was great. Yeah. Yeah.
S1 (30:48):
Yeah. I got to get the video.
S2 (30:50):
Yeah, yeah, yeah. So, um. Yeah. So evals, you know, struggle, benchmarks struggle right now for domain-specific applications. Everyone's kind
of figuring it out. And I think I felt like
on the show floor, there was still a lot of
promising of things that when I went to talk to people,
it was not as, um. First of all, no one's
(31:10):
training their own models to do any of this, right.
Like vendors will say, yeah, we have our own model.
They're not like they're using llama for or, you know,
their own keys and yeah, or. Yeah, deep seek or.
S1 (31:20):
Just the cloud ones.
S2 (31:20):
Yeah, just the cloud ones. Anthropic or OpenAI. Right. And
then all of the business logic magic that they promise
you is happening is system prompt based. Like that's, that's
the majority of of all of those products. And the
value prop is no longer automation. The value prop for
them is now, oh, you know, up level or skill
level your people. Right. And um, and that means basically
(31:43):
at least the way I, I, you know, kind of
package that is it's good at the things we already
knew it was good at. Right. It's good at summarization.
It's good at rewriting. It's good at pulling multiple data
sets together and offering, you know, a couple insights here
and there. But the actual automation of things, not quite there yet, harder to implement. Architecture-wise, you need to invest way more in it.
S1 (32:03):
Yeah, I mean, I think we knew this before going
into RSA just because you and I are actually building
this stuff. It's like the problem with agents and like
pursuing goals is they just get confused, right? They get
confused over scoped.
S2 (32:17):
Scoped.
S1 (32:18):
Yeah. Especially if you have like red teaming for example,
which is the one that all our friends are struggling
with the most.
S2 (32:23):
Yeah.
S1 (32:24):
Because the first step. Cool. I can launch the web
attack second. Okay. I can pivot a little bit. Yeah.
But then it's like, okay, I've got seven more goals
to get. What have I done already? And so it
starts losing context. So I think that's where it's kind
of falling apart. Yeah. The other thing we were talking
about was how, um, the moat situation. Yeah. So, so basically, um,
(32:46):
a lot of these companies that are like, we do
this thing, that's what makes us special. And we're in round,
you know, B or C or whatever. And we've raised
all this money because we do this one thing. It's like, well,
if you have the context, like, uh, I was talking
about this week, if you have that context and you
(33:07):
can ask the questions. The thing that company does is
a feature instead of a company. Yeah. Yeah. And like,
as we would go around and see these different booths. Yeah.
They look like features.
S2 (33:20):
They're gonna get karate kicked.
S1 (33:22):
By by.
S2 (33:23):
Uh, the model vendors and. Yeah, it's, uh. Yeah. I mean,
it sucks because some of those are awesome, it is some of our friends who are making these companies, and it's like,
I don't know. I mean, you can be really good
at a problem and succeed better than a big model
vendor or a big a big company that you know.
But it's got to be really good.
S1 (33:39):
And maybe they just cut through so good with marketing
that they get a big enough market share that they're okay. Yeah.
But the time is like ticking. You just hear the
time ticking down between Google gets there, Microsoft gets there.
One of these big players gets there. Yeah. And then
they just start adding question modules for security or whatever.
S2 (34:00):
I mean maybe that's the golden plan though. Maybe it's
like they're not at the place to stay a long
term viable. Just just get bought by one of the
big companies because they do it really well. Right. Which
is totally a play. Yeah.
S1 (34:10):
Nothing wrong with.
S2 (34:11):
That. Anyone wants to buy. I'm just kidding. Um, yeah. So. Yeah. Um, yeah.
So that could be a play for sure. Yeah. Uh, yeah.
I mean, other than that, though, uh, Again, same vibes
as last year and the year before. I saw some
(34:31):
products that, uh, besides AI, are just abstractions of what
someone else already does, but it's a better visualization and
easier to make it work with. Right. So like the
Amazon ecosystem, if you're an Amazon specialist, that shit's hard
to learn. Like there are so many sub tools and
sub products and it's like and so then like you
see this other company like, yeah, we make this easy, right?
(34:52):
Like, here's a nice GUI that explains everything. Wiz. Yeah, Wiz, right.
And it like does everything that you want it to do.
And I don't think there's a lot of moat around
that either. That's just a UI revamp to a lot
of the core services places. And so I saw a
lot of that. It's like make your SoC easier to
automate or like whatever. And it's like, okay, I get it.
I get why that's attractive right now, because you have
(35:14):
that pain right now as a consumer. But that pain,
I don't know if it'll be there forever once other
people figure out. Although you can also look at case
studies from Google, right? They never fix their UI. And
you know, so like like Gmail could use a refresh.
So yeah, I mean, yeah, um, I saw a lot
of that. Um, was also kind of surprised just at the,
at what I perceived to be the spend at RSA
(35:35):
this year in a time where I know security professionals like,
who are jobless and, and they have been looking for
roles for months. Right.
S1 (35:43):
That's a really good point. I didn't think about that.
S2 (35:45):
And then just the amount of of money on that floor.
S1 (35:48):
It felt like 2018 or 2019.
S2 (35:51):
Yeah, it was crazy.
S1 (35:51):
It was like top of the you know what it
almost felt like it almost felt like. And I hadn't
thought of this until just now when you said this.
It feels like desperation. They're just like, spend all the money.
It's like. It's like our last chance.
S2 (36:06):
Yeah, it definitely felt like that with some vendors for sure. Um,
which maybe goes into that like acquisition play is like,
we just make ourselves seem bigger this year. We'll get
acquired and it won't matter anymore. Right.
S1 (36:18):
Well, so if you take like the macro economy or whatever,
and it's just like things might get bad in the
next six months or a year. Yeah, we're going into RSA. Yeah,
we need to get bought. Yeah, we or we need
to get a bunch of customers. Yeah. Now is not
the time to go small.
S2 (36:36):
Yeah.
S1 (36:36):
So we saw baby goats.
S2 (36:38):
Yeah, we saw goats. We saw puppies.
S1 (36:41):
I didn't see the puppies, but I saw the goats.
S2 (36:43):
There was a monster truck and a Formula One car.
S1 (36:46):
2018?
S2 (36:47):
Yeah. I mean, um, what else? A giant, giant robot, obviously,
at the CrowdStrike booth, like, every year. Giant statue. Um, yeah,
there was I mean, there was usually there's one marquee
party at uh, at or at RSA, right. There's like
one vendor who brings in like a band. So like
last year, I can't remember who, I went to Deadmau5 last year, which is one of my favorites. And then, um,
(37:10):
for like, the rock crowd, they had, um, Incubus, I think last year or maybe the year before, I can't remember. This year, both Marshmello, still a premier DJ, and the Chainsmokers were performing at different parties on the same night,
which is crazy.
S1 (37:25):
One was SentinelOne, I think.
S2 (37:27):
Yes, I know one was Marshmello. Yeah. And, uh, and then, um, I can't remember who did, uh, the Chainsmokers, but yeah, I think it was like Chainguard or something.
I can't remember, but, um, but yeah, I mean, that's
a lot of money to, like, you know, buy out
a nightclub for hundreds of people, you know, have, like,
a premier DJ play just for your corporate party. Um,
and so it just it felt like there was a
(37:49):
lot of money spending. And it made me sad a
little bit because, like, I do have friends who have
been struggling to find jobs or have gotten like, um,
work furloughed, you know, a lot of friends getting furloughed
where like, they're like, oh, we can't afford to pay you.
S1 (38:01):
Yeah. We got to cut back on salaries. Yeah, because
money's tight.
S2 (38:04):
Yeah.
S1 (38:05):
But we need goats.
S2 (38:06):
Yeah, we need goats. Yeah, we need goats. So. So
that kind of sucked a little bit. I think that's
a continuing pattern though. Probably that's happened every year a
little bit. But um.
S1 (38:14):
I'm worried about next year that they're just like, well
that didn't work.
S2 (38:19):
Yeah I.
S1 (38:19):
Mean tighten it.
S2 (38:20):
Up. We'll see. I mean, every year you look for
a vendor that you thought was cool last year, and
then they're not there, you know, this year.
S1 (38:28):
Yeah. Isn't that weird?
S2 (38:28):
Yeah.
S1 (38:29):
Someone comes out of nowhere and, like, four of them
just disappear.
S2 (38:32):
Yeah, there were a couple last year that I was really excited about, they were applying AI to document classification.
And I was like, that's a perfect application of AI, actually. Yeah, yeah. Um,
and they were not around this year either they got
gobbled up or they didn't make it. So yeah.
S1 (38:48):
Oh yeah. What's the one I got? A buddy went, oh,
Sierra is the one. Yeah. Doing it now.
S2 (38:53):
Yeah. Um, but yeah, the, uh, I mean, the off
site events are definitely where it's at, I think. I
think if you're coming to RSA next year and we're coming. Right. Yeah,
I'm gonna do a little bit less speaking, honestly. I did my thing five times and I was pretty burnt. Um, again, I guess if I rewind to the beginning of the week, though: BSides San Francisco continues to be an A-plus con.
S1 (39:15):
Yeah.
S2 (39:15):
Um, I mean, BSides San Francisco.
S1 (39:18):
Production quality.
S2 (39:19):
Production quality.
S1 (39:20):
Quality? The content?
S2 (39:21):
Yes. Staff is great. Yeah. You know, the villages there
are cool. Even the vendor setup there is like, I just feel like it's not as nuts and in
your face. I had a buddy, a mutual friend of ours, come this year. He's, you know,
he's a person. He works at a company and I'm
not going to put him on blast. But he's a
person who works in a company. He doesn't have any
purchasing power. But, you know, he put on his RSA
(39:43):
badge the company he worked for. And it's a big company.
And he is getting accosted like he's walking down the
floor and like, you know, like someone out of the
corner of their eye just sees his badge in the
name of his company. And they're like, hey, like, you know, like,
come talk to me. And like, um, it was surreal
for him, even even after he tells them I don't
have any purchasing power, I don't make any decisions. And
they're like, I don't care.
S1 (40:03):
Like, yeah, yeah.
S2 (40:04):
Yeah, uh, but BSides doesn't feel like that. Um, I
think that, uh, a mutual friend of ours, uh, Clint,
gave a talk on vulnerability as people and as infosec practitioners, rather than vulnerabilities as, like, you know, the kind we pop in security. I really loved that talk.
I think it's one of the best keynotes I've seen
(40:26):
in quite a while. It will be up on the
BSides website. You know, they eventually put everything out. So I highly suggest watching it. You and I were cameoed in that talk, actually.
S1 (40:32):
Yeah, absolutely.
S2 (40:33):
Clint, Clint talked about, um, how, you know, you and
I have like this, you know, long friendship because we've
worked together since we were young and we've just kind
of done everything together. Yeah. And, um, he he talked
about how, like, being a friend with you, he, like,
felt like a little bit less than, you know, the
connection we had. And he wanted that with you. Yeah.
And how, like, those things are hard to talk about, right?
It's like, you know, your insecurities, the way you feel. Um,
(40:56):
but eventually, if you confront them or you figure out
ways to help, you know, like, you know, be healthy
about them and have conversations with your friends and say, X, Y,
and Z, it's, um, it really can give you peace
of mind. Superpowers make you feel better. Yeah. And so, like,
you and I were referenced in that that part of
that talk. And I thought that that was it was
really great, actually. I actually like I think I cried
(41:17):
at the end because he had like a couple, like
little messages in there and he.
S1 (41:20):
Was like, yeah.
S2 (41:20):
Yeah. He was like, hey, you're enough, right? Like, what
you're doing is enough and.
S1 (41:23):
All, and it matters.
S2 (41:24):
And it matters. Yeah. And, uh, and all of us, uh,
all of us, you know, are just trying. I feel
like everyone in security has a little bit of, like,
they just want to, like, help the world a little bit, right? Like,
not everyone, but a lot of people. That's why they
get into it. Because it is easy to make that
that line to like, hey, I know this is a
small thing, this computer stuff, but in a way, I
(41:45):
am a superhero trying to help the world. Right? And, um,
but you can get so wrapped up because there's so
much stuff, right? There's so many domains. There's new research
in domains all the time you feel behind on education,
you can start to get that imposter syndrome. And like
at the end, Clint actually gave out cards.
S1 (42:01):
That he had signed.
S2 (42:02):
That he had signed. He had signed hundreds of cards
by hand. And I think I have one in my
wallet and I put mine in my wallet. And it
just like says you are enough signed by Clint. Or like,
you know, some inspiring message.
S1 (42:12):
Yeah.
S2 (42:13):
That was I mean, like, Clint's like amazing person. But
that talk was awesome.
S1 (42:17):
So yeah, and a lot of courage to do a
talk that's about people. Yeah. When like the natural play
is like AI and security V2. Yeah. Which would absolutely
crush it would crush. Yeah, yeah. And he's like, no,
I'm going to do this thing because I think this
message matters.
S2 (42:34):
Yeah, yeah, yeah. Clint's a phenomenal human. And, um, he
did the girlfriend meme of, uh, of you and me.
Like the guy walking down the street with his girlfriend, and he looks at, like, the other girl, you know, that meme, and I got to be the hot girl, so.
S1 (42:52):
That's right. You know.
S2 (42:53):
I haven't I've never been the hot girl. So like.
S1 (42:56):
Yeah, yeah.
S2 (42:59):
Yeah. Um. Yeah. So that was cool. Uh, what else
happened this week?
S1 (43:04):
I don't know, what's your big takeaway? I feel like
we talked about this a little bit. I feel like
big takeaway for us. And keep in mind that if you're coming to this, like, brand new, you don't necessarily want to do it this way, because we're getting these lessons after doing this for a couple of decades. But we're, like, more: get away from the center of
(43:26):
the mass and move into like the smaller events where
your friends are going to be at to talk more
about the ideas as opposed to like, where can I
get the food? Where can I get the parties and
the music?
S2 (43:40):
Yeah. And that, I mean, this thinking applies to Black
Hat and Def Con, too.
S1 (43:44):
Absolutely.
S2 (43:44):
Yeah.
S1 (43:44):
It's the whole scene.
S2 (43:45):
It's, uh, at first it's cool when you're young and
we're old, so.
S1 (43:50):
Yeah.
S2 (43:50):
Yeah.
S1 (43:51):
Um, to.
S2 (43:52):
To go to the parties and be part of, like,
the loud noises and then now you kind of want to, uh,
ration your time with people who you really want to
spend time with and have conversations with and be in
smaller settings where it's not so loud and like real
research is going on. And, uh, and that was the
dichotomy for me, was like so much of a difference
(44:13):
between what people were saying on the floor about how
things worked with AI. And then when I got to
the Airbnb summit and the OpenAI summit, like, no, these
are real people working on these problems. And here are
the real problems. And yeah, um, and here also there
are success stories, but also their failure stories, like, yeah,
we thought this would work. It totally did not work.
(44:33):
We had to go back to manual process in vulnerability management.
Oh man, the conversations about vulnerability management this week were crazy, um, about using AI and the predispositions I had about what I thought was a good AI-assisted vulnerability management platform, or not platform, but, you know, like architecture, versus what
(44:54):
some people like Google have built and some people like
Adobe did a great talk on it, and then some
people talked about it in the OpenAI, um, conference. Uh,
and so, like, it turns out some of the things
that we thought AI would be able to do are
not the force multipliers. It turns out to be sending
emails or like actioning tickets automatically.
S1 (45:12):
Go to where they.
S2 (45:13):
Are. Yeah, yeah, go to where they are. Right. We
talked about this in the car. It's like, I had some assumptions that we'd be able to do, like, full, you know, nuts to bolts: send, you know, our vulnerabilities that come
through a bug bounty or static code analysis or through, um,
appsec testing or through, you know, any different number of
(45:34):
where we get vulns, right, and it would, you know, the system would work. Uh, the value of the AI would be the rating, the conglomeration of all that
into tickets. And it turns out that some people, at least,
who have done it say actually like that doesn't work
super well. Mhm. Um, they're stripping all of that contextual
data about ratings making their own rating systems. Yeah. Um,
(45:57):
and just pulling out the text from the advisories from
the threat feeds, from the pen test report, from all
that stuff, rewriting them themselves with custom systems that have
nothing to do with CWE or CVE.
S1 (46:10):
You know. Yeah. That's the thing, these rating systems are trying to give us the context of the vuln.
S2 (46:15):
They don't have any context or.
S1 (46:16):
Know anything about.
S2 (46:17):
Know anything about us.
S1 (46:18):
So I love this idea. Just strip it out. Yeah. And then re-add the context from the company onto the vuln. And then that's the priority.
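A small sketch of that "strip the generic rating, re-rate with company context" flow; the field names and weights are made up for illustration, not anyone's actual system:

```python
# Sketch: drop generic vendor/CVSS-style ratings from an incoming finding and
# re-score it with company context (exposure, criticality, ownership).
# Field names and weights are illustrative, not anyone's real system.
GENERIC_RATING_FIELDS = {"cvss", "vendor_severity", "epss"}

def strip_generic_ratings(finding: dict) -> dict:
    return {k: v for k, v in finding.items() if k not in GENERIC_RATING_FIELDS}

def rescore(finding: dict, asset_context: dict) -> dict:
    asset = asset_context.get(finding.get("asset", ""), {})
    score = 0
    score += 5 if asset.get("internet_facing") else 0
    score += {"crown_jewel": 5, "important": 3, "internal": 1}.get(
        asset.get("criticality", "internal"), 1
    )
    score += 2 if asset.get("owner") is None else 0  # unowned assets are their own problem
    finding = strip_generic_ratings(finding)
    finding["company_priority"] = score
    return finding
```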
S2 (46:25):
And then the thing that the Google guy said, I'm
going to say Google person or guy or whatever, but, uh,
because I don't remember anybody's names, but, um, uh, he
was saying that like, uh, so after you do that, um, uh,
the whole vulnerability management scaled by AI only works if
you have a really good asset management platform. And we've
(46:47):
been talking about this for years.
S1 (46:48):
Like I did that thing in 23 when I was
at Robinhood.
S2 (46:52):
Yeah, yeah, yeah.
S1 (46:53):
Asset management as a center of management.
S2 (46:55):
Yes, exactly. Yeah, yeah. And that you presented that at
the Black Hat Summit, right? Yeah. So yeah. And this
turned out to be true for them. Right. It's like
it's like this program does not work unless you have,
you know, had your balanced breakfast of, um, you know,
consolidating all your data sources where vulnerabilities come in, and
then having a tremendous asset management program, like knowing where,
(47:17):
like having, um, you know, for lack of a better word,
like having knowledge of where all the systems are, you know,
what they are, who owns them, what teams action them, um,
where the repos are. And that's not like that. Sounds
trivial to some companies who are small. Like if you're
a startup, you're like, yeah, of course I know where
my repos are and who owns the thing. But when
you get to a company, that's the scale of Google
(47:38):
or Apple, right? There are hundreds, if not thousands like
Ubisoft too. I mean, we had productions which are video
games everywhere, and it's just not simple anymore. Like you,
you lose a thread on an app and then it
just exists out in the wild, and then someone finds
it via your bug bounty and they're like, hey, I
read this thing, and you're like, I have no idea what that is. Like, it's not, I don't see it anywhere.
(48:00):
I don't know who owns it. And then you spend
all of this toil time, that was a recurring term, toil, right? Oh, yeah.
You spend all this toil time to, like, figure that out. And, um,
you should be architecting your program from the beginning with
really good asset management instead of spending that toil time later.
S1 (48:17):
Yeah, I love that. I definitely dealt with that at
Apple because it's like you put out the state of
the system and a week later it's like, nothing like that. Yeah, yeah.
That's true. Like that.
S2 (48:26):
Yeah. Yeah. The question I had for you when we
were in the car was it's like, okay, so, um,
so you're really big on capturing context in markdown files,
which is the telos idea. Mhm. Um, and you can
do telos for yourself personally as a person. Or you
could telos as an organization. Right. City country country doesn't matter. Yeah. Yeah.
Put down your ethos, your goals, your systems, your owners,
(48:46):
you know, into markdown or maybe JSON or something like that,
whatever you want to use.
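A toy example of what such an entity-context record might hold, with a plain Python dict standing in for the markdown or JSON file; the keys and values are illustrative, not the actual Telos format:

```python
# Illustrative only: a minimal entity-context record with ethos, goals,
# systems, and owners, so "Bob leaves" becomes a one-field update.
company_context = {
    "ethos": "Ship fast, but never lose customer trust.",
    "goals": ["Reduce time-to-patch to 7 days", "Consolidate vuln intake"],
    "systems": {
        "billing-api": {"owner": "bob@example.com", "criticality": "crown_jewel"},
        "marketing-site": {"owner": "eve@example.com", "criticality": "internal"},
    },
}

# When Bob leaves, whoever (or whatever) maintains the file updates one field:
company_context["systems"]["billing-api"]["owner"] = "alice@example.com"
```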
S1 (48:49):
Whatever.
S2 (48:50):
Uh, and my question was kind of like, okay, so,
so Bob, who handles, you know, Celsius app, right. Whatever. Uh,
you know, he leaves and like, who is responsible for
updating the context file to include those changes, you know,
is there a system that you prefer or like a method?
Or is it just that you have to go in
there and help them and update that? Or is there like,
do they have to hire a specific person to run
(49:11):
the Telos file and make sure everything stays in line
when they change their company vision or something like that?
S1 (49:16):
Yeah, yeah. So it's going to depend on the implementation.
So the way I'm doing it commercially is for this
thing called same page, which I'll be talking about later.
But I think the future of this is this.
S2 (49:28):
Let's talk about it. Let's talk about same page right now.
Let's heat up. I mean, you might as well. Right?
Come on.
S1 (49:34):
No.
S2 (49:35):
Oh come on.
S1 (49:36):
No, I mean, it's the concept of the talk. It's just unified context. Now, I appreciate it, but no, um. But I think this just becomes a unified, um, IT product. Okay.
S2 (49:51):
So it maintains.
S1 (49:52):
It. So, so I think that anybody who builds anything from, like,
an ice cream truck business to a security program to
I want to be a governor, they're going to have
a core system, which is all assets, all context, all goals, everything.
S2 (50:12):
So one thing that I didn't realize until starting to build, very, uh, and I'm not afraid to admit it, vibe-coded things. Right. Like, so I will now have superpowers because I understand code, right? I understand architecture of code. I understand problems in code, I understand security, but I've never been a front end developer. I couldn't sit down with React and build a pretty website. If
(50:33):
you put me on a modern development team, I would die. Yeah.
S1 (50:36):
Same. Same.
S2 (50:37):
But because I know about code, I know how to script.
And I know the concept of pretty much every language
from assessing it in security. I can now build fantastic
things very quickly. Same. The thing I think I've realized in that world is that PRDs, like product requirements documents, are necessary for so many more things
(50:57):
than I ever thought. Right. Like. And so the idea,
the way it connects to vibe coding is like whenever
I do a new project now, the first thing I
do is I verbally talk to my browser with a
Chrome extension into an AI model with a whole bunch
of notes about just kind of what I want the
system to do, what tools it's tying together, like how
it's presenting data, why we're even making this, what problem
(51:19):
it solves. And that's just verbal garbage coming out of
my mouth, right? Like I'm just having a conversation. I
could take a podcast like this and like, do that.
And then I'm feeding it into a whole bunch of AIs to make a product document with requirements in it. And
then I'm creating a technical architecture document as well, which
is why we are choosing the frameworks that we're using,
why we're choosing the tools. Never deviate from these. And
(51:40):
so those two things in concert, especially in the vibe
coding or AI-assisted coding world, have helped make my software infinitely better and helped the AI stay on track with the mission, with the technology.
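A minimal sketch of that two-document workflow, with a placeholder in place of any particular model API; the file names and prompt wording are assumptions, not a specific product:

```python
# Sketch of the workflow described above: turn a rambling voice-note transcript
# into a PRD plus a technical architecture doc, then keep both as files the
# coding assistant can re-read later. call_model() is a stand-in for whatever
# LLM API or CLI you actually use; file names are illustrative.
from pathlib import Path

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your preferred model API")

def generate_project_docs(voice_dump: str, project_dir: str = ".") -> None:
    prd = call_model(
        "Turn these spoken notes into a product requirements document: "
        "the problem it solves, the tools it ties together, how it presents "
        f"data, and a checklist of requirements.\n\n{voice_dump}"
    )
    arch = call_model(
        "From this PRD, write a technical architecture document that names "
        "the frameworks and tools we will use and states that we never "
        f"deviate from them.\n\n{prd}"
    )
    Path(project_dir, "PRD.md").write_text(prd)
    Path(project_dir, "ARCHITECTURE.md").write_text(arch)
```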
S1 (51:52):
Yeah, because when it loses its context and loses its
mind and basically gets erased. Yeah, it just goes back
to that. Starts over.
S2 (51:59):
Yeah. And in vibe coding, you can have such... I mean, we're going off on a rathole now, but I.
S1 (52:02):
Mean, that's fine.
S2 (52:03):
In coding, you have that sidebar, right? And you can stay in that conversation for a long time and not realize that you're hitting the point where, needle-in-the-haystack-wise, you're not getting good value. And you need to.
S1 (52:14):
And they're also like $15 queries.
S2 (52:17):
Yeah, exactly. Yeah.
S1 (52:19):
$15, $24. Okay. Wait a minute. So if I do
four of these, that's 100 bucks.
S2 (52:24):
Yeah. Yeah. I mean, I was talking about it with
some friends, uh, in Discord, or in a Signal chat, and I'm like, how much are
you guys spending on your AI subscriptions a month? Because
mine is approaching a car payment, and it's totally. I
know it's worth it, but it's still kind of painful
to add another car payment, you know? So, uh, I mean,
I'm using everything. I'm using Gemini, I'm using OpenAI's ecosystem,
(52:44):
I'm using Claude's ecosystem. I still have perplexity. I'm a
really hype user of Manus right now. Um, I love
Lambda Chat's implementation of DeepSeek because they host it on the internet, and I can scrape it with Puppeteer or Playwright. Um, so, yeah,
I'm just hitting everything for everything.
S1 (53:01):
Yeah, yeah, I'm doing a lot of n8n, uh, for back end and, uh, Bedrock, um, still use Fabric, like, most of the models. Um.
S2 (53:11):
Oh, yeah. When I'm on the command line, I'm using Fabric. Yeah, yeah.
S1 (53:14):
Favorite models. Uh, right now, uh, uh, for me, it's, um, 2.5, 2.5 Pro for Gemini.
S2 (53:21):
Oh, yeah. Gemini is on.
S1 (53:22):
And then, um, O3.
S2 (53:24):
Okay.
S1 (53:24):
With memory.
S2 (53:25):
With memory. Okay. So I use O3 with memory for
writing tasks. I think that's really good at writing and
researching tasks. Um, I actually am one of the believers that the biggest DeepSeek model is one of the best models that I've ever seen for research tasks, even
though it doesn't have search enabled. Um, I use it,
it's exposed for free through Lambda Chat. Um, so you
(53:46):
can go and, so, like, you have to think about, there are different releases of R1, DeepSeek R1, and most of us played with the middle implementations. Uh, like the.
S1 (53:56):
Yeah.
S2 (53:56):
Yeah. Like the.
S1 (53:57):
Not the.
S2 (53:57):
Full, not the full 671-billion-parameter one. Right.
They have that hosted on their on their architecture for free. Yeah.
And it is fantastic to use um, and uh, I
find that model to be really, really good.
S1 (54:10):
Let me check real quick. Yeah.
S2 (54:12):
Let's see what Groq is running. I don't think they're even running 671. I think they just pulled down, right there, on the compound beta mini, where it says right there.
S1 (54:25):
Yeah.
S2 (54:25):
Oh.
S1 (54:28):
Oh, here we go.
S2 (54:29):
Yeah, yeah. See, so they're doing DeepSeek 70B.
S1 (54:34):
No comparison.
S2 (54:35):
Yeah.
S1 (54:35):
No.
S2 (54:36):
Yeah, 671 billion parameters. So, um, yeah. So I use that model. It's a really good research model, but it doesn't have search enabled, right? So that's, it's, uh, missing. And then, so, Gemini, uh, DeepSeek 671, and then I really get a lot out of, um, uh, when
(54:56):
I code, you know, 3.7 on Claude is really good. Um,
although it started to go kind of haywire recently, I.
S1 (55:03):
I switched off. I, I went from 3.5 to 3.7,
and I liked it a lot, but 2.5 from Gemini
came right after that.
S2 (55:11):
It did.
S1 (55:11):
Yeah. And I was just like, damn, that's really good.
And then O3, roughly the same time. Yeah. Yeah.
So I've kind of been messing mostly with those.
S2 (55:19):
That's the problem is like I want to consolidate subscriptions.
Like I want to just say, oh, I'm just going
to stick with this, but I can't, because if you're
at the cutting edge of using this stuff every day,
it's like you want the best model for the best
thing all the time.
S1 (55:33):
And there's like two releases per week. So you're just like,
it's not.
S2 (55:37):
Yeah, yeah, yeah. And so yeah, you end up spending somewhere in the order of $400 to $600 a month for all your subscriptions, you know. Not including your... I mean, that's including everything. That's including your, um, your direct chat interfaces that you're paying for, like ChatGPT implementations, but also your API calls, and then also your subscription to Cline
(55:58):
or not Cline, but Windsurf or, um, um, what's the other one? I use Windsurf, sorry. I always forget the name of the other one. C...
S1 (56:06):
Cline.
S2 (56:06):
No. Oh, yeah, Cline. But... Cursor, Cursor. There you go. Cursor. Yeah. Um,
so you have subscription to one of those two, and
it's like, okay. Yeah. Now I'm paying quite a bit.
S1 (56:16):
I just realized, uh, a link between coding and the, um,
AI competition. Mhm. Which model are you using versus which
scaffolding do you have in the form of giving that
cursor rules prompt.
S2 (56:33):
Yeah.
S1 (56:33):
Yeah. Right. So it's like, I strongly believe, and AI Jason talks about this. I forget his name. Is it Jason Wong? Is that his name? I don't.
S2 (56:43):
Know but he's brilliant I love that guy.
S1 (56:44):
Yeah. So he makes like really, really practical videos. Yeah. Um,
so he basically, um, put out some stuff. Cursor rules,
like a single prompt that can generate that full PRD
with a checklist. Yeah. And the model actually goes and
checks things off the list.
S2 (57:01):
Yeah, I do it the same way. So I use Berman's, or, uh, um... is it Berman? I can't remember. What is his last name? The AI content creator. You were on his show. He talked about Fabric. Matt.
S1 (57:14):
Matthew.
S2 (57:14):
Matthew. Yeah. Okay, so. So, Matt Berman, or Matthew Berman, did a show on this as well? Yep.
And he had a set of questions that he put
in the show. But he never like released a GitHub
or a gist or anything like that.
S1 (57:26):
That's the thing I love about AI Jason. He's like, boom, it's on GitHub.
S2 (57:29):
Yeah, it's on GitHub. Right. So I went and grabbed Matt Berman's and I tweeted, I just tweeted it out. Like, I had transcribed it and then pulled out the questions. And then now, in that prompt to the AI, I'm like, here is a, here is the structure of a PRD I want to build. I need you to ask me relevant questions to fill in the thing. And
then I have it ask me the questions and then we're,
(57:50):
you know, we have a chat.
S1 (57:51):
So it builds a PRD. Based on your interview?
S2 (57:54):
Based on my interview, it interviews, I tell it to
interview me.
S1 (57:56):
Yeah.
S2 (57:57):
And then once we're done, then it's like, cool. Do
you want me to stitch all this together into a
fully functional PRD? I'm like, yes. And then we move
on to the architecture section, and then those live as
markdown files in my project. And so when I need
to start a new chat, because I know the context
window is filling up for windsurf, I start a new
chat and I say reanalyze our, you know, our core,
(58:17):
our core architecture and our PRD so that you understand
everything about this project and the readme that we built. Yeah, yeah.
And then it's like it's like I'm starting again with
great context. And you know, obviously you need to hook
it up to version control as well. So if anything
goes haywire gets deleted, you can snapshot back. But um,
but those are some pro tips for the coding people
out there.
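And a sketch of the re-priming step for a fresh chat, reusing the same illustrative file names as the sketch above; the prompt wording is an assumption, not a specific tool's feature:

```python
# Sketch of re-priming a fresh chat once the old context window fills up:
# re-read the PRD, architecture doc, and README from the repo and hand them
# back to the assistant as the first message. File names are illustrative.
from pathlib import Path

CONTEXT_FILES = ["PRD.md", "ARCHITECTURE.md", "README.md"]

def reprime_prompt(project_dir: str = ".") -> str:
    docs = []
    for name in CONTEXT_FILES:
        path = Path(project_dir, name)
        if path.exists():
            docs.append(f"--- {name} ---\n{path.read_text()}")
    return (
        "Reanalyze our core architecture and our PRD so you understand "
        "everything about this project before we continue:\n\n"
        + "\n\n".join(docs)
    )
```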
S1 (58:36):
Yeah, yeah, I've got a rule where I basically say in the cursor rules: if I say this, it
means go and review.
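A sketch of how a trigger phrase like that might live in a project rules file; `.cursorrules` is one common location, and the phrase and wording here are made up for illustration:

```python
from pathlib import Path

# Hypothetical contents for a project rules file: a trigger phrase that tells
# the assistant to stop and re-review before continuing. Wording is made up.
RULES_SNIPPET = """\
When the user says "sanity check", stop what you are doing, re-read PRD.md
and ARCHITECTURE.md, and point out anywhere recent changes have drifted from
them before writing any more code.
"""

Path(".cursorrules").write_text(RULES_SNIPPET)
```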
S2 (58:44):
Okay, cool.
S1 (58:45):
So if I feel like it's going wonky. Yeah, it's
just like a reset. Yeah. Um, I wanted to mention
one thing.
S2 (58:50):
You don't say it like, make it so. Like Picard.
S1 (58:53):
You could do that. Yeah, absolutely. Um, so I wanted
to give, uh, a little shout out to, uh, Caleb
Sima for a post he did before RSA, where he
was complaining about, um, panels, just like being kind of empty. Yeah. Um,
(59:13):
and I was just thinking about this. The whole reason these long form conversations work is because we're just riffing. Yeah. And we're just thinking of stuff. But ideally, the stuff we're
thinking of is stuff that was new to us. Therefore,
it's going to be new to them. Yeah. And what
happens with so many talks and so many panels? And
I think this is like a meta conversation that we
(59:33):
just really need to solve. Um, obviously for us as
content creators, which I think we do a good job. Yeah.
But I would say the industry needs to solve this
is don't just come on and be like, you know,
here's the thing, Jason. Like, the landscape is changing. You know. Yeah.
You know, AI is changing everything. It's changing the game. Yeah.
(59:54):
And you're like, yeah, it's just changing the game. We're like,
all right, that's all the time we got. Yeah.
S2 (59:58):
Everybody let's go to lunch.
S1 (59:59):
Yeah. So it's like two people reflecting backwards or maybe
a panel of four. Yeah. And they're just reflecting back
these things that we've heard for the last few months
or a couple of years. Yeah. It's like, um, was
it Matt or somebody we know was like, if I
hear one more person say, um, the attacker only has
(01:00:21):
to be right once, and the defender has to be
right all the time. It's like we learned this ten
years ago. Yeah. So the first time we heard it,
it was hella smart. Yeah.
S2 (01:00:29):
Yeah. It was great.
S1 (01:00:30):
Yeah. And now it's just like so many panels, so
many conferences, so many talks are just that.
S2 (01:00:35):
Yeah, yeah. It's, uh, it's that it's that there's no
contention anymore on panels like. Yeah. I mean, even the
ones at OpenAI, when we went to that, there was
like towards the end, it was a lot of like, yeah,
yeah we agree. And so I think at the end
I was like trying to spice it up a little bit.
Like I wasn't the moderator but I asked a question.
I'm like, who's your most feared competitor?
S1 (01:00:54):
That was.
S2 (01:00:55):
Like one.
S1 (01:00:55):
Of the.
S2 (01:00:55):
Best questions. And everybody was like, oh shit. And then like,
you know, the, you know, like, uh, the guy at
the end, that.
S1 (01:01:01):
Was one of the best questions.
S2 (01:01:02):
It's definitely Dan and Trail of Bits. And Trail of Bits is like, oh, we're really scared of, like, whoever. And the guy at the end, he's like, he's like, I don't know, I'm just doing my best over here.
S1 (01:01:10):
Yeah. Trying to survive. Yeah.
S2 (01:01:12):
Try and survive. And, uh, yeah, I wish there was a little bit more contention. And you can, you can talk about that. I've had, on interview panels before, the people leading, whatever, moderators or whatever, yeah, I've had them be like, hey, I'm going to ask this question, and if you all answer the same fucking thing, I'm gonna put you off the panel. And I love that. I'm like, yeah, yeah.
S1 (01:01:28):
So another of our mutual friends actually engineers this into
the thing. Oh, really? Sasha.
S2 (01:01:34):
Oh, yeah. He does. Yeah.
S1 (01:01:35):
Sasha is like, look, if I'm going to be on this.
S2 (01:01:37):
I didn't get to see Sasha for more than ten minutes.
This whole.
S1 (01:01:39):
Me neither. I just yeah, just like he was a blur.
S2 (01:01:43):
A blur.
S1 (01:01:43):
Yeah, yeah. But, like, he's like, if I sit on
this panel, I'm going to engineer a thing. I need
to know what you all believe. Yeah, so I can
find something I disagree with. Yeah. Otherwise, this is going
to be the dumbest panel ever.
S2 (01:01:55):
And it and it works on me wonderfully because like,
when people don't agree with me, I just get angry. Yeah.
S1 (01:02:01):
No kidding. Like I'm like, I'm like.
S2 (01:02:02):
No, you're.
S1 (01:02:03):
Wrong. Like, we're good at this. We're good. We're good
at having a good, you know?
S2 (01:02:07):
Yeah, yeah. No, I.
S1 (01:02:08):
Was having a go.
S2 (01:02:09):
I was, uh, I mean, while doing RSA, I was
tweeting a little bit, um, on on Twitter. And I
happen to have, uh, a friend who was rooming with
me who was doing security research. So he found a zero day, basically, reported it to the vendor, but a whole bunch of individual companies that implemented the software, um, they have not yet fixed the bug. So he goes
(01:02:31):
out to all these bug bounty programs and submits it,
and also a whole bunch of vulnerability disclosure programs.
S1 (01:02:36):
Mhm.
S2 (01:02:37):
And um, and he submits it to the vulnerability disclosure programs.
And he comes back to me because, you know, I'm
a former bug bounty guy. Right. And he's like, he's like, yo, is it, like, you know, normal? Like, the crits that I got for people who have bounties, I got 50 points on the platform, on HackerOne, but I only get seven for responsibly disclosing it without getting paid.
He's like, that's weird. So I go and tweet about it.
(01:02:57):
And admittedly, I didn't tweet right. I don't think like
I definitely went out and kind of sensationalized it a
little bit. I'm like, this sucks. Like, give the guy
more than seven points, right? But the bug bounty community
of which are a lot of my homies, like, basically
jumped on me and and it like got in my
head a little bit. But then I managed to like
push it down a little bit, but like, you know,
the bug bounty. This is a huge thing this week.
(01:03:18):
Huge discussion is just like, you know, VDP is evil,
VDP is labor exploitation, VDP is everything wrong with bug bounty. Jason, you're the worst person in the world for, um, for promoting that anyone ever even think about VDPs. And so
then I came out with this post, uh, you know,
a guy was trying to debate this with me, and
he's he's a smart dude, smart bug hunter. I have
respect for him. But he, like, said one sentence and
(01:03:41):
it was it felt a little aggressive to me. And
I was just in that mood and I was like,
fuck it. Block like. And I just block this dude. And, um.
And then he, like, went off the deep end and started, like,
posting more and more and like, you know, I can't
believe Jason did it. Turned out he had been to my class, which I felt bad about. Oh, um, and so I unblocked him and tried to have a conversation.
But just like there's so much vitriol on this topic
(01:04:02):
of labor exploitation. So I went out and I said, hey,
I believe in VDP. Actually, I believe there are plenty
of companies who on ramp with a VDP and then
start paying for bugs. I have seen them. I have
worked at Bugcrowd. This is not a fictional thing, right? Uh,
some companies are too big that if they were just
to open up. I mean, Ubisoft was one of these companies, right?
If you would have opened up a bug bounty for
(01:04:23):
everything on Ubisoft, all scope, all classifications of bugs, they would have gone bankrupt, right? Yeah. And so they
had to start with the VDP to burn down a
little bit of the stuff, incentivize people to come on
and then eventually move into a paid program. Yeah. And
then they ended up shutting down the program. Um, but
a lot of companies work like that, and no one's
forcing you to work on a VDP, right? Like, I wasn't,
(01:04:45):
I wasn't.
S1 (01:04:45):
Can you for everyone just give an overview of the
difference between the two?
S2 (01:04:48):
Yes. Okay. So a bug bounty program is a program
where you pay for bugs, right? Like a researcher on
the internet comes out and says, I found a web
application vulnerability with your software and you've said, yes, I'm,
I'm paying for these, usually on a platform like Bugcrowd or HackerOne or Intigriti. And they come onto the platform, they submit the bug, you pay them, right. A VDP is
a vulnerability disclosure program, where you as a company don't
(01:05:11):
have like an email box or anything like that to
take in vulnerabilities. And so usually the platform handles it
for you and you show up on their platform as
a card that says, hey, we do not pay for anything,
but if you find something, report it to this program. Right. Yeah. Um,
and we will, uh, give you props, you know, we will.
S1 (01:05:30):
Find some kind of reward, but we don't have a
mountain of cash.
S2 (01:05:32):
Yeah, we don't have a mountain of cash. Right? And it's just... it feels like bug hunters are, you know? You know, this guy was trying to cite, like, he's like, the no-more-free-bugs thing. And like, you know, like, listen, man,
I was there when that happened. I mean, I mean,
I don't know if this guy was even born then,
but he's citing this. But I was there when no
more free bugs was happening. And I don't think that
was the core message completely. I think it was I
(01:05:54):
think it was that like, you know, there should be
fairness in disclosure, there should be fairness in credit of vulnerabilities.
There shouldn't be people suing each other when vulnerability research
is discussed. Um, and so I wrote this big long
thing and I just got railed this week like by the,
by my own community. And I felt I felt really
attacked and I was like, fuck it, I'm not going
to do bug bounty stuff anymore. Like I'm hosting the
Hong Kong and it's like, it's like you guys are
(01:06:16):
some angry people. Like, if you don't want to work
on VDPs, don't work on VDPs, right? But like the
other thing I didn't say in that thread was that, um,
was that a lot of people I mentor, they don't
have their foot in the door yet. They haven't had
a job yet. Right. And so when they go to interview,
if they have bug bounty experience, that's awesome. But usually
when you're starting out, finding a bug on a bug
(01:06:36):
bounty is much harder than finding a real bug on a VDP, right? Because there's so much more scope. Not everybody is testing on the VDPs. They're see something, say something, right? So if you find something on a
VDP and you're doing stuff like PortSwigger labs and Hack The Box and all that stuff, it's like that shows
me as an employer, well, they've got the skills, right.
They have the skills. They just need a chance. Right?
S1 (01:06:55):
And it's receipts.
S2 (01:06:57):
It's like.
S1 (01:06:57):
It's.
S2 (01:06:58):
Receipts. Yeah. And, you know, I didn't even want to
go into that part of the argument on Twitter because
I'll get flamed more. But, um, yeah, I mean, it
goes to show that like, uh, you know, like I
drew that corollary from the panel thing you were talking about, right?
It's like it's like, uh, you can't just agree with everyone.
You have to have a point of view. Yeah. We're
talking about this in content creation, right?
S1 (01:07:17):
Like, totally.
S2 (01:07:18):
So so here's this idea. And this is also off
the beaten path. But, uh, for the listeners. But Dan
and I are content creators, right? And, and we are
lucky enough to have, you know, 25, 30 years in
the industry, almost like, um, and so, you know, 20
years for me. Um, and so.
S1 (01:07:34):
Can I stop you even before. Yeah. So I have
like a pre point for this.
S2 (01:07:39):
Okay. Go ahead.
S1 (01:07:40):
Yeah. So well make sure you keep this thread because
this is an amazing thread. So I want I want
to make a quick point I can't believe this is
going in this direction. But so a lot of young
people come to us and they're like how do I
become a YouTuber? Yeah. How do I become a content
creator on Twitter or whatever? The first thing I tell them,
which is exactly your point, learn something so that you
(01:08:02):
have an opinion about it. Yeah. Because I don't I
do create content and you create content. But I wouldn't
say that fundamentally, that's what we are. There are a
lot of people who are content creators, and I feel
like that is their actual job. So they are looking
for content whereas we are building and doing things. Mhm.
(01:08:24):
You got a training company, you got a consulting company.
You're doing that. The content falls out of it.
S2 (01:08:29):
Yes. Correct.
S1 (01:08:30):
Yeah. Right. And I feel like that is so key
especially for young people. You have to like get good
at something.
S2 (01:08:36):
Yeah, well, it's not even. It's not that you have
to get good at something. It's that you have to
do something.
S1 (01:08:41):
Have an opinion, have a path. Try something. Yeah. And
the content emerges from that.
S2 (01:08:48):
Yeah. I mean, one thing. Uh, so, Clint, you know,
our friend at Semgrep, he did an interview with me later this week on the Semgrep blog. That'll come out
in a week or so, but it was mostly about
career stuff. And what I was telling him is like,
it's like when you're new, it's really hard to show
that portfolio of work, you know, and you can do
it with VDPs. You can do it with bug bounty, you can do it with CVEs. You can do it by
(01:09:09):
taking TryHackMe and getting certs and stuff like that. But
some of that stuff is paid work, right? Or it's
paid like some of that training is paid. And when
you really have nothing, you're going to have to just
focus on the free resources. But one of the things
that you can do with nothing at all is start
up a blog and talk about your learning experience. And,
you know, you said get good. I'm going to challenge you there.
(01:09:30):
You just have to do something. Yeah, do something and write
about it and have an opinion and talk about your
learning experience. And so like when I read a blog about.
S1 (01:09:37):
And be vulnerable.
S2 (01:09:38):
And be vulnerable.
S1 (01:09:39):
Yeah. Like you were saying earlier, talk about the bad,
talk about the good.
S2 (01:09:42):
Yeah, yeah. And you know, so like when I see
a blog from someone who's writing about their first, first usage of Burp Suite, right? And it's like, yeah,
I've read that blog 800 times, but I haven't read
it from you.
S1 (01:09:53):
That's right.
S2 (01:09:53):
And you're talking about like, your thought process and like,
you know, learning this tool or you're doing a YouTube
video on it or something like that, I go look
at that stuff in the interview pipeline. Right. It's an
it's an additional thing in your portfolio. And it always
makes me feel good to watch somebody trying to learn something,
you know, and, and I feel like. So it's not
that you have to be good at something. It's just
you have to do something and and that becomes your content,
(01:10:14):
I feel.
S1 (01:10:15):
Yeah. And don't worry about overlaps.
S2 (01:10:17):
Yeah. Everyone.
S1 (01:10:18):
Because Jason and I are so similar. Yeah. If we
tried to make the same piece of content, we wouldn't.
S2 (01:10:23):
And then we get angry at each.
S1 (01:10:24):
Other because we have we, we have like the, we
have very similar. But it's still going to come out different. Yeah.
In a Jason way. In a Daniel way. Yeah. Yeah.
And that. So you shouldn't have to worry that someone
already wrote the burp tutorial.
S2 (01:10:38):
Yeah, yeah. Yeah, exactly. So the other thing I was talking about, um, what I think I was trying to explain, is that, uh, in content creation also... I mean, like, I don't know if we call ourselves content creators or whatever.
S1 (01:10:51):
I mean, we definitely are, but it's not like my
main identity.
S2 (01:10:55):
But I see so many people having like, being content
creators for the the point of being a content creator
and struggling to find content because they're not doing work.
S1 (01:11:05):
There you go. That's it. That's that's my point. Yeah.
S2 (01:11:08):
On top of that, not having an opinion. And I
think that.
S1 (01:11:12):
That's like the worst.
S2 (01:11:12):
Yes, I think that that, you know, it goes into
the panel thing we were talking about. Right. It's like
you need in this industry, if you're going to be
doing content in opinion of some sort, you need a voice. Um,
and it doesn't have to agree with everyone. You could
be the antithesis. Like if you want to be the
anti-Jason Haddix guy and say that, like, VDPs are, you know, exploitative work and like, you know.
S1 (01:11:32):
At least you're saying something.
S2 (01:11:33):
At least you're saying something like, I respect that guy.
You know, like, right. Like he has an opinion. Hurts
my feelings a little bit, but he has an opinion,
you know, and it's like.
S1 (01:11:39):
Especially, especially given what you've done for bounty.
S2 (01:11:42):
Yeah yeah yeah yeah, yeah. Um, but but yeah, having
having an opinion is is almost more important than content
these days, I think. I think content comes from your opinion.
S1 (01:11:51):
So 100% correct. Another way that gets described is taste.
S2 (01:11:55):
Yeah.
S1 (01:11:55):
Yeah. So so imagine this I talk about this a lot,
but it's like a few years from now or maybe months.
Who knows how fast this stuff is moving. But, um,
you just like you do with, uh, narration of code,
you're talking to cursor. And on all these screens, it's
popping up. You mean like this? You mean like this?
Or you're having it make you an anime series? Yeah. Or,
(01:12:17):
you know, a book or whatever. Do you mean like this?
And you're just swiping through? No, not like that. Not
like that. That's the one I like.
S2 (01:12:23):
Yeah, that's the one I like.
S1 (01:12:24):
Yeah. And that taste. Oh, this reminds me of Rick Rubin, actually. Um,
so he produced Slayer, a whole bunch of rap albums.
He's like the most famous producer ever. Okay. Doesn't play
an instrument. Really can't read music. Plays no instruments.
S2 (01:12:41):
Wow.
S1 (01:12:41):
And they say, how do you get hired to do this?
He's like, I know exactly what I like.
S2 (01:12:47):
That's amazing. Right?
S1 (01:12:48):
Yeah. So to your point, like opinion taste.
S2 (01:12:51):
Yeah. I mean, that was... we were talking in the car about the branding for Arcanum, for my thing.
one of the most freeing things for me, being the
CEO of this company now is I have complete control
over branding, marketing message what we do and do not
do to enable those things. Right? Like our strategy. I'll
(01:13:12):
give you an example. It's a freebie for any company
out there. Um, but, uh, where all of your marketing
teams are telling you, like, you should be, you know, chasing,
you know, inbound leads, and you're going to use BDR
based marketing and stuff like that. That's that's all dead like, uh,
if you, if you want to be an infosec company
(01:13:32):
in 2025, it is influencer marketing and it is especially
SME-based marketing. And this is a lost art: training someone at your company, like one of your sales engineers, or maybe you're small enough that it has to be your CTO or CEO, but then going to conferences, and this is what I do, and speaking on topics, not selling anything, contributing to the community of security. We get so many
(01:13:54):
warm leads from this, dude. My talks, like, I get 5 or 6 people right after my talks who are like, let's work together, give me your card. Like, you know.
S1 (01:14:00):
And you're not even you're not even saying.
S2 (01:14:02):
Not saying anything.
S1 (01:14:02):
Julia is.
S2 (01:14:03):
Yeah. Julia is. Yeah. Julia. My wife is like, hey,
we sell things, you know? But but that's that's our
vibe is we go to conferences. I mean, I'm traveling
to one a month and it's a lot of travel
and it's hard, but marketing people have been pushing this
away like, oh, well, we have to, like, train the
expert and set up the event, and then we have to, um,
and then we have to buy a hotel room and
flights and that's all really expensive. We'd rather spend that
(01:14:25):
on, like, BDRs cold calling people. I'm like, that is
the opposite way of how this industry works.
S1 (01:14:29):
You figured this out really early and you locked on
and it's working.
S2 (01:14:32):
Yeah, I figured this out early, and, and again, like, the creative control too, for our brand. It looks like a metal brand, right? Like our Arcanum font looks like the Metallica font. And the.
S1 (01:14:41):
Purple.
S2 (01:14:42):
The purple and the Spectre logo and all this stuff
was like, before I used to have to go in
front of a, you know, like a panel of people,
the CEO, the CTO, the, you know, the marketer, the brander,
and be like, here's my idea for this cool thing.
I know it'll crush all of our competition. And they'd
be like, ah, I don't know. I don't like that.
And I'm like, I know the security industry like, this
(01:15:03):
is what they want. Like they want cool shit. Like,
you know, another example is like when we put our
our name out there with them, like it looks like
Metallica font. So it says Arcanum, right? We don't put
our website, we don't put anything. The goal is for
you to wear our shirt, that you could just wear
it out every day. And nobody knows. You're marketing a
security company and many people are very anti that. They're like,
(01:15:24):
you have to have your website, you have to have
something to tie people back. And I'm like, no, eventually
someone's going to ask that person, where did you get
that badass shirt? And they're going to be like, oh,
I went to a security conference. There's actually this hacking
outfit that does training called Arcanum. That person works in it.
They're like, oh, I'll go check that out. And like,
and it's like word of mouth, basically.
S1 (01:15:41):
In the meantime, it just looks cool.
S2 (01:15:42):
And it just looks cool. You wear the shirt every day.
And so like there are a whole bunch of like
micro tricks like that inside of marketing, branding, having an opinion,
building a company that I feel like I have just
seen regular marketing people from other domains just come in
and not get in the security industry. And so that's
why it works for us, I feel like. Yeah.
S1 (01:16:01):
Well, dude, we finally did it.
S2 (01:16:03):
We did. We, we sat down and did a thing. I think the last three times we saw each other, we were like, we should just record, go up to the room.
S1 (01:16:08):
We, we've talked about it. So what will happen is we'll get on the phone. Yeah. And we'll do exactly this. Yeah. We should have recorded. We should have recorded that.
S2 (01:16:18):
Yeah. Yeah.
S1 (01:16:19):
That's happened 29 times. Yeah, exactly. And then we're like, no,
we got to get on the mic and we got
to do it. And it's like never happened. Yeah, we
did the one for a vendor. Yeah, but that wasn't this. Yeah, yeah.
S2 (01:16:29):
Um, okay, I got, I got one for you before we leave. So, your opinion on Thunderbolts? We went to go see the new Marvel movie, uh, two
nights ago.
S1 (01:16:40):
I'm struggling to.
S2 (01:16:40):
Tell the people. Tell the.
S1 (01:16:41):
People? I'm struggling. I'm struggling.
S2 (01:16:44):
I loved it.
S1 (01:16:45):
So I liked the main character. I liked, um, you
know Johansson's sister?
S2 (01:16:50):
Yeah.
S1 (01:16:50):
Yeah, yeah. She was like, the strength, um, the father
daughter thing. The father daughter.
S2 (01:16:56):
Thing was strong.
S1 (01:16:56):
Yeah, yeah, yeah, I like that a lot. Um, I
don't know, I just like so many things about the
old franchise. Yeah, I just feel sad. Yeah, when I
think that, like, they're not around anymore.
S2 (01:17:10):
Yeah.
S1 (01:17:11):
First, I have a question. Where'd they go?
S2 (01:17:12):
I don't.
S1 (01:17:13):
Know. Why can't they come back?
S2 (01:17:14):
I think Thor left off planet um, because he has,
like a daughter now, right? And he's on, like, other planets.
Like saving them, not Earth.
S1 (01:17:22):
The. Okay, so multiverse freaks me out.
S2 (01:17:25):
Yeah, the multiverse I was not super down with. Honestly.
S1 (01:17:28):
Like, you know, here's my problem with the multiverse it
invalidates death. It invalidates things you're supposed to care about. Yeah. Sacrifice. Yeah. Yeah.
If someone can, like, snap and just people come back,
I'm like, yeah, what are we even doing here?
S2 (01:17:45):
Yeah. It's also just hard to keep up all the
different multiverse plot lines. And, um.
S1 (01:17:50):
Is that person alive or dead? Well, not in this universe.
S2 (01:17:53):
Yeah, exactly. Yeah. Uh, we were talking about the differences between, like, adaptations of comics in movies, and then what actually happens in the comic books. Pretty much, I feel like, it is universally better in the comics. Uh, except for a couple things. Uh, I think that there have been a couple of movies where they've done origin stories that were better than the comics, but we were talking
(01:18:14):
about Bane and how much different Bane is in comics, and Superman too. Yeah, Superman too. Yeah. Um, how different the death, or the breaking, of Batman was in the comics with Bane versus the movies, and then the death of Superman in pretty much everything. Um, like.
Like there was a whole bunch of stuff that I
guess you just don't have time to do, you know, like, uh,
(01:18:36):
like basically, like, before Doomsday killed Superman in, you know,
in the comics.
S1 (01:18:42):
Spoiler.
S2 (01:18:43):
So. Oh, yeah. I mean, if you haven't, you haven't
read the comics, you should. But, uh, this was when
I was a kid. I mean, Doomsday ran amok, like, you know, in America. And he
just he wrecked every single person in the Justice League. Like, just,
like slapped them down. And so that that, like, builds
this context of, like, six issues up to the point
where you're like, oh, man, the only person that's going
to be able to stop this villain.
S1 (01:19:03):
Yup.
S2 (01:19:04):
You know, the combined might of the Justice League has
done nothing. It's only going to be Superman. Yeah. And so, like,
it breeds this, you know, like up and coming crescendo
of battle. And then, you know, you have the epic
battle between Doomsday and Superman and he dies. Um, and,
you know, one of the things I like to talk
about is when I was a kid, that issue came
in a plastic bag with the Superman logo with blood
dripping down it. No way. And included in the plastic
(01:19:26):
bag was the comic of his death. It was all
about his funeral, him dying, the world without Superman. And
it came with a black armband with the Superman logo
on it so.
S1 (01:19:36):
You.
S2 (01:19:36):
Could mourn him. And it was made of, like, vinyl
and so you could put it on your arm. I
wore that to school the day Superman died, and some of my nerd friends did, too. It was.
S1 (01:19:45):
And you were Superman in your wedding?
S2 (01:19:47):
I was Superman in my wedding. We all wore superhero
shirts under, uh, under a thing, and I was Superman.
And we were talking. Oh, we watched that video last
night of, like, the iterations of all the Superman.
S1 (01:19:55):
Oh, yeah. Yeah, there.
S2 (01:19:55):
Were a couple Supermen I've never seen before, by the way.
S1 (01:19:57):
Yeah. And you're getting me on the the one that
you like.
S2 (01:20:00):
Yeah. Um, Superman and Lois, uh, season one I thought was fantastic. I think he's one of the best Supermen I've ever seen. Um, and then I'm
getting you on arcane, too.
S1 (01:20:11):
Yeah.
S2 (01:20:11):
That's right, arcane is fantastic.
S1 (01:20:12):
Like.
S2 (01:20:13):
Like like in a few. Yeah, yeah, yeah. So, um. Yeah,
I mean, if you wanted some nerd segment that was,
you know.
S1 (01:20:19):
Yeah. And then a week from now, we go on
our spiritual retreat.
S2 (01:20:23):
We do? Yeah. We do.
S1 (01:20:24):
And can't wait to see you there.
S2 (01:20:25):
Yeah. We, uh, we do EDC every year in Las Vegas.
It's three days where we try not to talk about work. Yup.
And just listen to music and be best friends. Outside
of that, I highly recommend for any of you who have,
you know, friends who sit in the industry, and you kind of go and hang out with them, and
you end up talking a lot about infosec and work
to plan something that's not work related. Like, you and
(01:20:46):
I have EDC, and I'm trying to build something with
Kev where I go to like a comic con, you know? Yeah. Um,
and you're more than welcome to come if you want, but, uh.
S1 (01:20:54):
I've been once, I think.
S2 (01:20:55):
Yeah, I like comic cons a lot, but, um, but
just that activity of doing something outside of infosec is
really nice. So.
S1 (01:21:00):
Yeah. And then we start with work, but really, it
ends up being like life plans. Yeah. And how we're
helping each other.
S2 (01:21:06):
Yeah, exactly. Yeah. Yeah.
S1 (01:21:07):
So we'll do. This was fantastic.
S2 (01:21:10):
Awesome. Yeah. Hopefully.
S1 (01:21:11):
Hopefully it was recording.
S2 (01:21:12):
Yeah. Hopefully. Yeah. I see the little thing. So. Yeah.
For sure.