
June 3, 2025 · 33 mins

Sponsored by Vanta. Vanta takes the busywork out of GRC so you can focus on what actually matters—improving your security, not chasing compliance. https://ul.live/vanta

This isn’t just another AI podcast. It’s about the deeper shift that’s happening in cybersecurity—away from individual tools and dashboards, and toward real-time, comprehensive world models of what we’re trying to protect or attack. I'll walk through how I came to this idea, what it means for security assessments, red teaming, vuln management, and beyond—and why context, not AI, is the actual revolution.

📽️Check out the full video here: https://youtu.be/UwTTcka1Wd8

Topics covered: 
Why the core problem in security is organizational knowledge
Unified Entity Context (UEC) as the future architecture
Modular, AI-augmented security stacks
Why every attacker and defender will soon be running one
How this flips the AI conversation on its head

If you care about where hacking, automation, and AI are headed—this is the blueprint.

📬Subscribe for updates about trends and ideas in Cybersecurity, National Security, AI, Technology, and Society👇🏼
https://newsletter.danielmiessler.com/
👉🏻 X (Twitter): https://ul.live/x
👉🏻 Instagram: https://ul.live/ig
👉🏻 BlueSky: https://ul.live/bluesky
👉🏻 LinkedIn: https://ul.live/li

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:02):
All right. I want to do something crazy here. Specifically,
I want to talk about the future of hacking. And
what do I mean by that? What do I mean
by hacking? I mean all hacking: attack, defense, bug bounty,
personal automation stacks, enterprise automation stacks, attacker automation stacks, enterprise security. Everything.

(00:24):
Everything hacking related, I think is going to come down
to what I'm about to talk about. And I am
aware of how big of a claim that is. The
reason I'm able to make this prediction is I'm not
stupid enough to claim to know how it's going to happen,
or exactly when or with what companies or like what
technologies exactly it's going to manifest as. That would be
ridiculous because the stuff is basically unpredictable. What I'm going

(00:46):
to show you is a direction. And I think once
you see it, you will be unable to unsee it.
And by the way, this video is going to be
around 30 minutes. But the whole first 25 minutes is
leading up to the last five. The last five is
the good stuff. All right. So quick intro for people
who don't know me. I've been in security since like
1999 and I went heavy into AI in late 2022,

(01:09):
still doing tons of security stuff just with this AI
wrapper around it. And you're going to see why here
in a second. And I would say most of my
technical background, I mean, I've done lots of different stuff,
but it kind of boils down to a container of
security assessment that's like the main outline. So first going
to show you how I kind of walked into this idea.

(01:29):
So when I start to do a security assessment (and this
goes back, you know, 15, 20 years, uh, of doing security assessments),
I like to start at the very top. When I
talk to a company, I like to interview the CEO
if I'm able to, if it's a, you know, medium-sized
company or below. So I talk to the CEO, I
talk to the head of legal,

(01:49):
I talk to as many people at the top as I can,
and I'm sort of asking them the same questions. And
then I move down through the structure. I talk to
all the VPs and senior VPs and the CISO and the
rest of the C-suite and everybody. And then I start
moving through all the management, and I'm asking kind of
similar questions, but I'm also asking different questions because now
I'm playing off the answers I got before. And I
just move through the whole structure all the way down

(02:10):
to the people who do the actual work. And then
as I keep gathering more and more information, I start
filling in these elaborate diagrams that describe to me exactly
how this company works, like here's how the information flows.
Here's where the data is stored. Oh, we got vendors
over here. They're able to touch this data or whatever.
And ultimately I'm trying to figure out like what they're protecting,
how they're doing it, what day to day business looks like.

(02:32):
After a couple of weeks of this, I then start
doing my technical assessment to find vulnerabilities. So I'm also
reviewing their previous technical assessments, but I'm really doing my
own as well to go and probe in these various
different areas and point my, uh, observations at things that
I've seen in the interviews. But the underlying theme here
is I'm taking all the context, not just of the

(02:53):
vulnerabilities and the technical aspects like of the IT stack,
but of the business itself. Right. All of that. I'm
gathering into one place, and that's kind of how I
view and how I start security assessments. Other teams
are still managing GRC with spreadsheets, screenshots, and manual processes,
but with everything evolving (compliance frameworks, third-party risk, customer expectations),

(03:14):
this is no longer good enough. And the problem isn't
just that it's time consuming. It can actually hold you
back: slow audits, missed risks, and less time
to focus on what actually matters, which is improving your security.
Vanta's trust management platform is designed to help with that. It
automates the core parts of your GRC program, things like compliance readiness,

(03:35):
vendor risk, and internal controls so you're not buried in
manual work. According to IDC, teams using Vanta are 129% more
productive in their GRC work. That means faster prep, fewer surprises,
and more time for real security work. It's not about
making compliance easy for the sake of it. It's about
getting the friction out of the way so you can
move faster, do better work, and build trust more efficiently.

(03:58):
And if you're thinking about how to approach AI risk,
Vanta has put together a free AI security assessment. It's
a structured way to evaluate risk across AI use, development,
and governance. You can get the assessment at [link]. And
thanks to Vanta for sponsoring this video. So in a

(04:20):
completely separate thread of consumer tech, in 2013, I started
getting a picture of where I thought all this AI
tech was going, like on the consumer side, which at
the time I called IoT. And I was actually talking
about this with my friend Jason Haddix in like 2013.
I was reminded of this because I just reread
the foreword, where I was thanking him for encouraging me
to write this book in 2013. But, um, these ideas

(04:42):
are pretty decent. The book is actually crap, so you
don't need to read that. In fact, I have it
published online, um, as a blog post. It's very short.
You should go read it there. It's actually not half bad.
Plus you could use AI to help you read it.
Plus it's got way better typography. So I would say
skip the book and just go read the blog. but
the basic ideas are quite good, even though the book
is not so great. Um, so the basic idea is

(05:04):
you have digital assistants that know everything about you and
they advocate for you, and then everything gets an API,
including people and objects and businesses and everything. This is
like the second piece of this, and it's really, really important.
And your digital assistant, your DA, basically uses all those
services to interact with those APIs on your behalf. Then
your DA will use augmented reality to show you context

(05:25):
for wherever and whatever you're doing, right. So you're wearing
glasses or lenses or whatever. Neuralink, whatever. It doesn't matter.
It'll start with glasses, obviously. And basically your DA knows
everything about you, knows your entire personality, it knows your
entire history or whatever. So it knows when you're scared.
It knows when you're skeptical. It knows when you're hungry.
It knows, you know when you're curious. And it's using

(05:48):
these millions of services that are available as APIs to
get data back for you and change the screen. Okay.
Sometimes it's a security screen, sometimes it's like a social
screen because you're trying to find, you know, your life
partner or whatever. So it's constantly changing what you're looking
at to help you. Maybe it's popping up little notes
or little reminders or whatever. Right? A security overlay if
you're in danger or something. So that's the third piece,

(06:10):
which is augmented reality, displaying the information from all these
APIs, from your DA, who is like the
one handling all of this and advocating for you. And finally,
the last idea is that when you have like an
entire family or an entire city or an entire country
with all these daemons, all these APIs available, that produces
tons of context that a top level AI could look

(06:33):
at and say, okay, how can I manage this city better?
How can I manage its resources? How can I turn
on these lights and turn these off and reflow this traffic?
And you know, how can I optimize? How can I
help this city achieve its goals based on all the
context that I know about it,
from all these APIs and daemons? So that was fun.
That was a cool book. That was some cool ideas.

(06:54):
Then in 2018, I got a job at Apple doing
information security stuff, but the team I came in on
was with Joel Parish and we actually built, um, well,
he was already doing machine learning, right? He was part
of the machine learning team within security. So, um, I
wanted to study up and just get really good at
this stuff. Excuse me. And my math was really bad,

(07:15):
so I had to refresh my horrible math, and I
went and did Andrew Ng's entire machine learning course. And, um,
over the course of my time there, I got exposed
to tons of ML stuff and practical uses of ML
like before the current AI stuff, and it was really helpful.
I ended up building a product there which is still
used today, so I'm happy about that. In early 21,
I left Apple to go build the AppSec and vuln

(07:35):
management teams at Robinhood with Caleb Sima. And there I
did a Black Hat talk about building vulnerability management programs based
on company context and specifically around asset management. And that
turned out to be another brick in this path that
I'm laying out here. After doing that, I decided it
was time for me to go build things on my
own and do like my own consulting. So I went

(07:57):
independent with Unsupervised Learning in like August of 22. And
it turns out that was just a few months before
ChatGPT came out. And obviously I went absolutely apeshit. When
that came out, I called everyone, I called Jason, I
called Clint, I called Caleb, I called my mom, I
called my dog. I don't have a dog. But yeah,
I called everyone. And the first place that my head
went with all of this was like security assessment and

(08:18):
building and managing security programs. So I started doing that immediately. Basically,
I took everything I was doing previously with all this context, right,
that I've been doing, like, you know, a decade and
a half or whatever. And I'm like, okay, how can
I use AI to, you know, make this even better? And,
you know, put the context first. So in
March of 23, I wrote this post called SPQA, which
basically says everything is about State, Policy, Questions, and Actions. Basically,

(08:40):
we have current context for a company or a program
or whatever. Then you have the policy, which is what
you're trying to accomplish. Then you have the questions you're
constantly wanting to ask and have answers to, and then
you have the actions that we could take
against that context. So I
feel like I'm starting to zero in on this concept
and this got decent traction. But I wanted to like

(09:01):
have a demo or something for it. So I did
another Black Hat talk, I think, yeah,
it must have been the following year, uh, to put
together a demo for this. So I put together a
fake company called Alma, and I put in tons of
context for this thing. I basically made a copy of
one of my security assessments the way that I do it. Um,
but I did it for a fake company with a
whole bunch of fake data. So I've got company mission,

(09:23):
I've got their goals, how they're different from their competitors,
what they do in business. I got the risk register.
I've got their full tech stack. I've got their security
team and their members. Like the skill sets of the
team members. Um, the list of applications, the full IT stack,
every app that they use, um, all their documentation. I've
got some fake, like, slack conversations in there. Um, what

(09:43):
repositories they use, what dev teams they belong to, how
they push code, all that stuff. It's all in here.
So then I can ask questions the same way that
I do in security assessments. And using this, you can
actually manage the entire security program using this context that
you have because you could do planning from here, you
could do threat modeling, you could do your communications. You
can produce your reports. Like I'm doing this for a customer.
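The "ask questions against the whole context" workflow described above can be sketched in a few lines. This is only an illustration: `ORG_CONTEXT`, `build_prompt`, and every field name are invented for this example, and a real system would hand the assembled prompt to an LLM rather than printing it.

```python
# Minimal sketch: one shared org context, flattened into a prompt for any
# question (reports, threat modeling, planning). Schema is invented.

ORG_CONTEXT = {
    "mission": "Protect customer financial data.",
    "risk_register": ["Unpatched internet-facing apps", "Vendor access to PII"],
    "tech_stack": ["AWS", "GitHub", "Okta"],
    "recent_findings": ["SQLi in billing app (open)", "Public S3 bucket (fixed)"],
}

def build_prompt(context: dict, question: str) -> str:
    """Flatten every section of the org context into a single prompt."""
    sections = []
    for name, value in context.items():
        body = "\n".join(f"- {v}" for v in value) if isinstance(value, list) else str(value)
        sections.append(f"## {name}\n{body}")
    return "\n\n".join(sections) + f"\n\nQuestion: {question}\nAnswer using only the context above."

prompt = build_prompt(ORG_CONTEXT, "Draft the quarterly security update.")
print(prompt.splitlines()[0])  # → ## mission
```

The point is that the quarterly report, the threat model, and the questionnaire answer are all just different questions over the same context store.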

(10:04):
I could produce a report, like a quarterly update security
report, in 30 seconds, which used to take them months
like hundreds upon hundreds of hours of some of their
best people actually trying to make this report, just to
be able to prove to the rest of the organization
that they're actually effective. So we turned that into a couple
of minutes. Right. The other cool thing is you can
respond to security questionnaires, because if you have a static

(10:25):
database of answers, you they always ask the question differently.
And it doesn't perfectly match when that you have. Right.
But if you have this kind of system with context,
it can answer it perfectly every time. So this is
an example of, like, a CISO making a statement that
no more connections are allowed to a particular sensitive resource.
And we're asking the question to the AI system, this
is a real AI system, right? And this is back

(10:45):
in 23 that I did this. So it's a real
AI system. I'm asking the question: should Julie be allowed
to connect to this thing? And it says no, she
shouldn't because the CISO just said nobody should be allowed
to connect to this thing anymore. Right. So you could
do really cool stuff when you have context. So throughout
23 and 24 and into this year, I've been building
more and more stuff around this theme of context and AI.
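The Julie example is essentially a policy lookup over shared context. Here is a toy sketch of that decision; the `Policy` schema, the resource names, and the newest-statement-wins rule are all assumptions made up for illustration, not how any real product works.

```python
# Toy policy check: a new CISO statement lands in the context, and access
# questions are answered against the most recent applicable policy.
from dataclasses import dataclass

@dataclass
class Policy:
    resource: str
    effect: str    # "allow" or "deny"
    issued_by: str

policies = [
    Policy("payments-db", "allow", "legacy default"),
    Policy("payments-db", "deny", "CISO, 2023-06 all-hands"),  # newest statement
]

def may_connect(user: str, resource: str) -> tuple[bool, str]:
    applicable = [p for p in policies if p.resource == resource]
    if not applicable:
        return True, "no policy on record"
    latest = applicable[-1]  # assume the list is append-only, newest last
    return latest.effect == "allow", f"per {latest.issued_by}"

allowed, reason = may_connect("julie", "payments-db")
print(allowed, reason)  # → False per CISO, 2023-06 all-hands
```

With context in one place, the deny answer falls out of the CISO's statement automatically instead of living in someone's head.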

(11:06):
So later in 23, I built this thing called threshold.
So it takes a whole bunch of sources and I
have context of what I enjoy. Right? The kind of
content that I think is high quality, lots of good ideas,
lots of density in the ideas, lots of novelty in
the ideas. So I give it tons of context and
that becomes the filter for the quality level. And then
I can slide this bar to say, I only want

(11:26):
to see things from these 3000 different sources that exceed
at least this quality level. Right. So that's threshold. I'm
currently about to launch another enterprise product called Same Page.
It's basically a whole bunch of management like this around
different stuff security management for programs especially. Another thing I've
had for like nine years that didn't have any AI

(11:46):
whatsoever until, you know, a couple years ago was my
attack surface monitoring service called Helios. And I'm in the
middle of rewriting this entire thing to be like what
we're about to talk about. And once again, this is
all about using the context. That's a central part of
the rewrite. And the last one I'll mention is like
a daily brief for intelligence. So I basically go find

(12:07):
all my Osint people, all my national security people that
I know have high signal, you know, high alpha in
what they say. And I basically bring that in and say, okay,
here's everything they said yesterday. Turn that into a picture
of like where this might be going. Like, where are
they agreeing in a way that looks like there might
be signal there. And then I make myself a daily report.
So just another example. So these are all kind of

(12:28):
like separate ideas hovering loosely around the concept of context.
And I feel like I was doing pretty
well here. I feel like I kind of had a
grasp of this, but a couple of weeks ago I'm like,
wait a minute. I think I actually have a much
better way to think about this and to describe it.
And that is something I'm calling Unified Entity Context. And
that won't be the real name that gets used, because

(12:48):
Gartner will come up with their own name. And of course,
that'll become the official thing. But if we look at
cybersecurity in general. We look at some use cases. There
are some interesting patterns and similarities. So for the SOC, you've
got to look at all these different types of data, right,
and try to come up with what actually happened.
Is this thing bad? Is it okay? Is it benign? Whatever.
For IR it's a lot of the same stuff. You

(13:09):
got a whole bunch of different data you're trying to
figure out. Like, is it bad? Did it actually happen?
What's the blast radius? For pentesting, you're also collecting tons of
information and you're trying to figure out, like, what path
do I go down? How do I show impact? Same
with red team; it's just more extreme. You're trying to,
you know, show more of a story and the
actual impact to the business. With vuln management, you actually need

(13:30):
to understand the organization and like how they push code
and how they do remediation. Otherwise you can't actually help
them fix things. For program management, you need project management,
budgeting, strategy, time management. For GRC, you've got: what do
we have to be compliant with, in which jurisdictions, and why?
And what are our current gaps, right, and how do those
mix together? So the common issue with most of these
is the actual ability to see multiple parts of the

(13:53):
organization at the same time and then to connect those pieces, right.
This is why security analysts and red team people, and
especially like principal people, people who have been doing this five, ten,
15 years are so valuable. It's not actually a single
task that is difficult. The problem is getting all the
information together to paint a picture to actually do the task.

(14:14):
So I'm going to take vulnerability management as an example.
Since I've lived in this hellscape for so long. What
is actually so hard about vulnerability management? Is it finding vulnerabilities?
Is it like making a pretty enough dashboard to show vulnerabilities? No,
it's actually fixing vulnerabilities. And the reason it's hard to
fix them is because you have to know what application
it's part of. You have to find the right engineering team.

(14:36):
What repo does that code go into? What's the DevOps
workflow for that? Like the team changed, right? There was
a RIF (a reduction in force), and now that team doesn't
even exist, and it got combined with this other one.
Where did that one developer go? Who's responsible for that
one app? Oh, it's different this week than it was
last week. This stuff is not easy to do because

(14:56):
it's constant change inside this company. So here's the question.
How much of our inability to do a good job
at vulnerability management or security in general over the last
15 years is actually a security problem? And how much
of it is actually an organizational knowledge problem? And think
about that for all of security. Even crazier, think about it.

(15:17):
For all of it. Right. Or all of software and services. Right.
HR collects, you know, HR data and asks HR questions
and they put it into an HR interface. Right. Project
management collects project management information into a project management database.
They ask project management questions and they put it into
a UI design for project management. Do we really think

(15:38):
these things are going to need their own separate databases
going forward? Their own separate APIs, their own separate questions?
Maybe they need their own questions. Do they need their
own interfaces? I don't think so. I think that all
kind of goes away and we end up with this
thing called unified entity context, or building a world model
for the thing that you care about. So if you're

(15:59):
an individual, your history, your belief system, your aspirations, your
favorite books and music, your past, your traumas, your salary,
blood pressure, friendships, job, career, family goals, financial goals, your upbringing,
your medical history, how strong you are, how much you
can curl like you know your blood sugar levels, right?
And then you can ask questions. Just like with the
security program, you could be like, why is my relationship

(16:19):
not working? What can I do to improve my health?
And if you're a company, it's back to the stuff
we talked about with alma. It's all of its goals.
It's all of its competitors. It's all of its slack communications.
It's all the transcripts from all of its calls. It's
all of its Google Docs and Confluence and all of that.
It's the desired ARR for the company, all the product
marketing that you're putting out for all of your products
and all the product marketing your competitors are putting out

(16:41):
for all their products. This becomes the baseline for everything.
Once you have that, then you do this. Then you
take the smartest, biggest context AI that you have and
this will be massive in the future. Right. It's getting
bigger all the time. And you look down at this
entire context and it can hold it all in its
mind all at once. So this is completely insane. Basically,

(17:03):
I think most people have this AI thing exactly backwards.
Instead of cybersecurity or finance or whatever being at the
center, with context and AI being things that you kind
of sprinkle on to do that thing better, it's
actually the opposite. The context of the entity is everything.
The world model that you have for this thing is everything.
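As a rough sketch of what "verticals become use cases on top of one context" could look like in code: every source (HR, vulns, repos) writes facts into one store keyed by entity, and a "product" is just a filtered view. All names and the schema here are invented for illustration.

```python
# Toy Unified Entity Context: one store per entity, written to by every
# vertical, queried across verticals instead of through per-tool silos.
from collections import defaultdict

class EntityContext:
    def __init__(self, entity: str):
        self.entity = entity
        self.facts = defaultdict(list)  # source name -> list of facts

    def ingest(self, source: str, fact: str):
        self.facts[source].append(fact)

    def view(self, *sources: str) -> list[str]:
        """A 'use case' is just a filtered view over the shared context."""
        return [f for s in sources for f in self.facts[s]]

acme = EntityContext("acme-corp")
acme.ingest("hr", "Payments team merged into Platform team")
acme.ingest("vulns", "SQLi open in billing app")
acme.ingest("repos", "billing app lives in platform/billing")

# Vulnerability management becomes a cross-source query, not a separate tool:
print(acme.view("vulns", "hr", "repos"))
```

Note how the vuln-management question (who owns the fix?) needs the HR and repo facts, which in today's stacks live in other products' databases.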

(17:25):
Software verticals kind of go away. They just become use
cases on top of this architecture. Cool. But we were
talking about hacking, right? How do we bring this back
to hacking? So basically the future of hacking, because all
of this relates to context, is basically how you can

(17:48):
keep an exhaustive, accurate and up to date world model
of the thing that you are attacking. And this is
true whether you're actually attacking or whether you're defending. So
it turns into a giant competition between attackers and defenders
and attackers versus attackers and defenders versus defenders between who

(18:08):
has the most accurate and up to date world model
for their organization. So everyone listening to this, every attacker,
every bounty player, we are all going to have a
stack like this. I've been building this for years already,
and I know some people on this call
are probably along the path as well. So it's not
a bunch of agents with random tools. It's an interoperable

(18:31):
system where the output of one is the input to
the next one. Okay. This is a big thing that
people aren't understanding about that whole agent thing. You don't
just say blah and give it like a prompt and
then say, oh, agents, figure it out, because then you're
offloading all the work to the model to actually do
the hard work of building the system itself. There's a better
way to do this. And if you talk to the
people in AI, the people who are actually building

(18:54):
these systems to actually go and find vulnerabilities, exploit them,
fix them or whatever. They need a system like this.
These are the systems I've been building for years. They
are modular. Each little piece does one thing well. It's
a Unix concept, right? Each little piece does one thing well, right.
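The Unix-style chaining being described, where each module's output is the next module's input, can be sketched as plain function composition. The stages and their outputs here are stand-ins invented for illustration, not real tooling.

```python
# Sketch of "output of one module is input to the next": each stage is a
# small function, and the pipeline is just composition over a list of stages.

def find_subdomains(domain: str) -> list[str]:
    return [f"www.{domain}", f"api.{domain}"]           # stand-in for a subdomain finder

def probe_web(hosts: list[str]) -> list[str]:
    return [h for h in hosts if h.startswith(("www", "api"))]  # stand-in for a prober

def crawl(hosts: list[str]) -> list[str]:
    return [f"https://{h}/login" for h in hosts]         # stand-in for a crawler

def pipeline(domain: str) -> list[str]:
    stages = [find_subdomains, probe_web, crawl]
    data = domain
    for stage in stages:
        data = stage(data)   # each module's output feeds the next module
    return data

print(pipeline("example.com"))
```

Swapping a basic module for a better one (curl-level versus full browser automation) means replacing one function without touching the rest of the chain.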
So I've got a million of these things finding domains,
finding websites, crawling the websites, running automated scans. And each

(19:15):
one of these could be like a super basic version.
It's like curl, okay? You've got curl on one side
and you've got fully automated Puppeteer browser automation going through
Bright Data on the other side, right? So you have
all these quality spectrums, you know, in between for each
of these modules. But the whole system works together based

(19:35):
on a set of goals. Right. So running automated crawls,
parsing all endpoints, pulling out every single API endpoint from
every piece of JavaScript, writing exploits and POCs, actually doing the attacking, um,
writing up reports. All of these are separate modules. So
let's say the target, you know, has like five main
web applications, like a few hundred pages per site. And,

(19:57):
you know, there's a whole bunch of agents. Think of this.
You're going to have like thousands of agents. You'll start
with dozens, right? Dozens, then hundreds, then thousands, then whatever.
So we're also learning from new marketing campaigns on X
or LinkedIn. Keep in mind, multiple of these modules
are actually watching the company. They're watching everything the company does,

(20:18):
every piece of marketing, every piece of information that's put
out about this company gets parsed and brought back into
the context, because the system as a whole and the
AI that's sitting on top of it, watching the goals,
is using that new information to tweak how we're going
to do this attack. Right. So they have a new
product launch, which is a new website, a mobile app. Cool.
Go download that. Right. Right now we can't do too

(20:41):
much with that because that's a little bit difficult. In
a year or so, we're going to be able to
go download that full mobile app, run the mobile app
in a full virtual environment, run a whole bunch of
mobile tools, find out which APIs aren't secured, where
they're not using TLS, um, all sorts of issues that
you have with mobile security. And that'll just be one
little tiny module which brings that context back into the

(21:03):
overall engine, which enhances all the other components inside of
that engine, right. Send that over to an automated Burp Intruder tool.
Then there's all of Burp's output, and that's a lot of output.
It overwhelms anything, including Gemini, by the way. So this
is still a place where, you know, the AI has
to grow because, um, something like Burp output from crawling

(21:23):
a website is still massive. Anyway, you've got all that
content coming out. All that content can then be re-parsed
to find the JavaScript in there, to find where they're
doing all their controls on the client side. Again, you
only have to tell it a couple of core things
inside of the system. Here are the types of things
I'm looking for. Any output that you get, go and
look for the following things. Oh cool. We got new
output from burp. We found new JavaScript files. Let's go

(21:45):
parse the hell out of them and find the files
and API endpoints. Bring that back into the system. Right?
And meanwhile, all this stuff is being fed into the
appropriate modules. So let's say we find some good stuff, uh,
send off to the exploit agents and try to do
something according to the rules, uh, in goals we've laid out. Right. Uh,
so for an attacker, we're trying to extract data. Maybe

(22:07):
we're going to sell that access to a broker. For
a bounty person, we're going to create a POC and
a short video to go with the automated report, and
we're going to submit it to HackerOne or Bugcrowd or whatever. Right.
And that just becomes another module that your thing is
good at, right? It's automated workflow, but that's not the
cool part. The cool part is this thing never sleeps.
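That never-sleeping behavior is basically an event loop: watchers emit events (a new announcement, a new domain), and each event is routed to whichever modules care, which can emit further events. The event names and handlers below are invented for illustration.

```python
# Sketch of the always-on loop: events go into a queue, handlers consume
# them and can enqueue follow-up work (announcement -> domain -> subdomains).
from collections import deque

def on_new_domain(payload, queue):
    queue.append(("enumerate_subdomains", payload))

def on_announcement(payload, queue):
    queue.append(("new_domain", payload["domain"]))  # announcements can reveal domains

HANDLERS = {"new_domain": on_new_domain, "announcement": on_announcement}

def run(events):
    queue = deque(events)
    log = []
    while queue:
        kind, payload = queue.popleft()
        log.append(kind)
        if kind in HANDLERS:
            HANDLERS[kind](payload, queue)
    return log

log = run([("announcement", {"domain": "new-product.example.com"})])
print(log)  # → ['announcement', 'new_domain', 'enumerate_subdomains']
```

A real version would run continuously with watchers feeding the queue, but the cascade (one finding triggering the next module ad infinitum) is the core idea.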

(22:27):
Dozens or hundreds or thousands of agents in this infrastructure
working at all times, finding new domains, finding a new
announcement which includes a new domain which you then go
find the subdomains, which you then go find all the infrastructure.
You find the web apps that are listening. You then
go crawl those ad infinitum through this entire system, right?
Open admin portals: you're taking all the screenshots, you're analyzing
the screenshots. Oh, that's an admin portal. That thing's wide open. Oh, look.

(22:50):
Default credentials. Right. Looking for open ports, seeing if there's
any new stuff out there. Now, this sounds complex
because there's lots of different tools and everything to keep
in mind. But this system only needs to be built once,
and then you're just adding modules and upgrading the modules.
And this is a big part of what the AI
helps you do. It helps you just make each one
of these little things better and smarter. Again, everyone is

(23:14):
going to have a stack like this. Individual bounty hunters,
individual people just doing security research or hacking on their own,
and definitely the attacker organizations. And guess who else needs
to have it? The defenders. If you are a defender
and you are not running this against yourself, you are
going to lose. You are going to lose, because
there are going to be so many people running a
stack like this against you. You are just going to

(23:36):
lose. Now, at first, including everything I built for myself, right,
this was just going to be some basic information, right?
Because we can't do the full version of this yet. Right?
This is a year, two, three, four years out. You know,
this gets better as the AI tech stack gets better,
but the system itself is core. So, um, this is
the AI remake I'm currently

(23:58):
doing of my Helios system. And, you know, it's not
going to have fully automated burp yet. It's not going
to have a bunch of different modules. But like I said,
this gets better as the tech gets better. The other
thing is running, you know, hundreds of agents constantly. That's
not cheap, right? So these prices have to come down,
the context windows have to go up. It's an upgrade process.
So some vignettes to just think about this. So imagine

(24:22):
that you're out at dinner and you get a notification
that some employee at some company, right, um, just
talked about how, oh, I've got this thing at work,
and blah, blah, blah. They're drunk
or whatever, and they're talking online in some, you know,
subreddit or whatever, and they're like, yeah, this new
domain we put up, and it doesn't have two-factor. And
I can't believe they used default credentials. That's why I

(24:43):
want to quit. I'm going to start my own business
or whatever. And so, um, you're sitting there eating dinner
with a friend and you get a discord message from
your AI bot and it's like, hey, some, uh, some
dumbass just got drunk and posted that, um, there's a
brand new, uh, domain open and, uh, potentially there's a
vulnerability here. Do you want me to go mess with it?
And you're like, yeah, yeah, go mess with it. So

(25:04):
it comes back and it tells you, basically: yeah, the
vuln that they mentioned actually does exist. Um, do
you want me to exploit it? Yes. Cool. All right.
So we send it in. We get the money. Or
if you're a bad guy, you know, you're, uh, stealing
data or whatever. And keep in mind, this could be from, like,
a forum post. Um, it could be an announcement on
TechCrunch that they just bought a company. So it's a

(25:26):
merger and acquisition. Um, anything on the internet relative to
your target? The agents are constantly watching new announcements, you know,
new mergers, disgruntled employees, uh, a new job req for
a new technology that you didn't know about. So you add
it to the tech stack for that company. Uh, new
website posts are constantly being discovered, right, because they can

(25:47):
make a slight change to the site, but they added
a new API. Have we tested that API before? No,
it was actually a different team that built that API.
They didn't use all the security that the other team used. Boom.
Now that's how we got in. That's how we pulled
the data or whatever. New S3 buckets, probably not secured,
all this stuff. So the entire game here, and this
is a really big point, is maintaining as accurate as

(26:07):
possible world models for these things you're attacking. It doesn't
matter if you're a company. It doesn't matter if you
were hired to defend the company. It doesn't matter if
you have your own startup. It doesn't matter if you're
a bounty player. It's all the same shit. You have
to keep the most updated version of this thing in
your mind as possible. And here's something else that's crazy

(26:28):
about this. One of the modules here is the actual
list of attacks that you run when you attack. Okay,
so check this out. You have, like, your bag
of tricks. Your bag of tricks is what gets thrown
at every web app. At every mobile app. Right? Um,
for every social engineering campaign, for every phish, you have,
like your favorite little stuff that you do. Well, one

(26:48):
of the AI modules that you have inside of your
overall system is the one that parses new research. So
I keep forgetting the guy's name, but every Black Hat he
releases like a new attack on HTTP itself. Um, he's
the guy that works with, uh, Daf over at, um, uh,
you know, Burp. Um, PortSwigger. But anyway, uh, I want
to say albinowax, but that's not quite right. Anyway,

(27:10):
you all know the guy. So every time he releases something,
every time he tweets, I have another module which goes
and reads it, pulls it down and says, oh, that's
actually interesting. Guess what? Upgrade. It's like the Borg from
Star Trek. You hit it once, it falls over; you
hit it the second time, it's blocked that technique. Okay,
Jason puts out a new video. He's like, oh, I've

(27:31):
got this new, uh, this new attack that I always
do against my things. It finds way more domains. I
got this new attack. It, uh, goes through filters for, uh,
prompt injection, right. Um, maybe, uh, Joseph is talking about that:
it goes straight through prompt injection. Cool. Add that to
the methodology. The whole system has now been upgraded. You
can have an entire dedicated thing that does nothing but watch

(27:53):
TLDR, right? It watches Clint's entire thing. It finds every
single thing that it mentions. It goes and reads every, um,
you know, every presentation, every GitHub repo. And it pulls
out the research and uses that to upgrade the methodology.
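That research-watcher loop could be sketched, very roughly, like this. This is a hedged illustration of the idea, not the speaker's actual system: the feed-reading and LLM-extraction steps are stubbed out with a naive placeholder that just takes bulleted lines from a post.

```python
# Hedged sketch (not a real implementation): a module that watches for
# new research and folds it into the shared methodology.
from dataclasses import dataclass, field


@dataclass
class Methodology:
    """The 'bag of tricks' thrown at every web app, mobile app, phish."""
    techniques: set = field(default_factory=set)

    def upgrade(self, new_techniques):
        """Merge newly published techniques; return only what's actually new."""
        added = [t for t in new_techniques if t not in self.techniques]
        self.techniques.update(added)
        return added


def extract_techniques(post_text):
    """Stand-in for the LLM step that reads a post/talk/repo and pulls out
    concrete techniques. Here it just collects bulleted lines."""
    return [line.lstrip("- ").strip()
            for line in post_text.splitlines()
            if line.strip().startswith("-")]


# New research drops; the watcher reads it and the whole system upgrades.
post = """New HTTP attack writeup:
- client-side desync variant
- header smuggling variant
"""
methodology = Methodology({"subdomain takeover", "default credentials"})
added = methodology.upgrade(extract_techniques(post))
```

The point of the sketch is the shape: the methodology is a shared module, so one upgrade immediately applies to every target the system is pointed at.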
And again, that's also continuous. So the system is always
being upgraded. So then the question is, like, what are

(28:16):
you going to actually point this at? So I'm already
monitoring like all the new bounty programs as they go live. Right.
I've not started testing them yet because I'm still building out, um,
the rest of this new stack based on context. But
my goal is to set this thing free on, uh,
actual programs soon. Um, but the point of me
mentioning this is that you can always be adding new targets, right?

(28:38):
Attackers are going to have their own criteria for picking targets, right?
Maybe they have a lot of money. Maybe
it's a combination of a lot of money and a
bad security team. Maybe it's a combination of they have
a lot of money, but I just saw on LinkedIn
that half of their security team got fired. Oh, let's
add that one to the unified entity context and let's start attacking
that one. Point is, this is also continuous, to find targets.
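The target-picking criteria just mentioned (lots of money, a bad security team, half the team fired) can be sketched as a simple scoring function. Everything here is invented for illustration: the signal names and weights are assumptions, not any real attacker methodology.

```python
# Illustrative sketch of continuous target selection: score candidates
# on public signals and pick the next one to point the system at.

def score_target(signals):
    """Higher score = more attractive target (or, for a defender, more at risk)."""
    score = 0.0
    score += signals.get("annual_revenue_musd", 0) / 100        # deep pockets
    score += 5.0 if signals.get("security_layoffs") else 0.0    # LinkedIn says half the team is gone
    score += 3.0 if signals.get("new_bounty_program") else 0.0  # fresh program just went live
    score -= 2.0 * signals.get("security_team_strength", 0)     # rough 0-5 rating
    return score


def next_target(candidates):
    """Pick the highest-scoring candidate to go after next."""
    return max(candidates, key=lambda name: score_target(candidates[name]))


# Hypothetical candidates pulled in by the monitoring agents.
candidates = {
    "acme": {"annual_revenue_musd": 900, "security_layoffs": True,
             "security_team_strength": 1},
    "initech": {"annual_revenue_musd": 300, "new_bounty_program": True,
                "security_team_strength": 4},
}
```

In a running system this scoring would re-run every time the monitoring agents observe a new signal, which is what makes target selection continuous rather than a one-time choice.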

(29:00):
And in my case, just parsing the brand new, uh,
bug bounty programs that are coming live. So the entire
game here is maintaining these accurate real time world models
for entities. Like I said, it doesn't matter who you are. Um,
what's really hilarious about this is AI is not the
main feature here. AI is not the point. AI is
just the tech that enables this to happen because of

(29:22):
the agents, and because of the fact that um, models
can hold way more information in their brains at one
time than we can. That's the only thing we're really
getting from the AI is the models are pretty smart, right?
And the smarter they get, the better this gets. But
it's kind of not the point. The point is
the world model capture of this thing. Okay, so

(29:42):
just imagine this. Imagine it's 20 years in the future.
Imagine we're not dead yet. Or, you know, like, everything
has gone well. The planet is still here, uh, 20
years in the future. Imagine an ASI. And this is
a little sci-fi, but it's not too far off, honestly.
Imagine China holding the United States' context of every open port,

(30:07):
every vulnerable API path, every, um, opportunity for, like, file inclusion,
every single attack possible, every every AI agent that's vulnerable
to a particular type of prompt injection. It just pulls
in the entire context of the United States current state
of vulnerability and holds it in its mind in one piece.

(30:29):
And then they ask the question: who do I go
after first? What is the next best action to harm
the United States the most, right? Or to harm Russia
the most, or whatever target they're pointing at?
That is millions of IPs. Hundreds of millions of IPs.
Maybe billions of IPs. The point is, think about how much context

(30:50):
you need for that, right? Doing this for a company
itself is actually hard enough, right? To understand its entire
history and every state change of all its IT and tech.
We're talking about terabytes or petabytes to hold that state
in its mind at once. And it's got to like
keep that in context. Keep in mind, you think
we've actually gone far with AI? We haven't come anywhere

(31:11):
close to what we actually need. And the AI is
not the point. The point is having the size of
the state that you can hold in your mind at once, right?
So all this to say that the AI is not
that important. It's kind of a supporting actor to the size
of context. And yeah, okay, the models are smart. So
that emulates, you know, some human components of this. But
what actually matters is knowing that you have to keep

(31:33):
the state and understand this world model of the thing,
and that you build a system, a replicable system that
produces outputs based on how the different modules in the
system interact. The system is more important than the AI, right?
And the concept of context itself is more important than the AI.
The AI is just the supporting tech. So what we

(31:55):
end up with here is a world where every single stone,
every single port, every single URL, every single API endpoint,
every single agent is constantly being overturned, checked, and double checked.
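That always-current world model could look, in minimal form, like a per-entity record: current state plus the full history of state changes, so a new API appearing is immediately visible as untested surface. The field names here are assumptions for illustration, not any real UEC spec.

```python
# Minimal sketch of a per-entity world model: current state plus the
# full history of state changes. Field names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class EntityContext:
    name: str
    state: dict = field(default_factory=dict)    # the current world model
    history: list = field(default_factory=list)  # every state change, kept forever

    def observe(self, key, value):
        """Record an observation; log the delta if the state actually moved."""
        old = self.state.get(key)
        if old != value:
            self.history.append((key, old, value))
            self.state[key] = value

    def untested(self, tested):
        """Surface that exists in the model but hasn't been checked yet."""
        return [ep for ep in self.state.get("api_endpoints", [])
                if ep not in tested]


ctx = EntityContext("acme")
ctx.observe("api_endpoints", ["/v1/users"])
ctx.observe("api_endpoints", ["/v1/users", "/v2/export"])  # new API spotted
new_work = ctx.untested({"/v1/users"})
```

The untested delta is the whole game: the moment the model changes, the agents know exactly which stone hasn't been turned over yet.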
As an attacker, you are competing with hundreds of thousands
of other attackers. As a bounty player, you're competing with
hundreds of thousands of other bounty players and the attackers.

(32:17):
You're racing to go do that thing. And as a defender,
you're defending against all of them, plus all the other defenders,
because you know you want to be the one that
gets away from the bear while the other defender gets eaten.
So the natural question is, okay, what does all this mean?
If this is correct, if you're a defender and you're
trying to determine what AI to build for your company,

(32:39):
you need to start building your own world model of
your company. You need UEC context for your company. Your
attackers are going to have it and you better have
a better version. And if you're a bounty player, you
need to rebuild your automation stack, putting the world model
building and UEC at the center of it. And if
you don't have an automation stack, go look for a

(32:59):
new hobby, because you're not long for this world. There are
about to be millions of people slash agents going after
the same bugs with constantly evolving and improving systems and
stacks and AI helping it, right? So this
is a competition between their system and your system, not one
of them against you. It's their system against yours, their

(33:20):
context against yours, their world model against yours. And finally,
if you're just trying to figure out, like where things
are going with all this AI stuff, just remember one
core idea. The game is not adding AI to stuff
we care about. The game is having real time world
models of what we care about, which we can then
take action on using AI. Thanks for your time.