
August 21, 2025 98 mins

You are currently listening to the Standard version of the podcast. Consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade

Read this episode online: https://newsletter.danielmiessler.com/p/ul-494

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:19):
All right. Welcome to episode 494. Sorry, it's been a while.
Had some, uh, config issues I've been working on with the
home gear. I think I got them sorted out, and
we are ready to, uh, keep going. So my stress
and excitement levels continue to rise. Feeling basically unbelievably excited

(00:45):
about the future and, like, what we could potentially do
and like all the different ways that I'm using AI
and I'm trying to get other people to use AI.
I'm doing probably, I don't know, five sessions a week
with various friends, showing them how I'm building my stack,
how I'm building out my DA, Kai, and how they can

(01:07):
build their own for their own purposes. A lot of
the stuff that I talked about in the Augmented course
really is basically starting to happen, um, instantiated inside of
Claude Code. So it's like this extreme optimism and thought
about possibilities and like, everything I could possibly do and

(01:31):
help other people do, you know, escaping from the drudgery of, like,
corporate work? A lot of corporate work, not all corporate work,
of course. And just like thinking about a better future
and trying to actively build that future, especially when I
start talking about some of the stuff from previous episodes,

(01:54):
podcast episodes of the newsletter. And one I want to call out is, uh, Daemon, which is this new service
that I've stood up. So it's daemon.com. And what I'm
essentially doing is bringing back this idea that I've always wanted,
I think, even before the book stuff. I think

(02:17):
this was like the early 2000s. There was this one service
in the past called Friendfeed, and I had it kind
of like integrated in my website at the time. And
what it did was it, it was able to follow
what you were doing on Reddit, on Facebook, whatever the
current services were at the time. I know Reddit was

(02:38):
definitely one of them. Uh, book feeds, RSS feeds, basically
you could subscribe to or you could publish like click
the options for all these different accounts and connect them.
And then when someone comes to your Friendfeed page, they
would see like the books that you're looking at, like
the stuff. Oh, it was upvotes as well. So we

(03:00):
could see what you upvoted on like Hacker News and
what you upvoted on Digg and what you upvoted on Reddit.
So it was like this live dashboard thing that was
going on with that person. It was just unbelievably awesome.
It was called Friendfeed, and I have always been trying
to get back to that thing. And so Damon is

(03:24):
essentially just like a more pure form of that because
rather than it being a third party service, the idea
is that each of us hosts our own just using
a common protocol, and we all just know how to
call it, you know, it's a common protocol. Therefore there's clients,
there's servers, and like it's very simple. But the idea

(03:45):
is that world. And it wouldn't just be humans that
have this, right? It's also the businesses and the
buildings and everything. Um, so everything gets an API, but
it's like this format. But if I'm in a coffee
shop and, you know, I'm sharing things with people around me, they have their favorites. Maybe their

(04:06):
top fantasy book is also The Name of the Wind,
which is my top fantasy book. So it would show
me that right. My digital assistant would be able to say, hey,
you know, check out this person next to you. You
should say hi. They also love The Name of the Wind. Boom.
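That matching step is simple enough to sketch. The following is a hypothetical illustration of the shared-favorites idea in Python; the field names and payloads are made up for the example and are not the actual Daemon spec.

```python
# Hypothetical sketch of the "shared favorites" idea: each person's daemon
# exposes a tiny public profile, and a digital assistant looks for overlap.
# Field names here are made up for illustration, not the actual Daemon spec.

my_daemon = {
    "name": "me",
    "favorites": {"fantasy_book": "The Name of the Wind", "editor": "vim"},
}

nearby_daemon = {
    "name": "person at the next table",
    "favorites": {"fantasy_book": "The Name of the Wind", "language": "Rust"},
}

def shared_favorites(a: dict, b: dict) -> dict:
    """Return the favorite categories where both daemons list the same value."""
    return {
        category: value
        for category, value in a["favorites"].items()
        if b["favorites"].get(category) == value
    }

if __name__ == "__main__":
    common = shared_favorites(my_daemon, nearby_daemon)
    if common:
        print(f"Check out {nearby_daemon['name']} -- you both love "
              + ", ".join(common.values()))
```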
Human connection. So everything comes back to the human connection
facilitated by the tech. So that was one of the

(04:28):
things I talked about a lot in a previous part
of the newsletter. I've been blogging like, absolutely crazy. Just
absolutely crazy. Like I've been blogging higher quality stuff. Um,
more thorough stuff. More well researched stuff. I've put out

(04:51):
probably ten of these in like the last month. It
is unbelievable the amount of output I've been able to
do because of Kai. All because of my AI stack.
Because what I'm now doing, my workflow now is to
use Whisper Flow to record into vim or record into
Apple notes or record into whatever. And then I just

(05:15):
copy and paste that into Kai and I say create
blog and it knows to not change my language. It
doesn't write anything for me. It just puts my language
in there. Um, I actually do have a rewrite command,
and if I use that, it's allowed to fix grammar
and typos and like sentence structure, but it's still not

(05:37):
allowed to change the content. And what happens is it
still reads exactly like me because all it did is
really clean up. Like, should that have been a semicolon
or a dash or a period or whatever?
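To make that concrete: the real thing is a set of custom commands inside Claude Code, but a minimal sketch of the "rewrite without changing content" constraint could look like this in Python, shelling out to the claude CLI in non-interactive mode. The prompt wording and file names are illustrative, not the actual commands.

```python
# A minimal sketch of the "rewrite" step: clean up grammar and punctuation in a
# dictated draft without letting the model change the content. Assumes the
# Claude Code CLI ("claude") is installed and on PATH; the prompt wording and
# file names are illustrative, not the actual custom commands.
import subprocess
import sys

REWRITE_INSTRUCTIONS = (
    "Fix grammar, typos, and punctuation in the text below. "
    "Do not add, remove, or rephrase any ideas. Keep my wording and voice. "
    "Return only the cleaned-up text.\n\n"
)

def rewrite(draft: str) -> str:
    """Send the dictated draft through the claude CLI in print (non-interactive) mode."""
    result = subprocess.run(
        ["claude", "-p", REWRITE_INSTRUCTIONS + draft],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Usage: python rewrite.py dictated-draft.md
    with open(sys.argv[1], encoding="utf-8") as f:
        print(rewrite(f.read()))
```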
And then I could say, okay, enrich. Okay. You know,
in like the movies it says like enhance. Well, now

(05:59):
I can say enrich. And what that does is it
goes and gets links, right? Because when I'm narrating, this
is insane. When I am narrating, I can literally, in Whisper Flow, say things like, yeah, and um, go
get a quote from Viktor Frankl about this and put
it here. Right. So I'm just narrating. I'm like, okay,

(06:21):
we need some data to back this one up. Um,
I'm talking about, you know, tech layoffs. So find the
latest data on this from a reputable source and link
to that here. Also create a visualization using our standard
D3 visualization. And boom. So I press enter to that thing.

(06:42):
And this thing fires off. And like 10 seconds later I
have a linked like graph with real data and the
data sources listed and the data sources listed inside the graph.
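One deterministic piece of an enrich pass like that is just pulling the dictated directives out of the draft and turning them into discrete tasks. Here is a rough sketch; the [[enrich: ...]] marker convention is an assumption made up for the example, not the real command format.

```python
# A sketch of the deterministic half of an "enrich" pass: find inline
# directives that were dictated into the draft and turn them into a task list
# the agent can work through one at a time. The [[enrich: ...]] marker is an
# assumed convention for this example, not the real format.
import re

DIRECTIVE = re.compile(r"\[\[enrich:\s*(.+?)\]\]", re.IGNORECASE)

def extract_enrich_tasks(draft: str) -> list[dict]:
    """Return each enrichment directive with position info so it can be patched later."""
    tasks = []
    for match in DIRECTIVE.finditer(draft):
        tasks.append({
            "instruction": match.group(1).strip(),  # e.g. "get a quote from Viktor Frankl"
            "start": match.start(),
            "end": match.end(),
        })
    return tasks

if __name__ == "__main__":
    draft = (
        "Tech layoffs keep accelerating. "
        "[[enrich: find the latest layoff data from a reputable source and link it here]] "
        "And it reminds me of something. "
        "[[enrich: get a quote from Viktor Frankl about meaning and put it here]]"
    )
    for task in extract_enrich_tasks(draft):
        print(task["instruction"])
```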
It's all annotated, all the notes are there, and all

(07:06):
I did was talk. All I did was talk. It
is unbelievable. And, you know, for the longer ones, I
end up going back in multiple phases. But I'm working
on that workflow. Ideally, I'd like to have as much
of it done as possible. What has not worked
for me is kind of like iterating back and forth.
It's no fun to do that. I prefer to either

(07:29):
just capture, like, violently into this giant thing and then just work from there, or try to have the thought fully parsed out and done inside of the note, then hand it over to, you know, Claude. And he just takes it and runs

(07:49):
with it and just knows. I've got all these different custom commands for adding links, for cleanup. I have a
separate command for publishing, which does like editor type stuff.
So it makes sure none of the links are broken.
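The broken-link check is the clearly deterministic piece of that publish pass, the part that needs no model at all. A rough sketch; the real publish command lives inside Claude Code, so this is only illustrative.

```python
# A rough sketch of the deterministic link check in a publish pass: pull every
# URL out of the post and flag anything that doesn't respond cleanly.
# Purely illustrative of the idea, not the actual publish command.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def check_links(markdown: str, timeout: float = 10.0) -> list[str]:
    """Return the URLs in the post that appear to be broken."""
    broken = []
    for url in set(URL_PATTERN.findall(markdown)):
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-checker"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status >= 400:
                    broken.append(url)
        except Exception:
            broken.append(url)
    return broken

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], encoding="utf-8") as f:
        for url in check_links(f.read()):
            print("BROKEN:", url)
```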
It, uh, makes sure I don't have any
unsubstantiated claims that it thinks I should add data to

(08:11):
and stuff like that. It checks to make sure quotes
actually came from that person. Right. So I'm checking basically
for like, hallucinations and mistakes that I've made. I've got
a piece of errata for the newsletter that just went out.
So the open source package that does the bug bounty stuff,
searching for vulnerabilities: that was not released by XBOW. And

(08:32):
I guess this is a good example of how I'm
not using AI for for a whole lot of stuff
on the newsletter. Um, I actually wrote that and it
was incorrect. Um, I had just read both articles and
somehow in my mind I must have like crossed. I
don't know, cross pollinated or something. But anyway, yeah, that'll

(08:54):
be in the errata in the next newsletter. But since
I'm doing a podcast and it's after it, I can
correct it now. Uh, okay. What was I talking about? Yeah,
basically just this workflow is completely insane and I'm just
ultra excited about it now. All that to say, I'm
still extremely worried. Like I talked about in that other post,

(09:17):
that other post was talking about how worried I am about the, uh, the future, and especially for a
whole lot of workers globally, basically knowledge workers. I'm really worried.
And I'm not just worried about them. I'm worried about
systemic effects of them losing so many jobs and there

(09:41):
being some sort of like threshold where if it crosses that,
then enough people lose their jobs, it creates enough panic,
it creates enough reduction of income into local businesses, so
enough businesses close. So it kind of turns into like
a, like, a mini pandemic effect, where it's like...

(10:04):
I mean, that one was like different because it's so
obvious why it's happening. But in this case, it would
be like, oh, we're having some sort of economic meltdown.
And then so the business is closed because no one's
coming to the businesses, because they're panicking, because they don't
have any money. And it's just like this massive cascade effect.
So I'm really, really worried about that. And I'm really

(10:24):
worried about I don't think a lot of these people
are going to be able to reskill. I think this
whole reskilling narrative is like something we tell ourselves to
make ourselves feel good. And I'm not sure it's like
how realistic it is. And it's it's like, really, I
don't know. It's really depressing to me, the whole thing.

(10:45):
So thinking about that side of it and how little
I can possibly do is, is just not good for
me to dwell on. So I literally dwell on it.
I sit inside of the suck, you know? And
I sit inside there and I'm like, what can I do?
What can I do? Um, how bad is it going

(11:07):
to be? I have to, you know, share how bad
it's going to be or whatever. So then I write
a post like that which goes into all the detail,
and I get it all out, and I basically put
it there, and then I like, basically say, okay, that's it.
We're not thinking about this anymore right now that we're
setting that aside because the solution is always the same.

(11:28):
The solution is to expand the mind to, you know,
brush that off to to have the stoic mindset and
be like, okay, yep, that super sucks. Okay. Well. Anyway. Right.
I've been talking about this human 3.0 thing. I did
that previous post called The End of Work. I didn't

(11:49):
think this would be happening this fast when I wrote that. But in The End of Work, I've already talked about it in
David Graeber's book Bullshit Jobs. Right. I've always seen this
current structure of like 9 to 5, and everyone, or
a vast majority of people can't stand to go to work.

(12:10):
It's cutting out 9 to 5, the most important
hours of the day. And all they have time to
do is to get ready for the next day of work,
which they don't want to go to, to have a
bunch of stupid meetings. Like, it's a bad situation
that we're in, right? It's also bad that it's going to,
you know, catastrophically come to an end. I would say,

(12:34):
or at least, you know, very badly come to an end,
I would say. Disruptively. Right. It's going to be extremely
disruptive to millions upon millions of people, maybe billions. I
don't know, it depends how you count, but it's going
to be massively disruptive to the global economy, which is what is happening. But going back to the positive, I

(12:58):
think that needs to happen. It needs to happen that
we get rid of this corporate structure. I mean, okay,
this all comes from the military, the hierarchy. You work
for Mr. Johnson. Mr. Johnson works for Mrs. Williams. Right.
And Mrs. Williams is trying to get her presentation together, uh,

(13:19):
to give to Mrs. Clark, because she's the one who
put all this together, and we're all just happy to
work for her. It's a very submissive, very hierarchical. Very
unhealthy structure to have. Most people in any company be
in this, like. Oh, yeah, I only contribute this little piece. Oh,

(13:42):
they don't ask me for ideas. I just help them
implement them. I'm just happy to be a part of it.
Like that's cool for 1950s or whatever. I mean, we
have to move through stages as humans. Well, it's time
to get out of this stage. We should all be founders.
We should all be CEOs, not not CEOs in the

(14:03):
old style. I'm using the CEO term for what it
is in the future, which won't be called a CEO.
We are all individual contributors to society and we should
be sharing with each other. We have companies. Yes, we have,
you know, products. Yes, we have offerings. Yes. And of course,

(14:27):
somebody gives us something in return for us giving them
that service. Right. So there's still a currency here, but
that currency won't be won't be based on this negative
infrastructure that we currently have, where it's all based on
your labor and it's all based on your, uh, you know,
your participation in this negative hierarchical structure that creates all

(14:51):
these bullshit jobs like David Graeber talks about in the book.
And the way I think about this is this is
just a natural progression. And this idea, I think I
got most from Harari and I can't remember which book
it was. I think it might have been the second
one after Sapiens: Homo Deus. So basically what he

(15:15):
talked about is that, um, governments and systems of organization
and systems of government are basically time-bound, based
on what is needed for the society at that time.
So it's like, it's not that we shouldn't have had, uh,
capitalism or we shouldn't have communism or whatever. It's like

(15:37):
those structures are like best cases. They are the things
that fit where humanity is at the time. And it's
a good way of looking at like, okay, why didn't
Iraq suddenly become a democracy as soon as we got
rid of Saddam? Well, that's not what the population was

(15:59):
ready for. Right. And maybe there's an argument that, like,
we are no longer, uh, up to the task of
maintaining a democracy, because our population has declined so much cognitively, philosophically,
education wise, or I would say knowledge wise. And so

(16:22):
we're losing our ticket or we're losing our ability to
be worthy of democracy. That's one argument. That's one argument.
So I like this framing. I like the structure. And
so essentially what we would be saying here is that
the capitalist structure is super useful at a certain phase
of our maturity and development. And that ideally we would

(16:45):
move out of it and move into something more like
not like a Stalin communism, not like a mao communism.
It wouldn't even really be communism because it wouldn't be
like state control of things. It would be more like
shared human control of things and shared human voting. It
would be actual democracy instead of a representative democracy

(17:10):
or a republic. Because we would have the tech, presumably,
to do so. And you could use other things for
the whole federal versus state or whatever. We could basically
figure out these problems. Right. And the more advanced we
are as a population, the easier that would be to do,
because you just make a good argument and people are like, oh, yeah,

(17:31):
it's a pretty good argument. Yeah, let's do it that way. Right.
Which we clearly don't have today. But so the point
of all of this is to say that the only
way I can function is to take all of my
concern and worry and sort of pain around. Holy crap,

(17:54):
this is going to get bad and use that exact
same force and that exact same energy and just be like, no,
you know what? I am not thinking that way. I
don't know that we have a chance to get out
of this clean. We might exterminate ourselves. We might go
into massive civil war. We might have, like, a huge

(18:15):
amount of problems. There's a million what ifs. There is
nothing I can do about it. There's nothing you could
do about it. This is a thing that is happening
to the planet right now, and it's just kind of
playing out. And it's really weird with this whole free
will thing. It's like, I know that we don't have
the free will to fix this, but guess what? There's

(18:38):
a guaranteed way it doesn't get fixed is to think
that you don't have the free will to fix it.
Which means I behave as if. And this is another
essay I'm working on, but you behave as if you
had the freedom, and ironically, that ends up bringing forth
a more positive thing. Now you could say, well, that

(18:59):
was what was going to happen anyway. Well, cool. Whatever.
I don't have time to think about that right now.
I need to think about positivity and trying to make
it happen because we are humans. Actually, the name of
this essay, and I know I'm sort of going on
a tangent here, but hopefully it's okay. The name of
this essay is Reality is Layer Dependent, which means at

(19:21):
my layer right now, when I am trying to fix
the world at my layer right now as a human,
free will does exist. Because if I go outside and
someone has stolen my trash cans and I see them
across the street in front of their house and they're
full of garbage. I'm going to walk across the street

(19:46):
and ask them to not do that again. Please. I'm
not going to say I'm not going to walk across
the street and say, hey, I just want to let
you know, I know you don't have free will. And, um,
you there was nothing you could have possibly done to
not steal my garbage cans and fill them up full

(20:06):
of garbage. That is not a conversation I'm going to have.
I'm going to have a conversation about them changing their behavior,
which means at this layer of reality, which is
human interaction and the perception of free will. These are
the types of conversations that we have. Right.

(20:29):
And layers below this, it's kind of lame and stupid. Uh,
the biological layer, the chemical layer, the physical layer. You
talk to atoms about the fact that somebody stole garbage cans.
They don't understand that. You know what I mean? So.
All this coming back to. I'm trying to focus. I

(20:54):
recommend other people do this as well. Take all that
hurt energy, all that negativity, all that everything, and just
move it over into this positivity category because everything could
go horribly wrong. Or this could be the origin story
for a few scrappy people got together and they planned

(21:17):
on building a better internet. That was human based and
they just, you know, steadfastly refused to give into the suck.
And they started building the better thing. And, oh, the
idea started catching on. And when things got bad enough,
people leaned over into the positive side that that's actually

(21:39):
what we ended up building. and it turns out we survived.
It was way better. We got rid of the corporate
work. We're now teaching in schools that everyone is a creator. Everyone is a builder. Everyone is an entrepreneur. Everyone has
value to give to the world, right? Not the special few.
Your job is not to get out of school and

(22:01):
find a really cool company run by a really smart person, and you're going to give your meager little skills, you know, to help them succeed. Screw that. This is the new world. We all have that capability, right?
So if we have a tiny chance of this happening,

(22:24):
that chance is larger if we all try and push towards it. So that's why I'm very sort
of sad and very excited all at the same time. Okay.
That was the first line of the newsletter. After 22
minutes in. That was literally the first line of the newsletter. Okay,

(22:45):
we're scrolling down. Incredibly awesome conversation with my buddy Jason
Haddix over the weekend about automated recon pen testing type
stuff that we're building together. So mine is called Helios.
I've had it for like, whatever, 15 years, I don't know,
12 years, something like that. And before it was just

(23:06):
like nasty Python and bash and stuff like that. Basic
automation stuff. Linux-y type automation stuff. Using a whole bunch
of custom tools, a whole bunch of project discovery tools
and like it does the job. But now I'm basically
doing a full AI upgrade on it, and Jason's been
working on his stack as well. And so, uh, we

(23:29):
talked a bunch about, uh, what I'm doing currently with
my stack with Kai, and he pings me like, I
don't know, like 15 minutes later, he's like, dude, um,
my thing just found a bug. Um, a P1 bug.
And it turns out that, um, he had this password
field for a thing that he was testing, and, uh,

(23:53):
it was just the password field, and it was for
some sort of admin site, and he was thinking, like,
do I? Okay, what do I want to try to
brute force this thing? It doesn't look extremely vulnerable. And
his AI found something that he and I would not
have thought of. Uh, right. I mean, it's somewhere in

(24:13):
my methodology, but it's not part of my main methodology. Um,
which is to start messing with parameters. That's fine, but
in one shot, the first thing that it tried was to add a parameter to the POST, which was id=1, and it instantly bypassed. So, an actual live, real-world P1. And he said he had to mess with

(24:37):
it a little bit to get up to that point.
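For anyone trying to picture the class of test: the idea is just to replay the same request with extra, unexpected parameters and see whether the response changes. A minimal sketch, for use only against targets you are explicitly authorized to test; the URL, parameter names, and thresholds below are placeholders, not the real target or his tooling.

```python
# Minimal sketch of the "add a parameter you weren't given" class of test:
# replay the same POST with extra candidate fields and diff the responses.
# Only for systems you are explicitly authorized to test. URL, field names,
# and candidate parameters are placeholders, not the real target.
import requests

TARGET = "https://example.com/admin/login"   # placeholder
BASE_BODY = {"password": "not-the-real-password"}
CANDIDATE_PARAMS = {"id": "1", "admin": "true", "role": "admin", "debug": "1"}

def baseline():
    r = requests.post(TARGET, data=BASE_BODY, timeout=10)
    return r.status_code, len(r.text)

def probe():
    base_status, base_len = baseline()
    for name, value in CANDIDATE_PARAMS.items():
        body = dict(BASE_BODY, **{name: value})
        r = requests.post(TARGET, data=body, timeout=10)
        # A different status code or a very different body size is worth a human look.
        if r.status_code != base_status or abs(len(r.text) - base_len) > 500:
            print(f"interesting: {name}={value} -> {r.status_code}, {len(r.text)} bytes")

if __name__ == "__main__":
    probe()
```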
But in terms of, like, doing an actual exploit, a P1, doing something that he wasn't going to do and that I wouldn't have thought of, uh, at least not immediately. I'm
sure somewhere deep in our methodology, we have to not
only modify parameters, but also to add parameters. I mean,
that's a pretty common thing, actually, but it didn't come

(25:00):
to mind, right? That's the point. It was like, that
was pretty cool. It was pretty cool. So I think
the takeaway that we have there is just like, this
stuff is not theoretical. This stuff is like getting very,
very tangible. And my point about this has always been
that we just have an orchestration problem, like the individual

(25:23):
steps don't require all that much intelligence. It's not super
hard to do this stuff. The problem is keeping the
plot when you're doing it and you're doing like, you know,
175 different mini tasks, and you're gathering all this context
and there's too much context, you know, falling out of
different places. So like a big part of what I'm

(25:43):
doing with Kai is breaking context into separate subfolders with
their separate CLAUDE.md files and their separate large pieces
of context, so that the subagents are consuming context and
they're not polluting the main agent with that context. Right.
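A sketch of what that separation might look like on disk: each target gets its own subfolder with its own CLAUDE.md and its own bulky raw context, so a subagent can be pointed at just that folder. The layout and file names are my guess at the pattern, not the actual repo structure.

```python
# Sketch of per-target context isolation: each target directory carries its own
# CLAUDE.md plus its own bulky raw context (DOM dumps, endpoint lists), so a
# subagent can be pointed at one folder without polluting the main agent's
# context window. Layout and file names are assumptions, not the real setup.
from pathlib import Path

CLAUDE_MD_TEMPLATE = """# Context for {target}

You are a subagent working ONLY on {target}.
Read raw/ for page dumps, write findings to findings.md.
Do not load anything outside this directory.
"""

def scaffold_target(root: Path, target: str) -> Path:
    """Create an isolated context folder for one target."""
    folder = root / target
    (folder / "raw").mkdir(parents=True, exist_ok=True)
    (folder / "CLAUDE.md").write_text(CLAUDE_MD_TEMPLATE.format(target=target))
    (folder / "findings.md").touch()
    return folder

if __name__ == "__main__":
    root = Path("recon")
    for target in ["app.example.com", "api.example.com"]:
        print("scaffolded", scaffold_target(root, target))
```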
Because you absolutely need this when you're, you know, parsing

(26:03):
like complete DOMs and stuff like that. Um, there's just
so much content in there. Then you have, uh, haystack
performance issues, right? You're trying to find like endpoints inside
of a giant, you know, DOM full of JavaScript. It's
like it's pretty nasty. And it takes up a lot
of tokens, and it's traditionally not great for for AI.

(26:24):
And the other thing to mention about this is, like, know when you should be using AI and know when you shouldn't be. Right. Um, I'm going to talk about this
somewhere else. But one thing that's really cool is, is
to think about the fact that, um, you should use
as little AI as possible. I talk about this a

(26:45):
little bit later in the newsletter, but use as little AI
as possible. Ideally, the whole workflow would be legacy traditional
deterministic tech, just regular code, right. So like addition, subtraction,
multiplication like no reason to enter AI into this. Right.

(27:06):
And then there are certain types of problems that do
require that that do require intelligence. So if it requires
intelligence it's the type of thing like hey is this
a cat, for example. That's not an if-this-then-that statement that you can give to regular code,
which is why we needed machine learning, which is why
we use AI. Right. So just knowing when to use

(27:29):
which is super important.
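A toy illustration of that split, with the model call stubbed out: arithmetic-style work stays in plain code, and only the genuinely judgment-shaped question gets handed to a model. The function names and task shapes are mine, just to show the routing idea.

```python
# Toy illustration of "use as little AI as possible": deterministic work stays
# in plain code, and only genuinely judgment-shaped questions go to a model.
# classify_image() is a stub standing in for whatever model call you'd use.

def monthly_total(line_items: list[float]) -> float:
    # Addition: pure deterministic code, no reason to involve a model.
    return round(sum(line_items), 2)

def classify_image(image_bytes: bytes) -> str:
    # "Is this a cat?" is not an if-this-then-that problem; this is where a
    # vision model would be called. Stubbed out here.
    raise NotImplementedError("call your vision model of choice here")

def handle(task: dict):
    if task["kind"] == "sum":                 # deterministic path
        return monthly_total(task["values"])
    if task["kind"] == "is_this_a_cat":       # intelligence-shaped path
        return classify_image(task["image"])
    raise ValueError(f"unknown task kind: {task['kind']}")

if __name__ == "__main__":
    print(handle({"kind": "sum", "values": [19.99, 4.50, 3.25]}))
```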
Um. All right. Scroll down. The piece about limitations to creativity. And I realized there's actually a
third limitation. So I'm writing another blog post that kind
of unifies them all. But the third one, the third
type of limitation to creativity, is not even thinking about
some options for creating a new solution or solving a

(27:51):
problem because it was previously impossible. It's like being stuck
in like old world, old mind, old tech or whatever.
And I got an example from this weekend, so I
was... I'm not very happy with, like, my, um,

(28:12):
web analytics tech. So my web analytics stack is currently Fathom, but I would much rather be using Chartbeat. But
even Chartbeat doesn't do everything. It doesn't have a menu
bar thing. Um, it also doesn't have historical metrics, right?
I can't use it like Google Analytics. So, for

(28:32):
like over a decade, I used Google Analytics and Chartbeat: Chartbeat to look at, and Google Analytics to go and get reports from. And that's kind of lame, but I never
questioned it. I never thought it was a problem. I
was just like the way the world is, no big deal.
Like that's just reality. Well, I was messing with, uh, Fathom,

(28:54):
which is my current replacement for both of those, because
Chartbeat is too expensive now, and Google Analytics is garbage now.
So I use Fathom, and it's like it doesn't have
the Google Analytics, like, I guess like enterprise feel. Um,
it doesn't look nearly as good as Google Analytics or Chartbeat. Um,
and it kind of has like a, uh, I found

(29:16):
a way to get a menu bar icon, but I realized,
hold on, I actually know how Chartbeat works. It works
by having a piece of JavaScript on the page that
sends a beacon every few seconds. And that's how it
actually works. It can tell the difference, um... well,
its primary functionality is that it can tell you who's

(29:37):
reading the website. Right? Like right now there's 165 people
reading my website. And that's my current count. I'm
looking at my menu bar that's 165 people with a
tab open that are reading it. Right. So that that's
what I care about. I care about who's currently looking
at what pages. Right. Google analytics doesn't have this. Chartbeat

(30:01):
has this. It's the only one I've found that really
had this feature. And they also had a GUI that was, like, super pretty, and it was just like a
really cool way to visualize it. And I'm like, this
is the best thing ever. So I paid for it
forever and ever and ever. And now I just suddenly
realized on Sunday while I was working on the newsletter, hey,

(30:22):
I wonder if I could just make Chartbeat. So I'm like, hey,
I want a piece of JavaScript called analytics. I'm going
to serve it inside my page. Here's where you're going
to put it. Here's the functionality it's going to have. Um,
here's how often it's going to beacon. Set up an
endpoint on Cloudflare to receive the beacon. You're going to

(30:42):
parse referrers like this, blah, blah, blah, basically. And I
gave it a whole bunch of spec sort of stuff,
you know, um, basically a full spec of what I
want to build, uh, in the analytics JavaScript, also
in the receiving endpoint, also in the front end showing it.

(31:02):
And I give it a bunch of examples and stuff
like that. And in about 18 minutes of me just
talking to it and giving it to it in that format,
it then produced the full spec and wrote the entire thing.
So within 18 minutes I literally had a Chartbeat replacement.
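The real version is a bit of JavaScript on the page plus a Cloudflare endpoint, but the receiving side is easy to picture. Here is a rough standard-library Python sketch of the same idea: "currently reading" is just the count of distinct visitors whose last beacon is under about 30 seconds old. Paths and field names are assumptions, not the actual spec.

```python
# Rough sketch of the beacon-receiving side of a homemade Chartbeat: the page
# JavaScript POSTs {visitor, page} every few seconds, and "currently reading"
# is just the distinct visitors whose last beacon is under 30 seconds old.
# The real endpoint runs on Cloudflare; paths and field names here are made up.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WINDOW_SECONDS = 30
last_seen: dict[str, float] = {}   # visitor id -> timestamp of last beacon

class AnalyticsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/beacon":
            length = int(self.headers.get("Content-Length", 0))
            beacon = json.loads(self.rfile.read(length) or b"{}")
            last_seen[beacon.get("visitor", "anon")] = time.time()
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

    def do_GET(self):
        if self.path == "/live":
            cutoff = time.time() - WINDOW_SECONDS
            live = sum(1 for ts in last_seen.values() if ts > cutoff)
            body = json.dumps({"currently_reading": live}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), AnalyticsHandler).serve_forever()
```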
And I then spent many hours after that,

(31:26):
you know, tweaking and adding fields and, you know, tweaking
the color scheme slightly and display and stuff like that.
So it's not like I was done after 18 minutes,
but I had live visitors showing up and showing up
on my graph, and the site looked pretty much like
Chartbeat within 18 minutes. That is ridiculous. And the whole

(31:48):
point of talking about this is, yeah, it's cool. So
I want to talk about it. I think it's cool.
I love, uh, live telemetry from the world is just
a thing that, like, I'm obsessed with. You have a map.
You realize that there's some human who's reading some thought, uh, and,
you know, maybe maybe you've seen this person before. Maybe

(32:09):
you've walked by them in a coffee shop or something.
It's just like human connection, I think is really cool.
But the thing I want to focus on here is that.
Literally the second before I had this thought, I was
caged by this concept of like, well, that's too bad.

(32:31):
Chartbeat's expensive, because there's no way I could make a Chartbeat. I wasn't actually even thinking there's no way I could make a Chartbeat. That was an automatic default
barrier inside my brain. I didn't even have to think
that the barrier was there. The barrier is invisible. Now,
keep in mind, I've been able to do this, this

(32:54):
advanced level of like building for like the last couple
of years, I've been building tons of stuff. Why did
I never think to build a better analytics system? I've
paid for Fathom Analytics. I've paid them hundreds upon hundreds
of dollars for a thing that's worse than what I

(33:14):
made in 18 minutes. Let that sink in, okay? 18
minutes to be way better than my Google Analytics solution
and my Chartbeat solution and my Fathom solution. The limitation,
the reason I hadn't done it, is because it hadn't
popped into my mind that I was able to do it.

(33:37):
And what really breaks my brain is I'm still currently
limited by an infinite number of those walls and those barriers.
So in this blog this is type three creativity limitation.
And we seek. My argument is we seek or should

(33:57):
seek type three freedom. In fact, all three. Type one freedom is freedom from, um, not being able to reach our internal, childlike creativity. So the freedom
is to be able to and the restraint is not
being able to. And then the second one is audience capture.

(34:21):
So it's like you're worried about what your parents and
your friends and your surroundings think. It's not really audience.
It's more like peer or societal, uh, capture and like limitation.
So you literally stop having good thoughts because you can
only think along certain lines of what you're expected to
produce and what is acceptable to produce. So that's layer two.

(34:44):
And level three is thinking in the past being limited
by the past. So it's like I grew up thinking
only certain things were possible. And I can't even know
all the ways that those limitations are still on me.
I mean, I was born in the 70s, right? So phone,

(35:06):
cell phones, right? I mean, all these things that we
have now, how many of them? How many of them
are, like, limiting what I think is possible? I feel like I'm way ahead of the
game against most people in the world, honestly, including extremely

(35:27):
young kids who grew up with tech. I feel like
I'm more unbounded than most, but I still feel very
bound by this. I feel extremely sort of caged by this.
So I'm writing this post to kind of remind myself.
Do you have all three types of these freedoms? Oh,
and by the way, the reason I got on to
this at all is from reading, uh, Rilke and Rilke's

(35:53):
Letters to a Young Poet, um, which is a book
that I've been recommending. And that's where I first started
thinking about this. But then I started thinking about it
from this book, Mathematica, which I just finished. And oh
my God, it is extraordinary. So anyway, um, a bunch
of that stuff is in the previous two posts about creativity,
but I'm about to do this third one that kind

(36:14):
of unites all of them. Oh, and by the way,
I'm going to tell Kai to once I'm done with
the third post to go and update the other two
to point to the third one. And I can also
say in the third one, hey, I wrote these previous
two posts and it's going to Kai is going to
automatically go find those two posts and link to them.

(36:34):
So this is an example of a thing that you know,
in the whatever 26 years that I've been blogging, I've
had to do that manually. And it's just really, really
great to think that I could just say one thing
into Whisper Flow, and all of that gets dynamically updated
by Kai. Okay. So, uh, all right, so that's the, uh,

(36:58):
whole chart beat creativity thing. Um, oh. Had a really cool,
interesting conversation with Matthew Brown from Trail of Bits. So
we were talking about this AI system design thing that
I was talking about earlier, and he gave me a
couple really cool, um, ways to think about this and like,

(37:19):
design principles that I'm going to use for my meta
system designer. Um, and one of the things he said
that was really cool is problem design, problem understanding and
problem problem categorization is what it is really. So he's
talking about problems that could be solved or problems that

(37:40):
are like identifying a cat in a picture. That is
a certain type of problem that code doesn't work for.
You need machine learning, right? Or you need intelligence. Which
machine learning is like a proxy for that. And then
AI is a proxy for it as well or a
type of it. So but there's other problems where you
should not be using AI at all. Now, the reason

(38:02):
I was talking to him is because he's, uh, he
led the Trail of Bits team that won second place in the AIxCC competition with a fully automated discovery and patching system, an agent system. It involves
multiple agents broken into multiple components. And so my main

(38:24):
question for him was like, are you relying on the
model the most or are you relying on the system
the most. And he ended up saying system as well.
I've thought it was system, so I was so happy
to hear I wasn't wrong about that, at least in
his opinion. And um, yeah, he just had some really
cool ideas about like how you design the agent system, um,

(38:46):
to constrain it and make sure you give it, like,
only specific types of context. But, um, that episode should
come out soon. Um, that's going to be on YouTube
and also in the, in the audio and the regular podcast.
And so, uh, really cool conversation. That guy is sharp.
I mean, this guy's been doing AI forever. Like, he's

(39:06):
probably going to get hired by Google for $74 trillion
would not surprise me. Um, a few weeks back I
posted about feeling really guilty about eating meat, and then
I saw a Dwarkesh episode where he had this, uh,
meat guy on or animals, you know, kindness guy on,

(39:27):
and he's like, yeah, we've got this thing called FarmKind, and it's a charity. And like, you know, even
$10 can save a lot of animals from suffering. Um,
and just just to, like, clarify why I care about this,
it really upsets me how similar we are to some animals,
especially like pigs, even chickens. I would say, like, obviously,

(39:48):
I don't know what's in the mind of a chicken,
but a pig looks and feels very normal to me.
It looks and feels like a a more advanced animal.
If you hear a pig squeal, especially if it's like
in pain or something like that sounds like pain to

(40:09):
my brain. It sounds like pain. And, um, there are
people in, uh, I think it's groups in Africa. Or
maybe it's Australia. Um, they call humans long pig. Uh,
because the flesh is similar, and flesh means muscle.

(40:29):
I mean, they're also pink, and a lot of humans
are pink, right? I mean, there's so many similarities here,
and I'm just like, oh, evidently we taste the same.
That was that was the point of of being called
Long Pig because, uh, these groups eat humans. They also
eat pigs, uh, I assume like wild boar or whatever.

(40:49):
And they're like, yeah, they kind of taste the same.
And I'm like, okay, uh, good to know. Uh, how
do I get out of here? Um, but I just like. Oh,
and then you see the pictures of, like, this FarmKind thing, and it's just rows and rows and rows, like The Matrix, where you see all those, you know, human eggs
or whatever, but it's rows and rows and rows of pigs.

(41:13):
And this is the craziest part to me. They are
laying on their sides like kind of like rotting from
like infections or whatever, like bed sores or whatever.
So they got bugs all over them rotting. These are pigs.
They want to run around like young kids, right? They
want to run around and have fun and, like, squeal

(41:35):
or do whatever pigs do. They live their entire life
on their side, and they can't stand up. They can't move.
They can't run around. And they just. Okay, so I
imagine a meter, this is the way I kind of
see the thing. There's a meter for planet Earth. There's

(41:56):
a suffering meter. And there is a like, fulfillment and
happiness meter. Right. And we can go deeper with that
of like, is it true meaning or whatever? Let's just
say fulfillment is the far right. It's like just complete
and utter, like deep contentment with life and happiness and such. Right.
And on the far left is like a an AI

(42:22):
designed like torture device, which embeds directly into your brain
and produces the maximum amount of suffering. Okay, so that's
the far left. And then you got the far right
is kind of the opposite. Now, having millions of animals
that cannot move and are stuck in a cage and

(42:44):
are just like experiencing extraordinary hardship. This is not hundreds
of animals. This is not thousands. It is hundreds of millions.
I think it's actually billions, I think it's billions of animals that we farm to eat or for their meat,
and they are producing suffering into the world. It's not

(43:05):
quite AI brain implanted torture, but it's too close for
my happiness. It is pretty close. Just just imagine a
conscious creature. Okay? We're not talking about a gnat or
an insect or a caterpillar, who knows what their brains look like, but I'm not sure they're experiencing suffering. Okay.

(43:28):
A pig is. Maybe a chicken is as well. So
pigs and chickens? Cows? I care about this. I care
about what that meter says on planet Earth right now. Obviously,
I care more about humans, right? Because we got billions
of humans who also need help, or at least hundreds
of millions or millions who are in a suffering category

(43:49):
as well. And obviously that's the highest priority. But the
irony is this: we could lower the meter more effectively
with these types of things. And this gets back to
the point I was making here. This this charity is
able to move that meter. I wish they had a
visualization for this. I

(44:11):
should make one because I'm not limited by type three restraint.
There should be a way to visualize. And they are
able to do this with metrics. They actually have a
way to say $10 will reduce suffering by some amount, it seems, through
the lobbying effects and through the various programs that they implement.

(44:34):
And they've got dozens of programs. It's not just
one that they put in. So there's like a cage one,
there's like a cleanup one, there's like a medicine one,
there's like a requiring that they are able to go outside, um,
the size of the cages, um, cage free. Um, the
types of medication, um, the, the number of inspections that

(44:56):
they have to do, like, I'm, I can't remember all
of them, but there's multiple programs that focus on particular
parts of the problem. And this FarmKind thing basically
can allocate money into different ones of those. And so
they're saying they're getting extreme results from a small amount
of money. So anyway, I reached out to them and um,

(45:19):
so they set up a thing for Unsupervised Learning. So it's FarmKind. I forget the URL; it's farmkind-something. Um, but it'll be in the show notes. It's in
the newsletter, obviously, which is the show notes. So yeah.
Go check this thing out. Click on the link. Give $1,
give $1 billion whichever one you have. And we could

(45:42):
actually like put a dent in this suffering meter on
planet Earth. Because, like, my suffering... okay, I'm a human, but I'm just one, one thing. Like, is my suffering, how much higher does that rate? Because mine is existential versus a pig's, whose is... I mean, it probably feels
just as bad, like just because it's not existential suffering.

(46:06):
I think it feels just as bad. Right? And if
there's a billion of them, they damn well outrank me. So I love having leverage via a donation system like FarmKind to be able to actually
do this. So, um, I recommend I ask you to

(46:29):
go and actually put some money into this thing. And
I love the fact that they set up a campaign
for us because, you know, a lot of people get
their newsletter, a lot of people get the podcast, uh,
plus YouTube or whatever. Um, we should, you know, be
able to help a lot of animals is, um, what
I'm thinking. And I would say, don't even think if

(46:50):
you don't care about animals, don't even think about animals.
You could help a lot of Earth-based consciousness experience less suffering and bring that suffering meter down. A couple new blogs: the two primary limitations on creativity piece, talked about that one, and the 20,000 eyes thought experiment about how AI

(47:14):
transforms us from workers building others' dreams into creators with 20,000 eyes and hands building our own. Cybersecurity: XBOW raised $117 million for hackers, then open sourced the whole thing.

(47:36):
This is the errata. This is the wrong thing. This
Strix repo is really cool. It's my top story, too. This is super annoying. Um, clear evidence that
I didn't use AI. Um, because this was my mistake. Um, anyway, yeah.
Strix GitHub repo is a cool thing. It's basically an

(47:57):
automated vuln checking system. Um, and it wasn't XBOW that did it. XBOW was someone doing kind of something similar,
and they just decided to stop doing their experiment on
HackerOne. So that part of the story is correct. Um, and XBOW is doing cool stuff, but they did not

(48:18):
release this GitHub repo. That's the part that's wrong. Here's one: Workday lost data through social engineering targeting Salesforce. So
a whole bunch of people are getting Salesforce compromised, which
is really, um, social engineering attacks targeting Salesforce data about

(48:38):
that company. And that's kind of why you keep
seeing stories that are kind of the same around these lines.
And Workday appears to be the most recent victim of this. Russia hacked a Norwegian dam and released 1.9 million gallons of water. Zoom patches a critical privilege escalation vulnerability affecting

(49:01):
Windows users. New public exploit chains for two SAP flaws with RCE. Agents, um, getting zero false positives in security
testing with deterministic validation. So that deterministic validation thing is
really interesting. This is, um, this conversation, this, uh, blackhat presentation.

(49:26):
There's a link to XBOW here. Maybe this is how
I got confused because I was reading like 20 of
these things at the same time on Sunday, and I
just like, got cross-pollinated whatever. Uh, my mistake. Um. Basically,
the idea here is similar to what I was talking
about before and what Matthew Brown was talking about, which
is deterministic. That's code. Right. Um, and then you have

(49:50):
AI and like don't make don't confuse the two. You
can mix the two, but don't confuse the two. And
they're basically saying like, here's a general thing. You don't
want to ask it, hey, is this vulnerable? Because they
want to please you. They'll be like, yeah, it looks
vulnerable to me. So many false positives. So it's better

(50:12):
to use like a deterministic system to check if something
is vulnerable, if you can. And then use the AI for different parts of the system that require intelligence.
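A tiny example of what deterministic validation can mean in practice: instead of asking the model whether a reflected-input finding is real, replay the request with a unique canary and check mechanically whether it comes back unescaped. The URL and parameter are placeholders, this is only a sketch of the pattern, and it's only for systems you're authorized to test.

```python
# A small example of deterministic validation: don't ask the model "is this
# vulnerable?"; replay the request with a unique canary and check mechanically
# whether it is reflected unescaped. URL and parameter name are placeholders,
# and this is only for systems you're authorized to test.
import uuid
import requests

def reflected_unescaped(url: str, param: str) -> bool:
    """Deterministic oracle: does our canary come back verbatim in the HTML?"""
    canary = f"canary-{uuid.uuid4().hex[:12]}"
    payload = f'"><i>{canary}</i>'
    response = requests.get(url, params={param: payload}, timeout=10)
    # True only if the exact unescaped markup is present: a yes/no answer,
    # not a model's opinion.
    return f"<i>{canary}</i>" in response.text

if __name__ == "__main__":
    print(reflected_unescaped("https://example.com/search", "q"))
```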
Critical vulnerabilities in the Flowise AI platform, 9.8 on the Richter
scale is going to happen. It's going to keep happening
because more and more systems are being built. So the

(50:36):
attack surface by its very nature is growing, therefore going
to be more vulnerable. Um, and then you have the
fact that AI is building so much of that code.
I'm not actually sure. Now I think about it. I'm
not actually sure: if we add a hundred, no, 2
million new websites to the internet that are suddenly built

(51:00):
by humans or suddenly built by AI. Which one is worse?
I guess it comes down to the question of what
made it be sudden. Why is there so all of
a sudden, so many 2 million more people making websites?
And who are those people? That's the question. Um, but
right now, I mean, humans aren't good at making secure code.

(51:23):
Really good humans are really good at writing secure code,
but most aren't. Which is why Stack Overflow is full
of garbage, which is why AI, trained on it, is also making garbage. But AI is making lots of
slop on the internet, and it's shipping code very fast.
So it's bad. Um, now, over time, I think that's

(51:48):
going to get better and better. But in the meantime,
the slop will have already been made. And then you'll
need something like the AI automation bots to go clean
it all up. Chinese hackers breached Taiwan web servers using
customized open source tools. This is a Cisco Talos find. Talos shows us how missing MFA turned valid logins into

(52:10):
full compromise. Russian hackers breached US federal courts through vulnerabilities discovered, but not fixed, in 2020. So getting compromised
by something that you should have or you were told
about but didn't fix. CISA releases new guidance telling companies

(52:31):
to inventory their OT systems. What do you know? Asset management.
This is going to be one of the biggest, coolest
things ever. Oh, by the way, I want to um,
I just thought of another idea which I didn't put
in the newsletter. Here's a cool idea and related to
my UEC concept of Unified Entity Context for enterprises. And

(52:53):
I got this idea, um, well, it's in the UEC
idea as well, but I got this very specific idea
from Prince, who's part of, um, Project Discovery. And
the idea is you send your agents out and their
job is to be like, they just sit inside of software.

(53:16):
They just sit in different places. They sit inside of wikis,
they sit inside of like organizations. They sit inside of software.
They sit inside of GitHub repos. And I forget how
he was talking about it. Um, but he basically said
distribute agents and have them do things and report back.

(53:36):
And so my mind started jumping towards the UEC idea. Um,
but I also jumped to the idea of security champions.
You know how like a lot of security teams, they'll
send their people out, like into the field, which is like, oh,
you live with engineering now? Oh, you live with product now.
You live with sales now or whatever. And their job

(53:59):
is to like sort of like understand report back but
also subtly influence them. Well, what if you didn't have
three of those people for the entire company, but instead
you had 3 million of them? Just imagine that every
piece of software, every piece of tech like your AWS accounts, your, um,

(54:22):
all your infrastructure, everything has agents inside of them watching
and reporting back. And some, of course, would be able
to make changes that'll come later. Um, and if you're
freaking out, that's because you work in security. Um, because
I'm freaking out. Just saying the words. So you have
millions of these agents out there. They're gathering, reporting back,

(54:44):
blah blah blah. Who's controlling these agents? What identity do
they have? How are we logging their information? Like auditors
are going to freak out about this, but I think
this is absolutely the way that UEC happens. Or one of the ways that UEC happens. There's different ways to
do this. You could have centralized agents that go, just

(55:05):
go and hit the stuff live and pull the stuff back.
But I think it's better to just deploy agents into
the systems themselves and have them point back to the center. Right?
Because, I feel like... well, it's probably going to be both. It's kind of like asset management, which is how I got to this point. If you have the

(55:25):
agents inside of all of these systems and they're reporting back,
guess what they're doing? They are updating and breaking out
sub contexts, which then get summarized by other agents to
report up into a higher context, which which is the UFC,
and that UFC ultimately becomes this conglomeration of all these

(55:48):
mini contexts that got updated from all the HR systems
and the project management system and the budget system and
all of those, like somebody buys, you know, a toothpaste
while they're traveling overseas on a work trip, meaning they
paid for the toothpaste. Okay, that hit some system somewhere.

(56:10):
Well guess what? In this world, which, by the way,
this is going to take a very long time to
roll out. So we're talking between two and, you know,
15 years, right. I'm sure someone's already doing it a
little bit, but it's going to take a very long
time to move through all of enterprise it in that world.

(56:31):
Whenever whoever gets it, that agent lives inside of the
finance system and the accounting system, and it sees the
toothpaste hit, and it reports up a tiny little entry
into a subcontext, which goes into another subcontext, which goes
all the way up. And ultimately, that probably wouldn't be
part of a big strategic picture, but at least it

(56:51):
would increment values, right? Um, and maybe it didn't need
to report that because that's part of traditional tech, which
gets valued later, but maybe they weren't supposed to be
traveling and that that, uh, purchase of toothpaste means they're
not where they're supposed to be. And now we have
a traveling problem and the VPN connection is turned off.
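A sketch of the reporting shape being described here: each agent that lives inside a system appends tiny events to its own subcontext, and a roll-up step summarizes those into the higher-level picture. The UEC naming and the JSONL-on-disk layout are assumptions made for the example.

```python
# Sketch of the edge-agents-report-inward shape: an agent living inside one
# system (finance, HR, a GitHub repo) appends tiny events to that system's own
# subcontext, and a roll-up step summarizes the subcontexts into the top-level
# context. The UEC naming and the JSONL-on-disk layout are assumptions.
import json
import time
from collections import Counter
from pathlib import Path

ROOT = Path("uec")

def report(system: str, event: dict) -> None:
    """Called by the agent embedded in one system: append one event to its subcontext."""
    subcontext = ROOT / system / "events.jsonl"
    subcontext.parent.mkdir(parents=True, exist_ok=True)
    event = dict(event, ts=time.time(), system=system)
    with subcontext.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def roll_up() -> dict:
    """Summarize every subcontext into one higher-level picture."""
    summary: dict[str, Counter] = {}
    for path in ROOT.glob("*/events.jsonl"):
        counts = Counter()
        for line in path.read_text(encoding="utf-8").splitlines():
            counts[json.loads(line).get("type", "unknown")] += 1
        summary[path.parent.name] = counts
    return summary

if __name__ == "__main__":
    report("finance", {"type": "expense", "item": "toothpaste", "trip": "overseas"})
    report("vpn", {"type": "connection_off", "user": "same-traveler"})
    print(roll_up())
```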

(57:13):
Whatever it is, the point is, all these different agents
in the different systems can be updating the central picture
of not just USC. But guess what you get for
free when you have USC, if it's reporting on the
proper stuff. Asset management we've never been able to do

(57:33):
asset management. Never. It's hard. Like, I consult for JupiterOne because I love their vision of doing this
with a graph. It is really, really hard. It is
really hard. Um, I did this for Robinhood. I did asset management for Robinhood using, um, a product

(57:55):
like this. Right. And I'm telling you, this is like,
it's completely, like, game-shifting stuff to have
actual real time understanding of the systems. Right? That's what
it comes down to. Asset management is an old word.
Let's not use that anymore. Um, you know, uh,

(58:20):
real time information gathering or whatever, like, there's old words
to use, and they're kind of dead in our minds.
Here's just another way to think about it. Millions of
agents deployed across your entire organization inside of all the
different tech stacks. That is very hard for a
centralized entity to go and audit. And by the time

(58:42):
you do go and audit it, it's too late.
That information is rolled off like there's a million examples
of this. And if you're in it or security or whatever,
you already know this, right. So planning, finance, tracking what
you know people are doing and if they're happy or
unhappy, and how can we make them happier.

(59:04):
How can we keep people from quitting? Um, customer churn.
That's a great one. Customer management. I mean, think about that.
Look at all the different things that we know the
customer is doing, and which one of those could be
signals that they're getting ready to churn? And what can
we do as a, as a result, like a reactionary action? Um,

(59:31):
so they haven't come to the portal recently. Okay, cool.
They start. Um, I don't know. They start blogging about
how the competitor looks better, and the team at their current, uh, offering doesn't, uh, ever reach out to them? Well, that's a great one. Are there any agents right now watching customers' public blog posts,

(59:55):
trying to figure out if they're unhappy with the product,
or they're getting excited about another product and then sending
them a preemptive email that says, hey, haven't seen you
log in recently. Hey, here's a $5 discount code or whatever. No,
I don't think so. And if there are, there's very few.
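A toy version of what one of those watching agents could compute from the signals listed above; the thresholds, weights, and suggested actions are invented purely for illustration.

```python
# Toy churn-signal scorer for the kind of watching agent described above:
# combine a few observable signals into a score and suggest a preemptive
# action. Thresholds, weights, and actions are invented for illustration.

def churn_score(days_since_login: int,
                mentions_competitor: bool,
                open_support_tickets: int) -> float:
    score = 0.0
    if days_since_login > 30:
        score += 0.4
    if mentions_competitor:        # e.g. spotted in a public blog post
        score += 0.4
    if open_support_tickets >= 2:
        score += 0.2
    return min(score, 1.0)

def suggested_action(score: float) -> str:
    if score >= 0.7:
        return "have a human reach out, plus a discount code"
    if score >= 0.4:
        return "send a 'haven't seen you log in lately' email"
    return "no action"

if __name__ == "__main__":
    s = churn_score(days_since_login=45, mentions_competitor=True, open_support_tickets=0)
    print(s, "->", suggested_action(s))
```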
This will become commonplace. The point here, and I'm just

(01:00:18):
saying the point over and over. So, um, I'm saying
it in different ways, hopefully it's sinking in. It might have
sunk in like, uh, 38 minutes ago. If so, I apologize.
Real time updating at the edges inward, as opposed to
being the central person who is responsible for all these things,

(01:00:40):
and all the systems are opaque. You have to reach
out to that owner. You have to get a user
and a login like it doesn't work. It hasn't worked.
That's why asset management has been so difficult. Um, and
that takes us all the way back to the point
I was making with CISA releasing new guidance, telling companies

(01:01:01):
to inventory their OT systems. It's hard to inventory your
OT system, right? It's also hard to install agents on
OT systems because they don't want to install anything and
haven't wanted to for decades. So this will be a
slow process. Um. National security UK police are getting ten

(01:01:23):
facial recognition vans to catch serious criminals. And they're saying
like sex offenders, people wanted for serious crimes. So they're
looking for, like, outstanding warrants. But you know what? It's
still the van with cameras. And, uh, it all depends
on the government you have on how those tools will
be used. Will it be the best thing ever for humanity? Yeah, maybe.

(01:01:46):
Depends who's using the tool. America's gray zone blind spot
is becoming a serious national security problem. So, uh, someone at
The Cipher Brief is basically saying that, like, Russia, China,
Iran are ramping up, sabotage, cyber attacks, other hybrid warfare attacks.

(01:02:06):
And basically the US is not good at these things. Now,
what I would hope is we actually are understanding these.
We're preparing to do them, uh, not in a terrorist way, but like, we understand and we're learning
how to defend against them. And we could potentially use
some of these tactics ourselves. Um, but that we just

(01:02:29):
don't talk about it. I think that's more likely to
be the case, and I hope it's the case. US
military transforms Indiana training grounds into drone warfare innovation hub.
Happy to see this. We are behind on drones, especially
as it relates to China. Russia blocks signal and WhatsApp
to control communication. US Navy wants 150 unmanned ships by 2027. Yes, please.

(01:02:56):
Our ships. I remember in the early 90s, I was
in the army. I was studying assassin's mace weapons, and
there was some general talking about how our fleets are
going to be extremely vulnerable to assassin's mace weapons by
specifically China, where they could deploy something. And our giant

(01:03:18):
like projection of power through aircraft carriers and carrier fleets
could just be taken out in a matter of minutes
or hours. using one of these types of weapons, or
multiple of these types of weapons, that costs very little
to make. And it turns out the number one assassin's
mace weapon. That has started to happen. I don't remember

(01:03:41):
if he was talking about it back then, because I
just wasn't smart back then, and I just can't remember
that far back. But, um, it is drones. Absolutely. It's drones.
And I will mention Kill Decision by Daniel Suarez. Go read that. Israel says Iran recruited dozens of Israeli citizens,

(01:04:04):
highlighting concern with like, yeah, it's pretty easy for foreign
Intel services to like recruit citizens. I mean, if you
have enough, if you have thousands or millions of citizens,
it's just a matter of like finding, you know, it's
a filter system to find the right ones. And it's
unfortunately pretty easy. China detained second senior diplomat in expanding probe.

(01:04:27):
So they're arresting a bunch of diplomats. Not exactly clear
what they're doing wrong. I'm guessing it might be like
not being pro-China enough. Maybe they're becoming too Western. No idea,
but I'm curious about that. AI: Anthropic's CEO says AI will write 90% of code in 3 to

(01:04:49):
6 months. He said stuff like this before, and to
be clear, he's not saying there won't be people writing
code like only 10% of people now will be writing code.
He's saying that just there will be so much more
AI code that it'll be 90% of the total. OpenAI

(01:05:09):
is building chromium based web browser with AI agents that
can browse for you. My buddy Jason is like, yeah,
the next big thing, uh, for AI is
basically browsers. And I think he's turning out to be correct. Like,
I keep waiting for the next step, which is the DA,
which is what I'm building. And I think that's kind

(01:05:31):
of the final step there is to have your DA
do all this for you. And a browser would just
be a tool, but it sounds like he's correct in
that this is going to be the home for a
while because we all use our browsers. We don't have
DAs yet. DA platforms aren't there yet with like Apple
and Google or OpenAI with like a separate device. So

(01:05:53):
that ecosystem like hasn't started happening yet. Um, but browsers have.
So I think he's turning out to be very correct
about this. And now OpenAI is getting in the game.
Sam Altman says we're in an AI bubble. I agree
with this. I just think it's not the kind of
bubble that people think it is, because it's not like

(01:06:15):
AI is going away or it's going to crash. So
when I think bubble, I think NFTs or, um, Pets.com.
Because largely those things don't exist anymore. And it was
it turned out to be kind of a scam. Now,
I'm not an expert on, uh, the dot-com bubble because I

(01:06:37):
kind of wasn't sentient at the time, but, um. My
understanding was a lot of hype around, like, if you
just have like a .com, blah, blah, blah, like you're just
going to make tons of money. And the NFT thing
is like, yeah, you're going to have this, you're going
to buy it for $18.14, and you're going to be

(01:06:58):
a millionaire if you hold on to it for a
year and a half. Those are bubbles to me. Okay.
The AI thing, to me, it's the most important tech
ever invented, bar none. No questions asked. Bigger than the internet.
Bigger than mobile. Bigger than the steam engine. It's simply
the biggest tech ever. Okay. That is not a bubble

(01:07:22):
at all. What's a bubble about AI is thousands of
people putting hundreds of billions of dollars. I think we
can call it trillions of dollars now. Like invested and,
you know, earmarked or whatever. Like, let's just call it
hundreds of billions of dollars into new startups, new websites,

(01:07:45):
new whatever. And everyone's jumping in there like, oh, yeah,
you know, I do a slightly different prompt and, um,
I've got this cool website and I'm going to raise
a whole bunch of money and I'm going to hire
some people, and I have a startup and it's like, no,
not really. Not when OpenAI and Anthropic make a slight

(01:08:06):
change to their code base, which just eliminates your
company and 40,000 other companies just like it. It's a
bubble because people are unsure of what it is and
they're getting in in unsafe ways and haphazard ways and
kind of stupid ways in a lot of cases. A

(01:08:28):
lot of people aren't, obviously. But there is going to
be a major correction because there is so much money invested.
The investors don't understand AI. They're starting to a little
bit now, but they've been investing hundreds of billions of
dollars for like the last year or two years. I
don't know how far back. All that money was sent

(01:08:51):
in on FOMO. It was sent in on hype. Okay,
so this is this is the distinction. This is why
people need to think about this super clearly. Both things
are true at the same time. AI hype? That's going
to die down because people got in in very stupid ways.

(01:09:12):
People are going to call that a bubble: "Look what
happened to AI. It turned out to be garbage. A
huge mistake. A massive mistake." That is not a bubble.
That is stupidity about a thing that is not understood
and basically incautious and ignorant action taken as a result.

(01:09:34):
And absolutely that's going to crush. Honestly, I think it's
going to crush the majority of companies, the vast majority
of these small little companies. Um, how about Google? How
about meta? Are they going to have an AI hype
and crash cycle? Uh, no, they are not. Because what

(01:09:58):
people are doing inside of those companies is they're looking
at all their existing processes. Another good example. Bank of America.
Are they going to have an AI hype cycle? No
they're not. They already know how everything works inside their company,
and they are carefully figuring out how to apply AI
to those existing processes. They're going to save tons of money.

(01:10:21):
They're going to do things way better. And looking back
ten years, they're going to be like, yeah, we wish
we had this earlier. Like we would have saved tons
of money. Unfortunately, they'll probably get rid of people as
a result as well, which is a whole separate talk show. But.
For people who are doing AI correctly and will do
AI correctly in the future, it is laughable. Laughable to

(01:10:45):
think of AI as a bubble or as hype. Okay,
for all the NFT like crypto, like people who just
thought AI was a way to make money. It's absolutely
a bubble. 100%. And they are going to get crushed.
So just keep both of those ideas in your mind

(01:11:07):
at the same time. Whenever you hear AI bubble AI hype.
All right OpenAI updates GPT five to be warmer and friendlier.
A lot of people really hated it. They hated the personality.
I mean, there were, I think there were suicides over
using GPT-4o. He had to bring GPT-4o

(01:11:27):
back because everyone loved it so much after they complained
about it constantly. Uh, should be noted. Slop squatting is
when attackers register fake packages that LLMs hallucinate. I should
have put this in the cybersecurity section. Yeah, this is
really funny. So you make up a whole bunch of
AI slop names, and then later on when AI slop generates, uh, package

(01:11:54):
names that are real, people start using them. They will
own the real estate. So they basically just, like, man
in the middle your, uh, your stupid AI-named library.
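A minimal sketch of the defensive side, assuming you gate AI-suggested installs on a quick PyPI lookup before running pip install; the 90-day "too new" cutoff and the exact checks here are illustrative assumptions, not anything mentioned in the episode:

```python
# Illustrative sketch only: check an AI-suggested dependency against PyPI before installing.
# The 90-day threshold is an arbitrary assumption, not a standard.
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

def check_pypi_package(name: str, min_age_days: int = 90) -> str:
    """Warn if a package name doesn't exist on PyPI or was published very recently."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"'{name}' does not exist on PyPI -- likely a hallucinated (slop) name."
        raise
    # Earliest upload time across all releases shows how established the package is.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return f"'{name}' exists but has no uploaded files -- treat with suspicion."
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        return f"'{name}' is only {age_days} days old -- could be a slop-squat."
    return f"'{name}' looks established ({age_days} days old)."

if __name__ == "__main__":
    # Example: python check_pkg.py requests
    print(check_pypi_package(sys.argv[1] if len(sys.argv) > 1 else "requests"))
```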
Shadow AI is like having rogue interns secretly running parts
of your business. Yeah, especially given what I just talked about,
about millions of agents deployed. You imagine an auditor coming

(01:12:18):
into a place and they're just like, yeah, I just
need to see, you know, who has access to change things.
And it's like, yeah, we've got, you know, so and
so AI platform, and, um, it's running on average 2
million active agents, but they're, they're swapping in and out
all the time with like a variance of like 200,000

(01:12:39):
to 400,000 agents active at any one time. And it's like, okay, cool.
Can you print me out a spreadsheet? I just want
to review those line by line. Like that's going to
be a nightmare. An absolute nightmare. Of course they'll be
using AI to do that, but whatever. It is weird
to think about millions of things deployed that have agency. Um,

(01:13:04):
and they could see things and they could report things.
I mean, think about central AI system compromise in this world.
I mean, are you kidding me? I mean, a single
kill command. Because ideally, you're supposed to be able to
talk to all these agents, right? You obviously need to
be able to give them commands. You need to be
able to update their instructions. What if you could just

(01:13:26):
tell them, wipe everything? What if you could tell them
inject this tiny little like cuckoo's egg, steal half of
a cent or whatever and send it to this Bitcoin account? Like,
I mean, this is just wild world we're about to
move into. Now, ideally you'd really, really defend this thing,
but oh my goodness. You know mistakes will be made.

(01:13:49):
DeFi startup replaces $45,000 in content staff with a 23-agent
AI system for $23 per month. No, $20 per month,
a 23-agent AI system. This is the type of thing
I'm hearing about all over the place. They're like, yeah,
I was spending, you know, 400 grand on this thing.

(01:14:11):
Now I'm spending $284 and getting better results. Oh, best
example of this is like, big giant reports. Big giant.
So two names that I'm super like, I just think
they're about to get destroyed. Gartner and McKinsey. And like
everyone who's like Gartner and McKinsey. So the more expensive

(01:14:35):
it is for, the nicer the looking report it is there,
the more the most screwed. Like there is expertise inside McKinsey.
There is expertise inside Gartner. But they're going to have
to become like influencers. They're not influencers, but like content creators,

(01:14:56):
like basically what I'm doing, they're going to have to
go independent essentially, and have their own broadcast of their
talent and their skill and their expertise, because it's going
to be hard to differentiate because most, I would say
most high quality reports like that. First of all, they

(01:15:16):
show up with the coolest people like me who've been
in the industry for 30 years, and I go do
the sales call and dazzle them, and then I leave.
And then, you know, 14, you know, smiling 22 year
olds show up who just basically learned what a command
line is and basically learned what security is. And those

(01:15:38):
are going to be the consultants writing your report. And
really they're using the expertise from all the previous reports.
And like how does that match. Oh, and by the way,
it's going to be $180,000 for this report, which takes
nine weeks to make. Okay. And it's a giant opaque thing.

(01:15:59):
It takes time to parse it and everything or whatever,
as opposed to all the same context that those smiling
22 year olds had to get from you by all
the interviews, you could just give that to your AI.
Your AI is actually way smarter than the collective knowledge
of all of McKinsey. And so you make that same

(01:16:19):
report in four minutes for $1.17, or let's just say
it's actually way more expensive. We're actually going to spend
way more time. We're going to use the most expensive
AI ever invented, you know, you know, uh, Claude Code fucking, uh, seven, right?

(01:16:44):
So it's actually $14. It's the most expensive prompt ever
run in the history of mankind. It's a $14 report,
and it took, uh, nine minutes to make compared to
whatever the cost was of the other one. That took

(01:17:04):
nine weeks. These companies are in dire, dire trouble. Technology.
Robinhood CEO says remote work was a mistake, orders
executives back five days a week. Yeah, honestly, I think
on site work is better. I really do. I love

(01:17:28):
working remotely. I think certain people are as good or
better working remotely, but the vibe of a team is
a hundred times better on premise. I've seen it multiple times,
and I've seen CEOs learn this the hard way multiple times.

(01:17:51):
And I know lots of people who've done both and
tried both. And this is my current wisdom level, is
that in general, on site work is better. So cool. Uh,
what I do like about the announcement was for executives,
it's mandatory five days on site. For managers, it's four

(01:18:14):
days on site, and for regular workers, it's three days
on site. And his quote here is, um, "Your
boss should be feeling more pain than you." I love that.
I hate the fact that it's pain. Uh, which is

(01:18:34):
why I think we need to move to Human 3.0.
Because this is ridiculous. We should all be working remote
and doing, like, live meetups to work on our mutual projects.
And if we have companies together or whatever. But this
whole thing of you must come in. Uh, I just
I think it's very old world. Uh, unfortunately, it seems

(01:18:57):
to be more effective in the current model. China mandates
domestic firms source 50% of chips from Chinese producers. Should
have put that in NatSec. Text-only websites are faster, cleaner,
more accessible than modern web bloat. I love text only websites.
I have very few images on mine. Researchers create

(01:19:19):
microfliers that levitate using only sunlight. I need many of these, um,
although I want to give them lots of capabilities. Which
means they'll be too heavy, which means they won't float. Um,
but this is a project I want to work on
more. Humans. Most Americans think AI will permanently eliminate jobs.

(01:19:41):
This is from Reuters Ipsos. Multiple studies show transfers to
poor Americans basically don't help. This is like UBI studies.
Looks like UBI studies are not looking good. The hope,
including my hope, was that you give them this money
and they're just going to focus on growing themselves and

(01:20:01):
like learning and blah, blah, blah. The more I think
about this, the more I kind of always knew this
would not happen. Most people simply don't want to learn.
Most people do not want to improve themselves. Most people
are not like, curious and ambitious and like they're looking
for their next Netflix show. The vast majority of people

(01:20:22):
I would say, I don't know the numbers, but I
think this is a thing that can be helped. I
think this could be an education thing. I think some
of this could be trauma, it could be bad education,
it could be bad training, it could be bad culture.
It could be lots of different things that make this happen.
But for example, um, most libraries like, don't have people

(01:20:46):
like knocking down the doors. It's free. Those are free books.
Nobody's reading them. Okay. Okay. Free books. Nobody's reading them. Okay.
The internet is full of free books. Nobody. Nobody's reading them.
If you ask the average person, like what books are
you reading? They're like, are you stupid? Like, I graduated

(01:21:08):
high school a long time ago. I barely read those books.
Why would you make me read a book? Now, some
people read a lot of romance books, a lot of, like,
comics or whatever I'm talking about, like. And I would
say that's already just brilliant and amazing, just that by itself.
But I'm talking about actually, like mostly non-fiction or really,

(01:21:30):
really good fiction, difficult books, books that grow us. That's
what I'm talking about. Very few people like, I'm in
the San Francisco Bay area, okay. And most people here
aren't reading. Most people I know in San Francisco Bay area,
I don't want to make like, broad statistical things. Um,

(01:21:51):
because I think, you know, first of all, that's impossible. But, um,
I think a lot of the stats are wrong.
I think when you report, how many books do you
read a year? I bet you most of those stats
are complete garbage. Who wants to say I haven't read
a book since high school? Not very many people. My
point is, I know lots of very smart people who

(01:22:14):
make decent money, and I know all their friends as well.
And I mean, I know hundreds or thousands of people.
So this is my anecdotal data set. Most people don't
read books, right. So. I don't know why I went

(01:22:35):
so heavy into the book thing, but my point is,
in general, most people don't care about self-improvement. They're not
obsessed with it. Um, so when UBI money comes in,
they're not thinking, wow, I can learn a new career. Wow.
I could, I could buy even more books with this.
So, long way of saying, I think this is why

(01:23:01):
we're seeing the results from these UBI studies. Like, they'll
just eat out more. They'll just, you know, improve their
lives in other ways. They'll, you know, buy more subscriptions,
buy clothes. And what's wrong with that? There's nothing wrong
with that. I'm just saying, like, we should stop, like
deluding ourselves that everyone is trying to be in this

(01:23:23):
category that they think they are. All right. Europe faces
an emergency as American markets completely dominate global investing. China's
economy slowed in July despite exports staying strong. I'm about
to do a major Substrate project here to gather economic data.
A whole bunch of data, actually. But economic data. I

(01:23:46):
want to know where we're getting these numbers from because
we know China's numbers are bad. Like, what are we
judging this based on? And I want to kind of
establish some ground truth there or at least triangulate onto it.
Nobody's buying homes or switching jobs as American mobility stalls.
Americans are drinking less alcohol than they have in 90 years.

(01:24:09):
This is from a Gallup poll. Williams syndrome makes people
hyper-social and trusting of everyone. It's basically the opposite of...
being autistic. I almost said autistic. It's way different. UK
drought group tells citizens to delete emails to save water.

(01:24:35):
That's why the UK cannot have nice things. They're telling me
it's not the UK, it's some drought group. Okay, so
the country didn't do this, but whatever. It rhymes with
a lot of things that the UK is trying to
do to stop progress. The story behind Timothy Leary's turn on,

(01:24:56):
tune in, drop out. You got to go check this
thing out. Tune in. No. Turn on, tune in, drop out.
Definitely something to check out. Ideas. Happy about this section?
I feel like AI is going to have a profound
impact on globalism, especially for knowledge work. So the first
jobs to go away are like the Fiverr jobs, the
Upwork jobs, um, a lot of creative work, a lot

(01:25:22):
of build me a basic website. And that's the first
stuff to go with AI. And I'm really worried about
millions of people because, like, we're always so us focused.
Obviously I live in the US. I'm very US focused,
but it's like, what about the rest of the world?
What about all these really smart, ambitious knowledge workers who

(01:25:44):
learned how to make websites and they're getting crazy income
and helping their families and helping their local economies from
all over the world. You know, Vietnam, Poland, like all
over the place. And now suddenly that faucet gets turned
off because smaller groups of people can use more AI

(01:26:09):
and take more of that work. One of the primary
directives of building AI systems should be to use as
little AI as possible. Cool. Talked about that one. Thought
I forgot it. It's in there. It's number two. If
most people aren't confused about what I'm writing and sharing,
that means I'm not properly exercising my type one freedom

(01:26:31):
or my type two freedom. Actually, now that I think
about it. But yeah, this is a whole thing and
this is already a 17 hour podcast. So I'm going
to I'm going to pass on this one for now.
But basically if you are speaking from your true authentic self.
You should be surprising yourself and you should be surprising

(01:26:51):
everyone around you. If you only say things that people
are like, yeah, that's perfectly correct. And they're not like, hmm,
that guy's a little bit weird. Like, yeah, what? Why
is he talking about that? That's kind of weird. Like,
it doesn't match any of his other content. Um, so
this is why I like Tyler Cowen. This is why
I like Dwarkesh: they just explore things. Okay? And, like,

(01:27:18):
I don't think Dwarkesh is sitting around wondering, like, oh, gee,
I can't put out this video because no one will
like it because I'm supposed to talk about AI. He
literally just goes on rants about China and he just
did the animal one. He's literally just chasing his, uh,
his interests. Right? And that's, I feel like, that's what

(01:27:38):
I do as well. It's massively hurt me. It's massively
hurt my socials because I'll talk about whatever and they'll
be like, don't talk about politics. Don't beat up on Trump.
He's a great guy. And it blunts my stuff there.
But I think it's just a better way to live,
to not be restricted in this way. Number four, a

(01:27:59):
lot of what comes out in your words and your
writing comes not from your thoughts or what you wanted
to say, but from your current emotional state. So the
way I think about this is there's a thing you
think you're saying, like, for example, in this whole podcast,
there's a thing that I think I'm saying, right?
I think I'm sounding smart. I think I'm sharing information

(01:28:22):
with people. I think I'm like being helpful. I think
I'm being excited. But people might be hearing something else.
They might be hearing, oh, this guy's actually trying to
promote his products. Like, let's say I was actually trying
to promote a product, which I'm not right now, but
maybe I will be in the next podcast. Who knows? Um,

(01:28:45):
let's say I was actually trying to promote a product,
but I was trying to talk in a way in
which it seemed like I wasn't. I was like, oh, yeah, well,
you know, everyone should use Threshold, for example. I'll be like, yeah,
everyone has a problem. You know, I just think it's
really weird. People, you know, can't find whatever, they can't
find the right stories and like, they're not rated high enough. And,

(01:29:07):
you know, there's there's, uh, small creators around the world
and they can't get their stuff found or whatever. And
the receiver is like, I feel a pitch coming. And
in my mind, I don't think I'm sending that, but
they can receive it. So I'm obsessed with this idea of, like,
receivers can hear things beyond the senses. They have an,

(01:29:32):
you know, an extra sensory way to sense truth in
what's being sent. So it's almost like what I'm sending
is beyond words and meaning, and what they're receiving is
beyond words and meaning. So the words I'm saying and
the words they're hearing are at that surface level, but
what they're actually understanding, what I'm actually sending is different

(01:29:55):
than what I think. And the only thing that
can interpret it is the receiver. And they're going
to receive it emotionally, and I'm going to send it
emotionally at another layer than my words. So I'm like
obsessed with this. I'm like, that's how I could read
a whole bunch of tweets, including from my own, and

(01:30:15):
just be like, um, well, I'll just use someone else,
for example. They're like, oh, blah blah, I'm such a
hero and blah blah blah. And I did this and
I didn't even want any appreciation from it at all. And,
you know, I just put a lot of effort in
and blah, blah, blah. And the thing that they're trying
to send with their words is. Um, well, they're trying

(01:30:44):
to create the impression that they're just a selfless person,
and they just do it for the community or whatever.
But what a tuned receiver hears is I'm a giant narcissist,
and I want everyone to love me and think I'm
the best person ever. And I am obsessed with the

(01:31:04):
difference between those signals and how the second one is
often invisible in the text that they said, but also
to the to the sender, to the person who said it.
They're like, what are you talking about? I'm just like,
just sharing some of the things I've done. Like I
donated blood nine times yesterday. Like I'm just sharing that information.

(01:31:25):
I'm not trying to say that I'm a hero. Now,
if you want to call me a hero, you, you're
free to do so. Um, anyway, that's kind of the
point there. All right. Discovery. AI apps are becoming like
music: personal, contextual, and infinite. Really cool little idea. So

(01:31:47):
it's like with music. Everyone has
their own world of music. Isn't that cool? That's a
cool idea. Claudia is a desktop companion for Claude Code.
Haven't messed with this. Command line person, as everyone knows. Uh, which,
by the way, I was trying to convey there that

(01:32:09):
I'm awesome. Even though I just said I'm a command
line person, I was actually trying to convey that I'm awesome.
And you should be awesome too. And use the command
line. Context7 brings always-fresh documentation to AI coding
assistants via MCP. This is cool. Okay, so this is
an MCP you put in your AI system and what

(01:32:32):
it does is it will go get the current documentation.
So if you're like, hey, build me a new website,
it will go and get,
just without even asking, the latest shadcn documentation, um,
before it starts doing the design phase using your specs.

(01:32:55):
So it won't just start building using its current model knowledge.
It keeps it updated based on what you mentioned. So
if you mention MCP, it'll go read the latest MCP stuff.
If you mention Claude Code, it'll go read all the
Claude Code docs. Really, really cool.
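A minimal sketch of the underlying pattern, assuming all you want is to pull current docs at request time and put them in front of the model instead of relying on stale training data; this is not Context7's actual API, and the README URL below is an assumed example:

```python
# Sketch of the "always-fresh docs" idea behind tools like Context7.
# Not the Context7 MCP itself; the docs URL is an assumption for the example.
import urllib.request

def fetch_latest_docs(docs_url: str, max_chars: int = 8000) -> str:
    """Download the current README/docs so the model isn't limited to training-time knowledge."""
    with urllib.request.urlopen(docs_url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return text[:max_chars]  # trim so it fits comfortably in the prompt

def build_prompt(task: str, docs: str) -> str:
    """Prepend the fresh documentation to the user's request before it goes to the model."""
    return (
        "Use the following CURRENT documentation as the source of truth:\n\n"
        f"{docs}\n\n"
        f"Task: {task}\n"
    )

if __name__ == "__main__":
    # Hypothetical example: refresh shadcn/ui docs before a design/build step.
    readme = "https://raw.githubusercontent.com/shadcn-ui/ui/main/README.md"  # assumed URL
    prompt = build_prompt("Build me a website using shadcn/ui components.",
                          fetch_latest_docs(readme))
    print(prompt[:500])
```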
Amsterdam's Ritman Library puts 2,178 rare occult books online for free. An agent that auto-applies

(01:33:21):
to jobs for free. Um,
I haven't used one of these, but, um, I'd like
to make one and give it to my friends. That's
what I would like to do. Holy crap. That's a
project I need to do. Why have I not done that? Okay.
Hold on. Writing this down. Oh, then I'm going to

(01:33:46):
put a stripe page on it and sell it. Um,
but not for my friends. Okay. Make, um, automated. Sorry.
Doing this live. Make automated job agent or. Oh. See that? See?

(01:34:09):
Watch this. What, what did I actually say? What
I said was I was making an agent for friends.
What am I actually conveying? Um. Oh, this guy makes
stuff for his friends and gives it to him for
free because he's a nice guy. I am absolutely sure
I was subconsciously sending that. Um, I'm also consciously sending

(01:34:32):
that all the time, so I don't mind that it's
unconsciously being sent, but there are cases where I'm unconsciously
sending something that I don't want to be consciously sending. Uh,
let's see here. OWhisper. OWhisper makes local speech
to text as simple as Ollama. This is a
pretty cool little project. Uh, AI-generated EDM is getting really good,

(01:34:54):
so I went and made some EDM. I'm telling you,
you can make fun of me for this, but, like,
it's starting to sound pretty good. Same with rap songs. Um. Oh,
metal songs too. Oh my goodness. I've heard some metal
songs actually. Chad. My buddy Chad made one for my
character that I just made in this one game. It's

(01:35:16):
one D&D game we play on Fridays, and, uh. It's,
I mean, it was themed to my character and the
character concepts. I think he used Suno to make it,
and it was a song. What I sent back to
him in the text was I would go see these
people live. I would seriously go and see these people live.

(01:35:41):
Of course, there is no these people because he made
the song in like, you know, 14 seconds using Suno. Um,
Insane, insane world. AI tools spawn equal and opposite AI tools.
This is Joan Westenberg. She is just killing it. Absolutely
killing it with the essays, like every single one, is good.

(01:36:02):
Shadow AI agents are creating the same security chaos as
early Wi-Fi adoption. That's another cool analogy I really like.
We've replaced anxiety with apathy as our defense mechanism. This
is another one from Westenberg. 70s kids jumped bikes over
each other constantly. That was me. Um, and there's pictures
in there and they look very familiar. Actually, a lot

(01:36:24):
of the kids look exactly like me. Uh, AI automation
specialist posts job search on Reddit. That was a cool
one to check out. Recommendation of the week. Think about
the distance between your thinking, speaking, and writing versus your feeling,
speaking, and writing. Have someone read... basically, have someone read

(01:36:47):
some of your stuff where you think you're going to
get an unexpected reaction and see what they hear or
see when they read it. So what were you saying
without you knowing you were saying it? And you'll have
to do this with like a very smart friend who's
also very honest. And then if you find a big gap,
try to work on making sure that your verbal words

(01:37:08):
are the same as your emotional words in the receiver.
Aphorism of the week. The first principle is that you
must not fool yourself and you are the easiest person
to fool. First principle is that you must not fool
yourself and you are the easiest person to fool. Richard Feynman.