All Episodes

A Chrome 0-Day, Meta Automates Security Assessments, New Essays, My New Video on Hacking with AI, Ukraine's Asymmetrical Attack, Thoughts on My AI Skeptical Friends, The Dangers of Winning the Wrong Game, and more...

You are currently listening to the Standard version of the podcast, consider upgrading and becoming a member to unlock the full version and many other exclusive benefits herehttps://newsletter.danielmiessler.com/upgrade

Read this episode online: https://newsletter.danielmiessler.com/p/ul-483

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:17):
All right. Welcome to episode 483. Just a quick reminder.
I'm trying to get a lot more content out, and
I don't want to miss regular newsletter episodes. And I
want to actually get them out more closer to Monday
or Tuesday, ideally Monday, so that the podcast and any
videos and basically this newsletter and the podcast version of

(00:39):
the newsletter comes out as close to Monday as possible.
So I'm going to be doing a number of things
to try to speed that up. But one of those
things is actually having less editing on this podcast, especially
when I'm doing the newsletter. I mean, I'm talking straight
for 20, 30 minutes, and if I give this to
an editor, it's going to take them, you know, a
day or two to turn that around. Now, eventually, hopefully soon,

(01:01):
I will be able to find all my mistakes and
clean out the ums and ahs and the restatements of
a sentence. Because what I'll often do is like, make
a mistake in a sentence and then I'll say the
word edit, and then I will say the sentence over again.
So if you hear any of those or you hear me,
just restate the sentence. I know it's annoying. I apologize,

(01:22):
I do not like to hear that in podcasts I
listen to. But if the podcast is high enough signal,
then I'm okay. You know, taking a little bit of that.
And I'm saying, just bear with me because I think
it's more important that we get the content out, and
it's more important that we get the ideas out, and
we are just waiting for I to be able to
automatically clean that up. So I will just drag and drop.

(01:44):
It'll go clean all that up and we ship the
thing and there's no delay whatsoever. So I just wanted
to mention that and sort of preemptively apologize. All right.
First piece here released my new video on where I
think hacking is going. It's called The Future of Hacking
is Context, and it is on YouTube. And I recommend
you go check it out. Got a new essay on
how I see AI affecting education that's on the website. Uh,

(02:07):
new essay on AI job replacement timelines, and a new
essay on my two groups of cyber AI friends where
basically it's crazy. I've got one group of friends who
are like, I can't believe this whole thing is a fad.
This is nothing but billionaires trying to get rich. Like
it's an NFT crypto scheme and it's just complete and

(02:28):
absolute garbage. And you know, I've got one friend, Marcus Hutchins,
who doesn't even believe that AI is intelligence. He just
believes it's a complete scam. The entire industry is broken.
It's not real intelligence. It's not real AI. And it
just completely down on it. And he published a really

(02:50):
cool essay, I think, which I linked to, which talks
about why it's not real intelligence. And in the short
answer or the short summary of it, is the fact
that it knows so much, but it still can't invent
new things. And my concise answer to this, and by
the way, I'm about to debate him in public in

(03:11):
a recorded podcast on both of our shows, and we're
going to go through this in depth. But the short
answer to that is your rights. It's not producing real
innovation yet. But my argument is that real innovation, real
net new ideas is actually a tiny fraction of the

(03:32):
intelligence that humans use every day. You know, 99.99% of
intelligence that we use every day is not inventing net
new things. It's applying existing knowledge that we have existing
experience that we have, using our intelligence to apply that
to a new problem. And the new problem is also
not a net new 99.99% problem. It's just a slightly

(03:58):
different version of that problem. Right. If you think about
an average white collar worker, if you think about an
average admin or secretary person or a an average coder even. Right.
Usually the problems are like very similar to problems we've
solved before, right? And there are some exceptions with with

(04:18):
coding and with high end engineering and of course, with,
you know, pinnacle level science. There's lots of exceptions, but
the vast majority of the intelligence that we use every
single day, it's all the same stuff with different names
and slightly different configurations. It's all the same stuff, right?
And more importantly, teaching kids answering their questions, being a tutor,

(04:40):
being a teacher, being a helper, you know, being able
to help out inside of companies and be an assistant and,
you know, make meetings happen and coordinate and summarize things
and keep everyone up to date. This is what the
vast majority of work is, right? And I would argue
that's absolutely intelligence. It's absolutely human intelligence. And we know

(05:04):
this because we can't automate it. That's the reason it's
AI is because. If if we could automate it away,
it would have already been scripted and we wouldn't actually
have human employees doing that work. So if you think
about the average job that someone does in an intellectual
or cognitive role, like knowledge work, 99.9% of it, and

(05:27):
I'm just making up that number to, you know, to
be a little bit hyperbolic there. 99.99% of that work
is extremely repetitive. It's just slightly changing the work that
we do, right? It's slightly different scenarios, slightly different users,
slightly different use cases. But ultimately what this comes down

(05:50):
to is can I replace that? And I think we're
already seeing that. So I think that's empirically being shown
to be the case. Now Marcus's argument here is well,
that's not real intelligence. That's not real AI because it's
just doing the same thing over and over. And my
point is, that's what millions of jobs are actually doing.
That's why millions of jobs are actually going to get replaced,

(06:12):
is because AI is going to be able to do that.
0.9 9% better and faster and much cheaper than most humans.
So my whole thing with AI is that it matters
because of how it impacts humans, right? Getting rid of
millions of jobs affects humans. That's why I define AGI

(06:33):
as an AI that can replace an average knowledge worker,
because I don't care about AI in a theoretical sense.
I don't care about it in a technical sense. I
care about it mostly in how it impacts humans and
human society and human meaning. So that's what our debate
is going to come around. And anyway, the way I

(06:54):
got on to this is that that's one view. It's
like this very anti-ai, very negative view of like, we
don't even have it. It's not real. It's all hype.
It's just like crypto. It's just a bunch of billionaires
trying to get rich. All these stats and benchmarks, everything
being said is basically marketing from executives for these lame, stupid,
temporary AI companies that are going to be gone immediately.

(07:18):
And basically it's not going to have much job impact
because it can't actually replace a human. So that's kind
of their argument. I'm not lumping everything I just said
into Marcuse's argument. I'll let him make his own arguments
when we have that discussion. But that's one group generally now.
The other group is like myself and Joseph Thacker and
my friend Jason Haddox and my friend Sasha Geller. Like,

(07:42):
we're all thinking, you know, we kind of saw where
this was going, especially myself and Joseph Thacker and Sasha.
It's like end of 22. We're like, oh my goodness. Right.
So all the stuff I've been talking about for the last,
you know, two and a half, three years is we

(08:02):
I kind of saw the glimpses of that already happening
at the end of 2022. It was kind of inevitable.
And the more the more the AI started to happen,
the more obvious it gets or whatever. And of course,
it's part of the book. That was in 2016 as well.
But the point is, one group of people just still
today in mid 2025 doesn't accept it. Other another group

(08:24):
of people like myself and a bunch of other people
just like me or similar to me in mindset. They
got it instantly. Or if I'm right, they got it instantly.
If I'm wrong, then they were also instantly wrong, just
like me. And what's fascinating to me is these are
all security experts. These are all deeply technical, deeply experienced,

(08:44):
multiple decades of experience in cybersecurity, and we have drastically
different views on this. That's what's so fascinating to me, right? Why?
Why do they think that way? And why do we
think this way? What makes the difference between being in
one group or the other? And I think the answer
comes down to just how they overall view the world. Um,

(09:06):
Marcus in particular is very, I want to say socialist,
but not not socialist in a, in a bad way,
socialist in a human uplifting way, which I am as well, right?
I just I'm also a capitalist. So it, uh. Complex topic.
I'm not going to go into that, but the point
is he is very sort of anti billionaire anti-big business.

(09:29):
He sees all this as a giant scam, a big, uh,
you know, a big push to try to just, uh,
remove jobs or trick people into removing jobs when it's
not even real. He's just very cynical about the entire
thing based on, you know, kind of being anti-capitalist, anti,
you know, big business anti billionaire for sure. So I

(09:51):
think people who have that view or I think the
other view that that puts people in this category often
is just being anti-change. And a lot of people in
information security are actually anti-change. I didn't realize this until
a few years ago, but it's like there are many
people who are just like, oh, don't change that. You know,
when I was growing up in the 80s or 90s,

(10:11):
you know, the protocol worked like this and that's how
it should always work and can't stand these these crazy
kids with all their JavaScript and all their dynamic web
apps and, you know, all the crazy APIs. Like, I
just see a lot of get off my lawn type mentality.
And people who have that mentality tend to be very
anti-ai in this way. And then my other group of

(10:32):
friends are like, yeah, tech changes all the time, no
big deal. And they kind of have this mentality that
I have, which is shepherding. So I see myself as
a shepherd, you know, with like the stick and the robes.
And I don't want to use sheep here because I
don't consider all of humanity to be sheep, but whatever,
we'll use sheep. It's not actually pertinent in this. The

(10:54):
idea is you. It's your job to walk up ahead
and see if there's danger and to, like, help steer people.
Because in this metaphor, it's not superiority of the human
to the inferiority of the sheep. It's more like superiority
of your knowledge of the risks versus inferiority, of the

(11:14):
knowledge of the risks. So it's my job. Switching to
a military metaphor, it's my job to go out there
and step on mines. It's my job to go out
there and detect all the mines, and then spray paint
a path on this minefield so people can walk out,
walk on it without getting hurt. Right. That's the way
I see it. And I see the this minefield is
constantly changing because the tech is constantly changing, and we

(11:37):
should just accept that and everything's fine, right? Change is constant.
That's fine. Oh, we're going to lose all these jobs. Cool.
What's next? Um, AI is going to replace a whole
bunch of stuff. Cool. What's next? Let's talk about the upsides. Oh,
we're all going to become way more creative. Okay, cool.
Let's talk about that. Let's figure out how to uplevel
and upskill. So I think that mentality versus the other
anti-change mentality is the primary thing that's guiding whether or

(11:59):
not someone is in one of these camps versus the other. So, um,
that's that's essentially what that essay is about. And what
I just said is like 20 times longer than the
actual essay, which is like a one minute read. But
that's the advantage of doing a long form podcast. All right.
Let's see here. Gukesh Indian current world champion beat Magnus

(12:23):
Carlsen in the for the first time in classical chess.
And Magnus like hammer fisted the table. Crore. Super super loud.
Gukesh didn't jump, so I gave him credit for that.
But that was cool. Got a video of that. All right. Cybersecurity.
Google patches new Chrome zero day bug in attacks Microsoft
and CrowdStrike create a shared threat actor dictionary. This one's

(12:45):
cool because some threat actors have like 15 different names
for them, so they unified that. That's, I think, progress.
OpenAI's O3 discovers Linux kernel zero day vulnerability I talked
about this last week, but, um, yeah, this article is
pretty good here. Meta plans to automate product risk assessments
with AI. This is really cool. 90% of app updates

(13:06):
they're going to do the assessments using AI. This has
always been one of my favorite things. I did this
very early, um, when I first started to pop, um,
a filtering system for a big problem with security teams
is they are the bottleneck between engineering and being able
to ship and not being able to ship. So they
get really mad at security about that. I love the

(13:27):
idea of having all the context of the app that
they're trying to push through and sorting it into, like buckets.
You could actually use lots of different buckets. You could
use like 2 or 3, but you might be able
to sort it into like 15 different filtering buckets where
your pipeline of security checks is vastly different based on okay,
is it privacy focused? Is it customer data security focused?

(13:48):
Is it proprietary, data focused? Like what are what's the
threat model, which it could automatically do as well. It
could understand your risks of the company. It can understand
worst case scenarios for the company. And let's say it's
sorting into 16 different granular security check types. Hopefully most
of those being automated. But if it's a super high
priority thing it goes into the top one. It goes

(14:10):
into number 16. And that one is actually a manual
red team pen test, which is associated with automated scans
that already happened or whatever. But imagine being able to
speed up 85%, 95%, 98% of projects that engineering is
trying to push out for the company and have only 2%
actually involve manual testing, um, or whatever. I think that's really,

(14:34):
really powerful use case for this. And anyway, so meta
is doing that or something like that for 90% of
app updates using AI. Massive Asus router botnet uses persistent backdoors.
Over 8000 Asus routers affected by this, and it survives updates.
So if you reboot the router, the thing is still there.
Russian market becomes top destination for stolen credentials. So this

(14:57):
is the Russian market cybercrime platform. That's the name of it.
And it's filling the gap left by Genesis Markets takedown
DOJ takes down four major services using that were used
by cyber criminals. So it's a basically like URL hiding
uh infrastructure China linked hackers exploit SAP and SQL server

(15:18):
flaws in attacks across Asia and Brazil. And this actor
is called Earth. Lamia. Lamia and they're hitting multiple organizations
since 2023. National security. Ukraine hides explosive drones in wooden
sheds to hit parked Russian bombers. This one is absolutely insane.
I'm sure you've seen stuff on it already. It's. It

(15:40):
reminds me of what actually happened with the Israeli attacks
on Hezbollah, with this ingenious, sort of like personalized little
mini bombs or whatever that went after leadership. So it
wasn't like giant drone strikes. It wasn't like giant bombing raids.
It was these personalized little things against leadership, which, uh,
a lot of people are saying, you know, tactical win here. Uh,

(16:00):
and it was pretty effective. And I think this is
similar in the sense that it's asymmetrical use of force. Um,
you have Ukraine, which is out outmanned, outgunned. And they
snuck these drones, uh, over a thousand miles inside of Russia.
I think maybe even close to 2000. 3000 miles in

(16:23):
some cases. Um, snuck them in, and, um, basically, they're
inside of Russia. They they get all the way to
the airfield. They remotely open them. They're now sitting on Russian, uh,
cell networks, uh, which is what's used to activate them
and control them. And they, um, they hit these Russian bombers,

(16:45):
and these Russian bombers cannot be reproduced. Russia does not
have the ability to make these bombers again. And what's
interesting about this is they didn't diminish Russian bombing against Ukraine.
They diminished Russian bombing capabilities completely strategically as compared to
the United States as compared to China. So they significantly

(17:06):
damaged Russia's capabilities, right? Not to mention, you know, the
hundreds of thousands of actual troops they've lost. Um, but
this this war is seriously, seriously harming Russia's overall strategic
military capabilities. And, uh, the thing to call out here
is just how much Ukraine has changed warfare using drones.

(17:31):
And again, I implore you to go read Kill Decision
by Daniel Suarez, which talks about specifically autonomous drones, which
can't be taken out by GPS blockers or Em blockers
because they could do everything, uh, self-sufficient inside the drone itself.
China's deep network penetration signals war preparations. So former Trump

(17:53):
adviser H.R. McMaster told lawmakers that China's extensive hacking of
US infrastructure is actually them preparing for kinetic war. FBI
arrests defense intelligence IT worker for Pak drop espionage. So
he basically went to go do one of these, you know,
spy versus spy dead drops in a park in Virginia. And, uh,
the guy was actually part of DIA's internal threat division. So?

(18:18):
So he's part of the team designed to help people
find or design to find spies inside of Dia. And
he was one of these spies, and he dropped this material,
and it was actually dropped to an FBI agent and
got all the write ups there. I Mary Meeker returns
with First Trends report since 2019. Definitely go check this out.

(18:42):
She is a beast of analysis and she's been dormant
for a while. And now she's back. And this analysis
is like 300 something pages. Got a decent summary here
in the newsletter, but go check it out. Dwarkesh Patel
has longer AGI timelines than his podcast guests. I think
he's wrong about this, I think. So basically what Dwarkesh
is saying is that AI is not smart enough to

(19:06):
do continuous learning. And I think I'm not sure about this,
but I think he thinks that's an IQ problem, whereas
I see it as a scaffolding and system problem. It's
a memory problem. It's a context size problem. It's a
even more than a context size problem for the model.
It's more of a context knowledge base management problem. So

(19:26):
this is all scaffolding to me. And I think this
scaffolding is going to improve as fast or faster than
the IQ of the models themselves. So I think this
is not as big of a hurdle as Dwarkesh is
thinking it is. McKinsey says the future of work is Agentic.
They released this big article talking about becoming digital workers

(19:48):
that can think, decide and execute tasks on their own
and that it's worth a read. You should go check
it out. It's a it's a good article by McKinsey.
The truth about AI and job loss. Neruda from meta
dug into historical data to find out which AI, which
jobs AI will actually eliminate, and whether there's still room

(20:08):
for junior developers. That was a good read. Google Gemini
integration with Siri could fill Apple's personal context gap. I
think this analysis is wrong. I included it because it's
an alternative view to my own, but I think this
analysis is wrong. The problem is not Apple's access to
personal data. They have the best. They've been building iOS

(20:29):
all of this time. They have the best personal contacts
on people, I believe, on the planet. The problem is
having Siri prompt injected while it has access to all
that context. I don't think they know how to give Siri,
or whatever the super agent is going to be is
probably still going to be Siri. They don't know how
to give Siri access to all that data in a

(20:50):
secure way, in a comprehensive, powerful way that is secure.
I think that is what is stopping them. That's my guess.
I don't know that for sure. Not talking to people
on the inside about it, but that is my guess.
Based on working in security at Apple for like three years.
Snowflake buys crunchy data for $250 million. And this is snowflake,
which is a giant data platform making an AI agents

(21:13):
play Technology. McKinsey uses AI to automate PowerPoint creation and
proposal writing. So their platform Lily now handles PowerPoint creation
and proposal drafting. Over 75% of people using it monthly, internally, employees.
Workday plans to rehire the same number of people they
laid off, but with different skills. This is what I've

(21:34):
been saying. They want completely different people, completely different people.
Nvidia develops new AI chip for China that meets export controls.
I don't think this stuff matters that much. I think
if China gets less chips, they'll learn how to make
more AI with less chips, and in the meantime, they're
making their own chips. In fact, they just released one

(21:54):
that performs really well. I think ultimately China is going
to get everything that we have. I think ultimately, most
AI advancements are happening at the software layer via what
I used to call in 2023 tricks, a whole bunch
of different tricks that they're using. Right. And those tricks
I think get leaked, I think they get stolen. And
so for example, there's an argument that's something that OpenAI

(22:18):
was doing with oh one got stolen by deep sea
and deep sea copied it. And guess what? Their R1
model was nearly as good as oh one. After OpenAI
spent all that money and deep sea spent way less.
That's an example of a trick or a set of
tricks getting stolen. I think that's going to continue to happen.

(22:39):
And it's why not only that, the pinnacle model makers
are going to stay close to each other, but also
open source is going to stay close to the pinnacle
model makers. So I think this whole thing just kind
of churns along, chugs along, advances, slowly rolls forward with them,
largely similar with them largely staying neck and neck. Not always.

(23:03):
There's going to be these these big jumps, right? But
I think that thing will be copied. What I thought
was would happen, and what I think most people thought
would happen is when somebody made a major advancement, like
OpenAI or anthropic or someone in China. They would be
a leader for six months, a year or two years.
They would just be like, oh, they did something crazy,

(23:25):
and no one can find out what it is. And
they're just way ahead. And everyone's developing like independently in
these silos. But it turns out so many of these
researchers know each other. So many of these researchers are
moving between the model maker companies. So many of them
talk and share their stuff in coffee shops and in

(23:45):
conferences and stuff. These secrets, these tricks, these scaffolding things,
because a lot of this is not about the pre-training
of the model. It's about what you do after Post-training
where a lot of these tricks come into play and
it's just being spread. It's just being leaked. It's just
being shared so much that I think we're going to
have a lot of parity with, with some exceptions, uh,

(24:08):
going forward. And I think the only thing that changes
this and gets really weird is when you cross through AGI,
where I think that still maintains what I was saying.
But when it hits ASI, that's where you're going to
have these giant jumps. And I don't know necessarily. Well,
no one knows what that's going to look like because
it's going to be ridiculous. Computer science unemployment hits six 1%

(24:30):
despite Major's popularity. So a lot of questions of like,
is computer science still a good thing to go into? Um,
I think dialectic, I think studying, um, philosophy, um, rhetoric, dialectic, um,
how to think clearly, studying history, studying computer science, um,

(24:53):
studying physics. I think these are the core things that
people should actually be studying. And you should be getting
your kids and your younger young ones, or even people
who are advanced in their careers. This is how you retool.
And this is what I'm going to be putting into
the curriculum for H3 for human 3.0. Uh, this curriculum,
I think, is the core sort of first principles type

(25:15):
education that you're going to need. And of course, you
can specialize in all the stuff that you want like
that you're super interested in. But this core, I think,
is what's going to make you resilient to all these
different changes. Humans 60% of Americans have retirement savings accounts,
but it's very lumpy. So 83% of people making over
100 K have retirement accounts. Only 28% under 50 K

(25:39):
have retirement accounts. There's a massive racial gap. There's also
a massive gap between college graduates and, uh, people who
don't have any college. So 81% versus 39% and white
versus people of color, it's 68% versus 42%. Uh, US
economy contracts, contracts more than expected in Q1. It actually

(26:03):
looks like it contracted by 0.2 percent in Q1. Younger
generations less likely to develop dementia. I think this is
because previous generations they would stop working at 60 or 65,
and they would just be like, oh, I'm just going
to play with grandkids and I don't really need to
do anything. And I think when you tell your brain
to shut off, it, like literally starts shutting down and

(26:24):
you start getting dementia. That's that's me being a fake doctor.
Do not listen to me. I think it's backed up
by tons of data. But I think if you treat
your brain and your body like you are a 22
year old in terms of like you're lifting weights, you're
working out, you're in the sun, you're doing cardio, you're

(26:44):
learning new things constantly. You are executing on new cognitive
tasks constantly, and you're constantly relearning and in upgrading yourself,
then you are basically sending a signal to your brain
and body, hey, we are 22. You better stay in shape.
And me, as old as I am, that's exactly how
I feel. I feel 22 all the time. Unless I'm

(27:09):
like off routine and low energy or whatever, then I
feel like my age. But, um, I would say 95%
of the time I feel 22. And this is why
and this is why I think there's such a big
difference between dimension numbers. Uh, in and I think the
number is 25% versus 15%. So what is that? That's

(27:29):
a massive difference. Um, is just because the old mindset
of retirement versus the new mindset, which is more active
American versus European mindset on life. Really good piece. If
you're useful, it doesn't mean you're valued. This was a
good piece, uh, about economic value and how companies view us.
How much coffee is too much? Really good news. About

(27:50):
3 to 5 cups of coffee. Uh, discovery. Run your own.
I locally on your Mac Anthropic's interactive prompt engineering tutorial.
Indirect prompt injection overview. My AI skeptic friends might are nuts.
My AI skeptic friends are nuts. This is by another
security guy who's basically saying the same thing that I

(28:11):
was talking about in the beginning from that essay. Um,
and I've got a cool, cool quote here from it. Um, quote.
But the code is shitty like that of a junior developer.
And his response to that. Does an intern cost $20
a month? Because that's what cursor costs $20 a month.
I think that's a great point. Cloud code is my computer.

(28:32):
You to inky turns video into vocabulary flashcards, Java filters,
hacker news job posts with AI powered metadata. Tensor product
attention is all you need. The metamorphosis of intellect. This
is a really cool book. It's an online text. You
can just click the link in the newsletter and you
get the full book. It's fantastic. One of my top
ten sci fi books ever. Andor season two shows how

(28:54):
insider threats actually work in real world organizations. I have
not seen all of Andor one, so I need to
go watch that. And two evidently it's the best Star
Wars in a long time. GitHub repository of end to
end workflows. Someone created a GitHub repo of tons of
scraped any workflow like hundreds of them. And you could

(29:15):
just go get them. Because they're in GitHub now. And
because it's like a text configuration, you could just like
paste them into N810, which by the way, is becoming
one of my favorite AI automation frameworks. And the number A10,
that and bedrock are my favorites and lane graph. But
I'm using that last now, um, preferring to use my

(29:38):
five year experiment with UTC. So someone is now using
UTC for everything. I thought I was crazy for going
to metric. This guy's using UTC book of Secret Knowledge
GitHub repository. Jason Chan and Clint Gibler have a brilliant
conversation in the latest TLDR sick. So my buddy Clint,
who runs TLDR comm, which you should go sign up
for that newsletter. Um, he had a guest post by

(30:00):
Jason Chan, who's like a absolute security guru, actually kind
of pioneered, uh, security guardrails and, um, really, um, really cool,
innovative security guy who ran security at Netflix. But anyway,
he wrote a piece about building a security program for TLDR,

(30:20):
and it is extraordinary. You got to go check it out. Okay,
this is the end of the standard edition of the podcast,
which includes just the news items for the week to
get the rest of the episode, which includes much more
of my analysis, the ideas section, and the weekly member essay.
Please consider becoming a member. As a member, you get

(30:40):
access to all sorts of stuff, most importantly, access to
our extraordinary community of over a thousand brilliant and kind
people in industries like cybersecurity, AI, and the humanities. You
also get access to the UL Book Club, dedicated member
content and events, and lots more. Plus, you'll get a
dedicated podcast feed that you can put into your client
that gets you the full member edition of the podcast

(31:01):
that basically doesn't have this in it, and just goes
all the way through with all the different sections. So
to become a member and get all that, just head
over to Daniel Store.com. That's Daniel Miessler, and we'll see
you next time.
Advertise With Us

Popular Podcasts

United States of Kennedy
Stuff You Should Know

Stuff You Should Know

If you've ever wanted to know about champagne, satanism, the Stonewall Uprising, chaos theory, LSD, El Nino, true crime and Rosa Parks, then look no further. Josh and Chuck have you covered.

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.