
AI Finds an 0-Day!, Postman Leaking Secrets, High Agency Mental Model, My Unified Entity Context Video, GitHub MCP Leaks Private Repos, Google vs. OpenAI vs. Apple on AI Vision, and more...

You are currently listening to the Standard version of the podcast. Consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade

Read this episode online: https://newsletter.danielmiessler.com/p/ul-482

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:18):
All right. Welcome to Unsupervised Learning. This is Daniel Miessler, and this is episode 482. I want to mention real quick that there haven't been any regular podcast episodes recently, as I'm sure you've noticed, and I'm rethinking how I'm approaching the podcast. Basically,

(00:40):
the issue that I have is that a lot of
the podcast episodes were just really long, and they're really
difficult for me to actually read out, because I end
up going into a whole bunch of extra exposition, and it's either too long for people or
it's really exhausting for me. Well, it is too long

(01:01):
for many people, and it's definitely exhausting for me. And
the other problem is, a lot of the content that's
in there is not interesting to a lot of people, right?
It's a lot of detail about stuff that a lot
of people don't care about. So what I've been doing
is switching up to focusing on ideas that are shorter
and more compact. And that's what you've been seeing in
the podcast feed, and it looks like you've been enjoying

(01:22):
those based on the number of downloads. So I'm going
to continue to do that and especially focus on that
because I think that's basically the future of the podcast: focusing mostly on ideas, exciting things that are happening, and trends, and talking through them in depth, which is the thing people like from the podcast anyway.

(01:43):
So think of it this way, the podcast will still
be the same, the podcast will still be coming out,
but what you'll be getting is where I would go
into a topic before inside of a podcast episode. Those
are now getting pulled out and turned into their own
little mini segments, and those are going into the top of the podcast at the feed level, right? And
they'll have their own names and everything. The question is,

(02:04):
do I still do this news one which I'm doing now, right?
And a lot of people still want to get the
news one. So I think what I'm going to do
is continue to release the specific episode based ones. You know,
the idea ones I just talked about, but do the
audio version of this one here, where it's labeled episodes
and it's actually news and it's, you know, it gives
you coverage of what happened in the week before, like

(02:26):
the podcast has always been. So I'm going to try
not to go into too much depth on various ideas
and topics. I probably won't be able to hold to
that because I get excited, but ideally, any depth content
inside of that episode should be broken out into its
own episode, which will be a mini segment inside of
the feed. So this is why you haven't seen the
same number of specific UL-labeled episodes. It's just because

(02:50):
I'm a little bit less interested in the news stuff,
and more interested in the ideas and the analysis, which is,
you know, where the main push is going to be.
But again, enough people are saying they want the news.
So I'm going to continue to put that out in
audio only right here at the same place on the podcast.
All right. So with that out of the way, let's

(03:12):
go into the episode. All right. So if you still doubt that AIs are able to reason, you need to watch this segment of two Anthropic researchers talking with Dwarkesh Patel. These are two guys he's brought on before, and they talked about the future of AI. That was almost a year ago, seems like five years ago or whatever, but it was really good. But this particular

(03:34):
segment that I highlighted in the newsletter, and by the way,
you won't have the newsletter handy when you're listening to this audio. Ideally, you could scroll through the newsletter or watch the video, pause it, listen, whatever. They are definitely sisters of each other, right? The podcast
versus the newsletter. It's all the same stuff. It's just
the text version, but more importantly, it has the links.

(03:55):
So I have a link in there specifically going to
that part in the video where they talk about this.
And the thing they're talking about is this circuits research that they did. The interpretability team from Anthropic released a paper showing that when they ask Claude a question,

(04:16):
they can see different sections of the model light up. The circuit they're talking about is parameters spanning multiple layers of the model. I don't know how many layers current models have, but it's many, many layers deep, left to right. Now inside

(04:38):
of each layer there's so many parameters, right. You know,
millions of parameters or whatever it is. But what they
could see is what each of those lit-up areas actually corresponds to in terms of concepts. So they can
now see that when they prompt a model to, say,

(04:58):
write a sonnet or something like that, you can see it planning early in the process, thinking about where it wants to go and moving through that. The
other example that he gave that was absolutely insane is
there's this really rare symptom of a problem that has
to do with pregnancy. And what they did was they
gave the problem to Claude, and Claude immediately took the

(05:21):
symptoms and started mapping cause and effect across multiple possible diseases. And it lit up the answer, actually.
But it started thinking about what the symptoms would be, which could exclude it. Right. So it started thinking
very much like a human. But the thing that
was different about it is they released a paper showing

(05:41):
how these sections are actually lighting up. They're getting visibility into how the AI is actually thinking and how it's
thinking about cause and effect and logic and conclusion. And
basically, the thing a lot of people say, oh, it's just next token prediction? This pretty much puts that
to sleep. It really does because obviously it's still only

(06:03):
doing next token prediction. You know, air quotes only. But
if it's lighting up these sections and we could see
it's lighting up the same sort of sections, there's probably
an analog like this in our own brains. We are
mapping concepts, we are mapping cause and effect in the world.
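As a toy picture of what "lighting up" means here, the sketch below hand-builds a two-layer network of named feature units and reports which ones cross an activation threshold for a given input. Everything in it is invented for illustration (the feature names, weights, and threshold); the real research recovers millions of learned features across layers rather than hand-labeling them like this.

```python
# Toy illustration of interpretability "circuits": a couple of layers of
# named feature units, where a unit "lights up" when its dot product with
# the current representation crosses a threshold. All names and numbers
# here are made up for illustration.

LAYERS = [
    {"rhyme-planning": [0.9, 0.1], "syntax": [0.2, 0.8]},
    {"line-ending-word": [0.8, 0.3], "meter": [0.1, 0.9]},
]

def lit_features(layers, representation, threshold=0.5):
    """Per layer, the feature names whose activation crosses the threshold."""
    lit = []
    for layer in layers:
        active = [
            name
            for name, weights in layer.items()
            if sum(w * x for w, x in zip(weights, representation)) > threshold
        ]
        lit.append(active)
    return lit

# A "write a sonnet" prompt, represented as a 2-d vector leaning on dim 0:
print(lit_features(LAYERS, [1.0, 0.2]))
# [['rhyme-planning'], ['line-ending-word']]
```

The interesting part of the actual work is that the lit features span multiple layers and chain into each other, which is why they're called circuits rather than just activations.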
And by the way, it came up with the right
solution for this medical thing. And I said this to

(06:24):
a friend of mine named Jonathan who's a cardiologist. He's like,
this is exactly how I think about problems, you know,
related to my field. He's an active cardiologist; he's in his office, in his practice
all the time. And I just want to emphasize how
incredible this is, okay. And I want you to be

(06:47):
able to send this link and send this clip to
people who keep saying they don't really understand. And I've
also got another link related to this, of Ilya explaining how next token prediction and true understanding can be the same, and in fact are the same. And
basically what he says is it turns out if you
want to do proper next token prediction, you actually have

(07:09):
to understand the world that you're talking about, okay. And
that very closely maps and tracks with this circuits research,
where you're lighting up different parts of the model, which
are interconnected, and related concepts which show a clear understanding
of cause and effect. Right. So this is just really
important to understand that just the fact that they're doing

(07:32):
next token prediction doesn't mean they're not understanding. Next token prediction is just the output. It's just the mechanism. It's
like watching Shakespeare speak words on a stage, and then looking over at somebody and saying, he's just moving
his mouth. And what happens is when his mouth moves,

(07:54):
it vibrates sound. And the sound comes to our ears
and it vibrates something in our ears. And then we
hear his words like, he's not doing anything special. He's
just moving his mouth and vibrating air. Yeah, you're right.
You 100% got me. He's just moving his mouth and
just vibrating air. That has nothing to do with what

(08:14):
underlies it, right? And it is the same with AI
understanding and next token prediction. All right. See how I get diverted here? All of that, and we're on the number one bullet before we even start the newsletter. Okay.
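Just to make the "it's only next token prediction" framing concrete, here's the mechanism in its most trivial form: a bigram counter over a made-up corpus. The corpus and names are invented for illustration; the point of the argument above is that doing this prediction *well* at scale forces genuine world modeling, which this toy obviously lacks.

```python
from collections import Counter, defaultdict

# The most naive possible "next token predictor": count which token tends
# to follow which, then always emit the most frequent follower. This is
# the bare mechanism critics point at; it shares nothing with how a large
# model earns its predictions.

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Most frequent token observed after `prev` in the corpus."""
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- seen twice after 'the', vs once each for others
```

The gap between this and a frontier model is exactly the gap the Shakespeare analogy is pointing at: same output interface, wildly different machinery underneath.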
Next one. If you're part of the 23andMe breach,
which I was, you can submit a claim to receive
between $500 and $1500. But you have to do it

(08:37):
before July 14th. And I have the link here. All right.
I've got a new video out. It is called Unified Entity Context: The AI Stack Everyone Is Building Without Realizing It.
You've got to go see this video and make sure
you like and subscribe on the channel as well. And
my friend Ali is looking for a new hybrid on-site

(08:59):
position in the New York area doing backend development. She's an absolute force: MIT grad, with a master's in engineering and a bachelor's in electrical engineering and CS, three years of backend engineering experience, founder and DevRel experience, lots of Golang, really strong public speaking and presentation skills, and she's on social media.

(09:23):
She explains stuff. She does education for the community. She's
just fantastic. And she's a hardcore, serious MIT developer. Don't get it twisted. A lot of people see that she has good energy and she's on social media, and they assume she's some kind of influencer who's talking about coding. No, she's not talking about coding. She's doing it. Master's from MIT. Okay.

(09:45):
So you want to hire her very quickly if she's
not already snatched up. Cybersecurity. Palo Alto's research team simulated a full ransomware attack in 25 minutes using AI agents.
I continue to like Help Net Security's job postings of different cybersecurity roles. I think they put this out every

(10:05):
week or every couple of weeks, but it's pretty good. And I've got the link here to the latest job listings.
Postman looks like it's logging secrets in environment variables. So you want to go in and change this. This is actually on by default. A lot of people are really pissed about this. So you want to make sure to turn that off, or find another solution if it makes you that mad.

(10:27):
O3 actually found a remote Linux kernel zero day, so
it might be the first instance, although I wouldn't be surprised if it weren't, of an AI model actually finding a zero-day vulnerability. Claude and GitHub MCP will leak
your private GitHub repositories. You got to watch out with

(10:47):
these MCPs. You really do. There's a lot of
hand-waving going on with them. They're very powerful. They're very awesome.
But you got to watch out. So be careful with
anything you're doing that's sensitive. Signal blocks Windows screenshots to counter the Recall feature. So Recall wants to take screenshots of everything. Signal is specifically blocking that because it goes against their ethos, obviously. Police arrest third suspect in Bitcoin

(11:11):
torture case. This guy escaped after three weeks of
torture by colleagues trying to force him to reveal his
Bitcoin password. And evidently he did not. My understanding from
the military is if someone's good at torturing, you are
going to break. The only question is when. So the
fact that it went three weeks is probably just because it was his colleagues, who were not really good at the whole,

(11:33):
I guess, applying-pressure thing. US intelligence
is creating a unified portal for buying your personal data.
So they're building a centralized system to streamline how they
purchase sensitive information from brokers. So they normally would have
required a court order to get this stuff. But if
they buy it legally from brokers, they can do that,
which means they're just now building this giant AI portal

(11:55):
AI data lake so they can ask questions and get
the data back in aggregate. I mean, we definitely knew
this was going to happen, but it's still pretty scary.
Coinbase hit with insider data breach, refuses $20 million ransom.
So here's the craziest part: it was their overseas support agents, which means you could pay them much less money to bribe them.

(12:15):
So there's a really cool meme on this. I don't
remember how they phrased it, but my phrasing is it's
cheaper to hire overseas people, which means it's also cheaper
to bribe overseas people. And definitely something to think about.
Czech Republic accuses China of hacking its foreign ministry. AI hallucination cases database launched. So someone created a database of real-world hallucinations caused by AI. EU enters vulnerability

(12:40):
tracking race as the NVD is getting really old. So they're
bringing their own competitor in there. FBI warns of AI
voice impersonation campaign targeting officials. So they say to watch out for that. And they're actually coming after the people who
are doing it if they can find them. Amazon has
a serious problem with fake supplements. Really good thread here.
Indian police are attempting to read suspects' minds, so they're

(13:02):
using brain scans to detect guilt. And pretty much all
the experts are like, do not do this. It's not
what you think it is, but they continue to do it.
Snowflake CISO shifts from shared responsibility to shared destiny, which
is really just more alignment with the business, which I
think is the only way to survive as a security
leader right now and going forward. Russian Apt28 breaches organizations

(13:26):
to track aid routes. So they are hacking a whole
bunch of different groups to figure out who's sending support
to Ukraine, obviously, so they can try to shut that off.
Regeneron to acquire 23andMe and its customer data. National security. Russia mandates location tracking app for foreigners in Moscow. They're going to

(13:46):
track every foreigner's location, or they're going to try to. I'm not sure how far along it is. All right. AI. OpenAI's Jony Ive hardware, Google I/O, and my current assessment.
So they are building this little round necklace. It looks
very pretty. It looks to have a camera and a microphone on it. And OpenAI actually bought his little company for over $6 billion. So he now

(14:09):
works at OpenAI, for Sam. And I think
this is moving us towards the vision that I was
talking about before or have been talking about for a
long time, where basically, um, your digital assistant is using
this thing that you're wearing to monitor the visual in
front of you. Now, Apple is talking about doing this

(14:30):
with AirPods and actually having cameras on the AirPods. I
would love to be able to see in front of
me and behind me. That would make me happy because
my digital assistant, whose name is probably going to be Kai. Well, it already is kind of Kai, but when the real one comes out, I'll transfer his soul into the new location. But anyway, I want him to be able to see in

(14:51):
front of me, obviously hear, obviously monitor all the APIs
that we've been talking about for everything around me. And
obviously you have an earpiece, so we could talk in
my ear very quietly, like, this person is lying to you.
Someone's sneaking up behind you. Someone is staring at your
screen from your right rear oblique, you know, at your 8 o'clock. Someone's monitoring your screen, whatever. The camera

(15:15):
is the biggest piece here. And yeah, I'm getting worried that Apple is falling behind here. Google's I/O conference was unbelievably great. They announced so many different things. Basically, the sleeping giant has woken up, and they're moving quick. And Apple needs to do something cool, hopefully this week. Oh,

(15:37):
and by the way, they're renaming their operating systems. So
it's going to be iOS 26 coming out in the
next couple of weeks instead of iOS 19. So they're naming it for the upcoming year, just like car models. So iOS 26 coming out. Anthropic's Claude Opus 4 shatters limits with a seven-hour coding marathon. So

(15:58):
the big thing here is that the agent can
maintain the context of the goal for seven hours and
keep working towards that goal. That's the real big update here.
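To make "scaffolding" concrete, here's a minimal sketch of the kind of state an agent harness keeps outside the model: the goal, the planned steps, and which ones are done. The class and field names are invented for illustration, not any vendor's actual framework.

```python
from dataclasses import dataclass, field

# A toy agent scaffold: the model proposes and executes steps, but this
# structure (not the model's context window) is what remembers the goal
# and the progress toward it across a long run.

@dataclass
class TaskState:
    goal: str
    steps: list
    done: list = field(default_factory=list)

    def next_step(self):
        """First planned step not yet completed, or None when finished."""
        for step in self.steps:
            if step not in self.done:
                return step
        return None

    def complete(self, step):
        self.done.append(step)

state = TaskState(
    goal="refactor the billing module",
    steps=["read code", "write tests", "refactor", "run tests"],
)
state.complete("read code")
print(state.next_step())  # -> 'write tests': the tracker, not the model, knows where we are
```

Even something this simple is why "seven hours on one goal" is a scaffolding story as much as a model story: the harness can re-inject the goal and the remaining steps on every turn.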
I've been talking for a long time about this. Scaffolding matters more than the models. Context matters more than the models. Systems matter more than the models. Having a

(16:20):
way to track what you're trying to do and understand
what you're trying to do, and understand the various steps involved,
and understand where you are in terms of doing all
those steps. That is the thing that's going to super
power all this AI stuff. Yeah, the models get better,
they get smarter, whatever. But the more important thing is
how big is our context window and how powerful is

(16:41):
the system that they are using to try to accomplish
their goals? And how well can they track the state
of their progress towards those? And those are the things
that they keep making more and more progress in across
all these different companies. And that's why we're getting these
giant leaps, not just because, like the intelligence of the models,
but the scaffolding of everything around it. Authors accidentally leave

(17:03):
AI prompts in published novels. So I think these are
like romance novels. And I've got some examples here. But yeah, it's got the AI talking back: okay, I did this in the voice of so-and-so author, hope you like it. And it's in the actual published book on Goodreads. Researchers develop RL method that predicts future outcomes. So yeah, exciting. Scary. Which is everything

(17:28):
in AI. Tech CEOs using AI avatars to replace themselves
in earning calls. So Klarna and Zoom CEOs are actually
using AI versions of themselves. I'm not sure that's
a good precedent to set, but pretty cool technology. Apple
races to build smart glasses after getting behind on AI wave.
So yeah, Google is building these things. Meta is building

(17:50):
these things. Apple was starting with Vision Pro, but that's
not portable. So they got to go with the glasses
model because that's where Google and Meta are going. And look, they've got to move fast.
Tesla makes big push to catch Waymo in Austin robotaxi race.
I can't wait to see this play out. I want Elon to win here. Well, I'm mad at Elon, so

(18:12):
I don't personally want him to win all that much right now. But I want Tesla to win
because I don't want BYD to win. Google is American,
so I'm happy that there's a competition between Waymo and Tesla,
and that makes me happy. And I can't wait to
see a lot fewer drunk drivers, hopefully. Because when they get into this vehicle, maybe it won't drive

(18:36):
if you're drunk. Maybe they just get into the back of a free vehicle because some local community service pays for any drunk person to call, and it will take them home. Stuff like that I'm really looking
forward to. Plus texting and driving. Someone really wants to look at that TikTok, and they happen to be driving,
and that is horribly dangerous and I can't wait for
those accidents to go down. YouTube introduces Peak Points, AI

(18:59):
ads after emotional moments. Google commits $150 million to develop
AI glasses with Warby Parker. We just talked about that one.
Walmart is cutting 1500 technology jobs trying to reduce costs.
Greater Manchester NHS trust rejects Palantir's national data platform. They're
going to build their own in-house system. And BYD is

(19:22):
beating Tesla in Europe for the first time, registering 7231
vehicles and Tesla fell to 11th place. Humans. Tech companies cut entry-level hiring by half since 2019. GPT fake consciousness performance was disturbingly convincing. So Jesse Singal

(19:46):
asked ChatGPT to pretend to be conscious and had
a very unsettling conversation with it. Uh, yeah, it was convincing,
in other words. A Salesforce exec says that agents should make us rethink every job. I think he's correct, but
it doesn't go far enough. The right path is to
constantly ask, if you're a business owner: what

(20:09):
would I do today if I started over? It doesn't mean you can always do that. But you
should always be thinking about that and not suffering from
the sunk cost fallacy. And with AI and hiring specifically,
this is going to be a big deal. And I think a lot of companies are
doing that. And I think this is the bigger problem
with the job market is companies are basically saying, look,
what if we just started over? Why do we need

(20:31):
all these thousands of people? What are they really doing
for us? Let's get back to basics. Let's only look
at the core programs that we have to have, the
core products we have to have. Let's cut the products
we don't need. Let's cut programs we don't need.
Let's cut services we don't need. Basically, I think everyone's
going to be like, let's get in super thin shape,

(20:52):
reduce everything that's not necessary. And this doesn't just mean hiring.
This means focus of the business, which happens to also
mean hiring. And then once they have okay, these are
the core things we're going to do. How would we
hire for that now. And I'm telling you, the answer is going to be: find really, really smart, extremely ambitious,

(21:13):
no-work-life-balance-type people who are experts with AI. That is who people are going to be hiring now.
And of course, where you have like extreme expertise, like
an AI data scientist or something, then obviously they're going
to go with, you know, an extreme expert there. But
in general, as employees, they're going to hire crazy people
who are obsessed with the company, who will work crazy

(21:34):
hours and not ask about work life balance, who are
experts with AI and who will actually have multiple AIs working for them. And that is basically what the new
employees look like. That is what new employment looks like.
And if you are not one of those people, I
think you're in extreme danger right now. Poland is about
to overtake Japan's GDP. LLMs outperform paid human persuaders in

(21:57):
both truth and deception. Claude 3.5 was significantly better
at persuading people towards both correct and incorrect answers than
humans who had financial incentives. Sleep apnea pill shows striking
success in large clinical trial. It cut breathing interruptions by 50%.

(22:17):
Denmark raises retirement age to 70 by 2040. A Guardian
writer downloaded years of Alexa data and basically found his family's soul. So 15,000 recorded conversations included his daughter using Alexa as a therapist, asking about dating ages and sleep problems, which she didn't discuss with her parents. Earth's

(22:40):
dual high tide phenomenon. Turns out there are actually two high tides at once, on opposite sides of the Earth. Lost dog found thanks to a dead AirTag. Someone put a new battery in the AirTag, it lit up, and the owner came and got it. And Tim Harford's four-decade journey
with Dungeons and Dragons. So a missing teenager story in

(23:03):
1979 caused a moral panic, and it made D&D famous. Man, '79. Wow. I didn't realize it was that old. Discovery.
Reject jealousy and root for your friends. My buddy Joseph
Thacker made a compelling case for rejecting jealousy and celebrating
your friends' wins. You've got to go read this essay.

(23:23):
What Sam Altman wishes someone had told him. Boris from Anthropic's Claude team, I'm going to say that again, Boris from Anthropic's Claude Code team, says context stuffing is way better than RAG. It's funny because I've been using context stuffing as my main way of doing context. I don't like RAG. I've always thought it was janky, and

(23:46):
it's great to hear Boris say that as well. Yeah. Ilya explaining that AIs actually do understand, which is what I was talking about at the beginning. Inside the AI security arsenal that I've built: someone shows off 21-plus AI tools they've created, from import table analysis to
threat hunting frameworks. Paul Graham on what makes writing good.

(24:08):
Internet Archive launches live microfiche digitization stream goodness.
So basically this is a live YouTube stream of people
digitizing microfiche. So they are just uploading things into the
Internet Archive. And yeah, that's the stream. You're watching the

(24:28):
internet be recorded into this giant archive. And yeah, it's accompanied by relaxing lo-fi beats. And there's a Terraform MCP server, an integration tool between AI assistants and Terraform that helps you discover, explore, and understand infrastructure code faster. Got a way here to use Git as S3. Someone teaches

(24:52):
data visualization with a bag of rocks. There's a new
paradigm for psychology proposed here: control systems as mental building blocks. I actually ordered this book. I can't wait to go read it. The book is called The Mind and the Wheel. Oh yeah, I've got to go download this thing. I think it's already in my Audible. Hold on one second. We're doing it live. No,

(25:21):
there is no Audible version. Okay, well, whatever. That's annoying. Anyway, I really want to check it out and see if it's cool. So it basically says
the mind might function as a collection of control systems
that regulate everything from hunger to loneliness. And yeah,

(25:42):
I'm not sure if this is the dumbest thing ever
or the coolest thing ever, but worth including for you
to check out. Tycho is a new API that returns
visual answers from structured data sources in response to natural
language queries. So if data is trapped in databases or
web crawlers, you can now go and basically get that

(26:03):
data with natural language queries. A database of 1400 startup
ideas from Hacker News and Reddit. Decibels are silly. That
was a cool piece, basically talking about how the measurement
doesn't make any sense whatsoever. Google Gemini Advanced adds GitHub integration for code analysis. Timeline views and smart filtering on Hacker News: it's a new Hacker News interface with a

(26:24):
lot of filtering options. Undetected Tag: stealth tracking for stolen items. And now we're entering into the Member edition stuff here. Ideas.
So the idea of the season: high agency, and ideologies as tools versus values. There's a lot of interest right now in what people are calling this high agency thing.

(26:47):
I think George Mac came on Chris Williamson's Modern Wisdom, and I love the concept, and I think a lot of people need more of it. Most people, I would say, but not everyone. We actually talked about this in book club, and someone in there was basically saying, look, this might be really nasty. You know, it seems kind of elitist

(27:08):
or like the whole NPC thing. And I
think anything taken to an extreme can be bad. If you have too much capitalism, you need socialism; too much socialism, you might need some capitalism. Too little self-esteem, you need more, but narcissism is bad. Stalinism is bad, but maybe the Nordic countries got something figured out. So one idea I put out there

(27:30):
was basically a set of fundamentals, like I talked about in my political post, where it's like: these are my core values. This is what I believe humanity
core values. This is what I believe, you know, humanity
should look like. This is the ideal world that I
wish we lived in. And that's one set of things.
And then if someone says capitalism or whatever,

(27:50):
Marxism or some other sort of ideology, the way I'm thinking of this is: those are tools. They're separate from values. So are
you a capitalist? No, no, what I am is a
Star Trek liberal, right? I want to elevate humanity. I
want humanity to be more enabled. I want everyone

(28:10):
to have equal opportunities. I want to raise education. I want everyone participating in creativity
and exchanging creativity with other humans. I want technology to
be in the background, not in the foreground. I want
humans to be in the foreground. Right. So those are
the core things that I think about: kindness, helping others,

(28:34):
exchanging value with others, freedom. All these core value things. Now, if someone hits me with, you know,
high value or high agency or Marxism or whatever, I'm
going to listen, right? Okay, cool. Sounds awesome. But that's
a tool. That's not something that replaces one of my values, right?
It's a method for getting there. So if I don't
have enough socialism in my life to achieve my values,

(28:56):
we should sprinkle some more on there. If someone sprinkles
too much and I only taste socialism on my sandwich
or my steak or whatever it is, well, that's too much.
It went too far. And now you're actually conflicting with
my values. We need some more capitalism. Cool. Let's do
some more capitalism. If it goes crazy, becomes like crony capitalism, becomes a banana republic, becomes all this stuff, well

(29:17):
guess what? We've got too much capitalism there; we need to take some of that off. So the key thing is
to be guided by these values and then use everything
else as these different slider bars of tools that you can adjust to taste, with the guiding principle being not affecting the actual values of what you're trying to do. So

(29:39):
that was the idea of the week. Got a simplified AI security term breakdown, which I basically modified off of my buddy Joseph Thacker's list. So: AI alignment, don't kill us. AI safety, don't help us do bad stuff. AI security, basically AI appsec. Oh, we already

(30:00):
talked about Discovery, so we're good. And now, recommendation of the week: embrace chaotic reading, if that happens to you. I'm currently reading Letters to a Young Poet, The Dharma Bums,
Don't Tell Me I Can't, Rilke's Book of Hours, The Pleasure of Finding Things Out, The Peregrine, The Rise of Theodore Roosevelt, Accelerando, Dominion, Birds, Sex and Beauty, and The Player

(30:25):
of Games. And They Thought They Were Free. I'm usually
reading like 2 to 3 books at a time, so
this is way more than usual. Maybe more than ever.
What is this? Is this, like 20 books? Like 15 books?
Something like that. But I'm learning to be okay with this.
And this is my recommendation. This is my ask for you.
Be okay with it. If you're only reading one book,
be okay with it. If you're reading 15 and you're

(30:46):
switching between them, be okay with it. The one thing
to not be okay with in my mind is not reading.
So just read one book at a time or 17.
Don't be frustrated with yourself, just read. And the aphorism
of the week. The most important decision you make is
to be in a good mood. The most important decision
you make is to be in a good mood.