Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:19):
All right. Welcome to episode 47. This is Daniel Miessler.
All right. Lots of updates. I've been the most creative,
productive I've ever been in my life in this last week,
and I think it probably has to do with a
combination of Claude Code and Cursor. And I would say
specifically Claude Code, because I am now using it to
(00:41):
do way more things than just coding. I'm basically using
it like I'm the CEO of
multiple companies and I could give work to like 100
workers at all these different companies. So basically I've got
like five shells open right now and they are all
in separate repositories. And like not many of them are
(01:03):
actually building anything related to like a web app or
web coding or any of that. Um, a lot of
them are like, organize this, clean this up, you know,
go get this content, um, fix all the links on
my website or whatever. It's like, it's a whole bunch
of stuff that would have taken hours and hours of
manual work, and that's why I hadn't done it yet.
(01:23):
And so now this thing is just like, yeah, tell
me what to do. It's like, okay, cool, I'll write
a script to do that. Here, I just fixed your
entire blog. Okay. It fixed my entire blog, for content
going back to 1999. Hundreds, no, thousands. Thousands of articles,
thousands of pieces with broken links. It migrated all of
my images to a new system. Um, I cleaned up
(01:48):
all my, uh, Cloudflare setup. Um, basically,
how everything was being handled before, I was doing a
bunch of stuff, um, with S3 and CloudFront, and I
moved everything to Cloudflare now. Um, I migrated all my
images to use that instead. And they're all located at
slash images. This is all stuff I was like, piecing
(02:10):
away at manually and sort of like as I came
upon the problem, I would go and fix it, but
to me it was daunting to think about even trying
to do this right. So, I mean, it's the type
of thing you hire somebody and they go and do
it and it's going to be done really, really poorly.
If I would have paid someone to do this, first
of all, it would have cost me thousands of dollars.
(02:31):
Second of all, they would have messed a whole bunch
of stuff up. So it's like I can't even express
how useful and powerful this is. And I'm paying for Claude Max,
which is like 200 bucks a month. But like, if
this was $2,000 a month, I would have made my money five times over.
And that's just from like last weekend, right?
(02:53):
I mean, or like up to now, which
is like Wednesday. So I'm telling you, this is
the most excited I've been about this whole AI thing.
And I'm starting to just not even care about the AI part.
What I'm caring about is this agent, uh, thing, and
not like AI agents, but the concept of handing work
(03:14):
to something that understands what you're trying to do. Okay,
here's another good example. I have content on the site
from so many different previous platforms, right? Because I've been
blogging since 1999, which means I've got just grossness, just nastiness,
like weird embeds and weird linking strategies like my images
(03:36):
were being called in five different ways. Do they have
forward slashes or not? Like, all of this stuff was
messed up. I went and told this thing, here's the
canonical way to do things. Here's what a post is
supposed to look like, right? When I send you
a URL. And by the way, I don't have to
give it specifics. I'm just like, hey,
I have a post called The End of Work. It's
(03:58):
really messed up. Go and fix it. It goes and
checks all the 20 things that I need it to
do for it to be normalized, for it to be
a canonical post. It's like, okay, cool, I did that
and I pushed it. Anything else? Oh, and by the way,
I tested it and it works. I can't express to
you how insane that is. It is completely, completely spectacular.
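For flavor, a pass like the one described above can be sketched in a few lines. This is a minimal, hypothetical stand-in, not the actual script Claude Code wrote: the /images/ convention comes from the episode, but the function names and the single image-path rule standing in for the full list of 20 checks are my assumptions.

```python
import re
from pathlib import Path

# Markdown image syntax: ![alt](src)
IMG_MD = re.compile(r"!\[([^\]]*)\]\(([^)]+)\)")

def canonicalize_images(markdown: str) -> str:
    """Rewrite image links so local ones all point at /images/<filename>."""
    def fix(match: re.Match) -> str:
        alt, src = match.group(1), match.group(2)
        if src.startswith(("http://", "https://", "/images/")):
            return match.group(0)  # already external or already canonical
        name = Path(src).name  # keep just the filename
        return f"![{alt}](/images/{name})"
    return IMG_MD.sub(fix, markdown)

def normalize_post(path: Path) -> bool:
    """Apply the canonical rules to one post; return True if it changed."""
    original = path.read_text(encoding="utf-8")
    updated = canonicalize_images(original)
    if updated != original:
        path.write_text(updated, encoding="utf-8")
        return True
    return False
```

Pointed at a posts directory, something like `for p in Path("posts").glob("**/*.md"): normalize_post(p)` would run it over everything, with more checks layered in the same way.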
(04:20):
I mean, this is a type of work. Like I said,
if I hired someone who is super smart and super awesome,
like a super hard core developer, and I said, I
want you to do these 20 things to this post.
And by the way, some of these posts are like
a thousand words, 2000 words, 3000, 5000 words. I mean,
(04:44):
I've got posts that are 17,000 words, 9000 words, 10,000 words.
We're talking about small books here, okay. And you hand
that to a human and you're like, hey, go through
all this text. And by the way, it's done differently
for every single post, okay? Every single post. It's broken
in different ways, and I tell it a canonical way
to do it. It literally comes back in 2 or
(05:06):
3 minutes and it's like, yeah, I fixed all that.
So I've got Claude Code over here
working, and then I have Cursor. I have the same
project open in Cursor, so I can actually go in
and do manual edits. More importantly, I can see what
it's actually doing, what Claude is actually doing. And
sometimes I can use the Cursor agent if I want,
but generally I'm using it for transparency. And just like,
(05:27):
you know, manually accepting things if it's crazy or whatever.
Plus I have my vim bindings active inside of Cursor,
so I can actually just go write using, air quotes,
vim inside of Cursor, and it doesn't feel crazy. I
don't have auto save enabled though; someone can figure that out.
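For anyone who does want to figure that out: Cursor reads VS Code-style settings, so autosave is a one-line change in settings.json. A sketch, assuming Cursor hasn't diverged from VS Code's `files.autoSave` keys:

```jsonc
{
  // Save files automatically shortly after you stop typing
  "files.autoSave": "afterDelay",
  "files.autoSaveDelay": 1000
}
```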
But anyway, I've been getting so much done,
so many projects. And I've got
(05:49):
15 or 20 more that are popping into my mind. Like, I
literally have seven terminals open right now doing cool projects.
I'm about to go do cool stuff, like I'm doing
some physical stuff with some friends. We're going to, um,
celebrate Dustin's life tonight, so I'm really looking forward to that.
And all these other cool things, like I'm doing a
(06:10):
cool project, I'm traveling or doing whatever. Right now I
am thinking, when can I get back to coding? I've
recently been messing around with Diablo for a little bit.
Diablo IV, or Diablo in general,
is probably my favorite video game, and I'm pretty excited
about what's going on in there. I could go and just,
you know, have some fun for two hours or whatever.
It is nothing compared to these five terminal windows that
(06:34):
are calling me. I literally play around with this stuff
and build stuff and fix stuff and create net new things.
The next one I'm about to do is Substrate. It's
going to be insane. But then I start on this, I
look up at the clock and I'm like, oh well,
I need to stand up. I need to eat. I
haven't eaten in six hours. Also, it's going to be
(06:55):
breakfast, and it's 3 p.m. Breakfast at 3 p.m., or whatever. Like, I
have never been this excited and focused that I can
remember on anything. Now, you combine that with the fact
that this thing is getting better, right? Just imagine Claude
Code a bit better. Imagine Claude Code, but I can
send it content from my voice, right? I mean, and
(07:17):
this is where this is all going that I've been
talking about forever, right? We already know where it's going.
We already know how this is all going to be
stitched up, and we're messing with the individual pieces here. Um,
one of these projects is called Chi. It's actually my
digital assistant. So I'm actually teaching it the meta
tools, you know, the skills and tools
and tasks that I'm just going to need,
(07:39):
you know, universally going forward. So a good example of
this is, um, I recently taught Chi how to draw
animations using Grant Sanderson's stuff. He's the guy who does
3Blue1Brown. So math
animations of whatever. Well, now I can send it a
(08:00):
prompt and say animate this. So I sent it a
prompt of the solar system and it drew it. It
drew it and it animated it, and it opened it
up and showed it to me. And I'm like, this
is absolutely insane. I've got another one, which is like
a knowledge map. So I'm going to be able to
take other projects. My blogs are sitting right next to
it in a directory, all markdown files, all normalized and canonicalized now. Right. So
(08:26):
they're canonicalized. They have all the tags in them.
They're all marked up. They're all clean. So I could
just be like, hey, pull the knowledge from this.
And you know, map that in this new graphing function
because I've got this graphing knowledge map and I'm teaching
Chi right now how to use this library, how to
(08:47):
use this GitHub project to do this with any knowledge
that I give it. Right. So it's just like these
small universal tools which are getting so good they just
become skills that it has. So, collecting data
from sources, um, rating the quality of a source. Right.
Because when I'm doing like these collections and I'm making
(09:09):
graphs based on them and, you know, if I want
to publish them or whatever, I need to be able
to point and say, look, this came from here, this
came from CDC, this came from so-and-so French agency that
puts out data or whatever. Um, but you got to
be able to show your work. Otherwise it's just pretty graphs, right?
So all of that just becomes core components built into
this thing. All right. So yeah, that's very exciting stuff.
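As a sketch of that knowledge-map idea: once the posts are canonicalized markdown with tags, linking every pair of posts that share a tag is a few lines of stdlib Python. The frontmatter shape (`tags: [a, b]`) and the function names here are illustrative assumptions; the actual graphing library he's teaching Chi isn't named in the episode.

```python
import re
from collections import defaultdict

# A YAML-ish frontmatter line like:  tags: [ai, security]
TAGS = re.compile(r"^tags:\s*\[(.*)\]", re.MULTILINE)

def read_tags(markdown: str) -> set[str]:
    """Pull the tag list out of a post's frontmatter, if present."""
    m = TAGS.search(markdown)
    if not m:
        return set()
    return {t.strip() for t in m.group(1).split(",") if t.strip()}

def build_knowledge_map(posts: dict[str, str]) -> dict[str, set[str]]:
    """Return edges linking every pair of posts that share at least one tag."""
    by_tag = defaultdict(set)
    for name, body in posts.items():
        for tag in read_tags(body):
            by_tag[tag].add(name)
    edges = defaultdict(set)
    for names in by_tag.values():
        for a in names:
            edges[a] |= names - {a}
    return edges
```

Feed it something like `{p.stem: p.read_text() for p in Path("blog").glob("*.md")}` and hand the resulting edge list to whatever graph renderer you like.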
(09:35):
The other side of this is I'm basically manic about that, right?
I'm manic about that. And I'm morose about the way
I see this affecting humanity, because the way I see this,
I wrote a post a long time ago, maybe like
2017 or something called, uh, The Bifurcation or something like that.
And it's like the separation of people. And it was,
(09:57):
I don't know, something like readers and non-readers. I was talking
about just like how some people just can't get enough
and they're crazy ambitious and they're always trying to learn
things or whatever. And the rest of the masses are
just like, yeah, haven't read a book since high school,
and I hated it then, you know, haven't read a
book since college and I hated it then. So it's
(10:17):
like they've never really read or studied for fun. And
I'm telling you, people like that are unbelievably screwed, right?
If you do not have learning as, like, part of
your core DNA right now, you are just absolutely screwed.
And it's not like it's 50-50. It's not
like it's half and half, or like 60% are hardcore
(10:40):
learners and 40% aren't. It's more like 10% or 5% are.
Or maybe it's 2%, maybe it's 1%. Right? But this
is the group, whatever that small percentage is that sees
all these tools. Right. And like I'm in these communities
with like, uh, all my friends are doing this, I'm
watching other people do it online or whatever. It's like,
(11:02):
this community is so excited right now. They're just building
stuff all the time, just like I am. So that's
one world, and I live in the Bay Area, so
it's like I'm living inside of this world as well.
It's like, oh my God, can
you believe this is possible? And it's just like constant
shock on our faces. And then there's this other world
of like, AI? What are you talking about? Isn't that ChatGPT? Yeah,
(11:24):
I use it sometimes. I don't know, but it hallucinates
a lot. So you can't really trust it. And I'm like, well,
that person's screwed. It's like a completely different universe that
we live in. Completely different universe. And what I'm trying
to figure out is like, what can you say? How
(11:45):
can we get more people into that world? How can
we convince more people that this is a critical skill? Right?
And it's hard because if they're already not readers, if
they're already not into the whole learning thing, then like
giving them a better tool for learning is going to
be difficult. But we've got to find a way. I
don't want to just give up on these people and
(12:05):
just be like, well, I guess they're screwed and just
kind of move on and be like, well,
it sure is fun to build with Claude Code. That's
what makes me morose. So I basically oscillate between. This
is the saddest thing ever because I look around at
all these jobs, all these people who have jobs, who
think AI is ChatGPT and they use it once a month,
(12:27):
you know, for 15 minutes. And I'm like, the economy
is going to crash. This is what I'm honestly thinking about.
This economy is going to have a massive, massive disruption,
because these folks here who don't think of AI
in this way, they're going to get fired. They're just
(12:48):
going to get fired. They're going to get replaced either
by a smart person who could do nine jobs all
at once because they understand AI a little bit, but
then eventually just by AI itself. Right. And who knows
the timelines and all that stuff. You saw the debate.
That could be tomorrow. It could be in four years
from now. Who actually knows? It actually doesn't matter, right?
(13:09):
Don't even care about that. What matters is, you know,
the direction, stochastically. You know this for certain, right?
And there is another caveat here, which is legislation. But
what could the legislation possibly look like? They're like, you
can't fire your human workers anymore. Well, that just
tells all the workers, like, you could do
(13:29):
whatever and they can't fire you. Like it's some crazy
union stuff. And I don't think it'll be that extreme,
although there is some extreme stuff happening in New York City,
so who knows, maybe something like that will happen. That
could definitely slow things down. Uh, big sort of terrorist events,
I think, that are like planned by or assisted by AI,
(13:49):
that cause major disruption, that could cause some legislation. But
other than that, I just see kind of normal,
I would say non-crazy-about-AI people, people who aren't,
forget AI, crazy about learning, not crazy about building,
not crazy about progressing and evolving. That mentality, because the
(14:13):
tech doesn't really matter, right? We've always had people
that were like that and weren't like that. Right. And
previously the tech was like reading and computers. Right. And
you know, tech in that sense. Um, and obviously people
who are fluent with computers and really good readers and,
you know, could write well and all that, they have
a massive advantage. This is just taking that same advantage
(14:36):
with a different tech and magnifying it by a thousand.
That's what this is right now. And it's really hard
to watch when you think about what is likely to
happen to most people. Um, and then I'm like, okay,
what am I going to do to get myself out
of this mindset? And I'm like, well, I've got to
(14:57):
help build. I got to help with government. I've got
to help get someone elected who's actually, uh, you know,
centrist and actually cares about poor people and, um, isn't
going to break the economy and move us towards communism,
or go extreme right or extreme left. Like, how do
we do this with all these powers,
all these tools that we have? Can I use something
(15:18):
like Substrate to build a platform that is democratic and transparent,
that uncovers, um, corruption, that provides, you know, an auditable
government for ourselves where we feel like we're more involved. Um,
how can we bring this technology to be personal tutors
(15:39):
for everyone, for billions of people on the planet, as
opposed to only the, um, you know, the most academically
focused cultures having it? Why can't we give it to everyone? Right.
So I'm thinking about, like, basically, what are projects that
I could do that can maybe help solve some of
this problem for people who are going to be left behind.
(16:02):
And right now that's largely Substrate. So I'm working really
hard on that. That's going to be like my main
thing for the next couple of weeks is trying to
get that spun up with a lot more data, a
lot more ideas. Um, maybe a whole site around it. Um,
domain and stuff like that. But the concept here is like,
I don't want to be sad about it. I don't
(16:23):
want to just, you know, have an apathetic attitude toward it. Um,
I definitely am not going to have, like, an elitist
attitude towards it of like, well, we're just special and
they're just not special and therefore we deserve with a
D word, right? We deserve all these things because people
just don't want to read books and they don't want
to play with AI. That's a shitty attitude to have, too, um,
(16:46):
as is the attitude of like, well, there's nothing I
could do to help them, right? So it seems like
the only third rail there is fucking try to do
something. And before, you know, you kind of have
an excuse. Well, what can you do? I can't possibly
research all that. I can't possibly find all the different
donors who are giving money to these politicians. I can't
(17:06):
possibly build an entire government transparency platform. Well, yeah, now
you can. Now there's no excuse not to be part
of the solution. So that's the mindset I have right now. And, uh,
anyone who wants to help, please reach out. I really
would love some ideas on this. Uh, ways to take
(17:27):
Substrate, uh, along these lines and, uh. Yeah.
All right. Next one here. Um, Marcus posted a YouTube video,
and he did an intro of the video, just like
I did, where he basically laid out how he doesn't
believe in AI. He has, uh, basically completely opposite views to me, uh,
(17:48):
which I thought was cool. I thought it was a
cool intro. And I got the link there. Project Hail Mary.
One of the coolest books we've read in UL book
club. It has a trailer out,
and you got to go check it out. It looks
really good. It looks like they've done it well.
It felt like the book when I was looking at
the trailer, so that's a good sign. The big question is,
what is, um, you know, the alien going to
(18:10):
look like? Uh, so I can't wait to see that,
but it looks promising. NetworkChuck did a whole video
on Telos and his process of trying to go through
the whole Telos exercise, and it was really good.
And also, the honesty and the vulnerability that
Chuck shared in this was really, really endearing, and really, really, um,
(18:31):
I just loved it. And, uh, hats off to him for
putting himself out there like that and, uh, continuing
to do the good stuff that he's doing. All right. Cybersecurity:
US agencies warn that Iranian hackers may target critical infrastructure
during Mideast tensions. So FBI, NSA, everyone's basically saying that
this is likely to happen, especially around USA slash Israeli, uh,
(18:56):
operations or companies. DOJ says that in 2018 a
Sinaloa cartel hacker used an FBI official's phone to track their
movements and identify informants, and those informants were then intimidated
and/or killed. Switzerland: government data is stolen in a
(19:21):
ransomware attack. A whole bunch of it, 1.3TB of government
data, now available for free. Google fixed the fourth Chrome zero-day.
Persona is a really cool technology. This is
actually a company, so I'm not sure how they made
it in here, but, um, 75 million blocked attempts means
there are probably way more getting through to other systems. And, uh, yeah,
(19:44):
they basically try to counter, uh, AI fraudsters. So it's
basically a whole bunch of AI to counter, um, you know,
corruption and fakes and fraud and stuff like that. So
a really cool, uh, company not involved with them, but
would love to be, uh, Chinese hackers hit Canadian telecom
using a 16-month-old unpatched Cisco flaw. The US House has
(20:06):
banned WhatsApp because they don't like the data handling practices
of Meta. AT&T finally rolls out SIM swap lock, which, uh,
they have not had previously. Uh, Verizon and a
bunch of other companies have had it, but, uh, took
a while for AT&T to get it. National security. China's
mosquito-sized spy drone. So small you might not
even notice it flying around your house. And it's got
even notice it flying around your house. And it's got
tiny cameras and microphones. Holy crap. This is where I
tell you to read Kill Decision by Daniel Suarez, as
per requirement. AI: Claude Code gets hooks, so now you
can auto-execute functions from your AI conversations. Meta creates
(20:49):
a new superintelligence lab and they hired a whole bunch
of OpenAI people. It's kind of funny. It's like Sam taunted, uh, Zuckerberg.
And he's like, yeah, they didn't hire any of our people.
And a week later, Zuck is like, oh, really? And
hires like five or more of their
best people, or really good people anyway. I don't know
how good they rank, but really good people. He spends like
(21:12):
billions of dollars, I think literally billions of dollars, on this.
And now everyone's like, hey, maybe meta is the place
to work. So yeah, don't, uh, don't taunt Mark Zuckerberg.
That's my lesson. Marc Benioff says that AI now does
half of the work at Salesforce. He's got some cool
stats here, but again, he's selling an AI product. So
(21:32):
I don't know. You've got to weigh
that in. Google brings AI search to YouTube. So they're
basically doing a lot of summaries of videos. And I
think this is heading in the direction I've been talking about,
where AI actually makes the ideal version of the content
for you. Right. The actual future is that your own
(21:52):
personal DA, your digital assistant, would do this for you. So basically it'll
see this YouTube video of whoever pop up, it'll go
read the video, watch the video, whatever it does. And
it knows what you're doing at the time. Okay. If
you're on a bicycle ride, if you're on a hike,
if you're on a whatever, you're driving and it's like
you can't pay attention to the screen or whatever for
(22:13):
whatever reason, or maybe you don't even like to watch video,
it'll give you an audio version. But more importantly,
it'll cut it up and re-edit it for you in
the way that you like it. Right? So it made
a complete net new video. It might have deepfaked the
actual original creator, but it might have just put your
favorite person on there. Maybe it was Taylor Swift or
Richard Feynman who actually gives you the content. So dynamic
(22:38):
custom content. That's what we're going to see from AI.
And this is just like an intermediary step of that.
It is Google providing the AI summary of the content,
which also came from Google. But you see where it's going.
And I think creators need to be very concerned about
this and be thinking about the future, because it could
be that raw content is not really the thing anymore.
(23:02):
Raw content just becomes literally raw material. And here's the
problem can raw content ever compete with someone's personal digital
assistant who knows them better than anybody in the entire universe,
who is also an expert in creating content? Can the
(23:23):
original source make a video better than that? All knowing,
super smart, super creative AI can make for their favorite person?
I think the answer is probably no. Now there are
going to be a whole bunch of exceptions to this,
and it's not going to happen overnight, right? So these
things aren't emergencies. They take time. But so one example
(23:44):
is like you're a celebrity, okay. If there's a celebrity,
you like the way they talk, you like the way
they look. You like their jokes. Well, maybe that doesn't
get deepfaked because maybe the deepfake doesn't copy that perfectly right.
Or maybe it's just human. You just like to watch
humans do their thing right? We see this in chess,
(24:04):
and chess is more fun and more popular than it's
ever been right now because people have real, actual stories, right? Um,
Gukesh, like, uh, winning and Gukesh losing and, uh, you know, uh,
Magnus winning and losing. Like, I care about that. I
care about that. I care about the Botez sisters who
are grinding, and I care about, um, the younger one, Andrea,
(24:27):
because she's doing, um, techno on the side, and she's
doing modeling, and she's a chess influencer, and she's hella smart,
like her sister. And so there are content creators, but
they're also doing chess, and they love chess. And they're,
I don't know, I think they're stuck at around 1900
or 2100, somewhere around there. And then you've got Gotham, who,
I really like him. He does the best, like, overviews
(24:48):
of chess. Um, he's a really good narrator, and he's
on a mission to get to Grandmaster, right? I love that.
I love that everyone makes fun of him because he's
not great. I think he's like, is he like 2000,
2200, something? Or no, 2200 is Grandmaster, I think, or 2400. Anyway,
a lot of people think he'll never get there.
(25:10):
But he started having wins, right? I think he had
a win over Magnus, uh, kind of by accident or whatever.
Or he just. He played a really good game, and
Magnus played a bad game, whatever it was. But, um,
that is a human story. Here's a guy who probably
you're not expecting to be a grandmaster, and he might
get there, and he gives you updates on it. So
you're pulling for him, right? So the question is,
(25:34):
will those sorts of stories, those sorts of human aspects
protect against this? I think they will. There's no question
they will. The question is how much, versus like an
AI influencer. Maybe it's an AI influencer who's actually giving
you your stuff. Maybe that's the avatar for your actual DA.
And he or she is like the most attractive person
in the world, and they could change their outfits or
(25:57):
whatever to put on a school uniform like Richard Feynman,
you know, teaching at university with the university hat or whatever,
and just be like, yeah, here's how this physical thing
works and, you know, teach you physics or teach you
math or whatever. Um, but the threat to content creators
is like, I'm making all this content and only AI is
(26:17):
consuming the content. You know, very few actual
humans are consuming my content because there are millions of
other people, hundreds of millions of other people creating content
and billions of other agents creating content. So it's a
giant content creating marketplace. Um, and for regular consumers, keep
(26:41):
in mind, our bandwidth is limited as humans to consume content, right?
So even if we wanted to, we can't follow
all the cool people that we want to. I mean,
I got friends whose newsletters I don't even read. I
know for a fact a lot of my friends don't
read my newsletter. They're just too busy. Why? Because they're
content creators, right? And I'll read, um, many of their newsletters,
(27:02):
like once a month, right? But I'm not reading it
all the time. And they're damn sure not reading mine
all the time. Right? We catch a little piece from
each other and we're like, hey, great, great job with that.
It's like, hey, did you see this one? No, I
didn't see that one. Sorry. Right. We don't even have
the time. We don't have the time. Now imagine when
there's ten times more content, a thousand times more content,
a million times more content. The only thing that's consuming
(27:23):
that content is AI working for somebody. And that somebody
in many cases is going to be the digital assistant
of somebody else, right? And then they'll just dynamically create
the content for them. All right. Grammarly acquires Superhuman. That
was supposed to be bolded. I hate formatting mistakes. Um,
(27:45):
that was my bad. Um, all right. Anthropic turns Claude
into a no-code app. Okay, so basically Anthropic is
now competing directly with, um, like v0 and all the
other app makers. So everyone's converging on the same space.
My comment there was basically, never mind the moat,
where's the castle? Let's see here. Yeah. Technology.
(28:09):
Cloudflare now blocks AI crawlers by default. They are doing
this crazy model. They're going to start charging AI crawlers
to consume content. So this is the type of thing
that could potentially economically disrupt this whole meta
that I was just talking about. Right. Um, and that's why,
you know, you got to watch yourself with predictions. You
(28:29):
don't know how the economy, you don't know how society
is going to adjust. Um, maybe there becomes this thing
where it's like, maybe we're able to make an economy
where the raw content has value. I find that hard
to believe, but, um, I think it will happen to
some degree. Okay. There will be sectors where it
(28:51):
does happen, or a degree to which it does happen.
Like anything that's a good idea is going to happen
a little bit. The question is what's going to be
the dominant model. Unfortunately, you know, and I
create content as well, as one of the things that
I do, um, I don't see a crazy
good future where, you know, billions of people are going
(29:13):
to be watching this YouTube video that I put out.
It's not going to be that. It's going to be
AI watching that video I put out. And here's
the only thing I can hope for. The only thing
I can hope for is the DA. Their DA attributes
it to me and you know, shows a picture of me,
or shows a piece of content that I wrote or
(29:33):
says something cool about it and it's like or says, hey,
this is the seventh thing that I've shown you, you know,
from Clint Gibler this week, um, you know. Oh, and also,
he lives nearby. You should go get a coffee. That's
the type of thing that hopefully is going to happen, right?
You should talk to Clint, because Clint is also talking
about a lot of the same stuff you're talking about.
I'm like, oh, really? And my DA, whose
(29:56):
name is Chi, by the way, Chi is like, yeah, actually,
a lot of the stuff I've been showing you, like
a lot of those videos that I made for you,
those actually came from Clint's videos. And I'm like, okay, well,
in the future, you know, put like a Clint, you know,
watermark on it or something. And he's like, okay. Yeah, sure.
My bad. Yeah, I'll make sure I do that. So
hopefully there's something like that dynamic that happens. All right. Uh,
(30:19):
Meta adds another gigawatt of renewable power to feed its datacenters.
So it's buying, um, okay: solar, wind, geothermal. I thought
I also had in here. Yeah. They're also, um, doing
nuclear stuff as well. They're buying nuclear stuff. Um, maybe
that was a previous story. Um, all right, humans: scientists
(30:42):
finally pinpoint what wiped out America's bees, and it wasn't pesticides.
I'm going to leave that as a mystery for you
to go click. Noise ruins sleep quality even when you
think you're sleeping through it. I've recently moved to full earplugs. Um,
not Apple ones, like actual, um, in-ear, you know, non-electronic earplugs.
(31:07):
And I got a new sleep mask. Oh, I'm about
to do a Wirecutter-style episode for members, by the way.
So it's going to be like, because I'm super crazy
when it comes to all this, uh, like the best pens,
the best shoes, the best, you know, covers for your
pillows or whatever. So I'm about to do that. Um,
I don't know. If I call it Wirecutter, I'd probably get sued, though.
Maybe I'll call it something else. UL wirecutter. I don't know. Um, anyway,
(31:30):
I massively upgraded my eye mask, so it's better
than the MS1 that I've been using, and it's just fantastic. Um,
highly recommend. Eye mask and earplugs. I do have to
have very strong locks on the doors, because I don't
like the idea of not being able to hear
people sneak up on me. That's the army for you.
(31:53):
Luckin Coffee opens up its first US stores after beating Starbucks
in China. They just opened up the first two US
locations and they're in New York City. This is basically
the BYD of coffee. Um, so naturally I want to
go there and I wanted to check it out. I
want to sit in the chairs. I want to look
at people there. I want to just see the vibe.
I want to, you know, knock my knuckles on the
(32:15):
table and the chairs and just, like, see what they're
doing different. I want to see, like, how their employees
interact with people. I'm like, fascinated by this whole concept
of like, when one chain wins out over another, what
did they do better? You know, In-N-Out, Chick-fil-A,
stuff like that. Uh, staff at the Louvre
shut down the museum to protest unmanageable tourist crowds. Uh,
(32:39):
I've been once, and it was a hellscape. It was
an absolute hellscape of people who cared nothing about art whatsoever.
They were crammed in like insects and nothing but iPhones everywhere.
And it was, I don't know, it was kind of
the opposite of art. It was the worst art experience
(33:00):
I've ever had was at the Louvre. That's what I'll
say about that. The dollar just had its worst first
half since 1973, and stocks are doing better than ever.
But the dollar is falling and Trump threatens to investigate
Musk's companies through DOGE. He also, I didn't see the
actual quote. So I don't know like what level of
(33:21):
grumpiness it was. But, uh, he also somehow alluded to
deporting Elon. And the whole reason for this is because
Elon is now proposing a third political party, because he
is so morally upset with the waste and the budget. Um,
I actually had a debate, a public debate, uh, on
(33:43):
LinkedIn with Marcus. Uh, we made two bets, and I
believe I've now won both bets. Uh, so the first
bet that we made was, um, he thought Tesla
stock was going to drop below like $300 or $200.
I forgot what the number was by a certain date
and um, or it would not get above a certain amount.
(34:05):
I forget what it was. Well, it immediately went to
like 400 something and it's now very high. It's like
300 something, I don't know. Um, but I think I
already won that one. And then what I told him
is that, uh, Musk would break from Trump because of
some moral issue, because of some principle issue. And that happened, uh,
(34:25):
that happened. And the issue wasn't like, you know, child
labor or something. It was balancing the budget and reducing
the debt. And it turns out all that stuff that
Elon was saying about DOGE saving money, Elon was not joking.
He was serious. He thought he was actually doing good.
(34:46):
So he goes over there and does all this stuff.
And by the way, a lot of it was really,
really bad. Really bad. Um, I forget the name of
the one organization that just should not have been messed
with the way that it was. And like hundreds of
thousands of people have died because of it, or
are going to because of it. And it's like not
all good is what I'm saying there. But the point
(35:09):
is he thought he was doing good. Um, and was
actually trying. It was not like some charade or whatever.
So he leaves DOGE, kind of gets kicked out of
the whole establishment, and then they're like, hey, Big Beautiful Bill,
let's raise the debt by trillions of dollars. Let's not
cut anything. Basically multiples of like damage, way more than
(35:35):
any possible good that DOGE has done. And so Elon
is absolutely freaking out. That's why he called him a
pedophile in front of hundreds of millions of people. And
it's now why he says he's going to raise a
completely different party called the America Party. So all that
to say is, um, I think, um, I think I
(35:57):
won both of those, um, bets. Stanford professor made up
that famous chess grandmasters burn 6,000 calories claim. So the
reason I put this in ideas is because I find
it very interesting that. I learned stuff in the 80s
and the 90s and the 2000s and the tens. We're
(36:18):
still in the 20s. So all those decades I learned things.
How many of those things were just straight-up false? For example,
the marshmallow test. How fundamental, how much did the
belief that that was real lead to mental models in
my brain that are now broken? Um, it's possible that
that particular case didn't actually have those, but what other
(36:41):
things do I still believe are true because of some
crap study that came out that turns out is not replicable?
I think the answer is probably large. Hopefully from all
my reading and stuff like that, it's like cleaned up
some of it or most of it, or maybe it's
all gone. Who knows? But I guarantee you, I've still
got some dumb beliefs that some teacher told me or
(37:03):
my parents told me in the 80s. You know, the
food groups, I mean, stuff like that, right? There's just
got to be a million of these things. And, like,
I just hate the idea that, like, the foundation of
the house is, like, got, like, termites or something, or like,
I don't know, nanobots eating, uh, eating away the, uh,
(37:23):
the undergirding. Uh, I don't like it. And it's
worth a question of, like, for all of us, you know.
Where is that? Where is that rot? Taste is the
new intelligence. This one should have had a fire mark
next to it. This thing is so good. So how
about Rick Rubin talking about his book? Uh, but it's
a really good full length essay. Uh, go check it out.
(37:46):
Joan Westenberg deleted her entire second brain. Love the idea
here where basically. So she had like, this super elaborate,
awesome second brain in Obsidian. And she basically decided, you
know what? Nah. Nope. Um, I'm spending too much time
optimizing this thing. It's taking away the focus of, like,
actually being good in my own brain and has put
(38:09):
all my effort at being good in the second brain.
So the movement is basically first brain, and she just
trashed the entire thing. I think it's a really interesting
thing to like, not overoptimize. AI is a good example.
Don't overoptimize on AI. It's really anything. Exercise, you know, work.
Don't work too much, because then your personal relationships suffer. It's
(38:30):
like anything that's a good thing can be a bad
thing if done too much. It's like the most cliche
thing in the world and absolutely worth repeating or remembering.
Schizophrenia may be the evolutionary price. I love articles like
this that are like, hey, you know that thing you
think is bad? That could be a necessary piece of
exhaust that comes after a thing that you think is good.
(38:52):
In this case, they're saying ingenuity, creativity, new ideas. You know,
Einstein type stuff where, oh, maybe it's just space is curved.
You know, maybe that requires that something like schizophrenia exists. Discovery.
(39:12):
How to use Markdown. Beanbook uses AI to turn
coffee bag photos into detailed brew logs. Hmm. Proxy Claude
Code requests through Cloudflare. Aging-related inflammation isn't universal. The
developer built an AI Dungeon Master that runs in the terminal.
This thing is really cool. Um. James Webb takes first
(39:34):
direct exoplanet photo. Local LLM notepad on a USB stick.
This thing is so cool. Uh, Joseph and I have
been talking about this. Joseph mentioned it to me first.
He's like, hey, you know, how do we have, like,
a local thing of knowledge? You know, worst came to worst.
I want to be able to ask this thing, you know,
how to restart society or something like what's a small
(39:54):
version we could do? Well, here's one that's on the
USB stick. And a custom voice agent tutorial. Okay, this is
the end of the standard edition of the podcast, which
includes just the news items for the week to get
the rest of the episode, which includes much more of
my analysis, the ideas section, and the weekly member essay.
(40:15):
Please consider becoming a member. As a member, you get
access to all sorts of stuff, most importantly, access to
our extraordinary community of over a thousand brilliant and kind
people in industries like cybersecurity, AI, and the humanities. You
also get access to the UL Book Club, dedicated member
content and events, and lots more. Plus, you'll get a
dedicated podcast feed that you can put into your client
(40:36):
that gets you the full member edition of the podcast
that basically doesn't have this in it, and just goes
all the way through with all the different sections. So
to become a member and get all that, just head
over to danielmiessler.com/upgrade. That's danielmiessler.com/upgrade, and
we'll see you next time.