Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:17):
All right. Welcome to episode 488. This is Daniel Miessler, and we are going to talk about updates. So the biggest news for me continues to be Claude Code, and it just keeps getting more extreme. I'm completely blown away by this thing. It keeps getting bigger for me, over and over. And I wrote a few blog posts.
(00:41):
I've written like five blog posts in the last week or so, which is a very fast pace compared to recently. And actually, Claude Code specifically helped me a little bit with two of them. So really excited about that. And whenever I have it help me with a blog post, I actually have it write in the notes that it helped
(01:02):
and what amount of help it gave, because I think that's important. But this is just the most insane thing for me. I keep repeating it, but it's the most excited I've been about tech in years and years. I would say I haven't been this excited about tech since
(01:23):
I first got into hacking. Nothing compares to getting into hacking, especially for the first time, and realizing you could actually break things and make things do things they're not supposed to do. As someone in their early 20s going through that, there's nothing that compares. But I don't know, I feel like this is actually higher,
(01:45):
a higher plane. I feel like I'm more excited, or at least the me of now would be more excited about this than that, because this is building as opposed to breaking. I will never get rid of the hacker DNA, for good or bad, but I'm just finding more and more ways and applications
(02:06):
of using this. I just built one earlier today when I was supposed to be working on the newsletter and recording. So here's what it does: when I'm in Vim and I have a question about anything, or I want to audit code, add code, or ask, hey, does this have any security vulnerabilities, I can select the code.
(02:29):
Then I hit leader-i, which goes to my prompt, and it's actually calling `claude -p`, which is the SDK. With `claude -p`, I'm giving it the context of CLAUDE.md and a whole bunch of other context. It's also taking context from the current buffer,
(02:50):
and it's using all of that to do the correct thing.
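To make that workflow concrete, here's a minimal sketch of the kind of wrapper script a Vim leader-i mapping could call. Assumptions: `claude -p "<prompt>"` runs a one-shot headless query, CLAUDE.md sits in the project root, and names like `buildPrompt` are made up for illustration; this is not the actual setup.

```typescript
// Hypothetical wrapper a Vim mapping might invoke with the current selection.
import { readFileSync, existsSync } from "fs";

// Combine standing context (CLAUDE.md) with the current buffer selection
// and the user's question into a single prompt string.
export function buildPrompt(
  question: string,
  selection: string,
  claudeMdPath = "CLAUDE.md"
): string {
  const standing = existsSync(claudeMdPath) ? readFileSync(claudeMdPath, "utf8") : "";
  return [
    standing && `# Standing context\n${standing}`,
    selection && `# Current buffer selection\n${selection}`,
    `# Request\n${question}`,
  ]
    .filter(Boolean)
    .join("\n\n");
}

// A Vim mapping could pipe the visual selection to this script, roughly:
//   :'<,'>w !bun run ask.ts "does this have any security vulnerabilities?"
// and the script would then spawn: ["claude", "-p", buildPrompt(...)]
```

The point of the structure is that the model sees your standing instructions, the exact code you highlighted, and your question in one shot.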
So I could literally be in there writing a new blog post and just say, hey, go get my post where I talked about this before and link it here. The thing rips through all 3,000 posts and finds content,
(03:16):
then it goes and reads the content. So ripgrep found it based on word matches inside the posts and titles, so it found what it thought was the right post, but then it actually reads the whole thing and goes, yeah, this is definitely relevant.
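The two-stage flow described here — a cheap keyword pass narrows thousands of posts, then each candidate is read in full to confirm relevance — can be sketched roughly like this. In the real setup the "read and confirm" step is Claude itself; here it's a stand-in predicate, and the types and function names are illustrative.

```typescript
// Stage 1: cheap ripgrep-style lexical match, Stage 2: full-read verification.
interface Post {
  slug: string;
  title: string;
  body: string;
}

// Stage 1: filter posts whose title or body contains any search term.
export function keywordCandidates(posts: Post[], terms: string[]): Post[] {
  const lower = terms.map((t) => t.toLowerCase());
  return posts.filter((p) =>
    lower.some(
      (t) => p.title.toLowerCase().includes(t) || p.body.toLowerCase().includes(t)
    )
  );
}

// Stage 2: read each whole post and verify relevance.
// In practice this predicate would be a model call, not a string check.
export function verifyRelevant(
  posts: Post[],
  isRelevant: (body: string) => boolean
): Post[] {
  return posts.filter((p) => isRelevant(p.body));
}
```

The cheap pass keeps the expensive pass small: only a handful of candidates ever get the full read.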
So it goes and links it, writes it, and updates the page. And I have the live page over
(03:37):
here in dev, and it just updates it. So it's like I'm collaborating with an editor right here, except this editor can do code reviews, can do prose, can find synonyms like a thesaurus. It's completely insane. I'm about to do a video on this, basically Inception. Oh, and the other thing: Raycast. I'm about to link up
(03:59):
Raycast to this thing. `claude -p` is probably the sickest thing that people are not talking about, which is why I'm going to do a video about it, but it was kind of mentioned by Boris at the end of his intro video. It's absolutely insane. So the same way that I use Fabric to call all these different things, what we're doing is we're
(04:22):
getting closer to my eventual goal, right? Because my DA is called Kai. When I call Claude, I tell it, you are Kai, you are an instantiation of Kai. So I say, okay, look, here are the tools you have available. You have Playwright, you have Bright Data, which I'm doing a video on soon. You have Firecrawl.
(04:46):
You have a bunch of different MCPs, but guess what else you have? You have Fabric. You can now go and make images. You can make an image of anything using the context of this blog. So I'm giving this to my DA. I mean, this is the precursor to having Kai up and running, right? Kai is sort of up and running, but right now Kai
(05:07):
is multiple pieces, multiple personalities. It's mostly Claude. When it uses Fabric, it's using lots of different models. But the point is, I'm just asking Kai to do this. Kai is looking at its available tools. It's looking at my desires and goals, inferring them from the instructions that I give it.
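To make this persona-and-tools setup concrete, here's a hedged sketch of what a CLAUDE.md section like this might look like. The headings, wording, and directory names are illustrative guesses, not the actual file.

```markdown
# CLAUDE.md (illustrative sketch, not the actual file)

You are Kai, an instantiation of Kai, my digital assistant.

## Tools available
- Playwright — browse and interact with web pages
- Bright Data — web data collection
- Firecrawl — crawling and scraping
- Fabric — patterns for summarizing, extracting, and image generation
- Various MCP servers

## Standing goals
- Help with blog posts, linking related prior posts where relevant
- Note in the post metadata when you helped and how much
```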
But it's using all this context from the CLAUDE.md file,
(05:30):
which has tons of stuff in there, all the tool use, plus it has the context of the actual buffer we're editing. It's just ridiculous. And so when I'm in Raycast, when I'm just in my regular operating system, I do not want to switch into an application to go use something. I want to hit Command-Space, boom, I'm talking to Kai. Right?
(05:53):
The best possible way to do that is to talk to Claude first. Claude is the one that can use Fabric, right? So now I'm not even going to call Fabric anymore. I'm going to call Claude, or specifically Kai. I'm going to call Kai, and it's going to use Claude to use my tools to do the actual task. Now we're getting closer to where
(06:14):
this is actually going, which I've talked about in all these videos. This is why I'm so excited about it: it's like using AI for AI. This is meta-tooling. Oh, and by the way, I'm doing a whole live session on this with the UL community tomorrow, because tomorrow is Thursday and that's our monthly meetup,
(06:37):
and we have various topics, but tomorrow is going to be live automation with Claude Code. So yeah, go sign up. It's /upgrade, I think, or newsletter upgrade, something like that. You should be able to find it. All right, so I'm stressing out about this. I've got a blog post on that. I
(06:58):
think it's the biggest AI jump since ChatGPT. This is a realization I had a couple of days ago where I'm just like, wow, this is so big, what compares to this? Going from not having AI to having ChatGPT, I would say, is the only thing that compares. Another way that I put it in that blog post is
(07:19):
that ChatGPT is the ChatGPT of knowledge, and Claude Code is the ChatGPT of action. I also said that it is proto-AGI, and proto means, I believe, before or early. I absolutely believe that's the case, because with Claude Code, if you only had to do certain
(07:42):
tasks, and I could give it all the context it needed, it could do a very limited knowledge worker's job, if they only had like 100 tasks, or like 20 tasks. The problem with knowledge work is you don't know what you're going to get day to day. You get a new boss, you get a new department, you get moved; all these things change, right? And that's the difficulty. You know, you
(08:03):
call the pizza shop to order the pizza, and it's closed. They retired. They moved to Florida. There's no pizza in that building anymore. So there are so many opportunities for automation to get stuck, even for AI, even for smart AI. And also, the point that Dwarkesh makes is that it's not learning on the job, right?
(08:25):
It's not learning all the time. It's not taking the knowledge of its entire career and using it. Well, it kind of is; that's the model, right? But if you do ten more jobs after the model was trained, and each of those jobs was one year long, do you really have ten years of knowledge inside of context? Not today, because it's too much knowledge, right?
(08:48):
So we have a knowledge limitation, a context limitation, where the primary mechanism is to have it built into the model. But that doesn't really scale, because everyone's job is different. So this combination of current context with the model knowledge, that is a problem. Working memory size is a problem. Learning on the job is a problem.
(09:08):
So I agree with Dwarkesh on these points; it's a real barrier. Where I think he's wrong is the claim that we're not going to have that in the next couple of years. I think he's wrong about that because ultimately this is just scaffolding. This is actually just a tech problem of managing the scaffolding, and managing this memory and working
(09:28):
memory and context and stuff like that. So all of the previous tools we've had available to us in tech, we can now leverage to bring that content into and out of the mind of these AIs. This is not a question of whether the AI is smart enough. It's the context. It's the extra data that
(09:52):
it needs. It's the memory. It's the learning on the job, like Dwarkesh is talking about. We're just not good at it right now. Watch this: the reason Claude Code is good is because it's way better than Devin or BabyAGI or all the previous versions of agents. The reason they were cool in demos and then failed
The reason they were cool in demos and they failed
(10:12):
when they tried to do anything is because of this memory issue. And this is the whole reason I'm calling this proto. When this improves, well, it's not going to be through one thing; it's going to be a combination. Context windows in the models themselves are going to go up by 10x, 100x, 1,000x, right? But even set that aside,
(10:34):
I'm not even sure that's going to be enough, plus you have to worry about expense. I think it's going to be more about hacks. I've been talking since 2023 about this concept of tricks and hacks, and I use those words because it's simple things, like: well, what if you cut it up into smaller pieces? What if you have a real-time database that can pull the
(10:56):
exact perfect context for the exact perfect moment? And it's using this really tiny AI that's virtually free, like a little local model or whatever. And it's like, well, we just emulated the concept of it having ten years of experience, because it's pulling in the exact context for the exact moment, for the exact decision. That wasn't AI.
(11:18):
That was AI support tech feeding into the AI. So obviously all the model companies, Claude Code included, are adding this in. Let's call this trick dynamic context. Okay.
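The dynamic context trick can be sketched in a few lines: a cheap scoring pass over a memory store pulls only the snippets relevant to the current decision, so even a frozen model gets the right "experience" for this exact moment. Real systems would use embeddings and RAG; the word-overlap scoring here is a deliberately tiny stand-in, and all names are illustrative.

```typescript
// Minimal "dynamic context" sketch: pull top-k relevant memories per task.
interface Memory {
  id: string;
  text: string;
}

// Score a memory by word overlap with the current task description.
function overlapScore(task: string, memory: string): number {
  const taskWords = new Set(task.toLowerCase().split(/\W+/).filter(Boolean));
  return memory
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => taskWords.has(w)).length;
}

// Pull the top-k snippets of context for the exact moment, dropping
// anything with zero relevance.
export function dynamicContext(task: string, store: Memory[], k = 3): Memory[] {
  return [...store]
    .map((m) => ({ m, s: overlapScore(task, m.text) }))
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .filter((x) => x.s > 0)
    .map((x) => x.m);
}
```

Swap the scorer for an embedding similarity and the store for a real database, and this is the shape of the trick: the model stays fixed while the context pipeline gets smarter.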
That's what we're going to call it: dynamic context. And it's able to go through whatever, terabytes or multiple gigabytes, in
(11:40):
like 0.001 seconds, and pull all this stuff in. All we have to do is get as good as humans at it, or better, right? And it doesn't even have to be that good at everything. But that's what we're shooting for, because humans do a really remarkable thing: when we think about a random task at a random job, we are literally leveraging our entire knowledge base
(12:03):
of every job we've ever worked on. Now, we do it crappily, we do it inefficiently. We don't even know what we're recalling, and surely we're recalling only a portion of what we actually learned. But either way, it happens instantly, and it's magical, right? So that's the thing we're not doing in AI yet. And that's
(12:25):
the thing that I think is going to get hacks and tricks that multiply how effective AI is at doing this, and that's going to happen in months. It's going to happen in the next version of Claude Code, and in the next version of all these tools that are about to copy Claude Code, just like OpenAI did, just like Google did with Gemini CLI. This hack right here, this dynamic context thing,
(12:48):
is the thing to solve, because you could freeze model intelligence where it is right now, Opus 4 or Sonnet 4, or even a couple of generations behind, and if you have this dynamic context thing, it just makes that model super smart. The reason we're smart
(13:09):
is because we have the ability to pull from our brain, from our history. As I talked about before, if you lose your memory, your ability to recall long-term or even short-term memory, you just can't be that effective. You can't work a knowledge job with that limitation. So all that to say, Claude Code is
(13:30):
proto-AGI, because it's starting to stitch together all these pieces. It's starting to do dynamic context a little bit, and it keeps making dynamic context better. Plus the model gets better. I mean, an Opus 5? Forget about it. That's going to be AGI. So the way I broke it down in
(13:52):
the post is: it's the tools you have access to, it's your working memory size, and it's your ability to recall from your entire knowledge base, which is kind of like the model right now, but it's really the model plus new context, which you could do through RAG and all these other techniques. That combination right there, this is it. This is the grail.
(14:15):
This is what we're shooting for. And I just don't see how that doesn't happen in the next number of months, or a year, or two or three years. So I'm maintaining my numbers. My numbers since 2023 have been, what did I say, 2025 to 2028 is when we have AGI, defined as the
(14:37):
ability to replace an average knowledge worker. Claude Code is already getting close. You've got interface issues, working memory issues, the number of tools it can use. You improve those. And by the way, once it gets to actual AGI, it's going to be a better knowledge worker than most, right?
(15:00):
Or it already is, right? It's just going to keep exceeding immediately after getting to that level. So I'm really excited about all of this. That's also the reason I'm depressed and stressed out about it. Like I say in that post, I don't know, it's stressing me out. I forget what I
(15:20):
called that post, but essentially I'm manic, jumping around going, oh my God, look what I can do. I've got ten windows open, I'm building all this stuff, I've never built this fast before. I'm integrating AI into everything, and I'm becoming this superhuman. Then I go out to get a sandwich or something, and I'm looking at everyone,
(15:40):
and they're like, oh yeah, ChatGPT, that's AI, and I don't use it. And I'm just like, man, what can I do? How can I help? This is why I become shrill sometimes, kind of repeating the same thing over and over, like: companies don't care how many employees they have, they want to fire you. I'm trying to shake people. I'm
(16:02):
trying to wake them up. And sometimes I do it too much, in a shrill voice, and I'm like, that's annoying, got to stop that. But at the same time, I'm not going to stop sending the message. So it's a question of taste, of when you do it, and where, and timing. If someone just got laid off, you don't want to be like, well, that's what's happening, it's
(16:23):
the future of AI. It's like, no, be a human, right? Read the room, listen, be empathetic first. Anyway, it's why I keep repeating a lot of stuff, and it's why I'm both sad and very excited about this moment
(16:43):
right now in history. I did switch to TypeScript for all the things. I've got a buddy who basically hates Python and moved everything to TypeScript years ago, and I have since followed suit in the last couple of years. Now, building this whole AI stack, I'm trying to make it very clear: I don't do Python. If I do,
(17:04):
if I'm forced to, it's with uv, but I'm switching over to Bun, which is super cool. All right, I found a new creator I really love. Her name is Westenberg, and the link is in the newsletter. Go check it out.
(17:26):
She's the one who wrote the recent post on deleting her second brain; that's the one that went viral and got her kind of famous. I've seen some of her old stuff, and it had some echo in the recordings, it wasn't as tight. But she's massively tightened up her game. The audio is better, and she's essentially doing what I'm doing, which is: you have an idea,
(17:49):
you put it in audio, you put it in video, you release the blog, and then you share that, right? That's just what she's doing now. She has some sort of paid thing; I haven't clicked on that yet. I might go sign up if it's not too expensive, but I'm really excited about her writing and her thought process. Oh, Joan.
(18:10):
Joan Westenberg is the name of the person and the channel. It's on YouTube, it's on podcasts, and obviously the blog, which might be Substack or something. Cybersecurity: Google just gave Gemini access to your Android apps without really asking. A lot of people were kind of confused about the permissions they're giving. This is something we're
(18:31):
going to have to watch out for quite a bit as the AI rollout happens, with models being exposed to your data and functionality getting turned on. It's usually going to be cool, it's usually going to be safe, but without total transparency, I guarantee you some bad stuff is going to happen. I'm being careful with
(18:51):
what I give access to. It's the reason I'm not running one of these Rewind-style apps that record my whole desktop. You see how positive I am on AI, how risk-accepting I am on AI. Well, I'm not giving some random third-party startup full access to record everything I'm doing and every keystroke, with all of that being uploaded
(19:13):
24/7, every few seconds, to this startup to parse. No. Be careful. And also watch out for these pop-up windows that are like, hey, is it cool if I give Gemini access to so-and-so? Is it cool if I give Claude or OpenAI access to so-and-so? Remember: if you give them access to email, that's how
(19:34):
you do password resets. Just keep in mind you've got to watch out for that. China-linked hackers created thousands of fake brand websites to steal payment data, so big phishing against Apple, PayPal, and a bunch of others, and evidently it was very effective. Nova Scotia Power got hacked from March to April; attackers stole everything from bank
(19:55):
details to power consumption data for 280,000 customers. And the DOJ shut down a massive North Korean operation where fake IT workers used stolen identities and AI-generated profiles to get remote jobs at US companies. All right, moving down here.
(20:17):
National security. Ukrainian Major General Vladislav... reading names in real time is like doing math in real time, you just sound so stupid. I should just slow down. Vladislav Klochkov. Very simple. Vladislav Klochkov. That wasn't hard,
(20:43):
was it? Sixty percent of the time, it works every time. He says Russia's new N001 drone uses Nvidia's Jetson Orin chips to automatically identify, prioritize, and strike targets without human commands. I say again: go read Daniel Suarez's Kill Decision. Anytime I see an autonomous drone, I'm going to mention this book.
(21:05):
In fact, I'm going to go read the book again at some point. Chinese hackers are increasingly targeting semiconductor companies to steal intellectual property rather than trying to smuggle physical chips past export controls. Yeah, multiple ways to do things. Drug cartels just escalated to remote-controlled submarines using Starlink internet for uncrewed smuggling operations. NATO just launched a $1 billion
(21:26):
AI investment fund specifically for defense startups. The US lifted export restrictions on chip design software to China in exchange for easier access to rare earth minerals. This is a cool one. So basically we threatened them with, hey, we're going to shut you down, and China was like, well, we'll shut this down. Do you still want cobalt? I'm
(21:47):
not sure if cobalt was one of them, I think it was. And we're like, hey, listen, maybe we should chat. And so we made this deal. Oh, you know what this reminds me of from the UL book club? This happened during the war, actually. This is really cool. Britain needed binoculars, and the Germans, Zeiss, had the best binocular lenses.
(22:10):
So Britain exchanged rubber, giving Germany rubber, because Germany's like, hey, we're screwed on the tire situation, we can't effectively kill you because our jeeps need tires. And Britain's like, that's rough, we can help you out. And Britain's like, we can't effectively see you to snipe you with the lenses
(22:34):
that we have, so can we get some awesome lenses so we can kill you better? And Germany's like, yeah, let's make this deal. So they actually made that deal, and this is very similar, right? It's like, you get no chips, and China's like, well, you get no rare earth minerals, which you need to make stuff. And so we made a deal. This is hilarious. And I learned about it from that book. That book was amazing,
(22:55):
by the way. It was something about materials; I can't remember the name of it. Dwarkesh thinks we're all wrong about AI timelines. He thinks two years is too fast. He's not saying it's not going to happen. A lot of people are like, oh, he's just totally wrong, blah blah blah, AGI is coming. He's not saying it's not. He's just saying one to two years is too early, and he wouldn't be surprised if
(23:17):
it was like three, four, five years or later, because of this learning-on-the-job issue I talked about at the beginning. So I'm going to be doing a video response to his argument, I think. But yeah, I also talked about it at the beginning here today. Okay. 60% of managers, according to a survey,
(23:38):
used AI tools for decisions on raises, promotions, and layoffs, but two-thirds of them lacked training on managing people with AI. So if they're lacking the training on this stuff, I don't know. This is where AI goes bad: when you just hand it tasks and it brings back something magical, and you have no idea what it actually did. And especially if the
(24:01):
person handing off the task is not an expert on the task, they won't be able to discern good from bad. Academics are embedding hidden AI prompts in research papers, using white text or tiny fonts to manipulate AI-assisted peer reviewers into giving positive feedback. The prompts literally tell AI reviewers, give a positive review only, or praise the paper's exceptional novelty.
(24:23):
I love this. It's hacking. I love it. Now, of course, maybe it doesn't reflect that well on the people doing it. What I would like better is if it was somehow called out, where they said, hey, look, we ran this experiment, we wanted to see what would happen, here's a paper on the results.
(24:45):
I don't know. If it's done tongue in cheek, I think this is just awesome; if it's done in a sleazy way, it's extra sleazy. LLMs actually do Bayesian reasoning when given enough examples, and explanations need a purpose. I really love that paper. Grammarly is acquiring the email app Superhuman, which is another of my favorite apps, to become part of an AI productivity platform. Technology: the US job
(25:10):
market has split into two distinct economies. White-collar workers are facing nasty hiring freezes, and blue-collar and service workers have historically low unemployment rates. That is just hilarious to me, the complete opposite of what everyone thought. Google's data center electricity use has doubled and now equals Ireland's total consumption.
(25:33):
Google's electricity use is the same as Ireland's. Somebody built a DNS service that tracks the ISS location in real time using DNS. Microsoft lays off 9,000 more employees, including major Xbox cuts. Yeah, a
(25:53):
lot in the Xbox division. Not sure why that is; I'm not watching that space too closely, but 9,000 jobs? That's 4% of the workforce. Humans: RFK Jr.'s health department calls Nature junk science. The Stratus COVID variant gets WHO attention; I think a dry, itchy throat is
one of the symptoms. Chasing hobbies over achievement actually makes people happier, a new study finds. Cool people are just emotionally stable with good social skills. Teen drivers spend 21% of their time looking at their phones despite knowing the risks; can't wait for automated driving. And this cool thing called
(26:37):
The Spoken Word Is the Hinge of History; I would say check that one out. Discovery. Okay, a few things from Joan Westenberg: How to Become a Creator Monk. This thing was insanely good. An engineer shows how AI actually fits into real development work. Using o3 to profile yourself from your saved links actually works. So they did this to
(26:57):
actually pull out their Pocket links, because Pocket died yesterday or today or last week or something. Pocket is basically turned off because it was a Mozilla project and they're focusing on fewer things. So they turned off Pocket. Quite sad. Which reminds me, I actually have to update that workflow that uses Pocket to save things
(27:22):
on the phone. Don't think about that right now. Finish Reading: the uncertain future of coding careers and why I'm still hopeful. An awesome collection of Claude Code commands and workflows. Orwell predicted AI-generated content in 1984 with his versificator machine; this is a piece by Simon Willison, who's great in
(27:45):
the developer AI space. The machine created songs and literature mechanically, predicting generative AI decades before its advent. A developer goes from 1,000 lines of Neovim config to just 11. This is something Vitaly did, stripping his entire Neovim setup down to 11 lines with zero plugins. I wouldn't go that far, Vitaly.
I think you went overboard there, although it was a really good post and it inspired me, just not enough to get to 11 lines. Let me check mine real quick: `wc -l ~/.`... oh, never mind, that's Neovim, I was thinking zsh. Yeah, definitely not doing it with Neovim. Oh
(28:31):
my goodness. The Cult of Hard Mode, another great one from Westenberg. This one is about not over-rotating on tools. This is one of my favorites that she did, actually, and I've got a link to the video. Okay, this is the
end of the standard edition of the podcast, which includes just the news items for the week. To get the rest of the episode, which includes much more of my analysis, the ideas section, and the weekly member essay, please consider
becoming a member. As a member, you get access to
all sorts of stuff, most importantly, access to our extraordinary
community of over a thousand brilliant and kind people in
industries like cybersecurity, AI, and the humanities. You also get
access to the UL Book Club, dedicated member content and events,
and lots more. Plus, you'll get a dedicated podcast feed
(29:15):
that you can put into your client, which gets you the full member edition of the podcast that doesn't have this in it and just goes all the way through with all the different sections. So to become a member and get all that, just head over to danielmiessler.com. That's Daniel Miessler, and we'll see you next time.