Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:20):
All right. Welcome to episode 496. All right, so my
video on building a personal AI system using Claude Code is out.
This thing has been a blocker for me in like
life and work and just it's taken so long to
get this thing out. Recording it took like 5 or
(00:41):
7 takes or something. The blog post, the editing, everything
about it has been difficult. I am so glad it
is out and recommend you go check it out. Just
upgraded to a Keychron Q11 keyboard; it is canted
and tilted. I can't remember which words associate with
(01:06):
which angle, but it's basically tilted inwards because that's the
way your hands are when you type. And then it's
also tilted upwards on the inside. So it's like an
upside down V. Kind of not as extreme though. So
if you imagine you just put your hands, your elbows
by your side and you put your hands out, they're
(01:29):
going to be like angled at like a, an upside
down V shape, you know, slightly like a wide open
upside down V. And they're also going to be moving
well if you stick them straight out they will be
straight out. But oftentimes you'll want to slightly bend
(01:49):
them inwards or, you know, angle them inwards. And if you
do that, then to reach the keys on a regular
keyboard that's lined up the way a traditional keyboard is,
(02:11):
you actually have to tilt your wrists inward, and it's
just not good for you over time. I don't really
have any pain that I'm dealing with and trying to
address here. I just want it to be more natural.
I want to be able to type faster and I
don't know. I also think it looks cool. It feels
(02:32):
cool as well. So I like that. And then I'm
experimenting with the keyboard because, basically, it
is two separate pieces. Okay. So the left
is separated from the right. That's like the most important feature. Um,
(02:52):
and that allows you to do this, this canting and
everything completely separate for left and right. Um, obviously you
want them to match. You want it to be symmetrical, basically.
But the idea is you can have them all the
way across, like the distance from your elbows, uh,
one elbow to the next, which is, what, you know,
(03:16):
maybe something like two feet apart.
You could do that and then your hands would be
just straight out. And a lot of people who do
like the custom keyboards, they actually do that. I have
mine probably at the bottom of the keyboard. It's probably
separated by, I would say, um. Actually, I have a
(03:41):
tape measure. Like five six inches is how far the
bottom is and the top is only a couple of
inches separated because again, I have it, you know, canted
(04:02):
downwards in that way. And it's pretty cool. So the
inside of the keyboard I have elevated. So it's in
that upside down V shape. And yeah, it just feels
really natural. Feels awesome. I've tried some other more customizable
keyboards that are, like, 60%. They're called 40% keyboards
(04:24):
and those you have to use layers to be able
to get to a lot of the keys. Um, I
didn't find that very fun. Um, the muscle memory there
was not easy to grow, and I feel like there's
not a reason to get rid of that many keys
and to force you to have to do layers. I
(04:45):
think this is a better approach. It's a more traditional keyboard.
So I have numbers above my fingers. I still have
my spacebar on both sides. I still have number keys
in the bottom right of my right hand. So it's
like it's more classic, more traditional, but at the same
(05:07):
time just oriented in a more ergonomic way. So again,
it's a Keychron Q11 and I do watch too many
keyboard videos on YouTube, so this might not be my
last keyboard. I have way too many keyboards, unfortunately. Okay, um,
(05:29):
found a major problem with doing member essays here in
the newsletter, so I'm not actually going to do them anymore.
And the problem is very simple. What
I'm trying to do is orient my work and my
life around the central theme of ideas. Right? I've been
trying to do this for years now, and the central
(05:51):
theme of ideas is you try to have ideas first
of all, and then you try to get them out
into the world. And what I would do is I
would be writing a newsletter and I would come up
with a cool idea, and I would do a member
essay on it, and I was going to do like
one a month, but I ended up doing it like
almost every month. But the problem is, they were
(06:15):
pretty decent. I thought they were halfway good a lot of times;
at least several of them were halfway decent. And I'm like, well,
that just needs to be a regular blog post. Like,
you know, it just doesn't make any sense
to put the effort into it, have people actually enjoy it,
but it only goes out to like 1500 people or
(06:37):
whatever the current member count is, right? So it's like
it just was not in harmony with me trying
to get ideas out into the world as like a
primary mission. So I'm not going to do that anymore. Um,
basically I'm just going to turn those into blogs, right?
(06:58):
So you're still going to get them. They're still going
to be in the newsletter. It's just you're not going
to have to pay for them. And instead I'm going
to basically raise the member perk that has been most
asked for and just make that the primary perk. And
basically a clearly stated explicit thing of 25 to 50%
(07:20):
off all UL paid content. So when human 3.0 launches,
when augmented online, the online version of augmented launches, which
will be sometime this year, sometime soon, like you're just
going to get a massive discount like hundreds of dollars
or in some cases for courses or whatever. Or, you know,
(07:44):
conceivably if I charge enough for a thing, it'll be
thousands of dollars off for members, which is kind of
ridiculous given that the membership is only like $100. So
I don't know. I'll figure that out later. Uh, content
is more important in the meantime. So yeah, you can
(08:05):
subscribe at danielmiessler.com. My friend Monica is running
her Resilient Career Accelerator course again soon. I'd recommend you
check it out. It's a week-long course: how to
build a career in security leadership, run a security program,
and generally make yourself more resilient, career-wise, in this
AI world. And she talks about that a lot. She's
(08:27):
very bright, very passionate about these topics. And I'm also
going to be a guest speaker at the course that
is in November. Uh, very much in line with my
recent Westenberg shares. You've probably seen some of those videos
from her. I'm loving this guy. His name is Mainstone,
(08:48):
or at least that's his brand or whatever. And he's
just this extremely honest, like, British guy. And, uh, yeah,
he wears sunglasses and talks at the camera sometimes, but
he is also a photographer and videographer. Anyway, he takes
videos of like, terrain. And so his whole theme is
(09:12):
like going back to basics. It's like, what are we
actually doing here? It's a very existential, like, if I
had to categorize this YouTube channel, it's an existential YouTube channel.
And he's just, he's smart. His takes on AI,
I think are pretty clean. Like some of the cleanest
that I've seen, which I find to be hilarious. Like
(09:34):
this guy who is, I don't know, I'm not even
sure how technical he is, computer-wise. Video-wise,
he's, like, obviously extremely technical. Yeah,
he could probably be a professional video guy if
he isn't already, but I'm not sure if he's like
(09:56):
a computer guy. I don't really know that. Um, he
mentions being broke all the time and having like, $1,500
to his name and like, trying to find work. So
I'm guessing he doesn't have, or hasn't had, like, a
tech career. I'm guessing he doesn't have that to
fall back on if he's talking about that. But in
(10:18):
the economy right now, who knows if that's actually the case?
Could be that he just can't find a job there.
But anyway, one of his favorite things
to do is just like to disappear and just like
go far, far away. He's also really good at land nav,
by the way. Um, because I'm not that good at
land nav. Or at least I'd have to train up. Um,
(10:41):
I think he does carry, like, a satellite device and, like,
batteries and stuff. So he could, like, actually get signal
if he needed to. I think he gets, like, weather reports,
but he's going places like very few people have been.
And he, like, disappears, like in, like, random parts of
Egypt or something, or like some random mountains in, like
(11:03):
Alaska or whatever. Just like disappears for like seven days.
And he's doing video the whole time. He's talking the
whole time. And it's really cool to see him actually
talk to the camera when he's feeling these extreme, like
emotions of solitude and like the wisdom he's getting from
it and everything. And his voiceovers, his narrations are great. Um,
(11:28):
his honesty. Like, he's just like, look, I just can't
figure this out. I don't know what I'm going to do. Like,
he's just very plain and open. It's extremely refreshing, very authentic.
Highly recommend it. Especially if you're kind of thinking, you know,
what is all this tech for? I want to go
back to basics, you know, analog. It's got that sort
(11:50):
of vibe to it. It's got the existential vibe to it,
gorgeous scenery as well. Like some of these shots he's taking.
Just absolutely amazing stuff. So anyway, highly recommend that. We
read Epictetus, the Enchiridion for book club. And it was wonderful.
(12:13):
Absolutely wonderful. Um, it's also reminding me: the Westenberg thing,
this guy Mainstone, I don't know, all my Stoic studies,
a million different things have, like, been bringing me here, and
I don't know the attribution amounts, so I can't really
properly give credit. But I've also been reading a bunch of, like,
(12:34):
you know, dead tree books, uh, with no tech around me.
And I feel like whenever I get an insight from
a real book, an old book with text on a page,
I feel like the insight just hits me harder than,
you know, all the cool shit that I'm getting from YouTube, right?
(12:54):
And I'm getting great distilled, highly produced, great stuff from YouTube.
And somehow the book stuff just hits me harder. And
I'm trying to disentangle: like, is it because it's better content?
Is it because it's coming from a different angle
where I'm not ready for it and I'm, like, surprised
by it. But one of my possible frames for this
(13:18):
is that it's simply this. One of the models I
was thinking of is something I heard about on Reddit somewhere:
that some people use tea bags like 12 times, and
like they keep track with like a little counter of
like how many times it's like, oh, 11th time. Well,
(13:40):
we haven't got enough out of it. So they keep
using it. And I'm like, I wonder if the content
that I'm getting from YouTube and from like, all these,
you know, modern sources or whatever, is just like
12-times-used tea bags. It's good,
but it's massively diluted. Just, like, not the
(14:02):
good stuff. And I wonder if the stuff that I'm
getting from these books. I don't know, I can't tell
if it's just a better tea leaf, or if it's
just an undiluted tea leaf, or if it's just that I'm
not used to taking tea leaves by snorting them or
shoving them into my ears. Like, I don't know if
it's the medium. Like I'm still trying to figure that out.
(14:25):
So if anyone has any ideas there. But bottom line
is I am getting extremely excited about this concept of
old sources. Classics, but not even just classics: literature.
(14:48):
I'm getting so much out of the biography of Roosevelt.
Teddy Roosevelt. I was thinking of the other one; I
get them confused. But yeah, FDR is the later one.
He was the economy one: FDR policies, and I think the
New Deal and all that. But Teddy Roosevelt was much younger.
(15:12):
He was doing things in the early 1900s or, actually, no, the 1800s. Yeah. Um,
like 1876-ish or whatever. Uh, roughly around there.
I think he was in his teens, like 100 years
after the country was born or something like that. Garfield
(15:33):
was assassinated, uh, during that time, something that just happened
in the book. But anyway, uh, yeah. This
biography was talking about his life, and, like, um,
the author was going through his journals. That's what the
biography is largely sourced on: his journals.
(15:54):
Roosevelt wrote tons. And the guy was just absolutely insane. Like,
completely insane. He was, like, a boxer, a fisherman,
he danced, but not very well. But
he spoke multiple languages, and literature, like, he just
read everything. And recitations, evidently, were a huge thing at Harvard,
(16:19):
going from place to place, doing recitations. Like, I don't know,
is that people reading literature, or people reciting memorized
literature without the book? I'm not sure exactly what that is,
but I need to go research that. But I feel
like this content I don't again, I don't know if
(16:42):
it's the tea leaf itself or if it's the medium,
or if it's just different and it's just a different
style with different, you know, diction, and it spawns
thoughts in the brain differently. But I just feel like
people were so sharp back then. Um, I don't know.
(17:03):
I think they were extremely limited in other ways as well.
So it's hard to, uh, you know, measure it on balance.
But anyway, I'm getting extraordinary creative, uh, boosts from reading
these older books. I'm reading a book right now about, um,
(17:26):
rhetorical figures. It's absolutely insane. It's rhetoric, but
specifically about something called figures in rhetoric. Uh, it's a
book about writing, and I'm just loving every single page.
I also just love how the page is written. I
love the writing itself in this book. I think it's
(17:48):
using, like, the tricks that it's teaching me. It's using
them on me while it's teaching me. Pretty sure it's
doing that. But what I find is it's very similar
to my favorite writers; I've got to post that list somewhere. So Hitchens,
Scott Adams, the Dilbert guy before he went crazy, although
(18:09):
he probably still writes well now, um, not so much
Sam Harris. I like him for a different reason, but
the writing style that I like the most, and that
I use and probably emulate quite a bit is this
alternation between long and short sentences. It's very powerful.
(18:33):
I'm seeing this happen in this book here again. Um,
Hitchens did it. It's in several books that I've
read about good writing, so I'm not sure what the
actual source is. I'm not sure if it's one of
these figures, these rhetorical figures. But anyway, really, really powerful stuff. Um,
(18:56):
and we are 700 minutes into this podcast. Um, let's
get into it. That was just the intro. Cybersecurity. Anthropic
says attackers are using models to automate full hacking operations.
So this is the thing I talked about in my
Future of Hacking video, which you should definitely go look
at if you haven't seen that. Um, and that's just
(19:18):
a derivative video, a hacking-focused version
of my unified entity context, which is like this unified
context for the enterprise that everything runs off of, where
the AI talks to it. And it's like, you know,
the context is all in one place. Well. Anthropic just
(19:42):
came out and basically said, look, we were tracking
this actor, and the actor basically was acting like a
giant team, like a team of multiple people. And
this one person accomplished what normally takes large teams, you know,
months to do. So, uh, 17 organizations in weeks using Claude.
(20:08):
So somehow they were getting Claude to do all this,
by the way. So they were bypassing whatever restrictions Claude
had. Healthcare, governments, defense contractors, even a church. Uh,
Claude analyzed stolen data and created customized extortion plans for each victim.
North Korean agents use Claude to fake IT worker identities
(20:30):
at companies. They asked Claude basic questions like what is
a muffin to maintain cover? So this is one of the, uh,
prompt injection bypass techniques, or at least, you know, to
bypass the policy of, like, hey, you seem to be suspect. Uh,
fake employees can earn high salaries. Romance scammers use Claude
(20:51):
as their emotionally intelligent conversation engine. That's diabolical. Chinese hackers
use Claude to guide espionage against Vietnamese telecom companies. Criminals
are building ransomware as a service. Businesses powered by AI.
So all of this is exactly what I've been talking
(21:13):
about here, where it's like it's their AI system against
our AI system, right? It's them defining goals and the
AI basically using all its different tools, which is essentially
what I just talked about in my personal AI video,
where I'm talking about, like, you have a central system
(21:33):
and you give it all these tools and you say, look,
go do this, and it picks the tool to go
do it, and it does it. And it looks like
a system probably very similar to what I've built is
being used by this attacker. And I'm sure many thousands
of others, or at least hundreds of others soon to
(21:54):
be thousands. And maybe, I don't know, maybe more than
that later. I just think this is the way to go, right?
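That give-it-tools-and-let-it-pick loop can be sketched very roughly. In a real stack an LLM makes the pick; here a simple keyword-overlap score stands in for that decision, and both tool names are invented for illustration:

```python
# Toy version of the "central system picks a tool" loop.
# A real system would ask an LLM to choose; keyword overlap
# stands in here, and both tools are made up.
TOOLS = {}

def tool(name, keywords):
    """Register a function as a callable tool with trigger keywords."""
    def register(fn):
        TOOLS[name] = (set(keywords), fn)
        return fn
    return register

@tool("port_scan", ["scan", "ports", "open"])
def port_scan(goal):
    return f"would scan targets mentioned in: {goal!r}"

@tool("dns_audit", ["dns", "domain", "records"])
def dns_audit(goal):
    return f"would audit DNS records for: {goal!r}"

def run(goal):
    """Pick the tool whose keywords best overlap the goal, then run it."""
    words = set(goal.lower().split())
    name, (_, fn) = max(TOOLS.items(), key=lambda kv: len(kv[1][0] & words))
    return name, fn(goal)

picked, result = run("check our dns domain for stale records")
```

The point of the pattern is that the operator only states the goal; the system owns the choice of which capability to fire.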
Everyone's going to have an automation stack. The question is
what are they using it for? And so what do
you have to do as a defender. You've got to
have the same exact automation stack. That was my takeaway
from this. You know, my Future of Hacking Is Context video
(22:18):
is basically asking, what's the answer? The answer is to have
the same thing that the attackers have. And the framing
that I have for all of this is you have
to have a world model of your system and
your company and your employees and your projects and your
budget and everything. You have to have a world model
(22:39):
of that whole thing, and you have to be able
to audit that and understand its state at any given time.
And then at the bigger picture, the whole game is...
excuse me, let me take a drink. At the bigger picture, the
(23:02):
ultimate game is going from current state to desired state. Anyway,
that's kind of a bigger topic, but the point
that I made before is attackers will have this
(23:23):
world model of your company, and they will use that
to use their automation system to continuously launch attacks against you.
And basically what your logs are going to look like
is thousands of AI automation systems just hitting you constantly.
Just constantly. And if you fart or you burp or
(23:48):
you step out of line for one second, with like an
S3 bucket or a misconfiguration or, you know, a DNS
domain that you're not maintaining, whatever it is, that is
going to get jumped on by a whole bunch of
AIs who do not sleep. Right. So your attacker is
(24:12):
absolutely going to jump on that, and they're going to
do it fast. And if not, one of their competitors
is going to do it. So we're not going to
have the time. So our only chance is we get
there first. Our automated AI system sees that thing pop
up first before them and goes and fixes it, or
(24:34):
blocks it, or does whatever, you know, puts up a
firewall rule or whatever. So this is the game. To me,
this is the security story. This is the number one
security story going forward. Everyone talks about, you know, AI
and security. Hey, what are the future trends or whatever?
(24:56):
This is my number one answer. Number one answer. Unified
entity context to get everything unified inside of a company.
So the future of, basically, company management is the founders,
the owners, sitting at a giant table with a bunch
of other leaders who are also extremely business savvy, creative,
(25:22):
and technical as well. And then there is a
number of similar minded people that are basically like wizards,
and that essentially is their IT staff. And their IT
staff isn't really IT staff. It's more like business IT liaison.
(25:45):
So it's like they are experts at understanding exactly what
the leaders at that table just said, because they're at
the table as well. But they're kind of like the
execution arm. And then what they do is, they're like, okay,
I just understood that I am parsing that and bringing
it into the unified entity context that is now just updated.
(26:07):
Our goals. Our goals have now been updated. All our
plans have been rewritten. Um, these additional details have been
written into all future iterations of all of our products.
For all the products that you just mentioned building, we
have now started specking those out and they're basically wielding
(26:29):
these giant teams, these giant armies of AI systems. And
those AI systems are the planners and the speakers and the,
you know, they're going to build marketing campaigns and they're
going to build all this different stuff around that. But
it has to start at that table with the people
deciding actually what to do. Right now, of course, there's
(26:54):
arguments that, okay, well, in the future you don't need
them because, you know, the AI will just be smarter
than all of them. Honestly, I believe that. Right? And
that's the AGI story. Very possible. But that's not
a security thing I care about right now. It's just
not tangible to me yet. Maybe in five years, maybe
(27:16):
ten years, maybe 20 years. And who knows, it could
be two years. But for the foreseeable future, and I
think it'll be a good amount of time, even if
AGI hits, it's going to take a while. It's going
to be that leadership team in the middle, um, with
an enforcement arm of these extremely bright people wielding giant
(27:39):
teams of thousands or millions of tiny little AIs doing execution.
And in terms of security, really, it's about everything. It's
continuous monitoring and continuous context management and understanding of the business.
But especially for security, you have to continuously understand the
(28:01):
state of what you have, which will previously have been
called asset management. But really it's like state management of
the business overall. And then once again, it's your AI
systems against their AI systems. It's just that's just reality.
How fast are yours? How well do they understand the context?
(28:24):
How small is the gap between a change and an
understanding of the change? That is the game. Okay. Um,
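A minimal sketch of that change-to-understanding gap: continuously diff observed state against desired state and surface the drift. Every resource name below is invented for illustration; a real system would pull observed state from cloud APIs, DNS, and asset inventories.

```python
# Toy continuous-audit pass: compare observed state to desired state
# and return everything that drifted. Resource names are made up.
DESIRED = {
    "s3:backups-bucket": "private",
    "dns:old.example.com": "removed",
    "tls:www.example.com": "valid",
}

def audit(observed: dict) -> dict:
    """Return each resource whose observed state differs from desired."""
    return {
        resource: {"observed": observed.get(resource, "missing"), "desired": want}
        for resource, want in DESIRED.items()
        if observed.get(resource, "missing") != want
    }

# One pass: the bucket flipped public, everything else matches.
findings = audit({
    "s3:backups-bucket": "public",
    "dns:old.example.com": "removed",
    "tls:www.example.com": "valid",
})
```

The gap being described is just how long a drifted entry sits in `findings` before something acts on it.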
Anthropic shows how they detect sleeper agents hiding malicious behavior. Yeah.
Catching AI agents that pretend to be helping while secretly
planning harmful actions when deployed. Nasty. FBI confirmed Salt Typhoon
(28:48):
breached at least 200 American companies plus firms in 80 countries.
NSA and NCIS traced the Salt Typhoon espionage campaigns to
Sichuan Jianshui, Beijing... why would I, why
would I choose to read these things? It's not going
(29:09):
to help me. I don't want to butcher people's names. Um,
but these are companies, actually, not people. So those
are different companies, Chinese companies, providing cyber tools to
China's intelligence services. Uh, Claude will start training on personal
account data unless you opt out. So this is an
(29:31):
important one. You've probably seen the emails if you're
a user. Uh, they emailed everyone and basically said you
have until September 28th to opt out. I would opt
out immediately, um, if you don't want your data
to be trained on and if you have like a
custom like enterprise account or certain types of, you know,
(29:53):
paid accounts, they won't do this by default. But if
you just have a regular account, including like a pro
account or a max account or whatever, um, they will
turn this on. They will start using your content for training. Uh,
if you do not opt out. So definitely go and
do that if you want to. I'm also curious about
(30:15):
why everyone's freaking out about this. Don't most other AI
companies already do this? Like, I feel like this is
just pretty standard at this point. Um, what I don't
understand and what I wouldn't like is if they're like,
we don't have the ability to opt out before September 28th.
Why would they say before September 28th? I guess that means.
(30:41):
I guess that just means that they will turn it on.
They will turn on the use after September 28th, but
hopefully it also means that you can still opt out
after that point. I'm not sure they made that clear
in the email. I didn't see it in the email,
but hopefully that would be the case. It would be
(31:02):
super shitty if they were like, nope, we sent you
the thing. I told you you had to do it
before September 28th and if you didn't? Too bad. That
would be weaksauce. And the more I think about it,
that's so ridiculous. I don't think that's going to be
the case. Researchers hide malicious prompts in images that only
(31:23):
appear when AI systems downscale them. This is from Trail
of Bits. Really cool attack. Uh, worth checking out. DoD
uses node JS utility maintained by a Russian Yandex developer.
Supply chain Transparency. This is why I can't wait till
(31:45):
there's like, you know, millions upon millions of AI eyes
watching everything. Oh, here's a cool idea that, um, I'm
going to be doing a panel with my buddy Sasha
Zdjelar pretty soon. And by the way, sorry for my voice.
I'm just realizing because I'm trying not to sniff while
(32:08):
on the microphone because it makes a sniffing sound. But if
I'm trying not to sniff, that means I have the sniffles,
which means I probably sound a little bit nasal, so
apologize for that. Not sick, but I'm like one quarter sick.
I just don't feel bad. Anyway, uh, the idea here is,
(32:29):
you know how we were told many eyes, sunlight is
the best disinfectant, and all this. And then we had
the OpenSSL thing, which was open source, and it was
a massive, massive vulnerability across the whole internet. Well, what
if AI is basically a revisiting of many eyes, right?
(32:51):
Because Many Eyes turned out not to be true. Not
to be real. Because many eyes implied that because people
could look at the code that they would, that they
actually would. Right. And that turned out not to be
the case. Nobody was looking at all these open source projects.
Just because they're open source doesn't automatically make them secure,
(33:14):
because that implies a second step, which is lots of
people are actually taking the time to look. Why are
they doing that? What time are they doing that in?
How are they incentivized to do that? There are no answers
to those questions, which is the reason that
it wasn't being done. But what if we could just
(33:35):
unleash like this army? Like the government can unleash an
army of millions of agents, and their job is to
literally just go and clean up all this stuff. And
funny enough, that's exactly what the AIxCC project was about.
Guess who it was launched by? DARPA. Yes, the creators
(33:58):
of the interwebs. So they literally created a project that
does this exact thing. It autonomously, using agents, can
go and find bugs and fix them end to end
without breaking the software. Isn't that insane? And this is where Trail of Bits comes in.
(34:23):
They just won second place. This is what Michael Brown
was talking about, who I interviewed, and you should
go check out that interview. It's on YouTube. And yeah, this is
exactly what the government set out to build. And this
is like, this is many eyes. That is literally what
many eyes was supposed to be. It was supposed to
be humans. Turns out that doesn't really scale. Um, because
(34:48):
open source turns out to be just one guy. I
saw somebody say that recently. Actually, it wasn't a person.
I just saw it as a headline. Open source is
just one guy. Maybe it's actually in the, uh, in
the discovery section. Open source is just one guy. That
is hilarious. Uh, anyway. FBI warns about three-phase Phantom
(35:11):
Hacker scam targeting seniors. Scammers use AI to analyze social
media and personalize their attacks. Oh yeah? What do you know? First,
they pose as tech support and gain remote computer access.
Then they pose as fake bank employees and convince victims their accounts are
compromised overseas. Just so nasty, so nasty. Just going after seniors.
(35:35):
And it's just going to be done at scale now.
National security. Pentagon security agency admits China keeps beating their defenses. Sad.
AI. Cloudflare Radar tracks global AI bot traffic across the internet.
This thing is really cool looking. Yeah, it shows you
(36:02):
all the stats for all the crawlers and everything. Super
super cool. AI is the natural step in making computers
understand humans. AI browser agents fail at real work. Except
for one unknown tool. Yeah, this was a really interesting one.
You should go check out that one in the newsletter.
(36:26):
AI adoption linked to 13% job decline for young US workers. Yeah,
I saw one of my smart buddies who was like, yeah,
you know, there's no evidence that AI is affecting the
job market. I'm like, what are you reading? Like,
the CEOs, so many founders, are like, yeah, I'm not
(36:47):
hiring anymore. I'm looking into how I could do more
of this with AI. And it's just like dozens of
these people coming out and saying this. The job stats
are going down at the same time. And it doesn't
mean like that's necessarily proof, right? You know, it's correlation, right.
But then you just see more and more people saying, yeah,
(37:08):
I automated this, you know, I have automated this. I
was told directly that the team was being laid off
because of AI reasons. Like, there's just so much direct
evidence that we have from people, you know, which is
like anecdata; you add those up and they turn into,
like, almost-data. So 13%, I think, is low. I
(37:31):
think it's just just now starting. And yeah. And actually
I have things queued for the next newsletter that, um,
back that up. There are actually more studies coming out.
It's probably going to be way more than 13%. And yeah,
only getting worse is my guess. Creators are automating their
(37:52):
entire content workflow with AI. So you spot trending topics
with AI agents, train Claude on your style, clone your
voice with ElevenLabs, create your digital avatar with HeyGen,
generate B-roll with Veo 3 and Runway ML, merge everything
(38:12):
in Canva, and automate distribution across platforms. Complete the workflow. So
that is basically the dream, right? This is like end
to end. They're just doing everything or at least saying
that it can be done. I love this. I think
all anyone should have to do, like as a creator,
(38:34):
is think about their thoughts and their ideas and like
how to get them out there and like the format
and the style that they want to get them out there.
But all the busy work that isn't fun, of course,
obviously still do the fun stuff. All that stuff should
just be automated away so you could focus on the
actual creativity part. So yeah, interesting. AI coding makes me
(38:58):
faster but kills my flow state. So this guy Praful
is talking about how he used to be able
just listen to music and code, and now it's like
you're dictating or you're typing or you're going back and
forth with this AI. Uh, for a lot of the
(39:20):
code generation. And it's just like, it's not the same
as just listening to music and coding. And I thought
that was really, really interesting. I hadn't realized it until
he said that. And it's definitely true for me. ChatGPT
usage drops hard when summer vacation starts. Yeah. This is
(39:42):
sad for me because it means they're not doing full
life integration with it. They're mostly using it
for, like, homework and research and stuff like that, which
is fine, but I would like to see it more
widely adopted. Stop calling automation AI unless it actually learns something.
I included this just because I want to include counter
(40:05):
opinions to my own. Um, yeah. It's obvious that we want
employees who can learn, but learning is not the main thing
that we pay employees for. The main
thing we pay employees to do is to execute the tasks. Right.
(40:31):
So I think the learning and the executing of the
tasks are two separate things, because if we had AI
that could just execute the tasks, then the learning thing
would not be as important. Um, because what's more likely
to happen is that you would simply have, you know,
separate systems that are like, okay, we detected a sufficient
(40:53):
change in this thing, therefore some retraining needs
to happen, whether that's a
fine-tuning of a model, or just prompting or context, whatever the
(41:13):
method you're going to use to do the improvement is, and
then you roll it back out. And the other thing is,
as the stuff gets better, I think. A lot of
that will generalize into being the same task and maybe
not even require too much learning. Again, just keep in
(41:33):
mind how similar problems will
start to look when AIs are seeing all the problems
for all these different companies. How different really are the
sales use cases and the marketing use cases, and the
(41:53):
product management problems and the writing code problems. Right? It
seems to me like the hardest part is the creating of
net new things. It is the true creativity. It's the
true coming up with new ideas. And guess what? That's
(42:14):
not what we're paying most people for. We're paying them
to execute on these other tasks, which are the other 99.99%
of knowledge work. So I think this argument about, you know,
I'm paying you to learn, and AIs can't learn on
the fly really well right now, therefore it's not a
(42:36):
threat to us. That is not a good argument. DSPy
turns prompt engineering from art into systematic optimization. And this
is a Towards Data Science article. Really cool. DSPy is
a really powerful framework. It's basically automation of intelligence tasks
(42:59):
in a pipeline-type structure where it's repeatable and consistent and
you can depend on it, kind of like an
enterprisey, end-to-end type of vibe technology.
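To make "systematic optimization" concrete, here is a toy sketch of the core idea (this is an illustration of the concept, not the actual DSPy API): treat the prompt as a tunable parameter and keep whichever candidate instruction scores best against a small labeled eval set.

```python
# Toy prompt optimization: search over candidate instructions and
# keep the one that scores best on a tiny eval set.
# fake_llm is a stand-in for a real model call.

def fake_llm(prompt, text):
    # Hypothetical "model": follows a digit-extraction instruction
    # if the prompt asks for it, otherwise just echoes the input.
    if "digits" in prompt:
        return "".join(ch for ch in text if ch.isdigit())
    return text

# Small labeled eval set: (input, expected output)
eval_set = [("order #123", "123"), ("room 42", "42")]

candidates = [
    "Repeat the input.",
    "Extract only the digits from the input.",
]

def score(prompt):
    # Count how many eval examples the prompt gets right.
    return sum(fake_llm(prompt, x) == y for x, y in eval_set)

best = max(candidates, key=score)
print(best)
```

Frameworks in this space automate the candidate generation and scoring loop so the prompt becomes a dependable, optimizable pipeline component rather than hand-tuned art.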
Three mystery customers account for over half of Nvidia's data
center revenue. That's spooky to me as like an investor
(43:21):
in Nvidia, which I currently don't hold any Nvidia, but
if I were holding some, that would be scary. Like,
who are those people and can they go away? And
if so. What would happen to their stock? It's more
likely that they'll gain five other unnamed ones and they'll
(43:41):
go to the moon. But I don't know. It just
seems weird to me that such a high
percentage of their good numbers are from these unknown people.
Like, is one of them Satan? I don't know.
I just don't feel good about it. Venture capital in trouble.
I just don't feel good about it. Venture capital in trouble.
Based on SEC filing patterns, TJ Jefferson discovered that Form D
(44:05):
filings containing "Fund I" peaked in Q3 2022 and have
been dropping hard ever since. Pretty cool analysis there. CrowdStrike
buys Onum for $290 million to build agentic SOCs.
Computer science grads are struggling to find new jobs despite demand.
(44:30):
Entry level tech hiring drops 25%. Who knows where they're
getting that stat from? There's a million of these studies.
I wouldn't be surprised if it was 25%. I wouldn't
be surprised if it was a 65% drop. Yeah, like, many are
applying to places like Chipotle while software engineer postings remain down. Oh,
(44:55):
here we go. Software engineer postings remain down 70%. Cognitive
load is what actually matters in software complexity. Good piece there.
South Korea bans smartphones in schools starting 2026. Great job
and great job to probably Jonathan Haidt. Therapists are secretly
(45:19):
using ChatGPT and clients are upset about it. Middle aged
are no longer the most miserable. Turns out it's now
younger people. Younger people are now more miserable than middle aged.
It used to be a U-shape, with younger people happy
and older people happy; now younger people are massively unhappy.
(45:45):
US retail giants are raising prices based on Trump tariffs.
Shocking everyone. A brain abscess gave Eric Markowitz the clearest
moment of consciousness he's ever experienced. Describes how facing death
from a cerebral abscess made him truly conscious for the
(46:07):
first time. Trade school enrollment surges as Gen Z ditches
degrees for HVAC and welding. I got an idea here.
Why go to school right now? Why go to college?
I still think you should. So I'm of two minds
(46:28):
of this. I'm very strongly, like, of polar minds here. You
should still go because of connections. You should still go
because you do learn some decent fundamentals, if you weren't
going to get them some
other way. Right. It's a rigid structure. You have to
show up to class. You have to get the grades right.
Your parents are usually pushing you. So these are like
(46:50):
all good reasons to go. But holy crap, compared to
having your AI build you a custom curriculum of like
the best YouTube videos to teach you computer science. And
I'm going to be talking about this a whole bunch actually,
in the next few months. I've got this whole concept
(47:12):
of slack in the rope and actually what a real
curriculum could look like. I'm actually going to be building
curriculums into Substrate, curriculums for computer science and
all these different degrees and stuff. And
I'm actually going to go find some experts on this
(47:32):
to help build the degrees, to help build the content
that needs to be in there. And I'm also going to,
of course, look at like all the classic ones, I'm
going to look at Oxford, I'm going to look at Stanford,
I'm going to look at like all these different schools, um,
including looking at what they used to teach, look at
what the Romans used to teach, like the trivium. Right.
(47:52):
And basically build like these custom bespoke things. But the
point is, I bet you it's pretty easy to put
together a set of YouTube videos, um, and maybe some
custom content that is like, just gets to the heart
of it, right? Imagine learning this stuff from like, Richard
Feynman and like the best teachers you could find. And
(48:16):
maybe certain teachers are better for certain types of learners, right?
And that could be stacked in there. But there's just
no possible way that learning computer science in college is
anywhere close, anywhere remotely close to like optimizing the cost function.
(48:39):
Like nowhere close to ideal. Uh, I know because
I learned computer science in a traditional four
year college, and I've seen it taught in many, many others.
And I know what the curriculum looks like. Anyone could
go look it up. They're following the same books, like
(49:01):
most of them are reading the same books. Most of
the lecture styles are very similar. The homework is very similar.
Like the algorithm is just so ancient for doing this. And
right now, at this particular moment, like, four years
from now is like 14.5 years from now in the
(49:23):
sense of like how much progress is going to be made.
What is your opportunity cost for sitting in a class
for a semester, trying to learn computer science, versus having
an AI build you a custom thing and actually have
conversations with you about it? And how about this? It's like, hey,
write me some code that does this, and you're writing
(49:45):
the code and it's like, yeah, that's not quite right.
It's more kind of like this. And it dynamically changes
your code for you and you're like, oh, okay, cool.
Or it's like, hey, I'm not going to help you.
You go ahead and do this and see if it compiles. Um,
go ahead and do this and, you know, test out
the website. Okay. Why is it not rendering? Why is
my thing off center? Why does my code not compile
(50:07):
or whatever? And it's like, I don't know, what do
you think it is like a Socratic sort of tutor
type thing, except for it never gets tired and you
don't pay extra money to ask the question again. Like,
how does that even compare to like this, this older
system that I just described, it's completely ridiculous. So it's
(50:28):
only a matter of time before this is going to
become open source. Like I'm going to make it open source.
I'm going to make these curriculums open source. And the
part that I don't have that I don't think I
can quickly build, and I'm sure other people will and
they'll do a great job is building the tutor that
can then use that curriculum. Right. Like I have Kai here,
(50:51):
but Kai's not specifically trained for being a tutor. Kai
is specifically trained right now because he lives inside of
cloud code to actually be a, um. Developer, right? Kai
is currently built because of his current body and brain structure,
(51:12):
to be an executor of tasks, to be a writer
of code, and that sort of thing. Um, but, you know,
I could reskin him as a tutor. In fact, I'm
getting excited about it. Just thinking about it. Like having
a tutor mode or a set of commands which are
tutor based. Okay, never mind, I'm excited. I'm actually writing
this down right now. I'm making some custom commands for tutoring.
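Claude Code does support custom slash commands as markdown files under `.claude/commands/`; a hypothetical tutor command could look something like this (the filename and wording are made up for illustration, not something that ships with Claude Code):

```markdown
<!-- .claude/commands/tutor.md (hypothetical) — invoked as /tutor <topic> -->
Act as a Socratic tutor for the topic: $ARGUMENTS.

- Never give the full answer up front; ask one guiding question at a time.
- When I write code, point at what's wrong and ask me why, instead of fixing it.
- Quiz me on each concept before letting me move on to the next one.
```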
(51:39):
I'm really curious about this right now, pairing that together
with Substrate. Oh my goodness. Tutoring commands. And then open
sourcing the whole thing free on Substrate and the tutoring
to go along with it. Question is how do you
make that packageable like as a mobile app? Because it
(52:01):
needs to go out to lots of people who need it.
Not like the 18 people or the even the 1800
or 18,000 people who see it on YouTube or whatever,
and they use it. That's barely making a dent. And
it's helping people who already maybe don't need the help
as much. Right. So anyway, I digress. Opportunity cost for college.
(52:24):
It's a tough question. I still think in general better
to have a degree. But right now, at this particular moment,
I am really worried about telling somebody to go and
get a four year degree and basically shutting down their
mind the whole time. What are you doing when you're
in four years of college? You're doing the coursework. You're
(52:47):
trying to please that teacher. You're trying to please that curriculum.
You're trying to please that old structure of college courses
from whenever this structure started. Right. And that is an ancient,
broken structure, extremely inefficient. I'm going to give it a 3%
efficiency score compared to AI tutoring you based on like
(53:11):
custom-matched content. Custom-matched YouTube content? Are you kidding me?
Like, no comparison. I think we could get,
if, let's say, ideal is 100, I think we can get
like a 60% efficiency, up from like, okay, three is
too low, up from like a 10%. I think we
(53:34):
can get, what is that, 5X better. And like how
does that turn into reality. I don't know. I'm talking
about full degrees. Full understanding. Here's what I'm talking about.
Having a better grasp of computer science. A better grasp
of history, a better grasp of psychology, evolutionary biology, political science.
(54:00):
Getting a grasp of those things in one week's time
that is better than 99.99% of all people on the
planet. You can do that with
20 really good YouTube videos combined with active tutoring by
(54:23):
your AI in a Socratic, challenging way to quiz you
to make sure you're able to move on to the
next video. Think about that. That's not even counting that we're
going to make you custom videos, right? Which is not
here yet but is still coming. That's next level.
(54:43):
But I think we can get massive multiples of gains
just from using the tech we already have. So like
I said violently of two minds on this whole college thing. Yes.
You should go. Holy crap. Don't turn off your brain.
You need to be doing all this AI stuff, all
(55:03):
this modern shit at the same time. If you're doing
it to get the degree, go do it. Get the
degree right. Especially if you're almost done. But I don't
think the people being laid off are like, nope, we
can't lay that person off. They have a four year degree.
I don't think anybody fucking cares right now, honestly. Um,
(55:26):
some people do, okay? Some people do. If you're trying
to go into some traditional field, I'm just telling you,
the world might be very different in four years. Like,
dramatically fucking different. So really, really think about opportunity cost.
Best of both worlds is do all the shit that
I'm talking about at the same time. Then you don't
(55:48):
really take a hit by getting the degree right. Plus,
ideally you're using AI to help you get your degree right.
So if it's easy, it's easy. Plus it's friendships. It's
maybe sports, it's extracurricular activities. It's social. You're making a
friend network. There's tons of benefits to college that aren't
related to the curriculum. I'm mostly criticizing the curriculum stuff.
(56:13):
Man, I can't get through these podcasts just talking. Everyone loves
it when I do this though, and go into more
detail and talk about things, but it does take a
lot of time. Um, I got to figure out how
to get it down into my old format of just like,
read the title and boom, go through, be done in
(56:33):
15 minutes. Didn't I say I'm supposed to be done
in 15 minutes? Isn't it supposed to be a 15
minute podcast? Haven't had one of those in two years.
All right. Brain abscess already did that one. The feeling
your work creates matters more than the checkboxes. This is
by Mitchell Hashimoto, who also made the Ghostty terminal, which
(56:56):
is my go to terminal. A really good article there.
Physical technical books create memory palaces that websites can't match.
I was just hanging out with my buddy Tim Leonard,
who I met through UL and, um, hanging
out in Vegas, and he opens up his logbook
(57:18):
that he keeps with him. And he's got like these custom,
like hand-drawn images. So, like, he takes a note or
whatever about a plant or a thing
that he saw with his wife or whatever. He takes
these notes. He had like little pressed flowers, like he
pulled from a place. It was just brilliant, just absolutely
(57:40):
brilliant, analog. He's got a pen, he's got paper, memories.
I think you can have that with technical books as well.
And that's what this article is arguing about. It was
it was really cool. MIT and Scripps researchers develop one
shot vaccines for HIV and Covid. Really, really cool. All right.
(58:03):
Discovery I'm going to try not to talk for 74
minutes about each one of these. I'm going to blast through.
I'm going to try. AI continues computing's evolution towards understanding humans.
A beginner friendly Jujutsu tutorial for non-git users.
(58:24):
This is an alternative to git. It's called Jujutsu.
It's supposed to be awesome, but I do not have
the time for the switching cost right now, the opportunity
cost of me trying to learn Jujutsu right now, um,
and migrate everything, get off of GitHub. Does GitHub even
(58:45):
do Jujutsu? I don't know. I'm guessing not. Git
is in the name. FFmpeg pages simplifies video processing with
visual commands. AI agents should use files and git instead
of context windows. Sounds pretty familiar to me with like
(59:06):
this whole UFC thing I just came up with. Um,
That's one I actually didn't read yet. I need to
open this tab. Because it's in the discovery section. So
I'm just kind of like sometimes kicking off cool links. Um,
that you should go check out. Single servers can handle
(59:28):
millions of HTTP requests per second. Dotfiles don't belong
in Application Support. Loved that one. That was so spot on.
Google lets you block AI overviews from your site. Nardin
automates bug bounty enumeration via Discord. Training yourself to like
(59:52):
things you hate. Cline runs completely offline with LM Studio.
New vibe coded web app from Kathy Helps. I don't
see many of my women friends doing the AI coding entrepreneurship,
so I'm posting this here. This is a woman who
just built an app and is having super fun
talking about it, and she's just really enjoying the whole
(01:00:13):
development process. And I just want to see a lot
more of my women friends, uh, doing this whole entrepreneurship,
like build your own business, you know, AI coding, AI
hacking thing. So I'm going to be sending this to
a bunch of my friends. Trace prompt provides tamper proof
audit trails for LLM applications. Oh, a duplicate, got a
(01:00:38):
duplicate there. Claude gets a Chrome extension. Physical textbooks become
unchanging memory palaces, talked about that one. Why prompts don't
belong in git. Uh, so oops, oops on that one.
Guile's or shares creative bash prompt designs. And K-pop Demon
(01:01:00):
Hunters is Netflix's most watched video ever, and I still
have not watched this, but I do want to watch it.
Recommendation of the week. Start thinking about what you would
do if you had a personal assistant and what you
would give them. Scheduling, planning, collecting information, finding the best books.
Keeping people aware of what you're working on, keeping your
(01:01:23):
family synced up. Finding materials for your hobbies. Teaching yourself topics.
Start building that list because you will soon have this.
Whether it's through, like what I'm doing or a more
consumer based approach that's going to come out soon. Like,
you should start thinking about this. You want to be
able to magnify yourself in this way. Um, and I
(01:01:45):
recommend you go check out the personal AI video
that I just put out talking about how I'm doing it. And, um,
the aphorism of the week argue for your limitations, and
sure enough, they are yours. Argue for your limitations, and
sure enough, they're yours. Richard Bach.