
January 29, 2025 33 mins

Plus: The AI Vulnerability Glut, Remotely Hacking Subarus, Criticism of CVSS, the UnitedHealth Breach, and much more...

Protect Against Bots, Fraud, and Abuse. Check out WorkOS Radar at workos.com/radar

Subscribe to the newsletter at: 
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Does your app get fake sign-ups, throwaway emails,
or users abusing your free tier? Or worse, bot attacks
and brute force attempts? WorkOS Radar can block all
of this and more. A simple API gives you advanced device
fingerprinting that can detect bad actors, bots, and suspicious behavior.
Your users trust you. Let's keep it that way. Check

(00:23):
out WorkOS Radar at workos.com/radar. That's workos.com/radar.
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology, and society, and how best to
upgrade ourselves to be ready for what's coming. All right.

(00:48):
Welcome to Unsupervised Learning. This is Daniel Miessler, and I'm
building AI to upgrade humans. Episode 466. Hope your week
is starting off better than Nvidia's did, which is probably
pretty easy to do. I went to a phenomenal offensive security
and AI conference sponsored by Rob Ragan. Well, not really sponsored
by him, but put on by him. It had lots of

(01:09):
cool sponsors for different things. But anyway, it was a
whole day of hacking. Basically, it was a hackathon fundamentally,
but it was also just a great networking event for anybody,
kind of at the intersection of AI and security in
San Francisco. And it was just great. Rob did
a great job. Kind of a nerd observation: far too
many people don't know how to use NameDrop. If

(01:32):
you take your iPhone and set it next to someone
else's iPhone at the top. Like if you just touch,
you don't even have to touch, but you get close enough.
It's NFC-based and you can transfer your contact.
Every time I do this, people are like,
what have you done? Like, they can't believe it. It's
got this little wobbly, liquid effect. It's like

(01:53):
the coolest thing ever. And it's how I like to
exchange contacts with people who have iPhones. And yeah, I
don't know why more people aren't doing it or don't
even know the feature exists. And this is in San Francisco,
by the way. This is like the center of mass of
Apple and the center of mass of, like, the tech scene.
And people in San Francisco don't know how to do

(02:16):
this thing. They don't know that it exists. Really glad
I bought a bunch of TSMC last week because it
just massively crashed. Uh, yeah, that's awesome. All good though.
Playing the long game. I think it's fine. We'll actually
talk about that a little bit later. I just finished
reading a book for UL Book Club: The Picture of Dorian Gray.
Lots to say about that. That's a whole other talk show.

(02:38):
But one thing I will say is this: read classics.
You need to read classics. They are more dense with
value than almost any other type of book, probably any
other type of book. And every time I read one,
I'm like, how many of these are just sitting there
full of this much knowledge, and I'm not reading them. Infuriating.

(03:01):
So luckily we have UL Book Club to prod us
into doing that. Here's what we do. In UL Book
Club we oscillate between nonfiction, fiction, nonfiction, classic;
nonfiction, fiction, nonfiction, classic. So what we're doing is half nonfiction,

(03:21):
one quarter classic, and one quarter fiction. And
that has been a good mix for us. And we've
been doing this since like 2016 or something. So
this book club has been going a long time. This week's
discovery section is really good. I made some tweaks to
make it even better. So this is kind of the

(03:42):
first instance of that. Had a great conversation with Faisal Khan,
a GRC guy over at Vanta, and that was a
sponsored interview, and it was fantastic. The last couple of sponsored interviews have
just been really, really good. So that was fun. I'm
going to jump into security: SonicWall vulnerability being actively exploited.

(04:04):
Really nasty vulnerability in their SMA 1000 appliances. And it
is being used in the wild and it is a
9.8 on the Richter scale. My buddy Sam Curry and
Shubham Shah found that they could remotely unlock, start and
track Subarus. So a couple of the GOATs of bug bounty
and yeah, found a really nasty one. And it looks

(04:27):
like the network that they're using actually involves other manufacturers
as well. So it's not just Subaru, but I think
the main effect was on the Subaru stuff. And I've got
a thread here on the downsides of everyone getting a
coding assistant, where I say one of the biggest impacts
of AI that goes kind of unnoticed is that we're
about to see an explosion of poorly built applications, specifically

(04:49):
applications built completely by AI with no thought of security whatsoever.
And the reason, a little bit of, like, opening
the kimono here, the reason I noticed this, or just
wanted to think about it, is because I'm doing it.
I see myself being sloppy. Now I'm going and cleaning up.

(05:11):
In most cases. But there are certain cases where, like,
it's not really publicly exposed, so I'm not too worried about it.
Like it's only running local. I am moving so fast
that I can see myself being sloppy. There's another thing
that is really seductive about AI coding that you really

(05:33):
have to watch out for, which is the better it gets,
the more it encourages you to just kind of trust
it and just kind of go with it. And this
is bad for a number of reasons. One,
it's bad for the security reason, because what you don't understand,
you can't secure. Okay. But another reason is it just

(05:56):
stops you from actually getting under the covers and actually
doing the mental work of figuring out how the components
should be laid out. So as the AIs get better,
what you'll do is you'll just drop a stupid
paragraph into your AI agent in a coding tool like Cline,

(06:16):
or like Cursor, or like Windsurf or something. This is
a starting point that so many people are doing right now.
It's not my recommended way to start, and it's not
the way I start when I really want to build
a good application. If I want to build a really
good application, I actually start with a structured prompt, which
is a little bit like a PRD document, and you

(06:39):
have that in the directory, and then you tell the
agent to go look at that thing, or you put
it in the place where it automatically parses it, like
in the Cursor rules file, for example, and
that's the way you do it properly. But what most
people are doing, I would guess, because I even find
myself doing this as well, is they're just babbling, typing,

(07:01):
or even dictating, oh, I want an app that does
this and oh, it's going to be like this. Oh,
and make sure it looks cool. And oh, and by
the way, like it should have this thing, but I
want the button on the left instead of the right.
It's just this long paragraph of babble garbage. And then
they press enter and then the thing starts making stuff.
Now it could be making the most horrendously insecure things.
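A hypothetical sketch of what that structured, PRD-style prompt file could look like (the filename, sections, and stack here are purely illustrative, not taken from the episode):

```
# PROJECT.md -- hypothetical PRD-style prompt, kept in the repo root

## What this app is
A single-user bookmarking API. Local-only for now, but treat it as if it were public.

## Stack
Python 3.12, FastAPI, SQLite. No new dependencies without asking.

## Security requirements
- All secrets come from environment variables; never hardcode keys.
- Validate and length-limit every input field.
- Auth on every route, even in dev.

## Layout
- api/ for routes, models/ for schemas, tests/ required for every endpoint.
```

You'd then point the agent at this file, or put the same content wherever your tool automatically parses it, instead of typing the babble paragraph.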

(07:25):
It could be making things that are not really scrutable
or understandable, like they split the logic across multiple pieces.
So it's not readable, it's not understandable.
And there's a tendency for people because they want to
move fast. And again, when I say people, I mean me.
And I think the situation is much worse than for

(07:47):
me because I'm actually guarding against this actively. So I
think it's way worse for just the general public. And
it's already bad for me. But it's like what you
end up with is a thing that kind of works.
Which is why I use this image here, um, which
my team actually came up with. Angela came up with this, and, uh,
it's like you end up with this monstrosity that can, like,

(08:11):
fall over so easily. It's just total, absolute garbage. And
then you're like, uh, no, not like that. No, no.
Change that one thing. And then after like 20, 30, 50,
100 iterations of this thing over the course of like
an hour and a half, like if you're doing something
else on the side, but you come back and check
on its progress. Okay, maybe you made something. But one,

(08:34):
can you edit it? Do you understand it?
Do you understand all the technologies that you used to
build that thing? Probably not. Did it use proper technique
and best practices to build the stuff? Probably not. It depends,
there's wide variation there, but probably not. And finally, like, okay,

(08:54):
for all the configs: did you use cloud services? Okay, cool.
Where are the API keys? Are the configs readable?
Oh, did you use cloud storage? Are those buckets locked down?
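For the bucket question specifically, a quick spot-check is easy to script. A minimal sketch, assuming AWS with boto3 and credentials already configured (other clouds have equivalents):

```python
# Minimal sketch: flag S3 buckets without a full PublicAccessBlock config.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        locked = all(cfg.values())  # all four public-access settings enabled
    except ClientError:
        locked = False  # no PublicAccessBlock at all -- treat as suspect
    print(f"{bucket}: {'locked down' if locked else 'CHECK ME'}")
```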
All these things: the faster we get at
using these AI tools, and the more people get in
who aren't actual coders, who never coded anything beforehand, this

(09:16):
problem is 100 times worse for that. So bottom line is,
there is so much good that's going to come out
of AI. And I would say it's still more good
is going to come out of AI. Because I would
say creation is more important than security. That would be
my thing. Which might be strange to hear

(09:37):
from a security person, but I honestly believe that's true.
I want secure creation. That's what I want. But, um,
if I had to choose between "everything is secure and nothing happens"
and insecure creation, I would rather have insecure creation. So I think that's
more important for humans. But bottom line is there are
externalities to these AI tools, severe externalities to these AI tools.

(10:04):
The better they get, the more they coax you into
not looking under the covers. And it's nasty under the
covers with a lot of these AI tools. The creator of
curl just announced they're completely abandoning CVSS scoring because it's
fundamentally broken for widely used open source projects. And

(10:26):
this is specifically Daniel Stenberg explaining how CISA recently marked
a low severity vulnerability as a critical with a 9.1. And
he's basically saying it should not have been a 9.1. It
should have been a low. So all this talk has
really been flying around for a long time, and they
kind of made some updates with the latest version of CVSS.

(10:48):
But I think ultimately the problem is that these static
things are not made to be looked at constantly. They're
not made to look at the actual risk of your
org and your codebase and all the context around that,
combined with the risk of the vulnerability. And then to
compare those two. And this is probably going to surprise you,
but I think AI might be a good application here.

(11:11):
You've never heard that from me, that AI might be useful.
But the idea is that what CVSS
and other tools like this are bad at is that
they're static. Right. You put some dumb information in there
about like, oh how exposed am I or whatever. That's
not really the way to find out how exposed you are.

(11:34):
The real way to do that is to pull in
real time context. So the future of all of this
is like super obvious. It's basically context plus intelligence:
combining a vulnerability with your actual risk posture. And
that's what's going to determine what your current risk is.
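A toy sketch of that "context plus intelligence" idea, just to make it concrete. The weights and inputs here are hypothetical illustrations, not a real scoring standard:

```python
# Hypothetical contextual scoring: scale a static CVSS base score by
# real-time facts about your own environment. Weights are made up.
def contextual_risk(cvss_base: float, internet_exposed: bool,
                    exploited_in_wild: bool, asset_criticality: float) -> float:
    """Return a 0-10 priority from a 0-10 CVSS base score plus org context."""
    context = 0.2                        # baseline: the vuln exists somewhere internal
    if internet_exposed:
        context += 0.4                   # attackers can actually reach it
    if exploited_in_wild:
        context += 0.3                   # e.g., the SMA 1000 bug above
    context += 0.1 * asset_criticality   # 0.0-1.0: how much the asset matters
    return round(min(cvss_base * context, 10.0), 1)

# A "9.1 critical" that is internal-only and unexploited scores low for you:
print(contextual_risk(9.1, internet_exposed=False,
                      exploited_in_wild=False, asset_criticality=0.2))  # ~2.0
```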

(11:56):
That's just the reality of what's coming. And so
it could be that, you know, there's nothing particularly wrong
with CVSS. It's always been like this. The previous
versions were, you know, worse. And then there's other systems
and they also have their problems. But ultimately it comes
down to this one thing, which is what is the

(12:17):
state of the world? What exactly is the vuln and
how does it relate to me? And if you want
to do that properly in the old days, that's
like sending security engineers to go do research, to study, to,
you know, look at a whole bunch of docs, figure
out where we have this thing installed. We're talking about
asset management. We're talking about nightmares. I mean, you can

(12:40):
spend weeks looking at one vuln and seeing how vulnerable
you are to it. If you want to have a
good assessment, right. So the whole point of this is
that you're going to be able to do that much
faster now. And I think that's ultimately what replaces systems
like CVSS. And the Change Healthcare ransomware attack is now
officially the largest healthcare breach in the US: 190 million people affected.

(13:04):
Now Swedish authorities just grabbed a ship they think cut
an underwater internet cable between Sweden and Latvia. This is
after multiple similar incidents and a lot of them are
tying back to Russia. All right, the big story in
AI and tech: Nvidia loses $600 billion after

(13:26):
the DeepSeek AI breakthrough. And the stock market overall on
Monday lost $1 trillion. So it was the single biggest
day of a market loss in history, and it beat
the previous record, which was also from Nvidia. But um, basically
here's what happened. This is what it came down to like.

(13:47):
Nvidia had been a darling of AI hype because they're
the GPU leaders. Much of the future hope of making
money from AI has been embodied by Nvidia. The idea
is that GPUs rule the AI world and Nvidia rules
the GPU world. And implicit in that assumption is that

(14:07):
Nvidia chips are scarce and necessary and expensive. That means
anyone who wants to be a leader will have to
have lots of Nvidia chips. So DeepSeek just blew
that out of the water because they produced something that
should have cost them like tens of billions, but they
did it for $5.6 million. They basically found workarounds that

(14:28):
allowed them to get a lot more performance for less resources.
And it freaked everyone out. Mostly investors, but it like
it messed with the entire stock market because the stock
market is largely pushed by tech. So basically less necessary
equals less valuable. My analysis is: so what? Right. If anything,

(14:52):
DeepSeek is nothing but exciting because we're getting more
AI for less resources and we need way more AI.
And I've got a whole video which I'll link up here,
but it's like, how much do we need? Way, way
more than we have. And I keep doing this thing
where I'm like, oh, we only have 0.000 whatever, ten

(15:13):
zeros and a 1. We're only at that percentage of how
much AI that we need. But I don't know that percentage.
I'm making that up. I'm trying to show lots of
zeros to prove it's like a tiny fraction of 1%.
It's a made-up number; everyone should obviously know that.
But the point is we're just getting started. And in
that video, I lay this all out. I lay out

(15:35):
what types of things we as a society are going
to want to do with AI, and why that requires
so much processing, so much inference, so much intelligence to
be able to do that for basically every human problem.
And when I say human problem, I mean that means
also business problems and stuff like that. Personal problems, everything. So, uh,

(16:00):
I think we're very confused about this whole thing. Um,
and the advantage that DeepSeek found is an example
of what I've been calling slack in the rope. And
I've been talking about this with my friend Jai Patel
for a very long time, and he's, like, one of
the smartest AI people that I know. And I basically
said back in like '23, I'm like, look, there

(16:22):
is so much slack here that's going to allow people
to jump way ahead, and it's going to jump them
ahead of the line, even above people who've
been grinding. So, for example: let's say we're at
parity between, like, two competitors, and
somebody spends billions of dollars and they jump up

(16:43):
by 20%, but somebody figures out, hey, wait a minute.
Like if you just double the size of the alphabet
and you add some special characters to the mix or whatever,
it actually jumps up by 600%. Isn't that weird? And
then everyone starts doing that, and then everyone collectively jumps 600%. Well,

(17:03):
what happened to the last six months where competitor B
just spent $4 billion trying to go up by 20%?
My argument to Jai was basically: there are so many
of those big jumps completely hidden from us because we
have no idea how any of this stuff works. So

(17:25):
this China team just found one of these things. They
found it through a number of things, like mixture
of experts, and they've got papers detailing everything they did.
But it's like, hey, look, we just did this. And
now it's obvious to everyone, and everyone's going
to go copy that. That's another big jump.
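For context, mixture of experts is the kind of routing trick where only a few "expert" subnetworks run per token. A toy numpy illustration, with made-up shapes and values, and not DeepSeek's actual architecture:

```python
# Toy mixture-of-experts routing: a router picks top-k experts per token,
# so you get a big model's capacity at a fraction of the compute.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ gate_w                   # router score for each expert
    chosen = np.argsort(logits)[-top_k:]  # only the top-k experts run
    weights = np.exp(logits[chosen])
    weights /= weights.sum()              # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.standard_normal(d_model)).shape)  # (16,)
```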

(17:45):
I think there are hundreds of times the amount of performance
just laying on the floor, sitting on the table, ready
for the taking. They just have to be discovered.
And I think as the technology changes, there's a new algorithm,
there's new transformers that we use, maybe a completely different

(18:07):
type of AI. Every time we go into this unknown,
it's a new set of things, a new set of
tricks or slack in the rope just sitting around to
be discovered. And my point is that those pieces of slack,
those tricks that allow you to jump forward, and I
don't want to demean them with the word trick.

(18:29):
Someone mentioned that. Like when I say trick, I just
mean in retrospect it'll appear obvious. It doesn't mean it
doesn't take extreme genius and hard work and discipline to
actually discover it. So I just want to make that clear.
But the point is, we're going to be able to
just keep jumping massively. And I think a very large

(18:52):
percent of those jumps are going to come from these
sorts of tricks or slack in the rope type effects.
And that's what I think we had with, uh, with
DeepSeek. Post-training is perhaps the most powerful category of these tricks.
This is what I said back in August of last year.
The most powerful category of these tricks is like teaching

(19:13):
a giant alien brain how to be smart when it
had tremendous potential before, but no direction. And again, I'm
saying there's lots of these different ones, but kind of
playing off this thing from 2024, I'm going to say
this in a different way, and this is kind of
a casual way of me thinking about this, which, uh,
I feel like Jai doesn't really like it when I do

(19:33):
this because it's not perfectly accurate. But one way I'm
casually thinking about this is that there are now two
steps here: the intelligence, which is the model, and then
the wisdom, which is the reinforcement learning. So this is
kind of where a lot of this slack has come from,
is the fact that the model itself is not super
intelligent by itself. You actually have to tell it what

(19:56):
constitutes intelligence or wisdom or smarts or usefulness. And that's
after the fact, with the post-training and, more generally, the reinforcement learning.
So it's almost like intelligence is the size of the brain.
And reinforcement learning is the life experience. And like I said,
it's not technically true. But I think it's pretty powerful.

(20:18):
So bottom line is I think the market reaction
to all of this is very wrong. Well, I've got
another piece in here about the total TAM of AI.
In fact, I'm just going to jump to that right
now because I don't think I mentioned it anywhere else.
Let's see here. Oh, this is really cool from Andrej.
Move 37 is the word of the day. It's when

(20:39):
an AI trained via the trial and error process of
reinforcement learning discovers actions that are new, surprising and secretly brilliant.
So this is in reference to Lee Sedol losing at
Go to an AI, and it was, uh, it was
move 37 that made everyone freak out. So I thought
that was a cool concept, but, um. Yeah, if I

(21:02):
go here, here we go. The total TAM for AI
is a combination of two primary components. The total cost
of human workforces, and the amount of money that current
and future companies will pay to 10x or 1000x
their business. So we're talking about hundreds of
trillions of dollars. So if you go back to this,

(21:25):
this is why I think, uh, the whole Nvidia
DeepSeek thing is very mistaken. I did
a DeepSeek analysis of the TAM, actually, very tentative, very non-scientific,
but it came up with like $80 to $120 trillion
as being the TAM for AI over the next ten years.

(21:48):
So I don't know. Again, it's better if you do
like one of the, uh, new Google Deep Research
projects to actually feed it that thing, and then give
that to DeepSeek or o1 or o3, whenever
it comes out. Um, that would be a better way
to get those numbers and have it fully explain the numbers.
But either way, it's trillions upon trillions of dollars. So

(22:13):
that's why I think this whole thing is overblown and mistaken.
And that's why my attitude is very much "so what?"
Because the market has gone from being foolish to overvalue
Nvidia to being foolish to undervalue it. Right. It's because
there is just so much to go, so much room
before we get to anything like a ceiling. And here's

(22:35):
where I talk about that. Right. So what happens to
Nvidia or any other part of the stack doesn't matter
much at all, because we're still at whatever percentage of
the amount of AI that we need in the world.
Doesn't matter how we get there, it's not predictable. Could
be ARM processors, GPUs, some brand new thing. Doesn't matter.
I think Nvidia and TSMC and all these people are

(22:58):
going to continue to rise because again, we're at the
tiniest little sliver. The question is who is going to
move into the next space. Can Nvidia pivot? I believe
they can if they had to. I'm not sure they
have to. First of all, DeepSeek was built on Nvidia.
If they had better Nvidia, DeepSeek would be better.
Like, more Nvidia still makes everything better, right?

(23:21):
And I'm not trying to just go pro Nvidia because
I'm invested. I actually got a decent amount out of Nvidia.
I want to say, um, about two weeks before this,
because I made the choice to move more money into
places that would benefit from AI, and not necessarily just

(23:41):
the roads and bridges. Happened to get lucky there because
I pulled some money out before it went down. But, um,
I did buy the dip a little bit,
and I very much would have liked to buy even
more at the dip of Nvidia, because I think it's
still massively going up along with everything else. It's like,

(24:02):
I don't think they're particularly perfectly, uh, set to be
the ones. Everyone's going to be the one if you're
smart about it and you have a good leader, which
Jensen is. All right. Enough about that. So OpenAI launched
a preview of Operator, which can navigate web browsers just
like a human. I think we need more generalized agents.

(24:22):
I didn't like the whole App Store vibe to it.
Google just dropped a massive update to Gemini. This is
like one of the sleepers. Honestly, Gemini is one of
the sleepers of this whole competition. Google has been just
doing insanely well for the last six, twelve months.
I use the Flash 2.0 Pro, I think it

(24:44):
is for a lot of analysis where I need really
good haystack performance over like 2 million tokens. So when
I do RAG searches across, like, all my content, for example,
and I get back like 50 documents that hit on
the RAG, and this is over like 3,000 posts, 3,200

(25:05):
posts, going back to, like, 1999. So if I
have a big giant piece of content like that and
I don't want to miss, like, the nuggets, the,
you know, the needles inside of this
massive haystack, I send it over to Google and it
does a better job of finding the needles. Plus,
I mean, just brute force: it has 2 million tokens as opposed to 200,000 for, like, Sonnet.
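A rough sketch of that workflow: similarity-rank the posts, keep every hit, and stuff them all into a long-context model instead of trimming aggressively. The embed() here is a dummy stand-in, and the final prompt goes to whatever 2M-token model client you actually use:

```python
# Sketch: rank ~3,200 posts against a query, then hand ALL the hits to a
# long-context model. embed() is a stand-in; swap in a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Dummy embedding so the sketch runs end to end.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def top_hits(query: str, posts: list[str], k: int = 50) -> list[str]:
    q = embed(query)
    vecs = np.array([embed(p) for p in posts])
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    return [posts[i] for i in np.argsort(sims)[-k:]]  # best k posts

posts = [f"post {i} ..." for i in range(3200)]        # placeholder corpus
context = "\n\n---\n\n".join(top_hits("the needle I care about", posts))
prompt = f"Answer only from these posts:\n{context}\n\nQuestion: ..."
# `prompt` now goes to your long-context model of choice.
```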

(25:27):
Anthropic built a Citations
API to combat hallucinations. That's really cool. Uh, Google just
dropped another billion dollars into Anthropic. Very interesting. So they're
not only doing their own play. They're also betting against
OpenAI with Anthropic. I don't know if they're in OpenAI

(25:49):
as well. But anyway, they're investing in Anthropic, which I
thought was really interesting. A leaked memo from Apple says they're
focusing completely on rebuilding Siri's infrastructure and improving their existing
AI models this year, which, yeah, it's the AI chief
talking about it. So obviously they're talking about AI. While

(26:10):
overall startup funding has dropped significantly since 2021, seed rounds
are actually getting bigger. This is interesting. My friend, uh, Mike, um,
has a really, really cool podcast on this, uh, which
you can check in the show notes. His name is
Mike Privette, and you should absolutely go check out his stuff. Uh,

(26:30):
Colorado police are now giving away free AirTags and
Tile trackers to help prevent vehicle theft in their community.
A Ring camera in Canada, this is insane, caught the
exact moment of a meteorite smashing into basically a
walkway in front of their house. Someone had just come
out and passed that location, and you

(26:52):
hear like this really loud crash and it hit tiles
on the walkway. And I, I don't understand how it didn't, like,
blow out the tile and create a little hole in
the ground. But anyway, they picked it up. It was
an actual meteorite and it just, like, blew up against
the tile. I don't know, that had to be going
thousands of miles per hour, right? It's terminal velocity, I guess.

(27:15):
Is it hundreds of miles an hour? I don't know,
but had to be going fast. It definitely would have
killed someone if it hit them, I think. Anyway, I
don't know why it didn't crack the, uh, the stone
on the ground. A new Harvard study shows we should
be taking blood pressure readings while lying down instead of sitting.
And Hans Zimmer is apparently in talks with Saudi Arabia
to remake their national anthem. And a pre-mortem is basically

(27:38):
where you imagine your product has already failed, and you
work backwards to figure out why, and you do this
before you even start the project. That's brilliant. Reminds me
of the PR system inside of Amazon, which I learned
a lot about because we used that at Apple. And uh, yeah,
four components of top AI model ecosystems. It's in the

(27:59):
ideas section, related heavily to this DeepSeek stuff. And
for discovery: Cline is my new go-to for doing
AI stuff. It is an extension built into standard VS
Code, and I actually like it more than using Cursor, which
is like more integrated and everything. Cline is more conversational.

(28:24):
It's more following my thread. It just seems more intelligent
and more, like, cohesive in the way that it answers things. Um,
and again, I'll show you the website. Yeah: cline.bot.
Thoughtful AI coder. I don't know why it's better, but honestly,
it feels better to me. I take this pane right here.

(28:45):
I actually move it over to the right. That's how
I have it going. A developer created a brilliant trap for
web scrapers by using specifically crafted CSS selectors. So:
anti-scraper trap.
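The linked write-up has its own specifics, but one generic flavor of the idea is a CSS-hidden honeypot: humans never see the link, so anything that follows it is a scraper. Flask and the route names here are my own illustrative assumptions, not the developer's actual code:

```python
# Generic honeypot sketch: a link hidden via CSS flags any client that follows it.
from flask import Flask, request

app = Flask(__name__)
flagged: set[str] = set()

@app.get("/")
def index():
    return """<style>.trap { display: none; }</style>
              <a class="trap" href="/secret-archive">more</a>
              <p>Real content here.</p>"""

@app.get("/secret-archive")
def trap():
    # A browser applying the CSS never renders this link; scrapers grab it anyway.
    flagged.add(request.remote_addr)
    return "nothing here", 404
```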
I recommend you try out DeepSeek. One
thing about DeepSeek that is really, really cool is that,
uh, when it responds, it just starts talking out

(29:08):
loud and you just see it. It's very similar to
O1 in that way. You just see it. That's the
whole reason it's doing so well in the benchmarks. That's
the reason it's controversial because it's very much acting like O1.
But bottom line is you see it having a conversation
with itself and it goes back and forth and it's like, yeah,
but this is really cool. Yeah. But have you thought

(29:29):
of this? Actually that could be wrong. Let me think
about this. And it does that for a number of
paragraphs before it actually gives you an answer. I recommend
you try it out with Ollama. It's really easy to install.
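If you want to try it from code as well as the CLI, a minimal sketch with Ollama's Python client might look like this (assumes `pip install ollama` and that you've already pulled a deepseek-r1 tag sized for your machine):

```python
# Minimal local DeepSeek-R1 call via Ollama's Python client.
import ollama

resp = ollama.chat(
    model="deepseek-r1",  # tag assumed; check `ollama list` for what you pulled
    messages=[{"role": "user", "content": "Think out loud: why is the sky blue?"}],
)
print(resp["message"]["content"])  # the reply includes its visible reasoning
```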
And magenta.nvim is one of the AI-first
vim tools that I'm trying to use. Ultimately, I'd like

(29:50):
to get out of VS Code and have all my AI stuff,
something like Cline, inside of vim. But it's hard to
do that in the terminal, which is why I'm, like,
stuck in VS Code and not happy about it. Ben Thompson
had a pretty cool DeepSeek FAQ, and Lane
Changes dropped a cool new tool that lets you do

(30:10):
deep web research. This is the thing I was talking
about before, completely using hosted LLMs. I've not messed
with this one yet, but I cannot wait to. Um.
Somebody created a simple service that converts WordPress blogs to
Hugo static sites. I highly recommend anyone building a website go
static these days, and basically: keep your markdown. Your markdown

(30:32):
is your website; that is your content. You can make
a RAG out of it. It is just text. And
basically, any place you want to bring your blog to,
give them the markdown and have it turned into
a nice website that you like the aesthetic of.
So it's content and aesthetic: don't mix the two, and

(30:52):
don't have it where it's like the platform owns the
mixture because then you're screwed. Um, and I've had this
happen to me multiple times. I just had it happen again,
getting my content out of Beehiiv. I love
Beehiiv for my newsletter, but it's not my favorite place
to run an actual website. Philips Hue bulbs are getting motion sensing.

(31:15):
I really hope this is true and I can't wait
to get it and play with it. It'd be nice to
have all the different Hue things basically also be motion sensors.
How to say no as a product manager. This is great.
This is great. I think it just refreshes. Yeah, it
just refreshes: "Let's not do that." So I'm hitting refresh.

(31:37):
"Can we revisit this after the next sprint?" Refresh. "We
need to ensure alignment before moving forward." Refresh. "This could
be part of a larger initiative, but let's hold off
for now." This is really good. It's really good. Uh,
and recommendation of the week. Remember that AI is not
AI stocks. AI is not the survival of AI companies

(31:59):
that did marketing in 2023 and 2024. AI's TAM is
the replacement of human labor and the magnification of GDP
that can come from millions or billions of people becoming
a founder, builder, or creator. That is the ball to
watch and everything else is noise. And the aphorism of

(32:20):
the week is: to be completely cured of newspapers, spend
a year reading the newspaper from the previous week. To
be completely cured of newspapers, spend a year reading the
newspaper from the previous week. Nassim Nicholas Taleb. Unsupervised Learning
is produced on Hindenburg Pro using an SM7B microphone.

(32:43):
A video version of the podcast is available on the
Unsupervised Learning YouTube channel, and the text version with full
links and notes is available at danielmiessler.com/newsletter.
We'll see you next time.