Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology and society, and how best to
upgrade ourselves to be ready for what's coming. All right,
welcome to Unsupervised Learning. This is Daniel Miessler. All right.
(00:21):
So, an absolutely must-see conversation. I'm going to open this
up real quick.
S2 (00:26):
One of the top.
S1 (00:28):
Yeah, these guys here. Unbelievable conversation about AI specifically.
It is extraordinarily good on the aspect of what just
happened with DeepSeek, and the competition around chips and
AI and Nvidia and all that stuff. And basically how
(00:48):
competition is happening between all the top model makers, open versus
closed source, basically everything. They're just doing so well in
that conversation. So highly, highly recommend that. We made the
list of SANS's top security newsletters and podcasts. That was cool.
Got a quick thing on how to use o1 and
(01:08):
o3 in Fabric. The -r flag is basically the
flag you want to use. You can't use -s, um,
you have to use -r, which is raw, because you
cannot send a temperature to o1 or o3, where you
can with most of the other models.
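Just to illustrate the temperature thing outside of Fabric: here's a minimal sketch using the OpenAI Python client. The model list and the helper are my own example, not Fabric internals, so treat those names as assumptions.

```python
# Minimal sketch, assuming the official OpenAI Python client (pip install openai).
# Model names are illustrative; reasoning models reject sampling params like temperature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REASONING_MODELS = {"o1", "o1-mini", "o3-mini"}  # assumed names for this sketch

def ask(model: str, prompt: str, temperature: float = 0.7) -> str:
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if model not in REASONING_MODELS:
        kwargs["temperature"] = temperature  # only send temperature where it's accepted
    response = client.chat.completions.create(**kwargs)
    return response.choices[0].message.content

print(ask("o1", "Summarize the DeepSeek news in one sentence."))
```

That's the same idea the raw flag is working around: just don't send the parameter to models that refuse it.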
And after listening to that conversation I just told you about, I
(01:30):
ended up tightening up my definition of AGI, which we're
going to go into more depth on in a second.
But it's essentially the ability for an AI, and that
could be a model, a product, or a system, to perform
the work of an average US-based knowledge worker from 2022. And, uh, yeah,
(01:51):
I talk about how this is the most debated term,
and I want to spend a lot of time on it.
This is in RAID, Real-World AI Definitions.
And I've got two criteria for whether or not something
is a good definition of AGI. Regular people should understand
if we reached it or not. And attaining it should
(02:11):
be significant to society. So who cares if nobody cares
about it? And also, who cares if you can't figure
out like what somebody is talking about? You know, they
start using big words or something. I feel like that's
not going to stick. It's not a useful definition. All right. Security,
DeepSeek exposed customer data in an unprotected database. So basically
(02:32):
as soon as they launched and all this press starts happening,
Nvidia's stock starts falling. And researchers at Wiz found an
open database full of nasty stuff: chat logs, API keys. It's
in Chinese, so I'm not sure. This
(02:52):
is kind of what I was wondering. Does it reveal
any evidence that they used OpenAI for training? Because that
has been one of the things that people have been alleging,
that the only reason they got so good is because
they were using OpenAI to actually train on. And I'm
curious if that's talked about in the logs, or,
(03:13):
you know, if there's any evidence of that happening inside
of the leak. CISA found that
Contec patient monitors have secretly been sending patient data
to China, and you can download and execute files remotely. Evidently,
there's some sort of RCE-type deal there. And evidently,
(03:36):
when you actually fixed it, it still didn't fix it;
it just disabled the network interface, which the backdoor re-enables.
So that was kind of a mess. SonicWall says there's
a massive exploit campaign against an authentication bypass. And, uh, got
some details here: 8443 is the port, and about
2,000 of these things are vulnerable on Shodan right now. Major
(04:00):
hacking forums seized in international operations. Law enforcement took down
some of the biggest hacking forums, including Cracked and Nulled,
which had over 10 million users combined. A new startup
called Backline raised $9 million to use AI agents that
automatically fix vulnerabilities. This is interesting, right? Everyone's talking
(04:21):
about using AI to find vulnerabilities. What about fixing the vulnerabilities?
So I'm really looking forward to vulnerability management remediation services
that use AI. I personally think that this is going
to require a whole lot of understanding of the company.
It's going to require a lot of asset management. You're
(04:41):
going to need to know people. You're going to need
to know organizations. You're going to need to know how
different engineering groups actually push code. So there's a lot
of context you're going to have to get from something
like an asset management system to be able to do this.
Tulsi Gabbard is facing a lot of pushback in
her confirmation. I'm not sure if she's already been confirmed. Uh,
(05:02):
I'm trying not to pay attention to the news right now.
It's a bit depressing, honestly. But, um, yeah, I want to
make a little bit of a statement here? I think
if you dump top secret documents to the internet, this
is one of the things she was being asked about
was Snowden. Or if you break into the Capitol building
on certification day, when the election results are being verified, because you want
(05:24):
to hopefully change the results, you are an actual criminal.
That's what I think. At one point I saw
Snowden as a whistleblower, too. I think he actually technically
is one, and I think he might have been trying
to do some good at some point, but I think
it kind of got all messed up. And I haven't
really been on that boat for many years now. Bedfordshire
(05:46):
Police just became the first UK force to deploy Palantir's
AI system, and they got some good results: 123 at-risk
kids were found in just one day. Next, an argument that
we're getting passkeys all wrong, and that they should be
used alongside magic links, not as a complete replacement for
other auth methods. I've been thinking about this a lot.
(06:08):
I really do love passkeys, but I do worry that,
I don't know, it feels like we might be
being a bit sloppy with them. All right. This is
why I think you should care about us reaching AGI.
Wanted to say more about the AGI thing. I think
it's the most important topic in AI. Actually, tons of
very smart people don't know why they should care about it. Like,
(06:32):
who cares if AGI hits this or that benchmark? And
I think there's only one good reason why you should care.
And here's how I talk through that. So think AI
coworkers, right? Like actual coworkers that you have in person.
Imagine your team at work. You've got like five
coworkers, or 20, or 35, whatever, however big your team is.
(06:56):
Now imagine it's like 10,000 co-workers instead. Like overnight. So
it was seven people. You show up and now it's
a Zoom call with 150 people or 10,000 people instead
of seven. And you're watching them work, you're seeing them work.
A couple of them you see physically working; you see
(07:17):
the results of their code, you see it come in.
You get other signals telling you how good of a
job they're doing or whatever, but you find out they're
not perfect. They make mistakes just like anyone else. Someone
still has to review their work. They still get lost.
Sometimes they're like, what are we doing? Like, I'm not sure.
You know, I'm not sure if I should be working
(07:37):
on this project. Does this look correct? Or they mess
up the code base or whatever. In some ways, they're way,
way better than your human counterparts, and in some ways
they're just way more stupid, right? But what they do
is they make steady progress. They show up for video calls.
They can read docs. They can read code. They're talking
(08:00):
to you in Slack. They're asking questions. When you ask
them a question, they give you a response about the
work that they're doing, or they help you find a document, whatever.
And they can also readjust based on a coworker or
their manager telling them, hey, that's not the way you
should be doing this, or we got a new directive
from above. You know, the goal is slightly shifted or whatever.
(08:21):
They can change what they're doing. But the key here
is there's actually 10,000 of these people, right? There's 10,000
of these agents instead of ten, or there's 100,000 of
these instead of your team of 100, and they work
24/7. They put in the work, they put
in the effort, and they do not get tired. And
(08:42):
they also constantly improve. So if they start off with
like 105 IQ, which is, you know, decently smart or
two years of knowledge in security or programming or intern
level or whatever. After a year or two, they've had
like 20 updates, which were silent updates or whatever. And
assuming none of those updates went horribly wrong, uh, now
(09:03):
they got 110 IQ. Now they got 120 IQ. Right.
Or whatever the AGI analog to
IQ is; it's not quite IQ, but call it raw
fluid intelligence. Right. So they're getting those constant updates, but
they're also learning to be better coders, not just by
the work that they're doing, but actually by just better
(09:24):
and better models. So now instead of 105 IQ with
two years work experience, now it's 120 IQ with 20
years of experience. So it's like a staff engineer, but
you still have 100,000 of them, right? This is why,
in my opinion, AGI is a big deal. And I
(09:45):
think we're getting really close to this. I've got another
thing to read about this; it's what I just posted
earlier on one of the socials. But I think we're
getting close to this. It's not one component, right? This
is not like one model that's going to release and
do this. It will be a system. It will behave
like one person or one thing or whatever you want
to call it, but it'll be this composite that lets
(10:08):
it behave in a cohesive way, which is actually the
way our brains work as well. My guess was in
2023 that it was going to be 25 to 28.
That was my thing. And I put that in like
two years ago. Okay. Like March of 23, I think
I made this prediction. 25 to 28 is when we're
going to have this. And my definition for this is
(10:30):
basically somebody who can replace a human knowledge worker, a
decent human knowledge worker working at some remote job. Right.
So I don't think they're going to be able to like,
bring you a coffee because that's robotics. But working a
remote job, pushing code, doing organizational stuff, project management, editing documents,
(10:54):
that kind of stuff. 25 to 28, I think that's
when we're going to actually hit this. So
my current estimate for this is late 25 or maybe
sometime in 26, and it could actually go into 27.
Because this type of progress that we're seeing in AI,
it's big jumps. But then it kind of plateaus for
(11:17):
weeks or months. But that plateau could actually go for
a year. I don't think it will. I think it's
going to continue to jump kind of violently, but it
could plateau out, and then maybe it turns into more like 27 or 28.
I think the chances that it happens after 29 are
really low, maybe 5%, which is a lot. It's a lot, right?
(11:40):
So my confidence for this right now is high. It's like, uh,
what's the CIA range? I think the CIA term is
almost certain or something like that, or extremely probable,
one of those. But it's only like 95%, and
that's where I'm at, because I'm not really 100% on anything.
(12:03):
And when this does happen, I think it'll be the
single biggest impact on humanity from tech, by far bigger
than the internet, in both the negative and positive directions.
So that's why I think you should care about AGI.
And in perfect timing, MuleSoft reports that almost all IT
leaders are planning to use autonomous AI agents within two years.
(12:24):
That's 93% and about half are already doing so. Sam
Altman basically says he feels like he was on the
wrong side of history with open source. He didn't say
what he's going to do about it, but it seems
kind of obvious that he will do something. My guess is they
will release some nerfed versions of some of their models
(12:47):
just to be like, look, we are playing nice with
the open source model, but I don't think they're going
to completely pivot to, like, what Meta is doing, for example.
And like I said, yeah. OpenAI claims Chinese rival DeepSeek
stole training data. I actually think it's
kind of funny to be like, oh, OpenAI is worried
about their stuff being stolen when, you know, they stole
(13:08):
the whole internet to train on. I think. I mean,
anything you put on the internet is just going to
be on the internet, and that's fair game, unless
you have specific licenses or something,
and obviously I don't want people to steal like paid
content that I'm producing, right? But I don't release
paid content live to the public, on social media or
(13:33):
on a blog post, and then get mad when someone
scrapes it, right? I consider all the stuff that I
put out since like 1999 or whatever, how long I've
been on the internet. You gave it to the world
when you posted it online. That's the way I see it,
and I always have. R1-Zero shows AI reasoning
without human training. This is really cool because like I
(13:54):
talk about here, it's like what happened in chess. So
at first they built a chess computer. Um,
I think this was Google that was doing this, and
it just got really good at learning the rules, and
it watched human players and it improved on that. And
of course, it could beat previous chess computers; Deep
Blue is nothing compared to these computers. They just crush it.
(14:17):
The big breakthrough with AlphaZero, and also with
R1-Zero, is that it's learning significantly more from
the reinforcement learning
from the world. It's learning from the world, as opposed
(14:38):
to from the rules, or from watching other people interact with
the world. So it's just an accelerator, because it could
do that by itself, right? Without a lot of effort
from other people. And that's really, really powerful.
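To make that concrete with a toy example (my own sketch, nothing to do with DeepSeek's or DeepMind's actual training code): a tabular Q-learning agent can learn a trivial number-line game purely from the reward the environment hands back, with zero human demonstrations.

```python
# Toy sketch of "learning from the world": tabular Q-learning on a number line.
# No human examples; the only signal is the reward the environment returns.
import random

GOAL, ALPHA, GAMMA = 10, 0.5, 0.9
q = [[0.0, 0.0] for _ in range(GOAL + 1)]  # q[state][action]; 0 = left, 1 = right

for episode in range(500):
    state = 5                                  # start in the middle
    while state not in (0, GOAL):
        action = random.randrange(2)           # explore randomly (off-policy)
        next_state = state - 1 if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: improve the estimate from experienced reward alone.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state

# The learned greedy policy should point right (1) toward the goal from every state.
print([q[s].index(max(q[s])) for s in range(1, GOAL)])
```

Nobody shows it a single correct move; it works the game out by poking at the world, which is the whole point.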
A new study shows DeepSeek's AI model refuses to answer the vast
majority of sensitive questions about China. Shocking, shocking. This is
(15:00):
why it matters where you get your models, right? Because
according to them, Tiananmen Square didn't happen. Also: a
breakdown of how to properly evaluate RAG systems and LLMs
in practice by Salman Khan. And Andrej Karpathy talks about
vibe coding, where you basically get in a flow state
and code like you're playing an instrument. It says the
(15:22):
key is to stop overthinking and just think and respond
and let the AI do most of the work. I
think this is spot on and a lot of people
are really mad. They're like, oh, I can't believe you know,
you're a hard core coder and you're just going to
let AI code for you. And again, he doesn't care.
I don't care. People like us, we don't care, because
we can see what's actually coming, which is that pretty soon
(15:45):
you don't fire up an IDE at all. What you
do is you fire up a voice conversation and the
thing starts showing you what the product looks like on
one side, and it's showing you roughly like the code
or the diagram or whatever it is on the left,
and you're having a verbal back-and-forth. That
(16:05):
is coding within, you know, 1 to 3 years. So
this whole thing of like, oh, you're not doing the
real thing unless you're doing the real coding. Once again,
you have to take that all the way back. Um,
are you writing assembler by hand? No, you're not. Okay, well,
you're cheating by using Python or whatever, right? So it
just keeps going farther and farther away, and eventually it's
(16:27):
going to be like, I wish I had a product
that looked like this, blah, blah, blah, and boom, it
just goes into the world. And what did you really contribute?
You contributed the raw base idea. But at some point,
the AIs are good enough to contribute that as well.
And now you're competing with them on that. And that's
when it turns a little bit sad. Apple quietly added
(16:50):
Starlink satellite support to iPhones through a software update. And
this is a partnership with SpaceX and T-Mobile. So yeah,
SpaceX is how your iPhone is able to talk to
satellites when you have no signal. Humans massive new survey
revealed that 87% of astrobiologists think that extraterrestrial life exists
(17:11):
somewhere in the universe. 87%. I am one of those people,
not one of the astrobiologists, but one of the people
who agrees with that study. A Montana study shows that drones
are way better at keeping grizzlies away from humans, so
they basically fly up to them and chase them and
scare them. Could it be cool to take drones with
(17:32):
you when you go into the forest? Say you're worried
about bears: you release like three drones or whatever. That's
part of your panic response. You hit a button. It
makes a really loud sound, like some sort of horn
or something, and you could point the horn at the thing,
but it also launches the drones and the drones go
over and maybe they make noises. Maybe they have sounds, whatever,
(17:55):
but they just start buzzing around the bear. And maybe, like,
you could sell that as a package, like bear defense.
Maybe they shoot, uh, bear spray as well. That would
be cool. All right. Swerving Broncos.
I was basically driving on the 101 at like 11:00
at night or something just a couple days ago. And
(18:16):
I am watching the cars on the road in front
of me, and they're just drifting, like, right over the
line; the second wheel goes over the line. It's obvious
that they're on their phones or drunk or high. And
then, like, I get away
from that thing or it turns off the highway or whatever.
And like, here comes another giant truck, a giant like
(18:38):
Cadillac or a Bronco or something like that comes next
to me, and it just starts drifting into my lane.
And I'm like, what are you doing? And I look
at them, and they're, like, looking down. They're not
even paying attention. And I see a third one, and
this is within like 30 miles or something. They're just
drifting over. There's no cops around.
I hardly ever see cops anywhere, like on
(19:01):
the highways anymore. There's no cops. There's no one stopping them.
They're just kind of driving like crazy people. And
I'm thinking, you know what won't do this? A Waymo.
A Waymo won't do this, and neither will an autonomous
Tesla, or an autonomous, like, BYD vehicle from China or whatever. Say what
(19:21):
you want about AI driving; bicyclists love them because
they never crash into them. I mean, I think we
just have to realize how bad human drivers are before
we start criticizing how bad AI drivers are. Apple AI's big jump.
This one is very simple. I think AI from Apple
(19:43):
is like it's been the worst for a while and
it's about to be the best. That's the bottom line.
They are putting out stuff in iOS 18.4 that basically finally
connects Siri to the context inside the phone. So your phone,
you have your exercise, you have your heartbeat, you have
your appointments and your calendar and your email. And it
(20:06):
could see all the different stuff. It could see your life. Right.
But it's never unified that stuff. Well, Apple just got really,
really serious about Siri or they're in the process of
doing that. They got a new leader over it. Like
it's just become super high priority. They're doing that at
the same time that they're doing the cloud secure enclave.
(20:26):
At the same time, they're doing the partnership with ChatGPT
at the same time that they're now giving Siri access
to all that context that they have, and access to
call apps and pass data between the different apps. So
they've been setting up this infrastructure for AI for the
last couple of years. They've built all this stuff. They've
(20:46):
spent all this R&D. And again, I used to work there.
This has nothing to do with anything. I've been gone
for a while. And also, I wouldn't be disclosing anything
if I actually worked on a project like that. But
I've been watching them since, whatever, 2007. And I'm telling
you that they are about to switch this thing on.
And I really do think that they're about to have
(21:09):
the best personal ecosystem integration AI type play. And it's
going to go from being, like, are they even
doing anything, to, holy crap, this is what they've been
working on. That is my prediction around that. And AI
novels are coming. I actually have a YouTube video about this.
But yeah, I'll wait till that comes out. It might
(21:30):
actually be out soon, but yeah. Anyway, I'm going to
wait to talk about this one. Basically, it's going to
be possible to make a complete decent novel using AI.
I think within months or years. I've already got a
couple of emails, actually, where they're like, what
are you talking about? I've been doing that for a
year and a half. So I think it was possible
(21:51):
to do a year and a half ago, and I've
messed around with some, uh, longer, uh, like, fiction stories,
and it was halfway decent. And that was like a year,
year and a half ago. But it was difficult. What
I'm trying to say is it's going to get easy
and lots of people are going to do it. Um,
ChatGPT's new Tasks feature. A lot of people are talking
(22:12):
about this. It's basically scheduled, automated, um, agents that
go and do things for you. I have not messed
with this one, but it looks really, really exciting. Really
clever way to use uv, which is a fast Python
package manager. Look at this: you put this at the
top of your script. Oh man, I haven't messed with this much yet.
(22:33):
I have to write this down. I ran it once,
but I haven't put it in like all my main scripts.
This allows you to run anything that has multiple dependencies
and normally breaks, and it will automatically instantiate a
working environment to make your script run. Isn't that just beautiful?
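For reference, here's roughly what that top-of-script block looks like. This is the PEP 723 inline metadata that uv reads; the dependencies are just examples I picked.

```python
# Save as fetch_title.py, then run with: uv run fetch_title.py
# The comment block below is PEP 723 inline metadata; uv reads it and builds
# a throwaway environment with these dependencies automatically.
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "requests",
#     "beautifulsoup4",
# ]
# ///
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com").text
print(BeautifulSoup(html, "html.parser").title.string)
```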
(22:53):
It's absolutely beautiful. I mean, this is so much better
than regular pip, because of the multiple-environments thing. Like,
that's my problem: my regular Python has like 3 or
4 different virtual environments, nested, spread all
over the file system. It's like a nightmare. Um, so
(23:14):
that's why I use uv now. And this is just, yeah,
really great. I haven't seen a downside yet,
but I haven't messed with it that much. Uh, cool.
A developer created Nepenthes, malicious software that traps aggressive AI web
crawlers in infinite loops and feeds them garbage data to
poison their models. Smiling, not smiling.
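I haven't read Nepenthes' actual source, so this is just my own toy sketch of the tarpit idea: a tiny server that answers every path slowly with garbage text plus links to more randomly generated pages, so a naive crawler that follows links never escapes.

```python
# Toy tarpit sketch (not Nepenthes' real code): every URL returns garbage text
# plus links to more random URLs, so naive crawlers loop forever.
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def babble(n_words: int) -> str:
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
        for _ in range(n_words)
    )

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1)  # respond slowly to waste the crawler's time
        links = "".join(f'<a href="/{babble(1)}">{babble(2)}</a> ' for _ in range(10))
        page = f"<html><body><p>{babble(200)}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page.encode())

HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```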
(23:35):
OpenAI dropped a new ChatGPT capability called Deep Research.
Supposedly you only get like 100 a month because they're
really expensive, and it takes like ten minutes to finish, but I've heard it's
created some serious, serious research. Really good papers. Haven't checked
if I have access to that API yet. All right.
Justin McGurk explores how William Gibson novels perfectly capture our
(23:58):
obsession with commodifying everything unique and authentic until it loses
all meaning. This is why I'm doing Human 3.0. I
do not want to be replaced by AI and technology.
No thank you. I want it to enhance humanity. Found
a really clean Python script that lets you download YouTube videos.
(24:19):
This thing works really well. I'm just worried it's going to get blocked soon.
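I don't know exactly which script this is, but the usual building block for this kind of thing is yt-dlp; a minimal version might look something like this (my own sketch, not necessarily the one from the newsletter):

```python
# Minimal YouTube downloader sketch using yt-dlp (pip install yt-dlp).
import sys
import yt_dlp

def download(url: str) -> None:
    options = {
        "format": "bestvideo+bestaudio/best",  # best quality, merged when possible
        "outtmpl": "%(title)s.%(ext)s",        # save the file as the video title
    }
    with yt_dlp.YoutubeDL(options) as ydl:
        ydl.download([url])

if __name__ == "__main__":
    download(sys.argv[1])
```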
Recommendation of the week.
If you ever get overwhelmed by what all this AI
stuff even means, or you want to explain it to someone else,
I got a little quote here. So within the next
few years we might have something called AGI, where AI
can work as a full knowledge worker, like joining the
(24:40):
onboarding cohorts, reading documentation, participating on Slack, submitting code, adjusting
their work based on the work of others. But instead
of 2 or 5 of these, imagine hundreds of them
for the cost of one human employee. I think that's
a pretty cool way to kind of explain the possible impact
in a way that, like regular people who don't pay
(25:02):
attention to this all the time could understand. And the
aphorism of the week: "Some books leave us free
and some books make us free." Ralph Waldo Emerson. Unsupervised
Learning is produced on Hindenburg Pro using an SM7B
microphone. A video version of the podcast is available
(25:25):
on the Unsupervised Learning YouTube channel, and the text version
with full links and notes is available in the Daniel Miessler newsletter.
We'll see you next time.