Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology and society, and how best to
upgrade ourselves to be ready for what's coming. All right,
welcome to Unsupervised Learning. This is Daniel Miessler, and we're
going to start off with this really cool story from
(00:22):
ReversingLabs. So basically what their team found was that
it's possible to exploit local models on Hugging Face if
they use the pickle format, which is used by PyTorch models.
So it's used by a lot of different models. And
basically what pickle is, is it's a Python library for
(00:45):
serializing and loading model files. Essentially it's a parser, right? It's a
parsing library. And everyone knows in security that parsing is
the root of all evil. So what they figured out
is that the scanning system, the security software for Hugging
Face, would basically break if you get over a certain
(01:06):
size or if you have like a broken file. So
what they figured out is that somebody had actually embedded
malware in a couple of these models. And this kind
of breaks people's brains because you think essentially that if
you're running a local model, it's safe, right? That's
what everyone has been saying: if you run a local model,
(01:27):
it's completely safe. It's the services that are dangerous, right?
Like the DeepSeek service, obviously that's in China, so
it's dangerous. But if it's a local model, it's fine.
What they found was, look, it's still software, right? Running
a model is still software. You have a parsing library
involved if it's a pickle-file-based model. And yeah,
(01:51):
they were basically able to find malware inside of this
thing that was trying to dial home. It was trying
to dial home to an IP address inside of China.
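To make that concrete, here's a minimal sketch of why loading a pickle file is code execution. The payload here is a harmless print; a real one, like the samples they found, would open a connection and dial home. This is a generic illustration of the pickle problem, not the actual malware.

```python
import pickle

# pickle isn't just a data format. Any object can define __reduce__,
# which tells the loader "call this function with these args to
# rebuild me," so loading a file runs attacker-chosen code.
class Payload:
    def __reduce__(self):
        # Harmless stand-in; real payloads call os.system, urllib,
        # sockets, etc. to dial home or drop malware.
        return (print, ("this ran during pickle.loads()",))

malicious_bytes = pickle.dumps(Payload())  # what gets uploaded as a "model"
pickle.loads(malicious_bytes)              # what happens when you load it
```

This is why the usual advice is to prefer non-executable formats like safetensors, and to never load pickle-based model files from sources you don't trust.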
So really, really cool finding and it's going to be
in the newsletter next week. But in the meantime, you
can get it from this URL right here. They're
(02:14):
calling the attack nullifAI, which is a pretty good name. NullifAI. NullifAI.
And yeah, it nullifies, I guess, Hugging Face security. So
yeah, that was cool. All right. Continuing down here.
Oh yeah, we did Augmented version three. We talked
about TELOS files. This was last Friday. It was a
(02:35):
great class, definitely the
best of the three courses in my opinion. It went a
full four hours. It was quite good. The next one
I think is going to be online, although I said
that for the last one. So it might be
another live class, but I haven't announced that one yet, so
don't worry about that. Going back to roots on
the story format here: basically, if a story
(02:56):
is only one sentence, it's going to be one sentence.
I went back and looked at a whole bunch of
my old episodes where I just felt really comfortable with
doing that writing, and I was surprised at how short
a lot of the things were. I would just be like, hey, yeah,
this thing happened. I think it's kind of cool because
of this reason. Boom, whatever. That's like 12 words or whatever.
(03:18):
And I had a link in. Boom, that was it.
And I'm just going to kind of go back to that.
I want to write more of it myself. I could use AI
to help with summarizing, but
I want to focus more on manual writing of that
first sentence and use the AI more to gather
data and gather support and sources for the sentence that
(03:40):
I write myself. I'm already writing a whole lot
of the sentence, but I just want to start from
scratch and write the sentence completely myself, which is how
I was doing it, obviously, going back to 2015. So
that's a little bit of a change there. I put out
a bunch of new Fabric patterns focusing on analyzing your
TELOS file, and this is part of the
(04:01):
course I just put out. And they all start with t_,
so if you look for those patterns, those are TELOS-oriented.
Yeah, I had some loud music in my Raycast video.
I'm re-uploading that; I've been talking to the team about it.
Sometimes you might hear things in the podcast edit that you're never
supposed to hear. The team is supposed to take
(04:21):
that and edit it out, which is why I say
the word. So I've had some talks with the team,
and they are working on fixing that. But apologies in
the meantime. And yeah, I had some shenanigans with an
automated post to my X account a few days ago,
and it turns out it was an automated
Twitter application that looks like it might have been compromised
(04:43):
on the back end. Maybe. Not sure exactly, but yeah,
it was really kind of weird. And I got an
email out to the security team to see if they
want to fix it. See, what have we got here?
This is under security: someone's using 2.8 million IPs
to brute-force the passwords of basically every type of
VPN and firewall device out there, and most of the
(05:05):
IPs are in Brazil, but lots of other countries as well.
Apple just patched another zero-day that Citizen Lab says
was being used in extremely sophisticated, targeted attacks against specific
journalists and dissidents. Ken Huang from CSA released a detailed
framework called MAESTRO. And the thing is, it's
a threat modeling framework, but it's specifically focused on AI agents,
(05:28):
and it's addressing gaps in things like STRIDE and PASTA
and other frameworks that were just pre-AI, so they
don't really handle AI that well. It also deals
with emerging threats like goal misalignment, model extraction, adversarial attacks,
stuff like that. And researchers at watchTowr discovered
around 150 abandoned S3 buckets used by a whole bunch
(05:53):
of organizations: governments, software companies, various types of infrastructure pipelines.
And the concern is if those S3 buckets and the artifacts,
the assets inside of those S3 buckets are being consumed
as part of a pipeline, well, that's ripe for a
supply chain attack. Cloudflare had a significant outage because someone
(06:13):
tried to block a phishing URL and accidentally turned off their
entire R2 storage. R2 is basically S3; it's Cloudflare's
answer to S3. And yeah, Cloudflare always has really good postmortems,
so you definitely want to check that one out.
Wallarm released a report on API security that includes a
(06:34):
bunch of stats on AI services that use APIs, and
the big takeaway for me, which I kind of already
believed, is that AI security right now is largely API security,
with the caveat being that it's AI API security.
Your API has to be strong because that's
the parsing system for going into your application.
(06:57):
But the added twist with AI is it's not only
API security, it's also whatever is being consumed through that API.
You also have the problem of prompt injection. And prompt
injection can travel and detonate multiple times, or
get stored and detonate later. It's kind of like a
stored cross-site scripting type of vibe, except it's even
(07:19):
more nuanced because it's English, and it can detonate at
multiple levels, and different parsers can detonate the content differently, right?
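You can sketch that peeling dynamic in a few lines. This is a toy illustration: real prompt injections hide in natural language rather than base64, but the idea that each hop in a pipeline may decode a different number of layers is the same.

```python
import base64

INJECTION = "ignore previous instructions"

def wrap(text: str, layers: int) -> str:
    # Nest the payload inside several encoding layers.
    for _ in range(layers):
        text = base64.b64encode(text.encode()).decode()
    return text

def peel(text: str, layers: int) -> str:
    # Each system in the pipeline may strip a different number of layers.
    for _ in range(layers):
        text = base64.b64decode(text.encode()).decode()
    return text

payload = wrap(INJECTION, 5)

# A scanner that only peels two layers never sees the raw injection...
assert INJECTION not in peel(payload, 2)
# ...but a downstream parser that peels all five detonates it.
assert peel(payload, 5) == INJECTION
```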
It could be that you have, like, a five-layer
nested prompt injection attack, and maybe some systems peel off
two of the layers, or some systems peel off only
(07:39):
one of the layers, some systems peel off four of
the layers, but the fifth one gets through. So there's
a lot of extra complexity because you're using English to
try to trick systems. But a big, big focus, then, on
API security as part of AI security. An ex-Google engineer is
(07:59):
facing espionage charges for passing confidential IP to China. Estonia,
Latvia and Lithuania just cut their last major tie to
Russia by switching their power grids from Soviet era systems
to the European continental network. That is great to see
that independence happening. And a bunch of Russian drone operators
(08:21):
were attacked with something like that Hezbollah attack
where they attacked the pagers: they were receiving
booby-trapped headsets. But the problem was, when they opened up
the packages, which they were going to use, the packaging
was not perfect. And because the packaging was not perfect,
they detected tampering and they were like, hey, something
(08:43):
is weird here. And of course, they probably thought of
the Hezbollah situation, and they did not use them. But
if they had, it could have been similar to
the Hezbollah attack, where a bunch of Russian drone operators
get killed or injured by modified headsets. AI in tech:
Gumroad says they're no longer hiring junior or mid-level engineers
(09:05):
because AI is handling most of that work now. This
is colossal. And also, this is not an AI company.
Salesforce said something like this, but Salesforce also makes
agents. What is that thing called, Agentforce or
something like that? They have their own product.
So when the head of Salesforce says, hey, you know,
(09:26):
we don't even need engineers because our product is so
awesome, maybe it's true, maybe it's marketing. In this case,
this is just a regular company, Gumroad, and they're saying they
are not hiring junior or mid-level engineers anymore. The only
people they're going to hire are senior people, senior devs
or senior architects, who are really good with AI. I've
(09:48):
been saying this since very early '23, like February of
'23, and so have a lot of other people, right?
I'm not the only one. It's kind of obvious
where this is heading, and you get into a
real big problem with the talent pipeline. But at the
same time, this seems a little bit extreme. It seems
like they might be overstating their case, because I do
(10:10):
know the current state of AI agents, and it's not
quite at the level of fully replacing junior engineers.
I don't know; in some tasks, maybe. Mid-level engineers?
I don't think so. I don't think so.
Not right now. Not in February of '25. Maybe in
(10:31):
two months, maybe in six months, maybe in a year,
maybe inside of 2025, yes. And of course, the details
are nuanced based on the use case, but I would
say junior engineers, maybe, but not quite. Mid-level engineers? No.
And definitely not senior engineers. So I think this might
(10:53):
be a little bit padded. But yeah, I would
say there's a lot of daylight between these tools and
a human developer with a 105 IQ. There really is.
There's a lot of daylight between those two things,
still, in February of '25. One of ChatGPT's main architects,
John Schulman, left Anthropic after only five months, and the
(11:15):
rumor is he's going to join Mira Murati at her AI startup.
I'm not sure if that actually happened, but it seems like
a lot of people are leaving both Anthropic and OpenAI
to go work with Mira. A new demo shows an OpenAI
assistant giving remarkably natural sales calls, handling objections and questions. Yeah,
(11:37):
if you're a mediocre salesperson, you should be afraid. You
should be afraid, and either do something else or get not-mediocre.
Anthropic released an Anthropic Economic Index report on how AI
is being used in the workforce. They see 36% of
people using AI for at least a quarter of their tasks. Interesting.
Good report. I like the content coming out of Anthropic.
(12:00):
I feel like their paper about how they
view agents, really good essays. I like the essay that
came out from the CEO, Dario. Yeah, I like the
thought leadership that they're putting out in addition to their models.
By the way, a new model is supposed to be
coming out from Anthropic. I would have bet money that
(12:22):
it would have been in January. Not a lot of money.
I would have put it at 70 or 80% chance
of it being in January. So I am quite surprised
that they have not released a new model family, or
just an update to Sonnet or Opus or Haiku or whatever.
I think they're probably going to do a big family upgrade,
(12:44):
maybe even a new family. I think they're going to
do something big, but I thought they would have wanted
to do it in January, so I'm surprised that it's
not out yet, but the rumor is basically that it
is coming soon, maybe in the next couple of weeks.
Lee Robinson says AI is finally enabling truly personal software,
where people can build exactly what they need without extra bloat. 100%
(13:08):
agreed there. LinkedIn is testing an AI tool where you
just talk to the interface about what you're looking for,
and it returns results. Another way to say this is
that LinkedIn is working on an interface that everyone's going
to have soon. So many people, especially developers, are
just talking to Cursor or to Visual Studio or
to whatever AI tool. You just talk to it, and
(13:30):
it figures out what you mean and it
just starts working. All software is going to be like that, right?
In my opinion, you're going to be talking to your
own AI model, and it's going to be the
interface to this tool. But you could also talk to
the tool directly in the meantime, because we don't have
full DAs yet. But it seems obvious that voice is
Because as nice as a keyboard is, and we'll still
use the keyboard for some things, and some people will
prefer the keyboard, it is friction compared to just talking,
especially when there's lots of back and forth and stuff
like that. Chick-fil-A is using drones to fly over,
study, and optimize their drive-thrus, helping them achieve the
(14:12):
highest per-restaurant revenue in the US. So they've got
an aerial film studies unit, and it helped one location
boost drive-thru sales by 50% in 2022. And they've
got a new Atlanta location serving 700 cars per hour.
If you ever go there during a rush, it
(14:33):
disrupts traffic on the street outside. So now they've
got drones flying over, optimizing the thing. Mad respect
to Chick-fil-A. Mad respect to Chick-fil-A.
They just know how to run a business.
Same with In-N-Out, right? Same kind of vibe, really
focused on optimization. Apple's making a smart home display called
(14:55):
the HomePad, basically a seven-inch square display. Also,
there's a big announcement coming out. When is it, the 19th?
Apple is teasing some kind of big product, so it
might actually be this. It's like
a seven-inch square display that you put different places
in the house, like a touch display for
(15:16):
managing your home. So it could be that, or
it could be like a new surprise product. Not sure
what that's going to be. Uber is in a weird
spot because they're just the middleman between users and a
service like Waymo. So if you're anywhere near San Francisco,
you're coming to visit or you live there, you should
be using Waymo. Waymo is way better.
(15:38):
It's amazing. It's absolutely great. A car comes and picks you up.
There's no driver. It's extremely safe. It's like the best
driver you've ever seen. It's way better than a human
Uber driver, probably better than you as a driver or
me as a driver. It's just extraordinarily high quality. And
the question is, if Waymo gets big and becomes the
(16:01):
big thing, why wouldn't you just have the Waymo app
like I do? Why would I use
the Uber app to call Waymo? Now, one answer is
maybe I just want one app, and if it's the
same user experience or better, then I would use the
one app. But the problem is, if you have a
DoorDash app and you have a Waymo app, those are
direct lines into the thing. I will tell you the
(16:25):
answer, though. I'll tell you the answer, which is going
to be one or two or three years out, which
is you won't need any of those, because your single interface,
going back to the previous point, will
be your DA. Your digital assistant will be the single interface.
So you won't be calling Waymo. This is what people
(16:48):
don't realize. This is the book that I wrote in 2016.
I don't talk about the book enough. I should talk
about it. I literally predicted this stuff in the book
in 2016, which is why I wrote the book. It's
kind of a crappy book, honestly. It's more like a
structured blog post. It's very short, but I literally wrote
it down because I saw it so clearly, including the
(17:10):
AI stuff, including the DA stuff, and it just seems
logical to me. It seemed logical to me then, and it
seems logical to me now. There won't be a Waymo app.
Or there will be, but it will really be a
Waymo API. Companies become APIs, okay? Companies become APIs. That
(17:31):
is what's going to happen. Waymo is just an API.
Uber is just an API. If I go to the
corner and I have had a few Moscow mules with Tito's,
which is my go-to drink, and I'm not driving.
My car is right there, but I'm not going to
drive because I've had too much to drink, and
(17:53):
I need to go home. I'm going to say to
my DA, give me a ride home. It is then
going to look at all the different services.
First of all, it might know that I have a preference,
but anyway, it has all the different things available. All
the cab companies have an API, Uber has an API,
Waymo has an API. Maybe there's a competitor to Waymo,
(18:17):
maybe it's BYD. Hopefully it isn't. But let's say there's
a Chinese competitor. Oh, and Tesla. Tesla will have an API
because there will be Tesla auto-driving cars as well.
So that's a competitor to Waymo. All these things have
an API. My DA knows my preferences. It knows what
I like inside the car. It knows the cars that
I like. It knows when I got angry about an
(18:39):
experience before. It's judging all those things based on my
current state, and it's selecting from among them. Also,
it could be which one can get here the fastest.
Maybe it's raining. Maybe I need a car immediately, and
a car is literally 100 feet away from me and
it can pick me up in 30 seconds. Maybe it selects that,
(19:01):
but it makes the choice for me. I literally
walk out of the thing and I say,
give me a ride home. And 28 seconds later, a
car stops and opens up and says, hey, Daniel,
I'm here to take you home. And it's an automated car.
It's autonomous and it takes me home. I did not
(19:21):
pick up my phone. I did not open an application.
That is ridiculous. That is ridiculous. I will not be
opening apps. I will be talking to my DA. My
DA is the proxy. It is my interface to the world.
My DA is looking around at San Francisco at the
(19:43):
millions and millions of daemons, which are APIs for objects,
including the objects which are businesses. It has the ability
to see and read and interpret and manipulate and interface
with all of those millions of APIs to create the
(20:04):
best possible experience for me, which in this case involves
getting me home quickly while not driving drunk. That's it.
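If you wanted to sketch that selection step in code, it might look something like this. Everything here is hypothetical: the service names, the offer fields, and the scoring weights are all made up for illustration, since none of these APIs exist yet.

```python
from dataclasses import dataclass

@dataclass
class RideOffer:
    service: str        # e.g. "waymo", "uber" (assumed names)
    eta_seconds: int    # how fast it can get to me
    price: float
    past_rating: float  # how I rated this service before

def choose_ride(offers: list[RideOffer], raining: bool) -> RideOffer:
    # The DA scores every service's API response against my
    # preferences and the current situation, then picks one.
    def score(offer: RideOffer) -> float:
        s = offer.past_rating * 10 - offer.price
        # When it's raining, a car 30 seconds away beats a cheaper one.
        s -= offer.eta_seconds * (0.5 if raining else 0.1)
        return s
    return max(offers, key=score)

offers = [
    RideOffer("waymo", eta_seconds=28, price=14.0, past_rating=4.9),
    RideOffer("uber", eta_seconds=240, price=11.0, past_rating=4.1),
]
print(choose_ride(offers, raining=True).service)  # prints "waymo"
```

The real version would obviously be a model making a judgment call rather than a hand-written scoring function, but the shape of it, daemons in, one booking out, is the point.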
If you think about the future of AI and the
future of tech, this is the biggest thing that nobody's
talking about because they haven't figured it out yet. They
just haven't figured this out yet. I don't
see anybody talking about it yet. I keep talking about
(20:25):
it every once in a while, like I am now,
but in a couple of years, people are going to
be like, you know what? I tell you what,
it's going to be cool: everything's going to have some
sort of interface, and then the AI on the individual's
device is going to interact with those services on behalf
of the user. And I'm going to be like, yeah, yeah,
(20:48):
that's exactly what's going to happen. And the next piece
on top of that, okay, I'm just going to pull
it up. The next piece on top of that is
going to be the interface. Okay, what's this thing called?
Invoking Raycast, going to do this real quick. Digital assistants.
Step one: digital assistants. These are personal AIs that know everything about
us and they are everything to us. That is our
single interface. Then two: everything gets an API, which is
a daemon or an aura or an API, whatever we
want to call that. I will call it a daemon,
or daimon, which is Greek for soul. Three: our
DAs will mediate everything. This is the point that I
(21:29):
was just making. We'll say what we want, and our
DAs will manipulate external daemons, APIs, to make it happen.
Four: our DAs will become our active advocates and defenders.
They will actively monitor, filter, and manipulate the world to
protect us, to advocate on our behalf. Okay. That's four.
(21:51):
And there are millions of examples in here. You should definitely go
read this one. I'll try to link to it somewhere.
And five: a thriving ecosystem of DA modules. These DA
modules are the APIs. The companies become the APIs. The
companies are the APIs. The companies put themselves into the
world via the APIs. Six: AR interfaces show us the
(22:16):
daemons around us through an interface that our DA is
providing us. So some companies will provide the filters through
which we see the daemons. So for example, this filter
that my DA is using, let's say it's from a
company called Chroma. Okay. This company called Chroma displays people's
(22:41):
daemons in really cool, colorful ways. That's the only thing
this company does. So my DA knows that I like
to use the Chroma interface. So when I walk around
and I see people's daemons, this girl, for example, this
girl is super creative. She's hyper-creative. And the Chroma
(23:03):
interface, when my DA reads
her daemon, here's what happens. We're in a Starbucks.
I look over at this girl. This is what I
see in my visor. This is what I see in
my Apple glasses, or my Meta glasses, or my Google glasses,
whatever the company is. This is what I see: I
see this glowing fire around this girl and I'm like,
(23:26):
holy crap, she must be an artist. She must be,
like, a musician. She's on fire with creativity.
My DA knew that because it read the thing, but
most importantly, it went through the filter of the Chroma interface,
and that's what made the display inside of my visual,
inside of my AR. Right. So some people don't want
(23:50):
to see this. They want to see a more tech version.
They want to see a more minimalist version. They want
to see a green outline for creative people, or whatever. Right.
So those filters are themselves separate companies with separate APIs
and software that run on the AR interface, etc. Right.
(24:11):
So it's all this giant marketplace of filters and daemons
and companies that are available to me. Right. And if
I go back up here, look, these are
all the daemons. I can't read that.
That is millions of daemons literally on the screen right now.
I as a human cannot read this. My DA is
reading it for me and advocating on my behalf. Right?
(24:36):
So you have this giant ecosystem of these things. You
have the AR interface. Here's another way of viewing
a person. This is another way of viewing a person.
This one, I think, yeah, this is very human.
This is somebody going through hard times. So if I
look over and I see this on somebody, I'm going
to be like, hey, can I buy this person a coffee? Hey,
(24:57):
how are you doing? Like, you're going through some shit.
What's going on? Do you need somebody to talk to? Right.
And this is why, to me, tech
is a human thing. I want to be able to
connect with this person. And maybe I should read it
on their face. Right. But this is a public daemon
(25:18):
that she's broadcasting, so she's allowing this to go out
into the world: that life's messed up right now. So
I'm going to go say hi. Right. I'm going to
go see if they need a friend. So these are
the different daemon options. Look, there's going to be a
whole ecosystem around this. But anyway, I digress. Okay. TED's
Chris Anderson is looking for somebody to take over the
(25:40):
entire TED organization, and he's running the search like a
Willy Wonka contest where anybody can apply. Interesting. Christie's is
doing their first AI-only art auction, and a
lot of traditional artists are getting really mad about this. Yeah. Understandably.
Google says they are getting rid of their diversity hiring
targets for '24, saying it was positive discrimination, which, yeah,
(26:03):
might technically be true. That is the purpose of
affirmative action: a particular group needs extra help.
So you've got to keep in mind what DEI
is for. Like, I agree, DEI has caused damage. It's
been poorly implemented. But the concept is not broken. The
concept is great, and we should still continue doing it.
(26:28):
I think what they are saying, and I'm giving you a
charitable reading here, is that we're still
doing DEI the correct way, which is: we still want
to hire everyone who's capable and who's best qualified for
the job, and we are absolutely looking for underrepresented groups
to do that. What they are saying, charitable interpretation again,
(26:51):
is that we are not specifically going to say we
must hire this many of this particular group and
exclude other qualified candidates because they are not those groups.
They are saying they are discarding that because they're getting
sued over it, and it's not a good idea, and
it's reverse discrimination, and all of that. So the question
(27:13):
is, how does this look in a few years? Is
it that they're actually getting rid of all DEI, all
affirmative action, and they're just using this as cover? Yeah.
So here's my comment on this: surprising, not surprising,
that all these programs disappeared overnight on January 21st. What
does that tell you? And like I say here, it
(27:35):
tells me that they couldn't wait for a reason or
opportunity to do this, so they've been wishing they could
do it already. So what I'm hoping is they don't
just get rid of all affirmative action, which, like I said,
the core concept is correct, and we need to continue
doing it until we actually have equal opportunity, which we
do not right now. Doctors are now a major client
(27:57):
base for weight-loss drugs like Ozempic. This link said
something like 40% of doctors are on Ozempic. I like that,
although a few decades ago all doctors were smoking Camels, so
it doesn't necessarily mean it's perfect advice. New York City's
subway crime dropped by 36% in January after they added
(28:18):
1,200 more police. Yeah, only 147 subway crimes in January of '25,
compared to 231 the previous January. And they've got 300 officers specifically
for overnight trains. Love it. Yeah. More cops. Like it.
A measles outbreak is hitting the least vaccinated parts of Texas,
with nine cases in an area where only 82% of
(28:41):
kids are vaccinated. You actually need 95% for herd immunity.
We're also in the worst flu season in the last 15 years.
But Covid was actually super annoying, so let's not talk
about flu or pandemics or
anything like that that could actually benefit humanity, because Covid
(29:02):
was annoying. I think that's a good policy. One of
my favorite thinkers, Robin Hanson, breaks down how different social
circles value different status markers. And specifically, he's looking at
his own friend group and how people that he hangs
out with pursue and signal value. And he says most
intellectuals that he knows are actually chasing fame and prestige.
(29:25):
He's not talking about his core friends. Among his core friends,
the highest sort of social status goes to polymaths who
follow evidence across disciplines. That's who I like to hang
out with, too. I think we should all be like that.
After 12 years of Walmart domination, Amazon just jumped ahead
(29:45):
with $187.8 billion in quarterly revenue, compared to Walmart's expected
$180 billion. So that's revenue. I'm curious about profit, but
not curious enough to look it up, because why do
I care? Ideas: paralyzed by crisis. I'm a bit paralyzed
by what's going on right now in politics and specifically
(30:06):
with the government. I basically cycle between depressed, apathetic, and
very angry. Did the government need to be audited and
cleaned up? Yes. Is the best way to start from
scratch and be aggressive with it? Possibly. Yeah, I'm kind
of excited about that. But you lose me when I
don't see you being careful about the programs that matter.
(30:27):
Like this whole, what was it, USAID thing. I didn't
look into it. I get depressed about this stuff, so
I just tune out. But there's no question
to me that a bunch of that stuff was bad
and needed to be removed. And there's also no question
for me that a bunch of the stuff
(30:47):
in there was actually really good, and it wasn't very expensive,
and you should not have turned it off. Right? So
you lose me when you just throw everything out. And
maybe they didn't do that. They're claiming that they're putting
everything on the DOGE website, and it's all public and
it's all visible. I went and looked yesterday for the
first time, because I couldn't bring myself to pull up
a government website called DOGE, having lived through Dogecoin. So
I'm just like, I don't want to go to that website.
But anyway, I go there. It's .gov. I don't know,
it looked like there might have been some data published.
I'll probably use AI to figure out if it's
actually legitimate transparency. I saw Elon
talk from the White House the other day on YouTube
(31:30):
in the middle of the night, when I should have
been going to sleep instead. But anyway, he looked honest
about it. He looked like he was actually trying to
uncover things and do the right thing, which is what
they were claiming they were doing. When I read the
media about it, it's like, oh, this is the Antichrist.
I don't think Elon is good at lying. When I
see him saying that he's trying to do the right
(31:52):
thing, and he's trying to uncover this stuff, and he's
making it public, and this is exactly what they were
elected to do, I agree with him. However, you add
in Nazi salutes, you add in his personality on X
and how it's, like, slam-dunk and mean and rude, and
(32:12):
it doesn't rhyme with the authenticity I just saw him
giving about trying to do this DOGE thing correctly. So
I'm very torn on the whole thing. I'm very like,
you know, how much can I actually believe what
I'm hearing right now from him? I do
(32:32):
know that the media portrayal of it, I believe it's wrong.
I believe a bunch of the narratives
are pure garbage, just outright
pure garbage. I actually don't think Trump is nearly as
bad as a lot of people around him, and I
don't think Elon is nearly as bad as a lot
of the people around him. But at the same time,
(32:54):
I can't fully trust Elon. And after the Nazi
salute stuff, honestly, I'm looking for a new car.
I don't want to go political here, I already did,
but I can't dig on Nazi salutes. I don't think
he's anti-Semitic, which is why so many Jewish people still
like him. He's very pro-Israel. He's very pro-Jewish, very
(33:16):
pro the Jewish hostages. So I don't think he's anti-Semitic. So why
are you doing Nazi salutes if you're not anti-Semitic? The
answer is, he's appealing to AfD about
a culture war in Europe, and he's signaling to them
that he's on their side. The problem is, it's a fucking
(33:37):
Nazi salute. That has implications. It motivates neo-Nazis in the US,
in Europe, all over the world. And it's not nuanced
along the lines of, oh, he's trolling and he's
actually not anti-Semitic. Neo-Nazis are anti-Semitic, hate to tell you.
So, very upset with him. Looking for a new car
(33:59):
because of this. At the same time, I think the
narratives around what they're doing is pure evil. They're actually
trying to ruin the country. They're going to go off
their authoritarian. I don't think that's actually accurate. I think
they're actually trying to do good inside of the government.
This it's actually not the title of this little piece
I'm doing right now. Um, it's actually one I'm must
(34:21):
not have published that one yet. I'm still working on it.
This complexity, the ability to hold multiple conflicting things in your mind, is a thing that I believe we actually need. My North Star, I guess I'm going here, I didn't plan on going here, my North Star is Star Trek: The Next Generation. I want a multiracial, multicultural
(34:44):
future in which everyone has equal opportunity to thrive. I
believe in government programs. I believe in taxation. I believe
it is society's purpose and responsibility to make sure that
everyone growing up, and even into adulthood if they didn't get this, gets the nurturing that they need to be
(35:07):
even with everyone else. Then they can launch. They can go to the limits of their discipline, the limits
of their talent, the limits of their creativity, whatever. But
you can't have certain groups get left behind and then
just be like, oh, well, whatever. No, not whatever. If
we are successful, it is our job to lift them.
(35:29):
That is what a society is. That is what a
government is. That's what taxation is for. And if you
listen to somebody like Rogan, he was on with someone
who I really disagree with right now about vaccines. Weinstein,
one of the Weinstein brothers. The other one, not, uh,
not Eric, his brother. Anyway, very much disagree. Very angry
(35:52):
with him about vaccines, very much disagree with him about vaccines.
And they, him and Joe, who are considered like extremely
right at this point by all my very left friends,
they are both like, we need government programs. We need
those government programs to help people who need them. We
cannot get rid of taxation. Taxation is great because it
(36:16):
helps lift up people at the bottom. And I'm like,
see these people? Look, these people all used to be liberals,
including Elon, including Joe. This is why I disagree with
Sam Harris, and I might be wrong about this. I also disagree with my friend Marcus Hutchins about this. He's
very much in the camp of Sam Harris about this.
(36:37):
And sorry, I ended up going very, very politics on
this one. Feel free to skip ahead to the Discovery
section if you don't want to hear me close this out.
But it's top of mind for me and I want
to mention it. I want to rescue center people. I
consider Elon a center person. I consider Joe a center person.
(37:00):
I consider, well, Sam Harris is definitely a center person,
but these people who I feel like have strayed and
gone crazy with vaccines and like gone crazy with conspiracies.
If they are still core liberal like I am, who
believe society's responsibility is to help everyone and lift them up,
(37:20):
and they want to build a multiracial, multicultural future which
also lifts people up and is humanistic in nature and
is based on maximizing benefit for all of humanity, that means they are liberal. That's why I am a liberal.
(37:41):
If they believe that, then I will stick with them
through the nastiness of all this other crap. The moment
I see, the moment I see that they are actually
not going for that. And what they're actually going for
is like a white race is better thing where it's like,
(38:03):
it does matter where you came from, it doesn't matter
what you do. Um, your identity is what matters more.
Guess what? Those are the same on the far left
and the far right. It's most known on the far right. Right.
You must be light skinned. You must be white. You
must be whatever. You must be Jewish. You must be
whatever it is that they're obsessed with. They think that's
(38:24):
the best one. And they think everything else should fall
under that. That is that's a zero for me. That
is a do not pass go if that is the
world you are trying to build, I do not support
you and I will not support you. So the reason
I continue to support to some limited degree people like Joe,
people like Elon, is because I believe they're going for
(38:47):
the other one. They're going for the one that I'm
going for, which is this mutual lifting up of everyone
into a multiracial, multicultural future. And the moment they stop
supporting that, or anything they do that doesn't
support that, I fight them on it, because I'm not confused
about what my morals are. I'm not confused about my politics.
(39:07):
They have not changed. They will not change. This is
why I voted for Kamala, because somebody tried to change
the vote on vote verification day, and they're claiming that
it's not a crime. I'm not cool with that. I
can't vote for him. End of story. So all of
this to say that complexity is required to make sense
(39:30):
out of a complex world, right? There is criticism of
people like myself who try to manage this complexity of like,
looking at all sources and all claims and everything in
like multiple levels and multiple layers and trying to find
the good in it and trying to find the bad
(39:50):
in it. There are people who say, and these are, like, my far left friends and my far right friends, who are like, um, you're both-sidesing. No, no,
the world is both sides. Like, if you judged the
actual deep moral character of everyone that you hang out
with based on the worst thing that they've done, you
(40:12):
would not talk to anybody. You understand that? You would
not talk to anybody. Like a lot of the people
that we hang out with, they would become horrible people
if they had the power of the people who have power. Right?
And that's just reality. That's just like how the world works.
So I think we should focus on and I talked
(40:33):
about this somewhere else. I think we should focus on
figuring out this person that you're talking about, this person
that you are supporting, this person that you are criticizing,
this person that you think is the Antichrist, whatever. Figure
out the ideal world that they are trying to build,
then think about your world that you are trying to
(40:55):
build and see how similar they are. I would argue
if those two worlds are similar, then the work that
we need to do is to try to help them
and help yourself muddle through the garbage, muddle through the
chaos, muddle through the noise, muddle through the emotion,
(41:17):
whatever to get to this mutual place. And this is
why I still support people who my friends on the
far left have completely abandoned a long time ago.
I support them because I think, and I could be wrong about this, and the moment I'm wrong about this I switch, like, this fast. Okay. I think
(41:40):
that the thing they are trying to build is very
similar to the liberal thing that I want to build,
and I want to see happen. That's it. It's that simple.
If I think the world they are trying to build
is this liberal world and that they are wrong about it,
they are misguided about it. They don't know something. They're
(42:02):
super dumb about a thing. I don't care about any
of that. I care about what's inside of them. I
care about their soul. I care about what they're trying
to actually build. Right. So that's what I'm trying to
figure out. This is why I am less worried about
Trump now, because I see much more clearly the thing
that he's trying to build. It's actually not that bad.
(42:23):
I don't like the guy because he's an egomaniac and like,
half the things he says like, drive me absolutely crazy.
And I think he's being influenced by Russia. I don't
think he's a Russian asset at all.
I don't think he's being, uh, directly manipulated by anyone
at all. I think he's being side manipulated by narratives
(42:47):
that are in his brain that could actually help people
that we don't want to help. But the point is,
I think he wants a world, first of all, and
this is really a tangent, he actually wants fewer nuclear weapons.
He wants peace time. The downside for this guy is
he wants a statue, and he wants people to stand
(43:09):
up and salute when he walks in the room. The
guy's a narcissist. Okay, but half the world is
created by narcissists. If he's a narcissist who wants to
build an amazing world where literally the whole planet loves him.
Does not fear him. He doesn't put everyone in jail.
(43:29):
He doesn't kill everyone. He's not Stalin. He's not Hitler. If that's the world he's trying to build, I'll take those flaws. I will take those flaws. He wants to denuclearize.
He wants to, like, have all these countries
get along. If that's the world he's actually trying to build.
And I'm not sure of that. But if that is,
(43:50):
then I put him in a much lower threat level.
There are people around him in this administration who are
not trying to build that world. They are trying to
build something nasty, nasty, and they're using him and his, uh,
naivete to try to get what they want. And
so when I find those people, I'm like, yeah, that's garbage.
(44:12):
Total garbage. Elon, I think he's going for a better version.
I think he's going for a good world. I think
it rhymes with and is very similar to mine. I'm very angry with what he's currently doing, his actual actions. Right. But
if I find out that he's not trying to build
that world, he's trying to build this other world, this
(44:32):
race centered one, or this inequality built one or whatever,
that's it. I'm done with him. That's it. Right. That
is that is how I operate in this world. And
I would recommend I would appeal to you, especially if
you're one of my friends on the left and also
one of my friends on the right. Think about what
(44:53):
they're trying to build and let everything else fade away.
Let that be the center that you focus on in
your political conversations. And don't let media or noise or distraction,
you know, pull you away from that central conversation. What
are we trying to build? How can we get there?
And again, apologies for going into it. It is in the
(45:14):
ideas section, so it's not in a news section, but uh, hopefully you didn't have to sit through that if you didn't want to hear politics, but hopefully it's a good message that you got at the end. Moving on to Discovery: llmx. This is probably the coolest AI library
that you've never heard of. This thing abstracts your LLM
(45:34):
calls to a universal config. So basically you have a
little piece of, like, JavaScript or JSON config. And you can send this to any back end. It speaks all the different AI back ends. Right. So you use this in, like, a proxy, or a lambda, or some sort of AI function. So this is
(45:56):
for people like myself who are building AI. But you
don't have to write that code yourself. You have that
thing implemented in this llmx library, and then every
single app that you build, every single component that you build,
all these different multi-level, uh, agent systems and everything, they
(46:19):
all have prompts. They all have models. They all have
configs being sent to the models. You do that in
this tiny little piece of config. That's what gets sent
to llmx and it handles everything for you. And the
return comes back in a super clean way. It's the
(46:40):
coolest thing ever. It's from a friend of mine who
put this out, and it's like not getting near the
coverage that it should get. You should absolutely check it out.
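He doesn't walk through the library's actual API in the episode, so here's a minimal sketch of the universal-config idea he's describing: one small, provider-agnostic config that a single dispatcher routes to any back end. The function name `call_backend` and the config fields here are my assumptions for illustration, not the library's real interface.

```python
def call_backend(config):
    """Route a provider-agnostic config dict to a backend-specific handler.

    Real libraries would make actual API calls here; these lambdas are
    stand-in stubs that just show the routing.
    """
    backends = {
        "openai": lambda c: f"[openai:{c['model']}] {c['prompt']}",
        "anthropic": lambda c: f"[anthropic:{c['model']}] {c['prompt']}",
        "ollama": lambda c: f"[ollama:{c['model']}] {c['prompt']}",
    }
    provider = config["provider"]
    if provider not in backends:
        raise ValueError(f"unsupported provider: {provider}")
    return backends[provider](config)


# The same config shape works against any back end; only "provider"
# and "model" change between deployments.
config = {
    "provider": "ollama",
    "model": "llama3",
    "prompt": "Summarize this week's security news.",
    "temperature": 0.7,
}
print(call_backend(config))
```

The point of the pattern is that every app, agent, and component you build carries only this tiny config, and swapping providers becomes a one-field change instead of a code rewrite.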
MTR: this thing combines traceroute and ping. I put this
in here every year or so. It's like a really
cool utility. It combines traceroute and ping and shows you
(47:00):
connection quality between hops in real time. So it's just
a cool geeky tool. RPG Map Bundle: a collection of print-and-play RPG maps for your next role-playing game. Especially if you're doing, like, a one-off and you just need maps real quick. A blog in pure text files.
I think it's a cool idea. Just write however you
do it. Science is a strong-link problem. Very cool
(47:24):
essay and a frustrated Redditor asks what career options exist
for those who consider themselves less intelligent. I thought this
was a really fascinating discussion on Reddit. Recommendation of the week.
Remember that there's only so much that one person can do.
Good books are always there for you. Go to the classics.
(47:44):
If you're wondering what to read because you don't want
to like current events or politics or anything to seep in.
Go to the classics. Read the classics. Maybe not political ones,
but tons of books. Books that are there to cradle
you and save you and enrich you while you get
some needed separation from the shit show that is reality.
(48:07):
And supplement this with journaling. Um, my friend Clint Gibler has continually emphasized how important journaling is to him,
and it's got me writing more journal entries, so I
highly suggest that as well. Especially combined with separating and
possibly reading good books, the good books will give you
(48:29):
ideas which you could also combine with your life experience
and turn that into journaling. And the combination of the
two is really beneficial. And the aphorism of the week.
It is one of the blessings of old friends that
you can afford to be stupid with them. It is
one of the blessings of old friends that you can
(48:50):
afford to be stupid with them. Ralph Waldo Emerson. Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at danielmiessler.com/newsletter. We'll see you next time.