Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:17):
All right. Really excited about this episode. Like I said, uh,
in the newsletter, I think it is one of my
favorite episodes ever. Actually. Last few. I've been feeling really
good about. I love the way the links are working out. So, uh,
if you want to give back any positive feedback, I
would appreciate it. But, uh, let's get into it. Let's
(00:38):
see here. All right. I ordered some noise canceling earbuds.
I haven't unpacked them yet, but, uh, they're for sleeping.
And basically I'm going to try to put them in,
and you can either stream music to them or you
can stream, like, white or brown noise, or you could
just have them noise cancel. And the idea is if
(00:59):
you combine that with like an eye mask, which I
already wear, then it's supposed to really help with sleep.
So I'm going to see. I know a few people
who do both and I just want to try it out.
I did a huge blog post called Why Google I
o scared this 2007 Apple fanboy for the first time,
(01:20):
and it's basically a review of WD DC juxtaposed with
Google I o, which was a couple weeks back. And yeah,
I've been an Apple fanboy for I mean, I mean,
I guess acolyte. I don't really like the word fanboy,
but yeah, really drinking the Apple Kool-Aid for a very
(01:42):
long time. And if you're listening to this, you probably
already know that. But, um, I like to think that
I'm logical about it, but I'm already admitting to being biased. Uh,
so that's what being an acolyte is, right? But what
I saw essentially this year at Google, I o is
(02:02):
it seemed like they are switching to Gemini and I
in general to be like their core mission. I feel
like they're moving away from ads. I feel like they're
moving away from that, being like the future in the
center of their business to like data and AI being
the center of their business. And I think this is really,
(02:24):
really cool on on their part. And I feel like
Google I o they did so well because of that
integration right there talking about integration with glasses and, you know,
XR or AR or whatever they're calling it, and Gemini
being integrated in like all the different apps. So it
just felt like they're doing something new and they're doing
(02:45):
something different. I've always been kind of averse to Google
because they are fundamentally an ad company, or they were
fundamentally an ad company. And I really like the CEO.
I feel like he gets it. I feel like it's
not gross. I don't feel like he's gross. I don't
feel like he's there to maximize ad revenue. I think
he really gets the AI stuff. So my perception of
(03:08):
Google has overall just like improved, I would say over
the last say year or two. just in the context
of them switching to this AI model as opposed to
like an advertising model now. Then you have Apple, which,
like I said, I'm massively biased towards. I love their stuff.
I worked there for like three years, I worked in
(03:30):
their security group and I just love their way of
thinking about the world. I love their way of doing
UI and UX, and I love how they are building
in my opinion. Not that I have any inside information
and it's not like actual product, but I think for
the last ten years they've been building what I'm calling
(03:50):
life OS, right? So forget iOS or Mac OS or
whatever they're building life OS. I mean, they just launched
the Journal app on the Mac, for example, and they
just brought the phone app to the Mac, which is
like the coolest thing ever. And like, AirPods seem to
work better. So there's lots of improvements in iOS or
in Mac OS 26, by the way. But, um, basically
(04:16):
their ecosystem is what keeps me their their focus on
UI and UX is what keeps me their their focus
on art and creativity and like enabling people to be
creative and enabling people to be their best selves or whatever.
Like that's the vibe that I've always liked about Apple,
and I think they do UI and UX better than
anyone else. I think they've had this unified vision of
(04:38):
context and, you know, having finance and creativity and work
and play and personal and all of that, all integrated
into the operating system. They've always just done that better
than anyone else. So that's why I'm so into Apple now.
Obviously AI is happening has been for the last three
years or however long it's been. And Apple is behind. Right. I've,
(05:03):
I've been saying for you know whatever a year that
look they're going to come out with their unified AI
product which is going to go on top of their
life OS or whatever. And basically Siri is going to
jump way ahead of everyone. And I thought that was
going to be like end of last year. And of
course they stumbled on that. And I believe it's because
of prompt injection. I think it is too difficult to
(05:26):
put an AI in front of all the personal context
that Apple has about us. And to do it in
a secure way, I think it's too difficult to do
that right now. That is my guess. And I know
this technically because I know how to do prompt injection and, um,
I know some of the best other prompt injection people
in the world. And it is pretty trivial still to
(05:49):
bypass most protections. So I think they are struggling with that.
Maybe they're struggling with other parts of it as well,
but I imagine they're definitely struggling with that piece. Like
right now I just got a pop up screening call,
and if I click view and this is while I'm
recording on my desktop, if I click view, it, um,
(06:09):
it shows this person like spamming me. And there it's
actively live transcribing that thing that's going to voicemail right now.
So I think that's really, really cool. Um, and then
I could also just click block. Uh, so that's, that's
pretty cool that I have that option. And the phone
is now integrated with the Mac, which it should be. Right.
(06:29):
That's just a voice call. It's not tied to a
physical device anyway. UI stuff, the way I'm kind of
seeing the world now is UI. UX is Apple always
has been. Seems like it will be going forward. And
then Google is now becoming like less of a negative
(06:49):
for me. And it's more about the data and services
and specifically AI. And obviously I wish that Apple had
all of that. They don't have it yet. I think
they will get there hopefully. And obviously Google is probably
working on UI and UX as well. Like I assume
it's getting better. I've seen some, you know, UI stuff
from them that seems to be halfway decent. So the
(07:12):
real question to me is will Google figure out the UI,
UX life, OS, full integration of ecosystem? Will they figure
that out before Apple, or will Apple figure out data
and AI before Google figures out that UI UX piece? Right.
(07:36):
That's really the question. Which one gets the the one
that they're weak on up to a certain bar first?
I can't conceivably think of myself switching over to the
Google ecosystem, because there's so many pieces of it that
I know I would miss, but I am. I am 100%
open to it. I honestly, I is a much bigger
(07:59):
movement than any other tech movement. Um, it's bigger than
UI and UX, especially since I shouldn't be typing on
a keyboard anyway. I shouldn't be, um, you know, thumb
typing on a phone, right? This should all be moving
to voice and virtual interfaces and glasses. So as that
(08:21):
starts to happen, like I'm more likely to move towards
the Google side if Apple doesn't have that. Um, keeping
in mind that I really do care about my interface
to tech. Oh, here's the other thing. The other big
reason that I really prefer Apple is because I've been
on the inside, and I know how seriously and crazy, um,
they take the security and privacy stuff like it is
(08:44):
not marketing, it is not made up. It is absolutely
dead serious. So if I'm building my whole life ecosystem
around this, plus agents, I'm building all this stuff on it,
they got to do it perfectly right. Now, I do
think Google is really, really good at security. I just
don't think their incentives have been aligned, you know, in
the past because they were primarily an ad company, the
(09:07):
more they become an OS and life OS for everyone.
With AI, you know being the primary thing, I think
they should start to move more into the realm of
like what Apple does with like, okay, look, privacy first,
you know, security first. Obviously they say that obviously they
do a good job, but I'm saying there's going to
be less of a conflict because they're not trying to
(09:29):
make all their money off of your data. Right. Hopefully
that's the case. Hopefully they're migrating away from that, which
is why I, I just haven't been enthused about using
them as a core OS. All right. I think that's
enough about that. Basically a little bit of disappointment around WD, DC.
(09:49):
I mean, I love the operating stuff. I, I'm a
little halfway between like how much do I like the
liquid stuff? I've seen some places where the interface is
quite bad. I've seen some places where it's quite good.
I think we're going to let that bake for a
little while, see how it goes. We're on the first
beta right now. They're going to clean up a lot
of stuff. But what I have noticed is some really
(10:11):
good improvements around the fluidity of like moving from my
phone to my desktop, having the AirPods switch. Naturally, this
is the type of thing you lose when you go
to Google. Like you try to switch over and you
realize like nothing works, right? Because Apple has been working
on this for so many years, and this macOS and
iOS 26 update, it seems like things are really, really smooth.
(10:35):
I mean, my OS is just working better now, and
this is beta one. Back in the day when I
was doing these betas, like I would lose core functionality
like regular apps wouldn't launch. I would like go dark,
like I couldn't text people. It was crazy, you know,
running these Apple betas early on. But, um, no, it's
it's working great. I recommend you try it out. And
(10:58):
if you do, you could text me and we could
try out some of these new emoji features or whatever.
All right. Um, got a couple blogs. Yeah. So one
is the WWE one, the other one is, um, uh,
see here. Oh, yeah. It's an argument against. It's just
(11:19):
next token prediction. I thought that was a pretty good
way of encapsulating it. Uh, so definitely check that one out.
I'm about to do, like, a kind of, like a
personal project to profile, like all the CCP members and structure,
just profile them and, like, figure out who are they,
what are they up to? How are they different than she? What, uh,
(11:41):
what are their political opinions? What are they focused on?
What are their areas of expertise? What are their areas
of responsibility like how does legislation get. I guess it's
not legislation because people aren't voting, but how do they
decide what new laws to put out? Is there a process?
Is it on a particular timing? I just kind of
want to understand how the Chinese government works. So I'm
(12:03):
going to start with understanding all the CCP people, um,
because that is the government. And I and I want
to look at the different echelons, like, is there a
like is there like a Politburo? Is that a smaller group?
Is it like a larger thing of like hundreds of people? Um,
what's the difference between those tiers, like stuff like that?
So I'm going to do a study like that. And, uh,
(12:25):
I don't know how I'm going to put that out.
Maybe a blog post, maybe some member content. I have
no idea. Uh, the other thing I'm going to do
is I'm going to do a future trend investment analysis exercise,
which I did, I want to say like five years ago,
and it basically told me to buy Amazon and Nvidia.
I forget what that exercise told us to buy, but
I did it with my good buddy Tai Sabarno, and
(12:48):
I think we ended up making a number of purchases
based on that, and I think they've done well for us. But, um, yeah,
I want to do another one this time. We'll get
to use things like, you know, O3 Pro and I
get to throw in tons of context about myself and
what I'm looking for and the, you know, the types
of investments I'm looking for and the type of risk
(13:08):
I want to be open to. Um, and this isn't
only for investment. That'll be kind of like a side product, um, of,
like the exhaust that comes out of it. The biggest
thing I'm trying to do here is just figure out trends.
So here's like the I'll give you like kind of
what I'm going to feed to this thing. Right. I'm
going to set this entire thing up, which is going
(13:29):
to be like 20 pages of me writing up contacts
to my own thoughts and my own predictions, my own like,
concerns or whatever. And I want to know what it
thinks could happen. And I'm sure it's going to be
very careful and be like, look, we can't predict the future,
which obviously everyone knows that. But, um, what I'm going
to be is what I'm going to say is like, look, um,
(13:53):
if this were to happen, what seems obvious to you,
that would be some second order effects. That is the power, right,
that I didn't have before where I is like, well,
obviously this is the type of thing that happens and
it gives you like 150 different things. So when certain
things are scarce, prices of certain other things go up.
(14:14):
Which companies are associated with those? I don't know all
that information. Like I'm not like a commodities expert. I'm
not a trading expert, I'm not a stock expert. So
I think there are some things that don't involve that
much prediction or that much conjecture on the side of
the eye. That's actually just benefits from it. Understanding how
the world works. So I could say, okay, um, like,
(14:38):
what am I? You know, hypotheses is like, okay, private
security is going to go up. Oh, there's probably going
to be a bunch more prisons built, uh, because, you know,
we're stupid. And we took down all the mental health facilities.
So what happens when there's like UBI comes out? But
UBI is limited to only citizens. So they start doing
mass deportations, not because of the current kind of vibe
(15:01):
of like anti-immigrant stuff that's going on, but more along
the lines of like, we need to control how many
people were sending UBI to, we're not going to send
UBI to everyone in the country just because they're physically here.
So now there's a bunch of tech to find who
is and isn't supposed to be receiving UBI, and more
and more people will become homeless because, you know, AI
(15:23):
is taking jobs. Okay, so they're homeless. So then you
have the fact that they don't have a place to live. Uh,
then you have addiction, then you have violence. So who
goes into prison, right? Prison is mostly full of people
who have addiction, who have mental illnesses, and obviously some
people who are violent. Right. And the percentages, you might
(15:44):
be surprised how much it's actually people that just couldn't
find a way to get it going. Right. So they
end up on drugs, they end up mentally ill. Those
ideally in some world, like the world we're supposed to
be building, that would be like drug treatment programs and
it would be mental health facilities. But all of that
is going to collapse down into basically like prisons. Get
(16:05):
them away from society, separate them from society. So what
does that mean? What does that mean for water? What
does that mean for food? What does that mean for, um,
what companies are going to do well in that world? Um, like,
I'll tell you one of mine Costco I love Costco.
I'm like, so like into Costco. Um, and also Google
(16:26):
and also Apple. But those are like my three main
investments right now. So the question is like, how is
how can I find other ways to connect these dots.
And I'm going to tell it, look, don't be doing conjecture.
Don't be trying to predict the future. I'm going to
give you a bunch of options of ways. I think
maybe it could go. But I have no idea. And
(16:47):
you as this I you also have no idea. But
there's there are lanes of probability, Right? Uh, turns out
if people don't have jobs, they have no way to
pay for food, especially for their families. They might get
a little upset. Right. So certain things don't require a
lot of, like, future prediction or futurist type crap. Um,
(17:07):
so I'm going to use the AI to help me
sort of navigate that stuff that seems a little more obvious. And, um,
the main output is going to be like, it looks
like if these things happen, then these things might happen
as well. And then, um, maybe some investment thing of like, okay, well,
for all of those different ones, what companies or what
(17:28):
products or what raw materials or where would I want
to live? Where would I want to move? What is
a good city to be in if something like, you know,
three through seven were to happen, right? That's the type
of analysis I'm going to do. And similar to the
CCP one, I'm not sure how I'm going to put
that out. Maybe I'll make a giant PDF, maybe I'll
do a video. Maybe it'll be like member content, maybe,
(17:49):
you know, charge for it, Make it free. No idea.
Probably some combination of those. All right. What else? Doo
doo doo. All right. Cybersecurity. So Trump overhauled Biden's cybersecurity
policies with a new executive order. Uh, he removed focus
on mandated digital IDs. That must have been, like an
immigration type thing. Accounting, compliance checklists, micromanaging, agency decisions. I
(18:15):
think that might have been like Cisa, maybe like harnessing
or like trying to control Cisa ads focus on defeating
foreign threats, secure software practices, border gateway protection. That's BGP,
I believe. Um, yeah. Like, you know, hijacking BGP routes.
Post-quantum cryptography, modern encryption protocols, AI for vulnerabilities, IoT security
(18:38):
standards and limiting sanctions scope. So that was, uh, a
combination of mine and an AI analysis of the talking points,
or it's called a fact sheet. Bellingcat tests whether or
not I can actually geolocate photos, and it found that
O3 actually did the best and beat out Google Lens, actually.
(19:00):
Sentinel one reveals details on Chinese supply chain attack attempt.
Protonvpn sees 1,000% sign up surge after Pornhub blocks France.
So France evidently loves Protonvpn. And, uh, yeah, they went
over there so they could still get access. Microsoft Teams
with Indian police to shut down fake tech support scammers
(19:24):
and Bishop Fox 2025 red team tools list. All right.
National security OpenAI published their annual report on how bad
actors are using AI maliciously, and I love the fact
that they put these out. So four out of ten
major abuse cases look like they came from China, from
social engineering to cyber threats. They're seeing deceptive employment schemes.
(19:45):
So task scams from like Cambodia comment spamming from Philippines
and a covert influence operations potentially leaked to Russia and
Iran using AI as force multipliers. Um, I'm actually going
to mention something right now. I can't figure out what
I'm going to do with this. I might do another
report like the one I just the two I just
(20:06):
talked about. So my Twitter feed is full, full of
what I am absolutely certain is like widespread propaganda. So
the content that I see is like, don't you hate
black people because they do this? Um, don't you wish
the world looked like this? And then if you click
the video, it's like a whole bunch of white people
(20:28):
walking around and there's like, no crime and there's like
a fountain with, like, a nice water sound, and there's, like,
somebody buying something at, like, an Abercrombie or something, and
it's like. It's like massive propaganda to like, uh, don't
you wish the world was the way it was before? Before, like,
brown people messed it all up, right? And then there'll
(20:49):
be another one. It'll be like about fighting or something
like that. Or it'll be like masculine movies. Don't you
wish movies were like this? And it's like someone holding
a Budweiser. Budweiser. And, like, shooting a crossbow or something.
And I'm just like, what is going on? So every
time I go back to Twitter, which I probably check
it like, I don't know, like 15, 20 times a
day or something, and then I'll just be browsing to
(21:11):
see what people are saying. But if I go to
the home tab, which is like impossible to clean up
because I block all these things when I see them,
although lately I've started bookmarking them because I'm going to
do this research project. But normally I just block, block, block.
But it's just an endless supply. Like these people have
like multiple accounts. So what I started doing is clicking
on the account and scrolling through their feed, and it's nonstop.
(21:35):
It's nonstop narratives about how messed up the country is,
reasons for why the country got messed up, which is scapegoating, basically.
And then, um. Don't you wish it was like, uh,
the new thing, right? Don't you wish it was like
it used to be? Don't you know? Don't you think
we could do better? And I'm just, like, seeing this everywhere.
(21:56):
Just like hundreds of accounts doing this. And I sent
this a few of these accounts to, uh, some friends
of mine who go and research this stuff. And, yeah,
I'm just really surprised that, like, I guess I'm not surprised, given, uh,
given Elon's situation and the fact that he lost so
many advertisers and the fact that I guess he's like
(22:18):
conspiracy friendly, I would say, um, but ultimately people click
on this stuff. People watch this stuff because it is,
you know, specifically designed to be that way, to be
very watchy and clicky. Um, so I guess that's why
it's there. But but it's really gross. Like, I really
wish there were a platform or I wish there were
(22:39):
options in this platform where it's like, look, I'm paying money. Like,
Can you not show me this garbage? I want to
see stuff from people I follow and stuff that's very
tightly correlated with people I follow. Which he says he's
working on that actually, he says he's working on similar
content matching, which is similar to the app I have
called threshold. But, um, yeah, this whole thread of like,
(23:02):
who is sending all this stuff to us, who is
basically trying to change the narrative or try to put
inject into our minds, like this feeling of the US
is messed up. It was messed up by them pointing
at trans people or brown people or black people or whatever.
Gay people or you know, the wrong religion or whatever
(23:24):
it is. Liberals, definitely, um, people who aren't right enough
or aren't right in the correct way. Um, it's just
it's worse than I've ever seen. It's absolutely worse than
I've ever seen. Um, so I don't know if their
skill level is just going up. I imagine it's because
it's all a lot of it's being AI generated. So
(23:44):
you could just do it at more scale. That's probably
a reason. But anyway, it's something I'm going to look
into and probably do some kind of essay or report
or whatever. All right. AI. Okay. This is the craziest story,
the absolute craziest story of the week. Um, and honestly,
for a long time. And I've got a follow up
to it as well, because my buddy told me about
(24:06):
this one company. But check this out. So I finally
found something that trains scientists have missed for decades. So
a number of professional scientists have been trying to figure
out how a particular kind of bacteriophage, which is a
virus that infects bacteria, uh, which I didn't know that. So, um,
there's a podcast here that supports this that's basically like, um,
(24:28):
it's all the scientists who actually did this. So basically
they have been studying this thing for over a couple
of decades. They are the world's experts in how these
particular bacteriophages, um, gain mobility and move out of the cell. Basically,
what they do is they take over the cell. They
blow it open and spread the virus. That's what these
(24:48):
things do. So they study how these things get their
heads and their tails, which allows them to move and
propagate and, you know, take over other things. And I'm
simplifying and probably messing up some details because it's a
very complex, you know, specific topic. But I went and
listened to the entire one hour episode with the actual
experts talking about this. Now the story is they were
(25:11):
given this new version of a Google model, which is
specifically for researching and coming up with novel hypotheses, novel ideas. Okay,
so what they did was they they had this thing
where they're like, how is this thing possible? They've been
thinking about this for years and years and years. They've
done numerous studies. They are the world's premier experts on this.
(25:32):
And they have been stumped by this fact. The fact
that this one bacteriophage, I believe it was there, were
stumped by the fact of how was it actually propagating
or how was it becoming mobile. It was something like that.
It was some sort of problem that they could not
figure out, and they know exactly how it works. You know,
they were very sure about this and that. Therefore, you know,
they're like, this makes no sense. So they gave a
(25:54):
bunch of this observational data to it, you know, stuff
that they've had for years and years. And it came
back with hypotheses. And one of the hypotheses out of
a very short list was like, hey, um, it might
be this actually, because, um, what you could do is
you could just do this instead of this, and it
puts a tail on there and it actually picks up
(26:16):
the one, um, once it's outside of the cell, it
picks up someone else's. And I'm kind of messing up
the details here, but they describe it pretty good in this,
in this, uh, full episode of, uh, Cognitive Revolution, I
think is the name of the podcast. But, um, when
they heard this, when they saw it written down, they're like, oh,
that that's probably what it is. So here. Here's the thing.
(26:39):
They had made an assumption that a certain type of
mobility or a certain type of propagation was not possible,
and it was a very silly assumption that they had made.
And this, this assumption had stopped them from solving this
problem for years upon years. How many hundreds of hours,
thousands of hours have they spent stumped on this as
(27:01):
the world's premier humans on the entire planet? The smartest
people working on this in the entire world, on the
entire planet, were stumped by a thing. And Google came back,
I think within a couple of minutes, I think, and
it was like, yeah, I think it's probably this. They
(27:22):
go and check and it's 100% verified that was correct.
And they here's here's the thing. They were kind of, uh,
I believe what they were saying is they were about
to figure this out. Maybe. Maybe this year. Maybe next year.
They were doing some experiments that might have got them
to this. But the point is, Google found it instantly.
(27:43):
Which means if they had this Google research assistant earlier,
maybe last year, maybe years ago, maybe it would have
found this right. And they not would not have had
this problem for this entire time. Now, what excites me
about this is not this specific use case. It's the
how they reacted to it. They were like, this is insane.
(28:04):
Because the thing I'm so excited about, and it's the
same thing they were talking about, is how many other
things are like this. There is data everywhere. There is
evidence everywhere. There are dots. Okay, think of connect the dots.
Think of how many dots are lying around in papers,
lying around in data sets. Raw data sets. Unfiltered. Unreviewed. Why?
(28:28):
Because there aren't enough academics. There aren't enough researchers. Forget academics.
There aren't enough humans with the training required, write either
the formal current training or like even rudimentary training. There
aren't enough human eyes on all this raw data. There
are no human eyes to look at the stuff and
(28:48):
find the dots and connect them. How many cures for illnesses?
How many novel ways are we sitting on the ability
to improve our IQs by 20%? Are we sitting on
the ability to solve aging? I would guess we absolutely are.
We absolutely are. And that the problem is not enough.
(29:10):
People are studying it using not enough data. So what
I now allows us to do with tools like this,
which are about to be drastically better. I mean, this
thing that found this is about to be 100% stupid
compared to whatever comes out next year or next week.
Right now, just imagine giving all that raw data and
(29:31):
even going to collect more, right? Getting sensory data from
cells or whatever from, you know, different layers of the
world and basically feeding it into this thing all the
time and saying your job is to connect dots. That's
that's the promise of AI. Your job is to connect dots.
So what we've basically done at that point is simulate
(29:53):
billions more people on the planet with training for connecting dots,
and then they could go, look, I came up with
all these hypotheses. And if you go test this, it's
probably going to work. Oh, and by the way, that
is actually a cure for, you know, 86% of cancers
or that will extend your life by 42 years, um,
or whatever. This will allow you to transfer your brain
(30:17):
into a, you know, digital, uh, synthetic form or whatever. Um,
I'm just blown away by this. Now, I sent this
article and my write up of this to a buddy,
and he's like, yeah, I'm actually, um, you know, invested
in and, you know, uh, an advisor for this company
who does that. Their specific thing is they are already
(30:40):
going and crawling all this other data. They are going
to find they have actively collected and are collecting all
this raw data and all these studies that basically didn't
go anywhere because they lost funding or whatever. That company,
this company he's talking about, it is actively doing that.
And guess what they're doing? They build the labs and
(31:02):
they're automating the labs. So when they do a hypothesis
and this is crazy, they already have use cases of
this working. So it's kind of even further than the
main story here. They have a use case of an
example of it actually, um, came up with a hypothesis.
It built a full, uh, methodology and schematic for how
(31:25):
to build the lab. And humans basically took that and
went into the lab. They built a little thing. They
basically set up the experiment exactly the way that the
I said to. And it 100% was true. They confirmed
experimentally that this hypothesis that it could come up with
was true. Now, if it had had the ability to
(31:46):
actually automate those tests, um, if it had the raw
materials in the lab and they had robotics or whatever,
that would have been even more insane, right? What if
it had the raw materials to to combine these molecules
or whatever? And there you got to be a little
bit careful because you don't want to just prompt inject
and it builds a, you know, a meth lab or a,
you know, sarin gas lab or whatever, a Covid lab, right.
(32:10):
Automated Covid generation. Um, so you got to be careful
with those. Obviously, this is what a lot of people
are worried about. But unbelievable, unbelievable possibility here in terms
of like connecting dots. And then the most important thing
and I talked about this, um, I got an essay
called The Path to Asi. I don't know. You guys
might have seen that it's came out. I don't know.
(32:32):
I put that out a few months ago and it
basically describes this process, right? You have hypotheses, you test them.
That's it. Right. Humans have hypotheses. They test them. But
we do it at such small scale. And more importantly, um,
real innovation comes from getting a bunch of smart people together,
like Bell Labs or the Renaissance, and a whole bunch
(32:52):
of smart people are together there talking about their ideas.
But the other person's idea that they're talking to influences
them slightly, changes their idea. Or even somebody, um, somebody, uh,
copies the idea, but they copy it incorrectly with a
slight deviation that actually makes it better. And that's absolutely insane.
(33:14):
So essentially what you have is like this evolution. You
have evolutionary biology or evolutionary, um, improvement happening to the
realm of ideas. Now, that is 100% Sense automatable as well.
You could use AI to do that. You could use
genetic algorithms to do that. You know, it's easy to
(33:35):
do this in a not sophisticated way, and I'm sure
you could do it in a lot more sophisticated ways
the more you think about it. But essentially you have
this engine of human generated ideas, AI generated ideas, and
then you have this evolutionary petri dish which constantly evolves them,
that spits out hypotheses which go through some sort of filter,
and then the higher quality ones go into this automated
(33:58):
testing lab and we start to have this giant Bell Labs.
Renaissance mechanism for inventing new things. Right. Inventing completely new things,
solving things like aging and cancer and all these different illnesses,
extending lifespan, like solving world hunger, like creating abundance on
(34:21):
the planet, like, you know, getting away from the zero
sum game of like capitalism and all this stuff. Like
you could literally use this engine to solve human problems,
you know, at scale. So all this is just massively
exciting to me. And I thought that was like the
most interesting story of the week. Apple releases controversial paper
(34:43):
on AI. Yeah, yeah. It was silly. It was a
silly paper. Um, some people are like, oh, you're behind
on AI. So now you release a paper that basically
says AI is stupid. Anyway. Um, I didn't even want AI. Uh,
it's dumb anyway. So that's why. That's why I'm not
trying hard. That's why other people are beating me. That's
(35:03):
not actually what happened though, because Apple Apple air quotes.
Apple isn't saying this. It is some ML team within Apple. Right.
Releasing this. So, um, anyway, it was a funny narrative.
OpenAI massively drops O3 prices. They dropped by 80% and
they also released O3 Pro. Um, OpenAI doubles revenue to
(35:25):
$10 billion annually. And OpenAI also must keep all ChatGPT
conversations indefinitely due to a legal hold related to a lawsuit.
I find this weird. I don't know how accurate this is.
If it's all like what exactly the scope is. There's
a quote here that says we are required to retain
all data. Cannot process deletion requests during this period. What
(35:47):
I don't know is if it applies to like tenants
where they've paid extra money just to have their stuff
be ephemeral, right? There are a lot of people with
like private Azure instances where like that was the whole
reason they're paying extra is the fact that it's ephemeral,
or it gets deleted constantly or they're able to delete it,
but I don't know if it applies there. I doubt it,
(36:07):
but who knows? OpenAI makes ChatGPT voice mode sound way
more human. Okay, go play with voice mode. Advanced voice
mode on ChatGPT. It is ridiculous. I mean, it is
absolutely ridiculous. I was having a conversation with it yesterday,
which I haven't talked to in a while, and it
(36:28):
felt so incredibly human. The pauses, the voice tones, it
actually sang. Um. At first it did like vocal song
and I was like, no, not, you know, uh, spoken
word actually sing. And it actually sung with, like, tone. Um, anyway,
I still use the Cove one because it sounds like, uh,
(36:48):
one of the AIS from interstellar, and that's my favorite voice, uh,
since they got rid of, uh, Samantha. All right, Stanford
study shows doctors put AI beat. Oh, plus, I beat
traditional diagnostic tools. So doctors normally in this particular test
(37:09):
were 75% accurate. Doctors with the AI were 85% accurate, 10% jump.
But the hilarious part and extremely sad part I by
itself was 90%. It was 5% better than the doctor
using I. So it's like, yeah, it's like, yeah, I
(37:31):
am way better than the doctor. This is the I talking.
I'm way better than the doctor. I am 15% of
the doctor. Now, if the doctor uses me and asks
me questions, it can almost be as smart as me,
but not quite. And we all know how early the
game is. We all know how bad I is at
certain things, and it's already to this point. And this
(37:53):
is not like some random study at a junior college.
This is a pretty large Stanford study. Insane. Absolutely insane. Uh,
Microsoft reshuffles leadership to focus on AI agents. My new
favorite description of a business mode. This is from Jamin Bell.
A real long term moat is just a sequence of
smaller moats stacked together. Each one buys time. And what
(38:16):
you do with that time, how fast you execute, how
quickly you evolve, determines whether you stay ahead. And my
analysis of this or my. You know, um. Clarification of
it or tightening it up is to me, that means
speed and adaptability is the only real moat. And a
hat tip to, uh, Clint Gibler, uh, my buddy for, uh,
(38:38):
sending me this, like, in a text or something. Um, really,
really good technology. Why Bell labs works so well. Uh,
we talked about that already. BYD's five minute charging puts
China in the lead for EVs. Five minute charging. That's
basically a gas station. Uh, very worried. Um, this is
(38:58):
why I want Tesla to win so bad. And now
also Waymo. Uh, I want, but it's got to be
Tesla because they're the ones making the cars. I'm really
worried about BYD. BYD is making extremely good cars. They
are subsidizing them. They have 10,000 EVs. And we're talking
about five minute charging like a gas station. I mean,
(39:22):
if they were to come here, they would crush right now.
So we need our options. We need American options to
be getting better way faster than they are right now.
Wing and Walmart expand drone delivery to 100 stores, which
is five major cities. Uh, Chinese tech behind Amazon's humanoid robots. Great.
(39:43):
So Amazon is doing humanoid robots now. Um, they're talking
about potentially having humanoid delivery drivers. And the tech is Chinese. Great.
YouTube loosens content rules using public interest as a standard.
And AWS has opened a new region in Taiwan with
three availability zones. Humans. Rents are dropping in most US
(40:06):
cities for the first time since 2023. Caffeine keeps your
brain awake even while you sleep, so you could actually sleep.
According to the study. You could actually sleep and stay asleep,
but you wake up more tired because your brain is
not properly resting. Because the caffeine. Because caffeine basically blocks
the sleep, um, receptor. Um, that's one of the mechanisms.
(40:28):
So that's probably related to why this is the case,
or at least why they found that to be the case.
Las Vegas fights record heat with massive tree planting. And
this is not, uh, like it's absorbing carbon. Uh, because
that wouldn't be nearly the amount of skill you need. Uh,
that's a planetary thing. But, um, for shade is what
they're talking about. Mushrooms may communicate using up to 50 words.
(40:51):
And forests offset global warming more than scientists previously thought.
So replanting the trees that we lost since the 1800s
could cool the planet by like half a degree, which
I think is Celsius. Uh, which would be cool. Uh,
I think we should do that. I think we should plant, like,
(41:13):
trillions of trees and just, uh, see if the temperature
falls really fast. See if CO2 levels fall really fast.
And if they do, um, cut down some of the trees.
That's what I think. And also carbon sequestration. And also
we should go all in on solar discovery. Someone built
(41:34):
an MCP server that actually runs on Cloudflare workers. So
you don't have to stand up your own infrastructure. Data
visualization reveals patterns in D&D monster designs. How anthropic teams
use cloud code. We are No longer a serious country
by Paul Krugman. Great explanation of how model context protocols
is different from traditional APIs. I'm not sure I'm going
(41:55):
to read all these, because they're actually just links for
you to go and click on in the newsletter. Um,
so I'm just going to go right to aphorism of
the week. It is not death that we should fear,
but never beginning to live. It is not death that
we should fear, but never beginning to live. Marcus Aurelius.