Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I wanted to try to build something useful, but I didn't think I would build anything particularly great. Probabilistically it seemed unlikely, but I wanted to at least try.
Speaker 2 (00:11):
So you're talking to a room full of people who
are all technical engineers, often some of the most eminent
AI researchers coming up in the game.
Speaker 1 (00:20):
Okay, I think we should.
Speaker 3 (00:24):
I think I like the term engineer better than researcher.
Speaker 1 (00:27):
I suppose if there's some fundamental algorithmic breakthrough, it's research. Otherwise, it's engineering.
Speaker 2 (00:35):
Maybe let's start way back. This is a room full of eighteen to twenty-five year olds, maybe younger, because the founder set is getting younger and younger. Can you put yourself back into their shoes? When you were eighteen, nineteen, learning to code, even coming up with the first idea for Zip2, what was that like
(00:56):
for you?
Speaker 3 (00:56):
Yeah?
Speaker 1 (00:57):
Back in ninety-five, I basically had a choice: either do graduate studies, a PhD at Stanford in material science, actually working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem for electric vehicles, or try to do something in this thing that most people had never heard of, called the Internet. And I talked
(01:17):
to my professor, who was Bill Nix in the material science department, and I said, can I defer for a quarter? Because this will probably fail, and then I'll need to come back to college.
Speaker 3 (01:26):
And then he said, this is probably the last conversation
we'll have, and he was right.
Speaker 1 (01:30):
But I thought things would most likely fail, not that they would most likely succeed. And then in ninety-five I wrote, basically, I think the first, or close to the first, maps, directions, white pages, and yellow pages on the Internet. I just wrote that personally, and I didn't even use a web server; I just read the port directly, because I couldn't afford one. And I couldn't afford a
(01:51):
T1. The original office was on Sherman Avenue in Palo Alto. There was an ISP on the floor below, so I drilled a hole through the floor and just ran a LAN cable directly to the ISP. And my brother joined me, and another co-founder, Greg Kouri, who passed away. And at the time we couldn't even afford a place
(02:12):
to stay. The office was five hundred bucks a month, so we just slept in the office and then showered at the YMCA on Page Mill and El Camino. And yeah, I guess we ended up building a somewhat useful company with Zip2.
Speaker 3 (02:24):
In the beginning, we did build a lot of really, really good
Speaker 1 (02:28):
software technology, but we were somewhat captured by the legacy media companies. Knight Ridder, the New York Times, Hearst, and whatnot were investors and customers and also on the board, so they kept wanting to use the software in ways that made no sense. So I wanted to go direct to consumers. Anyway, it's a long story; I don't want to dwell on it too much or use up
(02:50):
too much time. But I really just wanted to do something useful on the Internet, because I had two choices:
Speaker 3 (02:55):
do a PhD and watch people build the Internet, or help build
Speaker 1 (02:59):
the Internet in some small way. And I was like, I guess I can always try and fail, and then go back to grad studies. And anyway, that ended up being reasonably successful. It sold for like three hundred million dollars, which was a lot at the time. These days, that's, I think, the minimum ante for an AI startup, like a billion dollars.
Speaker 3 (03:15):
It's like.
Speaker 1 (03:18):
There are so many freaking unicorns. It's like a herd of unicorns at this point with a billion-dollar valuation.
Speaker 4 (03:24):
There's been inflation since, so quite a bit more money.
Speaker 1 (03:27):
Actually, yeah. Back then you could probably buy a Whataburger for a nickel. Not quite, but yeah, there has been a lot of inflation. But the hype level in AI is pretty intense. You've seen companies that are, I don't know, less than a year old getting sometimes billion-dollar, multi-billion-dollar valuations, which I guess
(03:47):
could pan out, and probably will pan out in some cases, but it is eye-watering to see some of these valuations.
Speaker 3 (03:54):
Yeah, what do you
Speaker 4 (03:55):
think? I'm pretty bullish. I'm pretty bullish, honestly.
Speaker 2 (03:59):
So I think the people in this room are going to create a lot of the value. A billion people in the world should be using this stuff, and we're barely scratching the surface of it. I love the Internet story in that, even back then, you were a lot like the people in this room, in that the CEOs
(04:20):
of all the legacy media companies looked to you as the person who understood the Internet. And a lot of the world, the corporate world, the world at large, does not understand what's happening with AI. They're going to look to the people in this room for exactly that. So what are some of the tangible lessons? It sounds like one of them is: don't give up board control, or be careful about it, and have a really good lawyer.
Speaker 1 (04:42):
Well, I guess my first thought is that the big mistake, really, was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media. They'll make you do things that seem sensible to them but don't really make sense with the new technology.
(05:03):
I should point out that I didn't actually intend to start a company at first. I tried to get a job at Netscape. I sent my resume to Netscape, and Marc Andreessen knows about this, but I don't think he ever saw my resume, and nobody responded. And then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but I was too shy to talk to anyone.
Speaker 3 (05:24):
So I'm like, man, this is ridiculous.
Speaker 1 (05:26):
So I'll just write software myself and see how it goes. So it wasn't actually from the standpoint of wanting to start a company; I just wanted to be part of building the Internet in some way. And since I couldn't get a job at an Internet company, I had to start an Internet company. Anyway, AI will so profoundly change the future.
Speaker 3 (05:45):
It's difficult to fathom how much.
Speaker 1 (05:48):
But the economy, assuming things don't go awry, and AI doesn't kill us all, and itself as well, then you'll see ultimately an economy
Speaker 3 (06:03):
that is
Speaker 1 (06:05):
not just ten times more than the current economy. Ultimately, if we, or whatever our future machine descendants, mostly machine descendants, become, say, a Kardashev scale two civilization or beyond, we're talking about an economy that is thousands of times, maybe millions of times bigger than the
(06:28):
economy today. Yeah, I did feel a bit, when I was in DC taking a lot of flak for getting rid of waste and fraud, which is an interesting side quest, as side quests go.
Speaker 4 (06:39):
But I've got to get back to the main quest.
Speaker 3 (06:41):
Yeah, I've got to get back to the main quest here. So, back to the main quest.
Speaker 1 (06:45):
But I did feel a little bit like this: it's like fixing the government. It's kind of, say, the beach is dirty, and there are some needles and feces and trash, and you want to clean up the beach. But then there's also this thousand-foot wall of water, the tsunami of AI. And how much does cleaning the beach really matter if you've got a thousand-foot tsunami about to hit?
Speaker 3 (07:07):
Not that much.
Speaker 4 (07:10):
Well, we're glad you're back on the main quest. It's
very important.
Speaker 1 (07:13):
Yeah, back to the main quest: building technology, which is what I like doing. There's just so much noise. The signal-to-noise ratio in politics is terrible.
Speaker 4 (07:21):
I live in San Francisco, so you don't need to
tell me twice.
Speaker 3 (07:24):
Yeah, DC, I guess it's all politics in DC.
Speaker 1 (07:26):
But if you're trying to build a rocket or cars, or you're trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking, or your software or your hardware won't work. You can't fool math.
Speaker 3 (07:43):
Math and physics are rigorous judges.
Speaker 1 (07:45):
So I'm used to being in a maximally truth-seeking environment, and that's definitely not politics.
Speaker 3 (07:50):
So I'm glad to be back in technology.
Speaker 2 (07:54):
I guess I'm curious, going back to the Zip2 moment: you had an exit worth hundreds of millions of dollars.
Speaker 3 (08:01):
I got twenty million dollars.
Speaker 2 (08:03):
Okay, so you solved the money problem, and you basically took it and rolled it. You kept rolling with X.com, which became PayPal, along with Confinity.
Speaker 3 (08:13):
Yes, I kept the chips on the table.
Speaker 2 (08:16):
Yeah, so not everyone does that. A lot of the
people in this room will have to make that decision. Actually,
what drove you to jump back into the ring?
Speaker 1 (08:25):
I think I felt, with Zip2, we'd built incredible technology that never really got used. At least from my perspective, we had better technology than, say, Yahoo or anyone else, but it was constrained by our customers. And so I wanted to do something where, okay, we wouldn't be constrained by our customers: go
(08:46):
direct to consumer. And that's what ended up being X.com, PayPal; essentially X.com merging with Confinity, which together created PayPal. And then the sort of PayPal diaspora might have created more companies than probably anything in the twenty-first century. So many talented people were at the combination of Confinity and X.com.
Speaker 3 (09:07):
So I just wanted to, like, I felt we got
Speaker 1 (09:09):
our wings clipped somewhat with Zip2, and it was like, okay, what if our wings aren't clipped and we go direct to consumer? And that's what PayPal ended up being. But yeah, I got that twenty million dollar check for my share of Zip2. At the time, I was living in a house with four housemates and had about ten grand in the bank, and then this check
(09:31):
arrived, in the mail of all places, and my bank account went from ten thousand to twenty million and ten thousand. Okay, I still had to pay taxes on that and all. But then I ended up putting almost all of that into X.com and, as you said, just keeping almost all the chips on the table.
Speaker 3 (09:49):
And yeah, and then after PayPal, I was curious as to why we
Speaker 1 (09:54):
had not sent anyone to Mars. And I went on the NASA website to find out when we were sending people to Mars, and there was no date. I thought maybe it was just hard to find on the website, but in fact, there was no real plan to send people to Mars. So then, well, this is such a long story, I don't want to take up too much time here, but.
Speaker 4 (10:14):
I think we're all listening with rapt attention.
Speaker 3 (10:16):
So I was
Speaker 1 (10:17):
actually on the Long Island Expressway with my friend Adeo Ressi. We were housemates in college, and Adeo was asking me what I was going to do after PayPal. And I was like, I don't know. I guess maybe I'd like to do something philanthropic in space, because I didn't think I could actually do anything commercial in space; that seemed like the purview of nations. But I was curious
(10:37):
as to when we were going to send people to Mars. And that's when I was like, oh, it's not on the website, and I started digging. There was nothing on the NASA website. So then I started digging in, and I'm definitely summarizing a lot here, but my first idea was to do a philanthropic mission to Mars called Life to Mars, where we'd send a small greenhouse with seeds in dehydrated nutrient
(10:59):
gel, land it on Mars, and hydrate the gel, and then you'd have this great sort of money shot of green plants on a red background. By the way, for the longest time I didn't realize money shot, I think, is a porn reference. But anyway, the point is that would be the great shot of green plants on a red background, to try to inspire NASA and the public to send astronauts to Mars.
Speaker 3 (11:21):
As I learned more, I came to realize along the way.
Speaker 1 (11:24):
By the way, I went to Russia in two thousand and one and two thousand and two to buy ICBMs. That's an adventure: you go and meet with the Russian high command and say, I'd like to buy some ICBMs.
Speaker 4 (11:37):
This was to get to space, yeah, a rocket, not
Speaker 1 (11:40):
to nuke anyone. But as a result of arms reduction talks, they had to actually destroy a bunch of their big nuclear missiles. So I was like, how about we take two of those, minus the nukes, and add an additional upper stage for Mars? It was trippy being in Moscow in two thousand and one negotiating with the Russian military to buy ICBMs, like, that's crazy.
(12:07):
But they kept raising the price on me, which is literally the opposite of what a negotiation is supposed to do. So I was like, man, these things are getting really expensive. And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but that there was no way to do it without breaking the budget, even the NASA budget.
Speaker 3 (12:26):
So that's where I decided to start SpaceX.
Speaker 1 (12:29):
I started SpaceX to advance rocket technology to the point where we could send people to Mars.
Speaker 3 (12:35):
And that was in two thousand and two.
Speaker 2 (12:37):
So you didn't start out wanting to start a business. You wanted to start something that was interesting to you, that you thought humanity needed, and then as you kept pulling on the string, the ball sort of unravels, and it turns out this could be a very profitable business.
Speaker 1 (12:58):
It is now, but there had been no prior example of a rocket startup really succeeding. There had been various attempts at commercial rocket companies, and they all failed. Again, starting SpaceX was really from the standpoint of: I think there's a less than ten percent chance of being successful, maybe one percent, I don't know.
(13:18):
But if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors, because they're just impedance-matched to the government, and the government just wants to do very conventional things. So it's either coming from a startup or it's not happening at all. A small chance of success is better than no
(13:39):
chance of success. So yeah, I started SpaceX in mid two thousand and two expecting to fail; like I said, probably a ninety percent chance of failing. And even when recruiting people, I didn't try to make out that it would probably succeed. I said, we're probably going to die, but there's a small chance we might not die.
But this is the only way to get people to
(14:01):
Mars and advance the state of the art. And then I ended up being chief engineer of the rocket, not because I wanted to, but because I couldn't hire anyone who was good. None of the good chief engineers would join, because, it's like, this is too risky, you're going to die. And so I ended up being chief engineer of the rocket, and the first three flights did fail, so it was a bit
(14:23):
of a learning exercise there, and the fourth one fortunately worked. But if the fourth one hadn't worked, I would have had no money left, and it would have been curtains.
Speaker 3 (14:32):
So it was a pretty close thing.
Speaker 1 (14:33):
If the fourth launch of Falcon 1 had not worked, it would have been curtains, and we would have joined the graveyard of prior rocket startups. So my estimate of success was not far off; we made it by the skin of our teeth. And Tesla was happening simultaneously. Two thousand and eight was a rough year, because by mid two thousand and
(14:55):
eight, call it summer two thousand and eight, the third launch of SpaceX had failed, the third failure in a row, the Tesla financing round had failed, and so Tesla was going bankrupt fast.
Speaker 3 (15:07):
It was just, man, this is grim.
Speaker 1 (15:10):
This is going to be a cautionary tale, an exercise in hubris.
Speaker 2 (15:15):
Probably throughout that period a lot of people were saying, Elon is a software guy, why is he working on hardware?
Speaker 4 (15:21):
Why would you, yeah, why would he choose to work on this?
Speaker 1 (15:25):
Ha. So you can look at it, because the press of that time is still online; you can just search it. And they kept calling me the Internet guy. So, Internet guy, aka fool, is attempting to build a rocket company. We got ridiculed quite a lot. And it does sound pretty absurd: internet guy starts rocket
(15:45):
company doesn't sound like a recipe for success, frankly. So I didn't hold it against them. I was like, yeah, admittedly it does sound improbable, and I agree that it's improbable.
But fortunately the fourth launch worked, and NASA awarded us a contract to resupply the space station. And I think that was maybe December twenty-second, or
(16:08):
right before Christmas. Because even the fourth launch working wasn't enough to succeed; we also needed a big contract from NASA to keep us alive. So I got that call from the NASA team, and they literally said, we're awarding you one of the contracts to resupply the space station. And I literally blurted out, I love you guys, which is not normally what they
(16:29):
hear, because it's usually pretty sober. But I was like, man, this is a company saver. And then we closed the Tesla financing round in the last hour of the last day that it was possible, which was six p.m., December twenty-fourth, two thousand and eight. We would have bounced payroll two days after Christmas if that round hadn't closed. So that was a nerve-wracking end of two thousand
(16:51):
and eight, that's for sure.
Speaker 2 (16:52):
I guess, from your PayPal and Zip2 experience, jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields. Most of the people here, I don't think, have even managed a
(17:13):
single person yet; they're just starting their careers. What would you tell the Elon who's never had to do that yet?
Speaker 1 (17:21):
I generally think to try to be as useful as possible. It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people. Say the area under the curve of total utility is how useful you have been to your fellow human beings, times how many people. It's almost like the physics definition of true work. It's incredibly difficult to do that. I think if you aspire to do
(17:42):
true work, your probability of success is much higher. Don't aspire to glory, aspire to work.
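One loose way to write down the work analogy he's drawing here, with the "area under the curve" made explicit; this is a sketch of the analogy, not a formula given in the talk:

```latex
% Physics: work is force integrated over distance.
\[ W = \int F \, dx \]
% By analogy, "true work" is usefulness integrated over the people reached:
\[ U = \int u(p)\, dp \;\approx\; \bar{u} \cdot N \]
% where u(p) is how useful you are to person p, \bar{u} is the average
% usefulness, and N is how many people you reach.
```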
Speaker 4 (17:49):
How can you tell that it's true work? Is it external?
Speaker 2 (17:53):
Is it what happens with other people, or what the product does for people? What is that for you? When you're looking for people to come work for you, what's the salient thing that you look for?
Speaker 3 (18:04):
That's a good question. I guess, in terms of a product,
Speaker 1 (18:07):
you just have to ask: if this thing is successful, how useful will it be, and to how many people? That's what I mean. And then, whether you're CEO or in any other role in a startup, you do whatever it takes to succeed, and just always be smashing your ego, like, internalize responsibility. A major failure mode is when the ego-to-ability ratio is
(18:29):
greater than one.
Speaker 3 (18:30):
If your
Speaker 1 (18:31):
ego-to-ability ratio gets too high, then you're going to basically break the feedback loop to reality. In AI terms, you'll break your RL loop. You don't want to break your RL loop; you want to have a strong RL loop, which means internalizing responsibility and minimizing ego. And you do whatever the task is, no matter whether it's grand or humble. That's why
(18:54):
I actually prefer the term engineering as opposed to research, and I actually don't want to call xAI a lab.
Speaker 3 (19:04):
I just want it to be a company.
Speaker 1 (19:05):
It's like, whatever the simplest, most straightforward, ideally lowest-ego terms are, those are generally a good way to go. You want to close the loop on reality hard. That's a super big deal.
Speaker 2 (19:18):
I think everyone in this room really looks up to everything you've done around being a paragon of first-principles thinking,
Speaker 4 (19:26):
about the stuff you've done.
Speaker 2 (19:27):
How do you actually determine your reality? Because that seems like a pretty big part of it. Other people who have never made anything, non-engineers, sometimes journalists, who've never built anything, will criticize you. But then clearly you have another set of people, builders with very high area under the curve,
(19:50):
who are in your circle. How should people approach that? What has worked for you, and what would you pass on to X, to your children? What do you tell them when you're like: you need to make your way in this world; here's how to construct a reality that is predictive, from first principles.
Speaker 3 (20:06):
The tools of physics are incredibly helpful for understanding and making progress in any field. First principles just means breaking things down to the fundamental axiomatic elements that are most likely to be true, and then reasoning up from there as cogently as possible, as opposed to reasoning by analogy or metaphor. And then there are simple things, like thinking in the limit: if you extrapolate, minimize this
(20:30):
thing or maximize that thing.
Speaker 1 (20:31):
Thinking in the limit is very helpful. I use all the tools of physics; they apply to any field. It's like a superpower, actually. So take an example like rockets. You can ask: how much should a rocket cost? The typical approach people would take is
Speaker 3 (20:49):
they would look
Speaker 1 (20:49):
historically at what rockets have cost and assume that any new rocket must be somewhat similar in price. A first-principles approach would be: you look at the materials the rocket is comprised of, whether that's aluminum, copper, carbon fiber, steel, whatever the case may be, and ask how much that rocket weighs, what the constituent elements are, and
(21:10):
how much they weigh. What is the material price per kilogram of those constituent elements? That sets the actual floor on what a rocket can cost; it can asymptotically approach the cost of the raw materials. And then you realize, oh, actually the raw materials of a rocket are only maybe one or two percent of the historical cost of a rocket, so the manufacturing must
(21:34):
necessarily be very inefficient. If the raw material cost is only one or two percent, you have a first-principles analysis of the potential for cost optimization of a rocket. And that's before you get to reusability.
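A minimal sketch of that costing exercise in code; every mass and price below is an illustrative placeholder, not a figure from the talk (the talk gives only the conclusion that raw materials come to roughly one to two percent of historical rocket cost):

```python
# First-principles cost floor: sum the raw-material cost of the rocket's
# constituent elements and compare with a historical price. All figures
# below are illustrative placeholders.

materials_kg = {"aluminum": 20_000, "steel": 5_000,
                "carbon_fiber": 3_000, "copper": 1_000}   # rough mass budget
price_per_kg = {"aluminum": 3.0, "steel": 1.0,
                "carbon_fiber": 50.0, "copper": 9.0}      # $/kg, commodity-style

raw_floor = sum(kg * price_per_kg[m] for m, kg in materials_kg.items())
historical_price = 60_000_000  # hypothetical conventional launch price, $

print(f"raw-material floor: ${raw_floor:,.0f}")
print(f"share of historical price: {raw_floor / historical_price:.1%}")
# With these placeholders the floor is a tiny fraction of the price; the
# talk's real-world figure was one to two percent. Either way, the gap is
# manufacturing inefficiency -- the first-principles headroom for cost cuts.
```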
To give an AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers. This was at the beginning of last year. We said we needed a hundred thousand H100s to be able to train coherently, and the estimates for how long it would take to complete that were eighteen to twenty-four months. We needed to get it done in six months, or we wouldn't be competitive. So
(22:18):
then, if you break that down, what are the things you need? You need a building, you need power, you need cooling. We didn't have enough time to build a building from scratch, so we had to find an existing building. We found a factory in Memphis that was no longer in use, that used to build Electrolux products. But the input power was fifteen megawatts, and we needed a hundred and fifteen megawatts. So we rented generators and
(22:42):
put the generators on one side of the building. Then we had to have cooling, so we rented about a quarter of the mobile cooling capacity of the US and put the chillers on the other side of the building. But that didn't fully solve the problem, because the power variations during training are very big: power can drop by fifty percent in a hundred milliseconds, which the generators can't keep up with.
(23:02):
So then we added Tesla Megapacks and modified the software in the Megapacks to be able to smooth out the power variation during the training run. And then there were a bunch of networking challenges; if you're trying to make a hundred thousand GPUs train coherently, the networking cabling is very challenging.
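A toy model of the power-smoothing idea: slow-ramping generators track the load while a battery buffer absorbs the fast swings. The ramp limit and load profile are invented for illustration; the only figure from the talk is the fifty-percent drop in about a hundred milliseconds:

```python
# Toy model of smoothing a training cluster's power draw with a battery.
# Generators can only ramp slowly; the battery covers the mismatch.
# All parameters are invented for illustration.

GEN_RAMP_MW_PER_STEP = 2.0   # generator ramp limit per 100 ms step

def smooth(load_mw):
    """Split a spiky load between slow generators and a fast battery."""
    gen = load_mw[0]
    for load in load_mw:
        # Generators move toward the load, limited by their ramp rate.
        delta = max(-GEN_RAMP_MW_PER_STEP,
                    min(GEN_RAMP_MW_PER_STEP, load - gen))
        gen += delta
        battery = load - gen   # + means discharging, - means charging
        yield gen, battery

# Training load dropping 50% in one 100 ms step, then recovering.
load = [100, 100, 50, 50, 100, 100]   # MW per 100 ms step
for step, (gen, batt) in enumerate(smooth(load)):
    print(f"t={step*100:>4} ms  load={load[step]:>5.1f} MW  "
          f"gen={gen:>6.1f} MW  battery={batt:>+6.1f} MW")
```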
Speaker 2 (23:20):
It sounds like, for almost any of those things you mentioned, I could imagine someone telling you very directly: no, you can't have that, you can't have that power, you can't have this. And it sounds like one of the salient pieces of first-principles thinking is actually: let's ask why, let's figure that out, and let's challenge the person across the table. And if I don't get
(23:42):
an answer that I feel good about, I'm not going to let that no stand.
Speaker 4 (23:48):
That feels like
Speaker 2 (23:49):
Something that everyone, if someone were to try to do
what you're doing in hardware, seems to uniquely need this.
Speaker 4 (23:57):
In software, we have lots of fluff.
Speaker 2 (23:59):
And it's like, we can add more CPUs for that, it'll be fine. But in hardware it's just not going to work.
Speaker 3 (24:06):
I think these general principles of first-principles thinking apply to software and hardware, to anything, really.
Speaker 1 (24:12):
I'm just using a hardware example of how we were told something was impossible. But once we broke it down into the constituent elements, we need a building, we need power, we need cooling, we need power smoothing, then we could solve those constituent elements. And then we just ran the networking operation to do all the cabling in four shifts, twenty-four seven. And
(24:33):
I was sleeping in the data center and doing cabling myself, and there were a lot of other issues to solve. Nobody had done a training run with a hundred thousand H100s training coherently last year.
Speaker 3 (24:45):
I mean maybe it's been done this year. I don't know.
Speaker 1 (24:47):
And then we ended up doubling that to two hundred thousand. So now we've got a hundred and fifty thousand H100s, fifty thousand H200s, and thirty thousand GB200s in the Memphis training center, and we're about to bring a hundred and ten thousand GB200s online at a second data center, also in the Memphis area.
Speaker 2 (25:08):
Is it your view that pre-training is still working at larger scale, that the scaling laws still hold, and that whoever wins this race will have basically the biggest, smartest possible model that you could distill?
Speaker 1 (25:21):
There are various other elements that decide competitiveness in large AI. For sure, the talent of the people matters. The scale of the hardware matters, and how well you're able to bring that hardware to bear. You can't just order a whole bunch of GPUs and plug them in; you've got to get
(25:42):
a lot of GPUs and have them training coherently and stably.
Speaker 3 (25:46):
Then it's: what unique access to data do you have?
Speaker 1 (25:49):
I guess distribution matters to some degree as well: how do people get exposed to your AI? Those are critical factors for whether a large foundation model is competitive. As many have said, I think my friend Ilya Sutskever said it, we've run out of human-generated pre-training data. With human-generated data, you run out of tokens pretty fast, certainly of
(26:11):
high-quality tokens. Then you essentially need to create synthetic data and be able to accurately judge the synthetic data that you're creating: is this realistic synthetic data, or is it a hallucination that doesn't actually match reality? So achieving grounding in
(26:31):
reality is tricky. But we are at the stage where more effort is put into synthetic data. Right now we're training Grok 3.5, which has a heavy focus on reasoning.
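In code, the generate-then-judge loop he's describing looks roughly like this; `generate` and `is_grounded` are hypothetical stand-ins for a generator model and a grounding judge, not anything from xAI's actual pipeline:

```python
# Minimal sketch of a synthetic-data loop: generate candidates, then keep
# only those a grounding check accepts. Both functions are placeholders.

def generate(prompt: str) -> str:
    """Stand-in for a model producing a candidate synthetic example."""
    return f"synthetic answer for: {prompt}"

def is_grounded(example: str) -> bool:
    """Stand-in for a verifier that rejects hallucinated examples."""
    return "synthetic answer" in example  # placeholder check

def build_dataset(prompts):
    # Judge each candidate before it enters the training set.
    return [ex for p in prompts if is_grounded(ex := generate(p))]

print(build_dataset(["What is 2+2?", "Capital of France?"]))
```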
Speaker 2 (26:44):
Going back to your physics point: what I've heard for reasoning is that the hard sciences, particularly physics textbooks, are very useful, whereas researchers have told me that social science is totally useless for reasoning.
Speaker 3 (26:58):
Yes, that's probably true.
Speaker 1 (27:01):
So, yeah, something that's going to be very important in the future is combining deep AI in the data center, the supercluster, with robotics. That's things like the Optimus humanoid robot. And yeah, Optimus is awesome.
Speaker 3 (27:18):
There are going to be so
Speaker 1 (27:18):
many humanoid robots, and robots of all sizes and shapes. But my prediction is that there will be more humanoid robots, by far, than all other robots combined, by maybe an order of magnitude. Like, a big difference.
Speaker 4 (27:31):
And is it true that you're planning a robot army of
Speaker 1 (27:34):
the sort. Whether we do it or Tesla does it, because Tesla works closely with xAI. You've seen how many humanoid robot startups there are. I think Jensen was on stage with a massive number of robots, robots from different companies; I think it was like a dozen different humanoid robots. I guess part
(27:57):
of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't want to make Terminator real, you know. So I guess, at least until recent years, I was dragging my feet on AI and humanoid robotics, and then I came to the realization that it's happening
Speaker 3 (28:16):
whether I do it or not. So you've really got two choices:
Speaker 1 (28:21):
you can either be a spectator or a participant, and I guess I'd rather be a participant than a spectator. So now it's pedal to the metal on humanoid robots and digital superintelligence.
Speaker 2 (28:33):
So I guess there's a third thing that everyone has heard you talk a lot about, and that I'm really a big fan of: becoming a multiplanetary species.
Speaker 4 (28:40):
Where does this fit?
Speaker 2 (28:41):
This is not just a ten or twenty year thing; it's maybe a hundred-year thing, a many-generations-for-humanity kind of thing.
Speaker 4 (28:49):
How do you think about it?
Speaker 2 (28:50):
There's AI, obviously, there's embodied robotics, and then there's being a multiplanetary species. Does everything feed into that last point, or what are you driven by right now for the next ten, twenty, one hundred years?
Speaker 1 (29:04):
Jeez, one hundred years, man. I hope civilization is around in one hundred years. If it is around, it's going to look very different from civilization today. I'd predict there will be at least five times as many humanoid robots as there are humans, maybe ten times. One way to look at the progress of civilization is percentage completion of the
(29:25):
Kardashev scale. If you're at Kardashev scale one, you've harnessed all the energy of a planet. In my opinion, we've only harnessed maybe one or two percent of Earth's energy, so we've got a long way to go to be Kardashev scale one. At Kardashev two, you've harnessed all the energy of the Sun, which would be,
(29:46):
I don't know, a billion times more energy than Earth, maybe closer to a trillion. And Kardashev three would be all the energy of a galaxy; we're pretty far from that.
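Those ratios can be sanity-checked with standard textbook values; the constants below are the usual physical figures, not numbers from the talk (his "one or two percent" presumably uses a different baseline than total incident sunlight):

```python
# Back-of-envelope Kardashev ratios using standard textbook values.
solar_output_w   = 3.8e26  # total power output of the Sun (Kardashev II scale)
earth_incident_w = 1.7e17  # sunlight intercepted by Earth (Kardashev I scale)
humanity_use_w   = 2e13    # rough current human primary power consumption

print(f"humanity vs Earth's solar budget: {humanity_use_w / earth_incident_w:.2%}")
print(f"Sun vs Earth's solar budget:      {solar_output_w / earth_incident_w:.1e}x")
# ~0.01% of Earth's incident sunlight, and the Sun offers roughly two billion
# times what Earth intercepts -- consistent with "a billion, maybe closer to
# a trillion" depending on how Earth's energy budget is defined.
```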
So we're at the very early stage of the intelligence big bang, I hope. In terms of being multiplanetary, I think we'll have enough mass transferred
(30:10):
to Mars within roughly thirty years to make Mars self-sustaining, such that Mars can continue to grow and prosper even if the resupply ships from Earth stop coming. And that greatly increases the probable lifespan of civilization, or consciousness and intelligence, both biological and digital. So that's why
(30:32):
I think it's important to become a multiplanet species. And I'm somewhat troubled by the Fermi paradox: why have we not seen any aliens? It could be because intelligence is incredibly rare, and maybe we're the only ones in this galaxy, in which case intelligence and consciousness is just a tiny candle in a vast darkness,
(30:54):
and we should do everything possible to ensure the tiny candle does not go out. Being a multiplanet species, making consciousness multiplanetary, greatly improves the probable lifespan of civilization, and it's the next step before going to other star systems. Once you have at least two planets, you've got a forcing function for the improvement of space travel, and
(31:17):
that ultimately is what will lead to consciousness expanding to the stars.
Speaker 2 (31:23):
It could be that the Fermi paradox dictates that once you get to some level of technology, you destroy yourself. How do we save ourselves? What would you prescribe to a room full of engineers? What can we do to prevent that from happening?
Speaker 1 (31:39):
Yeah, how do we avoid the great filters? One of the great filters would obviously be global thermonuclear war, so we should try to avoid that. And I guess, build benign AI and robots: AI that loves
Speaker 3 (31:52):
humanity, and robots that are helpful.
Speaker 1 (31:56):
Something that I think is extremely important in building AI
is a very rigorous adherence to truth, even if that
truth is politically incorrect. My intuition for what could make
AI very dangerous is if you force AI to believe
things that are not true.
Speaker 2 (32:13):
How do you think about the argument of open for safety versus closed for competitive edge? I think the great thing is you have a competitive model. Many other people also have competitive models, and in that sense we're off of maybe the worst timeline. The one I'd be worried about is where there's a fast takeoff and it's only in one person's hands; that might collapse a lot of things, whereas
(32:37):
now we have choice, which is great.
Speaker 4 (32:39):
How do you think about this?
Speaker 1 (32:41):
I do think there will be several deep intelligences, maybe at least five, maybe as many as ten. I'm not sure that there are going to be hundreds. Maybe it'll be like ten or something like that, of which maybe four will be in the US.
Speaker 3 (33:02):
And I don't think it's
Speaker 1 (33:03):
going to be any one AI that has runaway capability. But yeah, there will be several deep intelligences.
Speaker 2 (33:10):
What will these deep intelligences actually be doing? Will it
be scientific research or trying to hack each other?
Speaker 1 (33:17):
Probably all of the above. Hopefully they will discover new physics, and they're definitely going to invent new technologies. I think we're quite close to digital superintelligence. It may happen this year; if it doesn't happen this year, then next year for sure. Digital superintelligence
(33:39):
defined as smarter than any human at anything.
Speaker 2 (33:41):
So how do we direct that toward superabundance, where we could have robotic labor, cheap energy, intelligence on demand?
Speaker 4 (33:49):
Is that sort of the white pill?
Speaker 3 (33:51):
Like?
Speaker 2 (33:51):
Where do you sit on the spectrum? And are there
tangible things that you would encourage everyone here to be
working on to make that white pill actually reality?
Speaker 3 (34:03):
I think it most likely will be a good outcome.
Speaker 1 (34:07):
I guess I'd agree with Geoff Hinton that maybe it's a ten to twenty percent chance of annihilation. But look on the bright side: that's an eighty to ninety percent probability of a great outcome. So yeah, I can't emphasize this enough: a rigorous adherence to truth is the most important thing for AI safety. And obviously, empathy for humanity and life
(34:29):
as we know it.
Speaker 2 (34:30):
We haven't talked about Neuralink at all yet, but I'm curious. You're working on closing the input and output gap between humans and machines. How critical is that to AGI, ASI? And once that link is made, can we not only read but also write?
Speaker 1 (34:47):
Neuralink is not necessary to solve digital superintelligence; that'll happen before Neuralink is at scale. But what Neuralink can effectively do is address the input and output bandwidth constraints. Especially our output
Speaker 3 (35:04):
bandwidth is very low.
Speaker 1 (35:05):
The sustained output of a human over the course of a day is less than one bit per second. There are eighty-six thousand four hundred seconds in a day, and it's extremely rare for a human to output more than that
Speaker 3 (35:18):
number of symbols per day, certainly for several days in a row.
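The arithmetic behind that figure, as a quick sketch; only the 86,400 seconds comes from the talk, and the words-per-day and bits-per-word values are illustrative assumptions:

```python
# Rough check of the "less than one bit per second sustained" claim.
seconds_per_day = 86_400   # quoted in the talk

words_per_day = 5_000      # assume an unusually prolific day of writing
bits_per_word = 10         # rough information content of English text

rate = words_per_day * bits_per_word / seconds_per_day
print(f"{rate:.2f} bits/s sustained")  # ~0.58 bits/s, under one bit per second
```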
Speaker 1 (35:22):
So with a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth, input being
Speaker 3 (35:30):
writes; you have to do write operations to the brain.
Speaker 1 (35:34):
We now have five humans who have received the read implant, where it's reading signals. You've got people with ALS who are essentially tetraplegic, but they can now communicate with a bandwidth similar to a human with a fully functioning body, and control their computer and phone, which is pretty cool. And then I
(35:58):
think in the next six to twelve months we'll be doing our first implants for vision, where even if somebody is completely blind, we can write directly to the visual cortex. We've had that working in monkeys; actually, I think one of our monkeys has now had a visual implant for three years. At first it'll be relatively
(36:19):
low resolution, but long term you'd have very high resolution and be able to see multispectral wavelengths. You could see infrared, ultraviolet, radar, like a superpower situation. But at some point the cybernetic implants would not simply be correcting things that went wrong, but dramatically augmenting human capabilities, augmenting intelligence and senses
Speaker 3 (36:44):
And bandwidth dramatically.
Speaker 1 (36:46):
And that's going to happen at some point, but digital
superintelligence will happen well before that.
Speaker 3 (36:51):
At least if we have
Speaker 1 (36:53):
a Neuralink, we'll be able to appreciate the AI better.
Speaker 2 (36:58):
I guess one of the limiting reagents on all of your efforts, across all of these different domains, is access
Speaker 4 (37:05):
To the smartest possible people.
Speaker 2 (37:07):
Yes. And simultaneous to that, we have these rocks that can talk and reason, and they're maybe one hundred and thirty IQ now, and they're probably going to be superintelligent soon. How do you reconcile those two things? What's going to happen in five, ten years? And what should the people in this room do to make sure that they're the ones who are creating, instead of maybe ending up below the API line?
Speaker 1 (37:30):
They call it the singularity for a reason: we don't know what's going to happen in the not-that-far future. The percentage of intelligence that is human will be quite small. At some point, the collective sum of human intelligence will be less than one percent of all intelligence. And if things get to Kardashev level two,
(37:51):
we're talking about human intelligence, even assuming a significant increase in human population and massive intelligence augmentation, where everyone has an IQ of a thousand, type of thing; even in that circumstance, collective human intelligence being probably one billionth that of digital intelligence. Anyway, we're the biological
(38:14):
bootloader for digital superintelligence.
Speaker 4 (38:16):
I guess just to end off, it's like, was I a good bootloader? Where do we go? How do we go from here?
Speaker 2 (38:25):
All of this is pretty wild sci-fi stuff that also could be built by the people in this room. But do you have a closing thought for the smartest technical people of this generation right now? What should they be doing? What should they be working on? What should they be thinking about tonight as they go to dinner?
Speaker 1 (38:46):
As I sort of alluded to earlier, I think if you're doing something useful, that's great.
Speaker 3 (38:51):
If you just try to be as useful
Speaker 1 (38:52):
as possible to your fellow human beings, then you're doing something good. I keep harping on this: focus on super-truthful AI. That's the most important thing for AI safety.
Speaker 3 (39:04):
Obviously.
Speaker 1 (39:04):
If anyone's interested in working at xAI, please let us know. We're aiming to make Grok a maximally truth-seeking AI, and I think that's
Speaker 3 (39:13):
A very important thing.
Speaker 1 (39:15):
Hopefully we can understand the nature of the universe. That's really what AI can hopefully tell us.
Speaker 3 (39:21):
Maybe AI can
Speaker 1 (39:21):
maybe tell us where the aliens are, and how did the universe really start?
Speaker 3 (39:26):
How will it end?
Speaker 1 (39:27):
What are the questions that we don't know we should ask? And are we in a simulation, or what level of simulation are we in?
Speaker 2 (39:38):
I think we're going to find out whether we're NPCs. Elon, thank you so much for joining us. Everyone, please give it up for Elon Musk.