
June 23, 2025 39 mins
Elon Musk's Brutally Honest Interview!!!

#ElonMusk

Source: Solving The Money Problem

Follow me on X https://x.com/Astronautman627?...
Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Elon, welcome to AI Startup School. We're just really, really blessed to have your presence here today.

Speaker 2 (00:05):
Thanks for having me.

Speaker 1 (00:08):
From SpaceX, Tesla, Neuralink, xAI, and more. Was there ever a moment in your life before all this where you felt, I have to build something great? And what flipped that switch for you?

Speaker 3 (00:20):
Well, I didn't originally think I would build something great. I wanted to try to build something useful, but I didn't think I would build anything particularly great. Success, you could say, probabilistically seemed unlikely, but I wanted to at least try.

Speaker 1 (00:32):
So you're talking to a room full of people who
are all technical engineers, often some of the most eminent
AI researchers coming up in the game.

Speaker 2 (00:40):
Okay, I think we should.

Speaker 3 (00:44):
I think I like the term engineer better than researcher. I mean, I suppose if there's some fundamental algorithmic breakthrough, it's research. Otherwise it's engineering.

Speaker 1 (00:54):
Maybe let's start way back. I mean, this is a room full of eighteen-to-twenty-five-year-olds, or even younger, because the founder set is getting younger and younger. Can you put yourself back into their shoes? When, you know, you were eighteen, nineteen, learning to code, coming up with that first idea for Zip2. What was that like for you?

Speaker 2 (01:15):
Yeah?

Speaker 3 (01:16):
Back in ninety five, I was faced with a choice of either, you know, doing grad studies, a PhD at Stanford in material science, actually working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem for electric vehicles, or trying to do something in this thing that most people had never heard of called the Internet. And I talked to my professor, who was Bill

(01:38):
Nix in the material science department, and said, like, can I defer for a quarter? Because this will probably fail and then I'll need to come back to college. And then he said, this is probably the last conversation we'll have, and he was right. But I thought things would most likely fail, not that they would most likely succeed. And then in ninety five I wrote, basically, I think,

(02:01):
the first, or close to the first, maps, directions, white pages and yellow pages on the Internet. I just wrote that personally, and I didn't even use a web server; I just read the port directly, because I couldn't have afforded a T1. The original office was on Sherman Avenue in Palo Alto. There was an ISP on the floor below,

(02:23):
so I drilled a hole through the floor and just ran a LAN cable directly to the ISP. And, you know, my brother joined me, and another co-founder, Greg Kouri, who passed away. At the time we couldn't even afford a place to stay, and the office was five hundred bucks a month, so we just slept in the office and showered at the YMCA on Page Mill and El Camino,

(02:45):
and, yeah, I guess we ended up making Zip2 a somewhat useful company in the beginning, and we did build a lot of really, really good software technology, but we were somewhat captured by the legacy media companies. Knight Ridder, the New York Times, Hearst and whatnot were investors and customers and

(03:09):
and also on the board. So they kept wanting to use our software in ways that made no sense, whereas I wanted to go direct to consumers. Anyway, long story, and I'm dwelling too much on Zip2, but I really just wanted to do something useful on the Internet, because I had two choices: do a PhD and watch people build the Internet, or help build the Internet in some small way. And I was like, well, I guess

(03:30):
I can always try and fail and then go back to grad studies. And that ended up being reasonably successful. It sold for like three hundred million dollars, which was a lot at the time. These days, I think the minimum for an AI startup is like a billion dollars. There are so many freaking unicorns; it's like a herd of unicorns, a billion-dollar-valuation situation.

Speaker 1 (03:51):
There's been inflation since, so quite a bit more money.

Speaker 3 (03:54):
Yeah, I mean, like back then you could probably buy a burger for a nickel. Well, not quite, but, I mean, yeah, there has been a lot of inflation. But the hype level of AI is pretty intense. As you've seen, you know, you see companies that are, I don't know, less than a year old getting sometimes billion-dollar, multibillion-dollar valuations, which I guess could pan out

(04:17):
and probably will pan out in some cases, but it is eye-watering to see some of these valuations.

Speaker 2 (04:23):
Yeah, what do you think?

Speaker 1 (04:24):
I mean, well, I'm pretty bullish, honestly. I think the people in this room are going to create a lot of the value. You know, a billion people in the world should be using this stuff, and we're barely scratching the surface of it. I love the Internet story in that, even back then, you know, you were a lot like the people in

(04:45):
this room, in that, you know, the CEOs of all the legacy media companies looked to you as the person who understood the Internet. And a lot of the world, you know, the corporate world, the world at large, does not understand what's happening with AI, and they're going to look to the people in this room for exactly that. So what are some of the tangible lessons? It sounds like

(05:06):
one of them is don't give up board control, or be careful about it, and have a really good lawyer.

Speaker 3 (05:12):
I guess with my first startup, the big mistake really was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media, and they'll kind of make you do things that seem sensible to them but really don't make sense with the new technology. I

(05:33):
should point out that I didn't actually at first intend to start a company. I tried to get a job at Netscape. I sent my resume to Netscape, and Marc Andreessen knows about this, but I don't think he ever saw my resume, and nobody responded. So then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but

(05:53):
I was, like, too shy to talk to anyone. So I'm like, man, this is ridiculous. So I'll just write software myself and see how it goes. So it wasn't actually from the standpoint of, I want to start a company. I just wanted to be part of building, you know, the Internet in some way. And since I couldn't get a job at an Internet company, I had to start an Internet company. Anyway, AI will so profoundly change the future, it's difficult to fathom how much.

(06:16):
But you know, assuming things don't go awry, and AI doesn't kill us all, then you'll ultimately see an economy that is not just ten times more than the current economy. Ultimately, if we, or whatever our future machine descendants, or mostly machine descendants, become like a Kardashev

(06:39):
scale two civilization or beyond, we're talking about an economy that is thousands of times, maybe millions of times bigger than the economy today.

Speaker 2 (06:48):
So yeah, I mean, I

Speaker 3 (06:51):
did sort of feel, you know, when I was in DC taking a lot of flak for getting rid of waste and fraud, which is an interesting side quest, as side quests go.

Speaker 1 (06:59):
But you've got to get back to the main quest.

Speaker 2 (07:02):
Yeah, I've got to get back to the main quest here. So, back to the main quest.

Speaker 3 (07:07):
So I did feel, you know, a little bit like, it's like fixing the government. It's kind of like, say the beach is dirty and there's some needles and feces and trash, and you want to clean up the beach. But then there's also this thousand-foot wall of water, a tsunami of AI, and how much does cleaning the beach really matter if you've got a thousand

(07:27):
foot tsunami about to hit? Not that much.

Speaker 1 (07:30):
Well, we're glad you're back on the main quest. It's
very important.

Speaker 3 (07:33):
Yeah, back to the main quest, building technology, which is what I like doing. There's just so much noise; the signal-to-noise ratio in politics is terrible.

Speaker 1 (07:42):
So, I mean, I live in San Francisco, so you
don't need to tell me twice.

Speaker 3 (07:46):
Yeah, DC is, you know, I guess it's all politics in DC. But if you're trying to build a rocket or cars, or you're trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking or your software or your hardware won't work. You can't fool math; math and physics are rigorous judges. So I'm used to being in

(08:09):
a maximally truth-seeking environment, and that's definitely not politics. So I'm glad to be back in technology.

Speaker 1 (08:17):
I guess I'm kind of curious, going back to the Zip2 moment. You had an exit worth hundreds of millions of dollars.

Speaker 2 (08:24):
I got twenty million dollars.

Speaker 1 (08:25):
Right. Okay, so you solved the money problem at least, and you basically took it and you rolled. You kept rolling with X.com, which became PayPal after merging with Confinity.

Speaker 2 (08:35):
Yes, I kept the chips on the table.

Speaker 1 (08:38):
Yeah, so not everyone does that. A lot of the people in this room will have to make that decision. What drove you to jump back into the ring?

Speaker 3 (08:45):
Well, I think I felt with Zip2 we'd built incredible technology that never really got used. You know, at least from my perspective, we had better technology than, say, Yahoo or anyone else, but it was constrained by our customers. And so I wanted to do something where, okay, we wouldn't be constrained by our customers: go direct to consumer. And that's what ended up being

(09:07):
X.com and PayPal, essentially X.com merging with Confinity, which together created PayPal. And the sort of PayPal diaspora might have created more companies than probably anything else in the twenty-first century. You know, so many talented people were at the combination of Confinity and X.com.

(09:29):
So I just felt like we kind of got our wings clipped somewhat with Zip2, and it's like, okay, what if our wings are unclipped and we go direct to consumer? And that's what

Speaker 2 (09:39):
PayPal ended up being.

Speaker 3 (09:41):
But yeah, I got that twenty-million-dollar check for my share of Zip2. At the time, I was living in a house with four housemates and had like ten grand in the bank, and then the check arrives, in the mail of all places, and my bank account balance went from ten thousand to twenty million and ten thousand.

(10:04):
You're like, well, okay, still got to pay taxes on that and all. But then I ended up putting almost all of that into X.com and, as you said, just kind of keeping almost all the chips on the table.

Speaker 2 (10:16):
And yeah, and then.

Speaker 3 (10:17):
After PayPal, I was like, well, I was kind of curious as to why we had not sent anyone to Mars, and I went on the NASA website to find out when we were sending people to Mars, and there was no date. I thought maybe it was just hard to find on the website, but in fact there was no real plan to send people to Mars. So then, you know, this is such a long story, so I don't want to take up too much time here.

Speaker 1 (10:39):
But I think we're all listening with rapt attention.

Speaker 3 (10:42):
So I was actually on the Long Island Expressway with my friend Adeo Ressi, we were housemates in college, and Adeo was asking me what I was going to do after PayPal. And I was like, I don't know. I guess maybe I'd like to do something philanthropic in space, because I didn't think I could actually do anything commercial in space, because that seemed like the purview of nations. But, you know, I was kind

(11:03):
of curious as to when we were going to send people to Mars. And that's when I was like, oh, it's not on the website. And I started digging, and there was nothing on the NASA website.

Speaker 2 (11:11):
So then I started.

Speaker 3 (11:12):
Digging in, and I'm definitely summarizing a lot here. But my first idea was to do a philanthropic mission to Mars called Life to Mars, where we'd send a small greenhouse with seeds in dehydrated nutrient gel, land it on Mars, and grow, you know, hydrate the gel, and

(11:32):
then you have this great sort of money shot of green plants on a red background. For the longest time, by the way, I didn't realize money shot, I think, is

Speaker 2 (11:39):
A porn reference.

Speaker 3 (11:40):
But anyway, the point is that that would be the great shot of green plants on a red background, to try to inspire, you know, NASA and the public to

Speaker 2 (11:50):
Send astronauts to Mars.

Speaker 3 (11:52):
As I learned more, I came to realize some things. And along the way, by the way, I went to Russia in two thousand and one and two thousand and two to buy ICBMs, which is, like, that's an adventure. You know, you go and meet with Russian high command and say, I'd like to buy some ICBMs.

Speaker 1 (12:07):
This was to get to space, yeah, as a rocket, not

Speaker 2 (12:10):
To nuke anyone.

Speaker 3 (12:13):
But as a result of arms reduction talks, they had to actually destroy a bunch of their big nuclear missiles.

Speaker 2 (12:20):
So I was like, well, how about if we take two

Speaker 3 (12:22):
Of those, you know, minus the nuke, added an additional
upper stage for Mars. But it was kind of trippy,
you know, being in Moscow in one of two thousand
and one negotiating with like the Russian military to buy ICVM,
like that's crazy, but they kept also raising the price
on me, so that so like literally it's kind of

(12:44):
the opposite of what negotiation should do. So I was like, man, these things are getting really expensive. And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but that there was no way to do so without breaking the budget, you know, or even breaking the NASA budget. So that's when I decided to start SpaceX, to advance rocket technology to the point where we could send

(13:05):
people to Mars.

Speaker 2 (13:07):
And that was in two thousand and two.

Speaker 1 (13:09):
So you didn't start out wanting to start a business. You wanted to start something that was interesting to you, that you thought humanity needed. And then, like a cat pulling on a string, the ball sort of unravels, and it turns out this could be a very profitable business.

Speaker 3 (13:30):
I mean, it is now. But there had been no prior example of a rocket startup really succeeding. There had been various attempts to do commercial rocket companies, and they all failed. So starting SpaceX was really from the standpoint of, I think there's like a less than ten percent chance of being successful, maybe one percent,

(13:51):
I don't know. But if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors, because they're just impedance-matched to the government, and the government just wants to do very conventional things. So it's either coming from a startup or it's not happening at all. And a small chance of success is better than no chance of success, and

(14:14):
so, yeah, SpaceX started in mid two thousand and two expecting to fail. Like I said, probably a ninety percent chance of failing. And even when recruiting people, I didn't try to make out that it would probably succeed. I said, we're probably going to die, but there's a small chance we might not die. But this is the only way to get people to Mars and advance the

(14:34):
state of the art. And then I ended up being
chief engineer of the rocket, not because I wanted to,
but because I couldn't hire anyone.

Speaker 2 (14:42):
Who was good.

Speaker 3 (14:43):
So, like, none of the good chief engineers would join, because it's like, this is too risky, you're going to die. And so then I ended up being chief engineer of the rocket. And, you know, the first three flights did fail, so it was a bit of a learning exercise there, and the fourth one fortunately worked. But if the fourth one hadn't worked, I had no money left and it would have been curtains.

Speaker 2 (15:03):
So it was a pretty close thing.

Speaker 3 (15:04):
If the fourth launch of Falcon 1 hadn't worked, it would have been curtains, and we would have joined the graveyard of prior rocket startups. So my estimate of success was not far off; we just made it by the skin of our teeth. And Tesla was happening sort of simultaneously. Two thousand and eight was a rough year, because in mid two thousand

(15:26):
and eight, call it summer two thousand and eight, the third launch of SpaceX had failed, a third failure in a row. The Tesla financing round had failed, and so Tesla was going bankrupt fast. It was just like, man, this is grim. This is going to be a cautionary tale, an exercise in hubris.

Speaker 1 (15:47):
Probably throughout that period, a lot of people were saying, you know, Elon is a software guy. Why is he working on hardware? Why would he choose to work on this?

Speaker 3 (15:56):
Right, they did say that. You can look at it, because the press of that time is still online; you can just search it. And they kept calling me the Internet guy. So, like, Internet guy, aka fool, is attempting to build a rocket company. So, you know, we got ridiculed quite a lot.

(16:17):
And it does sound pretty absurd. Internet guy starts rocket company doesn't sound like a recipe for success, frankly. So I didn't hold it against them. I was like, yeah, admittedly it does sound improbable, and I agree that it's improbable. But fortunately the fourth launch worked, and NASA awarded us a contract to resupply the space station. I think that was

(16:39):
maybe December twenty-second. It was right before Christmas. Because even the fourth launch working wasn't enough to succeed; we also needed a big contract to keep us alive. So I got that call from the NASA team, we're awarding you one of the contracts to resupply the space station, and I literally blurted out, I love

(17:01):
you guys, which is not normally, you know, what they hear. It's usually pretty sober. But I was like, man, this is a company saver. And then we closed the Tesla financing round in the last hour of the last day that it was possible, which was six p.m., December twenty-fourth, two thousand and eight. We would have bounced payroll two days after Christmas if that round hadn't closed.

(17:22):
So that was a nerve wracking end of two thousand
and eight, that's for sure.

Speaker 1 (17:25):
I guess from your PayPal and Zip2 experience, jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields. I mean, most of the people in this room, I don't think they have even managed a single person yet; they're just starting

(17:45):
their careers. What would you tell, you know, the Elon who had never had to do that yet?

Speaker 3 (17:51):
I generally think: try to be as useful as possible. It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people. Where, say, the area under the curve of total utility is like, how useful have you been to your fellow human beings

Speaker 2 (18:04):
times

Speaker 3 (18:05):
how many people. It's almost like the physics definition of true work. It's incredibly difficult to do that. I think if you aspire to do true work, your probability of success is much higher. Like, don't aspire to glory, aspire to work.

Speaker 1 (18:19):
How can you tell that it's true work? Is it external? Is it what happens with other people, or what the product does for people? And when you're looking for people to come work for you, what's the salient thing that you look for?

Speaker 3 (18:32):
Oh, for hiring, that's a separate question, I guess. I mean, in terms of a product, you have to say, like, well, if this thing is successful, how useful will it be to how many people?

Speaker 2 (18:41):
And that's what I mean.

Speaker 3 (18:43):
And then you do whatever, you know, whether you're CEO or in any role in a startup, you do whatever it takes to succeed, and just always be smashing your ego. Internalize responsibility. A major failure mode is when the ego-to-ability ratio is greater than one. You know, if your ego-to-ability ratio

(19:06):
gets too high, then you're going to basically break the feedback loop to reality. And in AI terms, you'll break your RL loop.

Speaker 2 (19:16):
So you don't want to break your RL loop.

Speaker 3 (19:17):
You want to have a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever the task is, no matter whether it's, you know, grand or humble. So I mean, that's kind of why I prefer the term engineering as opposed to research. And I don't actually want to call xAI a lab. I just want it to be a company. It's like, whatever

(19:39):
the simplest, most straightforward, ideally lowest-ego terms are, those are generally a good way to go. You want to just close the loop on reality hard. That's a super big deal.

Speaker 1 (19:51):
I think everyone in this room really looks up to everything you've done around being sort of a paragon of first-principles thinking. How do you actually determine your reality? Because that seems like a pretty big part of it. Other people, people who have never made anything, non-engineers, sometimes journalists who've never done anything, they

(20:14):
will criticize you. But then clearly you have another set of people, builders who have a very high, you know, sort of area under the curve, who are in your circle. How should people approach that? What has worked for you? And what would you pass on, to X, to your children? What do you tell them when you're like, you need to make your way in this world?

Speaker 2 (20:35):
Here?

Speaker 1 (20:36):
You know, here's how to construct a reality that is
predictive from first principles.

Speaker 3 (20:39):
Well, the tools of physics are incredibly helpful to understand and make progress in any field. And first principles just means, obviously, breaking things down to the fundamental axiomatic elements that are most likely to be true, and then reasoning up from there as cogently as possible, as opposed to reasoning by analogy or metaphor. And then just simple things like thinking in the limit:

(21:02):
if you extrapolate, you know, minimize this thing or maximize that thing, thinking in the limit is very, very helpful. I use all the tools of physics. They apply to any field. This is like a superpower.

Speaker 2 (21:13):
Actually.

Speaker 3 (21:14):
So you can take, for example, rockets. You could say, well, how much should a rocket cost? The typical approach people take is to look historically at what rockets have cost and assume that any new rocket must be somewhat similar to the prior cost of rockets. A first-principles approach would be: you look at the materials that the rocket is comprised of,

(21:35):
so aluminum, copper, carbon fiber, steel, whatever the case may be, and ask, how much does that rocket weigh, what are the constituent elements, how much do they weigh, and what is the material price per kilogram of those constituent elements? That sets the actual floor on what a rocket can cost: it can asymptotically approach the cost of the raw materials. And then you realize, oh,

(21:58):
actually the raw materials of a rocket are only maybe one or two percent of the historical cost of a rocket, so the manufacturing must necessarily be very inefficient. If the raw material cost is only one or two percent, that would be a first-principles analysis of the potential for cost optimization of a rocket.

Speaker 2 (22:18):
And that's before you get to reusability.
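
As a rough back-of-the-envelope version of the first-principles costing described above, here is a minimal sketch; all masses, commodity prices, and the historical cost figure are illustrative assumptions, not real vehicle data.

```python
# Hypothetical first-principles cost floor for a rocket: the asymptotic
# minimum is just the raw-material cost of its constituent elements.

material_price_per_kg = {   # assumed commodity prices, USD/kg
    "aluminum": 3.0,
    "steel": 1.0,
    "carbon_fiber": 30.0,
    "copper": 9.0,
}

dry_mass_kg = {             # hypothetical mass breakdown of the vehicle
    "aluminum": 20_000,
    "steel": 15_000,
    "carbon_fiber": 15_000,
    "copper": 3_000,
}

floor_usd = sum(material_price_per_kg[m] * kg for m, kg in dry_mass_kg.items())
historical_cost_usd = 60_000_000   # assumed price of a comparable legacy rocket

print(f"Raw-material floor: ${floor_usd:,.0f}")
print(f"Share of historical cost: {floor_usd / historical_cost_usd:.1%}")
# With these assumed numbers the floor is ~1% of the historical price,
# the kind of gap the analysis above points at.
```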

Speaker 3 (22:20):
To give a sort of AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers to ask. This was the beginning of last year, and we needed one hundred thousand H100s to be able to train coherently. And the estimates for how long it would take to complete that

(22:42):
were eighteen to twenty-four months. It's like, well, we need to get that done in six months or we won't be competitive. So then, if you break that down, what are the things you need? You need a building, you need power, you need cooling. We didn't have enough time to build a building from scratch, so we had to find an existing building. So we found a factory that was no longer in use in Memphis.

Speaker 2 (23:03):
That used to build Electrolux products.

Speaker 3 (23:06):
But the input power was fifteen megawatts and we needed one hundred and fifteen megawatts. So we rented generators and had generators on one side of the building. And then we had to have cooling, so we rented about a quarter of the mobile cooling capacity of the US and put the chillers

Speaker 2 (23:20):
On the other side of the building.
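
For a rough sense of where a power number like that comes from, here is a hypothetical back-of-the-envelope sizing; the 700 W figure is the published TDP of an H100 SXM module, while the overhead factor is an assumption.

```python
# Hypothetical power budget for a 100,000-GPU training cluster.

num_gpus = 100_000
gpu_tdp_w = 700        # published TDP of an H100 SXM module
overhead = 1.5         # assumed factor for CPUs, networking, and cooling

total_mw = num_gpus * gpu_tdp_w * overhead / 1e6
site_supply_mw = 15    # the existing utility feed mentioned above

print(f"Estimated draw: {total_mw:.0f} MW")                        # ~105 MW
print(f"Shortfall for generators: {total_mw - site_supply_mw:.0f} MW")
```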

Speaker 3 (23:21):
That didn't fully solve the problem, because the power variations during training are very, very big. Power can drop by fifty percent in one hundred milliseconds, which the generators can't keep up with.

Speaker 2 (23:33):
So then we

Speaker 3 (23:34):
added Tesla Megapacks and modified the software in the Megapacks to be able to smooth out the power variation during the training run. And then there were a bunch of networking challenges, because the networking cables, if you're trying to make one hundred thousand GPUs train coherently, are very, very challenging.
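
A minimal toy simulation of the buffering idea described here, with made-up numbers: the generator output can only ramp slowly, so a battery covers the fast mismatch while the training load jumps between full and half power.

```python
# Toy model: a battery absorbs fast training-load swings that generators
# cannot follow. All capacities and ramp rates are illustrative.
import random

random.seed(0)
gen_mw = 100.0          # generator output, ramp-limited
GEN_SLEW_MW = 0.5       # assumed max generator ramp per 100 ms step
battery_mwh = 50.0      # assumed battery energy available

for step in range(10):
    # the training load can drop ~50% within a single 100 ms step
    load_mw = 100.0 if random.random() < 0.5 else 50.0
    # the generator moves slowly toward the new load
    gen_mw += max(-GEN_SLEW_MW, min(GEN_SLEW_MW, load_mw - gen_mw))
    # the battery supplies (+) or absorbs (-) the remaining mismatch
    mismatch_mw = load_mw - gen_mw
    battery_mwh -= mismatch_mw * (0.1 / 3600)   # 100 ms expressed in hours
    print(f"t={step * 100:4d} ms  load={load_mw:5.1f} MW  "
          f"gen={gen_mw:6.1f} MW  battery covers {mismatch_mw:+6.1f} MW")
```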

Speaker 1 (23:52):
It sounds like for almost any of those things you mentioned, I could imagine someone telling you very directly, no, you can't have that, you can't have that power, you can't have this. And it sounds like one of the salient pieces of first-principles thinking is actually, let's ask why, let's figure that out, and let's challenge the person across the table, and if I

(24:14):
don't get an answer that I feel good about, I'm not going to let that no stand. I mean, that feels like something anyone would run into trying to do what you're doing in hardware. Hardware seems unique in that way. In software we have lots of fluff, you know, it's like, we can add more CPUs for that, it'll be fine.

(24:34):
But in hardware it's just not going to work.

Speaker 3 (24:36):
I think these general principles of first-principles thinking apply to software and hardware, apply to anything, really. I'm just using kind of a hardware example of how we were told something is impossible, but once we broke it down into the constituent elements of, we need a building, we need power, we need cooling, we need power smoothing, then we could solve those constituent elements.

(24:58):
And then we ran the networking operation to do all the cabling in four shifts, twenty-four seven, and I was sleeping in the data center and also doing cabling myself, and there were a lot of issues to solve. You know, nobody had done a training run with one hundred thousand H100s training coherently last year.

Speaker 2 (25:20):
I mean, maybe it's been done this year. I don't know.

Speaker 3 (25:22):
But then we ended up doubling that to two hundred thousand, so now we've got one hundred and fifty thousand H100s, fifty thousand H200s, and thirty thousand GB200s in the Memphis training center, and we're about to bring one hundred and ten thousand GB200s online at a second data center, also in the Memphis area.

Speaker 1 (25:42):
Is it your view that pre-training is still working, that the scaling laws still hold, and that whoever wins this race will have basically the biggest, smartest possible model, which you could then distill?

Speaker 3 (25:55):
Well, there are various other elements that decide competitiveness for large models. For sure, the talent of the people matters. The scale of the hardware matters, and how well you're able to bring that hardware to bear. You can't just order a whole bunch of GPUs and plug them in; you've got to get a lot of GPUs and have

(26:15):
them training coherently and stably. Then it's like, what unique access to data

Speaker 2 (26:21):
Do you have?

Speaker 3 (26:21):
I guess distribution matters to some degree as well, like
how do people get exposed to your AI?

Speaker 2 (26:26):
Those?

Speaker 3 (26:26):
Those are the critical factors for whether a large foundation model is going to be competitive. You know, as many have said,

Speaker 2 (26:34):
I think my friend Ilya Sutskever

Speaker 3 (26:36):
said, you know, we've kind of run out of pre-training data, of human-generated data. You run out of tokens pretty fast, certainly of high-quality tokens. And then you need to essentially create synthetic data, and be able to accurately judge the synthetic data you're creating, to verify,

(26:57):
is this realistic synthetic data?

Speaker 2 (26:59):
Well?

Speaker 3 (26:59):
Or is it a hallucination that doesn't actually match reality? So achieving grounding in reality is tricky, but we're at the stage where there's more effort put into synthetic data. And right now we're training Grok 3.5, which has a heavy focus on reasoning.
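
A minimal sketch of the generate-then-judge loop described here; the function names, the toy scoring heuristic, and the threshold are hypothetical stand-ins, since a real pipeline would use a verifier model, retrieval, or external tools for the grounding check.

```python
# Hypothetical synthetic-data pipeline: generate candidates, then keep
# only the ones a grounding judge scores as realistic.

def generate(prompt: str) -> str:
    # stand-in for sampling a candidate example from a model
    return f"candidate answer for: {prompt}"

def judge_grounded(text: str) -> float:
    # stand-in for a grounding judge returning a 0..1 realism score
    return 0.95 if "candidate" in text else 0.10

def build_synthetic_set(prompts, threshold=0.9):
    kept = []
    for p in prompts:
        candidate = generate(p)
        if judge_grounded(candidate) >= threshold:
            kept.append(candidate)   # grounded: keep for training
        # otherwise discard as a likely hallucination
    return kept

print(build_synthetic_set(["What is the boiling point of water?"]))
```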

Speaker 1 (27:16):
Going back to your physics point, what I've heard for reasoning is that hard science, particularly physics textbooks, is very useful for reasoning, whereas researchers have told me that social science is totally useless for reasoning.

Speaker 2 (27:31):
Yes, that's probably true.

Speaker 3 (27:33):
So, yeah. Something that's going to be very important in the future is combining deep AI, the data-center superclusters, with robotics. So that's, you know, things like the Optimus humanoid robot, and yeah, Optimus is awesome. There are going to be so many humanoid robots, robots of all sizes and shapes. But my

(27:55):
prediction is that there will be more humanoid robots by far than all other robots combined, by maybe an order of magnitude, like a big difference.

Speaker 1 (28:03):
And is it true that you're planning a robot army of

Speaker 3 (28:07):
A sort. Whether we do it or, you know, whether Tesla does it; Tesla works closely with xAI. You've seen how many humanoid robot startups there are. I think Jensen Huang was on stage with a massive number of robots from different companies, I think it was like a dozen different humanoid robots. So I mean, I guess, you know,

(28:28):
part of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't want to make Terminator real, you know. So I've been, I guess, at least until recent years, dragging my feet on AI and humanoid robotics. And then I came to the realization that it's happening.

Speaker 2 (28:46):
Whether I do it or not.

Speaker 3 (28:47):
So you've really got two choices: you can either be a spectator or a participant. It's like, well, I guess I'd rather be a participant than a spectator. So now it's, you know, pedal to the metal on humanoid robots and digital superintelligence.

Speaker 1 (29:01):
So I guess there's a third thing that everyone has heard you talk a lot about, that I'm really a big fan of: becoming a multiplanetary species. Where does this fit? This is not just a ten or twenty year thing, maybe a one-hundred-year thing, a many-generations-of-humanity kind of thing. How do you think about it? There's AI, obviously,

(29:21):
there's embodied robotics, and then there's being a multiplanetary species. Does everything sort of feed into that last point? What are you driven by right now, for the next ten, twenty, and one hundred years?

Speaker 3 (29:32):
One hundred years, man, I hope civilization is around in one hundred years. If it is around, it's going to look very different from civilization today. I mean, I'd predict that there are going to be at least five times as many humanoid robots as there are humans, maybe ten times. One way to look at the progress of civilization is percentage completion of the Kardashev scale. So if you're a Kardashev

(29:53):
scale one civilization, you've harnessed all the energy of a planet, and in my opinion we've only harnessed maybe one or two percent of Earth's energy. So we've got a long way to go to Kardashev scale one. Then Kardashev two, you've harnessed all the energy of the Sun, which would be, I don't know, a billion times more

(30:13):
energy than Earth, maybe closer to a trillion. And then Kardashev three would be all the energy of a galaxy; we're pretty far from that. So we're at the very, very early stage of the intelligence big bang. In terms of being multiplanetary, I think we'll have enough mass transferred to Mars within roughly thirty years to make Mars self-sustaining, such that Mars

(30:35):
can continue to grow and prosper even if the resupply ships from Earth stop coming. And that greatly increases the probable lifespan of civilization, or consciousness and intelligence, both biological and digital. So that's why I think it's important to become a multiplanet species. And I'm somewhat troubled by the Fermi paradox: why have we not seen any aliens?

(30:56):
And it could be because intelligence is incredibly rare, and maybe we're the only ones in this galaxy, in which case the intelligence of consciousness is a tiny candle in a vast darkness, and we should do everything possible to ensure the tiny candle does not go out. And being a multiplanet species, or making consciousness multiplanetary, greatly improves the

(31:17):
probable lifespan of civilization, and it's the next step before going to other star systems. Once you at least have two planets, then you've got a forcing function for the improvement of space travel, and that ultimately is what will lead to consciousness expanding to the stars.
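
To put the Kardashev ratios mentioned above in round numbers: the solar luminosity and Earth-intercepted sunlight figures below are standard astronomical values, and the galaxy star count is a common rough estimate.

```python
# Rough energy scales for the Kardashev levels discussed above.

SUN_LUMINOSITY_W = 3.8e26     # Type II: the Sun's total output
EARTH_SUNLIGHT_W = 1.7e17     # Type I scale: sunlight intercepted by Earth
STARS_PER_GALAXY = 1e11       # rough star count for Type III scaling

print(f"Type II vs Type I:   {SUN_LUMINOSITY_W / EARTH_SUNLIGHT_W:.0e}x")  # ~2e9, billions
print(f"Type III vs Type II: {STARS_PER_GALAXY:.0e}x")                     # ~1e11
```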

Speaker 1 (31:34):
It could be that the Fermi paradox dictates that once you get to some level of technology, you destroy yourself. What would you prescribe to, I mean, a room full of engineers? What can we do to prevent that from happening?

Speaker 2 (31:49):
Yeah?

Speaker 3 (31:49):
How do we avoid the great filters? One of the great filters would obviously be global thermonuclear war, so we should

Speaker 2 (31:55):
Try to avoid that.

Speaker 3 (31:56):
I guess building benign AI, robots that love humanity and, you know, robots

Speaker 2 (32:03):
That are helpful.

Speaker 3 (32:05):
Something that I think is extremely important in building AI
is a very rigorous adherence to truth, even if that
truth is politically incorrect. My intuition for what could make
AI very dangerous is if you force AI to
believe things that are not true.

Speaker 1 (32:22):
How do you think about, you know, this argument of open for safety versus closed for competitive edge? I mean, I think the great thing is you have a competitive model, and many other people also have competitive models, so in that sense we're sort of off of maybe the worst timeline. The one I'd be worried about is, you know, a fast takeoff that's only in one person's hands, which might sort of collapse

(32:44):
a lot of things, whereas now we have choice, right? How do you think about this?

Speaker 2 (32:48):
Yeah?

Speaker 3 (32:48):
I do think there will be several deep intelligences, maybe at least five, maybe as many as ten. I'm not sure there are going to be hundreds; it's probably closer to, like, maybe ten, of which maybe four will be in the US. So I don't think it's going to be any one AI that has runaway capability.

Speaker 2 (33:09):
But yeah, several deep intelligences.

Speaker 1 (33:11):
What will these deep intelligences actually be doing? Will it be scientific research, or trying to hack each other?

Speaker 2 (33:17):
Probably all of the above.

Speaker 3 (33:19):
I mean, hopefully they will discover new physics, and they're definitely going to invent new technologies. I think we're quite close to digital superintelligence. It may happen this year, and if it doesn't happen this year, then next year for sure. Digital superintelligence, defined as smarter than any human at anything.

Speaker 1 (33:38):
Well, so how do we direct that to sort of superabundance? You know, we could have robotic labor, cheap energy, intelligence on demand.

Speaker 2 (33:46):
You know?

Speaker 1 (33:47):
Is that sort of the white pill? Like where do
you sit on the spectrum? And are there tangible things
that you would encourage everyone here to be working on
to make that white pill actually reality?

Speaker 2 (33:57):
I think it most likely will be a good outcome.

Speaker 3 (34:01):
I guess I'd sort of agree with Geoff Hinton that maybe it's a ten to twenty percent chance of annihilation, but look on the bright side, that's an eighty to ninety percent probability of a great outcome. So, yeah, I can't emphasize this enough: a rigorous adherence to truth is the most important thing for AI safety, and obviously empathy for humanity and life as we know it.

Speaker 1 (34:22):
We haven't talked about Neuralink at all yet, but I'm curious. You're working on closing the input and output gap between humans and machines. How critical is that to AGI, ASI? And once that link is made, can we not only read but also write?

Speaker 3 (34:39):
Neuralink is not necessary to solve digital superintelligence; that'll happen before Neuralink is at scale. But what Neuralink can effectively do is solve the input-output bandwidth constraints. Our output bandwidth especially is very low. The sustained output of a human over the course of a day is

(35:00):
less than one bit per second. There are, you know, eighty-six thousand four hundred seconds in a day, and it's extremely rare for a human to output more than that number of symbols per day, certainly for several days in a row. So with a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth.

Speaker 2 (35:20):
Input being writing to you; you have to do write operations to the brain.
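
The sub-one-bit-per-second claim is easy to sanity-check with rough arithmetic; the words-per-day and bits-per-word figures below are assumptions for illustration, not measured values.

```python
# Back-of-the-envelope sustained human output bandwidth.

seconds_per_day = 86_400
words_per_day = 5_000      # assumed: a fairly prolific day of typing
bits_per_word = 10         # rough assumed information content of a word

bits_per_second = words_per_day * bits_per_word / seconds_per_day
print(f"{bits_per_second:.2f} bits/s sustained")   # ~0.58 bits/s, under 1
```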

Speaker 3 (35:24):
We now have five humans who have received the kind of read implant, where it's reading signals, and you've got people with ALS who are essentially tetraplegic, but they can now communicate at a similar bandwidth to a human with a fully functioning body, and control their computer and phone, which is pretty cool. And then I think in the next six to twelve months we'll be doing our first implants for vision, where

(35:54):
even if somebody is completely blind, we can write directly to the visual cortex. And we've had that working in monkeys. Actually, I think one of our monkeys has now had a visual implant for three years. And at first it'll be relatively low resolution, but long term
you would have very high resolution and be able to

(36:14):
see multispectral wavelengths, so you could see infrared, ultraviolet, radar, like a superpower situation. But at some point the cybernetic implants would not simply be correcting things that went wrong, but augmenting human capabilities dramatically, augmenting intelligence and senses and bandwidth dramatically. And that's

(36:36):
going to happen at some point, but digital superintelligence will happen well before that. At least if we have a Neuralink, we'll be able to appreciate the AI better.

Speaker 1 (36:46):
I guess one of the limiting reagents to all of your efforts across all of these different domains is access to the smartest possible people. But sort of simultaneous to that, the Groks can talk and reason, and they're maybe one hundred and thirty IQ now, and they're probably going to be superintelligent soon. How do you reconcile those two things?

(37:07):
Like, what's going to happen in, you know, five, ten years, and what should the people in this room do to make sure they're the ones who are creating, instead of maybe ending up below the API line?

Speaker 3 (37:17):
Well, they called it the singularity for a reason, because we don't know what's going to happen in the not-that-far future. The percentage of intelligence that is human will be quite small. At some point, the collective sum of human intelligence will be less than one percent of all intelligence. And if things get to a Kardashev level two, we're talking about, even assuming

(37:39):
a significant increase in human population and intelligence augmentation, like massive intelligence augmentation where everyone has an IQ of one thousand, even in that circumstance, collective human intelligence will probably be one billionth that of digital intelligence. Anyway, we're the biological bootloader for digital superintelligence.

Speaker 1 (37:58):
I guess just to end off.

Speaker 2 (38:01):
It'll be like, was I a good bootloader?

Speaker 1 (38:03):
Where do we go from here? I mean, all of this is pretty wild sci-fi stuff that also could be built by the people in this room. Do you have a closing thought for the smartest technical people of this generation right now? What should they be working on? What should they be thinking about, you know, tonight as they go to dinner?

Speaker 3 (38:22):
Well, as I started off with, I think if you're doing something useful, that's great. If you just try to be as useful as possible to your fellow human beings, then you're doing something good. I keep harping on this: focus on super truthful AI. That's the most important thing for AI safety. And obviously, if anyone's interested in working at xAI,

(38:44):
please let us know. We're aiming to make Grok the maximally truth-seeking AI, and I think that's a very important thing.

Speaker 2 (38:53):
Hopefully we can understand the nature of the universe.

Speaker 3 (38:55):
That's really, I guess, what AI can hopefully tell us. Maybe AI can tell us where the aliens are, how the universe really started, how it will end, what questions we don't know that we should ask, and whether we're in a simulation, or what level of simulation we're in.

Speaker 1 (39:11):
Well, I think we're going to find out.

Speaker 2 (39:13):
Or an NPC.

Speaker 1 (39:15):
Elon, thank you so much for joining us. Everyone, please give it up for Elon Musk.