Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, Podcasts, Radio News.
Speaker 2 (00:07):
We'd like to welcome Nvidia CEO Jensen Huang to
the program, and Jensen, it's been an astonishingly busy day
for you in Washington, DC, so I'm grateful for your time.
You're sold out of Blackwell, that's what you said, but
also that five-hundred-billion-dollar forecast, which covers Blackwell
plus Rubin, has room to grow. How do those fit together?
Speaker 1 (00:34):
I said sales are off the charts for Blackwell, and
Nvidia GPUs in the cloud are sold out. We've got
plenty of Blackwells to sell you. We have lots of
Blackwells coming, we're making a lot of Blackwells, and we
have a bunch of Vera Rubins coming, and so business
(00:57):
is very, very strong. But we've planned our supply
chain incredibly well. We have the largest supply chain in
the world. Our partners TSMC, our memory partners SK Hynix, Micron,
Samsung are doing a fantastic job supporting us. And all
of our systems partners, Foxconn and Quanta and Wistron,
(01:17):
our packaging partners, everybody's doing a fantastic job supporting us.
And we've done a good job planning for a very
very strong year, and we've done a good job planning
for Vera Rubin. So sales are off the charts,
Nvidia GPUs in the cloud are sold out, but we've
got a bunch of Blackwells to sell.
Speaker 2 (01:36):
Jensen, what's the road ahead for Vera Rubin? It's one
of the most common questions we get for you:
how will that ramp go relative to what we saw
with the Blackwell generation?
Speaker 1 (01:49):
Well, the silicon for Vera Rubin, seven different chips, is
back in our labs, and the bring-up is happening
across engineering teams. Probably twenty thousand people
are working on bringing up Vera Rubin, from silicon to systems,
to software to algorithms. People are working around the clock
(02:10):
and this bring up is going beautifully. We're on track
to deliver Vera Rubin about Q three timeframe of next year,
continuing our once-a-year cycle. Vera Rubin is already
assured to be a huge success. Everybody's incredibly excited about it. Can't
wait to show everybody. And then one last thing is
(02:34):
that the rack architecture, the rack-scale architecture, is completely revolutionary.
It includes a scale-up switch called NVLink.
NVLink 72, our fifth generation, is the only
one of its kind in the world. This rack architecture,
which is incredibly complex, started with Grace Blackwell, then Grace
(02:55):
Blackwell Ultra. The transition to Grace Blackwell Ultra was
incredibly seamless. The same rack-scale architecture is going to be
used for Vera Rubin, and so the supply chain is
used to this complexity that we worked through with the Grace
Blackwell transition. We're now running incredibly smoothly, and so I
(03:18):
think Vera Rubin is going to be just really smooth,
and we're gonna ramp it really hard.
Speaker 2 (03:22):
Jensen, I tried to go through what the CFO Colette
Kress said about China in the quarter gone. It seemed
like there were no meaningful H20 sales because
the demand wasn't there, even if you were permitted to
sell H20. And then in the current period and
going forward, Nvidia seems committed to working with both
the United States and China to sell what Colette called
(03:47):
more competitive compute. Where do we stand with that, and
could you just clarify what Colette was talking about and
the current state of play for China?
Speaker 1 (03:59):
The most important thing she said is that we've said
for some time now our forecast for China is zero.
All of our forecast guidance shows zero to start.
That's the most important thing that she said.
She also said that effectively, China is a very important
market to us. It's very important to the United States,
(04:21):
it's very important to China. We would love the opportunity
to be able to re engage the Chinese market with
excellent products that we deliver and to be able to
compete globally. The Chinese market is very large this year.
My guess is probably about fifty billion dollars. It's great
for the American people that we're able to compete in
(04:44):
the Chinese market. It's great for the China market that
we're able to provide Nvidia's technology to them. It's great
for the rest of the world as Chinese software companies
and Chinese open source models leave China and are used
all over the world. And so it would be fantastic
if we're able to participate in the China market, but for now, we
(05:07):
should just assume our Nvidia forecast for the China market is zero.
We're going to continue to engage the US government, continue
to engage the China government to advise them and to
encourage them to allow us to go back and compete
in the open market. And so until then we should
assume zero.
Speaker 2 (05:27):
During the call, the US Commerce Department issued a statement
saying that you are now permitted to export up to
thirty-five thousand Blackwell chips each to both Saudi Arabia's Humain
and to the UAE through G42. But there
are some requirements where the US has a view, in
particular around controls preventing tech transfer to China through
(05:51):
the Middle East. What can you tell us about your
understanding of what the US government's asking of you?
Speaker 1 (05:55):
That element
has been around for a long time: it is to prevent diversion.
Of course, over the years, people have speculated about diversion.
We've chased down every single concern, and we've repeatedly
(06:20):
tested and sampled data centers around the world and
found no diversion. And so this is
an area that we will continue to be rigorous on,
and there are a lot of different ways to comply.
One of them, of course, is to have it
be run by an American cloud. Another way is just to
(06:43):
make sure that we have measures put in place, whether
technology or processes, to ensure that no diversion happens.
Speaker 2 (06:50):
Jensen, the number one question I get for you is
always about energy. How severe is the energy shortage in
the context of AI's expansion? And would you talk a
bit about power, and whether power is a bigger constraint
for this buildout than the chips themselves?
Speaker 1 (07:12):
When you're growing at the rate and scale of Nvidia,
remember, we're growing some sixty percent a year. Just the
quarter-to-quarter growth of our company is ten billion dollars.
We grew the entire size of a company just in
one quarter. And so with the scale and the rate at
(07:32):
which we're growing, everything's a challenge, which is the reason
why Nvidia has to be world class at our supply chain,
working with incredible providers and suppliers like TSMC and the
memory partners and all of our systems partners, but also
working downstream with energy providers, power generation companies,
(07:54):
all of the land, power, and shell providers, so that
we can make sure that as we launch into the marketplace,
as we deploy into the marketplace, land, power, and shell
will be ready for us. One of our great advantages
is that we have such a large network of go
to market. We're in every single cloud. Every single cloud
(08:15):
service provider is a customer of ours. We're in every
single GPU cloud, and so we have a large network
and not to mention OEMs all around
the world. Our customer base, our network of partners is
so large that we will find nooks and crannies of
power and large scale, medium scale, small scale in different
(08:36):
parts of the world. And so this is a huge
advantage of ours, and it stems from the fact, Ed,
that Nvidia's architecture literally runs every model. And yesterday
we announced big news with Anthropic. And so now
the premier frontier models, OpenAI, Anthropic, xAI, Gemini, all
(09:01):
the open sources, biological models, physical AI models, everything in
the world runs on Nvidia. And as a result of that,
irrespective of which cloud provider you are, it is fantastic
that we can deploy in your cloud, because the offtake
will be incredible.
Speaker 2 (09:20):
Jensen, we can see where the hyperscalers are getting the money,
where they have the money to deploy and build. But
you mentioned Anthropic. With Anthropic, or indeed OpenAI, they
have tens of billions of dollars of commitments around the world.
Very simply: how do you know that OpenAI
is good for it, that it will be able to
find the money?
Speaker 1 (09:43):
Well, we're thoughtful, along with OpenAI, in aligning
on and taking into consideration visibility of demand and their
financing capabilities. All of that has to be in accordance,
has to be aligned, has to be coherent, before we
(10:03):
start to build out. And so I think the ambitions are large,
but the execution is disciplined, and that's really really important
to recognize. We're very disciplined with our investment. We're disciplined
with our buildout. These are very large scale investments, and
so the two teams are quite disciplined, very disciplined in
(10:25):
thinking through the investment levels. Now, it's also important to
take a step back and realize that OpenAI, Anthropic,
these are the fastest-growing companies in the history of humanity.
Their offtake, their end-market demand, is absolutely real
and absolutely incredible, and you can see that they're really
(10:46):
struggling to keep up with the demand that they have.
The engineering teams work incredibly hard to make sure
that we bring them on more capacity, but also optimize
their stack so that the usage of whatever capacity they have is
as efficient as possible. And meanwhile, there are so many new
use cases that they want to put out
into the world, and it's currently limited by the capacity
(11:09):
they have, and so this is a really important time.
You're seeing exponential growth in the amount of compute
necessary for AI, you're seeing exponential growth in the
adoption and use of AI, and the number of applications
that are going to be using these AIs is also growing.
And so we've got to do our best to support
(11:30):
the scaling out of two of the most consequential companies
in history, and we're delighted to be partnered with them.
Speaker 2 (11:37):
Jensen, investors have been worrying about depreciation. Software can actually
extend life; there are A100s out there
in the real world still at full utilization. Are people
underestimating how long your chips stay useful, or are they
misunderstanding, in the context of depreciation, how
(12:00):
you're handling generation-to-generation updates of GPUs?
Speaker 1 (12:06):
Nvidia's architecture is unlike any other accelerator, and the
reason for that is CUDA's diversity of capability
and versatility. Remember, I said two things earlier. I said
the fact that Nvidia participates in and can accelerate every phase
(12:30):
of AI: pre-training, post-training, and inference. We're the
only architecture in the world that does that fantastically. The
second thing we do: we run every single model, and
so most agentic systems, most clouds, are running so many
different diverse types of models: language models, vision models, biological models,
chemical models, for all the different fields of science. Nvidia
(12:54):
can be used across the entire lifespan of the technology.
And so if you look at products that we shipped,
the Ampere A100 we shipped six years ago. But
because our installed base is so large, our
diversity is so great, we can continuously update our software,
(13:14):
bring value to our customers on the one hand, but
because our versatility is so great, based on the capability
they need, they can use our GPUs for a very,
very long time. Now, remember, the A100 is six
years old. However, it is still an order of
magnitude faster than any CPU you could bring to bear. So
it is still the best computer. It is still the
(13:37):
best processor for much of the workload in the cloud,
and most people misunderstand that, because unlike us, most accelerators
are kind of single use. Because they don't have diversity,
because they don't have versatility, because they're not great at
every phase of AI, once they're used for whatever they
(13:58):
were designed to do, their value falls off a cliff.
That is not true with Nvidia.
Speaker 2 (14:04):
My final question is a point of clarification, if I may.
You were asked on the call about content, about
Nvidia's contribution to any given piece of AI infrastructure, and
what you said was Hopper around twenty to twenty-five,
Blackwell about thirty. Are those figures billions of dollars,
in dollar terms, on a one-gigawatt data center?
(14:26):
Or are you talking about percentages of total cost?
Speaker 1 (14:32):
Oh, thank you. Billions of dollars, billions of dollars. Yeah.
And so for our Vera Rubin system, a one-gigawatt
data center is probably something along the lines
of fifty to fifty-five, and Nvidia's contribution is probably
about thirty-five of that.
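Taking the quoted numbers at face value, the distinction being clarified, billions of dollars rather than percentages, implies a share that can be sketched in a few lines of Python. The $50-55 billion total and $35 billion of Nvidia content are the approximate figures from the interview; everything else is illustrative:

```python
# Sketch of the implied Nvidia share per one-gigawatt data center,
# using the approximate dollar figures quoted in the interview.

total_low, total_high = 50, 55  # total build cost, billions of USD (quoted range)
nvidia_content = 35             # Nvidia content, billions of USD (quoted)

# Dollars, not percentages: the implied share is content / total cost.
share_high = nvidia_content / total_low   # if the build costs $50B
share_low = nvidia_content / total_high   # if the build costs $55B

print(f"Implied Nvidia share: {share_low:.0%} to {share_high:.0%}")
# → Implied Nvidia share: 64% to 70%
```

So read either way, the answer "billions" implies Nvidia content is roughly two-thirds of the total cost of such a build.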
Speaker 2 (14:50):
Billions of dollars, noted. I was asked to
ask you; the question has been asked and answered. Astonishingly busy
day for you in the nation's capital.
Speaker 1 (15:03):
That's a great question. The difference between percentages and billions
is a big deal. Yeah, it's a brilliant question.
Speaker 2 (15:10):
The stakes are high, but we're always grateful for your time.
I know it's been a busy day. Thank you for
joining us on Bloomberg Television. That's Nvidia CEO Jensen Huang.