
August 28, 2024 • 26 mins

In a special edition of Bloomberg Technology, host Ed Ludlow speaks with Nvidia CEO Jensen Huang to discuss the company's latest quarterly report, which fell short of investors' lofty expectations.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, podcasts, radio news.

Speaker 2 (00:08):
From the heart of where innovation, money and power collide, in Silicon
Valley and beyond.

Speaker 3 (00:14):
This is Bloomberg Technology with Caroline Hyde and Ed.

Speaker 4 (00:17):
Ludlow live from San Francisco to our TV and radio
audiences around the world. Welcome to a special edition of

(00:39):
Bloomberg Technology. I'm Ed Ludlow. In just a few moments,
Nvidia CEO Jensen Huang will join us for a
live interview following their latest earnings report, the company posting
a revenue forecast that beat consensus but fell short of
some of the most optimistic estimates, stoking concern that the
explosive growth is waning. Let's get right to Bloomberg Semiconductor

(01:01):
correspondent Ian King, who joins me on set. Let's start
with the basics, the fiscal third quarter forecast and what
we learned from it.

Speaker 5 (01:07):
Yeah, that forecast was fine. If you compare it to consensus,
it was there or thereabouts, and for most of the
companies in the world, that would be great and everybody
would be happy.

Speaker 3 (01:17):
But this is Nvidia.

Speaker 5 (01:18):
This is a company which beats pretty much every quarter
and by an order of magnitude, and it didn't do that,
and it didn't indicate it would do that, and that
raised a lot of questions. As we heard on the.

Speaker 4 (01:30):
Call, there are many storylines. I think the demand from
the hyperscalers is clearly intact, but Blackwell was everything. I
want to play a sound bite of what Jensen Huang said
about Blackwell. Listen to this.

Speaker 6 (01:41):
The change to the mask is complete. There were no
functional changes necessary, and so we're sampling functional samples of Blackwell,
Grace Blackwell in a variety of system configurations as we speak.

Speaker 4 (02:00):
The main point here is that there was not a
design issue with Blackwell itself, as had been reported, but
based on what Nvidia said, this was about the production mechanism.
Bloomberg's Ian King, you did a very good job in
the Top Live blog of explaining a GPU mask. Could
you just give a short version of that
to our audience?

Speaker 5 (02:20):
Yeah, the mask is basically the blueprint that
is used to burn the circuit onto the surface
of the chip to give it its function. Right, it's
a very important step. That's the fundamental blueprint, and they
were saying we didn't get it wrong, but when it
came to manufacturing, it didn't produce as many good chips
as we wanted, so we made some tweaks to that.

(02:41):
Didn't have to redesign it, but made some tweaks, and
that is helping us to get a better yield.

Speaker 4 (02:46):
It's worth noting at this stage the stock is down almost
seven percent in after hours; it had been down more at
the conclusion of the call, I think basically because we
didn't learn enough about Blackwell. What they said was in
the fiscal fourth quarter of this fiscal year, twenty twenty five, there
will be several billion dollars of sales from Blackwell. Why
does the market want more and what does it want?

Speaker 5 (03:08):
They wanted reassurance from Nvidia's management, and they
wanted reassurance in the form of details. They wanted, you know,
Jensen to put his arm around everybody and say, don't worry,
it'll be fine. This is how much I'm going to get.
They consistently asked questions of how many billions
and when exactly those billions will come in, and Colette Kress,
the CFO, and Jensen Huang essentially avoided that question and

(03:32):
refused to give that precise reassurance.

Speaker 4 (03:35):
We're showing some of the after-hours reaction, not just
in Nvidia itself, but some of its peers, both
on the chipmaking side and the server equipment providers, and
I think AMD and ARM in particular are very noteworthy.
Bloomberg's Ian King, stay with us. I want to go
out to Chicago and Bloomberg's Ryan Vlastelica on our
equities team. And Ryan, that's the broad summary from me
about the names moving in after hours. There's probably a

(03:57):
bigger-picture after-hours movement in the markets as well.
I'll start with Nvidia and work outward from that.

Speaker 1 (04:03):
Sure, well, one thing I would say about Nvidia's
after-hours decline is that it does come after a
very strong year-to-date performance. I think it closed
up more than one hundred and fifty percent this year.
So even though the forecast was maybe a little
bit shy of some of the most optimistic expectations, it
did beat expectations. And it is, you know, coming off
such a huge gain. So it's not necessarily surprising to

(04:24):
see a little bit of a consolidation now. And I
think even the decline is a little bit less than
the options market was anticipating. So just some context there
for looking at the decline. But you're right that pretty
much we are seeing widespread weakness following this report. All
the megacaps are, you know, modestly lower. We are seeing
much more pronounced weakness in other chip makers and chip
design companies and so forth. So yeah, certainly it does

(04:47):
seem like the initial read-through here is negative, but
again it does come after a very strong start to
the year.

Speaker 4 (04:53):
We made a big deal about this earnings print for
quite a long time. I look at something like the
Invesco QQQ, the ETF that tracks the Nasdaq 100,
and I think it's down around a percentage point, right.
But in truth, Ryan, in the market's reaction, was this
the macro-level event that we thought it was going
to be?

Speaker 1 (05:14):
That's a great question. I would say that it's kind
of close enough in line with expectations, even if it
is a little bit shy of the most optimistic ones, that
I don't think this is going to really cause people
to change how they're allocating, change their opinions on
AI as a kind of fundamental secular driver. But maybe in
the near term we do see a little bit of weakness.
I mean, again, some of these stocks have been moving

(05:34):
up so much, there have been sort of kind of
growing calls about their valuation, some concerns about that. I
don't know if this is the kind of absolute blowout
that will just, you know, cause people to continue piling
in the way that they were doing earlier this year.

Speaker 4 (05:48):
There is a lot in the news cycle, inclusive of
and outside of Nvidia. I think one of the names
that you note in after hours is Super Micro.
It's down significantly. There's an Nvidia relationship to that, and
then there's the news around that name in and of itself.

Speaker 1 (06:03):
Yeah, absolutely. So we saw yesterday a short report came
out; today it's delaying the filing of its ten-K.
You know, both of those caused some weakness in
the stock. I think today it was down more than
double digits, so you know, a lot of reasons to
be concerned there in general. And it is one of
Nvidia's biggest customers. I think it's the third largest, so
you know, this is just, you know, another reason for

(06:23):
people who might have been kind of souring on Super
Micro to, you know, maybe be pulling the sell button.

Speaker 3 (06:27):
A little bit.

Speaker 4 (06:29):
Bloomberg's Ryan Vlastelica with the after-hours action out
of Chicago, thank you. Let's get the reaction from the
sell side with one of the stock's relative bears. D.A.
Davidson's Gil Luria has a neutral rating and a Street-low
ninety dollars price target on shares of Nvidia. He
joins us now. And Gil, I guess, does that change

(06:49):
your bearish perspective on the stock, and your main takeaway
from the analyst call? The quarter.

Speaker 2 (06:56):
Was a very good quarter. Their ability to continue to
grow, delivering chips at this rate, at this scale, is
fantastic and unprecedented. I don't think there's much of a
revenue issue here, of a growth issue here. I think
a little bit of the pushback is probably more around
the margin situation. You had a good conversation with Ian

(07:17):
about Blackwell and some of the moving pieces. Part of
what's happening with Blackwell: it's a new product, it's got
a few hitches along the way that put pressure on
gross margins, and then operating expenses grew more than anticipated,
so less of the top-line upside flowed to the
bottom line. And that's probably where investors are getting a

(07:38):
little bit more cautious than they were before, because in
the last several quarters, the huge upside to revenue flowed
all the way to the bottom line, creating huge upside
there. To believe in Nvidia right now is to
believe calendar twenty twenty six is going to be about
two hundred billion of revenue and about four and a
half to five dollars of earnings. That's only possible if

(08:01):
they continue to get this type of growth on the
top line and more leverage. And the second part of
that equation looks a little less certain right now.

Speaker 3 (08:11):
Gil.

Speaker 4 (08:11):
When you said that two hundred billion dollar figure, Bloomberg's
Ian King, who's covered semiconductors at this company since
nineteen ninety eight, sitting next to me still, for what
it's worth, gave a little smile. I mean, it's an
astonishing figure. I think there's a lot of emphasis here
on Blackwell. How did you react to the news of
several billion dollars in the fiscal fourth quarter from Blackwell, and the

(08:32):
explanation that this was a production issue impacting the ramp,
not a sort of core design issue around the product itself?

Speaker 2 (08:41):
So, first of all, our estimate is far less than
two hundred billion for twenty twenty six.

Speaker 4 (08:45):
But that's because you're a relative bear, Gil, and I got
that bit right.

Speaker 2 (08:48):
I think in terms of Blackwell, what happened is, Nvidia
used to be on a two-year
cycle for the data center product, and they decided to
aggressively move to a one-year cycle and then put
a lot of new feature functionality and bundle it into the
Blackwell platform. So that was a very aggressive agenda, and
it's not that surprising that there are some challenges that they

(09:10):
have to encounter. If they can still ship in their
fiscal fourth quarter, that's a very good sign. And by
the way, their customers don't really care that much. Their
customers are just going to continue to buy the latest
and greatest, and if that's the H200, they'll
just continue to buy the H200. They have made
that abundantly clear on their conference calls: Microsoft, Meta, Amazon, Google, Tesla.

(09:34):
Elon Musk made it abundantly clear they'll buy as many
GPUs as Nvidia can produce, at least this year,
because they're saying the demand signals are there. As long
as the demand signals are there, they'll buy whatever Nvidia
is selling, and towards the end of the year,
a piece of that will be Blackwell.

Speaker 4 (09:51):
Gil, Jensen Huang was at pains and gave a detailed
answer basically summarizing Nvidia as a systems vendor: the
discussion around NVLink, CPUs, everything beyond the GPU. How much
credit do you assign the company for that, the sort
of complete systems that they sell?

Speaker 2 (10:08):
A tremendous amount of credit. Let's start at the beginning.
When history is written, we'll talk about how Jensen Huang and
Nvidia created the capabilities that make generative AI possible today,
in a similar way that Elon Musk is responsible for
the electric vehicle. And part of the genius wasn't just
seeing that this is coming, that accelerated computing is coming, AI

(10:30):
capability is coming, well before anybody else did. It's also
realizing that in order to create a moat around the chip,
they needed to do a couple of things. One is
to own the proprietary software around it, CUDA, but the
other thing is to enhance the bundle as much as possible.
These GPUs work in arrays when they work together,

(10:52):
thousands or sometimes tens of thousands of GPUs together.
When you box those up with multiple
GPUs, CPUs, wiring, cabling, and firmware, you make it harder
for your competitors to catch up. And they've done a
remarkable job of that.

Speaker 4 (11:10):
I also wonder about all the rest. The story thus
far has been about the hyperscalers and Meta, five names,
essentially being the core customers of Nvidia. We got some
detail on low double-digit billions of sales this year in
sovereign AI. Do you see evidence that the market is
broadening out beyond just the cloud providers?

Speaker 3 (11:30):
Not really.

Speaker 2 (11:31):
It seems like those five are still a substantial amount
of the revenue. And those five have in their most
recent calls said some interesting things. One of them is
that they consistently said they are over-investing. So they
said they definitely want to invest more. We're seeing that
in results today. But they also used the term over-investing.
Over-investing is something you do before you do the

(11:53):
other thing, and they made it clear that they're only
going to do so when they have the demand signals.
If, for whatever reason, demand for AI capacity goes down,
for instance, because compute doesn't continue to drive performance of
foundational models, you're going to see those same five actors pull

(12:15):
back and say, you know, maybe we have enough data
center capacity, or maybe what we have in the pipeline
is enough. And that's the concern for twenty twenty five
and twenty twenty six, that those very companies could do that.
Beyond that, these are the same companies that are developing
their own chips. They're at various stages of this, but

(12:35):
Google's TPUs are probably as good as Nvidia's. So
in terms of their internal use and selling to customers
such as Apple, Google's already caught up, Amazon's catching up,
Meta and Microsoft are at earlier stages. But these same companies
that are buying every GPU from Nvidia they can will
be the ones that will buy possibly less next year

(12:56):
and the year after that.

Speaker 4 (12:58):
I just want to reflect, for our TV audience, there
are people tuned in around the world. My understanding is
there are watch parties in cities from New York to London
for this earnings. We've hyped it up for weeks. Probably
this is the most unique earnings print that I've covered
in my career. What about for you in the semiconductor space?
Why is this so different?

Speaker 2 (13:18):
Well, so, first of all, I tried to participate by
wearing Nvidia green today. But importantly, Nvidia is
important because of how much they've added to the cumulative
earnings of the S&P five hundred and the cumulative performance.
But I wouldn't necessarily extrapolate their success or sometimes setbacks

(13:38):
to the rest of the market, because what they do
is very specific. The growth in data center construction and
GPU fulfillment into those data centers is
a very small set of companies. I'd argue Nvidia
is sucking the oxygen out of the room from a lot
of other technology companies. While we celebrate with Nvidia because

(14:01):
they've added so much value, I wouldn't necessarily read through
to the rest of technology.

Speaker 4 (14:07):
Gil Luria of D.A. Davidson, the relatively most bearish or perhaps
the least bullish name on the Street, thank you. So welcome
to this special edition of Bloomberg Technology for our
TV and radio audience around the world. Joining me now
is Jensen Huang, CEO of Nvidia, straight from the analyst call.
And Jensen, good evening, good afternoon to you. I think

(14:29):
the market wanted more on Blackwell, they wanted more specifics,
and I'm trying to go through all of the call
and the transcript. It seems very clear that this
was a production issue and not a fundamental design issue
with Blackwell. But the deployment in the real world, what
does that look like tangibly? And is there a sort
of delay in the timeline of that deployment and thus

(14:50):
revenue from that product?

Speaker 7 (14:54):
Let's see. Just the fact that I was
so clear and it wasn't clear enough kind of tripped
me up there right away.

Speaker 8 (15:05):
And so let's see, we made a mask change to
improve the yield. Functionality of Blackwell is wonderful. We're sampling
Blackwell all over the world today. We've been giving
tours to people of the Blackwell systems that we have
up and running. You can find pictures of Blackwell systems

(15:25):
all over the web. We have started volume production. Volume
production will ship in Q four. In Q four, we will
have billions of dollars of Blackwell revenues and.

Speaker 3 (15:40):
We will ramp from there. We will ramp from there.

Speaker 8 (15:43):
The demand for Blackwell far exceeds its supply, of course,
in the beginning, because the demand is so great. But
we're going to have lots and lots of supply and
we will be able to ramp. Starting in Q four,
we have billions of dollars of revenues and we'll ramp
from there into Q one, into Q two and into

(16:05):
next year. We're going to have a great next year
as well.

Speaker 4 (16:08):
Jensen, what is the demand for accelerated computing beyond the
hyperscalers and Meta?

Speaker 8 (16:15):
Hyperscalers represent about forty five percent of our total data
center business. We're relatively diversified today. We have hyperscalers, we
have Internet service providers, we have sovereign AIs, we have
industries, enterprises, so it's fairly, fairly diversified. Outside

(16:39):
of hyperscalers is the other fifty five percent. Now, the
application use across all of that, all of that data
center, starts with accelerated computing. Accelerated computing does everything, of course,
from, well, the models, the things that we know about,

(17:00):
which is generative AI, and that gets most of the attention.
But at the core we also do database processing, pre-
and post-processing of data before you use it for
generative AI, transcoding, scientific simulations, computer graphics of course,

(17:21):
image processing of course. And so there are tons of applications
that people use our accelerated computing for, and one of
them is generative AI. And so let's see, what else
can I say?

Speaker 3 (17:35):
I think that's the.

Speaker 4 (17:36):
Let me jump in, Jensen, please, on sovereign AI. You and
I have talked about that before, and it was so interesting
to hear something behind it: that in this fiscal year
there will be low double-digit, I think you said, billions
of dollars in sovereign AI sales. But to the
layperson, what does that mean? Does it mean deals with
specific governments, and if so, where?

Speaker 8 (17:57):
It's not necessarily. Sometimes it's deals with a particular regional service
provider that's been funded by the government. And oftentimes that's
the case. In the case of Japan,
for example, the Japanese government came out and
offered subsidies of a couple of billion dollars, I

(18:21):
think, for several different internet companies and telcos to be
able to fund their AI infrastructure. India has a
sovereign AI initiative going and they're building their AI infrastructure. Canada,
the UK, France, Italy, I'm missing somebody, Singapore, Malaysia. You know,

(18:48):
a large number of countries are subsidizing their regional data
centers so that they can build out
their AI infrastructure. They recognize that their country's knowledge, their
country's data, digital data, is also their natural resource, not

(19:09):
just the land they're sitting on, not just the air
above them. They realize now that their digital knowledge
is part of their natural and national resource, and they
are going to harvest that and process that and transform it
into their national digital intelligence. And so this is what

(19:29):
we call sovereign AI. You could imagine almost every single
country in the world will eventually recognize this and build
out their AI infrastructure.

Speaker 4 (19:38):
Jensen, you use the word resource, and that makes me
think about the energy requirements here. I think on the
call you talked about how the next generation models will
have many orders of magnitude greater compute needs. But how
will the energy needs increase, and what is the advantage
you feel Nvidia has in that sense? Well, the.

Speaker 8 (19:58):
Most important thing that we do is to increase
the performance and efficiency of our
next generation. So Blackwell is many times more performant than
Hopper at the same level of power used, and so
that's energy efficiency: more performance with the same amount of power,

(20:18):
or the same performance at a lower power.

Speaker 3 (20:21):
And that's number one.

Speaker 8 (20:22):
And the second is using liquid cooling. We support
air cooling, we support liquid cooling, but
liquid cooling is a lot more energy efficient. And so
with the combination of all of that, you're going to get
a pretty large, pretty large step up.

Speaker 3 (20:37):
But the important thing to also.

Speaker 8 (20:39):
Realize is that AI doesn't really care where it goes
to school, and so increasingly we're going to see AI
be trained somewhere else, have that model come back and
be used near the population, or even running on your
PC or your phone.

Speaker 3 (20:54):
And so we're going.

Speaker 8 (20:55):
To train large models, but the goal is not to
run the large models necessarily all the time. You can
surely do that for some of the premium services and
the very high-value AIs, but it's very likely that
these large models would then help to train and teach
smaller models, and what we'll end up doing is have
one large, or a few large, models that are able to train

(21:18):
a whole bunch of small models, and they run everywhere.
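
What Huang is describing here is commonly known as knowledge distillation: a large "teacher" model's outputs are used as soft training targets for a much smaller "student" model that is cheap enough to run everywhere. Below is a minimal, generic sketch of that idea in PyTorch; it is not Nvidia's implementation, and the toy model sizes, the temperature value, and the random stand-in data are arbitrary assumptions chosen only to show the shape of the training loop.

    # Minimal knowledge-distillation sketch (illustrative only, not Nvidia's code).
    # A large "teacher" produces soft targets; a small "student" learns to match them.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy stand-ins for a large and a small model; sizes are arbitrary assumptions.
    teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
    student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0  # softens the teacher's distribution; value chosen arbitrarily

    for step in range(100):
        x = torch.randn(32, 128)              # stand-in for real training data
        with torch.no_grad():
            teacher_logits = teacher(x)       # the teacher's "knowledge"
        student_logits = student(x)

        # The student is trained to match the teacher's softened output
        # distribution via KL divergence (the standard distillation loss).
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In practice the student would train on real data, often mixing the distillation loss with an ordinary task loss, which is one common way a few large models can "teach" many small ones.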

Speaker 4 (21:22):
Jensen, you explained clearly that demand to build generative AI
products on models, or even at the GPU level, is
greater than current supply, in Blackwell's case in particular.
Explain the supply dynamics to me for your products and
whether you see an improvement sequentially, quarter on quarter, or
at some point by the end of the fiscal year into

(21:43):
next year.

Speaker 8 (21:45):
Well, the fact that we're growing would suggest that our
supply is improving, and our supply chain is quite large,
one of the largest supply chains in the world. We
have incredible partners and they're doing a great job supporting
us in our growth. As you know, we're one of
the fastest growing technology companies in history, and none of

(22:07):
that would have been possible without very strong demand but
also very strong supply. We're expecting Q three to have
more supply than Q two, we're expecting Q four to
have more supply than Q three, and we're expecting Q
one to have more supply than Q four. And so
I think our supply condition going into next
year will be a large improvement
over this last year. With respect to demand, Blackwell is

(22:31):
just such a leap, and there are several things that are happening.
You know, just the foundation model makers themselves: the size
of the foundation models is growing from hundreds of billions of
parameters to trillions of parameters. They're also learning more languages.
Instead of just learning human language, they're learning the language
of images and sounds and videos, and they're even learning

(22:55):
the language of 3D graphics. And whenever they are
able to learn these languages, they can understand what they see,
but they can also generate what they're asked to generate.
And so they're learning the language of proteins and chemicals
and physics. You know, it could be fluids and it
could be particle physics, and so they're learning all kinds
of different languages, learning the meaning of what we call modalities,

(23:18):
but basically learning languages.

Speaker 3 (23:20):
And so these models are growing in size.

Speaker 8 (23:24):
They're learning from more data, and there are more model
makers than there were a year ago. And so the
number of model makers has grown substantially because of all
these different modalities. And so that's just one, just the
frontier models. The foundation model makers themselves have really grown tremendously.
And then the generative AI market has really diversified, you know,

(23:46):
beyond the Internet service makers to startups, and now enterprises
are jumping in and different countries are jumping in, so
the demand is really growing.

Speaker 4 (23:57):
Jensen, I'm sorry to cut you off. I'm going to lose
you soon. You've also diversified. And when I said
to our audience you were coming on, I got so
many questions. Probably the most common one is: what is
Nvidia? We talked about you as a systems vendor,
but so many pointed to the Nvidia GPU cloud,
and I want to ask, finally, do you have plans to

(24:19):
become literally a cloud compute provider?

Speaker 7 (24:23):
No.

Speaker 8 (24:25):
Our GPU cloud was designed to be the best version
of Nvidia's cloud, built within each cloud. Nvidia DGX
Cloud is built inside GCP, inside Azure, inside AWS, inside OCI,
and so we build our clouds within theirs so that

(24:46):
we can implement our best version of our cloud, work
with them to make that cloud, that infrastructure, that
AI infrastructure, Nvidia infrastructure, as performant and with as great a TCO
as possible. And so that strategy has worked incredibly well.
And of course we are large consumers of it, because

(25:08):
we create a lot of AI ourselves, because our chips
aren't possible to design without AI, our software is not
possible to write without AI, and so we use it
ourselves, a tremendous, you know, tremendous amount of it: self-driving cars,
the general robotics work that we're doing, the Omniverse work
that we're doing. So we're using the DGX Cloud for ourselves.
We also use it for an AI foundry. We make

(25:30):
AI models for companies that would like to have expertise
in doing so, and so we are AI.

Speaker 3 (25:36):
We're an AI.

Speaker 8 (25:37):
We're a foundry for AI, like TSMC is a foundry for
our chips, and so there are three fundamental reasons why
we do it. One is to have the best version
of Nvidia inside all the clouds. Two, because we're a
large consumer ourselves. And third, because we use it
as an AI foundry to help every other company.

Speaker 4 (25:56):
Jensen Huang, CEO of Nvidia, I want to thank you
for your time and the extended conversation straight off the earnings call.
Thank you. Nvidia's earnings, fiscal... Good to see you too.
Fiscal second quarter, done. I'll point out that
in after hours the stock is still down almost seven percent.
There'll be a lot of analysis to be done overnight
by the sell side, by the buy side, and we'll bring

(26:17):
you the best from Bloomberg's reporters and editors from San Francisco.
This is Bloomberg Technology.