Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Bloomberg Audio Studios, Podcasts, radio news. Bloomberg Tech is live
from coast to coast with Caroline Hyde in New York
and Ed Ludlow in San Francisco.
Speaker 2 (00:24):
Welcome to a special edition of Bloomberg Tech Live from
CES in Las Vegas, where we will bring you conversations
from the biggest names in the industry throughout the week.
Speaker 3 (00:35):
Of course, coming.
Speaker 2 (00:35):
Up on the show, we sit down with the CEO
of AMD, Lisa Su, after the company announced a new
chip for corporate data center use.
Speaker 3 (00:44):
Plus, we discuss the global EV and.
Speaker 2 (00:45):
Robotaxi landscape with Lucid interim CEO Mark Winterhoff, and then
we talk to the CEO of gaming hardware company Razer,
joining us to discuss its latest AI gaming ecosystem. But
here on the floor of CES, we've got to get
straight over to Ed Ludlow.
Speaker 3 (01:03):
He's standing by with the CEO of AMD.
Speaker 2 (01:06):
Ed.
Speaker 4 (01:09):
Thank you, Caroline, and welcome back to Bloomberg Tech. Lisa
Su, Helios with the MI455X, AMD's first
rack-scale system solution. But inside it, AMD's first and
the world's first two-nanometer
Speaker 5 (01:26):
Chip of that type.
Speaker 4 (01:28):
A lot was made of it when you actually just
stood on stage and held it in your hands for
the first time.
Speaker 5 (01:33):
Why is it significant?
Speaker 6 (01:34):
Well, first of all, Ed, it's great to be here
with you at CES. I think CES is always a
great way to kick off the year because you get
so much perspective. So it was fun giving the keynote
last night. Look, Helios is a massive system. You can
see it in the background here, and the MI455
is just an incredibly powerful chip. And probably
the context I would give Ed is, you know, one
(01:57):
of the things that we're so clear about is that
the demand for AI compute is just continuing to increase.
And you know, we have seen that over the last
five years. When you think about just how much new
capabilities have come on board, we've now seen a real
inflection in the number of people who.
Speaker 7 (02:14):
Are using AI.
Speaker 6 (02:15):
So if you think today, they're probably more than a
billion active users using AI, and we expect that to
scale to over five billion users over the next five years.
So for all of that, you need compute and lots
and lots of compute. And from that standpoint, you know,
the MI455 is a significant leap forward
in terms of technology capability made up of two and
(02:38):
three-nanometer chips, three hundred and twenty billion.
Speaker 7 (02:41):
Transistors, just a lot of performance and.
Speaker 4 (02:44):
What about the timeline for it to be deployed
in the real world then? Who will be the
principal first user of it?
Speaker 6 (02:49):
You'll see it in the second half of twenty six
and it will ramp from there. And you know, we
have very strong partnerships. OpenAI's Greg Brockman was on
stage with us last night talking about all the.
Speaker 7 (03:01):
Use cases that they see.
Speaker 6 (03:03):
We've announced a partnership with Oracle, and many others as well.
Speaker 4 (03:07):
So given that, it's, uh, it's in full production now,
it's getting ready.
Speaker 7 (03:11):
To ship. We are absolutely getting ready to ship it.
Speaker 4 (03:15):
That's at one end of the sort of scale and spectrum.
At the other, you have the MI440X,
which is for small data centers. I'm trying to simplify it,
but it's basically an enterprise product. What was it that
you were trying to solve for with that?
Speaker 6 (03:31):
Yeah, I think what we're trying to solve for is,
you know, the world is a very heterogeneous world.
Speaker 7 (03:37):
You have all kinds.
Speaker 6 (03:38):
Of use cases for AI from you know, sort of
the very biggest cloud data centers that are doing you know,
large scale training and inference to enterprise applications as well
as supercomputers.
Speaker 7 (03:50):
And so we actually have a family of chips.
Speaker 6 (03:52):
At the highest end, there's the MI455
for the cloud environment. But we announced last night
the MI440, which is actually using the same
basic building blocks, but is now really focused on enterprise
applications so that you can go into you know, let's
call it current data centers with the new technology.
Speaker 7 (04:12):
So we're excited about that as well.
Speaker 6 (04:14):
You know, enterprises are starting to increase their
adoption of AI. In some cases they want their own
control of their data centers in terms of on-prem environments.
Speaker 4 (04:23):
What are they doing with it though, I mean, you know,
we've been so fixated on frontier models with hundreds of
billions of parameters and the scale of infrastructure needed for that.
With the MI440, we're talking about something slightly different.
I just think it's really interesting if you
could explain what the demand is from those enterprises, what
they want with the technology.
Speaker 6 (04:43):
Well, I think you see many enterprises now using AI
all throughout their business processes, whether you're talking about things
in their workflow. Even AMD, we're using AI through every
part of our development process. A lot of applications in
financial services, in healthcare, and these are areas, especially in
(05:05):
financial services, that people actually don't want everything necessarily in
the cloud. They'd like to be able to have their
own on prem deployment or private cloud deployments. And in
this case, you don't want to have to build a
brand new data center for every new generation of.
Speaker 7 (05:21):
Chip. The MI440
Speaker 6 (05:22):
Allows us to use some of those existing data centers
and upgrade with the new capabilities.
Speaker 4 (05:28):
Welcome if you're watching us on Bloomberg Television or you're
listening on Bloomberg Radio. We're live in Las Vegas and
we're with AMD CEO Lisa Su, and we're talking about
the latest.
Speaker 5 (05:39):
Generation of accelerators.
Speaker 4 (05:42):
What makes this generation of AMD accelerators the better option,
particularly for on prem and at the edge, over what
Nvidia is offering right now.
Speaker 6 (05:52):
Well, the best way to think about it, Ed, is
we're in this place where AI is at an inflection point.
We're seeing AI now in every part of compute. We
see it in the largest models, you know, when you're
thinking about things like ChatGPT and Gemini and Grok.
You know, we're also seeing you know, many use cases
(06:13):
in uh, you know, new capabilities like you know, video production, entertainment, healthcare,
where you're doing drug discovery, all of these various things.
You know, our claim to fame is really you know,
outstanding performance at a very advantaged.
Speaker 7 (06:29):
Total cost of ownership.
Speaker 6 (06:31):
And the other thing that you know we believe very
strongly in is an open ecosystem and deep partnerships, you know,
with our uh you know, with our overall ecosystem coming together.
So when you put those things in perspective, I think
we have a great set of applications that will take
advantage of these newest generation of chips.
Speaker 4 (06:51):
You mentioned that Greg Brockman, who's the OpenAI president,
was on stage with you last night, and one of
the basic points that he made was there are tools
and functions they would love to release and put out
into the world, but they're compute constrained. I often ask
you to quantify demand, but is there a way to
quantify the severity of the lack of compute, you know,
(07:12):
the deficit that's out there right now?
Speaker 6 (07:14):
Well, let me just give you some numbers to kind
of ground what we think the demand environment is looking
like. So if you think, you know, today we have
about a billion active users and we're ramping that to
you know, five billion over the next five years, and
we have about, let's call it, one hundred zettaflops of.
Speaker 7 (07:33):
Compute you know, all around the world.
Speaker 6 (07:35):
And that's just a generic number that aggregates all
of that. You know, we think we have to increase
compute by another one hundred times as you go over
the next you know, four or five years.
Speaker 7 (07:48):
And I introduced a term last night, the yottaflop.
You know, people are like, what is a yottaflop?
A yottaflop is.
Speaker 6 (07:56):
Actually ten to the twenty fourth in terms of flops.
That's a one followed by twenty four zeros. And to
give you, you know, just a view of just how
much things have really increased. I mean, that's another one
hundred times more compute than we have today. So that
gives you an idea. Now you think, what are you
going to use all that compute for? I mean, the
(08:16):
truth is the models that we have today are great.
I mean they do amazing things. You know, we talked
about a number of use cases. Uh, you know, perhaps,
you know, one that, you know, hits very close
to home is writing software. Like, you know, people
are using the AI tools right now to significantly enhance
the productivity of software developers.
Speaker 7 (08:37):
It's good, but it can get so much better.
And I mean, I think that's the key point.
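For listeners keeping the units straight, the compute scale described in this conversation can be sketched in a few lines of Python. The figures below are the round numbers quoted on air (roughly one hundred zettaflops of aggregate compute today, growing about one hundred times over four to five years), not official AMD data:

```python
# Scale of the compute figures quoted in the conversation (round numbers,
# not official AMD data).
ZETTA = 10**21  # one zettaflop
YOTTA = 10**24  # one yottaflop: a 1 followed by 24 zeros

today = 100 * ZETTA        # ~10^23 FLOPS of aggregate compute worldwide
projected = 100 * today    # the quoted ~100x growth over four to five years

print(f"Today:     {today:.0e} FLOPS")
print(f"Projected: {projected:.0e} FLOPS ({projected / YOTTA:.0f} yottaflops)")
print(f"One yottaflop is {YOTTA / today:.0f}x today's aggregate compute")
```

Note that on these round numbers, a single yottaflop works out to about ten times today's aggregate compute, and the projected hundred-fold growth lands at roughly ten yottaflops.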
Speaker 4 (08:42):
You know.
Speaker 6 (08:43):
We like to say that AI is really going
to be everywhere, and it's really for everyone, and it's
for each one of us to make our businesses more productive,
you know, each one of us more productive, you know,
going forward. And so we're still in the very early
innings of really unlocking the power of AI.
Speaker 4 (09:01):
So where we stand is, okay, there's a
compute deficit, and software has kind of hit the
limits of what current-generation compute can offer. Help us
understand the bottlenecks and barriers to deploying that compute.
There's a lot at the moment about memory chips. What about energy, electricity?
What's crossing your desk, Lisa, that gives you pause and says,
(09:25):
this is a problem right now?
Speaker 6 (09:27):
Well, our job as a technology industry is to push
the bleeding edge.
Speaker 7 (09:32):
I mean, that is our job.
Speaker 6 (09:33):
And so, you know, when we think about, like, the
MI455 deploying, you know, two-nanometer and three-nanometer
chips, having the latest generation high-bandwidth
memory that is out there, and really deploying these big systems.
The important thing is that the entire ecosystem come together
and we plan together for this next big inflection in compute.
(09:55):
And that's exactly what we're doing right now. I think
we're working very closely with the entire supply chain
to ensure that we have the resources to expand this
compute environment. And yes, you know, some of the things
that you mentioned are let's call it constrained, but.
Speaker 5 (10:13):
Which is most severely so? You know, I don't.
Speaker 6 (10:15):
Think that's any one thing. I think we're all looking at.
You know, how do we build faster? You know, certainly,
you know, power is one of those areas where you know,
you see throughout the world, you know, power is being
brought online as fast as possible. Certainly from a silicon standpoint.
You know, we're ramping our production capabilities with our partners.
From a memory standpoint, our partners are ramping as well.
(10:38):
So it's not any one thing. I think it's all
of these things have to go sort of in tandem.
And that's why partnership is just so important in this business.
Speaker 4 (10:48):
We started this conversation talking about Helios, the first rack-scale
architecture and infrastructure from AMD. Could you talk about the
future and how much of the content you want to
own in a server? You know, we started this story
with the GPU. Frankly, if you look at what Nvidia
is doing, they want to increasingly own all of what's
(11:10):
inside the box. Is that something that AMD's focused on too.
Speaker 6 (11:13):
You know, what's most important for us is to ensure
that we have turnkey solutions that are very, very easy
for our customers to deploy, because when you think about,
you know, how do you use all of this AI
compute most effectively? You want it to go into the
data center and you know, really be up and running
on day one, and for that you have to optimize
(11:34):
a full system. But from that standpoint, you know, we
are very focused on an open ecosystem. So yes, we
designed the CPUs and the GPUs and some of the
networking elements, but we also work you know, really with
the broad ecosystem of partners with industry standards. It's all
about ensuring that you know, we get the best of
all worlds when we put our solutions together.
Speaker 4 (11:56):
Looking ahead to the MI500 in twenty twenty-seven, that
has one thousand times the performance of the MI300
generation, so your last generation of really well-deployed gear.
Something's coming that's a thousand times better. How did you
make it a thousand times better?
Speaker 6 (12:15):
It is just incredible engineering at every level.
Speaker 7 (12:19):
So the MI455 is.
Speaker 6 (12:21):
Ten times better than the chip that we just launched
six months ago, the MI355, and the MI500
is another ten x, you know, on
top of that. We are using the most advanced technology
out there. We have a very you know, very clear
focus on you know, hardware, software, system code design, and
(12:43):
it is, you know, clearly pushing the bleeding edge
of capabilities.
Speaker 4 (12:48):
What is the status of AMD's ability to sell products
into China right now?
Speaker 6 (12:54):
So, you know, China is an important market for us.
You know, we actually sell a broad range of chips
into China, including, you know, our PCs as well
as, you know, other embedded chips, in the data center
context of course. We are, you know, certainly, we see
China as an important market.
We did get some licenses from the US government,
(13:16):
you know, late last year, as it relates to some
of our previous generation, our MI308
chips, you know, and we are in the process of
applying for new licenses with our MI325
chips that we were recently.
Speaker 7 (13:32):
Allowed to apply for licenses.
Speaker 6 (13:34):
We haven't gotten those licenses yet, but we continue
to view China as an important market.
Speaker 4 (13:39):
The reason I ask about it is in part
because a lot of the work that's being done in
open source models and bridging the gap between open and
closed, is being done in China. To some extent,
there's been a lot of discussion about the demand being there
in China, but could you reflect a little bit on
that demand, but also what the Chinese government's attitude is
to you taking a later generation of tech to the country.
Speaker 3 (14:04):
Well, I do think.
Speaker 6 (14:05):
The demand for you know AI in general and in
China is high for all the reasons that we talked about.
I think we are in a demand environment where more
compute is beneficial across the world. We think you know
China is an important market for us, and it's very
active in having our solutions deployed, So, you know, we
(14:26):
continue to view it as something that's important. We're working
with the US government as well as our Chinese customers,
you know, to find good.
Speaker 4 (14:33):
Solutions there. And are there signs from both governments that
the license process is moving? Commerce is kind of notorious
for things sitting on a desk for quite a long time.
Speaker 6 (14:41):
I think we are optimistic that, you know, we'll have
an opportunity to get some of those licenses.
Speaker 4 (14:47):
Granted. You're watching Bloomberg Television, you're listening to Bloomberg Radio,
this is Bloomberg Tech, and we're live in Las Vegas
with the AMD CEO Lisa Su. Last question
really, in the data center context, is the markets and
investors want data and signs that you're taking market share.
What would the metrics be that you'd point to, either
(15:09):
that already exist or over the coming twelve months, that
would evidence that?
Speaker 5 (15:13):
Well.
Speaker 7 (15:13):
I think the MI455.
Speaker 6 (15:14):
Is a clear inflection point in both
our technology capability as well as the deep partnerships that
we have across the industry. So we're excited about you know,
what we see in front of us, and you know,
we've talked about you know, tens of billions of dollars
in AI revenue as we get into twenty twenty seven,
(15:36):
and I think these are important metrics, you know, for
us as a company. When we think about the AI potential.
Speaker 4 (15:43):
For all the focus on data centers, some forget that
AMD is a leader in PCs in many respects. The forecasters
have very different opinions of what will happen this year.
Some see, you know, a shrinking market, some see modest growth
driven literally just by AI PCs. You've been able to take
(16:05):
market share and grow irrespective of what the broader conditions are,
but they haven't been great. How have you done that,
and do you expect that to continue to be the case?
Speaker 7 (16:16):
Well, the PC market is a very good market for us.
Speaker 5 (16:19):
You know.
Speaker 6 (16:20):
We grew a ton in the PC market in twenty
twenty five, and that really came from the strength of
our product portfolio. We bet early on AI PCs, so it
was a clear area where we believed that the technology
would generate demand. We also went through a refresh cycle
with Windows eleven, and as we go into twenty twenty six,
(16:40):
I think we'll want to see how a few quarters
play out. I think the general demand for computing is
certainly there. There are some supply chain constraints that, you
know, we're working through and we want to, you know,
watch going forward. But, you know, our case is one
where we are still underrepresented in parts of the market.
You know, we are very, very strong in gaming,
(17:02):
we're very strong in consumer. I think we're underrepresented in
enterprise laptops and we view this as a growth area
for us.
Speaker 5 (17:09):
Does the AI PC change that?
Speaker 6 (17:11):
AI PCs absolutely help in terms of, you know, just the
upgrade cycle coming in. We're excited about some of our
work with AI development systems as well. We announced a
new AI development system last night that you know, we
think will be also very attractive.
Speaker 4 (17:29):
Those constraints you were talking about in the PC context
are specifically DRAM, or is it broader than that?
Speaker 7 (17:35):
It's more around the memory side.
Speaker 6 (17:37):
So when you think about you know, memory overall, I
think we have so much demand coming from let's call
it AI data center compute that we want to see
how it impacts, you know, sort of the rest of
the memory market out there.
Speaker 4 (17:50):
One of the other areas that you discussed with Greg
Brockman on stage from Open AI was sort of the
net or broad economic impact of AI, not just for
the companies. I think you were talking more about the global economy.
Again, very difficult. So how does one measure progress
(18:11):
in whether AI has or has not had a direct
positive economic impact around the world in any given year.
Speaker 7 (18:18):
You know, it's true, it's.
Speaker 6 (18:19):
Hard to deconvolve all of the things that are happening.
But I think from a sense of you know, what
we see in the business, and you know, many people
want to see direct return on investment for a particular
set of investments. What I would say is that
we know that AI is making a difference in the
productivity of companies. I know that, I can see that,
(18:42):
you know, within AMD, in terms of, as
we deploy AI, you know, we're able to get products
to market faster, We're able to you know, significantly improve
some of our business processes. So you know, as we
go forward over the next several years, I think you're
going to see that much more broadly across enterprises.
Speaker 7 (18:58):
Every CEO that I talked to is talking about AI.
Speaker 6 (19:01):
It is front and center in terms of how to
build a better company, how to build a better portfolio,
and so you know, I think what you know Greg
was talking about is when you aggregate all of that
AI has to impact, you know, the world at a
GDP level, and we'll see that over the next few years.
Speaker 4 (19:22):
You're watching Bloomberg Television, you're listening to Bloomberg Radio. This
is Bloomberg Tech and we're live in Las Vegas. We're
speaking to AMD's CEO, Lisa Su. You are
an investor in Generative Bionics, also a technology partner, and they
have unveiled a humanoid robot here at Las Vegas CES.
In fact, if the magic of television can happen, and
(19:43):
we cut to the, uh, Gen, we actually see it
in the background.
Speaker 8 (19:45):
Right.
Speaker 4 (19:46):
You know, this is the first tangible sign I feel
we've seen from AMD on how you intend
to play in physical AI. Yes, explain your strategy. It
is the next big market, right?
Speaker 6 (20:00):
Yes, And I wouldn't say it's the first time, but
it's probably one of the areas where we don't highlight
as much because there's so much focus on data center
and cloud and the opportunities there are, you know, very much.
Speaker 7 (20:12):
In front of us.
Speaker 6 (20:13):
But when we look at physical AI, you know, starting
from all of the work we've done in FPGAs and
embedded real time capability, we have been in this space
for a long time. You know, we already power a
lot of robotic applications you know out there. But I
think as we go into the humanoid capability, and you know,
we're excited about our partnership with uh you know, g
(20:36):
Bionics and the work on Gen One, I think
that takes us to another level in terms of capability
and intelligence and what we're trying to do so.
Speaker 4 (20:46):
Is the business model to be all things: the brain
inside of the humanoid robot, and on the inference side,
the underlying software being trained on AMD
accelerators?
Speaker 5 (20:55):
Just, I don't know, what's the go-to-market, I guess,
is what I'm asking.
Speaker 6 (20:58):
You should expect that our partnerships extend all through all
of those levels. So we have the components that can
power the humanoid robots, you know, sort of real-time
local capability, which is very, very important, and then
we also have the technology behind that in terms of
you know, how to train and inference on these humanoids.
Speaker 5 (21:20):
When last we met in person, it was in Washington,
D.C.
Speaker 4 (21:24):
And the President had just outlined a broad strategy for
America in AI, and it really centered around infrastructure deregulation,
allowing those building the infrastructure to move faster. That was
kind of in the second half of last year. In
the months that have followed, have you seen any signs
(21:44):
that it worked and anything that you could point to
that says, yeah, people are able to build faster maybe
to address.
Speaker 5 (21:52):
Some of the compute deficits we discussed? Well, I.
Speaker 6 (21:55):
Can say for sure, you know, the President's AI Action Plan.
You know, when we met, now this was back in
July when it came out.
Speaker 7 (22:02):
I was very.
Speaker 6 (22:03):
Optimistic about having a really forward leaning strategy from you know,
sort of the whole view of what does it take
for the US to lead in AI. And I think
we've made a ton of progress along the way. And
you know, I had Michael Kratsios join us last night
on stage as well to talk about the Genesis Mission,
(22:24):
which is you know, another you know, sort of public
private partnership approach to really advance science in the United States.
And when you look at you know, all of these things,
you know, building faster, ensuring that we have you know,
the right export controls so that we are able to
have the US stack adopted across.
Speaker 5 (22:44):
The right export controls.
Speaker 6 (22:45):
Currently, we are certainly working very closely with you know,
the various parties in the US government to ensure
that we have the right balance there. And we also
have you know, this notion of how do we invest
more here and ensure that in the United States
we are running as fast as possible to bring
you know, AI capacity you know online, to help us
(23:08):
in you know, science and you know sort of the
broader you know, economic benefits.
Speaker 4 (23:13):
Lisa, what happens in twenty twenty six, what happens in
the world of AI, and what do you think will
define this year in terms of the progress that your
industry hopes to make.
Speaker 6 (23:24):
Well, I started our keynote last night with the sentence
that you know, you ain't seen nothing yet.
Speaker 7 (23:30):
That's really how I feel. I mean, we're sitting here.
Speaker 6 (23:32):
In January, and it's just amazing how much progress is made,
you know, every week and every month, when we see
how these models are developing, when we see how the
use cases are developing, and then when we see the
tangible results on businesses and outcomes. I believe that you know,
we saw a good amount of that, you know, come
(23:54):
to fruition in twenty twenty five. We're going to see
much more of that in twenty twenty six, so that
everyone should understand that, you know, AI is not just
you know, hype out there. It's not just you know,
sort of things that people are talking about in the
investment community. It's things that people are using every day,
real time and feeling like, Hey, my life is better
(24:14):
because I have this technology, and I think we're going
to see that in twenty twenty six.
Speaker 4 (24:18):
Lisa Su, AMD CEO, with AMD's and the world's first
two-nanometer chip going into Helios, its first rack-scale
system solution. Caroline, back to you on set in Las Vegas.
Speaker 3 (24:33):
What an extraordinary conversation.
Speaker 2 (24:35):
Ed Ludlow, as always, with Lisa Su of AMD, who, of course,
took to the stage last night alongside Greg Brockman of
OpenAI. And we're just going to check in on
the shares, because there were significant statements coming from Lisa
throughout that interview with Ed. We are off by three
point three percent, even as she continues to articulate how
much compute is going to need to increase one hundred
times in the next four to five years, talking about
(24:57):
the early innings that we're at in terms of unlocking AI,
and really talking up the MI455, pushing
at the bleeding edge of capability. They're also talking about
the demand they have in AI for China in particular.
Chinese demand is high. They're working with the government as
it stands on finding solutions to be able to ship
to China. Also talking about how they're underrepresented in enterprise
laptops as well. Why off by three point three percent, though?
(25:20):
Because we're seeing JP Morgan saying, look, like Nvidia, AMD's
outlook for compute demand is very bullish, and they're hearing
about the MI500 series coming, on course for a
twenty twenty seven launch, but not much new, according to
Morgan Stanley, in terms of brand new information. Nvidia
is up four tenths of a percent, as maybe we've got a
little bit more detail on the Rubin unveil, on the
fact that six chips have already come back, on their
(25:42):
next innovations, the next architecture for their compute, and we're
seeing that, as they're already back, they're likely to be shipping
in the course of twenty twenty six. But also talking
up the future of self-driving vehicles with Nvidia, a
new platform to rival maybe even a Tesla. Elon Musk
says, I'm not losing sleep over that. But we're also
hearing about the future robotics coming from Nvidia as well,
and demand for H200s coming from China.
Speaker 3 (26:04):
So so much to.
Speaker 2 (26:05):
Digest on these particular stocks, so much still to learn
here in Las Vegas. And we're going to be sitting down,
of course, with the CEO of Nvidia, that is Jensen
Huang, and Siemens CEO Roland Busch, right here from Las Vegas.
From CES coming up. We are going to be talking,
though, right alongside not only these leaders who are speaking with Ed.
Speaker 3 (26:27):
Ludlow in the next few hours.
Speaker 2 (26:29):
In the next few minutes, you're going to be hearing
from the Lucid interim CEO Mark Winterhoff. We're going to
be discussing the future of global EVs, the industry, the
company's robotaxi partnership with Uber, with Nuro, how they're
intertwining with Nvidia, how they're thinking about the future
of the self-driving platform there as well. So much to
get to in terms of supply chain as well. Stick
(26:51):
with us here. We are from the Consumer Electronics Show in Las Vegas.
Welcome back to a very special edition of Bloomberg Tech
Live from Las Vegas.
Speaker 3 (27:07):
At CES.
Speaker 2 (27:08):
Quick check on these markets as we stand, because we've
had some big announcements over here at the Consumer
Electronics Show. Key among them has been from Nvidia and
from AMD. Nvidia up a quarter of a percent
at the moment. We hear about the future of Rubin, we
hear about how it's coming on track already. Manufacturing
partners are bringing back six sets of chips for the
next-stage architecture. They're talking about a new self-driving platform,
(27:29):
they're thinking about robotics. Nvidia catching a bid of only
about a quarter of a percentage point. AMD down three
point eight percent, even as we hear Lisa Su
talking about the one hundred x compute need over the
next four to five years and how they're going to
be satisfying it with their next generation of chips, but
not enough new for the market to get its head around.
Speaker 3 (27:45):
It would feel. Nasdaq one hundred up four
tenths of a percent.
Speaker 2 (27:48):
Let's move on to some other big movers, though, because
amid these announcements come the ramifications of the ripple effects
on other key companies. Johnson Controls, check it out, we're off
by eight percent, but this is a cooling and server equipment
company in many ways that's looking at the ways in
which you cool down Nvidia's chips. Well, maybe you
(28:08):
won't need air to do that in the future, maybe you'll
be able to cool them with water cooling. And that
sends shivers down some suppliers' spines. We've
seen we're off by eight point five percent. Some of the
market's saying there's a bit of overreaction, so says Barclays.
Tesla off by four percent.
Speaker 3 (28:21):
Is this the concern.
Speaker 2 (28:22):
Around a self-driving platform being built by Nvidia? Well,
Elon Musk posted on X yesterday that it's not, he's
He's not losing any sleep over it. But for now
there's a little bit of a reason to be selling Tesla.
After its rally yesterday, SanDisk up twenty four percent,
and this is, as we hear, actually, the still unbelievable
need for memory and memory storage. SanDisk once again
(28:43):
managing to feel the ripple effects of Jensen Huang's words.
He said that on stage yesterday, and SanDisk rallies
higher along with other memory companies such as Micron. But
let's talk about other announcements being made here at CES,
and among them.
Speaker 3 (28:56):
Is from Lucid.
Speaker 2 (28:57):
Because Lucid, Nuro, and Uber, they're bringing their robotaxi ambitions
to life, unveiling a new autonomous vehicle right here
at CES. Mark Winterhoff, Lucid interim CEO, joins us
now, I'm very pleased to say. Why is your robotaxi
going to be different?
Speaker 9 (29:11):
Well, I guess it's the integration of our leading EV technology,
the luxury experience that the.
Speaker 10 (29:18):
Lucid Gravity provides.
Speaker 9 (29:19):
With the Nuro Driver, you know, and bringing this to market
very, very, very fast, because, I mean, from when we
all came together, the three of us, to when we
plan to roll it out by the end of the
year in a paid service, actually it's less than eighteen months.
And if you do that in that short period of time,
that's actually a very unique thing by itself.
Speaker 10 (29:39):
But the product itself.
Speaker 9 (29:41):
We have now unveiled the production-intent design. It's much more integrated; fewer, you know, different things on the edges of the vehicle, more integrated, and so it's going to be a very, very good experience for the customer.
Speaker 2 (29:56):
Will it always take that form, the relationship of Nuro, Uber and yourself? Because we've just had Jensen Huang unveiling his own self-driving platform.
Speaker 3 (30:05):
Would that be integrated?
Speaker 2 (30:06):
Would you be an OEM that uses that more directly
for the consumer.
Speaker 9 (30:10):
In fact, we announced a couple of months ago a partnership with Nvidia on exactly that topic. So we're also using Nvidia Drive for our Gravity, for our B2C customers. So the same thing that was basically announced yesterday with Mercedes, we will also have by the end of this year in our Lucid Gravity, and
(30:31):
when we come with our midsized platform, also at the end of this year, it will have this from the start. And we don't stop there. The next step is L three, where you actually have mind off on the highway; that is planned for twenty twenty eight, and then L four we're planning together with Nvidia by twenty twenty nine.
Speaker 10 (30:52):
For our B2C customers.
Speaker 9 (30:54):
So it's different, it's a different path than on robotaxis, but we will also continue to evolve our robotaxi ambitions.
Speaker 3 (31:02):
Will the regulation be there by twenty twenty nine?
Speaker 9 (31:04):
Is that what you're banking on? Well, that's what we're banking on here, absolutely, yeah.
Speaker 2 (31:07):
And more broadly, how do you see the ecosystem evolving?
Speaker 3 (31:12):
What in twenty twenty nine will I own?
Speaker 8 (31:16):
You mean a car.
Speaker 2 (31:17):
Or robotaxis that are already on offer? If I can choose between Waymo, or use Uber, I could get in your car. In that respect, do I therefore really need to own my own Lucid?
Speaker 10 (31:27):
You will? You will?
Speaker 9 (31:28):
I don't think that we get to a point where there will be only robotaxis, because use cases, for instance, in inner cities, you know, or short runs, make sense for robotaxis. But you also want to be able to, you know, make it your own vehicle. You don't want to go into something where somebody else was just sitting. Or, let's say, if you have a family, you have more than one kid, you need, you know, a child seat.
(31:51):
Do you want to lug this around and put it into a robotaxi and then take it out again?
Speaker 3 (31:55):
I mean, it's not feasible.
Speaker 10 (31:57):
It's not feasible anyway.
Speaker 9 (31:59):
There will always be both, you know. But I think what is very, very important is, not only on the robotaxi side but also on the retail customer side, you want to choose: do I want to be driven or do I want to drive myself? In particular, our cars are known for how great they drive, and that's actually a very important thing, because I
(32:21):
mean, EVs very often are, you know, kind of stigmatized with, oh, that's the sustainable choice and it's expensive and it needs incentives. That is not true. Our cars, for instance, they drive fantastic. They actually have better performance than internal combustion engines if you compare them in their real competitive set. So
(32:42):
I think this will go away over time, that conversation between internal combustion engines and EVs, and EVs will win in the end.
Speaker 2 (32:50):
Well, let's talk about where Lucid is at this moment,
because last year it was a painful year in.
Speaker 3 (32:56):
Terms of stock performance.
Speaker 2 (32:58):
You were having to downgrade how many cars you were going to be able to produce, and then suddenly ramped at the end of the year. You've delivered significant production in Q four. How does that scale?
Speaker 9 (33:08):
Yeah. So, I mean, I have to say I'm very proud of what the team pulled off. I mean, we had, and I was very vocal about this, issues with ramping up our Gravity, our first SUV. It was supply chain, several supply chain issues. I mean, the whole of twenty twenty five was full of, you know, surprises, let's put it that way, not only for us but also for the whole industry. But we still delivered in
(33:34):
Q four our eighth consecutive record quarter on deliveries as well. You know, that means the last two years, every single quarter, we increased our deliveries. And when it comes to production, we increased production for the whole year by more than one hundred percent, and just in the last quarter, from Q three to Q four, even by more than one hundred percent. So we're really now
(33:57):
ramping up, and we solved the supply chain issues.
Speaker 2 (34:00):
So twenty twenty six will not be a supply chain
headache issue.
Speaker 10 (34:03):
Not that I know of.
Speaker 9 (34:04):
I mean, last year, if you had asked me the same question in January, I would have said the same thing, and then a couple of things happened.
Speaker 2 (34:12):
What happened was a trade war and tariffs. How have
you changed your supply chain with Asia in particular.
Speaker 9 (34:19):
Yeah, well, I mean, this is still a process, because you cannot do this from one day to the next. Actually, by the way, all of our vehicles right now are built in the United States, but we still have components coming from other parts of the world, and we are in the process of localizing this in order not to have to pay the tariffs. As
(34:41):
an example, one very big element of our bill of materials is the batteries, and right now they come either from Korea or, a bigger chunk actually, from Japan, and we will localize this to the United States by the middle of this year, so that will actually help already quite a bit.
Speaker 10 (34:57):
But there's still more work to do.
Speaker 9 (35:00):
We are making those decisions as we go in order
to bring more things stateside.
Speaker 2 (35:05):
In order to save on that. Where is Lucid's space in terms of global market share? We've just heard that BYD has become the number one EV producer in the world, eclipsing Tesla. Tesla still has a significant chunk of share. We see Xiaomi grow in China as well. Where do you fit?
Speaker 9 (35:21):
Well, here's the thing: these days, when people talk about EVs, they mix everybody up, meaning BYD, Tesla and Xiaomi or anything else, and us. And us, we are a luxury manufacturer. Right now, we're not playing at the same price point, because right now we don't have yet.
Speaker 10 (35:42):
We will, but we don't have yet.
Speaker 9 (35:44):
The fifty-thousand-dollar car, or even less. We want to, yes.
Speaker 10 (35:48):
That Tesla has.
Speaker 9 (35:49):
The Chinese are actually further down. When you look at the Chinese market, I mean, you cannot make money there. And we have, by the way, no plans to go to China, because I don't think there's any way to make a profit there for a Western OEM coming in. But we think in our luxury space, that's what we offer: luxury,
(36:10):
and I would also say premium, because we will go down to the premium sector, at around about fifty thousand dollars. That's what we will do, yes, absolutely. But we have no plans to go down to, I don't know, thirty, twenty thousand dollars, and that's where the bulk of the sales of the Chinese are right now.
Speaker 10 (36:28):
When you look at the level higher, then it's not that great.
Speaker 3 (36:34):
We love speaking with you here.
Speaker 2 (36:35):
Congratulations on the announcement, a future of robotaxis and of consumer-owned Lucids.
Speaker 3 (36:41):
Mark Winterhoff, the interim CEO of that business.
Speaker 2 (36:44):
Coming up, we've got a gaming conversation for you: Razer, looking to AI to enhance the gaming experience, with the CEO Min-Liang Tan. That's next.
Speaker 3 (36:52):
This is Bloomberg Tech. Gaming company, Razer.
Speaker 2 (37:03):
Well, it's unveiled a suite of new AI products enhancing the gaming experience. Min-Liang Tan, Razer's CEO, joins us now. Talk to us about how your roots are in gaming hardware. We know you for the seats, for the headphones, for the mouse, but you want to be an AI ecosystem. How do you frame this to your users?
Speaker 11 (37:21):
Well, first up, you know, for us at Razer, we've been building the ecosystem in the gaming space. From a hardware perspective, many people are familiar with us for the hardware, but over and above, from a software perspective, we've got over one hundred and fifty million users on our platform. We've got about seventy thousand developers just developing on our tools.
Speaker 8 (37:40):
Over and above, we've.
Speaker 11 (37:41):
Also built out one of the largest payment networks for gaming.
Speaker 8 (37:45):
So that's been the ecosystem we've got.
Speaker 11 (37:47):
But over the years, we've been building AI for ourselves, because we believe that AI in gaming is going to be completely disruptive, changing things.
Speaker 8 (37:55):
In the gaming space.
Speaker 11 (37:55):
So, you know, we've been looking at everything from AI gaming tools for developers to things for gamers, and at CES, we've got a whole super exciting lineup for everyone.
Speaker 3 (38:05):
Okay, so let's talk about the lineup.
Speaker 2 (38:06):
I think about, in particular, some of the hardware: the headphones that in many ways are going to rival smart glasses. Talk us through these headphones, how they work, and why they're AI enabled.
Speaker 11 (38:15):
Sure, you're talking about Project Motoko. So we've unveiled.
Speaker 8 (38:18):
That at CES.
Speaker 11 (38:20):
First up, we think smart glasses are great, but headphones, it's already a universal form factor. We're not looking to bring a new form factor to the gamers, the users. And we're one of the largest producers of gaming headphones in the world at Razer. So, what we do is, well, I think the entire installed base today for headphones in the world is about one point five billion headphones, and
(38:42):
we're talking about perhaps every year there's about four hundred million new headphones being shipped at any point in time.
Speaker 8 (38:48):
The refresh rate is really great.
Speaker 11 (38:50):
And what we've done is we've taken a common universal form factor and we've added AI smarts to it. So Project Motoko has got dual four K cameras.
Speaker 8 (39:00):
To provide vision in.
Speaker 11 (39:04):
Absolutely. Well, it provides vision to the AI assistant. We've got far-field mics to get audio. So, in short, what we've got is an AI wearable which is universal. It's going to be easy to just provide the smarts across to it. It works with all the models out there; it works with Grok, ChatGPT, so on and so forth, and essentially we've now got AI smarts for every single
(39:24):
gamer and every single person out there.
Speaker 2 (39:27):
So you're thinking that this will make a leap from not just gamers to others who just want to use it as a tool in the house.
Speaker 8 (39:34):
Well, if you.
Speaker 11 (39:35):
Look at how gaming as a whole pretty much leads a lot of innovation out there. If you're talking about social networks, all that came from gaming first. Even AI in the GPUs, it started just with gaming. So the way that we see it is that a vast amount of innovation comes from gaming, and essentially we'll see gamers adopt it first and then the rest of the world.
Speaker 2 (39:58):
Let's talk about the six hundred million dollars of spending. Where is that going to be deployed?
Speaker 3 (40:02):
Is it R and D? Is it talent? Is it compute?
Speaker 8 (40:04):
Well, pretty much all of the above.
Speaker 11 (40:06):
In terms of R and D, we've been hiring AI scientists, we've been working on our internal tools. That's expensive, especially multimodal, I think, in terms of that. And we believe that, where AI is going, we're going to see AI vertical companies come up. And for us, we are hyper-focused in terms of AI gaming, where we see a massive opportunity for ourselves. It's the entire industry, from
(40:30):
gaming being able to use AI tools to develop new games, to gamers being able to use AI hardware, software and services to get a more immersive and engaging experience.
Speaker 3 (40:42):
But some are uncomfortable.
Speaker 2 (40:44):
Some are worried about AI slop, some are worried about their own gaming experience not being as high end as it has usually been. How do you counteract some of that slight growing backlash to the use of AI in gaming development?
Speaker 11 (40:58):
So, I'm a gamer; I'm not wild about AI slop either. But what we are talking about at Razer is providing the tools to the developers to develop even better games. So it's not about generative AI; it's about, for example, QA Companion.
Speaker 8 (41:12):
We're coming up with QA Companion.
Speaker 11 (41:14):
To allow game developers to shorten the time cycles to do quality assurance for a game. Over and above, we're looking at other ways in which we can reduce the cost for game developers so that they can spend more time in terms of creativity, in terms of being able to build even better games. So for us, AI is about augmenting the experience rather than replacing it.
Speaker 2 (41:36):
But you have said that you think one or two
mega games will be AI created in the future.
Speaker 11 (41:41):
Well, I think all of the games in the future will have some level of AI tools to assist, whether it's in terms of designing better workflows, whether it's in terms of doing better QA. In short, I would say that AI has the opportunity to really provide even better, more immersive games, even more competitive games in the future.
Speaker 3 (42:02):
You have flown from Singapore.
Speaker 2 (42:04):
You're an interesting company that's got presence in California but
also in Asia.
Speaker 3 (42:08):
How does the supply chain look right now for you?
Amid what was a pretty turbulent twenty twenty five?
Speaker 11 (42:13):
It was an exciting time, I must say. So we're dual headquartered in the US and in Singapore. I think
in terms of supply chain, we spent a lot of time.
I think, because we ship globally, a third of our business is in the US, a third in Europe, a third in Asia. We are a truly global company. But we've been able to kind of work through our supply chain in terms of getting the components done, in terms of shipments,
and we are still looking at it every day.
Speaker 7 (42:35):
Okay, well, it's.
Speaker 3 (42:36):
Been wonderful having you here.
Speaker 2 (42:37):
Thank you, Thank you for talking us through the announcements
some of the new gear and the supply chain.
Speaker 3 (42:41):
That goes with that. Min-Liang Tan, of course, the CEO of Razer.
Speaker 2 (42:45):
Coming up, We're going to be breaking down more of
the news coming out.
Speaker 3 (42:48):
Of the Consumer Electronics show right here in Vegas. Stick
with us.
Speaker 2 (42:51):
This is Bloomberg Tech, a very special edition of Bloomberg Tech live from Las Vegas. We are checking in on Nvidia, which is up seven tenths of a percent, actually managing to rally a little bit more on the day, as we hear that Nvidia is saying that the US government is
(43:12):
working hard on China license approvals. Of course, we're trying to understand when they will get the approvals for H two hundreds to really get done and start shipping to China.
Speaker 3 (43:20):
If China wants them.
Speaker 2 (43:21):
But Jensen Huang last night saying that there is strong demand in China for his H two hundreds, and, more broadly, there's strong demand for his Blackwell. And he's already managing to get back the first prototypes: he's already got six chips from the Vera Rubin architecture from the manufacturing partners and will ship them in the course of twenty twenty six as well.
Speaker 3 (43:39):
So maybe some mood music there. That sounded very optimistic.
Speaker 2 (43:41):
He's also talking about the future of self driving, a platform being unveiled, and also robotics. So let's talk more about what we're going to hear unveiled at CES. We're in full swing, maybe not behind me, as you can see, the participants still waiting to come into this particular convention center in Las Vegas, but we're all about the announcements.
Speaker 3 (44:00):
That's already the power, the impact of AI.
Speaker 2 (44:01):
Bloomberg's consumer tech editor is with us, Dana Wollman, and I'm so pleased to have you here, because trying to discern what the most impactful announcements are is tough.
Speaker 3 (44:11):
What about the robots side of things? What's catching your attention?
Speaker 12 (44:13):
So there are so many robots here at CES that the organizers of CES have set aside a whole dedicated space just for robots this time around, and there are some that we really feel we need to see in person when the show floor opens today. We've read, for instance, about LG's laundry-folding robot, which was announced a couple of days ago. But a lot of these things, you really have to see them in action to fully appreciate them,
(44:34):
or, as the case may be, in some cases not appreciate them and say, actually, this is really overhyped and maybe the demo is too tightly controlled.
Speaker 7 (44:42):
So that's what we're going to be looking.
Speaker 12 (44:43):
For today, now that the media days have settled down,
and now that the show floor is opening and we
can actually see some things kind of up close.
Speaker 2 (44:49):
Yeah, because we just had the Razer CEO on, and you've been up close with some of those headphones, and maybe in practice.
Speaker 3 (44:56):
They don't always work quite as well as they would like in the wild.
Speaker 2 (44:59):
What hasn't worked in the wild, many would say, is AI wearables. Some of them have flopped, some of them have been bought by others. Are we going to get more flavors as well?
Speaker 12 (45:08):
So many. And what has really struck me at this show is that there are so many form factors that are not smart glasses, almost as if all of these manufacturers decided that smart glasses, even though it's an emerging category, are already passé.
Speaker 7 (45:20):
They're already pedestrian.
Speaker 12 (45:21):
Meta essentially owns that, so yes, we need to do something else for the sake of doing something else. So you're going to see a lot of the same ideas, built-in microphones and cameras that can do a lot of the same things as, I don't want to say traditional smart glasses, but Meta's smart glasses, but packed into other form factors that are not
Speaker 7 (45:39):
Smart glasses.
Speaker 12 (45:40):
I mean, for instance, without revealing anything I'm not supposed
to on live TV, you may see some jewelry at
the show that does a lot of the same things
and it uses the same core technology, but it's just not.
Speaker 3 (45:50):
I mean, everyone in many ways. An Oura ring is AI enabled, and Samsung has an AI ring.
Speaker 12 (45:55):
Too, yes, just not with no camera capabilities in the ring.
Or to your point, the headphones that Razor just announced,
they look like regular over the here over the ear.
Speaker 3 (46:05):
Headphones, and they are.
Speaker 12 (46:06):
They function as headphones, but they have dual microphones and
cameras inside and can do things like offer real time translation,
which is something a lot of the new smart glasses
can do.
Speaker 2 (46:15):
Also, the multimodal versions of AI in wearables is where we're going to start seeing these things progress and move.
Speaker 12 (46:22):
Yes, it's just that if last year at CES was the year of smart glasses, this is the year of, sort of, TBD, something else in terms of form factor.
Speaker 3 (46:29):
Yes, jewelry. And what would you say?
Speaker 2 (46:31):
Sentiment is like here this year? Because we just heard from two CEOs that twenty twenty five, you just couldn't make it up, as for what a surprise it was when it came to tariff turbulence. What do we think the makers here are feeling?
Speaker 12 (46:44):
Oh, the device makers, Yeah, you know, there's been some
discussion at least among the laptop makers of memory shortages
and the price of RAM.
Speaker 7 (46:54):
Some are trying not.
Speaker 12 (46:55):
To discuss it, but they will say, oh, by the way, this is the price of our new devices.
And then I think otherwise there's a real effort to
make consumers comfortable with AI. I think what you're going
to see around the floor are either robots and not
all humanoids, but different kinds of robots, and in some cases, like Razer's desktop AI avatar, different implementations that are
(47:17):
either cute or anthropomorphic. We saw the other night a
robotic AI dog, the cute puppy that got a lot
of attention. So it does seem like the companies are
very intentionally trying to make people comfortable with AI, in
this case using cuteness.
Speaker 3 (47:33):
Cute AI. Bloomberg's Dana Wollman, we'll let her loose on the floor.
Speaker 2 (47:36):
Now, that does it for this special edition of Bloomberg Tech. In an hour, an exclusive conversation with Jensen Huang. Stick around for it.
Speaker 3 (47:44):
This is Bloomberg Tech