
June 30, 2025 55 mins

The generative AI revolution has created an insatiable demand for efficient, powerful computing. How is a key industry leader like AMD navigating this critical inflection point?

In CXOTalk episode 884, Michael Krigsman sits down with Mark Papermaster, CTO and Executive Vice President of AMD, to explore the future of AI infrastructure.

Discover AMD's strategy for winning in the AI era, which hinges on a tight integration of hardware and software, a commitment to open ecosystems, and a relentless focus on energy efficiency.

Mark shares invaluable insights for technology leaders on the rise of hybrid AI, the future of tailored language models, and the leadership practices required to foster innovation and agility in this fast-paced environment.

====

🔷 Newsletter: www.cxotalk.com/subscribe

🔷 LinkedIn: www.linkedin.com/company/cxotalk

🔷 Twitter: twitter.com/cxotalk

🔷 Episode: https://www.cxotalk.com/episode/inside-amds-ai-strategy-with-evp-and-cto-mark-papermaster

====

00:00 🚀 AMD's Journey and AI Evolution

03:34 🧠 AMD's Strategy in the AI Era

09:38 🤝 Building Expertise and Collaborating with AI Leaders

11:40 🏃‍♂️ Agility and Cultural Transformation at AMD

15:25 🤖 Navigating AI Infrastructure Challenges

19:13 🔮 The Future of AI: Tailored and Efficient Models

19:36 🤖 AMD's AI and Product Innovations

20:50 🏗️ AMD's Internal AI Adoption and Development Process

23:51 🌍 AMD's Technology Strategy and Sustainability Goals

30:01 🌱 Energy Efficiency and Technological Milestones

31:07 🤖 Holistic Design and Competition in AI Technology

34:39 🗣️ Leadership, Communication, and AI Integration

40:03 🧠 Exploring Reasoning Models and Computational Challenges

43:12 🌍 AMD's Global Expansion and AI Strategy

47:01 🔮 Future Trends in AI and Agentic AI Applications

50:12 🤖 The Role of Agentic AI in Enhancing Efficiency

52:07 💡 Advice for CIOs and CTOs in the AI Era

54:06 🎤 Closing Remarks and Future Outlook


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
AI infrastructure is a critical and rapidly evolving part of the
artificial intelligence landscape.
I'm Michael Krigsman, and today on CXO Talk Episode 884, we're
exploring this topic with Mark Papermaster, Chief Technology
Officer and Executive Vice President at AMD.

(00:23):
Mark oversees an extensive portfolio of hardware and
software. So let's get into it.
I've been very fortunate because I've had the chance for the last
13 1/2 years to be the CTO of AMD at an incredible inflection
point in the industry. So myself and Lisa Su were

(00:44):
recruited at that time because AMD was a storied Silicon Valley
company, over 55 years old, with
such promise, but facing challenges at that time.
And so it was an opportunity to jump in right as AI was starting
to come out. Think about 2012,
when you were first starting to get natural language processing

(01:05):
where it could have a much higher accuracy rate than other
techniques using to, you know, actually be effective use of
neural net engines and, and to be able to be that first promise
of what was yet to come so that the timing was perfect.
AMD had great building blocks, CPU technology, GPU technology,

(01:27):
other accelerators, and the know-how to put them
together to address specific markets, everything from
supercomputers down to PCs to embedded
devices. So it's been a phenomenal
opportunity. There's just great talent
at AMD and, like I say, it's just been a dream

(01:48):
job. The technology world is going
through another inflection point today with generative AI.
What does that mean for AMD? The real inflection point was
indeed generative AI. We saw the promise coming of
what could be. And so that's why we started

(02:10):
investing and making sure, you know, our technologies would
be there, both hardware and software.
But generative AI was such a fundamental inflection, Michael,
because it made AI accessible to the masses.
Where it all started, of course, was
ChatGPT. And so think about when, in,

(02:31):
you know, November 2022, ChatGPT comes out and suddenly it's a
conversation. You're putting tokens, or questions,
into ChatGPT and it's accessing supercomputing to give
you answers. You never thought of
getting that type of intelligence out of a computer

(02:52):
before. So it really opened up AI, and
honestly the supercomputing underneath it, to the masses.
And what have we seen? It's hard to believe
it's just been several years since then. Just such an
explosion of capability, more and more accuracy from those
large language models, ChatGPT and, you know,

(03:17):
Grok and the rest of the models out there.
But you're also seeing now the shift to inference.
So now people have really started to realize how you can
deploy AI and fundamentally change most every process that
we've dreamed of. So this shift to
inference, this expectation of accuracy that you were just

(03:41):
describing, what does that do to AMD in terms of your technology
and your strategy?
The need for more accuracy, and now the shift to inferencing,
means, you know, millions and millions more users. Already
ChatGPT is handling, you know, 400 to 500 million user

(04:04):
interactions per week. I mean, it's just stunning.
And what I'd have to say it means for us, and frankly our
peers in the industry, is the need for the computation to
support that is just growing exponentially.
I call it this insatiable demand for more computing.
And it needs to be efficient, because you can't

(04:25):
just throw more and more engines at it, because it
would burn more power than we have available.
And so it takes, Michael, a lot of innovation to be able to
drive forward that kind of computing capability in a smart
way. And it really takes
understanding how those software algorithms work with the

(04:47):
hardware, because otherwise you can't truly optimize.
So your strategy then involves both the hardware and
the software, the interaction. So you have
broadened the portfolio, could we say, from
your traditional business, which was more narrowly focused on the

(05:10):
processors? We've always been a hardware and
a software company. But we were, you know,
capital-H hardware and small-s software. For years, software
was very, very important, but always an enabler.
Now in this AI era, the software is equal or more important,
frankly, than the hardware, because, you know, we

(05:32):
can't let off at all on
hardware.
As I said, those engines have to be more and more
efficient at every generation. But to unlock that power, you
have to understand where the AI algorithms are going.
What are the techniques that could be used to
make AI more efficient? Can we use different matrix
(05:54):
math methods to make it more efficient?
Are there new algorithms? Some
of your listeners may know of the change when the transformer
was introduced in the AI model, and flash attention.
These are all techniques in the algorithms that allow AI to be
more accurate and more efficient, and the hardware

(06:15):
has to match up with that. So it takes a very, very tight
collaboration of hardware and software.
And that's what makes it hard, you know, to bring
competition to the market in AI.
You know, I'll tell you, Michael, for AMD, we had to earn
our way to get a seat at the table with these largest of

(06:36):
companies that are creating the cutting-edge large
language models, the new
advancements in AI. We had to prove that we're
that player, we can bring that competition, and that got us
access with their developers, and that allows us to make sure that
our road map is absolutely competitive in AI going forward.
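Mark's reference to the transformer and flash attention is concrete enough to sketch. Below is a minimal illustration, not AMD code, of the attention computation that flash-attention-style kernels reorganize; it uses PyTorch's fused scaled_dot_product_attention, which can dispatch to such kernels on supported GPUs, including ROCm builds for AMD hardware.

```python
# A sketch of the attention math that flash-attention-style kernels
# optimize. Shapes and sizes here are arbitrary, for illustration.
import math
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 1, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Naive attention: materializes the full seq_len x seq_len score matrix.
scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
naive_out = torch.softmax(scores, dim=-1) @ v

# Fused attention: the same math computed in tiles, so the large score
# matrix never has to be materialized in slow memory -- the key idea
# behind flash attention.
fused_out = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(naive_out, fused_out, atol=1e-5))  # typically True
```

The two outputs match within tolerance; the win is entirely in how the computation moves through the memory hierarchy, which is exactly the kind of hardware/software matching Mark describes.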

(06:58):
What does winning for AMD mean in this market today, as you've
been describing it? We are very tight in our
communications with our CEO leadership, and her
communication is always driving us to be the very best that we
can. When we enter a market, we
intend to rapidly gain share. We want to be a leader in that

(07:21):
market. The analogy I'll give you,
Michael, is what we did in CPU servers.
When Lisa and I joined, we actually had to exit
the server market because we didn't have a sufficiently
competitive x86 CPU. And so what we did was lay out a
strategy. Let's get a leadership CPU.
Let's go win that market, because it's clear that

(07:43):
so many workloads were moving to the cloud, moving to the data
center. And so we did exactly that.
And it doesn't happen overnight when you do
chip design. So it took, you know,
literally five years to get that new leadership CPU.
It's called our Zen family of processors.
But look at what's happened in the server market.

(08:03):
We went from virtually zero share in 2017, when we launched that, to
roughly 40% market share right now versus the
incumbent leader, Intel, that had that dominant
share. And that's exactly what we want
to do in GPUs. And CPUs and GPUs actually work very,
very closely together. And that's what we want to do in

(08:26):
AI. And that's what we've already
started. I mean, when we launched our first AI-optimized
processors, it's called the
Instinct MI300, in December of 2023.
So, first year of production, 2024, we went from, again, virtually zero
revenue in these data center GPUs to $5 billion in one year.

(08:48):
Fastest product ramp ever at AMD, by far.
And so, you know, that was, again, zero
share to about 5%. It's a huge market, but we won't
let up. I mean, we'll repeat just that CPU
journey I described to you.
It took generation after generation of listening to
customers, putting out products, having that seat at the

(09:10):
table, and then improving the product, the hardware and
software, every generation. That is exactly what we're doing
for data center GPU, and frankly it's what we're doing across our
portfolio, because our whole portfolio, from PCs to embedded
devices to gaming graphics, they're all AI accelerated.
Go to cxotalk.com, subscribe to our newsletter.

(09:33):
We have incredible shows coming up, so check them out.
How do you work with the model makers, the OpenAIs and others
of this world, to optimize what you're doing against what they
ultimately need? We had to beef up our skills in
this area. You can't show up at an OpenAI

(09:56):
or a Meta, you know, folks that are just absolutely steeped
in the fundamentals, you know, down to all of the
details of what it takes to create optimized large
language models handling the, you know, massive scope that the

(10:17):
generative AI takes on. Because, you know, think about the
billions to trillions of parameters that are going into
these large language models, Michael.
So what we had to do, and what we did, was to mode-match.
We took our brightest software leaders and we brought
them to the table. You don't come in with a
marketing team. You don't come in, you know,

(10:39):
with a waving of hands. You come in with your deepest
and steepest of technical experts. And then we grew that
expertise, that software expertise, through organic hiring
and also acquisitions. And so at this point we have,
you know, very skilled teams that can sit down and really go

(10:59):
through and understand where the bottlenecks are.
How can the hardware and software that we provide enable things
across those CPUs, GPUs, and now racks? You have to build that up
into, you know, rack-scale expansion to be able to
handle the training and inference for these largest
LLMs. And so that is a muscle that we

(11:21):
have built up over the last two years, and we had to do
it very, very quickly. And we have to be incredibly
agile. As you
work with, you know, companies like OpenAI and Meta
and the rest, if there's one constant, it's change.
We'll sit down with them and they'll say, well,
guess what? We're going down this path.
We found a better way. There's a better algorithm.

(11:43):
You know, here's what we need to do differently.
And we were always an agile company, always able to be quick
on our feet. That's one of the stalwarts
of our AMD culture. We put that to the test as we
work with those AI model companies, and,
luckily, we're good at it. So we react quickly and adapt
and meet their needs. Folks, right now you can ask

(12:06):
your questions of Mark Papermaster.
If you're watching on LinkedIn, just pop your question into the
chat. If you're watching on Twitter/X,
use the hashtag CXOTalk and get your questions answered.
Mark, is there a cultural dimension to this change?

(12:27):
You mentioned developing a new set of muscles, a new muscle memory.
Is there a cultural change that has to go on when you go through
this type of inflection? We went through a cultural
change at AMD first, overall; that's frankly been the fuel of
the whole turnaround. If you just look at AMD over the
last 10 years, we've grown dramatically,

(12:48):
Michael, and I attribute our culture as a big piece of that.
And the change in culture was to really focus, first of all,
on execution. You put out a product road
map, and you talk to customers and you listen to them,
so you build in features that you know they need, that
can differentiate your product and make it better than the

(13:09):
competition. You have to deliver that when
you said you would, with quality, and become that dependable
supplier. And so that has just been a
maniacal focus. When I got here, I started
working with the rest of the engineering team on really
re-engineering our engineering processes so we could be that
repeatable engine of getting out new leadership, innovative

(13:31):
technology cycle after cycle. And then Lisa stepped up from
running all of our businesses in AMD and,
in late 2014, became the CEO of the company. And what she

(13:52):
brought was this amazing focus across the entire company: a
focus on customers, listening to customers; a focus on delivering
products that really make a difference,
that execution engine; and then thirdly, just
simplicity. Let's not be a company
that's caught up in complexity, but really simplify how we do

(14:16):
things to make sure that we're the most efficient that we can
be. So that was a change
from a company standpoint. And then came this explosion
of AI, as you say, back to the release of the first generative
AI. And what we've seen is such,
you know, a massive industry inflection, still
going, that it's building yet again new muscle in the company.

(14:40):
That new muscle is, one, like what I said earlier, being as much a
software company, if not more, than we are a hardware company.
So that's been a change. And then secondly is just the speed
at which we move. I have been in the industry four-

(15:01):
plus decades and I have never moved at the velocity that we
are moving now. And as I look across our whole
company, we are moving at a faster velocity than we ever
have. And when I look across the
entire industry, I see everyone moving at a faster pace.
AI is accelerating the rate and pace at which innovation is

(15:23):
delivered to the market. We have a question coming in
from LinkedIn. And again, folks, when else will you have the
chance to ask Mark Papermaster, the CTO of AMD, pretty much
whatever you want? So take advantage of it.
This question is from Preeti Narayanan.
And she says: as the cost of running large-scale AI models

(15:49):
in public cloud environments continues to rise, many
enterprises are re-evaluating high-performance on-prem or
hybrid infrastructure options. Yet bare-metal deployments bring
their own challenges in terms of complexity and maintenance.
From AMD's perspective, how should CIOs and CTOs navigate

(16:13):
this trade-off, and are we approaching a point where hybrid
AI infrastructure becomes the strategic norm?
One size doesn't fit all. We think about that with our
product portfolio. We're going to offer choice.
We offer a broad ecosystem so our customers can
really have choice, not, you know,

(16:36):
just here's your only, you know, AI solution,
here's, you know, a rack that our competitor is putting out
there, and not giving you the ability to really tailor
it as you need. We work with OEMs and we
work with hyperscalers. So what you're starting to see
is a bit of a dichotomy. You're getting massive rack-

(16:58):
scale designs in the hyperscalers that are supporting
these largest of large language models.
And so where you have significant training needs or
large-scale inferencing that you're doing, you're probably
going to continue to run those in the cloud.
And by the way, because of the efficiency that we're driving in

(17:19):
the industry, the cost per token is going down.
Now, the cost of the computing is going up, because we're adding
more and more capabilities, but the actual cost per token is
going down. But it's still an
expensive bill, because all of industry is bringing on more and
more users. Because people are starting to deploy AI, they're

(17:41):
running AI inferencing in most every process that
they have. Consumers are using more and
more AI in their daily lives. So that demand is going up.
How are businesses thinking about that?
I do see them moving to a hybrid.
I do see them, as I said a moment ago, using the cloud for
those big tasks that they have. But they're starting to tailor

(18:03):
and fine-tune models to their business.
They're harnessing the data they have, and so then you're not
needing the LLM that you can ask anything.
You're creating your own large language model, and in some cases
small language models, really tailored to more of a point task.
And that's allowing businesses to deploy AI and to be able to

(18:28):
support it more economically on-prem, and frankly at lower latency.
It's faster because it's right there at the point of the
factory floor, the point of where the users are seeking
those, you know, immediate answers.
So I do see a hybrid approach.
I see a dichotomy, and there's a third leg of that.

(18:49):
And that's actually the embedded devices.
So you have on-prem, you know, and you have what you're running
in the cloud. And then thirdly, the edge for
AI is really going, and that's where you're literally embedding
the AI engines at the point of data acquisition, so that it's,
you know, just immediately providing smarts to the

(19:11):
process that's being controlled.
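The cloud/on-prem/edge split Mark lays out maps naturally onto a routing policy. Here is a minimal sketch of that hybrid pattern; every endpoint name and the classify_task() heuristic are hypothetical, for illustration only, not an AMD or vendor API.

```python
# A minimal sketch of the hybrid routing pattern described above.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    endpoint: str            # where the request is served
    typical_latency_ms: int

CLOUD = Backend("cloud-llm", "https://cloud.example.com/v1/chat", 800)
ONPREM = Backend("onprem-slm", "http://slm.internal:8000/v1/chat", 120)
EDGE = Backend("edge-npu", "ipc://local-npu", 15)

def classify_task(prompt: str, needs_broad_knowledge: bool,
                  at_point_of_data: bool) -> Backend:
    """Route big open-ended tasks to the cloud, tailored domain tasks
    to an on-prem small language model, and real-time control loops
    to an embedded engine at the point of data acquisition."""
    if at_point_of_data:
        return EDGE
    if needs_broad_knowledge or len(prompt) > 20_000:  # crude heuristic
        return CLOUD
    return ONPREM

# Example: a factory-floor anomaly check stays at the edge;
# a contract summary goes to the fine-tuned on-prem model.
print(classify_task("vibration spike on line 3", False, True).name)         # edge-npu
print(classify_task("summarize Q2 supplier contracts", False, False).name)  # onprem-slm
```

The design mirrors Mark's reasoning: open-ended, knowledge-heavy work tolerates cloud latency and cost, while tailored and latency-critical tasks stay close to the data.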
Other guests on CXO Talk have
also said that they believe that the future is smaller but more
tightly focused models for particular domains and
applications. It's obvious, right? When you
step back and think about it, it's like, OK, as you start deploying
AI, you know it will get tailored, it will

(19:34):
become more efficient. People have to drive the cost
down for point applications. We saw this coming at AMD, you
know, quite some time ago. And we leveraged the fact that
we have that breadth of portfolio across, as I said,
everything from supercomputers. We have the number one and number two
supercomputers in the world, running on AMD CPUs and GPUs.
We're growing in the data center with both CPUs and GPUs,

(19:57):
but also smart PCs, Michael. So we were the first to
introduce deep AI acceleration for copilots
in the PC, and that's been a really growing piece of our
portfolio. But also, we just introduced, with our latest
version of gaming graphics, beautiful AI-driven

(20:19):
upscaling in graphics, leveraging the embedded AI.
And then think about our acquisition of Xilinx in 2022, a
leader in embedded devices, programmable gate arrays, along
with the embedded x86 business that we have. All of it
AI accelerated. So we are absolutely
seeing this coming and expect to see our continued growth of AI

(20:43):
deployments as people understand how to harness their data and
bring effective inference applications.
Let's take some more questions. We have some from Twitter,
from X, and this is from Ricardo Deonda.
And he says: how does AMD's internal IT organization act as

(21:07):
Customer Zero for your AI solutions?
Are there examples you can share where internal adoption directly
influenced product development? We're a technology company.
Shame on us if we didn't make ourselves Customer Zero.
But we have actually been doing this for the last years, and

(21:28):
starting about four years ago, we actually raised the
visibility of IT significantly in being Customer Zero.
So our CIO is directly engaged. Hasmukh will engage with
customers, he'll understand their needs, he'll share with
them how we are deploying AI, and he's also supporting our

(21:49):
engineering teams as they're running our compute systems.
And you know, to the point where we don't think about
bringing out a new data center GPU, a new data center CPU, a
new PC, we don't even think about bringing it to market
until our IT has already exercised it, already deployed it first to
the users, the compute users at AMD, and we've

(22:13):
tested it. We build clusters of data center
computation first. We do deployments of hundreds
of PCs before it goes out to the market.
And likewise across our embedded applications. It's a
little bit harder on embedded applications, but it's
not hard at all for PCs through our data center computing

(22:34):
products to be Customer Zero. And that's exactly what we do.
In fact, I'll add, we've actually sped our chip design
process up. That's what we do:
we develop chip designs, and the hardware and software have to
work effectively. So what do we do in the chip
design process? We are using AI. And just
think about it, you know: there are billions of transistors.

(22:55):
Our latest AI chip has 154 billion transistors.
Actually, our newest one now has over 180 billion transistors.
How do you get all those transistors laid out across the
silicon, in the vast weave of interconnect that you have to
put together? It all has to be perfect.
You can't have one transistor, one connection, where

(23:16):
that transistor doesn't work. AI, it turns out, is very, very
effective at helping ensure that we not only have the most
optimal implementation, but it's helping our test processes and
getting the coverage to make sure that no defects can
escape through our manufacturing test line.
And so it's been very, very effective not just in

(23:36):
engineering, but even in our business practices.
So being Customer Zero also brings direct benefits, as you
speed your internal AI applications' success and efficiency.
Let's grab another question. You can see the audience is
an incredible audience and really smart,
so I always defer to their questions in front of my own.

(24:01):
And this is from Elizabeth Shaw, who says: so what's the
technology strategy? Is it thinking of all the
product lines or families as one system distributed over
different domains? When you think about a product
portfolio like we have at AMD, you have to be flexible in how

(24:22):
you chart the direction. What do I mean by that?
Each product itself has to stand alone in its own category.
So CPU: we have, you know, the x86 Zen line of CPUs.
They have to be the best x86 CPUs that are out there.
Same thing with our GPUs. They have to be, you know,
the best engines that are out there.

(24:44):
The same thing across our gaming graphics, our
embedded devices. So you have to think about all
the building blocks. Our road map has to make sure, in
hardware and software, in the software enablement, that we
are best of breed. And then when you put

(25:06):
it together, what we have is a strategy of how they are
deployed, how we tailor that to products that meet the
customer needs: how many CPU cores they need,
how you need to tailor it when it's being used for
database processing versus that same CPU being used as a head
node, a controller for a vast number of GPUs that it's
managing. Two totally different use cases.

(25:28):
And likewise, that applies across each of our portfolios.
You have to think through the use case, the dominant use cases,
of our customers, making sure you're optimized.
And then there's the third step, and that is we have to
think across our whole portfolio: how can we get
synergy for AI? What we've done is we have one
software enablement stack, ROCm. That's the name of our software

(25:51):
enablement, and that is going to be the top level of
making it easy to deploy AI, whether it be our data
center GPUs, our CPUs, or our gaming graphics devices or
embedded devices. And that's an area that we're
incredibly focused on right now in 2025, really opening up

(26:12):
that capability to users. You can now run ROCm on
our AMD-based Windows PCs. You can now run ROCm on the
latest of the Radeon graphics cards, and have it
optimized as well. Even CPUs: people don't
realize CPUs are an essential part of AI processing, and we

(26:33):
have them well enabled as well.
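As a concrete, hedged example of what that enablement looks like from a user's chair: on ROCm builds of PyTorch, AMD GPUs surface through the familiar torch.cuda interface (via HIP), so unmodified CUDA-style code runs on Instinct or recent Radeon cards. This is a generic PyTorch sketch, not an AMD-provided script.

```python
# Assumes a ROCm wheel of PyTorch is installed; falls back to CPU otherwise.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
if device == "cuda":
    # torch.version.hip is a version string on ROCm builds, None on CUDA builds.
    print("GPU:", torch.cuda.get_device_name(0), "| HIP:", torch.version.hip)

x = torch.randn(4096, 4096, device=device)
y = x @ x.t()            # same call path on ROCm and CUDA builds of PyTorch
print(y.shape, y.device)
```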
This is from Chris Peterson, who
says: on the technology side, where does AMD see GPU-to-
GPU interconnects going? On the business side, what's AMD's plan if
the AI industry pivots massively in algorithm design, or the power and

(26:58):
water sustainability just can't keep up with demand?
So, two questions, very quickly:
GPU-to-GPU interconnect, and then the larger set of business
issues. We have previously gone
about that using our proprietary Infinity
architecture, which has been proven through our generations
of CPUs and GPUs today. But we have a commitment at AMD

(27:22):
to an open ecosystem. We think it's extremely
important, again, that you don't just have a few
bespoke offerings, because that's a walled-garden approach.
And so we support, and we're a founding member of, Ultra
Accelerator Link. And what Ultra Accelerator Link
is: we donated the kind of protocol we used to connect our
GPUs, and now it's out there, not under AMD control, but controlled with
multiple companies that are running the UALink consortium

(27:46):
and that are ensuring that other switch vendors can play,
other competitors of ours that are creating their own
accelerators. They can use this Ultra
Accelerator Link standard and use the same switches and,
you know, connectivity solutions that are out there.
Again, we are committed to an open ecosystem, and that is our

(28:09):
strategy for GPU-to-GPU links.
The second question is of course critically important: you know,
what do we do as the demand for energy consumption goes up
and up and up to accommodate our insatiable demand for more AI
computation? You know, my short answer is
innovation. Now, the demand

(28:30):
is just so high. You look at, you know, the
hundreds-of-billions-of-dollars market opportunity we're looking
at, you know, just in the next
few years; by 2028 the market size is going to be that large.
And so that's an incredible pull for innovation.
When you have a market that large, what it means is
we're going to continue to innovate.

(28:52):
How do we make these GPUs more efficient?
How do you make that GPU and CPU
work more efficiently together? But even more than that, you
know, how do you support cases where you have
algorithms that aren't changing so quickly, so you don't need
that programmability of the GPU and CPU?
We support as well tailored devices and custom devices

(29:15):
that our customers can work with us on.
So I think we're going to see that whole range.
You know, I'll just say, just to show our commitment: we just
hit a milestone for CPU and GPU computation for these most
demanding AI and supercomputing workloads.
We set out in 2020 that we would drive a 30x improvement by 2025.

(29:39):
In a five-year period, 2020 to 2025, we'd have a 30x
improvement in performance per watt, energy efficiency.
And we actually declared, just a couple weeks ago with the advent
of our new MI350 series, that we not only hit that 30x improvement
in efficiency, we actually hit 38x.
So we do that: we put out measurable milestones to incent

(30:04):
our own team to drive efficiency. We work with professors and
universities to make sure it's not marketing fluff; it's
real efficiency that's going to be delivered at the end of the
day. And so now we're on to our next
milestone. Sam Naffziger leads this for us
and has just put together, again working with the community,
working with professors so we can measure it, a rack-efficiency

(30:25):
gain. So, as I told you, these now scale
at rack level, how you put these AI compute clusters
together, and we've committed to a 20x improvement at rack level
by 2030. So that's our next energy
efficiency milestone that we're focused on.
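As a back-of-envelope check on those milestones (a sketch; the baseline year for the 2030 rack-level goal is an assumption here, not something Mark states):

```python
# Implied annual gains from the stated efficiency milestones.
achieved = 38            # declared perf/watt gain, 2020 -> 2025 (goal was 30x)
annual = achieved ** (1 / 5)
print(f"38x over 5 years ~ {annual:.2f}x per year")          # ~2.07x per year

rack_goal = 20           # rack-level efficiency goal "by 2030"
years = 5                # assumes a ~2025 baseline (assumption, not stated)
print(f"20x over {years} years ~ {rack_goal ** (1 / years):.2f}x per year")  # ~1.82x
```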
You've spoken about the

(30:48):
innovation necessary to manage potential significant changes
in algorithms. You've spoken about the need
for sustainability. Are there other big issues that
you have on your radar right now that you're focused on?
We don't have a choice, Michael. We have to look at the entire

(31:10):
landscape. And so we worry every day.
We worry every day, first, about what disruptive technologies are
coming. We have a great research team.
They're constantly looking ahead at what's
coming in terms of our computation, how we can
interconnect that computation to be more efficient.

(31:30):
Our networking technologies: we've got very, very innovative
networking through our acquisitions of Xilinx and
Pensando, just super technology of how we can network
these AI computations. And now with our acquisition of
ZT Systems, the landscape includes the rack design, and
driving that innovation. So we call it holistic

(31:52):
design. It has to be cross-disciplinary,
all of these groups, hardware and software, working together to
not only design the next generation, but look one and two
generations beyond that. What's disruptive, what's
coming? We're partnering with others
on quantum computing. You know, I won't get into the
debate as to when quantum goes mainstream.

(32:14):
Quantum will start off as an accelerator.
It'll be, you know, bespoke applications that can really
benefit from quantum, but we're going to be there. Already, our
Xilinx-based FPGAs in our portfolio are used as
controllers in most every quantum computer out there
today. And we also, you know, work with

(32:35):
the rest of our portfolio to be that CPU and GPU complex that
can work with those quantum accelerators. Just an example.
But it's really the entire landscape, Michael, that we have
to be looking out at, again, not just today's products, but the
next generation and the generation beyond.
Again from Chris Peterson: AMD, NVIDIA, and others seem to be

(32:59):
competing relatively head to head in terms of overall design
for AI. What about plans to compete with
niche players like Cerebras or others that have wildly
different solutions? So I'm interested really in
both your competition with NVIDIA as well as with the other
niche players that Chris Peterson mentions.

(33:21):
We are a mass producer of computing technology, and so it
doesn't behoove us to be, you know, the first to market in
a niche application. Cerebras:
I give that team a lot of credit.
Andrew Feldman and team do a great job of wafer-scale
integration, and there are certain tasks where, when it fits in that

(33:45):
model and can leverage that flow processing,
data-flow processing at wafer scale, it's
certainly going to have advantages. But
it is specific workloads that can really benefit there.
And so they're doing well in that. We'll watch whether
workloads that can fit in that kind of application are growing, and I

(34:08):
could say the same about any of the other startups that are
focused in certain areas. We watch it closely.
And if we hear from our customers that it's gaining
traction, that more and more models could leverage that,
we're going to, you know, bake those approaches
into our plan. But we have to make sure
that we're listening as well, given the role AMD plays in the

(34:30):
industry, that we are really looking and making sure
that we're one and two steps ahead in mass-scale AI
computation. Two of the core themes that you
have mentioned are execution and, to simplify it, keeping your ear
to the ground on what's happening in the industry and

(34:51):
where things are going. Do I have that right?
Absolutely. I wouldn't manage my day-to-day
any differently than I do right now, and than I have the
entire time I've been in the CTO role at AMD, and that is:
prioritize having that ear to the ground.
So I prioritize customer visits, and I fly to customers, or

(35:12):
set up, you know, virtual meetings like you and I are
having today, and very much make sure I'm in listen mode.
I don't go in and just beat my chest and say, you should be
using AMD, here's how we can give you
better total cost of ownership, we can help save you money.
I do mention that, I leave that in for sure, but that's not
the reason I'm making that call.

(35:33):
I'm first understanding: what challenge are
they facing? And I want to make sure
that our portfolio can address that, and then I can educate them
and show them where we can bring them advantage.
And likewise, we follow the competition.
I run a regular process within AMD, which is just our
competitive review. So we look at any announcements

(35:55):
that our competitors are making, because we don't ever want to be
arrogant. Like, we can never have an
attitude at AMD that, oh, we're just better than everyone
else, we have the right way,
what other people are doing is wrong.
The converse: we look and we are constantly asking ourselves,
do we have that best practice? If there's

(36:17):
a better approach someone else had, let's top that yet again.
Let's use that to spur our innovation on, to make sure
that we don't find ourselves in any way disadvantaged.
And on this topic, Christine Lofgren from LinkedIn asks about
leadership practices that enable you to be responsive, to

(36:42):
innovate, and do the things that you're describing.
We often look past how important fundamental communication skills
are. I mean, you think that
sounds obvious. Like, of course you have to
communicate well. Well, actually, I'll tell you,
it's paramount. Like, you know, I have
spent four decades trying to hone my communication skills, and
communication means two-way. You think you're listening.

(37:05):
Are you really listening? Did you play it back to the
person who is talking to you, to make sure you got it right?
And that applies externally and internally.
It also is just: how do you communicate what you're
doing, your North Star, to the team? Is the whole
engineering team aligned on the priorities that we have

(37:26):
and how we achieve those goals? That takes an
investment in excellent communications.
And then with our customers: are we really articulating our
value proposition, what it is we're about at AMD,
and how we can deliver them value?
So across the board, internal and external, I think

(37:48):
communication is vitally important.
It has to be married with a sound strategy to win.
And so if you take the time to really develop a strategy that
makes it clear, for all the investments we make, how
we're differentiated, how we bring value, how we're going to
win, and you marry that with

(38:09):
excellent communication, that's how you win.
Do you have any quick advice on communicating well for business
executives? It seems like an obvious topic, but
you've just emphasized the deep importance of it.
You have to put the time into it. It comes easy to me now,
because I've been doing this for decades, but it

(38:31):
didn't always come easy. I invested the time to
hone my skills, which means you have to put yourself in
uncomfortable situations, force yourself to be where you are not
comfortable. And again, that can be internal
communications or external communications, but get out
of that zone that you've been living in.

(38:53):
Stretch yourself, hone your skills, get feedback.
Lisa Su always says feedback's a gift.
It really is. Put your thick skin on; get feedback from those
that you know will tell you the truth, not butter you up
and tell you, oh yeah, that was great.
You know, get unfiltered feedback.
And then lastly, in this age of AI, use AI. I use AI every single

(39:15):
day to help me and give me information to make better
decisions, but also in communication: proofreading what
I did, what was not clear, what was the tone of that
communication I just wrote?
It's that right-hand person, you know, that is, by the
way, getting better and better every generation.
That's there to help you. Isn't it amazing that you,

(39:39):
given your role as CTO of AMD, just referred to the AI as a
"they," essentially? Why?
Because you use it that way. And so when you're deploying AI,
you're thinking about, like, oh, I need help.
Do I get this person to help me, that person to help me, or do I

(40:00):
want the AI to help me? That is how I think about it.
I think that's how most people are starting to think about it.
I use numerous models every single day.
And absolutely, that's certainly how I think about it, Michael.
Have you used some of the new reasoning approaches that are
out there, and the research inferencing
capabilities that are out there? Have you tried any of

(40:21):
those capabilities? Yeah, all the time.
And now one of the hardest challenges I face is figuring
out what's the right model to use for the right task.
And if you use the wrong reasoning model, you'll end up
sitting there for 20 minutes while it gives you back the
wrong answer. Absolutely.

(40:41):
How you deploy, how you prompt, how you iterate: so important.
You asked me a question earlier about algorithmic
change: how does that affect AMD now
that we've hit reasoning, as
well as agentic processes that spawn off, you know, many, many
tasks? I can tell you it drove a huge

(41:02):
change in terms of how our computation engines are
deployed. So when you go through
reasoning, it means that you're going through multiple loops of
inference. You're doing the forward chaining of your,
you know, machine learning processing
again and again and again to get more accuracy, but
primarily you're also building on bigger context windows.

(41:24):
I ran an inferencing loop and I got these kinds of
answers. Well, I want to remember that,
but I want to build on that, and I want to use maybe a different
expert or a different kind of model.
I'm going to build on that.
It turns out that's like a database that you're having to
manage, of all that context. And so it needs great CPUs,
which we have at AMD. And so what we're seeing is

(41:45):
more and more having that strong CPU paired with the GPU
to be able to be effective on these new modes of
inferencing, with the reasoning and the research.
And then likewise agentic AI, because with agentic AI, you can code
it up to handle a number of tasks.
It can go out and run, you know, get to the APIs, the application

(42:08):
programming interfaces, and spawn other tasks.
And those, we're finding, are often on CPUs.
So it's really driving a different mix in the
computation for AI.
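The pattern Mark is describing, multiple inference passes that accumulate context and may switch models per step, can be sketched in a few lines. call_model() below is a hypothetical stand-in for any chat-completion endpoint; no specific vendor API is implied.

```python
# A minimal sketch of a reasoning loop: repeated inference passes that
# accumulate context (the "database" of prior steps, typically managed
# CPU-side) and may hand each step to a different expert model.
def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real inference client; it just echoes
    # here so the sketch runs end to end.
    return f"<{model} output for {len(prompt)} chars of context>"

def reason(question: str,
           steps=(("draft", "general-model"),
                  ("critique", "expert-model"),
                  ("final", "general-model"))) -> str:
    context = [f"Question: {question}"]   # grows on every pass
    answer = ""
    for step, model in steps:
        # Each loop re-sends the accumulated context, so the effective
        # context window (and memory traffic) grows with every pass.
        prompt = "\n".join(context) + f"\n\nTask: produce the {step}."
        answer = call_model(model, prompt)
        context.append(f"{step} ({model}): {answer}")
    return answer

print(reason("Why is the sky blue?"))
```

Because the accumulated context is re-sent on every pass, both the context window and the CPU-side bookkeeping grow with each loop, which is the shift in compute mix Mark points to.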
software technology partners, the model makers, in terms of

(42:29):
best practices for offloading certain kinds of tasks to the
CPU versus the GPU, and when to go back and forth, and so forth?
When you think about that interaction that we have with
the model developers: they're the deep experts of the model
itself. We're the deep experts of

(42:49):
that communication, you know, the communication tuning
across the different compute devices and how to mode-match
that with the algorithms. So it's very much a give and
take, a brainstorm, that is pretty incredible to see.
We have a question from LinkedIn, changing gears entirely here,

(43:11):
from Dr. Anker Pattier, and he says: what are AMD's plans
for future expansion? Any plans with Southeast Asia?
We are constantly expanding. We have grown across Asia.
I mean, when I think about what we've done:

(43:34):
we've been in Malaysia and Singapore for many, many
decades. I think we've been in Malaysia
50 years. And so, you know, we continue to
expand worldwide, including Southeast Asia. And in India,
we're at 8,000 employees today, 8,000 engineers,

(43:54):
and we're on a pace to be at 10,000 by 2028.
And, you know, across Taiwan, China, you
know, we look at where the other geos are.
So it's not just there. We're equally expanding in
Europe, and, you know, we are an example of a global

(44:18):
company, a multinational company, an MNC.
We serve a global market, and so our engineering force and
our sales force are absolutely global.
This is from Vishal Bhargava on LinkedIn, and he says:
which research and inferencing products are we talking about?

(44:38):
Can you please share specific names?
Oh, when I'm talking about research products, I'm talking
about, literally, like OpenAI research.
So you can get a research subscription with OpenAI,
and it costs more, but you truly
have, you know, that researching capability I
described. And you can get similar functions with, you

(45:01):
know, Anthropic and other models that are out there.
Ricardo Deonda on Twitter/X comes back again, and he says:
what are the biggest challenges AMD faces in managing machine
identities across hybrid and multi-cloud environments,
especially with the rise of AI-driven automation?

(45:22):
Identity is a huge focus across computing.
I mean, we do an authentication every time we talk to any other
computer, to make sure that we know it's a valid, trusted
compute device. So I would probably need
a little bit more context to answer your question best.
How do geopolitical considerations and the drive for

(45:45):
supply chain resilience directly influence AMD's AI product
strategy, R&D investments, and global manufacturing
partnerships? And I'm not trying to make this
a political conversation. Our supply chain, like everyone's
in the industry, has had to become very agile.
There are tariffs which apply to certain products

(46:09):
and certain sourcing locations. We need to have the agility to
maneuver as best we can to mitigate those impacts.
Otherwise the cost of our products would go up.
And so we do that. We have a fantastic supply chain
team. Kayvon and his group
are very flexible. We are, again, a global

(46:30):
company and we have global manufacturing.
And so we've been agile. Just like I said we have to be agile
to adjust our products to the needs of our customers and
optimize them for computation, it turns out, in the current
geopolitical environment, our supply chain has to be equally agile.
On LinkedIn, Christine Lofgren comes back and says: what key

(46:51):
trends or developments, both within AMD and the wider tech
industry, do you expect to shape AI's trajectory over the next 12
months? One is, there's more and more
inferencing. You're just going to see such innovation on models.
Already, I'm so impressed with what some
startups and some businesses have been able to do with small
language models that are really pointed at tasks.

(47:14):
And I just think we're at the absolute beginning, the tip
of the iceberg, of tailored models that bring
innovations for, you know, very unique tasks, that can
be incredibly efficient. So I think model development,
both tailored and small language models, but also to allow us

(47:35):
to scale the accuracy of the large language models as the
hyperscalers are driving to artificial general intelligence.
So I think we're going to see just continued innovation across
that spectrum. And then likewise, the base
technology itself, how we put it together: you're just going to
see tremendous innovation to make it more power efficient,

(47:57):
across everything from supercomputing to the smallest
devices. There's tremendous innovation going on in this area,
everything from materials to new types of transistors, to how the
memory is connected more closely and more efficiently to
the compute devices, networking, and on and on.
Mark, you mentioned artificial general intelligence, AGI.

(48:20):
You're in a very unique position, working with these
model developers. Do you have thoughts on the
trajectory of the models going forward, say over the next year?
Where do you think the world will end up, say a year from now,
or nine months from now? We can see nine months.

(48:42):
I don't know that we can go much beyond that nine months to a
year. But when you look at that time frame, you're seeing a
continuation, you know, of more and more accuracy
coming out of these models, larger context, being able to
handle, you know, much more thoughtful answers.
Particularly, like I talked about earlier, with these reasoning

(49:03):
techniques that are being applied. That trend will
absolutely
continue over the next year, as well as more and more compute
capability needed to support that.
The tough question is five years from now. Where are
we, you know, five years from now? What actually is that
definition of AGI? Do we hit it five years from

(49:23):
now? Lots of debate out there, you
know, and I'll leave it to others to prognosticate
what that definition of AGI is and the precise date at
which we hit it. What about agentic AI, which
you've mentioned several times? It seems like that is such an
important architecture, if we can call it that, that it's not

(49:45):
going away anytime soon. Thoughts on that?
I talked to the CIO of a major, major bank, and she was
describing to me the progress they've made in agentic AI.
I mean, they've got almost the entire company
enrolled in deploying AI, and, you know, every
week they're finding point applications that can just be

(50:08):
sped up by creating an agentic AI. And think about
that. That's a ripe area.
Banks handle transaction processing.
I mean, so much of their time is spent on well-defined
tasks, well-defined data. And so if you can create an
agentic AI that has, you know, clear goals, clear sets of data,

(50:29):
you know, clearly defined APIs to connect different tasks
together, it's going to drive, you know, a very, very quick hit
of efficiency. And I do think we're
going to start seeing that, you know, more and more.
We're certainly doing that internal to AMD, looking for
processes where agentic AI can speed our productivity.

(50:51):
And it leads, Michael, to the, you know, you asked me to keep it
short, so I'll keep it very short.
But it's agentic AI that I would urge people to think about.
Where can you make people really more productive?
Like, do people want to be spending their time where it is
very, you know, very much a process that agentic AI could

(51:13):
do for you? Lean in. Get that agentic AI to
complete those tasks, to free up the time for the innovation, the
creativity, you know, the really human-driven
opportunities that we have in front of us as a society.
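A minimal sketch of the agentic pattern Mark describes, clear goals, well-defined data, and clearly defined APIs chained into one flow. The bank-flavored tool names are hypothetical, purely for illustration.

```python
# A toy agent: a fixed plan of well-defined tools, each step's output
# feeding the next. All tool names and payloads are made up.
from typing import Callable

TOOLS: dict[str, Callable[[dict], dict]] = {
    "fetch_transactions": lambda args: {"rows": f"txns for {args['account']}"},
    "flag_anomalies":     lambda args: {"flags": f"checked {args['rows']}"},
    "file_report":        lambda args: {"status": "filed", "detail": args["flags"]},
}

def run_agent(goal: str, plan: list[tuple[str, dict]]) -> dict:
    """Execute a fixed tool plan, passing each step's output forward."""
    state: dict = {"goal": goal}
    for tool_name, static_args in plan:
        args = {**state, **static_args}
        state.update(TOOLS[tool_name](args))
    return state

result = run_agent(
    "daily anomaly report for account 42",
    [("fetch_transactions", {"account": "42"}),
     ("flag_anomalies", {}),
     ("file_report", {})],
)
print(result["status"])  # filed
```

A production agent would let a model choose the next tool and validate each step; the point here is only that the orchestration itself is light, well-defined work, exactly the kind of process Mark suggests handing to an agent.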
That is going to take some time, to diffuse that kind of

(51:33):
knowledge and the ability to break down processes. Because it's
one thing for folks like yourself, who are really advanced
technologists, who understand the link between the technology and
the business process and are able to decompose it, but there's a
maturing process across industry that will need to take place.
Absolutely. It's actually, again, cultural as
well as technical. And so therefore it does

(51:56):
take some time; people have to get their heads around it.
They have to build the technical acumen to be able to deploy.
But I think it's going to happen much more quickly than we ever
anticipated. Mark, as we finish up: advice for
CIOs and CTOs inside large organizations, in the enterprise,

(52:16):
at this point in time, today. And to generalize, based on your talks
with lots of customers, what should they be really focused
on? One, I'd say: if you're not
moving more quickly than you ever have before, something's
wrong, because the pace of change, you know, has
really inflected over the last, you know, several years and will

(52:37):
continue through, you know, any period that
any of us could project. So I think one is, you know, if
you're not comfortable embracing change, you have to
adapt. And I don't mean change without
managing risk. One of the things that we drove
in our AMD culture is: take the risk, but manage it.
Make sure you have checks and balances.
Don't, you know, walk off the edge of a cliff without, you

(53:01):
know, a rope attached to keep you from
falling into the chasm. So I think, you know,
one is really be receptive to change, but manage
it to ensure that, you know, you're controlling your
risk. And the second I would say,
that comes with that, is stepping back and examining:

(53:24):
what is your core value proposition?
What are the key things you're doing?
And in an AI era, can you really re-architect how
you do things? Are there fundamentals
and processes by which you run your business that
can be re-architected in the AI era?
So both of those would probably be the two biggest pieces

(53:47):
of the advice I would share to CIOs and heads of
infrastructure out there. And I would say: help AMD in our
quest to create a competitive environment where there's an
open ecosystem and competition out there, not one dominant
supplier. I think that's bad for everyone.
And personally, I want faster chips so I can render video and

(54:09):
do all the things I do faster and faster and faster, and
with not quite so much heat. That's an unending
quest. I mean, that's what
technologists like myself, we have no choice.
We have to stay focused there
every single day. We've been speaking with Mark Papermaster, Chief
Technology Officer and Executive Vice President at AMD.
Thank you so much for taking time to be with us.

(54:32):
I'm very grateful to you, Mark. My pleasure, Michael.
Thank you. And thank you to everybody who
watched, and for your great questions.
Before you go, go to cxotalk.com, subscribe to our
newsletter. We have incredible shows coming
up, so check them out, and we'll see you again next time.
Thanks so much, everybody.