
July 10, 2025 | 39 mins

Hoover senior fellows Dr. Elizabeth Economy and Dr. Amy Zegart discuss the "DeepSeek moment," when China's DeepSeek AI model surprised U.S. markets by replicating OpenAI's performance with fewer resources and an open-weight approach. The two explore the strategic implications of open versus closed AI models, with Zegart arguing that the U.S. should embrace more open research approaches rather than closed models. They examine how China is successfully replicating America's historical innovation model of investing heavily in long-term basic science while the U.S. has reduced federal R&D spending. The two scholars conclude with policy recommendations, including fixing K-12 math education, building national compute infrastructure for universities, and strengthening partnerships with allies, while emphasizing the importance of including academia in what should be "public-private-academic partnerships."

Recorded on July 2, 2025.

ABOUT THE SPEAKERS

Amy Zegart is the Morris Arnold and Nona Jean Cox Senior Fellow and the Director of the Technology Policy Accelerator (TPA) at the Hoover Institution. She is also a Professor of Political Science (by courtesy) at Stanford University, and a Senior Fellow at Stanford's Human-Centered Artificial Intelligence Institute and the Freeman Spogli Institute for International Studies. The author of five books, she specializes in U.S. intelligence, emerging technologies and national security, grand strategy, and global political risk management.

Zegart's award-winning research includes the leading academic study of intelligence failures before 9/11: Spying Blind: The CIA, the FBI, and the Origins of 9/11. Her most recent book is the bestseller Spies, Lies, and Algorithms: The History and Future of American Intelligence (Princeton, 2022), which was nominated by Princeton University Press for the Pulitzer Prize. Her op-eds and essays have appeared in Foreign Affairs, Politico, the New York Times, the Washington Post, and the Wall Street Journal.

Elizabeth Economy is the Hargrove Senior Fellow and co-director of the Program on the US, China, and the World at the Hoover Institution. From 2021 to 2023, she took leave from Hoover to serve as the senior advisor for China to the US Secretary of Commerce. Before joining Hoover, she was the C.V. Starr Senior Fellow and director of Asia Studies at the Council on Foreign Relations. She is the author of four books on China, including most recently The World According to China (Polity, 2021), and the co-editor of two volumes. She serves on the boards of the National Endowment for Democracy and the National Committee on US-China Relations. She is a member of the Aspen Strategy Group and the Council on Foreign Relations and serves as a book reviewer for Foreign Affairs.

ABOUT THE SERIES

China Considered with Elizabeth Economy is a Hoover Institution podcast series that features in-depth conversations with leading political figures, scholars, and activists from around the world. The series explores the ideas, events, and forces shaping China's future and its global relationships, offering high-level expertise, clear-eyed analysis, and valuable insights to demystify China's evolving dynamics and what they may mean for ordinary citizens and key decision makers across societies, governments, and the private sector.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:08):
[MUSIC]
>> Elizabeth Economy: Welcome to
China Considered, a podcast that brings fresh insight and
informed discussion to one of the most consequential issues of our time,
how China's changing and changing the world.
I'm Liz Economy, Hargrove Senior Fellow and Co-director of the Program on
the US, China, and the World at the Hoover Institution at Stanford University.
Today I have with me my good friend andcolleague Dr. Amy Zegart.

(00:29):
She's the Morris Arnold and Nona Jean Cox Senior Fellow here at Hoover and
also teaches in Stanford's Political Science department.
She's a specialist on all things technology and national security.
And today, we're gonna talk about one of the seminal issues of our time,
the DeepSeek moment.
Welcome, Amy.
It's great to have you here.

>> Amy Zegart (00:48):
It is so nice to be with you, Liz.
No place I'd rather be.

>> Elizabeth Economy (00:52):
Okay, so we're gonna talk about DeepSeek.
Let's just start with where were you and
what was your reaction when you first heard about DeepSeek?
What about this DeepSeek moment?

>> Amy Zegart (01:05):
So DeepSeek, as you know, created a deep freak.
And so in the beginning, I was watching the press coverage and thinking,
this can't possibly be the full story.
And a couple weeks after, I was in a meeting at the AI Institute at Stanford
with a bunch of computer scientists and others around the table.
And it was a remarkable conversation because what I realized was, first of all,

(01:28):
a lot of the media coverage was wrong.
And secondly, DeepSeek was not a surprise to any of the faculty in AI at Stanford.
It was a surprise to markets, it was a surprise to investors.
It was a surprise to the CIA.
It was not a surprise to people in computer science because they've been
following DeepSeek for a couple of years.

>> Elizabeth Economy (01:49):
So what made DeepSeek?
What made the DeepSeek moment?
Why was this such a big deal?
Even if, and I agree with you,
because I happened to be at a meeting in Southern California with a lot of
sort of representatives from AI companies here in the United States.
And while I was surprised, and some of the other China folks around the table
were surprised, the folks from the AI companies were not at all surprised.

(02:10):
Exactly what you said.
They'd been tracking DeepSeek for a while, but clearly for
the majority of Americans and for the US government, it was a big surprise.
What made this so significant?

>> Amy Zegart (02:21):
So I think it's important that it was a surprise, number one.
So the fact that our government was taken by surprise should concern all of us.
So, as you know,
I'm really worried about the broader category of strategic technical surprise.
How do we anticipate what's happening in technology, when could it confer
advantage to the Chinese that we might not be able to catch up on?

(02:43):
And so this was a near miss in terms of that.
But your question, why was this a big deal?
And it is a big deal.
And I think there are really four reasons why it's a big deal.
The first is the DeepSeek moment said, it's game on.
The thinking, you know China much better than I do.
I think the thinking in Beijing, certainly the thinking in Washington,

(03:04):
was the US is ahead, really far ahead in large language models.
And turns out that's not the case.
The second, the aha part of this DeepSeek moment, was it
clarified what race we're in.
So we often talk in technology about being in a race with China.
Well, what's the race?
I think until DeepSeek, most people thought the race was to invent.

(03:26):
Who's gonna invent the latest model first?
Well, what DeepSeek revealed was the race is to adopt, not invent.
So DeepSeek didn't invent.
It wasn't a major engineering milestone.
But what they did was basically replicate the performance of
the best OpenAI model without access to anything that OpenAI did.

(03:47):
So they knew that this thing existed and they could recreate it with less money,
less compute,
less frontier capabilities than the best companies in the United States.
So it's a race to adopt AI platforms globally.
And DeepSeek is cheaper, it's free in the third world.
And so the US companies are racing to be first to invent.

(04:09):
And maybe that's not the racewe should be worried about.
The third thing is, I think, and we can get into this,
that it seems like a very nerdy debate, but
it's a really important debate about open models versus closed models.
And what DeepSeek did was really suggest the future is open.
And by that I mean they published what they did, their weights were open, which

(04:33):
enables other people to modify, accelerate and replicate what they're doing.
Most US companies, Meta excepted, take a closed model approach.
They don't publish what they do.
They don't share this information.
They think they're creating a moat.
I think what DeepSeek is suggesting is that is a failing strategy.
So for the United States, this is a big deal.

(04:54):
And then last, I think it reveals a lot about talent.
And I know we're gonna talk more about talent, but
who are these DeepSeek researchers that created this really important moment?
The answer is they were not people who were trained in the United States.

>> Elizabeth Economy (05:09):
Right, and you've done some really novel research into that
last point that you just made.
And I do wanna talk about it, but let me just pick up on the third point about
the sort of open versus sort of protected models, right,
where they share, you don't share.
I mean, it would seem to me then that if DeepSeek basically
has laid bare everything they did, how they got there,

(05:31):
why couldn't it be the case that some very young, talented AI researchers and
computer scientists here in the United States could take that and
just replicate it and improve upon it, and then we're back in the game again?

>> Amy Zegart (05:45):
Well, I think that's already happening, right?
Researchers are already going to town on the DeepSeek model and
this is learning from learning.
So the Chinese are learning from the Americans,
the Americans are learning from the Chinese.
But I think we're in a bizarro world where the US used to be in favor
of open research.
Universities publish things openly.

(06:05):
We think we accelerate innovation faster that way.
We have open international collaborations.
And now it's the Americans that are saying, actually,
we don't want anyone to know what we're doing.
We're gonna operate in secret.
We won't tell, our employees can't say what they're doing.
We're not publishing what we're finding.
And the Chinese are saying that they're open and accelerating research.

(06:26):
And so within computer science departments, where are researchers
turning to use models to develop their capabilities in AI? China.
Why?
Cuz the models are open and they can play with them.

>> Elizabeth Economy (06:40):
Okay, we have Meta.
So I'm just gonna push a little bit on this point so I really understand it.
We do have Meta, which is open, right?
And which does share.
Isn't it okay to have some of our companies approaching this one way,
some approaching another way?
And I mean, shouldn't we be a little protective of our IP?

(07:02):
I mean, I think across the full sort of range of technologies and
over time, where the United States has been all too willing to share,
or sometimes China has just appropriated the technology,
it feels as though there's a reason behind what these companies are doing.

>> Amy Zegart (07:21):
Absolutely, and you bring up a good point.
And we should distinguish; there are different levels of openness.
So DeepSeek did not reveal its data.
We don't know what data it trained on and there's all sorts of consternation about,
well, really, did they just train on data from other models in the United States?
They released the model weights, not the data they used.

(07:41):
That's a big difference for people in the field.
And there is this difference that Meta is the only major company that has sort of
an open weight model.
As you know well, there are real concerns, legitimate concerns about, number one,
how do we make sure these models don't do things that are really unsafe for
everybody?
And the horse leaves the barn and we can't get the horse back in the barn.

(08:05):
And that's a really important set of concerns.
And number two, what you raise is why would we just let all this technology and
IP walk out the door for the world?
Like, shouldn't we protect it, like patents, etc.
But the DeepSeek moment really changed my mind on a lot of these things.
I think, as you know better than anyone, Liz,
this is a technology that naturally diffuses.

(08:28):
You can't keep it to yourself.
It's always going to get out because it's not like nuclear material where you
can control it.
And much of the capability to develop these models is right inside people's
heads.
It's the ultimate portable weapon.
So this technology isn't born classified.
You can't keep it.
And so if that's the case, then shouldn't we be racing faster to be ahead and

(08:52):
set the standards and understand the guardrails and try to set norms?
So my thinking really has changed because of DeepSeek.
I used to think much more that we should lean more toward pausing,
protecting, and now I actually think that is a misguided strategy.
And I'm really concerned that US companies are so focused on beating

(09:14):
their American competitors, they're going to lose to their Chinese competitors.

>> Elizabeth Economy (09:19):
Well, that is a really important point.
I mean, I would guess that the DeepSeek moment has sort of profoundly,
I think, shaped your thinking.
My guess is that it's probably shaped the thinking of some people at OpenAI and
Anthropic and other places about what they need to be doing differently, but
maybe not.
And I think one of the big issues that you touched on in

(09:43):
the fourth point was the talent issue and DeepSeek.
The sort of the narrative around DeepSeek was that its model was developed by
a very small team of young scientists and developers, virtually all of whom,
as you said, were educated exclusively in Chinese universities.
I think your research shows part of that's true, part of that's not true, but

(10:05):
has huge implications, which I wanna get into.
But talk a little bit about what you found in your research, and how you went
about it, because I think really it's a unique lens into this company.

>> Amy Zegart (10:18):
So let me just take a step back and say,
why do we care about where the talent comes from?
I think that knowledge power is far more important in today's
geopolitical competition than it's ever been before.
So, yes, military hard power matters.
Yes, soft power of our values still matters, but increasingly,
because technology is so central to economic competition and security

(10:42):
competition, knowledge power, the ability to innovate, is really important.
And one key component of that is where the talent goes.
So, you know, there are often these Pentagon maps that show how many nuclear missiles
does this country have or how many tanks does that country have.
I would like to understand that map for AI researchers around the world.

(11:05):
How many cutting edge AI researchers are in Beijing versus in Silicon Valley?
So that's the idea behind it.
And then, so what we did was I asked this amazing research assistant who you now
have hired full time, named Everson Johnston.
Can you just figure out what you can find about all the people who are listed on
these DeepSeek papers?

(11:26):
And there wasn't just one paper, which is what the media focused on,
there were five.
Let's look at all five papers this company has ever produced.
Gather up all the information you can about the 223 people that were listed as
contributors in some way.
She did an amazing job.
She used AI to study AI.

(11:46):
So she developed her own AI scraping tools to look across the Internet at
everything she could find about the authors.
And of the 223 total authors that contributed to any of those papers,
she found incredible data about 201.
So a lot.

>> Elizabeth Economy (12:03):
Yeah.

>> Amy Zegart (12:03):
And then what we did is we looked at where did these people live,
where did they train, where did they work, and where did they go to school.
And what can we infer from the patterns that we found?
We found a couple of really interesting things.
One is that 98% of them had significant training in China.

(12:27):
This is not the story we often hear, which is China sends all of its best and
brightest to the United States.
We give them all the great things that they learn, and
then they go back to China.
What we also found, to me, this is the most disconcerting thing: we found more
than half of those researchers in DeepSeek have trained nowhere outside of China.
They have spent their entire lives inside China,

(12:50):
which tells me China has an incredible and robust domestic talent pipeline.
We do not have that kind of pipeline in anywhere near the numbers that China does.
So we are asymmetrically vulnerable.
If you think about this as a supply chain of talent,
we rely much more on foreign talent in STEM than China increasingly does,

(13:13):
so they can grow their own in a way that we can't; they have a large population also.
So this suggests that we need to compete for
global talent in a much more vigorous way than we've done, not just grow our own.
We have to do all of the above.
I would say two more things that we found.
One is that of the 49 DeepSeek researchers that spent time in the United States,

(13:35):
most of them only came for a year.
They didn't spend a lot of time here.
They spent one year here and
they went to 65 different institutions in 26 different states.
This is a very widespread, unusual pattern.
I would have expected a concentration in CS departments at Stanford and Cal and
MIT, right, the three top departments.

(13:58):
That's not what we found.
Whether this is deliberate or not, we don't know.
But it's an interesting pattern and
a short pattern of people coming to the United States and then leaving.

>> Elizabeth Economy (14:06):
Well, can I just stop you there for a second?
I mean, presumably not all of them can get into Stanford, MIT, and Caltech, right?
So it's maybe not surprising that some went elsewhere.
I mean, were they all studying computer science?

>> Amy Zegart (14:22):
No. So we found some of them were in medical programs,
some of them were in bioengineering programs.
So a much more widespread set of programs that they went to,
not just computer science departments.

>> Elizabeth Economy (14:34):
Right, and I think that was one of the things that
the founder of DeepSeek mentioned, right, when he was talking about his strategy, was
that he was bringing together people from sort of disparate fields.
This was not simply a group of computer scientists.
So I think that's reflective.
What you found, I think, is reflective of some element of his strategy and

(14:55):
who he brought in.

>> Amy Zegart (14:56):
And so I'm glad you raised that point, Liz,
because the mantra has been they're copying US companies.
DeepSeek didn't copy US companies.
DeepSeek copied US universities, right?
And that's the model of bringing people together across fields and
having a lot of young people involved, right?
And so the last thing that we found was they may be young,

(15:17):
the DeepSeek talent, but they ain't green, right?
So you look at citation metrics, right?
So Alexandr Wang, who was just hired by Meta, right,
an AI superstar, is 28 years old.

>> Elizabeth Economy (15:32):
Yeah. >> Amy Zegart
He is not green.
Well, DeepSeek showed a similar pattern.
So we looked at the citation counts of the researchers, and
we compared the median and the average citation counts, to the extent we could, to those at
OpenAI in one of their papers.
And what we found was this was a group that was actually pretty highly cited.
They had a lot of research under their belt.

(15:52):
They were young but they were not inexperienced.
And in fact, the sort of median level of citation was higher
relative to the average than OpenAI's.
What does all that mean?
It means they relied on fewer superstars than the OpenAI paper we looked at, and
their average bench was better.
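
To make that bench-depth point concrete, here is a minimal illustrative sketch, with entirely made-up citation counts rather than the actual DeepSeek or OpenAI data, of the median-versus-mean comparison described above: when a team's median citation count is close to its mean, citations are spread across the group; when the mean sits far above the median, a few superstars dominate.

# Illustrative sketch only: hypothetical citation counts, not the real data.
from statistics import mean, median

team_a = [120, 95, 300, 80, 150, 110, 90, 200]   # citations spread fairly evenly
team_b = [40, 30, 5000, 25, 60, 45, 35, 9000]    # a few superstars dominate

def bench_depth(citations):
    # Median-to-mean ratio: near 1.0 means citations are spread across the team;
    # well below 1.0 means the average is pulled up by a handful of outliers.
    return median(citations) / mean(citations)

for label, cites in [("evenly spread team", team_a), ("superstar-heavy team", team_b)]:
    print(f"{label}: median={median(cites)}, mean={mean(cites):.0f}, ratio={bench_depth(cites):.2f}")

On these made-up numbers the first team's ratio is about 0.8 and the second's about 0.02, which is the kind of contrast behind the observation that DeepSeek's median was higher relative to its average than the OpenAI group's.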

>> Elizabeth Economy (16:13):
That is important.
I also thought, if I recall correctly, though,
that it didn't skew quite as young as we might have thought, right?
I mean, I have to say, when I read about it, just from the press reports,
that I was envisioning this, you know, small garage-type environment with

(16:34):
15, 20 young people, ages 18 to 25.
But it seemed to me that a number of the most senior people were actually quite
a bit older than that, is that fair?

>> Amy Zegart (16:44):
I think so, yes.
I mean, I have to look specifically at the data, but I also think there's a tendency,
as you know, in Washington to think 25 is really young, but in the tech field,
25 is not young.

>> Elizabeth Economy (16:55):
Fair enough, fair enough, fair enough [LAUGH].
It's true, it's true.
All right.
So you see sort of a different model that they've developed, homegrown talent,
obviously, one of the takeaways, which you've already mentioned for
the US is that we're gonna have to work harder and
run faster to continue to attract more talent from outside the United States.

(17:19):
Same time, we should probably be doing a better job of growing our own talent here.
Talk to me about sort of what you would see as the lessons that the United States
should take away from how DeepSeek has approached this, if there are any.
Maybe it's just too different and we can't really adapt our model to their model but
maybe there are some takeaways.

>> Amy Zegart (17:39):
I actually think the biggest takeaway is we need
to remember our own model.
So what China is doing is replicating the best of the US model, and
we are headed in exactly the opposite direction.
What is China doing right now?
You can talk about this better than I can.
It is an S and T, science and technology, strategy that says we wanna build

(18:01):
in fundamental research, we wanna empower our research universities,
we wanna invest in big, hairy basic research questions,
not just things that have a commercial application today.
And we're gonna play the long game.
That is exactly the US model that led us to be the innovation
superpower of the world since World War II.

(18:24):
But what are we doing?
We're reducing funding to universities, we're reducing support for
that fundamental research that asks big, hairy questions, and
we're putting more money into applications.
So it's wonderful that venture capitalists are investing in all sorts of companies,
but those are designed to create a return on investment today or soon.

(18:47):
Whereas basic research funded by the Federal government is patient
capital designed across generations to develop insights that can
then lead to breakthroughs.
The example I always use is Google.
We all use Google, but very few people realize that Google really emerged from
Federal funding of university research in something called digital libraries.

(19:13):
The Chinese model is actually the American model, and
we've lost sight of the American model.
Just one final thing, which is if we look at R&D as a percentage of GDP,
the Federal government spending on research and development,
it's a third of what it was at its peak in the 60s.
And China is spending at a rate that is six times faster than we are.
So they're gonna eclipse us within a few years.

>> Elizabeth Economy (19:37):
Yeah, I think they're already at 2.7% of GDP and
they're increasing by 0.7% per year or something.
And we're at, I don't know, 3.5% maybe still,
I don't know, depending on what the current administration is doing.
So what do you think explains,
this seems kind of obvious that we have this model that has led us to this point.

(20:01):
What do you think explains the sort of decision by the current administration
to change course at this moment when, as you've outlined,
we're kind of in the race of our life?

>> Amy Zegart (20:14):
I don't know.
I'm not inside the brains of the folks in the administration.
I think some of it, frankly, is the own goals made by universities.
This is a tough moment and
a moment of reckoning for higher education on a lot of fronts.
We've made ourselves opportune targets.
The complaints about universitieshaving a monoculture are true.

(20:37):
The concern about not being able to have alternative points of view,
those are true.
And so universities have not done what they should do to create
the environment and the mission that we say we want to do.
That's, I think, point one.
I think point two is we have done a poor job of explaining the innovation

(20:57):
model that the United States has that's made us so great.
I am amazed at how little understanding there is of the role of
basic research in Silicon Valley.
I have had major tech executives say to me,
what does it matter if we fund fundamental research in universities?

(21:18):
So I think there's an excitement about venture capital, and I love venture
capital as much as the next person, but all investment is not created equal.
And all research is not created equal.
And so I think universities need to do a much better job of
articulating what it is that we bring to the table.

(21:39):
One of the big new initiatives we're doing in the Technology Policy Accelerator at Hoover
is what we're calling Lab to Launch, which is exactly that.
It's gathering the data and telling the story so
that taxpayers understand the return on investment of this fundamental research,
because we're not just drinking lattes and sitting around in the summer.

(21:59):
But amazingly, my own relatives think that's what we do.
And so we need to do a much better job of actually gathering the fact base and
presenting it in a compelling way of what it is that we do in universities and
the relationship to the private sector, how those things have to go hand in hand.

>> Elizabeth Economy (22:15):
Yeah, I think I remember reading at one point that of
the top 100 sort of big inventions that have come out of the United States,
technological inventions like the Internet, for example,
at least half of them were funded initially, at least in part,
with government support.

(22:36):
So it shouldn't be that difficult to look back through history.
But I think it's great that you're trying to have more contemporaneous
examples of how university basic research has led to or continues to lead
to sort of breakthrough inventions that have important applications for
bettering our society, bettering our economy.

(22:56):
Do you have one in mind that you could share, something that you've just been
working on in this Lab to Launch?

>> Amy Zegart (23:03):
So there are a few, I think,
examples that most people don't realize.
So I talked about Google.
The one that I spoke about when we went to Washington, which is a personal one for
me, is the MRI machine, right?
So most of us have had an MRI, fortunately or unfortunately, at some time or another.
The MRI saved my dad's life, right?
Gave my father more than 20 years of longer life,

(23:25):
the ultimate gift because it detected the kidney cancer that was killing him.
He got that MRI in 2001.
The machine was really commercialized in the 1970s.
The technology that enabled that machine to be commercialized was
started in the 1940s when my father was born.
So, that's a very sort of personal example of one person's life extended by

(23:50):
this harebrained fundamental research that no one could imagine was going to then
lead to this breakthrough machine that we all sort of take for granted today.
So that's one. Cryptography, which protects us to the extent that our
data is protected on the Internet,
stemmed from decades of research in

(24:12):
pure math with no sense that it might be applicable today.
The COVID mRNA vaccine.
Decades of research in universities before the baton was passed to the pharmaceutical
industry.
I do not want people listening to come away saying that I don't think the private
sector has an incredibly important role to play.

(24:32):
I do, but I think the two go hand in hand.
And I just say one other thing, Liz,
which is that emerging technologies often don't emerge, right?
And so part of the bargain with universities is that a lot of the stuff
we're working on doesn't pan out.
That's all part of the innovation ecosystem.

(24:52):
Most emerging technologies don't emerge.
So we have to be patient about looking at paths that
don't actually become pathways to success.
That's how this business works.

>> Elizabeth Economy (25:05):
I think that's really smart.
So when you went to Washington, can I just ask,
how was it received by the policymakers?
Do you think that you sort of informed and elevated the thinking?
I don't know whether in Congress or in the administration.

>> Amy Zegart (25:22):
So I'd like to say the answer is yes,
but there's a little bit of selection bias, right?
The folks that most wanna meet with us in Washington are those who think this is
important.
So I will say we went with colleagues of mine in science and engineering,
some of whom hadn't done this before, and they were very pleasantly surprised

(25:42):
at the bipartisan support that we got when we went around Congress and
when we went in the executive branch, the nonpartisan support.
So I think we met with Senator Rounds and Senator Booker.
Senator Rounds, the Chair of the AI Caucus, Cory Booker, Stanford alum.
And they were singing from the same songbook.

(26:05):
This is a Paul Revere moment, we need to come together on this.
So I think there's a lot more bipartisan support for these technological issues,
particularly given the competition with China today, than most people might think.

>> Elizabeth Economy (26:21):
When I was thinking about the field of AI in particular,
I just was realizing that just last year, when you look at
the most recent Nobel Prizes that were awarded in both physics and
chemistry, they were both tied to advances in AI, and
three were awarded to US scientists, then one from the UK and one from Canada.

(26:49):
So maybe our system still has some sort of positive elements to it.
Although, I will say that at least one of the scientists was 91 years old.
And so clearly, a product of a more traditional period of development.
But do you see that there's reason for hope here?

>> Amy Zegart (27:10):
Absolutely.
So there's magic in Silicon Valley and in the United States and
how many foreign leaders come to this area and say,
I wanna create a Silicon Valley in my country and it can't happen?
And I think there are key elements to that magic.
One is we wanna be a place where the world's best and

(27:32):
brightest wanna continue to come.
I hope that that mood in Washington changes.
But I think people want to live in freedom, I think fundamentally, and
I think they want to be in a place that rewards hard work.
And historically, the United States has been that, and
I don't think it's too far gone for us to have that back.
I also think we have in research universities this thirst for

(27:57):
exploration and that is its own kind of secret sauce that we have here.
Not for the country necessarily, we do it because we do it, right?
It's not top down driven.
It's not because the Chinese Communist Party is telling us to do it.
We let people do their thing and we're more likely to have creative,

(28:19):
innovative pathways because we do that.
And then of course, you and I are around young people in the university.
Whenever I look at the future and I talk to my students, I can't help but
be inspired.
And especially when we think about tech and how much of our technological
breakthroughs are happening by younger and younger people,
it gives me great hope actually for the future, that it's not too far gone.

(28:43):
We just have to get out of our own way.

>> Elizabeth Economy (28:45):
Yeah.
So I mean, the administration, potentially to its credit, has put out
a number of initiatives that seem
as though they're designed to advance AI education and diffusion.
They have this AI education task force, for example.
I mean, if you look at what this administration is doing in the space,

(29:10):
do you see things that they're doing that suggest that
they're committed in a serious way to advancing sort of
frontier research in the US and diffusion of AI?
I mean, is there a different kind,
maybe different from what the Biden administration was doing?
But do you sense that there is a real drive within the Trump

(29:34):
administration in this area?

>> Amy Zegart (29:36):
I do, and I think it's important to look beyond just the Trump
administration.
We talk about the decline in investment in fundamental research.
We look at the Federal government not supporting universities.
This is a long time coming.
This spans across administrations, Democratic and Republican, for many,
many years.
So our innovation ecosystem has been eroding for a very long time.

(30:00):
That erosion is accelerating under the current administration but
it certainly didn't start with the current administration.
And then you look at sort of the narrative and some of the initiatives coming out of
the Trump administration, there's a lot to be excited about, right?
The framing of AI opportunity, not just AI safety, I think is right.
We need to look at the opportunity, not just the risks.

(30:22):
And the Biden administration was understandably,
really concerned about the risks and started sort of a risk-first approach.
But things have changed and the DeepSeek moment is one of them.
And we need to lean more forward into the opportunity side.
And that started with the Vice President's speech about AI opportunity.
You mentioned the AI executive order and education.

(30:45):
Education is one of the most promising areas to me for
AI, because AI is the ultimate patient tutor, right?
Where a human gets frustrated when a student asks for the fifth time,
I don't get it, how do you do that math problem?
AI will not get mad at you.

>> Elizabeth Economy (31:00):
[LAUGH] It'll never get tired of explaining it, that's true.

>> Amy Zegart (31:02):
And it'll come up with a different way for as long as it takes.
So I think there's tremendous opportunity to enhance what is an abysmal
education record in our country in K-12, if we are smart about adopting AI.
So I'm encouraged by many things that the administration is doing.
I'm discouraged by some things that the administration is doing, but

(31:24):
I think that's often the case.
It's a complicated terrain and
people have legitimately different views about how to navigate it.

>> Elizabeth Economy (31:32):
Yeah. I mean,
I think as long as there's some funding, right?
Some actual funding behind these initiatives, I think that gives me hope.
I think if we're gonna rely on the private
sector to pick up all of what had been some significant government funding for
basic research, I think that's gonna be problematic.
But let me give you a chance then to offer two or

(31:54):
three suggestions to the administration,
if you had the opportunity to design the sort of AI strategy for
the United States, perhaps incorporating some of the talent element of this,
what would be your sort of top two or three recommendations?

>> Amy Zegart (32:12):
This is my queen of the world if I could-

>> Elizabeth Economy (32:15):
Totally,
I do think you are queen of the world.

>> Amy Zegart (32:17):
[LAUGH] I think you're queen of the world.

>> Elizabeth Economy (32:19):
We're on the same page.

>> Amy Zegart (32:20):
We can be more together.

>> Elizabeth Economy (32:22):
Go meet the queen, Queen Amy.
Go to it.

>> Amy Zegart (32:25):
So starting on the talent side,
I think K-12 education is a national security crisis.
Now, lots of people have talked about K-12 for a long time.
I think that crisis is urgent.
We rank 34th in the world in math according to the latest test, and
we're going down.
And if you look at the top performers in math, we have 50% as a percentage

(32:48):
of our population of top performers in math compared to Canada.
We have one-third the percentage of the South Koreans.
If we want to educate the workforce of tomorrow, it's got to be done today.
So, number one, focus on K-12 education.
And I often joke to folks who are in the education space,

(33:08):
the only thing worse than teaching to the test is not teaching to the test [LAUGH].
So we need some standards, we need some metrics, and
we need to measure performance against them.
So that's number one.
Number two, capacity building.
So the two most important enablers for AI and
other areas of scientific innovation are talent and compute power.

(33:33):
And right now, compute is dominated by a handful of companies.
So the example I always give is, last year, Princeton University had to dip into
its endowment to buy 300 advanced Nvidia chips.
Meta, the same year, bought 350,000 of the same chips.
So I'm not saying Princeton has to compete with Meta, but

(33:55):
we need to make national compute a critical infrastructure for
more organizations and universities to be able to do frontier science.

>> Elizabeth Economy (34:06):
Go ahead, sorry.

>> Amy Zegart (34:07):
There's a bill in Congress, a bipartisan bill, to do just that, but
it has stalled.
It was proposed in the Trump administration,
it was supported in the Biden administration.
It has still not become law.
Compute is like the highway system of the 1950s.
It has economic and national security implications that are enormous.
So I would double down on making compute available for

(34:30):
more organizations in the United States.
And the last piece, I would do allies and partners, right?
We have this amazing set of allies and partners.
We can do much more together with our talent, with our compute,
with our data, than we can do alone.
And so I would fast track more programs to harness the data,

(34:54):
talent and compute capabilities of our allies and partners.

>> Elizabeth Economy (34:59):
So that sounds like a recipe for success, but
I can't resist just asking you on the compute side of things.
Number one, does DeepSeek tell us that you can do more with less advanced chips?
Does that open up some opportunities?
And number two, this does sound like an area where a company like Nvidia could

(35:19):
do some public-private partnership with universities, right, to make
some of those advanced chips available to universities at a reduced cost, perhaps.
I don't know, what do you think about those two thoughts?

>> Amy Zegart (35:32):
I would say yes and yes.
So one of the interesting things about DeepSeek,
when you talk to sort of PhD students in computer science, they were so excited.
Why were they so excited?
Exactly the point you raised, so the minimum viable compute to be able to do
really cool research in this area was less than people thought it was.

(35:53):
It reduces that gap between the hyperscalers and
what others on the frontier can do, but that gap ain't nothing.
It's still a pretty big gap.
So that's exciting that the gap has narrowed in terms of being able to
compete on a wider scale.
Public-private partnerships are gonna be really important for these companies and

(36:14):
it's not just Nvidia.
I think companies in general in this moment need to be thinking
much differently.
They shouldn't be just reacting to the moment.
They are shaping the international order.
They are shaping the geopolitical battleground.
And they need to think about being proactive and strategic, not just for

(36:34):
their shareholders, but for the nation.
And so I think what you suggested is one component of that, but
it's not just Nvidia.
I think many companies need to think about how can we contribute in
a way to build scale in our nation to compete against China.

>> Elizabeth Economy (36:52):
Yeah, I will say,
when I was at the Commerce Department in the Biden administration,
Secretary Raimondo was very proactive about reaching out to companies
to get them to work with the administration on various initiatives,
like digital skilling, for example, globally, right?

(37:15):
So to partner in ways that helped to reinforce the positive message about
the United States on the global stage and to help realize US national objectives
on things like supply chains, for example, on critical minerals.
So I think there were many ways in which private companies, in fact, did step up.

(37:35):
But I think what you're suggesting is that they should be developing their own
visions and be more proactive about it, not necessarily wait for the US government
to come to them and say, please partner with us on these initiatives
that help realize our sort of broader national economic and security objectives.

>> Amy Zegart (37:54):
I mean, I think Secretary Raimondo, your former boss,
did a fantastic job at bringing the private sector together.
But we also often leave out the third P, well, actually,
it's not even a P: public-private-academic partnership.
So it's not just industry and government, it's industry, government, and academia.

(38:18):
We can't leave out the university piece of this.
And I think that's where we really need to focus more,
is all three of those legs of the stool need to be supported and come together.

>> Elizabeth Economy (38:29):
Okay, Amy, I can't thank you enough, not only for your sort
of excellent analytical presentation and helping us understand the DeepSeek moment.
But also just for your optimism and your ideas for
what we need to do to sort of move this forward and move together,
I think, really, as a nation forward to advance ourselves in this

(38:52):
really important strategic arena of artificial intelligence.
So thank you for all the work that you're doing.

>> Amy Zegart (39:01):
Thank you for having me.
I'm just trying to keep up with you and all the great work you're doing.

>> Elizabeth Economy (39:05):
[LAUGH] All right, if you enjoyed this podcast and
want to hear more recent discourse and debate on China, I encourage you to
subscribe to China Considered via the Hoover Institution YouTube channel or
podcast platform of your choice.
Next on China Considered,
I interviewed George Washington University Professor David Shambaugh on
his fascinating new book about how China lost the United States.

(39:26):
[MUSIC]