Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to Tech Stuff. I'm Oz Woloshyn. On this podcast,
we cover emerging technologies and how they reflect but also
shape the way we live. Artificial intelligence, robotics, energy, batteries,
space exploration. More often than not, though, the conversation comes
back to the same place: China. It's a topic and
(00:35):
a country that everyone has an opinion about, but where
few have true insight. Today's guest is different. According to
a headline last year in Wired magazine, he has quote
the American who waged a tech war on China. The
article continues, China is racing to unseat the US as
the world's technological superpower, not if Jake Sullivan can help it.
(00:58):
As soon as I read the article, I knew
I wanted to have Jake Sullivan on the show.
He served as National Security Advisor for the duration of
Joe Biden's presidency, and according to my queries on Grok,
Gemini, and ChatGPT, he spent more time with his
Chinese counterparts, including Xi Jinping, than any US national security
advisor since Henry Kissinger. But today, a conversation about waging
(01:22):
a tech war on China, Autonomous weapons on the battlefield
of Ukraine and far beyond, and what comes next for
a man dubbed a future potential president by Hillary Clinton.
Jake Sullivan, welcome to Tech Stuff.
Speaker 2 (01:35):
Thanks for having me.
Speaker 1 (01:36):
I wanted to start by asking you this, which is
you are known as a foreign policy person, right, I mean,
working in the State Department, working on the Iran nuclear
deal during the Obama administration, national Security advisor under Joe Biden,
and yet more and more you seem to be a
tech policy person. And I'm curious you know how that
(01:58):
shift began and when you started to see tech as
such a crucial lens for national security.
Speaker 2 (02:05):
It really began for me after twenty sixteen. I
served as a senior policy advisor on Hillary Clinton's campaign,
and it was over the course of that campaign that
I came to recognize first the degree to which the
intersection of domestic policy and foreign policy was where everything
was happening. And second, at that intersection was technology, both
(02:29):
what we were doing in the United States to invest
at the cutting edge and what was happening in the
US China tech competition. So I spent my years out
of government really digging into a set of technology, economic
and national security issues and when I came in as
National Security Advisor for the first time ever the National
Security Council, we stood up a Directorate on Technology and
(02:52):
National Security so that we could really create an engine
room for the kind of policy work I felt needed
to be done to make sure that the United States
was in the best possible position to make technology work
for us rather than against us.
Speaker 1 (03:06):
Yeah, it's interesting, when you leave office, you know,
where you go for an exit interview. You sat
for the Wired magazine piece. Why there? And
did you like the headline, the American who waged a
tech war against China?
Speaker 2 (03:19):
Great question on the headline. I can't say I love
the headline because I personally you may have noticed, don't
use the phrase tech war.
Speaker 1 (03:25):
No, it would be ill advised for someone
in your position to use the word war, I guess,
when talking about China.
Speaker 2 (03:31):
Yeah, exactly, And I don't think of it in those terms. Actually,
I think of it as on a much more common
sense basis, what do we need to do to promote
the advancement of American technology and what do we need
to do to protect American technology from being used against
us by competitors like the People's Republic of China, and
that was the strategy that we pursued. It was not
(03:52):
a war, but it was hard nosed in the sense
that we felt that for too long we had been
sending our most advanced dual-use technologies to China for
them to turn back against us, and we weren't going
to allow that to happen, particularly when it came to
advanced semiconductors.
Speaker 1 (04:08):
When it comes to China specifically, I mean, what was
your worst fear about China getting an edge and what
was your mission to prevent that from happening.
Speaker 2 (04:19):
Well, Look, I've said since the start of my time,
from when I came in as National Security Advisor, that
China makes no bones about the fact that it seeks
to become the world's leading economic, diplomatic, and technological power.
And my view is that the United States is better off,
(04:40):
and frankly, the world is better off if the United
States sustains its position as the world's leading power, including
the world's leading technology power. That is the way that
we are most likely to be able to ensure that
this technological revolution which you have spoken to so many
different brilliant people about works more for us than it
works against us, and so I worried that China was
(05:03):
going to achieve supremacy or dominance in the key technologies
of the future. And it wasn't just my worry. I think
a lot of people back in twenty twenty one were
essentially saying, you know, the US is going to lose
out to China when it comes to AI and other
areas too. China has too many advantages. The United States
is too disorganized. It's just not going to get its
(05:24):
act together. And I was determined to make sure that
we competed as effectively and vigorously as possible so that
we could sustain the edge. But I also knew it
was going to be a tough competition because China has
a lot of capability to bring to bear in this
area as well as in other areas, some of which
you mentioned at the outset of the podcast, whether it's
robotics or autonomy or you name it. So I can't
say I went to sleep at night with a single
(05:46):
fear other than I felt it was my job to
help position the United States as best as possible in
this really sustained competition with China over the future of
key technologies that are going to define so much of
national security, but also economics and society in the years ahead,
and AI is as central to that as any.
Speaker 1 (06:08):
There is this theory called the Thucydides Trap, which has
been popularized by Graham Allison, who I know is a
colleague at Harvard now and kind of a mentor,
which essentially says that, you know, if you look at
the history of a declining great power and a rising
great power, in, shall we say, the majority of cases,
that will necessarily lead to war. And I think that's
(06:29):
been kind of like a bipartisan framework for thinking about
US and China. Is that a framework that you believe
to be true, that there is about a fifty percent
chance, or an inevitability, of a capital-W war? Like,
how do you apply that framework to your thinking about
China and then, in turn, your policy on tech?
Speaker 2 (06:47):
I think Graham Allison's book is excellent. It deserves to
be read by everybody. I have it here in my
office at Harvard, and not just because he's my colleague
down the hall. But I do not believe that it
is excellent because it is describing an inevitable conflict between
the US and China. That this is some kind of
law of nature. In fact, he points out in the
(07:08):
book that there are cases where a rising power and
an established power didn't go to war. And my view is
that the US and China are going to be in
a sustained competition. It will be intense, it will be challenging,
but it is critical, it is vital that we manage
that competition so that it does not spill over into
(07:32):
conflict or confrontation. And my view is we can do that,
and I actually think the blueprint for that was laid
out over the four years of the Biden administration. We
were able to enhance America's competitive position, take tough actions,
and also engage in sustained diplomacy so that we didn't
move in the direction of war with China, which would
be a catastrophe not just for the US and China,
(07:52):
but for the entire world.
Speaker 1 (07:54):
Let's talk about some of the tough measures. You
mentioned two countries, Holland and Japan. There was this kind of
fascinating secret meeting that you convened with your Dutch and Japanese counterparts.
What were you trying to achieve?
Speaker 2 (08:08):
So I'll be careful in what I say, because having
some discretion in the diplomatic relationships with Japan and the
Netherlands was critical to being able to achieve the kind
of consensus we were able to achieve on semiconductor manufacturing
equipment export controls. So, for your listeners, there's two types
of export controls. There's the controls on very high end chips,
(08:31):
think H one hundred Nvidia chips or the Blackwells,
and then there are controls on the manufacturing equipment used
to make the highest end AI chips and other high
end chips. And that equipment is primarily produced by three countries,
(08:51):
the United States, the Netherlands, and Japan, and so having
all three of us come together around a common understanding
of what components and what machines necessary for the manufacture
of semiconductors would be controlled was essential for such controls
to be effective, and that required persuasion, consultation, really laying
(09:14):
out the challenge as we saw it, and it also
required a deep level of technical detail, which is why
this really was not just about the diplomats or the
national security advisors being in the room. It was about
technical experts from each of the three countries really working
through the reality of this so that it wasn't just
a fly by night effort.
Speaker 1 (09:34):
One policy goal was to prevent China from buying advanced
chips from companies like Nvidia, and the other was to
prevent China from developing the technology to produce them domestically,
which was more what the Japanese-Dutch diplomatic effort was
about. On January twentieth, literally the day you left office, DeepSeek,
(09:56):
the Chinese AI company, announces a new reasoning model
that performs on par with the best American models and
was able to be developed in spite of the export controls. Now,
some people claim it's because they actually were
able to develop such sophisticated technology that they didn't need
as many chips. Others say they just got around the
(10:17):
export controls. I don't know which of those
you think is more likely to be true.
But either way, what was your reaction to this moment?
I mean, someone said this was basically the moment that
everything you were working on was designed to prevent.
Speaker 2 (10:29):
Well, keep in mind, this is a I'm glad you
used the word moment because it was a moment. It
was a moment in time in an ongoing competition, and
of course since then, OpenAI has come out with o3.
DeepSeek has come out with yet another version of
its R1 model, and we'll see more models coming
out as we go. For me, DeepSeek sort of
showed me, number one, the power of a concerted PR
(10:52):
campaign by China to effectively say resistance is futile. These
export controls have failed. Give up America. I would just
point out that it was the number one issue they
raised with me in every meeting, which suggests to me
they didn't think the export controls were totally useless.
Speaker 1 (11:08):
The number one issue is why you try to prevent
us from buying chips?
Speaker 2 (11:11):
Right? Why have you imposed these semiconductor export controls on us?
And you know, I spent a lot of time explaining
our theory of small yard high fence, and what we
see as the difference between a national security logic to
export controls and some kind of economic or technology blockade,
which I do not believe that we are engaged in
(11:31):
with China.
Speaker 1 (11:32):
They've responded, just to your small yard, high fence metaphor,
with big yard, iron curtain, right?
Speaker 2 (11:37):
That is what they said. And I said, well,
we see it very differently. I see it as a
small yard and a high fence. And part of the
reason I know it's small is I know in the
meetings that I'm conducting how much we're keeping out of it,
how much continuing technology trade there is between the US
and China. But the other thing about DeepSeek
that I think is really important for people to recognize
(11:59):
is the big breakthrough was really an OpenAI breakthrough
that then DeepSeek more or less replicated and then
improved upon. And I don't actually know exactly, I'm not
inside DeepSeek, so I can't say, but it seems
on the balance of the evidence that the DeepSeek
story is the story that involves them having access to
(12:20):
a range of Western chips, some of which they got
before the export controls ever went into place, and some
they got between the first set in twenty two and
the second set in twenty three, when Nvidia basically
designed around the controls with what they call the H eight hundred.
So there was an H one hundred, which was at
(12:41):
one point the most elite chip. Nvidia basically reduced
the interconnect speed to get around the control and created the
H eight hundred for the Chinese market. That got fixed
in twenty three. And so I think the right way
to think about the export control story is it started
in twenty two. The first draft of those export controls
(13:02):
in twenty two we learned lessons from because this was
a new undertaking, and one of those lessons involved how
to make sure that there couldn't be this easy design around,
this easy workaround, and that's why we updated them in
twenty three and updated them again in twenty four. So
these are not going to be one hundred percent foolproof.
(13:23):
What they are designed to do is, to the maximum
extent possible, make sure that the highest end national security
related technology, these very high end AI chips are not
going to China to be used against America or its allies.
And you know, I can't guarantee zero leakage. It's
a dynamic scenario where China and others are working
(13:44):
to get around them. We just have to keep at
it and continue to learn and continue to update.
Speaker 1 (13:49):
So just to play it back to you, your view is
far from controls on exports of advanced chips being futile.
And even if DeepSeek were able to distill knowledge
from OpenAI's models, there is value in these
export controls. And maybe DeepSeek's success was because the
(14:11):
export controls weren't effective until, let's say, twenty three, and
really your legacy will be making sure the next DeepSeek
moment doesn't happen.
Speaker 2 (14:20):
Yeah, I mean, I think realistically time will tell. What
I would just argue is I believe it's already had
an impact, because I think it has placed quite a
limit on China's access to an incredibly important resource for
frontier AI development, compute, and that we can expect that
(14:40):
impact to be seen over the course of the rest
of twenty five, twenty six, twenty seven. You know, if
you look at public statements from DeepSeek's CEO, one
of the things he talks about is access to compute
being a challenge for him, something the Chinese government doesn't
like to acknowledge, but it's something he's sort of very
(15:01):
publicly talked about. China's effort to have its own
chip has not met with the kind of success that
the Chinese propagandists want to suggest it has, either in
terms of their performance or in terms of the scale
of what they're capable of producing. They cannot produce in
a year anything remotely resembling what, for example, Nvidia
can produce. Not to mention the Google TPUs or Trainium
(15:24):
or other American design silicon. So I think those core
observations are intact sitting here today in June of twenty
twenty five, and now let's see how things play out.
Speaker 1 (15:36):
I want to talk about Nvidia for a moment. I mean, obviously,
Jensen Huang, the CEO of Nvidia, has been out
there loudly and publicly castigating, in effect, your policy and
also saying, you know, necessity is the mother of invention,
and in fact US export controls, and we've heard this
argument before, spurred this great development in Chinese domestic
(16:00):
technology. How do you respond?
Speaker 2 (16:03):
Well, first, it's an interesting statement, the suggestion that
if DeepSeek had more Nvidia chips, it would not
have invented R1. I mean, that is, to me,
a kind of bizarre statement. But the argument that I
frequently encounter in this area is, aha, Look, China's motivated
(16:24):
when it comes to AI and motivated when it comes
to semiconductor production. Because you are placing limits on the
chips and the manufacturing equipment they can access, You've motivated them.
And this is really Jensen's argument, and I think that
this misses just the facts and the sequence of what's
unfolded over the last decade. It was China who came
(16:44):
out years before these export controls were ever in place
and announced to the world that they were going to
be the world leader in ai by twenty thirty. And
that one of the areas that Made in China twenty
twenty five, by the way, a policy that was put
in place more than a decade ago, was going to
focus on was chips. And so the semiconductor export controls
(17:05):
did not create this big push by China. The semiconductor
export controls responded to this big push by China. And
so I think that this narrative of necessity as the
mother of invention kind of gets the sequence exactly backwards.
It suggests that the big Chinese push all happened post
export controls, when in fact it long predated the export controls,
(17:26):
and the export controls were in part motivated by saying, well,
wait a second, China's kind of come out to the
world and said, this is what we want to do,
we're telling all of you. And us saying, well, we're not
going to make it easier for you, because, you know,
we want to sustain our advantages.
Speaker 1 (17:40):
Just to play devil's advocate, though. I mean, obviously, any
economy has a demand side and a supply side, right,
and clearly you know you're right. The demand side from
the Chinese Communist Party and the government was there get
to ANI supremacy by twenty thirty on shore chip manufacturing.
But there's a lot of demand site mandates from the
(18:00):
Chinese Communist Party which don't translate into enormous successes. The
supply side issue, which was US policy is saying well,
you can't have this from US, may have ultimately been
a much more powerful and motivating signal to Chinese technology
companies than the dictates of their own government.
Speaker 2 (18:18):
Well, I would make a couple points in response to that.
The first is that in a bunch of the other
key areas that China also identified in Made in China twenty
twenty five, you've seen huge progress in China, huge progress, right.
So somehow the demand and supply sides lined up when
it came to things like robotics or electric vehicles, or
(18:39):
you go down the list of what they identified as
their sectors. It's not like AI stands out as something
that all of a sudden blipped off the map because
of our export controls. So I find it a little
hard to square with what has been a very distinctive
presidentially dictated policy, of which there are not many
(19:01):
that look like Made in China twenty twenty five, or
where they have put as much energy as they have
in this area.
Speaker 1 (19:07):
I know that your Chinese counterparts often liked to ask
you this kind of rhetorical question, where's the line between
economic policy and national security? And I'm curious,
you know, did you and President Biden look with envy
at President Xi's ability to onboard massive new energy onto
the grid, to dictate the decision making at the top
(19:30):
Chinese tech firms in a way that the US government
and the tech firms are increasingly or often in tension? I mean,
was there something about the Chinese model in terms of
preparing for this new industrial and AI revolution that you
wish you could have borrowed?
Speaker 2 (19:45):
Well, look, anytime you're working at the White House and
you're dealing with bureaucracy, red tape regulation, the thought is
never far from your mind. Hey, if I was just
all powerful and President Biden was all powerful, we could
just not have to waste time with any of this.
But then you pause and you say, hey, wait a second, Actually,
(20:06):
democracy is a pretty good form of government, one. And
two, the American technology model, which is messier, which is
more decentralized, which is in many ways more maddening, also
really works. And it's a good thing to bet on.
And that's why I think we're in such a disturbing
(20:26):
moment right now, because, look, some of the advantages
the US has: one is the ability to attract talent
from all over the world. Another is this ecosystem of
basic research funding supplied by the government, research universities, and
the private sector. And two of those three pillars are
being knocked out by this administration. It strikes me that
(20:47):
the most recent announcement saying basically, we're going to try
to really dramatically reduce the number of Chinese grad students
and undergraduate students and researchers in the United States, all
of these are self harming moves because all of them
take away from this unique model the US has used
to build and sustain an innovation edge over time. So
(21:08):
the Chinese have their way of doing it. We have
to look at that and say, that's a formidable competitor.
But it is precisely that observation that we are dealing
with the formidable competitor. That motivated us to say, what
do we need to do on the promote side, what
do we need to do to push the boundaries, and
then what do we need to do on the protect
side to make sure that our most advanced technologies aren't
(21:30):
being used against us? So it was that sense
of real understanding that this was going to be a
hard race in many of these different technology areas that
motivated us to take the policy steps that we took.
Speaker 1 (21:46):
After the break, Jake Sullivan on how the Biden administration
was preparing for the possibility of AGI, a system
that is superior to humans in effectively all cognitive tasks,
and also Jake talks about the role of autonomous weapons
in current conflicts from Ukraine to Gaza. I want to
(22:12):
zoom out a bit, Jake. One of the most interesting
conversations I've seen this year was between Ezra Klein, dare
I say it, and Ben Buchanan, the White House Special
Advisor on AI for the Biden administration. Essentially,
they had a conversation where Ben Buchanan said it was
(22:32):
the belief of the Biden administration that artificial general intelligence
is going to come within the next four years, i e.
Not just AI that helps humans do tasks, but a
system that is superior to humans in effectively all cognitive tasks.
Was that your view, from those conversations you were part of?
Speaker 2 (22:53):
I would amend it slightly to say that a premise
of our approach, both on the national security side and
writ large in the process that Ben and Bruce
Reed led, was that that was a distinct possibility, and
therefore we had to plan and prepare for it. Not
a bold prediction that it would certainly arrive, but that
(23:13):
it was and remains a distinct possibility.
Speaker 1 (23:16):
Where did you fall in the debate about how probable this is?
Speaker 2 (23:20):
My own personal view has always been I'm not
sure. And the reason I'm definitely not sure about this
one is that incredibly smart people have incredibly different timetables
for when AGI will arrive. And what I have to
(23:41):
do as National Security advisor is take that spectrum of
opinion and say, Okay, is there a sufficient degree of
credibility we would assign to the proposition that it could
come in the next four years. The answer is yes,
then we need to operate against that assumption because it
means we don't have much time to get prepared for
(24:03):
what that will mean from a national security perspective, and
economic perspective, a social perspective, you know, the impact on
every facet of human life. And so we were operating
under the premise that this was a distinct possibility, but
not that it was a certainty, because we couldn't be certain.
(24:23):
And I remain fascinated by the AGI debate today because
a lot of the content I consume in the debate
over AI is about these really quite wildly different perceptions
of both where we are and when the next breakthroughs
are going to come and how. And I find that
just so interesting. It reminds me a little bit of
(24:46):
the very intense debates over the future of the Chinese economy,
where you can find people saying it's going to be
an unstoppable juggernaut, you can find people who are saying,
basically it's hobbled by intractable problems, and everywhere
in between. Again, in that area, I get asked,
where did you fall in that debate over the
Chinese economy? And similarly, I didn't take a fixed,
(25:08):
determined view because I had to prepare for a range
of different contingencies.
Speaker 1 (25:13):
I'm curious what probability, as a percentage, you would assign.
How on Earth do you respond to this transformative hypothetical
in terms of creating policy?
Speaker 2 (25:23):
Well, in a number of ways. I mean, for starters,
a lot of the work around alignment, safety, and
security was work that was really stimulated by our administration
in concert and coordination with countries around the world. You know,
the Bletchley Park meeting in the UK, followed by successive summits. Obviously,
this administration takes a different view on the question of
(25:45):
safety and alignment than we did.
Speaker 1 (25:47):
Vance was in Europe basically saying that guardrails are
off, innovate away.
Speaker 2 (25:51):
Yeah, exactly exactly. The second thing was actually how we
would deal with China directly on the issue of AI risk.
So not only what's it going to take for the
US to remain in the lead at the frontier, but also,
you know, how do we have a real dialogue for
a joint interest in managing risks to all of humanity,
including all Americans and all Chinese. And President Biden and
(26:15):
President Xi agreed to launch that dialogue when they met
at the summit in San Francisco in twenty twenty three.
We had a first session of it in twenty four.
I don't know if that's going to continue in this
administration or not.
Speaker 1 (26:27):
What did they talk about? Was the framework to step
beyond national borders and consider threats to humanity, as the
presidents of the two most powerful countries in the world?
And what were those conversations like?
Speaker 2 (26:40):
Well, first, I don't want to overstate how far the
first one got. These kinds of diplomatic engagements, especially between
two countries that are at once having dialogue and competing,
are always a bit tentative, and both sides come with
their cards a little close to their vest. But that
meeting did begin to expose a common understanding of
(27:05):
some of the big risks, including the risk at the
convergence of biology or biotech and artificial intelligence, the risk
of misalignment, misaligned AI causing all manner of harm, and
the risk of proliferation, AI getting into the hands of
bad actors who would choose to threaten both the US
(27:25):
and China or other nation states for that matter, or
just seek generally to destabilize. So that first session, I
would say, was a warm up and it still remains
a task for the US and China to have a
much deeper, sustained dialogue on this, something that I very
much support, even as I support the United States continuing
to do the work to stay in the lead. I
(27:47):
think that is not the essence of JD Vance's message
in Paris earlier this year, which was: we're in a
race, so, you know, nothing that will slow us down,
no regulation of any kind. I think we need to race
as fast as we can, but we also do need
guard rails to ensure safety and alignment. I guess the
(28:10):
word safety is a bit out of vogue with this
administration, so call it security and alignment. We do need
those. Now, at the same time, we have to pay
attention to the fact that if we're pursuing a bunch
of guardrails, maybe alongside other like minded states, and China's
choosing the no guardrail approach, you know, that could give
them an advantage in the race and the competition as
(28:32):
we go forward. So one of the things I draw
upon as a partial analogy, because it is completely imperfect,
is the fact that we were, you know, simultaneously building
up our nuclear arsenals with the Soviet Union and also
beginning to think about nonproliferation, and the US and
(28:53):
the Soviets worked together on the Non Proliferation Treaty and
on arms control, and over the series of agreements from the
seventies onward, we came to understandings about arms control and
even arms reduction, even as we were developing more sophisticated weapons,
more sophisticated targeting, more sophisticated intelligence along the way. So
(29:15):
I think there are some lessons to be learned from that,
But basically the answer is that we have to essentially
feel our way across the river on the stones, and
there isn't like a map or a guidebook that will
tell you how to strike this balance. That is a
matter that is more art than science, that requires steady
(29:35):
and determined leadership from the President and the White House,
and also requires a lot of technical expertise being brought
in house. I think the current administration's attitude is just
let it rip, and that concerns me, and I think
it puts more impetus, frankly, on the private sector to
begin thinking through how it deals with the guardrails question,
(29:56):
because it cannot just punt the question to the White House.
Since the White House has kind of said, we're not
going to deal with that.
Speaker 1 (30:00):
Pivoting away from AGI toward something which
is frankly no less terrifying: autonomous weapons systems. A few
days before we recorded this interview, Russia experienced, you know,
what some are calling its Pearl Harbor: a fleet of
Ukrainian drones operating way inside Russian borders, knocking out some
(30:24):
of their nuclear aircraft. Were you surprised by this and
how did you contend with the theory of swarming autonomous
weapons becoming a reality within your term?
Speaker 2 (30:39):
Well, to take the second question first, I mean there's
the nation state challenge and then there's the non state
actor challenge. On the nation state challenge, you know, the
Pentagon has quite sophisticated, quite adaptable planning processes for new
weapons systems, new types of technology, and new tactics coming online,
and they do a tremendous amount of war gaming and
(31:02):
testing around that. One of the things that I was
very focused on as National Security Advisor was ensuring that
our entire national security enterprise, the Defense Department, the intelligence community,
and then on the financial sanctions side, the Treasury Department,
or the export control side of the Commerce Department, was
thinking about applications of both offense and defense when it
(31:24):
comes to AI. So towards the end of twenty twenty four,
President Biden issued a National Security Memorandum on artificial intelligence.
Part of that was around how do we ensure we're
capable of defending against this type of thing: a drone swarm,
AI enabled autonomous weapons of various kinds. And that worries
(31:48):
me greatly, But I feel like, Okay, we have a
lane for that. We can work to be in a
position to deter or defend against that or counteract that.
The thing that keeps me up at night is the
democratization of lethal technology. It's the extent to which an individual,
a group, a highly motivated organization, whether criminal or terrorists
(32:11):
or otherwise, can get its hands on highly precise and
capable and lethal systems that they can pinpoint target very
far away. And what the Ukraine case shows us is,
with some degree of resourcefulness and inventiveness, very far away
can really mean very far away from one's borders, including
(32:32):
in quite protected parts of a big country like Russia.
That's a pretty scary thought, even for the homeland of
the United States, and that's something that if I were
sitting as National Security advisor today, I'd be diving deep
into the Ukraine case to say, what exactly does this
tell us about the vulnerability and risk here in the
(32:54):
United States. Was I surprised by what the Ukrainians did? Yes
and no. No, I'm not surprised that they're constantly adaptive
and capable, and frankly, I'm quite proud of the role
that we played in helping to fund and seed their
drone program over the years. But they deserve the credit
for really taking that drone program to an incredible level
(33:15):
and then coming up with an operation as sophisticated and
complex as this. So yes, like the actual iteration was
surprising in a kind of you know, holy cow, I'm
really impressed by that way. But the fact that they
are doing this kind of thing is just a credit
to their to their bravery and skill.
Speaker 1 (33:35):
There are reports out of Gaza of autonomous weapons being used
by Israel making their own kill decisions, including kill decisions
that result in civilian deaths. Do you know that to be true?
Speaker 2 (33:57):
I do not know that to be true. The Israeli Defense Forces indicated
in conversations with their counterparts in the Pentagon and the
State Department that they have a human in the decision-making
loop when strikes are taken. So I can't speak
to the reports which I've also seen, and I don't
(34:18):
know about every individual case, but that's the communication that
occurred between the US and the IDF.
Speaker 1:
When I talk to people around the world, there are two
things that come up. One is the withdrawal from Afghanistan,
and the other is the failure to constrain Israel's response
to the Hamas attacks on October seventh in terms of
(34:40):
civilian casualties in Gaza. How much did that frustrate your
ability to bring together a coalition around what I think
your mission was and is, which is that democratic values infuse
(35:02):
tech leadership?
Speaker 2 (35:04):
Well, I think about these cases quite differently, you know,
sitting here today in twenty twenty five, I think the
United States is much better off that we're not entering
our twenty fifth year or more in Afghanistan sending American
men and women to fight and die there. So I
think President Biden got the big call right, and the
withdrawal itself was challenging and it was tragic. But when
(35:25):
you end a war after twenty years, with all of
the decisions and the pathologies that have piled up, it is
not going to be easy. It was never going to
be easy, and anyone who would suggest otherwise, I think,
does not have credibility. Now, you know, I would do
things differently, and I've said this repeatedly publicly in terms
of that actual
(35:46):
drawdown, and we learned a lot of lessons from it.
But fundamentally it is a good thing the United States
is not in Afghanistan now. When it comes to Gaza,
it's just an absolute damn tragedy in every respect. Just
the gut wrenching images to this day, innocent people dying,
innocent people going without food, the tragedy going back to
(36:11):
October seventh itself, the largest massacre of
Jews since the Holocaust, the tragedy of the hostages and
their families. All of these are just awful. And I
spent from October seventh of twenty twenty three to January twentieth
of twenty twenty five living with the burden that we
(36:33):
couldn't stop it until the very end when we had
a cease fire and hostage deal in place as President
Biden left office. And I wrestle with that. I wrestle
with that every day.
Speaker 1 (36:44):
Do you ever ask yourself whether there was more the
administration could have done to prevent some of that humanitarian tragedy?
Speaker 2 (36:50):
Look, that's a question I will keep asking myself and
keep asking others. You know, the main argument people make is,
you know, just cut off weapons to Israel. And I
think one thing we were very much contending with over
the course of twenty twenty four is Israel wasn't just
facing Hamas; they were facing Hezbollah, the Syrian militias, the
(37:10):
Iraqi militias, the Houthis, and even Iran itself. So they
were being attacked on multiple fronts, and it's hard
to walk away from an ally in a circumstance like that.
But you know, I spent a lot of time personally,
day in day out in my office working on the
issue of humanitarian assistance and getting food and medicine into Gaza,
and we didn't get enough in. But I think we
(37:32):
did get a significant quantity in, just not enough for what
people needed.
Speaker 1 (37:36):
I mean, the real question is how much moral authority,
the United States' moral authority, is required to exercise the
kind of soft power that is critical to prevailing in
the tech war, the tech competition, with China.
Speaker 2 (37:56):
Look, I think that our ability to work closely with
like minded democracies who share our values, who have a
vision for the world that is positive, that is consistent
with the vision we have for ourselves as a country,
this is a critical part of the long term competition
with China. And I actually believe that one of the
(38:19):
things that has been painful to watch is the extent
to which the President Trump, in approaching the competition to China,
has chosen basically to go to war with all of
our allies at the same time, rather than try to
work with them to try to come up with a
common and united strategy for, for lack of a better term,
the free world to prevail in this competition.
Speaker 1 (38:42):
You know, obviously you were asked recently by Politico on stage:
were you surprised by President Biden's debate performance? What
did you know? You said yes, I was surprised, and
no, he was always sharp when I interacted with him.
So I don't want to ask you to rehearse that again.
But I guess, as somebody who's been a kind of
leader in the Democratic Party for, you know, fifteen-plus years,
(39:05):
how does the party get over this who-knew-what-when
series of questions?
Speaker 2 (39:12):
Look, obviously, I've been involved in political campaigns in the past,
so you know I have personal opinions on lots of
things about politics. But in the Biden administration, I served
as National Security Advisor and my job was to focus
on the national security issues. So I haven't weighed in
(39:32):
on the big political to-ing and fro-ing around all this.
All I've been able to do is relate my experience
with the president in the Oval Office, in the Situation Room,
and that's where I will continue to be in this
conversation. I'll let other people hash out where the party goes
from here.
Speaker 1 (39:51):
Just to close, kind of coming back to the kind
of widest sweep of technology and history and the role
of the US you've talked, I think about how tech
history has had these two waves to date, the first
one being the onset of the Internet and the hope it
carried of democratization, and the second one being how some repressive
(40:13):
governments were able to harness this technology for spying, for harassment,
for repression. What do you hope the third wave will
be, and how do we get there?
Speaker 2 (40:27):
Well, I hope the third wave will avoid, to my
view, a certain blind optimism of the first wave,
where we thought, hey, this is so great, it's going
to mean that freedom and democracy reign everywhere, but also
avoid a sense of doom and dread from the second
wave that you know, we're all screwed. I think that
(40:50):
the third wave, if it goes right, will mean that
we are able to harness the most exciting opportunities of
artificial intelligence for human health, for human well-being,
you know, for advancing the capacity of people to communicate
(41:11):
better as opposed to worse, and that we will mitigate
the downside risks. We're not going to eliminate them. We're
going to have to live with them. They're going to
be real and acute, but that we mitigate them, and
that the opportunities end up creating a circumstance where AI
is working much more for us than against us. Can
we achieve that? I don't know. I think this is
going to be a close-run thing, because I'm
(41:31):
definitely not a doomer, but I am certainly nervous about
whether we have the tools and the capacity and the
foresight right now to build the kind of guardrails necessary
that will both allow AI to flourish for the positive
but also you know, guard against the downside. I don't
(41:52):
think that we're right now on track, and I think
that if this administration isn't going to step up to
take leadership in this, it's going to require others, many
outside of government, to take leadership in this, because it
is a project that we cannot dally on given the
possibility that this is coming at us very very fast.
Speaker 1 (42:17):
Jake Sullivan, thank you so much for joining Tech Stuff today.
Speaker 2 (42:19):
Thanks for having me.
Speaker 1 (42:39):
For Tech Stuff, I'm Oz Woloshyn. This episode was produced
by Eliza Dennis and Adriana Tapia. It was executive produced
by me, Karah Preiss, and Kate Osborne for Kaleidoscope and
Katrina Norvell for iHeart Podcasts. Jack Insley mixed this episode and
Kyle Murdoch wrote our theme song. Join us on Friday
(42:59):
for The Week in Tech, when we'll run through the tech
headlines you may have missed, and please rate, review, and
reach out to us at techstuffpodcast at gmail
dot com.