Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sheena Chestnut Greitens (00:04):
Welcome to Horns of a Dilemma, the podcast of the Texas National Security Review.
I'm Sheena Chestnut Greitens, editor-in-chief of TNSR, and I'm here with Dr. Ryan Vest, our executive editor.
We are pleased to have joining us today national security expert Herb Lin, author of "Artificial Intelligence and Nuclear Weapons: A Commonsense Approach to Understanding Costs and Benefits." That article appeared in Volume 8, Issue
(00:28):
3 of the Texas National Security Review.
Herb is a senior research scholar and research fellow at Stanford University with interests at the intersection of national security and emerging technology.
He's also the director of the Stanford Emerging Technology Review.
Herb, welcome to Horns of a Dilemma.
It's great to have you on the show.
Herb Lin (00:46):
I am delighted to be here.
Thanks so much for having me.
Ryan Vest (00:48):
Herb, this article comes
at a moment of growing concern about
AI and national security and renewed global anxiety about nuclear use.
You begin the article by noting that the skepticism in much of the commentary on the application of AI to the nuclear enterprise isn't really the whole picture.
I just wanted to ask, or start us off talking about, what was your goal in writing this piece and how are you trying to reframe the conversation?
Herb Lin (01:11):
Well, okay, so mostly the debate over nuclear weapons, and by extension applying AI to nuclear weapons, focuses on the following idea.
Hey, here's a great idea.
Let's give ChatGPT the launch codes.
I'm looking at you on video.
Both of you are smiling about that.
You both think that's a ridiculous idea.
(01:32):
Well, fortunately, most of the military, in fact, all of the military thinks that's a stupid idea, right?
In fact, there's nobody that's seriously thinking about doing that.
Nobody.
So to the extent that the debate focuses on that question, it's a non-issue.
Nobody wants to do that.
Nobody is saying that that's a good thing to do.
And that's where the center of gravity of the attention of AI
(01:56):
and nuclear weapons is directed.
So let me give you a different idea.
You're typing up a memo for somebody on the nuclear targeting staff.
So, you're clearly working on a command and control function (nuclear command and control) and you use Microsoft Word to do it, and then you run spell check on it.
You have now used AI to perform a nuclear command and control function.
(02:20):
Nobody's gonna argue with you about that.
That's an entirely uncontroversial use of AI.
No person in their right mind would object to it.
It's clearly a benign application of AI to nuclear command and control.
So between the stuff that's clearly ridiculously stupid, where "don't ever do AI there," and where it's completely uncontroversial, doing spell check, which
(02:43):
is AI, there's a lot of stuff in between.
And the intellectual task is to understand what is there in between those two different kinds of AI application, and see whether or not, in any given instance, applying AI makes any sense.
Furthermore, one of the things that the article points out
(03:04):
is that there are even instances in which AI can help enhance human control over nuclear weapons.
It's not just a bad thing or benign and harmless.
Sometimes it actually helps to maintain human control.
So there's a whole range of applications that should be relatively uncontroversial, and I wanna focus attention on them too.
Sheena Chestnut Greitens (03:24):
So I think
I'm beginning to understand why you
subtitled the article "A Common Sense Approach to Understanding Costs and Benefits." I wanted to sort of follow up a little bit on this idea of a common sense approach and ask you to elaborate on what misunderstandings about AI in the current conversation you think are most dangerous or most
(03:46):
misleading as it relates to this area of thinking about AI and nuclear weapons.
Herb Lin (03:51):
Well, a lot of people who
think about AI at the lay level, they think about Terminator and Skynet and so on, as this thing that will just run amuck, out of control.
And, is it possible to design an AI system to do that?
Sure.
But as the late Dick Garwin once said, "we could be that stupid,
(04:12):
but we don't have to be." There's no requirement to do dumb things.
And are there ways in which AI could pose a catastrophic risk for nuclear weapons?
You bet there are, no question about it.
But let's not do those things.
And there is a sense in which it really is as simple as that.
And the question is, how do you want to be thinking about AI in nuclear
(04:33):
weapons, and where can it help enhance the mission and where shouldn't it?
Where shouldn't you be using it?
The article also makes what I think is an interesting point, that even if AI doesn't touch nuclear weapons at all, ever, nuclear risk can still be affected by the use of AI in other domains.
So, if you use, for example, AI in ballistic missile defense, or in
(04:58):
anti-satellite warfare, or in conventional warfare, the consequences of bad decisions, or even good decisions, by AI there might have implications for whether or not you use nuclear forces.
Which is why I say that it's important to consider AI in the broad context, not just as applied to nuclear weapons.
So AI risk can come from many different places, and benefits can
(05:22):
come from many different places too.
And my job in the article was to try to unpack some of that, to make the debate a more nuanced, and therefore a more relevant, debate.
Ryan Vest:
Early in the piece, you
emphasized that the risks and benefits
of using AI can't be separated from a country's broader nuclear posture.
From a framing perspective for policymakers, can you unpack a little
(05:43):
bit more what that means and how understanding nuclear posture helps us grasp the full scope of where AI might actually enter the nuclear enterprise?
Herb Lin:
Sure.
So for example, forget about AI for the moment.
Listeners know that the US depends on a triad of nuclear forces.
One of those forces in the US is the silo-based ICBMs.
(06:03):
And as you know, the US has always retained the option of doing a launch on warning as a way of protecting the ICBMs, so that an adversary could not launch a first strike on them with confidence that we would not be able to launch them.
Launch on warning itself is not a part of doctrine, but it is a part of doctrine to have that option.
So, of course we all know that this introduces the risk of a false launch.
(06:26):
That is, if your warning turns out to be wrong, you might launch your ICBMs towards the other guy, and there was no attack.
Instead of responding to an attack, now you've started World War III.
This is a bad deal for everybody, right?
So, that's the concern about launch on warning.
Now people say, "Well, if you put AI in charge, now you're going to introduce
(06:47):
more possible mistakes." And so maybe an AI system will do a launch on warning, or will be more likely to do a launch on warning, or will advise the president, or even push the button itself.
You say, "Nobody wants to do that," but it'll tell the president, "You have to launch." That's sort of the extreme version of the concern.
It's not an unreasonable concern.
(07:07):
But that's taking US nuclear posture as a fixed constraint.
It doesn't have to be a fixed constraint.
There are many people who advocate getting rid of the ICBMs, respectable people.
I don't wanna take sides on that debate, but let's say you decided that we don't
(07:29):
need the land-based leg very much anyway, and let's go to a dyad: bombers and SLBMs.
Let's assume we did that.
In this situation, there's clearly no risk of a launch on warning.
You don't need to launch the ICBMs to prevent them from being destroyed, because there are no ICBMs.
(07:49):
So if I put in AI now and give it some advisory capacity to advise under the circumstances of a warning of attack, which we would still have, the risks are completely different now.
It can't launch ICBMs on warning because there are no ICBMs.
And clearly the risk of AI going rogue in that situation, doing the wrong thing
(08:11):
in that situation, is much less than in the situation in which you do have ICBMs.
Anybody would agree with that.
So the point is that you can adjust for some of the risk, you can change the risk profile entirely, by adjusting your posture.
That was the point, that it's not a decision that's separate from other matters.
Sheena Chestnut Greitens (08:32):
So lemme go into the meat of the article a little bit. At one point, before you start applying the framework that we've started to discuss here, you actually lay out five points that you think are really important for people to understand about machine learning.
And I wondered if, before we move into more discussion of the application, if you could just walk us through what are those five points and
(08:53):
why it's important for people to understand those parts of machine learning when we're thinking about AI.
Herb Lin (08:59):
First is that humans
remain the most essential element
of nuclear command and control.
I hope I don't have to defend that point.
That's a softball, right?
That you want humans involved in this.
It's not entirely irrational to say, well, there are some humans I don't really want to be in charge of nuclear weapons.
(09:21):
There's at least part of the debate over sole authority that gets into that.
The second was, we addressed this question about thinking about AI in the context of a nation's nuclear posture.
So that includes the doctrine and the force structure and so on.
Do you have ICBMs or not?
That clearly changes the risks and benefits.
Third, I made the comment that international agreements to limit
(09:43):
the use of AI in the nuclear weapons context are unlikely to be achieved.
Now, the reason I say that is, I'm not happy about it, but essentially AI is software and there's no way you can verify software.
There's no way that we will know whether or not the other guy is running an AI model on their system, and besides, what counts as AI anyway?
(10:05):
So, I think that the verification issues, the definitional issues, the trust issues there, I think are all daunting.
I think you'll never come to any agreement on that point.
Number four is the idea that you can, in fact, mitigate risks from AI by judiciously choosing your own AI uses.
(10:25):
We don't have to do dumb things.
Don't do dumb things with AI.
And then we can get into an argument about what's dumb and what's not.
Sheena Chestnut Greitens (10:34):
Let me ask a couple of follow-up questions about that.
So one of the things that I was really struck by in your piece is that you argue that current national commitments to keep humans in control don't go far enough.
And so I was thinking about the agreement, which is a sentence or two, that was announced between the United States and China, and some of the other discussions that I've seen in international fora about this.
(10:57):
So what, for you, would it take for human control to be meaningful?
And so where are the biggest gaps that current policies should try to close?
Herb Lin (11:07):
Well, my complaint was at an earlier level, which is that to say humans are in control, I can imagine ways in which that is a formally true statement and yet is pretty meaningless.
The example that I have is that, well, here's something that the agreement
says we're not gonna do (11:23):
we're not
gonna give ChatGPT the launch codes.
Fine.
But let's say you have the president there, or your decision maker, the person at the apex of the nuclear command and control system, and the president has to issue the launch order.
But let's say that president is only surrounded by machines that give
(11:45):
the president input, and that all of those machines are AI machines.
They're all AI-driven.
There's no human in any of them.
I don't know about you, but that doesn't make me feel a whole lot more confident.
That's basically equivalent.
I think you would agree that that's almost, not quite, but something like a circumvention of the spirit of humans in control.
(12:08):
That doesn't make you feel better, right?
That's still a bad thing, although technically, humans are still in control.
The reason that I want to use a phrase like meaningful or appropriate human control is that as soon as you hear those words, you say, "Well, what's meaningful? What does that mean?" It stimulates an argument.
Now we get into an argument about it, and that argument is a good argument to have.
(12:33):
Maybe you think it means that you have to have three real human beings and so on, and they have to be next to the president physically.
We'll get into an argument, get into a discussion about that, and that discussion is really meaningful.
That's where wisdom will come from.
But if you just formally satisfy a requirement like "humans have to be in the loop," I'm not enthusiastic about that, precisely because it can be circumvented so easily.
(12:55):
So that's why I think that humans in the loop doesn't go far enough.
Ryan Vest (12:59):
So throughout this piece,
you talk a lot about a spectrum of ways that humans, or AI, should be involved in this enterprise, some of which are critical, some of which are mundane.
I was just wondering if we could talk a little bit more, or dig a little bit deeper, into the benefits versus the risks of this concept.
What are some of the cases where AI integration might be beneficial enough to
(13:20):
outweigh any risks that we might incur?
Herb Lin (13:23):
I have a set of criteria, in
which I say that a deployment of AI in a
nuclear context is less risky if one or more of the following conditions are true.
And the first one is that the stakes of the application are low, and the
consequences of a mistake are minimal.
An example of that could be predictive maintenance.
Predictive maintenance is an AI-based technique to predict
(13:44):
when a part is about to fail, and you don't wait until it fails.
You replace it just before it fails, and so the system doesn't go down unexpectedly, and so on.
Let's say your predictive maintenance algorithm is wrong, and so it predicts a failure too early.
Then what you've done is you've wasted money on a premature replacement.
Or let's say it fails to anticipate it.
(14:06):
That means you have to replace the part when it fails, you have to have unscheduled maintenance.
Well, we have that now.
So the stakes, they're not very large.
It's sort of a little bit of maybe extra expense and so on.
It's not a big deal.
I'm all for predictive maintenance.
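To make the example concrete, here is a minimal, hypothetical sketch of the kind of predictive-maintenance logic being described: a classifier trained on past sensor readings labeled by whether the part failed soon afterward, which then flags in-service parts for early replacement. The sensor features, the synthetic failure relationship, and the 0.5 decision threshold are all invented for illustration and are not drawn from the article.

```python
# Hypothetical sketch of predictive maintenance: flag parts for replacement
# before they fail, using a classifier trained on past sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic history: [vibration, temperature, hours since service] per part,
# labeled True if the part failed within the next 100 operating hours.
n = 2000
X = np.column_stack([
    rng.normal(1.0, 0.3, n),     # vibration (arbitrary units)
    rng.normal(70.0, 10.0, n),   # temperature (deg C)
    rng.uniform(0, 500, n),      # hours since last service
])
# Made-up relationship: failure risk rises with vibration and hours in service.
p_fail = 1 / (1 + np.exp(-(3 * (X[:, 0] - 1.2) + 0.01 * (X[:, 2] - 300))))
y = rng.random(n) < p_fail

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score parts currently in service; schedule replacement when the predicted
# failure probability crosses a maintenance threshold.
in_service = np.array([
    [0.9, 68.0, 120.0],   # looks healthy
    [1.6, 75.0, 450.0],   # worn and overdue
])
risk = model.predict_proba(in_service)[:, 1]
for part_id, r in enumerate(risk):
    action = "schedule replacement" if r > 0.5 else "keep running"
    print(f"part {part_id}: predicted failure risk {r:.2f} -> {action}")
```

The two failure modes Herb mentions map directly onto that threshold: set it too low and you replace parts prematurely and waste money; set it too high and you are back to unscheduled maintenance. Either way, the stakes are money and schedule, not catastrophe.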
Now, with the other criteria, I think it starts to get more interesting.
For example, I suggest asking whether the application in question
(14:27):
that you're considering is being pursued in the commercial world.
Now, the commercial world has a lot of advantages to it.
Among other things, the commercial world has a profit motive, and it doesn't do things that don't make sense from a commercial point of view.
Whereas the military doesn't have that kind of metric.
If the commercial world is doing it, maybe there are insights that you can derive from them that you wouldn't have just doing it for yourself.
(14:51):
Here's an example.
Anything AI-related to the KC-46 tanker is probably a reasonable thing to think about, especially in terms of maintenance, and so on.
The tanker is one that's going to be used for strategic refueling, strategic mission refueling, and so on.
Why am I focusing on the KC-46 tanker?
Because it's the militarized version of the Boeing 767.
(15:13):
And Boeing is doing a lot of AI stuff on the 767 for maintenance, blah, blah, blah, all that sort of stuff.
So, the lessons that Boeing has learned on the 767 in providing AI functionality in various parts of that, well, there's some chance that might be relevant to the KC-46.
That's a good thing, and you minimize the risk.
(15:33):
By contrast, there's nobody in the private sector that's developing an AI application that will develop different courses of action after nuclear war has started.
Nobody in the private sector is doing that.
There's no commercial value in that.
That would mean that the second application, about developing different courses of action, is a higher risk.
The third thing that I talk about on my list is whether or not the application
(15:56):
affords humans time to review the output.
You want to be able to check the work.
And even if you don't actually check your work, you want to have the option of checking the work.
So, for example, if I do AI for route planning, and you wanna optimize it with respect to fuel consumption or avoidance of air defenses or something
(16:16):
like that, it provides a proposed route, and I can look at the route and I can say, does it really do a good job?
So that's good.
That's pre-planned.
It gives me some time to do that.
Of course, if you have to do this in real time, maybe that's a little bit dicier.
But maybe I don't have time to do it, say, doing adaptive target selection in real time as opposed to doing it on a pre-planned basis.
(16:38):
On a pre-planned basis, I have time.
I can put it into a database, and so if it's doing it in real time, there's a real war going on, I have less confidence in that.
Another thing (16:45):
is there a ground
truth against which you could
evaluate the application's output?
By this I mean, it's not only can you get a ground truth, but
is there one even in principle?
So if you talk about optimizing courses of action, and coming out with the best possible alternative, those are very complex judgment calls.
(17:08):
There's no way of knowing whether, if it suggests course A versus course B, course A is better.
How do you know what's right?
How do you know that's better?
Whereas, if you say it's optimal with respect to fuel consumption, you can calculate that it's pretty close to optimal fuel consumption.
You check that out.
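As a small, hypothetical illustration of that point, here is a sketch in which an AI planner's proposed route (here just a hard-coded stand-in) is checked against a computable ground truth: the minimum-fuel route on the same planning graph, found by Dijkstra's algorithm. The waypoints and fuel costs are made up; the point is only that fuel optimality, unlike "best course of action," is something a reviewer can verify.

```python
# Hypothetical sketch: check an AI-proposed route against a computable
# ground truth (minimum fuel cost on the same planning graph).
import heapq

# Fuel cost between waypoints (made-up numbers).
graph = {
    "base":   {"wp1": 4.0, "wp2": 7.0},
    "wp1":    {"wp2": 2.0, "wp3": 5.0},
    "wp2":    {"wp3": 3.0, "target": 9.0},
    "wp3":    {"target": 4.0},
    "target": {},
}

def route_cost(route):
    """Total fuel cost of a route given as a list of waypoints."""
    return sum(graph[a][b] for a, b in zip(route, route[1:]))

def min_cost(start, goal):
    """Dijkstra's algorithm: the verifiable optimum for this graph."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")

proposed = ["base", "wp1", "wp3", "target"]   # stand-in for an AI planner's output
proposed_cost = route_cost(proposed)
optimum = min_cost("base", "target")
print(f"proposed route cost: {proposed_cost}, computed optimum: {optimum}")
if proposed_cost <= 1.05 * optimum:
    print("within 5% of optimal fuel use -> acceptable")
else:
    print("far from optimal -> send back for review")
```

A pre-planned setting leaves time to run exactly this kind of check; it is much harder to insert when the output has to be acted on in real time.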
The next one on the list is, if you can separate AI functionality
(17:30):
from the rest of the system.
Now this is a big deal.
Why is it a big deal?
Because the AI functionality might go crazy.
It might not work.
And in that case, you want to be able to turn it off.
Maybe you'll get degraded function in your system.
That's okay.
It's better than having no function at all.
But you don't wanna be in a situation where you have the system, the AI's
(17:52):
malfunctioning, and you wish you could turn it off and you can't.
That's a really bad situation to be in, so you wanna be able to turn it off.
And it turns out that turning something off often has cascading consequences.
So you have to build that functionality, the turn-off function, carefully, and you have to be willing to do that.
(18:13):
The last thing is that I want there to be an independent safety mechanism that could recognize really bad outcomes, and then prevent them.
In a factory, you may have seen these big robot arms that go around, these big heavy things that swing around and so on.
There's a safety mechanism there.
What they do is they weld big heavy metal blocks to the floor that prevent the
(18:37):
arm from swinging out beyond a certain arc.
So the AI controlling the arm can go crazy.
It can swing the arm back and forth and back and forth and back and forth,
but it won't go outside the arc.
Why not?
Not because it was programmed not to go out, but because those blocks keep it from swinging outside.
It's an independent safety mechanism that's available in case the
(19:02):
worst thing happens to the arm.
The people who are programming the arm have nothing to do with the welding of the blocks.
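A minimal, hypothetical software analogue of those welded blocks: whatever the (possibly misbehaving) controller commands, an independent interlock, written and maintained by someone else, clamps the command to a hard limit, knowing nothing about the controller's logic. The limits and the erratic controller below are invented for illustration.

```python
# Hypothetical sketch of an independent safety interlock: the limit check is
# separate from, and ignorant of, whatever logic produces the commands,
# like the welded blocks that bound the robot arm's arc.
import random

ARM_MIN_DEG, ARM_MAX_DEG = -60.0, 60.0   # hard limits: the software "welded blocks"

def erratic_controller(step: int) -> float:
    """Stand-in for an AI controller that has gone haywire."""
    return random.uniform(-500.0, 500.0)

def safety_interlock(commanded_deg: float) -> float:
    """Independent check: clamp any command to the permitted arc."""
    return max(ARM_MIN_DEG, min(ARM_MAX_DEG, commanded_deg))

for step in range(5):
    raw = erratic_controller(step)
    safe = safety_interlock(raw)
    print(f"step {step}: controller asked for {raw:8.1f} deg, arm moved to {safe:6.1f} deg")
```

The design point, as Herb says, is independence: the interlock is built and owned by people who have nothing to do with whoever builds the controller.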
Let's translate this into a military case.
It is well known that our test ICBMs that we launch from Vandenberg Air Force Base have self-destruct mechanisms in them, but the operational missiles don't have
(19:25):
those self-destruct mechanisms in them.
We don't want them there.
That's right.
Why don't we want them?
Because we're afraid that if we try to launch them, the Russians might compromise them and destroy our ICBMs in flight, and so on.
Okay.
We understand that.
Now, let's say you had a system that decided to do a launch on warning.
You would absolutely insist on having self-destruct mechanisms.
(19:46):
Why?
Because in case the system ordered a launch mistakenly, you would destroy them in flight.
I'm not advocating launch on warning, but if you went down that path, you would absolutely have to have the destroy-in-flight capability.
You'd absolutely have to have it because this thing would be catastrophically outta control, and you wouldn't want that to be controlled
(20:09):
by the AI that launched the system.
You want it to be controlled by somebody else.
At Vandenberg, there's a range safety officer whose only job is to make sure that the missile is on course.
It's not gonna go crazy.
I have those criteria, and in some loose sense of the term, I say, the more you can say yes to all of those conditions, the less risky it is.
(20:30):
We can get into an argument about whether any given application has five yeses or four yeses, but at least we'll have the argument, and I think that going down this kind of path is a better way of doing it than going through any kind of a formal risk analysis.
Sheena Chestnut Greitens (20:46):
As a reader,
this is one of the things that I found
the most helpful in structuring my own thinking about managing these risks, and weighing them and thinking about how they fit into the larger nuclear enterprise, is, you know, this set of criteria that you've just walked us through for addressing whether AI applications are low risk or high risk, and how those risks might be mitigated.
Herb Lin (21:07):
I wanna say one other
thing about that list of five
or six (21:09):
it's a work in progress.
If somebody listening to this podcast wants to add another one, send me a note, and we'll do something together.
I'll give you credit on the next one.
There could be additional criteria.
It's not final by any means.
Sheena, please continue.
Sorry.
Sheena Chestnut Greitens (21:26):
No,
I think that's a great point.
This is very much a case where, as the technology emerges, our thinking evolves.
We add, we build frameworks, we adapt them to what we know about reality.
And this actually raises a question that I wanted to get into, that appears, I think, a little bit earlier in the article, which is the idea that you propose of this "garbage in, garbage out" problem with the use of AI.
(21:47):
And so I actually wanted to sort of back up and say, can you explain that problem to the reader, and where it enters into these conversations and some of these examples you've been giving us?
Herb Lin (21:57):
These are the realities about machine learning that I remind the readers of, for people who don't quite understand what computers are.
On "garbage in, garbage out," it will not surprise anybody to know that if you put bad data into a computer program, you're gonna get bad results out of it.
And AI is just another computer program.
It's not magic.
It's another computer program.
(22:17):
And if you give it bad data, it will give you bad results.
And the problem, of course, is when you give it bad data and you don't know it's bad data.
And now you think it's good data, and now you think it's giving you good results when it's really giving you bad results; that's the real problem.
But people have known this for years and there's no way you can get around that problem.
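A small, synthetic demonstration of that point: fit a perfectly ordinary model to data whose labels carry an unnoticed systematic error, and the model reproduces the error confidently, while nothing in the fit itself signals that the data were bad. All numbers below are invented.

```python
# Hypothetical sketch of "garbage in, garbage out": a model trained on silently
# corrupted data gives confidently wrong answers, and the fit still looks good.
import numpy as np

rng = np.random.default_rng(1)

# True relationship: y = 2x + 1, plus a little noise.
x = rng.uniform(0, 10, 200)
y_true = 2 * x + 1 + rng.normal(0, 0.5, 200)

# "Garbage in": a miscalibrated sensor silently added 5 to every recorded value.
y_recorded = y_true + 5.0

# Ordinary least-squares fit to the recorded (bad) data.
slope, intercept = np.polyfit(x, y_recorded, 1)
print(f"fitted model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction at x = 4: {slope * 4 + intercept:.1f}  (true value is about {2 * 4 + 1})")

# The residuals are small and well behaved, so nothing here flags the problem.
residual_std = np.std(y_recorded - (slope * x + intercept))
print(f"residual spread: {residual_std:.2f}  -- the fit 'looks' fine")
```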
(22:37):
Some other things that it's really important to remember about AI.
By AI, I usually mean machine learning, for a particular reason, which I'll get to in a minute.
AI is just another way of programming a computer.
That means all of the things that we know about computers are true about AI programming for computers.
So don't expect it to do things that really computers can't do.
(23:01):
These are computers that are, more or less, the same as the computers that have been around since World War II.
The second point that I raise here is that the internal operations of machine learning systems are basically incomprehensible to human beings.
At least in conventional programming, you have explicit coding of instructions
and you can follow the path in principle.
(23:22):
But if you look at the inside of a machine learning system, all you see is numbers that are changing, and you have no idea how they correspond to any behavioral change.
So, that's a problem.
And the next point I think is actually pretty significant, which is that machine learning is basically statistics.
It's meaningful to think about a chatbot like ChatGPT as a conversational
(23:47):
interface to some very powerful statistical capabilities that you wouldn't otherwise have access to.
Basically, what's underlying machine learning is statistics.
And you remember what you were taught in Statistics 101:
correlation is not causation.
That lesson is really important, which means that if you think about
(24:08):
it, something that's purely machine learning will never give you a causal explanation for something.
It may give you a statistical explanation, but it won't ever give you a causal explanation, and since we talk about explanations as being mostly causal, it's not gonna give you that.
And so you're gonna have to try something else.
(24:30):
People in the field understand this, but I think it's not a widely understood problem.
And it remains an unsolved issue among machine learning experts.
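A quick numerical illustration of why this matters: two quantities with no causal connection that both happen to trend upward over time will correlate almost perfectly, and nothing inside the statistics distinguishes that from a real causal relationship. The series below are entirely made up.

```python
# Hypothetical sketch: two causally unrelated series that both trend upward
# over time correlate almost perfectly. The statistics alone cannot tell this
# apart from a genuine causal relationship.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2025)

# Two invented series that simply grow over time, independently of each other.
series_a = 100 + 3.0 * (years - 2000) + rng.normal(0, 2, years.size)
series_b = 500 + 40.0 * (years - 2000) + rng.normal(0, 30, years.size)

r = np.corrcoef(series_a, series_b)[0, 1]
print(f"correlation between the two series: r = {r:.3f}")
# r comes out close to 1.0, yet neither quantity drives the other;
# both are driven by the shared passage of time (a confounder).
```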
The final point that I wanna make here is that machine learning, AI, doesn't change the nature of reality.
It doesn't change the laws of physics.
You may remember that the time that you get from Russian launch of an ICBM to
(24:52):
impact is about 25 or 30 minutes or so.
So when people say, "If we use AI, we'll get more decision time for the president" (now the president only gets 15 minutes at most) "we'll give him more time." Well, you're only gonna get 30 minutes at most.
So then, the question is, would the debate be significantly different if the president were getting 30 minutes to make a decision?
(25:14):
I'm not sure it would be.
So, you know, there are limits to having additional warning time, yes.
But the maximum warning time is constrained by the laws of physics, and AI can't change that.
That's an important point to make here.
A related point, which I don't remember if I made in the article: you may remember the false alarm that happened in 1979, when there was a test tape that was inserted
(25:40):
into NORAD computers, and it signaled that there was an attack on the United States, and it turned out to be a test tape.
You remember that one?
AI would never be able to tell you that that was a fake attack.
Why?
Because all the data inputs were designed to be real, and if the AI were
(26:00):
just processing all of those inputs, which it would be, it would also say it was a real attack, but you'd have to go outside to know that it was a fake, you know, that it was a mistake.
You'd have to go outside the system, outside the test environment.
In which case, you're no longer doing the test of the system.
(26:21):
Those are, I think, some of the fundamentals of AI and machine learning that you have to understand to appreciate the significance of what AI does.
Ryan Vest (26:28):
That's really
interesting, Herb.
As you talk about this, I'm really struck by these differences between the way that AI uses statistical analysis versus causation, and how difficult it is for humans to really QA and understand what it's doing.
I'm kind of curious: in the article you talked about how the current national commitments to keep humans in control don't go far enough.
(26:50):
And I'm wondering, what does that mean for human control to be meaningful?
And what do current policies get wrong about this?
Herb Lin (26:55):
I wish I had a better answer
than the one I'm about to give you.
Okay.
I don't.
Meaningful and appropriate: it's context dependent.
I point out that the words in the DOD directive on lethal autonomous weapons, 3000.09, I think is the number; it uses the term appropriate levels of
(27:21):
human judgment or something like that.
I assume that it means that if you have the ability to kill lots of people, it takes a higher rank to be able to order it.
Whereas, if, you know, a smaller number of casualties is anticipated, you can devolve that further down the chain of command.
I mean, that's the sort of thing that I would expect, but I don't know, and it hasn't been formalized.
(27:43):
And it's very much on a case-by-case basis.
That's the part you won't like.
I wish I could give you a better answer.
There's a lot of literature on what meaningful human control means; this phrase is not original to me.
My recollection is that Heather Roff wrote about this about 10 years ago.
I commend her work as a place to start on it.
(28:03):
But the debate is obviously not settled.
I don't have any particular wisdom on what appropriate or meaningful should be, but I know that it's an important debate to have.
Ryan Vest (28:13):
How about I turn
it around a little bit and
ask you what it should not be?
What are we doing wrong right now?
Or better maybe.
What do current policies not fully encompass?
What are they not doing right?
Herb Lin (28:22):
There's a phrase that
technologists often use about how
policymakers think about this stuff, which is that they think they can sprinkle technology dust over something (pixie dust) and then magical things happen.
I've certainly seen that in cyber.
I see it in AI.
I think people are looking at this thing as a magic solution to very hard problems.
(28:44):
They say, "We'll just give it toAI and AI will figure out what the
right thing to do is," and it won't.
And just because ChatGPT will outputan essay that is grammatically correct,
doesn't mean that it will actuallygive you something that's sensible.
Out of linguistics, there's a famoussentence that Noam Chomsky invented
(29:08):
that's formally correct, but meaningless.
" Colorless green ideas sleep furiously."That is grammatically correct and
yet is a meaningless statement.
There's another wonderful phrase thatactually has good semantic meaning,
but is incomprehensible grammatically,although it is still grammatical.
It's "Buffalo buffalobuffalo buffalo buffalo."
(29:33):
And it turns out that if you look up that phrase on the internet, you'll find an extensive linguistic analysis of how in fact it's a correct statement and has semantic meaning.
Sort of stupid, but this question of meaning is endlessly debated.
Sheena Chestnut Greitens (29:46):
So let me take
us in a slightly different direction
and go back to something you touched on earlier, but that I really wanted to follow up with you on, and that's that you seem deeply skeptical in this article about the possibility of a verifiable agreement to limit AI in nuclear systems.
And you mentioned earlier that that's related to the idea that
AI is essentially software.
(30:07):
And so I wondered, could you talk to us about: is that a technological problem?
Is it a political problem?
What does that mean for efforts to build norms around AI and nuclear risk?
And do you see, then, any realistic, productive path forward to
make progress on this issue?
Herb Lin (30:24):
The last question
I can easily answer.
No.
Here's why.
Sheena, what kind of cell phone do you have?
Sheena Chestnut Greitens (30:30):
I have an iPhone.
Herb Lin (30:31):
An iPhone?
What model?
Sheena Chestnut Greitens (30:34):
I
don't know, a couple back.
Herb Lin (30:37):
Okay.
Let's say it's a 13.
Okay.
And it's probably on revision 14 or something like that of its software, let's say.
And let's say Ryan has the same phone.
How do you know that the software, your operating system, is exactly the same as his?
The answer is you don't know.
(30:57):
You're just trusting the manufacturer.
So to get into the code takes onsite inspection of a very intrusive sort.
I have to actually go to the computer, go to the phone, put instruments on it
and look at the code that it's running.
Then I inspect it.
(31:17):
I say, oh yes, it's got all the right, you know, markers on it.
First of all, it's impossible to do that because there's huge amounts of code in there, but never mind that.
You go and say, "Oh, this is all good," and now you leave, and then I just take the code out and I put another version of code in, which is the one that I really wanted to use.
The problem is that the circumvention is so easy to do that you
(31:40):
can't ever verify anything like this.
And then you have to say, "Well, is there any use to an unverifiable agreement?"
Well, yeah, I think there is.
But once you talk about unverifiable agreements, you're not talking treaty anymore, you're talking about political commitments.
That's okay.
I don't object to political commitments as a way of making progress, but that's
(32:03):
the best you're gonna be able to do.
That's the problem (32:06):
that code is
something that changes dynamically.
You can't freeze it.
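To see why inspection buys so little here, consider a rough, hypothetical sketch: a cryptographic hash attests only to the bytes that were present at the moment it was computed. The file names and contents below are invented; the point is simply that whatever was inspected can be swapped out the moment the inspector leaves, and the original attestation says nothing about what is running now.

```python
# Hypothetical sketch: a hash proves what the bytes were at inspection time,
# not what is running after the code has been swapped.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

declared = Path("declared_model.bin")    # what the operator shows the inspector
actual = Path("actually_running.bin")    # what runs after the inspector leaves

declared.write_bytes(b"benign decision-support model v1.0")
actual.write_bytes(b"something else entirely")

print("hash at inspection time :", fingerprint(declared)[:16], "...")
print("hash of code running now:", fingerprint(actual)[:16], "...")
print("match:", fingerprint(declared) == fingerprint(actual))

# Clean up the demo files.
declared.unlink()
actual.unlink()
```

Which is the practical reason the best available tools here are political commitments rather than a verifiable treaty.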
Sheena Chestnut Greitens (32:13):
Let me turn to
the question of international diplomacy,
if these are essentially political commitments, and that's what we're left with as thinking about a potential path forward, a potential framework.
Are there conversations happening now that you think are the most promising, or are there ones happening that you think are really concerning?
Herb Lin (32:31):
Well, okay, so you've
already heard my skepticism
about keeping people in the loop.
And you may remember that the last administration invested a certain amount of political capital in getting a whole bunch of people to agree to the declaration on the responsible use of military AI, which emphasized things
like people in the loop and so on.
(32:52):
I hope that the present administration will continue that effort.
I don't know that it will, but I have no indications yet that it won't, so I hope that this thing will happen.
And this is a good thing because it gets people talking about it.
Am I skeptical about whether it's going far enough?
Yes.
So we already discussed that.
I think that we should be doing something else, but is it better than nothing?
(33:16):
Absolutely.
It's better than nothing.
And, you know, that's sort of the path of arms control and international agreements: often it's better than nothing.
Sheena Chestnut Greitens (33:25):
So you close
the article in the Texas National Security
Review, and we're really grateful to have had the chance to publish the article, but you close with a call for policymakers to develop better understandings of AI's risks and applications.
So what does that look like in practice?
How do we do that?
And who especially needs to do that right now?
Herb Lin (33:44):
That's actually a hard
question to answer in some sense.
It's the thing that I devoted a huge amount of my professional life towards; that is, policymakers who don't understand technology and its limitations, what it can do, what it can't do.
Often there are technological solutions to things.
Often there are not.
And policymakers often look to technology to solve things when they
(34:05):
can't, and don't look to technology to solve things when they can.
And I have seen, operationally, many decisions that have been taken over the years, in a variety of different contexts, where people, I think, made a poor decision or a less good decision because they didn't understand
(34:25):
what technology did and didn't do.
Every one of those things is a missed opportunity.
And life is hard enough without missing opportunities.
Sheena Chestnut Greitens (34:33):
Well, thank you.
I think that's a really important note to end on.
Thanks for joining us on Horns of a Dilemma from the Texas
National Security Review.
Our guest today has been Herb Lin from Stanford, author of the article "Artificial Intelligence and Nuclear Weapons: A Common Sense Approach to Understanding Costs and Benefits," which as always can be accessed for free on our website, TNSR.org.
(34:53):
Herb, thank you very much for joining us today and for a really interesting conversation.
Herb Lin (34:58):
Delighted to have been here.
Thanks for having me.
Sheena Chestnut Greitens (35:00):
If you enjoyed
this episode, please be sure to subscribe
and leave a review wherever you listen.
We love hearing from you.
You can find more of our work at TNSR.org.
Today's episode was produced by TNSR Digital and Technical Manager Jordan Morning, and made possible by the University of Texas System.
This is Sheena Chestnut Greitens and Ryan Vest.
Thanks so much for listening.