Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:08):
Welcome to Then and Now, a podcast by the UCLA Luskin Center for History and Policy. We study change in order to make change, linking knowledge of the past to the quest for a better future. Every other week we examine the most pressing issues of the day through a historical lens, helping us understand what happened then and what that means for us now.
SPEAKER_02 (00:32):
Welcome to Then and
Now.
I'm your host, David Myers, and today we're going to talk about technology and the law.
Will artificial intelligence take over the world, rendering human beings pawns or victims of technology's unrestrained excesses? How did we get where we are? What constraints can the law provide to the rapid advances of
(00:52):
artificial intelligence? And what does the future portend? To help us address some of these questions, we'll be in conversation with John Villasenor, Professor of Engineering, Law, Public Policy, and Management at UCLA, where he co-directs the UCLA Institute for Technology, Law and Policy.
He's also a non-resident senior fellow at the Brookings
(01:14):
Institution and a member of the Council on Foreign Relations. Professor Villasenor's work addresses the intersection of technology, law, and policy, with a focus on topics including digital communications, artificial intelligence, cybersecurity, and privacy.
Welcome to Then and Now, John.
SPEAKER_01 (01:33):
Oh, thank you very
much for having me.
SPEAKER_02 (01:35):
So you have an unusual background that brings together a number of different disciplines: engineering, technology, law. Tell us how you got to UCLA. You were an engineer at the Jet Propulsion Laboratory. So how did you make your way from there to UCLA, where you're teaching in both the School of Engineering and the Law School?
SPEAKER_01 (01:56):
Well, yeah, it's an interesting backstory. Prior to joining UCLA, I was indeed at the NASA Jet Propulsion Laboratory, and then I ended up joining the faculty of the engineering school. And back then, and this is a long time ago now, we're talking really back in the early 1990s, my work was solely in engineering.
(02:18):
And so I had what you might call a fairly traditional early part of my career, going up through the progression of professor ranks and doing a particular type of research, but pretty traditional engineering work. And it wasn't until later that I branched out and started getting
(02:38):
involved in some of these other interdisciplinary aspects to the extent that I am now.
How did that happen?
Why?
Well, it's a really interesting question. I should be clear in stating that I'm no less interested in engineering as a pure discipline in and of itself. I think it's a foundationally important discipline, and I have
(02:58):
great respect for it and still have quite a lot of interest in it.
At the same time, I also became more and more cognizant of the importance of looking at these technology and engineering related questions not only in the pure traditional engineering context, but also in terms of the broader implications, the ramifications, the policy
(03:20):
implications, the intersection with legal frameworks. And I saw that there was an opportunity to do that. And not that I'm the only one who's ever looked at the tech-law intersection. Of course, thousands of people have done that.
But more traditionally, the people who do that in academia are people who are coming from the law side. In other words, legal scholars who, as one of their areas of
(03:41):
focus, have technology law as an area of specialization. Far fewer people really came up through the academic training system in engineering and then branched out and engaged formally in legal scholarship and the legal academy on those issues. And so it seemed to me that that was a really good opportunity.
(04:03):
And that's when I started emphasizing that aspect of my work as well. I guess now it's probably close to 15 years ago. And it's been a real learning experience and a growing experience for me, as well as hopefully an opportunity to provide some contributions more broadly.
SPEAKER_02 (04:22):
Maybe by way of framing our discussion about artificial intelligence, you can share with us: why law? What were the legal issues? Is this a matter of patent? Is it a matter of regulation?
SPEAKER_01 (04:35):
Yeah, it's a great question. I should just take a step back and say that I didn't originally get into the tech-law intersection specifically because of artificial intelligence. I was just more interested in it more broadly, you know, questions of things like cybersecurity and digital privacy and policy around intellectual property.
(04:58):
There's just a whole host of areas, some of which have nothing to do with AI, and some of which have come to be very closely related to AI. But when I first started getting involved in this, maybe 15 years ago, AI wasn't nearly as dominant in the public conversation and in the academic conversation as it is today. So my original interest in this was not
(05:21):
specifically drawn to AI, if that makes sense.
SPEAKER_02 (05:25):
Okay, so let's jump into the heart of the matter. I'm sure there are many ways to go about this, but how would you define artificial intelligence?
SPEAKER_01 (05:34):
So rather than having me come up with, you know, the 10,000th definition of artificial intelligence, what I thought I might do is read you a one-sentence definition that was codified into federal law. There's something called the National Artificial Intelligence Initiative Act of 2020. And at least for the purposes of that act, they gave a
(05:55):
particularly useful definition. They, being Congress, wrote in that law: the term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. So I think that's as good a definition as any.
(06:15):
I guess I would add to that that very often when we talk about AI systems, we talk about systems that learn from and adapt to their environments based on the observations, the data, that they're receiving.
SPEAKER_02 (06:26):
So it seems to the uninformed observer that there's been a very considerable acceleration in the development of artificial intelligence over the last few years. I mean, it seems almost month by month there are significant leaps. But obviously, artificial intelligence wasn't born in the 2020s.
(06:47):
How do you narrate the history of artificial intelligence?
SPEAKER_01 (06:52):
It's a fascinating history. Many observers will go back to the pioneering work of Alan Turing in the UK, who famously published a paper in, I don't remember the precise year, but it was the mid-20th century, asking the question: can machines think?
(07:12):
Which again, today perhaps it's not an earth-shattering question to ask, but it really was a prescient question to ask back then, 75 or however many years ago it was. And that paper, as much as anything else, I think you can point to as really an early
(07:33):
foundational contribution to what became known as artificial intelligence. And then over the subsequent decades, through really the end of the 20th century, there was significant work, increasing work as the years went on, in artificial intelligence. But you're right, and that has continued, but it didn't really explode into the public consciousness until a few
(07:55):
years ago. There's a couple of specific reasons for that, but I'll also say that things like ChatGPT everybody's heard of, but even well before that was released, a year and however many months ago it was, AI was already very much in our world, even if we didn't notice it. For example, two years ago, if you got route
(08:15):
finding for rideshare apps and purchase recommendations, you were benefiting from artificial intelligence, even if you didn't specifically know that that was what was behind what you were seeing or doing.
SPEAKER_02 (08:26):
So I can think back to something that seems significant in my time, which was when the computer Deep Blue bested the chess master Garry Kasparov.

Yes.

That would seem to be a signal moment in this history, wasn't it?
SPEAKER_01 (08:41):
Yeah, and it's a great milestone. But I think it's also important to mention that, as far as I'm aware, the Deep Blue computer did not actually use artificial intelligence. It was programmed, using many, many person-years of effort, to basically try to capture the collective knowledge of the programmers, of what worked and what doesn't, and then also just
(09:02):
do brute force: look a certain number of moves forward through brute force and try to figure out what's going to work and what's not. But it was a signal moment in the power of computing to do things that had up until then been associated primarily with human cognition. And even though it wasn't AI as we think of it today, it was
(09:24):
still really important. And if I may, I can just contrast that with something that happened 20 years later, in 2017. There was a Google program named AlphaZero which, starting only with the rules of chess, given no other information other than the rules of chess, was able to teach itself to play chess at a world-class level in about four hours.
(09:45):
And so if you look at that 20-year time span, with Deep Blue we have world-leading chess, but only at the cost of many person-years of development. 20 years later, we have AI being used so a computer can teach itself to play chess in four hours at a world-class level. That's an absolutely stunning set of advances over a 20-year timeframe.
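For technically curious listeners, "look a certain number of moves forward through brute force" is, at its core, depth-limited minimax search. Below is a minimal, hypothetical sketch of that technique, illustrated on the simple game of Nim rather than chess; it is not Deep Blue's actual code, which ran on custom hardware with hand-tuned evaluation functions. AlphaZero's break with this approach was to replace the hand-crafted knowledge with a neural network trained purely through self-play.

```python
# Minimal sketch of depth-limited minimax, the "look a certain number
# of moves forward through brute force" idea described above.
# Illustrated on Nim (each player removes 1-3 stones; whoever takes
# the last stone wins). Illustrative only: Deep Blue's real search
# used custom chess hardware and hand-tuned evaluation functions.

def minimax(stones: int, depth: int, maximizing: bool) -> int:
    """Score a position by exhaustively searching `depth` moves ahead.

    +1 means the maximizing player wins, -1 means they lose,
    0 means the search horizon was reached undecided.
    """
    if stones == 0:
        # The previous player took the last stone and won, so this
        # position is lost for whoever is now to move.
        return -1 if maximizing else 1
    if depth == 0:
        # Horizon reached: a real engine would call a hand-tuned
        # evaluation function here instead of returning "unknown".
        return 0
    scores = [
        minimax(stones - take, depth - 1, not maximizing)
        for take in (1, 2, 3)
        if take <= stones
    ]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int, depth: int = 12) -> int:
    """Choose the move with the best brute-force look-ahead score."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: minimax(stones - take, depth - 1, False))

if __name__ == "__main__":
    # From 7 stones, taking 3 leaves the opponent a losing position.
    print(best_move(7))  # -> 3
```

The contrast with AlphaZero is that nothing like the hand-written evaluation survives: the system learns its own sense of which positions are good from games it plays against itself.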
SPEAKER_02 (10:08):
And for the layperson, can you explain what that advance actually entailed? It had to do with the speed and capacity of something?
SPEAKER_01 (10:16):
Yes, I think that's right. People have been working on AI for, like I said, three quarters of a century. But of course, over most of that time period, at least the last 50 or 60 years, we've had this exponential increase in computing capacity. And that's both in terms of the speed at which chips can compute things, and also the amount of storage.
(10:39):
The price of storage has declined roughly exponentially over that time. And so what I think really created a tipping point for AI in the last decade or so, and even more so in the last few years, is that the amount of data and the speed of computers and the low cost of storage all sort of came together, such that
(11:00):
you now have truly extraordinary computing and storage capabilities that are accessible to people who are developing these AI systems. And that created this really qualitative leap in what these AI systems can do. I mean, people have fun poking at ChatGPT, finding errors, when it
(11:21):
creates hallucinations, makes up facts that aren't real, and things like that. But if you take a step back, the things it can do are just remarkable. And of course, it's still early days yet. So I think, as you just suggested, it's really computing power, plus, of course, a lot of people getting very good at creating these algorithms and working on these systems. You know, it's
(11:42):
a combination of the tools that were available and the knowledge of the people using those tools, and those things together are really what has led us to where we are today.
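As rough, purely illustrative arithmetic on what "exponential increase" implies (the two-year doubling period below is a Moore's-law-style assumption, not a figure from the conversation): a quantity that doubles every two years grows about a thousandfold over 20 years and more than thirty-million-fold over 50.

```python
# Illustrative compounding for "exponential increase in computing
# capacity." The 2-year doubling period is an assumed, Moore's-law
# style rate, not a figure quoted in the interview.
DOUBLING_PERIOD_YEARS = 2

for years in (10, 20, 50):
    factor = 2 ** (years / DOUBLING_PERIOD_YEARS)
    print(f"{years} years -> ~{factor:,.0f}x capacity")

# Output:
# 10 years -> ~32x capacity
# 20 years -> ~1,024x capacity
# 50 years -> ~33,554,432x capacity
```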
SPEAKER_02 (11:50):
And is this a reality that Alan Turing and his generation imagined to be possible?
Well, that's a good question.
SPEAKER_01 (11:59):
I would say Alan Turing is someone in some sense apart from his generation, just because he was so visionary. And I think there were very few people at that time who would have even entertained the question of whether machines can think. But someone like Alan
(12:21):
Turing, the very fact that he could ask the question suggests that he would perhaps be less surprised than others of his time to see what has happened today.
SPEAKER_02 (12:32):
So it's curious to ask what futurists today say about the potential of tomorrow.
SPEAKER_01 (12:37):
Yeah, that's a good question. I'll give an analogy that I think is perhaps useful here. Back in the late 1990s, the internet was first becoming widely accessible. The internet itself had been invented prior to that, but you didn't have browsers on people's desktop computers, at least at large scale, until the mid-1990s, late 1990s.
(12:59):
And if you'd asked me in 1997, 1998, what's the internet gonna look like in 25 years or something, I would have accurately been able to tell you, for example, that it was just gonna make it a lot easier to find information, right? The whole cost of finding information was gonna just plummet, and the efficiency of finding it was gonna go way up.
(13:19):
But I would have completely missed social media. It wouldn't have occurred to me. I was not able to see in 1997 that the internet, among other things, was going to lead to the creation and rise of social media. And I think it's an important history lesson, at least for me, because it illustrates the difficulty of trying to predict
(13:40):
the future. And so I think there are some things about AI that we can say with a high degree of confidence, but technologies have a way of developing in ways that you didn't necessarily originally anticipate. And I think it would be naive at best to suggest that we, sitting here in 2024, can say with any real certainty what AI 30, 40,
(14:02):
50 years from now is gonna look like. I'll just give one more analogy, or comparison, that I think is useful. We're in 2024 now. If we go back 50 years to 1974, it was no easier in 1974 to predict the technological landscape that we have today than it is today, I think, to predict the technological landscape that we're gonna have in 2074.
(14:23):
And so if you look at the vast differences we have over the past 50 years, I think it's reasonable to expect the differences will be similar in scope over the next 50 years. And I'm not gonna try to predict exactly what form that's gonna take, because that's hard.
SPEAKER_02 (14:36):
But is it the case
that the pace of change is much
more rapid by orders of magnitude?
SPEAKER_01 (14:43):
I don't know if I would say orders of magnitude. I think the pace of change has been pretty fast. I mean, certainly the pace of technological change is faster in recent decades than it has been in, say, prior centuries. I don't know that it's faster today than it was in the 1990s. I mean, if you look at going from 1993 or '94 to 1997 or '98, just a handful of
(15:07):
years, web browsers went from essentially non-existent at a population scale to prevalent, right? At least in many countries. That's a stunning change. If you go back not too many years before that, there was a period of time where almost nobody had mobile phones. And over a pretty short period of time, many people, again especially in some countries, were able to
(15:30):
get mobile phones. So those are incredibly transformative, profound changes. And those were a quarter century ago. So I don't think the idea of technologically induced profound changes is new. But yeah, things are happening quickly now.
SPEAKER_02 (15:45):
Yeah, and I just think back to the first public announcements to the general public of the arrival of ChatGPT. It seemed like there was a lot of conversation about the potential of AI. And then the next moment there was this instrument that everybody could have access to, that was a
(16:06):
source of good and not so good, perhaps. But it seemed to come out of nowhere. But I'm sure within the community it was highly predictable.
SPEAKER_01 (16:16):
I don't know if I'd say predictable. I mean, certainly large language models weren't a secret, and neither was the fact that people were working on interfaces to them. I think what was a surprise to many people, even in the community, was how good these things had gotten. I think that was what was surprising. Because it wasn't too long ago that chatbots were just vastly less capable.
SPEAKER_02 (16:42):
And what accounts
for that?
SPEAKER_01 (16:43):
The models are bigger, there's more computation. And I think the people behind the best large language models have a lot of expertise. This is what they do, they're focused on that, and they've really produced some amazing technology.
SPEAKER_02 (17:03):
And as a moral being in the world, how do you find AI when you think of its benefits and its potentially destructive qualities?
SPEAKER_01 (17:16):
You know, this is not a trendy view to hold in academia, because there's a lot of doomsaying, but I am much more of an optimist than a lot of people. Now, let me be careful: I'm not suggesting that there's never gonna be any negative uses of AI. Of course there will be. I mean, every technology, or
(17:39):
almost every technology, has been exploited by people for negative purposes as well as positive purposes, right? But I'm not of the view that AI is gonna take over the world or make all humans obsolete or control us or anything like that. I think there's just incredible potential. I mean, just one area I'll mention is pharmaceuticals, drug development. The potential for AI to discover new drugs that
(18:04):
otherwise would have remained undiscovered for perhaps years, decades, perhaps forever, is truly extraordinary. There's just a long list of applications where the benefits are really, really amazing. There are opportunities to broaden access to legal services, to improve
(18:25):
medical diagnostics. I mean, there's just a long list of benefits. And alongside those, of course, yes, there are people who are going to misuse AI for malicious purposes.
SPEAKER_02 (18:34):
And you don't imagine that those efforts will continue? I mean, we saw just in the last week reports that some supporters of the Trump campaign used AI to alter images of Nikki Haley and disseminate them very widely.
(18:55):
The capacity to do that kind of thing and promote mis- and disinformation seems to be almost...
SPEAKER_01 (19:01):
So what are the guardrails? Yeah, so I don't know and I haven't read about that specific thing, so I'm not gonna comment on that specific allegation. But I will say that AI creates the opportunity, it makes it possible, to alter
(19:22):
images, and of course you could alter images before AI, but it means you can do it in a more realistic way. And people can also create synthetic images that are increasingly going to be difficult to disentangle from real images. And of course, that kind of technology can be used to really problematic ends. But it can also be used to beneficial ends.
(19:45):
I mean, I'll give an example. There's a South Korean presidential candidate who, in his own campaign, recently used AI because he had apparently been criticized for appearing sometimes too cold and unfriendly, and he used AI to modify his video so he appeared more approachable and friendly. I mean, you can have some discussion about whether that's
(20:07):
right or wrong, but it's certainly not malicious, it's not evil, right? And it's not something that we would, in the United States, try to block somebody from doing if somebody tried to do it here. You can also imagine AI being used in, I don't know, filmmaking, for example, if somebody was going to use AI to make a movie depicting Abraham Lincoln in a completely
(20:28):
realistic form. That's another example. But yeah, there are going to be people who use it to create and propagate disinformation, and that is a problem. Although I will say that disinformation has been a big problem even without AI. You don't need AI to have a disinformation problem. There's been more than plenty of that, unfortunately, on the internet for a long time.
SPEAKER_02 (20:49):
So let's turn to the law, where there seems to be a kind of dissonance or anomaly of sorts, insofar as the very people who are producing technological innovations, that is, big tech, are also the people who are called upon to regulate their industries and the internet and
(21:12):
presumably the future course of AI.
SPEAKER_01 (21:14):
Is that an accurate reading? I'm gonna push back a little bit on that, in the sense that there is government regulation that's not run by the tech companies that is relevant to AI. In fact, there's quite a lot of it. So it just depends on what you mean. Let me explain, just so I don't get taken out of context.
(21:36):
People often suggest that we don't have any rules regarding AI. Well, that's not true. For example, if a bank is using an AI system to make decisions about who to give loans to, and the bank ends up making decisions in a way that disfavors members of protected groups, let's say on the basis of race or gender or
(21:57):
religion or something like that, well, that's already unlawful, right? Under the Fair Housing Act, you can't discriminate in home loans. And that prohibition on discrimination isn't any less binding just because a bank happens to be using AI. So there's a whole set of protections that we already have
(22:19):
that will apply to AI if AI is used in a way that contravenes the behavior those laws are intended to block. Now, does that mean there aren't new ways in which AI could be used that are harmful, that fall through the cracks, outside the scope of some of these regulations that are already there? That may be the case. And if we identify things like that, well,
(22:41):
that is something that we should be looking at carefully, in a balanced way, in terms of how to address it.
SPEAKER_02 (22:46):
So do you think
there is sufficient government
regulation at present of the tech industry in general and AI
in particular?
Like, are we in a good place?
SPEAKER_01 (22:55):
Yeah, I get hesitant when people just say regulate. Let me give an example. I think regulation has its role, and there may well be a need for additional regulation, but I think it has to be done thoughtfully. An example I think is helpful to cite: back in 1986, Congress was concerned about digital privacy, and they
(23:19):
enacted a law called the Stored Communications Act, which gave substantially more protection for emails stored for less than six months than for greater than six months. Basically, law enforcement needed a warrant to get emails stored less than six months, and did not need a warrant for emails stored greater than six months. And by the way, that law is still on the books.
(23:39):
And the logic back then was that, well, nobody stores emails more than six months, because you'd run out of disk space, right? Now, of course, you fast forward 40 years, and that same law is on the books. And what that means is the vast majority of our emails are subject to less protection because of this purportedly privacy-enhancing law. So my point in bringing up that analogy is that's a case where
(24:01):
Congress said, hey, let's get ahead of this, let's enact some regulation about this new thing, these electronic things like email, and it ended up actually being counterproductive, being, in some sense, an anti-privacy law, at least through the lens of today's technology. Maybe it was okay on the day it was enacted. So I do get nervous about calls where the logic is,
(24:23):
we need more regulation, let's regulate. That to me sounds like a formula for unintended consequences. As contrasted with: you say, okay, here's a problem that is attributable to AI, and let's look at our existing legal frameworks. And if we find that none of the existing frameworks can actually address this problem, well, then it makes sense to talk about potential regulatory approaches or other solutions as
(24:45):
well. To have that on the table seems reasonable.
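The six-month distinction he describes is concrete enough to capture in a toy decision function. A minimal sketch, assuming the statute's actual 180-day threshold (the "six months" in the conversation corresponds to 180 days in 18 U.S.C. § 2703); this illustrates the shape of the rule and is not legal advice.

```python
# Toy illustration of the Stored Communications Act distinction
# described above: under the 1986 framework, emails in electronic
# storage for 180 days or less required a warrant to access, while
# older emails could be obtained with lesser legal process.

def warrant_required(days_in_storage: int) -> bool:
    """Return True if the 1986 rule demanded a warrant for this email."""
    return days_in_storage <= 180

# The 1986 logic assumed nobody kept mail past six months (disk space!),
# so today most stored email falls on the weaker-protection side:
print(warrant_required(30))    # True: recent mail, warrant needed
print(warrant_required(3650))  # False: decade-old mail, lesser process
```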
SPEAKER_02 (24:48):
And how would you grade the big tech companies in their own efforts to produce guardrails against excesses and misuse?
SPEAKER_01 (24:59):
You know, I guess I don't have enough knowledge about all the internal steps that they've taken. I mean, tech companies are big targets, and certainly they have not behaved perfectly. But I also think there's a lot of political mileage that members of both parties can
(25:19):
get by sort of bashing big tech, because, you know, the tall tree catches the wind. And so again, I'm not suggesting that tech companies have behaved perfectly, and no doubt they have not. But I also think that many of these tech companies do have people who are thinking carefully about some of the ethical implications of the technologies
(25:42):
that they're building.
SPEAKER_02 (25:43):
And just thinking of a response to an earlier question, you were sort of identifying yourself as, I guess, if not a doomer, maybe a boomer, as I understand the world is divided. Why should we be sanguine that a group of bad people won't use AI to build a devastating nuclear bomb, or, sort of
(26:05):
the more sweeping assertion?
SPEAKER_01 (26:08):
Yeah, yeah. I mean, there are always going to be people who use the latest technologies in ways that are problematic. Again, history provides a good example, or not a good example, but an illustrative example, in a lot of ways on these things. For example, and this is a
(26:28):
grim example, a terrible example: in World War II, Nazi Germany's goal was to put a radio, an old-fashioned AM radio, in as many German homes as possible, thereby allowing the government, including Hitler, to speak directly into homes. And that effort to distribute radios
(26:50):
got radios into a lot of homes. And that's a terrifying use of what was then an emerging technology. But I think the problem was not radio, the problem was the Nazis, right? And radio, of course, in the decades since then, has led to innumerable good
(27:10):
uses, right? We would never suggest that there's something inherently evil about radio. The problem is that, like any technology, there are people who will misuse it. I think the same is going to be true for AI. It's a big world, there's billions of people in the world, and some subset of them are going to try to use AI for really problematic purposes. And we should be on guard about that, and we should address that.
(27:31):
But that doesn't mean that you throw the baby out with the bathwater. It's not the fault of the technology, it's the people who are employing it in these ways. And just as a global ban on radio in the late 1930s would have made no sense, so too would it not make any sense to have a global ban on AI.
SPEAKER_02 (27:47):
And what prevents
you from lapsing into the
ultimate doomsday scenario of AI taking over the world, the human
race?
SPEAKER_01 (27:54):
There's any number of reasons why. You know, it depends on what you mean by AI taking over the world. But I just don't see any AI system, or collection of AI systems, being so untethered and decoupled from human oversight and control that they could literally take
(28:16):
over the world. Could an AI system do damage? Sure. But as both longer-term history and recent history make clear, I think the greatest damage, the greatest source of damage for people, is frankly other people.
SPEAKER_02 (28:34):
Not AI developing decidedly and wholly negative human characteristics like resentment, jealousy, indignation, an impulse to violence?
SPEAKER_01 (28:47):
Yeah, you get into sort of a philosophical question of whether AI has feelings and all these kinds of things. So I'm not quite ready to concede that at least today's AI systems have something that we could genuinely and properly analogize to feelings in the human sense. At the end of the day,
(29:08):
they're just machines. But even if we got to that point, I would assume that there are going to be guardrails. And again, they're not going to be perfect. Over the next decade, will AI in some cases be used for really problematic ends? Of course. But that's also true of the internet, right? There's a ton of, I don't know, financial fraud on
(29:29):
the internet. A ton of it, right? There's all sorts of bad stuff that happens on the internet. And we do our best to combat that. We do our best to hold the people who commit those acts accountable, but we don't shut down the internet, right? We understand, I think most of us, that the benefits of having an internet vastly exceed the downsides, even
(29:50):
though the downsides are very significant. And I don't want to downplay that, right? If you're a victim of financial fraud, somebody steals your credit card and they're thousands of miles away, that's still a big problem, and there's far worse that happens on the internet. But again, it's the people using it, not the medium, that's at fault.
SPEAKER_02 (30:06):
So it sounds like
you're not losing sleep at night
over fears of what AI might do.
SPEAKER_01 (30:13):
I don't want to suggest that I'm sort of, you know, whistling past the graveyard. I have what I would like to believe, and I respect that others may have a different view. I like to believe that, like so many technologies, like the internet, like radio, like mobile phones, on balance AI is going to bring us far more benefits than harms.
(30:37):
Does that mean we close our eyes and ignore the potential for harms? Absolutely not. We need to be vigilant. We need to think about frameworks for minimizing the probability that those harms are perpetrated, or, if they are, for identifying them and stopping them quickly. But I don't subscribe, and I guess you suggested before that there's sort of the doomers and the boomers, but I don't have a binary view.
(31:01):
I would say that I'm largely optimistic, but also wary and aware of the negative uses. But I don't think that's a reason to turn our eyes away from the really amazing opportunities that we have.

Okay, at that point of cautious optimism, I'd like to thank you, John Villasenor, for
(31:24):
taking time out of your schedule.

Well, thank you. Appreciate it.

Thanks for listening to Then and Now, a podcast by the UCLA Luskin Center for History and Policy.
SPEAKER_00 (31:42):
You can learn more about our work or share your thoughts with us at our website, luskincenter.history.ucla.edu. Our show is produced by David Myers and Rosalind Campbell, with original music by Daniel Reichman. Special thanks to the UCLA History Department for its support, and thanks to you for listening.