Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
[MUSIC PLAYING]
(00:03):
Hello and welcome to Data Nation,
a podcast from MIT's Institute for Data, Systems, and Society.
I'm Liberty Vittert, and I'm here with my co-host Munther Dahleh,
the head of MIT's Institute for Data, Systems, and Society.
In this episode, we chat with Dr. David Autor.
David is the Ford Professor of Economics at MIT.
(00:25):
With all the chatter about ChatGPT and the development of artificial intelligence,
we contacted David to get some real talk about the current state of AI
and what we might expect as the technology continues to advance.
Well, thank you, David, for doing this, first of all.
My pleasure.
Thanks for inviting me.
So this is a very hot topic.
(00:46):
And the whole idea of AI, and the future of work, and the workforce is something that is on everyone's minds.
I'm going to start very broad and ask you for your opinion.
Where do we stand today?
What, in general, can we say about where AI is in terms of the workforce, and what should we be expecting in the future?
Yeah, I generally don't like to say,
(01:09):
we're at an inflection point, things are changing faster than they ever have, et cetera.
But we are at an inflection point, and things are changing really rapidly.
We've obviously had computers all around us for the last four decades, if you've been around that long.
The office personal computer, the IBM PC, was introduced in 1981, and that was a big deal.
And people thought it was going to change the world.
(01:30):
It did change the world in some ways.
But the progress of computing over the last four decades has been very linear, in the sense that if you wanted to get a computer to do something, you first had to make explicit all the rules, formulas, and procedures.
Then you had to write that up in code.
And then you could have a machine that didn't understand what it was doing, and was not flexible or adaptive, carry out
(01:51):
those rules and procedures.
So you had to, basically, first put everything on ice and figure it all out, and then have a machine do it.
We call that codifying routine tasks.
And that's really good for office work.
It's good for repetitive production work, anything that's calculating, filing, sorting, storing, retrieving.
But the key thing is, all knowledge had to be made explicit.
(02:12):
It had to be formalized and codified before a machine could execute it.
And that's actually very different from the way that we do things, because we do many things without understanding what we're doing or knowing the rules.
We use what we call tacit knowledge.
So you never went into a classroom and learned how to ride a bicycle there.
And if someone asked you to teach
(02:32):
a class on riding a bicycle, you wouldn't know where to begin.
But you can ride a bicycle.
And similarly, if you're trying to tell a funny joke, or make a persuasive argument, or recognize someone who you haven't seen in four decades, you don't know how you do that, but you know how to do it.
But the fact that you didn't know how to do it meant it was very hard for you to write a computer program to do it, because the rules were tacit,
(02:54):
they were not explicit.
And so the progress of computing was very predictable, because we knew it was a very incremental procedure to go from playing chess, to spell checking, to checking grammar, to calculating trajectories, and all kinds of stuff.
So AI is fundamentally different from prior generations of software and computing capabilities, because you don't need to explicitly know the rules.
You can say, look, here's the question, and here's the answer.
And now, you tell me; you figure out how we got from there to there.
So here are a billion objects.
They're labeled as such and such.
Now, I'm not going to tell you what makes a chair a chair.
You just look at the thing and tell me which of these things
(03:36):
is a chair.
And it turns out, that capability has become very general.
It doesn't just apply to recognizing objects, but to generating language, generating faces, predicting the text you're going to write as you write an email, and parsing huge bodies of knowledge and pulling out relevant things.
So that is extraordinary.
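To make that contrast concrete, here is a minimal sketch in Python. It is purely my illustration, not anything from the episode, and the toy features and data are invented. It shows the two paradigms David describes: hand-writing explicit rules, versus handing a learner labeled examples and letting it infer the rules itself.

    from sklearn.tree import DecisionTreeClassifier

    # Paradigm 1: codified rules. A human must state what makes a chair a chair.
    def is_chair_by_rules(legs: int, has_back: bool, seat_height_cm: float) -> bool:
        return legs >= 3 and has_back and 35 <= seat_height_cm <= 55

    # Paradigm 2: given labeled examples, the machine infers the rules itself.
    # Toy labeled examples: [legs, has_back, seat_height_cm]; label 1 = chair.
    X = [[4, 1, 45], [4, 1, 50], [3, 1, 42], [4, 0, 75], [0, 0, 40], [4, 0, 30]]
    y = [1, 1, 1, 0, 0, 0]

    model = DecisionTreeClassifier().fit(X, y)  # the machine infers the rules
    print(model.predict([[4, 1, 48]]))          # prints [1]: it calls this a chair

The point of the second paradigm is that nobody ever states what makes a chair a chair; the rules stay tacit inside the fitted model.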
(03:57):
And many people, including our colleagues here at MIT, didn't think we would get here this fast.
I remember, 10 years ago, there was a group of computer scientists and economists who used to meet rather frequently.
It was just at the beginning of the contemporary AI spring, after decades of AI winter.
And some people were saying, well, look, this thing, it's
(04:19):
not going anywhere.
We'll see what these things are doing.
They're just going to get the right answer on average and get every interesting case wrong.
We're going to hit the flat of the curve.
And other people were saying, wow, this new AI, I'm surprised.
I didn't think you could get this far just based on this notion of neural nets that are just assimilating statistical associations and then doing things.
(04:40):
But it does seem to be working better than you'd think.
And now, I think we're at a point where a lot of people are saying, wow, this has gone further, faster than we thought was possible.
Whether you're looking at ChatGPT, or at how fast computers can do translation between languages, or even, if you're writing an email, the fact that it seems
(05:01):
to be able to predict what you're going to say next, that's pretty extraordinary.
So I do think this is a time of substantial uncertainty about what's feasible.
And the reason I say uncertainty is because, as I said a minute ago, we used to have this linear roadmap: how do we get from here to there?
Well, we knew the route.
It was a long one.
It was a slow one.
(05:21):
And now, we don't have a roadmap.
When I ask computer scientists this question, what can you confidently say AI will not be able to do 10 or 20 years from now, they don't have a very compelling answer.
They're really not sure what it won't be able to do.
So I think it's a time for a lot of blue-skying, and I think one should be hesitant about saying
(05:43):
what won't happen.
I think one should also be hesitant about saying what will happen, because I don't think anyone has a very, very clear picture.
David, I can't help but hear the excitement about what's happening around AI, but I have to bring up
(06:04):
the fear that jobs are going to be taken away, that the workforce is going to be lost, the real fear that people have around AI taking away jobs.
But in your TED Talk, and correct me if I paraphrase you wrong, you mentioned this really interesting phenomenon where human workers have almost had increased opportunities and capacity to be employed in the face of increasing
(06:27):
automation, which is what no one thinks is happening, but, I guess, is.
And you had this great example with bank tellers and ATMs to explain how automation doesn't necessarily mean fewer jobs; it could mean more.
So could you maybe explain how that works, and why we shouldn't necessarily be as scared as I think a lot of people are?
(06:47):
Sure.
Absolutely.
The example of bank tellers, by the way, goes to Jim Bessen, who's an economist at Boston University, and I did use it with attribution in my TED Talk.
And it is the case that, decades after ATMs were introduced, people were very surprised to discover that the number of bank tellers had increased, not declined.
And the question is, well, why?
(07:08):
We all thought they'd be automated by now.
What are they still doing here?
And the answer was a couple of things.
Two pieces.
One is that, because ATMs lowered the cost of opening a branch, banks began branching much more aggressively.
And so they would install a couple of ATMs and hire a couple of bank tellers.
And boom, they had more people.
But the other is that the bank teller job changed.
It went from being a dispensing machine for cash
(07:31):
to bank tellers becoming much more like salespeople, selling accounts, credit cards, loans, not always for the better.
And so the job changed.
It moved away from people doing the most routine cash-handling transactions, and much more toward adding value through these customer interactions.
Now, it is the case that the number of bank tellers is now in decline.
So this is not a universal law.
(07:53):
But I think there are three ways to think about, well, why doesn't automation just always replace us?
We have automated ourselves to a huge extent.
40% of employment was in agriculture at the turn of the 20th century.
And now, it's under 2%. And it's not because we're eating less; it's because we've become so much more productive in agriculture.
And so very few people need to do it.
But there are a few things that work in a different direction.
(08:16):
So one, of course, is that a lot of what automation does is raise productivity.
And when we have more productivity, people have higher incomes.
And when they have higher incomes, they spend more.
People are pretty insatiable.
People's consumption rises at about 1.01 times the rate of their incomes.
And so no matter how wealthy we get, we seem to never run out of wants.
And so that consumption itself creates demand,
(08:38):
and not just demand for more of the same, but for more experiences, more services, more luxuries, more travel.
And a lot of those things are labor-intensive.
A second thing to recognize is that many of these things that we think of as automation are actually just tools for us to do what we do better.
So whether you're a writer, or a researcher,
(08:59):
or a roofer using a pneumatic hammer, the last thing you'd want is for someone to take away the tools that you use to perform your basic work.
So many times, the technologies are very complementary to what we do, because they allow us to focus on what we're really good at and get rid of some of the boring, repetitive stuff.
A third, and this is the hardest to anticipate,
(09:20):
is that we often use these technologies to create new goods and services that demand labor.
And some of those things have to do with technology itself.
Obviously, you need AI programmers, you need people who handle the hardware, and so on.
But often, it's something completely different.
It's a set of luxuries that are made feasible by new tools,
(09:40):
whether that's a virtual environment, whether that's everyone buying a personal assistant, whether that's everyone using the extra money to have new experiences.
Many of the jobs in which people work now are things that just didn't exist 80 years ago.
There are all kinds of medical specialties.
There are all kinds of new types of engineering,
(10:01):
but there are also new types of sommeliers, and counselors, and coaches.
And so in work with my coauthors, Caroline Chin, Bryan Seegmiller, and Anna Salomons, we estimate that about 60% of the work tasks that people do in 2020 were not present in 1940.
So I see no reason to thinkwe're going to run out of jobs.
(10:23):
But that's only half the story.
And the other halfis, well, what jobs?
What will people do?
And there, it's veryimportant to ask
whether the new technologiesthat we create, are they
going to make our labormore valuable, scarcer,
or are they going to complimentour expertise, our creativity,
our judgment?
Or are they going tocommodify us and make
(10:44):
us the last mile of thetask, where the machine does
the hard part and theperson is just the one
picking the thing off the shelfand putting it into a box,
or the person just makes surethat the truck, as it comes off
ramp, it doesn't run into apost, but most of the driving
is done by a vehicle, and so on?
And that's a real concern.
And we've seen both happen over the last 40 years.
(11:05):
So people who do professional, technical, and managerial work have, over the last four decades, been strongly complemented by computerization, because it has made access to information, computation, and research much cheaper and faster.
If you're a doctor, or a lawyer, or someone who has to oversee a large organization, having really cheap access to information and calculation
(11:27):
processing makes your labor more valuable.
It allows you to do what you do faster.
However, if you were an office clerical worker, or a person doing repetitive production work, many of those jobs have been eliminated.
And the people who did them, if they didn't have the opportunity to move up into the professions, let's say, many of them
(11:48):
ended up doing rather generic work: food service, cleaning, security, entertainment, recreation.
There's nothing wrong with that work.
It's valuable work, in fact.
However, because many people of sound mind and body can do those jobs, they're not well paid.
The labor is not specialized; it's not scarce.
(12:09):
And so that's the concern with these technologies: not that we're going to run out of work per se, but that they will reduce the value of the specialized skills that people have.
And the balance between how much they complement, how much they substitute, and for whom, that is the area, I think, of legitimate worry.
David, this is actually really interesting,
(12:30):
and it's raising many, many questions in my head.
So I'm going to go back a little bit to your inflection point.
I remember, when I was a kid, the first time they allowed us to use a calculator in a math test, my grandfather was appalled.
He said, if you use a calculator, what's left of the problem?
(12:51):
And I think a lot of it was about the definition of what is routine, what is creative, and what is problem-solving.
And I think you were getting to the point, with the inflection point we're at, that we're not really clear at this point about what is actually routine in what we do.
Stuff that we thought was actually creative
(13:13):
now looks like it's routine, because a machine is able to do it, and does it in a systematic way.
And that set of routine things is growing.
And so I think some of the fear we're having right now is, how much creativity do we have, and is there a limit to this?
So for those jobs that define us as humans, the ability
(13:33):
to rise above any other machine and do something that no one has done before, to think a new idea, I think there's another concern that machines are going to replace us there.
And I don't know what your thoughts are about this.
Yeah, so it is absolutely the case that the domain of things that can be done by machines is expanding, and probably, arguably,
(13:54):
expanding very, very rapidly.
Now, creativity is a high standard to hold most jobs to.
We all use creativity in various ways, but most of what we do is not creative.
Most of what we do is accomplishing things that need to be done, and that has always been valuable.
So when you say, I'm doing agriculture, I'm doing farming, I don't know if creativity
(14:15):
is the top thing on your list, although there's creativity involved.
But you'd say, look, I'm doing a lot of things that can't be done any other way.
And it's important work.
It needs to be done.
And then I think we also have to ask ourselves, and I think this is implied by your question, what is creativity?
What does that actually mean?
You'd think, well, writing a new paragraph, that's creative.
(14:36):
You have to produce a new sentence, and a new idea, and you've got to structure it.
And then we're surprised to discover that that can be simulated, replicated, by machines.
And a lot of what we do is actually, at some level, pretty predictable.
Now, I would say that there are important distinctions between what AI is doing and what people are doing,
(14:56):
and not just in how they do it, which obviously is pretty different, because we don't have the same level of horsepower for doing certain things, though we do other things much more efficiently.
But what AI is doing, or what something like ChatGPT in particular is doing (different AIs do different things), is a lot of prediction of what you would expect to see: what sentence should follow what,
(15:17):
what things should go together.
It isn't doing another thing that we do a lot of, which is verifying against fact, verifying against logic.
So it actually is missing a very important part of cognition.
You can say to ChatGPT, how many funerals did John F. Kennedy attend after he was assassinated?
(15:38):
And it will tell you.
It'll give you a list, and it won't be zero.
It'll just make something up.
And that's because there's no model in ChatGPT that says the world works this way, that there is such a thing as time and causality, and that there are rules of mathematical operations that constrain what can go together.
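A toy way to see the distinction David is drawing, again mine rather than the episode's: a bigram model learns only which word tends to follow which in its training text, so it can emit fluent-looking sequences while nothing anywhere checks them against time, causality, or fact.

    import random
    from collections import defaultdict

    corpus = ("kennedy attended the funeral . the president attended the ceremony . "
              "the funeral was held after the assassination .").split()

    # Learn the bigram statistics: which word follows which.
    nxt = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        nxt[a].append(b)

    # Generate by repeatedly sampling a plausible next word.
    word, out = "kennedy", ["kennedy"]
    for _ in range(8):
        word = random.choice(nxt[word])
        out.append(word)
    print(" ".join(out))  # grammatical-looking, but nothing verifies it

Scaled up by many orders of magnitude, prediction is still the objective; the verifying-against-fact step he mentions is simply not part of it.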
It's actually funny.
(15:58):
It's actually quite ironic that the frontier of computing is machines that are bad at facts and numbers.
You wouldn't think that was where we'd end up, but that's where we are right now.
And so I do think there are a lot of things that you can potentially use ChatGPT for that it doesn't do very well, and where it needs, at this point, to be disciplined.
(16:21):
You can ask ChatGPT to write a syllabus for your class.
It'll list papers that don't exist, by people who would be likely to write them.
So you can ask yourself, well, maybe one zone of complementarity is that I can use the machine for the first version of something, to write the draft of something, or to help me produce ideas
(16:42):
for X, Y, and Z. But then, I could add a lot of value in that setting.
But we don't know how much.
We don't know where that complementarity comes in versus substitution.
And of course, ChatGPT is just an example of a technology that's advancing quite rapidly.
As we think of what more and more AI can do,
(17:05):
the one thing that always comes to mind, especially in popular culture, is AI becoming human in some capacity, or having human intelligence or human emotions.
Do you foresee AI becoming emotionally intelligent in the future?
Is that possible?
And would we be able to overcome our mistrust of AI's lack
(17:28):
of ability to socially interact, or of its inability to more carefully explain how it comes to decisions, rather than these black-box algorithms where no one really knows what's going on, and we have all the bias issues that we see?
But is that possible, what we see in the movies?
OK, so I'm not a computer scientist, so I don't want to overextend my expertise here.
(17:49):
This is all hearsay about what it can and can't do.
I think we should ask whether we want it to have that capability.
It's not obvious to me at all that we do.
There's a real model in the AI and computer science community of replicating human capabilities with machines.
But we already have human capabilities.
If we just have machines with human capabilities, is that the best we can do?
Maybe we should think about what we could use them
(18:11):
for that we can't do.
So having them simulate the emotions of my teenage kids, I don't know if I want that.
But there are a lot of things that AI could do that could be very valuable.
So, for example, we could use it to make education more immersive, cheaper, more accessible.
(18:31):
We could use it to lower the cost of medical care and bring expertise to more people.
We could use it to augment people who are doing skilled work, and allow them to do a broader set of tasks.
And when I say skilled work, it could be construction, or diagnosis, or repair, or maintenance.
It doesn't have to be schoolwork; I don't mean to evoke the image of people at MIT sitting at their desks.
(18:53):
I mean, there's a lot of skilled hands-on work in the world in which we bring foundational skills, and then we want to be able to broaden our expertise.
I'll give you a very concrete example.
If you want to change some plumbing in your house, or rewire something, you can go to YouTube and watch a video that says, this is how you change the plumbing, this is how you do the wiring.
Most people shouldn't do that.
They're going to drown themselves or electrocute
(19:14):
themselves.
Not a good thing.
But if you have some foundational skills in dealing with plumbing or home electricity, you could then use that video to help you do more things.
So it's a complement to a basic skill set.
So you can imagine using AI to enable people to do more things.
An example would be: I'm a jet engine repair person, but I've always trained on Rolls-Royce engines,
(19:36):
and now I need to work on a Pratt & Whitney engine or a GE engine.
In medicine, we have this phenomenon of tasks always going to the most expensive person in the room.
Whatever it is, it always moves upward.
It's got to be done by the doctor.
And that's very difficult. That's very expensive.
Those doctors, there aren't that many of them.
They get paid a lot.
Why can't we devolve some of those tasks
(19:58):
to people who have expertise, but are slightly less rarefied?
And they could do a lot of them.
In fact, this has happened in medicine.
So a lot of tasks have been moved from only being done by the primary care physician to also being done by the nurse practitioner.
And you could imagine that you could use AI to enable more people with foundational skills
(20:19):
to do a broader set of valuable things.
And that would be terrific.
And here's a point I want to emphasize.
My colleague Daron Acemoglu makes this point very often.
What AI does is not up to AI, at least not at the moment; it's up to us, where we want to invest in it.
Where do we want to put it?
So China has the world's best surveillance state,
(20:40):
they brag, and I don't have any reason to disbelieve them; they can physically locate any one of their 1.3 billion citizens within an hour.
They also have the world's best content filtering system.
They can delete stuff as soon as you post it, even before it goes out on the web.
Now, that's an impressive technological achievement.
It wouldn't be feasible without AI.
But that's not because that's what AI does,
(21:03):
that's because that's where they've chosen to put their money in AI.
That may not be the highest use.
And so what capacities we develop depends a lot on what we prioritize, what the incentives are, and what people imagine they're supposed to be doing with this technology.
It's highly malleable.
I really like your point.
(21:23):
So I think it's definitely true that we are now in control of how we build these systems, but we do a lot of that without guidance.
There's TikTok collecting data about me as I flip through these videos, and so forth, and building a model of who I am, so that the next thing, they can sell me something.
I mean, this whole system isn't necessarily
(21:46):
targeting social responsibility; it's doing a lot of random things.
The science fiction story is that we are in a battle between us and machines, and someone is going to win at the end.
And part of that, I think, is not so much that the machines become so intelligent, but that we become stupid, because we
(22:07):
are becoming followers of what the machines tell us to do.
I follow the GPS.
I don't even look at the sights anymore where I am.
I no longer know where I am.
I'm just looking at this little device.
The same thing you described in the medical profession.
There are situations now where the doctor never looks at you;
(22:28):
they're actually constantly looking at the screen, looking at the next suggestions of what to do, and they're not looking at the patient.
So, this equilibrium: how do you think we're evolving, and how do we intervene to make that equilibrium something that we would like to have?
Yeah, so one way to say this: I mean, I think there are several things you're saying, but let me try a couple of them.
One is, and this goes back to [INAUDIBLE] previous question,
(22:51):
we often don't know what the machine is doing, as she said.
It's doing a thing, and it's a black box to us.
That's a very hard problem to solve.
I described earlier the fact that we have all this tacit knowledge, and we don't know how to codify it and write it in software.
Machines now have all this tacit knowledge that they don't know how to communicate to us either.
(23:13):
You can't just look into someone's head and look at the neurons, and say, oh, I see what they're thinking.
The information is there somehow, but we have no idea how it's represented.
Similarly with AI: it's just a billion weights on different connections.
You could look at that all you want.
You could stare at it for years.
It wouldn't tell you what the thought process is.
So it is opaque to us.
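For a feel of what a billion weights looks like at toy scale, here is a small sketch of mine, with arbitrary shapes and random numbers standing in for a trained model: the network's behavior is fully reproducible from its weight matrices, yet printing those matrices tells you nothing readable about how it reaches an answer.

    import numpy as np

    rng = np.random.default_rng(0)
    # A tiny two-layer network; real models have billions of such weights.
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(4, 1))

    def predict(x):
        # The output is perfectly computable from the weights...
        return np.tanh(np.tanh(x @ W1) @ W2)

    print(W1)                   # ...but the raw numbers carry no
    print(predict(np.ones(8)))  # human-readable thought process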
(23:33):
And why is that a problem?
Well, one is that it makes it hard to predict.
You can imagine, every once in a while, a self-driving car does something crazy, and we don't know why.
The other is that it's disempowering, or it makes it hard for humans and machines to interact, because where do you develop the expertise in the practice
(23:54):
if you're so reliant on the machine?
So, for example, we know there have been horrific air disasters because, basically, the autopilot stopped working, and pilots didn't know how to fly the systems without it.
And this is why you might have the same concern about writing.
So now, every time you want to write something, you'll just mumble it into ChatGPT,
(24:15):
and it'll turn it into something coherent, and then you can work on it from there.
In some ways, that's good.
We wouldn't want someone to not be able to do engineering calculations because they weren't good at division and multiplication.
Have a machine do that.
But it is a concern: will we be able to do the higher-level tasks if the foundational tasks are not things we master?
(24:36):
So I don't really need my kids to learn times tables anymore, at least not memorize them to the extent they used to, but they do need to understand the basic operations of addition, subtraction, multiplication, and division to do the rest of math.
So it is a concern how these things will interact.
I think there's an optimistic story that's often told that says, oh, well, human and machine together
(24:57):
are better than either one of them separately.
First of all, that may be true only for a time.
And second, humans are generally more expensive than machines.
So it may be that human plus machine is better ignoring cost, but you may say, well, after accounting for costs, I think we'll just take the machine.
(25:17):
So it is hard to predict where this goes.
And I try to discipline myself on this, too, in the sense that I know my foresight about what will be created and augmented is much less good than my foresight about what is going to be substituted.
And I can predict lots of things that will be substituted.
I wanted to bring it back to something both of you mentioned.
(25:40):
Munther, you said it, this idea of robot against person.
But it almost seems, in some ways, it's going to be person against person using AI, or country against country using AI.
And David, you brought up China, which has decided to really invest in AI to be a surveillance state.
A lot of the facial recognition systems they're using
(26:02):
came from US tech, and they're using them against weaker people.
We've seen these very terrible consequences.
And so in a world where we're trying to create something new, this new AI, where, in a perfect world, we'd be able to cross borders and share IP, what's the recourse to enforce this cross-border IP
(26:25):
protection?
What could be the collateral costs that we're going to have to accept, and how necessary is it going to be to protect national defense, whatever country you are in?
Because it does seem like it's almost going to be country against country building these robots.
Yeah, I mean, I agree.
It's really people against people.
It's not machines against people, at this point,
(26:46):
but people against people using machines.
And the concern is that the technology is not expensive, and it's not that hard to get it to do what you want.
So you can make an analogy: oh, well, in the nuclear age, we had very successful nuclear containment.
And we have had that.
For over seven decades, there have been no additional uses
(27:09):
of thermonuclear weapons.
And even countries today, after many years, are struggling to produce the technology.
And so it's been an incredibly successful regime of control of a dangerous potential weapon.
And AI is not like that, because hardware is cheap and getting cheaper.
The tools are easy to access and getting easier.
(27:32):
The barriers to entry are really, really low, so it's extremely difficult to contain it in the way you might want to if you were very scared about it.
And I do think we're going to face a lot of challenges, and not just at the level of country versus country, but even at the level of fake news versus real news, at the level of misinformation, at the level of persuasion,
(27:55):
of overcoming people's barriers to trust when they shouldn't be overcome.
I do think we've produced a technology whose repercussions we're not necessarily especially well equipped to handle immediately.
Now, of course, we've said this about many technologies in the past.
People thought TV would rot everybody's brains.
(28:16):
People thought the Walkman, the little personal stereo system, would turn everyone into urban zombies, totally tuned out from the environment, and so on.
That hasn't happened.
But I do think social media has had real consequences that we haven't handled especially well.
And so I think there's reason to think it's going to take a while for us to figure this out.
And it is another irony.
(28:38):
I mentioned the irony that we have machines now that are great at language and terrible with numbers and facts.
And it is also the case that we live in an era with more information at our fingertips than the world has ever had, and yet we seem to be less certain of truth, as a consequence of our ability to manipulate it.
And then at the country level, I mean,
(28:58):
the thing that the US has obviously recognized as the leverage point here is semiconductors.
This is the thing that all of this runs on.
The whole world, at this point, runs on semiconductors.
And it is still the case that semiconductors are arguably the most intricate technology that humanity has ever produced.
(29:19):
And the leading semiconductors of the day basically can only be produced by one company, which is TSMC.
Morris Chang was an MIT student; in fact, the building I'm sitting in is the Morris and Sophie Chang Building.
And TSMC uses another machine that's only made in the Netherlands, which itself has a set of suppliers,
(29:41):
300 deep, based all around the world.
So the leverage point that I think the US has correctly identified is that control of the production of semiconductors is the way to limit others' AI capacities, if that's your objective.
So David, just to bring it home to MIT
(30:02):
and to other educational institutions: there are many aspects of educating the masses that are going to be critical.
One aspect, obviously, is what you described, which is that automation or technology displaces people from certain jobs, but then you create new jobs.
Some of the people you displaced are not ready for the new jobs.
And I think the countries that do very well
(30:22):
are the ones that educate people ahead of time, or at least anticipate, so that the new cadre is able to tackle the new jobs and so forth.
How can universities, MIT and others, manage this educational piece?
Yeah, so some of it is education, and some of it is assistance.
(30:45):
So if you look across industrialized countries, everybody loses jobs.
And job loss is damaging in many ways, both economically and psychically.
And those are not independent.
But the cost of job loss varies a lot across countries.
It's much higher in the United States, I mean, for an individual being displaced, than it is, for example, in Denmark or Norway.
(31:07):
And that's because we do so little to assist people who have been displaced, either financially, or in terms of retraining, or even the psychological supports that are necessary to get back into the workforce.
So part of what we should be thinking about, some of it is exactly as you said, it's preempting, it's investing ahead of time, and some of it is assisting when the inevitable occurs.
(31:30):
So I think we need to think about both.
But then, going back to the education questions, I think it starts well before university.
Suppose you had to redesign the educational curriculum from the ground up now, because so many things that we used to think are important are now being done by machinery.
Some of the things that we teach are archaic, like lots
(31:52):
of memorization of facts and tables of numbers.
Some things actually are as valuable as they've ever been: quantitative reasoning, communication, the ability to adjudicate among facts, to reach conclusions, to form hypotheses.
So what do people need to do?
They need to be able to read.
(32:12):
They need to be numerically literate.
They need to be analytically literate.
And that's something that we haven't historically taught.
Now, it used to be that the skill was how to go to the library and find facts.
Now the problem is how to adjudicate and throw away most of the facts that are available to you, how to actually assemble them into something useful.
We've gone from a world of information scarcity
(32:33):
to information abundance, and the scarce capability is the ability to organize that information.
And by organize, I don't mean sort it alphabetically.
I mean tell a story, draw an inference, produce a hypothesis and ask, well, what facts fit with that?
And then, of course, to communicate that to people verbally, in the form of presentation, in the form of writing, even
(32:54):
writing assisted by ChatGPT, and to lead others.
So those skills will continue to be highly, highly valuable.
So we've got to start with that, foundationally.
And I think that's a K-through-12 thing.
As for colleges and universities: first of all, the majority of adults don't have a four-year college degree, and will not have a four-year college degree in America
(33:15):
in the next several decades.
At this point, under 40% of the labor force has a four-year college degree.
And that's rising very slowly.
So I think there's a lot to be done at the level of vocational training.
If we ask what's more at risk from the advances in AI: is it blue-collar construction, trades, repair, electrical work,
(33:36):
plumbing, or is it people who are doing management tasks?
I would much more say the latter.
So we should be investing in allowing people to use those other skills, a lot of skilled vocations.
And then in terms of colleges, I think this is where you want to reinforce those analytical skills, as well as expert skills.
(33:56):
And there's often a viewthat if you give people
a lot of expertise,you're making them,
in some sense, that's sospecialized, maybe it's narrow.
But in general, expertise iscomplementary to other things.
The more you know,the more you can
use that across other domains.
We don't want to say, oh,we over train these doctors.
They're too expert.
(34:17):
They can't do X, Y, and Z.Usually, the more you know,
the more you canlearn and apply.
A work by David Deming,an economist at Harvard
who has looked at thevalue of social skills
versus formal technicalskills, and many people
think, naturally, oh, STEMhas become so important.
(34:40):
It's all about STEM.
It turns out that managerial skills, at least in the last couple of decades, have become more important, but particularly interpersonal skills that combine with technical skills.
So it's not the person who is just
(35:01):
sitting in the back room, crunching numbers, but often the person who is translating between a body of expertise and other people.
So a lot of what doctors do is not just look at charts; they communicate with patients.
Or if you're a contractor designing houses, it's not just engineering calculations; it's figuring out what the person needs.
And that's true if you're an attorney, if you're a counselor, if you're a marketer.
So at that interface between the technology,
(35:22):
and the domain expertise, and then other people and their needs, that's where there are a lot of valuable things to do.
So when you think about where people are most useful, or most valuable (they're useful in many ways, but where are they going to be paid the most), it's where they can bring specialized expertise
(35:43):
to bear, not just in a technical sense, but in a translational sense, in an adaptive, interactive sense with other people.
Thank you for listening to this month's episode of Data Nation.
You can get more information and listen to previous episodes at our website, idss.mit.edu, or follow us
(36:07):
on Twitter and Instagram @MITIDSS.
If you liked this podcast, please don't forget to leave us a review on Spotify, Apple, or wherever you get your podcasts.
Thank you for listening to Data Nation from the MIT Institute for Data, Systems, and Society.