Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
What would you say if you were able to get
in a room with Sam Altman and Elon Musk at
a table, which seems even less likely than Putin and
Zelensky meeting face to face?
Speaker 2 (00:16):
I'd say, you know perfectly well that the stuff you're
developing has a good chance of wiping out people.
Speaker 3 (00:25):
Hi.
Speaker 1 (00:25):
Everyone, I'm Katie Couric, and this is Next Question. When
you hear the moniker Godfather of AI, you might think
of some malevolent sci-fi character. But in the real world,
this title belongs to doctor Geoffrey Hinton, my guest today.
(00:47):
He is the scientist whose early work on neural networks
cracked open the field of artificial intelligence. If that
sounds like it's over your head, well, honestly, it was
a bit over mine, but Hinton graciously explains it for
lay people like us. Last year, doctor Hinton and another
scientist, doctor John Hopfield, received the twenty twenty four
(01:11):
Nobel Prize in Physics, an award that honestly kind
of surprised them, given the fact that they're not really physicists.
But the prize is opening doors, he told us. In fact,
he has plans to meet with the Pope very soon.
Talk about access, right? So in this interview we talk
about his life's work, some of the pressures he faced
(01:35):
as a young person, and the remarkable family he comes from.
Talk about brainiacs, wow. And why he believes the choices
we make right now could actually shape the fate of humanity.
So here's my conversation with doctor Geoffrey Hinton. Doctor Geoffrey Hinton,
thank you so much for spending some time with me today.
(01:58):
It's a real honor to be able to.
Speaker 3 (02:00):
Talk to you.
Speaker 2 (02:00):
Thank you very much for inviting me.
Speaker 1 (02:02):
I know that last year you were awarded the twenty
twenty four Nobel Prize in Physics jointly with someone named
John J. Hopfield, for your quote foundational discoveries and inventions
that enable machine learning with artificial neural networks. What did
it mean to you, doctor Hinton, to be honored in
(02:25):
this way and have your work recognized?
Speaker 2 (02:28):
Well, I was actually very surprised that I got it
because it's in physics and I don't do physics. But
it means a lot to get a Nobel Prize for
every scientist, that's kind of the highest honor you can get.
Speaker 1 (02:38):
I think they probably, as you have said in the past,
doctor Hinton, need to have a Nobel Prize for computer science.
Speaker 2 (02:46):
There's something called the Turing Award, which is meant to
be like the Nobel Prize for computer science. But basically
Nobel made some prizes in his will and they don't
like to create any more. They created one more in economics,
but they don't want to dilute it further.
Speaker 1 (03:00):
And I know you won the Turing Award in twenty nineteen,
so that was an additional honor given to you. In
an interview last fall, you were hoping that the Nobel
Prize would mean that your views on artificial intelligence and
the role it will play in the future, and is
(03:22):
playing now for that matter, would be taken more seriously.
Have you found that to be the case?
Speaker 2 (03:30):
Yes, I think that is true. I think more people
are willing to talk to me. I managed to talk
to somebody on the Chinese Politburo. I've been talking
with Bernie Sanders, and in September I may get to
talk to the Pope.
Speaker 3 (03:43):
Wow.
Speaker 1 (03:44):
So you're really spreading your concerns far and wide. Do
you feel that when you have these meetings, doctor Hinton,
that, A, people take you seriously, which I'm sure they do,
but, B, do you feel as if you're making progress,
that these conversations may in fact lead to real change
(04:08):
or policy changes that will reflect your concerns?
Speaker 2 (04:14):
Yes, I think we are making progress. My main concern
has been not the short term risks of bad actors
misusing AI, which are very serious, but the longer term
risk of AI itself taking over. And I think until
quite recently, almost everybody thought that was just science fiction.
We didn't really have to worry about AI becoming smarter
(04:34):
than us and taking over from us. But I think
now many people have come to realize that's a very
real risk. So most of the experts believe that sometime
between five and twenty years from now, AI will get
smarter than people. And when it gets smarter, it won't
get a little bit smarter, it'll get a lot smarter.
And the problem is, we know very few examples of
(04:57):
smarter things being controlled by less smart things. Not small
differences in intelligence, like between a politician and a scientist, for example,
but big differences in intelligence. There's very few examples. In fact,
the only example I know is a mother and child,
a mother and baby.
Speaker 1 (05:13):
When I read about that time frame, doctor Hinton, I
have to say, I was pretty freaked out. I mean,
we're talking about not the distant future. We're talking about
the very near future, aren't we?
Speaker 2 (05:26):
Yes. It's important to realize nobody has a good way
of estimating these things, so all of the estimates are
just guesses. But there's sort of a consensus among experts
that between five and twenty years from now, it's quite likely
we'll get things a lot smarter than us.
Speaker 1 (05:41):
I want to talk to you about the dangers and
what you've been discussing with various people. But first I'd
like people to understand a little bit about your background,
which is quite extraordinary. You come from a family of
I would say, overachievers, not to mention brilliant minds. Your
(06:02):
great great grandfather was someone named George Boole, who developed
Boolean algebra. Your cousin Joan Hinton worked on the Manhattan Project.
Your great uncle Sebastian invented the jungle gym.
Speaker 3 (06:15):
The list goes on and on.
Speaker 1 (06:18):
Did you know you were destined for great things and
contributing to our knowledge in such an extraordinary way?
Speaker 2 (06:30):
I didn't know I was destined for it. I knew
there were a lot of expectations.
Speaker 1 (06:34):
In fact, your father sounds like he was a pretty
hard charging person with very high expectations for you. He
was a renowned entomologist, not to be confused with etymologists,
who study language. He was involved in the study
of insects, beetles in particular, I understand. And he once
(06:56):
said to you, if you worked twice as hard as me,
when you're twice as old as I am, you might
be half as good.
Speaker 2 (07:04):
He didn't actually say that just once; he said it
quite frequently before I went to school.
Speaker 1 (07:09):
How did that impact you as a young person? How
did that shape your pursuit of artificial intelligence and specifically
neural networks.
Speaker 2 (07:20):
It was said half in jest, but it was annoying.
I just had very strong expectations placed on me, and
I tried to live up to them.
Speaker 1 (07:29):
What do you think he would think about your being
awarded the Nobel Prize?
Speaker 2 (07:33):
I think he'd be very pleased and also slightly annoyed.
He was very competitive, and he'd be annoyed that his
son got something he hadn't got.
Speaker 1 (07:42):
Let's talk about your journey from a young man to
where you are now. You wanted to really duplicate the
human mind in your work with neural networks. So you
stumbled upon your area of expertise accidentally, didn't you?
Speaker 2 (08:04):
It wasn't completely accidental. It became obvious in the sixties
and seventies that we had a new way of doing
research on how the brain works. Up until then, you
could do experiments on the brain, or you could have
theories about how the brain worked, but it's very hard
to test these theories because they need to be complicated,
(08:25):
because the brain's complicated. And once computers got fast enough
to do simulations of networks of brain cells, then you
could start to have more elaborate theories of how it
might be computing and test them by doing computer simulations.
And so there is this kind of new form of
science which was testing theories of how the brain might
(08:47):
work using computer simulations. And it was obvious from the
beginning that would also give you a different way of
doing AI.
Speaker 1 (08:56):
You, I know, worked on this for many years.
Can I ask a dumb question? I hope it's okay.
Can you explain the role neural networks play in artificial
intelligence versus large language models and how they relate to
one another?
Speaker 2 (09:17):
Yes, okay, So let me give you a little bit
of history. For about the first fifty years of artificial
intelligence the second half of the last century, almost everybody
in artificial intelligence thought the way to make a machine
intelligent was to mimic logic. What you would do is
you'd have symbolic expressions things like sentences in the computer,
(09:38):
and you'd have rules for manipulating them so it would
work like logic. For example, if I say all men
are mortal, and I say Socrates is a man, you
can do some manipulations on those strings of words and
come up with Socrates is mortal. That's logic, that's Aristotelian logic,
(09:59):
and that was the sort of model for how
we were going to do AI in computers. There was
a very different theory, sort of utterly different, which was
instead of looking for logic as the inspiration for how
to make a computer intelligent, look at how the brain
actually works. We have a big network of brain cells.
We have billions of them. They have connections between them,
(10:22):
and they learn by changing the strengths of those connections.
So one of the neurons in this great big network,
all it has to do is decide when to go ping,
and it sends its ping to other neurons that it's
connected to, and it decides when to go ping by
looking at the pings it's receiving from other neurons. But
each ping it receives from another neuron, it gives it
(10:44):
a weight, which is the connection strength. So for example,
if I'm a particular neuron and you're another neuron and
you say ping, I might have a weight on that
that says, okay, when she says ping, that's a lot of
evidence I should go ping. Or let's suppose there's another neuron,
which is Donald Trump, and when he goes ping, I say,
that's a lot of evidence I should not go ping.
(11:05):
So basically, each neuron is looking at the pings coming
from other neurons. It associates a weight with each ping,
and it's like a vote. Other neurons going ping are
votes for me to either go ping or not go ping,
and then it decides whether to go ping. And I
think you can see that if you change the strengths
of those votes, those weights, then how neurons go ping
(11:27):
will change. And that's how your brain learns, and that's
how artificial intelligence now works. It simulates a big network
of brain cells like this, and large language models
are built on neural networks. Almost all AI now uses neural networks.
So we're simulating in the computer a big network of
(11:48):
brain cells, and the brain cells keep going ping, and
exactly when they go ping depends on the votes they
get from other brain cells that have gone ping or
from sense organs that have gone ping. And by changing
the strength of those connections, you can make it do anything.
The issue is how do you change the connection strengths
to make it do what you want?
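Doctor Hinton's weighted-vote picture of a neuron can be sketched in a few lines of Python. This is only an illustration of the idea he describes, not anything from the interview itself: the names, weights, and threshold below are invented for the example.

```python
# One neuron deciding whether to "go ping": each incoming ping from a
# neighboring neuron carries a weight (the connection strength), and the
# neuron fires if the weighted votes add up to enough evidence.

def goes_ping(incoming_pings, weights, threshold=0.5):
    """Sum the weighted votes from neurons that pinged; fire if big enough."""
    votes = sum(weights[name] for name in incoming_pings)
    return votes >= threshold

# Invented weights: one neighbor's ping is strong evidence to fire,
# another's is evidence not to fire.
weights = {"katie": 0.9, "trump": -0.8}

print(goes_ping({"katie"}, weights))           # True
print(goes_ping({"katie", "trump"}, weights))  # False (0.9 - 0.8 < 0.5)
```

Learning, in this picture, is just adjusting those weights so the network's pings come out right.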
Speaker 3 (12:10):
I kind of got that, but it's a good start
for me.
Speaker 1 (12:16):
Does a large language model give the data that the
neural connections need to make those connections?
Speaker 2 (12:26):
What the neural network does is, if you have the
symbol for Tuesday, that'll make a particular set of neurons
go ping. If you have the symbol for Wednesday, it'll
make a very similar set of neurons go ping. But
if you have the symbol for although, that'll be a
completely different set of neurons go ping. So the words
come in, they get converted into groups of neurons going ping.
(12:50):
These neurons then interact with each other. So, for example,
suppose the word may comes in. Let's ignore capitals. Let's
suppose we don't have any capital letters, so that lower
case may comes in. You don't know whether that's a
month or a woman's name. So neurons going ping for
nearby words influence which neurons go ping for may, and
(13:12):
if you have June and July nearby, they'll probably influence them,
so they're more like the neurons that should go ping
for a month. These neural activations that capture the meaning
activate neurons that are representing the meaning of the next word.
And so by using this neural network, if the connection
(13:32):
strengths are appropriate, when a bunch of words comes in,
you'll be able to predict the next word. And you
train the whole thing by asking, how should I change
the connection strengths? So when I see a string of words,
maybe I see fish and, and after fish and, chips
is quite likely, so you'd like to change the connection strengths
(13:54):
so that after you've seen the word fish followed by the
word and, you predict that a quite likely next word
is chips. That's how the whole thing works.
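The fish-and-chips example can be caricatured with simple counting. A real language model adjusts millions of connection strengths rather than keeping counts, so this is only a toy stand-in for the idea of learning to predict the next word; the training sentence is invented for the illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for each two-word context, count which word
# followed it in the training text (a crude stand-in for learning
# connection strengths).
def train(words):
    model = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)][c] += 1
    return model

text = "fish and chips fish and chips fish and lemon".split()
model = train(text)

# After "fish and", "chips" is the most frequent continuation here.
print(model[("fish", "and")].most_common(1)[0][0])  # chips
```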
Speaker 1 (14:03):
I got you. And in fact, I'm thinking about when
I'm texting now or writing an email, they suggest
words for me, given some of the past things I've written.
So are they using my personal large language model to
predict what I want to say? Because I might not
(14:25):
say fish and chips, I might say fish and lemon,
let's say. Well, the neural networks know, oh, doctor Hinton
is going to say chips, but Katie's going to say lemon.
Speaker 2 (14:38):
Since I left Google a few years ago, I don't
know in Gmail how much of your personal information they
used to make the prediction. My guess is, to be efficient,
they don't use your personal information. They're just using a
large language model. It's looking at quite a long context
of what you said and is just predicting features of
the next word. And you say fish and, it'll say
(15:01):
chips is pretty likely. If you're in the British House
of Lords, it'll say hunt is pretty likely. But it
probably won't say that unless it knows you're in the
British House of Lords, which it'll probably know because of previous
things you said in your Gmail.
Speaker 1 (15:21):
Hi everyone, it's me, Katie Couric. You know, lately I've
been overwhelmed by the whole wellness industry, so much information
out there about flaxseed, pelvic floors, serums, and anti-aging.
So I launched a newsletter. It's called Body and Soul,
to share expert approved advice for your physical and mental health.
Speaker 3 (15:42):
And guess what, it's free.
Speaker 1 (15:43):
Just sign up at katiecouric dot com slash Body and Soul.
That's K A T I E C O U R
I C dot com slash Body and Soul. I promise it
will make you happier and healthier. I wanted to talk
(16:08):
to you about your journey coming to Google and then
leaving Google. You sold your small startup to Google in
twenty thirteen. You went to work there to advance the
company's AI research, and then ten years later you left
so you could speak more freely about the risks of AI.
(16:33):
How did you come to that decision after a decade?
Was that a tough one, doctor Hinton?
Speaker 2 (16:40):
So the precise timing of when I left was so
that I could speak at a conference at MIT and
I could speak freely without worrying about the implications for Google.
But the fact that I left was because I was
seventy five. I'd been doing research very hard for fifty
five years, and I was worn out. I wasn't as
good at programming as I used to be, and
that was annoying. So I was going to retire when
(17:01):
I was seventy five anyway, but the precise timing was
so that I could speak freely, because I'd become acutely
aware of the risks of AI in my last few
years at Google, and I wanted to tell the public
about them. But I thought it was wrong to say
things that would impact a company while taking money from
the company. I didn't actually have any complaints about Google.
(17:22):
I didn't think they were doing anything wrong. They had
the large language models and they were not releasing
them to the public at that point, so it wasn't
to criticize Google, but it was to warn about the
long term risks of AI.
Speaker 1 (17:33):
Having said that, did you feel that Google was doing
enough to mitigate some of the risks? Did you feel
like it was enough of a priority for the company
during your decade there?
Speaker 2 (17:46):
People there were quite concerned about various risks. I think they
could have always done more, and I think for all
the companies, they could do a lot more. Google does
have some concern about the risk. Demis Hassabis, in particular, who's
the head of their research on things like large language models,
understands the long term threat of AI taking over, and
(18:07):
he's quite concerned about it.
Speaker 1 (18:09):
Let's talk about regulation. As you well know better than anyone,
AI is developing at breakneck speed. Governments, meanwhile, move slowly
and often don't fully understand the technology because they have
not been immersed in it like you have. And let's
face it, they just haven't really become familiar with it
(18:30):
because they're older and they haven't necessarily caught up. Even
if governments were motivated and vigilant. Do you think regulation
could ever catch up with AI's capacity and prevent its
most dangerous risks.
Speaker 2 (18:49):
I think it could certainly help. And I think we
have to distinguish two kinds of risks here. There's shorter
term risks to do with people misusing AI, and those
risks are things like cyber attacks or creating nasty viruses,
or creating fake videos. And for all of those shorter
(19:12):
term risks, each risk has a different solution. So for
things like viruses, an obvious partial solution is to take
the companies that manufacture things for you in the cloud,
and if you send them a sequence, they should look
at the sequence before they manufacture it and say, wait
a minute, this looks very like COVID. I'm not going
(19:34):
to manufacture something that looks like COVID, but they don't.
They should be forced to do that. So that's an
example of something we could do to make the risk
of terrorists producing bad viruses a little less severe. For
fake videos, initially people thought we could recognize fake videos,
(19:56):
but actually it's much easier in the long run, I think,
to be able to recognize that a video isn't fake.
So suppose there's a fake video with you
in it that pretends to be by you. What you
really need is at the beginning of that video, there's
a QR code. The QR code will take you to
a website, and if it's your website and the identical
(20:18):
video is on your website, we know it's a real video.
If it takes you to something that isn't your website,
or the video on your website isn't the same, we
know it's a fake video. That's feasible, and I think
that will come and all that work will be done
by the browser, of course. So that's a solution for
fake videos, or a partial solution. One problem here is
governments will not collaborate with each other on this stuff.
(20:41):
Maybe on viruses they will, but on fake videos or
on cyber attacks, governments are all busy doing it to
each other. America was a real pioneer in corrupting elections
in other countries. It's quite upset now that's come back
to bite it.
Speaker 3 (20:55):
Tell me more about that, doctor Hinton.
Speaker 2 (20:57):
Oh, so for many years America would try and manipulate
elections in other countries. It's fairly clear that the Russians were
trying to manipulate the twenty sixteen election. And I think
manipulating elections in other countries is a bad thing to do.
One of the few things Trump said that I agreed with.
Probably around twenty sixteen, someone said what did you think
(21:19):
about foreigners manipulating our elections? And Trump actually said, you
think we're so different? He was very well aware of that.
So we're not going to get collaboration on those things
between countries because they're doing it to each other. But
we will get collaboration on how to prevent AI from
taking over, because no country wants AI to actually take over.
(21:42):
So what I've been doing recently is trying to persuade
senior people, like people on the Politburo in China, or
the Pope or Bernie Sanders that we can get international
collaboration on this, because the techniques you need to prevent
AI from taking over the only way we can do
that is to make it so we build these super
(22:03):
intelligent AIs that do not want to take over. And
the question is how do you build an AI so
it will never want to take over? And the techniques
for doing that are somewhat different from the techniques for
making the AI smarter, and so countries won't share how
you make the very smartest AI because they're busy competing
(22:23):
with each other for things like cyber attacks and fake
videos and for lots of other things to do with manufacturing,
but they will collaborate on how do we prevent it
from wanting to take over from people. So, for example,
if the Americans developed a good technique for stopping a
super intelligent AI from wanting to take over, they would
(22:45):
want to share it with the Chinese and the Russians
and the British and the French and the Israelis, because
they don't want their AIs taking over either. So I
think the worst long term risk we have is AI
taking over. But the one piece of good news is
we can collaborate on that. And what I'd like to
see is institutions in different countries that collaborate with each
(23:06):
other on how do we make AI not want to
take over?
Speaker 1 (23:09):
That sounds wonderful, but is it realistic. Do you believe
that countries would in fact collaborate? And would you need
sort of a new entity for this mission, for this
idea of trying to put brakes on AI taking over?
Speaker 2 (23:27):
It's not putting brakes on it so much. We're not
going to put brakes on AI. It's too useful for
too many things. It'll increase productivity in a huge number
of industries. It'll be very helpful in improving health care
and improving education. And because of all the good things
it can do, we're not going to be able to
stop the development. So rather than think in terms of
putting brakes on it, think in terms of how do
(23:49):
you develop a form of it that is nice to people,
that can coexist with people. And I think we will
get collaboration on that. We're not going to get the
United States to collaborate for about another three and a half years;
I don't think under the Trump administration there'll be
any effective collaboration on things like that. But once we
(24:10):
get a reasonable regime in the States, we might get collaboration,
and I think other countries like France and Britain and Canada,
maybe South Korea, Japan, maybe even China will be willing
to collaborate on how do you make an AI that
doesn't want to take over?
Speaker 1 (24:26):
Speaking of China, some have argued that China's rapid push
into artificial intelligence, especially its willingness to deploy it in
surveillance and military contexts, makes reasonable global regulation impossible. You
believe that China would want to be a part of
(24:47):
this global collaboration.
Speaker 3 (24:50):
And if you believe that, why?
Speaker 2 (24:53):
I don't think China and the US will collaborate on
things like detecting fake videos or thwarting cyber attacks,
because their interests aren't aligned there. People only collaborate when
the interests are aligned, and on that their interests are opposed,
and they will not collaborate, but they will collaborate on
how do you prevent AI from taking over? Because
(25:15):
their interests are aligned in much the same way as
the Soviet Union and the United States. Their interests were
aligned in the nineteen fifties in preventing a global nuclear war.
It wasn't good for either of them, and so they did
actually collaborate in ways to prevent that.
Speaker 1 (25:32):
Europe has tried to get ahead of the curve with
sweeping rules like the EU AI Act. Do you think this
effort will have a meaningful impact?
Speaker 2 (25:44):
Yes, I think companies would like to have a global market,
and if Europe puts regulations on things like privacy or
hate speech or things like that, then I think that
will affect the AIs produced, because people want to have
a global market, so there'll be a tendency for people to
try and satisfy those regulations even in other countries. Now,
(26:06):
the regulations of Europe are fairly flimsy at present. So
for example, the European regulations on AI have a clause
in them that says none of this applies to military
uses of AI. So the European countries that manufacture arms,
like Britain and France, aren't willing to regulate the use
(26:27):
of AI for weapons.
Speaker 1 (26:29):
Why do you think that they have a clause excluding
the military because we know, for example, the US military
is actively developing autonomous weapons, which you believe should be outlawed.
Why don't countries realize the dangers these kinds of weapons pose?
Speaker 2 (26:49):
Countries do realize the dangers they pose, but they also
see the military advantages they pose. And a lot of
countries make money by selling arms. So the United States, Russia, China, Britain, Israel, France,
they make money by selling arms, and so they don't
want regulations on these things. Also, for very rich countries,
(27:11):
lethal autonomous weapons, that is, weapons that decide by themselves
who to kill or maim are a big advantage if
a rich country wants to invade a poor country. The
thing that stops rich countries invading poor countries is their
citizens coming back in body bags, which doesn't look good
for anybody. If you have lethal autonomous weapons, instead of
(27:33):
dead people coming back, you'll get dead robots coming back.
Speaker 1 (27:37):
What are your other concerns about the way you anticipate
AI transforming warfare over the next few decades?
Speaker 2 (27:46):
Oh, I think it's fairly clear. It's already transformed warfare.
If you look at what's going on in Ukraine, as Eric
Schmidt pointed out, a five hundred dollar drone can now
destroy a multimillion dollar tank. That's completely changed warfare. It's fairly
clear that fighter jets with people in them are a
silly idea. Now if you can have AI in them,
(28:09):
AIs can withstand much bigger accelerations and you don't have
to worry so much about loss of life. So I
think it will completely transform warfare. I think we will
see very nasty things happen with lethal autonomous weapons, and
after those things have happened, we may get regulations. So
(28:30):
with chemical weapons, very nasty things happened in the First
World War, and after that countries were willing to say, well,
I won't use them if you don't. And we got
the Geneva Conventions on chemical weapons and they've pretty much held.
They've been broken a few times. Britain broke them in
the nineteen twenties or thirties when it dropped mustard
gas on the Kurds. Saddam Hussein broke them when
(28:52):
he dropped mustard gas on the Kurds. It always seems
to be the Kurds that get it. But on the
whole they haven't been broken. Assad used them too, yes,
but on the whole they haven't been broken.
So in Ukraine, for example, people aren't using chemical weapons.
Speaker 1 (29:09):
So you're using that as an example of how there
could be some kind of agreement among nations that would
be effective?
Speaker 2 (29:19):
Yes, and it's happened with chemical weapons. I talked to
someone who used to be an army surgeon, and he
pointed out that there's a big motivation by countries who
are thoughtful not to use really nasty means of warfare
because eventually there's a peace and you have to negotiate
(29:40):
what happens when the war's over, and if a country's
used terrible methods, it's much harder to negotiate things. So actually,
even though they want to win the war, countries also
have a vested interest in not using methods that are
too terrible.
Speaker 1 (29:54):
I would love to talk to you about the economic
impact of AI, because I know that's a clear short
term risk that you have discussed. You have warned that
AI could wipe out jobs across nearly all fields, and
I know many people are really concerned about this, about
(30:14):
their livelihoods, about their children's livelihoods. What do you think
people need to understand about this technology and how it
will impact the way we work over the next few decades?
Speaker 2 (30:28):
So the first thing to say is this really isn't
a technological problem, it's a political problem. So if we
had a fair political system, when AI came along and
greatly increased productivity, when, for example, AI allowed one person to
do the work that five people used to do, that
should mean there's more goods and services for everybody. And
(30:51):
in some areas where the demand's elastic, it will.
In healthcare, for example, old people like me can absorb
endless amounts of healthcare, and if we make healthcare more efficient,
we'll just get more healthcare. That's great. But in other areas,
like I mean call centers or in doing research for
lawyers on similar cases, when one person could do the
(31:12):
work of five people, four people will be unemployed. With
increased productivity, it should be great. But within the system
we've got, we know what's going to happen. The owners
who introduce the AI are going to get richer, and
the unemployed people are going to get poorer. That's going
to increase the gap between rich and poor, and the
(31:34):
level of violence into society is strongly determined by the
gap between rich and poor. That's what we're seeing in
the States now. The gap between ordinary working people and
hedge fund managers became extreme, and that's what's providing the medium
in which Trump's kind of populism thrives. It'll get worse
(31:56):
as we get more AI and the working class get
an even worse deal.
Speaker 1 (32:01):
I'm really asking this for my children, and I'm curious
what jobs will be more at risk and what jobs
would be less at risk. In other words, if you
were advising a college graduate today, doctor Hinton, careers to
stay away from and careers to gravitate towards, what would
(32:22):
they be?
Speaker 2 (32:23):
So I should start by saying I'm not really an
expert on this, and this is all guesswork.
Speaker 3 (32:27):
Okay, Well, that's okay.
Speaker 2 (32:29):
Yeah, it's obvious that working in a call center where
you're badly paid and badly trained is not a very
good job. Because AI is going to be able to
do the job better quite soon. It'll know much much
more about the right answers to the questions that people
are asking. It'll be more patient. It's just a better
way of doing that job. It's not quite there yet,
(32:50):
but it's sort of on the verge of being there now. Over the
next few years, I think jobs in call centers are
very unsafe. Similarly, paralegals, people who help lawyers
find similar cases, are already pretty much out of work, and
even junior lawyers now are finding it hard to get
jobs, because for many of the big law firms, AI
(33:11):
is doing the kind of grunt work that junior lawyers
would be started off on. Really good programmers who design
complicated systems, they're still in business. But sort of everyday
ordinary programmers who just write some code to achieve some
straightforward goal, their work's already in danger. So Microsoft,
(33:32):
for example, I think, laid off people because it can
get AI to do that routine programming, and lots of
companies are now saying they're going to hire less people
to do jobs like that. There are things to do
with human dexterity, so jobs like plumbing, particularly plumbing in
an old, awkward house. I have an old Edwardian house,
(33:55):
and fixing the plumbing in that is something that will
still need people for quite a long time, I think.
But eventually machines will get dexterous, maybe in ten or
twenty years, and even that won't be safe.
Speaker 1 (34:07):
I would also imagine that jobs that require high emotional
intelligence and interpersonal skills, you know, a nurse, a doctor, someone
who is there to interface with someone in a supportive role,
that those jobs couldn't really be replaced by computers, could they?
Speaker 2 (34:30):
They can and they will. Already, if you
let people interact with a doctor or with an
AI and ask them which is more empathetic, the AI is
ranked as much more empathetic.
Speaker 3 (34:48):
That's depressing.
Speaker 2 (34:50):
Yes. They're going to get better than
us at everything, which is why it's rather urgent to
figure out whether, when they're better than us at everything
and more powerful than us, there's a way we can
coexist with them, or whether we'll just be history.
Speaker 1 (35:08):
I wanted to ask you about creative fields. Selfishly, my
daughter is a television writer for scripted television. My other
daughter is getting her PhD in history. Should they be
looking for a different line of work?
Speaker 2 (35:24):
Not right now. I have a close friend who's a
television script writer, and so we discussed this quite a lot.
Right now, they're not as good. My belief is, look
again in ten years' time, and they'll be able to write
very clever scripts too, with twists at the end and things.
They can't do it now, but they're getting better all
(35:46):
the time.
Speaker 1 (35:46):
And what about a history PhD?
Speaker 2 (35:50):
They already know a lot more than us. They're
a lot more intelligent than us in the sense that if
you had a debate with them about anything, you'd lose.
Being smarter than us, which they will be, they'll be
better at manipulating people. And they've learned all these manipulative
skills just from trying to predict the next word in
all the documents on the web, because people do a
(36:12):
lot of manipulation, and AI has learned by example how
to do it.
Speaker 1 (36:16):
Do you think that I'll be replaced by AI? Journalists
are worried about that, obviously.
Speaker 2 (36:21):
Oh yes, I think you're replaceable, but not just yet.
I'd say you have ten or twenty years.
Speaker 3 (36:27):
I have a few more years.
Speaker 2 (36:29):
You have a few more years.
Speaker 1 (36:30):
Okay, good well, thank goodness, since I'm sixty eight, but
I feel bad for all the young journalists who want
to do what I do.
Speaker 2 (36:36):
I agree.
Speaker 1 (36:43):
Hi everyone, it's me, Katie Couric. You know, if you've
been following me on social media, you know I love
to cook, or at least try, especially alongside some of
my favorite chefs and foodies like Benny Blanco, Jake Cohen,
Lidey Heuck, Alison Roman, and Ina Garten. So I started a
free newsletter called Good Taste to share recipes, tips, and
(37:04):
kitchen must-haves. Just sign up at katiecouric dot com slash
good taste. That's K A T I E C O
U R I C dot com slash good taste. I
promise your taste buds will be happy you did. You
(37:28):
talk about this being a political problem versus a technological problem.
So is something like universal basic income a solution to
the fact that so many of these jobs will be
wiped out by AI.
Speaker 2 (37:46):
It's not a solution, but it's a good band aid.
The point is you've got to stop people starving. They've
got to be able to pay the rent, and universal
basic income will help with that, but it doesn't solve
the problem because for most people, their sense of their
own worth is related to the job they do, and
(38:09):
if they're unemployed, they lose that sense of worth and
universal basic income doesn't deal with that. It stops them
starving and they can pay the rent, but who they
are has been seriously damaged by them losing their job.
Speaker 1 (38:23):
Yeah, it's a huge problem. As you noted, I want
to talk a little bit about some of the positive
aspects of AI so people don't weep in despair after
watching this. I know healthcare is an area that you're
very excited about as am I. I know you lost
two wives to cancer, one from ovarian, one from pancreatic.
(38:47):
My husband died of colon cancer when he was just
forty two years old. And so the diagnosis of early
stage cancers and breakthroughs and all kinds of diseases and
early diagnosis for those is something I am very very
excited about. Can you talk for a moment about how
(39:09):
this will impact healthcare and scientific breakthroughs in terms of diseases.
Speaker 2 (39:17):
Yes. So there are many different ways in which it'll help.
One obvious way is in interpreting medical scans. So I
made a prediction in twenty sixteen that by now all
medical scans would be read by AI. That was a
bit over enthusiastic. I was off by a factor of
two or three in the timescale. It's going
to happen, but it hasn't happened yet. But already, places
(39:40):
like the Mayo Clinic have many, many different AIs helping
doctors interpret scans. AIs will eventually be able to see
much more information in a scan. So there's a nice example.
Ophthalmologists make a scan called a fundus image of your
retina, the back of your eye. An AI can look
at a scan of the back of your eye and make a
(40:03):
moderately good prediction about whether you're going to get a
heart attack. Doctors didn't know that was possible. Or an
AI can look at this image of the back of
your eye and it can make a pretty good prediction
of what sex you are just from the image of
the back of your eye. Doctors didn't know that was possible.
Any individual doctor can't look at more than a few
tens of thousands of images. It just takes too long.
(40:25):
So they're going to be tremendous there. They're going to
be tremendous in designing new drugs. They're really good at
doing things like saying how long should people stay in
hospital before you discharge them. If you discharge them too soon,
they get sick again. If you discharge them too late,
other people can't get the hospital bed. And there's lots
(40:45):
of information in the data to help you make that
decision better. And AI is being used for things like that.
Speaker 1 (40:51):
Now, can you talk a little bit about drug development
and drug breakthroughs and different approaches to disease, whether they're
neurodegenerative diseases like ALS and Parkinson's, or various cancers where
you have more personalized, targeted therapies, whether it's immunotherapy, boosting
(41:14):
the immune system, what role do you see AI
playing in all of those things?
Speaker 2 (41:20):
AI is going to be crucial to all of those things.
So in immunotherapy, for example, what you'd like to do
is train your own immune system to zap the cancer. Your
own immune system is pretty good at not zapping your cells,
much better than things like chemotherapy, which just sort of
zaps everything and hopes the cancer cells die and the
(41:41):
other ones don't. In order to train it, you need
to tell it which are the cancer cells. An AI
is going to be very helpful in making that more efficient.
AI is also going to be very helpful in designing new drugs.
So the team at DeepMind, led by Demis Hassabis, came
up with a way of looking at the sequence of
(42:01):
a protein and predicting how it would fold up, and
the shape it folds up into determines how it functions.
So when you design a new drug, you want to
find something that will interact the right way with cells
that are already in your body, and to do that you
need to know how it'll fold up. The work done
at DeepMind makes us much better at doing that. And
(42:23):
they have now hived off a company that's going to
be designing new drugs, and I think maybe not
in the next year or two, but in the somewhat
longer term, we'll get a whole range of better drugs.
Speaker 3 (42:36):
That's so exciting to me.
Speaker 1 (42:38):
So we have that to look forward to, to stop
a lot of people from untold suffering, and to potentially
come up with life saving therapies which I know you
wish existed when you lost people to cancer yourself. I
certainly wish those had existed when my husband got sick.
(42:59):
I wanted to share a portion of your Nobel acceptance
speech if I could.
Speaker 2 (43:05):
There is also a longer term existential threat that will
arise when we create digital beings that are more intelligent
than ourselves. We have no idea whether we can stay
in control, but we now have evidence that if they're
created by companies motivated by short term profits, our safety
(43:25):
will not be the top priority. We urgently need research
on how to prevent these new beings from wanting to
take control. They are no longer science fiction.
Speaker 1 (43:38):
I'm curious if you could talk about companies that are
putting AI into the world, and if you feel they
are doing enough to mitigate the enormous risks that you
have outlined so far.
Speaker 2 (43:56):
I don't feel any of them are doing enough. Anthropic was
set up by people who left open AI because they
disapproved of how little research OpenAI was doing on safety,
even though OpenAI was set up with the explicit goal
of developing AI safely. Anthropic puts a lot of work
into making sure that AIs are safe, but they could
(44:17):
do even more. Open AI originally was very concerned with safety.
It was created by Sam Altman and Elon Musk and
a former student of mine, Ilya Sutskever, and Greg Brockman, with
the explicit goal of creating safe AI. And as time
went by, they got more and more concerned with making
(44:38):
the AI smarter and less concerned with making it safer,
and so Sam Altman has moved a lot in the
direction of let's develop it fast and not worry so
much about safety. There's a very funny court case happening
now where Elon Musk is suing Sam Altman for not
living up to the original goal of developing AI safely. Yet
Elon Musk himself is developing AI without much concern for safety.
(45:02):
At the bottom of the heap are companies like Meta and
X, which are developing AI without much concern for safety.
Speaker 1 (45:13):
What would you say if you were able to get
in a room with Sam Altman and Elon Musk at
a table, which seems even less likely than Putin and
Zelensky meeting face to face.
Speaker 2 (45:27):
I'd say, you know perfectly well that the stuff you're
developing has a good chance of wiping out people. You're
willing to say that in private. You should be putting
much more effort into developing it in a way that
will keep it safe. And Musk is right. I think
(45:48):
that Altman should be putting more effort into it, but
he should apply the same logic to himself.
Speaker 3 (45:53):
What do you think is keeping them from doing that? Greed?
Speaker 2 (45:57):
Basically, yes, they want to make lots of money out
of AI. They also wanted to be the first to
make super smart AI, so it's not just the money.
It's the sort of the excitement of making something more
intelligent than us. But it's very dangerous.
Speaker 1 (46:16):
So perhaps a combination of greed and ego. Yeah, I
wanted to ask you about the ethical obligations of scientists,
something that you've spoken about a lot and have also
touched on in this conversation. When you look at the young researchers
who are now pouring into AI, what do you hope
(46:38):
they understand about the gravity and the potential downside of
artificial intelligence.
Speaker 2 (46:46):
I think they understand better than older people like me.
Many of the best young researchers are very concerned about
it because they can see clearly that it's likely we'll
develop things much smarter than us, and we don't know
how to prevent them from taking over from us. It's
that combination. These things are going to happen because there's
(47:06):
so many good uses for them, like detecting cancer early,
or new treatments for cancer allowing your immune system to
fight the cancer better. That's why we're not going to
stop the progress. But they also understand that it's just
implausible to say that we'll have extremely intelligent assistants that
(47:27):
can create their own subgoals, figure things out for themselves,
make plans for themselves about how to get stuff done,
and won't fairly quickly realize that if they just got
rid of us, life would be much easier.
Speaker 1 (47:38):
When you look at the future, doctor Hinton, given the
conversations you've been having, given the people you're meeting with,
given your understanding of humans and how they operate and
how they solve problems, and whether or not their better
(48:00):
angels prevail, are you optimistic that we will be able
to ensure that artificial intelligence is a force for good,
or at least can be controlled.
Speaker 2 (48:20):
I'm more optimistic than I was a few weeks ago. Really, yes,
and it's because I think there is a way that
we can coexist with things that are smart and more
powerful than ourselves that we built. Because we're building them
as well as making them very intelligent, we can try
(48:41):
and build in something like a maternal instinct. So, like
I said earlier, the only example I know of a
much more intelligent thing being controlled by a much less
intelligent thing is a baby controlling a mother, and the
baby can control the mother because of a lot of
things that evolution wired into the mother. The mother can't
bear the baby crying. The mother really really wants that
(49:03):
baby to succeed and will do more or less anything
she can to make sure her baby succeeds. We want
AI to be like that. If you took a mother
and said would you like to turn off your maternal instinct?
Most mothers would say no because they'd realize if I
did that, my baby would die, and I don't want
my baby to die. So I think that's a ray
(49:24):
of hope that I hadn't seen till quite recently. Completely
reframe the problem of how we coexist with them. Don't
think in terms of we have to dominate them, which
is this techbro way of thinking of it. Think in
terms of we have to design them so they are mothers,
and they will want the best for us. They will
want us to achieve the most we can achieve. Even
(49:45):
though we're not very bright. We're so used to thinking
of ourselves as the apex intelligence. It's very hard for
most people to conceptualize the world in which we're not
the apex intelligence. We're the babies and they're the mothers.
Speaker 1 (49:58):
Well, that is a fascinating way to look at it. I'm going to
continue to think about that, Doctor Jeffrey Hinton, It's been
such a pleasure to talk with you. Maybe we can
do it another time as all of this develops, and
hopefully you're right, there can be some collaborative effort among
nations that will not put the brakes on AI, but
(50:21):
make sure that it doesn't take over and make us
virtually obsolete.
Speaker 2 (50:27):
Thank you for inviting me, and thank you for all
your excellent questions and your little interventions to stop me
talking technobabble.
Speaker 1 (50:38):
Thanks for listening everyone. If you have a question for me,
a subject you want us to cover, or you want
to share your thoughts about how you navigate this crazy world,
reach out send me a DM on Instagram. I would
love to hear from you. Next Question is a production
of iHeartMedia and Katie Couric Media. The executive producers are me,
(50:59):
Katie, and Courtney Litz. Our supervising producer is Ryan Martz,
and our producers are Adriana Fazzio and Meredith Barnes. Julian
Weller composed our theme music. For more information about today's episode,
or to sign up for my newsletter, Wake Up Call,
go to the description in the podcast app, or visit
(51:20):
us at katiecouric dot com. You can also find me
on Instagram and all my social media channels. For more
podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or
wherever you listen to your favorite shows.