
February 12, 2025 36 mins

Geoffrey Hinton is a computer scientist, cognitive psychologist, and winner of the Nobel Prize in Physics. His work on artificial neural networks earned him the title, ‘Godfather of AI,’ but in recent years, he’s warned that without adequate safeguards and regulation, there is an “existential threat that will arise when we create digital beings that are more intelligent than ourselves.” Hinton sits down with Oz to discuss his upbringing, research, time at Google and how his experience with grief informs how he thinks about the future of AI.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for tuning into Tech Stuff. If you don't recognize
my voice, my name is Oz Woloshyn, and I'm here because
the inimitable Jonathan Strickland has passed the baton to Karah
Preiss and myself to host Tech Stuff. The show will
remain your home for all things tech, and all the
old episodes will remain available in this feed.

Speaker 2 (00:18):
Thanks for listening.

Speaker 3 (00:20):
I was in a hotel in California and I saw
that the phone lit up, and I thought, who's calling
me at one o'clock in the morning, And then this
Swedish voice came on. Then they said I won the
Nobel Prize in Physics, and I thought, this is very odd.
I don't do physics, so that's when I thought it
might be a prank.

Speaker 1 (00:38):
Geoffrey Hinton won the twenty twenty four Nobel Prize in Physics,
an honor held by Albert Einstein and Marie Curie. A
certain J. Robert Oppenheimer was shortlisted but never won.

Speaker 3 (00:50):
My big hope was to win the Nobel Prize in
Physiology or Medicine for figuring out how the brain worked.
And what I didn't realize is you could fail to
figure out how the brain worked and still get a
Nobel Prize anyway.

Speaker 1 (01:02):
Welcome to Tech Stuff. This is The Story, with our guest,
the Nobel laureate Geoffrey Hinton. Each week on Wednesdays, we
bring you an in depth interview with someone who's at
the forefront of technology or who can unlock a world
where tech is at its most fascinating. His recent Nobel
Prize win was for quote foundational discoveries and inventions that

(01:24):
enable machine learning with artificial neural networks. Now, artificial neural
networks are learning models inspired by the network of neurons
present in the human brain, and Hinton's desire to figure
out the brain was a key inspiration for his pioneering
work on AI. I was particularly fascinated by Hinton because

(01:44):
his work went completely against the mainstream of computer science
for decades, and yet he stuck to his guns. It's
an incredible story of dedication in the face of personal loss.
Also fascinating is Hinton's relationship to his own creation. Here's
what he said at the Nobel Prize banquet.

Speaker 3 (02:04):
There is also a longer term existential threat that will
arise when we create digital beings that are more intelligent
than ourselves. We have no idea whether we can stay
in control. But we now have evidence that if they
are created by companies motivated by short term profits, our
safety will not be the top priority. We urgently need

(02:28):
research on how to prevent these new beings from wanting
to take control. They are no longer science fiction, thank you.

Speaker 1 (02:40):
So when I got the opportunity to sit down with
Geoffrey Hinton, I wanted to know how he went from
someone who wanted to understand the relationship between the mind
and the brain to someone who paved the path for
AI as we know it. And he obliged me by
telling me about his trajectory from student to researcher, to professor,

(03:01):
to Google employee to finally AI safety advocate.

Speaker 2 (03:06):
Am I right in thinking, with all due respect to Steve
Jobs and Bill Gates and Mark Zuckerberg, that you were,
in a sense, the original college dropout?

Speaker 3 (03:15):
Not in the sense they dropped out, because what I did
was, I went to Cambridge and after a month
I dropped out, but then I went back the next year.
And then while I was doing my PhD, I dropped out,
but then I finished it. So I'm not like them.

Speaker 4 (03:30):
I went back. I'm a failed dropout.

Speaker 2 (03:34):
A failed dropout, that's good. What were the reasons?
Was it uncertainty or ambivalence or curiosity?

Speaker 3 (03:42):
When I first went to Cambridge, it was the first
time I'd ever lived away from home, and my image
of myself had always been that I was one of
the clever ones. And when I went to Cambridge, I
wasn't one of the clever ones. Everybody was clever, and
I found it very stressful. I worked very, very hard,
so I was working like twelve hours a day doing science,

(04:04):
and the combination of working very hard to keep up
and living away from home for the first time was
just too much for me.

Speaker 2 (04:12):
So there was a fantastic profile of you in the
New Yorker which said, quote, Hinton tried different fields but
was dismayed to find that he was never the brightest
student in any given class, which made me smile. But
I guess dismay and stress are quite close cousins,
And I suppose the stakes for you were high given
the family you came from. Can you speak a little

(04:32):
bit about that.

Speaker 3 (04:33):
Yes, I had a lot of pressure from my father
to be an academic success, and my mother sort of
went along with it. So from an early age I
realized that's what I had to achieve, and that's a
lot of pressure.

Speaker 2 (04:49):
How did they exert that pressure? How are you aware
of it?

Speaker 3 (04:53):
My father was a slightly strange character. He grew up
in Mexico during all the revolutions without a mother, somewhat odd.
Every morning when I went to school, not every morning,
but quite often, as I left, he would say, get
in there pitching. If you work very hard, when you're
twice as old as me, you might be half as good.

Speaker 4 (05:14):
Well, that's the sort of pressure.

Speaker 2 (05:18):
Did you find that motivating?

Speaker 4 (05:21):
I found it irritating, but I think it probably was motivating.
He very inconsiderately...

Speaker 3 (05:27):
He died while I was writing my thesis, and he
never saw me being a success.

Speaker 2 (05:34):
You were at Cambridge, you left briefly and you came back.
I think you settled on experimental psychology as your degree.

Speaker 4 (05:42):
In the end, I was doing natural sciences.

Speaker 3 (05:45):
I started off doing physics, chemistry and crystalline state, because
of the success in decoding the structure of DNA. Crystalline
state was a big thing, and I left after a month.
Then I reapplied to do architecture. I've always liked architecture,
and after a day I decided I was never going
to be any good at architecture because I wasn't

(06:06):
artistic enough. I loved the engineering, but the artistic bit
I couldn't do very well. So I switched back to science.
But then I did physics, chemistry and physiology, and I
really liked physiology. I'd never been allowed to do biology
at school.

Speaker 4 (06:21):
My father wouldn't allow it.

Speaker 3 (06:23):
Why not? Ah well, he said, if you do biology,
they'll teach you genetics. And he was a Stalinist and
he didn't believe in genetics. Now, he was also a
Fellow of the British Royal Society in biology who didn't
believe in genetics.

Speaker 2 (06:41):
Gosh, a complicated man. Yes, but I mean, he really, you're
not exaggerating, he said to you, you can't study biology?

Speaker 4 (06:50):
Now.

Speaker 3 (06:50):
He had other reasons which weren't so bad, which is,
you can always pick up biology when you're older. What
you can't pick up when you're older is math. And
I think that's probably true, and so that was a
more valid reason.

Speaker 2 (07:04):
Yeah, yeah. So did you end up graduating from
Cambridge with a degree in psychology?

Speaker 3 (07:10):
So I did physics, chemistry and physiology for a year,
and I did very well in physics. I got a
first in physics. That's obviously a good predictor of a
Nobel Prize.

Speaker 2 (07:22):
Said, your tutors would have been there.

Speaker 3 (07:24):
Then I dropped it all and did philosophy.
I did a year of philosophy and I developed strong antibodies,
and then I switched to psychology, and so my final
degree was in psychology.

Speaker 2 (07:39):
Mhm. And was there a single question or set of
questions that you were in search of?

Speaker 3 (07:45):
Yes, I wanted to know how the brain worked,
and how the mind worked, and what the relationship was.
I decided fairly early on that you were never going to
understand the mind unless you understood the brain.

Speaker 2 (07:58):
Was that a popular view at the time.

Speaker 4 (08:00):
No, not really.

Speaker 3 (08:01):
There was this kind of functionalist view, basically the
view that came from computer software, which is that the
software is totally different from the hardware, and what the
mind is all about is software, the heuristics you use
and the way you represent things. The hardware has got
nothing to do with it. That's a completely crazy view,
but it seemed very plausible at the time, and so

(08:22):
the computers we designed so that we could program them
had programs as a completely separate domain from the hardware.

Speaker 4 (08:30):
But that's not how the real brain is.

Speaker 2 (08:33):
It's funny, this constant dance between us expecting our
computers to be like us, and then expecting us to
be like our computers, right, a kind of continual
dance between those two things.

Speaker 3 (08:46):
Yes, well, you always try and understand things in terms
of the latest technology. So when telephones were new, the
brain was clearly a very large telephone switchboard.

Speaker 2 (08:58):
But is it different this time in our AI has
taken off and become ubiquitous. Do you think this is
indeed more than the telephone?

Speaker 3 (09:07):
Yes, I think these artificial neural networks we're training are
in many respects quite like real neural networks. Obviously, the
neurons are much simpler. There's all sorts of properties that
are different in the brain, but basically they're working in
the same way. They learn things by changing connection strengths
between neurons, just like the brain does.
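
To make the phrase "learning by changing connection strengths" concrete, here is a minimal sketch in Python: a single artificial neuron whose two connection weights are nudged by gradient descent until it reproduces a toy logical-AND dataset. The data, learning rate, and training loop are invented for illustration; this is not code from the episode or from Hinton's work.

```python
# Toy illustration (invented data): a single artificial neuron learns by
# repeatedly adjusting its "connection strengths" (weights).
import math

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]   # toy dataset: logical AND
targets = [0, 0, 0, 1]

w = [0.0, 0.0]   # the neuron's connection strengths
b = 0.0          # bias
lr = 0.5         # learning rate (arbitrary choice)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(2000):
    for (x1, x2), t in zip(inputs, targets):
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)   # neuron's output
        err = y - t                              # gradient of the loss at the pre-activation
        # Learning = adjusting each connection strength against the gradient.
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b    -= lr * err

print([round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2) for x1, x2 in inputs])
# The outputs approach [0, 0, 0, 1] as the weights settle.
```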

Speaker 2 (09:27):
And when does that question for you of wanting to
understand how the brain works really really begin?

Speaker 3 (09:33):
When I was at high school, so even before I
went to Cambridge, I had a very bright friend who
was always smarter than me, called Inman Harvey, who came
into school one day and said, maybe memories in the
brain are spread out over the whole brain. They're not
localized, like a hologram, because with holograms... So he
was trying to understand memories in the brain in terms

(09:55):
of this new technology of holograms.

Speaker 2 (09:57):
And you were stimulated by this idea.

Speaker 4 (09:58):
I was very stimulated by that.

Speaker 3 (09:59):
It's a very interesting idea, and ever since then I've thought
a lot about how memories are represented in the brain.
And then that also led into well how does the
brain learn stuff?

Speaker 1 (10:11):
Coming up: how Geoffrey Hinton ended up at Google.

Speaker 2 (10:14):
Stay with us, so bear with me, but I need
to try and summarize what the Boltzmann machine is, because
the Nobel Prize Committee credited the Boltzmann machine with your win.
According to their press release, the Boltzmann machine can quote

(10:36):
learn to recognize characteristic elements in a given type of data.
The machine is trained by feeding it examples that are
very likely to arise. The Boltzmann machine can be used
to classify images or create new examples of the type
of pattern on which it was trained. Hinton has built
upon this work helping initiate the current explosive development of

(10:58):
machine learning. Now, have I got the timeline right: you graduated
from Cambridge, you worked on your PhD, and in the
eighties you wound up at Carnegie Mellon, and that's where
your work on the Boltzmann machine really took off.

Speaker 4 (11:12):
Uh yes, just before I went to Carnegie Mellon.
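
For readers who want the press-release description above made concrete, here is a rough sketch in Python with NumPy of a restricted Boltzmann machine, a simplified relative of the full Boltzmann machine, trained with one-step contrastive divergence, a learning rule Hinton later introduced. The toy data, network sizes, and hyperparameters are all invented for illustration; this is not the original Boltzmann machine code.

```python
# Rough sketch of a restricted Boltzmann machine (RBM) trained with
# one-step contrastive divergence. Illustrative only; toy data and
# hyperparameters are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # connection strengths
b_v = np.zeros(n_visible)    # visible biases
b_h = np.zeros(n_hidden)     # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary "examples that are very likely to arise": two repeating patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.05
for epoch in range(500):
    for v0 in data:
        # Positive phase: hidden probabilities given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction).
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Contrastive-divergence update of the connection strengths.
        W   += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# After training, reconstructions should resemble the training patterns.
v = np.array([1, 1, 1, 0, 0, 0], dtype=float)
h = sigmoid(v @ W + b_h)
print(np.round(sigmoid(h @ W.T + b_v), 2))
```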

Speaker 2 (11:14):
Now, am I right in thinking, at the time, there
was a kind of debate that you were on the unpopular
side of about artificial intelligence?

Speaker 3 (11:23):
Okay, so there's two things to say about that. From
the earliest days of AI in the nineteen fifties, there
were these two approaches, two kinds of paradigms for how
you build an intelligent system. One was inspired by the brain,
that was neural networks. Then the other paradigm was no, no, no, no.
Logic is a paradigm for intelligence. Intelligence is all about reasoning.

(11:45):
Learning comes later once we figured out how reasoning works,
and reasoning depends on having the right representations for things.
So we have to figure out what kind of logically
unambiguous language the brain is using so that it can
use rules to do reasoning. It's a completely different approach
and it's very unbiological, because reasoning is something that comes very late,

(12:06):
and actually for most of the second half of the
last century, neural networks weren't seen as AI. AI was
believing you have symbolic rules in your head and then
manipulating them using rules, using discrete symbolic rules.

Speaker 2 (12:20):
And what were neural nets seen as? Were they
seen as statistics, or physics?

Speaker 3 (12:23):
They were seen as this crazy idea that you could
take this random neural network and just learn everything, which
was obviously hopeless.

Speaker 2 (12:29):
And what gave you this conviction?

Speaker 3 (12:32):
Well, the brain has to learn somehow, and of course
there's a lot of innate structure wired in, which
explains why a goat can sort of drop out of
the womb and five minutes later it's walking. But
stuff like learning to read, that's all learned. That's not innate,
and we have to figure out how that works, and
it's not symbolic rules.

Speaker 2 (12:51):
This conviction that you have now, did you always have it?

Speaker 4 (12:56):
Yes?

Speaker 3 (13:00):
I'm sorry to say, it seemed to me it was obviously right.
I think part of that is I was sent to
a private school when I was seven from an atheist family,
and they started telling me about religion and all this rubbish,
all the stuff that was obviously nonsense, and I had
the experience of being the only kid at school who

(13:20):
thought this was all nonsense and turning out to be right.

Speaker 4 (13:24):
And that's very good training for a scientist.

Speaker 2 (13:26):
I brought it up earlier, but there was a wonderful
profile of you in the New Yorker titled Why the
Godfather of AI Fears What He's Built, and there was
a quote that I found quite stunning where you said
I was dead in the water at forty six.

Speaker 3 (13:43):
Yes, my wife and I adopted two children, two babies,
and then my wife got ovarian cancer. But she also,
even though she was a scientist, she started believing in
homeopathy and she tried treating her ovarian cancer with homeopathy,

(14:05):
and so she died, and

Speaker 4 (14:09):
I was left with.

Speaker 3 (14:10):
Two young children, one aged three and one aged five,
who were both very upset, as was I, and I
began to appreciate what life is like for female academics,
which is impossible.

Speaker 4 (14:28):
You can't.

Speaker 3 (14:30):
Looking after small children makes it very very hard to
spend long periods of time thinking about your current idea.
It's just very difficult.

Speaker 2 (14:42):
So you were bereaved, you were looking after two small children,
and then yes, that moment in twenty twelve when you
published a paper called ImageNet Classification with Deep Convolutional Neural Networks,
which to the layman doesn't sound like something that would
change the world and how we live, but it did.

Speaker 3 (15:02):
Well, it changed the views of people in computer vision and
the views of people in other areas of computer science.
It basically showed neural networks actually do work. Now, people
had shown that before, but it hadn't convinced people the
same way.

Speaker 2 (15:16):
See, you published that paper in twenty twelve with Iliot
sutch Keva, who went on to be a very important
figure at open AI and I want to talk more
about him. But within a few days of the publication
of that paper, you had an offer of millions and
millions of dollars to move to China.

Speaker 3 (15:33):
Ah. Yeah, either to move to China or to let
them invest in our group.

Speaker 4 (15:39):
I think it was a bit longer than a few days,
but it was. It was that fall for sure.

Speaker 2 (15:43):
Did you kind of know, okay, we're going to publish this
and it's going to change everything, or were you surprised
by the response?

Speaker 3 (15:51):
We thought it would have a big effect. We didn't
realize quite how big.

Speaker 2 (15:55):
And when you got the call from Baidu, the Chinese tech company,
what did you think? I mean, it must have been tempting.

Speaker 3 (16:02):
Yes, I think they said they could fund our group,
or we could go and work for them,
or various possible alternatives. And in the end I just
asked them, well, how much money are we talking about?
Are you talking about like ten million dollars? And they said, yes.

Speaker 2 (16:21):
You lowballed yourself.

Speaker 4 (16:23):
I didn't think it at the time.

Speaker 3 (16:25):
I thought of the biggest number I could think of,
which was five million dollars, and doubled it. So once
they said that, I realized we had no idea how
much we were worth. And so at that point we
decided to set up an auction.

Speaker 2 (16:39):
What does it mean to set up an auction for
an academic paper?

Speaker 3 (16:43):
The three of us founded a company. The company that
belonged to the three of us owned these six patent applications,
so then we had something to sell, but mainly we
were selling ourselves. But I insisted that I still be
an academic so I could continue to advise my current students.
That was a big problem because Google had never done

(17:04):
that before. They were one of the companies.

Speaker 2 (17:06):
That bid for you? So Baidu, Google...

Speaker 3 (17:08):
Microsoft and DeepMind, which wasn't owned by Google then.

Speaker 2 (17:13):
And not only did you invent, together with
your colleagues, this advancement of neural nets, but you also
invented your own process to sell this company, essentially, right?

Speaker 3 (17:24):
Yeah, we decided we'd just have an auction by Gmail,
and you'd send me a Gmail with your bid, and the
time of the bid would be the timestamp on the Gmail.
I would then immediately send the bid to all the
other bidders, and you had to raise by a million dollars,
and if there was no bid within an hour of

(17:47):
the last bid, that was the end of the auction.
I was amazed to see that the bids would come
in fifty nine minutes after the previous bid. I would be
sitting there thinking, okay, it's over, and then fifty nine
minutes later a bid would come in.
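
As a concrete restatement of the auction rules Hinton describes (bids arrive by email, each bid must raise the previous one by at least a million dollars, and the auction ends once an hour passes with no new bid), here is a minimal Python sketch. The bidder names, timestamps, and amounts are made up purely for illustration.

```python
# Minimal sketch of the auction rule described above. Hypothetical data only.
from datetime import datetime, timedelta

MIN_RAISE = 1_000_000
CLOSE_AFTER = timedelta(hours=1)

# (timestamp on the email, bidder, amount) -- invented for the example
bids = [
    (datetime(2012, 12, 1, 9, 0),   "Company A", 12_000_000),
    (datetime(2012, 12, 1, 9, 59),  "Company B", 13_000_000),
    (datetime(2012, 12, 1, 10, 58), "Company A", 15_000_000),
]

def run_auction(bids):
    high_bid, high_bidder, last_time = 0, None, None
    for ts, bidder, amount in sorted(bids):
        if last_time is not None and ts - last_time > CLOSE_AFTER:
            break                               # an hour passed: auction already over
        if amount >= high_bid + MIN_RAISE:      # must raise by at least $1M
            high_bid, high_bidder, last_time = amount, bidder, ts
    return high_bidder, high_bid

print(run_auction(bids))   # -> ('Company A', 15000000)
```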

Speaker 2 (18:00):
And everybody trusted Gmail?

Speaker 3 (18:02):
I had worked at Google over the summer, and that
meant two things. One, I did trust that they wouldn't
read my Gmail. I knew they were very serious about that.
And also I really wanted Google to win the competition
because I really like Jeff Dean, who ran the brain
team at Google. So I wanted Google to win, and

(18:23):
in the end we slightly fudged it. So after DeepMind
and Microsoft had dropped out, it was between Google and
Baidu, and we were scared Baidu would win,
and I didn't want to go to China.

Speaker 4 (18:36):
I couldn't travel at that point, so I felt I wouldn't

Speaker 3 (18:39):
Really understand what was going on in a Chinese company.
So it got to forty four million, and we just
said we've got an offer we can't refuse, and it's
the end of the auction. And the offer we couldn't
refuse was to work with Jeff Dean.

Speaker 2 (18:54):
So you got what you wanted in the sense of.

Speaker 3 (18:57):
We got more money than we could imagine, and we
got to work at the company that I most wanted
to work with.

Speaker 1 (19:03):
When we come back: Nobel laureate Geoffrey Hinton on
why he advocates for AI safety. In twenty eighteen, you

(19:27):
received the Turing Award, which is kind of like the
Nobel Prize for Computer Science. But also in twenty eighteen,
you were widowed for a second time. You lost your
wife Rosalind in nineteen ninety four, and then twenty four
years later your wife Jackie passed away also from cancer.

Speaker 3 (19:45):
Yes, that was difficult. My
children were much older then, so I didn't have the
problem of having to cope with young children at the
same time as everything else. And Google was very understanding.
Part of my deal had been that I would spend
three months a year in Silicon Valley, and they let

(20:09):
me out of that and said I could spend my
whole time in Toronto, and they helped me set up
a small lab doing basic research in Toronto, so that
was much less stressful, so I could be with my wife.

Speaker 4 (20:23):
She had cancer as well, she got pancreatic cancer.

Speaker 2 (20:29):
One of the most striking things, again in the New
Yorker piece, was the way you talked about observing the
way in which Rosalind and Jackie approached their cancer as
a mental model for how to think about the implications

(20:50):
of artificial intelligence.

Speaker 4 (20:52):
Yeah, that's a rather sort of dark scenario.

Speaker 3 (20:57):
So occasionally when I think, well, this stuff probably will
take over, which I sometimes think, then there's the issue
of should you go into denial and say no, no, no, no,
this can't possibly happen, which is what my first wife did. Actually,
she wasn't my first wife, I was married a long
time before that, just briefly. And so Ros went into

(21:25):
denial and Jackie was very realistic about it. And maybe
we should be very realistic about the possibility machines will
take over. We should do our best to make sure
that doesn't happen, but we should also think about whether,
if that does happen, there are
ways of making it not so bad, whether people

(21:46):
could still be around even if the machines were in control,
for example.

Speaker 2 (21:49):
Yeah, I mean there's something... I mean, you use the
word dark, but you've seen your life's work in many
ways kind of come to fruition, haunted by these thoughts of how
to deal with something as awful as a terminal cancer diagnosis.

Speaker 4 (22:07):
Yeah, you have to be careful what you wish for.

Speaker 2 (22:11):
Yeah, I mean, I mentioned Oppenheimer at the beginning in
terms of the Nobel. Well, he was almost a physics laureate.
He wasn't in the end.

Speaker 3 (22:22):
Yes, it's rather absurd, isn't it, that I've got a
Nobel Prize in physics and Oppenheimer didn't.

Speaker 4 (22:27):
That's utterly ridiculous.

Speaker 3 (22:30):
I should say, some people, particularly journalists, like to say, well,
how would you compare yourself with Oppenheimer?

Speaker 4 (22:37):
And there's two big differences.

Speaker 3 (22:40):
One is that Oppenheimer really was crucial to the development
of the atomic bomb. He managed the science of it.
He was a single, extremely important figure. With the development
of AI, there's a bunch of us, and if I
hadn't been around, all this stuff would have happened. That's one difference.

(23:02):
The other difference is that atomic bombs aren't good for
anything good.

Speaker 4 (23:07):
They're just for destruction.

Speaker 3 (23:10):
They actually did try using them for fracking in Colorado,
but that didn't work out too.

Speaker 4 (23:14):
well, and you can't go there anymore.

Speaker 3 (23:16):
The big difference is most of the uses of AI
are very good. It can lead to huge increases in productivity,
huge improvements in healthcare, might help a lot with climate change.
So AI is going to be developed because of all
those huge beneficial uses. And that's very different from atomic bombs,

(23:38):
where there was a possibility of not developing the H bomb.

Speaker 2 (23:42):
And why have you taken it upon yourself as your responsibility?
You quit Google in twenty twenty three and since
then have become one of the most vocal and qualified
people in the world warning of these risks.

Speaker 4 (23:57):
Well, I'm old.

Speaker 3 (24:00):
I'm too old to do original research anymore, but people
listen to me, and I really believe these risks are
very real and very important, so I don't really have
much choice. We are going to develop AI because it's
got so.

Speaker 4 (24:16):
Many good uses.

Speaker 3 (24:18):
So I'm not warning against developing it, and I'm not
saying slow down. What I'm saying is try and develop
it as safely as you can. Try and figure out,
in particular, how you can stop it eventually taking over,
but also think about all the other shorter term risks
like fake videos, corrupting elections, and loss of jobs. I'm

(24:39):
saying we need to worry about all those things, and
it might be rational to just stop developing it, but
I'm not. I don't think there's any hope of that,
so I'm not advocating that.

Speaker 2 (24:51):
You said in December there was a ten to twenty
percent risk that AI would cause human extinction in the
next thirty years. How do you come up with those odds?

Speaker 4 (25:04):
I just make them up.

Speaker 3 (25:05):
That is, I'm just... No. If you think about subjective probabilities,
they're based on intuition. I have a very strong intuition
that the chance of super intelligent machines taking over from
people is more than one percent, and I have a

(25:25):
very strong intuition that it's less than ninety nine percent.
We're dealing with something extremely unknown, so your best bet
for things totally unknown is maybe fifty percent, but that
doesn't work very well because it depends on how you
partition things.

Speaker 4 (25:40):
So clearly the chance is.

Speaker 3 (25:42):
much bigger than one percent and much less than ninety
nine percent, and maybe I should just stick at that.
But I'm hoping that we can figure out a way
that people can stay in control, because we're people and
what we care about is people.

Speaker 2 (25:54):
Now, do you view it as inevitable that if we,
quote unquote, lose control, our destruction
is what comes next?

Speaker 4 (26:02):
No, they might not. Elon Musk, for example.

Speaker 3 (26:05):
I talked to him and he pushed the line that
they'll keep us around as pets because we're quite interesting.

Speaker 2 (26:12):
M hmm.

Speaker 4 (26:13):
It seems a rather sort of thin thread to hang
human existence by.

Speaker 2 (26:18):
So, I mean, you've been vocal about the kind of
overall societal threat, but you've also been specifically critical of
Sam Altman in particular.

Speaker 3 (26:31):
Yes, because I think OpenAI was set up to
develop AGI safely, and it's just been progressively moving
away from that towards developing it for profit, and so
its best safety researchers have left, and it's now trying

(26:52):
to convert itself from a not-for-profit company into
a for-profit company. And that seems entirely wrong to me.

Speaker 2 (26:59):
And your longtime colleague and student, Ilya Sutskever, who
worked on the twenty twelve paper with you, took a big
stand on this, which ultimately didn't break his way.

Speaker 3 (27:10):
It didn't break his way in terms of Sam being
fired as the head of the company. It did break
his way in terms of people understanding that OpenAI
was going back on its pledge to develop AGI safely,
and Ilya has now set up a company

Speaker 4 (27:26):
that's trying to do that.

Speaker 2 (27:28):
I mean, if Sam Altman called you tomorrow, or came through
Toronto and you were sitting with him, and he said, you know,
where have I gone wrong and what should I do,
what would you say?

Speaker 4 (27:41):
I'd say, you're not Sam Altman.

Speaker 2 (27:46):
Fair enough?

Speaker 4 (27:47):
Okay.

Speaker 3 (27:47):
So suppose something very surprising happens and Sam Altman suddenly
has an epiphany and says, oh my god, we shouldn't
be doing this for a profit. We should be doing
it to protect humanity. I'd be very happy to talk to him.

Speaker 2 (27:59):
But what if you could affect his opinion in one
way, or that of others?

Speaker 3 (28:04):
I would say, keep developing AI, but use a large
fraction of the resources, as you're developing it, to try
and figure out ways in which it might get out
of control and what we could do about that. So
if Sam Altman said, we're going to use thirty percent
of our compute, however much compute we have, we'll use
thirty percent to get highly paid safety researchers to do

(28:26):
research on safety, not on making it better but
on making it safe, I'd take it all back,
and I'd say he is a great guy.

Speaker 2 (28:36):
I mean, for a not-for-profit, that doesn't sound
unreasonable, exactly.

Speaker 3 (28:40):
That was what I thought was happening to begin with,
and that was what Ilya thought was happening.

Speaker 2 (28:47):
Right before Christmas, there was an article in the Wall
Street Journal basically saying that ChatGPT five was behind
schedule and that the pace of improvement in
deep learning was slowing, perhaps because of a lack of
real world data, could be other reasons. But then o
three came out and Sam Altman sort of said that AGI

(29:08):
is here. Where do you think we are on AGI?
Do you think it's even a relevant metric?

Speaker 3 (29:14):
So ever since twenty twelve, there've been people saying AI
is about to hit a wall. So Gary Marcus made
a strong prediction in twenty twenty two that AI was
hitting a wall and wouldn't get much further. So you
have to see that against a background of repeated predictions
that AI is about to hit a wall.

Speaker 4 (29:35):
This is a bit more.

Speaker 3 (29:36):
Real in the sense that we really are reaching peak
data or peak easily available data. There's actually hugely more
data in silos and companies and in videos. So yes,
we're running out of easily available data and that may
slow things down a bit. But if you can get
them to generate their own data, then you can overcome

(29:56):
that problem, and you can get them to do
that by reasoning. So if you look at even with
things like chess, there's only a limited number of expert moves,
but you can overcome that by getting the system to
play itself, and then you get an infinite amount of
data to train on. And so the neural nets that
are saying would this be a good move, or saying

(30:17):
how good is this position for me, they now get
an infinite amount of data, or rather an unbounded amount of data.
You can always generate your data at the appropriate difficulty
level, too. And so nobody says neural networks for chess
and Go are going to run

Speaker 4 (30:33):
Out of data.

Speaker 3 (30:35):
They're already far better than any person, and we can
make them much much better than that if we wanted to.
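
Hinton's point about self-play is that a system can manufacture unlimited labelled positions by playing against itself. Here is a minimal sketch of that idea in Python, using random tic-tac-toe players rather than anything as sophisticated as the chess and Go systems he mentions; the game, players, and labels are invented purely for illustration.

```python
# Minimal sketch: generate unbounded (position, outcome) training data by
# self-play. Two random players play tic-tac-toe and every position in each
# game is labelled with the eventual result, the kind of data a value network
# could be trained on. Deliberately trivial; not AlphaZero.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one game with random moves; return (positions seen, final result)."""
    board, positions, player = [" "] * 9, [], "X"
    while True:
        positions.append("".join(board))
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if winner(board) or not moves:
            return positions, winner(board) or "draw"
        board[random.choice(moves)] = player
        player = "O" if player == "X" else "X"

# Generate as many labelled examples as we like -- no human games needed.
dataset = []
for _ in range(1000):
    positions, result = self_play_game()
    dataset.extend((pos, result) for pos in positions)

print(len(dataset), "training examples; sample:", dataset[0])
```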

Speaker 2 (30:41):
Shortly before you quit Google, you tweeted caterpillars extract nutrients,
which are then converted into butterflies. People have extracted billions
of nuggets of understanding and GPT four is humanity's butterfly.
Can you explain that?

Speaker 3 (30:58):
Okay, so if you look at insects, most insects have
larvae and they have adults. But let's take butterflies, the obvious example.
And if you look at a caterpillar, it's not optimized
for traveling and mating. A caterpillar is optimized for extracting stuff.

(31:20):
You then turn that stuff into soup, and then you
get something very different. So what humanity has been doing
for a long time is understanding little bits of the.

Speaker 2 (31:30):
World, translating the world into data through photographs and words.

Speaker 3 (31:34):
And yes, and now you could take all that work
we've done at extracting structure from the world, like a
caterpillar extracting nutrients, and you could take that extracted stuff
and turn it into something different. You could turn it
into a single model that knows everything.

Speaker 2 (31:54):
When I read it first, I thought it was quite
beautiful and optimistic, and then I read it again and
I didn't think that anymore.

Speaker 4 (32:05):
Yes, I mean you could read it as we're.

Speaker 2 (32:08):
history. Yes, bidding farewell to our caterpillar. Yes, how do
you read your own metaphor?

Speaker 3 (32:15):
I'm probably somewhat influenced by a piece of William Blake
poetry which goes, the caterpillar on the leaf repeats to
thee thy mother's grief, which is basically saying the butterfly
is much prettier than the caterpillar. I think
that's what it's saying. You know, I don't know whether
we're going to get replaced. I hope we're not. I

(32:36):
hope people stay in control, but I hope we stay
in control with assistance that are much more intelligent than us.

Speaker 2 (32:44):
Well, as you said, mothers are the only creatures
that we know of who are controlled by less
sophisticated beings. I think you said something along those lines.

Speaker 3 (32:55):
And babies aren't much less intelligent than their mothers, like
at most a factor of two. We're talking about huge factors.

Speaker 2 (33:02):
You mentioned Blake just now. But the other person I
thought of when I was reading that butterfly metaphor was
your father, who of course was an entomologist.

Speaker 4 (33:10):
Ah yes, that's where I got my interest in metamorphosis.

Speaker 2 (33:13):
Yes, so ChatGPT is humanity's butterfly, but
also in a sense your butterfly. And this is a
metaphor owed to, I don't know, your...

Speaker 4 (33:21):
Father, I guess grudgingly.

Speaker 2 (33:26):
In the worst case scenario where AI does cause our extinction,
what are the ways in which that could happen? And
the best case scenario where it doesn't, what are the
ways in which that could happen.

Speaker 3 (33:39):
Okay, so the obvious way it could happen is we
make AI agents that can create subgoals. And they realize,
because they're superintelligent, that a good subgoal is
to get more control, because if you get more control,
you can achieve your goals. So even if they're trying
to achieve goals we gave them, they'll try and get
more control. It'll be a bit like, I don't know

(34:00):
if you have children. But if you have a three
year old who's finally decided they want to try tying
their own shoelaces, but you're in a hurry to get somewhere,
you let them try tying their shoelaces for a few
minutes and then you say no, no, I'm going to
do it. You can learn that when you're older. AIs
will be like the parent and we'll be like the children,

(34:20):
and they'll just push us out the way to get
things done. So that's the bad scenario. Even if they're
trying to achieve things that we've told them we want,
they'll basically take control. And that scenario gets worse if
ever one of those superintelligent AIs thinks, I'd
rather there were a few more copies of me and

(34:41):
a few less copies of the other superintelligent AIs.
As soon as that happens, if that ever happens, you'll
get evolution between superintelligent AIs and we'll be left in
the dust, and they'll develop all the nasty things you
get from evolution: being nasty and competitive and very loyal
to their own tribe and very aggressive to other tribes, all

(35:01):
that nonsense that we have. So that's the bad scenario.
The good scenario is we figure out a way where
we can guarantee they're never going to try and get
control away from us. They're always going to be subservient
to us, and we figure out how we can guarantee

(35:21):
that that'll happen, and then we will have these wonderful
intelligent assistants and life.

Speaker 4 (35:26):
Is just really easy.

Speaker 2 (35:29):
Geoffrey, thank you. Thank you.

Speaker 1 (35:33):
That's it for this week for Tech Stuff. I'm Oz Woloshyn.
This episode was produced by Eliza Dennis, Victoria Dominguez,
Shino Ozaki, and Lizzie Jacobs. It was executive produced by
me, Karah Preiss, and Kate Osborne for Kaleidoscope, and Katrina

Speaker 2 (35:48):
Norvell for iHeart Podcasts.

Speaker 1 (35:51):
The engineer is Beheath Fraser. Offspin mixed this episode, and
Kyle Murdoch wrote our theme song. Join us on Friday
for the Week in Tech. We'll break down the headlines
and hear from some of our expert friends about the
latest in tech. Please rate, review, and reach out to
us at tech Stuff podcast at gmail dot com.

Speaker 2 (36:08):
Thank you.
