Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Is AI an intelligent agent or is there a totally
different way that we should be thinking about this? Perhaps
it's more like a piece of cultural technology. What in
the world is cultural technology? And how would rethinking this
change the way we approach what to do next? And
(00:25):
what does any of this have to do with the
myth of the golem, or Socrates, or the printing press
or Martin Luther or the story of stone soup. Welcome
to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and
an author at Stanford and in these episodes we examine
(00:46):
brains and the world around us to understand who we
are and where we're going. So let's start with the
(01:09):
appreciation that we are smack in the middle of the
most dramatic technological shift in human history. Every few weeks,
a new AI system is released that can answer questions
of increasing complexity. As we've all seen, it can write
beautiful prose. It can punch out incredibly good code for software.
(01:30):
It composes music. It mimics voices. It produces images so
realistic that we've long ago lost the ability to tell
if a photo is real or not.
Speaker 2 (01:40):
And this increasingly applies.
Speaker 1 (01:41):
To video as well. So with every leap forward, the
same questions become louder. Is this an intelligent agent? Is
it conscious? Will it one day surpass us? And if so,
what happens to us? Today's episode is about how to
think about this from an angle that.
Speaker 2 (02:01):
Will probably surprise you.
Speaker 1 (02:03):
So as it stands now, in the public conversation about AI,
we really have just one metaphor, which is that AI
is an intelligent agent, one of increasing intelligence. We all
talk about these systems as digital minds that can reason
and plan and act and perhaps even at some point desire.
(02:25):
This narrative of an AI with its own mind has
always been with us in science fiction, of course, but
today we hear it constantly in policy conversations and in
media headlines. And whether the tone is optimistic or anxious,
the underlying premise is the same that these are minds
in the making, that we are witnessing the birth of
(02:46):
a new kind of intelligence. But what if that metaphor
is misleading, so much so that it's sending our conversations,
our policy, our research priorities off course. So today's episode
is about reframing what large AI models really are and
(03:06):
what they aren't. My guest today is Alison Gopnik. She's
a professor of psychology at Berkeley, very well known in
the areas of cognitive and language development. She studies infants
and young children to understand how learning takes place. And
she was just by the way, elected to the National
Academy of Sciences. But I'm talking with her today about
(03:26):
a new paper she co authored in the journal Science
about AI with colleagues Henry Farrell, Cosma Shalizi, and James Evans.
Speaker 2 (03:35):
The paper argues that.
Speaker 1 (03:37):
We should stop thinking of large models as intelligent agents
and instead see them as a new kind of cultural
and social technology.
Speaker 2 (03:47):
Now what does that mean.
Speaker 1 (03:48):
Well, I'll give you a quick preview and then we'll
jump into the interview. All throughout history, humans have built
tools to organize information and transmit it. Think of spoken
language and then writing, and then printing, libraries, television, the Internet.
Each of these systems reshaped human culture, not because they
(04:11):
were intelligent in themselves, but because they allowed information to
be shared and transformed and coordinated in new ways. Just
think of how the printing press amplified voices, or how
something like markets distill the messy complexity of economies into
a single price.
Speaker 2 (04:32):
Signal: how much does this thing cost?
Speaker 1 (04:35):
Or how bureaucracies take the chaos of signals and sort
it into categories. These are not minds, but they are
powerful technologies of culture, technologies that change how we all
think and how we.
Speaker 2 (04:50):
Act and how we live.
Speaker 1 (04:51):
So the argument we'll hear today is that large AI
models are best understood in this lineage. They don't think,
but they process the vast collective output of human thought.
They are trained on millions of texts and images and voices,
everything from Shakespeare to Reddit threads to government paperwork, and
(05:13):
they summarize and reorganize and remix that cultural data. And
they also surface patterns in the data that maybe hadn't
been seen before. And so when you're interacting with such
a piece of technology, let's say, asking it to write
you a poem or explain a concept, you're not talking
to a mind. You're participating with a kind of cultural
(05:36):
compression and recombination machine. So see what you think of
the perspective that you hear today, because it can change
our concerns and our eventual legislative approaches. If we stop
assuming that these are minds and instead treat them as
cultural infrastructures like search engines or even democratic institutions, then
(05:58):
we can start asking the right questions.
Speaker 2 (06:01):
One quick thing before we jump into the interview.
Speaker 1 (06:03):
You've heard of large language models, LLMs, and more recently
large multimodal models that are trained on words and images
and increasingly other data as well. So nowadays we just
refer to these as large models. So here's my interview
with Alison Gopnik. So, Alison, before we get started talking
(06:25):
about AI, you have built a very wonderful career studying
scientists who are unusually small and spend most of their
time lying down. So tell us about that.
Speaker 3 (06:35):
So what I've always been most interested in is how
is it that people can figure out the world around them?
How is it that human beings with just a bunch
of photons hitting our eyes and little disturbances of air
at our ears, nevertheless, we know about a world of
people and objects and ultimately quarks and distant planets. How
could we ever do that? How could we ever learn
(06:56):
so much from so little? And of course the people
who are doing that more than anyone else are little children.
So for the past forty years, what I've been doing
is trying to figure out how is it that even
little children can learn so much so quickly from such
little information. And one of the questions is what kinds
of computations, what's going on in their brains? What are
(07:17):
their brains and minds doing that lets them solve these
really deep problems so quickly and so effectively. And that's
been the central idea in my career. And it's turned
out that by looking at kids empirically, by actually studying
them as scientists, we've discovered that they both know more
and learn more than we ever would have thought before.
(07:37):
They're the best learners.
Speaker 4 (07:38):
That we know of in the universe.
Speaker 2 (07:39):
Amazing.
Speaker 1 (07:40):
So we're in this quite remarkable time where for both
of us we've been doing, you know, the same research
that we've been doing for many decades, and one might
have thought five years ago, okay, we'll probably be doing
that in twenty twenty five. But suddenly the world has really changed around us because of AI. And so you
(08:02):
and I both are spending a lot of our time
writing about that and thinking about how to position AI,
how to understand what it does and does not mean.
So a lot of people, of course, are concerned about
super intelligence and the alignment problem, and so on. But
you and your colleagues have a quite different take that
you just wrote up in the journal Science in March,
(08:25):
and I thought it was a really lovely paper. So
that's what I want to ask you about. So you
are talking about the right way to look at large
models is as a social and cultural technology.
Speaker 2 (08:36):
So let's unpack that, right.
Speaker 3 (08:38):
So, as I said, you know, my career has been
about how could we learn as much as we do?
And how do children learn as much as they do?
And part of that has always been if we wanted
to design a computer or design an artificial system that
could learn the way children do, what would that system
look like, what could we put in, what kinds of
things would it have to do? So for twenty years
(08:59):
I've been collaborating with computer scientists about what would that
kind of artificial system look like? But as you say,
even though this has been a long project, in the
last five years or so, these advances in AI have
really made us think about that in a different way.
And one of the interesting things is the big advances
have been in machine learning. They've actually been in designing
(09:20):
systems that don't just know things, but can learn things.
And as I say, children are the best learners we
know of in the universe. So there's been a really
interesting development, which is a lot of the people in
AI have been turning to developmental psychologists like me to say, look,
could we get some clues from how children are learning
to design systems that could learn in
(09:41):
the same way. Now, the interesting thing is that what
actually has happened in AI, specifically in the last five
years or so are these large models, these large language
models and more recently large language and vision models, and
they are the things that have really revolutionized our everyday
interactions with AI. It's important to say those are really
(10:02):
really different from children, and in fact, they're different from humans.
They're doing something that I think is really really different
from human intelligence. And it's very natural for people to think, oh, okay,
look, I talk to ChatGPT and I get an answer back,
it must have the same kind of intelligence that my
friend does or my child does. And it turns out
(10:23):
that that's not true. Those systems are really really different.
Speaker 1 (10:26):
So before we go on, let's unpack that a little
bit in what ways are they different?
Speaker 3 (10:30):
So a very common kind of model for how an AI works is to think of it as if something like ChatGPT is an agent, an intelligent agent in the world,
like a person or even an animal that you know about.
But that's actually, I think, an illusion. That's what we've argued.
A better way to think about it is that as
(10:51):
long as we've been human, we've learned from other people,
and we've had great technological advances that have helped us
to learn more effectively for more and more people. So
if you think about language itself, or writing or print,
those are all examples of technological changes that made us
able to get more information from others. And what the
(11:13):
large models do is not go out into the world
and learn and think the way that babies do. What
they do is summarize information that human beings have actually
already discovered. So what they do is take all the
information and knowledge that human beings have put out on
the web essentially, and then summarize that in a way
(11:34):
that lets other people access it more efficiently. So it's
much more the technological development is much more like something
like writing that lets you find out what other people
are thinking than it is creating a system that could
learn and think itself.
Speaker 2 (11:50):
Yeah.
Speaker 1 (11:50):
In a previous episode of Inner Cosmos, they did a calculation showing that the amount of information that one of these LLMs consumes would take you one thousand lifetimes.
Speaker 2 (12:00):
For you to read.
Speaker 1 (12:02):
And so it's consumed more than you could ever imagine.
And what it's doing fundamentally is when you ask it
a question, it's giving you an echo of the human
intelligence that's already in there. So I call this the
echo intelligence illusion, where we feel like, wow, that thing's
really smart. But it's not smart. It's not smart in
(12:23):
the same way that a human is. It's taking advantage
of all the things that are already out there.
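A rough back-of-envelope version of that claim can be sketched in a few lines. The corpus size, reading speed, and reading career below are illustrative assumptions, not figures from that earlier episode; the point is only that the answer comes out in the thousands of lifetimes.

```python
# Back-of-envelope sketch: how many lifetimes of reading a large model's
# training data would take. All numbers are illustrative assumptions.

corpus_tokens = 10e12          # assume ~10 trillion training tokens
words_per_token = 0.75         # assume ~0.75 English words per token
words_per_minute = 250         # assume a brisk adult reading speed
hours_per_day = 8              # assume reading as a full-time job
reading_years = 80             # assume a very long reading career

corpus_words = corpus_tokens * words_per_token
words_per_lifetime = words_per_minute * 60 * hours_per_day * 365 * reading_years

print(round(corpus_words / words_per_lifetime))   # roughly 2,000 lifetimes
```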
Speaker 3 (12:29):
So here's a way I like to convey this. Storytelling, as you know, is really important. So here are two stories you could tell about how current AI works. One story is sort of the story of the golem, right, the Rabbi of Prague and the golem. You create this artificial system and
(12:49):
it's magical and you put special magic in it, and
then it turns into something that's almost alive.
Speaker 1 (12:55):
And it's interesting. For anyone who doesn't know the story of the golem: that was a figure made of clay that was brought to life and defended the community.
Speaker 3 (13:05):
But then, well, it turns out that these stories about
what would happen if you had something that wasn't human,
that was artificial, that you brought to life, are really ancient, from way before even the Industrial Revolution. And I can tell you right now, it never ends well. Invariably, the end of the story is that
(13:26):
some terrible thing happens and the golem goes mad and
causes trouble and chaos, and.
Speaker 1 (13:32):
That inspired Frankenstein some hundreds of years later. They don't
have the same character.
Speaker 3 (13:37):
So there's this very basic human fear about what would
it be like if there was something that wasn't actually
living that you treated as if it was living, that you treated as
Speaker 4 (13:47):
If it was an agent.
Speaker 3 (13:48):
And I think that basic picture, that's the sci-fi picture, that's the picture that a lot of people, including people in the AI world themselves, have about what's happened in AI.
Here's a really different story, also, a different ancient story.
This is the story of Stone Soup. So what's the story of Stone Soup? The story of Stone Soup is
there's visitors who come to a village and they say,
(14:10):
we'd like some food and the villagers say, no, we
don't have any extra food. And they say, it's okay,
we're going to make stone soup. And they take out
a big pot. They put a couple of stones in it,
they put some water in, they start to boil it
up and they say, this will be delicious. We're going
to make stone soup just with these stones, and the
villagers say really. They say, yeah, it would be even
better if we had an onion and a carrot in it,
(14:31):
but if we don't, we don't.
Speaker 4 (14:32):
And the villager says.
Speaker 3 (14:33):
I think I have an onion and a carrot somewhere,
and they go and put it in and then they say,
you know, when we made this for the rich people,
we put barley and buttermilk in it, which makes it
even better. But it's okay, it'll still be good stone soup.
And another villager goes and gets the barley and buttermilk,
and you can imagine.
Speaker 4 (14:49):
How this goes.
Speaker 3 (14:49):
And they say, the king said that we should put
a chicken in it, which would make it really royal,
but we don't have any chicken. So another villager goes
and gets the chicken from the back, and by the
time they're done of course, they have this really wonderful
soup with all the contributions from all the villagers, and
they go to eat it, and the villagers say, this
is amazing. There's this wonderful soup and it was just
(15:10):
made from stones. Okay, here's the modern version of this.
There's a bunch of tech guys and they go to
the village of computer users and they say, we're going
to make artificial general intelligence just from gradient descent and
transformers and a few algorithms. And the computer users say, that sounds great. We're gonna have artificial general intelligence. And
(15:33):
they say, yeah, but it would be better if we
had more data. What we need for this, as you
just said, David, is lots and lots of data.
Speaker 4 (15:39):
Could you guys put all of.
Speaker 3 (15:41):
Your texts and pictures on the internet for us and
then let us use them to.
Speaker 4 (15:45):
Train our systems. And the computer users say, oh,
that sounds good.
Speaker 3 (15:49):
We'll just keep putting more of our pictures and our
writings and our books on the internet, and I guess
you can just use them all for free, and then
the tech person says, oh, this is really good.
This is getting to be more intelligent. But you know,
it still says really stupid things. A lot of the
time it says weird things. So what we could do
(16:10):
is reinforcement learning from human feedback, which is actually a
really important part of these systems. What we'll do is
we'll give them to humans and then people can say
whether what they're saying is good or not, and then
we'll use that for the training.
Speaker 4 (16:24):
The computer users say, oh, okay, we're happy to
do that.
Speaker 3 (16:27):
We'll actually go out and say whether this is good
or not. There's a whole... and this
Speaker 4 (16:31):
Is literally true.
Speaker 3 (16:31):
There are whole villages in Kenya that will do this
for very small amounts of money.
Speaker 4 (16:37):
And the tech bros said, oh.
Speaker 3 (16:39):
Look see it's even smarter, but it's still saying really
stupid things sometimes. How about if you did prompt engineering.
So think really hard about exactly how to ask it
the right questions so that you can get the right answers,
because otherwise it's going to say stupid things. And the
users say, oh, okay, we'll do that. We'll sit down
and we'll figure out how to do prompt engineering. At
(17:00):
the end of this process, the tech bros say, see, we told you we made artificial general intelligence and it was just from a few algorithms. And the computer users say, that's amazing, that's amazing, we're going to have artificial intelligence, and it's just you brilliant tech guys who
invented it. So, of course, the point of this is
that it's a sort of debunking story, but it's also
(17:23):
in both versions, a positive story, because the point is
when you have a combination of lots and lots of contributions,
of lots and lots of intelligent people, lots of humans
who we know are intelligent, both in terms of the
data they provide and in terms of things like reinforcement
learning from human feedback and prompt engineering, you've got something that's bigger than any individual human could have.
(17:47):
But it's not that what you've got is a golem;
it's not that what you've got is an agent that's
gone out and been intelligent itself. It's really just a
system for putting together the thoughts of other agents.
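The reinforcement learning from human feedback step mentioned in that story can be sketched very schematically: people rate the system's outputs, and the ratings shift which outputs it tends to produce. This is a toy illustration under assumed ingredients (the canned responses and the stand-in rating rule are invented), not how production systems are actually trained.

```python
import random

# Toy sketch of the idea behind reinforcement learning from human feedback
# (RLHF): human ratings reweight what the system tends to say.
# The responses and the stand-in "rater" below are invented for illustration.

responses = ["a helpful answer", "a rude answer", "a confused answer"]
weights = {r: 1.0 for r in responses}            # start with no preference

def human_rating(response):
    # Stand-in for a human rater; assume raters prefer the helpful answer.
    return 1.0 if "helpful" in response else -1.0

for _ in range(500):                             # the feedback loop
    r = random.choices(responses, weights=[weights[x] for x in responses])[0]
    weights[r] = max(0.05, weights[r] + 0.1 * human_rating(r))

print(max(weights, key=weights.get))             # -> "a helpful answer"
```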
Speaker 1 (17:58):
So the lesson that surfaces here is that although we
humans love to anthropomorphize things and love to make inanimate
objects into agents in our minds, it's probably not the
right way to think about these language models. Or let
me say, these large models, language, vision, multimodal. So what
(18:19):
is the right way to think about it? So let's
really unpack this issue about what a social or cultural
technology is.
Speaker 3 (18:25):
Yeah, so, ever since we've been human, we've made progress
by learning from other humans, and a number of people
like Joseph Henrich, for example, and Rob Boyd have argued
that that's kind of our great human gift, that's really our secret sauce: it's not so much that
(18:45):
we can individually learn things that other creatures can't, although
I think that's part of it, but that we can
take advantage of all the things that other humans have
done over many, many, many generations. I like to think
of this in terms of the postmenopausal grandmother. I think,
you know, one of the distinctive human things is that
we have these postmenopausal grandmothers, and a lot of what
they do is tell us about the things that they've
(19:08):
learned in their long, wise lives. And by taking advantage
of what granny says, you can make progress, even if
what you're doing is now finding new things that you
will tell your grandchildren. And that capacity is really the
capacity that makes us special. And we've had special technologies
ever since we evolved, that tuned up that capacity, that
(19:30):
made it more powerful. So language itself, of course, which
is one of our distinctive things about humans, lets us
learn from others. But even more if you think about
something like the invention of writing that enabled us to
learn from you know, not just our own granny, but
grannies who were far away in space and in time,
and it's fascinating. Socrates famously has a whole section about
(19:52):
why he thinks writing is a terrible idea, exactly because he thinks people will read something in a book
and they'll think it's actually a person. They'll think that
it's a person who said this, and it's not. It's
just something that's written in a book. You won't be
able to have Socratic dialogues with something that's written in
a book. It's not really a person, but because it's language,
(20:13):
we'll treat it as if it's a person. So writing
is a good example of something that even though the
books aren't intelligent, the books in some sense don't know things.
In another sense, we know things because of books. A way I think of this sometimes: suppose someone asks you, who knows more, me or the UC Berkeley library? Well,
(20:35):
the library has much more knowledge in it. It's got
vast amounts of knowledge that I could never actually have
in my head. But it's not the sort of thing
that knows. I'm the sort of person who knows and
I know things because I can do things like consult
the library. And then you have print, which is even
more powerful and has even more powerful effects. You have
(20:56):
video and film. You have pictures, which I think are
a really important medium that we don't pay enough attention to.
So when we talk about vision models, for example, they're
not actually using vision. What they're using is all the
pictures that we put on the Internet, and pictures are
a really important source of communication too. So to say
that it's a cultural technology, to say it's one of
(21:19):
these technologies that lets humans learn from other humans is
not at all to dismiss it. Those cultural technologies are
the things that have led, for better or for worse,
to the world that we have now. But it's just
a really different thing. It's what philosophers would call a
category mistake to think that it's like an intelligent agent,
which is not to say that at some point in
the future AI might not develop intelligent agents, but that's
(21:42):
not what the large models are doing, and that's a
much much harder lift, something we're much further away from
than this fantastic, interesting, powerful new cultural technology that we've invented.
Speaker 1 (22:11):
So when we're thinking about cultural technologies, those are all
amazing examples that you gave about the invention of writing
and the printing press and then the Internet and so on.
In the paper that you wrote with your colleagues, you
mentioned other things also, like markets and bureaucracies and representative democracies.
Just give us a sense of how those are our
(22:33):
cultural technologies as well.
Speaker 3 (22:35):
Yeah, so this paper, it was a wonderful collaboration between
me and Henry Farrell, who's a really distinguished political scientist,
James Evans, who's a fantastic sociologist, and Cosma Shalizi, who's
the statistician. So it was kind of like everybody in
the social sciences. And what Henry and James have pointed
out is that we have things like writing in print,
(22:55):
but if you think about things like a market, this
is an old observation in economics, what a market really
does is to summarize information from lots and lots of
individual people. Imagine that you're in a forager culture and
you want to exchange, you know, I have two turnips and I want to exchange them for some beads. Right,
(23:16):
I have to work that out for each individual group
of people. And what a market does is let you
do that literally on a planetary scale, lets you do
it for billions of people. And the price is a
kind of summary of here's all the desires and all
the goals and all the preferences of all of these
people just summarized in this one, you know number that
you find when you look on Amazon. Now, of
(23:39):
course that's interesting because it didn't even rely on computers. You know, markets start before the industrial age, and they don't rely on computations. They're way before we have computers, way before we even have calculators. But the invention of
markets was a kind of information processing invention that let
us take individual desires and put them together. And democratic
(24:01):
elections are like that too, where we have all of
these people who have different preferences, and the democratic process
lets us figure out a way of combining them all.
And that's another thing that these large models can do,
they can take lots of information from lots of different people,
put it together in a single format. And again this
is for good or for ill, and maybe we could
talk about both of those sides in a bit.
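The point about a price being a one-number summary of many people's preferences can be illustrated with a toy market. The valuations and the crude clearing rule below are made up; real markets are vastly messier, but the aggregation idea is the same.

```python
# Toy sketch: a clearing price as a summary of many private valuations.
# All numbers are made up for illustration.

buyers  = [9, 7, 6, 5, 3]    # the most each buyer would pay for a turnip
sellers = [2, 4, 5, 6, 8]    # the least each seller would accept

def clearing_price(buyers, sellers):
    """Return a price at which willing supply covers willing demand."""
    for p in sorted(set(buyers + sellers)):
        demand = sum(1 for b in buyers if b >= p)
        supply = sum(1 for s in sellers if s <= p)
        if supply >= demand:
            return p
    return None

print(clearing_price(buyers, sellers))   # 6: one number standing in for
                                         # everyone's desires and preferences
```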
Speaker 2 (24:23):
Great. Well, actually let's go there now.
Speaker 1 (24:25):
So all previous social and cultural technologies always come with
good and bad.
Speaker 2 (24:29):
So what do you see.
Speaker 1 (24:31):
As far as large models go with our current moment
of AI. What do you see as the potential good
and bad?
Speaker 4 (24:38):
Yeah?
Speaker 3 (24:39):
So, one thing that I think is worth pointing out
is that actually the big technological change was not the
change in large models. It was the fact that around
the year two thousand this remarkable thing happened that nobody really noticed or paid attention to, which is
that all the previous media got turned into bits. So
it's fascinating. Around two thousand, you get the first computerized movies,
(25:05):
You get things like Toy Story and Pixar. You get PDFs.
PDFs are taking print and turning them into bits. You
get HDTV, so you get TV that's now digital, And suddenly,
in a very short space of time, the only analog
media are.
Speaker 4 (25:21):
In museums and kindergartens.
Speaker 3 (25:23):
You know, everything else is digital, which
means that now not only do you have information, but
it's infinitely reproducible, and it's instantaneously transmissible because it's in
the format of bits. And once that happened, then it
was just a matter of time before we found new
ways of accessing and summarizing and organizing that information. And
(25:44):
large models, I think, are the result of that. Okay,
so what do we know about these past changes in technologies?
Speaker 4 (25:52):
You go back to writing.
Speaker 3 (25:53):
As I mentioned, Socrates was very dubious about whether writing
was going to be a good thing or not, because
he pointed out that you don't have Socratic dialogues when
you have books, when you have writing, and you don't
memorize all of Homer when you have books, and he
was right, we don't memorize all of Homer anymore. Well,
we do tend to think that things that are written
(26:14):
down are truer than they actually are.
Speaker 2 (26:16):
And by the way, in.
Speaker 1 (26:17):
The fifteenth century, once the printing press was invented, there
were a lot of these same complaints that surfaced, where
people said, look, you know, if you ask a kid
a question, now they're just going to go to the
shelf and pull it right off and there's the answer.
Speaker 2 (26:29):
They don't have to think about it.
Speaker 3 (26:30):
That's exactly right, and so there was misinformation. You know,
when you can pass on information, one of the first
things that happens is you can also pass on misinformation.
An example that I really like to give is, so
you were mentioning about the fifteenth century when printing starts.
Something that I just found out recently that I think
(26:50):
is really fascinating is, you know how we all have
this mythology about how Luther nailed the articles to the door,
and that was, you know, the big defiance thing. Well,
it turns out nailing things to the door was like
having post it notes, Like everybody was always nailing things
to the door. That was just like the way that
you distributed it. What Luther did was print his ideas,
(27:11):
and when they were printed, then they could be distributed
to everybody, including like the common people. And that was
the revolutionary thing. That was the disruptive
Speaker 1 (27:24):
Thing, analogous to, let's say, Adolf Hitler using radio in
the thirties, reaching a much wider audience.
Speaker 3 (27:30):
But my favorite example is in the eighteenth century, the
late eighteenth century, there were further technological changes in printing,
which meant that it became extremely... I mean, essentially, anybody could go and find a printing press and print a pamphlet and distribute it.
Speaker 4 (27:45):
And there's a very good.
Speaker 3 (27:47):
Argument that this was responsible for the Enlightenment, that you know,
something like it's not a coincidence that Ben Franklin was
a printer. A lot of the source of the American
Revolution was these pamphlets that were spreading new ideas from
the Enlightenment, things like Tom Paine's Common Sense, the idea
that people could have a democratic state. Those were all
(28:11):
things that were distributed through printing. So we all think
that's great, like, that's fantastic, we could get things really quickly.
But at the same time, the scholar Robert Darnton pointed
this out a long time ago. If you actually look
and read all of the pamphlets that were produced in France,
so they were happening in America, they were also happening
(28:31):
in France.
Speaker 4 (28:32):
You will be amazed to hear this, David.
Speaker 3 (28:34):
But most of them were libel, lies, misinformation, and a lot of soft-core porn. That's the first thing: when
people have a new cultural technology, that's the first thing
they use it for. And a lot of the French
Revolution came because, for example, Marie Antoinette saying let them
eat cake.
Speaker 4 (28:53):
That was a meme. That wasn't something that she said.
Speaker 3 (28:55):
That was something that came out of this underground of
people just printing pamphlets. So you get the same benefits
and drawbacks. You get information being distributed really quickly to
many more people in a way that means that new
ideas can spread. But it also means that misinformation and
(29:15):
libel and other kinds of things you don't want can spread.
And I think we see this happening with the current
systems as well. So LLMs are representing the humans, you know, who are putting things in. You know, the soup just tastes like what all the people have put in the soup,
And that means that if people are putting in things
that are racist or sexist, or just wrong or misinformation
(29:42):
or outrage, that's.
Speaker 4 (29:44):
What's going to show up from the LLMs.
Speaker 1 (29:47):
And of course that's no different than the Berkeley Library
in the sense that if people are writing books and
they understand something about planetary motion or dark energy or whatever,
and they write books on it, that's all we have to draw from, right? Two things.
Speaker 4 (30:01):
What is it that they can't do? Well, what they can't do is go out and find something new.
Speaker 3 (30:05):
And that's the great thing that human beings can do,
and especially actually human children can do, is go out
into the world and say, this is what everyone's told me
is true, but you know what, I'm not sure it
is true. Let me go and find out something new.
That's what kids do, that's what teenagers do. That's what
scientists do. Go out in the world, revise, change, think
(30:26):
about things in new ways, find out something new about
the world.
Speaker 4 (30:29):
And that's exactly the.
Speaker 3 (30:30):
Thing that LLMs can't do. What LLMs can do is
summarize what all the other humans have said. They can't
go out and find something that's not just
an extrapolation from what people have already done.
Speaker 1 (30:44):
So there's something really interesting about this, because what LLMs are great at doing, of course, is interpolating between different things that have been said, and sometimes those are lacunae, those are holes that humans haven't explored before, for whatever reason. So what I and some of my colleagues have been doing for a while is, you know, pitching
these really strange scientific questions at it and saying, you know,
(31:09):
what's your hypothesis, and it comes up with something and
we say, okay, now think, you know, more broadly: just
come up with something that could explain this, and you know,
and you keep pushing it and it comes up with
pretty creative things. And they're creative in the sense that
they are remixes of what it's taken in before. And
it can do something, you know, interpolating between points that
(31:31):
are already known.
Speaker 2 (31:32):
Now I totally agree with you. Of course, what it
can't do.
Speaker 1 (31:36):
Is think about something outside of the sphere of human knowledge,
which is what let's say, you know Einstein does when
he says, what if I'm riding on a photon, and
what would things look like?
Speaker 2 (31:45):
And so on.
Speaker 1 (31:45):
He comes up with the special theory of relativity. That is, as you say, taking all the stuff that's come before and saying, hey, maybe that's not right and there's a.
Speaker 2 (31:55):
Completely different way to look at it.
Speaker 1 (31:57):
And of course, what that requires is the ability
to not only come up with a new idea, but
then simulate that idea to its conclusions and say, oh,
you know, that explains things better than what we currently have.
Speaker 3 (32:12):
And a very important piece of that is it also
involves going out and testing it, so you know, it
wasn't just Einstein, it was people going out and actually,
you know, measuring the eclipses. That meant that that theory
was confirmed. So one of the things that even little
kids do is test things, go out experiment. When kids
(32:33):
do it, we call it getting into everything. But we've
been recently doing a lot of work showing that even
when little kids are getting into everything, what they're actually
doing is trying to get data that they can use
to change what they think, to revise what they think.
And that's a very human kind of intelligence. The reason
why the LLMs hallucinate, as people often call it:
(32:55):
it's not that they're hallucinating, it's that they're just not designed to know the difference between truth and falsehood. Their objective function, as they say, the thing they're trying to do, is not to get the truth.
It's not going out and doing an experiment. It's not
changing their minds. It's trying to get the best summary
of the things that they
(33:15):
already have heard from other human beings.
Speaker 1 (33:17):
Now, just for clarification, everything that you and I are
talking about right now is with current large models. But
what a lot of people are talking about now is
the third wave. The next wave of AI is probably
going to be agents who are experimenting in the world,
who are doing things in the world and gathering data
that way, because clearly we've already done the common crawl,
(33:40):
where these AI agents have crawled everything that has been
written by humans and there's no more data to be had.
Speaker 2 (33:47):
So the next step.
Speaker 1 (33:48):
Is run experiments in the real world, try out hypotheses,
and so that's going to change everything again.
Speaker 3 (33:56):
Yeah, I mean, that's where the kids are a wonderful example, because that's what kids are doing. And you know, you mentioned that we think about something like Einstein as being this big change. But I think about this with my grandchildren. For example, when I look at a phone, right, or a computer, what I say to myself is okay, well, you use.
Speaker 4 (34:18):
You use a keyboard.
Speaker 3 (34:20):
But then there's these other gimmicks about, like I could
talk to it or I could touch it.
Speaker 4 (34:24):
It would work.
Speaker 3 (34:25):
My grandchildren, even my eighteen month old grandchildren, think, oh no,
this is a system that you talk to and you
touch and then things happen when you talk. You know,
if you actually have this little physical object and you
talk to it and touch it, you're going to get effects.
And they think that the you know, the keyboard is like,
what is this? This is this weird, strange, awkward thing
(34:47):
that you know, peripheral device.
Speaker 4 (34:49):
We don't want to have anything to do with that.
Speaker 3 (34:50):
Eighteen-month-olds, right, aren't reading, let alone using a keyboard. So even just, you know, the difference between my
vision of what a computer is and what my eighteen
month old grandson's vision is is already a really radically
different vision.
Speaker 2 (35:09):
That's right.
Speaker 1 (35:09):
So the long arc of moral progress, as well as the arc of human knowledge, is all about us passing on to the next generation: hey, here are the things we've learned, exactly, and then they pick it
up and they springboard right off of that.
Speaker 3 (35:25):
So that's going to be, you know... if you look at the difference between how much progress has been made with the LLMs and something like robotics, where there's progress, but it's much slower and still very, very far removed from what every baby's doing. You know, I do think, and we've been thinking about, whether you could design a system that... For instance, let me describe this: one of
(35:50):
the kinds of AI systems, not an LLM but a really different approach, is what's called reinforcement learning. And reinforcement
learning is when a system does something and then learns
from the outcomes of what it does. But that's still
very very labor and computation intensive. It's hard, it's hard
to do efficiently, and it has a big problem, which
(36:12):
is in reinforcement learning, what those systems are doing is
just trying to, say, get a higher score in a game. So reinforcement learning is how they solved Go and chess. So you can say, okay, you want to win the game; what do you have to do to win the game?
But of course a lot of what children are doing
is just trying.
Speaker 4 (36:27):
To figure out how the world works.
Speaker 3 (36:29):
It doesn't really matter whether you're winning or losing, or
you're getting things or not. It's that you just want to
figure out how the world works. So one thing we've
been doing is saying, what would happen if you had
a reinforcement learning system and instead of trying to get
a high score, it was trying to get more information,
or it was trying to figure out how to be
more effective or figure out how cause and effect worked.
(36:50):
Would that be a better way of describing it? That certainly is much closer to what the kids are doing. But
part of the problem is that in the systems that
we have now, you know, you were mentioning, well, you could go to ChatGPT and say, okay, give me five different ways of answering this question. But the
way that they currently are generating that kind of variability
(37:11):
and novelty is basically just by being random, just by
what they sometimes call turning up the temperature, just doing
things that are more different from one another. They're not
able to say, Okay, this is a plausible answer, this
could be the right answer, and this is just you know,
completely irrelevant. That's part of the reason why they hallucinate
(37:32):
is because they'll generate something and it seems like it
could potentially be an answer, and they're not evaluating
does this make sense or not? And that's something that
we know that kids are doing, and we don't have
a very good account of.
Speaker 4 (37:45):
How they do it.
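What "turning up the temperature" means can be shown with a minimal sketch, assuming a toy three-option vocabulary and made-up scores. Higher temperature flattens the distribution, so the outputs vary more, and nothing in this sampling step asks whether a candidate actually makes sense.

```python
import math, random

# Toy sketch of temperature sampling. The candidates and scores are made up;
# real models score every token in a large vocabulary, one step at a time.

candidates = ["the clock is high up on a tower",
              "the clock is made of cheese",
              "clocks eat time for breakfast"]
logits = [3.0, 0.5, 0.2]   # assumed model scores: higher = more likely

def sample(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                                   # for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    probs = [e / sum(exps) for e in exps]             # softmax
    return random.choices(candidates, weights=probs)[0]

print(sample(logits, temperature=0.2))   # almost always the top-scored option
print(sample(logits, temperature=2.0))   # much more varied; plausibility is
                                         # never checked, only randomness added
```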
Speaker 3 (37:45):
So, you know, this is the old... this shows how old I am. There used to be a show about, you know, kids say the darndest things, and there's.
Speaker 4 (37:54):
Slightly more expletive-laden things like that on the internet.
Speaker 3 (37:58):
Kids are always saying these creative, strange things, but they're not random. It's not that they're just randomly saying things that make no sense. They're really different from what a grown-up would ever say, but they kind of make sense. A lovely
example recently: a postdoc was taking
(38:19):
her four-year-old out for a walk on the Berkeley campus, and the campus has a campanile, a bell tower with, you know, a clock that's very high up on top of a tower.
And a little boy looked at it and said, there's
a clock up there.
Speaker 4 (38:34):
And then he thought to himself and he said, why
did they put the clock up there? And he said
it must be so the students and the children couldn't
break it, which is of course a wonderful explanation, right,
like you put it up really.
Speaker 3 (38:48):
High so it'll be out of reach of the students
and they can't go and break the valuable clock. Not
something that a grown up would think of, but something
that's kind of plausible. And it's that kind of capacity
to generate something that isn't random but makes sense that
kids are really really good at doing. And we've done
a bunch of experiments at show. In some cases they're
(39:10):
better at doing that than grown ups are. But that's
something that's still very much not part of what's available
in AI. And there's an old observation in AI, sometimes
called Moravec's paradox. Hans Moravec was originally the person who noticed it, which is that a lot of the things
that we as humans think are really really hard and
(39:32):
require a lot of intelligence, like playing chess, are actually
not that hard for AI systems. Things that we just take for granted, like picking up a
chess piece and putting it on the board, turn out
to be a lot harder than playing chess. So the
thing that the kids can do, which is take a
bunch of you know, take a box full of mixed
(39:53):
up chess pieces and pull out the right ones and
put them in the right place, or have someone say,
all right, now we're going to play chess, but we're going to have a different rule: you can move the pawns as many spaces as you want. Those kinds of abilities are exactly the ones that even the great chess-playing programs are going to have a hard time doing.
Speaker 1 (40:28):
That's right, and this is what I was mentioning before
about this third wave. So the first wave of AI
was really about reinforcement learning entirely. It was let's nail chess,
let's nail Go, and it's all about playing millions of games with reinforcement. Then the second wave of AI
was totally different. It was, Hey, let's just absorb everything
that's out there. And so the third wave is really
(40:49):
something like becoming closer to a child that's interacting with
the world and getting that feedback, right, And that's the future.
I mean, we don't have that yet in twenty twenty five,
but it'll be great to revisit this part
of the conversation in twenty twenty eight and see where
things are and what they look like.
Speaker 3 (41:06):
Yeah, I mean, it may be that we're headed for another, you know, the famous AI springs and AI winters. My intuition is that we're getting to the limits of what the LLM cultural technology piece can do. Now, that will change the world, there's no question about it. And in a way, you know, the fact that kids can get on the internet and find out... that kids, anyone, can pick
(41:27):
up something that's in their pocket and find out all
of this information about the world, that is going to
change the world. But if we're thinking about intelligence, then
I think we still have a long way to go
to have something that has the kind of intelligence that
every child has.
Speaker 1 (41:42):
Has, in the sense of the AI being an intelligent being. Exactly right,
exactly right.
Speaker 2 (41:47):
I'm curious what you think about this.
Speaker 1 (41:49):
I've been sort of campaigning on this point for a
long time that I think the next generation is going
to be smarter than we are simply because of their
access to the world's knowledge. With the rectangle in their pocket,
the moment they're curious about a question and have the
right neurotransmitters present, they get the answer, and the exposure
(42:10):
to everything that's known is so incredible as far as
filling their warehouse so they can do remixes and come
up with new ideas.
Speaker 2 (42:18):
What's your take on children growing up in the digital age?
Speaker 4 (42:23):
Growing up in the digital age.
Speaker 3 (42:24):
Yeah, so I think that's obviously a really good, profound question. And I think, again, if you think about those past examples, like what writing and pictures and print and then the Internet itself did, the ways that that really changed things: in some ways it really
(42:45):
really beefed up human intelligence in really important, significant ways. The amount of things, just the number of things
we know, right, the number of things we could know,
the number of things we could access.
Speaker 4 (42:58):
All of those things.
Speaker 3 (42:59):
Those cultural technologies have really changed, and I think there's
every reason to believe that that will happen with the
new technologies as well. Now. At the same time, there's
always been these trade offs where other kinds of intelligence
that we had before, like the kind of intelligence that
you need to have to be a hunter gatherer, for example,
being able to be out in the world, take in
lots of information, go out and act, those kinds of
(43:22):
intelligences might suffer and probably will suffer as a result
of that. So, you know, being able
to build a house, right, which is a really useful
thing to be able to do, is not something that
you can do just based on YouTube videos, although maybe
YouTube videos can help. My suspicion
(43:45):
is that what tends to happen with these technological changes
is they make all the difference in the world and
nobody realizes it because by the time they've made all
the difference, you just sort of take it for granted.
So an example that I like to give is as
I'm walking down the street now, I'm spending a lot of time decoding text. So as I walk down the street,
(44:06):
I'm completely surrounded by these signs. You know, go call
our lawyer for your accident in case you're in an accident,
right or whatever, all the signs that are on the street.
I don't think to myself, God, it's exhausting every minute.
I never get to just walk down the street. I'm
always putting all this energy into trying to take all
(44:29):
this text and read it. And if you think about it,
if you were a preliterate person, it probably would
be exhausting if you had to sit down and figure
out what is it that each one of these signs
is saying. But of course we don't even... I mean, literally,
we're not even conscious of it. It's just part of
what goes on in the background. And we know from
neuroscience that in fact, parts of our brain have been
adapted to just do this kind of processing really quickly,
(44:52):
so we don't even notice that we're doing it. And
my suspicion is that that's what's going to happen with
the next generation and the internet and information. We'll
be doing these things we won't even think about it.
But there's a really important caveat to that, which is
that if you go back to those examples of print
and writing, why is it that we didn't just have
(45:16):
all the evil misinformation and libel of the French Revolution indefinitely? Well,
what's happened is every time we've had one of these
new cultural technologies, we've also developed systems for keeping them
under control. So newspapers, you know, again we just sort
of take newspapers for granted, and newspapers are sort of disappearing,
but newspapers were a way of taking that printed information and
(45:39):
curating it and having editors who said this is true,
and this isn't true, and this is something we want
to tell our readers, and this is something we don't
want to tell our readers. And you had norms that
developed, things like journalism or journalism school or libel laws, that said, no, here's a way that we can take this new cultural technology and control it in a way
(46:01):
that will let the good parts come out and keep
the bad parts under control. And every time there's a
new cultural technology, we've had new laws, we've had new norms,
we've had new institutions, we've had new kinds of people
who were there to try to make these institutions work
for us. And the same thing, by the way, is true
for markets. You know, think about the way that markets
(46:23):
are great for coordinating people, but we all know all
the terrible things that can happen with markets, so
you need to have laws, you need to have institutions,
you need to have ways to regulate them. And I
think the same thing's going to be true with AI.
If we're going to succeed, we're both going to have
to have.
Speaker 4 (46:38):
Internal you know, internal norms.
Speaker 3 (46:41):
I think you see some of that happening with kids,
where kids will say things like, oh no, I don't
you know this is the wrong thing to pay attention to.
If you go to that YouTube site, it's full of
you know, it's full of nonsense. This one is actually
better. And also we're going to
have to have all those boring things like legislation and
(47:02):
code and regulatory agencies, which is already sort of starting, that
will make sure that things are good rather than bad.
Another example I like to give is a great technological change that we don't think about very much. Think about it: it's nineteen hundred and people are saying.
Speaker 4 (47:20):
You know what we should do.
Speaker 3 (47:21):
We should take all our wooden houses and we should
put electricity in them. So we should put wires that
have electricity, which we know burns things, and we should
put them in everybody's house, including everybody's wooden.
Speaker 4 (47:32):
House, and it'll be fine, right, it'll be great.
Speaker 3 (47:35):
Well, the only reason why we can do that, and we did have a lot of houses burn down to begin with, is because we have, as anyone who's a contractor or has done a reno knows, this thing called code, which is this book
that's like this thick that has all the rules about
here's what you have to do, here's how the wiring
(47:56):
has to work, here's all the things that you have
to do to make electricity work. And nobody thinks about
that big book of code as being a fantastic human invention.
We mostly think of it as, oh, God, that's why
my contractor is charging me so much. But that invention,
in a way, is just as important as the invention
of electricity itself. The invention of electricity itself was a
(48:16):
big scientific advance, but you couldn't use that unless you
had this other kind of advance, which was the code,
the laws, the regulations, the legislation. And I think that's
something that a lot of people in AI are realizing.
Speaker 1 (48:32):
So when you're thinking about what will be the legislation
and the norms that are coming down the pike, when
you squint into the future, can you see what that's
going to look like?
Speaker 3 (48:41):
I think that's a good question, and it will have
to be somewhat different for each one. The way that we deal with language, which is, you know,
sort of having a moral principle that you shouldn't lie,
for example, is different from what we have to do
with writing, is different from what we do with print,
is different from what we had to do with the Internet.
Speaker 4 (48:58):
One of the points that.
Speaker 3 (48:59):
We make in that paper that I think
is really important is that there are real dangers in
the fact that this technology has been monopolized by a
few big agencies, for example, a few big companies.
Speaker 4 (49:14):
So one thing is.
Speaker 3 (49:15):
How are we going to make sure that it's democratized,
that in fact, people can use it outside of the
control of just a few big companies.
Speaker 1 (49:25):
And, just a quick interjection: I actually have no worries about that, and I'll tell you why.
Speaker 2 (49:30):
It's because everything starts this way.
Speaker 1 (49:32):
And you know, we saw when DeepSeek came out
recently they had reportedly done it for six million bucks
instead of billions. And it's only going to get cheaper
and easier to do this.
Speaker 2 (49:46):
So I think it.
Speaker 1 (49:46):
Will quickly become democratized, just like every other example we've
had of things like this, like printing. I mean, now
we all have a printer on our desk at home, right,
whereas a printing press was a big.
Speaker 2 (49:57):
Deal that only a few people had.
Speaker 4 (49:59):
Yeah, well, I think that's partly true.
Speaker 3 (50:01):
But on the other hand, the fact that these all
depend on these pre-trained systems that do take an enormous amount of money to get started, and even with DeepSeek, it seems like part of what DeepSeek was doing was taking advantage of the fact that this pre-training had already been done by other systems.
Speaker 4 (50:22):
So I think that will happen.
Speaker 3 (50:24):
But I think the question about who will have control
is a really important political and economic question. Here's another
interesting point that my colleague Henry Farrell has made, and that we made in that Science piece. If you think about cultural technologies, they intrinsically always involve a tension
(50:48):
between the creators and the distributors. So the idea of a cultural technology is that I can get information from other people. That's essentially the idea. But that means someone has to make
up that information, someone has to generate it, someone has
to find out what's true, for example.
Speaker 4 (51:05):
And there's this kind of paradoxical synergy between the people.
Speaker 3 (51:09):
Who are out there creating things that are new and
then the people who are distributing them. So there's no
point in writing, as you and I both know as writers, right,
there's no point. Well, or maybe there's some point, but
there's not much point in writing a book unless you
have a publisher who can actually get it out into
the world.
Speaker 4 (51:24):
Right.
Speaker 3 (51:26):
But for the publisher, there's no point in being a
publisher unless you can get people to write books for you,
unless you can get new content, new ideas.
Speaker 4 (51:36):
But that also means that there's.
Speaker 3 (51:37):
This intrinsic tension, economic tension between who's going to pay right,
So for the distributors, it's always going to be in
their interest for the creators.
Speaker 4 (51:46):
To get paid as little as possible.
Speaker 3 (51:48):
And for the creators it's always going to be in
their interests for the distributors to.
Speaker 4 (51:51):
Get paid as little as possible.
Speaker 3 (51:53):
So if you're thinking about that from an economic perspective,
there's always going to be this tension between the people
who are creating and the people who are distributing. And
you can already see that in what's happened in journalism,
for instance, or what's happened with the death of local newspapers.
We had this kind of strange equilibrium where you know,
the want ads and the weather reports paid for the
(52:16):
investigative journalism and the arts criticism. And once that
two thousand digital convergence happened, that wasn't going to be
there anymore.
Speaker 4 (52:26):
And now it's.
Speaker 3 (52:27):
Really sort of up for grabs about who is how
people are going to get compensated, and who is going
to have the power of the distributors or the creators,
And I think that's going to be a really important
issue as we're going forward with this too.
Speaker 1 (52:41):
One idea that people have been floating recently is to
have digital dividends come back to the creators, so that
the people whose work has been used to train the
models and maybe provided point one percent of the answer
that that model just gave get a few pennies back each time. That's one idea of how those economic models might evolve
(53:05):
over the coming year.
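As a purely hypothetical sketch of how such a digital-dividend scheme might pay out (the attribution shares and the per-answer fee are invented; nothing like this exists in today's models, and computing attribution at all is the hard, unsolved part):

```python
# Hypothetical sketch of a "digital dividend": split a per-answer fee among
# creators in proportion to an assumed attribution score. The shares and fee
# are invented for illustration; real attribution is an open problem.

attribution = {"author_a": 0.001, "author_b": 0.0005, "everyone_else": 0.9985}
fee_per_answer = 0.02   # dollars, assumed

def payouts(attribution, fee):
    return {creator: share * fee for creator, share in attribution.items()}

for creator, amount in payouts(attribution, fee_per_answer).items():
    print(f"{creator}: ${amount:.6f} per answer")
```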
Speaker 3 (53:06):
I mean, if you think about it, like what we
did for print was we invented copyright law, so we
have a whole lot of laws about Look, you can't,
you know, I can't just take a David Eagleman book
and copy it and say that it's mine. But of
course that involves the actual text. So copyright depends on
the fact that you're taking word for word something that
(53:26):
someone else has written. But what if Google decides to
use David Eagleman's book to train its large language model?
It's not taking every single word, but the reason why
it's giving you good answers about neuroscience is because it's
been trained on those books. And we know as a
matter of fact that those large models have been trained
on my books and your books and all the other
(53:49):
books that are out there.
Speaker 4 (53:50):
Now.
Speaker 3 (53:50):
They're not copying it down exactly, but the fact that
you could go to ChatGPT and it could tell
you here's what Alison Gopnik would say about children's learning
comes because it's gone out there and used all that
information in all of those books.
Speaker 4 (54:06):
So it's really tricky.
Speaker 2 (54:08):
Yeah.
Speaker 1 (54:08):
What I think is really tricky about this is the
reason you and I are able to write books is
because we've sat in thousands of lectures, we've heard ideas
from people, and we've put things.
Speaker 2 (54:18):
Together in our own way and our own voice.
Speaker 1 (54:20):
And in a sense, the LLM is doing what humans do
in terms of creativity.
Speaker 2 (54:25):
We're remixing all of our inputs.
Speaker 4 (54:27):
Yeah, but of course we wouldn't.
Speaker 3 (54:30):
Our books wouldn't be worth reading if they were just
summaries of everything that had gone before. So part of
what we want is to have exactly this.
Speaker 4 (54:39):
People in.
Speaker 3 (54:41):
A field called cultural evolution actually study how it is
that information comes from one generation to another.
Speaker 4 (54:48):
And we've studied this in children. It's really fascinating.
Speaker 3 (54:52):
In The Gardener and the Carpenter, which is my book
about parents and children, looking at how children learn from
other people, there's lots of examples in there about how
children can take information from their parents, for example, but
they don't just swallow it whole; they think about it
in new ways, they remix it. In cultural evolution they
(55:14):
talk about this as the balance between imitation and innovation.
Speaker 4 (55:17):
So you need both.
Speaker 3 (55:18):
You need to be able to imitate the other people
around you to make progress, but then you also want
to do something that's not just what the other people
around you have done. I mean, there'd be no point
in imitating if somebody hadn't innovated.
Speaker 4 (55:29):
At some point in the past.
Speaker 3 (55:31):
And it's a really hard scientific problem: how do
you get that balance?
Speaker 4 (55:36):
And what you see with children is that.
Speaker 3 (55:38):
If you give children information, they're very likely,
especially little children, to kind of imitate you literally. You know,
you produce a gesture and the kids will imitate it.
But then they'll also change it depending on whether they
think it makes sense or not.
Speaker 1 (55:56):
Yeah. And as you know, in the animal kingdom, we see
that all animals have this trade off between exploration and exploitation.
So they'll spend eighty percent of their time exploiting the
things that they know, as in, under that rock there
are these grubs that I eat, and then they'll spend
twenty percent of their time exploring other places that they
(56:16):
haven't seen before. Across the animal kingdom, we see this
exploration exploitation trade off, and it strikes me that the
imitation innovation trade off is essentially the cognitive equivalent
of that.
Speaker 4 (56:28):
No I think that's exactly right. And what I've argued.
Speaker 3 (56:31):
Is that if you think about childhood, right, childhood is
very evolutionarily paradoxical. Like, why would we have this long
period when not only are we not producing anything, but
we're extracting resources from the other people around us? And
there's a really interesting biological generalization, which is that the
(56:51):
smarter the adult animal, the longer a childhood it has. And that's
very, very general. In mammals, birds, marsupials, even insects, you see
this relationship between the length of childhood and the intelligence,
in the anthropomorphic sense, of the adult, how good the
adult is at learning and figuring out the world. And I've
(57:11):
made the argument that this is really an example of
this explore exploit trade off. So childhood is evolution's way
of resolving the explore exploit trade off, because what it
does is it gives you this period of childhood.
Speaker 4 (57:25):
Where you don't have.
Speaker 3 (57:26):
To worry about exploiting, you don't actually have to worry
about taking care of yourself and going out in the
world and getting resources. As I always say, babies and
young children have exactly one utility function, as the economists say,
which is be as cute as you possibly can be,
and they're very very very good at maximizing that.
Speaker 4 (57:43):
And as long as you're as cute as you possibly
can be, you don't have to worry about anything.
Speaker 3 (57:47):
Else, right, you don't have to worry about getting fed
and taken care of.
Speaker 4 (57:51):
But what that means is that it.
Speaker 3 (57:54):
Frees you up to do the things that babies and
young children do, which is play and explore and experiment
and have weird, crazy, imaginative pretend play, and do all these
kinds of explore functions. And then, because the children are
doing that, the adults can take advantage of all
the novelty and innovation that the children are producing. So
(58:17):
it's interesting that in the computer science literature they
also talk a lot about this explore exploit trade off,
and there are literally proofs that it really is a trade off:
you can't have both of them at the same time.
And often the best solution, and I think people can
feel this intuitively, is to start out exploring
when you're trying to solve
Speaker 4 (58:38):
A new problem.
Speaker 3 (58:39):
Start out just brainstorming, having crazy ideas, not worrying too
much about whether you're getting there or not. And then
narrow in and say, Okay, now I have a solution.
Now I want to figure out how do I find
tune that solution. And I think childhood is nature's way
of implementing that idea. Explore first, explore later, and again,
(59:01):
if you're thinking about these artificial intelligence systems, it's a
really deep problem: how could you design a
system that can trade off those two capacities in an
intelligent way?
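To make that explore-exploit idea concrete, here is a minimal sketch in Python, not anything from the episode: a standard epsilon-greedy bandit agent whose exploration rate decays over time, so it behaves roughly the way described above, exploring heavily at first and then settling into exploiting what it has learned. The reward values, the decay schedule, and the function name are illustrative assumptions.

```python
import random

def decaying_epsilon_greedy(true_rewards, steps=1000, eps_start=1.0, eps_end=0.05):
    """Explore first, exploit later on a simple multi-armed bandit.

    true_rewards: hypothetical average payoff of each option (illustrative only).
    The exploration rate decays linearly from eps_start down to eps_end.
    """
    n_arms = len(true_rewards)
    counts = [0] * n_arms        # how many times each option has been tried
    estimates = [0.0] * n_arms   # running estimate of each option's payoff

    for t in range(steps):
        eps = eps_start + (eps_end - eps_start) * t / (steps - 1)
        if random.random() < eps:
            arm = random.randrange(n_arms)                        # explore: pick at random
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit: pick best estimate

        reward = random.gauss(true_rewards[arm], 1.0)             # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm] # incremental mean

    return estimates, counts

# Three hypothetical food patches; the agent only discovers the best one
# because it wandered widely during its high-exploration "childhood" phase.
print(decaying_epsilon_greedy([0.2, 0.5, 1.0]))
```

The early random choices look wasteful in the moment, but they are what make the later exploit phase reliable, which is essentially the point being made here about childhood.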
Speaker 1 (59:14):
And you know, I think that's the third wave
that's coming. Yeah, that's the part that we don't have currently,
but that's what's coming next.
Speaker 3 (59:21):
What's interesting is that in humans, the way we do
it is that we have a particular, what biologists call,
life history. We have a particular developmental history. We start
out being children, we become adults. Then, for humans, we
become elders, we become post-menopausal grandmothers. Now we
tend to think, and you hear a lot of this
(59:44):
from AI guys, with guys being the relevant term, that
somehow the thirty five year old adult male, the psychologist
or the philosopher or the AI guy, is like the peak
of human intelligence. And then there's this thing called intelligence
that you can have a little of or a lot of,
(01:00:05):
and the thirty five year olds or maybe even the
twenty five year olds like they have the most of it,
and then childhood is just building up to it, and
then elderhood is just falling off from it. But that
doesn't make much sense from an evolutionary perspective. In fact,
what seems to be true is that we have these
different functions, really different, radically different kinds of intelligence that
(01:00:27):
trade off. They're not just different, they trade off against
one another, like exploration exploitation, and our developmental trajectory from
childhood to adulthood to elderhood is a way that we
manage to deal with those kinds of trade offs. So
we have this childhood that lets us explore and lets
us get information from other people. We have this adulthood
(01:00:50):
where we can go out and use that information to
do all the things that grown-ups do, like find mates
and resources and find our way in the hierarchy. And
then we have this elderhood where now we're motivated to
use our resources to help the next generation, to pass
on the wisdom and information that we've got to the
next generation.
Speaker 4 (01:01:11):
So a way.
Speaker 3 (01:01:11):
I put this perhaps a little meanly sometimes, is basically,
we're humans up to puberty and after menopause, and in
between we're sort of basically glorified primates. In between we're
doing all those things like, you know, mating and predating
and finding resources. And all the fun stuff is what
the kids and the grandmoms get to do, which is
(01:01:32):
play and explore and tell stories and pass on recipes
and do Broadway show tunes, all the things that I
do as a grandmother with my grandchildren. But
there is something that's deeper than that, which is that
typically AI has not kind of had this developmental perspective.
And I think that's another thing that developmental science can
(01:01:54):
contribute to AI: to think not just that there's
this thing called intelligence that we want more of or
less of, but about how you, in a society and
in an individual across time, trade off these really different
kinds of intelligences.
Speaker 1 (01:02:12):
So to zoom this back out to the big picture:
the idea is not to look at AI as an
intelligent agent, but to look at it as a cultural technology.
One of the things that suggests, and you say this
in the paper, is that it's not just engineers who
should be studying AI, but it's social scientists who should
be commenting on it.
Speaker 4 (01:02:31):
Yeah, so just yeah, it's sort of fascinating.
Speaker 3 (01:02:34):
So if we wanted to say, let's try and figure
out what's going to be good, what's going to be bad,
how do we regulate it, how do we make.
Speaker 4 (01:02:42):
It good or bad?
Speaker 3 (01:02:43):
The people we should be talking to are people who
know something about how print and writing and pictures affected society,
or how developing markets and democracies and bureaucracies restructured society.
And they all had enormous effects, right? I mean, in
a way, it's easy and fun to bring another
intelligent agent into the universe. It's a lot more fun
than coding, actually. We know how to do that and
we're doing it all the time, and it makes a
little difference, but not an enormous difference. Bringing a new
cultural technology into the universe like print, that's a giant change.
But we've done it before. It's not as if this
is a singularity. It's not as if this is something
(01:03:25):
that humans have never done before. The people who know about
it are psychologists and political scientists and sociologists and historians
of technology, and.
Speaker 4 (01:03:33):
One thing that I think would be really.
Speaker 3 (01:03:35):
Really helpful would be to get this historical perspective into
the way that we think about AI, rather than having
the sort of Golem and Frankenstein story perspective, of this as
something that's unlike anything that's ever happened in
Speaker 4 (01:03:50):
Humankind before.
Speaker 3 (01:03:51):
It's something that is important and really has had significant effects,
but it is something that's happened in human life before,
and it's something that comes out of the way that
human beings work, not something that comes out of some
strange alien inhuman kind of development.
Speaker 1 (01:04:12):
That was my interview with Alison Gopnik, professor at Berkeley.
There are two points I want to return to. I've
previously argued this point in Inner Cosmos in episode seventy two,
that AI is best thought of as a processor of
the intelligence of billions of humans. And that's because I
was noticing, even back then, the number of people who
(01:04:33):
typed in a sophisticated question and they got back what
appeared to be a sophisticated answer, and they concluded, Wow,
this thing is truly intelligent, but they were simply confusing
that with the intellectual endeavors of humans before them. Maybe
dozens of people had written about that topic, or maybe
hundreds or thousands, but they simply didn't know that, and
(01:04:55):
so they heard the echo and they misinterpreted that as
the proud voice of AI. So I named this the
intelligence echo illusion. The second point is that, right after my conversation
with Alison, I found myself thinking a lot about the question,
when there's a giant machine that collects up data and
processes it, why are we so quick to anthropomorphize it,
(01:05:19):
to see it as a being. Well, it turns out
that brains are very quick to do that. For example,
imagine you hear some sound in your home at night.
You assume it's a living, intentional creature, even though you
might discover, after some investigation with a flashlight and a
baseball bat, that it was just the wind blowing and
(01:05:40):
knocking something around. Presumably this is a survival mechanism to
assume everything is alive and has intention. But I think
there's another aspect to it also, which is that we
seem unable to think about complex systems, and so we
have to assign a central character to them. There are examples of this,
(01:06:00):
which I'm going to talk about in an episode
a few weeks from now, but for now, I'll just
use as an example what's known as the Great Man
view of history. We look to some historical figure, this
could be a woman as well, and we attribute a
historical outcome to that person. But this is a very
misleading narrative tendency, because anything that happens historically is a
(01:06:25):
hugely complex event, shaped by thousands of people and often institutions,
and sometimes random contingencies. We say that Hitler led Germany
into World War Two, but that glosses over the collaborators
and the predecessors and the cultural scaffolding that made all
these things possible. My assertion is that this isn't just
(01:06:48):
a storytelling shortcut. It reflects something deep about human cognition.
We're just not able to hold that level of complexity
in our heads, so we become storytellers. We anthropomorphize
to one or a few central characters. In other words,
it's really hard for our brains to grasp sprawling, distributed
(01:07:09):
networks of influence, so we turn a vast social tapestry
into a single silhouette. Okay, so now let's zoom back
out to the main picture. What happens when we stop
asking whether a large model is intelligent and instead ask
what kind of cultural machine it is. Today's conversation with
Alison Gopnik took us through a reframing that I think
(01:07:32):
is clarifying and possibly really important. When we think of
large models as minds, as proto agents, it's easy to
get swept up in a speculative drama.
Speaker 2 (01:07:44):
We ask if.
Speaker 1 (01:07:44):
They'll surpass us, if they'll enslave us. But when we
shift the lens, when we see these systems as cultural technologies,
more akin to libraries or markets, then our questions become
more grounded and actionable.
Speaker 2 (01:08:01):
Large models are changing.
Speaker 1 (01:08:02):
Everything about our lives, how we write, how we seek information,
how we work. They are reorganizing the structure of knowledge
in real time, and like every major information technology before them,
whether that's writing or printing or broadcast media, these tools
are going to shape culture not because they think, but
(01:08:24):
simply because they change the flow of ideas. So maybe
the first step towards wise stewardship of this new era
is to improve our metaphors: less Golem and Frankenstein, more
printing press and library; less digital brain, more public infrastructure;
(01:08:45):
less "will it become sentient?"
Speaker 2 (01:08:47):
And more, "what cultural shifts does this give rise to?"
Speaker 1 (01:08:52):
There's a lot at stake in how we frame these systems,
because our analogies and our metaphors are never neutral. If
we imagine a large model as a proto-human, we'll
fear it and we'll regulate it accordingly. But if we
see it as a new kind of social technology, something
like a market or a library or a sprawling editorial system,
(01:09:15):
we can draw on the history of how societies have
dealt with the cultural impact of information systems. In other words,
how we think about new technology will shape the world
we're about to live into. Go to eagleman dot com
(01:09:35):
slash podcast for more information and find further reading. Check
out my newsletter on substack and be a part of
the online conversation there.
Speaker 2 (01:09:44):
Finally, you can watch.
Speaker 1 (01:09:45):
Videos of Inner Cosmos on YouTube, where you can leave
comments. Until next time, I'm David Eagleman, and this is
Inner Cosmos.