Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
What's it like to have a much lower IQ than
you currently have or to have a much higher IQ?
Welcome to the inner Cosmos with me David Eagleman. I'm
a neuroscientist and an author at Stanford and in these
episodes we sail deeply into our three pound universe to
(00:26):
understand why and how our lives look the way they do.
Today's episode is about intelligence. What is it? What would
it be like to have the intelligence of a mosquito,
(00:47):
or a horse, or a squirrel? What would it be
like to presumably understand only very basic things right around you,
not doing sophisticated simulation of the future like we do
as humans. And what can we say about the present
and future of intelligence that is artificial? Okay, so let's
(01:12):
start with this question of what is it like to
have a different level of intelligence? I see people post
this question sometimes online on forums like Quora. Someone will write,
I have an IQ of sixty eight, what is it
like to have a higher IQ? Now, first of all,
(01:34):
I think this is an amazing question because it acknowledges
that not everyone is having the same experience on the inside,
and so someone is taking the time to ask, what
would it be like to have what is called a
higher intelligence level. What's the experience of that? Now, the
(01:55):
interesting thing in life is that we can't run a
control experiment on our own experience of the world, and
so whatever IQ you have, you sort of have just
that one experience of reality. But to get at this,
let's start by thinking about what it would be like
to have a much lower IQ than you do now.
(02:18):
One way to get at this is to ask the
question of what would it be like to be a squirrel.
I'm choosing a squirrel just because I was watching one
in my backyard yesterday, and I'm watching him run along
the top of the fence and climb up the tree
trunk and find a little scrap of food and look
around nervously. And a squirrel's cerebrum has about one hundred
(02:41):
million neurons, while ours has about one hundred billion neurons,
so a thousand times more. Now, size isn't everything, which
we'll return to in a little bit. Presumably the issue is
the algorithm that's running. But we can watch the behavior
of lots and lots of squirrels over lots of time,
(03:04):
and it certainly doesn't seem like they're having the kind
of capacity for thought that we are. So I was
watching this squirrel, and I thought, what would it be
like to be able to jump around from branch to branch,
but have no hope, presumably of ever discovering that force
equals mass times acceleration, or, for that matter, not even
(03:28):
ever being able to discover that E equals mc squared,
or just basic things like how do you build a chair,
or how do you think about the competition between Amazon
and Netflix, or just the idea that the telephone lines
they're running on top of are carrying megabytes of information
flowing as zeros and ones as one member of our
(03:50):
species communicates to another, or that more broadly, we have
airwaves and fiber optics carrying a flow of zettabytes of information
in a massive ocean of data around us. For the squirrel,
none of this exists. For the squirrel, none of this
(04:11):
is comprehensible. It's thinking about its acorn and where it
hid its last one, and it's thinking about safety and
maybe about mating. As far as we know, or as
far as we can tell, which of course has its limitations.
The squirrel is not ruminating on a play it saw
last night by the squirrel equivalent of Shakespeare and what
(04:35):
it means about the aspirations of a monarch and the
cruelty inherent in the competition for power. It's not thinking
about how to get to the moon or how to
build the next vaccine. Now, it's not to say that
there aren't specialized kinds of intelligence. Every move that the
(04:55):
squirrel makes along the tree branch is very impressive. There
is no way that I could hope to stick
landing after landing like that from the branch of one
tree to the next to the next, but the squirrel
is capable of doing that. And even though it performs
this incredible ballet in the face of gravity, it's presumably
(05:16):
never going to get to the point where it can
characterize gravity in equation form to understand how it should
move if it were on a different planet, or to
conceptualize gravity as curvature in the fabric of space time.
Although humans are capable of getting that in high school,
presumably squirrels of any age would find the concept well
(05:41):
beyond what their brains could even hope to have a
flicker of. And when you look around the animal kingdom
around us, we find lots of creatures presumably occupying very
different levels of intelligence. So a bat has, let's call it,
ten million neurons, and again it's not the size but
the structure that matters.
Speaker 2 (06:02):
But presumably they are not writing the equivalent of bat
books or building a little bat Internet where they can
capture for eternity everything that every generation of bats before
them has learned.
Speaker 1 (06:18):
A fish has only about one hundred thousand neurons, a
house cricket has fifty thousand neurons. A common fruit fly has
only about twenty-five hundred neurons in the equivalent of
its brain. So if you were a fruit fly, you
simply wouldn't have the notion of seeing the moon in
the sky and thinking, okay, that's an orbiting sphere, and
(06:41):
I'm going to derive a plan with my fellow flies
to get there by building new technologies that we can
fit our little bodies inside of so that we can
survive in low oxygen. And what if you're a mosquito,
with your little mosquito brain, all you know is the
mad attraction to certain odors which indicate a warm-blooded
(07:02):
animal, and the pleasure of slipping your proboscis in
and satisfying your thirst with the warm liquid. You presumably
don't even have the concept of blood, that it has
plasma and cells that are specialized for grabbing oxygen, and
all kinds of useful machinery for defending the host animal,
and so on and so on. So I've thought about
(07:23):
these issues for years, and I think there's a way
to understand what it is like to be so limited,
the way that the mosquito might look at the human.
If it could even have a concept of a human,
it might look at the human and think, oh my gosh,
I can't believe they understand all that stuff, and they
do it all at once. And the reason we can
(07:44):
understand what it's like to be limited is because we
are up against problems all the time that we're just
smart enough to recognize, but not yet smart enough to solve.
Take the origin of life, this is a massively difficult problem.
I worked with brilliant scientists like Sydney Brenner and Francis
(08:06):
Crick at the Salk Institute, who worked on theories of
the origin of life. And these were among the smartest
biologists of the twentieth century. And even they were like
little fruit flies when trying to tackle that problem. They
constantly were aware of the enormous gaps in whatever story
they were hoping to put together, because we just don't
(08:28):
have much of any lasting data from the past three
point eight billion years, and we're talking about how trillions
of atoms might come together in just the right way
over time to form things that can self replicate. It's
the kind of problem that when you really start to
reach your arms down into it, you realize that even
(08:49):
very smart human brains just aren't equipped for a problem
of that size. Or just take something like thinking about
the cosmos with one hundred billion galaxies in it and
one hundred billion stars inside each of those galaxies, and
uncountable numbers of planets rotating around those stars, and then
(09:12):
trying to picture or answer whether there is life elsewhere
in the galaxy and what it would look like. It's
clear that our human brains aren't so good at grokking
numbers like that. Even though we can estimate the numbers
and we can use words to talk about them, we're
not really capable of understanding them. And it's the same
(09:34):
thing when we study neuroscience. Of course, we've got in
the ballpark of eighty six billion neurons and each one
of those is connected to so many of its neighbors,
about ten thousand, that if you took a cubic millimeter
of brain tissue, there are more connections in there than
there are stars in the Milky Way galaxy. This thing
(09:54):
that we're facing on a daily basis, this thing, the
brain, that tens of thousands of people on the planet study,
this three pound organ that we have completely cornered: it
is so vastly complex that there is no way for
a human brain to understand itself. And in neuroscience we
(10:16):
have foundational problems that we can't answer, like why something
feels good. Take an orgasm: why does an orgasm feel good?
We can, of course tell the evolutionary story, which is
that it benefits the species to reproduce, so it is
advantageous for it to feel good. But the question from
(10:37):
a neurobiology point of view is how do you build
a network that feels anything. Take a big, impressive artificial
neural network like GPT four. It can do incredibly impressive
work by taking some prompt in written language and generating
words that would statistically go with that prompt and so on.
It's mind blowing how well it appears to do. But
(11:01):
GPT four presumably can't feel pain or pleasure. There's nothing
about one of its sentences that it generates that it
appreciates as hilarious or tear jerking. It doesn't have any
capacity to feel concerned about its survival or demise when
(11:21):
the programmers turn the computer off. It's just running numbers
down a long, complex algorithmic network, and that's it. So
how do we ever come to feel something? This is
perhaps the central unsolved question in neuroscience. It's usually summarized
as consciousness, and specifically the hard problem of consciousness, which
(11:45):
is to say, why does all this signaling moving through
networks of cells feel like something? How could you ever
program a computer to feel pain, or to detect some
wavelength of electromagnetic radiation and experience it as purpleness, or
(12:10):
to enjoy the beauty of a sunset. These are totally
unsolved questions in neuroscience, and presumably there are whole classes
of problems that we are not even smart enough to
realize are questions that we could be asking. So, despite
the incredible, pride-filling progress of our species, our ignorance
(12:34):
vastly outstrips our knowledge, and that affords us just a
little bit of insight into the limitations of our brains,
the glass walls of our fish bowl, and lets us
even very roughly imagine what it would be like to
have the intelligence of a squirrel. And these are the
(12:56):
things that I was thinking about when I wrote a
fictional short story called Descent of Species in my book Sum,
and so I'm going to read it here, and then
I'll come back to the question of intelligence. In the afterlife,
you are treated to a generous opportunity. You can choose
whatever you would like to be in the next life.
(13:17):
Would you like to be a member of the opposite sex,
born into royalty, a philosopher with bottomless profundity, a soldier
facing triumphant battles. But perhaps you've just returned here from
a hard life. Perhaps you were tortured by the enormity
of the decisions and responsibilities that surrounded you. And now
(13:40):
there's only one thing you yearn for: simplicity. That's permissible.
So for the next round, you choose to be a horse.
You covet the bliss of that simple life: afternoons of
grazing in grassy fields, the handsome angles of your skeleton
and the prominence of your muscles. The peace of the
(14:04):
slow flicking tail, or the steam rifling through your nostrils
as you lope across snow-blanketed plains. You announce your decision.
Incantations are muttered, a wand is waved, and your body
begins to metamorphose into a horse. Your muscles start to bulge,
(14:27):
A mat of strong hair erupts to cover you like
a comfortable blanket in winter. The thickening and lengthening of
your neck immediately feels normal as it comes about. Your
carotid arteries grow in diameter, your fingers blend hoofward, your
knees stiffen, your hips strengthen. And meanwhile, as your skull
(14:50):
lengthens into its new shape, your brain races in its changes.
Your cortex retreats as your cerebellum grows. The homunculus melts
man to horse. Neurons redirect, synapses unplug and replug on
their way to equestrian patterns, and your dream of understanding
(15:12):
what it is like to be a horse gallops toward
you from the distance. Your concern about human affairs begins
to slip away, Your cynicism about human behavior melts, and
even your human way of thinking begins to drift away
from you. Suddenly, for just a moment, you are aware
(15:34):
of the problem you overlooked. The more you become a horse,
the more you forget the original wish. You forget what
it was like to be a human, wondering what it
was like to be a horse. This moment of lucidity
does not last long, but it serves as the punishment
(15:56):
for your sins: a Promethean, entrails-pecking moment, crouching half horse,
half man with the knowledge that you cannot appreciate the
destination without knowing the starting point. You cannot revel in
the simplicity unless you remember the alternatives. And that's not
(16:18):
the worst of your revelation. You realize that the next
time you return here with your thick horse brain, you
won't have the capacity to ask to become a human again.
You won't understand what a human is. Your choice to
slide down the intelligence ladder is irreversible, and just before
(16:43):
you lose your final human faculties, you painfully ponder what
magnificent extraterrestrial creature, enthralled with the idea of finding a
simpler life, chose in the last round to become
a human. So in the story, I try to give
(17:06):
a way to think about the possibility that something could
be much smarter than us, that we are not at
the top of the ladder but maybe somewhere in the middle,
and that to some other creatures in the universe, we
would appear to be like the squirrels are to us.
It certainly could be that in this vast cosmos there
(17:27):
are intelligences that are so much higher than ours that
we lack even a good imagination or vocabulary to paint
these creatures in the same way that presumably the squirrels
would be unable to give a reasonable description of us
and what we're up to. Maybe these extraterrestrials can understand
(17:48):
the entirety of cosmic evolution by the time they're in
second grade, and they can keep in mind the trillions
of animal species on this planet and all the other planets,
and keep track of all the interactions and therefore understand
the biological history and future of a planet at depth.
(18:09):
While we mostly just use the word evolution to capture
something that we can't comprehend at a deep level. Now,
let me put on the table that I'm completely uncompelled
by the claims that there are UFOs, or, as they're
called nowadays, UAPs. But I have a very smart friend named
Kevin who told me the other day that he has
(18:32):
no problem believing that that's true. Now, I'm not defending
his position, but his stance was simply that if you
imagine the aliens are much smarter than we are, then
the particulars of what we're looking for, some Morse code
signal or some take-me-to-your-leader sign, that's
actually the wrong thing for us to be looking for.
(18:54):
Because if we imagine some civilization that is, say, totally
different from us and three million years ahead of us,
and they are to us as we are to the squirrels,
it's certainly not difficult to imagine the possibility that we
are simply not smart enough to construct a good model
(19:16):
of them and therefore even recognize them. Now, you might
assume that if they're much smarter than us, then they
could dumb themselves down to communicate with us the way
that we sort of know how to talk with a
child at a child's level. But our ability to model
lesser intelligences is still pretty terrible. I mean, you still
(19:39):
have no idea how to go out in your yard
and communicate with squirrels. Just try having a meaningful conversation.
Good luck. We're so much smarter than a squirrel, but
we have no idea how to plug into their neural networks.
Or just go to a zoo and try to have
a conversation with a panda bear and communicate to him,
(20:01):
Take me to your leader, or do that with a
camel or a dolphin. You get the point, which is
that just because you are smarter doesn't necessitate that you
know how to talk to these other animals. And this
is the situation that we could hypothetically be in with
extraterrestrial civilizations: that they are here even though we don't
(20:23):
recognize that they are there, because we don't even have
the capacity to imagine them, and they have no meaningful
way to communicate with us, because our needs and desires
are so different from what they can even understand. Now,
this lack of communication across species or across planets is
(20:43):
really brought into relief when we consider that intelligence is
not one thing, but there are many different behaviors that
we might put under the umbrella of intelligence. To this point.
In nineteen seventy four, the philosopher Thomas Nagel wrote an
essay called What Is It Like to Be a Bat?
(21:05):
Because fundamentally, being a bat is a pretty different experience
than being a human. If you are a blind
echo locating bat, you emit chirps in the dark, and
you receive back echoes of your chirps, and you translate
those air compression waves into a three dimensional picture of
what is in front of you. You make a mental
(21:27):
map of your surroundings this way. So Nagel asked this
question of what it's like to be a bat in
the context of consciousness, as in, given that we have
such a different sensory world, is there any way that
we could understand what it would be like to be
in such a different way of detecting and sensing the world.
(21:48):
But this same question could be applied to what we're
thinking about here, which is intelligence. Intelligence in the context
of a bat allows the bat to navigate around and
find food, and talk with others, and adapt when the
conditions change. But it's hard to directly compare it to
human intelligence because they have traits and adaptations that are
(22:10):
very sophisticated in their own way. Like I said, with echolocation,
they're creating this three-dimensional map of their space. They're
using auditory information in real time, and they can have
such precision that they can detect an object as thin
as a hair as they fly around, that they can figure
out the size and shape and speed of objects like
(22:33):
a little moth flying around, so they can zoom in
on it and grab it. And they also have sophisticated
social behavior, but presumably about different social things than what
we care about. And we know that they do all
kinds of problem solving, but it strikes me that it's
really difficult to know what sorts of problems they solve,
(22:55):
because some of the problems are so foreign to us
that we don't even know how to think about them.
So all this leads us back around to the main
question for today, which is what is intelligence? How do
we define it?
Speaker 2 (23:10):
Well?
Speaker 1 (23:11):
As it turns out, this has not been an easy
question for scientists, and it has come with lots of debate.
And this is one of those things where we all
have an intuition about what we mean by the word.
But the trick from a neuroscience perspective is how do
you rigorously define it and therefore, how do you study it?
When we talk about intelligence, let's say just human intelligence,
(23:32):
what are we even talking about. We all have a
sense of what an intelligent person is, but what is
happening in their brain that is different from someone else
who you might think is not so intelligent. How do
giant networks of individual neurons, billions of them, manipulate information
that you've taken in before and simulate possible futures and
(23:56):
evaluate those and throw out all the information that doesn't matter.
And do people who are intelligent store knowledge in a
different way, Maybe not categorically different, but just perhaps in
a way that's more distilled or more easily retrievable. So
these are the kind of questions we're facing now. The
first thing to appreciate about intelligence in the brain is
(24:18):
that size does not seem to matter. Andre the Giant
had a brain volume that might have been eight times
the size of yours, but he was probably not eight
times smarter than you. In fact, what is so remarkable
is that brains that are enormous, like in elephants, and
(24:39):
brains that are very tiny, like a little mouse brain
can both tackle very complex problems like foraging for food
and setting up a home and mating and defending itself
against predators. The Spanish neuroscientist Santiago Ramón y Cajal, like many
neuroscientists before and after him, was really struck by this thought,
(25:02):
and he had this beautiful comparison of large and small
brains to large and small clocks: like Big Ben and
a wristwatch, both tell the time with equal accuracy despite
the size difference. So all this is to say that
when we stare at brains, this secret of intelligence is
(25:24):
not immediately obvious just from looking at the brain. Now,
(25:46):
when we look across species, we can see what we
might mean by intelligence. For example, good problem solving skills.
Some primates are really good at using tools, like orangutans,
while other primates like bonobos are really good at
social intelligence. If you look at porpoises, you find that
(26:07):
they are better problem solvers and do much more than
say other swimmers like catfish. And when we examine humans
we see that somebody can be a genius in one
domain but quite bad at another. Ronaldo is a genius
at soccer, but he might not be so great at
differential equations. I recently saw a video of a kid
(26:29):
who can do a Rubik's Cube in about three seconds, but
he's autistic and therefore is not particularly good at anything
involving social interaction. So how can we put a measure
to what we are talking about here? About a century
and a half ago people started working on the question
of how you could quantify this. The British scientist Sir
(26:52):
Francis Galton was one of the first that I know
of who said we should be able to measure intelligence.
His idea was that you could quantify it by measuring things
like the strength of someone's eyesight and hearing, or the
strength of their grip. So that approach didn't last long,
but by nineteen oh five, two scientists, Alfred Binet and
(27:14):
Theodore Simon built the Binet-Simon test as a way
of quantifying some number for intelligence. And then in nineteen
sixteen here at Stanford University there was an educator named
Lewis Terman who developed the test further, and he renamed
it the Stanford-Binet test, which you might have heard
of because it is still used now. These sorts of
(27:36):
tests allow us to put a number on something, but
we still don't know exactly what we're measuring. The psychologist Charles
Spearman was intrigued by this question, and he made an observation,
which is that if you do well on one task,
something like verbal skills, you tend to also do well
at other tasks, like spatial skills, and so these things correlated,
(27:59):
and he speculated that there was some sort of general
intelligence involved here, and so he used the letter G
for this idea of a general intelligence factor, like a
general skill set of the brain. And other researchers noticed
this correlation also between very different sorts of tasks like
(28:19):
memory and perception and language and solving new problems and
pattern recognition and a whole bunch of others, and so
it still wasn't clear what intelligence is, but it's clear
that these things correlated, and so in nineteen twenty one a
researcher wrote that while it is difficult to define precisely
what intelligence is, intelligence is what the tests test. So some people felt that
(28:44):
there's one thing, this G, that underlies lots of different skills,
and others felt that maybe these are completely separate things
and intelligence is not one thing. So that's the debate
that got rolling over a century ago and it remains
an unsolved issue. The fact is that if intelligence were
just one thing, you might expect to sometimes see a
small bit of brain damage where someone loses skills across
(29:08):
different types of intelligence, or with the introduction of brain
imaging some decades ago, we might be able to see
a single small network becoming active even with very different problems.
But interestingly, this is still unresolved because some researchers ask
participants to do very different kinds of tasks like verbal
(29:29):
and perceptual and spatial things while their brain is getting scanned,
and they find that all of these tasks lead to
activity in an area called the lateral frontal cortex, and
so it might be interpreted to support the unitary intelligence
hypothesis because you're seeing one area becoming active even when
people are doing different kinds of tasks. But on the
(29:52):
other hand, we're always faced with the problem that our
current brain reading technology only lights up areas where there's
a lot of activation, and it doesn't catch
the areas that are more diffuse where the real detailed
action might be happening. And there's also an issue that
highly intelligent people find particular tasks less challenging and so
(30:14):
they often show less activity in the frontal cortex, not more.
And so it may be that even with our terrific technology,
it's still a little bit too crude to tell us
what intelligence is by simply going around and looking for
a spot or a collection of spots in the brain.
This is in the same way that you're not going
to look at chat GPT and say, ah, what makes
(30:36):
it intelligent? Are these few nodes here out of the
billions of nodes. Instead, it's a function of the whole
of the activity running through the enormous system. So it
may turn out that intelligence is not going to be
captured by a single brain area or even a system.
For all we know, it might not even be about neurons,
(30:57):
but about what's going on at the molecular level inside
of neurons, which means we might just be looking for
some correlate at the level of neurons. Now, all
that's speculative, but I just want to make clear that
often in neuroscience we are like the drunk looking for
the keys under the street light because the lighting is
better there, even though we dropped our keys over there.
(31:19):
Our technology has its limitations, and we often gravitate towards
the street light and ask if we happen to be
able to find the keys there, And sometimes that strategy
works and sometimes it doesn't. Now, part of the challenge
in asking what intelligence is is that the word probably
tries to hold up too much weight by itself, because
(31:41):
what we call intelligence is almost certainly made up of
multiple facets. For example, some people break this down to
analytic intelligence like you use in math problems, or creative
intelligence like writing a caption for a cartoon, or practical
intelligence, like how to operate well in the world.
(32:02):
So one question is whether these different categories of intelligence
truly represent different things with fence lines around them, or
whether they're underpinned by the same mechanisms in the brain
or overlapping mechanisms. But the problem is even trickier than that,
because even within any of these categories, we still have
to answer questions like how knowledge gets stored and retrieved,
(32:26):
how it can get restructured, how it can get erased,
and so on. So the question of what intelligence is
has attracted scientists throughout the ages to propose all kinds
of different answers, none of which may be mutually exclusive,
but they're all different angles on answering what it is
when somebody is intelligent. So let's look at some proposals.
(32:49):
One proposal is that intelligence has to do with squelching distractors. Technically,
this is called resolving cognitive conflict. So for example, let's
say we're playing the Simon Says game, where I say,
Simon says, look to your left, and then you do it.
But let's say I say lift your arm, but I
(33:09):
don't preface it with Simon says. Then what you're supposed
to do is override your reflex to lift your arm.
This is an example where you'd have cognitive conflict. So
the way neuroscientists study this is, for example, by using
something called a three back task. So imagine you're watching
(33:31):
a series of faces getting presented on the screen. So
first you see Tom Cruise, and then you see Beyonce,
and then you see Taylor Swift, and then you see
Anthony Hopkins and so on. Your job is simply to say,
when you see a face that matches the face that
you saw three faces ago, in other words, three back.
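[Editor's note: the three-back rule described here can be sketched in a few lines of Python. This is only an illustration of the task's logic, using the face sequence from the episode; it is not part of any actual experimental software.]

```python
def three_back_hits(stimuli):
    """Return the indices where the current stimulus matches
    the one shown exactly three presentations earlier."""
    hits = []
    for i, face in enumerate(stimuli):
        # A hit requires at least three prior stimuli and an exact
        # match with the stimulus three positions back.
        if i >= 3 and face == stimuli[i - 3]:
            hits.append(i)
    return hits

# The episode's example: Taylor Swift reappears exactly three
# faces after her first showing, so only index 5 is a hit.
faces = ["Tom Cruise", "Beyonce", "Taylor Swift",
         "Anthony Hopkins", "Emma Thompson", "Taylor Swift"]
print(three_back_hits(faces))  # → [5]
```

A repeat that comes back only two faces later, like the Zendaya example below, produces no hit, which is exactly the distractor a participant must squelch.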
(33:53):
If you then see Emma Thompson and then Taylor Swift again,
you'd say, yes, Taylor Swift matched what I saw three faces ago.
But if you see Zendaya and then Jennifer Lawrence and
then Zendaya again, that's a distractor because her face was
only two ago. And so you're supposed to hold
(34:14):
your tongue, or specifically not press your button. So to
perform this task, you need not only a small window of
working memory, but you have to squelch distractors. You have
to squelch faces that matched what was two faces ago,
or four faces ago or five faces ago. You can only
hit the button when the face matches what was three faces ago. Now,
(34:37):
you run this test on a whole bunch of people
with different levels of G, the general intelligence score, and what
you find is that people with a high G are
better at the task, in large part because they don't
respond to the distractors. When you do this in brain imaging,
you find that particular areas come online, like the anterior
(34:59):
cingulate cortex and the lateral prefrontal cortex, and these areas
seem to be necessary for overriding the cognitive conflict. So
that's one idea for what intelligence is, but other studies
suggest no, it's not about conflict resolution. Instead, intelligence is
about how many things you can hold in working memory. So,
(35:20):
for example, our visual memory can only hold let's say
three or four objects in mind at any given time.
So let's imagine that I show you some colored shapes
like a green triangle and a red circle and a
blue square, and then a moment later, I show you
a similar image, and I ask you where any of
(35:42):
these shapes are colors different? And you can probably do
this for three or four objects. But as it turns out,
some people are only able to retain the information from
one or two objects, and other people can hold more,
let's say five objects, and so some people have suggested
that that is really related to intelligence, with the idea
(36:02):
being that critical reasoning depends on how many things you
can hold in your working memory. If you can hold
more things in your head at any one time, you'll
be better able to manipulate things for solving problems. So again,
people have done brain imaging with EEG and fMRI and
found a little area in the posterior parietal cortex that
(36:23):
seems to act as a memory bottleneck and correlates with what
different people can hold in mind. Now it seems likely
(36:48):
that working memory capacity won't be the final unlock to
the question of intelligence, but it probably plays a role.
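[Editor's note: the proposed link between working-memory capacity and reasoning can be made concrete with a toy model. The sketch below is my illustration of the idea, not an actual experimental model: if only a few displayed objects fit into memory, a change to any object outside that window goes unnoticed.]

```python
def detects_change(display, changed_display, capacity):
    """Toy change-detection trial: the observer retains only the
    first `capacity` objects, so changes to unretained objects
    are invisible. Returns True if the change is noticed."""
    remembered = dict(list(display.items())[:capacity])
    return any(changed_display.get(shape) != color
               for shape, color in remembered.items())

# The episode's example shapes; only the square's color changes.
display = {"triangle": "green", "circle": "red", "square": "blue"}
changed = {"triangle": "green", "circle": "red", "square": "yellow"}

print(detects_change(display, changed, capacity=3))  # True: square was retained
print(detects_change(display, changed, capacity=2))  # False: change fell outside memory
```

The point of the toy model is that a higher capacity doesn't just store more; it determines which comparisons can even be attempted.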
So what other ideas are there? Well? As it turns out,
people in the late nineteen nineties got excited about the
idea of forming associations in the brain. And there's a
particular type of receptor in the brain called an NMDA receptor.
(37:13):
Don't worry about the details here, I'll link a paper
on the website. But you can genetically engineer this receptor
in a mouse and show that the mouse can link
things more strongly, like this light predicts food, or this
is the location where some reward is located. So a
scientist named Joe Chen and his colleagues at Princeton engineered
(37:35):
a strain of mouse to have more of this NMDA
receptor subunit. And this hit the news at the end
of the nineties because these mice called Doogie mice after
the TV show Doogie Howser MD, which was about a
really smart kid. These Doogie mice outperformed normal mice in
recognizing things they had seen before, or swimming their way
(37:56):
through a pool of milky water to remember where a
hidden platform was. Now, this news made a real
splash when it came out because the idea was that wow,
we've just invented intelligent mice. But we do have to
ask whether we think the doogie mice are more intelligent
just because they can do these laboratory tests better. After all,
(38:18):
intelligence is more than simply nailing down associations, and the
other thing to keep in mind is that all animals
have to balance the things they know against exploring new possibilities.
This is known as the balance between exploitation and exploration.
The reason animals have to balance this is because the
(38:39):
world changes and you never know exactly how and when
it's going to change. So if you are an animal
who's used to finding worms under the green rocks, you
want to spend some of your time exploring under the
blue rocks and the red rocks too, because you never
know when things in the world are going to change.
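[Editor's illustration, not from the episode: the exploration/exploitation balance just described can be sketched as a toy forager. The worm probabilities, the epsilon parameter, and the mid-run world change are all illustrative assumptions.]

```python
import random

def forage(epsilon, steps=4000, seed=1):
    """Toy exploration/exploitation forager: three rock colors hide worms
    with different probabilities, and halfway through, the best color
    changes (the world shifts). With probability `epsilon` the forager
    explores a random color; otherwise it exploits its current best
    estimate. Returns total worms found."""
    rng = random.Random(seed)
    colors = ["green", "blue", "red"]
    worms = {"green": 0.8, "blue": 0.1, "red": 0.1}   # green is best at first
    values = {c: 0.0 for c in colors}                 # running reward estimates
    total = 0
    for t in range(steps):
        if t == steps // 2:                           # the world changes
            worms = {"green": 0.1, "blue": 0.8, "red": 0.1}
        if rng.random() < epsilon:
            choice = rng.choice(colors)               # explore a random rock
        else:
            choice = max(colors, key=lambda c: values[c])  # exploit the best
        reward = 1 if rng.random() < worms[choice] else 0
        # constant-step running average, so old evidence gradually fades
        values[choice] += 0.1 * (reward - values[choice])
        total += reward
    return total

# A forager that never explores stays stuck on green after the shift,
# while a forager that spends a little time exploring finds the blue rocks.
print(forage(epsilon=0.0), forage(epsilon=0.1))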
So the Doogie mice seemed to be more about exploiting
(39:03):
knowledge that they learned and less about exploration. But that's
not necessarily a good thing. It depends on what happens
with the world. So just forming stronger associations is probably
not going to be the full answer to what intelligence is. Now,
there's another pathway we can sniff down when we're looking
(39:23):
for the root of intelligence, and that is the Eureka moment.
That is what happens when two concepts suddenly fit together.
Like I remember the moment when I was a kid
when I learned that fog is just the same thing
as a cloud, but it's low to the ground, And
it was a physical sensation for me to have these
(39:46):
two concepts fit together. Or if you're a detective, you
might have a bunch of clues on your desk and
then suddenly, aha, it all coalesces into a narrative because
all the facts fit. Now, what has just happened in
your brain? And how does your brain know and alert
you that a fit has been achieved. This is the
(40:08):
restructuring of information. And I just want to make clear
we are nothing like a computer that takes in files
of facts. Instead, we're always structuring and restructuring information. Now.
One of the places we can see that is when
a monkey learns a task. What you'll notice is that
(40:29):
you can't tell the monkey the rules of the task.
They have to figure it out themselves by doing it
over and over and getting reinforced with, let's say, juice
in their mouth or something like that over hundreds of trials.
And monkeys can learn this way, and they can get
better through time. Their performance just rises like a shallowly
sloped line. But if you give the same task to
(40:51):
an undergraduate, something very different happens. They'll try a few
things and then they'll suddenly get it, and their performance
jumps up. Suddenly they have an Aha moment, they have
a Eureka. Now, this observation implies that humans are doing
something that monkeys can't. Perhaps this has something to do
(41:12):
with restructuring knowledge, or perhaps the human student gets to
try out lots of hypotheses and evaluate them and then
restructure things accordingly. But whatever the issue is, this certainly
seems to play a role in what we think of
as intelligence. And it also suggests that animal models of
intelligence are going to be too limited for some of
(41:36):
the forms of sophisticated reasoning that we care about. And
I'll give you another thing that we might look for.
What if intelligence is about the ability to make good
predictions about the world. In previous episodes, I've talked about
the internal model, and I've emphasized that the only reason
(41:57):
the brain builds an internal model is so that we
can make better predictions about the future. So emulation of
possible futures is a giant part of what intelligent brains do.
As the philosopher Carl Popper said, this is what allows
our hypotheses to die in our stead. My friend and
(42:19):
colleague Jeff Hawkins has emphasized this for a couple of decades,
that we only have memory in order to make predictions.
So the idea is that you write down things that
happen to you that seem salient, and you use those
building blocks to springboard into possible futures. As Jeff puts it,
intelligence is the capacity of the brain to predict the
(42:44):
future by analogy to the past, and we can find
lots of evidence for that in examples of brain damage,
where people lose the ability to store memory and as
a result are unable to simulate the future. So this
whole memory prediction framework almost certainly plays a role in intelligence.
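[Editor's illustration, not from the episode: the memory-prediction idea, predicting the future by analogy to the past, can be sketched as a toy sequence memory. The event names are illustrative assumptions.]

```python
from collections import defaultdict, Counter

def learn_sequence(events):
    """Toy memory-prediction sketch: store which event tended to follow
    which (a first-order transition memory). Prediction then works 'by
    analogy to the past' by recalling what usually came next."""
    memory = defaultdict(Counter)
    for now, nxt in zip(events, events[1:]):
        memory[now][nxt] += 1
    return memory

def predict_next(memory, event):
    """Return the most frequently remembered successor, or None if this
    situation was never experienced (no analogy to draw on)."""
    if not memory[event]:
        return None
    return memory[event].most_common(1)[0][0]

past = ["dark clouds", "rain", "sun", "dark clouds", "rain", "sun",
        "dark clouds", "wind"]
memory = learn_sequence(past)
print(predict_next(memory, "dark clouds"))   # prints "rain": seen twice, "wind" once
print(predict_next(memory, "snow"))          # never experienced -> prints None
```

The second call is the interesting one: with no stored past, the system has nothing to simulate from, which mirrors the brain-damage cases above where losing memory also means losing the ability to imagine futures.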
(43:05):
But there are a lot of unanswered questions here. For example,
there are a huge number of possible future moves. How
does the brain simulate them all? Perhaps an intelligence simulator
saves time by developing tricks so that you don't have
to simulate everything. So there are lots of proposals and
(43:25):
possibilities for what intelligence is in the brain, and probably
there are many other possibilities that we haven't even begun
to explore or know how to explore. So I want
to pose a question about intelligence, and this one is
really important, and that is the question of why do
we have lions in zoos? After all, a lion is
(43:50):
so much more powerful than you are. A lion can
easily kill a human. It has these razor sharp claws,
and its body is all muscle and speed, and yet
we put lions in zoos. How well, there's only one
thing we have over lions, and that is intelligence, and
(44:13):
intelligence enables control. We don't brute force the lion into
the cage; we don't wrestle it in. Instead, we
do things like set up traps, or develop chemicals that
happen to interact with their neurochemistry and put them to sleep,
(44:34):
and then we package that into a syringe and use
explosives to launch it really quickly down a metal barrel
so it punctures their skin. All of these things are
moves that the lion cannot possibly predict because it couldn't
possibly conceive of them. And that's what makes our contemporary
(44:55):
discussions about AI so salient, because often when someone is
thinking about the question of whether AI could control humans,
they think about physically manhandling us with robots. But that
seems really unlikely because it's so hard to build physical robots.
You're constantly tending to the toilet of the robot machinery,
(45:16):
You're trying to keep all the pieces and parts together
and not have a wire pop somewhere. But the important
concept to get straight is that for AI to control humans,
they don't need brute force. Why because intelligence enables control.
Could we imagine a scenario in which the AI does
(45:39):
something that we can't predict because we can't possibly conceive
of it. Sure, And the interesting part is that there's
a whole space of scenarios that we can conceive of
and write science fiction novels about, But there's also the
space of the unknowns. Now, I'm not suggesting that modern
(45:59):
AI is going to move in that direction, because at
the moment it's just doing very sophisticated statistical games and
it doesn't have any particular desire for power. But I
think for sure things are going to get strange as
we grow into a world with another intelligence, one which
has read every single book and blog post ever written
(46:21):
by humans, and knows every map that we've ever made,
from streets to chemical signaling, and can create a video
of any new idea, and can simulate new combinations of
machines in fractions of a second. So this is the
reason it's important to understand what intelligence is when we
talk about artificial intelligence now. Earlier this year, I published
(46:45):
a paper about how we might meaningfully assess intelligence in AI,
and I discussed this in episode seven. In other words,
how would we know if some artificial neural network like
ChatGPT were actually intelligent versus just computing the probability
of the next word based on a slurry of everything
(47:07):
humans have ever written. Well, for sure, it is just
computing the probability of the next word. But the surprise
has been all the stuff that we didn't expect it
to be able to do. With this straightforward statistical prediction model,
it does more than it was programmed or expected to do.
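[Editor's illustration, not from the episode: "computing the probability of the next word" can be shown in miniature with a bigram model that turns counts of word pairs into conditional probabilities. Real systems condition on vastly longer contexts with neural networks, but the objective has the same shape. The corpus here is an illustrative assumption.]

```python
from collections import defaultdict, Counter

def bigram_model(text):
    """Toy next-word predictor: count which word follows which in a
    corpus, then turn those counts into conditional probabilities
    P(next word | current word)."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        follows[w][nxt] += 1
    probs = {}
    for w, counter in follows.items():
        total = sum(counter.values())
        probs[w] = {nxt: n / total for nxt, n in counter.items()}
    return probs

corpus = "the cat sat on the mat the cat ate the fish"
model = bigram_model(corpus)
print(model["the"])   # prints {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

This tiny model really is "just statistics," which makes the surprise described above sharper: scaling the same kind of objective up to everything humans have written produced abilities nobody explicitly programmed in.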
So that has left the whole field with a question
(47:29):
of whether simply having enough data gives us something that
is actually intelligent or whether it just seems intelligent. So
in that previous episode, I proposed that the tests we
currently have, like the Turing test, are outdated as a
test for meaningful intelligence. Why because the Turing test can
(47:52):
already be passed and it still doesn't tell us really
what we need to know. And it's the same with
other tests that have been proposed in the past, like
the Lovelace test, which asks whether computers could ever be creative,
and all it takes is a few seconds with Midjourney
or ChatGPT to see that that landmark is
also in the rear view mirror. So what I've proposed
(48:15):
is not about moving the goalpost. It's about fundamentally asking
what is the right test for a meaningful sort of intelligence.
So what I suggested is that we will know if
a system has some real intelligence once it starts doing
meaningful scientific discovery and puts all the scientists out of business,
(48:37):
because scientific discovery is something that requires a meaningful level
of intelligence. And I'm not talking about the type of
science that's just piecing together things in the literature, although
that's of course very useful. I'm talking about the type
of science where you think of something new that doesn't
already exist, and you simulate that and you evaluate whether
(49:00):
this crazy model you just came up with would give
a good understanding of the facts on the ground. So,
for example, when Alfred Wegener proposed that the continental plates
were drifting, that gave a totally different explanation for all
kinds of data, including the fact that South America and
Africa seemed to plug into each other like puzzle pieces,
(49:23):
And it gave an explanation for mountain ranges and so on.
And he simulated what would be the case, what we
would expect to see if this were true, and he
realized it made a good match to the data around him.
Or when Einstein imagined what it would be like to
ride on a beam of light and this is how
(49:43):
he derived the theory of special relativity, or when Charles
Darwin came up with a theory of evolution by natural
selection by thinking about all the animals that weren't here.
I suggest that these are the kind of things that
humans can do that represent real intelligence, the kind of
intelligence that has made our species more successful than any
(50:07):
other on the planet. So is modern AI intelligent in
this way? As of this recording, there's no simple answer
to this. There are arguments on all sides that generative
AI has actually reached some sort of intelligence or that
it hasn't. But it's not easy at the moment to
come to a clear conclusion on this. And although AI
(50:27):
intelligence might not be quite the same thing as what
we have, I suspect it's going to matter a lot
for us to better understand what human intelligence is made of,
so we can understand when AI grows up to be
the same or better and why. And I suspect that
the simple existence of AI is going to help us
(50:49):
think through these problems, because we're going to try things
and get over our naive assumptions about what intelligence might be.
For example, from at least the nineteen fifties onward, the
old way of trying to build artificial intelligence was to
give a computer a giant list of facts. You explain
(51:09):
that birds have wings and beaks and feathers, and they fly,
and then maybe you have to teach it that there
are some exceptions to the rule, like ostriches or penguins,
and you keep giving it these rules and structure. And
that approach never worked, and the field of artificial intelligence
descended into its winter. So what we learned from that
(51:30):
is that intelligence is probably not a series of propositions,
but rather it's stored in a very different way, for example,
a giant cascade of information in vast networks. And so
studying intelligence that is artificial, that's what's going to sharpen
our focus on intelligence that is evolved. So let's wrap up.
(51:55):
As you know, if you've been a listener to this podcast,
I'm obsessed with the way that we all see the
world from different points of view, not least because we
have subtly different genetic details in our brains from person
to person, as well as different life experiences which have
wired up the circuitry. And as a result, we also
have different intelligences that allow us to see the world
(52:19):
differently and sometimes with more or less clarity. And what
we've done today is looked at the complexity of what
seems like a simple question, what is intelligence? We know
that there are differences between species and even within members
of any species, but we don't always know how to
capture that. And the fact that we can address the
(52:41):
question but, after one hundred years, still not come to
a clear answer probably indicates that the word intelligence simply
holds too many different things, different skills, whether that's
the squelching of distractors or the number of things you
can hold in memory at any given moment, or the
formatting of information or making associations, or the ability to
(53:05):
simulate possible futures. It seems to me that one of
the most meaningful tests for the intelligence of our species
will be this. Will we be able to define and
understand intelligence before we create it and perhaps get taken
over by it? That will be the true test of
(53:28):
the intelligence of our species. Go to eagleman dot com
slash podcast for more information and to find further reading. Send
me an email at podcast at eagleman dot com with
questions or discussion, and I'll be making more episodes in
which I address those. Until next time, I'm David Eagleman,
(53:53):
and this is Inner Cosmos.