Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt (00:00):
Today on the Numenta On
Intelligence podcast.
Jeff (00:04):
You see someone who believes the earth is flat and you give them all these facts that don't fit that model. Now, the earth is flat predicts certain things, right? And, and, and they will do their damnedest to try to fit these new facts into their model, no matter what- otherwise you have to throw the whole thing away and start over again, which is a really disconcerting thing for
(00:25):
them.
Matt (00:26):
It is.
It is uncomfortable to have to throw away a whole frame of reference.
Jeff (00:30):
Basically, the models you build of the world are your reality.
Matt (00:34):
It's your belief system.
Jeff (00:34):
It's your belief system.
It's your reality.
This is what you know. That's what it boils down to. There is no other reality in terms of your head. This is it. And so if you say, well, you know what? Everything you believed, these reference frames- you have to start over again. Um, then it's kind of like, oh, you're going back to square one.
Matt (00:49):
Jeff Hawkins on defining
intelligence.
What is it?
What makes us intelligent?
How is our intelligence different from a mouse's? We're narrowing down the answers to some pretty existential questions. I'm Numenta community manager, Matt Taylor. Thanks for listening to Numenta On Intelligence.
(01:10):
All right, so I'm here with Jeff, and we're going to talk about intelligence in general and about how we might define it. And so I like to think about this from the question of evolution, in a way. Like, what do our brains do for us now? What did they evolve over time to do for us as humans, or as these life forms, these animals?
Jeff (01:31):
Yeah.
Yeah.
Matt (01:32):
It's a good place to
start.
Jeff (01:33):
That's a good place to
start.
Well, sure. Evolution is a good place to start for almost anything related to living. Um, yeah, I mean, I kind of view it that way too. I start off by saying- uh, I tell an evolutionary story. And the evolutionary story says, well, life formed on earth, you
(01:53):
know, billions of years ago, and it was pretty dumb for a long time. Bacteria, basically, for a billion or so years. And, um, and then things started to move. Um, movement requires- there's two ways you can move. You can move blindly, you know, which is pretty stupid. No point in moving if you don't really, you know. If you're going to move with a purpose, to achieve something,
(02:14):
you have to have some sort of knowledge about the world. Um, and that knowledge can be encoded in genes, the genetic material, but it's some sort of model. Even a bacterium has, uh, a motor plan, which is, you know: if I can move, I'll move, I'll turn towards gradients. That, you know, that's my, that's my strategy.
(02:35):
That's my model of the world. But any, any animals that start moving around a lot, um, then they needed- they need to have some sort of concept of where they are and where things are. And so all of a sudden, if you're going to have mobility, then you're going to have to have some sort of way of learning the structure of your environment and knowing where you've been. How do you get back to some place? If you're
(02:55):
going to a home and you get away from it and you have to come back, you have to have some way of doing that, right? And so nature evolved a series of ways of sort of building maps of your environments, and different animals do it different ways. And, uh, mammals have a certain way. And that was first evolved in the hippocampal complex- the grid cells and the place cells- which is really a
(03:16):
map of the world, and learning where you've been and knowing what you did this morning, things like that. And then, um, and then finally, the theory that I propose here, uh, that we released from Numenta, is that, um, that sort of mechanism that was highly evolved in the, in the, in the hippocampal complex- grid cells and place cells and
(03:36):
things- that method of, of mapping your environment, um, became- um, uh- evolution discovered it could make a sort of generic version of that and make lots of copies of it. And this is, this is what we think the neocortex is. So this is the whole Thousand Brains Theory of Intelligence.
Matt (03:57):
So you're saying that the mechanism the hippocampal entorhinal complex uses for an organism to navigate through its own environment is copied thousands of times.
Jeff (04:06):
Well, it's been refined
and copied.
There's not a direct copy.
Matt (04:10):
But it's running in
parallel.
Jeff (04:11):
Yes. So, you know, our theory that came out in the paper in December, the framework paper, and a couple of papers before then, is basically arguing that there are grid-cell-like cells throughout the neocortex, and that the same mechanisms that were basically used to figure out where you are- typically a rat is in some
(04:35):
environment, or we are in some environment. We have a sense of where we are. That's in the grid cells in the entorhinal cortex. Um, that, that same mechanism- it was sort of nature saying, hey, I can make a, a sort of slimmed-down generic copy of this mechanism, which is a mapping mechanism. It's basically learning maps of the world, if you think of it that way. And um, and now it's copied throughout the neocortex, and now
(04:58):
intelligence has taken that sort of mechanism, which was evolved for specifically one thing: navigation and remembering where you've been.
Matt (05:05):
Egocentric navigation.
Jeff (05:06):
Well, yeah, I don't want to use those terms. It's navigating in some rooms. I have to learn, like, uh, where's, where's my environment? Whether it's the woods or my house or the rat's nest under your house or whatever it is. Um, and learning where things are and how to get around and navigating- and so this is complex. That system is very complex, to do this.
(05:27):
There are egocentric and allocentric models there in the entorhinal cortex. So, but again, that's been the evolutionary pressure for a very long period of time. Of course, the neocortex, which we think of as the organ of intelligence, is fairly new. It hasn't been around very long at all. And it got big really rapidly. And the idea is it got big really rapidly by taking a single thing and making many copies of it.
(05:49):
And that single thing, we believe, is sort of an essence of what you see in the old brain, in the hippocampal complex, the entorhinal cortex. So now we are- um, so this is a very long answer to your question, but, um, instead of building just maps of, uh, you know, rooms and, and where you are in the forest and things like that, we now build maps for objects. And instead of
(06:12):
just my body moving through the world, it's my finger moving relative to objects, or my eyes moving relative to objects. So now we have started having the ability to build, uh, models of other things besides environments. So the bottom line of all this is that intelligence is tied to the quality of the model that you build about the world. Right. And to say we're smart- we say, oh, you're smart,
(06:35):
cause maybe we know about, you know, the planets, we know about, you know, black holes. That just means we have a model in our head for those things. And,
Matt (06:42):
And the, the meat of that is in the new part, the neocortex. It's interesting that, that the neocortex spread so quickly, evolutionarily. That means that it was doing something that was really useful.
Jeff (06:55):
Yeah. The only way you can get something big like that quickly, in evolutionary terms, is to just make more copies of it.
Matt (07:01):
Yeah.
Which is essentially what it is.
Jeff (07:03):
Yeah.
So, so that was Vernon Mountcastle's big proposal in the 1970s- that the, you know, the neocortex got big rapidly. And you look at the structure- the structure is very similar everywhere. So he says this is just, it's the same algorithm being operated 100,000 times. You know, we have a hundred thousand or 200,000 copies of the same circuit.
Matt (07:24):
But each one, they're
working together to provide a
rich model of reality.
Jeff (07:28):
Well, this is the Thousand
Brains Theory, which of course
is in the December paper.
The Thousand Brains Theory of Intelligence says you've got all these parallel models. So basically there's tweaks, but they're basically doing the same thing. But they model different things. They're not all the same. All of these models are different. It's not redundant.
(07:50):
They're parallel. And so there are thousands of models that are modeling visual space, and there's thousands of models modeling auditory space, and there's thousands of models that are being used in language, and so on. But each column, um, is essentially modeling its inputs, doing sensory motor modeling of its inputs. Each column doesn't know what it's doing. But depending on what its inputs are and what motor behaviors it
(08:13):
could control, that's what it builds a model of. And some of them are going to be building models with input from your eyes and your fingers. Some of them are going to be building models of models- models of models- and they're getting input from other parts of the neocortex. But they're all doing basically the same thing.
And then they vote.
Again, this is all in these papers we've written. Um, they vote, um, uh, because often they're, they're
(08:36):
observing the same object, or they're thinking about the same thing. And so even though they're slightly different- um, like each part of my somatosensory cortex represents a different part of my fingers, my skin. And when I touch this cup, they're all measuring a different part of the cup. Yeah. But they're all sensing the cup. And so they can vote and say, yeah, I only have this limited
(08:57):
input, I have this limited input, I have this limited input, but let's vote together. And together we can agree- we all know about cups, but, uh, we can only agree right now that this has to be this cup. It can't be anything else.
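To make the voting idea concrete, here is a minimal sketch in Python. It is only an illustration of the intersection logic Jeff describes, not Numenta's actual HTM implementation; the objects and feature names are made up.

```python
# A toy version of column voting: each "column" senses one local
# feature and proposes every object consistent with that feature;
# voting is just intersecting the proposals.

# Hypothetical object library: object name -> set of local features.
OBJECTS = {
    "coffee cup": {"rim", "handle", "curved wall", "flat bottom"},
    "bowl":       {"rim", "curved wall", "flat bottom"},
    "plate":      {"rim", "flat bottom"},
}

def column_candidates(sensed_feature):
    """One column's guess: every object that contains its feature."""
    return {name for name, feats in OBJECTS.items() if sensed_feature in feats}

def vote(sensed_features):
    """Intersect the candidate sets contributed by all columns."""
    candidates = None
    for feature in sensed_features:
        guesses = column_candidates(feature)
        candidates = guesses if candidates is None else candidates & guesses
    return candidates

# Three fingers touch three different parts of the same object.
print(vote(["rim", "curved wall", "handle"]))   # {'coffee cup'}
```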
Matt (09:11):
So how can we take what we
know about how that works and
help it to define what intelligence is?
Jeff (09:18):
Yeah.
Yeah.
So I think this is one of the- um, I think today we're in this sort of weird state about intelligence, especially in machine intelligence, which is dominated by the idea that intelligence is a capability. Can I do something? Can I play Go better than the best human player? Can I drive a car better than, you know, a human? Or can I, you
(09:43):
know, whatever- pick your favorite task- analyze a medical scan and detect cancer? And so this is how we measure today, this is how the world of AI measures its progress.
Matt (09:54):
The value of the
application.
Jeff (09:56):
Well, just, you know,
against human level behavior,
right?
Take a task that a human does. Say, can I get a machine to do that task? Um, and this goes back to Turing with the Turing test, right? Uh, he proposed the imitation game, which we now call the Turing test, right? But- which is all about matching human level performance. But that's a really poor way of measuring intelligence.
(10:19):
Um, there's lots of intelligent things out there today- animals- who couldn't possibly pass the Turing test. They can't speak to us, you know.
Matt (10:26):
But are vastly more
intelligent than
Jeff (10:28):
Yeah, a dog is pretty
intelligent.
Right?
And some dogs more than others.
And, so, you know, the Turing test doesn't capture that at all. Or a human who speaks, you know, Portuguese is asked to, you know, do a Turing test with someone who speaks English. It's meaningless. It's like, you know, they can't do anything.
Matt (10:47):
It's like we're framing
intelligence in the wrong way.
Jeff (10:48):
Yeah, we're framing it in the wrong way. So we're, we're using this sort of ends-based method to do it. Like, well, if you can do x... The problem with doing it that way is that you can always engineer a solution to do x better than, than anything.
Right?
Matt (11:01):
That's what we have today.
Jeff (11:01):
So we end up with- we have basically these AI systems that are really good at one thing, but they can't do anything else, nothing else. And then you could argue, well, I could take this convolutional neural network and I can train it to play Go. And then a moment later I can train it to play, you know, Pacman. Okay, fine. And then a moment later I can train it to drive a car.
Matt (11:22):
It doesn't understand
anything it's doing.
Jeff (11:25):
Well, yeah.
So there's a real question- let's go back. There's a couple of things. First of all, we have to ask what kind of model of the world does it have.
Matt (11:30):
Or does it have a model?
Jeff (11:32):
Yeah.
Well, anything that acts has a model. It can be a very poor model. I mean, a model, basically-
Matt (11:37):
It could be a hardcoded
model.
Jeff (11:38):
It could be a hardcoded model.
The point is, you know, if you have some input and you're gonna act on it, then you need a model to decide how to act. It could be a really stupid model.
Matt (11:46):
Could be a lookup table.
Jeff (11:47):
It could be, but it'd say, here's instructions on how to do this, right? Or it could be learned. But the point is, in our brains, we have this fairly general purpose way of learning models of the world. My brain can learn mathematics, it can learn Portuguese if I so desire, I can learn to play a guitar. You've been playing guitar recently.
(12:07):
Um, we can learn all kinds of things. I can learn art, I can learn engineering.
Matt (12:12):
Anything
Jeff (12:12):
Well, anything we know of- but not anything. But pretty much all of the things we think of as human endeavors across all time are done using the same basic algorithm. And not only that, we do them all simultaneously. It's not like I'm dedicated to one thing. So you've got to picture this: in your head right now, you've got this model of the world. I know you're interested in, you know, astronomy and
(12:34):
space and physics and time and space. And then you also have a model of your family and your car and how to fix the things in your house and all these things. So we have this very, very rich model that we've learned, and the single organ, the neocortex, can learn this very, very rich model of the world. And then if we want to ask how
(12:55):
intelligent something is, we really need to ask: what is its model like, how rich is it, and by what methods does it work? Because some models of the world are very, very specific. Let's say it this way: the things learned can be very, very specific. If I set up a Go-playing- uh, let's say chess. If I set up a chess-playing computer- I want to build the
(13:15):
world's best chess-playing computer- um, the model may be built around chess. It may be like, oh, the only moves this thing can make are chessboard movements, and the only things you can know are chess board positions. And so we structure the information in this computer in chess board coordinates. And so that's a great framework for learning chess.
Matt (13:37):
It's the only frame of
reference the system knows.
Jeff (13:39):
That's right.
So we talked about, you know, grid cells- like a frame, a reference frame, right? But it's, it's a very general purpose reference frame. I can apply grid cells to all these different things. It's more like XYZ Cartesian coordinates. It's very general purpose. You can apply x, y, z coordinates to lots of different things.
Matt (13:54):
So you could say grid cells are like a general purpose tool our brain has decided to use to map things in their own reference frames?
Jeff (14:02):
Well, yeah, it's a general purpose- so, um, we talk a lot about reference frames around here. So, um, a reference frame is just a way of locating something and assigning knowledge to someplace. So, um,
Matt (14:14):
in relation to other
things.
Jeff (14:15):
Yeah, you need to know where things are in relationship to other things. And that's what you need to know. Basically, that's how you assemble information into something useful. Like, you know, what makes a computer a computer is that these are components that are in relation to each other, and how they move relative to each other, and so on. So you need a reference frame for storing structure about anything. And there's different types of reference frames. You know, a good general purpose reference frame is the one I
(14:37):
just mentioned, the x, y, z one.
Matt (14:40):
Cartesian coordinates.
Jeff (14:41):
Cartesian coordinates. Um, that's, that's pretty general purpose. You can apply it to any three-dimensional or two-dimensional, one-dimensional structure, and you can add more dimensions if you want. Grid cells are another general purpose one. They're similar in that regard, but they work differently than Cartesian coordinates, in a clever way. And, uh, just a little aside here: the reason they're really clever is because there is no origin to them; the
(15:04):
locations are tied together by movement. And so it's, it's like a Cartesian coordinate frame in some sense, in that it's general purpose, but, um, movement decides how you get between things. So, um, but you can think of it as a general purpose reference frame.
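As a small aside in code: the "no origin, tied together by movement" property can be sketched as path integration in a single, hypothetical grid-cell-like module, where a location is just a phase inside a repeating tile and only displacements are ever represented. This is a toy illustration, not a model of real grid cells.

```python
import numpy as np

class GridModule:
    """A toy grid-cell-like module: location is a phase in a
    repeating 2-D tile, updated only by integrating movements."""

    def __init__(self, scale=1.0):
        self.scale = scale           # size of the repeating tile
        self.phase = np.zeros(2)     # starting phase is arbitrary: no origin

    def move(self, dx, dy):
        """Path integration: shift the phase by a movement vector."""
        self.phase = (self.phase + np.array([dx, dy])) % self.scale
        return self.phase

m = GridModule(scale=1.0)
m.move(0.3, 0.0)
print(m.move(0.0, 0.4))   # [0.3 0.4] -- only the path taken matters
```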
So I can, I can use a general purpose reference frame to learn
(15:24):
chess. I can say, oh well, there's a board here, and maybe I'll use my sort of x, y Cartesian coordinates for that. And that won't be as good as a chess coordinate frame, because I could build a specific one, a reference frame just for chess. And it'd probably be better.
Matt (15:38):
You could encode the
movements of the pieces.
Jeff (15:40):
Yeah.
So basically the only movements that exist here are the chessboard movements, and so that system could think in chess very, very well. But if I asked it to think about coffee cups, to learn the structure of the coffee cup, it would totally fail.
Matt (15:53):
So you're saying today's AI, or weak AI systems, are sort of hard coding their models to specific frames of reference?
Jeff (16:00):
Well, I wouldn't say that. I would say some might be. Um, it's definitely kind of a case-by-case basis. Um, if someone built- you know, I don't know how the team that did AlphaGo did their thing, but if they encoded knowledge about chess boards into it specifically-
Matt (16:16):
Go boards
Jeff (16:16):
Oh, excuse me, Go boards- then, then it might be specific.
Matt (16:20):
Well they'd have to,
Jeff (16:21):
Well, no, actually, it turns out with most of what's going on in convolutional neural networks these days, there's no, there's no actual...
Matt (16:28):
Oh, it's just mimicking
or...
Jeff (16:30):
Well, there's no encod-, there's no assigned reference frames; there's not even knowledge. It essentially has to be learned, and no one really understands how it's learned. And so you end up with this sort of weird mapping between inputs and outputs that no one really understands very well. Um, so the fact that there is a mapping between inputs and outputs tells me there has to be some sort of model.
(16:51):
Whether that model has a reference frame is hard to tell. If all I'm doing is sort of saying, here's an image and here's the label, here's an image and here's a label- I actually don't need a reference frame. It could just be this big complicated lookup table in some sense. Um, but if the system is going to, um, you know, make predictions and move and, you know, sort of
(17:14):
say, if I do this, what's going to happen here? And so on- then you think a reference frame is required. But yeah, we know in brains- brains have reference frames, and if you don't have a generic or general purpose reference frame, you cannot be smart. You cannot be generally smart.
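The contrast Jeff draws can be sketched in a few lines. The first "model" below is a pure input-to-label lookup table; the second stores features at locations in an object's reference frame, so it can answer "what will I sense if I move?" All names and locations here are invented for illustration.

```python
# 1. Lookup table: input -> label. No locations, no movement, so no
#    way to predict the consequence of an action.
lookup_table = {"image_0042": "cup", "image_0043": "bowl"}

# 2. Reference-frame model: features stored at locations on the
#    object itself (hypothetical coordinates).
cup_model = {
    (0, 0): "flat bottom",
    (0, 3): "rim",
    (1, 1): "handle",
}

def predict(model, location, movement):
    """Predict the feature sensed after moving from `location`."""
    new_loc = (location[0] + movement[0], location[1] + movement[1])
    return new_loc, model.get(new_loc, "unknown")

# "If I move my finger up from the bottom, what will I feel?"
print(predict(cup_model, (0, 0), (0, 3)))   # ((0, 3), 'rim')
```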
Matt (17:29):
Right. And you're talking about- so there's different types of reference frames. Like, I like to think there's the easy separation of: I have an egocentric reference frame- I am in the center of it, and my environment is the frame where my organism exists. And then there's the reference frame of every object- every object that I could imagine has its own reference frame, and then I can somehow combine them.
(17:50):
So I can imagine an object in my hand when it's not there.
Jeff (17:53):
Let me be clear there. When you say there's different types of reference frames, there's two ways of thinking about that. One is: what is the physical form of the reference frame? Like chess boards versus Cartesian coordinates, versus grid cells, versus latitude and longitude. Those are different types of reference frames.
Matt (18:07):
True.
I'm not talking about that.
Jeff (18:09):
A reference frame can then
be applied to different things.
So reference frames are always, are always in some sense anchored to something.
Matt (18:15):
Yes.
Jeff (18:16):
Right.
That's what you were referring to.
Matt (18:17):
That's what I was
referring to, in space.
Jeff (18:21):
Even if it's not-
Matt (18:22):
- or to each other.
Jeff (18:24):
So, like, you can anchor a reference frame to my body, and that becomes an egocentric reference frame. So you can think of it as coordinates relative to my body. And that's what we think is going on in the Where regions in the brain. Then you can have reference frames that are anchored to physical objects, meaning if the object moves, the reference frame moves with it. That's what we think is going on in the What regions in the cortex. Um, then, uh, yeah, those are, those are the
(18:47):
two big ones.
Matt (18:49):
And those working together
allow us to model all the
objects, sort of in our space.
Jeff (18:56):
Well, I can model objects using, like, What reference frames- like, you know, a reference frame around that object.
Matt (19:01):
Yeah but for me to know
there's a chair there, there's a
table there...
Jeff (19:04):
Yeah, for me to know where it is relative to you and how to get to it. So, you know, there's- excuse me- there's these well-known- when people first discovered the What and Where regions in the neocortex, these, these parallel sensory paths for vision, hearing and touch- they first discovered them in vision- and, um, it was very odd, because if a human
(19:28):
has a damaged, um, What pathway in vision, they can look out into the world and they can't recognize things. They can look at, you know, the computer and this cup, and they will say, I don't know what that is. I can't tell you. But they know something's there, and they can reach for it.
Matt (19:44):
They can actually grab it.
Jeff (19:45):
They can grab it. They're surprised when they grab it. It's kind of like, ah- and then, by the way, then they know what it is.
Matt (19:50):
Yeah, because they touched
it.
Jeff (19:51):
Because they touched it. So they're like, oh, that was a coffee cup, of course. Um, but the point is, with only the egocentric reference frame, you can know how to reach things and you can know where things are relative to you, but you cannot recognize what this thing is, because you need an object-centric reference frame to do that. You flip it around: the people who have a damaged Where pathway
(20:14):
but a functioning What pathway, they say, oh, there's a coffee cup. And you say, well, why don't you pick it up? And they say, I don't know how to do that.
Matt (20:21):
They can't relate it to
their
Jeff (20:23):
They don't know where... They see it. They know it's out there, but they can't- they don't know how to move their hand to that thing, because they've lost this, this visual egocentric reference.
Matt (20:31):
I don't know what's worse.
Jeff (20:35):
So when we, when we say we model the world, we need to be careful here. I model the coffee cup with object-centric reference frames. Um, and, uh, I model the solar system that way, and I model, you know, the universe, you know, cause I can't actually go out and touch those things or move around them. But, um, to be a functioning animal with moving
(21:00):
limbs, you have to have egocentric reference frames. And, uh, so, you know, you take a more primitive animal- I always like to talk about, like, a crocodile- and, you know, they see some food, and they may not have a clear image of what that food is or how exactly it's shaped... but they know how to reach it.
(21:24):
They know how to move their face towards it and bite the thing. So it's really important to be able to, like, move your body to capture some prey or something like that. Um, so there's these limited abilities to do these things in other animals. So these, these ideas aren't just in the neocortex; they're in the old parts of the brain too.
Matt (21:41):
The interesting thing you've mentioned about the crocodile- you think about a crocodile versus a mouse and how they eat. A mouse will have little bitty fingers, and it will, it will manipulate the food. Really carefully.
Jeff (21:52):
I saw this video- they're showing this just in the last year for the first time. And I didn't realize this. You think a mouse paw is not very interesting. But they, they pick it up like, you know, like the Shakespearean actor looking at the skull. You know, it's like, ah, what's this little thing? I'm going to eat it this way. And a mouse does this, and it's really surprising.
Matt (22:10):
A crocodile couldn't even
Jeff (22:14):
Yeah, yeah, a mouse looks at the piece of cheese and decides which way to hold it. It's pretty impressive. They do it really quickly. So if you don't slow down the video, you miss it.
Matt (22:23):
But it must mean they've
got a rich model.
Jeff (22:25):
They have a model. It's not nearly as rich as ours, of course. But the idea that they can see the structure of that object- what that tells you is that they see the structure of the thing they're about to eat. They recognize its orientation and what features are on it, and then they can move their hand and grab it in the right way to bring it to their mouth in the right way to eat it.
(22:45):
And so it's not like it's just some, you know, piece of food, and they don't know what it is or what its shape is- you know, they're not just stuffing it in their face. They're picking at it, like you and I might pick at a piece of fruit that's got some bad spots on it, you know.
Matt (22:58):
So while we're talking about, um, different types of animals, let's talk about the intelligence spectrum. Cause, cause there's- we're more intelligent than mice, obviously. And you would probably say mice are more intelligent than crocodiles.
Jeff (23:09):
So many things are confused here. So, first of all, we can ask, um, about the mechanisms by which the animal makes a model of the world. Are those general purpose mechanisms, or are they specific mechanisms? Then we can ask, what is the capacity of that model? Because clearly a mouse's neocortex is quite small- um, you know,
(23:30):
it's the size of a small postage stamp at best- a rat's- and that's not that big. So a mouse's is probably even smaller. Um, so it's not going to have a very big model of the world, but it learns that model the same way you and I learn our models. So it's a general purpose model, but very limited capacity, right? Whereas, you know, an ape- it's got a much bigger model,
(23:54):
like, you know, a monkey. And, um, and then there's humans. We have this really big neocortex. So we all learn the same way. We all use the same mechanism. We're all general purpose learners. We learn rapidly, continuously, and using sensory motor inference. Um, and we have the same sort of rich, um, modeling structure.
(24:15):
Um, but clearly, so we can divide animals or systems along that line: what is the mechanism you're using to learn? And then there's a capacity issue. So often we think about, you know, intelligence as: people who know a lot of things, right? Um, well, that's a separate issue. That's like, okay, well, a mouse, I would say, has the same sort of
(24:38):
mechanism for learning that we do. It's a mammal, so it's going to have the exact same neocortical structures, or very similar- uh, but it's just limited. Uh, sometimes I make the analogy with computers. What is a computer? Well, there's a formal definition for a universal Turing machine. It's a mathematical thing- whether something's a universal
(24:59):
Turing machine or not- and you can build them out of tinker toys. You can build them out of silicon. And, um, all computers that we think of today as computers are universal Turing machines. Not all- you can build an ASIC chip that does something, and it's not even a universal Turing machine- but computers are. But computers come in all these different sizes, right? You can get the teeniest little computer- still has to have a
(25:21):
CPU, still has programming, and maybe it's only eight bits, uh, has a limited amount of memory.
Matt (25:25):
Powers your toaster.
Jeff (25:26):
Powers your toaster. Um, and that's still a universal Turing machine. Um, and then you have the room-sized ones.
Matt (25:33):
That just means you can
program it to do different
things.
Jeff (25:35):
It means it works on the
same principles of general
purpose computing.
If you gave it enough memory and enough time, it could compute anything.
Matt (25:42):
General purpose is the keyword there.
Jeff (25:43):
Yeah.
Right.
It's basically- it has this set of attributes which, in theory, could solve any problem if you gave it enough memory and enough time.
Matt (25:53):
But there's this huge
spectrum between toaster and
supercomputer.
Jeff (25:56):
That's right. But they are all universal Turing machines. And there are other systems that are not like that- there are other systems that solve problems. I could have a toaster- my toaster, it's funny, cause I think it's really amazing. But my older toaster wasn't so amazing. It had some mechanical things that let it figure out how much to toast. And so it did sort of the same functions, but it wasn't as
(26:17):
good, and there was no computer inside of it. Right. So, um, but it did many of the same things. So, um, uh, so anyway, you have this one spectrum, which is the mechanisms that are being used- and in the computer world, that's universal Turing machines. And then there's a second dimension, which is capacity. And then there's a third thing, which is like, okay, what has it
(26:37):
been programmed to do? So now we have three similar metrics on intelligence. We have: what are the basic mechanisms by which the model is being learned? So a mouse and a dog and a human all have the same mechanisms.
Matt (26:49):
And we're talking about
reference frames.
Jeff (26:52):
We have a general purpose reference frame. And, um, the whole mechanism that we use for building a model of the world is what we wrote in the frameworks paper. Yeah. And so we all just use the same one. Um, then there's a capacity issue. So a mouse has very limited capacity. We have much more capacity, because we just have bigger brains. And then finally, the equivalent to what the programming is, is
(27:13):
like, well, what have we learned? I could have a big human brain and not have learned very much. Right? I mean, maybe I just didn't get an education.
Matt (27:20):
I didn't think about that very much- that aspect of intelligence. You can have a system that is potentially intelligent that hasn't learned enough to be classified as intelligent.
Jeff (27:28):
Well, when we think about intelligence, we have to think along these metrics, because it's the same thing with computers. It is those three metrics for computers: the method by which it operates, the capacity of the system, and then what it has been programmed to do. And it tells us: we have the method of learning a model, which is reference frames and general purpose and sensory motor inference and all those things- and these can vary; so maybe the Go computer
(27:51):
doesn't have a general purpose reference frame. Um, then there is the capacity of the system. How much memory does it have? How big a brain, how many columns do I have, how many neurons, how many synapses, things like that. And then there is: what has it been trained to do? What has it learned? And so you and I can be very intelligent people and have very different sets of knowledge about the world.
(28:12):
So if I had never been- if I was raised in the woods by wolves, and you came along and said, Jeff, what do you think about, you know, the Milky Way? And I said, what the hell are you talking about? I don't know what the hell the Milky Way is.
Matt (28:26):
They still talk to me
about the spirit in the sky.
Jeff (28:28):
Yeah, whatever. Or I'll talk to you about these plants- what, you don't know about these plants? You know.
Matt (28:32):
Don't eat them!
Jeff (28:32):
You don't know that? So, so we can have- then we have to, we have to be really careful. And I think the problem we have in the field of AI is that people aren't talking about it in a structured way like this. They're not talking about these different components of intelligence.
Matt (28:47):
How something is
intelligent?
Jeff (28:49):
Yeah. These three components:
(28:50):
the method by which it works- is it general purpose or not? Do we have a general purpose learning algorithm for learning models of the world or not? Then the capacity of the system. And then what it's been trained on and how it learns. Right. And so right now we're focused basically on, hey, can I beat some humans at something? Which is really the wrong metric completely.
(29:12):
If you go far enough out, I think the world of AI in the future is going to be dominated by machines that don't do things humans do at all. So why, you know, why try to recreate a human for all these things, you know? Let's have them do things- there's some overlap perhaps, but it's not the goal to pass the Turing test. That's, that's the wrong goal.
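The three components Jeff keeps separating- method, capacity, and training- can be written down as a tiny taxonomy. The numbers and sets below are illustrative stand-ins, not measurements.

```python
from dataclasses import dataclass

@dataclass
class IntelligenceProfile:
    general_purpose_method: bool   # learns with generic reference frames?
    capacity: int                  # rough size: columns, neurons, parameters
    trained_on: set                # what it has actually learned

mouse = IntelligenceProfile(True, 10_000, {"nest", "food", "predators"})
human = IntelligenceProfile(True, 150_000, {"language", "math", "politics"})
go_engine = IntelligenceProfile(False, 10**8, {"Go"})

# On this view, "can it beat a human at Go?" probes only trained_on,
# and says nothing about the other two dimensions.
```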
Matt (29:32):
Um, along those lines- we were just talking about the intelligence spectrum. I would assume that we humans are probably near the top of that intelligence spectrum on this planet.
Jeff (29:40):
Well on this planet.
I would say, I would say, yeah, there's no question about it.
Matt (29:44):
So one of the things that we do, that perhaps intelligences down the spectrum don't, maybe, is have this ability to have these rich abstract models of things that don't actually physically exist in nature. How does the idea of reference frames apply in that arena?
Jeff (30:01):
Yeah.
Um, we touched on this in the frameworks paper, and I'm writing a lot more about it right now. Um, uh, first of all, the way I approach this is not by saying, like, oh, I see that these are reference frames. It's more like: no, the neocortex does these things, and the neocortex looks the same everywhere; therefore these things are based on reference frames.
(30:21):
It's like, let's figure out how- as opposed to starting by saying, you know what? I think language is based on reference frames. No, it was like, dammit, language must be built on reference frames. So let's think about it a bit. Um, so- boy, it's such a big question, Matt. There's lots of ways you can attack it. Um,
Matt (30:40):
We can use examples, because I, I'm a programmer, and so I think in different ways in different engineering situations. And that feels like- for example, if I tackle a programming problem in one language, I will pull a certain reference frame into my brain to execute commands in that language. It has similarities to other languages that I've used in the
(31:00):
past. Like all these different areas of expertise.
Jeff (31:04):
Let me get really, really, um, uh, highbrow about this. So it's kind of hard to bring it down to, like, everyday experiences, but- knowledge. I'm just trying to define what knowledge is. Knowledge is sort of information or facts arranged in a useful
(31:25):
way.
Matt (31:26):
Correct.
Agreed.
Jeff (31:26):
Okay. So I can have a whole bunch of facts, and I can say, yeah, you and I see the same facts. But if I'm knowledgeable about it, I look at those facts and say, oh, I know those things. And what does it mean to know those things? What I believe it means to know those things is that I take that information and I can assign it to a reference frame. Everything has a location in that reference frame, and what you get with a reference frame is the ability to
(31:49):
navigate through these facts. And navigating between the facts is like navigating through a room, or moving my finger relative to the coffee cup. But in this case, I'm moving my location in this space of, of facts, or space of things, um, to achieve certain results. So an example I've often used is a mathematician. Where you say, okay, some people look at mathematics equations like, oh my God, it's like Greek.
(32:09):
I can't understand a word that's going on. But the mathematician looks at it, and all these equations are friends. They're like, they're friends. It's like, I know that, and I know that, and I know that. Even numbers are friends, you know? Oh yeah, you know, 169- that's 13 squared. So what they do is- the thing to think about is, if what I'm trying to do is solve a mathematical
(32:32):
theorem or something like that, I start with a set of points- a set of equations or a set of things I know are true- in some reference frame, and I'm trying to get to a new point in some space, and I'm trying to figure out the right behaviors to get me there. And the behaviors are mathematical operations. So if I do a plus transform here, I get this; and if I do this kind of transform here, I get that;
(32:52):
and if I do a multiply, I get this. And so in this sense, the behaviors are mathematical transforms that move you through a space of mathematical constructs or concepts.
Matt (33:04):
Like a theoretical space?
Jeff (33:05):
It's literally using grid cells. Um, but the space is- there's two things. Think of it like: grid cells are like a map. Okay, an n-dimensional map. So what do you assign to the map locations? You assign mathematical constructs- you can think of them like equations. Okay, when you move, what are you moving?
(33:26):
You're not moving physically, but you are mentally moving through this space by applying certain operators. Instead of the operator being "flex this muscle," the operator is "multiply." It's like, yeah. And, um, you get the same basic thing. You're moving through this space of ideas. And so that's what you do when you're thinking about something. So this, so this is the general idea: that high-level knowledge
(33:48):
is the organization of concepts in a reference frame, and, and your ability to be really, really smart at something is knowing how to navigate, and what kind of behaviors you do to get some place.
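Jeff's picture of thinking as navigation can be sketched as a search: states are simple arithmetic values standing in for "mathematical constructs," and the "movements" are operators. Finding a derivation is finding a path, the same way you find a route through a room. The operators and the goal here are made up for illustration.

```python
from collections import deque

# "Movements" through the space of numbers: each operator is a step.
OPERATORS = {
    "add 3":  lambda x: x + 3,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def derive(start, goal, max_depth=5):
    """Breadth-first search for a sequence of operators from start to goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        value, path = frontier.popleft()
        if value == goal:
            return path
        if len(path) < max_depth:
            for name, op in OPERATORS.items():
                nxt = op(value)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(derive(2, 49))   # ['double', 'add 3', 'square']: 2 -> 4 -> 7 -> 49
```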
Another example would be a politician. You know, a politician wants to get some bill enacted. They have something they want, and they have all these obstacles to get there, right?
(34:10):
So they say, I'm at this point here, where this is the state of the world, and I want to get to this point here, where this thing, you know, gets passed. How do I get there? And they have all these potential things they could do. They could have a rally, they could hold a forum, they could do some sort of publicity, they could get some endorsements. And the politician's experienced enough that they
(34:31):
know what will happen when they do these different behaviors. Um, and so your choice of behaviors is your choice of which way you want to move through the space of political things.
Matt (34:43):
So his model of politics, as he knows it, is a reference frame that he's evolved over time due to his experience in that field. Right?
Jeff (34:53):
More likely a woman, because they're smarter about these things. But, um, yeah- what you gain, what becomes an expert, is that an expert can look at the same facts, but through experience they've learned what happens when you do things. So as a programmer, you sit there and you say, okay, I want to
(35:14):
solve some problem, and you say, well, I might solve it using this kind of sorting system, I want to use this kind of structure or this framework, I want to use this tool. And you as an expert know exactly what would happen if you do those things. It's very equivalent to being an expert in the woods. You know, I go out for a walk with you, and we say, you know what? We need to find a source of water.
(35:34):
We're getting really thirsty. If I've spent a lot of time in the woods, I'll say, yeah, I understand all this. I bet you I know there's water down there, because I've learned these kinds of patterns before. Yeah, and so I recognize the scene: yeah, I understand this. I see these trees, and I see that kind of tree, and I see these hills over there, and I hear these sounds.
(35:55):
To me, I understand this stuff. I have a reference frame.
Matt (35:58):
You may not even have to
think about it.
Jeff (35:59):
No, you have a reference frame. It's just like seeing. When you see this coffee cup, you don't say, oh, where's my reference frame? You say, that's a coffee cup. Well, a person who's spent a lot of time in the woods will look at that same set of data that you look at and understand it, and they'll say, oh yeah, well, I know how to do that. I know how to get to the water, because water is typically going to be at this kind of thing down there, right there. And you'd be like, how did you know that?
(36:21):
And, uh, same thing- if you've never been exposed to something, an everyday object like a coffee cup, and you were raised in the woods and never saw any kind of physical thing like this, you might look at it and go, ah, I kind of see what it is. But you wouldn't immediately go, that's going to be great for carrying the fuel we need to carry. You know, or whatever, right? Um, so that's what makes an expert an expert: someone who has spent enough time looking at and experimenting with various
(36:43):
pieces of facts or data or concepts or whatever they are, that they've learned the correct reference frame, or a good reference frame, to assign these facts to.
Matt (36:50):
One that's close to
reality.
Jeff (36:53):
Well, yeah- what makes it reality is that the reference frame is accurately predicting what's going to happen.
Matt (36:58):
I guess that's even more- what's more important is accurately predicting... even better than reality.
Jeff (37:04):
You know, different people could take the same set of facts and organize them in a different reference frame. And we see this a lot. Um, that doesn't happen for everyday objects as much, but we see it for things that are conceptual. And so we can, uh, you know, uh- different religious beliefs take the same facts about the world- we all observe the same things- and they come to completely different conclusions about what will happen when they do
(37:26):
certain things. And, um, and that's, that's a case where, um, you know, the reference frames are realities, right? They can't all be correct. All the different beliefs we have in the world that conflict with one another- they cannot all be correct. One could be correct. Probably most of them are wrong. But, um, the point is, um, uh, you know, that's the test of reality- or the test of the accuracy of the model- is
(37:49):
how well it predicts the future. And, um, when we don't really have good data about the future, people can get it wrong, or if they limit the amount of exposure to what they see.
Matt (38:02):
Yeah.
You know, or, or just the data being given to the intelligent system is biased in any way.
Jeff (38:08):
Yeah.
Yeah.
So you see someone who believes the earth is flat, and you give them all these facts that don't fit that model. The earth is flat predicts certain things, right. And, and they will do their damnedest to try to fit these new facts into their model, no matter how- otherwise they're going to have to throw the whole thing away and start over again,
(38:29):
which is a really disconcerting thing.
Matt (38:31):
It is.
It is uncomfortable to throw away a whole frame of reference.
Jeff (38:35):
Basically, the models you build of the world are your reality.
Matt (38:38):
It's your belief system.
Jeff (38:40):
It's your belief system. It's your reality. This is what you know. That's, that's what it boils down to. There is no other reality in terms of your head; this is it. And so if you say, well, you know what? Everything you believed, these reference frames- you have to start over again. Um, then it's kind of like, oh, you're going back to square one.
Matt (38:54):
In a way. I mean, you think about how these are represented by neurons in your brain- a reference frame you can think of as a physical structure. I mean, it is caused by physical structure, and if you have to tear it down and build it back up, it could be painful.
Jeff (39:07):
Well, you wouldn't be taking away the neurons, but you'd have to redo all of them. It wouldn't be physically changing, but you'd have to redo all the connections, the synapses.
Matt (39:15):
Sure.
But you'd have to ignore all of the predictions that you're getting that are telling you it's flat. It's flat, right?
Jeff (39:21):
It'd just be like, you
know, starting over.
Matt (39:24):
That's hard, because you have to drop your beliefs, and that's the hard part- dropping what you think you know.
Jeff (39:30):
I mean, nature has designed us to want to build models of the world. That's what we do. That's the first number of years in our life.
Matt (39:37):
You feel rewarded for it, and when you realize something works a certain way, you feel good.
Jeff (39:43):
And there's an evolutionary survival advantage to that. So, like, we can learn new models of the world. Each new child that's born gets to learn anew. And, um, there's an advantage to that. That means we can adapt very rapidly, and we adapt during our lifetime. But once you've built your world model, um, and
(40:03):
you've been living your life around it, uh, then someone comes along and says the entire thing is wrong, or some portion of it's wrong- that's, that's very upsetting. All of a sudden you feel vulnerable. It's like, I have a model that makes me successful in the world.
Matt (40:19):
And in a way, it's part of
you.
Jeff (40:22):
Yeah. So now, now it's like, it's like feeling lost. You know, if you're, if you're out in the woods and all of a sudden you can't anchor yourself in the woods- every tree looks the same, and now your reference frame is lost. You literally- if you cannot locate yourself in the reference frame, you feel lost. That's not a good feeling. That's an emotional thing.
(40:43):
And that's how people feel. People who don't like math- they look at these math equations, they feel lost. If I was trying to understand someone speaking Russian, I would be lost. Um, I'd feel like, oh my god, I can't do anything. Um, so, um, that's an uncomfortable feeling.
Matt (40:56):
And so you're always grasping for anchors, like: this looks somewhat similar.
Jeff (41:00):
Yeah, that's right. So we have these reference frames. You have all these models of the world. We want to stick new data into our models of the world, right? And by and by, if you have to abandon one of your basic models of how reality is, then you feel lost. And that's an uncomfortable feeling, and you don't know what to do and you don't know how to act. And it's just like being lost in the woods.
Matt (41:20):
So that's great. Let's go over those three things you were mentioning before- the ones we're saying define intelligence. So: reference frames.
Jeff (41:27):
Well I'd say the method by
which the system learns.
Matt (41:30):
The method.
Jeff (41:30):
Yeah. Which is- to be intelligent, you have to have a general purpose reference frame, right? So grid cells are a general purpose reference frame, right? There are other reference frames that are not so general purpose, like latitude and longitude, or more specific ones like chessboards and things like that. So you have to have a general purpose reference frame, and you have to, of course, build a model of the world using that method. So there's a lot of things involved in that- sensory motor
(41:52):
movement and continuous learning. And so on.
Matt (41:54):
But by building a model using the general purpose reference frame, you were saying we're going to have a general purpose model.
Jeff (41:59):
It's like saying that that's square one. To be truly intelligent, you have to at least have the mechanisms. It's like the universal Turing machine- you have to have the basic substrate, which can learn different things. Yes. Do movement. Yes. Okay. And the next thing is the capacity of the system.
(42:20):
That's a simple one. Um, you know, you can learn more and build bigger models.
Matt (42:24):
That's where a lot of the work is happening today in AI- still in capacity, or...
Jeff (42:31):
But if we're talking about intelligent systems, um, then, uh, then of course you say, why is a dog smarter than a mouse, and why am I smarter than a dog? It's mostly to do with the capacity of the system. Not completely, but mostly. Um, and then finally: what has the system actually been trained on? You know, again, you could take the world's largest
(42:52):
supercomputer and have it play tic-tac-toe.
Matt (42:55):
Won't be very smart.
Jeff (42:56):
Well, that's fine, but you know, I can't take, you know, the world's largest weather simulation and run it on the computer in my toaster. I can't do it in any kind of real time that'd be useful. So, um, those are sort of the three metrics, and we just confuse them all the time. Um, and I think even worse than that, we confuse them
(43:16):
because most AI only looks at what humans do, right. Only what humans do. And even then, passing the Turing test brings us into a whole other domain, which is like, well, now you're trying to emulate the emotional capabilities of a human. And my definition of intelligence that we just talked about does not include emotions. It doesn't mean humanlike. It doesn't mean, you know- it's more Spock-like, then, you
(43:38):
know. You know, I get that.
Matt (43:40):
Right. I never- so, he's a bit- I think he was, like, smart without emotion. Something like that. Yeah. Okay. So think of Spock. And, um, uh, you know, maybe you want to build a machine that mimics a human and it shows emotions, but that to me is not intelligence. That's a separate problem. It's a separate issue. But something that could be additional-
Jeff (44:02):
That's a flavor, you know, you can tack on. If you want it to be a caretaker, a robot, maybe you want that thing to have some emotional states.
Matt (44:13):
or be empathetic,
Jeff (44:15):
But I don't think that's intelligence. That's part of the human experience. But if you think about the neocortex itself- the emotional centers are not in the neocortex.
Matt (44:24):
They contribute to all of
our models of things in
different ways.
Jeff (44:27):
Yeah, they contribute to
how we learn.
Yeah.
It's a longer topic, but um.
Matt (44:32):
We'll do another talk
about that.
Jeff (44:34):
But basically, think of it like we have this super, sort of Spock-like thing on our heads, the neocortex. It sits on top of a very emotional, older system. They interact. Um, but you can think about the part of our brain that's just the neocortex: even though it's influenced by what we learned, it's influenced by emotions and so on, it is actually an emotionless model.
Matt (44:54):
Sure.
It's driven by its input.
Jeff (44:56):
Yeah. It's basically: what is the structure of the world, and can I discover that structure of the world, whether it's physical objects or, or mathematics or, you know, politics, whatever. I'm going to try to learn the structure of the world in an actionable form, so that I can now see new things, place them in this model, and know how to act related to them, and so on. Um, but those, those are the three basic things, and
(45:17):
we just don't have that conversation enough in AI. AI is dominated by: how well did I do on this particular task? And if I, if I was trying to do an image classifier AI system, all that matters is you got the best performance. That's it. It doesn't matter how you did it- I mean, it's important, but you get credit for being the best.
(45:38):
And so we're not sort of moving in the direction of saying, well, how general purpose is it? And, uh, what else can it learn? And how did it learn it? And, um, all those things. And so we're not asking these questions. No one's sitting around saying- people do point out that the chess-playing computer doesn't seem to know anything
(45:59):
else. But they don't ask why. Why doesn't it know anything else? Cause it's got a chess reference frame, and that's all it's ever been trained on, and it probably can't learn anything else. Anyway- it's not a simple binary answer; those three things all have their separate dimensions. I think the computer analogy might help, because that has the
(46:21):
same dimensions.
Matt (46:21):
Well I think that was a
good discussion.
Jeff (46:23):
That was fun, Matt.
Matt (46:26):
Okay.
Thanks, Jeff, for sitting down with me for the podcast.
Jeff (46:26):
It's great.
I hope, I hope people find it interesting.
I don't know.
Matt (46:30):
I'm sure they will.
Jeff (46:32):
All right, thanks.
Matt (46:40):
Thanks again for listening
to Numenta On Intelligence.
This was a conversation on defining intelligence with Jeff Hawkins. I'm community manager, Matt Taylor. Tune in next time.