
July 18, 2018 33 mins
In this in-depth interview with Numenta Co-founder Jeff Hawkins, host Matt Taylor dives deeply into concepts of location and object representation in the neocortex. In Part 1 of this 2-part interview, they discuss location, unique spaces, object compositionality & behavior, movement and learning, sequence memory, and the definition of “space” itself.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt (00:09):
Welcome to Numenta On Intelligence, a monthly podcast about how intelligence works in the brain and how to implement it in non-biological systems. I'm Matt Taylor. Today I'll be talking with Jeff Hawkins about the latest research Numenta has been doing with regards to grid cells and hierarchical temporal memory, or HTM. While this is the first episode of the Numenta On Intelligence

(00:30):
podcast, Jeff and I will be going very deeply into the theory very quickly. Therefore, I prepared in the show notes a bunch of educational resources so you can learn at your own pace about HTM and grid cells. I also broke this episode up into two parts, so it's easier to digest. If you like this conversation and you don't know how HTM sequence memory works, I suggest you watch through the HTM School

(00:52):
videos on YouTube.
One of the main ways we share the research Jeff talks about in this episode is by giving talks at various academic events and workshops, and we've had a busy summer doing just that. Our research team spent some time at CNS 2018, which stands for Computational Neuroscience.

(01:15):
The event was held at the Allen Institute and the University of Washington. Our VP of Research, Subutai Ahmad, gave workshop presentations and a couple members of our team joined him to present two posters as well. Like we do with all events, we make the presentations and posters available afterwards. So visit numenta.com if you'd like to check those out. So without further ado, I'm Matt Taylor and here with Jeff

(01:38):
Hawkins.
We're at the office.
Jeff is the founder of Numenta and he's my boss.

Jeff (01:44):
We work together.
Hi Matt.
How you doing?

Matt (01:47):
Uh, so I wanted to talk to you specifically about some of the newer stuff that's going on in, in HTM research here at Numenta. Last time we talked we did the HTM Chat video; that was right around when we released the layers and columns paper. Uh, so it would be great if I could get you to elaborate a bit on that whole location idea in the brain and what have we

(02:08):
learned over the past year or so about location in the brain.

Jeff (02:11):
Okay.
Well, that's very exciting. So let's just go back a little bit to the time you talked about, a year ago, so we, uh, well it was last October, released what we called the Columns Paper, right? And that introduced a really big idea, and that is that everywhere in the neocortex there is a location signal. And so when we think about how the brain processes information, if you think about just how the sensory input comes in, that's

(02:32):
only half of it, right? And the paper we published in October talked about some of the consequences of that. And we also said we didn't know where this location signal was coming from or how it was generated. We knew where it was, but we didn't know how it was generated, because it's kind of an odd idea. But we knew that there was another part of the brain called

(02:53):
the entorhinal cortex, which has a location signal. And we said, hey, maybe it's using the same mechanisms. So we put that in that paper. We said we think that it may be the same mechanism used by the neocortex, things called grid cells, which many listeners may
have heard about.

Matt (03:08):
Right.

Jeff (03:09):
So since that time we've been exploring that idea. It's, um, it's certainly true. First of all, there's been other experimental evidence suggesting there are grid cells in the cortex. So that's something we would have predicted.

Matt (03:22):
There's a lot of research in the field right now.

Jeff:
Yeah, well grid cells are a very hot topic. It's been, it's one of the few places in the brain where people have made a lot of progress. We started thinking about, okay, how do they play in the cortex and what kinds of things can they do? And we've discovered several major new ideas related to grid cells, which no one knew about. Um, and uh, we think they apply probably in the old parts of the

(03:45):
brain, but they certainly apply in the neocortex. So it's, it's actually quite exciting. We're showing all of a sudden that a whole bunch of things are coming together that, uh, we kinda knew had to happen somehow, and now when we study grid cells in the cortex, it all sort of makes sense. Um, so we can go through some of exactly what some of those

(04:06):
things are if you want.

Matt:
Well, the interesting thing to me is all of the grid cell literature from the Nobel Prize days and everything is all about egocentric location of an organism within an environment.

Jeff (04:17):
Yeah.

Matt (04:18):
So what's the difference in how we're trying to apply
that in the neocortex?

Jeff (04:21):
What we think is, the big idea in the paper I'm working on right now is the following:

(04:24):
The old part of the brain where grid cells are found, the entorhinal cortex, they basically, as you said, say where an animal lives, like if you're in an environment or in a room or something. As you move around, these grid cells modify their activity to reflect where you are, and this is how an animal, and you, you too, you have an innate sense of where you are. Even in the dark,

(04:47):
if you move, you know where you are and these cells are updating as you do that. And the whole point of the old part of the brain is sort of to map out your world and know where you are and how to get back to someplace. It's like, oh, how do I get back to the kitchen now that I've been wandering down this hallway?
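
To make the path-integration idea concrete, here is a minimal Python sketch, purely illustrative and not Numenta's model: a few grid-cell-like modules each track a 2D phase that wraps at its own scale, movement updates every module, and retracing a path brings every phase back to its start, so the combined phases act as a location code even in the dark. The class name and module scales are made up for illustration.

```python
import numpy as np

# Minimal sketch of grid-cell-style path integration (illustrative only, not
# Numenta's model). Each module keeps a 2D phase that wraps at its own scale;
# movement updates every module, and the set of phases acts as a location code.

class GridCellModule:
    def __init__(self, scale, rng):
        self.scale = scale            # spatial period of this module
        self.phase = rng.random(2)    # random starting phase in [0, 1)^2

    def move(self, displacement):
        # Path integration: shift the phase by the movement, modulo the period.
        self.phase = (self.phase + np.asarray(displacement) / self.scale) % 1.0

rng = np.random.default_rng(0)
modules = [GridCellModule(s, rng) for s in (0.3, 0.5, 0.8, 1.3)]
start = [m.phase.copy() for m in modules]

# Wander around the room, then retrace the path back to the starting point.
for dx, dy in [(0.1, 0.0), (0.0, 0.2), (-0.1, -0.2)]:
    for m in modules:
        m.move((dx, dy))

# Back where we started: every module's phase matches its starting phase.
print(all(np.allclose(s, m.phase) for s, m in zip(start, modules)))  # True
```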

Matt (05:01):
A useful skill.

Jeff (05:01):
Yes, it is.
Um, that same mechanism whichwas evolved a long time ago
because the animals needed toknow where they were, is now
been applied to two differenttasks in the neocortex.
One of them is to map outobjects in the world.

So imagine this (05:17):
you're an animal.
You are and I am- humans orrats, whatever.
And as we move around, we have alocation in the environment and
we sense things in thatlocation.
And we build up a map of theworld by moving around, knowing
where we are and what we sense.

Matt (05:31):
You mean our body has a location.

Jeff (05:32):
Our body is moving.
And, and as you move, the grid cells say, oh, I moved over here and I moved over here. So it has this interesting way that it's represented, but we'll leave that aside for the moment.

Matt (05:40):
Sure, if you want more information there's links in the
show notes to details.

Jeff (05:43):
Okay, that's great. And so the same basic approach can be applied to figuring out what some object is, like a computer or a Coffee Cup or a telephone. Imagine now that I have my finger and I'm moving around, and as I move it's mapping out my location and it's discovering the structure of an object the same way it discovers the

(06:04):
structure of a room. Now we had this in the paper in October, but we didn't know that it was grid cells. Uh, so let me tell you a couple of things that have really come out of this since. Uh, so if you're following me so far-

Matt (06:14):
Well, I am, sort of. Well, let me just point out, you're talking about the finger as being a sensor, as if it were sort of like the organism in a room. The entorhinal cortex, flipping it a little.

Jeff (06:25):
Yeah.
That research is mostly studying rats, rats running around some maze or environment. And as the rat moves, it knows where it is. We're saying right now that every part of your body, let's just talk about your somatic body, your fingers and your hands and your skin and so on. Every part of your body, actually, when I touch an object, those parts that are touching objects are like little rats.

(06:45):
They're all individually knowing where they are, scurrying around this Coffee Cup. I'm holding the Numenta Coffee Cup in my hand right now.

Matt (06:52):
Interesting visual.
Jeff:
And my fingers are like little rats scurrying around all simultaneously, and it's like I could have five rats in the room at once.

Matt:
Sure. Yeah, yeah.

Jeff:
Um, and they all work together figuring out, hey, what is this thing we've got here? And your eyes do the same thing, which is a little hard to imagine, but the parts of your retina are looking at the Coffee Cup as well and saying, hey, I know I'm seeing this feature at this location, I'm seeing this feature at this location, and so on.

(07:12):
So now we have a very concrete mechanism, the grid cell mechanism, for how that signal is generated, how the location signal is generated. But now let's talk about where we went from there.

Matt:
Okay.

Jeff (07:24):
One big insight we had, um, and this came out of Marcus
who was one of our team members here.

Matt (07:30):
Marcus Lewis.

Jeff (07:31):
Um, he showed that you could do, you could take two locations. This is a little tricky now. Imagine, and this is the thing we did in the paper I'm writing now, I have the Numenta Coffee Cup in front of me right now. And you can imagine this generic white Coffee Cup with a Numenta logo on it.

Matt (07:49):
Yes.

Jeff (07:49):
Now the Coffee Cup is something I know. And then the Numenta logo is something I know. How is it that I learned that the Numenta logo is on this cup? What does that mean? How do I learn a new cup that has a logo and a cup? And what we showed is that each object, the Coffee Cup and the Numenta logo and every other object in the world, has its own sort of, it's like its own environment and its own

(08:10):
sort of space. This is one thing that we learned from grid cells, that, uh, every environment has its own space, its own physical mapping.

Matt (08:19):
A dedicated-

Jeff (08:21):
A dedicated set of locations. That is to say, it's not like, I mean, location x and y, one and two. It's like, I'm in this location in this room and that's unique to that room versus another room and every other location.

Matt (08:33):
Can we call it, like, a reference frame, a coordinate frame, something?

Jeff (08:37):
It's sort of like that.
Although it's more like, it's just unique.

Matt (08:42):
I don't like using the term coordinates, but it's some type of unique frame of reference.

Jeff (08:47):
Yeah.
It is, it is.
It's just every.
Yes, but it's different than we learned in high school about-

Matt (08:53):
You wouldn't represent two objects in one of these things,
right?

Jeff (08:56):
Each object has its own space.
There's like, every point in space is unique to a particular object.
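
One way to picture "every object has its own space" in code: treat starting a new object as picking a random anchor in an enormous location space, so the same offset on the cup and on a pen land on different points overall. This is only a toy illustration of the idea, not the neural mechanism, and the numbers and function names here are arbitrary.

```python
import numpy as np

# Toy illustration (not the neural mechanism): each object gets its own
# randomly chosen anchor in a huge location space, so "location (x, y) on the
# coffee cup" and "location (x, y) on the pen" are different points overall.

rng = np.random.default_rng(7)
LOCATION_SPACE = 2**32          # enormous space of possible locations

def new_object_space():
    # Starting a new object = picking a random anchor in the big space.
    return rng.integers(LOCATION_SPACE, size=2)

def location_on(anchor, x, y):
    # A location is only meaningful relative to its object's own anchor.
    return tuple(int(v) for v in (anchor + np.array([x, y])) % LOCATION_SPACE)

cup = new_object_space()
pen = new_object_space()

print(location_on(cup, 3, 4))                              # unique to the cup
print(location_on(pen, 3, 4))                              # a different point
print(location_on(cup, 3, 4) == location_on(pen, 3, 4))    # almost surely False
```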

Matt (09:00):
This is a point that I really want to hammer in on, because it introduces something that was counterintuitive to me initially when I first understood it. Because I thought one of the things about this theory is that every cortical column is attempting to represent objects on its own. And it has an idea of what object, uh, the sense that it's getting, what that might be, depending on its

(09:23):
location and sense. And, and, and the idea that each one of these cortical columns has a completely unique frame of reference for each object was counterintuitive to me, because I thought, how do they compare objects? Right. Does that make sense?

Jeff (09:36):
Uh, well, up until the point you said, how do they
compare.

Matt (09:38):
Yeah.
Well, then correct me, because that's always been my point of confusion. Like, I would think that you'd want these cortical columns to be able to compare with their neighbors. What are you sensing? What am I sensing? Why don't they use the same reference frame?

Jeff (09:50):
They, uh, well, they do compare to the neighbors, but not at that level. It's like each column is saying, uh, I think I'm looking at something here and these are the things I could be looking at. I might be touching a coffee cup, I might be touching a telephone, I might be touching a pen, and they communicate at that level.

Matt (10:09):
Oh, not their spaces.

Jeff (10:11):
Not the spaces. So I'm looking at a space, you know, the location, that might be a location on the cup, but it might be a location on a pen, and so on, but the representation of the location is going to be unique for that particular column versus the other column. It's a little bit more complicated than this, but the basic idea is they really can't share locations. Each one learned its own model, but they can share what they

(10:32):
think the model is, like what the output is.

Matt (10:34):
So they're still sharing. They're not sharing at the very low level.

Jeff (10:37):
Yes.
We had this, uh, if the listeners read our paper last year, we had this idea in the paper that the columns are sharing in layer three; we still have that. What's new now, which we didn't understand back then, is that the locations themselves are unique as well. We didn't have that in that paper. That's something we learned from grid cells.

(10:58):
So alright, we're off on a tangent, or maybe not.

Matt (11:02):
Yeah, I threw you way off.

Jeff (11:02):
So we start with this idea that every column knows the location and it's sensing something. Now we've, now we've got this idea that every column, the actual locations are unique per object. It's a detail that's not super important at this level of discussion right now, but it makes a big difference in mechanisms. What we've now learned is, how is it I can represent an object as not just a set of features but as a set of other objects?

Matt (11:26):
Compositionality.

Jeff (11:27):
Compositionality.
So I don't want to learn, again, looking at my Coffee Cup, I don't want to have to learn this and the Numenta logo on the cup. I don't want to have to relearn a Numenta logo. I want to be able to say, hey, there's a thing I already know called the Numenta logo and it's here in reference to, it's relative to, this thing. Yeah, so what we've come up with is a mechanism, and we think this

(11:47):
is happening everywhere in the brain. Now, it's a pretty big idea that you can take two objects which have their own spaces. That's an important detail, but the point is you can take these two objects and we can define another, a sparse representation, another cellular representation, which says, hey,

(12:08):
I represent the logo on the Coffee Cup at this location. So it's an associative link, a link between two existing objects, and it's very efficient. It's extremely efficient.
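
A hedged sketch of the associative link Jeff describes, under the assumption that an object can be summarized as its own space plus a list of links; the ObjectModel class and its field names are hypothetical, not from Numenta's papers.

```python
# Hypothetical sketch of the "associative link" idea: a composite object is
# represented by links that tie an already-known child object to a location
# in the parent's own space. Nothing about the child is re-learned.

from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    name: str
    links: list = field(default_factory=list)   # (child, location_in_parent)

    def add_link(self, child, location):
        # The link is cheap: a pointer to an existing model plus where it sits.
        self.links.append((child, location))

logo = ObjectModel("Numenta logo")
cup = ObjectModel("coffee cup")
cup.add_link(logo, location=(0.0, 0.04))   # the logo sits here on the cup

for child, loc in cup.links:
    print(f"{cup.name} contains {child.name} at {loc}")
```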

Matt (12:18):
It essentially places an object inside the reference
frame of another object, right?

Jeff (12:22):
Basically what it does is it ties them together in a way that you could say these two are at some position. It's not inside of it. It's just like a link between.

Matt (12:32):
It's just a link, like in programming. It's like a link, so you can follow it. And it's very efficient to follow that.

Jeff (12:37):
Yeah, very efficient. So now objects in the world are built of other objects. Um, and so that's how we build objects. So that's how we've learned structure.

Matt (12:45):
That makes sense, because you think about a car, you just think about a car, you don't think about all the components it's composed of, and, you CAN dive all the way down.

Jeff (12:52):
That's right. But when I learn a car, it has wheels. I don't have to relearn the wheels. And the wheels have tires. I don't have to relearn the tires. And the tires have valves. I don't have to relearn the valves. All of these things are sort of connected by linkage through this associative linking mechanism.

Matt (13:06):
And you just learned the links?

Jeff (13:07):
And you just learn the links. So, and when I'm forming, if I see something new, I say, oh, what are its components? It's got this and this and this, and it tells me where they are arranged relative to each other. Um, and now I have a new object, very efficiently represented, which has all the knowledge of previous objects. So this, we now understand the neural mechanism for this. And we realized this is the basis of many things.

(13:30):
So for example, objects have behaviors.

Matt (13:33):
Yes.

Jeff (13:33):
They move, they change. Think about a car. The car door opens, it closes, the wheels turn, the, uh, the steering wheel turns, you push a button and the radio turns on, all of these behaviors. An example I'm using in the paper I'm writing is a stapler. Simple object. But a stapler has some look to it. Oh, I know what that stapler looks like. But it has behaviors. If you push it down, a staple comes out of it.

(13:54):
If you pull it up, it opens and swings open. You can put new staples in it. It's still a stapler. So, what we now understand is the neural mechanisms for how the brain represents not only what the stapler is and what it looks like and feels like, but also how it behaves. And so if you're, if you're following here, and I just told

(14:14):
you a moment ago that an object is composed of sub-objects in a certain arrangement. So a stapler, you can think of it as, oh, it has a bottom of the stapler and the top of the stapler, and they have a certain arrangement most of the time. It's hinged, and it has some arrangement, at a slight angle. Um, so that's the arrangement between those two parts. Now, imagine the stapler is just defined as two

(14:35):
things: the bottom and the top. And so I have one sparse representation which represents the bottom half of the stapler relative to the top half of the stapler. Now, as the top half of the stapler moves, imagine I'm opening it up and it's swinging through about, you know, maybe 120 degrees, something like that. As you do that, the relationship to the bottom, the, the, the position of the top to the bottom changes, and you go through a series
(14:59):
of sparse activationsrepresenting the top relative to
the bottom in a sequence.
So as I move it, it starts offlike, oh, the top has this
position relative to the bottom,and as it starts moving up, that
position changes and positionchanges.
And so now I can represent thebehavior of the stapler, opening
up as a sequence of sparserepresentations.
And so I can just learn asequence using our sequence

(15:20):
memory.
And I've now, not only do I knowwhat the stapler looks like or
feels like, I now can, I knowit's behavior and what it's,
what it is, as it's opening, Iknow where it's gonna go and
what's going to happen to it.
But I can represent all of thatvery efficiently in a single
column in the cortex.
Now, all the columns are doingthis simultaneously.
We now have a way of using, um,uh, moving sensors like your

(15:43):
fingers, your eyeballs to learnthe structure of an object.
The structure of the object isreally composed of other
objects.
We can do this very efficientlyand now if the structure of the
object changes because thepieces, the individual pieces
move relative to each other, nowwe have a way of representing
that behavior of an object.
And the way this works is thatit can be what an engineer would

(16:05):
say.
It would be, um, hierarchicalconstruction or even a reentry
construction, meaning I could, Icould define an object, like a
Coffee Cup as having a logo onit.
The logo itself could have acoffee cup on it.
So that's like a reentrantcoding screen recursive.
That's right.
The better word, recursive.

(16:26):
And um, this is a general idea, and so we are going to represent all knowledge in the world this way, including things that are, sort of, language. It has this recursive structure.
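
Jeff's stapler behavior can be caricatured as a learned sequence of "top relative to bottom" poses. A real HTM system would learn this with temporal memory over sparse representations; the sketch below substitutes a simple first-order transition table just to make the idea concrete, so it is a stand-in rather than the actual mechanism.

```python
# Minimal stand-in for "behavior = a learned sequence of relative poses".
# HTM would use temporal memory over sparse codes; a first-order transition
# table over relative angles is enough to show the idea (illustrative only).

from collections import defaultdict

# Angle of the stapler's top relative to its bottom as it swings open.
opening = [0, 15, 30, 60, 90, 120]

transitions = defaultdict(set)
for current, nxt in zip(opening, opening[1:]):
    transitions[current].add(nxt)     # learn the sequence of relative poses

# Having learned the behavior, seeing the top at 30 degrees predicts what's next.
print(transitions[30])    # {60}
```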

Matt (16:39):
This touches on a core misconception I always had, that hierarchy in the brain was required to recognize hierarchies of objects, structures in the world. Now what you're saying here flips that a bit.

Jeff (16:53):
Yeah.
I wrote that too.
That was in the book On Intelligence.
Yeah.
I said that too.

Matt (16:57):
Probably why I thought that, Jeff.

Jeff (17:01):
Uh, you know, life goes on, we learn, right?
It's more, but composition, the reality is, when you're just talking about associations, even if they're recursive, there's no hierarchy required.

(17:12):
There's no physical, physical hierarchy required. When I look at the car, I can, I can walk through the associative links, so I can say car, car door, car-
Matt (17:24):
Forever.
It's not memory intensive.

Jeff (17:25):
But I'm not doing all of it at once. So I can't see all the parts of the car at once. So I can follow links, and then I see a door handle. I say, oh, the door handle reminds me of, that's part of this thing over here. And so, um, yeah, we got that wrong about the hierarchy. It's not completely wrong.
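
The point that no physical hierarchy is required can be illustrated by walking a toy link table one step at a time: each object only stores its local links, yet the full part-whole structure (car, car door, door handle) is recoverable by following them. Purely illustrative, with made-up objects and locations.

```python
# Sketch of walking associative links (not a Numenta API): the part-whole
# structure is recovered by following links one at a time, so no physical
# hierarchy of brain regions is needed to represent a hierarchy of objects.

links = {
    "car":      [("car door", (1.0, 0.5)), ("wheel", (0.8, 0.1))],
    "car door": [("door handle", (0.4, 0.3))],
    "wheel":    [("tire", (0.0, 0.0))],
    "tire":     [("valve", (0.1, 0.2))],
}

def walk(obj, depth=0):
    # Depth-first traversal: each step only consults one object's local links.
    print("  " * depth + obj)
    for child, _location in links.get(obj, []):
        walk(child, depth + 1)

walk("car")
```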

Matt (17:40):
What happened? I mean, it makes more sense this way, honestly.

Jeff (17:43):
Now in hindsight it's completely obvious.

Matt (17:45):
Isn't that the way science is?

Jeff (17:47):
Yeah, at least to us. I think to most people now, this would still be a very foreign concept. Um, so part of our job is to document this stuff and promote it and get papers written about it and make it clear. I know there's a lot of communication on our forums, people talking about our paper that came out last year, and they're asking questions about things we're

(18:07):
working on right now. So we really, we, um, we need to get this done. It doesn't end there, by the way. Uh, now that we know that grid cells are sort of in the cortex doing these things, it explains a whole bunch of other stuff. So it also explains in the brain how it is that you know how to move your limbs from one point to another. This is something that's associated with what are called

(18:30):
"where" pathways in the brain. So it's not just about object modeling. It provides a framework. The grid cells and locations provide a framework for explaining everything that the cortex does.

Matt (18:40):
The way I think about it, forgive me if I get this wrong, but when I understood what grid cells really were, it's when you build up a model of grid cells, as you learn, as a baby or any organism does, you're building up a model of space itself, right? You're not just learning about the spaces that you're in right now or those environments. You're just learning to map space with your sensors.

Jeff (19:01):
Yes.
Uh, well, it's just, sort of, space is inherent, and uh, the, the concept of space is in here, but the brain probably has to learn, you know, when I move, what does it mean to move in space. Like, there is an internal sort of movement command, which is just a bunch of neurons firing, and then there's what actually happens in the world.

Matt (19:20):
Sure.

Jeff (19:20):
It has to learn that connection. This is why the whole system can work for, as I mentioned a moment ago, can work for abstract objects. The whole mechanism that you learn to move your fingers over a coffee cup, and then what a coffee cup feels like and how it behaves, that same mechanism can be applied to nonphysical things,

(19:40):
and movement doesn't have to be physical movement. It can be, um, it can be movement of concepts, or movement of, I've got some equations in a book or on a whiteboard, I can be moving them around. Things like that.
Behaviors can be like transforms in mathematics, and anyway, the mechanism says, I'm going to try to figure out if there's some sort of, quote, behavior, unquote, um, some sort of input, and I'm

(20:03):
going to figure out the structure of this thing in some location-based space. And now the properties I talked about a moment ago, compositionality and hierarchical compositionality and so on, all apply to this stuff. It's a very kind of fundamental theory about how we form

(20:24):
representations of the world and how knowledge is represented.

Matt (20:26):
Yeah.
Speaking of forming representations, object learning is really fascinating to me. I do this thought experiment, I think you probably started this, we talk about reaching into dark boxes all the time, it seems like. So it fascinates me that the grid cell space is just enormously huge, so when you reach into a box and sense

(20:47):
something you've never sensed before, you're essentially, like, sort of randomly picking a space in the ether to start an object,
Jeff (20:54):
A point

Matt (20:54):
A point, some point, and you start defining an object, and as you move sensors over that object, you're defining that object in your brain over time. That's fascinating, I think, because it means that you have almost an unlimited potential to learn.

Jeff (21:10):
You do. Well, it's, well, an unlimited potential to represent.

Matt (21:13):
Right, to represent things.
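
The "reach into the dark box" process Matt describes might be sketched like this: anchor the new object at an arbitrary point in the huge location space, then record each sensed feature at the location reached by accumulating your own movements. The data layout and feature names are hypothetical; only the relative structure matters.

```python
import numpy as np

# Illustrative sketch of learning a novel object in the dark: anchor the new
# object at an arbitrary point, then record feature@location as the finger
# moves. Only the structure relative to the anchor matters.

rng = np.random.default_rng(1)

anchor = rng.integers(0, 2**32, size=2)   # a random starting point "in the ether"
location = anchor.copy()
new_object = {}                           # location -> sensed feature

for movement, feature in [((0, 0), "smooth"), ((2, 0), "edge"), ((0, 3), "hole")]:
    location = location + np.array(movement)
    new_object[tuple(int(v) for v in location)] = feature   # learn feature here

print(new_object)
```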

Jeff (21:16):
This is a basic property of sparse distributed representations. People who've been following your work and our work know what that is. But basically you can take a set of a few thousand neurons and activate them sparsely and say, well, how many things can I represent? It's nearly infinite, greater than the number of atoms in the universe type of thing. So we have this huge representational space. Um, the challenge is, in the brain, if you want to learn

(21:41):
something, then the amount you can learn is limited, and, um, you know, we can't learn everything. But it is the same wonder you are expressing. I, I, I feel it too. It's like, it's an almost unlimited number of things you could learn. You can't learn them all at once. There's an almost unlimited number of things you could learn if you had time and you had enough neurons and enough synapses.
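
Jeff's "greater than the number of atoms in the universe" remark is easy to sanity-check. With, say, 2048 cells and 40 of them active (roughly the 2% sparsity commonly used in HTM examples), the number of distinct sparse codes is on the order of 10^84; the specific cell counts here are just illustrative.

```python
from math import comb

# How many distinct sparse codes can a population of cells represent?
n = 2048      # cells
w = 40        # active cells per code (about 2% sparsity)

codes = comb(n, w)
print(f"{float(codes):.3e}")   # on the order of 10^84 distinct codes
print(codes > 10**80)          # True: more than the ~10^80 atoms in the universe
```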

(22:02):
But, um, in some ways you could say, well, how many different locations can a set of grid cells represent? And those who are knowledgeable about grid cells will say some number of grid cell modules, so thousands and thousands of cells. Um, the number of locations you can represent is extremely large, and it's unlimited in some sense.

(22:23):
Then when you pick one location, you can just start randomly, and then all the points around, all the locations around that, you can move to them with movement. They're all nearby. Um, but, uh, it's sort of an isolated island of points in this huge monster space of points, of locations. So, like, we have this almost unlimited-

(22:43):
It's like, it's like, imagine that you have your grid cells in your cortex and they represent the universe, and now you land on a planet where you can explore all the territory around that planet, but you're never going to easily move to the next planet. It doesn't, you know, it's like, in theory you could move there, but not really.

Matt (23:01):
Well, you said you have associative links to other
things.

Jeff (23:03):
You could do this.

Matt (23:04):
You can compose.

Jeff (23:05):
I can say, yeah, yes, but I was just saying that with actual movement, you can't really. You can move locally and map out your local environment, but I'm, I'm not. The point is, as I, as I update my location on Earth, I'm not accidentally going to think all of a sudden I'm at some location on planet Xenon, you know. It's not going to happen. The locations on planet Xenon are going to be all

(23:27):
unique from the ones on Earth.
Matt (23:28):
So, uh, what, how does this idea of these grid cells, how does that interplay with HTM systems? How does that work with minicolumns, the idea of groups of minicolumns, and the different layers?

Jeff (23:39):
Well, you're asking some great, that's a great question. It's a very detailed question. Um, uh, I'm going to have to assume that the listener knows something about our temporal memory.

Matt (23:49):
I have warned them upfront, and provided resources
in the show notes.

Jeff (23:54):
Because there are other things we could talk about that are less technical than that. But let's dive in. Okay. So, we came up with the temporal memory algorithm eight years ago, maybe nine years ago, something like that.

Matt (24:04):
It was before I was here

Jeff (24:06):
And it has quite a few innovations in it that I think are actual things happening in the brain, and one of them is the use of minicolumns. And, here's one way to look at it. You want to represent something; in this case, in the temporal memory, we want to represent some sensory input, right? And we've run it through an encoder, which you've been

(24:27):
talking about. So you have some, you have a representation of that input. In our temporal memory system, that representation is actually the minicolumns themselves. It's not the individual cells in the minicolumns, it's which minicolumns are active. That's the output of the spatial pooler, and you say, okay, I have some input, I'm gonna represent it by some active, sparse activation.

(24:49):
Every bit in that, in that output of the spatial pooler, every bit in that representation, is associated with a group of cells. We'll call them minicolumns, and then what we're gonna do is we're going to say, I can pick one cell in each column to be active at any point in time. So if I have an active minicolumn, I'm going to pick one cell to be active. What that allows me to do is it allows me to represent the input

(25:15):
in a very large number of contexts, depending on how many cells are in each area, but it doesn't take many. If I had 10 cells per minicolumn, I'd be good forever. It does not take a lot, because it's not like, oh, with one cell per minicolumn I can represent one thing. The cells in minicolumns can be used in many different contexts. So I could have, imagine if I was representing a note in a melody, and so here's a note coming in and I, uh, and I want

(25:36):
to represent it, learn that note in many different melodies. The same note could be in many, many, many different melodies, many locations in the same melody, and it's almost unlimited. The mathematics work out, like we talked about earlier: even if I have just 10 cells per minicolumn, I have an almost unlimited number of ways of representing the same thing in different contexts.
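
A simplified sketch of the minicolumn idea just described: the set of active minicolumns encodes the input (the note), while the choice of one cell per active minicolumn encodes the context (which melody it is part of). This mirrors the spirit of HTM temporal memory but is not the real algorithm; the seeds and sizes are arbitrary.

```python
import numpy as np

# Simplified sketch of HTM-style minicolumn context coding: the *set of active
# minicolumns* encodes the input (say, a note), while the *choice of one cell
# per active minicolumn* encodes the context (which melody it appears in).

CELLS_PER_COLUMN = 10

def represent(active_columns, context_seed):
    # Deterministically pick one cell per active column for this context.
    ctx_rng = np.random.default_rng(context_seed)
    return {col: int(ctx_rng.integers(CELLS_PER_COLUMN)) for col in active_columns}

note_C = frozenset({3, 17, 42, 80})       # minicolumns that encode the note "C"

in_melody_1 = represent(note_C, context_seed=1)
in_melody_2 = represent(note_C, context_seed=2)

print(set(in_melody_1) == set(in_melody_2))   # True: same columns, same note
print(in_melody_1 == in_melody_2)             # almost certainly False: new context
```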

Matt (25:56):
Yes.

Jeff (25:56):
That's what the minicolumns get you. Now, let's translate it into grid cells. I'm representing a location, right? Well, I'm going to do the same thing. I'm gonna say, well, I have a location, but I might want to represent that location in many different contexts. Now, why would I want to do that, if I have a location on this coffee cup?

(26:17):
Why? What are the different contexts? Well, let's go back to our stapler. Okay? Okay. The stapler changes its shape. Okay. Now it's the same stapler, so it's the same space of points. At one point in time, in one context, a point is occupied by the tip of the top of the stapler, and at another point in time, the tip of

(26:38):
the top of the stapler is someplace else in that space. So there's a location that at one moment in time had the physical stapler and at another time does not have the physical stapler, but it's the same point, it's the same location. So what I want to be able to do is say that location is the same location on the stapler, but at one moment in time it's occupied by this component and at other moments,

(26:59):
not. There are different states of the stapler.

Matt (27:01):
Yeah, within the reference frame of the stapler.

Jeff (27:04):
Imagine looking at your mobile phone and you look at the screen. There's a location on the screen. Well, different things appear in that location all the time. A menu comes up, then a graph, then a number, then something else. That same space of the cell phone, the same locations associated with that cell phone, have many different features

(27:24):
appear at different points in time, right? I need to know the state of the cell phone to know what's going to appear at that location. The same location can have different things occurring there under different contexts.
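
The phone-screen point, that predicting the feature at a location requires knowing the object's state, can be written as a lookup keyed by both location and state. The entries below are made-up examples, not anything from Numenta's models.

```python
# Toy sketch: to predict the feature at a location you need the object's state
# (which app is on screen, whether the stapler is open), not just the location.

feature_at = {
    ("phone", "home screen", (120, 400)): "mail icon",
    ("phone", "calculator",  (120, 400)): "digit 7",
    ("stapler", "closed",    "front tip"): "metal plate",
    ("stapler", "open",      "front tip"): "empty space",
}

def predict(obj, state, location):
    return feature_at.get((obj, state, location), "unknown")

print(predict("phone", "home screen", (120, 400)))   # mail icon
print(predict("phone", "calculator", (120, 400)))    # digit 7: same location
```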

Matt (27:35):
It depends on behavior.

Jeff (27:38):
What app am I running right now, right? It's the same phone, the same location. In this case, the morphology or the shape of the phone hasn't changed, but there's a location on the screen that at one moment represents one app and at another moment represents another app, another moment a menu icon, and so on. So the point is this idea that you represent, you have something like, oh, we started by talking about this sensory

(27:59):
input, and I want to represent the sensory input in different contexts. I now have a location on an object, but I want to represent that location under different contexts, because if I'm going to predict what's going to be at that location, I need to know the context, or the state of the cell phone or the state of the stapler. So this basic idea, this, you asked me about the, what's the,

(28:20):
what's the relationship with minicolumns? We just kinda, we think this is happening everywhere that you represent something, like a sensory input or like a location or like this displacement I was talking about earlier. Um, and, but I want to be able to represent many different contexts. And um, and so that's where the role of minicolumns comes into

(28:42):
play. And even in animals where they don't, where you don't see that there are minicolumns, the same basic principle seems to apply, that there's a bunch of cells that have the same sort of receptive field property and they differentiate under context. So it doesn't have to be, some animals like humans and monkeys have physical minicolumns you can see. In some animals you don't see them physically, but they're kind of

(29:05):
conceptually still there.

Matt (29:06):
Sequence memory is happening in at least one layer of cortex, and so there are minicolumn activations happening there, driven by sensory input. Elsewhere there's some grid cell stuff happening that's somehow synced up with that?

Jeff (29:19):
Well, it's getting complicated. I don't think we want to go there now. We're writing a paper, another paper, about this right now, about how it is that grid cells and sensory input coordinate.

Matt (29:29):
Okay.

Jeff (29:30):
Right, so imagine I reach my hand into that black box you mentioned earlier and I touch something with one finger. Well, I can't tell you what it is. It could be a lot of things. I might say, well, it feels a little bit like the Coffee Cup, and maybe it feels a little bit like the stapler. Maybe it feels a little like a pen. I don't really know. What happens is that now you move your finger and you sense something else, and what the brain does now, what the

(29:52):
columns representing the fingertip do, is they say, oh, what object do I now know that's consistent with the first sensation and the second sensation displaced by that movement?

Matt (30:01):
Oh, right, right, right.

Jeff (30:03):
And so you start eliminating things very quickly. Um, so if you only have, like, one sensor, like you're touching with your finger, or you're looking at the world through a straw, you have to move around and look, and as you move you're sort of saying, oh, I see this feature, this feature, at this position, and you have to sort of, it's not just the features, it's the features in relative positions to one another.

(30:23):
Right. And, um, so, uh, that's the mechanism for how that works. Uh, we think we understand a good portion of it. It's related in the cortex to layer four, which is the sensory input layer, and layer six, which we think is a grid cell layer, and how they interact. So exactly how that works, we don't know exactly, but we have

(30:43):
some pretty good ideas of the basic mechanism, and so we're in the process of preparing a paper on that concept as well.
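
The elimination process Jeff walks through can be sketched as intersecting, after each movement, the set of objects consistent with every (feature, relative location) observation so far. This is a heavily stripped-down stand-in for the layer 4 / layer 6 interaction he mentions, with made-up objects and features.

```python
# Stripped-down sketch of sensorimotor inference by elimination: each sensed
# feature at a location (relative to where we started) rules out objects that
# don't have that feature there; candidates shrink as the finger moves.

objects = {
    "coffee cup": {(0, 0): "smooth", (0, 3): "handle", (0, 6): "rim"},
    "stapler":    {(0, 0): "smooth", (0, 3): "hinge",  (0, 6): "metal plate"},
    "pen":        {(0, 0): "smooth", (0, 3): "clip",   (0, 6): "tip"},
}

candidates = set(objects)
position = (0, 0)

for movement, feature in [((0, 0), "smooth"), ((0, 3), "handle")]:
    position = (position[0] + movement[0], position[1] + movement[1])
    candidates = {name for name in candidates
                  if objects[name].get(position) == feature}
    print(position, feature, "->", candidates)   # ends with only "coffee cup"
```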

Matt (30:51):
Great.
And Subutai said hopefully, when we get to a point where we're ready, we'll try and get it published somewhere, and we'll give it open access.

Jeff (30:59):
Oh yeah, totally. I mean, um, I mean, I don't think these papers are in a form right now that would even make sense to share with anybody right now, but my goal, um, is, is, uh, as you know, I have some talks in the fall I'm going to give, and I want to have those two papers, that cover all the topics we've been talking about,

(31:19):
I want to have them available publicly at that time. They likely won't be accepted into a journal by that time, but we will post them on bioRxiv or the preprint servers, so people can read them. Um, and uh, we'll do it as quick as we can.

Matt (31:36):
We've always tried to be as transparent as possible and I
like that we do this.

Jeff (31:40):
Yeah, I don't, I, you know, we haven't ever talked about, like, posting our, you know, scrap writing as we're going along here. I don't know if that would be a good idea or not, but-

Matt (31:49):
I do it all the time. But I wouldn't advise you to do it.

Jeff (31:53):
We'll try out language and then we change the language, you know, and it can be very confusing to people if we suddenly start using different language for the same things and no one knows, understands, and there's holes, and it's quite messy at the moment. Uh, but we're making good progress on both of those papers.

Matt (32:10):
This was part one of a two-part interview with Jeff Hawkins. The next podcast episode will contain part two. If you like what you hear on the podcast and you want to discuss ideas like this with intelligent, friendly people, be sure to join HTM Forum at Discourse.numenta.org. Our online community was created around the Numenta open source project and continues to thrive on HTM Forum.

(32:33):
Hundreds of folks interested in HTM and related theories share ideas, experiments, and open source code. If you are an HTM theorist, engineer or programmer, or just a hobbyist, HTM Forum is a friendly place to keep up with the latest in HTM technologies. Thanks for listening to Numenta On Intelligence. Be sure to subscribe to our podcast on your favorite podcast service.

(32:53):
To learn more about Numenta and the progress we're making on understanding how the brain works, go to numenta.com. You can also follow us on social media at Numenta and sign up for our newsletter.