Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt (00:06):
Welcome back to the Numenta On Intelligence podcast. This is Numenta community manager, Matt Taylor. Last month, we released part one of an interview with Dr. Konrad Kording, who runs the K-Lab at UPenn, where we talked about uncertainty in the brain. Coming up is a continuation of that conversation, which will also be available as a video on the HTM School YouTube channel.
(00:31):
So you mentioned motor output, motor representation in the visual cortex. Let's talk about that a little bit. Why do you think motor output is produced across the entire neocortex? Why is there motor everywhere?
Konrad (00:45):
So I mean, of course, there's the standard argument: why do you have a brain? To move better. That's the only reason why you have a brain.
Matt (00:54):
What about the visual areas?
Konrad (00:55):
Visual areas – so that you can move better.
Matt (00:55):
It's everywhere, everywhere. So does it feel like your whole brain is basically modeling your whole body constantly, all at the same time, and the space around it? I mean, it has to be, right?
Konrad (01:15):
Well, yeah, I don't know, but you can make a pretty clean argument that movement is what things are about. The only way you affect your evolutionary success is by somehow sending the right commands to the right muscles at the right time. Even speech is ultimately a movement act, and suddenly a lot
(01:37):
of the way people think is built on circuits that were simpler movement circuits.
Matt (01:46):
Have you seen – I saw this X-ray recently of a mouth, like with an MRI or something, while it was talking, and the complicated movements inside your mouth are ridiculous. It's like a gymnastic feat. It's amazing.
Konrad (02:02):
It's off the charts. But then if you think about it, if you do sports, you might spend an hour every day. Or if you're a world-class athlete, you might spend several hours every day training. In that sense, all of us are world-class athletes at speaking. You spend hours every day talking to your friends.
Matt (02:23):
We started learning as soon as we started making noise, you know?
Konrad (02:27):
That's right. So we have tens of thousands of hours of experience moving our tongue and mouth, and the way we speak makes a huge difference in how people perceive us. It truly is important.
Matt (02:47):
When you think about it that way, we're much further from real language understanding than we think, you know? Because I feel like I'm on the same page with you. Motor, I think, is essential. The motor aspect, the integration with reality through your motor commands, is so crucial to language as we've learned it. I mean, that's the only way we know how language exists
(03:10):
– through movement and feedback with other people we're communicating with.
Konrad (03:14):
That's right, yeah, and how they perceive us and how we share ideas. One of the most amazing things that people do is have shared intentionality. Like, you want to understand intelligence, I want to understand intelligence. At some level, what makes life as scientists so enjoyable
(03:37):
is that we share that.
Matt (03:40):
Right. It's the social interaction. It's brains and brains, the collection of brains, you know, that really makes us human, I think.
Konrad (03:47):
Yeah, it's a crucial aspect of what makes us, us.
Matt (03:52):
That's great. Okay, so I got another topic. It's a technical topic. It's from one of your papers about – well, you call it, and I like to use the phrase, "time-warping" – but maybe you can explain the phenomenon I'm talking about in the brain. You know, when a monkey, for example, reaches or does a task, the neurons involved in that
(04:16):
task, at least observed in that task, don't necessarily fire when the reach happens. Why is that?
Konrad (04:23):
It's really interesting if we start at the other end.
Matt (04:29):
Great.
Konrad (04:29):
Let's say you see something – I show you a flash of light, and I ask you to push the button as soon as you see the flash of light. Okay, so it takes a little bit for the information to make it from your eye to your higher-order brain areas. What is interesting is that how long it takes for it to make it across
(04:51):
your eyes depends on how bright it is. If I give you a super bright flash, it's kind of going to make it to your retina faster than if it's a dim light.
Matt (05:03):
I can see that, yeah.
Konrad (05:04):
So what that means is that the processing that's going to happen in your visual cortex will be faster for the bright things and slower for the not-so-bright things. Now, that basically means that you aren't time-locked to the outside world.
(05:24):
Sometimes you're faster, sometimes you're slower, and it depends on things like brightness.
Now we can make it more complicated. There's this famous drawing where you have an elephant, but the legs aren't right, so it's clearly not an elephant. If you're in that situation, it takes you a while to parse that image. And some people have that Eureka – "oh my god, that
(05:46):
doesn't even work, those legs" – element very quickly, and some of them have it very slowly. And how fast or slow you have it depends on the situation. So what that means is that the inside of your brain isn't locked in time to the outside. There are random time delays happening. Sometimes things go faster, sometimes things go slower.
(06:10):
This is a huge problem for the way we analyze brains, because what we do, typically, is we give a stimulus and we measure the neural activity that happens after the stimulus. But what if sometimes the neural activity is early, and sometimes
(06:31):
it's late? Well, it means that it's all going to be washed out in a way.
Matt (06:35):
You mean later than normal, or just like it's not always perfectly happening–
Konrad (06:40):
Let's take the easiest case. Let's say we have a brain cell. It shows, after some delay, a little spike when it sees something. Now, let's take only the brightness: sometimes the spike is 20 milliseconds earlier, sometimes it's 20 milliseconds later. So at that point in time, if I ask what the average activity
(07:04):
of the neuron is, it will be totally washed out. The spike will sometimes be early, sometimes it will be late. So it will look like this is a very boring, very sluggish, very smooth cell, where maybe what it does is something like saying yes, no, I just saw it. But it's actually very precisely timed.
(07:24):
It could be incredibly precisely timed relative to when you actually see it.
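[Editor's note: here is a minimal numpy sketch of the washing-out Konrad describes. The cell, delay, and jitter values are invented for illustration and are not from his work.]

```python
import numpy as np

# One neuron fires a single, precisely timed spike 100 ms after it "perceives"
# a stimulus, but perception is jittered by +/-20 ms across trials (e.g., due
# to brightness). Averaging aligned to the stimulus smears the response.
rng = np.random.default_rng(0)
n_trials, n_bins = 200, 300              # 1 ms bins, 300 ms window
spikes = np.zeros((n_trials, n_bins))

for trial in range(n_trials):
    jitter = rng.integers(-20, 21)       # trial-to-trial latency shift in ms
    spikes[trial, 100 + jitter] = 1      # spike locked to perception, not stimulus

psth = spikes.mean(axis=0)               # average aligned to stimulus onset
print(psth.max())                        # ~0.03: a low, sluggish-looking bump
# Aligned to the (unobservable) perception time instead, the same cell would
# show a perfectly sharp peak of 1.0 at 100 ms.
```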
Matt (07:30):
But specifically to the context or something, you know?
Konrad (07:34):
That's right, exactly. It could be really complicated, and what that means is, if we analyze that data as we do at the moment, it would look like the whole brain is boring, everything's kind of low-pass, nothing much happens there. Well, it turns out that in one trial you see it early, and then in another trial you see it late, and then you can't even
(07:56):
say which cell is earlier or later, because maybe both of them are sometimes early, and both of them late at a later point in time, so the interpretation then gets to be difficult. Now, I think this is much more problematic even on the movement side. When I ask you to, say, plan to touch the tip of your nose with
(08:18):
your hand – well, sometimes you might do it just in time for the movement, or sometimes you might do it now, keep talking with Konrad, and then execute it. And what that means is that there's no alignment of the outside world with what's in the brain. But everything we do in neural data analysis, or most things that we do, is based on the assumption that it's locked to
(08:41):
the outside world.
Matt (08:41):
Right, right.
Konrad (08:43):
Okay, so now time-warping is like a technical set of algorithms to kind of undo those kinds of problems.
Matt (08:49):
It's a data-processing function, right? To help identify it.
Konrad (08:54):
Exactly. The way we use it is, if you give me lots of neurons, I want to ask the question: how fast are they jointly stepping through that process that they normally do? It basically allows for the fact that sometimes it might be earlier, sometimes it might be later. But
(09:16):
this is a universal thing. Your brain is not time-locked to the outside world, and once you realize that, anything you analyze in neural responses gets much more complicated.
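[Editor's note: a minimal sketch of the simplest version of this idea – shift-only alignment. Published time-warping methods fit richer per-trial warps (e.g., linear or piecewise-linear time rescaling); this toy only estimates and removes a rigid latency per trial, the easiest case discussed above.]

```python
import numpy as np

def align_trials(trials, max_shift=30, n_iter=3):
    """Undo per-trial latencies so averaging no longer washes out timing.

    trials: (n_trials, n_bins) array of smoothed firing rates.
    """
    trials = trials.copy()
    for _ in range(n_iter):
        template = trials.mean(axis=0)           # current guess of the response
        for i, trial in enumerate(trials):
            # choose the shift that best matches this trial to the template
            # (np.roll wraps around the edges; fine for this illustration)
            shifts = np.arange(-max_shift, max_shift + 1)
            scores = [np.dot(np.roll(trial, -s), template) for s in shifts]
            trials[i] = np.roll(trial, -shifts[int(np.argmax(scores))])
    return trials
```

Run on jittered data like the earlier sketch, the realigned average recovers the sharp peak that stimulus-locked averaging hides.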
Matt (09:32):
Yeah, that makes sense. I mean, there are certain experiences you have that seem very slow and others that seem very fast, and there's some evidence that has to do with, you know, the balances of chemicals in your brain and stuff like that. I mean, in the context of computation in your brain.
Konrad (09:48):
Right, and the temperature. If I heat up your brain a little bit, the wave–
Matt (09:53):
No thanks.
Konrad (09:53):
The volley of spikes might go through it a little faster. Like, you can just do it: get warmly dressed, run really hot in the sunshine.
Matt (10:07):
Alright, well, next time I take an exam, I'll dress very warm.
Konrad (10:12):
Nice.
Matt (10:14):
Ok. Dr. Kording, a couple more questions from our forum. We have almost a couple thousand people at our HTM Forum, and I sort of let them know sometimes who I'm going to talk to, so I'll give you a couple of questions from the forum. Somebody read your research page and quoted it, saying that you sort of have these two angles
(10:36):
for addressing information processing in the nervous system. One angle is analyzing and explaining electrophysiological data – you study what neurons do – and the other is analyzing and explaining human behavior, which is something all those neurons do together. And I thought this was interesting. His question is, "How do you begin to model that huge gap?" You know, those seem
(10:59):
like they're going in two different directions, so he'd like to hear you talk about that.
Konrad (11:04):
Yeah, so this is a huge problem, that there is this huge gap between basically behavior, which is complicated and includes billions of neurons, and what an individual neuron does. And I'm not sure how to cross that gulf.
(11:26):
In fact, I've written a couple of papers where I kind of voiced the worries that I have about that. So when we make that bridge in neuroscience, we are often very imprecise. Say we take some brain area, and we find that there are some neurons that do something. And then we say, oh yeah, therefore that part of the brain
(11:47):
solves the face recognition problem. And the logic doesn't work like that. Basically, finding that there's a difference if I show a different face doesn't mean that these cells don't do other things, and the correlation with faces doesn't tell you which role it has in communication. For all that I know, if I give you a task where you press
(12:10):
your right finger when you see a face, that muscle has a really strong correlation with there being a face, and yet arguing that your muscle processes faces is perfectly pointless. And so yes, there's that huge gulf between those two views, and it's a little unclear how to bridge it.
(12:34):
And you might argue that it could be impossible to naively bridge the gap. And let me kind of make the point on how it could be impossible. It could be that the way all the neurons interact – say, when I look at you and phrase a sentence – is of such a complicated nature that people could never
(12:56):
learn it. And the analogy that I want to use here is deep learning systems. Let's say you take ImageNet, a big data set of images that are labeled; since AlexNet, we have good deep learning systems that can solve that. They aren't very good, because I have Amazon Alexa here, who
(13:18):
just decided to turn itself on when I used its name.
Matt (13:24):
That happens to me all the time.
Konrad (13:27):
So basically, if the brain is something like a neural network – which means that it optimizes its properties, it has plasticity, it adapts to the outside world – then at some level, the way your brain operates is as complicated as the world in which it is. So you can't properly describe my brain unless you also
(13:52):
describe how dinosaurs work.
Matt (13:57):
Yeah, this is the relation to your dinosaur mascot.
Konrad (13:57):
That's right. I try to bring an example of a dinosaur into anything that I say. So basically, anything that I know must be reflected in a satisfactory model of Konrad, which means that it cannot be a compact model of Konrad, because Konrad knows stuff about dinosaurs.
(14:18):
So if you cannot compress a model of Konrad, then in a way, we can't produce something that is both a workable model of Konrad and can be understood.
Matt (14:30):
Perhaps. However, my counter would be – I mean, for a model that has learned how to be Konrad, I would say absolutely, you're right. But there's also the substrate in which that model learned reality.
Konrad (14:48):
That's right, and I agree with you that, at some level, maybe the reason why the gulf is so big is because behavior, as people exhibit it, kind of contains all those things that we learned from the world, and rightfully so.
Matt (15:06):
That's absolutely right.
Yeah.
Konrad (15:07):
And if it contains all those things, then maybe we are barking up the wrong tree. Maybe what we should rather ask is: okay, what is the substrate? What is the learning algorithm? What are the cost functions that the brain may be optimizing? Is it using an optimization algorithm? Those questions then become very central, whereas the question of
(15:29):
how it works at the low level – like, okay, "how do neurons contribute to behavior?" – might be the wrong question. Like the fact that somewhere in Konrad's early visual cortex, neurons have GABA wavelength. That might not be meaningfully part of a description of how
(15:50):
Konrad works.
Matt (15:50):
Right. It's so hard to decide what to study and what is contributing to the overall model.
Konrad (15:57):
Yeah, so in that sense, my answer to that gulf is that the gulf might be of a nature that means we need to rethink what we want to study on both sides. Maybe the way we study behavior isn't quite right; maybe the way we study neurons isn't quite right. But how those two can come together – that is something
(16:18):
people tend to expect that someone else will solve.
Matt (16:21):
Right. Well, I'm hopeful in neuroscience right now, because – I don't know how much you know about what we're doing at Numenta, but we're really excited about grid cells. We've incorporated them into our neocortical theory, you know. I've talked about grid cells a lot in other interviews,
(16:41):
so I don't think I need to give a basic description of what grid cells are, and I'm sure you know what they are.
Konrad (16:48):
Okay, but look, here's the problem. With grid cells, you refer to the tuning of cells – cells that basically, as you keep walking in some direction, periodically go up and down in their activity. Is this really something about the brain [inaudible]? Maybe this is rather something that characterizes the specific
(17:10):
environment in which rodents are raised, where that kind of representation is useful. If you had different mice, it might be totally different, or different humans. So the question is, to which level is the tuning of neurons really the right level to reason about intelligence? Because the problem with tuning, like grid cells, is that it
(17:34):
reflects the experience in the whole world. And therefore, it might be hugely dependent on the world in which you grow up.
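[Editor's note: for concreteness, this is the standard idealized description of grid-cell tuning – a sum of three plane waves 60 degrees apart, as used in many modeling papers. It is purely descriptive, which is exactly Konrad's point: a tuning curve characterizes what a cell's activity looks like in one environment, not what the circuit is for.]

```python
import numpy as np

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y), arbitrary units."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the grid spacing
    angles = np.deg2rad([0, 60, 120])        # three axes of the hexagonal grid
    rate = sum(
        np.cos(k * ((x - phase[0]) * np.cos(a) + (y - phase[1]) * np.sin(a)))
        for a in angles
    )
    return np.maximum(rate, 0)               # rectify to a non-negative rate

# Evaluate over a 2 m x 2 m arena; the map shows hexagonally arranged bumps.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate_map = grid_rate(xs, ys)
```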
Matt (17:43):
And that makes total sense even in the grid cell community right now, because there are still questions about: do rats create a two-dimensional representation of space? Are the grid cells they're creating only two-dimensional, versus other animals that move through 3D space? Are they fundamentally different in the way that they represent space?
Konrad (18:01):
Right. And what if you gave mice little flying things, with which they can move in 3D space?
Matt (18:06):
Well, we know the brain is so plastic and malleable, who knows? I mean, you just don't know. But I definitely agree with you: the grid cells that work in my brain were built off of my experience with the world, and I don't think that they would work for anybody else. You know, or maybe any other species, for sure. I mean, there could be some things that are the same within
(18:27):
species. I don't know. I'm getting way out of my league here, but the way everyone interacts with reality has a specific – what I like to call – I always go to Max Tegmark, because he describes these different layers of reality. There's an internal reality that everyone has that is unable to be shared. I cannot share what red is to me with you, except through a
(18:51):
consensus reality, which is language, where we both label these things, and we have symbols to represent them, and we can understand them. And then there's actual reality, which we all try to understand as best we can and communicate about with consensus reality. And this whole idea is that my internal reality, or the grid cells that I have, are a part of that. The grid cells a mouse has are a part of its internal reality.
(19:12):
So it's really hard to differentiate what those mean to them versus us versus that.
Konrad (19:17):
Right, exactly. So that's just why I'm worried a bit about which role findings like grid cells should have in the way we conceptualize intelligence.
Matt (19:30):
Right.
And that's an open question.
Konrad (19:34):
So, in that sense, to come back to the question that was asked – how to bridge that gap? I don't know. And I'm pretty convinced that right at this moment, very, very few people have thought hard about it. It's a huge gap, and it's a gap where we need to acknowledge that
(19:55):
we don't know the solutions. If we pretend that we do [know] them, we will misguide people.
Matt (20:01):
The danger of this gap is that it's so big. I think there are so many crazy ideas between one end and the other that it's really hard to differentiate between whose idea is crazy versus whose is brilliant. I mean, in this space, it's hard to say sometimes.
Konrad (20:18):
No one knows, exactly.
Matt (20:21):
I know. So anyway, Dr. Kording, it's been a pleasure talking to you. Thanks a lot for taking your time and giving it to us in our community. It's been really great. So, is there anything that you have a soapbox on that you want to talk about, while you have this opportunity?
Konrad (20:37):
Absolutely.
I want to talk about causality.
Matt (20:39):
Okay, great.
Konrad (20:41):
So I think a lot of the language that we want to use when we talk about brains is causal language. We want to ask how neurons make things happen, how brain areas make things happen, or how neuromodulators might make learning happen. Those are all causal questions. We want to know how one thing makes another
(21:01):
thing happen. But when you look at the bulk of the approaches in neuroscience, they are correlational findings – I show you a face and I see what the activity in your brain is. And those two are very different statements; correlations are not really indicative of underlying causality.
(21:21):
And I want to encourage everyone who wants to think about intelligence to start thinking about causality. The problem we solve in the world is to understand the causal chains in the world. We don't care what's correlated. We care about which things we could do to the world to make the world more pleasant for us. And the same thing as scientists: we fundamentally
(21:48):
care about causality – which things make which other things happen – and it's just so easy to measure correlations. And I believe that a large part of the community therefore effectively starts equating the two of them, and that's something that we should avoid.
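[Editor's note: a toy version of Konrad's finger-muscle example, with made-up variables. A downstream muscle correlates strongly with faces, yet intervening on it shows it plays no causal role in face processing.]

```python
import numpy as np

rng = np.random.default_rng(1)
face = rng.integers(0, 2, size=10_000)             # is a face shown this trial?
neuron = face + 0.3 * rng.standard_normal(10_000)  # face-driven visual activity
muscle = face + 0.3 * rng.standard_normal(10_000)  # finger muscle pressing the button

# The correlational analysis: "the muscle encodes faces!"
print(np.corrcoef(face, muscle)[0, 1])             # ~0.86

# The interventional analysis: clamp the muscle ourselves (the analogue of
# stimulation or lesion). The stimulus is unaffected, so the dependence
# vanishes -- the correlation was real, the causal role was not.
muscle_clamped = rng.integers(0, 2, size=10_000)
print(np.corrcoef(face, muscle_clamped)[0, 1])     # ~0.0
```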
Matt (22:05):
That's a good point. What do you think companies like us that are trying to work in this space can do – how can they benefit from that type of perspective?
Konrad (22:16):
I think for a company like Numenta – I mean, ultimately, what do you build into your models? Is it a causal chain? You say, this is what this neuron does to that other neuron. So in that sense, when interpreting the existing literature, you could benefit from thinking about it in terms
(22:37):
of causality. What do the experiments actually say, and what do they not say, about causality? But also, when it comes to, say, implicitly building objectives that the system has, the question is: what are the meaningful objectives? What's their causal role? How do they cause behavior in the end?
(22:59):
And so in that sense, I think the concept can be usefully implemented in any model of neural activity.
Matt (23:10):
Right. Yeah. It's just so difficult to put lots of the models together, you know, in a way that makes sense for everybody involved.
Konrad (23:19):
That's right, but the concept of causality, that is something – if you ask yourself how you think about intelligence, the concept of causality is what makes it intelligence.
Matt (23:32):
But I mean, at any point in time, I've got neurons that are predicting what's going to be happening in my environment right now. You know, that's sort of the brain as a "prediction engine" sort of idea, right?
Konrad (23:45):
That's right.
Matt (23:46):
And the causality of those predictions being made involves vast amounts of past experience – not just the past second or the past minute, but years and years.
Konrad (23:58):
That's right. And you could, for example, view the prediction engine like that: you can say that wanting to predict things is the thing that causes the tuning curves in the end.
Matt (24:16):
Oh yeah. I can see that, wanting to predict things.
Konrad (24:20):
Yeah. The goal of trying to predict things is what gives rise to how they compute.
Matt (24:28):
I see. I'm under the impression that the prediction's not a goal; it's just something that happens as a part of the mechanism of the neural network. At least in HTM, you know, we have a sequence memory theory/algorithm, and the predictions just occur if you connect them to the input the right way.
Konrad (24:48):
That's right, but the way you set up connecting them to the input in the right way is such that they change themselves so that they get better at predicting.
Matt (24:59):
Yeah, absolutely. Yeah. And it's super complicated – the world is topological, so you've got to have all these localized computations that – I mean, it's super complicated. But I think I agree with you. I mean, causality is super important, and we can't necessarily make any assumptions about why we're seeing what we're seeing if we're monitoring neural populations, unless
(25:23):
we know the– and we can never look at the internal reality of the system to verify it anyway, so we have to be very careful about the assumptions that we're making, right?
Konrad (25:30):
That's right. And I want to add one more thing. The backpropagation-of-error algorithm is also just a causal inference algorithm. It tries to figure out which changes in neural properties would make performance better. So local prediction versus global optimization leads to a very similar logical structure, where you have an objective in
(25:55):
learning that gives rise to computation.
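[Editor's note: a small check of that framing, using an invented two-layer network. The backprop gradient for a weight answers the causal question "if this weight were slightly different, how would the loss change?", which we can verify by actually intervening on the weight and re-running the network.]

```python
import numpy as np

rng = np.random.default_rng(3)
x, target = rng.standard_normal(4), 1.0
W = rng.standard_normal((3, 4))       # input-to-hidden weights
v = rng.standard_normal(3)            # hidden-to-output weights

def loss(W):
    h = np.tanh(W @ x)                # hidden layer
    return 0.5 * (v @ h - target) ** 2

# Backprop: propagate the error backwards through the chain rule.
h = np.tanh(W @ x)
delta = (v @ h - target) * v * (1 - h ** 2)   # error signal at the hidden layer
grad_W = np.outer(delta, x)                   # dLoss/dW

# The same quantity by intervention: nudge one weight and observe the effect.
eps = 1e-6
W_pert = W.copy()
W_pert[1, 2] += eps
print(grad_W[1, 2], (loss(W_pert) - loss(W)) / eps)   # the two agree closely
```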
Matt (25:58):
Right. Right. Well, that sounds right to me. Alright, well, Dr. Kording, thanks again for joining us.
Konrad (26:07):
Thanks for having me on.
Matt (26:08):
No problem. Again, this is Matt Taylor. Thanks for listening to this episode of Interview with a Neuroscientist on the Numenta On Intelligence podcast.