Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Matt (00:00):
Florian Fiebig is a research scientist at Numenta who recently graduated from the KTH Royal Institute of Technology in Stockholm, Sweden, with a PhD in computational neuroscience. He's been sharing with us his work on Hebbian memory networks, which he describes in his PhD thesis, linked in the show description.
He's recently been featured in our research live streams,
(00:22):
talking about synapses and plasticity. You can find upcoming live streams featuring Florian by subscribing to the Numenta YouTube channel.
His thesis not only provides new memory theory but also provides a general introduction to memory and learning in cortex. It is called Active Memory Processing on Multiple Time
(00:43):
Scales in Simulated Cortical Networks with Hebbian Plasticity. I won't make you remember that. You can find a link for it in the show notes, where you can download a PDF for free.
Thanks for listening to the Numenta On Intelligence podcast, and I hope you enjoy this interview with a neuroscientist.
(01:05):
First of all, welcome to Numenta. It's a pleasure having you here.
Florian (01:09):
It's a pleasure to be
here.
Matt (01:10):
I have been spending a good amount of time reading your PhD thesis and the three papers in the back. A lot of the things that I'd like to ask you about come from this thesis. For those listening, it's called Active Memory Processing on Multiple Timescales in Simulated Cortical Networks with Hebbian Plasticity.
Florian (01:30):
That's a mouthful.
Matt (01:31):
And we'll put a link in the show notes so you guys can... it's freely accessible in PDF format, which is great because it's really good. Honestly, it's a great introduction to memory frameworks in general, I think, in neuroscience, especially Hebbian memory, which we'll talk about in [inaudible].
Florian (01:50):
I wrote it, like, in part, a lot bigger than a dissertation has to be, just because I wanted my family and friends to be able to read some of it, and not just my colleagues.
Uh, so
Matt (02:00):
That was super nice of you to do. Cause I mean, it opens the doors for so many other people. Because, I mean, I'm not a neuroscientist. I've been reading neuroscience papers for a few years now, and some of them are just, like, immediately a brick wall that you can't get past, because I've got to look up that, I gotta look at that, I go to [inaudible].
Florian (02:19):
Well, the good thing is, just like Jeff, I've gone through this experience of sort of coming from a different field before and then falling madly in love with neuroscience and then, you know, like, sneaking your way into the, into the field. And so I've had this experience for, like, one year, you know, dedicated, trying to push into neuroscience and trying to learn about neuroanatomy and electrophysiology and whatnot.
Matt (02:41):
Well, maybe you can talk
about that, your background and
what led you to Numenta now?
Florian (02:45):
Yeah, my background is, I'm an engineer. So I studied, uh, at the technical university, Humboldt, uh, general engineering science, um, pretty much every branch of engineering there is, just because I couldn't get enough of it. So chemical engineering, electrical engineering, process engineering, mechanical systems, uh...
Matt (03:03):
You migrated to the really
big questions.
Florian (03:05):
Yeah, little stuff. Like, I had a lot of side interests and I had a hard time making up my mind. So, like, you know, patent law and basic economics, whatnot, on the side.
Uh, but so eventually I got interested in process optimization, uh, which also led me then into, um, AI and robotics. And so I found this really cool master program in Stockholm,
(03:27):
which was quite modular, sort of appealed to my, uh, sense of navigating, being able to elect any courses that I would like, because systems, control and robotics, that's kind of everything.
And then I specialized in AI and machine learning, and got increasingly frustrated with many of the, um, [inaudible], a little bit with, um, not the simplifications per se.
(03:50):
I like simplified models, but the refusal to look at the brain for inspiration.
Um, and so, um, I was aware of Jeff Hawkins and his work for quite some time, and so I eventually figured out, if I'm going to do this, I'm going to have to do it myself. And so I just sort of retrained myself, like, on the side in
(04:13):
neuroscience, and started sneaking into neuroscience lectures and asking people in the, in the back row for the password for the course webpage.
Wow.
Wow.
Like seriously?
Yeah.
Anybody can learn anything if you really want to. It's, I mean, you don't need anybody's permission to learn something.
Matt (04:29):
I mean your doctorate's in
neuroscience, right?
Florian (04:31):
Yeah.
Oh, mine is computational neuroscience. But of course that's a, that's a branch of neuroscience, right?
Matt (04:36):
Were you bestowed a sword when you were given your PhD? Cause I heard in Finland, anyway, you get a sword when you become a doctor.
Florian (04:46):
No, no sword. Uh, I got a bunch of, like, you know, heavy rocks of sorts. Um, like, um, they're like artistic things. One of them was actually very nice, and it's like this inset stone in a glass sculpture of a head, so it looks like you're peering inside the brain.
Ooh.
Um, I forgot the name of the artist, but it's actually quite
(05:06):
beautiful. And you get this doctor title. I still should do, like, a proper ceremony, like a PhD, you know, graduation ceremony. There was one now that I missed, of course, cause I went here. Uh, but you know, they don't take that away from you. You can go whenever; I get a ticket. And technically my PhD is also a dual degree, so I get one from the University of Edinburgh as well, and they get a kilt.
(05:29):
So that's one more reason to show up at another one, and to get myself a proper kilt.
Matt (05:33):
Excellent. Oh, well, speaking of your, your doctoral thesis: so I haven't read all of it, but I read enough to think, several times, this guy should be working for us. So many times I was like, that's exactly how we think the brain should, should work.
(05:54):
So I love that you're here, and let's talk about some of this Hebbian learning stuff. So we're going to get into some of the nitty gritty of neuroscience, and this, this is a podcast about neuroscience. So let's, so let's do this. Let's talk about Hebbian learning. One of the things, the two factors in Hebbian learning, right?
(06:14):
There's a, there's a learning rule that requires a pre... why don't you talk about that first?
Florian (06:19):
Uh, so I mean, we can go back all the way to sort of this simplified version of what [inaudible] had actually said. Like, you know, that "fire together, wire together." It's not what he actually wrote, but it's what everyone says. That's what everybody says he wrote, right?
Um, but the, the, the point is that, um, "fire together"
(06:41):
obviously means, um, that you need to have some metric, some measurement by which you can observe whether they are actually active together. And since there are two neurons, um, you always have what is called the pre-synapse and the post-synapse. So, um, and so you need to track activity on two sides of the equation. And then you also want to measure, uh, how many times they're
(07:03):
active together, right? So you want some, some... Um, so essentially you have three measurements, right? Um, the, the, the presynaptic spiking, the postsynaptic spiking, and then what you're particularly interested in in Hebbian learning is, um, the, the correlation between the two.
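The three quantities Florian lists here (presynaptic activity, postsynaptic activity, and their correlation) can be pictured with a toy update rule. This is only an illustrative sketch; the function name and learning rate are invented for the example, not taken from the thesis.

```python
# Toy sketch of a two-factor Hebbian rule: the weight change depends
# only on the product (the co-activity) of the pre- and postsynaptic
# signals. All names and constants here are illustrative.
def hebbian_update(w, pre, post, lr=0.01):
    """Return the new weight after one Hebbian step."""
    return w + lr * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # co-active: the synapse strengthens
w = hebbian_update(w, pre=1.0, post=0.0)  # post silent: no change at all
print(w)
```

Note that the rule only ever sees the two local signals, which is the point Florian returns to later in the conversation.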
Matt (07:19):
How closely together in time, right?
Florian (07:22):
How closely in time, and how close they are together in intensity. And, like, different learning rules capture different aspects of that, right? Um, and of course, uh, it matters a lot, because some learning rules have tight temporal kernels. Like STDP, uh, spike-timing-dependent plasticity, kernels, um, tend to be very
(07:42):
narrow. So, like, if the spikes on the pre- and the postsynaptic side don't occur within, like, you know, 10, 20 milliseconds, then there won't be much of a synaptic change.
But many of the protocols we're really using for inducing lasting associative change in neurons, where we, you know, bombard neurons with long tetany of pulses, uh, which are not
(08:04):
necessarily biological, but very effective in potentiating synapses, making them strong, right? During experimentation, right. So, like, you have, you have neurons that you've grown in a dish, there's a synapse between them, and now you want to investigate how the synapse changes with, with activity. And so, um, then we do protocols that are a lot better at
(08:25):
capturing sort of rate relationships rather than the individual spike times. And so that's why it also matters to understand that there are so many different forms of plasticity.
Matt (08:34):
So if you're thinking
about a rate, you're going to
look at it over a time window,
Florian (08:40):
But you're averaging; that can be, like, an exponential average or, like, some shifting time window.
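The exponential average mentioned here is often implemented as a decaying spike trace. A minimal sketch, where the time step, time constant, and names are all illustrative:

```python
def rate_trace(spikes, dt_ms=1.0, tau_ms=100.0):
    """Exponential running average of a 0/1 spike train: each step
    the trace decays a little, and each spike bumps it up."""
    decay = 1.0 - dt_ms / tau_ms
    trace, history = 0.0, []
    for s in spikes:  # s is 1 on a spike, 0 otherwise
        trace = trace * decay + s
        history.append(trace)
    return history

tr = rate_trace([1, 1, 1, 0, 0, 0])  # a short burst, then silence
# The trace peaks at the end of the burst and decays afterwards,
# so recent activity counts more than old activity.
```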
Matt (08:46):
So there's a lot of ways you can apply Hebbian learning. Sounds like you could do it one spike at a time and have a very simple rule, or you could have some oscillatory rule.
Florian (08:55):
So quite a, quite a zoo out there, then.
The interesting realization is, of course, it's not just a zoo in terms of mathematical models that sort of abide by this broad idea of Hebbian learning, that they're somehow capturing correlations. But there's also a zoo of biomolecular mechanisms that
(09:16):
support those, uh, those ideas. Um, there's many forms of plasticity, Hebbian learning obviously just one of them. Um, but there's several, you know, uh, possible mechanisms that could implement something like that in the brain.
Matt (09:30):
Is there, uh, does, does deep learning have anything similar to Hebbian learning, like a two-factor rule like we're talking about, at all?
Florian (09:38):
Oh, that's a good
question.
Um, well, I mean, like, gradient descent per se doesn't really do that.
Matt (09:49):
You need some type of
localized feedback, right?
Florian (09:51):
Yeah.
Because of course, I mean, that, that is exactly right. I mean, Hebbian learning is a local rule, because all the information that the synapse has is only the information on the pre- and the post-synapse. And we think of them as separate, but in fact there's a lot of, uh, messaging between them. Not just the, you know, pre-synapse, um, sort of releasing neurotransmitter that
(10:13):
then gets detected and leads to a change on the other side; there's also retrograde signaling, and sort of, there's a lot of communication. But it's all very local, just a small volume. So you don't get these big, full-network kind of, uh, analysis things to kind of reshape the whole network. You kind of only do it from the smallest element up and see what emerges when you change the local rule.
Matt (10:36):
So everything's locally learning simultaneously. That's sort of the emergent thing.
Florian (10:42):
That's right.
And that's what makes it a little bit hard to wrap your head around, because we are so used to designing systems top-down rather than bottom-up. So that's why also many of these things require, you know, building things small and then testing how they actually behave when they grow bigger, because we might not be able to predict it a priori. Sometimes, with a very simple network, you can predict
(11:05):
these things: you know, what the distributions of weights, for example, are going to look like under certain conditions, and, you know, what kind of, you know, dynamical structures kind of get formed or not. But as soon as you have any amount of biological detail, you get so many non-linearities and, uh, uh, and, you know, so many differential equations that interact with, uh,
(11:27):
with one another that, uh, it gets impossible to predict much. So that's why all of my work is very heavy simulation science.
Matt (11:36):
Now, you hinted earlier that there were different types of plasticity in Hebbian learning. There's a fast plasticity and then there's a slower plasticity. Is that synonymous with short-term plasticity and what you might call long-term potentiation? Is that the right term?
Florian (11:55):
Yeah.
So long-term potentiation is obviously a super important word in neuroscience, uh, often abbreviated LTP. Um, and so, because neuroscientists have been asking this question: what does it take to change a synapse, you know, in a lasting way? And, like, how do we learn? We learn things, you know, that stick with us, not just buffer the
(12:17):
last three words that were just said, right? Uh, but you know, how do I remember, you know, the name of my grandfather or something?
Um, and so, so, uh, so it turns out there's very good experimental, um, sort of findings, uh, protocols that
(12:37):
are very nice in predicting how to potentiate synapses long-term. The problem with many of those, as I said, is that they require these, you know, long tetany, these long stimulus bursts, at rates that are just outrageous for a real brain. Um, in part they require them because these are cell cultures, so they're taken out of the sort of in vivo context, right?
(12:58):
So they don't exactly behave like the real thing. And also, neuroscientists are impatient and they want reliable results. Um, and so, um, because of course, in the real brain, you know, activity is a lot more messy, and if there's any sort of burst, activity tends to be very short. Um, and so as a consequence, though, most of the
(13:20):
plasticity research early on was really just LTP research, because it's relatively easy to do. Whereas these more fleeting forms of short-term potentiation, there's a whole slew of them, some of them Hebbian and some of them non-Hebbian. And in part, my thesis makes the argument that particularly the Hebbian ones might be super useful for explaining things like working memory.
(13:42):
Um, and so it feels to me like a lot of the research is still, you know, reasonably new. Um, because, it's not really... you use the terms long-term or short-term, but the problem is that many of these, uh, forms of plasticity are not really necessarily time-dependent; they're more activity-dependent,
(14:02):
right?
Matt (14:03):
Activity of the agent, or activity of the cells around the neurons?
Florian (14:09):
The pre- and the post-synapses. You can get, like, a short-term potentiation using very simple paradigms. But if there's any amount of ongoing activity, you know, just a couple of extra spikes, that potentiation will very quickly go away. Whereas if you then silence that cell culture, or you cut all the inputs, go away for six hours, and then you come back, and then
(14:30):
you test just with a presynaptic ping, right? You just send one spike through and detect what comes out on the other side. The synapse is still strong. So it turns out short-term potentiation is not always short-term potentiation. And in fact, the term short-term is kind of misleading.
Matt (14:46):
What is it?
What should it be?
Florian (14:47):
Yeah, I'm not sure yet
to be honest.
Um, because.
Matt (14:51):
There's something else besides time.
Florian (14:53):
Yeah.
There's something else besides time. And we always model these dynamical systems, um, with sort of some time constant. So there's a lot of temporal, uh, equations, and that's all right. I mean, if there's a certain level of noisy activity, then if there's activity-dependent decay of some signal, then that will,
(15:13):
you know, kind of get mapped onto a time constant. But it kind of suggests that time itself is the driver for the, for the, you know, for the signals. And it might actually not be. And so it's a little bit misleading to always start these discussions about potentiation by saying, oh, short-term, intermediate-term, long-term, um, whereas in fact, you know, we
(15:36):
might want to differentiate this.
Matt (15:38):
Maybe the context is not so much time, but other biochemicals that are involved in it, or some other form of context around the memory system?
Florian (15:47):
So, yeah.
Um, I mean, I gave, um, two recent research talks now at Numenta about sort of fast forms of plasticity which are not necessarily Hebbian, called facilitation and augmentation. And many of those are, are, like, reliant on calcium signals. Um, and it turns out that calcium diffuses relatively
(16:10):
quickly, and because of that there's a certain timescale to these phenomena, and there are ion pumps in the membranes that sort of restore the original level of calcium. And so that's why the signal will go away. Um, and so, at least in the sense that the calcium originally drove it. Of course, these chemical cascades can get
(16:31):
complicated, so it's not really just, you know, calcium, but it's often sort of the initiating mechanism, the calcium. And so, so then you really want to understand: it's not really time-dependent plasticity, it's more like calcium-dependent plasticity.
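A toy way to picture that calcium-dependent decay: each spike leaves a bump of a calcium-like variable, and the pumps pull it back toward baseline, so the facilitation is gone after enough quiet time. All names and constants here are illustrative, not a model from the talks.

```python
import math

def residual_facilitation(spike_times_ms, t_ms, bump=0.2, tau_ms=300.0):
    """Sum of exponentially decaying calcium-like bumps left at time
    t_ms by earlier spikes; the decay stands in for the ion pumps."""
    return sum(bump * math.exp(-(t_ms - ts) / tau_ms)
               for ts in spike_times_ms if ts <= t_ms)

burst = [0, 10, 20, 30]                    # a short burst of spikes
print(residual_facilitation(burst, 40))    # just after the burst: elevated
print(residual_facilitation(burst, 2000))  # two seconds later: nearly gone
```

The driver is the spike-triggered influx, not time itself; time only enters through how long the pumps have had to clear it, which is the distinction being drawn above.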
Matt (16:48):
That's more of the
context.
Usually calcium is there because of some time, something about time.
Florian (16:55):
Yeah, you know, like a
number of spikes that got
transmitted over a short time.
And so you had a lot of calcium influx.
Matt (17:01):
A lot of this is just, um, trying to maintain some homeostasis in the cell, or in or outside the cell. So there's biochemicals moving in and out just to get to a stable state.
Florian (17:12):
Right, right.
Homeostatic mechanisms are super important to keep the balance in many of these systems that otherwise would run away to weird, indefensible states. And I mean, very rarely do we get a glimpse into that. Like in epilepsy, when you have runaway excitation and there are these waves of activity that wash over the brain and activate every neuron.
(17:35):
And I often get asked by friends, sort of, um, you know, like, how much of our brain is active? And, like, this popular myth that we only use a little part of our brain, right?
Yeah.
And then I tell them, look, I've, I've seen, three times in my life, I've seen people with a hundred percent brain activity. It's not pretty to look at.
Matt (17:52):
I bet
Florian (17:53):
You have to be careful
that these people don't swallow
their tongue.
Matt (17:57):
Yeah, it's not doing anything useful, it seems. You know, I always think about this. Um, I used to... do you know what a Bobcat is? It's a, it's a, it's a small tractor, right? And it's got a scoop. So it's sort of like, they call it a backhoe or a loader in the Midwest. But the controls, you sit in it, it's just like one
(18:17):
seat, and it's like a small car, right? And it's got a loader in the front, and usually you just move dirt or drill holes with it. So you're like, why is he talking about this? Well, okay, so the controls on it are two sticks. It's not a steering wheel. There's like a stick in your right hand, a stick in your left hand. If you want to move the right track, you move the right one forward, and you turn right.
(18:37):
Yeah. So it was very much like a tank, cause you're on tracks, or sometimes it's wheels. But, um, so there's a state you can get in very easily. If you stop too abruptly, both of the sticks go forward, and then you go forward and move back, and then you go forward and move back, and forth, back and forward. The only way you can release from that is let go, or else you're just in the loop forever.
(19:01):
And it's really, really off-putting.
Florian (19:03):
Interesting.
That's such a great example of, like, viscerally experiencing...
Matt (19:06):
Oh yeah.
Florian (19:08):
Auto feedback.
Matt (19:09):
Right.
It's definitely a bit like an epileptic seizure, I guess.
Florian (19:14):
You just get stuck in
this, uh, in this, in this
state.
Matt (19:17):
Yeah.
And all you can do is disengage. It's the only way to get out of it. And you only have to do that once, and you learn it, because it's such an easy way to get out of it. But you know, someone has to tell you: let go of the handles. And I remember that happening to me once. Anyway.
Florian (19:38):
That's just a
fascinating topic actually.
Matt (19:41):
Let's talk about working memory. A lot of your thesis is about working memory, short-term memory versus long-term memory. So I found this fascinating, cause, you know, um, I, I've been thinking a lot about, um, where pathways, what pathways, you know, the different streams. And, uh, in your thesis you talk about short-term memory, and is
(20:02):
it the medial prefrontal cortex, somewhere around there?
Florian (20:06):
Typically. I mean, frontal cortex in general is involved in all these top-down things, but particularly for, um, sort of the kind of semantic declarative working memory maintenance, also lateral prefrontal cortex has been described to be important. But if you map that onto a rodent, they don't
(20:26):
have a [inaudible] or prefrontal cortex for... they talk about the medial prefrontal cortex. And so sometimes it gets a little bit tricky when you're pulling studies from different animals.
Matt (20:35):
Yeah.
Well, I mean, mice don't ever have to make shopping lists or things like that. Cause that's what I always think about. I'm like, working memory. I liked your paper because, uh, I don't know how best to explain it, but you've got, um, areas where you say long-term memory, maybe two areas of the brain that are different sensory modalities,
(20:55):
right? That are, that are performing long-term memory, and your working memory is, uh, in prefrontal, uh, and that is sort of a scratch board, or someplace that you can index. I like the word index there. So if I, if I go... this is what I do when I want to see what I want to buy at the grocery store: I go to the refrigerator and I start looking for things, and I'll look for something and
(21:16):
that will trigger: oh, I also need milk in addition to eggs and coffee, and that sort of thing.
Florian (21:21):
The associative nature of much of memory?
Matt (21:26):
And, and so, so, the, the long-term memory is getting sensory cues that identify objects, which then cues the short-term memory, uh, right? Maybe you could explain this better.
Florian (21:36):
Yeah, no, no. We do this all the time. Like, it turns out, some of our working memory mechanisms are, like, really quite specialized for certain things. Like, so, cognitive scientists have long described, for example, this, um, this auditory, uh, phonological loop, I think it's called. So, this idea that you can, even without much understanding, you
(21:57):
can loop a phonetic sequence of some sort. So, um, rather than thinking of these five items that you want to buy and, like, visualizing them in front of your mind, you just speak the words. Yeah. Um, or, um, you build yourself, you know, like, a little rhyme or
(22:17):
something, buffering things. So butter, milk, egg, toast, and, uh, I don't know, cheese. Butter, milk, egg, toast, cheese; butter, milk, egg, toast, cheese. Like, I can repeat "butter, milk, egg, toast, cheese" faster than I can even think of these concepts and what they would look like in the packaging on the shelf in the supermarket. And so you can use that mechanism to sort of buffer that. And then you have these indexes. Butter, right?
(22:38):
Butter. Which butter was it? That one. And you'll recall what it looks like, because you have a representation. The long-term memory is all there; you know what butter is, right?
Matt (22:49):
Yeah, you just need a
trigger.
Florian (22:49):
You need a retrieval mechanism. And so sometimes you can loop in these short-term systems that are specialized, um, you know, to very effectively get an index onto long-term memories. And that is, of course, an associative thing, because you're binding these things together, right?
Matt (23:08):
It seems like an extremely useful place to construct plans and goals and, you know, things that you want to do, projects that you want to create. I mean, that's what the prefrontal cortex is all about.
Florian (23:21):
Yeah. All the executive and the planning and the imagination. Yeah, that's true. That's why these are all sort of, like, neighboring, um, functions there.
Matt (23:30):
So, but, uh, in long-term memory we've got long-term potentiation, right? So the different type of learning. So in the short-term area, would you call that fast Hebbian learning? A different type of...
Florian (23:44):
Yeah, that's typically how you would describe it. It's noteworthy, though, that that is a very new idea. Um, like, the idea that, um, long-term memory is associative has long been around; it has been shown in these LTP experiments that, you know, you get binding of co-activated units, and if you activate them in an anti-correlated way, then they will
(24:04):
actually uncouple. So, like, synaptic depotentiation, synaptic depression; it's called LTD instead of LTP. Uh, that has long been known. But the idea for working memory was, for a long, long time, that it would be some kind of persistent activity signal. So rather than having a system that buffers these items in
(24:25):
some, um, synaptic changes, people would think there's some kind of reverberating activity that self-perpetuates somehow. It mapped very nicely onto early models of neural networks, like these [inaudible] networks, which have these beautiful attractors, so, once you let them, once you kick them off, they can
(24:46):
keep going for a while.
The problem is that you need a dedicated attractor for all of these items, and you can think of a billion different things. And if you do, you get to the permutative complexity of all of these things you can think of. And then it very quickly becomes clear that you would much rather have a scratch board of some sort where you can put
(25:07):
things down and then it quickly gets erased. So you don't have a dedicated network for all the working memory things you might want to keep short-term on your mind, even while you might have, you know, lasting attractors for sort of principal things you've understood about the world.
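The attractor idea (and its cost) can be made concrete with a toy Hopfield-style network: each stored item gets its own attractor, built here with a Hebbian outer-product rule, and a corrupted cue settles back onto the stored pattern. The patterns and sizes are invented for the example; a real network would need one such attractor per item, which is exactly the scaling problem described above.

```python
# Two +/-1 patterns, each becoming a dedicated attractor.
patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, -1, -1, 1, 1]]
n = len(patterns[0])

# Hebbian outer-product weights, with no self-connections.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
      for j in range(n)] for i in range(n)]

state = [1, -1, 1, -1, -1, -1]  # first pattern with one bit flipped
for _ in range(5):              # synchronous threshold updates
    h = [sum(W[i][j] * state[j] for j in range(n)) for i in range(n)]
    state = [1 if x >= 0 else -1 for x in h]

print(state == patterns[0])  # the noisy cue settled onto the stored attractor
```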
Matt (25:22):
Right.
That's so interesting. I never, I've never thought about the prefrontal cortex as sort of being something that doesn't necessarily hold anything at the moment. It's just, it's used, like, as a desktop, you know, to put everything together.
Florian (25:38):
Like, these structures are very multidimensional. Um, and they're very... like, these neurons might be doing one thing in one task and something completely different in a different task. Um, and so that suggests, um, that, you know, rather than being universal, they get recruited into things, and a fast
(26:01):
Hebbian mechanism would be one way to recruit them into some... doing something useful for some time. But because it's a fast mechanism, it also means it can be ready to be quickly overwritten. But it might just be the temporal bridge that you need in order to kick off, you know, more lasting memory changes, or just to perform the task, right? Oftentimes we don't actually need the long-term memory. We just need to be able to execute the task, which means retrieving
(26:21):
some long-term memory thing, being able to do it, but then we can forget it. So we don't actually want to consolidate the full thing. You don't want to remember every word that I said today.
Matt (26:31):
Right.
There's no point in that.
We forget almost everything weever...
Florian (26:36):
Yeah, the vast majority. I think of the brain as, like, this massive filter, right? So you have, like, all these impressions that you could perceive, and a tiny percentage of that is even perceptible to the kinds of organs that we have. We only see certain wavelengths and we only hear certain frequency bands. Okay. And then, out of all the things that you could perceive, only a tiny bit passes through your attentional gate.
(26:58):
So, and so, you're aware of even less than what you actually perceive. Clearly there's, like, you know, pressure on my, I don't know, on my foot right now, because it's inside a shoe, but I don't perceive it, right? Unless I focus on it, and then I notice: yeah, my foot has weight. Um, so, so, you're not even aware of most of what you perceive, and then even less of the things that you perceive actually
(27:20):
become short-term memory. So, something that you are aware of for any amount of time, like some 10, 20 seconds. And, you know, so I might remember, you know, the past sentence that I just used. Um, and so a tiny fraction of these things then become something that you might remember tomorrow, and of all the things you remember from yesterday, you know, you're
(27:40):
going to forget most of it. It's very unlikely that, you know, much of that will be left after a decade from now, unless something extraordinary happens.
Matt (27:47):
And it's not necessarily that you'll remember (I hope these terms are right) the declarative experiences that happened to cause your semantic models to update, right? You're just going to remember the facts that got updated; in a lot of cases, it depends on the events that had happened.
Florian (28:05):
I usually like to use this example of, like: just try to remember... I don't know. Like, you might know that the capital of France is Paris, but do you know who told you?
Matt (28:14):
No, of course not.
Florian (28:16):
So one of the fascinating things is, we have this episodic memory, where we, you know, remember the exact event and what happened, and this person walked in and said [inaudible] happened. But we also have this, you know, much more semantic memory, the sort of things that stick around, things we learned that sort of get disconnected from the context. And in fact, one of the most interesting discoveries that I
(28:38):
made early on, sort of, what I thought was super interesting about memory research, was the fact that we have all these different systems, and they don't seem to be connected very well, in the sense that you can have one but not the other. Like, knowing how to ride a unicycle does not mean, um... you know, you might have a memory of when you learned how
(28:59):
to ride the unicycle, but actually they're completely different systems, and you can lose one without losing the other.
Matt (29:06):
Um, this is something they showed in H.M.
Florian (29:09):
Yeah. H.M., this famous, um, famous case where they could teach him all kinds of interesting motor tasks, but he would never remember any, any of it in the episodic sense, right?
Matt (29:23):
Right, he would get
better.
Florian (29:23):
Yeah. You could train him to be, you know, happy about something or to be afraid of something by fear conditioning, or teach him certain motor tasks, you know, writing backwards in a mirror, things like that, right? It's hard to learn, but you can learn it. Um, but the interesting thing is, of course, he would get better like everybody else, because he had these systems
(29:45):
that are necessary for learning these things, but he did not have the hippocampus, which is, you know, this super important bridge into long-term episodic, uh, consolidation of memory. So you could leave the room, you know, come back two hours later, and he would not even know that he had done that task and actually improved his performance on it.
Matt (30:04):
Sure. That's fascinating. So we can get better at things without even realizing how we're getting better at them.
Florian (30:11):
Yeah, exactly. Uh, and that also means that we need to get a little bit outside of, like, this story, this narrative we're always telling ourselves. We understand ourselves through what we remember episodically, but of course our learning is much bigger than that. Um, that is particularly obvious for all the kinds of learning that require, uh, sleep, and for all the kinds of memory and learning
(30:34):
that aren't declarative. So, things you cannot state, right?
Matt (30:38):
Things you can do, you have to do to learn them. Exactly. Performances.
Florian (30:43):
Yeah. Maybe we should have, like, explained the term, right? So there's the declarative memory, memory you can declare: so, facts and events, the semantic and what happened. Um, but then there's also all the non-declarative memories, which are, you know, motor skills and, uh, and, and fear conditioning. And, um, like, all, all kinds of, uh, memories that you cannot
(31:04):
state but you can still learn and acquire. Um, some of those are, like, motor things. Like, you can walk, right? But tell somebody how to walk. It's tricky.
Matt (31:14):
You have to try it.
Florian (31:15):
How do you, can you tell somebody how to ride a bicycle, and then they will be able to do it?
Matt (31:20):
Uh, no, I don't think so.
Florian (31:22):
Probably not.
They still need to practice
Matt (31:24):
They've got to do some trial and error. They don't know what it feels like to sit on a bicycle and to balance that thing.
Um, yeah, that's,
Florian (31:32):
so that's sort of a fundamental distinction. And, obviously, those are then associated with different brain areas. Um, and that's not to say that we don't use the other brain areas while we are learning those things; we're always using the whole of the brain. Right. But,
Matt (31:45):
but those non-declarative, uh, memories, I guess you could call them, are very personal. It's like it's not, they're not facts and figures. They're tied directly to your body, your senses, and the things that you do with them.
Florian (31:59):
Which is why they don't transfer so well as words. Right. Whereas you can learn the capital of France.
No problem.
I can tell you.
And then now I know this fact.
Right.
Excellent.
Matt (32:09):
You mentioned this in your thesis, and I find it fascinating, because I'm, I'm a musician and I like to perform and practice music. And you noted that, like, a drummer, any drummer will tell you that, um, the rhythms that they learned that day aren't really gonna get locked in until they've had a good night's sleep and they come back the next day, and then they'll pick it right up. And that's absolutely true.
(32:29):
And I think any, any, uh, athlete or performance artist in any way, it doesn't have to be a creative, but an athlete's the same thing. You, you have to have that practice, that movement through space. You know, the update of all of your models, and then get some rest, come back, try it again in a different context, and then it sticks.
Florian (32:47):
Yeah. And one of the fascinating things, and that's where I did much of my earlier work, uh, is how do these transfer processes work, right? How does something go from this sort of initially acquired memory into something that is longer lasting? And so that's where all these interesting findings come in, particularly when it comes to, uh, like, spatial memory, um, um, but also, uh,
(33:10):
just episodic memory. And in some sense, uh, there's this strong involvement of hippocampus, which we can read out in rodents very nicely, right. And you get all this interesting evidence of, like, this strong replay during sleep, where whole episodes get compressed into, like, these short, like hundred-millisecond sharp-wave ripples.
(33:31):
So you get these fast oscillations riding on a slower wave, and they compress the whole movement sequence of the rodent that you saw before, you know, like, behaving, you know, for 20 seconds or something, compress it down into this, this very short burst, with the exact same neurons activating in the exact same sequence, but not just once or twice, hundreds of times. And that obviously got people thinking: is that causally
(33:53):
related to their memory? And then people started meddling with it. So you can selectively suppress these; you can have a microcontroller listening to those and then target to interrupt some of them but not others.
Matt (34:04):
What is it that, uh, that, uh, tells your brain, when you're asleep, to take some events that happened during the day and reinforce them, but others, don't worry so much about?
Florian (34:12):
That's interesting, right? What about biases?
Matt (34:15):
There must be some type of
biochemical- dopamine is
probably involved.
Florian (34:20):
I mean, yeah, that's a big argument that, uh, that modelers have been making. I don't know exactly how strong it is in an interventionist sense, whether you can prove that by, like, intermittently, you know, blocking [inaudible] modulation of plasticity, for example. Uh, but it's true that, like, the presence of dopamine at
(34:41):
like, these synapses is, can, um, can make the, uh, synapses a lot more plastic. Meaning they're stronger. And the hypothesis that my first paper makes is that the stronger synapses will then, uh, be much more capable of reactivation. And because they reactivate, they get stronger. So they're sort of like a self-perpetuating dynamic.
Matt (35:02):
They stand out, sort of, from the rest of the-
Florian (35:05):
So the most powerful memories kind of run away. And so in that sense, the real question is, if you want to remember something, then we need two plans, right. Um, plan A is just repeat it a lot. Right? Make it sort of strong by repetition, repetition. That's why you practice your, you know, vocabulary not just once or twice. You go through your cards 50 times, and that increases the
(35:27):
odds that they will get consolidated. Right? 'Cause it doesn't matter that you know it now; what matters is that, you know, you know it in two weeks.
Matt (35:33):
I like to change my context, go outside and do something. I go over here and study the same thing.
Florian (35:39):
And then there's, of course, the, the other strategy where, instead of doing massive amounts of repetition, you make it really, really relevant.
Matt (35:46):
Right, right, right.
Florian (35:47):
Um, I dunno, you could have, like, an outrageous example. Like, you slap somebody in the face and tell, tell that person your name. They will never forget your name.
Matt (35:55):
It's true.
That's a good tactic to use.
Florian (35:58):
Yeah.
Right.
Yeah.
I don't recommend that; I'm not a believer in violence. I'm very kind. But the point is, we are very good at recognizing what is relevant, and when something is relevant to us, much of that is obviously biologically pre-, pre-coded. Right. So your pleasure and pain and, uh, you know, anything with sex, you know, it's a lot better remembered.
(36:19):
Um, and this is just true. Anything that has social relevance, where there are stories involved, and people. That's why all these memory artists, they map all their tasks always onto, you know, people, places, locations, you know, things that we are all, like, automatically good at, because we have dedicated circuits for that, and things that mean something to us. They create, like, a palace of the mind.
(36:40):
You want to make these objects that you're putting in the space, like, outrageous. You know, like, you don't just want the chair in the corner, but know that the chair is, like, you know, fiery red, and, like, I don't know, its leg is on fire. Well, no, you're not gonna forget that chair. It's a weird picture in your mind. But if that helps you build that palace of your mind for your
(37:00):
memory task, that is in fact what they do: the more outrageous the story, the better our memory is.
Matt (37:06):
Yeah.
Okay.
I mean, there's, there's a reason for any performance artist or athlete that when they perform something well, it feels really good. Right? Because you get immediate feedback from that. You're, you're paying attention to the music you're playing, or the ball that you're hitting, or whatever. And when everything works, it's a great feeling.
Right?
Florian (37:25):
Yeah.
Beautiful.
Cause they are beautiful things.
Matt (37:28):
Yeah.
Serendipity.
Um, let's talk about attractors for a little bit. Right? Um, attractors are tricky to describe. So, not coming from a mathematics background, it was very hard for me to understand what an attractor even is, or what a dynamical system even is. Uh, I don't know if you can help explain that or not, 'cause it's a
(37:49):
hard thing to explain. You can try.
Florian (37:50):
I think I can. Um, I think, I mean, put, like, you know, highly simplified, right: an attractor is quite simply, if you have a, um, a system with a lot of elements, um, there's obviously the, the space in which that system can be, in terms of all the combinatorics. All the states the different elements can be in, you know, combined,
(38:13):
span a wide space of what the configuration at any-
Matt (38:16):
Did you call it a high
dimensional space?
Florian (38:18):
Yeah. So it's like a high-dimensional space. You have, like, I dunno, let's say, like, a hundred units, and they can be on or off. That's a very big space in terms of what could be represented. Um, but the interesting thing is when you build, put some structure into this, uh, when you connect these nodes and you associate some things, then you can have substructures within
(38:39):
that. And more than that: when you have sort of some number of connected nodes, call them, for simplicity's sake, um, then when you activate just some of them, they might recruit the rest of that, what we call an ensemble.
So sometimes these terms I use interchangeably, like, um, you
(39:01):
know, like, like, an attractor, or an ensemble, or, um, there's a couple more terms that escape me now. But the point is, quite simply, sort of the, the first main important, um, characteristic of, of any attractor is that they can do things like pattern completion. So whenever you get close to the state that is encoded in an
(39:25):
attractor, and it's that particular configuration of activation, then the network activity will be attracted to that configuration. Meaning it's gonna change, through the interactions of the elements, into a configuration that is that attractor state.
Right?
The cool thing about that is that you can have many of those
(39:46):
in a big network. You can embed tens, hundreds, thousands of attractors, and they rely on connections between your nodes, right? These nodes in the brain would be neurons, right? So you would get some kind of a population code, where different configurations of active neurons together represent something.
(40:09):
And then if you activate just a tiny piece, or you get close in the activity space to one of the encoded attractors, then the network would fall into those attractors. Right? So you would, and then in memory research terms, that is, well, I'll give you a cue and you retrieve the memory, right? That's why attractors are useful memory models, because they are
(40:29):
in fact-
Matt (40:29):
You feel something fuzzy, perhaps, and you match to fuzzy objects you've touched, or something like that.
Florian (40:35):
Something like that. Yeah. So, um, that already gets a little bit complicated, because, of course, when there's many fuzzy objects, that might mean at any given point you might fall into different attractors, and then it becomes relevant, well, what biasing elements do you have? So maybe you also, you know, see a certain color, or a certain shape, or something. Context matters then, and then that biases the evolution of the
(40:58):
network, when all these units activate and inactivate each other through their interactions, into a certain direction.
Right?
Um, the little weird thing about, sort of, like, these dynamical attractors is that they don't just fall into a place like the simplest forms of attractor networks, like so-called Hopfield networks, right?
(41:19):
They have a number of embedded attractors, and as soon as they enter that attractor, the network is stable there, so it will not change over time, right. In neural networks, and in the kinds of, um, biophysically detailed spiking networks that I have built, um, attractors dynamically destabilize.
(41:40):
That does not mean that the attractor is deleted from the memory. It's still in the network, in the lasting connections. But neurons will, will tire out, they will wear out, which is just a thing called neural adaptation. Uh, and the, the signaling chemicals that are available for immediate transmission will, will deplete.
(42:01):
That is called synaptic depression. And so you can use those two elements to build a system that can fall into an attractor, stay activated for a while, and then automatically be pushed out of it, because the resources that are necessary for maintaining that item in activity get depleted, which then means the network is free again to wander
(42:22):
around and find some other attractor.
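[Editor's note: the pattern completion Florian describes can be sketched in a few lines. This is a hypothetical minimal Hopfield-style network, not code from the thesis (which uses biophysically detailed spiking units): binary units, Hebbian outer-product weights, and a corrupted cue that falls back into the stored attractor.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny Hopfield-style attractor network: 100 binary (+1/-1) units
# storing a few random patterns via a Hebbian outer-product rule.
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian learning: strengthen connections between co-active units.
weights = np.zeros((n_units, n_units))
for p in patterns:
    weights += np.outer(p, p)
np.fill_diagonal(weights, 0)  # no self-connections

def recall(cue, steps=10):
    """Pattern completion: repeatedly update all units toward
    whatever each unit's neighbors are voting for."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

# Degrade a stored pattern (flip 20% of the units), then let the
# attractor dynamics pull the state back toward the stored one.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print((recovered == patterns[0]).mean())  # fraction of units recovered
```

A plain Hopfield network stays in the attractor forever; the spiking networks in the thesis add neural adaptation and synaptic depression, which would push the state back out so the network can wander to the next attractor.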
Matt (42:24):
Does that relate to short-term memory at all? Like, holding things in your mind, and if you don't think about them for a while, they just go away?
Florian (42:31):
Uh, yes. No. So, uh, so the idea is there's only one thing on your mind at any given moment in time. You can jump between different things, but there's always sort of a foreground of activity, which we obviously relate to from the conscious experience of always attending to something
(42:52):
that can be external, right? But it can also be internal, when you're thinking of your own memories and what you might want to think of. And so it turns out that even though the brain is, like, this massively parallel system, with all these neurons that are active at any given time, you're still playing it like a, like a keyboard. So there's only one, you know, configuration active at any
(43:12):
given point in time, even though it's capable of many different ones. And some of them are built to transition. Like when you're learning sequences, a series of tones: one activates the next, activates the next, etc.
So sequence memory, right. The thing that the HTM neuron model at Numenta really excels at. Um, these, these kinds of memories are also a form of
(43:35):
attractor, because the activity is pulled forward into something. It's just that that attractor is not one fixed point, 'cause that point immediately evolves into something else.
Matt (43:48):
It's initiated by, well, it doesn't, not always, but potentially initiated by sensory input to roll through one of these attractor sequences. And then you're constantly comparing sensory input to the sequence, if you're playing a song, or just singing along with something, or whatever.
Florian (44:05):
Yeah.
The system does that by itself.
Matt (44:07):
So, like, there's constantly attractors going on in your head, from one thing to the next. And this could represent any number of things, right?
Florian (44:15):
This is essentially called the cortical attractor theory of neocortex, or neocortical memory. The idea being that you have these distributed representations, or specific brain states, that actually get encoded, right? So there's strong connections between them, right? And they are competing for activation at any given moment in time.
(44:35):
Some of them are associated with one another, so they can activate each other, go forward or backward. Um, and one of the nice ways to embed these memories into a system like that is to use Hebbian learning rules, right? Or, in my case, it's specific Bayesian, um, uh, Bayesian-statistics-derived Hebbian learning rules.
(44:56):
Which is in some sense optimal, because you're not just computing correlation between two units, but you're also normalizing that by the, by the priors of the pre- and the postsynaptic neuron, which is a bit of a technical detail, I guess. But for those interested in Bayesian models, that might be interesting.
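[Editor's note: the normalization Florian mentions can be illustrated with a toy weight rule. This sketch assumes a BCPNN-style form, where the weight is the log of the joint activation probability divided by the product of the pre- and postsynaptic priors; the actual models use spiking units and running probability traces, not batch counts.]

```python
import numpy as np

# Bayesian-style Hebbian weight:
#     w_ij = log( P(x_i, x_j) / (P(x_i) * P(x_j)) )
# Unlike a raw correlation rule, co-activation is normalized by how
# often each unit fires on its own; epsilon avoids log(0).

def bcpnn_weight(pre, post, eps=1e-4):
    """Weight between one pre- and one postsynaptic unit, estimated
    from binary activity vectors (one entry per observed pattern)."""
    p_i = pre.mean() + eps
    p_j = post.mean() + eps
    p_ij = (pre * post).mean() + eps
    return np.log(p_ij / (p_i * p_j))

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000)            # one unit's activity history
correlated = a.copy()                   # always fires with its partner
independent = rng.integers(0, 2, 1000)  # statistically unrelated unit

print(bcpnn_weight(a, correlated))   # positive: co-active units bind
print(bcpnn_weight(a, independent))  # near zero: no association
```

Note that units which fire *less* often together than chance would get a negative weight, so the same rule yields both excitatory-like and inhibitory-like couplings.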
Matt (45:16):
Um, okay. Let's talk about the binding problem, uh, specifically, when I was reading your third paper in your thesis, about the short-term memory versus long-term memory areas. Um, what, what is this binding problem? It's a big thing in neuroscience. And, and how, how do, how do you think it should be addressed?
Florian (45:38):
Right? Um, so there's a lot of... the, the, one of the problems of, uh, of my work, right, and the, the, the field that I work in, is, um, that it's very, um, it's very interdisciplinary, right? So there's ideas from cognitive science, where the binding
(45:59):
problem comes from, which I'll get to in a second. There is obviously the neuroscientific evidence on all kinds of things and details that constrain models when you build them, and then there's the whole computational side: how do you build these systems in neural network simulators, and how do you put these things together? And as a consequence, my work tries to speak to all of these
(46:20):
different groups, right? I want to build systems that are reasonably defensible, but they also do something useful computationally. Um, and that also, you know, sort of demonstrate what we can do in terms of a brain simulation, but also do something cognitively interesting.
Matt (46:36):
Have some utility.
Florian (46:39):
All right. And so the, the, the, one of the kinds of, uh, of the binding problems is this, what's called role-filling. So I think I use this example in, in that paper, of I'm telling you that the, the, the, the name of my parrot is Charlie. And so, you know, Charlie is normally the name of somebody,
(47:02):
right? You might have a friend called Charlie; it's just a label, right? But if I now tell you, after telling you that the name of my parrot is Charlie, you won't be, like, you won't be surprised if I tell you that, you know, Charlie can fly, and you might even be able to tell me now that Charlie can probably speak, because you've now connected these elements.
(47:23):
And the interesting thing is, you don't have a long-term memory representation of that, because you literally just learned it, and the kinds of long-term memory that we're talking about, with the LTP, that takes at the very least hours to express. Um, so you cannot have a brain structure that represents a
(47:43):
flying parrot called Charlie, but you're still capable of making that association very quickly.
Matt (47:48):
Yeah.
Right away.
I can think about it.
Florian (47:52):
Exactly. So, like, you, you know, you do these memory tasks where you, people just, you flash them a card for, like, a fraction of a second, then they can tell what card did you just see. Well, it's the ace of spades. Okay, cool. Um, so it turns out that associative binding can be very quick, and then you really have to ask: where does that binding take place?
And one of the ways that people used to think about that, in
(48:14):
terms of persistent activity, is that all of these things get activated together. So the name Charlie, right, with or without your friend that has that name.
Matt (48:22):
That could be perhaps in a
language area.
Florian (48:26):
Exactly. They'd all be active together. Yeah. The problem with that is that, um, that is nice if it's one memory or two, but if you now go doing a working memory task and you're holding on to other things, like, we are talking about all kinds of concepts here, and you still know that the name of my parrot is Charlie, right? Um, and so, um, somehow we must be able to buffer these things
(48:50):
even when the activity for them becomes undetectable. There's all these wonderful studies about activity-silent working memory, uh, that have come out in recent years, um, of people who've been looking for the signatures and finding them. So you can read out the content of the working memory, but there's this substantial period, when you distract people,
(49:11):
or when people have to shift their attention to multiple items they have to hold on to, where the entire activity signature of that working memory item gets lost. It cannot be detected. As soon as you provide a cue, it's back again.
It's back again.
Matt (49:24):
Yeah, I know what you
mean.
As a software engineer, I knowwhat it's like to be deep in a
coding project and getinterrupted and come back and
have no idea what I was doing.
But then as soon as I find thatnugget, I'm like, Oh, that's
okay and I'll get right backwhere I was.
It's like your short term memoryis pulling everything from
(49:45):
longterm memory and then puttingit together.
Florian (49:49):
And so the, so the answer clearly must be, at least in my understanding of it, that if the, if the encoding is not in the activity space, even though it's often visible in the recordable activity of neurons, then, if it's going to be silent for some while, the information must go somewhere.
Yeah.
Florian (50:07):
And the only place in the brain where information can go that is not sort of active, you know, in the neural activity, well, it's in the changes in the, in between the neurons, the synapses. Yeah. So that must be very fast, right? Because, again, I've flashed your card and put it down, and you already know what it was. Yeah. It must be very fast buffers. That might be activity-dependent and reasonably fast non-activity-dependent
(50:32):
synaptic changes, and if they are going to be associative, they're likely Hebbian in some form.
Yeah, and the cool thing about those very fast forms of Hebbian plasticity, as a mechanism to solve this binding problem, to put things together, right, is that you don't have to record the full context. You already know what a parrot is.
(50:53):
You're all familiar with that. Charlie is a name; it's not some weird phonetic sequence. Right?
Matt (50:57):
But you need a long connection, right? Somehow, from the short-term memory area.
Florian (51:02):
Yeah. The problem is that the, these representations might be far apart. Right? They might not just be neighboring assemblies that now can immediately click together.
Matt (51:10):
You've got the sound of a parrot in the auditory area versus the visions of parrots.
Florian (51:15):
Exactly. You somehow need to put them together, and the brain can build, you know, like, these powerful associations across neocortex. But again, if it's long-term structures, like your long-term memory, it's going to take time. And if we buy the story of long-term memory consolidation during deep sleep in hippocampus, that may take hundreds of reactivations during deep sleep.
(51:37):
So you need some other mechanisms, while you're awake and behaving, to do, to solve this binding problem. And so the solution that we've come up with is this idea that prefrontal cortex does not actually hold the full content of working memory. All that it needs to do is to hold a temporary short-term index with which those things can be linked.
(51:59):
And I've shown in my work that the connectivity that is required for that can actually be very low. You don't need a lot of synapses, um, particularly when the items that you want to bind together are very strongly encoded. So you already know that geometric shape, and you already know, you know, phonetically, that, that word or that sound, right.
(52:19):
So these are very strong assemblies that will self-complete if you give them a tiny piece of it. So you don't need much information, but you need to direct the bias that makes sure that when you now see the shape, or full name, or whatever, you know, things you are associating, you are capable of bridging, right. Only a few brain areas are situated in a way that they
(52:41):
could do that.
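[Editor's note: the index idea can be caricatured in a few lines. Everything here, the sizes, the one-shot outer-product binding, the thresholded readout, is an illustrative assumption, not the thesis's spiking model: two sparse assemblies in separate "areas" with no direct connection get bound through a small index population, and cueing one retrieves the other.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Two strongly encoded long-term items (think "parrot" and "Charlie")
# live in separate areas; a small "prefrontal" index population gets
# fast one-shot Hebbian links to whichever cells are active in each.
n_item, n_index = 50, 10

item_a = rng.choice([0, 1], n_item, p=[0.8, 0.2])  # sparse assembly A
item_b = rng.choice([0, 1], n_item, p=[0.8, 0.2])  # sparse assembly B
index = np.ones(n_index)  # index cells active during the episode

# One-shot fast Hebbian binding: index cells <-> active item cells.
w_a = np.outer(index, item_a)  # index -> area A
w_b = np.outer(index, item_b)  # index -> area B

def retrieve_via_index(cue_item, w_cue, w_target):
    """Cue one area; its active cells drive the index, which in turn
    biases the target area toward the bound assembly."""
    index_drive = (w_cue @ cue_item > 0).astype(float)
    target_bias = w_target.T @ index_drive
    return (target_bias > 0).astype(int)

# Cueing with item A retrieves the cells of item B through the index,
# even though A and B share no direct connection.
recalled_b = retrieve_via_index(item_a, w_a, w_b)
print(np.array_equal(recalled_b, item_b))
```

The index only has to kickstart the target assembly; in the full story, the target's own attractor dynamics would pattern-complete from even a partial bias.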
Matt (52:42):
So let me restate this and make sure I understand it properly. In the short-term memory area of the prefrontal cortex, all you really need to create an associative, associative links between objects, when you're putting together working memory, is just enough to kickstart the attractors, right? And so just a, just a nugget of it, so that the attractor can be
(53:02):
invoked in whatever other parts of the brain, and then an association between those nuggets, right?
Florian (53:08):
So they become active
together.
Matt (53:11):
And then, if, if in long-term memory, if you have a sensory input that comes through sensory cortex, that can trigger your short-term memory to invoke the whole association. Yeah.
Florian (53:22):
Right. So I do this example, right, where I, uh, have a very brief cue, which activates one long-term memory representation through, sort of, sensory input. Then that thing, uh, activates an index in prefrontal cortex and retrieves an associated long-term memory in a brain area that might not, you know, might be far away, like,
(53:42):
we're talking centimeters away here, right? So where it's very unlikely for there to be direct connections between the things that you're wanting to associate. But we know that the working memory is reasonably universal. You can, of all the things you can have in long-term memory, you can associate almost anything. Yeah. Um, so since there can't be a dedicated population for any conjunction of things you might be able to come up with, because
(54:04):
the permutative complexity is too high, um, you need a flexible system. And so a fast Hebbian mechanism might do just that.
Luckily enough, there's some early findings of such mechanisms. Again, they're hard to observe, because they're so fleeting. Particularly, many of the old experimental paradigms, like,
(54:25):
tried to measure synaptic plasticity all the time by pinging the synapse. They might actually delete that encoded state very quickly, so that you might be blind to the, to the observation you want to make, because, essentially, by looking, you are deleting it. It's a little bit like the, like the example of Schrödinger's cat, right? Right.
Um, like, the cat is alive and dead at the same time, but
(54:47):
you can't look; uh, as soon as you look at it, it's either one or the other. This is a little bit like that, because if you want to observe it, if you test the synapse with a lot of pings to see what its strength is, then you're eroding whatever, you know, its encoded state was. So silence is, like, you know, the way that working memory
(55:10):
might get preserved, not activity.
Matt (55:12):
Interesting. Well, I think, uh, if anybody wants to know more about any of this stuff, uh, your, your thesis is a great place to start. Once again: Active Memory Processing on Multiple Timescales in Simulated Cortical Networks with Hebbian Plasticity. Maybe there's an acronym we can make out of it.
Florian (55:29):
Yeah.
People are fighting over thenomenclature for this so what is
STP short term plasticity, youknow, some people are now
arguing, well, you know, itmight be STSP, um, because you
want to differentiate, you know,different subclasses of that.
And then some people call it[inaudible] longterm
potentiation because it'slasting, but it's fragile.
(55:54):
So there's a zoo of new terms,um, and it's a little bit like
the nomenclature on inhibitoryneurons.
It's like a zoo and there's lotsof different classification
schemes and are just waiting forneuroscientists to agree on
something, but we're not lettingthat stop us.
We're building models anyways.
Matt (56:10):
Oh, you're operating on the, essentially the frontiers of science here, so you're going to have that problem for sure.
Florian (56:16):
Right. But that also makes it exciting, right? I get to read a lot of what is happening in experimental neuroscience. I'm always, of course, hunting for what is the consensus, what is, what are the mechanisms that might be useful, what are the missing pieces, and that makes it very exciting day to day.
Matt (56:32):
Yeah, absolutely. Uh, well, Florian, it was a pleasure. Uh, I should remind watchers or, uh, listeners that if you want to hear more from Florian, you can go to our YouTube channel, the Numenta YouTube channel, where we have live research meetings. Florian's given a couple of those recently, as you mentioned earlier. And, uh, there'll probably be more. I'm sure there will be
(56:53):
more. Yeah. Yeah. So you can get more Florian there. Um, thanks again. Fist bump, it's what we're doing, all right. Thanks for watching, everybody. Thank you.
Thanks again for listening to the Numenta On Intelligence podcast. I am Matt Taylor from Numenta. You can get more at our YouTube channel.
(57:14):
Just search YouTube for Numenta, and also follow us on Twitter at Numenta: N-U-M-E-N-T-A.