Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Professor Terrence Deacon, Professor Michael Levin.
This is a long time coming. I think many of the listeners and viewers of the show have been waiting to see this conversation. Both of you have had such distinguished careers and have massively shaped and influenced developmental and evolutionary biology and cognitive science as a whole. And you have so many overlapping
(00:26):
areas of expertise and common interests.
With that being said, Mike, perhaps I could start with you.
What do you think of Terry's work over the years and which
aspects of his work do you find most fascinating?
And then, Terry, I'll let you answer the same question when
Mike is done.
Yeah, look, I have read your stuff since the '90s: the developmental neuroscience
(00:49):
papers, especially the cross-species neural transplants and things like that. I thought it was incredibly interesting. Since then, I mean, there have been some amazing developments that you've led: Incomplete Nature, and the focus on the origin and the importance of semiosis, which I think is absolutely critical. And, you know, all your ideas
(01:10):
about tearing down the binary, kind of Cartesian dichotomies. I mean, I'm all on board with that, for sure. The flows between form and function. I mean, I could go on and on, but all of this, this merger of deep ideas between philosophy, evolutionary biology, and developmental biology, I find absolutely critical for progress. So, yeah.
(01:31):
So thank you for doing all that.
In many respects, those are exactly the things that are exciting about your work. And in particular, where we overlap very strongly is this recognition of, in a sense, the deep commonalities underlying everything that is biological, including the
(01:54):
informational part. But also something deeper than evo-devo, in the sense that it actually utilizes sort of the self-organizing logic that we find in all kinds of systems, logic that forms really critical pieces of the development of evolution and of cognition. You know, having spent a good
(02:16):
part of my career looking at how brains develop, for example, I've seen that a lot of similarities are involved in the development of, for example, planaria. A lot of your work on planaria shows that same sort of pattern-generation stuff going on. That is not strictly, you know, biological.
(02:37):
It's certainly not designed in any obvious sense. It's a very different kind of logic, and that search for the underlying logic is, I think, where we overlap so strongly. And of course, there's the very fact that you had specific time to work with Dan Dennett, who had a significant influence on me while I was in Boston. He and I have had these
(02:59):
conversations back and forth over the years on different sides of the same issue, oftentimes sort of not agreeing, but recognizing, you know, exactly how these different perspectives inform each other. And I think that's where you and I will probably fit, because we have a common interest but very different backgrounds in terms of the particular topics
(03:19):
we've worked on specifically. Yeah, yeah.
And Dan talked about your work a lot, you know. I mean, what are your guys' thoughts on Dan's work and his legacy in philosophy, and how has it impacted both of you? Why don't you go first, Mike, since you've worked with him directly?
(03:41):
Yeah. For me, there are a few things. To me, he was an incredible example of somebody who wanted to understand things in a fundamental way. There were, you know, the things he said, that
(04:02):
there is no philosophy-free science; there's only, what did he say, science where you smuggle in, you know, philosophical assumptions without examination, right? So that, I think, was very critical. And his commitment both to really clear, concise, rigorous philosophy and to the data, you know, the experimental work that is
(04:23):
necessary to revise your philosophical outlook, is very important. Because I find, you know, in our field, a lot of the time when I give these talks that are partially this kind of philosophical stuff and partially data, some people say: OK, the data are great, the experiments are very good, but don't talk about all this philosophy stuff, you don't need it. Don't talk about it.
(04:44):
You know, and I say: well, why do you think we did those experiments? We wouldn't have done them otherwise. That's the whole point, right? And I think, especially in some of the molecular biology approaches and things like that, people tend to feel, or at least they act as if, they're really touching reality directly, that there is no
(05:05):
philosophy underlying their perspective. They think that the things I say are metaphorical, but that pathways and things like that are real. You know, we've got real things. We've got these pathways. And, you know, I always liked that kind of unflinching fusion of the philosophy and the data.
I would say that's exactly the same for me: that sense that he's a
(05:27):
philosopher who's totally focused on the science, on the technology; it always influenced his thinking. It always was very important. And as a result, you know, particularly in the '80s and '90s, when we spent a lot of time interacting, his inputs were from, you know, computer
(05:50):
science and cognitive science, particularly from the MIT groups. And mine, of course, was from the biology side. And we'd go back and forth on this. I remember he used to have regular meetings at his home, and we'd sit around and argue over these things, and he was always in the middle.
He was always, in a sense, recognizing: well, you know,
(06:11):
listen to each other, that kind of thing.
Just really, really helpful. And yet he was after some of these really basic philosophical questions. You know, the free will question was a big part of his thinking, and the idea of experience, of consciousness. Everybody sort of gave him a hard time for coming up with the
(06:33):
title Consciousness Explained, but, you know, that wasn't what he was about. He was actually about, you know, interrogating the problem: why do we think the way we do? And as a result, even though he and I came to very different conclusions about that, at least during those years, it was just a really constructive
(06:53):
discussion. And I find the same with Michael. So I began, I should say, just to be clear, my career was driven initially by the discovery of a philosopher in the late 1970s. I read the work of Charles Sanders Peirce, mostly by accident. I was thinking I was going to go
(07:15):
on and do, you know, just sort of standard bench science, and the philosophy just sort of kicked me hard on that. He was asking questions at the end of the 19th century and the beginning of the 20th century that nobody was even asking, even 75 to 80 years later. And so when I began to do
(07:35):
my neuroscience and evolutionary work, it was always driven by this sort of deep philosophical question about, you know, the nature of representation, of information, and the processes that generated it.
So in that respect, like Michael, I always had, in many respects, this sort of philosophical grounding that kept
(08:00):
forcing me to ask these questions: now wait a minute. You know, it's not that simple. There's just something more.
Well, beautifully said, guys. I mean, rest in peace to Dan. This conversation has so many overlapping themes between the two of you and your work that it was very difficult to actually decide on what to ask the both of you.
(08:20):
But I managed to set a few cards together. I think let's start off with a deep one: the origin of purpose. Terry, perhaps you could start. At what point do physical systems without brains, let's say, begin to exhibit purpose-like behavior?
Well, I actually approached this in a very different sense,
(08:43):
as I realized that even simple organisms like bacteria were way too complex to actually sort that out. And so I've actually tried to simplify it as much as I could, and I've approached it mostly from a thermodynamics point of view. One of the things that I've never felt comfortable with, and yet it's sort of the standard
(09:05):
philosophical approach, is a list of properties. You know, what's the list of properties that's going to define life? And as we've seen over the course of decades and decades, that list of properties always leads you to not have a definitive answer. And so what I wanted to ask was, you know, what is it about the thermodynamics of life that's
(09:28):
so very unusual? I think it begins partially with this book, What Is Life?, a series of lectures that the physicist Erwin Schrödinger produced in the 1940s. He kicks us off by asking the question, you know, why does life act so differently from other
(09:51):
systems? And, you know, he recognized that there's an informational issue, but in particular he recognizes there's a thermodynamic issue. And I came to believe, partly because of Peirce's work but also because of later work, that in effect the thermodynamic question and the information question were the same question. The problem is that, I think,
(10:15):
really beginning in the early 1980s and on until the present, many people, particularly those approaching this physically, have been satisfied with the far-from-equilibrium story. I describe this as morphodynamic, but a lot of people call it self-organizing processes, as if that's all you have to really cover. Because according to the
(10:37):
Schrödinger perspective, the really interesting question about life was that it's far from equilibrium and keeps itself there, and it produces form rather than destroying form, produces regularities rather than having regularity spontaneously break down as the second law would have suggested. What I came to realize,
(10:57):
really as I was struggling with how brains develop, and how and why language is so unusual compared with communication among other species, is that in effect there's a third transition: we can't just stop and look at how form gets generated. We have to think about processes in which form gets remembered,
(11:20):
gets generated and remembered and repaired, so that in effect, if it gets modified, it gets modified back again to where it was. It's like a memory that gets passed on. And that wasn't answered there. And so I began this book in the mid '90s, and then it didn't get done until, you know, a decade later
(11:41):
that came out to be Incomplete Nature, which was basically about the problem of emergence. First of all, we had to get rid of the magic. It's not magic. It's about, you know, novelty of some kind, and typically it's combinatorial novelty in an interesting way. And what I realized is that what's happened is that we have oversimplified the thermodynamics story, and that we
(12:02):
needed to have a thermodynamics in which self-organizing processes themselves organize each other in such a way that they rebalance and then have something like memory. And so I really saw the origins-of-life question as the origins of interpretation, not the origins of information, because I think we have oversimplified the concept of
(12:23):
information. We've flattened it, so that we don't really deal with the question of evaluation. We don't really deal with the question of reference: how something present can be about something that's not there. But there's an organization, a thermodynamic organization, that makes that possible. And therefore it's not a surprise that some of the
(12:46):
mathematics of thermodynamics and the mathematics of information have a lot in common. Even though they're talking about very different things, they have a lot in common because there are some similar processes. So, to just cut to the end of that story, I realized that the problem that everybody has in biology is with viruses. Are viruses alive, or are they
(13:09):
dead? And I realized that the question about being alive was not the right question. It's the question about this contortion of thermodynamics that allows something to be about something, you know, allows something to make predictions about the future and then to maintain those predictions.
(13:29):
How can a chemical system do that? That question always dogged me. I actually wrote a paper whose title asks, roughly: how does a molecule become about something that it's not? We just can't assume that DNA and RNA are just information. They're about something. They're not just copying themselves. They're about something, and
(13:51):
they're about themselves in an interesting, sort of twisted way. So I realized that the question of viruses was really the crucial question. Viruses are at that boundary where we're not sure we can say they're alive or not. And yet they do all the things that living things do; that is, they evolve, they protect themselves. We can talk about, you know, killing viruses, all those
(14:14):
things we can do, even though we know that it's not quite the same language. So in the last decade of my career, I've been focusing on that question, which I think is a way to ask the question about what information is in all of its various forms, what I would call the semiotic concept of information, and then how it
(14:34):
comes about and how it evolves. That's a long answer to your question.
Yeah, that's fine, Terry. Mike, same question: at what point do physical systems exhibit purpose-like behavior? But also, at this point, you could address whatever you think Terry said is important or valuable, and where you might disagree or agree.
(14:55):
Yeah, not much to argue with there. I agree with almost all of it. So maybe I'll just talk about a slightly different perspective on some of these things that I've been developing. So, what I'm interested in: I think a lot of times dichotomies lead us astray. So trying to, you know... is it alive? Isn't it alive? Is it cognitive?
(15:15):
Isn't it? I think these things land us in pseudo-problems that are probably unresolvable. And so what I'm really interested in are models of scaling and transformation. So, as Terry was saying, I want to know, you know, how do the things that we're interested in, the ability to generate form, to maintain it, to have this memory, all of these things, how do they scale up from the tiniest beginnings?
(15:37):
And so that's one thing that I'm really interested in: the scaling. And the second thing is this idea, which I think biology does really well, of polycomputing. This is something that Josh Bongard and I have been developing: the idea that instead of one objective, correct answer to what this set of physical
(15:57):
events is computing (this function, or this has information about this thing), you consider all the different observers. And so within a living structure, you have many nested observers, right? Both laterally and vertically: you have the system's parts, you have the other systems around it, all of them continuously interpreting each other and what each other are
(16:17):
doing. And they're all hacking each other, right? They're all trying to figure out: where do I end? Where does the outside world begin? What are the things I have control over? What can I affect? What are the most efficient signals I can use to get it to do other things? And so on. And of course, systems also, as you said, have to interpret themselves. That's the most interesting thing about significant agents:
(16:41):
it's not just that others apply a sort of intentional stance to them and see goal-directed behavior, but they themselves are able to close that strange loop and do it for themselves. So this is the sort of thing we've been working on: these stories of scaling, which have many implications for cancer and for other things. But one thing in particular
(17:03):
that I'm really interested in is the concept of mistakes. As you observe chemistry: chemistry doesn't make mistakes. Chemistry just sort of does what chemistry does. But developmental biology can certainly make mistakes, and behavior can make mistakes. And so there's the extent to which you have a system, and this is how I see all of these things, navigating different spaces. So of course you have
(17:24):
metabolic spaces and physiological spaces and transcriptional spaces, and then anatomical morphospace, which is, you know, sort of what I study mostly. And then you have the 3D space of motion, and so on. As systems navigate these spaces, it is very natural to look at that scale, you know, the Rosenblueth, Wiener, and Bigelow scale from passive matter to, you know, sort of
(17:47):
active matter. And then: what are the tools that you can bring to understand the navigation of the system in that space? Is it a random walk? Does it do delayed gratification? Does it have memory and forward planning? You know, what are all the things that it does? And so what we've been
(18:07):
usually deployed across, let's say, the right side of that spectrum, so tools from computational neuroscience and behaviour science and so on, and asking whether they apply to very simple things, and to things in other spaces that are really not easy for us to
(18:28):
sort of visualize. And that's been incredibly enriching, because it turns out that if you're willing to do that, if you're willing to say that this is an empirical project, that we can't just assume, you know, how cognitive different things are, that you have to actually do the experiments, then when you do the experiments, you find out some amazing things even at the very left end of that spectrum.
So, for example, gene regulatory networks: not the cell, not all the stuff that goes on in it, but just the mathematics of a
(18:49):
few nodes linked to each other by, you know, ordinary differential equations. Just that alone can do six different kinds of learning, dynamical-system learning, including Pavlovian conditioning, including habituation and sensitization. They can count to small numbers, things like that. So you get that sort of for free, so to speak, from the math right at the very beginning.
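The habituation claim above can be illustrated with a toy dynamical model. Everything here (node names, rate constants, the specific equations) is invented for illustration; it is not the actual model from any paper Levin mentions. A response node R is driven by a stimulus S, while a slow inhibitor I accumulates with repeated stimulation and damps the drive, so the peak response shrinks pulse by pulse.

```python
# Toy habituation in a minimal ODE "gene network" (hypothetical model).

def simulate(n_pulses=5, dt=0.01, steps_per_window=2000):
    """Apply repeated stimulus pulses; return the peak response per pulse."""
    R = 0.0   # response node
    I = 0.0   # slow inhibitor that accumulates with stimulation
    peaks = []
    for _ in range(n_pulses):
        peak = 0.0
        for step in range(steps_per_window):    # 20 time units per window
            t = step * dt
            S = 1.0 if t < 2.0 else 0.0         # stimulus on for 2 time units
            dR = S / (1.0 + 5.0 * I) - 0.5 * R  # inhibitor damps the drive
            dI = 0.05 * S - 0.005 * I           # builds quickly, decays slowly
            R += dt * dR
            I += dt * dI
            peak = max(peak, R)
        peaks.append(peak)
    return peaks

peaks = simulate()
print(peaks)  # peak response shrinks with each repeated pulse
```

Because the inhibitor decays much more slowly than it builds, each successive pulse meets a larger accumulated inhibition and produces a smaller peak, which is the behavioral signature of habituation, with no weights or evolution anywhere in sight.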
(19:10):
And when we talk about purpose, you can sort of distinguish, you know, purpose versus goals, right? Do you know you have a purpose? I mean, that's kind of a metacognitive sort of thing. But the ability to navigate that space using the tools that we associate with some level of intelligence, I think, goes all the way down. We see it in extremely
(19:30):
minimal systems. We've even done... and, you know, my background is originally computer science, and so I like very, you know, very simple kinds of algorithmic things. And it turns out that even in very simple deterministic things like sorting algorithms and so on, you can find surprises that look like different kinds of problem solving that you have no business expecting from the
(19:50):
algorithm that you thought you wrote.
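A toy sketch of the kind of surprise Levin is gesturing at, invented for illustration and not reproducing any published experiment: a "cell-view" bubble sort in which each value acts like an agent that swaps with its right neighbor when out of order, and some agents are defective and refuse to move. The working agents still sort themselves around the damage.

```python
import random

# Hypothetical "agent-based" bubble sort with frozen (defective) agents.

def agent_bubble_sort(values, frozen, max_sweeps=1000):
    """Each value swaps with its right neighbor when out of order;
    agents in `frozen` refuse to move at all."""
    arr = list(values)
    for _ in range(max_sweeps):
        moved = False
        for i in range(len(arr) - 1):
            a, b = arr[i], arr[i + 1]
            if a > b and a not in frozen and b not in frozen:
                arr[i], arr[i + 1] = b, a   # both agents must cooperate
                moved = True
        if not moved:
            break
    return arr

def mobile_runs(arr, frozen):
    """Split arr into maximal runs of non-frozen agents."""
    runs, cur = [], []
    for v in arr:
        if v in frozen:
            if cur:
                runs.append(cur)
            cur = []
        else:
            cur.append(v)
    if cur:
        runs.append(cur)
    return runs

random.seed(1)
vals = random.sample(range(100), 12)
frozen = {vals[5]}            # one defective agent, stuck mid-array
out = agent_bubble_sort(vals, frozen)
print(out)
print(all(run == sorted(run) for run in mobile_runs(out, frozen)))  # True
```

The defective agent never moves and nothing can pass it, yet every stretch of working agents between defects ends up fully ordered: purely local rules yield partial goal completion around the damage, without any of the agents "knowing" the array-level goal.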
So, yeah, you know, to me these things start very low down, and we just have to do the experiments to see if we can find them.
Yeah. The way I like to think about
that transition is the transition from chemistry to
normative chemistry: chemistry for which there is good
(20:12):
chemistry or bad chemistry, chemistry that works and chemistry that doesn't. The question is, for something to be normative, there has to be a beneficiary.
Something has to benefit, or be harmed, in the process, to do better or do worse. So that's the critical transition for me. So, you know,
(20:33):
in some respects, I'm at the very bottom and the top, because I was very interested in brains, and particularly the uniqueness of human brains and their anatomy and function.
And then it also forced me to the very bottom, to ask the question: well, when can you say it starts?
When is there finally a beneficiary?
(20:54):
When can I now say something actually benefits, as opposed to just being preserved the way a rock is preserved, or generated the way a chemical reaction generates more of something? None of those have beneficiaries, in the sense that a beneficiary is something that, in a sense, does work to maintain itself.
(21:14):
And that's an interesting question. So work becomes part of it; that's why it's necessarily thermodynamics. But it's also why I force myself to think about the simplest possible systems, simpler than a virus, because viruses are parasitic and they're taking advantage of lots of stuff. But basically a virus-like structure has a lot to it that has this kind of... you can think
(21:38):
about it, I don't think about it algorithmically, but certainly in terms of it having something like a self. Whenever we talk about living systems, we talk about self-regeneration, self-repair, self-reproduction, self-correction, you know, all of those things. We've got this notion of self. I've often thought that this is
(22:01):
one of the critical problems in the philosophy of mind: we just sort of assume self. Self is actually a really complicated issue, even though it might be very simple. It's something that, if we just leave it alone, we just assume a beneficiary. We just assume goals and don't talk about how they're
(22:22):
generated, and I think we get into circles.
Yeah, I mean, something very interesting about this concept of beneficiary is that it's also related to valence and reward and punishment. So we can all see that you can reward and punish a paramecium, simple enough. But then you can ask yourself... you
(22:43):
know, if I start backwards and I say to people, OK, can I punish a chemical network? They say, well, absolutely not. And then you show them a paramecium. Well, what's this? Because you can do that here, right? And what we're finding that's really interesting, and this paper will be out, hopefully soon, is that if you have a model of a
(23:06):
pathway or a gene regulatory network (so not the rest of the cell, no evolution, nothing else), just a small set of nodes that are connected so as to turn each other on and off, then in addition to the learning that we find there, you can also find something very interesting, which is that you can take metrics from causal information theory that people are using to apply to brains.
(23:26):
So people like Giulio Tononi and Erik Hoel and others are applying these things to try to understand: am I looking at a pile of neurons, or is there somebody home in there? You know, is there a person, is there a human patient in there, right? So it turns out that not only do some of these networks have significant causal emergence, but that causal emergence goes up after you train them, and you can calculate it.
(23:48):
And so what happens is that for some of them, not for all of them (there are classes; in fact, there are, I think, five distinct personalities that these networks fall into), but for some of them, as you start to train them, and when I say train, I mean you stimulate some of the nodes. So let's say, in a Pavlovian paradigm, you know, you've got your unconditioned stimulus, your conditioned stimulus, and then there's some other node that's your response
(24:08):
node. And so you start to pair the stimulations, and then you see what happens to the response. As you do these things, for some of them the causal emergence goes up. And so you've got this really interesting loop where, by virtue of learning about their environment, in this case being trained (not so much learning, more being trained), they become more integrated as a tiny little self.
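The pairing protocol just described can be caricatured in a few lines of dynamics. This is a deliberately invented toy, not the model from the forthcoming paper: a slow association variable W grows only while the conditioned stimulus (CS) and unconditioned stimulus (US) are co-active, after which the CS alone can drive the response node.

```python
# Toy Pavlovian-style pairing (hypothetical dynamics, for illustration).

def run(pairings=5, dt=0.01, steps_per_trial=500):
    W = 0.0  # association strength, the slow "memory" variable
    def trial(cs, us):
        nonlocal W
        R, peak = 0.0, 0.0
        for _ in range(steps_per_trial):   # 5 time units per trial
            drive = us + W * cs            # US always drives R; CS only via W
            R += dt * (drive - R)          # leaky response node
            W += dt * (0.5 * cs * us)      # grows only on CS+US co-activity
            peak = max(peak, R)
        return peak
    before = trial(cs=1.0, us=0.0)         # CS alone, untrained: no response
    for _ in range(pairings):
        trial(cs=1.0, us=1.0)              # paired CS + US presentations
    after = trial(cs=1.0, us=0.0)          # CS alone, after training
    return before, after

before, after = run()
print(before, after)  # response to the CS alone appears only after pairing
```

The point of the sketch is the loop Levin describes: the training history is stored inside the system's state, so that the same input (CS alone) produces a qualitatively different output before and after the pairing experience.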
(24:31):
They acquire this kind of integration. And for some of these things, we looked at biological networks versus random networks. And it's very interesting: the biological networks are significantly better at this. They do more memories, they do more, you know, causal emergence.
So clearly evolution, I think, likes this feature, but
(24:52):
even the random ones do it a little bit. And so that suggests to me that even before you get into any kind of selection or any kind of evolutionary cycles, just from the math alone you get this little free gift that kick-starts some of this stuff. And then of course you can optimize the heck out of it, you know, as you do these selection cycles.
But right at the beginning, it's just
(25:14):
the math that governs these kinds of networks. And of course, some of that, you know, Stu Kauffman talked about a long time ago, but it's not just the dynamics, it's not just the dynamical system. It actually has both learning ability and this kind of cohesive integration that grows as a result of experience. So I think we can trace, as you said, you know, both very high and very low.
(25:35):
There is this kind of weird cycle where you can take metrics from the kinds of things that, you know, neuroscientists do with human patients and look at how this works with very simple systems.
What I was going to add to this is something we've actually been doing at the other end, that is, the neuroscience. I have a
(25:56):
couple of colleagues at the University of San Francisco, and we're looking at simultaneous EEG and fMRI and asking the question. I mean, my assumption in
all of this is that the dynamics, or what you might want to call computing, or call it semiosis,
(26:19):
but basically the informational processes, are generating the physical, material, energetic processes that make the information processes possible.
That is, there's what Hofstadter would have called a strange loop, in which the dynamics and the semiotic
(26:40):
activity are generating and maintaining the physicality that makes them possible. So you get this loop between, you might say, the ontology and the epistemology of a system, in which each depends on the other.
Well, we thought that must be possible also for how brains work, and that we've been ignoring this in part because we thought,
(27:02):
we assumed, what fMRI was telling us: that when these neurons get information and they're active, they change their metabolism. What we've begun to discover is that, depending on the task, sometimes the metabolism anticipates and precedes the neural activity, and vice versa. And you can look at it in terms
(27:23):
of different parts of the nervous system: in general, sort of these broad distinctions, like a default network and things like that, but also between brainstem, midbrain, and forebrain, and so on and so forth. There's this interesting exchange, where it looks as though for a lot of what we would describe as self-generated activity, intrinsically
(27:45):
generated activity, the metabolism precedes and drives up neural activity, which changes the metabolism, which drives up the neural activity and then shifts metabolism to other parts of the brain. So that, in effect, there is both a leading and a following feature, in which, you might say, the computing, or the semiotic activity, that
(28:09):
is, the signal-generating activity, and its substrate are critically linked. And it turns out they're linked
at a number of levels: not just neurotransmitters, but also things like nitric oxide, and various ionic channels, sort of ionic potentials, that pass through large
(28:29):
areas. All of these areas are linked, so that there's no clear distinction between what you might call the information processing and its physicality. It's all part of the same thing. And that's sort of the other side, the other extreme, I think, of what you've been talking about, Michael.
You guys were talking about
(28:51):
causal emergence. I'm going to do my best to just be a fly on the wall in this conversation, by the way. So if you guys have any questions that you want to ask each other directly, by all means. But let's talk about constraint and causality. Michael, let's start with you. How do constraints shape outcomes more powerfully than initial conditions? For example, is
(29:12):
absence more causal than presence?
Yeah, well. I mean, the first thing is, I'm completely on board with things like absences being causal, functional inputs into everything that happens. So I am on board with that. And I think, you know, Terry will probably say
(29:34):
the most about it. What I would like to say is that, in addition to constraints, I tend to track two other things that are kind of in that class. One is this interesting aspect that was really highlighted by morphological computing, by the way, you know, by Pfeifer and Bongard and folks
(29:56):
like that, where when you have embodiment (and I study embodiment not just in the three-dimensional world, but embodiment in these other spaces, and I think it applies perfectly well to all the other problem spaces that life works in), there are many constraints that guide what's going to happen. But there's also another thing that's kind of amazing: in addition to the constraints, there are also these ways that
(30:19):
you can offload an awful lot of computation onto the environment. And this is what they show when they have these robots and other things: basically, there is no controller that has to handle all the things that have to happen. They're basically letting the environment do the compute for them. And this is something very new in our lab that we're just getting into: asking what does that look like in these other spaces?
(30:40):
What does it look like to offload transcriptional problem
solving, physiological problem solving, and anatomical problem
solving to the rules of these other spaces, which absolutely
provide constraints that will guide subsequent events?
But in addition to the constraints, they're also
helping out. I mean, that's the part
that I think is really wild:
there's actually all kinds of benefits that come from
(31:03):
that, where the environment is actually helping you do
the thing that you want it to do.
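That offloading of compute onto the environment can be caricatured in a toy sketch (purely illustrative; the one-dimensional "terrain" below is invented, not anything from either lab's work): a damped particle comes to rest in a low spot of the landscape without any controller ever computing where that spot is.

```python
import math

def terrain(x):
    # An invented 1-D landscape; nothing below ever computes its minimum.
    return (x - 2.0) ** 2 + 0.5 * math.sin(3 * x)

def settle(x0, steps=20000, dt=0.01, damping=0.9):
    """Damped 'physics': the particle only senses the local slope.

    The resting place emerges from the landscape's shape, not from a
    controller that plans where to go -- the environment does the compute.
    """
    x, v = x0, 0.0
    for _ in range(steps):
        slope = (terrain(x + 1e-5) - terrain(x - 1e-5)) / 2e-5
        v = damping * (v - slope * dt)
        x += v * dt
    return x

rest = settle(0.0)  # comes to rest in a low spot of the terrain
```

The point of the sketch is only that the "answer" lives in the physics of the interaction, not in any explicit computation.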
And the other thing that I track a lot
are patterns that arise not from physical facts.
In other words, there are mathematical structures, there
are laws of computation, laws of geometry, of number theory,
all these things that evolution, I think, uses a lot.
(31:27):
It uses them as, in effect, free lunches, because you
can save an awful lot of evolutionary compute time if you
can just be handed certain truths from, you know,
the laws of mathematics and computation.
And I think those are especially interesting because
many of them are not determined by physical
facts. They wouldn't change if you
shuffled the constants at the Big Bang and things like that.
(31:49):
And so there are kinds of causes that are not the traditional,
you know, why did this happen? Oh, because of this or that
molecule. Well, some of these causes are not
like that at all. They're a completely
different class of things. And what's really critical for
us now, because we try to drive a lot of these
things into very practical programs in regenerative
(32:09):
medicine and in bioengineering and so on, is how to take
advantage of these things.
So, you know, we would like to know, as we look for causes:
beyond physics and history, which are the things
that biologists love as causes, there are these other
things, and being able to exploit those, and exploit that space of
(32:31):
these patterns that can be used both for
our construction and to help us communicate our goals to
cells and tissues in regenerative contexts.
So, yeah, constraints, and various other things
that are like them. There's so much in this question
and we'll go back and forth on it maybe in this
(32:51):
process. But one of the things
that sort of drove my thinking about this originally, and pushed
me back into the theory, was an interest in the Fibonacci
spirals of plants and their precise mathematical relationships.
These wonderful spirals that you see, for example, in the seeds
(33:12):
of a sunflower, and knowing that the genetics was not doing
the math, that the genetics was in effect only really
controlling thresholds at which various regions of the plant
were releasing auxins, these various growth
hormones in the plant. And yet producing these
(33:32):
remarkable spirals that we find everywhere.
And recognizing that one of the advantages of these spirals is
that if the spiral goes up the length of the stem, for example, for
producing leaves, then the leaves are maximally out of each
other's way. If you want to capture sunlight
really well, what you want to have is things that are just,
(33:54):
you know, as tight as they can be, but maximally also
out of each other's way. And it turns out,
it's wonderful that evolution would want this, but it didn't
have to, in a sense, encode it, because it was there in the
geometry itself. And all you have to do, if you
want to, you know, create different kinds of spirals,
(34:15):
different kinds of Fibonacci patterns with tighter or looser
spirals, is change the thresholds at which things
respond. And so at one point in time
somebody said that I should call this the lazy gene hypothesis:
that genes will only control what they have to
control. That in a sense, if they are
(34:37):
over-controlling something, it's going to eventually degenerate
if there's a redundant capacity out there, if that
redundancy is produced by the constraints, in this case, of the
geometry of space. That caught my attention.
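The sunflower example can be sketched with Vogel's classic phyllotaxis model (a standard textbook toy, not the plant's genetics itself): the only "control knob" is a single divergence angle, a threshold-like parameter, and the packing quality falls out of the geometry for free. The `min_spacing` measure below is an invented stand-in for "maximally out of each other's way".

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # about 137.5 degrees

def seed_positions(n, angle=GOLDEN_ANGLE):
    """Vogel's model: seed k sits at radius sqrt(k), rotated k * angle.

    One parameter stands in for the 'threshold' the genetics tunes;
    the spiral pattern itself is supplied by geometry."""
    return [(math.sqrt(k) * math.cos(k * angle),
             math.sqrt(k) * math.sin(k * angle)) for k in range(n)]

def min_spacing(pts):
    """Smallest pairwise distance: how far out of each other's way seeds stay."""
    best = float("inf")
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            best = min(best, math.hypot(pts[i][0] - pts[j][0],
                                        pts[i][1] - pts[j][1]))
    return best
```

With the golden angle the minimum spacing stays comfortably large; a "round" angle like 90 degrees collapses the seeds onto four rays and crowds them, which is the sense in which the geometry, not the controller, is doing the math.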
And in fact, it's now part of recent work that I've been
doing that I call inverse Darwinism, as we begin to look at
(34:59):
evolution. I mean, this was recognized back
as early as 1970: a lot of genetic
evolution is by virtue of duplication, but it's not just
gene duplication, it's duplication at lots of levels.
And one of the things that has to happen for evolution, for
life, is that you have to always overproduce.
(35:20):
You can't just keep up with the second law of thermodynamics.
You've got to go a little bit faster.
So you've got to generate too much.
And that allows you two things that redundancy provides.
You know, in information theory, redundancy is there to correct
errors, but redundancy in biology is to make you error
tolerant. That is, if you have redundancy,
(35:42):
then things can go wrong and you can still keep going.
Well, if you've got a system that always overproduces stuff, that
always makes redundant cells, redundant genes, redundant
proteins and whatever, you've always got a little error
protection there. You've got a little
plasticity, which means you've got exploratory space available
(36:02):
to you. Well, it turns out that in
evolutionary theory, of course, we're thinking only of
the eliminative side of things. But in fact, what we're talking
about is that the generative side of things is not just
generating random variation, it's generating redundancy.
And redundancy has a very, very special role and it plays a
(36:22):
critical role in information theory.
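The error-correcting role of redundancy can be made concrete with the simplest possible scheme, a repetition code with majority voting (an illustrative sketch; nothing biological about the details):

```python
import random

def encode(bits, copies=3):
    # Overproduce: every bit is emitted several times.
    return [b for b in bits for _ in range(copies)]

def corrupt(bits, p, rng):
    # An unreliable medium flips each bit with probability p.
    return [b ^ 1 if rng.random() < p else b for b in bits]

def decode(bits, copies=3):
    # Majority vote per group: redundancy makes the message error-tolerant.
    return [1 if sum(bits[i:i + copies]) * 2 > copies else 0
            for i in range(0, len(bits), copies)]
```

A message sent once through a noisy channel loses bits at the channel's error rate; sent in triplicate and voted on, the loss drops sharply, which is the information-theory sense in which overproduction buys error tolerance.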
Now, this will bring me back to the question of constraint,
because when we look at the original mathematical
theory of communication that Claude
Shannon put together, there are two measures of information.
They both involve what he calls entropy, which is just basically
(36:44):
a way to talk about the possible variety of something, either in
time or in space or whatever. But there's the original entropy
of what could have been produced, and then there's the
entropy of, you know, what results.
So if you're sending a message, there might be a bunch of
messages you could send, but the one that's picked up does not
(37:05):
have all that variety in it.
And so in effect, there's two entropies for Shannon, and
mostly we focus on one of them, which is just what he calls
the entropy of a channel.
But the entropy of a message is the difference between what the
channel can carry and what was actually received.
And that tells you that what's happened is there's been a
(37:26):
constraint imposed upon that information, and that
information was carried on something physical.
But because it's something physical, it's subject to
thermodynamic laws as well. And this is one of the reasons
why the thermodynamic story and the information story are
linked, and they're linked precisely by this concept of
(37:47):
constraint. That is, the received
information is a constraint on the
possible information. In effect, it's the result that
something had to act on the signal, something had to
act on it to constrain its thermodynamics, which then would
constrain its information entropy.
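The two entropies can be put in numbers (a toy example with invented symbol counts): the entropy of what the channel could carry, the entropy of what was actually sent, and the gap between them, which is the constraint.

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy, in bits, of the symbol frequencies in seq."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

# What the channel could carry: eight symbols, all equally likely (3 bits).
channel = [s for s in "abcdefgh" for _ in range(100)]

# A constrained message: only two symbols, used unevenly (about 0.81 bits).
message = ["a"] * 600 + ["b"] * 200

constraint = entropy(channel) - entropy(message)
```

That difference, a bit over two bits per symbol here, is the reduction in variety that something acting on the signal had to impose.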
(38:11):
And that tells you this is how it could be about something,
because the constraint of the channel can now be in a sense
about work that was done on it. Because to shift something to
a less probable state takes work.
And this then helps us understand the whole concept of
work, which is of course necessary for everything
(38:32):
we're talking about here: it's not just the release
of energy, it's the release of energy in a constrained context.
You've got to constrain the energy release.
So you know, even a piston in an automobile engine, where
you constrain the explosion, allows you to push something far
from equilibrium, to push a car uphill, for example.
(38:54):
So constraint is a necessary, but oftentimes ignored or just
sort of assumed, part of work when we talk about biological
systems.
But you can recognize from information theory that it's the
constraints that carry the
information. But constraints are also what
(39:16):
determines the structure of the work that was done, and the
structure of the work can generate new constraints.
So if you can generate new kinds of work with constraints, and
that work can generate new kinds of constraints, you generate new
kinds of work. The whole possibility of
evolution is the result of this interesting relationship between
constraint and the release and the control of energy.
(39:39):
And that means that if you have a system that can both generate
and then remember and transmit constraint, now you have a
system that can evolve. Sticking with that,
let's talk about regeneration and memory.
You'll notice that the first question I asked, well,
this previous one, was more tilted towards your work, Terrence,
(40:01):
but this next one's more towards yours,
Mike. Perhaps, Terry, you could start with the answer regarding
regeneration and memory. When a planaria regrows its head
and retains memories, where do you think those memories are
stored? And what does this tell us about
the body as a memory substrate? It's interesting, because memory
(40:24):
assumes a lot of what we've talked about already, and it
assumes one thing that we don't talk about a lot, but I think
it's very characteristic in planaria, it's characteristic in
how brains develop in multicellular bodies, and that sort of thing.
And that is that you can get the system started
with a highly compressed source of information.
(40:49):
DNA is a compression. And so when we talk about
complexity, one of the questions we want to ask is, you
know, you've got a list of numbers: is that list compressible, in the
sense that there's some redundancy in it?
If it is, you can take advantage of that redundancy in
reconstructing from a compressed version.
(41:11):
And this is where algorithms oftentimes play a critical
role. And in fact, we've analyzed
complexity these days, since Chaitin and Kolmogorov and
others, in terms of, you know, what I would call a
decompression rule: you've
compressed something and now you've got a decompression rule.
The key is, what is the decompression process?
(41:37):
It's not that all the information is stored in the
DNA. What you've got is a highly
compressed system that, if you pair it with a
decompression system, whether it's algorithmic or a set of
constraints that are built into the geometry of the
world, or built into the recursive activity of
(42:00):
some process, then you can take advantage of it. This is that
lazy gene idea that I was talking about: in effect,
whatever you can rely on in your
decompression process, you can now leave out of the compressed
source. And so, you know, this is even
true for something as trivial as, you know, architectural
(42:25):
design. The architect knows that
there's going to be wood and metal and tools, and that the people
putting it together will know how to
decompress this model and do something with it.
The decompression is ignored because he can just assume it's
there. That's the problem: we
tend to ignore the decompression process.
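The pairing of a compressed source with a decompression process can be illustrated with an off-the-shelf compressor (zlib here is just a stand-in; the "blueprint" string is invented): the compressed form is tiny only because the decompressor supplies the reconstruction machinery, which is exactly what can therefore be left out of the source.

```python
import zlib

# A redundant 'blueprint': the kind of regularity compression exploits.
blueprint = ("bone," * 50 + "skin," * 50).encode()

# The compressed representation is small...
compressed = zlib.compress(blueprint)

# ...but only the pair (compressed form, decompression rule) rebuilds it.
# The compressed bytes alone, without zlib's algorithm, rebuild nothing.
rebuilt = zlib.decompress(compressed)
```

The "lazy gene" reading of this: everything the decompressor already knows how to supply never has to be written into the compressed representation at all.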
(42:49):
And so what I like to say, in terms of my favorite
example of this, is, you know: is a frog more complex
than if you've taken the body of the frog, thrown it into a
blender, and blended it up into a mix?
In one sense, the mix is much more complex.
(43:11):
You would have a hard time coming up with a systematic set
of algorithms to figure out where different kinds of
molecules were located. But in the adult frog, it's much
simpler in one sense. So it confuses our concept.
It's simple in the fact that if I know there's a
bone cell here, I know there's likely to be a bone cell next to
(43:33):
it, and likewise if there's a skin cell here.
You don't have that in the frog smoothie.
You basically don't have any predictability.
And yet at that level it's more complex.
But it's a misunderstanding of complexity,
because what's actually happened in the case of the actual frog
is that it's taken a highly compressed representation, that
(43:58):
is, its genome, into a decompression
system: its cells now depend upon other cells, their
position, what they release,
what their electrical and ionic activity is, you know, how
diffusion processes, when they release molecules, will affect
cells close by versus those farther from them, and so on.
(44:20):
All of that's part of the decompression process.
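The frog-versus-smoothie contrast is at bottom a statement about neighbor predictability, which a toy grid makes concrete (the two cell types and the 10-by-20 grid are invented for illustration):

```python
import random

def neighbor_agreement(grid):
    """Fraction of horizontally adjacent pairs with the same cell type.

    High agreement is the intact frog: a bone cell predicts a bone cell
    next door. Low agreement is the smoothie: no predictability."""
    agree = total = 0
    for row in grid:
        for a, b in zip(row, row[1:]):
            agree += (a == b)
            total += 1
    return agree / total

# Intact 'frog': contiguous blocks of tissue.
frog = [["bone"] * 10 + ["skin"] * 10 for _ in range(10)]

# 'Smoothie': exactly the same cells with the spatial structure destroyed.
rng = random.Random(0)
cells = [c for row in frog for c in row]
rng.shuffle(cells)
smoothie = [cells[i * 20:(i + 1) * 20] for i in range(10)]
```

Same parts, same counts; only the predictable arrangement, the part a compressed description can exploit, has been destroyed.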
But the decompression process is not to be confused with the
compressed representation.
I've been interested in this in two ways.
The decompression process is the functional part, but in fact
(44:41):
evolution has to work backwards. It has to figure out
compressions. And I think one of the real
problems we have is that we tend to think of it only in one way.
We look at the function of
something and say, now let me
just sort of break it down into its parts and its function.
But it evolved backwards in some sense.
The compression comes second. So I've always told people that,
(45:05):
you know, this view that DNA or RNA came first is a mistake.
It was a compression of some other dynamical process.
And once we understand the decompression process and how
the two have to, in a sense, evolve together, we realize that
things get better and better at compressing and decompressing.
And that's the complexity issue. But it's a pairing of those two
(45:28):
processes. Yeah, so this
compression-decompression thing is super
important, it's critical, because one of the kind of meta
constraints that biology has is that the substrate is
unreliable. In other words, on a large
scale, you're guaranteed you're going to be mutated.
On the small scale, you don't know how many copies of your
(45:49):
proteins you have. Things are being degraded.
And that's quite
different from all of the
computational architectures we make now, where you work very
hard to have abstraction layers, where if you work in a high
level language, you don't worry about, you
know, the copper of your registers floating off and
so on. And so I think about one of the
implications of that, and we've done a lot of simulations
(46:11):
on this now, of evolving over an unreliable medium.
And so there's two components that I'm
really interested in. One is the unreliable medium, and
the other is the fact that in living systems, you have
problem solvers all the way down.
In other words, there are systems all the way down that
are not just mechanically doing whatever they do; they
(46:31):
actually do have homeodynamic and other problem-solving
capacities.
What that ends up doing: one of the amazing
things about the decompression
side is that, because you've lost information during the
compression, it means that the right side of that bow tie
(46:51):
is, has to be creative. I tend to think that the left
side, the compression, can be algorithmic, but the right side
has to be creative. And in
cognition, it's the same thing.
We don't have access to the past.
What we have access to are the engrams that were made
by previous versions of you in your brain and body.
And then it's your job, every, I don't know, 300 milliseconds or
(47:12):
whatever it is, to continuously interpret those things and build
up an ongoing, labile story of what you are, what you're
doing, what's the outside world, what do your memories mean.
And in doing that, it's basically a semiosis because
these are messages left for you from your past self.
They're just like messages you get laterally from,
(47:32):
you know, from things that are going on at the same time.
And it's this process of building the best story that
you can at any given moment, not necessarily the story you used
to have. And that becomes really
interesting in development, because your allegiance
is not to the fidelity of the
interpretation, meaning, I need to do it exactly the way my
ancestors did. Instead, what I think it's
(47:53):
doing is, I think it's a problem-solving agent that says, here
are the prompts that I've been given:
what is the best thing I can put together right now?
And that results in some amazing plasticity in biology.
I'll just give you a couple of my, my favorite examples.
This Fankhauser work was incredible.
So you've got this newt, and you take a cross section through the
(48:14):
kidney tubules, and you see there's like 8 to 10 cells that
work together and they give you this thing with a lumen in the
middle. And it's this nice tubule.
So you can make these polyploid newts.
If you make polyploid newts, they have, you know, extra
copies of their genetic material.
And as a result, what they do is they make their cells bigger to
accommodate the bigger nucleus. OK, so that's cool, but the newt
stays the same size. So you say, well, how did that
(48:36):
happen? Then you take the cross section,
and you see that actually there are fewer of
these larger cells building the same thing.
If you really go all the way up,
and you make, I think it was something like 6N newts or something
like that, you have one gigantic cell that
bends around itself and leaves a hole in the middle.
Now, that's a different mechanism of cytoskeletal
(48:56):
bending, instead of cell-to-cell communication.
And just think about what it means.
If you're a newt coming into the
world, what can you count on?
I mean, you know, you can't
really know what your
environment's going to be, but you can't even count on
your own parts. You don't know how many copies
of your genetic material you're going to have.
You don't know how big your cells are going to be, how many
cells; as Cooke showed, you can, you know, take away cells,
add cells, it doesn't matter. It'll still rescale.
(49:16):
You have to be able to do the
best you can under whatever
circumstances you find yourself in.
And in bioengineering, we see
this all the time. Life is incredibly
interoperable. You can do these crazy things,
you know, chimeras, and
you can engineer all
kinds of novel materials and scaffolds.
This is one reason I love this: so we make xenobots, we make
anthrobots, which are these human-cell-derived biobots.
(49:37):
They have these
incredible features, because it
will work like hell to get to the species-specific target
morphology; that's the default.
But if it can't do that, it will certainly put together something
else that's coherent. And, you know, these anthrobots:
no change, we don't touch the genome.
(49:57):
So, exactly normal wild-type
genetics. They have 9000 differently
expressed genes compared to their
tissue of origin.
They're still the same kind of thing,
but they have a different lifestyle.
They're these little silly
things that run around and do
interesting things. Half their genome is now
expressed differently.
Xenobots express about 900 genes differently.
They express a cluster for hearing.
(50:18):
And so we've actually tested
that. And it turns out that, unlike frog embryos,
these xenobots can actually respond to sound.
And so they have this new, you know, this new
lifestyle. So I think, with this creative
aspect, the constraint is kind of like a meta
constraint. The constraint is: you can't
count on the past, so you can't take it literally.
You need to take the messages you get from the past, both your
(50:39):
own memories and the genetic, you know, experience of
your ancestry. You take it seriously, but you
don't take it literally. And as a result, because
you can't count on the details
of the past, you have to commit to a creative interpretation,
which makes for incredible plasticity.
It's why these tadpoles that Doug
(51:01):
Blackiston in my lab made, that
have eyes on their tails,
can see out of the box. The eye
doesn't connect to the brain; it connects via the optic nerve,
and the optic nerve connects to the spinal cord.
No problem. No new rounds of selection and
mutation. They can just see out of the box.
So all of this
amazing plasticity is, I think, because they could never depend
on doing it the same way. Maybe C. elegans can, I don't
(51:23):
know, maybe nematodes or something.
But most things are going to find
themselves in all sorts of novelty, and they never assume it
was going to be the same. They have to re-solve the problem
each time. And we
actually see this in the evolutionary simulations
that we do, because what happens is, as you start simulating this
(51:43):
kind of unreliable and yet competent material, what happens
is you have some mutations, but the material tends to fix them.
And then selection can't see the genome as well, because
the phenotype doesn't show evidence
that you fixed it. And so when selection
can't tell whether your genome was great or actually your
(52:04):
competence was good, all the work of that evolutionary cycle
starts being put into making the algorithm better.
And the more you do that well, the harder it is to see the
genome. And so you've got this feedback
loop, a positive feedback
loop, where over and over again, what you're getting really good
at is doing something coherent, even if the starting material
and the starting information is different, degraded, you know,
(52:25):
unreadable, whatever. And then that's how you
end up with things like planaria, that have a ridiculously noisy,
you know, genome that's mixoploid, where every cell has
a different number of chromosomes.
And yet they're the ones
that are this incredible
cancer-resistant, non-aging, highly regenerative animal, even
though the genome is basically junk.
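The feedback loop described here, where a competent material masks the genome from selection, can be caricatured in a few lines (a toy model with invented numbers, not the lab's actual simulations): the larger the repair capacity of the "tissue", the less the phenotype's error tells selection about the underlying genome.

```python
import random

TARGET = 1.0  # the species-specific target morphology, as a single number

def developed_error(genome, capacity, rng):
    """Develop over an unreliable substrate, then apply competent repair.

    The repair step removes up to `capacity` of the raw error, whatever
    its source, before selection ever gets a look."""
    raw_error = abs(genome + rng.gauss(0.0, 0.2) - TARGET)
    return max(0.0, raw_error - capacity)

def genome_visibility(capacity, trials=2000, seed=1):
    """Correlation between genomic error and phenotypic error.

    This is how well selection, which only sees phenotypes, can still
    'see' the genome underneath the competent material."""
    rng = random.Random(seed)
    genomes = [rng.uniform(0.0, 2.0) for _ in range(trials)]
    g = [abs(x - TARGET) for x in genomes]
    p = [developed_error(x, capacity, rng) for x in genomes]
    mg, mp = sum(g) / trials, sum(p) / trials
    cov = sum((a - mg) * (b - mp) for a, b in zip(g, p)) / trials
    sg = (sum((a - mg) ** 2 for a in g) / trials) ** 0.5
    sp = (sum((b - mp) ** 2 for b in p) / trials) ** 0.5
    return cov / (sg * sp)
```

With zero repair capacity, the genome shows through clearly; with strong repair, most phenotypes land near the target regardless, and selection largely loses its view of the genome.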
(52:47):
So yeah, I love the interpretation of
that aspect of it, you know, about how to
interpret that information. Right.
So I like to think about this also as looking at the
hypothesis-generation side of things: in fact, what's
going on is that you use the same old information, but in
(53:08):
a different context. You generate a hypothesis.
The hypothesis already starts with bias.
It's like this process we call Bayesian, but the
hypotheses are actually quite sophisticated in biology, because
you've already got a system that evolved to be competent in
multiple ways, that evolved to be able to interpret
(53:31):
under lots of diverse conditions.
I used to tell my students, when we're talking about development
of the nervous system, that nervous systems always, in a
sense, adapt to the bodies they find themselves in during
development. You add an extra eye, it adapts
to it. You move the eyes.
And you know, in those of us who are mammals, the eyes are
(53:55):
sometimes on the side of the head, sometimes towards the front of
the head. It turns out that the way
that system develops in the visual cortex, in effect,
is an adaptive system. It's taking information, using
that information to guide how it's wired up, adapting to the
fact that no two individuals will have eyes in the same
(54:19):
relative position. And so the system couldn't work,
it couldn't evolve, if it had to be fine-tuned by virtue of
a kind of deterministic wiring.
It has to be adaptive in order to work, for precisely these
reasons. But that's, in a sense,
(54:40):
the decompression side of things; the decompression side
of things has to itself be a kind of evolutionary process,
of hypothesis generation.
Right, in testing hypotheses, yeah. Something
that we wanted, that we recently did,
and so, this is a preprint that will go up,
I think this week: we made some xenobots and we stuck some
(55:02):
neural cells in it, to ask: what is the structure of a
nervous system that doesn't have, you know, a
history of selection for a specific purpose?
So they're finding themselves in a completely new context.
They're sitting in excess cells, you
know, that shouldn't be there.
What are the nervous systems going to look
like? And these things are incredibly plastic.
And because the material has this
(55:25):
ability to interpret and reinterpret the information you
get, you get these really interesting competitions between
interpretations. So one of my favorite examples
is in the early embryo, one of the things that tells the eyes
where they should be: there's a particular
bioelectrical pattern in the nascent face that has these
spots that say, this is where the eye goes.
(55:47):
So if you inject one of a number of ion channels
into some other location, and you set up a similar bioelectrical
pattern, those cells will get the message and they'll make an eye.
they'll make an eye. But what's really interesting is
The thing is that there's actually a competition, because
if you section those eyes, what you find is that
there's a few cells that we injected, and then they
(56:09):
secondarily recruited a bunch of their neighbors to participate
in making that eye. We didn't touch them.
You know, the cells we did inject broadcast:
there needs to be an eye here. And then
many cells sort of cooperate. But sometimes you do that and
sometimes you get no eye at all, because what's happening is
there's a competing interpretation,
which is a cancer-suppression mechanism, where the surrounding
(56:31):
cells are saying: your voltage is wrong, and we're
going to equalize you out to us. They make gap junctions and they
try to equalize. So you get this battle.
So these cells are saying, no,
we're gut, and these other cells are saying, no, you should
be eye. And it's the degree to which we
are good or not good at providing convincing
interpretations via these voltage gradients that we can
(56:52):
specify a successful eye, something that you can even see.
Sometimes you can use animals that are transgenic for
early eye genes, like Rx1 or Pax6 or something.
And so you inject your RNA and you see there's like 8
ectopic spots. You say, oh, we're going to have
a tadpole with eight eyes. Well, what you find out is that
a bunch of them get winked out by their neighbors, and
(57:12):
you don't get those. And then, if you, you know,
if you do it right, some of them will.
And so that battle of interpretations, I think, is
also really interesting, because that material is not
passive matter. It can support things like this,
like a debate between different
anatomical, you know, world views about what are
we building here, you know. Terry, anything you'd like to
(57:34):
add to that? So much, but I'm not
sure I want to keep going on this; there's
so many pieces. One of the things
that, you know, when I
think about cognition, there's
a piece that I think
Michael was involved in, in
terms of talking about, you
know, getting the biology back
(57:57):
into the notion of cognition. I have often thought that the
way that we misunderstand brains is that we think of them
in too much of a computational-like sense.
And I think that brains are actually doing what embryos do,
that brains are doing what evolution does, that we
(58:19):
want to look at a thought process as, in a sense, what's
sometimes described as microgenesis.
That is, in a sense you're differentiating something, as
opposed to activating it or inactivating it.
And the differentiation process, we experience it, you know, as
new ideas come to fruition, or when we're sitting
(58:41):
here talking and a new idea pops up, we differentiate it.
It starts out pretty undifferentiated.
And one of my approaches to language is to think about, in
effect, you know, what's the sentence before I've produced
it? Well, it's undifferentiated.
It's not that I go about and say, OK, there's a bunch of rules,
and I want to pick out a bunch of, you know, rules and
(59:04):
positions and words. No, the words don't come first.
The words are at the very end of the process.
The sequence is the very end of the process.
And it's flexible. Thank goodness that I can, you
know, move things around and it still works.
In effect, we think about cognition too much like we think
about reasoning, you know, in a formal sense, when in fact,
(59:27):
you know, biology uses the same strategy wherever we look at it.
So one of the ways I like to study different functional
regions of the brain is to, first of all, look at how that
region developed its circuitry. Now, a lot of it's Hebbian, in
the sense that there's a statistical logic to it.
(59:47):
But actually, if you look at how
different regions utilize the
information that they're
getting, the over-connection
that they received and so on,
and the differences in connections
from different areas and how
they settle it: neurons don't
change their strategy when they
become mature.
They're doing the same thing at all levels.
(01:00:08):
So, you know, whether
it's a nervous system just
getting started and starting to
wire itself up, or the mature result,
the best way, from my perspective,
to look at what a particular
region of the brain is doing,
what this class of neurons is
doing, is to ask the question:
how did they get that way in the
first place? You know, what developmental
(01:00:28):
process was each neuron involved in?
How did it amplify and eliminate certain connection patterns?
How did it, you know, adapt to the diversity of time frames
that are coming in? All of those things in
development determine how its circuit is developed.
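The Hebbian "statistical logic" mentioned above can be sketched with Oja's rule, a textbook normalized form of Hebbian learning (the three-input setup is invented for illustration): connections carrying correlated activity get amplified, and the uncorrelated one is effectively eliminated.

```python
import random

def hebbian_wiring(steps=8000, lr=0.005, seed=0):
    """Oja's rule on three input channels.

    Channels 0 and 1 always fire together; channel 2 fires independently.
    Nothing but the input statistics decides which connections survive."""
    rng = random.Random(seed)
    w = [0.3, 0.3, 0.3]
    for _ in range(steps):
        shared = rng.gauss(0.0, 1.0)
        x = [shared, shared, rng.gauss(0.0, 1.0)]      # input activity
        y = sum(wi * xi for wi, xi in zip(w, x))        # neuron output
        # Hebbian growth (lr * y * x) with Oja's normalizing decay term.
        w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

weights = hebbian_wiring()  # correlated channels amplified, the odd one pruned
```

No wiring diagram is specified anywhere; the amplification and elimination of connections is settled entirely by the statistics of the activity coming in.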
But I think that's what's going on, you know, in you and me right
now, that each of these is sort of, you might say, a micro
(01:00:52):
embryology, a micro-evolution, each thought that we produce.
But then, what we were talking about in terms of this
sort of compression and decompression: creating memories
is the compression process. We, you know, we strengthen
certain synapses and weaken certain others, in order that
(01:01:15):
when we throw information back into the system, when we put
metabolism back into the system, it will bias the dynamical
pattern. That's the result.
But what's going on is that the interpretation, or you might say
the decompression, is going on dynamically.
(01:01:35):
So I think of it very much like I think about the evolution of
compressed signals in evolution, the compressing of them into DNA.
Creating what's sometimes been called the engram is
basically this highly compressed difference in synaptic strengths
over a network. That's very, very different from how we do
(01:01:58):
computing. And yet it's of course
accomplishing something similar. But it's doing it, in a sense, by
biologizing cognition. And so I know Michael likes
to talk about cognition all the way down.
I like to talk about biologizing all the way up.
(01:02:20):
Mike, anything you want to say about that?
Yeah, I'm on board, it makes sense. These are kind of symmetrical views that we have here.
Terry, is there any part of Mike's work that, when you first read it, you found particularly different from your own, that you questioned and that you'd like to perhaps ask
(01:02:40):
him about?
Wow, good.
That's a good question, because some of the earlier work troubled me, particularly with planaria. Planaria are a very unusual kind of animal because of their weird genetics, but also because of all of these things that we see in many plants, where a part of a plant
(01:03:00):
can generate the whole plant. We don't see that very often in animals; mostly simple animals, you know, hydra and stuff like that, have these capacities. But as we get to more complex animals, it becomes harder and harder, except for parts, you know, parts of salamanders, parts of some frogs and so on, that can do this.
(01:03:21):
And of course even some of our parts, but minimally, and they've become minimal. But initially I was worried, particularly with the talk of electrical and ionic systems, that it sounded a little bit like what I would call preformationism. It initially sounded like something from a man whose work I never liked, a guy named
(01:03:45):
Rupert Sheldrake. He had this idea that there was a field for a final map, that, you know, if something disappeared, there was an electrical map of how things were supposed to go. At first, when I read this stuff, I was troubled by that, and I thought, no, preformationism is not what we're talking about here. But in a sense, reading
(01:04:08):
further, and seeing how this stuff has developed in Michael's work, it's very clear that it came back towards this sort of decompression, reinterpretation logic: it was not a preformed map; there was not an electrical preformed map here. I, of course, in working with the development of the nervous system, initially did not have
(01:04:29):
any of the ionic information to deal with in studying it, because I was studying, you know, the nervous systems of rats and mice and monkeys and eventually people. And so there I was looking at sort of macroscopic features having to do with, you know, how axons find their targets and things like that, mostly chemical and material
(01:04:49):
processes. And from that it was very clear that, although things found their targets, interpreted signals and in a sense developed complicated three-dimensional structures, there was not an initial map. There was this compressed system of biases that set up another higher-order physical
(01:05:13):
process that was interpreting those biases and decompressing them. But in hindsight, when I
look back at this early work with planaria, I realized there wasn't quite this map. And I particularly like the later stuff you've done, Michael, where you've cut them in interesting ways and produced, you know, sort of star-headed planaria and two-
(01:05:37):
headed planaria, where you can see that in fact it's not a preformed map but a complicated interpretive process, and you can bias the interpretation in interesting ways by how you do it. So in this respect I was wary of the early work, and I've recognized that it converges really well with how I
(01:05:59):
like to think about these processes. But I'm particularly curious about the relationship of planaria and hydra to plants, and how they generate the whole from a part.
Yeah, that's a very good point. So let me talk about this business a little bit.
(01:06:20):
And I'll just preface it by saying that it is absolutely an interpretation and decoding process. I'm not saying that there is a preformed map that sits there as such. However, if you zoom into the center of that bow tie, if you take a slice of it, there are time periods where the bioelectric pattern acts as
(01:06:41):
if it was a map. And I'll just give you a couple of simple examples. And it's not just planaria; we've done this in frog, and we're now doing this in Misa. This is, you know, I think, a general thing.
So let me just describe what we were looking for. What I was looking for is
(01:07:01):
a physical encoding of a set point for this homeostatic process. So the thing about planaria, and, you know, maybe sometimes I think this of all development; like, even in mammals, each of us can regenerate an entire body from one cell, right? That's a kind of regenerative process. You're down to one cell and you can sort of re-inflate that back into the body, right? And we know that all of
(01:07:24):
these systems will work like hell to get to where they're going, even if you perturb them. You can cut them into pieces, you can move things around. We can make Picasso frogs, where everything's in the wrong place, and they'll still kind of go back to where they need to go. What I was looking for is the encoding of the set point. Not magic, but nevertheless, any homeostatic system has to have it somewhere: you
(01:07:44):
have to store, you have to have a memory of, what it is that you're reducing error against, right? It's some sort of error-minimization scheme, and I was looking for its set point. So it turns out that both in planaria head-tail decisions and in the frog face, if you look very early, before all the genes come on that are going to regionalize the face and all that, there's a bioelectric pattern.
(01:08:05):
So let's just talk about the frog face for a second. There's a bioelectrical pattern which is readily decodable, and there are many other patterns that we are working hard to decode, some of which are very hard. But the frog face one looks like a face. You can tell exactly: here's where the eyes are going to go, here's where the mouth is going to go, here's where the placodes out to the side are going to be. And sure enough, if you
(01:08:26):
move any of those voltage states, not the cells, the voltage states, and we can do that without the genetics, with other tools; if you move any of those things, the gene expression follows and the anatomy follows. And in the planarian, it's the same thing.
If you cut one, then you ask: how does this thing know how many heads it's supposed to have? There's an actual voltage pattern that says one head and one tail. And I mean, now we know that
(01:08:46):
that's what it says. And if you artificially change it to say two heads, sure enough, you get a worm with two heads. And, parenthetically, if you then start chopping those worms, you will forevermore get two-headed worms, even though the genetics are still completely wild type. So, for the duration of that homeostatic process that
(01:09:06):
says, what am I supposed to be reducing error against, there is in fact a pattern that has a very important feature.
The feature is that, as with any good homeostat, I don't have to worry about the hardware of how it does it. If I understand the encoding of the set point, I just change the set point, and then I take my hands off the wheel and the system autonomously builds to
(01:09:28):
that set point. So for that period of time, I think you are looking at a map, not some eternal, you know, static thing that sits there, but the set point that drives error minimization.
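The control logic being described, change the stored set point and let the same error-reducing machinery do the rest, can be sketched as a toy homeostat. Everything here (the function name, the gain, the numbers) is an illustrative assumption, not anything from the Levin lab's actual models:

```python
# Toy homeostat: a controller reduces error against a stored set point.
# Changing only the set point redirects the outcome; the "hardware"
# (the update rule) is never touched. Purely illustrative.

def run_homeostat(set_point: float, state: float = 0.0,
                  gain: float = 0.5, steps: int = 50) -> float:
    """Iteratively shrink the error between the state and the set point."""
    for _ in range(steps):
        error = set_point - state
        state += gain * error  # same rule regardless of the goal
    return state

one_head = run_homeostat(set_point=1.0)   # converges near 1.0
two_heads = run_homeostat(set_point=2.0)  # same machinery, new target
print(round(one_head, 3), round(two_heads, 3))  # prints: 1.0 2.0
```

The point of the sketch is only that the goal lives in one editable value, separate from the mechanism that pursues it.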
Now that pattern has to emerge from
(01:09:49):
prior states, and then it eventually disappears. So it is not a permanent thing that just sits there; it's sort of that center of the bow tie. Previous events have given rise to it: at this moment we're building the face; later we're going to build some other thing. It gives you the set point that guides the behavior. And that set point is really convenient. It's convenient because, if you
(01:10:11):
try to stick with the standard story of development that we were all taught, it's that development is feed-forward, an open-loop complexity-emergence process: there's a bunch of local rules, everything goes according to the local rules, and eventually, voila, you know, emergence, and you get this complicated thing.
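Purely as an illustration of that open-loop picture (this example is mine, not from either speaker's work), an elementary cellular automaton shows local rules producing global complexity with no set point to return to:

```python
# Elementary cellular automaton (Rule 110): each cell updates from its
# immediate neighborhood only. Complex structure emerges, but the process
# is feed-forward; there is no stored goal and no error correction.

RULE = 110  # the 8 output bits of the local update table

def step(cells: list[int]) -> list[int]:
    """Apply the Rule 110 update to every cell (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Perturb a row and the automaton simply continues from the new state; nothing pulls it back, which is exactly the property the next remarks contrast with development.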
And yes, that can happen. And we know, like with cellular
(01:10:33):
automata and other things, that you can get complex things from simple rules. But that's not what this process is actually doing. If you start to interfere with it, you will see that it actually works really hard. It's not an open-loop process; it tries really hard to get to a particular outcome. So the thing that's really hard about the standard story is that if you wanted to do, let's say, regenerative medicine, and you
(01:10:54):
wanted to say, this is the wrong outcome, I want a different outcome; or, like what Terry was just saying about evolution, going backwards, right, and figuring out how to encode it; that becomes really hard if all you have is this feed-forward, because that inverse problem is not solvable
in general. But if you have a component that is a self-organizing pattern, self-
(01:11:15):
organizing because of some math and also because of the kinds of ion channels that evolution has provided us with, so it's an excitable medium, and then of course also, you know, Turing patterns of chemical signals and biomechanics and everything else, then the math will give us a set point that, for a period of time, will allow us to get to a particular point. And it serves as a really tractable
(01:11:36):
control knob for when you want to make two-headed worms or fix birth defects. So one of the things we work out now is very simple stimuli that repair very complex defects. For example, Notch mutations,
right? If you mutate Notch, the brain is just completely wrecked. But the first thing that gets wrecked is the bioelectrical pattern that dictates the shape and size of
(01:11:57):
the brain. And if you force the correct pattern, you can have perfectly normal tadpoles, with behavior indistinguishable from controls; they have learning indistinguishable from controls, even though that dominant, overactive Notch mutant is still there, right? And so, for a period of time, you get access to that set point, which is very convenient. But of course it's not just,
(01:12:18):
you know, sitting there the whole time. It comes and it goes, and very much it's this inflation and deflation process. And what you're looking at is that bow tie, that snapshot of that process.
Yeah, I like that,
because that's the differentiation logic. You start from, you know, I'm thinking about the development of vertebrate embryos.
(01:12:40):
You know, you start with a head-tail, dorsal-ventral distinction. Then, once that's set, you can partition it into segments. Once those segments are set, you can take segment A and re-segment it, cut it in a different way, whether in gene expression, in terms of
(01:13:03):
polarities of things, and so on and so forth. It allows differentiation. Differentiation always involves this sort of sequence of steps, like you say, these centers of the bow tie, where basically there's a point at which, now that this has been accomplished, I can move on to the next differentiation and shut down the old system.
(01:13:26):
And once that's set and I've got all of these divisions, dorsal-ventral, for example, I can now do something else. But it's a differentiation process that begins undifferentiated and then becomes progressively, step by step, differentiated.
And each of those is a decision, like you're talking about,
(01:13:46):
sort of a node of decision: now I can do this because that's been accomplished. But it's also the case that if you mess up one of the early ones, the later ones have to compensate for what was modified in the first. And as a result, because the differentiation process, as you're describing it, is sort of like a
(01:14:08):
telescoping process that amplifies something, it can amplify something in a very different direction as well.
And I think that's one of the other things we're finding interesting about all of these processes, because this is the generative side. This is why I think it's been so exciting in evolutionary theory to begin to focus on the
(01:14:29):
generative side of things. It was originally described as evo-devo, but I think it's a much more subtle distinction than that. Evolution is taking advantage both of the compression-decompression side of things and of how differentiation can sort of be built upon prior differentiation. I can start from, like, a single
(01:14:50):
cell and develop a body. That's why I like to say that in thinking, you know: what's a thought before I've differentiated it? I sort of know where it's going, but it's not done yet. Like this sentence I'm producing right now: I sort of know where it's going to go. You sort of know where it's
(01:15:11):
going to go, but it's undifferentiated, and the words just sort of pop in at the right time.
Thank goodness.
Mike, is there anything, any part of Terry's work, that you found particularly fascinating but also disagreed with and think you could work on and build upon?
I didn't
(01:15:32):
disagree with anything.
The only thing, so I've been sitting here trying to remember a good example of it. The only thing, I guess, is that somewhere, and I don't remember where it was, you had talked about teleonomy as a kind of watered-down version of teleology,
(01:15:52):
if I'm remembering that correctly. So the only thing I can say to that: I take teleology very, very seriously. I think it's absolutely a real thing. I use teleonomy slightly differently. I don't use it as a way to sort of placate the folks who don't like actual teleology. I'm full-on teleology, I think.
(01:16:14):
I think what's perhaps useful, and I don't know if this is one of these cases where I should use a different word or whether we should rehabilitate the existing word, but what I like about teleonomy is that it reminds us that what we observe is from the perspective of an observer. So when people say it's apparent teleology, I don't mean it's a lesser version, or not real, or
(01:16:36):
some kind of, you know, not the real thing. What I mean, which I think is important, is that it reminds us of this kind of intentional-stance idea: it's not that it's not real, it's that you have to be an observer who's capable of seeing it. And finding teleology in things is an IQ test for all of us, because sometimes you don't see
(01:16:57):
it. It's not because it's not there; it's because we didn't know how to look, or what space we were looking at, or what the system was doing.
And again, the system of a good agent will have its own perspective; it doesn't need our perspective. But nevertheless, I tend to think of a lot of these cognitive terms that people use as basically interaction protocols.
(01:17:21):
What we're really saying is not that there's some objective, unique perspective and we should fight about which one is right. What we're saying is: this is the formalism that I claim is going to be applicable to the system. Maybe it's some sort of high-level planning, or maybe it's simple, you know, homeostasis, or something in between. And then I'm going to
(01:17:42):
interact with that system using this as a guide, and we'll all find out how well I did. And if you have a different estimate, we'll find out how well you did. So that's kind of the only thing that I found to, you know, even quibble about. It's just that I think teleonomy has another use besides trying to water down teleology: to remind us of the observer-relativity of some of these
(01:18:02):
things.
That's an interesting way of putting it, because my work has actually gone in a different direction in that respect. So, thinking about the history of the term teleonomy: of course it is an observer perspective. And the ideas, you know, go back to Bigelow and Wiener and that group, and
(01:18:25):
Pittendrigh, when he comes up with the term, basically said, you know, we don't know. And at the time it was in a sense an even stronger claim: that there is no such thing as real teleology, that it's just sort of feedback and deviation-minimization processes. And rather than make a commitment to some metaphysical
(01:18:50):
concept like teleology, the idea was, you know, to come up with a description. Teleonomy describes processes that are end-directed, accomplishing the same end from different origins, you know, all the ways that people used it.
Ernst Mayr did an interesting job of saying, but, you know,
(01:19:12):
there are different kinds of end-directedness in nature that maybe we should distinguish. So he came up with the idea of teleomatic processes, and he said the second law of thermodynamics is a teleomatic process in the sense that it goes towards an end; it has end-directedness, it looks like end-directedness, but it's not trying to get there.
(01:19:35):
Things just tend to go in that direction. Teleonomic processes, he said, are again descriptive.
And this is why your perspectival way of thinking about it is appropriate: from the outside, looking at something, it seems to be going in a direction, but it's a different kind of direction. He said this is a direction that's error-minimizing
(01:19:58):
or deviation-minimizing, and we can see all that. But he was not willing to make a commitment that there are some things that are not just deviation-minimizing, that use deviation minimization but also have goals and ends. And the question is, when we
(01:20:21):
design a system like a thermostat or a heat-seeking missile or something like that, the end is designed into the mechanism. But of course, with life, we're dealing with systems that create their own ends, design their own ends, design their own thermostats, you know, sort of
(01:20:42):
keeping my body warm. My body's processes are very homeostatic, in the sense that it's constructed like a really complicated thermostat system. You could even describe it, not accurately but roughly, mechanistically in the trivial sense. But the fact that my body
(01:21:05):
has evolved to have that mechanism, to set that value, is like me building the thermostat to have a certain behavior, to have a certain end. The question is, where does that
come from? And so this is one of the reasons why I think we are really forced to go back and talk about information being about things, because teleology or
(01:21:31):
teleonomy or teleomatic, all of those refer to ends. But the question is, how do ends get established? How do they come into the world? That's been the problem I've been struggling with. And the answer is that you can no longer do this from the observer perspective. You want to ask the question: what kind of a system generates
(01:21:53):
them? What's the architecture of a system that generates its own ends? And so that's been how I've approached it.
And as a result, in a sense, as I said early on, I've been trying to do this thermodynamically, talking about how you move from systems that have the kind of end, the teleomatic end, that the second law of thermodynamics suggests;
(01:22:17):
then the kind of stable dynamical end that, say, Bénard convection or various self-organizing processes have; and then the kind of end in which the end is not just a pattern but a representation, in which there
(01:22:37):
can be a representation of that end, for example in a compression like DNA or whatever else. That's what evolution is about. Evolution is about generating that.
But in that case, what I always found troubling about the teleonomic perspective is that it always stayed at the observer
(01:22:58):
perspective: I'm outside of the system and I describe its behavior, but I'm not actually explaining how it generates its behavior. And so I basically want to approach this the other way around: ask what kind of a system has these behaviors and generates something that I would feel comfortable saying is
(01:23:18):
intrinsically about something, where that something is something that is not yet, that could happen, that might happen, that might be the result of repairing damage and so on. So that's basically how I've approached it, and one of the reasons why I had a problem with the term teleonomy over the years, yeah.
Yeah.
No, I agree with all of that, because the interesting
(01:23:40):
systems have their own inner perspective on the goals that they have, which is not the same as what we see from third-person observation. So a few of us, Josh Bongard and I and Chris Fields and some other people, have been working on very minimal systems to see what it takes to evolve exactly what you just
(01:24:02):
said: observers whose various components are about something, you know, that have that reference.
And one of the things that we're finding, and this gets to the distinction that you just made, is that there are goals you give a system when you create the thermostat, and then there are sort of intrinsic goals that
(01:24:22):
evolve, and things like that. One of the things we're finding, and there's not time to go into all the details, but I'll send you some of this stuff afterwards, is that there are very minimal systems, in fact deterministic minimal systems, where there is no magic, in the sense that there is
(01:24:42):
nothing hidden. You know, in biology there are always some mechanisms you just haven't found yet; you can always say, well, there must be something you just haven't found. But these things we can simulate on the computer, so we know exactly what all the ingredients are.
And what we see is that there are the things it's doing because you made it do that, because the algorithm says to do that. But then there are also these
(01:25:05):
weird side quests that the thing is doing that are nowhere in the algorithm. And some of these additional intrinsic, weird things are what you would immediately recognize as competencies, things like delayed gratification and things like that. But others are just
(01:25:25):
new goals that the thing observably pursues that are not specified in the algorithm at all.
So yeah, I'm starting to think that that ability to have its own goals, goals that are not nailed down by the mechanisms or the algorithms or the materials or all the stuff that we're used to, is really pretty baked into the world.
(01:25:47):
And I mean, I think evolution does an amazing job at scaling that up to the point where it becomes obvious to us and everybody can see that this is what this thing is doing, or even to the point where it can self-report, like us. At this point, that's my view; in fact, Josh and I are writing a paper called It Doesn't Take Much, which is basically all about these very basic,
(01:26:08):
very minimal systems that are already doing things that nobody told them to do. So yeah, I think it permeates the world that we live in.
Well, we've got a hard cut-off soon.
That's right, we've got about four minutes left.
I just want to say this has been amazing. It's been wonderful to be a fly on the wall watching
(01:26:31):
the two of you. I'm surprised it's taken both of you so long to have a conversation. Are there any final words from the two of you? We've only got about three minutes left, so a quick one, please.
Terry?
Wow, there's so much to deal with. I would say that one of the challenges, and maybe it's a difference in the way we approach this, and partly
(01:26:53):
maybe the result of Michael sort of coming from the computational world and me coming from this sort of Peircean, semiotic realm. But I'm really interested in this notion of self, and in this notion I've tried to capture with what I call the teleodynamic: a dynamic that actually has a represented
(01:27:15):
end. By represented, I mean literally there is, in a sense, a compression that will tell you whether you've achieved it or not. And that is not externally generated; and that, I think, requires self. And so I would say that the distinction between teleonomy and teleology in that respect is that teleology
(01:27:38):
is semiotic: it has a notion of representation. And the problem for philosophy is that these are the most troubling concepts that we can think of, you know, purpose and directedness, self, sentience, benefit, cost. These are things
(01:27:59):
that are just terrible for philosophical discussion. But in effect, we have now sort of crossed into domains where we're beginning to have tools to ask those questions, something we haven't talked about.
I think we've got close
to the hard cut-off.
OK, but the last thing I want to say is that we've never talked about it at
(01:28:20):
all, but I think one of the things that's also bringing up these questions is the changes in so-called artificial intelligence; they're forcing us to re-ask these questions.
We'll touch on all of that. We'll try to have a round two soon. Mike, any final words from your side?
Sorry.
Yeah, just to say thank you. To me, there's nothing more fascinating than this symmetry between the self-
(01:28:40):
construction of minds and the self-construction of bodies. I think Turing already sort of saw that when he wrote that paper on, you know, chemical organization; that these were two sides of the same coin. And yeah, I'm delighted that we're working on it from different angles, and happy to discuss again. Thanks so much, and thanks.
Thanks, Michael. And thanks, it was great. Really appreciate having you both on.
(01:29:02):
But yeah, Michael, we know you have a lecture to give, so I think you've got to go teach.
Yeah, OK. Thanks so much.