Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
What is intelligence? And if we look hard, might we
find it in very weird places, not just in brains,
but in all kinds of structures in our universe? And
how would we even recognize it? And what does any
of this have to do with a dog born without
front legs learning how to walk bipedally, or making new
(00:28):
little organisms out of single cells, or how Wikipedia might
be like an axolotl, and why we are so blind
to the vast variety of minds that might surround us.
Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist
and an author at Stanford, and in these episodes we
(00:51):
sail deeply into our three pound universe to understand some
of the most surprising aspects of the world around us.
(01:15):
Today's episode is about intelligence, not in the way that
I've talked about in earlier episodes, about how the structure
of the human brain gives rise to intelligence and how
we can measure whether AI is intelligent. Today's episode is way
beyond that. Today I'm going to talk with one of
my most brilliant and creative colleagues, Michael Levin, about how
(01:37):
we might find intelligence all around us in ways that
we don't typically intuit. So let's start at the beginning. What is intelligence? It's a word that we usually reserve
for something abstract and cerebral, something associated with problem solving
and planning and passing IQ tests. We tend to picture
(01:58):
intelligence as a property of brains, and especially big human brains.
We're generally willing to grant some intelligence to dolphins and
chimps and clever birds like ravens, but it's hard to
know how to think about so many other things happening
in the world. For example, my skin cells heal a wound.
(02:21):
Is that intelligence or is that just biochemical cascades? A plant grows towards sunlight. Is that intelligent? A worm gets its head cut off and it regrows it. That's amazing, but we
don't tend to call that cognition. But what if we've
been looking at the whole notion of intelligence too narrowly.
(02:41):
What if intelligence isn't just about neurons and genes, but
it's about goals, and specifically, it's about the ability of
a system to navigate towards an objective, to adapt to
its circumstances, to make decisions.
Speaker 2 (02:57):
Along the way.
Speaker 1 (02:58):
That's a broader definition of intelligence. And if we apply it,
suddenly intelligence doesn't just belong to creatures with brains. It
becomes something that shows up in places we didn't expect.
Think about really simple creatures like a tadpole. Its millions
of cells collaborate and communicate and organize into an eye
(03:21):
and a spine and a heart without anybody orchestrating the
whole thing. There's no central planning, it's just a kind
of emergent intelligence at work. Or think about a flatworm
that can be cut into pieces and each piece regenerates
a complete, properly shaped body. How does each fragment know
(03:41):
what's missing? Where exactly is that information stored? What is
guiding the process? And as we ask these questions, that
leads us to ask how we can learn to talk
to these systems in the language that they understand, like
voltage gradients or bioelectric circuits or chemical signals. Can we
start reprogramming the goals of tissues? Can we tell a
(04:04):
clump of cells to build something new? And can we
use this kind of knowledge to regenerate organs, or repair
birth defects or.
Speaker 2 (04:15):
Create entirely new forms of life.
Speaker 1 (04:17):
These are the kinds of questions that today's guest has
spent his career exploring and his work leads us to
the conclusion that we're probably surrounded by minds, almost all
of which we don't recognize. Minds are pervasive, and they're
not restricted to brains, but spread across all kinds of
levels of organization, from single cells to societies. My guest
(04:45):
is Michael Levin. He's a distinguished professor of developmental and
Synthetic biology at Tufts University, and I've had him on
the podcast before because he's really one of my favorite
thinkers in the field. He's massively creative and always pulling
off amazing results at the frontier where biology meets information
theory or computation or philosophy, and as you're going to see,
(05:07):
his work always challenges our deepest intuitions about agency and
memory and selfhood. So you've heard of SETI, the Search
for Extraterrestrial Intelligence. Recently, Levin proposed SUTI, the search for
unconventional terrestrial intelligence. As we're about to hear, his position
is that right here on Earth, there are already aliens
(05:31):
among us that stretch and sometimes break our typical ways
of thinking about minds. So if you've ever wondered where
intelligence begins, how far it reaches, or whether you might
share more in common with blobs of cells than you think, this episode is for you. Here's my interview with Mike Levin. So, Mike,
(05:56):
let's start by telling us how you define intelligence.
Speaker 3 (06:01):
Okay, what I use is a definition that helps us
move forward in the lab. I do not claim that
it's the only definition or that it captures everything there
is to capture about intelligence. But I like William James's definition,
which is some degree of the ability to reach the
same goal by different means. So it's some level of
ingenuity to get your goals met when things change. That
(06:21):
doesn't capture play. It doesn't necessarily capture creativity, things other
than problem solving.
Speaker 2 (06:25):
But this is what we focus on experimentally.
Speaker 1 (06:27):
So typically when we think about intelligence, we think about
brains and nervous systems. But you think it doesn't even
necessarily require that.
Speaker 3 (06:35):
Correct, Because if you're looking at it in this way,
that it's basically a functional capacity to navigate some kind
of problem space and meet specific goals under changing circumstances.
There are apparently a wide range of architectures that can
do this, and in order to see that what you
need to do is to relax some really constraining assumptions
that we often have about the problem space that we're
(06:56):
working in.
Speaker 1 (06:56):
And so you often describe intelligence as scale free. So
just give us a sense what you mean by that.
Speaker 3 (07:02):
Yeah, I mean that, you know, as humans, because of our own evolutionary firmware, we are so
obsessed with three dimensional space and moving around in three
dimensional space, to the point where if people see some
sort of AI that isn't rolling around in some sort
of robotic body, they're going to say, this is not embodied, right,
because they're expecting a body. People are expecting a body
that moves through three dimensional space. But actually the biology,
(07:23):
for example, has been solving problems and navigating all kinds
of spaces that are hard for us to visualize. So
the space of gene expression states, the space of physiological states,
the space of anatomical possible outcomes, and so on, and
so in order to understand how we navigate those spaces,
you have to think in other scales. Some of these
things happen very slowly, some of these things happen incredibly quickly.
Some of these things are very small, some of them
(07:45):
are very large, and we are just you know, focused
on medium sized objects moving at medium speeds. But you know,
I'm not claiming it's entirely scale free, but I'm saying
that there are very deep scale invariant principles that operate
at many different scales besides the ones we're used to.
Speaker 1 (08:01):
So how have you gone about looking for intelligence at
other scales?
Speaker 3 (08:06):
So one thing that you can do, once you've bought into the fact that we can't assume how intelligent anything is or what kind of intelligence it has but have to do experiments, is this: you have to posit some sort of problem space that the system is working in. You have to posit some sort of goal. These are all hypotheses about some
(08:27):
sort of goal that it's trying to reach. And then what you have to do is perturbational experiments to prevent it from reaching its goal. And then you see how smart the system is in getting around whatever you did to it. So you.
Speaker 1 (08:38):
Knock it off track and then you see how it
comes back, or if it comes back.
Speaker 3 (08:42):
Knock it off track is a good one. Add barriers of some sort in whatever space you're working in, it doesn't have to be a physical barrier, but whatever space you're working in, add a barrier, or in fact manipulate.
Speaker 2 (08:51):
The system itself.
Speaker 3 (08:52):
So change the system, right. It's not all about the environment, so it can be about the system itself. And so we've done this now in many different contexts. Here are
a couple of favorites of ours. The biggest one that
we do most of our work in is morphogenesis. So
we all make a journey from a single cell to whatever.
You know, we're going to be a human, a giraffe, plant, whatever,
And that journey, as it turns out, as a matter
(09:12):
of experimental fact, it turns out that journey is not
a kind of open loop, the way, you know, people model the emergence of complexity: lots of simple things
happening again and again, and ultimately some sort of complex
event happens.
Speaker 2 (09:24):
That isn't how it works. It's very context sensitive.
Speaker 3 (09:27):
If you try to prevent, let's say, embryos or regenerating limbs, or, you know, metamorphosis, any of these processes, from reaching their goal, they often have extremely ingenious ways to get there anyway, okay. And you can quantify this, and you can say, what are the kinds of perturbations that it's able to deal with? Does it have delayed gratification? Does
(09:48):
it have planning? Does it have a representation of counterfactual states? Does it have learning and memory? Does it store, you know, a map of its environment? You can test all of these things empirically.
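To make the logic of that perturbational assay concrete, here is a minimal Python sketch. It is not code from Levin's lab, just an illustration of the recipe he describes: posit a goal, perturb the starting conditions, and score how often a system running a simple local rule still reaches the goal. The one-dimensional state space, the function names, and the error-correcting rule are all invented for illustration.

```python
import random

def navigate(start, goal, step_rule, max_steps=200):
    # Run one agent defined only by its local step rule; success means it
    # ends up within tolerance of the goal, wherever it was started.
    state = start
    for _ in range(max_steps):
        if abs(state - goal) < 0.5:
            return True
        state = step_rule(state, goal)
    return False

def competence(step_rule, goal=10.0, perturbations=(0, 5, 15, 30)):
    # A crude score in the cybernetic sense: over a set of perturbed starting
    # points, what fraction of runs still reach the same goal?
    starts = [goal + p for p in perturbations] + [goal - p for p in perturbations]
    return sum(navigate(s, goal, step_rule) for s in starts) / len(starts)

# A noisy error-correcting rule: move part of the way toward the goal each step.
rule = lambda state, goal: state + 0.2 * (goal - state) + random.gauss(0, 0.3)
print(competence(rule))  # close to 1.0: robust to these perturbations
```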
Speaker 1 (09:59):
So I mean in terms of let's say an embryo developing,
what we think traditionally, in textbooks, is that the genetics somehow gives a blueprint and the whole thing just unpacks. But you're asking how intelligent the system is: if we knock it off track or put barriers in the way, how does it figure out how to come together correctly? So, what's a specific example of something you've done here? Here
(10:22):
are some of my favorites. These first two are not my work. This is like classic, classic work in the field.
Speaker 3 (10:27):
So imagine cutting a cross section through the kidney tubule of a newt. Normally what you'd find is like eight to ten cells, and they work together to build this tube-like structure. So what you can do is you can make polyploid newts that have multiple copies of their chromosomes, which means their cells have to get bigger. Those newts are still the
(10:47):
same correct size. So that's the first interesting thing. Wow, well, the cells get bigger, the thing scales down. How does it do it? By using fewer but bigger cells to make the exact same structure.
Speaker 2 (10:57):
So that's an adjustment, right, never mind the environment.
Speaker 3 (11:00):
Your own parts are changing, and this thing is figuring
out how to get to the exact same goal, the
same newt, same shape, same size, fewer of these bigger cells.
Speaker 2 (11:06):
So let me ask you a quick question.
Speaker 1 (11:08):
Is this analogous to the fact that a mouse's heart
and an elephant's heart are doing the same thing, but
they're made of a completely different number of cells. It's
a massive heart in an elephant, very tiny mouse, but
it's doing the same function.
Speaker 3 (11:24):
It's similar, but there's one major difference. Both in a mouse and in an elephant, what people
will say is, well, both of those have had long,
long history of being what they are, and so this
is just mechanical.
Speaker 2 (11:35):
It just does what it does.
Speaker 3 (11:36):
My example is different because you've done something to this
newt that it does
Speaker 2 (11:40):
Not normally do.
Speaker 3 (11:41):
You've given it a completely novel circumstance, and then it
adjusts in a new way. And the craziest thing happens
when you make the cells really gigantic. Okay, these are, like, I think six or eight copies of the chromosomes in these newts, so massive polyploidy. What happens is the cells are so big
there's no room for more than one cell.
Speaker 2 (11:58):
One cell will wrap around itself and give you
the lumen of the tubule in the middle.
Speaker 3 (12:03):
Now, this is crazy, because that's a different molecular mechanism. That's cytoskeletal bending; before, it was cell-to-cell communication and tubulogenesis. So think about what this means. If you're a newt coming
into the world. Never mind the environment. You don't really
know what your environment's going to be. You don't know
how many copies of your chromosomes you're going to have,
you don't know how big your cells are going to be.
You don't know which of your many genetic affordances you
(12:25):
can use. Right, you have different molecular mechanisms you can use.
You have to figure out what to do in a
novel circumstance and still get the job done. I mean
this sounds like every IQ test you've ever heard of
when people show you, here's a little box, some tacks and a candle, and I want you to, you know, solve this particular problem. Right? Yeah, you have genetic affordances,
and then that morphogenetic process is not just doing the
(12:45):
same thing every single time.
Speaker 2 (12:47):
You have to solve these problems. That's one of my
favorite examples.
Speaker 3 (12:50):
Another one that we discovered is tadpoles become frogs, and
in order to do that, they have to rearrange their
face, because a tadpole's face actually looks quite different from a frog's, and, you know, so the eyes, the jaws, everything has to move.
Speaker 2 (13:00):
What people thought was that this is a hardwired process.
Speaker 3 (13:02):
Basically, somehow the genetics just tells every organ how far
to move and in what direction, and then you get from a
normal tadpole to a normal frog.
Speaker 2 (13:09):
So we decided to test that.
Speaker 3 (13:10):
Because you can't assume these things, you have to test
and so what we created was something called Picasso tadpoles,
so we basically scrambled all the initial positions, so the mouth is off to the side, the eyes are on the back of the head, like everything is completely scrambled. And what
we find is that they make pretty normal frogs because
all of these things will move in novel paths to
(13:31):
reach the correct goal, and then they stop. Actually, sometimes
they go too far and they have to double back
and stop. This is another example. You start them off
in the wrong position, they don't just blindly go the
same distance. They actually go until they meet their goal.
And you know, it's a goal.
Speaker 2 (13:42):
And when I say goal, I don't mean it's a human.
Speaker 3 (13:44):
Level, like I know what my goal is. That's a
kind of metacognition that I'm not claiming here. I'm saying
it's in the cybernetic sense, like your thermostat has goals.
It's a set point, and now how clever are you
to be able to reach that set point when things change?
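As a minimal sketch of that cybernetic sense of a goal, here is a thermostat-style set point with proportional error correction, written in Python. The numbers and disturbances are arbitrary assumptions; the point is only that the same target is approached from different starting points and under changing conditions.

```python
def thermostat(reading, set_point=21.0, gain=0.5):
    # A goal in the cybernetic sense: a set point plus error-driven correction.
    # Returns how much to heat (+) or cool (-) on this step.
    return gain * (set_point - reading)

temp = 14.0                                    # start well below the set point
for step in range(30):
    disturbance = -0.3 if step < 15 else 0.4   # heat leak, then sun through the window
    temp += thermostat(temp) + disturbance     # same goal pursued under changing conditions
print(round(temp, 1))                          # ends within about a degree of the set point
```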
Speaker 1 (13:58):
As an interesting analogy, what's going on at the level
of brain plasticity. We tend to think that, let's say,
a dog's brain is pre wired to drive a dog's body.
But one of the examples that I talked about in
my book Live Wired was this dog who was born
without front legs, and so she just walks bipedally and
(14:18):
she moves all around and.
Speaker 2 (14:20):
Gets by that way.
Speaker 1 (14:21):
Why, because she needed to get to her dog
bowl and her water and other dogs and so on,
and so she just figured out. It turns out it's
not that hard for a dog to walk on back legs.
And the question is could all dogs walk on their
back legs? Presumably, but they don't have the proper motivation
to do so. But the point is that the dog's
(14:43):
body is very flexible.
Speaker 2 (14:44):
It meets the goals of the world.
Speaker 1 (14:46):
Another analogy is the world's best archer, as in he's
got the world record for the longest accurate shot in archery, a guy named Matt Stutsman, who happens to have no arms. He got interested in archery and figured
out how to pull the bow with his legs, and
so he shoots with his legs and became a great
(15:07):
archer that way.
Speaker 2 (15:09):
Amazing, amazing. Yeah, yeah, the plasticity is incredible.
Speaker 3 (15:12):
And you know, the earliest example that I know of of this hind-leg thing is Slijper's goat, which was, I think it was in the forties. This guy Slijper published a study of a goat who, again, born without forelegs, learned to walk on its hind legs. When they dissected the goat, they found out that a lot of the adjustments that you need for bipedal locomotion, right, so things about the hips,
(15:34):
you know, the spine, all kinds of stuff, were all there, right, as opposed to what you normally think of for the evolution of modern humans, you know, the many hundreds of thousands of years you'd need for that. And this is what's really interesting about this
plasticity is that you can project it into other spaces.
So, as you pointed out, you know, can a
dog brain run an upright body?
Speaker 2 (15:52):
Right?
Speaker 3 (15:53):
Now?
Speaker 2 (15:53):
Look at individual cells?
Speaker 3 (15:55):
Can the same genome run a completely different anatomy and
set of behaviors? And this is what we've shown, I mean, other people have shown other examples of this, but for example, in our lab, xenobots and anthrobots, right, these living constructs that
have a completely different body than what they normally do.
They have a different behavioral repertoire, no genetic change, same
gene regulatory networks are running a completely different body.
Speaker 1 (16:17):
For the listenership, can you define anthrobots and xenobots which
you've built?
Speaker 2 (16:20):
Sure, let's start with the xenobots.
Speaker 3 (16:22):
So in the case of xenobots, what our team did, and this is in collaboration with Josh Bongard's lab at the University of Vermont, and this is Doug Blackiston in my group, and Sam Kriegman did a lot of
the computational work for it. What happens is that when
you liberate some epithelial cells from an early frog embryo,
normally what they do is they form this like two
dimensional outer covering of an embryo and the outer skin layer,
(16:43):
and they do that because they're induced to do that by.
Speaker 2 (16:46):
The other cells.
Speaker 3 (16:47):
Well, if you get them away from the other cells,
you sort of liberate them, and then find out what they really want to do on their own and what they do.
Speaker 2 (16:52):
They could do many things.
Speaker 3 (16:53):
They could crawl away from each other, they could die,
they could make a flat layer like in cell culture. What they actually do is they form this little ball with cilia that are on the outside. These are little moving hairs, and they organize them so that the thing can swim, and it starts swimming around.
Speaker 2 (17:07):
It has all sorts of interesting behaviors.
Speaker 3 (17:08):
A couple of years ago we showed that they do kinematic self-replication, which is that if you sprinkle a bunch of loose skin cells in their environment, they will collect them into little balls, and guess what, those become the next generation of xenobots. Right, so they can
do this weird kinematic replication that, as far as we know,
no other creature does. They express hundreds of genes differently
than they do within the embryo. No genetic change. By
(17:29):
the way, we're not adding anything. There are no scaffolds, no synthetic circuits. But they use their transcriptional affordances differently. They turn on hundreds of new genes, and among other things, it turns out they're sensitive to
acoustic vibrations.
Speaker 2 (17:43):
That's the latest thing that just came out a month ago.
Speaker 3 (17:45):
Is that we found they were turning on a bunch of genes related to hearing.
Speaker 2 (17:48):
And we said, is it possible that these things can hear?
Speaker 3 (17:50):
And so Vaibhav Pai in my group put a speaker under them
and showed that, yeah, there's actually sounds you can send
them that they will respond to.
Speaker 2 (17:57):
So that's xenobots.
Speaker 3 (17:58):
Anthrobots are a similar story because when we first did it,
some people said, well, you know, they're embryonic cells and
amphibia are plastic. Maybe that's why this is like a
frog embryology thing. You know, this is specific to Xenopus.
So I said, okay, what's the furthest you can get
from an embryonic frog.
Speaker 2 (18:13):
Well, that would be an adult human.
Speaker 3 (18:14):
And so we went and took tracheal epithelial cells from adult human patients, and we showed, and this is Gizem Gumuskaya's work, a PhD student in my group,
Speaker 2 (18:23):
Who developed a protocol whereby.
Speaker 3 (18:26):
Again simply by taking the cells out of their normal context,
you get to release the various possible outcomes that they can produce, and they make anthrobots. It's a little round thing that zips around. It has a couple of interesting properties. First of all, it can heal neural wounds. So if you plate a dish of human neurons and
you put a big scratch through it with a scalpel,
(18:47):
they will. When they find the scratch, a bunch of them settle down. We call it a superbot cluster. They settle down, and within about four days, if you lift them up, you see what they did meanwhile: they healed across the gap. Okay, who would have thought that your tracheal epithelial cells, which sit there quietly dealing with, you know, mucus and air particles, have the ability to heal neurons. And these
(19:09):
guys express about nine thousand genes differently, right, so almost half the genome they express differently than they do in the body. They're, by the way, younger than the patient, than the cells that they came from. So actually that process of becoming an anthrobot rolls back the epigenetic clock.
Speaker 2 (19:27):
So they're a bit younger.
Speaker 3 (19:29):
These are fascinating, you know, behaviors, and all of this is run by that standard controller. So that's kind of my point: there's amazing plasticity in the brain and nervous system, but this goes all the way down. This is not just for, you know, fancy brains.
Speaker 1 (19:57):
So we think about this as problems solved by the system.
And what's interesting, let's just come back for a second
to the dog or the goat without forelimbs. We generally assume, okay, look, if you're born with the typical
structure of the animal, then you just develop in this way.
But otherwise there's a lot of deep problem solving that
(20:18):
has to go on. But I know that you think
about it as, Hey, maybe the system is always problem solving.
Speaker 2 (20:24):
Maybe it's problem solving no matter what, whether you have front legs or not. It's just figuring out what to do to get to the goals. Yeah, I think that's right.
Speaker 3 (20:34):
And, you know, in the last couple of years we've really emphasized this and started to develop this idea that, you know, you can think about it as beginner's mind, basically, the reason that all these incredible plasticities exist. You know, when Doug Blackiston, years ago in our lab, made many tadpoles with eyes on their tails, these guys could see. They were
Speaker 2 (20:55):
Not connected to the brain.
Speaker 3 (20:56):
They make an optic nerve that connects sometimes to the
spinal cord, sometimes to the gut, sometimes nowhere. They can see, and they can learn visual tasks. Why does that work out of the box? Why don't you need, you know, new rounds of selection and mutation, you know, basically adaptation?
Speaker 2 (21:09):
All of these things.
Speaker 3 (21:10):
These plasticities work, I think, because the system never expected everything to
be in the right place to begin with. It has
to solve the problem from scratch every single time. And
that goes back to the idea that biology is fundamentally
dealing with an unreliable medium. Think about the way we
build computers today. So we have error correcting codes, we
have abstraction layers.
Speaker 2 (21:29):
Right.
Speaker 3 (21:30):
The reason that, you know, our microchips can't scale down easily is because you don't want the data interfering with each other. Right, when you get to that atomic limit, you know, the memory, the bits that are in there, are starting to, you know, interact with each other, and you don't want that.
Speaker 2 (21:44):
All of our current
Speaker 3 (21:46):
Computer technology is built around the fidelity of the data.
And that's because the interpreter of that data is us,
the user. You know, the computer has no issues.
It doesn't need to interpret the data. We interpret the data,
so all the computer has to do is keep the
data still. Biology is exactly the opposite. First of all,
you have no hope of keeping anything still in biology.
You have no idea. Never mind your environment, but you're
(22:06):
going to mutate. As a lineage, you're going to mutate.
Speaker 2 (22:08):
You can't count on your parts.
Speaker 3 (22:09):
You can't count on, you know, knowing how many copies
of any protein you're going to have. Things degrade, the environment,
you know, the internal milieu, plus or minus, you know, whatever, homeostasis.
But things are always changing. So I think this is what biology really cranks on, and we've done computational simulations showing how this happens: the minute you have this kind of problem-solving material, I call it an agential material because
(22:31):
it's not just a computational material, it's actually an agential material. And the minute you have that material, evolution starts to hide information from selection, because you're not looking at the genome. You're looking at what's going on after you've solved the problem using whatever
Speaker 2 (22:44):
Tools the genome has given you.
Speaker 3 (22:45):
And that means that evolution starts to spend a lot
of its time cranking on that problem solving capacity.
Speaker 2 (22:50):
It spends you know, less of its time on the.
Speaker 3 (22:54):
On the hardwired mechanisms, and more of its time on
that creative, confabulatory problem solving.
Speaker 2 (22:59):
So I see all of these.
Speaker 3 (23:00):
Things, you know, behavioral memories, genetic memories, meaning you know
your genome, of your lineage. All of these things are
basically messages. They're messages from your past self. They're prompts,
but at any given moment it's up to you how
you're going to interpret them. And the biological material has
eons of pressure to learn to tell good stories with
whatever it's given, whatever information it's given.
Speaker 2 (23:22):
And that's morphogenesis, behavior, and so on.
Speaker 1 (23:26):
Yeah, you know, as an analogy, this is exactly the
argument that I made in Live Wired, is that the
genes do not specify a blueprint for making the brain. Instead, they're just specifying how to build this problem-solving organism.
And as you know, you know, children can get a
hemispherectomy, which means half of their brain is removed.
Speaker 2 (23:47):
For example, if they.
Speaker 1 (23:47):
Have an epilepsy that affects an entire half of the brain,
So the surgeon removes half the brain and the kids
grow up to be just fine because the other half
that remains takes over the missing functions.
Speaker 3 (24:00):
So it's actually I wanted to ask you about that.
I want to see what your take on it is.
So we reviewed recently, I've got this paper coming, where I reviewed these cases where people have massive amounts of
brain missing, like the part that's left is incredibly small.
So most of them, of course have very reduced function.
But the interesting cases, and there are some amazing cases
where it's a massive reduction on both sides, right, so
(24:22):
it's not a hemisphere, I mean, and yet they have
normal or in some cases above normal intelligence. What do
you think is going on in these, you know, fairly unique but still-to-be-explained cases? What's going on there?
Speaker 2 (24:32):
Yeah?
Speaker 1 (24:33):
So this is the, uh, this is the magic of livewiring. I think one of the cases used in that paper was people with hydrocephalus, which means you get this buildup of pressure in the ventricles, these fluid-filled spaces in the brain, and the whole brain gets squished up against the sides of the skull.
So when you look at it on MRI, it looks
like it's essentially empty space and the little bit of
(24:56):
brain is squished up against it. What that demonstrates is
exactly what you and I both love, which is how flexible this material is. Because, you know, you can't run over half your laptop and expect it to still function, but you can squish this stuff any way you want, and it just
figures out how to accomplish the goals, in this case,
(25:17):
the cognitive and movement goals that it needs to do.
There's one case in the medical literature of this guy who was forty years old, and he went to
the doctor because he was having a little bit of
leg pain. And the doctor couldn't figure out why the
guy's leg was hurting, so he said, hey, why don't
we just take a brain scan, And that's when they
discovered that most of the brain scan just looks like
(25:38):
empty or fluid filled space.
Speaker 2 (25:40):
But you know, he was married, he had a job,
normal IQ.
Speaker 1 (25:44):
It's quite remarkable how different this liveware is from the
way that we think about building things in Silicon Valley.
And of course, what you do, which is so remarkable, is look at all the cells, the whole system,
and this massive flexibility and the collective intelligence of all
the pieces and parts all the way up. Tell me
(26:05):
if you think this is a good analogy about collective intelligence. I was just trying to think of an analogy, and I was thinking about Wikipedia:
everybody's doing their little thing depending on their own expertise,
they put in some things, and nobody who's doing this
knows the giant shape of the full Wikipedia.
Speaker 2 (26:24):
It's much too big for any given human.
Speaker 1 (26:28):
But nonetheless everyone's doing their things, and what you get
is this collective intelligence out of it. And I was
thinking about whether I could stretch this analogy. You know, if some part of Wikipedia got cut off, like an axolotl's limb, it would grow back and it would
take the right shape again, because that knowledge is somehow
(26:49):
stored in all the individuals who are writing the stuff.
But again, nobody knows what they're doing. Everyone's just contributing
where they see a gap. Does that seem like an
interesting analogy?
Speaker 3 (26:59):
It is, and what it suggests to me, and I'll actually talk to Erik Hoel about this and see if this is a good analysis to do, because there are now computational tools from information theory, and Erik Hoel was one of the key developers of some of this, where in a specific given circumstance you can actually ask whether the higher level has got more causal power than
(27:21):
the lower level. And so it's actually an amazing
advance, because it means that questions that before used to be philosophy, and people argued about this for, you know, probably thousands of years, whether reductionism was all you need or whether sometimes you have these higher-level things that are causally powerful, now there's actual math to answer that question. It's quite amazing. And so, you know, there are Python toolkits now to estimate, in your given system, is
(27:43):
everything explainable by the lower levels, or is there a higher level that does something that the lower levels don't do. And actually, people like Giulio Tononi and Larissa Albantakis in his group, they apply this to all kinds of human patients. So, coma, you know, locked in, asleep, anesthetized, awake, right, are you dealing with a pile of neurons or is there a human
Speaker 2 (28:03):
Being in there, you know, some kind of collective?
Speaker 1 (28:05):
Right.
Speaker 3 (28:05):
So now what you're making me think is, this whole process of Wikipedia, we could in theory apply those tools and really empirically ask the question: is there a collective there that's bigger than just the individual processes that go on when people get on and edit those, you know, those Wikipedia entries.
Speaker 2 (28:25):
Let's do this experiment. I love it.
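As a rough illustration of the kind of measure those toolkits are built around, here is a toy version of effective information, the quantity behind Hoel's causal-emergence work, computed with plain NumPy rather than any particular package. The example transition matrices are invented, and real analyses are considerably more involved; the point is only that a coarse-grained macro description can score higher than its micro description.

```python
import numpy as np

def effective_information(tpm):
    # Effective information: mutual information between a uniform intervention
    # on the current state and the resulting next state. Rows of tpm = current
    # state, columns = next state.
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    effect = tpm.mean(axis=0)  # effect distribution under the uniform intervention
    return entropy(effect) - np.mean([entropy(row) for row in tpm])

# Micro level: states 0-2 wander uniformly among themselves; state 3 stays put.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
# Macro level: coarse-grain {0, 1, 2} into A and {3} into B; the macro dynamics are deterministic.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])
print(effective_information(micro))  # about 0.81 bits
print(effective_information(macro))  # 1.0 bit: the macro level carries more causal power
```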
Speaker 1 (28:26):
I want to make sure that we have enough time
to talk about diverse intelligences. So one of your interests
is in not just looking at human brains and thinking about, okay,
how is this intelligent, and so on, but saying, what are other systems that are intelligent? So let's dive into that.
Tell us about how you think about diverse cognition.
Speaker 3 (28:47):
Yeah, so because of some of the things that
we've already discussed, meaning that problem solving is something that
biology has to grapple with at the very beginning, you know,
at the very origin of life, and in fact, probably
long before that. You know, you can't afford to be
like a Laplacian demon that's going to track micro states.
You have to coarse grain your environment. You have to
start telling yourself kind of agential stories about what's going on.
(29:10):
In other words, you have to make models of the
environment where you coarse-grain a whole bunch of different
things that are happening, and you say, okay, I'm going
to treat all those as one thing, and this is
danger or this is food, or this is a conspecific
or this is, you know, low pH, you know, for an embryo or whatever it is. So that kind of thing,
having to tell these kinds of agential stories that you can then eventually turn on yourself and say, wait.
(29:30):
And I also am an agent that does things and
the need to improvise, continuously improvise meaning for the information
that you get, because you're not told; nobody's going to
interpret anything for you. You have to interpret your own genome,
your own physiological states, your own memories. And so I'm
really interested in the different ways that this gets amplified
in evolution. And of course, you know brains, you know,
the familiar brains are one way that that happens, but
(29:51):
there are many other ways that that happens. And I want to just briefly give you two quick analogies that I think illustrate some of the aspects of what the field of diverse intelligence is about,
at least the way I see it. First of all,
think about the electromagnetic spectrum. So back in the day
when we didn't have a proper theory of electromagnetism, we
had lightning and static electricity, and light and magnets and.
Speaker 2 (30:13):
Various things like that, and we thought those were all
different things.
Speaker 3 (30:15):
We thought they were all categories, like sharp crisp categories.
Nobody thought that light and magnets were the same, and
those are you know, we have categories for all those things.
Speaker 2 (30:25):
And also, so that's the first thing. We thought these
were all distinct and because of our own.
Speaker 3 (30:29):
Evolutionary history, we were only sensitive to a tiny part
of that spectrum. There were huge examples of this phenomenon that we were completely blind to. And then eventually we ended up with a good theory of electromagnetism.
Speaker 2 (30:40):
We did two things.
Speaker 3 (30:41):
First of all, it unified them. It says, no, these are all actually, in a very meaningful way, examples of the same underlying phenomenon. Okay, so a deep unification. So that's great.
Speaker 2 (30:51):
And two, it allowed us to make technology, useful technology
that allows.
Speaker 3 (30:55):
Us to operate across the spectrum, to be able to
detect and modulate things that before were completely invisible to us, meaning we didn't think they existed, but
now we know better. So something like this is what
I think is going to happen for cognition. I think
we are sensitive to an extremely narrow spectrum among the
gigantic space of possible minds. I think they are all
around us, but we are totally mind blind to most
(31:17):
of them, you know. I think that's a good term
mind blindness, that we just don't recognize these things
because we don't have a good theory that explains why
the problem solving of an amoeba, of a thermostat, of
you know, of an organ of a human, of a
collection of humans doing Wikipedia whatever, why these are all
actually on the same spectrum. We don't have a good
theory yet. And and the second thing is we don't
(31:40):
have the technology. And this is something else that I
think we have a lot to talk about in terms
of prosthetics, Okay, cognitive and physical, bodily prosthetics that would
allow us to interact with these other beings that are
all around us.
Speaker 1 (31:52):
So let's dive into some examples of diverse intelligence. Sure,
so let's just start from the beginning and work our way up. So, brains, okay, we all know brains exist in many kinds of animals.
Speaker 3 (32:02):
Then, because of what we understand about navigating other biological spaces,
we can think about plants, and we can think about cells,
and we can think about tissues and organs, which also
solve problems.
Speaker 2 (32:14):
They store memories, they can learn, they can be communicated with.
Speaker 3 (32:17):
This is what all of the biomedical efforts in my
lab are pointed at, which is learning, in particular through the bioelectrical interface. They're all oriented towards communicating our goals to
cells and tissues. So for full on regenerative medicine, it
is not going to be sufficient to try to micromanage
the receptors or genetic states. We are going to have
to get the buy in of the cells, respecify their
(32:39):
goals at a high level, and get them to do
these complicated things that we can't possibly micromanage.
Speaker 2 (32:44):
So give us some specific examples.
Speaker 3 (32:46):
So one of the things that we have learned to
do is much like neuroscientists read electrical patterns in the
brain and they try to decode them.
Speaker 2 (32:54):
So this is neural.
Speaker 3 (32:55):
Decoding, where people want to read the electrophysiology of your
brain and say here's your memories or goals or preferences
and be able to read that out. We've learned to
do that, and we developed the first tools to do
it in the early two thousands for the rest of
the body. So when I say that the early embryo
navigates anatomical morphospace to the shape of whatever it's
going to be, and that it is an active agent
(33:17):
that has a memory of where it's going, it has
a representation of where it's going, that's a very big claim.
You then have to say, well, what's the mechanism for
storing the representation? Where is it?
Speaker 2 (33:25):
Can you decode it? And can you rewrite it? And
so this is what we've done. We've developed tools to
read the electrical memories of collections of cells. This goes
right back to what you said.
Speaker 3 (33:34):
No individual cell knows what a face is, or what
an eye is, or how many fingers you're supposed to have,
but the collective absolutely knows. And we can read this
out now. In a few cases, we can literally see
in images and videos, the memory, the electrical pattern of the future shape that is guiding the cell activity. Moreover, it serves as a kind of cognitive
glue that binds all the cells towards one story, one
(33:56):
story of what a correct embryo is supposed to look like.
This is why you say it's an embryo and not
a pile of cells because they've all committed to the
same journey in that space. This actually, this idea is
at least as old as Harold Burr in the thirties.
He, without anything other than a good voltmeter, was
able to kind of already figure this out.
Speaker 2 (34:13):
Amazing.
Speaker 3 (34:14):
And so now we can read those memories, we can
decode those memories, and we can rewrite those memories.
Speaker 2 (34:20):
Because if I take.
Speaker 3 (34:20):
A plenarian flatworm and I say, oh, look, this is
where it says that you should have two heads if
you're injured, we can rewrite that. And this is Falon
Durant's work when she was a PhD student in my group.
We can rewrite that electrical pattern, no genetic modification, just
brief application only takes about three hours, a brief application
of ion channel drugs that we've chosen specifically in tune
(34:42):
with a computational model of how you would.
Speaker 2 (34:44):
Do that, and we change that pattern. Instead of saying
one head and now it says two.
Speaker 3 (34:48):
Now that becomes a false memory, because the worm currently doesn't have two heads. It has one, and it'll sit there perfectly happy. The anatomy does not match the memory. It's a latent memory until you injure the thing, and when you cut it, bang, that's when the cells consult the memory, and the memory says, build two heads. Well, that's their ground truth. They don't know any different, and so they will go ahead and they will build this new vision of what a worm is.
Speaker 2 (35:07):
And it's a memory because it is permanent.
Speaker 3 (35:09):
If you take two headed animals and keep cutting them,
they will continue regenerating as two headed, even though their genome.
Speaker 2 (35:15):
Is a perfectly standard genome.
Speaker 3 (35:16):
If you were to sequence it, you would be
none the wiser that this thing has two heads. So
this kind of thing, the ability to put new goals
into the mind of the collective is the kind of
an earliest example of communicating with it because we can,
we can in some cases, we can train it. Another
thing we're really working on is to actually ask it questions.
That would be really cool because sells have all kinds
of problem solving capacities.
Speaker 2 (35:37):
I would love to be able to actually ask them
questions in a way. And AI is a.
Speaker 3 (35:41):
Very powerful tool that we're now using to start to
communicate with these things. So that's kind of the first
weird kind of mind, meaning in our body we have
I can't you know, I don't think you can count them.
I think that you know, it's not probably not really infinite,
but but a very large number of different cognitive units
inside your body, solving their own problems in their own
time scales and so on.
Speaker 2 (36:00):
But you can get weirder than that, which is this, you know.
Speaker 3 (36:05):
I'll start with a very quick story, and this goes
back to a sci-fi story that I read years ago. Imagine these creatures come from the core of the earth. They live in the center of the earth. They're super dense. They come up to the surface. What do they see? Well, they don't see physical objects; as far as they're concerned,
Speaker 2 (36:20):
Everything here is like a thin gas. It's like a plasma.
They are so dense.
Speaker 3 (36:23):
They walk right through us the way that we walk
through you know, patterns of pollen in the garden, and
we don't even notice it. And so one of them is a scientist, and he's looking and he says, you know, this gas that we're walking through, if you actually look at patterns within the gas, it almost looks like they're doing something. It almost looks like they're agential. These patterns, you know, they walk around, they have behaviors, they're doing stuff. And the others say, well, that's crazy.
Speaker 2 (36:44):
We're real physical beings. Patterns can't be agents.
Speaker 3 (36:47):
Patterns, you know, patterns in an excitable medium can't have, you know, their own memories and their own goals. And by the way, how long do these patterns last? He says, well, they dissipate after about one hundred years. They're like, yeah, no, that's not anything, right. So okay, so what that
reminds us of is that, you know, we too are patterns, right, we're metabolic patterns that hold
ourselves together for some amount of time and then we dissipate.
(37:07):
And this distinction between patterns and objects is in the
eye of the beholder. And so that leads you to ask,
what are the things that we think of as mere
patterns in an excitable medium that might be agents themselves?
Speaker 2 (37:19):
And so that's the next kind of level: can we communicate?
Speaker 3 (37:22):
Can we recognize and communicate with patterns, patterns of gene expression, patterns of bioelectric state? You know, this whole thoughts-are-thinkers idea from William James.
Speaker 1 (37:50):
So you look around, you see these patterns everywhere, and
you think, which of these are agential, which have what
we might call intelligence. Unpack the thoughts-are-thinkers idea
for us?
Speaker 3 (38:01):
Yeah, so this is, uh, and I admit I haven't looked for the actual reference, but I'm pretty sure I saw this in James's book, where what he's pointing out is that, look, you have fleeting thoughts. They come and they go, right, they sort of run through your memory medium, and then they go. Then you have
persistent thoughts, and these are a little harder to get
(38:21):
rid of. They do a little niche construction, you know, they kind of change some of your brain to make it easier for them to persist. Right, these intrusive, persistent, you know, kinds of thoughts. And then you go further on the spectrum and you have personality fragments, like from a, you know, a dissociative identity kind of situation.
And then you keep going and then you have a
full coherent human personality, and you say, okay, well, that's
(38:43):
the thing we're kind of used to. But it's on a spectrum. And then who knows, right, some people claim there's, like, a superhuman, you know, sort of a larger mind and so on.
Speaker 2 (38:53):
I don't know. So that's the idea. And so there
are two ways to think.
Speaker 3 (38:57):
About any of these situations that were sort of given to us by the Turing paradigm. You can say that the cells, those are your Turing machine. That's your machine. That's the real agent.
the patterns that move through it, the information, the energy
slash information patterns that.
Speaker 2 (39:14):
Move through it. They're just patterns.
Speaker 3 (39:16):
They're passive data, and it's the agent that processes the data. Right, you know, our brain moves around the information, moves the energy, and our body
does the same thing.
Speaker 2 (39:25):
Okay.
Speaker 3 (39:26):
Or you can flip the whole thing, which is what
we're working on now, which is to say, what if
it's the patterns that are the agents and everything that
happens to the machine, meaning all the outcomes of gene expression,
of protein movement, of cell behavior, of morphogenesis. What if those things are just a scratch pad? It's kind of a stigmergy, the way that in any ant colony, eventually, you know, particles and pheromones and
(39:46):
things will move around because the ant colony mind is
kind of doing its thing as the ants, you know,
send messages to each other.
Speaker 2 (39:51):
What if the
Speaker 3 (39:52):
The anatomy and physiology that we see, and the body of the Turing machine, is the scratch pad of the actual agents, which are the
you know, working out their dynamics in the physical world.
And it actually has some real implications just very quickly,
for example, in our program on aging, right, so we're
trying to understand and address aging. So imagine, the classic
(40:15):
way of thinking about aging from a bioelectric standpoint is
we know that during embryogenesis there's a bioelectric pattern that
guides morphogenesis. And so probably what happens is that those
memories become fuzzy in adulthood, and as we age, they just get fuzzier and fuzzier.
Speaker 2 (40:29):
Their cells have no idea what to do.
Speaker 3 (40:30):
The memory degrades, and the agent, the physical body doesn't
know what to do anymore.
Speaker 2 (40:35):
That's the standard approach, and that's what you know. That's
one thing we're doing.
Speaker 3 (40:38):
But you can flip it and you can say, what
if the agent is actually the pattern that is trying to, the vocabulary kind of fails us here, but it's
trying to ingress into the physical world through our medium.
And maybe what happens as we age is that the
cells become less and less able to implement it, They
become unresponsive, the machine slows down. Maybe the mind of
(40:59):
the agent, of the morphogenetic intelligence is perfectly fine, but
the machine doesn't respond. And so those are experimentally distinguishable,
and we're doing those experiments. We actually have some data
for this now, and so those are just very different. And the way you then would address aging from those two different viewpoints is quite different.
So that's what we would love to do, is to learn to recognize and communicate with other kinds
(41:21):
of agents that are not even physical objects as such.
Speaker 2 (41:24):
They are
Speaker 3 (41:25):
Persistent patterns that may have all kinds of agency, you know,
their own agendas.
Speaker 1 (41:31):
So let me ask you a couple of rapid fire questions.
If there are diverse intelligences everywhere, if we can start understanding
these patterns around us as being cognitions of their own,
what does this mean for ethics?
Speaker 3 (41:44):
Yeah, this is a huge problem. This is an absolutely
huge problem. I think that it is foundational to the
development of ethics as a mature species to learn to
recognize and ethically relate to minds that are nothing like ours,
that are basically not on the same spectrum, you know, at least in some cases, because you can actually, believe it or not, get even much weirder than this pattern thing
(42:06):
that I'm talking about. And so it's certainly above my
remit to try and formulate the ethics. But what is
very clear is that we need to learn to recognize them,
we need to learn to communicate with them, and we
need to start thinking about what do we owe other
beings that live with us, that live in spaces and
have goals that are really hard for us to visualize.
Speaker 2 (42:25):
What are the.
Speaker 1 (42:25):
Implications for AI at this moment that we're in.
Speaker 3 (42:29):
People tend to have a very kind of a binary
view on this. They will either say, oh, yeah, it
talks like us and therefore it's like a human brain,
or they'll say, oh no, this thing is a machine,
and therefore it's nothing like us.
Speaker 2 (42:40):
So I think both of those are terrible.
Speaker 3 (42:43):
And first of all, because in order to be intelligent
and have meaningful cognition and maybe moral worth, you don't
need to be like a human mind.
Speaker 2 (42:51):
There are many minds that are nothing like a human mind.
You don't have to be like humans.
Speaker 3 (42:55):
And I don't believe at this point, as far as
I know, we don't have any ais that are like
a human mind, but that doesn't mean they're not minds.
And the other problem is that there is no such
thing as a machine. And if you believe that algorithms
and the facts of physics around the silicon and copper,
and the kinds of things we make computers out of,
if you think that those things tell the entire story
(43:15):
of artificial intelligence, then you should think that the story
of biochemistry is everything you need to know about the
human mind. And that's you know, I think that's blatantly false.
And so for both in both cases, I think we
have to be extremely open to the idea that we
do not understand how different kinds of minds ingress into
the world through different interfaces. And I realize this is
(43:36):
a weird way of putting it. This is not the standard,
the kind of neuroscience way where intelligence is created by
the hardware.
Speaker 2 (43:42):
I don't actually believe that's true.
Speaker 3 (43:44):
I think consciousness is separate, and what we see, what we provide when we make, you know, AIs, robots, embryos, the biobots, all of this stuff, is that we make interfaces, different interfaces for it, and we are currently very bad at
guessing ahead of time what is going to appear when
we make certain kinds of interfaces. And you know, I
(44:05):
think one of the most relevant pieces of our work for this is the stuff that we did. Taining Zhang and Adam Goldstein and I wrote this paper on unexpected competencies in sorting algorithms like bubble sort. These are things
that computer science students have been studying, you know,
in first year CS for I don't know, sixty years,
I guess, and nobody had actually looked at it the
(44:27):
way that we had looked at it, and we found
this thing has delayed gratification and has these weird little
side quests that it goes on that are not in
the algorithm at all. In other words, if you just
stare at the algorithm. You know, it's six lines of code.
It's a deterministic algorithm. There's no magic there, there's no
new biology to be found. You know exactly what it's doing,
and yet it does things that we do not expect
it to do in the algorithm, not just randomness, not
just complexity, not just unpredictability, but things you would recognize
(44:50):
as cognitive competencies.
Speaker 2 (44:52):
And that means that if we don't, if we can't,
you know.
Speaker 3 (44:55):
Sometimes people say to me, well, I build language models. It's just linear algebra, I know what they're doing. There's nothing there. As if. Look, we don't even know what bubble sort is doing. If you don't know what bubble sort is doing, you sure as hell don't know what these language models are doing. And so we
need to treat all of these things as empirical questions,
not philosophical decisions that we can make, and we have
to get much better at understanding how new minds ingress
(45:17):
even in tiny interfaces like low complexity, very simple kinds
of interfaces.
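For readers who want to see the kind of object being discussed, here is a minimal, bottom-up take on bubble sort in Python. It is only a sketch of the element-centric framing that paper explores, not the authors' code; the specific local rule and the names are invented for illustration. Each position acts like a little cell that only looks at its right-hand neighbor, yet the array still reaches the global goal of being sorted.

```python
import random

def cell_view_sort(values):
    # Bottom-up bubble sort: a random "cell" wakes up, compares itself with its
    # right-hand neighbor, and swaps if the pair is out of order. No element
    # ever sees the whole array, yet repeating the local rule sorts the list.
    arr = list(values)
    while any(arr[i] > arr[i + 1] for i in range(len(arr) - 1)):
        i = random.randrange(len(arr) - 1)     # a random cell acts
        if arr[i] > arr[i + 1]:                # local rule: fix your own neighborhood
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(cell_view_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```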
Speaker 1 (45:26):
That was my interview with Mike Levin, biologist at Tufts.
Every time I talk with Mike, it's hard to look
at the world the same way. So we started by
asking what is intelligence? But instead of finding a crisp,
singular answer, we were handed something far more powerful, which
is a new lens, a new way of thinking about intelligence,
(45:48):
not as a static property that some lucky creatures have
and others lack, but as a multi dimensional space of
goal directed behavior, shaped by evolution and context and purpose.
And this is of course a reframing of life itself
because through this lens, intelligence isn't only confined to a cranium.
(46:11):
It's not restricted just to animals with brains. Instead, it's
something that shows up wherever you have systems that are
solving problems and adapting to things they didn't expect and
correcting errors. Wherever there are goals, there may be something
in play that is like a mind. And when we
look through this lens, the universe becomes alive in strange
(46:35):
and beautiful ways. Cells are more than dumb building blocks.
We can see them as decision makers. Organs are more
than machine units that are chugging along. We can see
them as negotiating parties in their own societies. A regenerating
flatworm is more than a textbook collection of cells. It's
(46:56):
an entity that knows what it's missing and takes action
to restore itself. In a sense, it remembers what it
used to be, and it holds that shape in its
future and moves towards it. So, Mike Levin's work suggests
that the basic machinery of cognition, like memory and problem
solving and preferences, this all might emerge way earlier in
(47:19):
evolution than we've assumed. Because cognition might not require neurons,
it might not even require consciousness in any familiar sense.
What it does require is something more basic that you
can achieve with lots of architectures, a capacity to act
in service of a goal. And this raises all kinds
of great questions. If we accept that intelligence exists in
(47:42):
a multi dimensional space, what else around us counts as intelligent?
How about a tree sending resources through its root network? How about a colony of ants adjusting its foraging behavior? How about your immune system adapting to a virus?
Speaker 2 (48:00):
How about a cluster of.
Speaker 1 (48:01):
Engineered cells navigating a maze. And one thing I think
is really important here is thinking about what all this
means for the future of AI. At the moment, we're
only building machines inspired by brains. But I think when
we look back in twenty years, that will seem quaint,
and we will be seeing a lot more emulation of
(48:21):
the distributed, adaptive self regulating qualities of other more spread
out and sometimes more creative biological systems. Could we design
machines that do physical things, not just like minds, but
like cell assemblies and bodies. And finally, with everything that
we talked about today, what does this all say about.
Speaker 2 (48:43):
Who you really are?
Speaker 1 (48:45):
Because when we're being honest, we are not individuals in
the traditional sense.
Speaker 2 (48:50):
We are collectives.
Speaker 1 (48:52):
We are billions of cells and trillions of microbes, all
operating with partial autonomy and some goals, and this vast ballgame,
which is much larger than we can conceive, is somehow
coordinated into the illusion of a unified self. The story
(49:14):
of you is a kind of consensus reality emerging from
many smaller parts, most of which have no idea you exist.
Speaker 2 (49:24):
To my mind, this is a call for awe.
Speaker 1 (49:27):
I don't know why we'd only talk about this stuff
occasionally on a podcast. Why aren't airplanes flying around with
banners celebrating this kind of stuff? Why aren't we talking
about this on CNN instead of local political cycles, because
the lesson from today's episode is that intelligence is probably
not rare but common. We always look at it as
a strange exception to the rules of nature, but maybe
(49:49):
it is the rule. And if this is the right
lens to look through, what it means for us is
that the world is full of minds, strange and ancient,
and in many ways alien minds, some fast, some slow,
some huge, some microscopic, some we've built ourselves, and most
that have.
Speaker 2 (50:09):
Been here all along waiting.
Speaker 1 (50:12):
For us to notice. We're just starting to map this territory.
And I think one of the lessons from Levin's lab
is that the boundary between mind and matter is more
porous than we generally assume.
Speaker 2 (50:23):
And the more we study this, the.
Speaker 1 (50:24):
More we're going to need to update our science textbooks.
Speaker 2 (50:28):
But more importantly, we're going to need to update our.
Speaker 1 (50:30):
Intuitions about what it means to be alive and to
be intelligent.
Speaker 2 (50:36):
We'll need to tune into the.
Speaker 1 (50:37):
Fact that the whole world around us might be more alive,
more curious, more goal seeking than we thought to imagine.
In that light, the story of intelligence isn't a peak
that we have reached, but a vast landscape where agency
is common, and every living system, no matter how small
or strange, might be solving problems that we have yet
(51:01):
to understand. Go to eagleman dot com slash podcast for
more information and to find further reading. Join the weekly
discussions on my substack, and check out and subscribe to
Inner Cosmos on YouTube for videos of each episode and
to leave comments until next time. I'm David Eagleman, and
(51:24):
this is Inner Cosmos.