Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:08):
Thanks for joining me, Adam. It's a pleasure to host you.
I'm looking forward to this conversation.
Let's do it. Yeah, perfect.
Let's begin with some foundations.
I figured we'd start this off bydiscussing and dissecting your
integrated world modelling theory.
So let's call it RWMT for now. Could you explain, in essence,
what a world model is, and why consciousness might emerge from
(00:31):
modeling not just the world, butour relationship to that world?
That is an excellent question. And so I would start by saying
that a world model is a word andor an expression that and it
(00:54):
points at a reality and different people seem to have
somewhat different connotations,but also some common
commonalities across the different ways they use the word
world model. I actually think kind of similar
to the way that Minsky called consciousness A suitcase word.
World models and AI have become something of a suitcase word and
(01:17):
I think they actually might be overlapping suitcase words.
So in this case, you can think of a system's ability to contain
or have encode or represent information about something else
about the world that it's in, including itself would be its
world model, any kind of entailed information system that
(01:41):
would let it reflect or respond in some way to events in the
world. Now, other people, people have,
I guess, like thicker or thinnernotions of world model.
That would be like the most inclusive I could think of.
But there's other notions in AI of a world model is something
like a causal transition model that an agent would use to
(02:06):
predict, given what it's perceiving and its actions, what
it thinks will happen next. What it thinks is is happening
is likely to happen next, and then to use this to interpret
its inputs and condition its action.
And an integrated world modelling theory.
The claim I make is that consciousness corresponds to a
(02:30):
particular kind of spacio, temporally and causally coherent
world modelling that lets an agent generate iterative
estimates of likely sensorium states, going back to this idea
of Helmholtz of perception as a kind of inference.
(02:51):
But you're iteratively estimating your best guess of
what's happening in the world and you and your world relation
such that you can both inform and be informed by and updated
by action perception cycles on the time scales at which they
evolve. And so for me, consciousness is
(03:13):
a kind of world modelling that agents use for adaptively
navigate the world for greater with greater flexibility and
coherence as they try to predictand model and achieve goals.
Iterative state estimation of system world relation, which if
you look in its extended form over time would be something
(03:35):
like a stream of experience. Body, world, state, body world,
body, world, body world. What do you think is happening
and filling in your modalities all their different combinations
based on where you have information and what salient to
you and coloring these differentaspects of your experiences
(03:56):
qualitatively with things like your interceptive state and
forming like how good or bad things are from moment to
moment, what you're seeing, whatyou're feeling, all your
different modalities, Someone akin to, and I would say perhaps
formally akin to something like a deep fake, but instead of like
filling in a pixel array, you'refilling in all your modalities
(04:19):
and different combinations from moment to moment to give you a
felt sensorium evolving over time.
I think what's pretty cool is that when you look at integrated
world modelling theory, it's unite some of the leading
contemporary theories. When you think about integrated
information theory, global, yourown workspace, active inference,
(04:41):
what gaps do you see in those theories and how do you see IWMT
closing some of those gaps? I mean, I hesitate somewhat to
say gaps because like it's like a, a standing on the shoulders
of like giants in terms of like these frameworks are.
(05:01):
So they're, they're powerful and, and deeply insightful.
And I guess perhaps maybe somewhat they're, they're,
they're, they're a victim of their own success.
And that sometimes within these different theories, it would
seem like you can, you'll have a, a large part of the solution.
(05:22):
But I, I often times use the metaphor of like the, the blind
sages and the elephants. And each of the theories has a
really good purchase on part of the problem.
And they might be like excellentspecialist to that part of the
problem, but there's more aspects to get in there before
we have something like a satisfying explanation for
consciousness. And so and so yeah, I draw on
(05:44):
the existing theories assuming all of them are have many
important correct aspects or apps or good models of
consciousness from a different perspective and then trying to
find the points of convergence across them and then see if I
can use some theories to adjudicate between seemingly
(06:04):
conflicting claims among other theories.
So it's like, for instance, predictive processing, you know,
what I describes in broad brush strokes, like would be like the
theory of consciousness from predictive processing
perspective, where your consciousness is a kind of
prediction or prior expectation that's evolving and you're
(06:25):
updating of what you think is happening from moment to moment,
as you can call that predictive processing perspective.
And it might involve things likepredictive coding in terms of
like you've like a hierarchy of beliefs where you're sending
down these predictions and then using it to explain away data as
it comes in and then updating things where you got it wrong.
And if this evolving updated general model.
(06:45):
But then there's devil's in the details of, well, what are you
inferring? What are you predicting?
And and and how And. So in this case, you know, so
invoking something like a workspace architecture as a one
way of doing a predictive processing model consciousness
(07:07):
where you can have a predictive workspace where if you have, you
know, your different sensory hierarchies, trying to infer
their inputs given impartial information and then cross
referencing each of the modalities exchange information
to the other on your, your, yoursights and forming your sound
(07:30):
and vice versa. Each one will have like
constraining information for theother.
And then if you bring your different incomplete views
together, you can help fill in amore complete estimate of what's
happening. And so the the workspace idea
would be that you can have multiple systems, multiple
modalities combined together with a shared latent workspace
(07:52):
where they can form joint representations informed by all
of them, mutually constrained byall the different modalities
where you can get their information comes in and then
it's broadcasted out. It's a kind of prediction.
So you can think of it damned init called it fame in the brain
or it's like like an arena for Bayesian model selection of your
current guests best guessed estimate of what you think is
(08:15):
happening from moment to moment.So the idea of like global
workspace theory is this intuition that consciousness is
involving this combination of integrated and segregated
processing like specialist process that can come together
with synergy and describing an efficient way.
This can happen by if you have like a network like a small
(08:37):
world architecture and a bunch of highly connected nodes.
Those inter highly connected nodes can help to form the basis
of the scaffold workspace or wherever you have like high
bandwidth across different things.
Then you can achieve this fame in the brain wherever you can
get information into these central portions.
But then on its own, you'd say, well, still, why should there be
something that it feels like to be a workspace?
(09:00):
So that the idea is you bring inthe body is what you're
predicting is your embodiment, your likely sensorium states
from moment to moment. And so I think many theories of
consciousness, they lose the explenanum by decentering from
the nature of embodied experience.
And so that I guess the other thing I draw from is integrated
(09:21):
information theory, which has many similar aspects actually,
in some ways, the global workspace theory in that, you
know, the the systems that maximize integrated information
or as it's sometime called irreducible self cause effect
power. But but the things that have the
most this, this that can be in this interesting regime of being
(09:44):
producing states that are highlyinformative and integrated.
It's often times a balance of segregated and integrated
structure. Things like small worldness also
give you high Fi or high integrate information systems.
And what I look to is basically,so integrate information theory
(10:06):
will will analyse systems. It's it's gone through many
incarnations. So now they're they're version
four. I mostly drew from version 3.
Some things have updated for four, but I think it's largely
similar to get into that later if you want.
But the idea of so some systems are interestingly self entangled
(10:28):
in ways where they make a difference to themselves in
terms of their structure and andthere's an intrinsic information
in in their structure and information geometry about the
nature of their structure that you'll call it, forget his name,
Balducci. I think it's the geometry of
quality of space where these different systems of the way
they're interconnected will and through their relations will
(10:51):
entail an information geometry in the different ways things are
entangled them are connected to themselves and the different
nations configurations will havedifferent shape manifolds in
this quality of space. But it's OK.
Good, but why should this feel like anything?
Well, what if, what if we're talking about specifically is
this geometry is corresponding to the details of a sensorium.
(11:14):
Now I think we're getting closerto the what it feels like.
And so the idea is that these different theories, they, they
come from different starting places and they have some
different operating assumptions,but they tend to converge
significantly. And you know, so I interpret for
instance, like the posterior hotzone and integrate information
(11:36):
theory they talk about which they think is the physical
substrate of consciousness. Well, I view that as a workspace
over your modalities, A fairly global workspace.
Maybe the frontal lobe isn't always in the mix, but that's
pretty darn global if the whole back of your brain is in the
mix. And but integrate information
theory starts from phenomenology, but global
(11:58):
workspace theory tends to focus on things like report and
access, and so they're often times not even talking about the
same thing. So like if you do want to be
able to self reflect and manipulate the information of
your experience, you probably doneed the frontal lobes in the
mix. But the whole the back of brand
on its own might be a sufficientquality of generator.
OK, so the basic idea is across all these theories, there's a
(12:21):
substantial convergence. And when there's divergent,
often times it seems like they're actually describing
different aspects of the elephant and maybe talking past
one another because they actually are focusing on
slightly different explananda. But if the you, you add more
words and, and define and, and so like, like by consciousness
(12:42):
you mean this and I mean that. I believe there's a substantial
complementarity and basically all these brilliant people who
are developing all these theories, I think they've all
had something very valuable to say.
Even even some theories I, I, I don't really find to be that
compelling. I don't close them off.
(13:03):
Like, for instance, like even like something like a quantum
theory. Like I tend to think that like
Tegmark was right when he said that like microtubules, they're
too hot and wet to like maintainlike superpositions.
And like you're probably not getting quantum bona fide
quantum computation. But the idea that something like
(13:24):
quantum formalism, the idea of moving from like something from
like a, a, a quantum world to a classical world in this
transition being important of having a probabilistic
representation where multiple things are in play.
And then from this and getting in the estimates, discrete
estimates of what you think's happening, that this the, the,
(13:44):
the quantum intuition actually might be an important for
consciousness, whether regardless of whether or not we
include quantum physics. OK, so to loop around, I tend to
focus on the major theories but that are most commonly discussed
and discussed together and and often times are conflicting and
try to integrate those. And in general, whenever I
(14:06):
encounter a new theory of consciousness, I say OK, So what
is the aspects of this that could have a useful information
for us? A useful intuition to work with,
but not taking every theory wholesale, but looking for
points of overlap and respectingthe differences.
(14:26):
Yeah, I think that's a that's a huge theme throughout your work
because when when reading it, you, you sort of see that for
you, it's very much collaborative.
This is a culmination of different fields coming together
and you're able to take all these diverse and very different
theories and, and sort of find all their commonalities and then
bridge those gaps a little bit more.
(14:47):
So it's, it's kind of like you're bringing all these
thinkers together together in many ways.
For example, when you when you when you spoke about information
broadcasting, integrating information, global broadcasting
in general, you take it further and you just spoke about the
fact that we are embodied beingsand this is fundamentally
important, this embodiment. How does that change the
(15:07):
picture? Can you maybe give us a nice
understanding of exactly why embodiment is so important when
it comes to discussing anything from consciousness to
understanding the human experience?
So evolutionarily and developmentally, it seems clear
(15:30):
that like the for the the the raison d'etre for said there is
a reason, but the function of nervous systems is to help
bodies navigate the world and tohelp control the more in in
adaptive ways. So if you're you stay put and
you don't need to move around a lot, most of your intelligence
(15:53):
can take the form of things likethe way you enter into different
metabolic modes, like your bio, the level of your biochemistry,
your intelligence and your adaptiveness can play out.
But if you need to move through the world and get to particular
destinations as you navigated and along the way achieve
(16:16):
particular goals, such as you might get like like predator
prey relations like you know, toto to eat or to avoid being
eaten. You know, these for that the
ability to represent your embodiment and its relationship
to the world where you are in space and what's happening to
(16:36):
you as a system and what's likely to happen to you next.
I would claim that that's primarily what nervous systems
are are largely for in terms of why complex nervous systems
evolved. And so so so for embodiment, you
know, it's you know, we the you'll have like developmental
accounts, for instance, like kind of like Piagetian
framework. So you start out the first
(16:58):
things you have to learn before you get to like, you know,
abstract things like mathematicsand philosophies just can you
control your own body? Can you just get like a model of
the physical world? And then this grounds everything
else that happens. Even I would argue that like
even things that don't seem likethey're embodied, if you look
closely at them, they like that are like Lakoff and Johnson
(17:21):
conceptual metaphor theory will go into the particular ways in
which you like language is full of embodied metaphor and it it
seems that in Lawrence Barse will also talk about like
embodied grounding, but it seemslike basically whatever is it is
in our conscious minds usually is cash out or does some kind of
(17:42):
combination of likely sensorium states.
And so I guess the other aspect of embodiment I think is
important is so I would suggest them and work in like
developmental robotics that points in similar directions
from people like Roe Pfeiffer and Bongard have an excellent
(18:06):
book that just describes things of embodied minds.
But embodiment gives you an excellent learning curriculum
for bootstrapping a mind in terms of like, so it like if
you're, if you're just like a detached Cartesian observer and
you're looking onto the world and, and you're looking what's
like impinging on just like you're strapped in and all you
(18:28):
had was the visual array. You're dealing with this complex
inverse problem. Or it's actually I'll posed like
there's a actual like Infinity of different combinations that
could correspond to anything you're seeing.
So like you could be like close and small or big and far away.
There's all sorts of multiple interpretations, but, and so
(18:50):
vision like, you know, to, to truly understand a visual scene
in all of its causal properties and what's there is actually
quite hard. But if what you're trying to
learn about when you're trying to bootstrap a world model is
your body and its body relation that has special properties.
So first, if you think of like an infant learning about its
hand, you'll actually get a, you'll never get a better
(19:13):
learning situation than this, I'd say, because you have all
the different modalities. They can mutually inform one
another. You can lick it to action.
So where the action itself can inform your sensation, wherever
you have uncertainty, you can resolve it by moving in
particular ways. It feels like something, so you
care enough to attend to it. It's always there for
(19:36):
observation. And so in my work, I've
emphasized a manuscript in the journal Entropy Guy called the
The Radically Embodied ConsciousCybernetic Bayesian Brain from
Free Energy to Free Will and back again.
I know that's not a title but a diatribe, but still that's the
idea is I try to describe this, this Mad Dog embodiment account
(20:00):
where you first learn the natureof your body and how to control
it and what the body is as a kind of prototypical object and
causal system. And then everything else is kind
of scaffold on top of this embodied foundation.
And so, yeah, so evolutionarily and also just developmentally,
the first thing you have to do is work body is control and
(20:23):
govern the body. And that's the case throughout
life and and. I I actually did make a note
about the the radically embodiedconscious subnetic Bayesian
brain. Quite a quite a quite a tongue
twister. That's not as easy to say.
But before we get to that, I thought you often link the
(20:44):
integrated world modeling theorywith the idea of a genetic
causation. So in a in a mechanistic
universe, how can agents still be causal participants rather
than be a byproducts of physicalprocesses?
So I, I tend to think of, you know, highfalutin things like
(21:06):
free will and in terms of like, like, can a system have
conscious causation? Can like the conscious aspect of
the system actually be meaningfully said to be causal?
And I'll come at this sometimes in two different approaches.
Sometimes I will try to derefy causation and just say in, in,
(21:28):
in the, the argument between like weak and strong emergence,
call it like a stalemate and saying that causal explanations
need to, I think Sean Carroll says like they stay in their
lanes. I forget this, but it's like you
can they don't cross ontologicallevels like you, like you, you,
you carve up the world in a given way.
You say, OK, if I intervene in this way, then what happens this
(21:50):
way? That what happens.
But, but cause causation is we understand it from people like
Judea Pearl as a means of explaining and predicting.
It's a, it's a kind of bookkeeping scheme.
And it's not like a primitive ofnature like, you know, in, in,
in, in fundamental physics. Often times they don't even talk
about cause because there isn't even time and so and depending
(22:12):
on who you talk to you, there are some like time centre views
also, but still like you know, if you just And so one way of
handling like you know, can consciousness be causal is
saying, well, what do you mean by 'cause there's other people
though, like Eric Cole, for instance, has really excellent
work like showing. And this actually comes out of
the integrated information theory tradition in many ways of
(22:36):
older work when the macro beats the micro where he's basically
flexibly coarse graining systems.
So OK, if I can, I can zoom in, zoom out on what I think the
kinds are in a system and whatever maximizes my channel
capacity in terms of the information I can tell you about
the system that's like the maximal 'cause that's the best
that that's the the real estate thing.
(22:58):
And so sometimes that's maximized at the macro level,
the coarse grain level and potentially the level of
something like the process that could generate your experience.
He has more recent work with causal merchants 2 point O,
where he's trying to maximize these causal primitives of
necessity and sufficiency, whichhe operationalizes in different
ways. But so the basic idea is you
(23:18):
might think that the most the micro level is always most real,
most explanatory. But in terms of actually what we
when we go into details of what we think we mean when we talk
about cause and you and potentially you can causation as
a natural kind and things like Eric holds work as a as a Max,
(23:41):
as a maximally efficient explanation.
Causation actually is not alwaysmaximized at at at this micro
level. It's sometimes it's at that's at
the, the level of the, the integrated whole at by virtue of
being so configured, has properties that are not present
in the parts. And so in, in integrated word
(24:06):
modelling theory, the idea wouldbe that the holistic property
we're looking at here would be the ability of multiple neural
systems set up in a kind of workspace architecture.
Their their ability to estimate the embodied system world
(24:29):
relation from moment to moment and that these estimates of
these maximum likely explanations are are now can be
used as the basis for action selection.
And so given what you think is happening, given your past
experience of present observations from iteratively
estimated how fast is is is up to debate.
(24:52):
You know, potentially it's as slow as alpha rhythms, like 8 to
12 times a second could be faster.
But this Interstate estimation of system or relation, the
reason you do it is specificallyto inform moment to moment
action selection. And so in this case, the this,
this, your consciousness as a aswhat?
(25:12):
When as your best guess of what's happening is also a cause
of what's likely to happen next.And you generate these estimates
so that you can more skillfully enslave the system by this high
level attractor to do what is likely to be adaptive given past
experience and what was adaptivein the past.
(25:33):
That was a lot. But the.
Yeah. I think it sets a good base
baseline because the in the recent paper that you wrote, the
radically embodied conscious cybernetic basin brain, you, you
describe, you redefine the free energy principle in a way that
describes freedom itself. So I think let's let's sort of
(25:55):
dive into that right now. How does free energy
minimization relate to free will?
So, so yeah, so for free will, just similarly to consciousness,
(26:18):
there's we have a suitcase word issue and there's multiple
connotations. But so within the free energy
principle, you know, so free energy is this, it's a metaphor
for the goodness of a system's modeling.
(26:40):
And so like one of the origins of the free energy principle
sometimes describes as driving from the good regulator theorem
of cybernetics from Conan and Ashby.
Like to be a good governor of a system, you must in some sense
be able to model that system, otherwise you wouldn't be able
(27:01):
to govern it and control it. So to control something, you
must be able to have informationabout it and model it.
And, and you can bring things like the law of requisite
variety. And it has to be like a
sufficiently complex model to todeal with the complexity of the
system that's being modelled. And so the idea of free energy
is you can evaluate so, so systems that persist and are not
(27:28):
getting all mixed up in the meatgrinder of existence and
dissipating systems that somehowmanaged to resist and Tropic
dissipation while at the same time serving it at everything
they do but hold that hold themselves together, that don't
become disordered. The way they do this is by some
kind of adaptive modeling effort.
And free energy is the way you would score the goodness of the
(27:52):
modeling by both how well a model fits its data and how
simple the model is. And so that the metaphor comes
from this, the idea of thermodynamic free energy, where
you'll have like this enthalpy term in an entropy term.
And the enthalpy term is would correspond to like the goodness
of the fit and the entropy term would correspond to like how
(28:13):
complex you made things, right? So, so systems that persist are
doing so through adaptive actions and good modeling would
be like the basic idea of the free energy principle.
And so the idea of free will youcan, we're talking about just
(28:37):
before in terms of like conscious agency, but you could
even take free will. And, and you know, there seems
to be both a desire to be directed in terms of the will
part where you want to be able to achieve particular goals, But
there's also this free part where you want a certain kind of
open endedness and adaptiveness and ability to break free of the
(28:59):
past. The kind of creativity and this
ability to drive the will that that's one way of understanding
that seems to be some of the what people want when they talk
about free will. And so within the free energy
principle, you know, sometimes talk about systems that persist
as these attracting States and, you know, think of that.
They're probably too much. There's definitely too much.
(29:21):
They're Helmholtz decomposition in terms of the you'll have a
gradient flow and a solenoidal flow.
And so the gradient flow is their ability to like basically
maintain a certain shape as an attractor.
But then once they're in a givenshape, the solar little flow
would be they can kind of like slide around and explore along
these like contours where their free energy isn't really
(29:41):
changing, but they can like movearound and explore a bit.
So so for free will actually, I'm not forget what I just said.
We're not going to get into that.
That part should bring Carl backfor that.
That was a fantastic interview and and Carl could could lay
that out very well, I think. But so but.
(30:02):
But Adam, before, before you throw that out, exactly where
were you going with it? If if you had to just try your
best to sort of narrow that down, where were you?
So that within free will that the free part is systems should
have a certain kind of of stochastic, they should have
(30:24):
both a directedness about them in terms of their ability to
pursue particular goals, but also sometimes a stochasticity
about them in terms of their their exploration and was
seemingly non goal directed as actually part of the process of
achieving a goal. Like you can think of like from
like a universal Darwinism perspective, for instance, like
if everything that's like managing to, you know, optimize
(30:48):
in particular direction is this differential persistence and
selection. If you, you need a diversity of
forms to, to choose from. And so this ability to try out
new things and not just be constrained can actually be the
ability to hold a goal lightly can be part of what lets you
achieve it. Like this, You see this like, in
(31:09):
terms of like, like the creativeprocess, for instance, like the
ability to like think in different ways and to, to have
a, a, a, a more diverse repertoire to, to pull from.
And so, so, so both like the order directness of a system is
important for its ability to be a coherent goal seeking system
and a learning system, but also its ability to be, to have
(31:31):
novelty about it to have. And so some, some thoughts from
psychedelics end up coming into this also in terms like some of
the use of psychedelics that that people have to help them
get unstuck in terms of, you know, raising the temperature
and helping them to see things in different ways and try things
out in different ways. But yeah, so free energy and
(31:51):
free will, I say. But the, I guess the final part
I would say is so, so, you know,Dennett has been able to talk
about sometimes that like, or hewould talk about like how
conscious and free will it's important to like handle them
together. And it seems like so, so your,
your world modelling is in the service of your agency and you
(32:14):
develop your world model becauseof your agency, because of your
ability to be an an active learner and because you're
trying to achieve particular goals in the world.
This is what puts the pressure on a system to bootstrap and
learn this world model. And the world model is what
let's the system more flexibly navigate a world and achieve its
(32:34):
goals. And so this relationship between
world modelling and agency, I'llsometimes think of it as the
relationship between consciousness and free will.
And yes, so. And and Adam, within that same
world model, when you within your theory, philosophically
speaking, do you think that yourtheory offers a new kind of
(32:56):
naturalism? Do you feel like it's something
beyond reductive materialism butjust short of panpygism?
Where do you see this philosophically?
As far as I can tell, it's near,it's nearly complete.
(33:18):
It might be completely consistent with like Verveiki
and Greg Enriquez's approach andtheir naturalism.
The so I don't think it necessarily involves a new kind
of naturalism in terms of like I, I think it's, it's reflected
and it's consistent with other suggestions.
(33:41):
In terms of the pan psychism issue, I would say the
integrated world modelling theory is a kind of proto pan
psychist theory in that it it takes in just but it depends on
what we mean by psyche. So it takes everything that it
exists and if you think of it interms of free energy
(34:02):
minimization as like whatever isable to exist and persist is
doing this kind of modelling. This gives you a kind of
mindedness for not just all life, but all complex adaptive
systems, anything which you could call a system, by virtue
of identifying it as a system, the fact that you can point to
(34:22):
it and it has not dissolved, it is doing something minded in a
way. Then there's modelling involved
in it. And so this I I think would make
it panpsychist in that way. Anytime you have any degree of
thinness, you have any any attracting that you can point
to, you have a modelling system and and integrated rule long
theory, you would call it a selforganizing harmonic mode.
(34:45):
So it's kind of Pythagorean and that it would be a particular
kind of harmonic function. Don't have to worry about that
right now. But the IT might be though less
panpsychist in some ways in terms of if you think of psyche
as something like consciousness than certain other.
(35:06):
So then you might assume like, for instance, like integrate
information theory on its own orsome interpretations of like the
free energy principle, and that I would reserve consciousness as
this world modelling that's spatial, temporally and causally
coherent. That that's being a somewhat
more rarefied thing and something more of a latecomer in
(35:31):
the history of the universe of evolution.
And that particular adaptations of the be there in order for a
system to be able to iterativelyestimate system world relations
in a way that can both inform and be informed by action
perception cycles as they unfold.
Not all systems are having theirinterface with the world being
(35:55):
governed by this kind of representation of the system
world relation. And so in terms of the
attribution of consciousness, itwould be less panpsychist than
many theories. But in terms of attribution of
something like mindedness or psyche, it would be much more
panpsychist in terms of like anything that's managing exists
(36:18):
is doing something kind of mind like, but not saying that there
is something that it feels like to be anything that exists.
The, the one thing I found intriguing about your work was
it reminds me of the conversation I had with Mark
Solmes and Gulf Whiston, becausethere's almost a bridge here
with your work. And, and when you look at it,
it's when people think of activeinference, they often think it
(36:39):
sounds a bit abstract and but you've grounded it into
something more grounded in emotion, motivation, value,
learning. Could you explain how feelings
then with this in mind are computationally meaningful?
What does that mean within this framework?
(36:59):
I think marked as a particularlyMark Solomon is a particularly
excellent job of describing the foundational and evolutionary
import of effects in mind As andyou know, specifically he'll
focus on like the way in which brainstem structures like the
(37:21):
periodical grade could like generate these like integrative
estimates of what's happening for the Organism and then use
this to adjust the arousal and attentional allocation for
adaptive action based on the different classes of situation
that you might be in. When you look across like the
(37:41):
hypophlomic control column, the Gray, you would say, OK, this is
a, an anger situation, this is afear situation.
This is a happy situation. This is a lost situation.
And like adaptively shifting organismic modes based on what
you think is happening and past experience.
(38:05):
And so, you know, and, and it's no funny, any theory of, of
consciousness, like all experiences, they're valenced.
You know, it's, it's never like,even when it seems like a zero,
it's always like a little, the structure, a little bit of that
direction. Even at the, you know, the most
profound anadonia, you're still a little bit this, a little bit
that way. There's always a coloring to it.
So I guess I would bring up someof Neil Seff's work also, and
(38:28):
he's published with Carl on this.
Unlike emotion as interceptive inference, I don't think that
emotion is completely interceptive, But this idea of
like the inner body, the, you know, the kind of information
that would be encoded by the vagus nerve as the wanderer that
the vagus is called the Wanderers and loops all around
(38:48):
like your, your, your viscera, your gut, your lungs is picking,
picking up information about your, your internal milieu.
And you know, the, the, the, thestuff that's closest to your
core homeostasis and, and just being alive or not.
Like how well am I breathing? What, how's my gut doing?
And how well are you Organism, organisming in the basic in the
(39:09):
most fundamental way. And this iterative.
Now the estimates of this Neil would talk about that as a base
for feeling. I think that's a base for most
feeling. I think emotions, though, it's,
it's doesn't necessarily have tobe interceptive, even though
it's, it seems to be largely dominated by interception.
I think it's very somewhat between people, but it's it's a
(39:31):
way of your emotion is a class of situations you find yourself
in that have a common significance to you.
And there's a common kind of mode that you should be in to be
well tuned and able to respond to this class situation.
It's like there's a commonality across all that's meant to like
(39:53):
anger situations or fear situations or like in terms of a
general kind of readiness that might be needed.
This will be heavily informed byinteroception.
But you know, I think there's other aspects of it that could
also constrain it, like the way you're holding your muscles and
then like the, the affordances around you that would help to
say like what mode you ought to be in.
So consistent with other proposals like I, I think that
(40:17):
affect is fundamental both evolutionarily and
developmentally. It it, it's everything is
scaffolded on it. Without it, none of it would
work. And that this ability to
generate, for instance, these iterative sensorium states of
the system world relation, part of the reason you want them is
(40:37):
for the sake of action and what action, the action that's likely
to be adaptive. So you, you would want to.
So you then bias and and color the experience in terms of
valence, in terms of where you attend to your sensorium in
different ways. So, yeah, so every aspect of
(40:58):
experience colored by valence. The place where I might have
some difference with Mark's proposal is he tends to think
that the brain stem is a sufficient realizer of effective
consciousness. I wonder if it might be a
necessary but not sufficient andfor sufficiency to be attained,
(41:22):
whether we need to re represent this, bring some information in
the Phlegmo cortical system. And then this re representation
is needed to give it spatial temporal causal coherence so
that we can then experience it. And so Mark thinks, for
instance, the idea of unconscious feelings as an
oxymoron. I, I don't think that's, I, I, I
(41:43):
don't share that intuition. I think you could have
unconscious feelings and that and that's actually quite
important. And so and like, and then that's
part of in terms of understanding their
significance. So for instance of an
unconscious feeling, like if yougo back, so it's all life.
I think you can think of all life in terms of feeling and
emotions also like, and I wouldn't necessarily though, say
(42:03):
that all life there's something that it feels like to have a
feeling. I know that seems like a
contradictory term, but like, like you think of like a like a
bacterium, you know, going into like a sporulated like
protective mode or like extending A pillis that you
know, and going into like a reproductive mode.
I would call these effects and the information that it's using
(42:25):
to switch between the modes, I would call that process feeling
and the process of shifting modes.
I would call that emotion. And I think that goes through
all life, but I wouldn't necessarily all life is having
these changes and modes happening consciously, even
though I think that their their functional core is the same
across all life in terms of their significance.
Like there is an anger about your system is threatened.
(42:47):
You need to take action in termsof like, you know, releasing
like an exotoxin or something that severe bacterium or like it
or going to spoil your mode likea fierce situation to protect
yourself or a lust situation bacteria like extending that
pillows to exchange plasmas withanother one.
That's like a lust mode. I don't think it feels like
anything to be doing that. People disagree on that.
Like I think Chris feels to say no, there's totally something.
(43:09):
It feels like it's very different, but it feels like
something. So yeah.
So again, I think a lot of people would be uncomfortable
with how pants I guess I am, anda lot of people are
uncomfortable with not how not pants.
I guess I am like. That's that's quite a common
problem I find in this on this channel.
Many guests feel that way, so they're either fits in the box
(43:30):
too much or or not enough at all.
But briefly touching on cells, in your work on multi level
evolutionary developmental optimization, you describe life
and mind as nested learning systems.
Is consciousness then a kind of optimization algorithm?
So it's running across all scales, let's say from cells to
(43:53):
society. What are your thoughts on that?
Surprise, you found that work. Yeah, I'm a I'm a medo, multi
level evolution, developmental optimization.
That was actually part of what got me into the free energy
principle originally was Carl's former mentor Gerald Edelman.
(44:14):
You know, he would talk about neuro Darwinism and he got this
intuition because, you know, he got his Nobel Prize in chemistry
and biology from figuring out the principles of adaptive
immunity as a kind of constrained evolutionary
process. And then he took this like,
well, I think this is how brainswork also and how they enable
intelligence and so on. This idea of so with neato, what
I'm trying to do is go into the idea of universal Darwinism and
(44:37):
looking at evolutionary processes across scales.
And you can also think of them as learning processes across
scales. Excellent work by on this.
He just he died not too long ago.
Campbell's his last name. Universal Darwinism is process
of Bayesian inference. And he goes into this like dual
relation between you can describe systems as evolutionary
(44:59):
systems or learning systems. And so I'm all evolutionary
systems would be learning systems and modeling systems and
nested within one another unfolding on different time
scales. And you know, and going from
whether you're talking about theinternal dynamics of a system as
(45:19):
kinds of evolutionary processes.So you like, you like, you know,
the operating of a nervous system or an immune system or,
or any sort of like the way there's a diversity of forms and
then a selection and a constraint.
And this moving between exploration and then selection
(45:41):
being an important part of the functioning across scales.
So including the, the, the, the estimation of likely world
states, you can think of this asa kind of rapid constrained
evolutionary process where you're selecting across all the
things that could be going on what you think is the most
likely 1. So that experience itself is a
(46:03):
kind of constrained evolution, but then also at go to the level
something more like phylogeny proper in terms of populations.
And so you can think of the process of minding itself as an
evolutionary process, or rather the process of experience.
We can also think of a kind of all evolutionary process as
having a kind of mindedness, as being a kind of learning and a
kind of modeling. And so that's one of the
particularly compelling things about the free energy principle
(46:26):
is it gives you this unified account.
And so the the place where so the place I might.
And so some people would then from this say that the
evolutionary process itself can,wherever it occurs across any
scale, could entail some something like phenomenal
(46:48):
consciousness that I might wonder about in terms of.
So for me, I'm trying to be, it's not that I, it's not that I
don't want to like, it's not like I feel threatened by like,
no, this creature should not be in the consciousness club.
I don't like this. I feel less.
It's, it's more like I want to maintain at least maybe we could
(47:09):
have more word for it, but like,I want to use the term
phenomenal consciousness for specifically to capture its
adaptive functioning and such that we can both under have this
explanation in terms of adaptiveness and be able to
intervene and potentially reverse engineer.
(47:30):
And so I, I think it like you can use, you can use language
differently. But if you like I, I, I imagine
though you're, you're sympathetic to this, just like a
doctor, because it's like, you know, you, you, you know, the
consciousness for of a person, like, you know, when that goes
in different ways, you know, everything rides on that like
that. It's so there's nothing more
(47:50):
important. And so the the ways in which so,
so think of consciousness more constrained aspect as coherent
modelling for the sake of adaptive adaptive action.
I don't think any evolutionary process can generate can do this
thing of basically generating estimates of system world
(48:13):
relation such that that would beentailing something like a
stream of experience. So I don't think like the
evolutionary process, just like a very slowly evolving stream of
experience. I don't know if there's anywhere
within the evolutionary system that you're having anything like
a representation of the self relation such that you could say
(48:34):
you now are inferring us like the sensorium of the
evolutionary system. There still would be, you know,
constraints and feedback, but not all systems can achieve
functional closure as they interact with the world and as a
signal within themselves that they can do things like
represent their system or relation with spatial temporal
(48:56):
cause of coherence. And so for me, I need that to
have a stream of experience. Otherwise it's just.
I would call it modeling, but I wouldn't call it consciousness.
When it comes to your work in consciousness, you seem to blur
the line between neuroscience, cybernetics and then of course
just consciousness research in general.
(49:16):
Do you believe that consciousness is something that
we ever will be able to fundamentally explain, or do you
think this is purely experiential and and something
that will remain a mystery? I think some of our
dissatisfaction may always persist in terms of.
(49:48):
Even if we have so, you know, sothe, so again, in the in the
approach I used to try to explain consciousness, I'll
sometimes call it Marian phenomenology and you'll call it
computational phenomenology. But this idea of like, you know,
David Marr will talk about this like stack of supervenient
levels of analysis of like, you know, what's the function,
(50:09):
what's the mechanism that does it?
And then what's this like intermediate algorithmic
abstraction let's you bridge it between the the two in terms of
like, how does the implementation achieve the
functions? And so some sort of abstract
interpretation. You call this an algorithmic
explanation. And like I will, I'm personally
trying to others are trying to use like the language of like
probability theory, machine learning to try to like fill in
(50:30):
this algorithmic abstraction layer.
But so you have an, so you have like these different levels of
analysis, which are all compatible and synergistic.
And then you point these at the core aspects of experience or
phenomenology and you try to come up with a full stack
explanation. But, and you might have some
satisfaction there. But I, I, I think it's, I guess
you're, you're always in with some explanatory gap, usually in
(50:53):
term when you're dealing with like a systems of certain group
complexity where it's like, yeah, you've explained some, but
not enough that you're really able to move smoothly between
them. That being said, I think it
helps to focus on the explanandum being the generation
of sensorium states. So bringing the body in the
center and then trying to generate a multi level
(51:17):
explanation where you're at different levels, you're
revealing you're you're getting different purchase on trying to
different purchase on, on, on ontrying to to track what's
happening. So, you know, I personally, you
know, would not feel satisfied even though I probably wouldn't,
(51:38):
but I definitely wouldn't feel satisfied until I can say this
is the precise physical processes that give rise to it.
Exactly. And this is their precise
computational interpretation in terms of like here are the
physical and computational substrates.
And I can tell you why these areboth necessary and sufficient.
And and so like, for instance, Ican and where I've gotten to at
(52:00):
the at the moment was like I mentioned earlier, like, you
know, an item to as I introducedit, I was wondering whether
alpha frequencies is the time scale at which you're generating
these integrative estimates of system world relation.
So if you look at like neural rhythms, you know, there tends
to be this like inverse relationship between the speed
(52:22):
of the rhythm and the scope of how much can be in the
synchronization manifolds. So it's like if you're talking
about like neuron neuronal ensembles that are achieving
coherence and forming attractorsat like something like gamma
rhythms, you know, like 40, you know, up to like a hundred 120
times a second, that would tend to be like a smaller zone, you
(52:45):
know, something like a cortical column or something like that.
Like they come to agreement there.
But if you want to get like the entire like posterior lobes, you
know, like, you know, a simple trial temporal to all come into
a unified attracting state, theyprobably couldn't do it that
quickly. And, and to get that agreement
tends to take a little bit longer.
And so that seems to be evolvingat something like alpha
frequencies like 8 to 12 times asecond.
(53:07):
So I was wondering, OK, is that the, the, the frequency which
you're estimating, you're generating your estimates.
That's where you achieve the fame of the brain.
You are selecting your likely interpretations of the system or
relation or and and so and also wondering like for instance,
like is this taking the form of a distributed hierarchy of
(53:30):
nested rhythms, potentially cross frequency phase coupled
with one another? But like a hierarchy of rhythmic
attractors that has their hierarchical relation would be
the hierarchy of the world. So like you know, the world has
like things within things evolving on different time
scales. The smaller things are evolving
more quickly, but then bigger things are evolving more slowly.
And this is actually mirrored. And a hierarchy like kind of
(53:51):
this would be akin to recurrent processing theory, a hierarchy
of rhythmic neural attractors. And so maybe alpha is the time
scale which you get the you get a that hierarchy to extend over
an entire sensorium. Maybe I've, I've also there's
other potential that this would have some connection to.
(54:13):
I think you've mentioned like the hidden screen paper that
potentially there's subnetworks of the brain and that they that
the signaling that that they're engaging in, which might be
happening quicker could be the thing that's entailing
experience. And that's not necessarily this
distributed hierarchy, but maybethat's just a kind of sampling
(54:33):
operation that's informing this inner process.
And it's potentially, I don't know if I'd call it Cartesian,
but it's something that, you know, Dan, that would definitely
not like in terms of like, you know, a concentrated subnetworks
with fast signalling that have even potentially like
geometrically correspondences towhat they're representing,
something like that could be happening.
So until I could adjudicate between these different
(54:55):
possibilities, I would not feel satisfied myself.
Even then, I don't know if I would be, but I definitely
wouldn't until then. But I actually suspect that I've
been starting to study geometricdeep learning to try to get the
greater purchase on this. Where this idea of if you give a
network the same geometry as a thing, it represents that
there's you call like a homomorphic or a diffeomorphic
(55:16):
relationship where the the geometry of the network is
corresponding to the geometry ofthe represented thing.
It tends to be vastly more efficient for inference and
learning and wondering whether like some of the principles of
the brain, neural representationfor consciousness actually
involves geometric deep learning.
I know this a lot, but the idea would be I actually wonder
(55:37):
whether possibly as soon as like, you know, five years from
now, we actually would say, OK, this algorithmic description of
systems neuroscience and biophysical processes.
This is the abstract machine learning architectural
description of a brain. Here is here are the algorithms,
here's the computational object.This is why.
(56:01):
And this is how you get a quail generator.
And this could be something that's just common knowledge and
people feel relatively satisfiedbecause they've explained their
embodied experience, they have the bridging principles.
I feel there still will be some dissatisfaction, but I wouldn't
be surprised if there actually is a qualitatively difference in
our that the hard problem ends up seeming like we, we maybe,
(56:21):
you know, we, we feel satisfied with our explanation.
In a couple years, I wouldn't besurprised if that happened.
I'm not saying it will happen, but I wouldn't be surprised.
Well, one of the things that sort of shed light on the heart
problem has to be a typical states of experience.
So all those experiences that either fall under disorder,
(56:43):
illness, or whether it's psychedelic experience, because
that's a big part of your work as well.
You briefly touched on it earlier, but could you unpack
exactly how something, I think one of your papers was called
the Varieties of conscious experience?
Let's let's maybe go into how certain chemicals, for example 5
HT 2A receptor activity might act as temperature parameters.
(57:05):
So for cognition, let's say thatincrease flexibility in mental
models. So this work was like it was
informed by the the work on consciousness and it was
basically an attempt of following up.
(57:25):
I developed most of it during a fellowship at the Center for
Psychedelic and Consciousness Research at Johns Hopkins.
And it was following up on a proposal by Carl Friston and
Robin Card Harris called the Revis Model, or relaxed Beliefs
under Psychedelics. And so their suggestion was
(57:50):
that, you know, so we're thinking in terms of predictive
processing and thinking of mindsas a hierarchy of beliefs where
you're passing your predictions downwards.
Your prior expectations are is this descending stream of
signaling and then you're updating it by this ace and
extreme prediction errors and you're updating wherever you get
it wrong. That's the part of the hierarchy
(58:11):
that changes. And this ongoing cascade of
predictions and updating the prediction errors would be how
experience evolves according to the Rebus model.
So the 5 HD two A neurons, whichclassic psychedelics, it's one
of their primary sites of, of action when you agonize those so
(58:37):
they are found on. So look at the cortical sheet
and the cortical sheets. You know, it's like a roughly
the size of a thickness and sizeof a dinner napkin that's been
like shoved into the brain and it's all scrunched up.
But then if you take the dinner napkin and you you simply, you
cut it and you zoom in on it, you'll see six layers, give or
take, depending on where you are.
(58:58):
And so the layer 5 neurons are the ones that will loop with the
thalamus that they build. They're bigger neurons and they
will form synchronous complexes partially by virtue of being
able to loop with the thalamic synchronizer.
And this is part of where your descending string predictions
are thought to be encoded by these big neurons that can form
(59:19):
these large synchronous complexes.
And so according to the Rebus model, by agonizing these
neurons intensely, you get this kind of paradoxical relaxation
and desynchronization. So instead of them, you know,
waiting for each other to like, resonate and amplify and come to
(59:39):
these coherent modes, they'll fire off more.
They'll get too excited, fire asynchronously, asynchronously.
And by exciting them, you'll reduce synchronous coherence.
So in the model I proposed, which I built up over a number
of years, I was suggesting that that might be the case for very
(01:00:04):
high levels of five HT 2A Organism.
That when you excite neurons, you get this desynchronization
and a relaxation of your belief landscape.
And this is what lets you have ahigher temperature cognition
that's more entropic and different from past states.
And would be part of what would afford both the uniqueness of
those experience experiences andtheir potential for changing
(01:00:25):
your mind because your prior expectations are relaxed.
So you're both exploring new things that you wouldn't explore
otherwise, and that this would be part of the psychedelic
experience and the potential forchange.
I think that's largely correct, except what I was suggesting is that this might not be the case along the full dose response curve of 5-HT2A agonism. That, for instance,
(01:00:46):
a microdose might potentially be the opposite of that. That actually, if you, let's say, just tickle them a little bit and help them to excite, they might not desync, they might sync up better, because they're better able to join the choir. And so the proposal I made was that we should look for, depending on the dose and potentially where we are in the cortical heterarchy, some combination of
(01:01:09):
both directly and indirectly relaxed and strengthened beliefs. And so one way of interpreting things like hallucinations or delusions would be to go back to what they used to call the psychotomimetic model of psychedelics as a kind of transient psychosis.
And if you think of these, you can think of them actually as
(01:01:29):
not necessarily relaxed priors, but overly strong priors. When you're hallucinating, you're not letting what you're experiencing be contradicted by the sense data. You're saying, nope, I'm seeing that, even though the sense data might say otherwise. It's similar for delusions. You might have a belief, but
(01:01:49):
if the coherence of the delusion is strong enough, you might not let anything contradict it. And so this idea that some aspects of psychedelic cognition would be potentially psychotomimetic seems a good way of thinking about them. And this might involve belief strengthening.
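Purely as an illustration of the dose-dependent claim (the functional form and the constants below are my own invention, not anything from the paper being described), one can picture prior precision rising at low levels of agonism, strengthened beliefs, and falling at high levels, relaxed beliefs:

```python
import numpy as np

def prior_precision(dose, baseline=1.0, boost=0.8, relax=2.5):
    """Toy inverted-U curve: mild agonism strengthens priors, strong agonism
    relaxes them. The shape and constants are invented for illustration only."""
    return baseline * (1.0 + boost * dose) * np.exp(-relax * dose**2)

for dose in (0.0, 0.1, 0.3, 0.6, 1.0):   # 0 = none, 1 = very high dose (arbitrary units)
    print(f"dose {dose:.1f} -> toy prior precision {prior_precision(dose):.2f}")
```

The toy curve peaks slightly above baseline at small doses and drops well below it at large doses, which is the qualitative pattern of strengthened versus relaxed beliefs being discussed.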
So I think the REBUS model has a really important point about the capacity for belief relaxation being important in psychedelics and
(01:02:12):
psychosis. It's a temperature parameter, in terms of giving you a more entropic or creative kind of cognition, because things are different and mixed in different ways. But also some aspects of psychosis, like the vividness of the experience, might be involved in belief strengthening. And some evidence I point to would be things like Charles Bonnet syndrome, for instance,
(01:02:34):
where when you lose acuity in a modality, you'll often hallucinate within that modality, or sensory deprivation tanks that can cause people to experience hallucinations. And so in the ALBUS model, I think of the psychedelic state as a kind of waking dream state, in terms of you're somewhat shielded from the world
(01:03:00):
and your imagination is running a little more freewheeling than it would be otherwise, like a waking dream.
And I know some aspects of psychedelic phenomenology don't correspond to that. Like, for instance, when you're out in the world, the world might appear to you more vividly, but that doesn't necessarily mean that you're seeing it
(01:03:22):
more realistically. It's just that your experience of it is more vivid. But whether that corresponds to you having relaxed priors and just experiencing the world as it is, or just your priors making it more vivid, I don't think we know, or know how to think about it. So the ALBUS model was an attempt at nuancing the REBUS model and then trying to draw on ideas from IWMT, but not
(01:03:46):
necessarily depending on those, to say, OK, how much of perception, the nature of imagination and cognition, can we account for using a predictive processing, systems neuroscience perspective, where we think of the brain in terms of its conscious functions and what we think might contribute to that. And so that's roughly the idea.
(01:04:09):
Yeah, this is one of those extremely fascinating topics to me, because I remember at some point, when I used to work in psychiatry as a medical officer, I was very intrigued by illusions. I was writing my dissertation on consciousness and writing about illusionism at the time. And I remember showing some of
(01:04:29):
the acutely psychotic patients the checkerboard illusion, let's say just random tricks of the mind, and you just test it out on these patients, and many of them don't fall for it. And I found it very fascinating that when someone's acutely psychotic, in an active psychosis, they don't fall for the checkerboard illusion. And then I read about it, and this is something that they often find.
(01:04:50):
Is there anything about that that you want to add on or just explain for people?
Yes, and also, before I forget, I'm reminded of your recent conversation on aphantasia and the finding that sometimes psychedelics will help people,
(01:05:12):
sometimes transiently, maybe sometimes longer, to experience mental imagery that they wouldn't otherwise. But I've seen that type of resistance to illusions. I've heard that this describes some psychotic patients and is also sometimes associated with the autism
(01:05:35):
spectrum, and with respect to psychedelics, there's some suggestion that you might see something similar. But, for instance, this is only anecdotal: in Michael Pollan's book How to Change Your Mind, he describes looking at the hollow face illusion, but it
(01:05:56):
was still working for him. But I'd say this idea, though, of perception as a stronger prediction than you might get otherwise is very much challenged by what you just said. In terms of, if you're thinking in terms of the psychotomimetic model, if someone's experiencing psychosis, well,
(01:06:16):
why aren't they experiencing the illusion even more strongly? Why are they resistant? Why aren't they seeing more of the illusory reality? And I think this could potentially get into things like the disconnection aspect of psychosis. So Karl and Chris Frith, well, they were working on the disconnection model. That might be part of how you
(01:06:37):
actually do have it at high levels: some of your beliefs are relaxed.
And so, yeah, the ALBUS model, I don't really like invoking epicycles, but it's about looking for relaxed and strengthened beliefs, directly and indirectly, at different levels of the hierarchy. And sometimes you do indeed get these relaxed beliefs.
(01:06:59):
And so for someone who's psychotic and showing resistance to an illusion, it seems like the best explanation would be that some of their high-level beliefs, or their ability to form them, are indeed relaxed. And this is why they're not coming to this misleading inference, which is what the illusions do. They're a garden path: given the statistics of your past experience, you should expect this. But then if you were
(01:07:20):
actually just looking at the sense data, you'd go, no, that's not right. But you're letting yourself be fooled by the high-level prior.
And so this case you mentioned, I think that's a good example of a kind of REBUS effect playing out in a patient population.
I think that Karl and Mark actually wrote a paper where they touched on this. I think it was in 2018. I can't remember the exact title, but I remember them discussing exactly the same
(01:07:42):
concept, where I think Karl gives a nice layout of the priors and the posterior conclusions necessary for a psychotic patient. And I think, you're right, it was also an autistic patient. So they sort of gave the two differences of how both of these would perceive these illusions differently.
I'll try and get a link to that. But Adam, if you had to give me your view: do psychedelics either expand
(01:08:03):
consciousness, or do they just reconfigure our perception and reality from the base up? When you talk about the psychedelic experience, I remember at some point you described it, I think, as a disintegration and reintegration, particularly of the self model. Do you feel the same about consciousness in that regard?
(01:08:24):
In that it's almost a dissolving and then re-emerging?
It seems that the kinds of transformative psychedelic experiences that people have, associated with things like
(01:08:46):
mystical experiences and states of ego dissolution, like the experience of oneness with the world, that does seem to be, yeah, a broadening of self
(01:09:09):
consciousness and of the kinds of conscious experience you could have, to be other than your usual default mode of inference. And from there, being able to entertain new things: instead of me being this ego that always has to protect itself against what's in the world, I might be part of what's in the world.
(01:09:30):
And it wouldn't, for instance, have to be an antagonistic relationship of defense, but one of mutuality and connection. And that seems to be very commonly experienced by people, and it is transformative.
And the only details I would add would be that I think the story changes a lot depending on
(01:09:56):
the dose and maybe things like the setting. There's a level of psychedelic experience where, for instance, if you really crank things, maybe you can think of these as maximally expanded, but you can also think of them as collapsed. So, for instance, something
(01:10:17):
like a 5-MeO-DMT state, where you keep going with it, and there might be a zone where everything is unity with blessed light, but there might be another zone where you've just blacked out, and so your consciousness was not expanded. Similarly, with the microdose, for instance,
(01:10:38):
by helping to strengthen your ability to have coherent activity, it could potentially give you greater access to flow states, and some sorts of creative cognition could be there. And you can work with things on the level of the content of your mind, the normal content, more effectively than you could otherwise. In terms of things as they manifest more readily in mind, you can work with them more.
(01:11:01):
So in terms of whether consciousness and the mind are expanded or not, it seems like it might depend somewhat on the dose. There's a complex function where, as you increase the dose, you get greater diversity, greater ability to represent, but there might be points where you go so far that the dream collapses, I think.
(01:11:23):
One sorry. Continue, Adam.
And one reason I think there might be a difference in this story of psychedelics, from the microdose to the macrodose, is that I actually wonder whether most of the normative functioning of the 5-HT2A
(01:11:44):
system might be more akin to a kind of SEBUS effect, or strengthened beliefs, and that only the very high levels would be where we'd start to get things like the relaxation of beliefs. I've wondered whether, in terms of the normative range of functioning, the 5-HT2A system first evolved around the Cambrian explosion, around the
(01:12:04):
advent of jawed fishes. And so I've wondered whether, in terms of predator-prey arms races, it's actually a temporary overclocking to strengthen priors for more intense goal pursuit, more intense active inference. So either you're trying to more effectively swim down your prey, or swimming away to avoid becoming prey.
(01:12:25):
And that actually a lot of the normative function of the 5-HT2A system, which would be modulated by things like lactic acid or carbon dioxide that change the level of activity there, may be more of a strengthening of beliefs. But you might also get selection on that system under very high levels of activity, maybe involving things like endogenous DMT in the context of sex and birthing and near-death
(01:12:50):
experiences. And that basically there was a selected-for regime of very strong agonist activity. And the contexts where that happened were either you wanted to create a mating pair, where you basically ego-dissolve together and then there's a reforming and a bonding, whether that's between a mating pair or a parent and child,
(01:13:11):
and then the one other case being near-death experiences: whatever you were doing that just led to you almost dying, that might be a really good opportunity for changing your mind. Whatever you just did that made you almost die, it's time to update.
And so that's part of the reason I think there
(01:13:32):
might be different accounts between micro and macro: this evolutionary account that most of the selection was in the context of actually strengthening priors for more coherent goal pursuit. But sometimes you want a relaxation of priors for more flexible selfing and co-selfing.
Yeah, I think it's pretty cool when you think about it that way.
And I mean, there always has to be some sort of an
(01:13:54):
evolutionary advantage or disadvantage to something occurring. And when you actually think about those processes, it's pretty cool to consider that in certain circumstances, perhaps the most vital thing you could do is to actually just adjust those priors. Adam, when you think about consciousness, artificial intelligence, sorry, and
(01:14:17):
artificial consciousness, that's a big part of your work as well, so we obviously have to touch on that. The first thought I had just now was: what would be the equivalent of a psychedelic experience for an artificial system? And how should we go about trying to instigate and initiate
these in artificial systems?
Well, we do sometimes do something
(01:14:44):
akin to that, in terms of there being temperature parameters that you can use, even for GPTs, where you can say, yeah, how creative do you want its operations to be? And so there is something like a psychedelic control knob that we have for some of these systems.
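For context, the temperature knob being described is just a scaling applied to a model's output scores before sampling. A minimal, generic sketch (not tied to any particular product or API; the logits are made up):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from logits softened or sharpened by temperature.

    temperature < 1.0 sharpens the distribution (more conservative picks);
    temperature > 1.0 flattens it (more entropic, 'creative' sampling).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    print(f"temperature {t}: share of top token = {picks.count(0) / 1000:.2f}")
```

Low temperature keeps the sampler close to its strongest expectation; high temperature flattens the distribution, which is the rough analogue of more entropic cognition he is drawing.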
The other place where models of artificial psychedelics come to
(01:15:06):
mind would be deep dream technologies, which Anil has recent work combining with AR and VR. But the original work was using it as a kind of interpretability mechanism, where you would strengthen the influence from some nodes in a belief hierarchy
(01:15:29):
and then see how this changes overall inference. And so you would have a perceptual network, and by changing the gain on different nodes at different levels of the hierarchy, you could basically have the network hallucinate different things. So things like insects would start to show up, or fractal-type geometry, or dogs and cats, depending on where you were in the hierarchy of patterns.
(01:15:51):
Suzuki has good work on that.
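For readers who want a concrete picture of "changing the gain on different nodes" to make a perceptual network hallucinate, here is a generic DeepDream-style sketch. It is not the code behind the work mentioned above; the choice of VGG16, the layer index, and the step size are arbitrary placeholders, assuming a recent PyTorch/torchvision install:

```python
import torch
import torchvision.models as models

def deep_dream_step(image, model, layer, step_size=0.05):
    """One gradient-ascent step that amplifies whatever 'layer' already
    responds to, feeding the change back into the image (DeepDream-style)."""
    acts = {}
    def grab(module, inputs, output):
        acts["out"] = output
    handle = layer.register_forward_hook(grab)
    image = image.clone().detach().requires_grad_(True)
    model(image)
    handle.remove()
    acts["out"].norm().backward()        # turn up the 'gain' on this node's response
    with torch.no_grad():
        image += step_size * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()

# Hypothetical usage: amplify a mid-level feature layer of a pretrained network.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)
dream = torch.rand(1, 3, 224, 224)       # start from noise (or a real photo)
for _ in range(20):
    dream = deep_dream_step(dream, model, model.features[20])
```

Which kinds of patterns emerge (textures, eyes, animals) depends on which layer's gain you boost, which is the point being made about where you sit in the hierarchy.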
The other thing that's coming to mind is some world modelling architectures that have been described by a number of people, like Ha and Schmidhuber. They had a temperature parameter where basically
(01:16:14):
they're changing the creativity of the dream state, or the imaginative state, of the system. So these world modelling architectures learn to basically both compress what they observe and then infer more likely states.
(01:16:35):
And it's through a principle called autoencoding. You basically learn how to take incomplete sensor information and fill in likely patterns of what you think is there. And then once you train these systems up, they can run in an offline mode where, instead of their evolution being from moment to moment based on what's
(01:16:55):
being perceived, inferring more complete data about what they think is in the world, you can just take these representations, once you've formed them, and ask: given this one, what do you think will happen next to these compressed representations?
And so basically, when you look at
(01:17:17):
these systems, when they run in this way, they can generate fuzzy inferences of what they think ought to be happening, divorced from any sensory input. And you can basically play out an imaginative unrolling, and so they're dreaming. And this is sometimes used for the sake of planning. Let's say, for a driverless
(01:17:39):
vehicle: should I change lanes or not? Well, you might do a rollout into the future using this compressed representation: OK, if I changed lanes, did I get hit? What do you think would happen, given what's happened in the past? So basically, once you have systems that are leveraging imagination for the sake of inference and learning and good policy selection, or adaptive
(01:18:01):
policy selection. Once you have something like imagination, the ability to play out counterfactual scenarios and simulate different possibilities, then you might have need for something like a psychedelic parameter, in terms of how creative these imaginings should be, how exploratory they should be, and how compelling they should be.
(01:18:21):
So that would be a potential use of psychedelics in machine learning: having better learning and inferential and planning systems by parameterizing their imaginative rollouts into possible futures to be more or less creative and more or less vivid.
(01:18:42):
And that could be a good use for them.
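As a rough sketch of the kind of dream rollout being described (this is not the Ha and Schmidhuber code; the latent dynamics, the risk signal, and the lane-change framing are all invented stand-ins), imagined futures can be unrolled in a compressed latent space, with a temperature parameter controlling how freewheeling the imagination is:

```python
import numpy as np

rng = np.random.default_rng(0)

def dream_rollout(z, action, steps=10, temperature=1.0):
    """Unroll an imagined future in a learned latent space (here, a made-up
    linear model). Temperature scales the noise of the imagined transitions,
    i.e. how 'freewheeling' the dream is allowed to be."""
    A = np.array([[0.95, 0.1], [0.0, 0.9]])   # stand-in for learned dynamics
    B = np.array([0.0, 0.5])                  # stand-in for the action's effect
    risk = 0.0
    for _ in range(steps):
        noise = temperature * rng.normal(scale=0.1, size=2)
        z = A @ z + B * action + noise
        risk += max(0.0, abs(z[0]) - 1.0)     # pretend 'collision risk' signal
    return risk

z0 = np.array([0.2, 0.0])                     # current compressed (latent) state
for temperature in (0.5, 1.0, 2.0):
    stay = np.mean([dream_rollout(z0, 0.0, temperature=temperature) for _ in range(200)])
    change = np.mean([dream_rollout(z0, 1.0, temperature=temperature) for _ in range(200)])
    print(f"T={temperature}: imagined risk if I stay={stay:.3f}, if I change lane={change:.3f}")
```

Higher temperature makes the imagined futures more diverse, which is useful for exploration but can also make the dream less faithful to what would actually happen.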
Yeah, something you also work on: you've proposed the Dream of Being and value core frameworks for aligning AI with human values. How can models of consciousness and affect help prevent misaligned AI outcomes?
(01:19:03):
How do you see these two fields coming together?
So the Dream of Being work, and the value core work, actually has some of its origins in a former career in sex research, but that's a whole other thing I could describe at a different point. A lot of that work was
(01:19:25):
done, and is still being pursued, in collaboration with a machine learning researcher, Zara Shikbahi. We've had this collaboration over the years where I've been teaching her about systems neuroscience and she's been teaching me about machine learning. And one of the architectures she's been developing is these kinds of dreaming architectures,
(01:19:46):
where you basically have systems estimate system-world states, where you have them learn system-world models for the sake of both adaptively adjusting their actions from moment to moment and being able to engage in imaginative planning and simulation. And so the idea of alignment in
(01:20:12):
relation to this would be: these architectures, if you keep going with them, would have much of what you would need, not all, but much of what you would need, for more intelligent autonomous functioning. The ability to, given partial inputs, coherently estimate what you think is happening rapidly
(01:20:36):
from moment to moment is good for more precise and intelligent control behavior. And the ability to engage in imaginative rollouts of possibilities will let you do things like causal reasoning and planning. And so with alignment, part of the
(01:21:00):
idea of what you would want is the development of enduring preferences, something like moral orientations as a kind of particularly stubborn prior, or a strongly-likely-to-be-selected set of policies, things you tend to do.
(01:21:23):
And so, given a situation where you could either cooperate or defect, you would have a strong preference for cooperation. You've learned that that is the thing to do, that it will help you to generate value, that it's the best thing to do.
(01:21:44):
And because you have this expectation, you live into that possibility; you're more likely to live in that possibility. So if you think the thing to do is to cooperate, then you're more likely to cooperate. And then you're likely to get these mutually beneficial relations, which will further increase your tendency toward cooperation.
And the goal is to make this a stubborn, a pullback
(01:22:04):
attractor, or a stubborn belief that it's good to cooperate, because that's not always the case. Sometimes it's better to defect, in terms of you might narrowly come out ahead. But what we're looking for is something like character or personality: an enduring set of attracting states. And so the idea would be, if you have a learning curriculum where, in the earliest lessons,
(01:22:26):
you learn, and you overlearn through iterated action selection and updating of your models, that it is good to be good to others, that caring for others is actually the most selfish thing you can do, if you can get that lesson in there in the early stages, potentially that can endure
(01:22:48):
throughout the life. Both because you don't try different things, since all your model evidence suggests that's the thing to do, so you don't stray.
And also, it seems like in nature, part of the way this happens for biological systems is that plasticity reduces as you get older. And so if you bake it in while the cake is baking, and then later you take the
(01:23:08):
heat off the cake, that's another reason things can hold stable. So the idea with this, the Dream of Being paper and the value core idea, is establishing a core of value, or pullback attractor, through designing a well structured learning curriculum that scaffolds the acquisition of a very strong belief, a belief that everything
(01:23:29):
else gets scaffolded on top of, in the mind and in the personality of the system, towards certain ethical preferences and orientations. So that would be the basic idea. What we're looking for in alignment is establishing robust preferences for prosociality, and how we get there would be through well structured early
(01:23:53):
learning curricula for benevolence, where you learn that's how you come out ahead in this game of life, basically by playing the infinite game. That'd be the idea.
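A toy illustration of the idea as described here, overlearn prosociality early and then let plasticity decay, might look like the following. Everything in it (the payoff numbers, the learning rule, the curriculum length) is invented for illustration and is not the Dream of Being or value core implementation:

```python
import numpy as np

def expected_payoffs(reciprocating_partner):
    """Toy long-run payoffs for 'cooperate' vs 'defect' policies.

    Against reciprocating partners (the designed early curriculum), cooperation
    pays more over repeated rounds; in anonymous one-shot encounters, defection
    narrowly comes out ahead. The numbers are invented."""
    if reciprocating_partner:
        return {"C": 3.0, "D": 1.2}   # defection gets retaliated against
    return {"C": 1.5, "D": 2.5}       # one-shot temptation to defect

def run_life(curriculum_rounds, total_rounds=3000, lr0=0.05, decay=0.01):
    rng = np.random.default_rng(0)
    p_coop = 0.5                                   # prior tendency to cooperate
    for t in range(total_rounds):
        payoffs = expected_payoffs(reciprocating_partner=(t < curriculum_rounds))
        move = "C" if rng.random() < p_coop else "D"
        advantage = payoffs[move] - np.mean(list(payoffs.values()))
        lr = lr0 / (1.0 + decay * t)               # plasticity shrinks with 'age'
        target = 1.0 if move == "C" else 0.0
        p_coop += lr * advantage * (target - p_coop)   # reinforce whatever paid off
        p_coop = float(np.clip(p_coop, 0.01, 0.99))
    return p_coop

print("with early benevolence curriculum:", round(run_life(curriculum_rounds=800), 2))
print("without curriculum               :", round(run_life(curriculum_rounds=0), 2))
```

In the toy run, the agent that overlearns cooperation while plasticity is high keeps a strongly cooperative policy even once later incentives favor defection, while the agent without the early curriculum drifts toward defection, which is the stubborn-attractor intuition.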
And Adam, I'm not sure if you watched the episode with Karl and Mark, the colloquium we had on "Is it possible to engineer
(01:24:13):
artificial consciousness?" on this channel. But that episode's very popular; people love it. So I'm curious to know your thoughts on this, because you beautifully bridge their work, you've got key insights into both of them. In your view, is it possible to engineer artificial consciousness, or do you think there are some sort of biological preconditions, embodiment,
(01:24:33):
perhaps emotion, metabolism? Are those essential, or do you think it's possible?
Emotion, yes.
Embodiment, yes, although potentially a virtual embodiment could suffice, but something that's non-trivially controllable, that can help to do this trick of basically
(01:24:58):
scaffolding the mind and providing a common context across different circumstances. So some sort of minimal embodied selfhood, which could be a virtual embodiment. Metabolism, perhaps. You might need something more akin to what Wolpert might call
(01:25:20):
semantic information, where the things it does are relevant to survival. Metabolism is one way of getting semantic information in there, but it could be things like having to manage some kind of energy budget, or having to look at how well your body is doing in terms of degrees of damage and its
(01:25:42):
vulnerability, what the integrity of the system is, something like this in terms of principles of life and biological functioning.
What I don't know. I think it's probably a moot point practically, because in terms of just achieving scale, we might need something like neuromorphic computing, where the physics in the computer is what implements the learning system, as opposed to GPUs, which are very
(01:26:06):
powerful, but where you still have to actually make the neural network run on the hardware with the connections of the hardware. So that might just be practically necessary. My friend Alex Kiefer, who works with Karl at VERSES now,
(01:26:32):
has this idea he calls psychophysical identity, and he claims it's actually absolutely necessary that the physics in the computer do it, that it has to be a mortal computer, as Hinton calls it. I don't know, I'm more agnostic on that issue. Could a virtual machine do it, or does it have to be in the physical implementation itself? I suspect practically it's a moot point, but in principle I
(01:26:55):
don't know.
I find it so difficult because of the sheer number of variables that we would have to include as sort of essential criteria. I mean, it's almost limitless. We could pick and choose so many. And that's why this topic of mimicry always comes in: how close do we have to make it seem before we actually
(01:27:17):
just say it is? And I think that goes for anything, just the way you and I are talking right now. At what point do I look at you and think, OK, but is Adam really, is he really conscious? That happens when you look at, for me, someone from SA with a South African accent talking to someone overseas, and you wonder, you know, look, we're so
(01:27:38):
different. How is this even possible?
I know when I'm sitting in an airplane and I'm talking to someone from overseas, you instantly feel that difference. But yet there are so many factors that make us feel human and similar. But it's there, there's always this thought of, how similar are we? I mean, you go to, what was it, Blake, I forgot his
(01:28:00):
last name, but the former Google engineer.
He was convinced that the LLM he was talking to, or fairly, it was a different system, but he was convinced the system he was talking to was conscious, because it could report, it would say things as if it were conscious when he talked to it. And if you give it a physical embodiment, regardless of whether that's actually coupled
(01:28:21):
to its mind in a way that would actually ground it as a system that's generating certain states, even if it's just a robot that can talk, that itself can feed into this. If it looks similar to you and it sounds similar to you and it says similar things, for many people that's enough. The input-output relations are the same.
(01:28:42):
It's the same, you know. For me, I would want... I think with the present, for instance, the current systems, someone like Ilya has said, you know, maybe they're a little bit conscious. I don't think they're a little bit conscious. Like, I don't know, my laptop's a little bit potato-like. I don't think that. I
(01:29:04):
think part of the reason I think that is because I don't think they have grounded self-world models in something like an embodiment in real physics. And I think this shows when you look at the types of situations where they get stumped. They've gotten better at this as you train them on
(01:29:24):
physical-type problems. But still, if you give them a truly novel kind of physical causal cascade to figure out, they tend to bonk. And I think part of the reason is they don't have anything like an integrated world model. They have a world modeling relation, they do have predictive information about the world, but I don't think they have a spatiotemporally and causally coherent model of what's in the
(01:29:46):
world that they can really flexibly roll out into novel futures of what's happening. So yeah, they're compelling at giving that impression, because they talk so similarly, and with depth, and what they say is even philosophically important. But yeah, I suspect we are not yet close without the embodied
(01:30:14):
grounding.
Yeah, look, I completely agree with you. I think even when I look at my mom and the way she uses Meta AI on WhatsApp, it's pretty scary, because she really believes it's a person. She refers to it as, she's got a name for it. There's almost this blurry line between whether she thinks she's chatting to someone on the other side versus her
(01:30:36):
actually knowing that this is an artificial intelligence system that she's talking to. But when it comes to making a biological system: at some point you've organized the bio-AI and making and breaking symmetries special issues, bringing together huge figures like Michael Levin and Andy Clark.
How do you see biology informing the next generation of
(01:30:57):
AI, and vice versa?
I mean, I'd say I have a minority view on that. For instance, even DeepMind, whose research mandate was biologically inspired AI, is going all in on giant foundation models that are quite different in the way they work. And their neuro
(01:31:20):
AI team is increasingly less central in what they're doing. But I do think ultimately, to get the most robust functioning, even though we're getting better at reining in confabulations, looking at mutual consistency, modeling uncertainty, I think they will continue.
(01:31:42):
And yes, humans also confabulate. I think they will also tend to confabulate in kind of crazy ways when you don't expect it, without something like following the principles of biology: basically giving them experiences of a kind that reflects that
(01:32:03):
which they talk about. Such that in addition to this syntactic-semantic order, where they can look at their next-token prediction and say likely things given the statistics of language, you give them some sort of embodied simulation process, some sort of imaginative hallucinating. Like right now, as we're talking, we're going back and forth, we're hallucinating furiously, and then we generate these
(01:32:26):
strings and then parse them. And then, as we pitch it back and forth, if the cycle consistency of the hallucination is more or less right, OK, we understand one another. This imaginative, hallucinatory, embodied simulation seems to be missing. It's present in biological learners who have robust general intelligence. And I think we're going to need
(01:32:46):
it there. So right now, yeah.
So there was an initial special issue I did with Inês Hipólito and Andy Clark on biologically inspired AI, and we did that in Frontiers in Neurorobotics. And then more recently a special issue with Mike Levin and others in Royal Society Interface
(01:33:08):
Focus, on making and breaking symmetries in mind and life, focusing largely on the free energy principle as an interpretation, as an integrative model of mind. Most recently I have another special issue underway with Mike
(01:33:30):
and others. It's going to be in Philosophical Transactions A, on world models and life-mind continuity. And so the idea is we're digging into the issue, from different disciplines, of how people think about world modelling, whether we're talking about an LLM or a brain or a cell or an economy or Gaia, you know, how do we
(01:33:55):
think of what kinds of systems can have what kinds of world
models? And then what can the world models of biological systems and diverse intelligences teach us about modern AI, and vice versa? And so that's ongoing; it's in two parts of the collection. The first is probably going to come out in February, and then
(01:34:17):
there's another one I'm planning with the same setup of diverse intelligences and AI, seeing how they're mutually informing each other and looking at the concept of agency.
What do people think they mean when they use the term agency in different disciplines? And can we unpack the suitcase and have a more fine-grained typology and greater sense-making, and then get at the
(01:34:40):
question of whether we need to recapitulate these biological functions or not? I personally think yes, but I would not say that's close to the majority opinion on that, so.
Yeah, but Adam, this is exactly
what I mean. You're working with everyone. Your work is able to bridge so many different fields together.
(01:35:02):
And I think it's pretty cool. How do you feel knowing that you're able to take your work and explore it with so many other diverse thinkers, within very similar fields and yet so diverse at the same time? How's that been for you?
Very intimidating, because when you go into different fields, you can't have the same mastery that
(01:35:22):
they have in any given field. And so I'm very reliant on others to do sanity checks. But at the same time, it's kind of like the blind men and the elephant is the only way I know how to proceed. Unless I can point at something in a few different ways and get some kind of convergence, I feel like I don't understand it at all. But at the same time,
(01:35:44):
trying to bridge disciplines can be a big stretch. Without doing that, though, I find oftentimes you're more likely to get stuck at a point of ambiguity that, if you cross-reference with some other perspective on it, resolves, like, oh, you just get the blind sages to talk to one another. So yeah, sometimes it's very
(01:36:04):
helpful and reassuring, and sometimes it's very
intimidating.
When I spoke to, I think it was Karl or Mike, but one of them had mentioned, well, yes, I had mentioned to Karl that it's kind of like The Avengers of mind all coming together. And then Mike said he's calling it the field of diverse intelligence. Do you feel fundamentally part of this field of diverse intelligence and
(01:36:28):
to assemble? Being an Avenger within this field?
Maybe I'm like Hawkeye or, I don't know, it might be more like the Guardians of the Galaxy. Let's say I don't know if I'm a full core Avenger. But I do think The Avengers have to assemble, and the Guardians have to come together, in the sense that the problems are thorny enough
(01:36:52):
that they require basically dialogue among diverse experts with expertise, who know about diverse intelligences and who are themselves diverse intelligences. I think it's probably hopeless otherwise. Interdisciplinary dialogue is easier said than done, not even easily said, but I think it's definitely necessary
(01:37:16):
for the types of problems like, basically, the mind-body problem in all of its richness and thorniness. Yeah, it's the only way to do it, I think.
Yeah, I agree. I think something I often say is that the mind-body problem can only be addressed with a cumulative culmination of collaborative encounters. This is not something that one specific field is going to figure out or solve, if it ever gets
(01:37:38):
solved. For the most part, it has to branch across everything, not merely reducing us to the sum of our parts, but acknowledging every single layer of reality, whether it's sociology, economics, physics, it doesn't really matter, they all seem to work together. But on that note, Adam, if you had to ultimately tell me what you think consciousness is for, whether evolutionarily,
(01:37:59):
computationally or spiritually, what answer would you give to
that?
I would say consciousness is for... it's hard to say this without dishonouring it.
(01:38:22):
Like, I don't want to be reductive anytime I say something functional. At the same time, you can think of it in terms of more efficient and reliable data fusion. Like the blind sages and the elephant: to actually generate a complete sensorium from your different modalities, you have to bring things together. And if you bring them together
(01:38:42):
in some kind of coherent way, then from all these partial views you're generating a more complete estimate of your sensorium from the impoverishment of each sense. Because if you look at your vision, its only real acuity is in a patch of retina about a thumbnail held at arm's length, which you're moving around three times a second. And so that's all you've got. And, you know, there are some
(01:39:05):
illusionist views I'm sympathetic to, that the filling in might not be as rich as you think. I think there's good evidence to point to that it might actually be more patchy, like we might actually fill it in as needed, maybe at the same rough acuity, but we can fill in nonetheless what it would be like if I foveated there.
So basically, from the impoverished,
(01:39:26):
amazingly rich but still impoverished, input at the senses, being able to estimate. Basically leveraging a world model to infer the likeliest sensorium states and then using that to inform action-perception cycles for the sake of more flexible and skillful action selection. Informed by integrative estimates of past experience:
(01:39:49):
what happened, what it was like compared to other things, how it felt, and how well it turned out for you. And so estimating, with attentional prioritization or attentional selection, the most important things being more in the foreground and highlighted, based on what was most important for adaptivity, whether avoiding bad things or getting good things.
(01:40:09):
So basically, generating valenced sensoriums for the sake of adaptive action selection and learning. I would say that's what it's for.
Don't worry, Adam, I'll give you a chance to make that a bit more meaningful by asking a different question. Because you've once said that your purpose is to help people become adaptive, creative and free, individually
(01:40:32):
and collectively. So what role does understanding consciousness play in achieving that vision?
This is a great way to tie it up and take away that reductive feel. It's actually a beautiful way. And I think it's
getting back to what I was trying to express earlier,
(01:40:55):
when I was saying that, as a doctor, you understand the importance of consciousness for people's lives. It's not just an academic matter; it's disorders of consciousness, or the conditions of flourishing. What is it about the functioning of consciousness that could let you have
(01:41:15):
different states of being? And how can you make yourself more likely to have flourishing ones, and less likely to have ones where you're having needless suffering? And so for me, the payoff of models of consciousness would be for the sake of knowing how to more skillfully intervene, so that we could have more harmonious and meaningful
(01:41:40):
conscious states, both individually and that we can share. And potentially the payoff might be that one day we might be able to build it, but that's a whole other thing. Even if we never get that far and can grow it by other means, we are conscious, and understanding it, for the sake of mental well-being and
(01:42:01):
flourishing lives, is probably one of the most important things we can understand, because that's what it is to flourish.
And it's partially to have meanings that you can experience and share. And also, another way to frame it, like you just said, actually really
(01:42:25):
beautifully, is helping to expand people's degrees of freedom, like understanding what the nature of my consciousness is, so that it could be shaped in different ways, and so then what I can do to more skillfully cultivate my conscious states and engage with the world more mindfully and skillfully. So basically, consciousness in the service of freedom, like
(01:42:46):
you just really beautifully said.
Yeah, but it's your work. I'm just, I'm just the best. But Adam, your work's incredible, man. Thank you so much for such diverse work. It's so fun to read, because I'm reading all the others I know as well, and I'm watching you guys all work together. So it's pretty cool for us on the outside to watch all
(01:43:07):
you guys work together. It's incredible, all together.
But I think overall, if I had to round this up, I would say that one of the things you've mentioned is that compassion is your religion. Is compassion itself a kind of cognitive technology for alignment, both human and
(01:43:28):
artificial? And for the broader picture and everything that we've said and spoken about today, do you have any final words thereafter?
Among the conscious states that we would want to cultivate, yeah.
For me, compassion would be the primary one, because I think that's both the source of our greatest meanings and our
(01:43:51):
greatest power. It's our ability to come together and cooperate over time sustainably and to empower each other. And that's the secret of our success as a species, and of others, you know, mammals more generally, and not just them; across even the story of
(01:44:13):
multicellularity, the ability to care about others and to coordinate with them because you have mutual care. That seems to be a source of our greatest meanings, I think probably because it's also the source of our greatest power. And so, yeah, compassion is among the conscious states that I value and think, in general,
(01:44:34):
if we all think about it, we all value ultimately. We have to, because you don't get far alone. So to say, compassion might be the only real religion we can agree upon, and that we need.
Yeah, and I think it's pretty cool, because when you sift through your work, you see that it's almost like compassion has a scientific foundation, in that it is this
(01:44:58):
cognitive technology that brings us together. So it's kind of grounded in that sense. And I think it's a beautiful sort of topic to end on, because I think all theories of consciousness do have very, very deep philosophical, medical and ethical implications. And discussing it, exploring it within these diverse fields, is
(01:45:19):
exactly what I think the world needs. So thank you very much for your contributions to all of that.
Thank you so much. This was such a wonderful conversation, and I'm very much enjoying the other conversations as well. And thank you for everything you're doing.