
September 2, 2025 92 mins

Professors Karl Friston & Mark Solms, pioneers in the fields of neuroscience, psychology, and theoretical biology, delve into the frontiers of consciousness: "Can We Engineer Artificial Consciousness?". From mimicry to qualia, this historic conversation tackles whether artificial consciousness is achievable - and how. Essential viewing/listening for anyone interested in the mind, AI ethics, and the future of sentience. Subscribe to the channel for more profound discussions!

Professor Karl Friston is one of the most highly cited living neuroscientists in history. He is Professor of Neuroscience at University College London and holds Honorary Doctorates from the University of Zurich, University of York and Radboud University. He is a world expert on brain imaging, neuroscience, and theoretical neurobiology, and pioneers the Free-Energy Principle for action and perception, with well over 300,000 citations.

Professor Mark Solms is director of Neuropsychology in the Neuroscience Institute of the University of Cape Town and Groote Schuur Hospital (Departments of Psychology and Neurology), an Honorary Lecturer in Neurosurgery at the Royal London Hospital School of Medicine, an Honorary Fellow of the American College of Psychiatrists, and the President of the South African Psychoanalytical Association.

TIMESTAMPS:
(0:00) - Introduction
(0:45) - Defining Consciousness & Intelligence
(8:20) - Minimizing Free Energy + Maximizing Affective States
(9:07) - Knowing if Something is Conscious
(13:40) - Mimicry & Zombies
(17:13) - Homology in Consciousness Inference
(21:27) - Functional Criteria for Consciousness
(25:10) - Structure vs Function Debate
(29:35) - Mortal Computation & Substrate
(35:33) - Biological Naturalism vs Functionalism
(42:42) - Functional Architectures & Independence
(48:34) - Is Artificial Consciousness Possible?
(55:12) - Reportability as Empirical Criterion
(57:28) - Feeling as Empirical Consciousness
(59:40) - Mechanistic Basis of Feeling
(1:06:24) - Constraints that Shape Us
(1:12:24) - Actively Building Artificial Consciousness (Mark's current project)
(1:24:51) - Hedonic Place Preference Test & Ethics
(1:30:51) - Conclusion

EPISODE LINKS:
- Karl's Round 1: https://youtu.be/Kb5X8xOWgpc
- Karl's Round 2: https://youtu.be/mqzyKs2Qvug
- Karl's Lecture 1: https://youtu.be/Gp9Sqvx4H7w
- Karl's Lecture 2: https://youtu.be/Sfjw41TBnRM
- Karl's Lecture 3: https://youtu.be/dM3YINvDZsY
- Mark's Round 1: https://youtu.be/qqM76ZHIR-o
- Mark's Round 2: https://youtu.be/rkbeaxjAZm4

CONNECT:
- Website: https://tevinnaidu.com
- Podcast: https://creators.spotify.com/pod/show/mindbodysolution
- YouTube: https://youtube.com/mindbodysolution
- Twitter: https://twitter.com/drtevinnaidu
- Facebook: https://facebook.com/drtevinnaidu
- Instagram: https://instagram.com/drtevinnaidu
- LinkedIn: https://linkedin.com/in/drtevinnaidu

=============================
Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Professor Mark Solms, Professor Karl Friston, thank you so much
for joining me one more time on the show.
You have both been part of the show for so many years now and
it's a privilege and honor for me to host the both of you on
the show together. So thank you so much for joining
me. What I figured we'd do is my
role will be very minimal. I'll try to give prompts when
needed if needed. But for the most part, I think

(00:26):
most of the people tune in to listen to and watch you guys, and
the topic today is something we're all thinking about.
We're all talking about it. Is it possible to engineer
artificial consciousness? I think it's a fitting topic for
the two of you, both pioneers in this field.
And I think to start this off, we should begin with some
foundational definitions. Perhaps, Karl, you

(00:49):
could begin. How do you define intelligence
and how do you define consciousness?
Right. Well, I'll start off with
the easy ones, then Mark can take the difficult ones.
I'm being ironic. So intelligence, well certainly
natural intelligence is in my world just inference.

(01:11):
It's just sense making with an enactive aspect.
So there's also decision making that's predicated on that sense
making and therefore can be described in terms of inference
or self evidencing, as Jakob Hohwy would like to
say. From the physicist's perspective,
this is a particular kind of self organization or a

(01:33):
particular reading of self organization and a particular
partition into self and non self, or something and something else,
that can be mathematically described as
Bayesian mechanics. And that Bayesian mechanics, I
repeat, can be interpreted in terms of inferring the causes of

(01:56):
your sensations and deciding what to do in order to solicit
the right kind of sensations to engineer or author your own
sensorium in order to make the best sense of it, in order to
make the best decision. So there's an inherent circular
causality in the self organization if you survive and

(02:16):
you survive as an agent. And for me, an agent would be a
system that can be read as possessing or acting as if it
had a world or a generative model of the consequences of its
own actions. So it has a model of its own
private future and it has to select particular paths into the

(02:38):
future. So it's a future pointing kind
of inference that is private to the individual, the agent or the
intelligent artefact in question.
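As a brief aside for readers who want the quantity behind "self evidencing" spelled out (a standard textbook formulation, not something stated in the conversation), the variational free energy that active inference agents are said to minimise is

F = E_q[ln q(s) - ln p(o, s)] = D_KL[q(s) || p(s|o)] - ln p(o),

where o are observations and s are their hidden causes. Because the KL term is non-negative, F upper-bounds the negative log evidence -ln p(o), so minimising F both improves the inference q(s) and maximises the evidence for the agent's own model, which is what "self evidencing" refers to.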
Consciousness, I, I, you know, you should be asking Mark this.
So consciousness I think is, you know, is something else,

(03:00):
especially if you're talking about phenomenology or feelings.
And one would have to identify the particular attributes of the
generative models in play that must have this future
pointing agentic aspect in order to qualify and support either
consciousness in a vague sense, in that there could be different
levels of consciousness. Or if you want to draw a bright

(03:22):
line between things that are self aware or indeed are just
aware in, in a phenomenal sense, you'd have to
identify what parts of their world model equip them with the
capacity to feel and to experience.
And then I'll I'll hand over to Mark for what that particular
architectural feature would be. Thanks.

(03:46):
Thanks, Tevin. Thanks, Karl.
Let me start by commenting on your introductory remark, Tevin,
that you're probably going to say very little because people
are interested in what Karl and I have to say.
I tell you, what worries me is that, and, and this is going to

(04:06):
be interesting from my point of view if I'm wrong, but what
worries me is that Karl and I are going to agree on
everything. And so it therefore won't be
very interesting to hear us echoing each other, but maybe we
won't. And so, so let's see what
happens. But that's what I, what I
anticipate. And to, to illustrate the point,

(04:29):
everything that Karl just said I agree with, but I will try to,
unlike Karl, us South Africans will speak English instead of,
instead of active inference-ese. And also I'll try and I'll try
and keep it simple. To me, I mean, although I really
do agree with everything that Karl said, I, I, I, I don't

(04:51):
think in such a complex technical way as he does.
So for me, intelligence, which comes by degrees, is
the capacity to solve problems. And of course, I understand
why you're asking us these questions.
It's because you're setting up the difference between

(05:12):
artificial intelligence and and artificial consciousness.
So intelligence is the capacity to solve problems.
And those problems can be as narrow as how to play chess, or
it could be something much more general.
And that's, you know, that's thewhere, where, where things get

(05:35):
interesting. And this is why I say
intelligence, you know, it comes in, in degrees.
The current artificial intelligences don't have that
much intelligence because they can only do very narrow things.
Now consciousness, I define that as the capacity to feel like

(06:03):
something. In other words, I follow
Nagel's definition. A system is conscious if and
only if there's something it is like to be such a system,
something it is like for such a system.
And that's a very elementary definition in, in both senses of

(06:23):
the word. Elementary in that it's simple,
and elementary in that it's talking about the most simple
form of consciousness. I mean, you don't need our human
type of consciousness to to, to meet that definition.
You just have to be able to feel like something.

(06:44):
There's some quality, some phenomenal experiential property
to being a thing that is conscious.
It doesn't have to be able to reflect upon it.
It doesn't even have to know that it is a thing in the sense
of having any consciousness of that of its own being.
It just has to feel like something to be it.

(07:07):
But I hasten to add, because this is what leads to so much
confusion, that of course there are there are layers upon layers
of complexity that can be super added to feeling.
And then you get things like human consciousness, which
unfortunately too many people, even people in our field take as

(07:33):
their kind of model example. It's it's what they have in mind
when they use the word. And I think that this is
unfortunate. Mark, I completely agree,
and I think that's part of the reason why I wanted you guys to
chat. I'm not sure the listeners are quite familiar
with the story, but when I wrote my dissertation, you and Karl

(07:55):
were cited many times and often times together in the same
paper. So it makes a lot of sense that
you guys would think alike. But I think it's still going to
be valuable and I think people are going to take a lot away
from this. With that being said, Karl, you
have the free energy principle, which describes adaptive systems
minimizing surprise. And then Mark, you place affect

(08:16):
or feeling at the very core of conscious experience. So if we
were to design a system that does both, let's say minimizes
free energy and generates affective states, would
that system be conscious or would this be via mimicry?

(08:36):
Or would this be, what, mimicry?
So something that mimics consciousness?
So who? Who?
Who are you asking that question? Would you like to
start? Oh, right, yes.
So I think before trying to give you an answer to that question,

(08:59):
I think it's important to state before we go any further
that consciousness, however read, is not something that you can
know of something else. I mean, you know, common sense
tells you that is true, but it's also mathematically true under

(09:20):
the free energy principle, in the sense that the only things you
can read about the internal machinations of an agent are
exposed on their Markov blanket or on their, well,
specifically the states that act upon the world, because you are
part of the world from the point of view of the agent.

(09:41):
So you can only ever infer something is conscious and that
requires the observer. So what I'm doing is developing
a very observational relational position just to acknowledge
you. You will never know which is
going to be interesting in relation to machine
consciousness. So that that means that, you

(10:07):
know, the observer has to have the fantasy, the hypothesis, the
construct as part of their explanatory repertoire to make
the inference that this thing, i.e. you, is conscious.
So that puts a special... if you like, it contextualises, I

(10:28):
think, discussions about consciousness.
Only conscious artefacts can ever recognise consciousness in
another, and that recognition is just another act of
intelligence. It's just an act of inference.
It's an explanation. You know, this explanation that
this thing is conscious in the sense that I am conscious is a
good enough explanation to explain all the observable

(10:50):
behaviour. So there is no ground truth
here. And I think that sort of takes
off a lot of the pressure of what could become quite toxic
arguments, certainly around machine consciousness and machine
welfare. You cannot adopt a
fundamentalist position on consciousness because by
definition you will never know. But it also means that these

(11:11):
debates about machine consciousness or aspirations to
build or discussions about could one build machine consciousness
can only be had by people who subscribe to the notion that
consciousness is a suitable fantasy or hypothesis to explain
certain kinds of self organization.
I probably wandered a bit off point there, but I enjoyed

(11:34):
making that point. But can you remind me what your
question was? And so if we were to design a
system that minimizes free energy and generates affective states,
is this a conscious system? Yeah.
Well, I mean, so you know, from the point of view of what I've
just said, yes and no. Yeah, it will, if it mimics

(11:55):
consciousness to a suitably accurate extent, you will
certainly infer it is conscious and that's as far as anybody
could go. So, you know, just to make this
really, really, really concrete,you know, I'm, I'm pretty sure
that you're conscious. Pretty sure, you know, you know,

(12:17):
I can't be 100% sure, but I, I could certainly put a sort of
Bayesian credible interval around it.
And, you know, all the evidence would be very,
very strong, with a Bayes factor of, say, above 5.
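A quick gloss on that term, since it carries the argument (this is an editorial aside, not something said here): a Bayes factor compares how well two competing hypotheses predict the same evidence,

BF = p(evidence | you are conscious) / p(evidence | you are not conscious),

and on the commonly used Jeffreys scale a value above about 3 counts as substantial evidence, and above 10 as strong evidence, for the first hypothesis.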
What about a microbe living in your gut?
You know, I'm not so sure. It might be some, you know, in

(12:38):
some sort of panpsychic or some very elemental sense.
But now my confidence in the inference that the microbe is
conscious is sufficiently small that I have no qualms about
poisoning it with antibiotics, whereas I would have great
qualms about poisoning you because you are sufficiently

(13:00):
like me to provide enough evidence that you might be
conscious, you know, like, like I am.
So I think I wouldn't decry mimicry.
You know, mimicry is just reproducing the kind of evidence
that satisfies a particular hypothesis that is the basis of
inference. So if you are a sufficiently

(13:23):
good mimic, then you know, to all intents and purposes, under the
condition that you can only infer something is conscious,
yes, it would be conscious. Mark, anything about that you'd
like to add on? Yes, I, I want to first of all
reiterate what I said at the beginning.
I'm looking forward to learning whether there's anything on this

(13:46):
topic that Karl and I disagree about.
And, and I mean that seriously, you know, I, I, I would be very
keen to know if there is anything we disagree about; so
far, not. But, but I, you know, I'd come at things in a slightly
different way from Karl. And so let me, let me address
the same question in my own way. And, and I will, I will

(14:11):
emphasize, well, I will, I will culminate my, my, what I say
will, it leads me to say something about mimicry because
I think that's the, that's the crux of your question.
Because the way that you've formulated your question before
you use the word mimicry, I thought, well, it's a sort of,
you know, the, the question answers itself.

(14:31):
You know, when you say if you have a free energy, free energy
minimizing system functioning by active inference on the basis of
its generative model, etcetera, etcetera, and it also has
affect, you said then would it be conscious?
And I thought, well, if it has affect, yes, of course it's

(14:52):
conscious. Because as I just said earlier,
for me, you know, the, the, the most basic form of consciousness
is feeling. In other words, affect.
I like the word feeling because it leaves one in no doubt that
it's something that must be conscious.
You can't have a feeling that you don't feel.
So affect is a kind of functional term.
Feeling is a descriptive term, you know.

(15:14):
And so if we're using the word affect and feeling synonymously,
then such a system that has affect must be conscious.
The the problem, though, arises exactly with the issue that Carl
just raised, which is, you know,the inevitable issue that we

(15:37):
have to address early on in a discussion of the kind that
we're having today. It is how would you ever know?
How would you know from the functionality of a system
whether or not it's conscious? Because consciousness, by
definition, that is something that can only be observed
subjectively. You know, if consciousness is

(15:59):
the is the property of the beingof a system, then only the
system can, can register it in an, in an empirical, in a direct
empirical way. In other words, actually feel
the feelings. And as Carl said, you know, we
can't be sure that each other are conscious.
We can only be sure that each ofus is for, for, for that very

(16:22):
reason. But, you know, now I'd like to
pause for a moment and say, comeon, let's not be silly.
You know, I mean, it's, it's only philosophers say things
like this. I can't be sure that Tevin Naidu is
conscious, for heaven's sake, when in, in reality, you
know, I don't doubt it for a moment.

(16:43):
I can't know it as an absolute, empirically, demonstrably,
observably true fact. But it's absurd to doubt it.
And I, I would say it's absurd to doubt consciousness in
certainly all mammals and certainly all vertebrates,
because it is such a reasonable inference from my own experience

(17:09):
and from what I know about the, the, the, the mechanistic basis
of my own experience. What I know about the
mechanistic basis of my own experience is that I mean on
because I believe all human beings, the three of us are all
conscious. What is it that makes us all
human beings? Well, Karl says, well, we behave

(17:29):
like each other, we look like each other, and so on.
But you know, it's scientifically more to the point
is the fact that we all have the same structures in our brains
that we know empirically in the case of humans on the basis of
all sorts of methods. You know, we know that if you

(17:50):
lesion that it will produce a coma, if you stimulate that it
will produce in an intense stateof arousal with affective
quality. These things have been
demonstrated time and time again.
Therefore, I I start on the absolutely reasonable assumption

(18:10):
that any other creature with the same anatomical infrastructure
is going to have the same functionality as me.
Its behavior suggests it, just on ordinary naturalistic
behavior. But more interestingly, I can
make predictions and I can say because I know that stimulation

(18:31):
of this in human beings produces intensely negative affective
experiences, I predict that if I stimulate this in that creature,
it's going to avoid the stimulus.
And you know, conversely, with those that produce intensely and
you know, the, the, the, the, the prediction is, is, is is
confirmed every upheld every time.

(18:52):
So the answer, the scientific answer by the ordinary
scientific method, you know, in other words, falsifiable
predictions, is that it's confirmed every
time. The problem starts for me, I mean, let me pause.
Sorry, I'm probably overstating over elaborating this, but this
is such a fundamental thing. You know this problem of other

(19:13):
minds issue to our whole discussion.
The, the, the, even that, what I've said so far, it's amazing
how many people are skeptical. Neuroscientists are skeptical
about consciousness in some other mammals and consciousness

(19:35):
in even more of them, you know, in, in, in all vertebrates.
They, they, they're very dubiousabout it.
Many colleagues, you know. Whereas I, I have absolutely no
hesitation. I have as little hesitation as I
have about whether Tevin and Karl are conscious.
I have as little hesitation as to whether a zebra fish is

(19:58):
conscious because it's got the same anatomy.
The, the crucial anatomy is there.
And when you do, when you intervene, the predictions that
you expect, the behavioural outcomes that you expect are
always, always confirmed. But even there, there's,
there's, there's what, what I'm going to just call prejudice,

(20:19):
you know, it's anthropocentric prejudice.
Or or, or, or, or or primate centric prejudice or, you know,
something like that. That's, that's the nub of the
matter. Because as soon as you move
beyond vertebrates and you know,there's some very good

(20:40):
candidates for consciousness of beyond vertebrates.
Like for example, you know, the octopus is the, is the, is the
creature that is most commonly invoked in this, in, in this
context. But you know, if, if, if you
look at the literature, there's,there's pretty good evidence
that a lot of invertebrates are,you know, might be conscious and

(21:01):
the but the prejudices are enormous.
And, and from, from this point onwards, once we go beyond
vertebrates, I think that the, the, the problem is not only
prejudice, it's also that you, you, you lose the, the, the,
the, the obvious grounds for testing predictions by, by, by

(21:25):
homology, because they don't have homologous anatomy or they,
or their anatomy is, is, you know, it's, it's, it's much more
questionable whether these are true homologs of the crucial
structures in humans. And so you got to start using
other criteria. And so it ends up becoming
functional criteria. And, and, and, and how are we

(21:48):
going to agree on what are the functional criteria by which to,
to measure this question or to decide this question?
Because the functional criteria vary depending on your
particular mechanistic understanding of how
consciousness works, what consciousness is.
So we, we're in, so the, the, the, the best we can do there is

(22:13):
we have to, in advance... And when I say the best we can
do, I mean if we're going to follow ordinary
scientific method, the best we can do is for a reasonable
number of experts to agree in
advance on a reasonable set of criteria by which they would say
if you meet X number or if your system meets X number of these

(22:37):
criteria, then we're going to have to give it the benefit of
the doubt. And it has to be done in advance
because that's as close as you can come to what what I was
saying about the paradigm of falsifiable predictions.
These are no longer predictions from anatomical homology, but
rather predictions from reasonable functionalist and

(23:01):
what behaviours you would expect on the basis of, you know,
reasonable functionalist assumptions.
So I think that's that's the best we can do.
But to come now finally to the question of mimicry, I think
that there there are two things that I want to say.
The one is maybe they're the same thing.
The one is that, forgive me for invoking authority, but even as

(23:28):
as pertinent an authority in this context as Tom Nagel says,
an affective zombie is an impossibility; that any
system that has the the functional mechanisms that
generate affect, because as I said, for me affect and feeling

(23:50):
are the same thing. Happily, Nagel agrees.
You know that that if you have the functional mechanisms that
generate feeling, then you can't have those without feeling.
You know, it's just like, it's a, it's a self
contradictory argument. I know where Chalmers is
coming from when he talks about zombies, but affective zombies,

(24:15):
I think it's not quite the same thing.
You, you, you can't say that a thing that has that functional
architecture and displays all the behaviours that go with
affect. You can't say that it's that
it's a zombie because affect is perforce felt.
That's the what the what we that's the the function we're
talking about here is the function that generates feeling.

(24:38):
If you have that function, it has to, it has to come along
with feeling. And so to make my second point,
which as I say, may be the same point, I don't see how you can
mimic, if you have, it's a, it'sa question.
If you instantiate that architecture that it's

(24:59):
reasonable to believe is the mechanistic basis for whereby
feelings are generated, then the, the having such a mechanism
is not mimicking what we have. It is instantiating what we
have. And so, you know, I, I, I, I
would quit. And it all boils down again to

(25:20):
what we said at the very beginning that that conscious
states are felt. And so, you know, the, if you're
generating an affect, you're generating something the measure
of which is from the, from the viewpoint of the system, that is
that, that mechanism feels like something I, I, I hope I'm

(25:41):
making myself clear. Karl, anything about that you'd
like to respond to or add on? Yes, just to pick up and
reinforce or reiterate a couple of key points.
So the first is this acknowledgement that we are

(26:04):
using homology as the basis of the evidence that we assimilate
when making a decision about whether you or some artefact is
conscious. Mark articulated that in terms
of the scientific process, he deferred to Popper.
Not everybody would, but he did. And that's perfectly fine.

(26:26):
And that's exactly what I meant,that consciousness is something
you infer about something. It's it's an act of measurement.
It's an observation that entailsa degree of inference.
And because that means there hasto be evidence for that
inference. And I think this is going to be
practically relevant when we come to machine consciousness
because you're going to, you know, people like Joscha Bach

(26:47):
are asking, you know, what kind of evidence based approaches
should we take to this artefact to say whether it is conscious
or not. So this is a really important
question. And I think Mark has very
elegantly just sort of framed that as the as the key issue.
And he, I repeat, articulated it in terms of Popperian scientific

(27:08):
inference or hypothesis testing. And that's exactly what I meant
in terms of inferring something is conscious.
And then what he went on to say was, well, OK, we can certainly
use structural and anatomical homology as one source of
evidence. So if I were able to breach your

(27:29):
Markov blanket and look inside, say unfortunately you had died
and I was able to dissect you and I can look at the structure
of your brain and I can certainly find those source of
sources of ascending classical neuromodulatory systems, you
know, around the base of your brain that are those that
machinery, that part of your anatomy that is necessary to

(27:52):
support feeling. And I could infer even though
you are no longer conscious because you're dead, because
I've broken your Markov blanket by literally slicing
into your brain, then I could infer that you were conscious
simply on that structural homology.
Could I apply that to a zebra fish?
Well, to a certain extent we can because we can actually image

(28:14):
the neuronal circuitry at least and and look at the common
homologies and you may argue from the point of origin of
life, there are lots of different structural chemical
homologies that might, you know,one might appeal to as sources
of evidence to make an inference.
A best guess is this thing conscious or not?

(28:35):
And of course, what you're saying is if it's sufficiently
like me, it's conscious and the more that homology is broken and
the further you wander away from zebra fish down to viruses, the,
the, the less you can rely upon that homology.
And then what Mark said. OK, well, are there other
criteria? Because it's, if I do, if I
dissect my personal computer, I am not going to find these

(28:59):
things. It just doesn't have the right
anatomy. It doesn't have the right
architecture, which may tell you that my PC will never be
conscious, but let's just pursue the argument.
But there may be some other homology there which was a
functional one. And I think that's, you know, is
that true? Can you ever have a divorce
between the anatomy and the function?

(29:21):
Can you have a divorce between the structure and the function?
The the, the OR the dynamics on the structure?
I would actually argue you can't.
So I don't think this is a disagreement, but it's a
particularly strong position, which I'm not, let us say I'm
not necessarily committed to, but I would certainly argue, say
if I was Ross Ashby, or indeed Karl Friston, as you

(29:44):
know, the author of the free energy principle, I would argue that
the anatomy is the substrate that embodies or entails the
function. So this is the good regulator
theorem. It also appears again and
again in the context of the free energy principle, that we
install the cause effect structure of our lived world

(30:08):
into our anatomy so that it has a hierarchy, so that that
reflects, for example, the scale invariance and the hierarchical
composition of things that generate our sensorium.
It has a separation of temporal scales, It has dynamics, it has
and so on and so forth. And there are lots of, if you

(30:29):
like, aspects of the anatomy that tie you down to a
particular physiology, that tie you down to a particular
functionality. And in many respects, most of
the theories that attend the free energy principle are about
that. So is my brain a predictive

(30:49):
coding machine? You know, does it work like a
Kalman filter? Or is it a belief propagation
machine? Does it work like message
passing on a factor graph? Both are quite plausible
hypotheses that specify a particular anatomy, a particular
connectivity, a particular structure.
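To make the first of those hypotheses concrete, here is a minimal sketch of a single predictive coding update under assumed linear, Gaussian conditions; the function and variable names are illustrative, not anything from the speakers:

```python
import numpy as np

# Minimal single-level predictive coding step (illustrative sketch).
# Assumes a linear generative model o = W @ mu + noise, sensory precision pi,
# and a unit-precision prior centred on mu_prior.
def predictive_coding_step(o, mu, W, mu_prior, pi=1.0, lr=0.1):
    eps_o = o - W @ mu          # sensory prediction error
    eps_mu = mu - mu_prior      # prior prediction error
    # Gradient descent on the precision-weighted squared errors,
    # i.e. on variational free energy under Gaussian assumptions.
    dmu = pi * (W.T @ eps_o) - eps_mu
    return mu + lr * dmu
```

Belief propagation on a factor graph would instead pass messages between nodes of a graphical model; both schemes implement the same underlying inference, which is in the spirit of the point about functional architectures made just below.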

(31:10):
So from that point of view, whatwe are saying then is if we are
looking at homologies and both the functional and anatomy, both
are basically two sides of the same coin, two sides of the same
coin, then the inference that something is conscious is only
by homology. Which just means that you can

(31:32):
only be conscious if you're sufficiently like me, which
means you will not be able to recognize consciousness in any
other kind of artifact or any other space-time scale.
I think that's important from a machine consciousness point of
view because you cannot now make consciousness in a machine that
does not look like me. So now what you're saying is you

(31:56):
have to turn to mortal computation, substrate dependent
computation, before you can build an artifact that's going
to pass the Turing test, Turing test of of consciousness,
because it has to have that homology.
It has to have the same anatomy,the same computational anatomy,

(32:16):
the same functional anatomy, the functional architecture as me,
because that's the only source of evidence.
Why is it the only source of evidence?
Because it's only me that has feelings.
So yeah, if this thing has feelings, it has to be like me.
And I think that's important, you know, to say out loud when
it comes to questions about consciousness.

(32:38):
That tells you two things. That machine consciousness is
only going to be emergent when it's evinced in transaction with
other things that are conscious,namely people.
And it's going to have to have some mortal aspect to it and
possibly have the morphic structural aspect to it as well.

(32:59):
But certainly the transactional aspect, the relational, the
observational aspect is going tois, is is quite important.
So you're not going to find consciousness on the edge.
You're not going to find it in supercomputers.
You're not going to find it. You could even argue on von
Neumann architectures simply for just extending and formalizing

(33:20):
the truisms that Mark was just articulating,
which all rest upon homology. Is this thing like me?
So I want to come in there. First of all, I want to say
something about Popper. It's not an endorsement of Popper

(33:40):
that I invoke him here. The reason I do is because of
the conservative, you know, dispositions of of the
scientists who going to be most sceptical as as I've
encountered, even when it comes to trying to persuade some of my
colleagues about vertebrate consciousness being ubiquitous.

(34:02):
So I'm saying, well, let's use the most conservative rules and,
and laws, principles, customs, you know, that we use in
science. Just about every natural
scientist, you know, functions by those rules, those Popperian
sort of criteria in, in, in terms of, you know, what your
peers are going to expect you to do if they're going to accept

(34:24):
the results that you want to publish.
So I, I just, I have nothing against Popper particularly, but
I don't want to be seen to be endorsing Popper as the one and
only voice that we should be guided by in the philosophy of
science. It's just that he, he does
have such hegemonic control over contemporary science.

(34:47):
And so I'm saying, well, if we're going to follow those
rules, then we must follow them fairly.
We must, we must say, OK, well then let's apply them, you know,
beyond it's, it's a way of trying to get around prejudice
basically in a word. But now back to the more
substantive issues that Karl has just articulated, I
think, and the reason I'm going to say what I'm going to say is

(35:10):
that there's a lot of potential slippage in the way that what
Karl has just said will be heard.
So people, people with certain positions, will say, you
see, Karl Friston agrees with me.
And people with a completely different position will say, you
see, Karl Friston agrees with me.
So because they will hear what he's saying in their own ways.

(35:31):
So, so I want to just go slowly here.
At the one extreme, Karl will have been heard now to have
articulated a kind of biological naturalist position, saying that
unless you have the same anatomy as me, or at least the same

(35:52):
essential your your anatomy has has recognizably homologous
structure, then I don't think it's possible that it can have
the same functionality. Some people will hear what he
said as that as a biological naturalist, you know, the

(36:17):
substrate, this business is not substrate independent.
And others would like me will hear him as saying by anatomy he
means architecture. And he in fact used the word,
you know, in that way. To my ear, he was saying an

(36:38):
anatomy in the sense of it's, it's not just a functionalism in
terms of, it's what it does. It doesn't matter on what
substrate it does it. So if the outcome is the same,
you know, then it is the same. I, I, I don't think that if the
outcome is the same in regard to a good many functions.
And I, I want us to speak here in broad terms because I don't

(37:02):
want there to be exceptionalism when it comes to consciousness.
You know, that that's another one of the prejudices that I'm
trying to prevent it as far as is possible.
Consciousness is just part of nature.
It's, it's, it's something that must have mechanistic causes

(37:22):
like everything else. And it must be possible for us
to eventually discern what that mechanism is.
And therefore it must be possible to be able to create
that mechanism. If it's not, then consciousness is
something utterly special, completely different from the
whole of the rest of nature. So I'm saying when we speak

(37:44):
about outcomes, not to do with consciousness, but outcomes, you
know, you could say, well, you know, because you can that
travel to the same destination by aeroplane or by Oxcott.
You know, therefore Oxcott and aeroplanes are the same thing.
And that's ridiculous. They're not, they're completely
different things. They, they have the same outcome

(38:06):
of getting you to Paris, but the one takes you 10 years to get
there, another, you know, 10 hours, you know, because they're
completely different things. It's not just a matter of the,
of the, of the destination. If the, so, you know, coming to
biology, you know, there, there,there are prostheses which have

(38:29):
the same functional architecture, which are doing
the same thing. Not in the sense that I was just
speaking now in terms of, you know, what the output is, it is
what they're actually doing, youknow, so the, when you, when
you're having open heart surgery, you know, there's a,

(38:50):
there's a, an artificial machine that keeps your blood
circulating, but it's a completely different substrate.
It's not made of muscle. You know, it's, it's, it's, it's
a, it's a, it's a, a, a, a machine in the colloquial sense
of the word, you know, as opposed to a biological
substrate. It's a, it's a, it's a

(39:12):
mechanical artificial substrate. But it does do the same thing.
So I think that what we're looking at, what Karl's talking about, is a
functional, A functional architecture.
So it's not just the functional output, it's the functional
architecture. But I think that's what he's
saying. That has to be homologous.
It it can't just be it has it, it has the same behavioral

(39:36):
output. It it has to be doing the same
thing. But that doesn't mean it has to
be a biological creature. And I found it.
I don't know. Karl, are you a commentator on
Anil's paper in BBS? The, the... Yep.
So our, our close colleague, I mean, are close in in every
sense of the word, but most importantly for present

(39:56):
purposes, he, he shares the same theoretical assumptions for the
most part as Karl and I do. And yet he's, he's recently,
well, this paper hasn't come outyet.
But it's, it's doing the rounds because it's now in the process
of open peer review. So I think it's, you know, it's
OK to comment on it. It's, he argues A biological

(40:20):
naturalist position when it comes to consciousness.
And I was rather surprised, but that's what he did.
He, he argues explicitly argues a biological naturalist
position. But then when you actually drill
down into the article, he ends up saying, you know, because you
know, it has to have this functional architecture, it has

(40:43):
to be actually trying to maintain the life of the system.
You know, consciousness is, is, is, is.
And I agree with them. You know, in terms of
consciousness, it's the most basic form, which is just
feeling. It's got everything to do with
homeostasis. It's got everything to do.
So therefore with the free energy principle and active
inference and all of that. But you know, in the service of

(41:05):
homeostasis and in the service of things which are
fundamentally biological, or the, the Organism giving a damn,
to quote Haugeland's famous line that computers don't
give a damn. You know, the, if the,
if the thing doesn't give a damn, you know,
about its own existence, and if it isn't seeking to

(41:26):
maintain its own existence and, and deploying affect in the
service of this, this subjective goal, etcetera, this existential
goal, you know, then, says Anil,
I don't see how it can be conscious.
But then he goes on to say, but I grant you that they could be a

(41:47):
robot which has, you know, non carbon based instantiation of
this same functionality. And he says then it would have
artificial consciousness, real artificial consciousness, he
calls it. So this is the slippage that I'm
talking about. It sounds like biological
naturalism, and Anil actually claims to be a biological

(42:10):
naturalist, but he ends up saying something which is
functionalist in the sense of functional architecture doing
something for the system of thiskind.
I don't... it doesn't have to be; Anil basically concedes the
point. It doesn't have to be made of

(42:31):
neurons and muscle and and vessel and so on.
So that maybe we are beginning to enter into an area of
disagreement. I don't think so.
I'm more worried about how Karl
might be misheard, if I, if
I've interpreted him correctly. I don't think, Karl, you're a
(42:51):
biological naturalist. Good.
Thank you. No, you're absolutely right.
It is interesting you pick up on the sort of, the architecture. And
I've certainly, in the past, 3 decades ago when we
were doing a lot of brain scanning, the inception of

(43:13):
positron emission tomography and subsequently fMRI, we were in
a position for the first time to image the anatomy of the brain
in function. And we would use the term
functional anatomy as distinct from structural anatomy, or the
kind of anatomy that anatomists did. So
for decades I've used the word functional anatomy, but I've now

(43:33):
deliberately changed it because that's too, what did you call
it, biological, Whatever, whatever it it's too
constrained. Yeah, what what what people are
talking about are functional architectures.
That's what I meant by dynamics on structure.
So you're absolutely right. And it doesn't have to be carbon
based and it certainly doesn't have to be, doesn't have to be,

(43:57):
for example. And I think this is sort of
practically important because you will find a community out
there who are committed to a particular kind of neuromorphic
way forward in terms of artificial consciousness based
upon spiking neural networks, SNNs.
So you know, that is really going, you know, a particular

(44:18):
commitment to a biomimetic neuromorphic kind of computation
where they're actually emulating the spike-by-spike physiology
of synaptic transmission. I don't think that is what is
implied by certainly the self evidence implied by things like
the free energy principle. What is implied is the anatomy

(44:40):
of the cause effect structure being installed in the message
passing on the dynamics in your computer, whether your computer
is your brain or your personal computer.
That it is that architecture that that absolutely I was
talking about and doesn't need to be cast at the scale of
spikes. It could be population dynamics,
and indeed you could argue as a physicist, all physics,

(45:01):
including the physics of sentience and the physics of
consciousness is just a a statement of probabilistic
dynamics. It you know, So you don't need
to actually simulate the every element of an ensemble over
which you have a probability distribution.
You just need to simulate the functional architectures that

(45:24):
describe the density dynamics, the probability density
dynamics. And I put it like that because
you can always reduce any functional architecture to a
graphical model. And it's just the connectivity
and the architecture of that graphical model that I was
implying when, you know, when I talked about sort of functional
anatomy. One final point is to again to

(45:45):
sort of reiterate Mark's point. Well, perhaps we'll come back to
this, but you know, I was trying to sort of pick up on Anil's
arguments and amplify them in a way that I thought he would be
pleased with. And I, I didn't spot the

(46:08):
biological aspect too much, but I, what I did spot was the
commitment to in memory processing and mortal
computation. So I'm not saying the substrate
has to be carbon based. And what I'm saying is that the,
the, the stuff in and of itself has to do the computation.
And that becomes very relevant when we're talking about von

(46:30):
Neumann architectures and in memory processing, such as say
with memristors or photonics. So I'm going to put that on the
side. I think that's something we
might have to come back to, because when I talk about mortal
computation and when people like Alex and Geoff Hinton talk about
mortal computation, we're not talking about biological neural

(46:51):
networks. We are talking about the
functional architectures that render the substrate locally
doing the computation. And it could be, I repeat,
say, memristors. And there's another take, which
is Alex's take on mortal computation, which resonates
beautifully with what Mark was just saying that OK, what is

(47:14):
mortal computation? You could say it is
intelligence. And if you've got affect or
feeling under the hood during the mortal computation, then
you have consciousness. What's the, what will be
immortal computation? Well, anything that can be run
by software that is immortal. So software that can be run on
any machine is immortal. But if the software is a thing

(47:40):
that is conscious, then it has no need to worry about
persisting, surviving, responding to drives or needs
because it's immortal. So I expect it's a really
interesting point. The, the
existential imperatives that underwrite self organization of

(48:00):
the kind that induce the notion of drives and needs and affects
and feelings can only be attributed to something that can
die, because its own purpose is to prolong the period
between its introduction into the world and the point that it
departs. So it has to be mortal in this,
in this sort of, you know, common sensical sense.

(48:23):
Guys, we will come back to that; I made a note about it
just so that we don't accidentally forget, forget to
answer the question. Karl, perhaps
let's continue with you for now. Is it possible?
With that being said, I tried to preface this with the
philosophical conversation first about consciousness, and there's
something else I'd like to touch on that still hasn't been asked.

(48:44):
But let let's start with the main topic at hand.
Is it possible then to engineer artificial consciousness?
Yeah, I see no reason why not. You just need to identify the
right functional anatomy. And as you say, it is going
beyond creating agents, but I think you'd have to have agency

(49:06):
as part of your, your intelligent artefact and then
you would need to equip it with affect and the the machinery
necessary to support feelings. How would you do that?
Well, we've just touched on on sort of two, well three issues.

(49:28):
First of all, you would, I think, have to commit to mortal
computation. I don't think you can do this in
a von Neumann architecture. And we can talk about why in a
second. If we did, if we did, we could
certainly simulate consciousness in a, in a sort of layman's sense

(49:49):
on a von Neumann architecture. And this would be very much akin
to Mark's example of a heart lung machine keeping a patient
alive during heart surgery. So it's perfectly possible to
simulate the kind of belief updating and message passing and
potentially simulate the affect read in this instance as

(50:10):
investing certain messages or certain belief
structures with a confidence or a precision or igniting them in
the right kind of way. So that make a difference to to
your to your intelligent processing.
But it, you know, if it's on a von Neumann architecture, it
would be mimicry in, in a sort of vernacular sense, in the same

(50:35):
way that the heart lung machine is mimicking the, the
functionality of the biological heart and pulmonary system,
which is of course the the real thing can do forever.
The heart lung machine is going to run out of electricity at
some point or break, you know, it's just not fit for purpose
for an, for an extended period of time, unlike the mortal

(50:58):
computation or the mortal circulation that we enjoy with
our actual heart and lungs. So I think it's going to have to
be mortal, which basically means I think you're, you're looking
at, I would imagine, memristors or possibly
photonics, but I think more likely to be memristor
architectures. And then the, the, the other

(51:20):
thing we talked about is this relational thing that the, this
artefact has to learn to be like you or to be like things that
feel in order to have as part of its generative model the notion
of affect or feeling, or at least be able to report that.

(51:43):
Mark would probably argue not. He would argue that you just
need to equip it with the right neuromodulators so that it can
realise its inferences. And what would they look like?
Well, they look a little bit like attention heads in
transformer architectures, but with one key difference.

(52:03):
These attention heads have to be selected by some other, by some
other aspect of the generative model.
So it's like having, if you likea context sensitive learnable
attention head. And I think if that was
implemented on a using in memoryprocessing or processing in

(52:27):
memory memory computer architectures with the right
kind of reactive message passingor actor model of as as Keith
Dugard likes to likes to talk about, I think you're getting
very close to it. And it would have to be done.
It would have to be done in a, in an interactional way.

(52:48):
You'd have to, you know, you can't just set it off and use
reinforcement learning, come back, make yourself conscious
and let me know when you've doneit.
It would have to be an ongoing process of interaction within an
ecosystem where there was some shared generative model.
And I forgot to say earlier on again, this, this notion of
homology is absolutely central, I think because when I talk

(53:08):
about these things, consciousness depends upon a
shared narrative or shared generative model.
And certainly talking about it does because communicational
language does. So another aspect of homology is
the fact that we all come to share the same functional
architectures because we all learn the same things.
We learn the same language. And of course, that's a
fundamental aspect of minimizing uncertainty and minimizing free

(53:31):
energy when we all come to predict each other.
And from that emerges language and communication and of course
the notion, the hypothesis that,you know, we all have feelings
and we can talk about the feelings, not necessary to have
the feelings, but we can certainly talk about them.
So if you want to have that as part of your machine

(53:51):
consciousness, that means you'regoing to have to evolve in very
much in the way that we grow. You know, we bring up our
children. It's going to have to be a
transactional process. You're not going to be able to
write down with an RL algorithm, you know, or use a large
language model: go and make me a conscious artifact.
It's going to require a lot of, a lot of interaction, possibly

(54:13):
dyadic or possibly Federated, you know, in a sort of ecosystem
that includes things that can feel and we know, we think feel.
Mark, same question: is it possible to engineer artificial
consciousness? And thereafter you can also address all the points
that Karl made. Yeah, yeah.
I, I mean, this is the question that our meeting is designed to

(54:36):
address. So, you know, I'm, I'm glad
we're being asked to answer it with, you know, half an hour to
go. So we, we still can can talk
about mopping up the details. But we've each, we, we must hear
each commit to an answer, a position on this question.
Before I give you my answer, which I don't think is going to

(54:58):
come as a big surprise. I just want to pick up on one
detail in what Karl said because, you know, as I keep
saying, I'm not sure there's anything we'll disagree about.
So my comment on his previous response was I'm worrying some
people might think we disagree. So I clarified that and I'm glad

(55:18):
we don't disagree about, you know, about about substrate
independence as opposed to, you know, functional architectural
homology that that we agree about.
And that's the nub of the matter.
And it turned out to be the nub of the matter in in a Neil's
Article 2 the the point that Carl said Mark might disagree.

(55:44):
I want to pick up on that point again in the spirit of wanting
to clarify where do we disagree?If we do, it was on the question
of reportability. And so I want to say something
about that which might seem sortof peripheral or even
gratuitous, but but it turns outto be quite important.
Many of my colleagues working in this field of, you know, of

(56:09):
consciousness, neuroscience of consciousness, they hold
reportability to be the empirical gold standard.
They're saying if you, if, if you, and they're speaking
because they're anthropocentric,you know, they, they have in
mind humans and they're saying, look, if the person like, for
example, when you're reducing the speed of the stimulus in a

(56:30):
tachistoscopic setup or something like that, there has to be a
certain point where the person says, I no longer see it.
Or there has to be a certain point at which they say, now I
see it. And you know, that is the
empirical criterion. It's because we're talking about
consciousness, we're talking about experience and you have to
report I am experiencing it or I'm not.

(56:51):
And they, you know, so the, the,this was thrown at me, for
example, recently, you know, when I speak of hydranencephalic
children who've got no cortex, they say, well, there's
no empirical evidence that they can't, just because they can't
report. And I say, yeah, you know, my
Labrador can't report. My, my, my, my children couldn't

(57:12):
report before they learned to speak.
And are you really seriously, you know, claiming that my
Labrador is, is not conscious because it can't report.
And, and the, the, the argument I, I, I make there is it's
reportability is it's the easiest thing in the world to
get something to report that it's conscious when it's not,
you know. So I, I don't disagree with Karl about

(57:36):
reportability in the sense that,you know, I would like my agent
to be able to report that it's conscious, but it's very easy to
mislead. It's very easy to get an entirely
non conscious agent to say, like, like a large language model.

(57:56):
You know, it can say I'm feeling such and such, but there's no
reason whatsoever to believe that it is.
And so we we've got to get beyond reportability.
And this brings us back to, you know, what do we really mean by
what is a conscious state? And it comes back again to this
question of mimicry. If, if we, if we accept and not

(58:19):
everybody accepts it, but if we accept, for the sake of
argument, my claim that raw feeling is the elemental form of
consciousness. All you have to have is feeling.
To be able to report it is great, but you don't have to be
able to report it. You just have to be able to feel
it. To me that cannot be mimicked

(58:41):
except in the sense of misleading us, you know, So
that's why I'm saying reportability is a very
misleading criterion because it's so easy to create an, an
artificial intelligence that's going to report all sorts of things,
which is mimicry in the sense of misleading us.
But you can't mimic a feeling, because its presence

(59:02):
or absence is a subjective presence or absence, you know.
So if there's a feeling there, it can't be mimicked. It is
felt. Whether it's an artificial
feeling or a biological feeling is a different matter.
But to me, the thing that we're talking about when we speak of
consciousness, to me fundamentally we're talking
about feeling and a feeling whenit's there will be felt.

(59:28):
And therefore it's not something, it's not a
behavioural thing, It's a, it's a, it's a felt, it's a, it's a
state of the system. So now to your question, the,
the big question, which as I said, my answer is not
surprising. Absolutely.
I believe that it is possible to engineer an artificial

(59:48):
consciousness. I see no reason whatsoever why
we can't do it. And it's because of what I said
earlier. So I won't repeat it in detail.
I'll just give you the headline again.
It's part of nature. It, it's, it's not exempt from
the rest of nature. If something's part of nature, to

(01:00:08):
quote Feynman, you know, that statement that he wrote on
his blackboard, which was found upon his death:
"If I can't create it, I don't understand it"
was what he wrote.
understand how, how consciousness in the sense that
I'm using it, how feeling arises, then you should be able
to create it. So I believe we, we, if we, to

(01:00:36):
the extent that we do understand how feeling arises, we can
create it now already. But I'm not so sure that we have
a complete understanding of how feeling arises.
But I think we're in the right sort of ballpark.
If we're in the utterly wrong ballpark, somebody else will
understand how feelings arise and they will create it
artificially. So I'm, I'm, I'm unequivocal

(01:00:58):
that it's possible and it's, andit will happen.
It's just a matter of when and how.
So let me add then some detail in terms of, you know what, what
do I think causes feeling? And I'll try to be brief.
Obviously, you know, because of time constraints, I'm, I'm not

(01:01:19):
going to be, I'm not going to do full justice to the whole thing.
But I think that a Markov-blanketed self organizing system,
from the point of view of such a... First of all, such a system
does have a point of view, which is an important starting point.
That's an important prerequisite for selfhood.
You know, it has to have a point of view.

(01:01:41):
There's the point of view of the system upon what is not the system. And please note, what I've described is not something complicated. You know, this is something easily engineered: an artificial self-organizing system with a Markov blanket, which has a point of view.

(01:02:02):
And from its point of view, increasing free energy just is
bad. It just is, you know. And it's important to emphasize that it's only bad from the point of view of the system. So there we have subjectivity and a mechanistic basis for describing what we mean, and I

(01:02:24):
don't mean felt subjectivity. I'm saying there is a point of view of such a system, and it's registering something; some mechanistic property is being registered within a value system in which, from the point of view of the system,

(01:02:44):
increasing free energy is bad for the system.
So there we have valence: mechanistically described goodness and badness, subjectively construed in entirely mechanistic terms. And then what you need to add to that is that that's a continuous variable, you know,

(01:03:06):
free energy going up and down. The valence is not all or nothing; it's a value.
But I think you need to add something more, which is that complex systems of the kind I have in mind have multiple needs, multiple survival needs.

(01:03:29):
I mean, there are multiple categories of things that have to be kept within their viable bounds for the system to persist. By the way, I should have emphasized this at the beginning: I completely agree with Carl that such a system has to be artificially alive. What I'm describing is a

(01:03:51):
system that is seeking only to continue to exist as a system; that's what I've just described mechanistically. But if it has multiple
categories of need, multiple categories of viable bounds,
then those must be treated as categorical variables by the

(01:04:11):
system. They must, mechanistically. You can't add 8 out of 10 of thirst to 8 out of 10 of sleepiness and say that's 16 out of 20 of total need, therefore all I need to do is sleep. If you don't also drink, you will die. So these needs have to be treated as categorical

(01:04:32):
variables, which means, and here's the crucial point that
they necessarily are qualitatively differentiated.
That's what we mean by a categorical variable: 8 out of 10 of sleepiness is not the same thing as 8 out of 10 of thirst; they are qualitatively different.

(01:04:52):
And so there I think we have a mechanistic account of the ground zero of qualia: from the point of view of such a system, it's got that point of view. It has a subjective goodness or badness, which must be
qualitatively differentiated. And all of this is registered
only internally by the system for the system.
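A minimal sketch of that argument, in Python, with invented needs and Gaussian prior preferences (this is an editorial illustration, not the project's actual code):

```python
import numpy as np

# Toy sketch of a Markov-blanketed system's "point of view": each survival
# need is tracked against its own viable bound, so badness is registered
# per category rather than as one summed score. All numbers are invented.

NEEDS = {                                   # hypothetical needs and set-points
    "hydration":   {"mu": 1.0,  "sigma": 0.1},
    "sleep":       {"mu": 1.0,  "sigma": 0.3},
    "temperature": {"mu": 37.0, "sigma": 0.6},
}

def surprise(x, mu, sigma):
    """Gaussian negative log-probability: distance of a need from its bound."""
    return 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

def evaluate(state, prev_state):
    """Per-need free energy and its direction of change (a crude valence)."""
    report = {}
    for need, p in NEEDS.items():
        f_now  = surprise(state[need],      p["mu"], p["sigma"])
        f_prev = surprise(prev_state[need], p["mu"], p["sigma"])
        report[need] = {"free_energy": round(f_now, 2),
                        "valence": "bad" if f_now > f_prev else "good"}
    return report

# 8/10 of thirst plus 8/10 of sleepiness cannot be pooled into "16/20 of
# need": drinking does nothing for sleep debt, which is why each category
# keeps its own dimension and its own valence here.
print(evaluate({"hydration": 0.4, "sleep": 0.9, "temperature": 37.2},
               {"hydration": 0.6, "sleep": 1.0, "temperature": 37.0}))
```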

(01:05:16):
I think that's the kind of basic functional, very basic, remember, I'm not going into details, but for me, that's the basic functional architecture of a system that has an elementary form of feeling. There's a lot more detail, but that's my view.
So when I say if we're in the right ballpark, that's the

(01:05:39):
ballpark I'm in. Carl, anything about that you'd like to comment on? And just to try and play devil's advocate, to try and see if there's anything you guys do disagree with: how much importance do you place, Carl, on feeling, and this version of feeling that Mark just defined and described, valence and qualia, in terms of consciousness and then trying to engineer it?

(01:06:03):
I put a lot of emphasis on that, for a particular reason which I will conclude with. But just to take that key point that Mark just made from a technical perspective: if you remember, before, I was glibly saying you can't just use

(01:06:27):
a large language model to say go and make me an intelligent artifact, or use deep RL. I had a reason for saying that. And the reason is exactly what Mark was describing in terms of multiple variables that provide constraints on the

(01:06:49):
of the kind of thing that I am. And if I violate those states,
then I will be very surprised; I will have a high free energy.
So just keeping myself within that viable range of essential variables of any sort is just a definition of existing and living. And of course, if you're a

(01:07:10):
physiologist, it's just a definition of homeostasis, and possibly homeorhesis and allostasis. If you're a statistician, or at least a statistical physicist, what you're talking
about is the maths of multiple constraint satisfaction.
So I would read the multiple categories that Mark was talking
about that cannot be collapsed into one dimension.

(01:07:32):
He was talking about thirst and hunger, for example.
These are two-dimensional structures.
In fact, they're multidimensional structures. So you've got multiple constraints that have to be satisfied. And I use that phraseology because that is exactly what you get when you turn to the good and great of statistical physics, for example E.T. Jaynes. So he describes physics,

(01:07:56):
certainly its measurement aspect, just as maximum entropy under constraints.
And it's the constraints that shape the kind of thing you are
and violating those constraints just is increasing free energy.
So from a physicist's perspective, take Mark's sort of emphasis

(01:08:20):
on multiple ways of being surprised, multiple ways of
dying, multiple ways of being out of kilter in a homeostatic
or emotional or social or whatever sense.
This is what basically provides the shape of you and your self-organization. And complying with those

(01:08:43):
constraints just is a constrained maximum entropy
principle that just is the free energy principle.
So on that level, I thought that was the important thing, which is why you can't do it with RL. You can't do it with reinforcement learning, because you've only got one category to reinforce: the money or sweeties or likes. You can't do multiple constraint

(01:09:07):
satisfaction properly with RL, but you can do it with constrained maximum entropy principles, or the free energy principle, or active inference, provided you've got the right functional architecture in play that has all of these
multiple constraints and dimensions.
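As a rough illustration of that contrast (purely hypothetical policies and numbers, not anyone's actual model): a single scalar reward lets one channel buy off another, whereas per-channel prior preferences keep every constraint in play.

```python
import numpy as np

# Two candidate policies and the outcomes they are expected to produce on two
# separate channels. Policy names and values are invented for illustration.
expected_outcomes = {
    "go_to_cold_river": {"hydration": 0.9, "temperature": 31.0},
    "stay_in_the_sun":  {"hydration": 0.2, "temperature": 37.0},
}

# Each channel has its own prior preference (set-point and tolerance).
preferences = {"hydration": (1.0, 0.2), "temperature": (37.0, 0.6)}

def multi_constraint_cost(outcomes):
    """Each constraint is scored against its own preference; a badly violated
    constraint cannot be hidden by doing well on another one."""
    cost = 0.0
    for channel, value in outcomes.items():
        mu, sigma = preferences[channel]
        cost += 0.5 * ((value - mu) / sigma) ** 2
    return cost

def scalar_reward(outcomes):
    """A single-category reward (hydration only), as in a naive RL setup."""
    return outcomes["hydration"]

for policy, outcomes in expected_outcomes.items():
    print(policy,
          "constraint cost:", round(multi_constraint_cost(outcomes), 1),
          "scalar reward:",   round(scalar_reward(outcomes), 1))
# The scalar reward prefers the cold river outright; the multi-constraint
# score registers that it violates the temperature bound by ten standard
# deviations, while still tracking the hydration shortfall of the other policy.
```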
Everything that either can be observed, or

(01:09:29):
everything that is brought to the table to explain those
observations, they all have to have a shape.
Some of them will be very, very precise.
You know, I can only be ±.6°C away from this preferred
temperature. Some can be very, very
forgiving. You know, I can be me in the

(01:09:51):
North Pole, I can be me in New York, I can be me in Cape Town.
So, you know, some things have enormous, very precise constraints, and sometimes they have less precise ones, but they still have to be constrained.
They define you. So you need a calculus, you need an algorithm, you need a description, a mechanics of self

(01:10:11):
organization that allows for this.
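To put that last point in the same toy terms (the numbers are made up): a precise constraint and a forgiving one are both just prior preferences, differing only in their spread.

```python
def preference_violation(x, mu, sigma):
    # Precision-weighted squared deviation from the preferred value
    # (the state-dependent part of a Gaussian negative log-probability).
    return 0.5 * ((x - mu) / sigma) ** 2

# Core temperature: a very precise constraint (roughly +/- 0.6 degrees C).
fever = preference_violation(39.0, mu=37.0, sigma=0.6)

# Location on the planet: a very forgiving constraint (tens of thousands of km).
emigrate = preference_violation(12_000.0, mu=0.0, sigma=20_000.0)

print(round(fever, 2), round(emigrate, 2))
# Two degrees of fever is far more surprising than moving from Cape Town to
# New York, because the temperature preference is so much more precise.
```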
And I think that's a point which, you know, I've just reiterated. Mark did say he was keeping it really simple. To my mind, I think he kept it a bit too simple. I think just making inferences under constraints, I think,

(01:10:39):
denies him the opportunity to wax lyrical about the encoding
of uncertainty and the very essence of beliefs.
You know, we start off by saying that I can only believe you are conscious; I can never know. So we're talking about a calculus of belief structures, of probability densities.

(01:10:59):
And of course, the most important, well, the second most
important aspect of a probability distribution or
belief structure, after its locational content, is its shape. And for me that would be simply described in terms of its negentropy, or its precision. So the encoding of uncertainty

(01:11:21):
in working out how to see to your needs is, to me, the heart of feeling. As I read Mark, he didn't say that, so I'm going to pass it back to him to see if he wants to be slightly less simple. Yeah, just having an inference machine that complies with some prior preferences,

(01:11:43):
I mean, it may be right, but I get the feeling that much of your thinking goes further than that, and I know it does, as of course Tevin does as well.
But for those people who don't know, Mark actually has a group of bright young things actually trying to build artificial consciousness at the moment, and he's taking it a little bit further than his straightforward explanation would suggest.
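A small numerical aside on the "shape" point Karl made a moment ago (illustrative only): two beliefs can have the same content but very different precision, and it is that second-order quantity which would have to be registered.

```python
import numpy as np

# Two Gaussian beliefs with the same location but different shapes.
beliefs = {"confident": 0.1, "vague": 2.0}          # name -> standard deviation

def gaussian_entropy(sigma):
    """Differential entropy of a Gaussian belief; negentropy is its negative."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

for name, sigma in beliefs.items():
    precision = 1.0 / sigma ** 2                    # inverse variance
    print(f"{name}: precision {precision:.1f}, entropy {gaussian_entropy(sigma):.2f}")

# "Felt uncertainty", on this reading, would be the system registering and
# acting on this second-order property of its own beliefs, not just on their
# first-order content.
```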

(01:12:07):
Yes, thank you, Carl. I really had my eye on the clock, and so I wanted to keep it absolutely simple. And the thing that I was emphasizing, in very broad brush strokes, is what we mean by a mortal system and why, what kind of mortal system, and where does the possibility or

(01:12:29):
the necessity of feeling fit into it?
And so I was saying the necessity of feeling fits into it in the sense of its values in terms of trying to remain in existence. In other words, its monitoring of its free energy has to be distributed across

(01:12:50):
multiple categories, which means they must be qualitatively
registered by the system. But, as I said, that's certainly not sufficient. So let's start with what Carl ended with, when he spoke about a prior preference distribution. He said, you know, it's not enough for such a system just to be maintaining its prior preference
(01:13:10):
distribution. So I'll add here a little bit of
detail. And before I do that, I want to
say, picking up on what Carl has just said, that I am leading a team, together with my colleague Jonathan Shock, a brilliant young physicist, which is trying to do this.

(01:13:32):
We're trying to instantiate the sorts of functional
architectures that we're talking about.
And so I know from experience how difficult it is when it gets
down to the detail. I mean, it's extremely difficult; we've been at it for a few years and we're dealing with minute, complex things all the time. So you know, even what I'm going

(01:13:54):
to say now is still very broad brush strokey compared to the
level of granular problem that you end up having to grapple with. So picking up on this prior preference distribution, the essential next point is that it's a matter of

(01:14:14):
prioritizing which of those categories at any one point in time is going to be subject to, is going to require, voluntary behaviour.
In other words, the capacity to change your mind, to choose one
path of action as opposed to another.

(01:14:37):
You can't do that with all those categories simultaneously. You can't do everything at once. There's an action bottleneck and an intentional bottleneck, and they boil down to being very similar things, which is extremely interesting. So it's a matter of:

(01:15:02):
I have a need which is heading into non-viable territory, and I now need to institute a policy as to how to satisfy this need. In other words, how to return to my preferred state, which is not just a preference in this case,

(01:15:23):
it's an existential requirement, you know. So I've got to return
to my preferred state. And so I act in accordance with
my policy. And this is what voluntary
action boils down to: I am able to register how well or badly this is going before it's too late. That's the crux of why we feel. So those values I was talking

(01:15:47):
about, the value that it's good to survive and bad to die, don't only work on the level of natural selection over generations. For me, now, as a mortal system, how do I know before it's too late whether the policy that I'm following is succeeding or not? And so it's feeling my way

(01:16:10):
through the problem and changing my behaviour accordingly. That is the mechanistic basis of it; that is what Carl was just talking about. It is palpating my confidence in the policy, and that's got to do with the precision. So it's modulating my confidence, modulating the

(01:16:33):
precision in this policy versus alternative policies. In other words, this policy or some future policy: changing your mind, the system changing its mind, and I don't hesitate to use that phrase, as it's going along. And while it's doing that in the

(01:16:55):
prioritized category of need, the other needs can't be disregarded, but they are being relegated to automaticity. So they're at fixed precisions, those policies; and I'm running those

(01:17:16):
policies on that one. And that doesn't have to be one priority; it doesn't need to be. It can be two competing, conflicting needs.
But the point is that there's an attentional spotlight in terms of what is going to be subject to voluntary action, in other words, to the palpating of my precision in my action policies as I go along. And all of that is tethered to

(01:17:36):
feeling, in the sense of: this is going well or badly in terms of this particular category, this particular quality. Meanwhile, everything else, you know, is relegated to automaticity for this action cycle. But then, what is to be prioritized next, you know, in each sort of artificially divided action

(01:18:01):
cycle? The focus may now shift to another need, or on the basis of another opportunity. And that too is all regulated on the basis of precision.
So it's a matter of precisions in the needs and then precisions
in the policies that we're following to meet those needs.
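One way to sketch that prioritisation and "palpating" in code (a toy with invented numbers and an invented confidence update, not the group's actual implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Expected free energies of three candidate policies for the need currently
# in the spotlight (lower is better); the values are invented.
G = np.array([3.0, 2.5, 4.0])

# Needs outside the spotlight run on reflex-like policies at fixed precision
# for this action cycle; they are parked, not forgotten.
automatic = {"breathing": "fixed policy", "posture": "fixed policy"}

gamma = 1.0                                  # confidence in the policy beliefs
for cycle, prediction_error in enumerate([0.8, 0.4, 0.1]):   # things improving
    q_pi = softmax(-gamma * G)               # belief over policies
    # Toy "palpation": confidence rises as the chosen policy's predictions pan out.
    gamma = gamma / (0.5 + prediction_error)
    print(f"cycle {cycle}: policy belief {np.round(q_pi, 2)}, precision {gamma:.2f}")
```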

(01:18:22):
All of that has to do with the modulation of precision and
coming back to functional architectures and to actual
anatomical architectures, this is what happens in the brain, at least in vertebrates. The part of the system that's modulating this precision in all of the sensory

(01:18:47):
motor message passing is the reticular activating system, or these arousal structures, which are modulating
postsynaptic gain. So the functionality that we've
just described is the functionality of those
anatomical systems in the vertebrate brain.

(01:19:08):
So, you know, this is the kind of thing that gives me confidence that we're on the right track. You're muted, Tevin.
Thanks, Mark. Carl, anything you'd like to add
to that? And also, while we're at it, Mark, at some point you can just tell us as well some of the problems you might be encountering, because you're actively working

(01:19:29):
on this. Maybe Carl has some
answers to some of those problems.
Just to make a couple of points, to pick up on something that was implicit but really important. Notice that where Mark is actually getting into the job of building machine consciousness,

(01:19:50):
everything was about policies, which means that we're
talking about functional architectures that support
agency. So that's quite important.
You're not going to get consciousness from a large
language model, because the large language model doesn't do
the prompting. It doesn't act upon the world.

(01:20:11):
So you have to have a different kind of functional architecture to have policies, to have in mind the consequences of your actions, and indeed, to use Mark's word, to feel your way into the future and then select the most likely and adjust in the face of evidence that you secure by pursuing this policy or that policy. I thought it was just important

(01:20:32):
to say that, you know, there are certain architectures that are requisite if you ever want to walk towards, you know, a conscious artifact, and that notion of agency. I think it's just worthwhile noting that the very word to feel is a verb. So to feel is not a state: I

(01:20:54):
am not in a state of being of feeling. Mark used the word palpate: to feel is to palpate.
So what we're talking about is, again, something that is quintessentially enactive and agentic. It's just agentic on the inside. It's just mental action, action on the inside. If you're a psychologist, it would be called attention. If you actually have to build, in the

(01:21:16):
spirit of Feynman, a machine that has attention.
Hence my reference, possibly disingenuous, to attention heads. But Mark is referring exactly now to the kind of precision that determines

(01:21:37):
what things are selected for belief updating or committing to
a particular policy in exactly the spirit that a psychologist
would think about attentional selection.
So that gating, that coordination, it's all action on
the inside, it's all acting and it's all about palpating,
palpating the confidence, the uncertainty.
And indeed, one of my favourite phrases from Mark is

(01:22:00):
felt uncertainty underwrites the quality of experience, but it's an active feeling.
So again, we come back to this. And the final point is that if you actually try to simulate classical kinds of either economics games or reward learning games that evoke phasic dopamine responses and transfer from a conditioned to an

(01:22:24):
unconditioned stimulus, what you see is that the brain's
evaluation of the precision of the distribution of all the
policies I could take seems to predict almost exactly
dopaminergic discharge. And of course, that can go up or

(01:22:45):
it can go down. So you've got valence for free, simply because the average of the expected free energy just is this entropy or negentropy that Mark is talking about.
Notice that there are lots of other probability distributions,
you know, the state of the world, beliefs in certain
contingencies or likely mappings or transition dynamics.

(01:23:06):
But specific beliefs about what I'm going to do next, beliefs
over policies, or conditional probability distributions over policies, they have an attribute of precision. And that seems to explain exactly dopamine. And it is the average expected free energy, which can go up or down

(01:23:26):
and has sort of valence for free.
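A crude numerical proxy for that last point (the numbers and the proxy are mine, not a claim about the actual simulations): the confidence in the belief over policies jumps when a cue makes one policy clearly better, and that jump, or dip, is the dopamine-like, valenced signal.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def policy_confidence(G, gamma=1.0):
    """Crude proxy: inverse of the average expected free energy under the
    current (softmax) belief over policies."""
    G = np.asarray(G)
    q = softmax(-gamma * G)
    return 1.0 / float(q @ G)

before_cue = policy_confidence([2.0, 2.0, 2.0])   # no policy stands out
after_cue  = policy_confidence([0.5, 3.0, 3.0])   # a cue singles one out

print(f"confidence before cue: {before_cue:.2f}, after cue: {after_cue:.2f}")
# The rise (or, for a bad cue, the fall) in this quantity is the "up or down"
# signal with valence for free described above.
```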
So I think it's a beautiful closure here.
You know, when you actually walk the path that Mark is walking,
Mark is walking this path, I am not.
So it's unlikely I'm going to have any answers for him, but
I'd be very interested to hear the questions.
Yeah, Mark, what's been puzzling you guys the most,

(01:23:49):
would you say? That would take a very long time. There are many, many things. The thing that's exercising me the most, and I think we might have time for me to ask Carl a question in this direction, has to do with what I mentioned earlier, when I said I think we need to draw up in advance a list of functional and behavioural criteria whereby

(01:24:13):
reasonably qualified stakeholders, you know, can
reach a reasonable amount of consensus about, you know, these
sorts of tests. If these boxes can be ticked, a sufficient number of them, you know, then the weight of the
evidence is in favour of inferring consciousness in such

(01:24:37):
an agent. I was at a conference, of all places in Kathmandu, a little while ago. The focus was not artificial consciousness but non-human consciousness. So there were many animal consciousness experts

(01:24:59):
there and artificial consciousness people.
And we had a session in this conference. In fact, we broke up into little groups and then we all came together to give the outcomes of our discussions,
where we were asked to agree what were the tests for

(01:25:19):
consciousness that we found most convincing, bearing in mind that we're talking about tests which have to be applicable both to non-human animals and to artificial systems. And the test that won the vote as having the highest confidence of us experts assembled in this conference was

(01:25:41):
something called the hedonic place preference test.
So, conditioned place preference behaviour: let's go back to those zebrafish I was talking about earlier. They tend to hang out where the food is delivered, on this side of the tank. But if you then deliver, on that

(01:26:01):
side of the tank, something which does not have nutritional value but which does have hedonic value, since Carl's speaking about dopamine, like cocaine for example, then if these fish have feelings, then you might get

(01:26:21):
them to prefer to hang out there.
You might predict that they would prefer to hang out where the cocaine is, even though it has no nutritional value to them, that its only value is hedonic. In other words, it's felt, it's affective.
So the hedonic place preference test is: if the agent
shows that sort of behaviour, then that's weighty evidence for

(01:26:45):
it having subjective feeling states.
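In simulation terms, the test might be scored along these lines (everything here is hypothetical, including the "agents"; it is only meant to show what the behavioural signature would look like):

```python
import numpy as np

rng = np.random.default_rng(0)

def hedonic_side_occupancy(values, n_steps=10_000):
    """Fraction of time an agent spends on the non-nutritive, 'hedonic' side,
    choosing sides by a softmax over whatever it values."""
    p_hedonic = np.exp(values["hedonic"]) / (np.exp(values["hedonic"])
                                             + np.exp(values["food"]))
    return float(np.mean(rng.random(n_steps) < p_hedonic))

# A purely need-driven agent only values outcomes that serve survival.
need_driven    = {"food": 1.0, "hedonic": 0.0}
# An agent with feelings, on this argument, can also value how states feel,
# even against its own nutritional interests.
feeling_driven = {"food": 1.0, "hedonic": 2.0}

print("need-driven occupancy:   ", hedonic_side_occupancy(need_driven))
print("feeling-driven occupancy:", hedonic_side_occupancy(feeling_driven))
# A reliable shift toward the side with only hedonic value is the behavioural
# evidence the test is after.
```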
And I want to just point out, here's another thing that's really been exercising my team: to engineer a system that displays this sort of behavior is not to engineer the
most efficient system. It's to engineer a system which

(01:27:07):
can make mistakes, which can do things which are not in its own
best interests, but it thinks or feels that they are.
So there you have a gap between the sort of hardwired,
forgive the phrase, you know, the kind of RL type of thing,
and this feeling my way through the problem.
And by the way, I have to add, what Carl said is 100% true.

(01:27:28):
It's a verb. It's palpating your uncertainty. So one of the things that's exercising us at the moment is how do we engineer this kind of functionality? It seems as if it's certainly doable, but I'd love, before we close, to hear Carl's views about that:
(01:27:51):
both whether he thinks that is a compelling test for felt uncertainty and/or for feelings, and whether he has any ideas about how one would go about instantiating this kind of distinction within an artificial agent.

(01:28:16):
We've run out of time, but I can't resist noting this. So your solution then is basically to ensure that we can make computers into drug addicts, and, if we can do that effectively, simulate all the perverse pleasure of, possibly, suffering. That's a brilliant idea. I hadn't thought about it.

(01:28:36):
I thought you were going to say the Sally-Anne test or some sort of theory of mind or perspective taking. But making them into drug addicts is exactly the right thing to do. And I should just say that you
know that both Mark and I are involved in a number of institutions and bodies who are really trying to drill down on this for the common good, including the Conscium

(01:28:59):
organization and also the California Institute for Machine
Consciousness are really desperately trying to understand
these issues. So some of the things that Mark
has said and I have said are resonating with and probably
mimicking some of the discussions that are ongoing with a wide group of people, you

(01:29:20):
know. So I just wanted to acknowledge
other sources of, sort of, perspectives on this issue, and why it's such an important issue from many perspectives. How you instantiate it,
I think it's quite simple, you know, and I'm sure Mark has
actually done this. It's just basically to make sure
that you have as part of your functional architecture, the

(01:29:43):
ability to palpate or to feel your uncertainty, and that
necessarily introduces a certain kind of hierarchical or
parametric depth to your generative models.
It is unremarkable from the point of view of a statistician, because the whole point of statistics is to estimate your felt uncertainty in the form of a standard error.

(01:30:04):
So people like Fisher and Box, the whole of parametric
statistics was just invented to get a feel for or a handle on
the uncertainty when making an inference.
So yeah, the maths is there. It is a question of putting that
into a computer architecture. I think perhaps the attention heads are baby steps in that direction, but that's not good

(01:30:27):
enough. Just having the mechanism to exert the products of your felt uncertainty does not actually prescribe, you know, the actions of actually estimating the standard error or the uncertainty.
But, you know, in principle we know how to do that.
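For what it is worth, the statistical core of "estimating your own uncertainty" is as plain as this (simulated data, purely illustrative); the engineering question is getting an architecture to carry such second-order estimates forward and act on them.

```python
import numpy as np

rng = np.random.default_rng(1)
readings = rng.normal(loc=37.0, scale=0.6, size=25)   # noisy samples of a state

estimate = readings.mean()
standard_error = readings.std(ddof=1) / np.sqrt(len(readings))

print(f"belief about the state: {estimate:.2f} +/- {standard_error:.2f}")

# A generative model with parametric depth would represent this second-order
# quantity explicitly, e.g. letting it modulate how boldly a policy is
# pursued, rather than discarding it once the point estimate is in hand.
```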
As a statistician, all Mark has to do now is to build it,

(01:30:51):
in the spirit of Feynman: make one, and then he'll understand it. Well, gentlemen, this was
absolutely amazing. Thank you so much.
It's truly an honor and privilege to have you both
chatting together and I hope you had a great time.
And it was an absolutely great discussion.
Thank you for having us, Tevin, and thank you, Carl.
It's always an enormous pleasure to interact with you.

(01:31:13):
And I always learn something new.
But this topic is the closest to my heart at the moment. And so it's really enjoyable to be able to engage with a mind like Carl's and to have a host like you, Tevin. I want to underscore, in closing, one thing that Carl just said, which was in what he said.

(01:31:37):
But again, it needs emphasis: we are working, both he and I, together with organizations who are concerned about the ethics of all of this. And so, or at least I, since it's me that's at that particular coalface, I want to reassure our audience that we're not going at this like cowboys

(01:32:00):
and we're not going at it alone. Thank you.
Thanks. Can I also say thank you
before you all go away? Thank you very much.
I really enjoyed that. Until next time.
Cheers guys. Thank you so much.
Have a great. Thank you.
Thank you. Bye.
Bye. Bye.