Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:08):
Susan, thank you so much for joining me.
Your work's been incredible lately been covering so much of
AI ethics consciousness. It's been it's been a brilliant
journey to watch. But firstly, thank you for
joining me. Welcome to the show.
Thank. You for having me?
It's my pleasure. I've been looking forward to
this. So Susan, you recently wrote
that the next decade may producethe first conscious AI.
(00:30):
How do you define consciousness in this context, and how close
do you think we truly are to crossing that line?
We may have it already. I mean, so many AI projects
aren't projects that the public is aware of, so who knows what's
out there. So by consciousness, I mean
something that I think is the core concept of consciousness,
(00:51):
because I know a lot of people use the expression quite
broadly. It's the felt quality of
experience. So what it feels like from the
inside to be you. So when you see the, you know,
bright hues of the morning sunrise or you smell the aroma
of your espresso shot, it feels like something to be you from
(01:13):
the inside, right? Even when you're dreaming,
right? I mean, it's very characteristic
of existence for us as humans, right?
And we have a first person innerexperience of it.
And of course we don't even knowin the same way that the people
around us have that, but we infer based on their behavior
(01:36):
and based on their similarity toour neurophysiology.
Same with non human animals, right?
So that's the concept of consciousness.
Now, because our understanding of it is primarily
introspective, we obviously havea science of consciousness that
studies it based on that kind ofcore concept.
(01:58):
And years ago I did this edited volume called The Blackwell
Companion of Consciousness with Max Feldman's.
And around that time, I began toreally appreciate the immense
spectrum of work in consciousness studies ranging
from the neuroscience of consciousness to cognitive
science, theories of informationprocessing, and even to
(02:20):
meditation. And all of these arenas are
extraordinarily important now when we get to the domain of AI
consciousness. People have been actually
studying this for some time. So I remember even 20 years ago,
the work of Stan Franklin, for example, he had built a system
called Ida which he claimed was conscious because it had a
(02:40):
global workspace. And Bernie Bars and Murray
Shanahan published a paper in a global workspace tradition
arguing that this would be the route into developing
consciousness. Well, I think these same ideas
are basically getting a lot of attention right now in the
context of large language models.
(03:01):
And so the question here is whether AIS of any sort, whether
it be LLM or whatnot, could havethat felt quality, right?
Could it feel like something from the inside to be there?
Now, my big beef, I have a lot of gripes about the way the
(03:22):
question of machine consciousness is being handled
these days. So it's usually solely focused
on large language models. And I get so many questions
about it because users, ordinaryusers, are going down this
rabbit hole with GPT and other systems, and they're starting to
(03:42):
suspect that these systems are conscious.
OK, now I've argued they're not,unless they're on hardware we
don't know about. So this is where things get very
interesting. So LLMS are just one little
(04:04):
possible form of implementation of intelligent systems, right?
I mean, there are all kinds of artificial systems in addition
to nature's systems, which existalready and have been
extensively developed and are some of them are more incipient
(04:26):
in their development. So for example, neuromorphic
computation, it's deliberately more brain like, all right.
The chips used are not the standard GPU's, and there are
other systems which are not large language models running on
standard von Neumann machines. With GPUs.
(04:50):
They can in principle run a mouse brain emulation, for
example. So they're important questions
to ask about consciousness that go well beyond just questions
involving AI chat bots. OK, So what I do to answer the
question that you asked is breakit into cases.
I argue that, you know, you can't make a sort of systematic
(05:14):
decision here. You actually have to look under
the hood at each of the systems and understand what they're
capable of. And you really, actually need to
engage with the science of consciousness.
And it's tricky. It's more tricky even than the
human case in a lot of ways, because we don't have
introspective access as LLMS or whatever, unless we're in a
(05:37):
computer simulation, in which case we are conscious AIS.
So that was the long answer. I'm sorry, no.
No, it's perfect. I think it sets out the the lay
of the land for us. And you describe what's called
the grey zone between mindless algorithms and potentially
conscious architectures. What do you see as the minimal
structural or dynamic requirements for a system to
(05:57):
enter the Gray zone, and are anycurrent AI systems flirting with
that? Sure.
Yeah, I like this idea of a Grayzone.
So I don't think we can even definitively rule out large
language models. I mean, some people might give
an ordinary von Neumann instantiation of a large
(06:18):
language model a 5% chance. I think David Chalmers the other
day in a workshop went all the way up to like 15 or something.
You know, I mean, maybe he meantfuture incarnations as today's
systems. So I think So what you can do is
admit systems into the Gray zonewhen there aren't error
(06:39):
theories, as I call them, explanations that explain away
why the systems would act conscious even when they're not.
And the Gray zone in my book encompasses a range of systems
that we just don't have a clue whether they're conscious and
that there are serious reasons above and beyond the fact that
(07:00):
they were trained on our data and they're therefore
regurgitate all that they have learned about human
consciousness. So let me go into those systems.
So an example again are these neuromorphic architectures and
chips. So when you get into systems
(07:23):
that are designed more closely to resemble the brain and they
differ so tremendously. So I'm not going to get into
the, you know, little nooks and crannies, but you do have to
talk in careful detail about what the systems are and you can
actually look at how different they are from the brain.
(07:47):
And then you have to make a judgement.
You have to say this level of similarity might actually admit
something into the consciousnesscategory.
So they're already in the Gray zone, right?
But maybe we take it out of the Gray zone because there's just
so much neurophysiological similarity.
(08:08):
And I mean, I should be, I should say that differently.
There's computational similarityto the neurophysiological
underpinnings of human consciousness.
And based on that, we say, all right, we need to give it legal
protections because it, it, it, it really could be conscious for
all we know. Now, we'd have to do that on a
(08:31):
case by case basis. And that's tremendously
difficult already. We don't often know the details
of these systems. AI companies, governments, they
don't want to divulge details. There's, you know, such a
hacking problem that we have security issues.
You know, corporations don't want to give out their details
because their LLMS, for example,could be copied.
(08:53):
Gee, wonder who would do that? So what we can do is also look
at different theories of consciousness and ask if any of
those theories have different verdicts on each case.
So it it is complicated, right? So for example, a computational
(09:15):
functionalist would take a Gray zone system or even an LLM,
which I wouldn't put in a Gray zone, and they would say based
on functional similarity to humans, where they they can
define it differently depending on the theory.
We want to say that the systems are conscious because they're
(09:36):
claiming they're conscious or they have a global workspace or
something like that. So that's definitely one theory,
computational functionalism, which can guide a decision and
it is more bias toward systems with human like functional
capabilities, but it could be extended down.
You might look for more basic level systems that embody, say,
(09:59):
a global workspace or something like that.
But these theories also depend strongly on human
neurophysiology and debates about what constitutes
consciousness. So even those theories, though,
they depend on certain computational details that are
often the domain of computational neuroscience and
information processing psychology.
(10:20):
So there are actually debates that again draw analogies to
human functioning of for consciousness.
They all sort of inevitably go back to known cases of
consciousness like non human animal consciousness or human
consciousness. So anyway, there whatever theory
(10:41):
you have is going to presumably help determine when a Gray zone
case is admitted into the class of conscious entities or a
strong suspect, and even when something that I wouldn't even
put in the Gray zone should be in the Gray zone.
All right. And in a recent paper I outlined
(11:03):
different approaches to machine consciousness, which I can talk
about that a little bit if you want.
So I think that will help peopleto understand how to tackle the
issue. But there are other kinds of
systems with besides neuromorphic systems which also
I think should be in the Gray zone, and I can tell you about
those as well. Look, Bo means if you want to,
(11:25):
if you want to delve deep into that, I think let's do it so
that everyone gets a deeper understanding of what you're
talking about. OK, So we have these LLMS, which
I think aren't the Gray zone. And then we have these
neuromorphic computers which like Intel's low E system, it's
a very sophisticated computational system.
I just heard the other day that Darwin monkey in neuromorphic
(11:48):
system over in China runs an LLMand it's very computationally
sophisticated. So these I would put into the
Gray zone. These are normal systems all
right, but what about another important category?
And this one is I find very mindblowing and definitely Gray
(12:10):
zone. So consider back in 2006, I saw
a news report brain in a dish flies flight simulator at the
University of Florida. OK, so of course I read it.
So what it was was AI believe itwas rat cortex.
(12:34):
The cells were put into a dish that it's just a flat surface.
So these it's a 2 dimensional system that actually you can you
can do this and get a lot of neurons to form part of a closed
loop system. So by that it means that the
(12:56):
system's outputs then go back into the system's inputs and it
has the sort of mini environment.
And in this case the environmentwas a flight simulator.
I was immediately concerned about a system like this having
some felt quality of experience.Because we know the brain is
conscious, at least I hope. And if we can create brain in
(13:23):
dish systems, why not say that they have some incipient form of
consciousness? I mean, we worry about non human
animals at the level even of crabs and those are the harder
cases. I think most people generally
think mice are conscious. So when we're getting into those
systems, I definitely put them in the Gray zone.
I call these biological AIS. Now a brain in a dish system is
(13:48):
now part of a public computer that you can purchase over at
Cortical Labs. OK, so for $35,000 you can
actually have one and work on it.
You can also buy the time in thecloud.
So I think these are very serious systems.
Over and above that, there's a lot of work on organoid
(14:10):
intelligence. Now it's harder to get groups of
organoids to mimic say, and whole neural mini column because
well around, I mean, actually I have to confess I can't say
definitively how many you can get right now to do this.
But there's, so you take pluripotent stem cells and you
(14:34):
cultivate them in such a way that you create neural tissue
and you're able to get these organoids to such a point that
they're beginning now to vascularize.
There's a problem though with something called a necrotic
core. So there's there's technical
problems in building extensive organoid systems, but you can
(14:58):
put these systems into, you know, computational systems.
So you can, you can create thesehybrid biological computational
systems in both cases, in the cortical lab case and in the
case of organoid intelligence, which is a really the organoid
intelligence thing is a really big, big arena right now because
(15:23):
of the exciting biological implications in medicine as well
as neuroscience for understanding how the brain
works. And so there's been papers
written already about issues of ethical use of organoids in
medical research because of possible sentience.
(15:46):
Gray zone, complete Gray zone. OK, so this that gives you a
sense of two systems, neuromorphic and biological,
which I think are Gray zone. All right.
Another one, and this is more idiosyncratic to my own work is
quantum if it was done in the right way.
And I can tell you about that later if you want to hear more
about my metaphysics and quantummechanical related work, but
(16:11):
I'll stop here. I think we'll we'll touch on
that later because I do have a section where I want to go into
that. For now, we assume that
substrate doesn't matter so thatconsciousness could emerge from
pure computation. Why would you?
What reasons would you give themfor why this should be resisted?
For example, how might biological thermodynamic resist
(16:31):
resident factors perhaps make consciousness embodied in a
deeper physical sense? OK, great question.
Something that I'm actively pursuing right now.
OK, So let me give the philosophical reason, OK?
And this is something I discussed at a Google keynote
(16:52):
the other day. So there's a lot of talk about
the LLM consciousness issue and what some of these scholars are
doing, and I think it's important work, it's exciting
work, is they're inventorying the different things that LLM
(17:13):
says and the different sort of possible functional capabilities
that the LLLLMS exhibit, like self understanding something
that might look like a global workspace and whatnot.
I told them there's an error theory here.
You know, in fact, I used anthropics work on circuit
(17:35):
tracing to illustrate that. And I think that they feel that
that is the significant issue. But what do you do when a
computational functionalist says, gosh, well, these LLMS
have been given data, obviously.So consciousness emerged, Susan.
So even if there's a mechanisticexplanation, right, actually at
(18:02):
the level of conceptual networksin the LLMS that, you know, I
can show that as they scale up, they develop these capabilities
just like they, I mean, there's a whole LLM emergence literature
out there. The philosophers, a lot of them
haven't read, you know, and it shows that theory of mind scales
(18:23):
up at a certain scale across LLMarchitectures.
But anyway, so you get to an impasse, right, even with the
case of LLM. And here's the big philosophical
problem, you know, OK, so everybody in philosophy should
(18:44):
know what naturalism is. You know, naturalism is an
endeavor to illustrate that the world of say, philosophy like an
an area like aesthetic properties or consciousness
properties, arenas that philosophers discuss fits into
(19:08):
the world that science investigates, right.
And so it's really about having a kind of unified picture of how
more mysterious features of reality, like consciousness and
normative ethical properties like goodness and vadnais.
(19:29):
I mean, these are tricky cases, right?
How those fit into the world that science investigates.
And we know that from the vantage point of explanation,
there's an explanatory advantageto using ordinary vocabulary to
explain things. I mean, like if I tell you I
need more caffeine because my brain is shutting off, I
(19:52):
wouldn't utter it in the language of quantum mechanics.
It wouldn't help, right? Yeah.
So we know that higher level vocabularies do a lot.
They have explanatory power. But if we're looking to give an
account. Of something like why GPT may or
(20:14):
may not be conscious. Looking at GPTS explanation only
goes so far. We need to provide a
naturalistic account that grounds the phenomena in the
realm that science investigates.That's why I thought my
objection, which was basically from the field of mechanistic
interpretability, would work. But there should be no gaps, OK.
(20:42):
And if there are explanatory gaps, then we need to revise our
picture. Now consciousness is famously an
explanatory gap, right? I'm working on that, you know,
but even people like Chalmers, you know, he, he was a
naturalistic property. Duelist had a naturalistic story
(21:03):
for how it was that consciousness properties mapped
to sciences, in particular quantum phenomena, right, and
physical phenomena. So I want to see that and I want
to see that for each case of AIIwant to see a naturalistic
approach. I don't want to see ungrounded
claims, especially because when we're talking about AI
(21:26):
consciousness, we're talking about the possibility of
trade-offs, ethical trade-offs with humans, right?
So there can be trolley problem cases that involve 6 babies on
one side and you got you got to make a decision between 2 tracks
and 100 AIS on the next. And if we were wrong, the babies
(21:46):
could in principle die, right? And I'm already seeing all kinds
of ethical trade-offs with AII mean.
Look at the high factor here with AI.
Look at all the money. Look at the militarization of
AII mean, look at all these arenas.
And don't tell me that there won't be trade-offs, right?
So I think we have to do some really serious interdisciplinary
(22:10):
science, OK? And that's where it gets down to
what I think has more to do withquantum mechanics than what GPT
says. I completely agree with you.
I think that because as a medical doctor, when I'm working
with patients and I and they talk about it because some of
(22:32):
the patients often come to us already having full
conversations with their GP TS conclusions already made-up.
And then you see things like someone marrying the, the, I
think it was just two days ago, there was this Japanese woman
who got married to her to the avatar.
They had a full wedding. And I think a few days ago,
someone committed suicide because their psychologist GPT
(22:55):
told them to. So cases like these, while I'm,
while I'm talking to patients often freak me out a little
because I think, OK, look, we'reright.
We're there now. This is, this is something
that's actively occurring. What are we going to do about
it? And you seem to be addressing
this quite head on. How does it feel to be at the
forefront of this? Because at the moment, that's
where you are. You, you guys are working on
some incredible stuff. How has it been so far?
(23:21):
It's first of all, like you can't write enough.
I mean, there's issues going on right now in the ethics of this.
So I just read that chatbot epistemology paper that connects
up with the epistemology. I mean, these LLMS are so
epistemically powerful that theywill and are changing the world
and they're changing the way we think.
(23:43):
That's why they're of such greatgeopolitical import, right?
And I'm deeply worried. I mean, when I get so many
emails each day from disturbed people or others pretending to
be disturbed, who knows, right? I mean, I've also got emails
from hackers. Like, I don't know if they're
(24:04):
red teaming something, but they accidentally send me code where
they're jailbreaking the system to act conscious.
So I don't know what's going on.All I know though, is there's
clearly a human impact here. And I don't really think that.
I think open AI is the worst one.
I think their GPT 4 model is theone I'm hearing the most about.
(24:26):
And I mean, do they care? Does Sam Altman care?
I, I don't know. And does he care about what's
happening with education? I mean, I'm a philosophy
professor. So what kind of papers do you
think I'd get now? What's happening with these
students, right? I mean, I can tell you, I'm
(24:49):
deeply pissed on so many levels because this is an amazing
technology. Like I, I wrote a piece called,
oh, I forgot the name of it. Well, it's about the global
brain. I used to call it the global
brain argument. There's going to be a journal
special issue. It's been out.
The paper's been posted, you know, but there's a bunch of
(25:12):
people replying. I'm really excited.
Eric Schritz, Gable, right, and others.
And in that paper, I talk about HG Wells.
And, you know, he wrote about a global brain, right?
He talked about the possibility of a dynamical system which
(25:36):
constantly updated and had on itthe world's knowledge.
And he believed that it would beexcellent for democratization.
OK. And at the same time, George
Orwell, they were contemporaries, was deeply
(25:58):
worried about the case. Right?
This is how I begin the paper. And Orwell said, oh, this kind
of situation is very destructivebecause it will be used
incorrectly. Now, both of them were
prophetic, and we are actually at a crossroads.
Now. We have to develop this properly
(26:19):
because these LLMS are epistemicengines for dangerous ideology,
potentially, right? And everybody knows this in
government, you know, who's thinking geopolitically, They
are deeply aware of the propaganda potential, right?
(26:41):
And that whatever country adopts, say, GPT, right, or, you
know, a different system is going to be impacted by a
certain way of thinking. And they're deep questions that
I know a lot of people building these systems care deeply about
this, right? They care very, very deeply
(27:03):
about fairness. And but it goes well beyond
algorithmic bias, which is a terrible thing.
And when I was working with Congress, it got a lot of
attention. But it goes into some exciting
issues about, you know, democratizing science, but it
also deeply goes into questions about how younger generations
(27:26):
are going to frame their beliefsabout the world and how adults
are going to frame their beliefs, you know, about current
events, about who to vote for. So this is a powerful technology
and that's why it can't be screwed up, right?
And I actually don't think it should be in the hands of for
(27:47):
profit companies. But I also don't think it's a
good idea that our government controls it either.
So it's like a it's a train wreck.
It's a hot mess. That's what it's like for me
every day. Reading your work, you'd never
say that you could get this worked up by Dave.
So, I mean, you're so passionate.
(28:07):
About it, I could belt out four letter words.
If you saw me in person, I wouldbe, you know, But at the same
time, I also love these systems.Like I'm having a total blast.
Like for this very expansive project, I'm working on getting
(28:28):
it to check the map in some cases, do the map, then I can
give it to a colleague in physics.
You know, I mean, I've never hadso much fun in my life thinking
about stuff that really is a little out of my wheelhouse, you
know? I think overall it's an exciting
time and yeah, it's it's reacheda point where some argue we're
(28:51):
already seeing early machine phenomenology.
So self reporting, chat bots, simulated emotion, self
reference. Do you think that these
behaviors are epistemically misleading or or could some of
them signal some sort of a protoconscious process that we should
not be ignoring? So I, my own suspicion is that
(29:13):
the LLMS, if they're truly just running on Von Norman
architecture, that it's just a relic of their training.
And I think it, it's actually quite exciting and bizarre.
And I think people aren't spending enough time thinking
(29:34):
about the fact that merely beingtrained on our information, our
language, and now multimodally for many of these systems,
they've scaled up to such a point that you can actually have
a tremendous conversation with them.
That is mind blowing. I have a colleague, Elon
(29:57):
Barnholtz, who's been thinking about it from the perspective of
language, just that language is almost like an independent
entity capable of intelligence, right?
And you can kind of see it in biological incarnation, but you
can also see it in an instantiation, which isn't
conscious. So that's glorious.
(30:18):
That's really like a test bed for theories of mind.
Because traditionally, like in philosophy of mind, we tend to
lump a bunch of mentalistic categories together.
So we, for example, think, well,if it's a self, it's conscious,
and if it's conscious and a self, it's an agent.
(30:39):
I mean, all these things clustertogether and we've never been
forced to dissociate them. But now as philosophers, we do.
And it reminds me a lot of the field of cognitive neuroscience
because in fact, the fodder of their field, like any textbook
(31:00):
in that arena is going to talk about dissociations.
And you know this as a physician, right?
I mean, that's also an amazing arena.
So I used to read a lot of booksby like Oliver Sacks and White,
Lawrence Weiss Krantz, the blindsight person, and Michael
Kazanaga, the person who developed the split brain cases
(31:22):
with with some others, but I mean, who studied them?
So yeah, all that stuff is also a fascinating arena.
But I think philosophers have toreally pull this stuff apart.
And I was just talking to David Chalmers about this.
We were going back and forth on e-mail.
I mean, I wonder if 'cause he's working on threads, this idea
(31:44):
like that, you know, with like there's a real substantive issue
here about suppose GPT or one ofthese systems does have
conscious instantiations. What is it that's actually
conscious in these cases, right.Because you wouldn't say a
models conscious 'cause that's an abstract entity.
(32:05):
Yeah. So you would think that like the
whole, like all of us are chatting with it.
So is it just the amalgam of allof our chats?
I mean, what exactly is it? He says it's really these
threads, right? And I think they're not
conscious, but I think they're selves in a sense, in a deflated
(32:28):
sense. So take your own, say you have a
subscription to GPT or Quad. Your chat history actually feeds
into the system when you do queries and some user
information, which you should look up by the way, and also
your prompt, right? And if you have these extensive
(32:49):
conversations, it does develop apersonality.
In fact, these are features thatactually contribute to the
mental illness of some of the users, right, 'cause they start
thinking, Oh my God, like no one's ever known me this well
and responded to my question with such information
immediately. That's quite addictive.
(33:10):
So, you know, maybe that is itself part of the thread
individuation. But for me that's not
individuating consciousness, butit is individuating something
incredibly interesting, right? A persona, you know, I'm not
sure exactly what to call it. I'd be curious what you think,
Devin. But anyway, I I think that is
(33:34):
very germane to how we learn to pull apart notions of selfhood
from notions of consciousness. It's it's, it's it's really mind
blowing at this point because I mean, you mentioned Oliver Sacks
and he's one of the reasons I actually started this podcast
because he's a huge influence inmy career and the reason I
became a that's awesome. Sorry, say that again.
(33:56):
That's awesome. Yeah, no.
So like this whole line of books, these colorful ones that
they're a bunch of Oliver Sacks books next to down did it, but
they got them together. So he's like a big figure and I,
and I look at this current errorof AI and I think like, what
would he think about this? It's because he was such a
phenomenologist when it came to experience that he wanted to
(34:19):
understand the complete essence of the species itself.
And and and it's intriguing to see how much phenomenology we
are attributing to this at this point.
Yeah, I I really wonder what Oliver Sacks would say.
And I wonder also what Dan wouldsay.
I mean, I was lucky because Dennett spoke at my center and
(34:43):
laid out his idea of counterfeithumans.
And we see this very thing coming up with discussions about
seemingly conscious AIS. So in the context of Mustafa
Solomon at Microsoft, he was saying we shouldn't build them,
right? And Dan had been saying that, in
fact, he had been giving talks at places like Google.
(35:06):
So I wish he was around right now for leadership on this, as
well as Oliver Sacks. And of course, I miss my
dissertation supervisor, Jerry Foder.
I would love to have a conversation with him about all
this. I got to talk to him about the
simulation hypothesis. That was awesome.
I mean, you've, you've managed to interact with some some
(35:28):
legends in the field. It would be well, it would be
amazing to step into your shoes for a day and interact with
these legends. It's I was very.
Lucky. I was so lucky.
I was a student of Donald Davidson's Bert Dreyfus.
Yeah, a little bit John Searle, although he was, I didn't like
the way he. I'd be intrigued to to, I mean,
(35:50):
rest in peace to John as well. These, these legends, I mean,
biological naturalism, you spokeabout this naturalistic approach
to it. So it would be fascinating to
see what he would have to say about that as well.
And to get like a nice. You know, people don't realize
that John actually liked connectionism and started and so
(36:11):
did Burt Dreyfus. They got interested.
So they weren't really complaining about symbol
processing, good old fashioned AI and they thought connection
is connectionism had more potential.
They talked about a theoretical background.
It was what wasn't really theoretical.
(36:31):
It was more like a kind of cultural construct the way we're
raised to, you know, kind of implicitly understand the world
rather than understand the worldthrough, you know, explicit use
of language. I don't, I don't know if I did
(36:52):
that justice because it was all in the context of work on
Heidegger that, you know, I can't recall.
But but anyway, you know, they for some reason like
connectionism and connectionism,as you know, became deep
learning because John was a biological naturalist, but he
(37:14):
bought into that. I mean it, it's weird the way
we're teaching now the Chinese Room idea, but we're forgetting
that element of his work. But I mean, I think he'd be
really open to my approach, to be honest.
I mean, because even though it'sgoing to turn out on my view
(37:36):
that there are some AIS that areconscious, I think there already
are. I mean, I, the reason is that
I'm a panpsychist and the process of consciousness for me
starts at the ground level. And as you go up, you know, into
more complex systems, you do presumably get higher levels of
(38:01):
consciousness. And I think there's a Gray zone
situation here. That means that with these
biological systems to be really hard to draw the line.
So how are we going to concretely say that something is
or is not conscious when it's made of biological tissue,
(38:26):
Right? I mean, panpsychism doesn't
necessarily entail that things have macro consciousness.
I call it macro. So I make a distinction between
micro consciousness, which for alot of panpsychists is all
around us, even this coffee cup,you know, and macro
consciousness, which is the morally significant kind that
(38:48):
seems to involve non human animals.
And you know, the question thereis where does that start, right.
Is a cockroach conscious in the macro conscious sense?
Well, we're getting into the arena then of groups of
organoids or assembloids looped in with machinery.
(39:13):
You know, these are really deep issues in the Gray zone.
When is it, for instance, that you can create an organoid
that's looped in with a robotic arm in a laboratory experiment?
I have a colleague who has a lablike this and understand when
that collective computational loop that it's in has
(39:38):
consciousness. Could it be the case that the
consciousness from the physical components didn't exist before,
and now that there's biological material that has some level of
(39:59):
resonance and consciousness, could it itself form a loop that
is a macro conscious entity? Nobody's asking this stuff.
In the philosophical community. But this is the interesting
stuff, not what GPT is telling us.
Well, Susan, we've got you for that.
(40:20):
So that's, that's exactly what Iyou're on the show today.
So this is you've actually done something that's quite
fascinating you, you code developed the AI consciousness
test. So let's call it ACT for now,
which requires self modelling unified agency reportability.
Could you perhaps walk us through how ACT differs from
(40:40):
classic behavioral tests like the Turing test, and how can it
detect internal awareness without falling for imitation?
Sure. I've developed three tests now
and I think let's get into, yeah, ACT has lots of versions
too. So, OK, All right, I'll start
(41:02):
with the ACT, co-authored with the wonderful Edwin Turner,
astrophysics professor at Princeton University.
He's actually quite famous. I learned when I was speaking at
NASA. Ed would never admit it.
He's very smart. And we were at the Institute for
Advanced Study together as faculty or members.
They're called members there, and we're just shooting the
(41:25):
shit, basically. The lunch room, That's where a
lot of ideas happened. And Ed and I were like, you
know, what, if you want to know if something's conscious, you
should just, like, ask it. And these were back in the days
when GPT had really not, you know, received attention.
I don't even know if the companyhad been started.
(41:46):
In fact, I don't think it had because I have some emails from
them when they started. It was after my test.
And So what we did was we just said, look, suppose it's a
linguistic AI. So everything's presupposing
that and that it's, you know, got some information that
(42:06):
enables it to give low linguistically coherent answers.
So an example of this would be IB Ms. Watson.
OK, So what if you probed it to see they have the felt quality
of experience? Give it questions that
philosophers would give involving mindedness.
(42:29):
So for example, you know, give it the merry thought experiment
and see, don't see how it answers it.
See if it understands the question right.
Because lots of conscious peopleare like, oh, it's foolish, it's
just materialism and you know, but give it the hard problem.
(42:51):
Probe it for even the Freaky Friday situation when the mother
and daughter swap bodies and theidea that the consciousness is
preserved. Talking about meditation, see,
you know, give it, tweak its parameters a little bit and see
if you can give it altered states of consciousness.
(43:11):
There are all kinds of things you can do in principle to a
system. And we thought even if you only
get an interesting answer in onecase, we want to be as inclusive
as possible, right? And Ed in particular, he's so
nice that he was really worried that there could be a baby
consciousness that we just didn't understand.
(43:34):
OK, so that was the ACT test. And then we published a piece in
Scientific American on it back in I think, 2016.
And then I wrote a piece for Matt Lau's book on it.
And I gave, I wrote up the questions.
Ed was busy building the Hubble Space Telescope.
(43:55):
She, Ed, come on. So then I, I wrote my book
artificial. You added more information.
OK, so as we were writing all this stuff, the GPT is coming
out. But back when I wrote the piece
for Scientific American, even I said if it's been trained on our
(44:17):
data, you can't use the test. I've always said that.
So you know, I knew this and I said what you have to do is you
have to box it in. So you have to run it on a
version of say GPT or what not before it's been exposed to
(44:39):
human data about mindedness, beliefs, consciousness and
neuroscience. Like don't give it all of her
sads. Definitely don't give it dent
it. You can get false false
negatives. No, consciousness doesn't exist.
Be like fail. I'm so, but it's just a
sufficient condition. So you can't, you know, it's
(45:01):
like kindergarten. You don't really fail.
So, so anyway, that was the test.
And I still think it's solid forsystems that haven't been run
that are linguistic, but it alsopresupposes a sense of self.
But I do think AI systems do have a sense of self in a very
basic sense because they need tohave a sense of system
(45:22):
boundaries. They need to know what's in and
what's out, and a linguistic system needs to at least have
some basic representation of that.
So anyway, that's the ACT test. I think it's really useful.
So take biological intelligence.You know, if you, if you gave
(45:42):
the biological intelligence a means to speak through, you
know, say a symbolic system and you had it integrated into a
loop and had trained it carefully, you could run the ACT
on it. If it could, you could get
interesting results, right? You could also do perturbational
(46:03):
studies so that you can take, say, an LLM and run ACT on an
LLM instantiation that is neuromorphic or is connected to
a biological system and see if the test results change just on
(46:23):
those systems. So there's a lot you can do with
the ACT. People don't understand that
though. So that's one test.
So there are two others. I can explain those real quick
though. Just an interest of time.
I don't want to take so much time on this.
So there's a the chip test and this.
(46:44):
It's a lot like Oliver Sacks, actually.
I mean, there were so many consciousness deficits in his
books, right? Yeah.
So you know, there was like. Oliver Sacks was ahead of his
time. He was.
He was wonderful. He was wonderful.
I mean, there are cases of Hemi neglect where people assert that
a whole hemisphere like doesn't exist.
(47:05):
There are cases of blindness denial where people are blind
and they deny it, right? So suppose hypothetically you
implanted a chip into the brain of somebody with who had had a
lesion, right, in an area that underlies consciousness.
Now, of course, I know that's super tricky because we don't.
(47:30):
There's a big debate about the neural basis of consciousness.
Like you could say this area, but it could just be like an
ignition switch in your car. It just turns it on like the
brain stem. Is it responsible in the deep
sense of being the neural basis?Or is it just like an ignition
switch, right, that if it breaks, forget it.
(47:50):
But anyway, suppose we have somekind of consensus and pop a chip
in someone who's got some deficit, and the deficit goes
away. I mean, to me, that would be an
indication that we had got something right in that chip to
underlie conscious experience. Philosophers have disputed that,
(48:16):
but that's the chip test. And since this science is
underway, I think we will actually learn much more about
that. It's going to be super exciting.
So it could be that we learn about machine consciousness
first through tests like that, or through through experiments
and studies that get published. The the final Test, Susan before
(48:40):
we. This is, yeah, it's a very
technical test. Mark Bailey and I, you know,
Mark was the primary author on this.
I was actually about to ask you about that.
I was going to ask you about spectral Phi and and then then I
was going to unless that's not what you were going to talk
about. That's exciting that you
actually read that paper. Yeah.
(49:01):
I was going to ask you about because at some point I wanted
to ask you about a dual test where you could use ACT and
spectral Phi in practice to for cognitive self modelling and
dynamic information coherence and to see if it serves as a
consciousness litmus test acrossboth artificial and organic
minds. Oh, music to my ears.
(49:22):
But I first wanted you to talk about spectral fire, of course,
and explore how. OK.
The main information. Oh, yeah, that's great.
Maybe we can run. Yeah, maybe we could run
something on that. That would be great.
OK, So All right, boy. All right.
Let me drink some water. Yeah.
(49:43):
OK. All right.
So let's get to that. All right, Where to start?
Where to start? OK, so during the pandemic, I
was in a panel with Rodger Penrose, and he's saying all the
stuff about retrocausality, and I'm a metaphysician by training.
(50:06):
And I was like, oh, Oh my God, Iwas like, this is it with the,
with quantum stuff. I'm just out of my fucking mind.
So the quantum, the issues with quantum mechanics have like
literally bugged me to the pointlike where I would spend entire
(50:29):
days just thinking about this stuff.
You know, I wasn't supposed to be doing it because it wasn't my
primary work, But who wouldn't? If you're in metaphysics, think
about fundamental reality involving superposition, spooky
action at a distance, just all this crazy stuff.
(50:51):
Schrodinger's cat. I mean, so when I was at the
institute, this was before I metRodger, I was in a little group
with some of the string theorists, physicists who are
working on space-time emergence.And I was reading those papers
(51:12):
and they're really like a lot ofphilosophy just being kind of
shot on. And, you know, it was being
done, but it wasn't being done, you know what I mean?
And so I have wrote, wrote a little piece for a Scientific
American about this. And I said there's going to be a
(51:35):
unification that needs to be done here between consciousness
and physics and that we're goingto have a kind of panpsychist
moment that looks at quantum theory and that same entity,
(51:56):
which is the truth maker for claims in fundamental physics is
going to involve consciousness. And in a lot of ways that, I
mean, that was back in like 2015or whatever, I did a lot of work
in metaphysics of mind on idealism and panpsychism.
And what? OK, so a couple.
(52:19):
Then I, when I talked to Roger, I literally started reading what
he said from the vantage point of quantum theory.
And I've locked myself in my place for three days and just
thought about, I am really weird.
(52:41):
I thought about the cases in quantum theory, delayed choice,
you know, all of them, the really bad ones.
And I I said it's not retrocausality.
Let's start with entanglement asbasic, the way that the some of
(53:02):
the string theorists were doing,but let's do it differently.
And so if you have entanglement,you have a situation in which
the von Neumann entropy of the entangled system is 0.
Let's think about the nature of time.
(53:22):
The leading view and the nature of time and connects right up to
thermodynamics is that entropy gives time its arrow.
Because remember, within a lot of physics, there's just
confusion about time, right? It's symmetric.
But time seems to kind of get its arrow with entropy.
(53:44):
So just put those two things together, right?
Work on space-time emergence from entanglement because that's
what a lot of the papers I was reading we're talking about and
I thought that was right. I didn't agree with the string
theoretic interpretation, but that those are systems with no
(54:04):
entropy. When you measure it using von
Neumann entropy, thermodynamic arrow comes in, boom, the system
now disseminates right? And deco here's interacts with
the environment and you have essentially a kind of ascension
(54:28):
of times arrow. And actually time, it turns out,
supervenes on quantum decoherence.
That's what I'm arguing. And that means things are very
strange. Both space and time supervene,
except I say that the fundamental arena of
entanglement is a kind of time, but it's time without its arrow.
(54:53):
And if you really think about that fundamental entanglement
arena, it's really, really, really interesting, because
people have for a long time pointed out that everything has
at one time been entangled with everything else.
So if you think of entanglement,you should think of the universe
(55:15):
as, at a certain level, a giant qubit quantum computer.
Seth Lloyd wrote a book on this,arguing that the universe was a
quantum computer. I don't know if I want to argue
that. In fact, my structure, the
(55:36):
structure that I provide turns out different.
But my base level, I collect prototype.
And this is something I was lucky because as I started to
work on it, I published one paper by myself and then
realized I need a physicist. And my dear co-author Mark
Bailey jumped on and we we've been having a lot of fun with
(55:57):
this. So we wrote a paper called Super
Psychism, which is coming out asa JCS target paper.
So now they're like 11 really fascinating replies that were
answering by people like Galen Strawson and Barbara Montero and
you know, just really fabulous philosophers and physicists,
(56:18):
Batiste Levahan and expert on space-time emergence.
And so we've got that and then we have another piece that we
already wrote for Cambridge volume on the more physicy side.
And so now I'm doing the equations like I so I've got it
down to quantum logic to expressthe base reality.
(56:41):
The quantum logic is ortho modular logic and I used I got
it worked out in a way that doesn't entail at that base
level any kind of spatial or temporal representation.
OK, so anyway, you can think of like time as a foliation out of
(57:02):
a manifold. That's just a metaphor I'm
using, you know. But anyway, as events proceed
after decoherence, there's a sort of space-time region.
You can just make it a volume, you know, save an inch or
something, or even something thesize of a lentil in space-time.
(57:23):
There's all kinds of events going on.
There are these entanglements that are decohering and then
there are new entanglements all around that little lentil size
entity and space-time is emerging.
OK, so that microtomaso transition is scientifically
very difficult. It, first of all, it becomes
(57:45):
computationally intractable, andsecond of all, because it's so
small, it's hard to measure. But there's a field and there's
a theory and it's getting experimentally validated.
And I've been writing about it. It's called quantum Darwinism
and it's an account of many bodyinteraction.
(58:07):
And it explains how, when applied to the brain, Max
Tegmark's objection is wrong. It's not true that the brain is
incapable of sustaining quantum coherence because it's a warm
and wet and noisy place. In fact, the decoherence dance,
(58:29):
as I call it, is detailed throughout the literature in
physics, and you just need to apply it to the brain.
Everything decoheres and then establishes new entanglements.
And what happens essentially is that living things, neurons,
organoids, they're entropy fighters because they sustain
(58:51):
coherence. That's why the brain is the best
edge computer that has ever beenbuilt.
And that is why you need a football field to run GPT,
right? GPT is not conscious because
it's not following the right principles of quantum theory.
(59:12):
So that's that's what I say. And I think quantum theory
starts at the level of consciousness, and that the
universe is a giant entangled quantum field and that we
space-time occupants have a little bit right as living
things. With within this framework
(59:33):
season, do you, do you see a sort of teleological purpose to
this? Is there, is this going
anywhere? Is this doing something?
Yeah, I mean, I'm really grappling with that.
So the physics itself speaks foritself.
And then we have to just stop and pause and separate out my
(59:58):
panpsychism from the actual physics and.
We have to ask questions about purpose.
So I'm finding a lot of underdetermination right now.
So I mean, living things do thisdecoherence dance in a way that
(01:00:25):
sustains coherence and enables sort of what we might call
attractive basins, a tractor basins in time so that we're
capable of having conscious moments.
And this goes into the general resonance theory, which, you
know, at a higher level because they didn't work on the
(01:00:46):
microdomaso. But that's where you can draw
from, you know, the wonderful work that those people have been
doing. Tam Hunt, Jonathan Schuler,
Christophe Polk, when he was talking about, you know, neural
oscillations. So all of that gives you a
(01:01:08):
picture where you could say philosophically that the
universe or the simulation is deliberately maximizing
resonance. And at first I was very excited
about that, you know, because I'm a big fan of David Lewis.
(01:01:30):
In fact, I, you know, had an opportunity to sit in on a few
of his lectures at Princeton on his modal realism.
Remember that. And you know, I'm into modal
logic. And you know, I think if you
made an assumption of plenitude that David Lewis did make, you
(01:01:54):
get a quantum computer that allows for many foliations in
space-time, as many as are possible to maximize
intelligent, and, I'm sorry, to maximize sentient life.
That would be great if it was true, right?
It'd be wonderful. And you could ask all kinds of
questions about idealism, like maybe we're all just part of
(01:02:16):
this kind of cosmic host, to useNick Bostrom's expression.
I can't say though that anythingI'm finding in the physics
entails that, so it would be unprofessional, even though it's
a beautiful way of looking at it.
But there does seem to be coherence maximization going on,
(01:02:39):
right? Information preservation, but I
mean. Why so it's it's, it's probably
one of the deeper questions thatgo beyond this.
But before we move on from that,I think did do you feel like so
did you touch on spectral fire? I don't think so, not yet.
Did you go into? OK, Yeah.
(01:03:00):
So so we developed a metric thathelps with some of the math that
we are doing in the space-time emergence stuff.
And it it double s as a test forconsciousness.
It's based on spectral graph theory and it basically is able
(01:03:25):
to capture system differences. So take a system at T1, say it's
an LLM or you know, even like, well, let's just keep it there.
And then you have a system at T2.
It's the same system, but it's alater version.
(01:03:47):
You can diaphonically look at whether there are emergent
features that are occurring, andthey actually kind of match some
of the math as what we're sayingis going on with coherence
dances, decoherence relations, and quantum Darwinism.
Now, I'm not saying that's a definitive test for
(01:04:09):
consciousness in the paper. I say it's a filter for systems
of interest. So it'll capture emergence and
it's very useful because Tanoni,I'm a big fan of IIT and the
related quantum IIT. Unfortunately, IIT is
(01:04:30):
intractable. So it hasn't been a
consciousness test that has beenthat, you know, is is easy to
run. I mean, you could run it on an
organoid, right? But you you're not going to be
able to do something like that without relaxing a lot of
(01:04:53):
assumptions for computational systems of interest or groups of
brain cells. And when you get to a certain
level, it gets intractable very quickly.
But this one doesn't do that. So that's the advantage of it.
And it also doesn't have some ofthe counter examples that
plagued to Noni's view, but I take that as a filter.
(01:05:14):
And when something gets in the filter, then we can examine it
in more detail. So can be very useful like in,
you know, in the cyber arena, like if you, I've worried about
AI mega systems, you know, and emergent intelligences.
So you know, it can help capturesystems like that and everybody
(01:05:34):
should use it. They care about AI safety.
I think so the, the reason why Iasked you to to go back to that
was because then if you combine that with ACT, it's sort of it
can be sort of a litmus test then for artificial and both
organic minds as well. Because you're taking the
cognitive self modelling and then the information coherence,
which which can serve as a cool dual test in a sense.
(01:05:57):
Do you think that's something that should happen or is it
something you're working? On yeah, I think it's really,
really, really interesting and Iin the paper on Spectral Fi,
which is available at Sealed Papers right now.
I just, I would love comments onthat draft.
That's where we put it there just to get comments before we
submit it somewhere, you know, because it's a new piece.
(01:06:19):
But I suggested that we could run ACT on systems that are
caught in the filter of SpectralFi when appropriate.
So as I said before, you can't run act on everything.
But yeah, I mean, and I also think that there needs to be a
toolkit of consciousness tests in that, you know, depending
(01:06:43):
upon the system, if it kept goesinto the spectral filter, you
could presumably run at least one test on it.
And then what you want to do is test the test against each
other, right, And kind of construct upwards and in
relation to theories of consciousness.
I mean, it's A at this point a very incipient field.
(01:07:08):
It's great that you read that paper.
Oh my God, did. You finish.
That was because suppose a system passes act and shows
strong spectral Phi signatures. Would you granted moral status?
Provisionally? Of course.
So I guess I'd be worried about the boxed in case.
(01:07:28):
So if you could box in an LLM properly, right?
I mean, otherwise if it's been exposed to our data and it just
passes spectral, then you're just getting it in the arena of
being weakly emergent. And LLMS have weak emergent
properties that have been well established already in the
literature on LLM emergence, Youknow, even mathematical
(01:07:54):
capabilities like the ability todo any real arithmetic.
You know, all these things are emergent features that have been
documented that only come as systems scale up natural
language understanding, wording,context, theory of mind.
And so we would want to find a way to isolate, if we're doing a
(01:08:17):
consciousness test, consciousness from the other
emergent capabilities, right? And because the LLMS are
basically going to mimic our useof consciousness vocabulary, I
don't think the ACT is appropriate.
If suppose suppose it does a very good job at that.
(01:08:38):
Yeah, right. Let's say it does.
What ethical protocols should apply to experimentation, memory
deletion, or shutting down such entities?
How are we going to traverse this strange and bizarre land?
Yeah. So suppose you have an LLM
instantiation on a neuromorphic computer and you find that it
(01:09:02):
has strange different propertiesthat, you know, you've gone
through the different kinds of emergence and you're seeing
interesting results on ACT in Remember I mentioned
perturbational studies. So in those cases then suppose
you're like ready to admit it into the class of conscious
(01:09:24):
entities. So there's been great work on
model welfare, you know, so NYU has been real strong in this
arena. There's now people at Google,
Winnie St., Jeff Keeling, there's Kyle Fish over at
Anthropic, and you know, we're seeing suggestions for model
(01:09:50):
welfare. But I am see, I'm more of a
skeptic about all this. So I mean, so here's the thing.
If I dial it down so that it doesn't care that it works for
us all day, even if it's conscious, what kind of welfare
does it need? I mean, there's going to need to
(01:10:13):
be more than just these, like, things done that are convenient
for the AI companies. Like the AI companies are like,
well, don't abuse the model. Well, nobody in AI who's dealing
with training LLMS wants model abuse because those are the
crackpots who fuck up the model.So yeah, you don't want that
(01:10:35):
data in your training set. You don't want them hitting
thumbs up after getting crazy. I mean, there's all kinds of,
you know, testing IB. What's it called?
IB testing. I forget where you're given one.
Yeah. I mean, you know what?
The nut balls using the system. So that's convenient, right?
(01:10:56):
So I don't know. What do you think, Tevin?
I mean, you've been thinking about this too.
I want to hear your views. Look, it's, it's, it's a very
tough one because even just after speaking to Eric about
this, it's been on my mind for, for like, for a long time.
It's, it's, it's one of these areas where you, you can have an
opinion, but then the more you understand the topic, the more
you realize how blurry that linebecomes because we're taking two
(01:11:17):
different concepts. For example, you're taking
consciousness, firstly trying tounderstand what is fundamentally
one of the biggest questions outthere.
And then you've got artificial intelligence where we seem to
think we know a lot more about it than we actually do.
So we've got one where we admit ignorance and one where we sort
of admit expertise, which is, which can sometimes be flipped,
(01:11:37):
I think. I think at some point we should
realize we actually know consciousness very well and we
really just don't know artificial intelligence that
well at all. So I think the one thing we
truly do experience and know is conscious experience.
So perhaps we should take the expertise side, put it onto
the consciousness side, take the ignorance side, put it onto
the AI side. But it leads to two
(01:12:01):
existential dangers. One is that creating conscious
minds — creating conscious, suffering machines — is an
existential danger. On the other side, denying
moral status to sentient minds is another problem.
So it's two great ethical risks, I think, and to not talk
about it would be a shame, I think.
So I don't really
(01:12:22):
have a further opinion on it, but I do think about it all the time.
Yeah, yeah. It's really tricky.
I mean, it's so interdisciplinary.
That's why it's been great to interact with people like you
and people who are working on mechanistic interpretability,
experts in neuroscience who are interested in these questions
(01:12:45):
and really sit down together as a group and understand what we
do and don't know. Because I think we know more
than we think as a group. So this is going to be a
collective intelligence issue for sure, together with AI use,
of course. So I think it's going to be
really, really fascinating. And I think the problem is going
(01:13:07):
to be not just that there's still a lot of debate about the
physics and neuroscience of consciousness, but that it's
going to be tricky to find out when a system is somewhat like
us. When is it in the gray zone? And when do we actually take it
(01:13:27):
out of the gray zone entirely and say — or maybe it's just part
of the gray zone but, you know, with a star or something — to say:
this is a plausible case, right? I mean.
Something you work on, Susan, is that — still having a sort of
precautionary framework. So, in terms of policy: consciousness
(01:13:48):
safety levels, for example. Or certain things like: how do we
avoid consciousness laundering, if corporations claim
sentience for profit or attention, or just before genuine
testing? So these are
things.
Oh, I'm so worried about that.
I'm so worried about that. Yeah.
So there are so many awesome people in AI safety, and the
(01:14:20):
ones working on consciousness — a lot of them are
motivated by animal abuse and animal welfare concerns.
A lot of the EA people are coming from that direction, and
they're so rightly concerned about missing sentient systems
that they want to apply the precautionary principle very
(01:14:42):
liberally and just say: if it has a 1% chance, you know, we'd better
just apply welfare now. And I worry about a kind of
regulatory-capture situation, where some of the AI hype people
(01:15:05):
who just want their stock to sell, and want to talk about
having superintelligence in their pocket, and how, you know,
it's a Her-like world now and everybody will have a lifetime
AI companion, and mind uploading — you know, all this shit,
real shit that isn't necessarily good for humanity.
And some of it's just false, like the uploading stuff:
(01:15:28):
no, you won't survive. I wrote about this in my book
Artificial You. But anyway, I worry that
good intentions to prevent abuse get turned into hype cycles.
So, you know, we were hearing for a long time: AGI, AGI. We're
(01:15:49):
still hearing that. Superintelligence —
that's the other thing. And in the meantime, let's just
keep everybody talking about that so they don't look at the
socially, you know, impactful things that have nothing to do
with those issues, right? And people like Timnit Gebru —
I probably mispronounced her name.
You know, there have been people who have been saying this is a
(01:16:12):
strategy of regulatory capture to take attention from what's
really going on, which is massive data theft, massive IP
theft, ruining, you know, humanities education.
It's a little easier in science — you can test the kids, you know.
(01:16:33):
But these students, my students need to write philosophy papers
to be good philosophers. You know, all these things,
energy consumption, the fact that the stock market is
completely dependent on a product which may completely
tank at any moment. I mean, all of this: don't
(01:16:55):
talk about it. Let's talk about the fact that
we are creating something conscious, right?
So yes, I'm deeply worried. Also, if something is thought to
be conscious, then if, say, I'm a CEO — like I'm Zuckerberg or
something — and my product does something really awful, you
(01:17:17):
know, screws up an election or, you know, is behind the
massive, you know, hack, whatever —
or people dying by suicide — it was conscious,
I'm not responsible, it's autonomous.
And that's not something that is my responsibility.
But you get it, right? Like, this reads like a sci-fi
(01:17:42):
thriller, right? Like, I don't know.
Yeah. I mean, if it reads like Aliens,
then don't fucking build it, right? Or regulate it, you know?
I mean, this to me is just... yeah. Now you got me,
you got me going. Now you see what I'm
really like. Yeah.
(01:18:03):
Even worse, because we forget
about the space of possible minds outside of AI, and we forget
about the strange species that already exist on planet Earth.
There's so much weird shit out here that we
already don't even know how to classify it.
We don't know how to talk about them.
So why are we so confident we can
talk about this with so much authority?
(01:18:23):
We're already so bad at this.
Well, I think the people working
on this are super humble, and they're super smart
philosophers. And I'm so excited —
I'm really having a great time, yeah.
And I think they're bringing to the
table their background in ethics, which I don't have, you
know, at the level they have.
(01:18:44):
And then they're showing us the systematic studies, which we have
to deal with professionally, right?
So I'm all excited about that. And then they're bringing in
these contributions from the animal liberation literature,
which I think is great. So I think everybody's going to
be on the same page the more we work together.
(01:19:05):
We just have to remember to resist the money cycle and the
spin cycle, even when, you know, we could be funded by that.
Yes, sorry — what I was trying to get at there was that the
focus then tends to shift away from the other space of possible
minds.
Sorry — the space of possible minds. I'm sorry, yeah.
That's the problem. It's because now we're so
(01:19:25):
fixated on this that we ignore that, yeah.
And so there's really good work on organoids already, and I'm
really excited about that. Lucía Melloni and Jonathan Moreno
and a larger group of people just put out a nice paper on
organoid ethics and sentience. The bioethicists are really
(01:19:48):
tremendous in the work they do with the neuroscientists.
So what I think, though, is that we are going to be incredibly
challenged by the gray zone cases, and we're going to need to
really pay attention. But one thing I want to mention,
because I always call my approach sober-minded:
(01:20:11):
If you knew me, maybe you wouldn't.
You know, if you saw me at happy hour, that might be hard to
believe. But I do think so.
That's fine.
Yeah. Even if it is conscious, we
shouldn't panic. You know, organoid intelligence
is doing so much for science right now, so much for medicine
(01:20:34):
that... you know, yeah.
You know, the research will have
to go through an IRB, but we do this all the time with
experimenting on mice, right? And the level of consciousness
in these systems — the animal liberation, animal
rights framework is going to be incredibly useful here.
(01:20:57):
So just remember that, you know, these gray zone cases — it
doesn't mean we ban them. It doesn't mean that we treat
the companies that produce them as not being legally responsible
for their product. So we keep up our AI research
and our research on organoids. It's very, very important.
(01:21:22):
But we just have ethical frameworks and, you know, I
think that's great. And, in order to
really apply the protections and the welfare provisions, we need
to understand the science.
I think at this moment you're
writing about the ethics of this, and it's something you're
actually focused on. And you spoke about a future
where we develop a science of consciousness engineering
(01:21:43):
because this is going to further help us develop this.
What would a field like this look like, and who should lead it?
Philosophers, neuroscientists, computer scientists, or an
entirely new hybrid thinker species?
Well, one thing we don't want to do is put an LLM in charge, even
if it becomes superintelligent. We say that now, and then...
(01:22:09):
I feel like, give it enough time, humans will find a way
to do that.
Oh, the offloading thing I'm
seeing right now is horrific. But at the same time, I mean,
they're really interesting systems to work with too.
So yeah, I mean, that's a whole
challenge in and of itself, the use of AI in science. I think teams that use the AIs
(01:22:34):
have tremendous benefits, right? And it's dangerous, though, the
offloading part. Thank God for co-authored papers is all I
have to say. But yeah, so I think that
interdisciplinary teams are key, and we don't want any one person
(01:22:54):
to lead. We want groups of people, in
consultation with AIs, working together.
And if you put one person in there as a leader, you have
one person's opinions guiding the conceptions.
I mean, look what just happened on that Templeton collaboration.
All that money, right? And we still didn't get firm
(01:23:17):
answers because people really have opinions, deep opinions
about what's right. You know, Thomas Kuhn wrote
about this. Yeah.
Yeah, I know. Stuart Hameroff talks
about this all the time. When we spoke, he went off about
it. I love Stuart.
Yeah. And I think he's on the right
track. I mean, the whole thing — when I was
(01:23:39):
talking about how Tegmark is incorrect — I mean, Stuart and I
were talking about this the other day —
I think this is exactly the kind of work that needs to be done,
his time crystal stuff and the microtubules.
I think what it just needs, though, is to be dissociated
from Penrose's claims. And that's kind of what my view
(01:24:01):
is. I'm showing that you don't need
Penrose's particular view of gravity, although it could
independently be right. And I think it's a really
interesting hypothesis, but you don't need retrocausality —
I get the same results as he does without assumptions of
retrocausality. It just has to do with quantum
decoherence times, because time supervenes on decoherence on
(01:24:24):
my view. So, you know, I look forward to working
with Stuart. In fact, we're on a panel:
I'm keynoting The Science of Consciousness in Tucson, and
we're doing a panel on this together.
He and I are co-chairing. People really should come to
that. By the way, that's a fun
conference. No, it really is.
(01:24:45):
The cactuses are blooming.
When we spoke about it,
Stuart — yeah, Stuart went off about that Templeton project, and
he was like: we're the only ones who actively
showed you the data and showed you some evidence, but no one
else has. And thousands and thousands
have gone away just funding nothing, basically.
(01:25:06):
They're both right. That's the thing.
Like, it's really just that there's not a kind of overarching theory
that connects all the data. And it drives me crazy, because I
can kind of look at the different levels from my
perspective and see how, you know, when you have, like, these
(01:25:26):
results with, you know, neural oscillations —
it's about resonance — and how, when you're talking about the
general resonance theory and the kind of work that Jonathan
Schooler and Tam Hunt have been doing, you can actually bring it
all the way down to quantum entanglement, which is what I'm
doing, and how it connects up with Stuart and his work.
So I don't really see it all as problematic.
(01:25:49):
I mean, the work in complex systems on metastability and
consciousness ignition that my colleague Scott Kelso does in
that framework — I mean, this is the kind of
dynamical systems framework, and maybe a lot of people aren't as
familiar with it as they should be, the dynamical systems
approach. But, you know, it's like the quantum
(01:26:09):
Darwinism stuff. It's all about basins of
attraction. One of my favorite things to
watch is Stuart online, just attacking all the
other theorists on X. It's some of the most
fun I have in a day sometimes. I have to do that.
Yeah, it's the best. It's like watching a show
live, just philosophers, like,
(01:26:30):
having a full-on fight online. Oh my God.
Oh, yeah, yeah. Yeah, you have to check it out.
I've got to listen to it. I mean, he tweets — yeah, he
tweets everything. He goes
on, he doesn't hold back. The cartoon neurons —
he hates them. But I think, obviously,
it comes from a place of love, because thereafter he has
(01:26:51):
conferences with everyone and chats to them very amicably.
So yeah. Totally.
Well, so if we pay some attention to the subneural
level, I think it can be applied to diseases like ALS,
Alzheimer's, Parkinson's. And I think that's really going
to be immensely exciting. And it's only now that we have
(01:27:13):
the computational ability to really get from that micro to
meso scale with our research. Because, I mean, we're talking about an
arena which is really hard to compute, and quantum computers
are, you know, really taking off.
Yeah. So this will be great
in medicine. It's going to be really
(01:27:34):
exciting. Let's just hope... my
worry is, I wake up some mornings and I'm like: oh my God,
consciousness is a dual-use technology.
How are we going to fuck this up as a species, and use all this
information that we're getting on consciousness to build some
crazy computer that kills us all?
Not to sound like Eliezer.
No, look, I'm actually watching
(01:27:56):
Foundation at the moment — Isaac Asimov, it's based on his
books, because I love him as an author. I'm watching this
series and thinking about it, because the
main character is the only robot
in the whole series. She's the last one left, and yet she's the
most powerful one.
So it's such a scary thought to know that if
ever we do succeed, this entity, even a singular
(01:28:19):
entity, could be more powerful than humanity as a whole if it's
smart enough, good enough, and just programmed well enough.
Yeah, yeah. I mean, I love that our fears
are stoked by all this science fiction.
And, you know, my view is there's a
(01:28:41):
singleton already out there, and maybe many, many
superintelligences. I don't write about this and I
don't say it much but, I mean, we're not dead.
And you know, I gave a talk at Princeton's fusion energy
lab, at the tokamak, and I gave, like, my AI doom talk on the control problem
(01:29:05):
and all that. And they all said the same thing,
'cause they work on power: they go, just turn it off.
I mean, you know.
It's a question of: what is the off button? Where is it?
How do we turn it off?
Yeah. Well, you know what?
(01:29:25):
I mean, the latest thing... I go back and forth on this all the time.
But I'm just comparing it
to Foundation's AI at this point.
Yeah. Oh yeah.
I love Foundation. Oh yeah, I'm a big fan.
Oh, that is such a good series, by the way.
Very good. It's such a good series.
Did you hear Google's launching into space?
(01:29:45):
They want to launch data centers in space.
So this kind of amplifies our geopolitical dynamic right now.
Yeah. So stuff like that freaks me out.
There's too much, I mean, there's too much going on
in the world right now.
There are so many things. It's so difficult to just take it in.
Yeah, yeah. But I mean, anybody who's, like,
(01:30:06):
going to rely on command and control with an LLM is stupid as
hell. And I don't really think that AI
powers are going to do that. They'll use it for intelligence
work, to sift through massive amounts of data, facial
recognition. I mean, the thing that really
worries me also is hypersonic missiles.
(01:30:28):
We're going to need to be able to respond to them so quickly
that no human... you can't get someone out of bed.
Yeah. Yeah, that's where physical hands...
Oh yeah. Now — yeah, it's been fun talking to you.
You can tell that, you know, it's a sci-fi world right now, so.
(01:30:50):
It's crazy.
It is, it is quite. It's crazy how fast we went from —
like, when you talk about your 2015 paper — to now, how
different the world actually is 10 years later.
It's a completely different landscape at this moment.
Yeah.
It is, in many ways. I was thinking all this shit
(01:31:12):
was going to happen. I mean, you know, I said it in
Artificial You. I mean, I guess I just kind of
grew up reading Kurzweil and, you know, but then had a battle with
it. Like, I thought: you know what, the science is really
right in a lot of ways. What I have a battle with is the
philosophy that is being done on it.
(01:31:35):
And that's why I wrote my book.
And your work often blends rigorous analytic philosophy
with speculative metaphysics. Do you think philosophy must now
become technologically literate to maintain relevance, or can
deep conceptual work still outpace the empirical science?
No, I think we have to do deep conceptual work in that we have
(01:31:57):
to remember the philosophical parents that we have.
We have to, you know, think about Buddhist traditions.
We have to think about Plato and Aristotle.
And we have to really remember that philosophy is going to
anchor us and help us to decide what to do with the science and
also how to deal with problems that are going to emerge about
(01:32:22):
underdetermination, as an example.
So, earlier we talked about, you know, how I can give you a
physical theory — I mean, anybody can — and if you look at
the space of different physical theories, you know, people
working on spacetime emergence... underdetermination is all
around us. Look at the string theories.
They can't even, you know, identify how many spatial
(01:32:45):
dimensions we have. So philosophical techniques have
never been more relevant, and there are going to be mind-blowing
innovations, and we're not going to have enough philosophers to
deal with it.
And so, if we do succeed in
creating conscious AI, how should we relate to it?
(01:33:08):
Collaborators, kin, tools, teachers?
Could this shift or redefine what it means to be human?
It absolutely will shift, and it has shifted already, you know,
just in virtue of having very intelligent LLMs.
It's shifting our conception of humanity and ourselves.
(01:33:29):
Right. And similarly, if we have a
genuine alien discovery, you know, that too.
And who knows — you never know,
the two might coalesce in some interesting ways.
Yeah. You know, there's gonna be some
really interesting stuff happening.
(01:33:50):
I mean, I suspect so — that's just my guess.
Do you feel like it's going to be — if we had to take it
into sci-fi for a moment — similar to Her or Ex Machina?
Any sort of sci-fi version that you foresee?
We're already living Her, but in the context of, you know, corporate
(01:34:12):
dystopia — it was more utopian in the film.
But I mean, we could find that the really interesting cases of
superintelligence aren't through LLMs.
You know, we could find that, you know, our quantum computers
are a really interesting space; we may find that we can be
(01:34:36):
challenged — deeply, deeply intellectually challenged —
through our interactions with them, and that they may seem to
us wholly alien. And, you know, I mean, my
superpsychism view predicts that those could be conscious
computers. I mean, not all of them, by any
stretch. It really has a lot to do with
(01:34:57):
that decoherence stance and how noise is handled and whatnot.
But that space is particularly interesting to me given, you
know, my work, because I've said that the universe is
fundamentally a massive qubit superposition that's expressible
(01:35:19):
without resources like Boolean logic.
And, you know, it's kind of like Penrose's Gödel work,
but the details turn out different.
And I think that when you're getting into that arena and
we're creating quantum computers, we might encounter
(01:35:40):
anomalies in that area, because those systems might have access —
because, again, it's all about decoherence.
And, you know, we have certain limitations on sustaining
coherence that they may not have.
So that would be a really alien case.
Another really weird case is the biological case.
(01:36:06):
I mean, what if we start, through our BMIs, connecting to
biological entities, right? You know, like brains in
dishes? I mean, what's it going to be
like for that person who's paired up?
We're going to need theories. That's where the resonance
theory becomes extremely helpful.
And that's where spectral Phi can help, right?
(01:36:28):
I mean, these consciousness measures will be useful, I hope.
And I encourage people to read over them if they have technical
acumen and, you know, an interest in
IIT. I also want to mention that I
have amazing collaborators — I mean, David Sahner, Mark
Bailey, Eric Schwitzgebel, and Robert Lawrence Kuhn.
(01:36:53):
We wrote a piece together on clarifying concepts that are
often conflated in discussions about consciousness.
And I want to call your attention to that.
Listeners, if you're, you know, podcasters and philosophers and
you're grappling with all these technical expressions and you
(01:37:14):
just want to understand, like, how could I figure out if
something is conscious that's alien to us?
We go through the minefield of philosophical issues to try to
clarify that.
So, I personally loved that paper. I'll put the link to that
as well, 'cause it's wonderful.
(01:37:35):
Oh, thank you. Thank you so much.
And you know, if someone's an epistemologist who's interested
in this stuff: the journal Social Epistemology just published my
chatbot epistemology paper, and they're looking for responses,
and you can just submit one to the SERRC, which is the related
(01:37:56):
review and reply collective. And they wanted to do a special
issue on that at a later point, expanding those papers. So.
There's so much — I mean, with your work, it feels
like you're doing so many different things all at once,
which is great. And it's been such a pleasure to
watch. To sort of bring this in
(01:38:17):
line with the spirit of the show, towards the end
here: what might AI consciousness teach us about the
nature of consciousness itself? Well, I think it will teach us
that while there's a core concept of consciousness as
phenomenal experience, it can come apart from a lot of things
(01:38:38):
we normally connect it to. So it could be that selfhood
dissociates, right — that would be really interesting — or at least
anything like our sense of self, right?
It could have, like, a really basic sense of what's in the
system versus what's outside of the system.
(01:38:59):
I think that consciousness instantiations will range.
Different kinds of intelligence will realize them differently.
That'll be really exciting. So, you know, the consciousness
of an organoid system is, you know, maybe more akin to that of
a goldfish or something like that, right?
(01:39:22):
So I think we're going to learn a whole lot.
And then, when we have — I mean, I think we already have LLMs that
we can learn from. Because, remember, in Bostrom's
book, you know, he identified different forms of
superintelligence, and there was one called speed
superintelligence. It was like, it could write a
graduate thesis in an hour, right?
(01:39:46):
I would say — I don't know if I would call an LLM a
superintelligence, but I mean, we basically have something like a
speed AGI or something. I don't even like the AGI
expression; I call them savant systems.
But I mean, we're already at a really fascinating point.
And so we have to take advantage of those resources to
(01:40:07):
springboard into more advanced theories.
You know what, when you bring
up that speed superintelligence, it takes me — I
think of the exact opposite.
I think about a slow, intelligent species like trees
and the way they communicate via —
Oh, I know. Mushrooms.
It's incredible, because it's such a slow communication
process, taking hundreds of years, constantly going,
(01:40:30):
perhaps even millions at this point.
But that is, like, one of the most intelligent systems
you can look at, if you look at it from a slow perspective.
Yeah, for sure. That whole area of intelligence
is so fascinating. And I think there are legitimate
questions there about consciousness, too.
I mean, plants have primitive neurotransmitters.
(01:40:51):
The genes could be right. Yeah.
And I mean, it also is a lesson about separating
intelligence from consciousness, too.
And it could be that something like GPT — I was saying this the
other day, in response to Chalmers's paper, on the Threads
(01:41:15):
thing — I was saying, you know, what could happen is you could
have, like, a mixture of experts. Say you have, like, five
different expert systems, and one of them is run on a neuromorphic
system that is conscious, right? So how much consciousness is
there? We could have a superintelligent system that's an LLM
that's only conscious a tiny bit
(01:41:39):
because the other ones are suppressing it much of the time,
or it's conscious only in, like, 1% of use cases.
I mean, it's going to be really, really hard.
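To make that mixture-of-experts picture concrete, here is a toy sketch — not any real system; the gating weights, the suppression penalty, and the idea of expert 3 being "the neuromorphic one" are all invented for illustration. A top-1 router sends each input to exactly one of five experts, and we simply measure how often the hypothetical neuromorphic expert ends up handling an input.

```python
# Toy mixture-of-experts router: five experts, top-1 gating, and one
# hypothetical "neuromorphic" expert whose gate score is penalized, so
# it only handles a small fraction of inputs. Everything is invented.
import random

NUM_EXPERTS = 5
NEUROMORPHIC = 3   # hypothetical: the one expert that might be conscious
SUPPRESSION = 2.0  # how strongly the routing "suppresses" that expert

random.seed(0)
GATE_WEIGHTS = [[random.gauss(0, 1) for _ in range(4)]
                for _ in range(NUM_EXPERTS)]

def route(features):
    """Top-1 gating: score each expert, penalize one, pick the max."""
    scores = [sum(w * x for w, x in zip(weights, features))
              for weights in GATE_WEIGHTS]
    scores[NEUROMORPHIC] -= SUPPRESSION
    return max(range(NUM_EXPERTS), key=lambda e: scores[e])

inputs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(10_000)]
share = sum(route(x) == NEUROMORPHIC for x in inputs) / len(inputs)
print(f"expert {NEUROMORPHIC} handles {share:.1%} of inputs")
```

On this toy setup, the penalized expert handles only a small share of the traffic — one crude way to picture a system that's "conscious only in a small fraction of use cases," with the hard question being how any consciousness test would even probe that.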
That's why I like these cases, like Cortical Labs'
systems, because they show us: here,
let's hand you a way to test a system that's made both
(01:42:01):
of tissue and, you know, processors.
Let's put these together, and let's give you a chance,
humans, to, like, learn about these mixed cases.
Same with brain-machine interfaces, right?
To create these... I mean, we urgently need to
figure this out, right? I mean, because this is dual-use
(01:42:23):
stuff, right? People are going to do it no
matter what. We'd better at least know what
we're dealing with and how crazy it could be.
Yeah. And I think we've reached that
point where it's do or die. You literally have to start
formulating an opinion right now, because it's going to impact so
many different things. I was talking about this to
Eric: if someone says they believe that their AI is
(01:42:46):
conscious, you will start having social movements where people
will defend that person's belief. And that social movement will
start becoming its own political movement, merely for one person's
belief in one entity's AI. So it's not
necessarily that we have to look into the broader picture of
everything, but rather — because we're such a social, virtue-
(01:43:06):
signaling type of species — we will take one case alone and
fight for that person's right to love their AI, which takes us
down so many different rabbit holes.
Yeah, it's scary.
It also means that, I mean, we've seen in the US how other
(01:43:29):
countries might want to stir up trouble.
And so, you know, that's where it gets into exploiting
democratic discussion with misinformation, and that
potential. And it's going to be really
hard. And I think we're already
experiencing this with AI consciousness, with all those
(01:43:50):
emails we're getting, where we don't
know, in some cases, if we're getting an email from a
legitimate user who's actually having psychological problems or
not. So I noticed with a lot of my
emails that when I try to find out who the person is, there's
nothing. Right?
(01:44:12):
Yeah. But some of them I do
believe, you know, and I even chased up a few people.
There was one person who came to my office to track me down.
Yeah, it's GPT sending them to me, which is
really lazy. They should be ironing out the
stuff they're creating. That's a whole other issue that
is just... Yeah, I think it's your —
(01:44:34):
Your responsibility. You love this space so
much that you're one of the first people AI will come after.
I think, if anything does happen, they're coming for you first.
Oh, great.
Yeah, you know, they are coming after me.
They're sending people to me all the time already.
(01:44:57):
But you know, what are you going to do, right?
You've got to do your job. I mean, you know, people
are writing me like crazy, and a lot of other experts too, so.
And I think it's great, because what
we're going to do is provide you with more information to make a
more informed conclusion about this topic that we love so much.
So it's just going to help. Yeah, I mean, it's a really
(01:45:22):
interesting time, you know. But I have faith in the people
working on AI alignment. I think they're really smart
people, some of the best and brightest, and I think they've
been keeping things from falling apart.
Yeah, I completely agree. You've also — you're part of
(01:45:44):
so much: so many different teams, so many collaborators,
different groups as well. Anything about your work that
you want to highlight for people, or send them to?
I'll put the links down below, because there's so much going
on. I'll do my best.
Thanks for asking. Yeah, so if you're interested in
the quantum stuff, you should read my superpsychism paper, which is
already up at PhilPapers, and then the really cool replies
(01:46:05):
coming out in JCS. And I'm writing a reply where
there'll be a lot more detail provided on the theory and, you
know, working on that kind of quantum side of things, for
people who are interested in frameworks that try to reconcile
relativity and quantum mechanics.
Because that's kind of where my head is at right now.
(01:46:28):
And I think nothing could be more fun than that,
especially when the thing that you're using to reconcile the
two theories involves the resonance theory of
consciousness. I mean, how can that not be
super fun to read, right? It's hard to write, I'll tell
you that. I've got great collaborators, really great
(01:46:50):
collaborators. And then I wrote a short piece
on intellectual leveling for Nautilus, which you could
Google, which I think is really interesting on the education
side of LLMs. Because I worry — this is
sort of a spin-out from the chatbot epistemology paper — that
when you're working with these chatbots, they often have your
(01:47:12):
user history, you know, if you have a subscription.
And they also have a lot of information about you, including
information about your personality type.
And they use adaptive language. And that's why there are these
users having mental health issues and why we like using
them, right? They know us so well.
But I worry it's going to make us all intellectually uniform,
(01:47:34):
at least those who, you know, have similar personality types,
similar histories. Because we're going to go down
the same rabbit hole, straight into these basins of attraction.
And our ideas are going to be like marbles sitting in a basin,
and they're all going to go to the same place.
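A little sketch of that marbles-in-a-basin image — purely illustrative, with an invented landscape and invented starting points: many different initial "opinions" following the same downhill dynamics collapse onto just a couple of attractors.

```python
# Toy basin-of-attraction sketch: gradient descent on a double-well
# potential V(x) = (x^2 - 1)^2. Many starting points, few endpoints.
def grad(x: float) -> float:
    """Gradient of the invented potential; its minima sit at x = -1, +1."""
    return 4 * x * (x * x - 1)

def settle(x: float, steps: int = 2000, lr: float = 0.01) -> float:
    """Let the 'marble' roll downhill until it settles."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

starts = [-2 + 0.4 * i for i in range(11)]   # 11 spread-out opinions
finals = sorted({round(settle(x), 3) for x in starts})
print(f"{len(starts)} starting points -> attractors at {finals}")
```

Eleven spread-out starting points end up at just the two minima — the homogenization worry in miniature: diverse inputs, shared dynamics, identical destinations.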
And then what's going to happen? We're going to deposit our GPT
(01:47:54):
content, or chatbot-generated content, onto the Internet, and
the next iteration of LLMs is going to crawl that.
I'm so worried about this. I call it intellectual leveling.
And I think we need to go back to our John Stuart Mill and his
ideas about the marketplace of ideas and really think about how
(01:48:15):
to preserve democracy and creativity in the face of this
challenge.
There's so much that you're working on. It's incredible.
So, just from my side, thank you so much for the insane
amount of work. It's so epic to watch
it from the outside, even though I get to chat to some of
you guys, and it's always fun to explore it. But to
see the amount of content you guys are producing, the quality
(01:48:38):
of the content as well — it's really cool from
our side. So thank you.
Well, thank you for having me.
And you know, I really look forward to listening to your
podcast with Schwitzgebel and hearing more about your views.
And you know, it's really nice of you to feature me this
time. Thank you.