Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.
Speaker 2 (00:13):
Hey you, welcome to Stuff to Blow Your Mind. This
is Robert Lamb and today I'm going to be chatting
with Jonathan Birch about his new book, The Edge of Sentience:
Risk and Precaution in Humans, Other Animals and AI. It
comes out later this week, on November fifteenth, in the US.
Jonathan Birch is a professor of philosophy at the London
(00:35):
School of Economics and Principal Investigator on the Foundations of
Animal Sentience Project, a European Union funded project aiming to
develop better methods for studying the feelings of animals and
new ways of using the science of animals' minds to
improve animal welfare policies and laws. In twenty twenty one,
he led a review for the UK government that shaped
(00:57):
the Animal Welfare (Sentience) Act twenty twenty two. In twenty twenty
two through twenty twenty three, he was part of a
working group that investigated the question of sentience in AI.
So I'll definitely be asking him about animals, about AI
and maybe a few surprises here. So without further ado,
let's jump right into the interview. Hi, Jonathan,
(01:21):
welcome to the show.
Speaker 3 (01:22):
Hi Robert, thanks for inviting me.
Speaker 2 (01:24):
So the new book is The Edge of Sentience. But
before we get to that edge and start talking about that,
how do you define sentience in your work? And what
are the implications and challenges of agreeing on a working definition?
Speaker 3 (01:40):
Well, to see why I think sentience is a
really useful concept, let's start by thinking about pain.
I think a lot of us have wondered: can an
octopus feel pain? Can insects feel pain? Can things hurt?
Can they have that feeling of ouch? And this is
a great question, but I think it's a bit too
narrow, because we need to be aware of the fact
(02:02):
that other animals might have very different mental lives from us,
and words like pain they might be a bit narrow
for thinking about what the experiences of other animals are like.
So it's good to have concepts that are a bit
broader than the concept of pain, and to have a
concept that includes other negative feelings like frustration, discomfort, but
(02:29):
also the positive side of mental life as well, because
we also care about this. We care about states like joy, excitement, comfort,
pleasant bodily sensations like the feeling of warmth, and we
want a concept that is broad enough to include all
of this, both the negative side and the positive side
of mental life, any feelings that feel bad
(02:51):
or feel good. And this is what the concept of
sentience is about: the capacity to have feelings that feel good
or feel bad.
Speaker 2 (02:58):
Now, to what extent is sentience a different concept from consciousness?
Where do they overlap and where do they differ?
Speaker 3 (03:05):
The problem I have with the concept of consciousness is
that it can refer to many different things. Sentience isn't
perfect either, but I think it's a bit
more tightly defined than consciousness because when we talk about consciousness,
sometimes we're just talking about immediate raw sensation, what it
feels like to be me right now, the sights, the sounds,
(03:29):
the odors, the bodily sensations, the pains, the pleasures, and
so on, and that's quite closely related to sentience. But
sometimes when we're talking about consciousness, we're talking about things
that are overlaid on top of that, like our ability
to reflect on what we're currently feeling, and our sense
of self, our sense that my current immediate raw experiences
(03:54):
are not happening in isolation, but they're part of a
life that extends back in time and extends forwards into
the future, and I'm self aware. I'm aware of myself
as existing in time, and these things are much more
sophisticated than just having those immediate raw sensations. So it's
very useful to have a term that draws our attention
(04:17):
to those immediate sensations, and that's what sentience does.
Speaker 2 (04:21):
Now, I realize this is of course a huge question
that you tackle in the book, and I'm not going
to ask you to regurgitate all of it for us here.
But where are the least controversial divides between sentience and
non-sentience in the animal kingdom, and where does it
become more messy or controversial?
Speaker 3 (04:41):
I think it's become very uncontroversial in the last few
decades to think of all other mammals as being sentient beings.
And that's a huge change, because there's a long tradition
of skepticism in science, going back to René Descartes in
the seventeenth century, but also the so called behaviorists in
(05:02):
the early twentieth century, who said you should never be
talking about consciousness or sentience of any kind in relation
to other animals, and that view has just fallen away.
And I think it's good that it's fallen away, because
I think it is pretty obvious that our cats and dogs,
and in fact all other mammals like pigs, cows, et cetera,
(05:23):
they do have feelings. And then because of this new consensus,
the debate, the controversy has started to be more around fishes,
where you get some genuine doubters, and particularly invertebrates, where
we move from animals with a backbone to animals without,
and we're looking at animals separated from us in time
(05:44):
by over five hundred million years of evolution, like octopuses, crabs, lobsters, insects. Here,
I think doubts are very common, and it's entirely reasonable
to think maybe not all invertebrate animals are sentient, and
there's a lot of debate around that.
Speaker 2 (06:02):
And you mentioned the octopus already being an example of
a very complex creature that of course is very distant
from us. And yeah, how do we line
that up with this idea of sentience, and then how
do we keep from trying to compare it
too much to what we have, and I guess to consciousness?
Speaker 3 (06:19):
Right. Yeah, sentience is a
good word for pushing us away from anthropocentrism and away
from this assumption that animals have exactly the same feelings
we do. So does an octopus have pain? Well, it's
probably not feeling it in the same way that we would.
(06:42):
It's going to be a state that feels very different
to the octopus, I think. Is the octopus sentient? Well, yes,
I think so. The sentience concept is broad enough to
just capture the whole range of animal mental lives, however
much they may vary.
Speaker 2 (06:57):
Now, when it comes to a moral obligation to sentient
life forms, I realize this
is asking a question where ultimately there are going to
be a lot of different cultural differences and so forth.
But where are we generally with the idea of
our moral obligation to sentient life, and where are we
looking to go with it? What does the
(07:17):
trajectory seem to be, again?
Speaker 3 (07:20):
I think there's been a sea change on this in
recent decades. I think opinions are changing, and they're changing fast,
and I think changing in a direction I find encouraging
because it wasn't that long ago that you'd often get people
denying the idea that other animals can make any moral
claim on us at all. People would say morality is
(07:42):
about humans, it's human interests, human rights. Other animals are
not part of it, and very few people argue that now,
because I think once you recognize other animals as sentient
beings that can suffer, that can feel pain, that have
lives of their own, it becomes very, very hard to
defend the view that none of this matters ethically or morally.
(08:05):
Of course it matters, and then the debate is about
how it matters, how strong are our obligations, And here
you do get a lot of disagreement. Still, I feel
like the point of consensus that I'm trying to latch
onto in my book is that we have a duty
to avoid causing gratuitous suffering to sentient beings, which is
(08:27):
to say, if we're going to do something that will
cause suffering, we have to have a sufficiently good reason
for doing that thing. And then, of course you get
debate around what might be a good enough reason. You
get debate around, for example, whether scientific research might be
a good enough reason, and there'll always be disagreement about that,
(08:47):
but the need to have a reason, so that we
are not causing suffering gratuitously, this I think everyone really
can agree about.
Speaker 2 (08:56):
Now, you discuss multiple additional cases that seem to exist
at that edge of sentience, as the title refers to,
and I'm not going to ask you about all of them,
but one of the more surprising ones to me, I
guess because this is an area that I had not been
paying close enough attention to in the science news, is
the idea of brain organoids, or artificially grown neural tissues.
(09:18):
I was not aware that they were playing Pong. So
what's the story here, and what does it mean for
our understanding of sentience in something like this?
Speaker 3 (09:28):
It's an astounding and very exciting emerging area of research
where you can induce human stem cells to form nerve
cells, to form brain tissue, and you can build structures
that model regions of the human brain at very, very
small scales. And sometimes researchers are doing this to model diseases.
(09:51):
They want to model Alzheimer's or Zika or fetal alcohol syndrome,
and this can be a very good way of modeling.
If you compare it to the alternative, which is to
use a living animal like a mouse or a rat,
that has real limitations, because the rat's brain is so
different from the human brain. So this is a very exciting
way of getting better models of diseases. Of course, it
(10:16):
raises questions as well about, well, there must be some
point at which you really should stop doing this, because
you've made something too lifelike, you've made something too big,
you've let it develop for too long, and there's now
a chance that it will be sentient in its own right.
And I feel like this is a risk that seems
particularly striking in cases where what the researchers are trying
(10:37):
to do is model intelligence, model cognitive functions. That's what
this system, DishBrain, that you were referring to, is doing.
Because what the researchers did was train it to play
the video game Pong through interacting with a computer interface,
(10:57):
and so the system could control the paddle and they
demonstrated measurable improvement in gameplay over twenty minutes. So by
getting feedback on its performance, the system was learning how
to play Pong. And really, the thought that we might
(11:20):
be getting dangerously close to the edge of sentience, I
think strikes you very clearly when you read about studies
like this.
Speaker 2 (11:27):
Yeah, especially to your point, the idea that we could
get there sort of very much by accident in this case,
you know, in part perhaps while trying to avoid things
like cruelty to mice and other lab animals.
Speaker 3 (11:44):
Well, this is why I think it would be an
overreaction to immediately ban all of this research, because that
would be inconsistent. We need to be consistent in our
attitudes to different risks, and it's no use if we
crack down hard on the organoid research in a way
that just leads to more research being done on obviously
(12:05):
sentient animals like mice and rats and monkeys and so on.
We've got to let this research develop because it could
be replacing animal research, and we have to encourage that.
At the same time, we need proportionate steps. We need
to think about what the red lines are so that
it doesn't go too far.
Speaker 2 (12:33):
Now, another huge question from your book is how would
we even know if AI became sentient, and what would it
mean for us if it did?
Speaker 3 (12:44):
I think we wouldn't, though. And this is the big
fear that we may be rapidly approaching the edge of
sentience in this case too, with the rate at which
AI is developing and the extraordinary behaviors we're seeing from AI systems,
and yet our understanding of how they work remains incredibly poor.
And it's not just that the public doesn't understand while
(13:07):
the people working at the tech companies do understand; the
people at the tech companies do not understand either. These
systems are black boxes where you know the architecture, the
overall architecture that you've programmed the system to have, but
then you've trained it on vast, vast
amounts of training data, and in the process it's acquired
(13:28):
these emergent capabilities. It's acquired algorithms that you didn't program
into it, but that it can now implement to reason
its way through problems. And we don't know what the
upper limit is here. We don't know as these systems
scale up, we don't know what algorithms they might be
able to acquire. And we don't know whether there's some
(13:51):
point at which, if you recreate enough of the computations
that are happening in a human brain, maybe you also
get the sentience as well, maybe you also start to
get feeling as well. This is a view that in
philosophy is called computational functionalism. It's like a long word
for the idea that if you recreate all the computations
going on in the brain, nothing else is needed to
(14:14):
get sentience, you get the sentience as well. And that's
the possibility we have to take seriously, and it's a
real risk, and it means we could create sentient AI
long before we accept that we've done so, or before
we realize that we've done so.
Speaker 2 (14:29):
This leads me to a question that my wife asked
me to ask you when I said, hey, do you
have any questions about sentience and AI and animals and
so forth? She wanted me to ask should we be
polite when we're addressing Siri, Alexa or various you know,
Google Gemini or whatever kind of text based interfaces that
we're using. Because I found myself, like,
(14:54):
going into say Google Gemini, testing it out, just kind
of like experimenting with it, seeing what's up with it,
and then after a few exchanges with it, feeling like
I need to say, oh, well, thank you, that's all
for today, and feeling like I need to be polite.
But then also I have caught children, my own child
once or twice being a little harsh with say Siri,
(15:18):
or maybe when their alarm is going on too long
in the morning, that sort of thing. So what are
your thoughts about that?
Speaker 3 (15:23):
Yeah, it's a fascinating question. As well as
the book, there's a paper that we just released called
Taking AI Welfare Seriously, and it is an issue
we should take seriously right now because AI systems that
might realistically be sentient could be with us quicker than
(15:47):
we expect, indeed, at any time, and I think it's
great to be having that discussion now about what are
we going to do about that. The questions it raises
are absolutely enormous. We don't know how to answer them,
and I think maybe it's right that a very low
cost starting point that we can do right now is
(16:07):
just start trying to cultivate an attitude of respect towards the
systems we're currently interacting with. There's every chance they're not sentient,
but there's no harm in cultivating an attitude of respect anyway.
And by cultivating that attitude of respect, we'll be more prepared,
more prepared for the future where there really might be
(16:29):
a moral requirement to avoid torturing these systems.
Speaker 2 (16:34):
Now, in terms of just identifying potential sentience, and you've
already outlined the challenges, if not impossibility, of that.
Can you tell us a little bit about the gaming problem?
Speaker 3 (16:45):
One of the problems we face in this area is
that if you ask AI whether it feels anything or not,
the answers vary a great deal. Sometimes they say yes, sometimes
they say no, but those answers are not giving us
very good evidence at all. The problem is that
we've trained these systems to mimic the dispositions of a
(17:08):
helpful human assistant. So in their training they've got rewarded
constantly for being as human like as possible. And so
we have this situation in which we've got reason to
think our criteria for sentience will be gamed, so to speak,
because the system can serve its objectives of being a
(17:30):
helpful human assistant by mimicking behaviors that
we see as being persuasive of sentience, talking as
if it had a rich internal life, as if it
had emotions, as if it had sensations. Sometimes developers have
reacted to that problem by just programming the systems to
(17:52):
deny their sentience, so they just say, of course, as
an AI system I don't have any feelings. That isn't
very helpful either, because that's not evidence that they don't.
So we're facing this tough situation where the surface linguistic
behavior is not really giving us any evidence either way.
To my mind, the message we have to take from
(18:13):
this is that we need to be doing everything we
can to look behind the surface linguistic behavior to try
and understand the inner workings of these systems better, to
try and get inside the black box, open it up,
find out what computations are actually being performed and how
they relate to those that are being performed in the
human brain, to identify what I call in the book
(18:35):
deep computational markers of sentience, and then look for those
rather than thinking the linguistic behavior will do the job
for us.
Speaker 2 (18:43):
Now, what do you think about our moral and/or
legal responsibilities concerning sentient AI as we look forward into
the future? And again, as you said, like
a lot of this is or could be happening
a lot faster than many of us might think. But
you know, what does that mean when suddenly we have
at least reasonable grounds to believe a particular AI is sentient?
Speaker 3 (19:06):
It's a huge debate that I really think we should
be having now. It's great to be having it now.
In The Edge of Sentience, I defend a principle I
call the run-ahead principle, which says that in thinking
about these issues, we really need to be asking what
would be proportionate to the risks posed by credible future technologies,
(19:27):
not just the technologies we have now. Because the technology
is moving too fast and regulation moves very slowly, we
don't want to be in the position where we're totally
unprepared for what happens because we were only
debating the current technology rather than the possible future technology.
So it's absolutely worth debating. If we get to
(19:49):
that situation where we've got some deep computational markers of sentience,
and then we find that we have systems displaying those markers,
so that there is a realistic possibility that the system is
genuinely sentient, we really have to be thinking about what
does our duty to avoid causing gratuitous suffering require from
(20:12):
us in this case, and I think it will imply
ethical limits on what people can actually do to AI systems.
What those ethical limits are is very, very hard to say,
because we can't even really imagine the welfare needs. It
(20:32):
depends a lot on the precise nature of these systems
and the way in which they've achieved sentience, whether we
can say anything about their welfare needs at all. And
to me, all of this points towards having good reasons
to desperately try not to develop this technology at all
if we can. I think currently we're just not ready,
(20:54):
We're just not in a position to use this technology ethically,
and so in a way we should be trying to
avoid making it at all.
Speaker 2 (21:05):
Now in the book, there's at least one example, and
I apologize, I'm blanking on the specifics here, but you
mentioned a fairly recent call for ethical guidelines concerning AI
development that was dismissed by critics as being mere science fiction.
Speaker 3 (21:22):
Well, Thomas Metzinger? Yeah, yes.
Speaker 2 (21:24):
I believe so. And that struck me as interesting because
on one hand, clearly, at least through science fiction,
and of course outside of science fiction as well, we've
been contemplating things like this for decades and decades, and
yet as we get closer to the reality, the label
science fiction is also sometimes used to dismiss it, saying, well,
(21:46):
that is just sci-fi, those aren't actual things we
should be worrying about. So I don't know if you
have any thoughts on to what extent science fiction and
science fictional thought has prepared us for this, or kind
of created this barrier that prevents us from acting as quickly.
Speaker 3 (22:02):
Yeah, I don't think it has prepared us. Yeah, I
think that's fair to say, even though we do see
films like Her, for example, from about ten years ago, that
now seem remarkably prescient. No one thought they were
describing events ten to fifteen years in the future, and
yet that is the future we now find ourselves in.
(22:24):
It's extraordinary. But yeah, that doesn't in any way mean
that we're prepared. And in my work on this, I'm
trying to develop a sort of centrist position that is
about avoiding the pitfalls of extreme views on both sides,
where on one extreme you've got people who think that these
(22:44):
systems are already sentient, and that we can tell from their surface linguistic behavior.
They just talk as if they have feelings, so we
should think they do. And I think that's credulous and
it needs to be avoided. On the other side, there's
this dismissal of the whole idea that AI could achieve sentience,
(23:05):
this idea that, of course you need a biological brain.
Of course you need to be a living animal, and
we're just not in a position to be confident or
sure about that. This well-known philosophical position, computational
functionalism, might be right, and if it is right, then
you might not need a biological brain at all, and
we have to take that seriously as well. So for me,
(23:27):
it's about finding that middle ground where we're taking the
issue seriously, but we're thinking that this has to be
the beginning now of a process where we really try
and look for robust, rigorous markers and have serious ethical
debates about what the right response to those markers of
sentience would be. We ought to have neither
(23:49):
knee-jerk skepticism nor credulousness.
Speaker 2 (24:03):
Now, I realize this next question is largely outside the
scope of this book. But what are the implications for
the consideration of possible extraterrestrial sentience, should we
potentially encounter it in, say, a biological or technological form?
Speaker 3 (24:20):
That just makes me think of octopuses again, because of course,
you know they're so alien from us. They look like extraterrestrials.
But they're not. They're terrestrial, and they're right here on
Earth right now. So I think it's great to, you know,
recognize the possibility of forms of sentience very different from
our own, and then recognize that our actual Earth already
(24:43):
contains them, and that we can start thinking now about
those real cases and what we're going to do about
those real cases. I'm entirely open to the idea that,
you know, just as there are really alien forms of
sentience on Earth, maybe there are out there elsewhere in
the universe as well, but we can only speculate, and
with octopuses, we don't need to speculate. We can
(25:05):
be studying the alien life forms that are
with us now on Earth and get real knowledge about them.
Speaker 2 (25:12):
Now, through much of this topic, you know,
there's this sense of expanding our compassion for non-human
sentient entities, and certainly the octopus is a great example
of that. I know in my own life, like years
and years ago, when I first started reading a bit
about their intelligence and their behavior, I stopped eating octopus
before I stopped eating other meats. And so I feel
(25:35):
like this kind of response is going to, you know,
happen inevitably insofar as we consider these
non-human sentient forms.
you see all of this having on, potentially on the
expansion of our compassion for each other? Like, does this
expansion of compassion for non-human entities, do you think
(25:57):
it ultimately helps us become more compassionate to other humans?
Speaker 3 (26:01):
It may do, and I suppose I hope it does. Yeah.
I certainly don't think it's some kind of zero sum
game where by being more compassionate to octopuses and insects
and crabs and lobsters and so on, we're forced to
then be less compassionate to other people. I don't think
it works like that at all. I think it's more
(26:21):
this general attitude. And I'm a big fan of the
Indian idea of ahimsa: non-violence, non-injury, abolishing
the desire to kill or harm other beings. I think
it's about trying to cultivate that virtue, trying to walk
that path, and it's a path that encompasses other humans
(26:44):
and non-human animals as well. And through cultivating this
general non-violence, you know, this general loss of our
desire to dominate and crush and harm other beings, even
if they're insects, we can become a lot more peaceful, I
think, in our dealings with each other too.
Speaker 2 (27:04):
And what do you see ultimately as the prime, I
guess, motivators in changing the way we see these various entities?
Is it through laws and regulations? Is
it through more sort of ground-level outreach? Is
it both? I mean, how do we really affect this
(27:26):
sort of change or how have we affected it so
far most successfully?
Speaker 3 (27:30):
It's a huge open question for me what actually
succeeds in changing behavior, because I've been focused a lot
on scientific evidence, on synthesizing the existing evidence for
sentience in other animals and presenting it to policymakers. Sometimes it
does produce change, and in the UK, the Animal Welfare
(27:51):
(Sentience) Act was amended to recognize octopuses, crabs, lobsters, and crayfish
as sentient beings because of the report that we produced.
So that was, in a way, a surprisingly effective example
of how marshaling scientific evidence can move policymakers. So
it's great when that happens, but of course it doesn't
(28:12):
always happen, and we do face this problem that a
lot of animals are pretty clearly sentient, think of pigs,
for example, or chickens, and yet they continue to be
treated by humans in absolutely appalling ways. So merely knowing
that an animal is sentient often does not drastically change
your behavior towards it. And I'm fascinated by the question of, well,
(28:36):
what else is needed? What other information? I think
there are empathy barriers. You could know that a chicken
is sentient, but that doesn't necessarily convert into immediately empathizing with
that chicken and the animal's suffering. We've got to think
about what might bridge that gap. Narrative stories or video
(29:01):
documentaries like My Octopus Teacher, they could well be part
of it. I think there's probably lots of ways to
bridge that empathy gap, but we have to recognize it
as a problem and to realize that simply knowing the
animals are sentient is not actually enough.
Speaker 2 (29:15):
It's interesting to think about pork and chicken. I don't
know how this pans out in the UK, but in
the States, you often will drive through a city or through
a rural area, either one, and you'll find a lot
of signage and promotion for places that serve pork or
chicken that use cute or amusing like cartoon versions of
(29:39):
those animals. And it's always struck me as
strange, because these are acts and choices
that would seem otherwise to be something that would convince
us not to eat said animal, but they seem to
instead give us license to. And I've always had a
hard time understanding exactly what's going on in our
(30:01):
minds when we consume or create that sort of thing.
Speaker 3 (30:03):
It goes under various names, doesn't it? Cognitive dissonance, the
meat paradox, this idea that we often love animals, we
find them so cute and adorable, et cetera, and then continue
to eat them anyway. It would make
perfect sense if meat was genuinely necessary for our health.
(30:26):
And I think that's the argument the meat industry would
love to be making. It would love to be able
to convince us that meat is needed for our health,
and so these sacrifices in how we treat the animals
are sadly necessary. But it's just not true. It's just
clearly not true. And the existence of all these
manifestly healthy vegetarians and vegans makes it completely undeniable that
(30:48):
we don't actually need to be eating these animals at
all for our health, and we can, if anything, probably
be healthier without doing so. I think once you realize this,
the case really does become very clear for not eating these animals,
that the harms we're doing to them can't be justified,
because the benefit we get is at most the
gustatory benefit, the enjoyment of the product. It's not necessary
gustatory benefit, the enjoyment of the product. It's not necessary
for our health in any way, and that enjoyment can't
justify in the balance all that suffering caused to the animal.
Speaker 2 (31:27):
Well, Jonathan, thank you so much for taking time out
of your day to chat with me. The book is
The Edge of Sentience: Risk and Precaution in Humans, Other
Animals and AI. It is out in the United States
on November fifteenth.
Speaker 3 (31:41):
Thanks Robert, Thank you.
Speaker 2 (31:45):
All right, thanks again to Jonathan Birch for coming on
the show and chatting with me again. That book is
The Edge of Sentience: Risk and Precaution in Humans, Other
Animals and AI. It is out later this week on
November fifteenth, And it gets into so much more that
we didn't have time to get into in this interview.
Just a reminder that Stuff to Blow Your Mind is
(32:07):
primarily a science and culture podcast with core episodes on
Tuesdays and Thursdays. On Fridays, we set aside most serious
concerns to just talk about a weird film on Weird
House Cinema, and we have short form episodes that air
on Wednesdays. Thanks as always to the great JJ Posway
for editing and producing this podcast, and if you would
like to get in touch with us, well, you can
(32:27):
email us at contact at stufftoblowyourmind dot com.
Speaker 1 (32:39):
Stuff to Blow Your Mind is a production of iHeartRadio. For
more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you're listening to your favorite shows.