Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Hey you, welcome to Stuff to Blow Your Mind. My
name is Robert Lamb. Today is Saturday, so we have
a vault episode for you. This is going to be
an episode titled The Edge of Sentience with Jonathan Birch.
I recently referenced this interview episode in some of our
recent Stuff to Blow Your Mind episodes, or yes, actually
(00:29):
it was the episode we did for Halloween, the Grimoire of Her, Volume Two, where we were talking in part about the classic dark sci-fi story I Have No Mouth, and I Must Scream. So here's the actual interview.
I do want to stress, though, that this came out on eleven twelve, twenty twenty four, which, yes, was just
(00:49):
barely a year ago. But at the same time, we're
talking in part in this episode about AI and some
technology that has continued to move very quickly. So just
keep that in mind when revisiting this episode, that a year of technological time has passed since its airing. All right,
(01:10):
let's jump right in.
Speaker 2 (01:14):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.
Speaker 1 (01:24):
Hey you, welcome to Stuff to Blow Your Mind. This
is Robert Lamb, and today I'm going to be chatting
with Jonathan Birch about his new book, The Edge of Sentience,
Risk and Precaution in Humans, Other Animals and AI. It
comes out later this week on November fifteenth in the US.
Jonathan Birch is a professor of philosophy at the London
(01:46):
School of Economics and principal investigator on the Foundations of
Animal Sentience Project, a European Union funded project aiming to
develop better methods for studying the feelings of animals and
new ways of using the science of animal minds
to improve animal welfare policies and laws. In twenty twenty one,
he led a review for the UK government that shaped
(02:08):
the Animal Welfare Sentience Act twenty twenty two. In twenty
twenty two through twenty twenty three, he was part of
a working group that investigated the question of sentience in AI.
So I'll definitely be asking him about animals, about AI,
and maybe a few surprises here. So without further ado,
let's jump right into the interview. Thank you, Thank you,
(02:32):
Hi Jonathan. Welcome to the show.
Speaker 3 (02:33):
Hi Robert, thanks for having me.
Speaker 1 (02:35):
So the new book is The Edge of Sentience. But
before we get to that edge and start talking about that,
how do you define sentience in your work? And what
are the implications and challenges of agreeing on a working definition?
Speaker 3 (02:51):
Well, to see why I think sentience is a really useful concept, let's start by thinking about pain. I think a lot of us have wondered: can an
octopus feel pain? Can insects feel pain? Can things hurt?
Can they have that feeling of ouch? And this is
a great question, but I think it's a bit too
narrow because we need to be aware of the fact
(03:14):
that other animals might have very different mental lives from us,
and words like pain they might be a bit narrow
for thinking about what the experiences of other animals are like.
So it's good to have concepts that are a bit
broader than the concept of pain, and to have a
concept that includes other negative feelings like frustration, discomfort, but
(03:40):
also the positive side of mental life as well, because
we also care about this. We care about states like joy, excitement, comfort,
pleasant bodily sensations like the feeling of warmth, and we
want a concept that is broad enough to include all
of this, all the negative side and the positive side
of mental life. Well, any feelings that feel bad or
(04:02):
feel good, and this is what the concept of sentience
is about: the capacity to have feelings that feel good or feel bad.
Speaker 1 (04:09):
Now to what extent is this a different concept from
consciousness, or where do they overlap and where do they differ?
Speaker 3 (04:16):
The problem I have with the concept of consciousness is
that it can refer to many different things. Sentience isn't
perfect in that way, but I think it's a bit
more tightly defined than consciousness because when we talk about consciousness,
sometimes we're just talking about immediate, raw sensation, what it
feels like to be me right now, the sights, the sounds,
(04:40):
the odors, the bodily sensations, the pains, the pleasures, and
so on, and that's quite closely related to sentience. But
sometimes when we're talking about consciousness, we're talking about things
that are overlaid on top of that, like our ability
to reflect on what we're currently feeling, and our
sense of self, our sense that my current immediate raw
(05:04):
experiences are not happening in isolation, but they're part of
a life that extends back in time and extends forwards
into the future. And I'm self aware, I'm aware of
myself as existing in time, and these things are much
more sophisticated than just having those immediate raw sensations. So
(05:25):
it's very useful to have a term that draws our
attention to those immediate sensations, and that's what sentience does.
Speaker 1 (05:32):
Now, I realize this is of course a huge question that you take on in the book, and I'm not going to ask you to regurgitate all of it for us here,
But where are the least controversial divides between sentience and non-sentience in the animal kingdom, and where does it become more messy or controversial?
Speaker 3 (05:52):
I think it's become very uncontroversial in the last few
decades to think of all other mammals as being sentient beings.
And that's a huge change, because there's a long tradition of skepticism in science, going back to René Descartes in the seventeenth century, but also the so-called behaviorists in
(06:13):
the early twentieth century, who said you should never be
talking about consciousness or sentience of any kind in relation
to other animals, and that view has just fallen away.
And I think it's good that it's fallen away, because
I think it is pretty obvious that our cats and dogs,
and in fact, all other mammals like pigs, cows, et cetera.
(06:34):
They do have feelings. And then because of this new consensus,
the debate, the controversy has started to be more around fishes,
where you get some genuine doubters, and particularly invertebrates, where
we move from animals with a backbone to animals without.
And we're looking at animals separated from us in time
(06:56):
by over five hundred million years of evolution. The octopus, and crabs, lobsters, insects. Here, I think doubts are very common,
and it's entirely reasonable to think maybe not all invertebrate
animals are sentient. And there's a lot of debate around that.
Speaker 1 (07:13):
And you mentioned the octopus already being an example of
a very complex creature that of course is very distant
from us. And yeah, how do we line that up
with this idea of sentience, and then how do we
keep from comparing it, trying to compare it too much
to what we have?
Speaker 3 (07:30):
And I guess to consciousness, right? Yeah, sentience is a good word for pushing us away from anthropocentrism, and away
from this assumption that animals have exactly the same feelings
we do. So does an octopus have pain? Well, it's
probably not feeling it in the same way that we would.
(07:53):
It's going to be a state that feels very different
to the octopus.
Speaker 1 (07:56):
I think.
Speaker 3 (07:57):
Is the octopus sentient? Well, yes, I think so. The
sentience concept is broad enough to just capture the whole
range of animal mental life, however much they may vary.
Speaker 1 (08:08):
Now, when it comes to a moral obligation to sentient
life forms, I guess, and I realize this is asking a question where ultimately there's going to be a lot of different cultural differences and so forth. But where are we generally with the idea of our moral obligation to sentient life, and where are we looking to go with it? Or what does the trajectory seem
(08:30):
to be?
Speaker 3 (08:31):
Again, I think there's been a sea change on this
in recent decades. I think opinions are changing, and they're
changing fast, and I think changing in a direction I
find encouraging, because it wasn't that long ago that you'd often
get people denying the idea that other animals can make
any moral claim on us at all. People would say
(08:53):
morality is about humans, it's human interests, human rights. Other
animals are not part of it, and very few people
argue that now, because I think once you recognize other
animals as sentient beings that can suffer, that can feel pain,
that have lives of their own, it becomes very, very
hard to defend the view that none of this matters
(09:14):
ethically or morally. Of course it matters, and then the
debate is about how it matters, how strong are our obligations,
And here you do get a lot of disagreement. Still,
I feel like the point of consensus that I'm trying
to latch onto in my book is that we have
a duty to avoid causing gratuitous suffering to sentient beings,
(09:38):
which is to say, if we're going to do something
that will cause suffering, we have to have a sufficiently
good reason for doing that thing. And then, of course
you get debate around what might be a good enough reason.
You get debate around, for example, whether scientific research might
be a good enough reason, and there'll always be disagreement
about that. But the need to have a reason
(10:01):
so that we're not causing suffering gratuitously, this I
think everyone really can agree about.
Speaker 1 (10:08):
Now, you discussed multiple additional cases that seem to exist
at that edge of sentience, as the title refers to.
And I'm not going to ask you about all of them,
but one of the more surprising ones to me, I guess this is an area that I had not been paying close enough attention to in the science news, is
the idea of brain organoids or artificially grown neural tissues.
(10:29):
I was not aware that they were playing Pong. So what's the story here, and what does it mean for our understanding of sentience in something like this?
Speaker 3 (10:39):
It's an astounding and very exciting emerging area of research
where you can induce human stem cells to form nerve
cells to form brain tissue, and you can build structures
that model regions of the human brain at very, very small scales, and at times researchers are doing this to model diseases.
(11:03):
They want to model Alzheimer's or Zika or fetal alcohol syndrome,
and this can be a very good way of modeling.
So if you compare it to the alternative, that is
to use a living animal like a mouse or a rat,
that has real limitations because the rat's brain is so
different from the human brain.
Speaker 1 (11:21):
So this is very.
Speaker 3 (11:22):
Exciting way of getting better models of diseases. Of course,
it raises questions as well about, well, there must be
some point at which you really should stop doing this,
because you've made something too lifelike, you've made something too big,
you've let it develop for too long, and there's now
a chance that it will be sentient in its own right.
(11:43):
And I feel like this is a risk that seems
particularly striking in cases where what the researchers are trying
to do is model intelligence, model cognitive functions. That's what
this system, DishBrain, that you were referring to, is doing, because what the researchers did was train it to
play the video game Pong through interacting with a computer interface,
(12:09):
and so the system could control the paddle and they
demonstrated measurable improvement in gameplay over twenty minutes. So by
getting feedback on its performance, the system was learning how
to play Pong. Really, the thought that we might be
(12:31):
getting dangerously close to the edge of sentience, I think
strikes you very clearly when you read about studies like this.
Speaker 1 (12:39):
Yeah, especially to your point, the idea that we could
get there sort of very much by accident in this case,
you know, in part perhaps trying to avoid things like cruelty to mice and other kinds of lab animals.
Speaker 3 (12:55):
Well, this is why I think it would be an
overreaction to immediately ban all of this research, because that
would be inconsistent. We need to be consistent in our
attitudes to different risks, and it's no use if we
crack down hard on the organoid research in a way
that just leads to more research being done on obviously
(13:16):
sentient animals like mice and rats and monkeys and so on.
We've got to let this research develop because it could
be replacing animal research, and we have to encourage that.
At the same time, we need proportionate steps. We need
to think about what the red lines are so that
it doesn't go too far.
Speaker 1 (13:45):
Now, another huge question from your book is how would
we know if an AI became sentient, and what would it mean for us if it did?
Speaker 3 (13:56):
I think we wouldn't, though. And this is the big
fear that we may be rapidly approaching the edge of
sentience in this case too, with the rate at which
AI is developing. The extraordinary behaviors we're seeing from AI systems,
and yet our understanding of how they work remains incredibly poor.
And it's not just that the public doesn't understand while
(14:18):
the people working at the tech companies do understand. The
people at the tech companies do not understand either. These
systems are black boxes where you know the architecture, the
overall architecture that you've programmed the system to have, but
then you've trained it on vast, vast
amounts of training data, and in the process it's acquired
(14:39):
these emergent capabilities. It's acquired algorithms that you didn't program
into it, but that it can now implement to reason
its way through problems. And we don't know what the
upper limit is here. As these systems scale up, we don't know what algorithms they might be
able to acquire. And we don't know whether there's some
(15:02):
point at which, if you recreate enough of the computations
that are happening in a human brain, maybe you also
get the sentience as well, maybe you also start to
get feeling as well. This is a view that in
philosophy is called computational functionalism. It's like a long word
for the idea that if you recreate all the computations
going on in the brain, nothing else is needed to
(15:25):
get sentience. You get the sentience as well, And that's
the possibility we have to take seriously, and it's a
real risk, and it means we could create sentient AI
long before we accept that we've done so, or before
we realize that we've done so.
Speaker 1 (15:40):
This leads me to a question that my wife asked
me to ask you when I said, hey, do you
have any questions about sentience and AI and animals and
so forth? She wanted me to ask should we be
polite when we're addressing Siri, Alexa or various you know,
Google Gemini or whatever kind of text based interface that
we're using. Because I've found myself, like,
(16:06):
going into say Google Gemini, testing it out, just kind
of like experimenting with it, seeing what's up with it,
and then after a few exchanges with it, feeling like
I need to say, oh, well, thank you, that's all
for today, and feeling like I need to be polite.
But then also I have caught children, my own child
once or twice, being a little harsh with say Siri,
(16:29):
maybe their alarm is going on too long in
the morning, that sort of thing. So what are your
thoughts about this?
Speaker 3 (16:34):
Yeah, it's a fascinating question. We have, as well as the book, a paper that we just released called Taking AI Welfare Seriously, and it is an issue we should take seriously right now, because AI systems that might realistically be sentient could be with us quicker than
(16:58):
we expect, indeed at any time. And I think it's
great to be having that discussion now about what are
we going to do about that. The questions it raises
are absolutely enormous. We don't know how to answer them,
and I think maybe it's right that a very low
cost starting point that we can do right now is
(17:19):
just start trying to cultivate an attitude of respect towards the
systems we're currently interacting with. There's every chance they're not sentient,
but there's no harm in cultivating an attitude of respect anyway.
And by cultivating that attitude of respect, we'll be more prepared,
more prepared for the future where there really might be
(17:40):
a moral requirement to avoid torturing these systems.
Speaker 1 (17:45):
Now, in terms of just identifying potential sentience, and you've already outlined the challenges, if not the impossibility, of that. Can you tell us a little bit about the gaming problem?
Speaker 3 (17:56):
One of the problems we face in this area is
that if you ask an AI whether it feels anything or not, the answers vary a great deal. Sometimes they say yes, sometimes
they say no. But those answers are not giving us
very good evidence at all. The problem is that we've trained these systems to mimic the dispositions of a
(18:20):
helpful human assistant. So in their training they've got rewarded
constantly for being as human like as possible. And so
we have this situation in which we've got reason to
think our criteria for sentience will be gamed, so to speak,
because the system can serve its objectives of being a
(18:42):
helpful human assistant by mimicking behaviors that we see as being persuasive of sentience, talking as
if it had a rich internal life, as if it
had emotions, as if it had sensations. Sometimes developers have
reacted to that problem by just programming the systems to
(19:03):
deny their sentience, so they just say, of course, as
an AI system, I don't have any feelings. That isn't
very helpful either, because that's not evidence that they don't.
So we're facing this tough situation where the surface linguistic
behavior is not really giving us any evidence either way.
To my mind, the message we have to take from
(19:24):
this is that we need to be doing everything we
can to look behind the surface linguistic behavior, to try and understand the inner workings of these systems better, to
try and get inside the black box, open it up,
find out what computations are actually being performed and how
they relate to those that are being performed in the
human brain. To identify what I call in the book
(19:46):
deep computational markers of sentience, and then look for those
rather than thinking the linguistic behavior will do the job
for us.
Speaker 1 (19:54):
Now, what do you think about our moral and/or legal responsibilities concerning sentient AI as we look forward into the future? And again, as you said, a lot of this could be happening a lot faster than many of us might think.
But you know, what does that mean when suddenly we
have at least reasonable reason to believe a particular AI
(20:16):
is sentient?
Speaker 3 (20:18):
It's a huge debate that I really think we should
be having now. It's great to be having it now.
In The Edge of Sentience, I defend this principle I call the run-ahead principle, which says that in thinking
about these issues, we really need to be asking what
would be proportionate to the risks posed by credible future technologies,
(20:38):
not just the technologies we have now, because the technology is moving too fast and regulation moves very slowly. We
don't want to be in the position where we're totally
unprepared for what happens, because we would only have been debating the current technology rather than the possible future technology.
So it's absolutely worth debating about if we get to
(21:00):
that situation where we've got some deep computational markers of sentience,
and then we find that we have systems displaying those
markers so that there is a realistic possibility that the
system is genuinely sentient, we really have to be thinking
about what does our duty to avoid causing gratuitous suffering
(21:22):
require from us in this case, and I think it
will imply ethical limits on what people can actually do
to AI systems. What those ethical limits are is very, very hard to say, because we can't even really imagine the welfare needs. It depends a lot on the precise nature
(21:45):
of these systems and the way in which they've achieved sentience, whether we can say anything about their welfare needs at all.
And to me, all of this points towards having good
reasons to desperately try not to develop the technology at
all if we can. You know, I think currently we're
just not ready. We're just not in a position to
(22:08):
use this technology ethically, and so in a way we
should be trying to avoid making it at all.
Speaker 1 (22:16):
Now, in the book there's at least one example, and I apologize, I'm blanking on the specifics here, but you
mentioned a fairly recent call for ethical guidelines concerning AI
development that was dismissed by critics as being mere science fiction.
Speaker 3 (22:34):
Thomas Metzinger.
Speaker 1 (22:35):
Yes, yes, I believe so. And that struck me as
interesting because on one hand, we have clearly, at least
through science fiction, and of course outside of science fiction
as well, we've been contemplating things like this for decades
and decades, and yet as we get closer to the reality,
the label science fiction is also sometimes used to dismiss
(22:57):
it as saying, well, that is just sci fi, that's
not actual things we should be worrying about. So I
don't know if you have any thoughts on to what
extent science fiction and science fictional thought has prepared us
for this or kind of created this barrier that prevents
us from acting as quickly.
Speaker 3 (23:14):
Yeah, I don't think it has prepared us. Yeah, I
think that's fair to say, even though we do see films like Her, for example, from about ten years ago, that now seem remarkably prescient. No one thought they were describing events ten to fifteen years in the future, and yet that is the future we now find ourselves in.
(23:35):
It's extraordinary. But yeah, that doesn't in any way mean
that we're prepared. And in my work on this, I'm
trying to develop a sort of centrist position that is
about avoiding the pitfalls of extreme views on both sides,
where at one extreme you've got people who think that these
(23:56):
systems are already sentient, that we can tell from their surface linguistic behavior: they just talk as if they have feelings, so we
should think they do. And I think that's credulous and
it needs to be avoided. On the other side, there's
this dismissal of the whole idea that AI could achieve sentience,
(24:17):
this idea that of course you need a biological brain,
of course you need to be a living animal, and
we're just not in a position to be confident or sure about that. This well-known philosophical position, computational functionalism, might be right. And if it is right, then
you might not need a biological brain at all, and
we have to take that seriously as well. So for me,
(24:38):
it's about finding that middle ground where we're taking the
issue seriously, but we're thinking that this has to be
the beginning now of a process where we really try
and look for robust, rigorous markers and have serious ethical
debates about what the right response to those markers of
sentience would be. It has to be neither
(25:00):
knee-jerk skepticism nor credulousness.
Speaker 1 (25:14):
Now, I realize this next question is largely outside the scope of this book. But what are the implications for the consideration of possible extraterrestrial sentience as we potentially encounter it in, say, a biological or technological form?
Speaker 3 (25:32):
That just makes me think of octopuses again, because of course,
you know, they're so alien from us. They look like extraterrestrials,
but they're not. They're terrestrial and they're right here on
Earth right now. So I think it's great to recognize
the possibility of forms of sentience very different from
our own, and then recognize that our actual Earth already
(25:54):
contains them, and that we can start thinking now about
those real cases and what we're going to do about
those real cases. I'm entirely open to the idea that,
you know, just as there are really alien forms of
sentience on Earth, maybe there are some out there elsewhere in
the universe as well. But we can only speculate, and
with octopuses, we don't need to speculate. We can
(26:16):
be studying the alien life forms that are
with us now on Earth and get real knowledge about them.
Speaker 1 (26:23):
Now, through much of this topic, you know,
there's this sense of expanding our compassion for non human
sentient entities, and certainly the octopus is a great example
of that. I know in my own life, like years
and years ago, when I first started reading a bit
about their intelligence and their behavior, I stopped eating octopus
before I stopped eating other meats. And so I feel
(26:46):
like this kind of response is going to, you know, happen inevitably insofar as we consider these non-human sentient forms. But what kind of impact do
you see all of this having, potentially, on the expansion of our compassion for each other? Like, with this expansion of compassion for non-human entities, do you think it
(27:08):
ultimately helps us become more compassionate to other humans?
Speaker 3 (27:12):
It may do, and I suppose I hope it does.
Speaker 1 (27:16):
Yeah.
Speaker 3 (27:16):
I certainly don't think it's some kind of zero sum
game where by being more compassionate to octopuses and insects
and crabs and lobsters and so on, we're forced to
then be less compassionate to other people. I don't think
it works like that at all. I think it's more
this general attitude. And I'm a big fan of the
(27:37):
Indian idea of ahimsa: non-violence, non-injury, abolishing
the desire to kill or harm other beings. I think
it's about trying to cultivate that virtue, trying to walk
that path, and it's a path that encompasses other humans
and non human animals as well. And through cultivating this
(28:01):
general non violence, this general loss of our desire to
dominate and crush and harm other beings, even if they're insects,
we can become a lot more peaceful, I think, in
our dealings with each other too.
Speaker 1 (28:15):
And what do you see ultimately as the prime, I guess, motivators in changing the way we see these various entities?
Is it through laws and regulations? Is
it through more like sort of ground level outreach? Is
it both? I mean, how do we really effect this
(28:37):
sort of change, or how have we effected it so
far most successfully?
Speaker 3 (28:41):
It's a huge open question for me what does actually
succeed in changing behavior? Because I've been focused a lot
on scientific evidence and on synthesizing the existing evidence for sentience in other animals, presenting it to policymakers. Sometimes it
does produce change, and in the UK, the Animal Welfare
(29:02):
Sentience Act was amended to recognize octopuses, crabs, lobsters, crayfish
as sentient beings because of the report that we produced.
So that was, in a way, a surprisingly effective example of
how marshaling scientific evidence can move policymakers. So it's great
when that happens, but of course it doesn't always happen,
(29:24):
and we do face this problem that a lot of
animals are pretty clearly sentient, think of pigs, for example,
or chickens, and yet they continue to be treated by
humans in absolutely appalling ways. So merely knowing that an
animal is sentient often does not drastically change your behavior
towards it, And I'm fascinated by the question of, well,
(29:48):
what else is needed? What other information? I think there
are empathy barriers. You could know that a chicken is sentient,
but that doesn't necessarily convert into immediately empathizing with that chicken and the animal's suffering. We've got to think about what might bridge that gap. So narrative stories, art, video documentaries
(30:13):
like My Octopus Teacher, they could all be part of it.
I think there's probably lots of ways to bridge that
empathy gap, but we have to recognize it as a
problem and to realize that simply knowing the animals are
sentient is not actually enough.
Speaker 1 (30:27):
It's interesting to think about pork and chicken. I don't
know how this pans out in the UK, but in
the States, you often will drive through a city or through a rural area, either one, and you'll find a lot
of signage and promotion for places that serve pork or chicken,
that use cute or amusing cartoon versions of those animals.
(30:51):
And it's always struck me as strange that these are acts and choices that would seem otherwise to be something that would convince us not to eat said animal, but they seem to instead give us license to. And I've always had a hard
time understanding exactly what's going on in our minds when
(31:12):
we consume or create that sort of thing.
Speaker 3 (31:15):
It goes under various names, doesn't it? Cognitive dissonance, the meat paradox: this idea that we often love animals, we find them so cute and adorable, etc., and then continue to eat them anyway. This would make perfect sense if meat was genuinely necessary for our health.
(31:37):
And I think that's the argument the meat industry would
love to be making. It would love to be able
to convince us that meat is needed for our health,
and so these sacrifices in how we treat the animals
are sadly necessary. But it's just not true. It's just
clearly not true. And then the existence of all these
manifestly healthy vegetarians and vegans makes it completely undeniable that
(31:59):
we don't actually need to be eating these animals
at all for our health, and we can, if anything,
probably be healthier without doing so. I think once you
realize this, the case really does become very clear for
not eating these animals that the harms we're doing to
them can't actually be justified because the benefit we get
(32:22):
is at most the gustatory benefit, the enjoyment of
the product. It's not necessary for our health in any way,
and that enjoyment can't justify in the balance all that
suffering caused to the animal.
Speaker 1 (32:38):
Well, Jonathan, thank you so much for taking time out
of your day to chat with me. The book is
The Edge of Sentience Risk and Precaution in Humans, Other
Animals and AI. It is out in the United States
on November fifteenth. Thanks Robert. All right, thanks again to Jonathan Birch for coming on the show and chatting with
(33:00):
me again. That book is The Edge of Sentience Risk
and Precaution in Humans, Other Animals and AI. It is
out later this week on November fifteenth, and it gets
into so much more that we didn't have time to
get into in this interview. Just a reminder that Stuff
to Blow Your Mind is primarily a science and culture
podcast with core episodes on Tuesdays and Thursdays. On Fridays,
(33:24):
we set aside most serious concerns to just talk about
a weird film on Weird House Cinema, and we have
short form episodes that air on Wednesdays. Thanks as always
to the great JJ Possway for editing and producing this podcast,
and if you would like to get in touch with us, well,
you can email us at contact at stuff to Blow
your Mind dot com.
Speaker 2 (33:50):
Stuff to Blow Your Mind is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you're listening to your favorite shows.