Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, quick announcement everyone. We have just joined TikTok, so
head over there and follow us to see videos of
Daniel asking and answering science questions. All right, enjoy the pod. Hey, Kelly,
(00:20):
is a mosquito technically a parasite?
Speaker 2 (00:22):
Well, you maybe don't realize the can of worms you've
opened up if you go to a parasitology meeting. This
is something that we actually fight about. But I'll just
cut to the chase, because I'm sure what you wanted
was a short answer. And I would say that
they are micro predators.
Speaker 1 (00:38):
Micro predators? You're saying that these bloodsuckers don't just make
me itch, they've turned me into their prey.
Speaker 2 (00:45):
They do. And you know, unlike parasites, they don't have
like durable, long lasting interactions with your body. They just
kind of take a meal and then they run off.
Speaker 1 (00:55):
Does that mean that I don't have to feel bad
when I swat one of them?
Speaker 2 (00:57):
I don't think they feel bad when they're drinking your blood.
Speaker 1 (01:02):
All right? Well, what if I was like going to
kill all of the mosquitoes, then should I feel bad?
Speaker 3 (01:07):
Oh?
Speaker 2 (01:07):
I feel like now we're getting into like philosophy. This
is like a twisted version of the trolley problem.
Speaker 1 (01:13):
Well, you know, if I could pull a lever to
have that train kill every single mosquito, I would do it,
even if it saved nobody's lives, even if it just
saved us from some itches.
Speaker 2 (01:24):
You don't need philosophy. You know the answer: you go
with your gut.
Speaker 1 (01:43):
Hi. I'm Daniel. I'm a particle physicist and a professor
at UC Irvine and a deep deep hater of mosquitoes.
Speaker 2 (01:50):
I'm Kelly Weinersmith. I'm an adjunct professor at Rice University,
and you know, I'm also a deep deep hater of mosquitoes.
Speaker 1 (01:56):
I thought that as a parasitologist, you were like the
biggest advocate for the most hated species on
the planet.
Speaker 2 (02:02):
I'm an advocate for some parasites, but your mosquitoes kill
a lot of humans, and I don't think that we
really know what would happen if you took them out
of the ecosystems. Maybe they play a role that we
don't know about. But I feel like if you eradicated
all of them and then no one got malaria anymore,
we could probably find some way to fill in the
ecological niches that they were leaving empty. I think it
(02:22):
would be worth it to kill them all.
Speaker 1 (02:24):
Oh, I think I know what purpose mosquitoes serve. They
serve to limit people's happiness.
Speaker 2 (02:28):
You feel like there's some mechanism on this earth where
that's like a thing that needs to happen. I thought
that's what Twitter was for.
Speaker 1 (02:36):
They call it X now, Kelly. They call it X.
Speaker 2 (02:39):
I'm sorry, I'm so behind.
Speaker 1 (02:41):
Let's eliminate both of them, the mosquito and the mosquito of the Internet. Well,
welcome to the podcast. Daniel and Jorge explain the universe,
in which we dive deep into the joys of philosophy
and physics and cosmology and think about everything that's out
there in the universe. We want to understand how it
all works. We want to make sense of the universe.
(03:01):
We want to boil down all of those frothing
quantum particles into a story that fits into your mind,
that clicks together and goes, ah, I understand how it works.
Speaker 2 (03:11):
If only all questions were that straightforward.
Speaker 1 (03:14):
Well, all the physics at least has a goal to
tell you a story that makes sense to you, to
incorporate into your mind a mental model of the whole universe.
We don't dare to do that with chemistry and definitely
not with biology, but sometimes we can take a tiny
little sliver of physics and download it into your brain.
My normal friend and co-host Jorge can't be here today,
but I'm very excited to be here with you, Kelly.
Speaker 2 (03:36):
I am super excited to be here with you. And
while I'm always super excited to be here with you,
I'm particularly excited that we're a little bit more in
my niche today talking about you know, ecology and species conservation,
and it's going to be a good time.
Speaker 1 (03:49):
That's right, because we usually talk about real science on
the podcast, how the universe actually works. No, oh no,
this is not a biology slam.
Speaker 2 (03:58):
You are going the way of the mosquitoes, Daniel.
Speaker 1 (04:02):
Oh no, no, you totally misunderstood. No, no, let me finish.
I was going to say that usually we talk about
real science, but today we're talking about science fiction. Not
that biology is not real science. I think you're being
a little too defensive.
Speaker 2 (04:17):
Maybe I'm a little sensitive. I feel like physicists often
look down on biology. I saw a talk by Freeman
Dyson where he was certainly doing that. Anyway, I'll stop
being so sensitive.
Speaker 1 (04:29):
No, fair point. Physicists have been guilty of that in the past. I
think it was Rutherford who said all science is either physics
or stamp collecting, and I definitely do not agree with that.
There is glory in chemistry and wondrous questions in biology.
But today we are stepping beyond the bounds of all
kinds of science into the worlds of science fiction, because
one of the roles of science fiction is to think
(04:49):
about the boundaries of science, what's beyond it, What other
universes could we live in? What are the consequences of
the technology that we develop? If science keeps barreling forward,
does it change the way we live and what it's
like to be a human being and the choices that
we have to make. And I think that there's a
very close connection between the work of scientists and the
imagination of science fiction. So we have a series of
episodes on this podcast where we read a science fiction novel,
(05:12):
talk about the science in it, and then interview the
author about how they wrote it and why they decided
to build their science fiction universe the way they did.
Speaker 2 (05:21):
And most of the episodes that you and I have
done together with sci-fi writers have been about being
moved into a world that's totally unlike our own, and
so it's sort of, they build this brand new universe
and you get to enjoy living in it for a while.
What's so exciting about today is that it's more
near term, and you're left thinking, oh gosh, is this
going to be us in a few decades. And so
it's a little bit different than what we usually do,
(05:42):
and I really enjoyed it.
Speaker 1 (05:44):
This is a wonderful book we'll be talking about today.
It's called Venomous Lumpsucker, a novel by Ned Beauman,
and it asks a really intriguing question about the nature of extinction:
what price we should have to pay to drive a
species extinct, whether we should care about species going extinct.
For example, does the mosquito deserve to live?
Speaker 2 (06:03):
I don't think that's something that he talked about in particular,
but I did appreciate that there was a whole sort
of speech about how we should be thinking about parasites
and conservation, and that was very refreshing to me.
Speaker 1 (06:14):
Yeah, it was fascinating. I thought about you as I
was reading this book. Today on the podcast, we'll be
covering the science fiction universe of the venomous lumpsucker.
Speaker 2 (06:28):
What a great name.
Speaker 1 (06:30):
Everybody I tell about this book invariably says, the what?
Are you serious? Who would name their book that?
Speaker 2 (06:36):
But I also, like, totally bought it. Like, when I
first read the title, I thought, ah, I thought I
knew about a lot of the weird fish, but venomous
lumpsucker, I could totally buy that there's a fish called the venomous lumpsucker.
But I was wrong. But anyway, he came up with
a really glorious creature.
Speaker 1 (06:51):
He really did. So let's tell people the setting of the book,
and then we'll dig into what the story is. The
book is set somewhere in the near future on Earth,
and it's very much set in our universe. This is
not the kind of book where they invent all sorts
of new physics, and the universe is very different, and
we have faster-than-light travel. This basically takes place
on Earth, in Europe, in about fifteen years, and it's
facing the question of how do we cope with this
(07:14):
massive extinction event? So many species, so many little beetles
are going extinct every day, What should we do about it?
What can we do about it? And this book paints
this specific picture about how society might handle it.
Speaker 2 (07:27):
And I thought he did a really nice job of
creating a world that you could imagine, like if we
take all the wrong turns between now and like fifteen
to twenty maybe thirty years from now, we could be there.
And so you know, right now we use carbon credits,
and companies can buy carbon credits, you know, to essentially
pay for the right to release more carbon into the atmosphere.
Here he's created extinction credits, where if you're going to
(07:49):
start some big new project that's going to result in
the extinction of some species, you can sort of pay
for that with extinction credits. And you know, there's like
a sliding scale for how many extinction credits you need
depending on some characteristics of the animal that you're going
to have go extinct. But like I bought that this
is a path we could go down in a couple decades.
What about you, Daniel.
Speaker 1 (08:09):
I thought it was both inspired and creative, and also very realistic.
We often have trouble figuring out like how do we
solve a problem, and when we can't figure out what
direction to go in, we basically just turn it over
to capitalism. We're like, can we financialize this can we
incentivize people to do the right thing by making it
expensive to do the wrong thing? And I feel like
(08:30):
that's sort of clever, like turn it over to the
free market, But it also feels like sort of an
abdication of our responsibility. But then again, we can never
really decide on how to do anything, so it's better
to do something than nothing, I suppose. But this was
really uncomfortable to read about. This like financialization of extinction,
It really reminds me of like putting a price on
(08:50):
a human life. You know, when the government has to
make decisions about like how much to spend on things,
or should a company have to install seat belts? They
do so if the price of the seatbelt is less
than the expected loss of human life, you know, you
have to calculate, like, oh, human life is ten million dollars.
Makes me wonder, like, well, if I had ten million dollars,
could I buy a murder credit to like kill somebody
(09:11):
and then give that money to the family and be
like I bought the right to kill your husband, right?
That seems terrible. This is just sort of the same
thing on a larger scale.
Speaker 2 (09:19):
Yeah, I mean, I do think humans for better or worse,
feel more comfortable doing that with non human animals, and
like we get more uncomfortable, you know, if you're talking
about, like, well, this chimpanzee really made me angry. Like, people would,
you know, maybe make you pay more to take the
chimpanzee out than, like, the stink bug that lives on
your curtain or something. But yeah, no, I agree, these
(09:40):
things are complicated and uncomfortable, And I thought he also
did a really nice job of sort of like weaving
in the way that even our best intentions can get
corrupted by things like you know, the market doesn't always
do what we think it's gonna do, even though we
like maybe should have been able to anticipate the two
thousand and eight you know, financial crisis, because what were
(10:01):
we doing with the banking system and housing back then,
But we didn't. And so, you know, things don't go
exactly the way the characters thought they were going to
go with these extinction credits and how they're going to
pay out monetarily, and you know, this stuff doesn't always
work the way we think it will, and so this
is sort of a story about things going kind of
horribly awry.
Speaker 1 (10:20):
Yeah, it is sort of a cautionary tale, and I
thought it was super thoughtful and creative. There are so
many things that happened in the book that surprised me.
And then as soon as I thought about it, I thought,
you know what, that's totally realistic. Like, that's exactly what
could happen. And to me, that's the best kind of
science fiction. Somebody who's like creatively thought about the consequences
of some new technology. And you know the way in
the book, lobbyists and special interests add like loopholes and
(10:42):
exceptions and they end up like driving down the cost
of extinction credits to make it like horribly cheap to
send some caterpillar off to its final demise. I thought
that was very realistic. Another aspect of the book which
is super fascinating is the role of technology. He thinks
about what it really means for a species to become
extinct when you can record it, when you can record
(11:04):
its genome and its behavior and get samples of it,
and if it's possible to bring a species back, is
it still extinct?
Speaker 2 (11:11):
And this is a topic that's like near and dear
to my heart. You know, the positive implications that we
think will happen when we create a technology. So you know,
right now people are working on de-extinction, like can
you bring back the mammoth and put it back in,
you know, the permafrost habitats in Russia to
try to make the habitat what it once was, which
would be better for all of the other species that
were there. But you know, a lot of these technologies,
(11:33):
even if they were envisioned with only positive implications, the
way they get rolled out can often have some pretty
negative implications. So here you see they're working on, you know,
figuring out ways to store biological information so that you
can not only bring back sort of members of a species,
but you could even bring back specific individuals. And I think,
(11:54):
you know, I don't know what the original plan was
in the book for the people who made these technologies,
but I can certainly imagine on Earth people having really
good intentions creating that technology, and then it makes extinction
blurry. Like, is, you know, the beetle that you study really
extinct if everything that you need to bring it back
(12:15):
is in a computer and you could recreate it in
a lab at some point. So you know, these technologies
that are meant to help, but like can get used
in the wrong way and really mess up incentives is
to me a fascinating topic that I felt like he
handled really well on the book.
Speaker 1 (12:27):
Yeah, and lots of really interesting questions that seem initially
like they have obvious answers, like when is a species extinct?
You might say, well, when there are no more living individuals, right?
But then he walks you through these arguments in a really thoughtful way. Like, well,
if there are a few living individuals but they can't
reproduce because it's too small a group, or they can
only live in zoos, then is that really somehow less
(12:50):
extinct than another species where there are no living individuals,
but we have the capacity to make more because of
our technology and we could bring them back. Which species
really is more extinct in that case? So it's very persuasive.
It really changed my mind a lot of these tricky questions.
Speaker 2 (13:05):
Yeah, the book tackles a lot of difficult questions.
So what is the story about, Daniel?
Speaker 1 (13:09):
Yeah, so it's not just like here's a future Earth
where everything is going wrong. It tells the story, and
it's from the point of view of a biologist who's
being asked to assess whether a particular species is intelligent because,
as you said, it costs more extinction credits in this
book to kill something if it's intelligent, which I guess
makes sense, but also it feels really icky. And her
job is to assess whether the venomous lumpsucker is an
(13:33):
intelligent species or not. And it turns out she has
her own stake in this game. She wants it to
be intelligent for her own reasons. And in the book,
some corporation comes along and accidentally kills off all of
the venomous lumpsuckers. Or do they? And these places
where people store these species, these biobanks, then get hacked,
and the whole question of whether a species is extinct becomes
(13:54):
much more fuzzy and questionable. It's a really exciting sort
of thriller that takes you through this world.
Speaker 2 (13:59):
Yeah, it's a totally fascinating world. Can you tell me
a little bit more about some of the science that
was sort of created or put forward for the story in particular?
Speaker 1 (14:07):
Yeah, I wanted to ask you about it, actually, because
a lot of this stuff is biological. I mean, the
core technological innovations that exist in this world are
the ability to preserve a species, to in principle de-extinctify it,
and they do, for example, genome sequencing. Of course you've
got to store the DNA, but they also do deep
scans of the animals and they watch their behaviors, et cetera,
et cetera. And as you said, this is something people
(14:29):
are actually working on now. So it made me wonder like,
is it possible today or in the near future to
actually do this to bring a species back from extinction
or what would you need in order to make that possible.
Speaker 2 (14:42):
So there are people who are way smarter than me
who would say that the answer is definitely yes, we
can bring species back from extinction. But to be honest,
I'm skeptical that we're going to bring back the exact
same thing, and maybe that doesn't matter. So, for example,
they're working on, as I mentioned, bringing back the woolly mammoth,
and different groups are doing this in different ways. So,
for example, one group has, I think, the
(15:04):
genome from some elephant species, and then they're taking what
they've been able to get from mammoths that have been,
like, frozen in permafrost, what they've been able to extract
out of their genome, and they're tinkering with an elephant genome
to try to make it look like a mammoth genome.
But then there's all sorts of, like, you know, maternal
effects that are missing, so that mammoth would have to
be gestated, given the science that we have now,
(15:27):
gestated in the body of an elephant. And how
does that, like, hormonal environment, you know, differ? And then
I think, you know, some elephants and maybe some mammoths
like eat feces after they're born to get the right microbiome.
And so now you're not getting like a mammoth microbiome.
You're getting like an elephant microbiome. And is that enough?
Speaker 1 (15:45):
Did you say eat feces?
Speaker 2 (15:47):
Sure did, Yeah, this is biology, man. We get to
feces in about five minutes.
Speaker 1 (15:51):
You're saying it's not a mammoth if it hasn't
eaten mammoth poop when it was a baby.
Speaker 2 (15:55):
You know, I wouldn't say that that's the line,
yes or no, but I would say that, like, you know,
to some extent, these differences build up. And so, yes,
you have a mammoth, but the mammoth is now placed
into an ecosystem that varies dramatically from what it was
in before. Its social interactions might be different, because will
they act the same in a different environment? And is
(16:15):
there something about their development that's missing that's going to
change the way they interact with each other, and so like, yes,
you have a mammoth, but you don't have the mammoth
you used to have. And the extent to which that matters,
I don't know. Maybe it's enough to just have a
mammoth back in the environment and that does some good,
But I think these things are complicated. And as far
as MRI scans and like connectomes so that you can
(16:37):
bring back a human who is exactly the same as
the person you love who just passed away, I think
we are way more than decades away from that. But
I'm sure there are a lot of people who disagree
with me. But it seems like when we first got
the human genome, we were like, there's so much we're
gonna be able to do with it. And then we
were like, oh, well, not really, because actually it's much
more complicated. And that always seems to be the answer.
Speaker 1 (16:57):
A friend of mine just completed the connectome of
the fruit fly, which has a tiny number of neurons compared
to humans, and it took forever, and it seems
like we're never going to get to the connectome of
the human brain. But you raise a lot of really
interesting questions that I think touch on the deeper issue
of like what does it really mean for a species
to be extinct. It's not really just about the individuals.
(17:19):
It's about their entire environment and can they survive and propagate?
And that requires much more than just the actual bodies, right,
It requires the parents and the poop and everything around
them and all this kind of stuff. And I think
that's one reason why all of the efforts so far
in the real world to de-extinctify have focused on
things that are near relatives to existing species, Like you
(17:42):
could have a mammoth baby maybe born in an elephant,
and that's giving you something that's close to a mammoth.
Or you can have, like, an extinct rat be born
from existing rats, these kinds of things. I don't think
it would be possible, for example, to de-extinctify a species
that was very distant from anything that is currently alive,
like a dinosaur, although maybe, I guess, you could grow
(18:02):
one in a big alligator.
Speaker 2 (18:03):
I don't know, yeah. Like, I don't know how you
de-extinctify a trilobite or something, for example. And maybe
the question is, like, you know, if you could de-extinct
it, but it could only live in a zoo
because you've, like, destroyed all of its habitat and it
just can't, the things that it needed don't exist anymore.
What kind of a life is that? And I'm sure
people would dramatically differ in their answer to that question,
(18:24):
And so there you go.
Speaker 1 (18:25):
Well, one of the fascinating things about the book Venomous
Lumpsucker is he talks about the influence of this technology
on decision making, and if it's possible to bring species
back from the dead, then doesn't that make it less bad
to make them extinct? It sort of makes the
question, like, fuzzier now, because what does extinct really mean?
You know, it's sort of like saying, oh, I can
upload you to the cloud, so what does it matter
(18:46):
if I murder you? Like, well, I still don't really
want to get murdered, even if I'm backed up.
Speaker 2 (18:50):
Well, I mean, I guess there's also like nobody wants
the physical pain of being murdered and many layers of
complication in all of these questions.
Speaker 1 (18:58):
Many reasons to not be murdered by Kelly Weinersmith.
Speaker 2 (19:03):
Speaking of commercialization, let's take a quick break for a word
from our sponsors. And we're back.
Speaker 3 (19:21):
All right.
Speaker 1 (19:21):
Well, we thoroughly enjoyed this book. We thought it was
very thoughtful, very interesting, very creative, but also very very funny.
I laughed out loud many times while reading this book.
Speaker 2 (19:29):
Okay, so, without further ado, let's bring Ned Beauman onto
the show.
Speaker 1 (19:34):
Well, then it's my pleasure to welcome to the program today,
Ned Beauman. Ned, thank you very much for joining us today.
Speaker 3 (19:39):
Thanks for having me.
Speaker 1 (19:40):
So tell us a little bit about yourself. How did
you get into science fiction writing.
Speaker 3 (19:44):
Well, this is my fifth novel, but it's my first
real science fiction novel. I think it was inevitable that
I would write one eventually because I read pretty much
nothing but science fiction when I was growing up, and
then kind of moved over into more mainstream literary fiction,
but continued to read science fiction, and to be honest,
(20:05):
I always felt, like, you know, it was a genre
that I appreciated but didn't necessarily feel up to myself,
because I think it requires quite a specific set of skills.
But eventually, you know, I tried my hand at various
other things, and I had done it, I'd published a
couple of science fiction short stories, and then with this one,
I thought, okay, I'll give it a shot, put everything
(20:27):
into it. Yeah, and that was how I ended up
with Venomous Lumpsucker.
Speaker 2 (20:31):
So you noted that you need to have a certain
set of skills to write science fiction. How did you
get those skills yourself? And, you know, the biology in
this book is fantastic, speaking as a biologist. Did you,
like, pull out biology textbooks? What was the process of
trying to blend science fiction with all of the appropriate
science fact?
Speaker 3 (20:50):
Well, all of my books have been quite research heavy,
you know. For instance, my second book had a lot
about the Avon God Theater in the Weimar era. You know,
I don't think researching science is inherently any harder than
researching that kind of thing, at least until you get
into the really confusing stuff. When I say a specific
(21:14):
set of skills, I mean more trying to kind of
paint a plausible and internally consistent future world without leaving
huge gaps and blind spots. And I have always so
admired the science fiction novelists who were good at that,
(21:36):
And with this book, I was very conscious it's set
fifteen to twenty years in the future, and A, I
don't specify the exact date, which makes it easier, and
also B I think fifteen to twenty years is kind
of the easiest place to put it because it's not
so soon that you can get refuted in all your
(21:58):
predictions really quickly. But it's also not so far away
that you really have to make some big calls about
like what's going to change and where things are going
to go. So I think I was doing it sort
of on easy modes in that respect, but that was
the challenge. Whereas the science stuff, Yeah, you know, at
(22:20):
this point, I'm just kind of used to being at
dilettante and with each book, I stroll through some new
arab of research and I didn't really find it any
harder than any of the stuff I've researched in the past.
Speaker 1 (22:34):
Well, before we dive into the details of your book,
we like to ask the same questions to every author
to sort of put them on the spectrum of science fiction.
So here's some generic science fiction questions, not specifically about
your book. So the first one is do you think
that Star Trek style transporters kill you and clone you
or do you think they actually transport your atoms somewhere else?
Speaker 3 (22:56):
Well, I studied philosophy as an undergraduate and then later read this
book called Reasons and Persons by Derek Parfit, which one
reviewer actually noted as an influence on this book, by coincidence,
which I hadn't really consciously thought about, but in hindsight, yeah,
I think a lot of those ideas had implanted themselves
(23:20):
in my head. And I think Parfit's answer would
be that you need to start thinking about personhood in
a way which doesn't have such strict boundaries. You have
to think about a person as being a kind of
soft entity which doesn't begin or end in a specific place.
(23:46):
And if you look at things that way, then it's
legitimate to say, is the person who beams down to
the planet me? Sort of. That person is semi-continuous
with me, not continuous to the extent that we normally
think of ourselves as being continuous. And yeah, I think
(24:07):
Parfit would say that's okay. There doesn't have to be
a strict binary answer to that. So I think that's
probably what I would go with, because I really respect
Parfit's conception of the world.
Speaker 1 (24:16):
So it is you, as long as you redefine you
to be whatever ends up on the other side of the transporter.
of the transporter.
Speaker 3 (24:21):
Yeah, I think it's fair enough to say it's sort
of you in many ways, pretty much you.
Speaker 2 (24:25):
That's a great answer. So another tech question, what tech
in science fiction would you most like to see become
a reality?
Speaker 3 (24:34):
I think the science fiction story that has had the
most influence on me in terms of my sort of
personality and outlook is this story called Reasons to Be
Cheerful by Greg Egan, and that is a story about
a guy who gets a brain tumor which affects his
(24:57):
ability to take pleasure in things, which kind of flattens
his ability to take pleasure in things. And, well, I
have no choice but to spoil the ending, but I don't
think it really spoils it. They eventually develop a device
which allows him to adjust how much pleasure he takes
in different things, so he's able to say, not, do
(25:18):
I like this or do I find that beautiful, but, do
I want to like this? Do I want to
find this beautiful? What is it that it would be
most convenient or positive for me to take pleasure in?
And he's able to adjust it on that basis. I've
always felt that would be so good, that would make
(25:42):
it so much easier for us to adjust to the world.
Speaker 2 (25:44):
What would you change about your response to the world
first if you had this device.
Speaker 3 (25:49):
Well, again, this kind of comes up in the novel,
and I'm sure again the novel was kind of influenced
in an unconscious way by this story. But basically, one of
the two main characters of Venomous Lumpsucker is this guy
who's a real foodie, but because of the effects of
climate change in fifteen or twenty years, most foods don't
(26:12):
really taste of anything anymore. So he has to take
this pill, which means he doesn't care whether a meal
is good or not, which is sort of the more
destructive version of what I'm talking about. The better alternative
would be, you know, for him to go, well, what
(26:34):
is still available to me, I'm going to decide that
I will love that, and then I'll be perfectly adjusted
to the world that I actually have, as opposed to
the world that I would like to be in. I
mean, there are probably a lot of more profound ways
you could use something like that. But that's probably what
I would do, at least to start off with, because,
you know, I am quite a snob about, like, food
(26:55):
and fabrics and all that kind of thing. Like, no,
imagine if you could just take just as much pleasure
in cheap polyester as in cashmere, or you could take
just as much pleasure in a protein bar as in,
you know, a delicious meal. Oh, I mean, that's another one.
(27:16):
Like, I'm trying to be vegan, not very successfully. I
would love, you know, to just adjust it so that
I didn't even want meat anymore and enjoyed chickpeas way
more than I ever used to enjoy prosciutto, if I can.
Speaker 2 (27:27):
That's going to be tough.
Speaker 1 (27:28):
Chickpeas are delicious. I'm definitely pro chickpea on this question.
Speaker 3 (27:32):
I'm pro chickpea. But as soon as you start
eating vegan, you find yourself eating chickpeas like seven times
a day, and it's too much.
Speaker 1 (27:40):
No, there's so many kinds of beans out there. You
should get into indigenous kinds of beans. We're members of
the Rancho Gordo Bean Club, and we get this shipment
of heirloom beans every month. It's wonderful. Anyway, big
fan of Greg Egan over here. Love his stories, so
thoughtful and creative. Last generic question before we dive into
the book is what's your personal answer to the Fermi paradox?
Why haven't aliens visited us? Or have they?
Speaker 3 (28:03):
Oh, yeah, I don't have a great answer
to that. I mean, I don't see a strong reason to
believe that they have. I'm not particularly convinced by any
of those hypotheses about how they know we're here and
(28:26):
they're watching and they've chosen not to visit us or interfere.
The answer to it that kind of grips me with
the most like cold, implacable grip as soon as I
heard it. Is just the idea that all advanced civilizations
(28:48):
have actually destroyed themselves one way or another, you know,
before they leave their solar systems, if not before they
leave their planets. So it could be that, But then
you'd think somebody would have got as far as, you know,
self-replicating probes, von Neumann machines or whatever. So I really
(29:10):
don't know why we haven't had any of those. I
can't explain that at all.
Speaker 2 (29:13):
Right, so let's start jumping into Venomous Lumpsucker. I loved
this book. So I'm an ecologist, and topics
like climate change and the mass extinction
event that we're living through right now are topics that
are near and dear to my heart. What fascinates you
about these themes? Why did you decide that you wanted
to write a book around the topic of extinction?
Speaker 3 (29:33):
Well, it's a combination of, you know, on the one hand,
I am very concerned about the climate, and I love animals,
and a lot of the sentiments in the book about
how thinking about animals being driven extinct is so painful
you can't even bear it, like some of that is
(29:54):
an exaggeration of how I feel. But then on the
other hand, like I said, I studied philosophy, and I'm
often frustrated by the way we have so many surface-level
debates about things which go round and round in
circles and never get anywhere. I always just
think this needs some real philosophy applied to it, and
(30:17):
the question of extinction is really one of those, you know,
because most people basically seem to agree that it's bad
if a species goes extinct, but obviously there's no consensus
on what we are willing to pay or sacrifice to
(30:38):
prevent that happening. That's not one of those questions you
can answer just by people sort of vaguely, you know,
talking past each other about how they feel about it.
I really think if we're going to talk about how
much do we really care about preventing extinction, you have
to look at it rigorously and ask, well, why is
(31:03):
it bad if a species goes extinct? How much do
we or should we care? Why is the species valuable?
Why should we prevent it? And you have to look
at those philosophically instead of just relying on intuition and
assumptions and so on. So I thought that would be
an interesting basis for a novel to start, not offering answers,
(31:28):
but at least asking some questions that I felt like
needed to be asked that weren't being asked in a
more serious philosophical way about this issue.
Speaker 1 (31:38):
I totally agree, and I know that in your answer
you gave sort of two questions. One was what price
are we willing to pay, and the other was how
much should we care. For me, one of the most
interesting things about the book was that it seemed to
sort of sound a warning about attempts to legislate and
financialize decision making. I've often heard economists say things like
it's good to put a price on things, even if
(32:00):
it's the wrong price. Do you think that there's a
danger to try to assign a monetary value to moral
choices like a human life or the existence of a species.
Is that the right way for us as a society
to balance these things?
Speaker 3 (32:14):
I mean, I don't think it's intrinsically immoral to do that.
You know, if you work in the government, you have
to operate, at least in this country, on the basis
of what are called QALYs, quality-adjusted life years,
and you have to decide is it worth buying this
treatment for a rare cancer, And then you have to think, well,
(32:37):
how many people will live how many extra years?
And you have to put a number on that stuff. So,
you know, I find it very frustrating when people are like,
we can't have bureaucrats putting a price on human life
or whatever, when I think you have to. That's the
only way you can make trade-offs under, you know,
(33:00):
relative scarcity. But on the other hand, when the reason
you're trying to put a price on something is because
you're saying, well, a price signal is the only signal
that the free market really understands. So the reason we're
putting in a price on it is so that we
can plug it into the free market and then pull
(33:23):
a few levers and then allow the free market to
work its magic and solve this problem for us. Again,
I don't think that's inherently immoral. It's just, one
of the things I'm saying in the book is it's
not going to work, because the thing that the free
market is good at is routing around any impediments to profit,
(33:46):
and the free market the reason it works is it's,
you know, a collaboration of millions of very intelligent people
all working together to solve this problem. Where the problem
is someone is stopping us from making enough money. And
if opposed to them, you only have a handful of
(34:10):
kind of well meaning people in government, then the free
market is always going to outsmart the people in government.
So that's why it's a danger. So that's why I
think it's dangerous to put a price on it, because
that price is meant to be a kind of,
(34:31):
you know, essentially, translation of it into free-market language. You
don't necessarily want it in that language, because once you
give it to them, you never get it back.
Speaker 2 (34:40):
And what is the role of the individual in how
these things all play out? You know, like I recently
purchased a new purse the other day made
out of billboards, and I felt so great because I'm
reusing something. But maybe I didn't need that new
purse. So to what extent do these, you know, credits,
and these claims that your company is greener than another,
(35:01):
to what extent is it still
the individual's responsibility, when we have all these ways of
making ourselves feel better that may not actually be doing anything?
Speaker 3 (35:08):
Yeah, I don't know. I mean, I really see both
sides of this, because on the one hand, you
often hear people saying the emphasis on individual responsibility for
climate change is just a way of distracting from the
fact that we need enormous structural changes at the level
of governments and mega corporations to make any real difference.
(35:32):
And you know, I think it is literally the case
that you know, polluters, via their think tanks and lobbyists
and AstroTurf operations have tried to move the climate change
conversation towards people recycling their bottles or whatever, because it
kind of changes the terms of it, which makes it
(35:52):
easier for them to avoid these demands. But on the
other hand, I'm always very conscious that my carbon footprint,
as like an affluent northern European, is many times that
of the you know, median global person, and that also
does put me in a difficult moral position. But then
(36:17):
also I feel relatively smug about the whole thing,
because, like, I don't drive, I don't have children, I've
basically given up flying, and like I said, I'm trying
to be vegan, and I live in five hundred square feet,
so like it's pretty easy for me to look down
on other people. I also think looking down at other
(36:37):
people for climate reasons is bad and not helpful, But
it does make it easy for me to say that
individual responsibility is important, because if you look at my
individual responsibilities, I come out looking pretty good, I think,
although I do buy quite a lot of clothes. Of course,
the answer is we have to do both, like we
(36:57):
have to have governments making huge changes. Then also realistically,
in the future, all of us individually are going to
have to make changes in our lives as well, because
if all six or seven billion people on Earth live
like affluent Northern Europeans, that won't work. But we also
can't ask the majority of the global population to maintain
(37:21):
a lower standard of living than we have, because there's no
reason for that. So we are going to have to
smooth things out in some way. So I don't know,
but yeah, I think, you know, we have to do both.
Speaker 1 (37:32):
Of course, I think it's really fascinating the moral implications
of turning things into costs. Though. If I'm willing to
pay more for a banana that's very environmentally expensive, does
that like make it okay that I'm eating this banana
because I've paid for it, Or like in the world
you've constructed, If I want a specific view from my condo,
and I know that building a condo there meant some
caterpillar had to go extinct, but hey, I'm willing to
(37:54):
pay another ten k for that condo, does that, like,
absolve me of responsibility? Or am I just, like, ceding
the responsibility for this choice to the algorithm of free
market capitalism?
Speaker 3 (38:05):
So there is this attitude that offsets are dangerous because
they simply, you know, shunt the damage to someone else,
and they relieve the pressure to actually make real changes,
and we need that pressure. I don't really agree with that.
(38:28):
You know. Obviously, the premise of offsets is that the
free market is good at finding the most efficient method
and time and place to accomplish something. And if the
thing we want to accomplish is, you know, not emitting
a hundred tons of carbon, then we might as
(38:50):
well do that in the most efficient time and place
and by the most efficient method. You know, I don't
think there's any reason why we can't smooth that out.
But you know, as I write about in the book,
the whole offset idea, since its inception and in every
implementation of it, has been extremely bedeviled by loopholes
(39:16):
and corruption and fraud and lies and so on. So
in practice it hasn't really worked. But in principle I
don't see anything wrong with it. You know, the fact
that, is it Coldplay who were like, our tours
are going to be carbon neutral, and some of the
(39:36):
ways we're going to do that is with offsets? I
think that's good, if the offsets are real.
But the problem is again, because the free market is
so nimble and devious, a fake offset is always going
to be more profitable than a real one. So most
of the offsets will turn out to be fake. But
(39:58):
if we could make them more real, great, But the
free market is cleverer than us, so I don't think
that will ever happen.
Speaker 2 (40:06):
Yeah, these things are complicated and it all depends on
their implementation, which sort of leads to the next question.
So technology is an important feature of the book, and
in the book they're working through the technology to maybe
be able to bring individual people back after they've died,
and then a whole species back after they've gone extinct.
(40:27):
And so, you know, this sort of ties in with
the extinction credits. You don't have to feel quite as
bad if you think you can bring an animal back eventually.
So to you, what is the thing that makes extinction
so terrible? Like, if we still have it as a
backup on one of our computers and we can maybe
bring it back one day, does that make it less
bad, because maybe it's not completely gone? So what do
(40:48):
you think about the role of technology and extinction, and
when is a species really extinct?
Speaker 3 (40:53):
Yeah, as I write about in the book, in principle
we could get to a point where we have all
of these threatened species in biobanks, and then
in the future we could bring them back. But will
we ever bring them back? I just don't think we will.
I can see us bringing back woolly mammoths and stuff,
but the vast majority of the species going extinct every
(41:16):
year are kind of very obscure rainforest beetles or whatever,
And I just don't think we ever will bring those
back because who is going to pay for that, and
who is going to keep them alive once they're brought back,
and you know where is that going to happen and
so on. So I think the fact that we could
(41:37):
doesn't mean that we will. We probably won't, which means
we shouldn't put ourselves in that position of being like, well,
we've still got them, so we could still bring them back,
so they're not really extinct. But then when you start
asking whether this kind of potential resurrected beetle is a
(41:58):
kind of ersatz version of the real thing, that's when
you do start to, like, wander into this fuzzier territory.
You know, is there something inherently valuable about a beetle
that has continuously lived in the habitat in which it evolved, and,
(42:22):
as it were, the kind of community and ecosystem role
of that species within the you know, broader web of
species has continually existed from the first moment it evolved.
Is that more valuable than hypothetically the species being brought
(42:50):
back in a zoo in the future? Well, it seems
to me that it is. But it is harder, then,
to say why. It doesn't really seem to affect anyone.
It doesn't make anyone's life better, even if we're very
invested in this beetle existing somewhere in the world, whose
life is better because this you know beetle has continuously existed.
(43:15):
It is like caring deeply about your table being a
real antique instead of a fake antique. If you're very
into antiques, then of course you care about that. But
why should anyone else care about that? In particular, why
should anyone else pay costs or give things up because
you care about that. That's a niche interest. It does
(43:38):
seem to me that it would be nice not to
eradicate this beetle and simply have it in a biobank
and clone it later. But you know that's not how
politics works. You can't say to people, well, we all
have to agree to do this because I think that
would be nice. So I think that's where philosophy comes in.
(43:58):
That's where you have to start thinking, well, I
have reasons for thinking it would be nice, and once
we dig into the reasons, maybe you would start to
agree with me too. But then, of course the danger
is once you start digging into the reasons, the reverse
could happen. It could be that I start thinking, well,
actually I don't even care anymore now that I've looked
at it, you know, really harshly, I don't care. I
(44:18):
think there actually are more important things. The other thing
I talked about in the book is that knowing that
this technology is there sort of takes the pressure off.
It's going to make us more lackadaisical because we have
a plan B. I think there is something to that,
but, you know, I don't think that's the
reason not to build biobanks or whatever. Better to have
(44:42):
them in case we need them than not to have
them out of a fear that they would make us
lazy or whatever.
Speaker 1 (44:47):
I think it's fascinating the way having biobanks, or the
ability to resurrect a species, makes extinction itself less terrible,
because it's the irreversibility of extinction that really gives it
its moral drama. Sort of reminds me of your answer to
the question about teleporters. Like, if I murdered somebody, it's
actually less terrible to murder them if I knew I
could just recreate them somewhere else. And then I'd say, like, look,
(45:08):
according to, you know, novelist Ned Beauman, you still exist,
and you as you, even if I murdered you and
recreated you.
Speaker 3 (45:14):
Yeah, I think that's a great analogy actually, because again
I talk about this in the book. Yeah, the question
of whether something is extinct or not extinct, it's simplistic
to make that a binary. You know, extinction is arguably
not a clear cut enough concept that you can use
it in that way. It might be more helpful to
(45:35):
start talking about species being sort of extinct-ish, although,
again, in the book I ask, is that gonna
sort of expand our sense of how worried we might
need to be about a species, or on the contrary,
is it gonna let us relax when we shouldn't be
(46:00):
relaxing about it.
Speaker 1 (46:02):
Well, then let me play the philosophical game of making
it more personal. Say we could scan you and resuscitate
you or recreate you later on. Would you want that
to happen? And would that make it less bad for
somebody to murder you?
Speaker 3 (46:14):
Well, again, reading loads of Greg Egan when I was
younger has been a huge influence on my thinking about this,
because he writes more interestingly than anyone else I've ever
read about what it would be like to be an
uploaded consciousness. And you know, of course, if you end
up living on a computer, then you might live for,
(46:35):
well, another million or billion
years, and at that point you have complete freedom
to alter yourself. So is the person at the end
of the billion years, who's been radically kind of expanded and
altered and perhaps merged or split into two or whatever,
(47:01):
is that the same person as the person who was uploaded?
Once again, I think it's preposterous to give a straight
yes or no answer. You have to say, well, there's
some degree of continuity in it being the sameish person,
But I don't know. So, you know, that's why I
always think it's a bit kind of vapid to say,
(47:24):
well, would you want to be immortal or not, because
clearly the person who's there at the end of eternity
is only in certain ways continuous with the person who
was there at the beginning of it, Like is that
person any more similar to you than your father is
similar to you or whatever. So when I think, would
(47:45):
I want to live forever, would that be terrifying, I
always think, well, I don't think living forever is possible
because the person at the end of forever is only
partly you. All of that said, my answer basically is no,
I think seventy to ninety years is ample. I really
don't feel any need or desire to live several hundred
or several thousand more. And also, you know, one of
(48:07):
the things Greg Egan writes about, and again this is
sort of referenced in a distant way in the book,
is like, that's a lot of time to go nuts. Basically,
that's a lot of time to become obsessed with the
wrong thing or to start valuing the wrong things. And obviously,
(48:29):
if you're in this position where you can sort of
edit yourself, then that can really turn into a spiral.
Like if you spend a week thinking there's nothing more
important than this thing that I've just got into, then
maybe you think, well, I'm going to edit myself so
I'm more committed to this thing that I've just got into.
(48:50):
And then the person that you've become who's more committed
to it thinks, well, I've got to become even more
committed to it. So you start editing your own consciousness
so that you become more and more into this specific thing,
and then you can never get out of it, and
then you're just there for eternity, kind of shriveling up
into this monomaniacal computer consciousness. And you know, I'm already
(49:16):
way too into Monster Hunter World for my Xbox, Like
I dread to think how much I could get into
it if I had complete control over my own consciousness
and was going to live a billion years. So no, basically,
I think safer to die of old age. But I
wish the best to anyone who's getting uploaded, and I
completely think that's possible and they will be the same
(49:37):
person, at least in the short term. So I encourage
people to try it out, but not for me.
Speaker 2 (49:42):
So speaking of long-term planning, what are your thoughts
on whether we're going to eventually avert this extinction disaster
at some point? Like, what do you think our prospects
are for humanity in the next one hundred or one
thousand years?
Speaker 3 (49:56):
Well, again, this is why I didn't set the book
any further in the future. I know people become furious
when this is said. I do think there is at
least a possibility that when we build an AI that's
like a million times more intelligent than any human being,
the AI will come up with something that we didn't
come up with. Like, I do think that could happen.
(50:17):
I don't think we should rely on that happening. And
if that doesn't happen, I don't think it is looking
very good. I actually listen to a different podcast recently
with Peter Watts, the Canadian science fiction novelist who's really
brilliant and also famous for his pessimism, and his take
on it is that even with a lot of geo engineering,
(50:42):
so much climate change is already locked into the oceans
and so forth that maybe we can avert the very worst,
but it's already too late to avert the
almost-as-bad, and the almost-as-bad definitely involves
a lot of ecosystems being absolutely devastated and a huge
(51:06):
chunk of the biodiversity of the Earth just going away,
probably before we have the opportunity to scan and preserve
it all. But then, you know, you've got to have
a certain amount of intellectual humility about this stuff. Like
every ten years you look at the graphs, and it's
like, the graph is not where it's supposed to be.
Like, sometimes it's worse or sometimes it's better. Like the
(51:28):
whole thing about renewable energy having gone down in price
ninety-seven percent or whatever it is over the
past decade. So I really can't say. It would be nice
if an AI saved us, but I do want to emphasize
I don't think we should, like, sit back and wait
for that to happen. It would be better if that was
only the emergency plan and we came up with something
better in the meantime.
Speaker 1 (51:47):
All right, we have lots more hard philosophical questions for
Ned, but first we have to take a quick break. Okay,
we're back, and we are chatting with Ned Beauman, the
(52:08):
author of Venomous Lumpsucker. Well, I'd love to hear a
little bit more about your writing process. You said you
did a lot of research. Why did you decide to
invent a fictional species for your book whereas the rest
of it seems to follow the rules of our universe?
Speaker 3 (52:22):
Well, the book had to be premised on a highly
intelligent species, and most of the highly intelligent species that
we know about are fairly well publicized, so the fact
of whether they are endangered or extinct is a fact
(52:45):
in the world that people know, which would have made
it very hard to fictionalize it. So I had to
come up with a fictional intelligent species that could plausibly
have remained obscure. And I didn't really feel like it
could be a mammal because if you look at the
(53:07):
class of mammals, there aren't actually that many. Like there
really aren't that many mammals, especially in Europe, and if
there was an intelligent mammal, we would have heard about it.
I mean, apart from the ones that we obviously already
know about. I really don't think there are any very
intelligent mammals that just nobody has noticed yet. That didn't
(53:27):
feel realistic to me. So I made it a fish
because there are so many fish, and fish intelligence is
still pretty understudied, so it was just about credible to me,
and hopefully to the reader, that there could be this
fish that was really special, but just we hadn't really
(53:49):
been paying any attention and it had maybe gone extinct
without anyone really noticing. And the other advantage of a
fish is that fish are hard to find. Like, if
it's a bird, you can just set up cameras or whatever.
I mean, if you care enough about it, you can
just set up loads of cameras. But if something is
obviously in the ocean, then you know, it's very dark
(54:10):
down there, so it's easier to believe that you could
have a quest for this species that didn't just entail,
well, we, you know, send up a hundred drones with
cameras to look for it.
Speaker 2 (54:23):
So when I was reading the book, it sort of
reminded me of some George Saunders short stories that I've read,
like it's sort of like wild and out there and
oh my gosh, what are these people thinking? But at
the end you're left pondering all these big questions about
society and humanity. And clearly I'm no literary critic, so
I've done a horrible job of describing all of this.
But outside of, like, your science fiction influences, who
(54:44):
are your, like, straight fiction influences, other than Egan?
Speaker 3 (54:48):
I mean, I love George Saunders, but I wouldn't really
say he's an influence on me, partly because, like,
Saunders, I think, talks about this: ultimately he's
like very concerned with human feeling and human kindness and
stuff like that, and like, I'm not interested in that
kind of thing at all. Like, that's not what
I write novels about. So there's
(55:10):
a limit to how much I can take from him.
So influences from outside science fiction, well, it's funny, any
of the names I would mention, I don't know how
much you would see of them in this book. Well, actually,
Graham Greene is one. You know, Greene's novels are all
(55:31):
about putting kind of tortured people into terrible moral situations,
and I think that was definitely an influence on what
happens to Resaint in this book. And actually, now that
I think about it, when she talks about Catholics and,
(55:52):
you know, how thorny their theology is, I think I
almost put kind of Catholics in a Graham Greene novel,
or Catholic Graham Greene readers or whatever, so that's definitely
in there. I don't know. Other than that, you know,
I'm not going to say I have transcended my influences
or anything, but I would say that my earlier novels
(56:13):
were very much a patchwork of influences and pastiches and
even direct quotes, and I would happily go, well, this
bit is from this person, and this bit is from
this person. But I don't know. By the point of
this novel, I'm still like totally in the shadow of
(56:34):
all my influences, but I think I've at least found
my own style and preoccupations, to the point
that I wouldn't say about this novel, well, this novel
is simply this writer and this writer and this writer
mashed together in the way that I would have with
the early ones.
Speaker 1 (56:52):
So the book is really thoughtful. But I also want
our listeners to appreciate, like how funny it is on
the page, and part of that just comes from, you know,
your particular turns of phrase. And as I was reading it,
I was struck by this one word, which I
had to look up, and I'm going to ask you
to give us, like, a useful definition of it, because
I need to know it in context. What exactly is
an argy-bargy?
Speaker 3 (57:15):
Well, the thing is, as with any word like that,
if there was an easier way of saying it that
meant the same thing, then I would have used that. Like,
I think, I'm pretty sure I remember having to think,
what is a one-word or one-phrase expression for
(57:35):
what I am trying to talk about here? And I
think it took me a while to get to argy-bargy, because
argy-bargy is not a word that I would normally use
in conversation. It's probably a word that I had never
written out in my life before. It's not a word
that you hear come up that much, but it is
one of those English phrases with a specific meaning that
(57:57):
it's some combination of sort of fuss, commotion, disputation, hassle, argument,
you know, all those kinds of things, but none of
them quite capture it. And then if I remember rightly,
(58:20):
it comes up when, you know, most of the book
is about, like, Australians and Europeans in Europe, but that
bit is an English character talking about something that happened
in England, and it's set in an England which has kind
of gone backwards, so it felt appropriate there to use
a quite old-fashioned, quaint,
(58:41):
very English word.
Speaker 1 (58:45):
But for example, is this something a married couple might
do when they're, you know, disagreeing about whose turn it
is to have to do the dishes? Or is this
something kids do? Is this a description of kids' arguments
on the playground? I'm just lacking a concrete, like,
understanding of what it means.
Speaker 3 (59:02):
If you said, like, oh, I had a bit of
argy-bargy with the wife or whatever, that would sound
condescending, or at least kind of inappropriately jovial, because it
slightly implies a sort of annoying, somewhat inconsequential obstacle or
(59:30):
friction that you just have to get past. You would
have to say, like, during last night's argy-bargy, my
wife expressed some very real concerns, which I listened to
and took on board. Like, that simply wouldn't be compatible.
Speaker 2 (59:48):
What about when your kid, for the one thousandth time, didn't
put their underwear in the hamper, and you had a
bit of an argy-bargy with them about it? Would
that be appropriate? Like, it is sort of inconsequential.
Speaker 3 (01:00:00):
Oh, again, it's so hard to articulate why, but
it doesn't have that sort of kind of intimate, interpersonal context.
I think it applies more to something that happens at work,
or, I'm kind of imagining, I don't know, this is
a random example, like if a policeman tells someone to
(01:00:28):
move their bike or something. You know, our policemen don't
carry guns, so I'm imagining, like, a slightly more benign
version of that than might happen elsewhere in the world.
I mean, I do think it implies two people who
don't really know each other kind of snapping at each other,
not really succeeding in communicating. But ultimately it doesn't matter
(01:00:51):
and it may as well never have happened.
Speaker 2 (01:00:56):
Oh so like everything on the internet.
Speaker 3 (01:00:58):
Yeah, but no, not really. I know it makes it
sound like argy-bargy is as hard a word to define
as personhood or extinction. And when I'm thinking about personhood
or extinction, I am thinking about, like, you know, Wittgenstein
famously said no one can define a game; a game is just a kind
(01:01:21):
of tangle of associated things. So it's slightly
misguided whenever we try and define any word, because any word
basically is a tangle of associated, semi-continuous things. And
I think personhood is definitely like that, and I think, unfortunately,
argy-bargy is like that. That's why I'm so
struggling to define it. It's so English, so contextual,
(01:01:42):
and so hard to pin down exactly. It does have
some implication of, like, bureaucracy, misunderstanding, someone trying to exert
authority, maybe a vague sense of impending physical scuffle,
but the scuffle doesn't quite happen.
Speaker 1 (01:02:03):
This sounds like a faculty meeting.
Speaker 3 (01:02:04):
Yeah, but a faculty meeting would be unlikely to give
rise to argy-bargy in that way. I don't know. But
this is also maybe why I never use this word,
because it's so hard to grasp.
Speaker 1 (01:02:14):
Well, I think it's delicious how difficult it is to
pin down the meanings of words. Maybe we'll find
somewhere a philosophy thesis on the topic of the argy-bargy.
But thanks very much for joining us today and digging
into these tricky questions. We really enjoyed the book and
we really enjoyed our conversation with you. Thank you.
Speaker 3 (01:02:31):
Yeah, thanks a lot for having me.
Speaker 1 (01:02:32):
And before we let you go, can you tell us
anything about your upcoming projects or your next book.
Speaker 3 (01:02:36):
I have another novel coming, which is about how the
most evil institution in world history, which has existed for
hundreds of years, is still in operation and thriving just
west of London. But it is too early to reveal
(01:03:00):
what that institution is. But people are welcome to guess.
Speaker 1 (01:03:05):
Wonderful, sounds delicious. We look forward to seeing it. All right,
thanks very much for coming on the program. All right,
so that was a super fun conversation with Ned. I'm
glad that he wouldn't consider that conversation an argy-bargy.
Speaker 2 (01:03:18):
I cannot wait to use that word on Zach,
because he, like, loves old English stuff, and I can't
wait to see if he knows what that word means.
And I am going to use that word like five
or six times a day until I personally feel like
I know where it belongs in my life.
Speaker 1 (01:03:34):
Well, I hope it doesn't cause any argy-bargies. I'm
going to use it on my brother who moved to
the UK and might have heard it and actually have
like a native understanding of it while remembering his American roots.
So perhaps he can translate it for me.
Speaker 2 (01:03:47):
Oh, yeah, here's hoping. Keep me posted.
Speaker 1 (01:03:50):
Here's hoping. All right. Well, we had a lot of
fun reading this book and talking to the author and
talking to you about it, so I highly recommend the
book Venomous Lumpsucker by Ned Beauman. Go ahead and
get it, read it, enjoy it. Thanks very much Kelly
for reading this with me and talking about it.
Speaker 2 (01:04:04):
Thanks for having me. You were right when you said
you read that passage and it made you think of me.
This was the perfect book for me. I enjoyed it
so much. Thanks for the invite.
Speaker 1 (01:04:11):
All right, Thanks everybody for listening, and tune in next time.
For more science and curiosity, come find us on social
media, where we answer questions and post videos. We're on Twitter, Discord, Instagram,
and now TikTok. And remember that Daniel and Jorge Explain
the Universe is a production of iHeartRadio. For more podcasts
(01:04:35):
from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever
you listen to your favorite shows.