
August 25, 2016 • 37 mins

If you could make the world a better place by removing the ability for people to make selfish choices, would you do it?


Episode Transcript

Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking. Hey there, and welcome to Forward Thinking, a podcast that looks at the future and says, right and wrong, do you know the difference? I'm Jonathan Strickland, and I'm

(00:21):
Joe McCormick. Quick warning to the listeners that this is one of those conversations we had that went way longer than intended, so we decided to split this up for you, divided right down the middle, where we're gonna have half of our conversation in this episode and then continue with the second half in the next episode. Yeah, we got something exciting to talk about, guys, something I'm so excited about. You know, we talk all about transhumanism and

(00:43):
bio enhancements. We've talked about it so many times, not on this episode, on the series, like the idea of taking humans and making them better. You know, you're staring at me blankly, Jim. I'm listening. Yes, okay. So we want to make them better by giving them claws, by giving them sharp clamps. Well, clamps with claws on them. Okay, clamps, yes,

(01:07):
of course, of course. Wings, I mean, humans need wings. Generally I can eat like thirty or forty wings all by myself. Yeah. So we've done an episode called Our Two Cents on More Senses, which was all about how we might be able to use technology to expand our ability to sense stuff around us. Right,

(01:29):
So we've talked about how to enhance ourselves that way. We've explored how bio enhancements present some pretty difficult ethical questions. I remember we used to talk about, is it ethical, say, if an athlete wants to just cut their healthy legs off to get fake legs? And that's not the right word, not fake legs, prosthetic legs, that might be better than

(01:51):
the legs he or she was born with. Right. And then also, just on the other side of it, is it ethical for a doctor to perform such a procedure? Like, these are tricky questions that we don't have the answers to. But instead of looking at those, I thought we could actually look at a related but slightly

(02:12):
more focused topic, the concept of moral bio enhancements. Yes, instead of making humans better, we make them better. Yes, instead of better at clamping and better at clawing and running and flying, we make them better at not stealing out of the donation box. Right, right. All

(02:35):
the little petty things that, you know, add up over the day, like, from the moment you wake up to the moment you go to bed, all those little things that irritate you. Because if people were just decent, like, you know, decent, everyone would get along much better. Sure, making people less aggressive and more altruistic. Yeah, like, you know, if people would just let you into that lane of traffic, then everything

(02:58):
would move much more smoothly. Works like a zipper, one car from one lane and then one from the next. But of course that's not how it works, because jerk faces are out there. But what if we could have a little switch that said jerk face off, and everyone was a non jerk face? Well, that'd make YouTube a lot more enjoyable. Yes, the comments sections would

(03:19):
be so much different. Although our commenters are really very lovely. Yes, we're not talking about you guys. No, no, you guys are beautiful. It's not you, it's those other jerk faces. So we wanted to talk about this concept of moral bio enhancements. Now, to talk about that, we first have to just discuss, what is morality in the first place? And it's a tricky thing, right,

(03:42):
It's basically knowing the difference between right and wrong, if you want to be super, super general about it. Yeah, but that in itself is deceptive in its simplicity, right. Well, yeah, we generally think it describes a certain type of cognition and behavior, right, that there is a decision making process that happens in

(04:04):
the brain, or certain types of reactions to stimuli, that we class under morality. And there are a lot of different types of morality, right? Like, there are things that some people consider moral dilemmas that other people would not even categorize as moral questions. For example, generally

(04:24):
people almost universally think of things like not causing harm to others and making things fair. Those are pretty universal moral principles. But there are also things like certain types of ideas about purity, ideas about loyalty and obeying authority, things that people in some places, and some

(04:46):
individuals, think of as very important moral issues, and other people don't even categorize as moral. Right. So one might ask, where did this whole concept of morality come from? There are a couple of philosophers that I was reading about, and their views on the subject, specifically because they are advocates for moral bio enhancements. We'll get to

(05:08):
that in a second. Those philosophers are Julian Savulescu and Ingmar Persson. They argue that morality arose out of the way that we humans would group together way, way back, right, you know, all the way to the point where we were just tiny social groups of humans, originally traveling together and then eventually settling down and trying to create like an actual

(05:32):
settlement, I guess. So the theory there is that, evolutionarily, there's a survival advantage to group behavior, that you have a better chance of passing on your genes if you work together with others, and you work together with others better if you behave well. Yes, that's essentially the perspective, saying that morality is that sense that makes

(05:54):
us feel badly when we cause harm to other people,
and to a lesser extent, when we allow someone to
come to harm not through our direct action but through
our inaction and specifically within our social group. So we
feel worse if we're directly responsible for that harm. Morality
and causation are linked, according to their argument, and they
also say that the further out you get from a

(06:16):
social group, the less badly you would feel about harming someone. So yeah, if I had the chance, through an action, to do something mean to you guys, yeah, I might feel pretty bad about it. Less so if it was someone in the next office over, right, because you don't know that person. Yeah, yeah. This is also kind of

(06:36):
coming into something that I used to read about, I think in The Straight Dope, where Cecil Adams argued that essentially every group's name for themselves tends to mean "the people," and then everyone's name for everyone else is "those godless heathens over there." Right. So, that's an illustration

(06:58):
of this idea right there: that other group of people are not within the social group, and so therefore you don't feel that sense of responsibility and accountability. Or even if you feel responsible, you don't feel badly about being responsible for anything negative happening to them. Well, a much simpler way to think about this, even, is in your day to day life: you treat

(07:20):
other people in traffic in a way that you would not treat your close friends and family members. Well, that depends on what they're doing in traffic. According to everyone I know, I cut off everybody in conversation, so apparently I'm just... I'm talking about a morally normal person. That's fair, that's fair. Lawful evil schemer. It's more neutral evil, but thank you. So,

(07:43):
as our world has grown, according to these philosophers, we've become members of a global community, and those natural inclinations aren't sufficient to keep things civil, because the world is just too big. There are too many people, and social groups are too fragmented, for our sense of morality to compensate. So they point out three things that have come as

(08:05):
a result of this. They say that, as a group, human beings have become more loss averse, meaning that we try to protect ourselves against loss rather than pursue improvement. So rather than say, hey, we can work to make things better, we say, hey, let's make sure things don't get worse. This would manifest as a status quo bias. Yes, yeah,

(08:27):
we just don't want things to change, right? Or at least, we think things are pretty lousy, I don't want them to get worse than they are now, like that kind of idea. Also, that we focus mostly on our immediate social group and the immediate future. So if someone is outside your social group, you're less likely to be concerned for their well being, and you may also view any offense they commit as being greater than you would

(08:50):
feel if someone within your social group had done the same thing. So you judge people by their actions, not just on the actions alone, but on whether or not they're within your particular social group. You judge outsiders more harshly. Exactly. And, you know, I think it's also interesting that they point out the immediate future problem, because that's something else that, as just sort of an armchair observer, you

(09:12):
could argue you see throughout lots of different aspects of life, where you may feel like an organization is making decisions that are for short term benefit but will be a detriment in the long term. Well, according to these philosophers, that's an issue that goes all the way down to the way you treat your immediate friends and family. It's just innate, and

(09:35):
it's something that we have to work to get past. And they also say that we feel less responsible if we're part of a large group of people causing a negative outcome for someone else than we would if it were on an individual basis. So, in other words, if I were to act personally in a way that caused Lauren harm, I would feel very badly about it. If

(09:56):
I... Yes. Yes, I'm offended that you think otherwise. If I were part of a large group of people who caused harm to Lauren, I'd probably feel badly about it. But maybe not, because, well, it was the group. Diffusion of the responsibility. Yes. I guess,

(10:16):
I mean, it was really Josh Clark's study in the first place. And that guy, man, that Josh, what a jerk face he is. There's a similar phenomenon with reluctance to help by people in large groups. You know, if you're the only person around to help somebody who needs help, you're probably more likely to do it than if there are a whole bunch of people standing around and you're just waiting for somebody

(10:37):
else to do something. Yeah, that, I like. There's some folks I've listened to on another podcast who kind of casually referred to that as "someone else is smarter than I am." Right, the idea that there's someone else who is more capable of handling that situation than you are, therefore you should hold back, because you're afraid of making the situation even worse. When the reality often is, someone

(11:00):
needs to start to act as soon as possible, and you're not likely to make it worse, but through inaction, you're making it worse. That actually sounds kind of similar to the status quo bias, being afraid to change things. So because of these issues, the two philosophers are in favor of the idea of moral bio enhancements. And we've

(11:21):
mentioned that term several times. I guess it's time to talk about what the heck they are. Okay. So, the basic picture of a moral bio enhancement is, number one, it doesn't exist yet. Right, or, well, you might say that in some very primitive ways, something kind of like a moral bio enhancement could exist, maybe in some sort of

(11:44):
drug form, not in a very precise or advanced way, right. Not in a way where it would be, you know, applicable to large populations. Certainly not. And we're not really using those as therapies right now, right? Yeah. But of course the second thing is, well, it would be some kind of alteration or modification. So maybe a

(12:05):
drug therapy, a physical therapy, some kind of surgery or implant, something you could do to your body, to your brain specifically, that either guides you or forces you to make moral decisions and exhibit moral behavior. You could imagine this as anything from sort of a positive reinforcement, where your body has a reward system for

(12:28):
every time you make a decision that is quote unquote moral, as has been determined by whomever has designed this bio enhancement. Like a little, oh, that good decision just tasted like chocolate a little bit, yeah, sort of thing. Or it could be, imagine if morality felt as good as eating food or having sex or other things that

(12:49):
give you rewards in the brain, right? Or it could be an aversion approach, where choosing the bad thing, the immoral choice, would make you feel much worse than you would otherwise. Like in Buffy the Vampire Slayer, when the Initiative puts that chip in Spike's head, and every time he tries to attack somebody, he gets a little

(13:09):
instant migraine. Yeah. If you've never heard about that, it's where we learn in the Buffyverse that a computer chip is equivalent to a human soul. I have no problem with that particular part of their mythology. So that's a good one, that work being problematic aside. Another one would be A Clockwork Orange. Yes, yes, A Clockwork Orange, the Ludovico treatment. Yes. So

(13:33):
the very simple explanation is, the main character is a sociopathic criminal teenager who loves to commit ultraviolence against random people, and he gets sent to prison and subjected to this therapy where he's made to watch films of violent and lawless behavior, and meanwhile is given some kind of, I don't know, some kind of drug that makes

(13:54):
him feel horrible and establishes this association in his brain. Right, right. And we'll talk a little bit more about that towards the end of this episode, because it does come into play again. But I do bet some of you out there are thinking, wait a minute, moral bio enhancement? But that couldn't happen. We couldn't do that with technology,

(14:16):
and I want to say, let's consider it. Okay, so first of all, I think we should all agree morality is located in the brain. Whatever morality is, it's something that's going on in the nervous system, definitely, pretty much entirely in the brain. A chemical and physical process, probably, yes, and for this reason it can be altered by changes to

(14:37):
the brain. So there's just overwhelming evidence that moral behavior is controlled by this complex set of factors in the brain, something that was referred to in one scientific review that I read as a neuromoral network. And that review was by Mario F. Mendez, and it's called The Neurobiology of Moral Behavior: Review and Neuropsychiatric Implications, in CNS Spectrums,

(14:59):
2009. And so this neuromoral network, assuming it exists, there is some kind of network of processes in the brain controlling morality. Whatever that is, we can call it a neuromoral network, you know, even if it's not exactly the brain locations that are implicated so far in neuroscience. Sure, sure, it's a variable term.

(15:21):
At this point, we're saying, like, we're using this term to describe something we do not fully understand or cannot fully identify, or even really vaguely understand, honestly. Well, we do know some things about it, and I want to say what those are. So we do think that it's responsible for a system of emotions and drives that guide and motivate our moral decisions. And in a

(15:43):
lot of ways, if you look at what's going on
in the brain when people are making moral decisions, there
seem to be social drives at work. So we have
parts of the brain that respond to social situations and
give us cues about how to, you know, act in
a group or act in relationship to other people. And
there are emotional feelings that are generated that act sort of like a cattle prod to

(16:05):
make us do something. So some areas of the brain
that are involved in morality are already known. For example,
Mendez gives the example that a major area implicated in response to moral dilemmas is the ventromedial prefrontal cortex and its connected regions, especially on the right side

(16:25):
of the brain, one of my favorite regions of the brain. This would also implicate the adjacent orbitofrontal and ventrolateral cortex. I'm sorry to just be saying these names, but we should list what they are. The amygdala, and the dorsolateral prefrontal cortex, also known as the bad neighborhood of the brain. Now, these are

(16:45):
not the only areas of the brain that would be involved in moral dilemmas, moral reasoning, and moral behavior, but these are some of the main ones that have been identified. And most of this has been learned through fMRI research, which usually looks something like this: you put a person in an fMRI machine, and this tracks activity in different parts of the brain

(17:07):
in real time based on blood flow. This is often used to try to figure out what part of the brain somebody is using when given a certain type of stimulus. And the stimulus in this case would be, for example, you present people with moral dilemmas. You say, would it be wrong to play soccer with a human head as the ball? Would it be moral to kill a healthy

(17:27):
man so his organs could be donated to hospital patients who need them? Or you show people, quote, morally salient photos. So you might show people photographs of very bad things happening to people and say, is this good or bad? Getting close to that Clockwork Orange treatment, except, of course, not trying to create an aversion therapy in the process. Right, they're just trying to understand what parts of the brain

(17:49):
are you using when we make you think about stuff? Let's see what happens when we take the puppy away. Exactly. So, these neural factors that are controlling moral behavior are physical, and they can be modified chemically or mechanically. One example is acquired sociopathy from brain lesions, or changes in moral behavior linked to frontotemporal dementia,

(18:12):
or FTD. These are examples we know of where you make a change to somebody's brain and their moral behavior changes. So if you suffer a certain type of head injury, seemingly out of nowhere you might become a pathological liar who can't stop shoplifting. Right. And it's not that, you know, you have just decided, well,

(18:34):
after I had a near miss with death, I'm just gonna start taking stuff because the world owes me. No. In other words, that would explain YouTube comments. Go ahead. No, well, there are faculties in the brain that are involved in moral reasoning and moral responses, and injury or damage

(18:56):
to the brain can cause major changes in how this decision making happens. Right, right. And, I mean, again, this is where we look at this as evidence that morality ultimately has its source within the gray matter that's in our skulls. I've seen some people argue otherwise, saying that they thought that was too simplistic. But

(19:17):
when you look at the evidence of people who have suffered some form of condition or injury, and thereafter their morality has changed significantly, it becomes pretty compelling evidence to suggest that morality does have its seat in the brain. Well, now I do want to back up and give a shout out to those critics you just mentioned, because

(19:39):
they might have a point. Not that it's not based in the brain, but that morality may be based in the brain in a way that is so generalized and complex that it would be difficult to modify it with precision changes to certain parts of the brain. And so that is another thing, though. But the general principle

(20:01):
I think is very true. If you can change people's moral behavior, and change the moral salience of events to people's brains, through accidental changes like injuries and illness, you can in principle also change it with intentional modifications. It's just a question of, could we actually figure out what those

(20:22):
modifications should be. And the answer might be no. Right, right. Could we figure out those modifications to an extent where we actually get the outcome we desire? Right. Well, anything having to do with the brain is like this. We've probably said twenty-eight million times on this show that the brain is really quite a complicated thing, and we still know so little about exactly what's going

(20:45):
on in there, despite our best intentions over the past sixty or seventy years. Some of the areas that are currently being investigated into how we might go about doing this, or areas that show promise, I should say, rather, are stuff like psychopharmaceuticals, you know, using drugs to do stuff to the brain, like

(21:07):
selective serotonin reuptake inhibitors, SSRIs. There's research that indicates that serotonin may play a role in reducing reactive, or impulsive, aggression. So therefore treatment with SSRIs might induce an aversion to harming others. You know, just as a

(21:29):
side note, that brings up another thing we should note, which is that there are also different processes that produce immoral behavior. You mentioned, yeah, impulsive aggression versus deliberate. Yeah. So you could have somebody who thinks about murdering somebody, and they come up with a plan and they go

(21:50):
and do it. This is Minority Report territory here, right? Versus somebody who just gets into an argument with somebody, and they can't control their flare up of emotions, they get angry, yeah, and they kill somebody. Both result in homicide, but these are, apparently at least, very different things happening in the brain. So that's something that

(22:10):
we should also be cognizant of. Absolutely. And along that deliberate aggression kind of line, there's another class of drugs, norepinephrine reuptake inhibitors, that have been studied in relation to moral behavior, and in relation to controlling one's own actions, to not going out and doing that thing that you thought to

(22:31):
do that is actually kind of crappy. Deliberate decision making, yeah. And back to serotonin, though, there are a couple of studies in which some subjects were given citalopram and then asked to think through hypothetical scenarios or to play an economic game. A couple different studies here,

(22:53):
and the subjects who were on the SSRI were less likely in the hypothetical scenarios to say that it's okay to harm one person for the good of a group. That was a weird outcome. And, in terms of the economic game, the people

(23:14):
on SSRIs were less likely to penalize people who were behaving unfairly towards them in the game. So, I mean, these present interesting moral dilemmas that we could all think about, right? Like, it's not just that they make you more or less likely to kill somebody, but they might modify your opinion on a question

(23:35):
that is morally debatable. Yeah, yeah, because I think that's one of the classic moral debates, whether it's okay to sacrifice one person for the good of a group. In fact, that was what I brought up: can you kill a healthy guy to take all his organs to give to ten different people who need them? Right. And of course it's connected to the trolley problem, which we've talked about previously, especially in

(23:56):
regards to self driving cars. Like, these are issues that have been talked about in various incarnations for about as long as we've had philosophy, and the fact that we don't have a definitive answer tells us that these are complicated issues. And maybe that's one of the reasons why some people say moral bio enhancements are a tricky thing to look

(24:18):
into: if they seem to simplify something that humans over the course of more than a thousand years haven't been able to come to a conclusion on, maybe it's a little too simplistic. Yeah, yeah. Other areas that could potentially be looked at, in terms of therapies and treatments that we're currently capable of: brain stimulation. Because

(24:39):
we've talked on the show before about how electrical stimulation and/or electromagnetic stimulation are treatments for very severe cases of depression, basically because they change the levels of some of your brain chemicals, like serotonin or norepinephrine, for reasons that science isn't really sure about. Yeah, sometimes we do therapies

(25:02):
seeing that they have an effect without really understanding what the mechanism of that is, and it concerns us a little bit, but it worked, let's keep going. Usually it means we're going to keep looking into this to figure out why it worked. But in the meantime it's working. And in those cases, for people for whom drugs don't work, it's

(25:25):
a lifesaver, and it's an incredibly relieving therapy to have available. Oh, a third area that people have hypothesized about is genetic selection or genetic engineering, which is definitely far beyond us right now. Because, okay, there are a couple of genes that have been implicated in altruistic behavior, but studies

(25:49):
into the whole human genome definitely indicate that it's probably a lot of genes working together, along with a lot of environmental factors, that make people altruistic or jerks. Yeah, I mean, that's taking a step even farther back from us understanding what's going on in the brain. So in the brain, you know, we've implicated a few major regions, but still, there's a lot

(26:11):
of mystery. Sure. What exactly is the neural substrate for morality? Is it a whole brain kind of thing that's just gonna be too hard to understand? And now you've got to step back and say, what are all the things that make the parts of the brain behave the way they do? Right, right. And obviously, if you're talking about a moral bio enhancement that has the

(26:35):
largest blanket of effect, then you would want to go all the way back to the most basic unit that you could. But it becomes progressively more difficult to identify all of the variables you need to understand in order to actually make it work. That also, though, if you're going to back up all the way

(26:55):
to say, now we're not modifying an adult, but we are trying to create a human from the start, an embryo that will be the most moral. Yeah, exactly, altering germ cells to produce moral babies that grow up to be moral adults. I mean, if you're starting at the beginning, it just seems like you could work harder to educate kids to be moral. But that's not gonna work every time, right,

(27:17):
because we do think that there's pretty good evidence that there are genetic conditions that you can be born with that affect your moral behavior. Sure. Also, on a related note, Moral Babies was one of my favorite Saturday morning cartoons. Jim Henson's Moral Babies. Yeah, that's great. So we've kind of touched on this a little bit. But

(27:39):
why would anyone advocate for moral bio enhancements in the first place? We have the bit at the top of the show where we have the philosophers who said, you know, since the dawn of humanity we have had morality, and it worked pretty well for us when we were all in our own little small groups that were more or less, you know,

(28:00):
self sustaining and maybe only occasionally had contact with other small groups. But that level of morality is no longer sufficient to account for, or to make us make, moral quote unquote decisions in a larger context, in a global context. Yeah, I mean, I'd say that if you just assume that it works as

(28:21):
we would want it to, and you take away all of the potential caveats, it's obvious why we'd want it. You would just rather be around people who treat other people right and don't do selfish, dangerous things. Yeah, I mean, imagine a world in which everyone had these bio enhancements, right? Now, we're not talking about, you have identified one subpopulation

(28:46):
and they're going to be subjected to this, because, I mean, obviously, and I'm using loaded words here on purpose. But rather, this is something that affects all humans. It becomes something that is, whether it's mandated or everyone somehow magically, globally, we've all agreed to do it. Then suddenly it sounds like things would be kind

(29:07):
of awesome. That people would all be making choices that didn't benefit themselves at the expense of someone else. They would try to make choices that would benefit multiple people, or at least not cause hardship to other people while benefiting themselves. Dogs and cats living together, mass utopia. Yes, exactly.

(29:29):
That is, like, the idealistic vision of what this would be like. Of course, no crime, no wars, no poverty, yeah, all that good stuff. Yeah, okay, so cool. So that's all, at face value, obviously quite good. And you might ask, on top of that, then, if we think it's possible

(29:53):
to make things like this, then do we have a responsibility to create bio enhancements? Is it our duty, if we know that we can do this? Yeah. If you were to measure up all the people who would benefit from this, right, whose condition is such that if this change were made, they would have

(30:13):
more positive influences on their existence. And then you were to measure all the people who, you could argue, well, maybe it's not a negative impact, but it impedes their ability to succeed at the rate they've been going at. If you look at it from that perspective, chances are you're gonna say, well, from the numbers, like,

(30:35):
on paper, it makes total sense to do this. Because a relatively few number of people, when you're talking about billions, are going to have a slight, or maybe, well, let's even say an extreme, impact on their ability to improve their position in life once this has been, you

(30:56):
know, put in place. You know, they're not badly off, they just would be better off than they are now, easily. But then you have billions of people who can come from a place of poverty, of conflict, of real struggle, and be removed from that. On paper, that's an easy thing to see and say, well,

(31:18):
it makes sense, we need to flip that switch. Sure.
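To make that on-paper arithmetic concrete, here is a minimal sketch of the kind of utilitarian tally being described. Every number in it is invented purely for illustration; the hosts don't cite any figures, and a real accounting would be enormously messier:

```python
# Toy utilitarian tally of a hypothetical universal moral bio enhancement.
# All figures are invented for illustration; none of them come from the episode.

billions_helped = 3_000_000_000   # hypothetical: people lifted out of poverty and conflict
benefit_each = 10                 # hypothetical large welfare gain, in arbitrary units

millions_impeded = 50_000_000     # hypothetical: well-off people whose prospects are curbed
cost_each = -2                    # hypothetical small welfare loss, in arbitrary units

net = billions_helped * benefit_each + millions_impeded * cost_each
print(f"Net welfare change: {net:+,} units")  # prints +29,900,000,000: "flip the switch"
```

The sum comes out hugely positive almost no matter how you pick the weights, which is exactly the "on paper" pull being described here; the rest of the conversation is about everything a raw sum like this leaves out, starting with who gets to mandate it.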
Also, the philosophical argument could be made that traditional methods of betterment, education and socialization and stuff like that, are only going to take us so far in this incredible future that we're trying to build. And so, like, eventually, is this the best

(31:41):
way, in fact, to go about it? Well, and I think in particular, not to get political, but in particular in 2016, we've seen some evidence of things that have shocked some people, where we see that some beliefs, let's go with xenophobia being one, are much more ingrained and widespread in different

(32:05):
parts of the world than maybe you had been aware of before. And it raises some real hard questions. And by xenophobia, I mean, xenophobia, that word by itself, using the word phobia, you're talking about irrational fear. But there are certainly people who hold very strong beliefs

(32:26):
because of what they perceive as being hard truths. And you might think, well, I could have sworn we made more progress on the education of people to understand what the root causes of problems are versus what the perceived problems are. But it's clear that that hasn't really happened. And that's what I think also fuels philosophers like the

(32:48):
two I mentioned previously in saying, like, we've got to take a step further because the traditional approaches aren't working. Right, right. And going further down that line, are moral bio enhancements going to be the only way, in fact, to save humanity from ourselves, to prevent some kind of global catastrophe? We have the power.

(33:15):
I mean, I don't mean the power to save ourselves. I mean we have the power to destroy ourselves. I think that's quite clear. We have nuclear weapons, we have the power to change our climate in devastating ways. It is totally at our fingertips to destroy the earth. Moreover, or at the very least, yeah, we have the power to not make choices that would

(33:35):
prevent that from happening. Right. Like, in the case of climate change, there have been stories on numerous occasions about the things that we need to do in order to at least slow down climate change. At this point, stopping it is much further out than a few years. Right. Even if we were to stop all activity that contributes to climate change right now, there's momentum there

(33:59):
that would take years for it to finally get to a point where it stopped, like centuries, like hundreds and hundreds and thousands of years. So, you know, there's that. But some people would argue that without this moral bio enhancement, the fact that we know we need to make those changes isn't enough, right? We have to have some other form of compulsion. Well, you mentioned long

(34:23):
term thinking earlier. Yeah, I mean, people are just saying, what, so you're saying this isn't gonna affect me in the next few months? I don't care. Yeah, yeah. It's not gonna... does this, is this going to make my air conditioning bill lower next summer? No? Well, I mean, might as well just crank everything up right now, then. I mean, a lot of people can't even properly factor the long term personal risks of something like smoking or something.

(34:46):
So, I mean, it's only understandable. I'm not trying to make excuses for it, but people, we just, yeah, you're saying, okay, so it's gonna have this kind of vague, generalized effect that's hard to be specific about, like, many years from now. Moreover, it's an effect that's so large that you think, well, my personal behaviors are

(35:09):
going to have so little impact on the overall problem that it doesn't even matter. Because as an individual, when you look at your individual contribution to the problem, it seems minuscule. And then you feel like, well, no matter what I do, there's no impact on the end result. The only impact is if everyone does it. Right.

(35:29):
So it gets back to this idea of mandating
this moral bio enhancement. Okay, and so we're gonna call
it there for the first half of this conversation on
moral bio enhancements. But if you want to hear the
second half, join us again next time. And I feel
morally obligated to mention that we recorded the whole thing
and then made the determination that it was too long.

(35:52):
I don't want to give people the sense that we
just decided to stop here and then picked up again. See, this is a potential problem, this kind of sheepishness Jonathan is showing. I think you should be more resolute, and you need a certain amount of immorality to do that. How can we fix that? I'm sorry that I'm a morally perfect being, guys. I feel so badly about that.

(36:13):
Now I'll go and mow your lawns as a way of saying I'm sorry. If you'd like to tell Jonathan where your house is so that he can mow your lawn as well... don't do that. Yeah, Jonathan does not need that power. But if you do want to get in touch with us, you can send an email. The address is FW Thinking at HowStuffWorks dot com, or you can drop us a line on Twitter or Facebook.

(36:36):
On Twitter we are FW Thinking. Over on Facebook, just search FW Thinking and our profile will pop up. You can send us a message, let us know what you think about this episode, any thoughts you have about future episodes. Maybe there's some subject you really want us to dive into. Let us know, because we love doing this kind of stuff, guys, and we will talk to you again really soon. For

(37:01):
more on this topic and the future of technology, visit Forward Thinking dot com. Brought to you by Toyota. Let's go places.


Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
