December 15, 2025 • 75 mins

Why do you make different moral decisions in similar circumstances? Why do some people make different choices than you? What happens when ancient moral instincts collide with modern problems such as pandemics, AI alignment, and political tribalism? Could a simple online game reduce polarization? Could you contribute to charities more effectively if you understood how your moral brain decides? Join Eagleman this week with guest Joshua Greene as we open the hood of human morality.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Why will your brain gladly flip a switch to save
five lives at the cost of one life, but it
will refuse to push one person off a bridge to
accomplish the same thing? Why do Buddhist monks and psychopaths
and patients like Phineas Gage behave differently than you might?

(00:25):
And what happens when ancient moral instincts collide with modern
problems like pandemics and AI alignment and political tribalism? Could
a simple online game reduce polarization? And could you contribute
to charities more effectively if you understood how your moral
brain works? This week on Inner Cosmos, my colleague Joshua

(00:49):
Greene helps us open the hood on human morality and
asks whether we can build technologies that steer us towards
cooperation in a world our brains weren't built for. Welcome
to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and
author at Stanford, and in these episodes we sail deeply

(01:13):
into our three pound universe to understand how we see
the world, and, for that matter, how we see each other.

Speaker 2 (01:35):
When you peer.

Speaker 1 (01:36):
Into the human brain, you find a machine built on
conflict. On the one hand, it's exquisitely tuned to the
immediacy of social life: reading faces, sensing fairness, feeling indignation
when someone breaks the rules, feeling compassion when someone needs help.
These emotional circuits evolved to solve the oldest problem of

(02:00):
group living: to bind us together, to keep our small
bands cohesive, to punish the cheaters, to reward the cooperators.
These systems are fast and automatic and deeply intuitive. And
at the same time, housed in the very same skull,
we have slower, more deliberative systems. This is the circuitry

(02:21):
that lets us step back, cool off, calculate, imagine alternative futures.
It allows us to override that first impulse and to
ask what actually leads to the best outcome, what matters
the most in this situation. Our brains can operate in
both of these modes, and most of the time we

(02:43):
toggle between the two without even noticing.

Speaker 2 (02:46):
And as we'll see.

Speaker 1 (02:47):
Today, our moral lives exist in a strange dance between
instinct and reflection. The strange part is that evolution never
anticipated that we would one day wield these moral instincts
on a planetary scale. Our emotional machinery was designed for

(03:07):
life in small groups of hunter gatherers, not a world
of eight billion people with global pandemics and climate change
and polarized democracies. But we bring the same ancient intuitions
to all of it. We still divide the world into
us and them. We still experience harm differently depending on

(03:30):
whether it's direct or indirect. We still recoil from active
wrongdoing far more than passive neglect. Sometimes these instincts guide
us well, other times they mislead us. If you want
to understand the tensions at the heart of modern ethical life,
from trolley problems, which we'll talk about in a second,

(03:51):
to end of life decisions, from pandemic policy to political tribalism,
we have to understand how this dual process, this moral brain,
actually works. We have to understand why we help, why
we punish, and why certain dilemmas feel difficult even when
the math is simple. So this is why I called

(04:12):
my colleague Joshua Greene today. Josh is at Harvard where
he's a psychologist and a neuroscientist and a philosopher, and
his lab studies how we make moral judgments, how our
fast gut reactions and our slow reasoning systems work together
and sometimes work against each other.

Speaker 2 (04:32):
He's the author of.

Speaker 1 (04:33):
A book called Moral Tribes, where he argues that our
everyday moral sense works beautifully within groups, but can fail
spectacularly between groups. And I want to mention that Josh
has been working lately on going beyond describing the machinery.
He's begun building tools, what he calls moral technologies, to

(04:55):
help societies navigate around our blind spots. So we'll hear
about tools which help people donate in more impactful ways,
or online games that measurably reduce political animosity.

Speaker 2 (05:09):
In other words, how.

Speaker 1 (05:11):
Can we actually engineer cooperation rather than just hope for it.
So today we're going to zoom into the moral mind,
where emotions meet reason, where tribes collide, and where ancient
circuitry tries to steer a modern world.

Speaker 2 (05:27):
So let's dive in.

Speaker 1 (05:33):
So, Josh, when you look at the sense of morality
that our brains generate, what is that for? What problem
was evolution trying to solve there?

Speaker 3 (05:41):
So morality is kind of a mystery from an evolutionary
point of view, because if you think about evolution in
the most straightforward terms, you would think that the greediest,
brawniest individuals would be the ones who get the most
resources and are able to produce the most offspring. And
why would anyone ever be nice to anybody else? And
this is something that really bothered Darwin right from the

(06:04):
beginning, and people even said to him, Look, how
could you possibly explain any kind of human goodness if
nature is red in tooth and claw, as Tennyson famously
said about Darwin's theory. And he thought about this and
his answer was one that turned out to be very prescient.
So he recognized that while individuals may benefit from being

(06:29):
ruthless and nasty, teams of individuals, groups of individuals can
benefit from being more cooperative within the group. Right, if
you're a member of a group where, you know, if
you fall in the river, tough luck, then that group
may not survive very well, even if the individual who
carried on hunting instead of rescuing you does a little

(06:51):
bit better. And so the idea that we depend on
each other, that teamwork is a powerful weapon for, you know,
fighting against the elements but also outcompeting other groups.
That idea emerged early on, along with the
idea that individuals who are genetically related can

(07:13):
benefit their genes indirectly by helping others.

Speaker 2 (07:16):
So that's the sort of idea.

Speaker 3 (07:18):
At a strategic biological level, why would anyone ever look
out for anybody else? And then on a psychological level,
the question is how does this work? And it mostly
works at the level of what we might call social emotions.
That is, you know, if someone's in trouble,
you have a sense of vicarious distress and you're motivated

(07:41):
to help them. Or if someone's not being a good
cooperative member of the group, you might be angry at
them and might want to punish them or let other
people know what a jerk that guy is being.
So it kind of operates on two levels,
the level of surviving through cooperation, and I think
of morality as a suite of psychological mechanisms that enable

(08:04):
us to be more effective cooperators. And then this is
implemented largely emotionally, but we can also use our reasoning
capacities to figure out how to make our way in
the moral and social world. And it's that duality that
gives rise to some of the most interesting dilemmas that
we've studied.

Speaker 1 (08:22):
And you've used the analogy of a camera when it
comes to that duality. Can you unpack that for us?

Speaker 3 (08:28):
So, at least the old sort of digital SLR camera
that I have, you know, has these little automatic settings
like portrait mode and landscape mode, and if you want
to take a picture of a mountain from a mile away,
then you know, you put it in landscape mode and
it does everything and configures itself for that kind of,
you know, familiar situation that the manufacturers of the camera anticipated.

(08:52):
But let's say, you know, you're an artist and you've
got your idea about exactly the sort of off-kilter
shot that you want, with the light just so, and
you're trying to get a certain weird effect. Then you want
to put the camera in manual mode and adjust the
f-stop and everything yourself to take advantage of your
understanding of the situation and your understanding of your goals and
get exactly the shot that you want. And you can

(09:13):
think of intuitions, including emotional intuitions, as like those
automatic settings, where this is a sort of ready-made
response for this kind of situation, and it can be
something that we have acquired biologically speaking, that we automatically
dislike certain smells or some people argue that we or

(09:35):
other species have an automatic fear of snakes that might
be poisonous and things like that. But a lot of
it is stuff that we have learned, essentially, habits that
we have acquired. But whether it comes from our individual
experience or things we've learned culturally, or if it's part
of our genetic endowment, it's all in the form of
ready-made, quick responses to situations that are either familiar

(09:59):
in our biological history, our cultural history, or our personal history.
And then on the other side, we've got our reasoning abilities,
where we can look at the situation and say, Okay,
normally I don't like to jump out of buildings, but
if the building's on fire, maybe that's something I've got
to do in this case.

Speaker 1 (10:17):
So, with this dual process nature, you've got fast gut reactions,
you've got slower, more controlled reasoning. So how does this
play out in the domain of morality?

Speaker 3 (10:27):
You can see this tension between kind of the automatic
response and the more detached, reasoned response in moral dilemmas
that are sometimes.

Speaker 2 (10:38):
Called trolley problems.

Speaker 3 (10:39):
Right, So in the classic pair of cases, you've got
a trolley that is headed towards five people, and the
only way that you can save them is to hit
a switch that will turn the trolley onto another track.
But unfortunately there's another person there. And the question is
can you hit the switch to avoid having the five
get killed? And there most people say yes, that's okay.

(11:01):
But from a cognitive science point of view, the most
interesting thing is the contrast between that case where you're
hitting a switch and turning the trolley away from five
but onto one, and the classic footbridge case. So this
is where the trolley is again headed towards five people.
This time you are on a footbridge over the tracks,

(11:21):
and the only way you can save those five
people is to do something that's pretty uncomfortable. There's a
guy next to you wearing a big backpack, and you
can throw the guy with the big backpack onto the
tracks and then he'll be a trolley stopper and stop
the trolley from killing the five people. But that person
will be killed, and you can't jump yourself because you're
not wearing the big backpack, so this wouldn't work. And

(11:42):
we're going to suspend disbelief and assume that you have
good aim and all of that stuff, and even with
all of those somewhat unrealistic assumptions in place, most people
say that it's wrong to push the guy off the footbridge,
or they at least feel a lot more uncomfortable about it.
And so the nice thing about these cases is in
some sense they're very similar death by trolley, five lives

(12:04):
versus one, and yet we give very different responses to them.
And this was the thing that kind of got me
into cognitive neuroscience, you know, many years ago, twenty years
ago or whatever it was when you and I first
met and started looking at this with brain imaging.

Speaker 1 (12:20):
So give us the punchline of why people are happy
to flip the switch in the first case and they
are not in the second case, and what your brain
imaging studies revealed there.

Speaker 3 (12:31):
Yeah, so the short answer seems to be that
we have a kind of negative emotional response to the
thought of pushing the guy off the footbridge that we
don't have in response to hitting the switch in that case.

Speaker 1 (12:45):
And why?

Speaker 3 (12:45):
Then right, And so we can answer that question on
sort of two levels. What's going on in the dilemma
that makes us feel differently, and then what's going on
in our brains that is the basis for having that
differential response. So in terms of what's going on
in the dilemma, there are three things that really seem
to be driving the effect, although there are other things

(13:07):
you could vary as well. But the difference is between
the switch case and the footbridge case. And these were
nicely identified, and since refined, by people like my
colleague Fiery Cushman.

Speaker 2 (13:18):
So one is that.

Speaker 3 (13:20):
Well, actually, one thing that's just in the background is
that harm is much more salient when it's active rather
than passive, and that's true in both of these cases.
The two things that really differentiate these cases are, one,
the harm is more direct when you're pushing the guy
off the footbridge, so we call this personal force. This is,
you know, if you're pushing with your hands or pushing

(13:41):
even with a stick, that feels worse than if you're
hitting a switch, even if you're hitting a switch that
would drop the guy through a trap door.

Speaker 2 (13:49):
Or something like that.

Speaker 3 (13:50):
So even when it's otherwise just like the footbridge case, we see a
big difference there. That difference interacts with something else which
is a bit more subtle. It has a longer philosophical history,
and this is the difference between harming somebody purposefully.

Speaker 2 (14:06):
Or as a side effect.

Speaker 3 (14:08):
So the idea is that in the switch case, what
you're doing is you're turning the trolley away from the
five people, and as a side effect, you end up
running over the one person.

Speaker 2 (14:17):
But that one person is not part of your plan.

Speaker 3 (14:19):
If they were to magically disappear, that would be great,
whereas in the footbridge case, you are using that person
as a trolley stopper, right. And it's that combination of
harming somebody in this purposeful way, using them as a means,
and doing it in this direct personal way. And then
in the background is the fact that it's active.

(14:41):
Those things combine to really give us our sense of
like what is a violent action. If you remove any
of those three things, it doesn't have that sense of
sort of immediate violence, like touching somebody in the face.
So that's sort of the trigger in terms of like
the features of the dilemma, the differences that make the
difference in terms of the situation.
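
To make those features concrete, here is a minimal Python sketch of the trigger just described; it is an illustration of the three-feature idea, not the lab's actual statistical model.

# Sketch of the three features described above; illustrative only.
def feels_violent(active, as_a_means, personal_force):
    # The aversive sense of "immediate violence" fires only when the harm
    # is active, used as a means to the goal, and applied with personal force.
    return active and as_a_means and personal_force

print(feels_violent(True, True, True))    # footbridge: push the man -> True
print(feels_violent(True, False, False))  # switch: harm is a side effect -> False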

Speaker 2 (15:01):
Then you say, okay, so.

Speaker 3 (15:03):
That combination of harming somebody in a way that's active, purposeful.

Speaker 2 (15:07):
And direct, that gives us the sense of violence.

Speaker 3 (15:10):
But what's going on in our heads? And here I
think the best evidence actually comes from cases where people
have studied patients with brain damage, similar to the famous
case of Phineas Gage. So if you've taken intro psychology
or have heard about it otherwise, you probably know about
this case. So this was a railroad foreman living

(15:31):
in Vermont in the nineteenth century, working on the railroad
all the livelong day. And there was a terrible explosion
and an iron spike, a tamping iron, was blasted out
of essentially a cannon, and it went into Phineas Gage's
eye socket, through the front of his brain, and out

(15:51):
the top of his head. And amazingly he survived once
the wound was treated. But when he survived, he didn't
survive intact. His reasoning abilities, what you might loosely
call his sort of cognitive abilities, remained intact. He could speak,
he could do math problems, he could do basic reasoning.

(16:11):
But his personality, his character, his values, and his decision
making abilities, those things seem to be compromised, and he
ended up going from being this upstanding, you know, a
respected railroad officer who people looked up to,
to a kind of lawless wanderer, and this was one
of the first sort of clear indications that there are

(16:33):
distinct systems in the brain
that handle things like social and emotional decision making, things that
you kind of have to do by feel, by judgment,
rather than following some kind of formula or using some
previously acquired skill like your skill for language.

(16:53):
After I did those initial brain imaging studies, a group
at the University of Iowa, a group in Italy
also did a version of this where they tested patients
with damage like Phineas Gage. And these are patients that
you know Damasio kind of described as, they know the words,
but they don't feel the music. They say things like

(17:14):
I'm looking at this picture you're showing me of a
gory car accident, and I know, before my brain tumor
or whatever it was, this used to bother me, but
now it just leaves me flat. So they don't have
that feeling. They're kind of emotionally cold. And what you
find with these people is that they're much more likely
to say that it's okay to push the guy off
the footbridge, right. And it's not just these two cases.

(17:35):
There are a lot of different dilemmas. You know, they
don't necessarily involve literal trolleys and things like that. And
this effect was huge.

Speaker 2 (17:42):
You don't need statistics to analyze this.

Speaker 3 (17:44):
You can just see it's like overwhelmingly, they're much more
likely to make those judgments. We've also found or others
have found that psychopaths are more likely to say that
it's okay to push the guy off the footbridge. And again,
the idea is that they can reason, but they don't
have that emotional moral sense, that sense of horror or

(18:05):
reluctance at directly harming somebody in this violent way.

Speaker 2 (18:08):
But then something really interesting.

Speaker 3 (18:10):
This is work that's unpublished, although I'm pretty confident
about it. I had a fantastic undergrad named Shin Sheng
who's now long since graduated, who went to Tibet and
tested Buddhist monks on the footbridge case. And she tested
about fifty of them, and she found that eighty percent

(18:32):
of the Buddhist monks approved of pushing the guy off
the footbridge.

Speaker 2 (18:35):
Now you might say why.

Speaker 3 (18:38):
You know, when we asked people, what do you think
Buddhist monks would say about this? And they said, definitely,
they are not going to be like, you know, the
Phineas Gage patients and the psychopaths. So what's going on?
That's really weird.

Speaker 2 (18:50):
Right? And the idea is that you can reach with this.

Speaker 3 (18:53):
Dual process approach, where it's partly about how you feel
and partly about how you think, you could reach the
same conclusion in different ways. So the Phineas Gage people
with the emotion related brain damage and the psychopaths who
don't have that emotional moral sense, they just don't have
the feeling that says no.

Speaker 2 (19:09):
Don't do that horrible violence. Right.

Speaker 3 (19:12):
The Buddhist monks said, yeah, I feel that, and I
sense that, but I have this more detached and expansive view,
and I can see in a case like this, if
it is really done with the noble intention of saving
more lives, then that can be acceptable. And many of them,
I think five different monks cited this sutra, this Buddhist

(19:34):
teaching about a ship captain who found himself in this
situation where he could kill somebody to prevent a much
greater harm, and he did that, and he did it
thinking that it was going to be bad karma for him,
but in fact he was reborn as a bodhisattva because
he had this noble intention.

Speaker 1 (20:06):
So, given what we know about this moral machinery, are
there certain kinds of problems that we are systematically bad at?

Speaker 2 (20:14):
Well?

Speaker 3 (20:15):
You know, it's always controversial when it comes to difficult
questions of, you know, what's the right answer and
what's the wrong answer. But I think there are cases
where we're bad when it comes to causing harm.
For example, when it.

Speaker 2 (20:29):
Comes to physician-assisted suicide.

Speaker 3 (20:32):
Right, Let's say you have someone who has a terrible
terminal illness. They have, you know, at most they're going
to live another couple of months, but they're living
right now in agonizing pain. Let's say this is someone
whose body is just riddled with cancer and they're just hanging
on and, you know, despite all the drugs you can
give them, they're in miserable pain, and
they just want to say goodbye and be done. For

(20:54):
a long time,

Speaker 2 (20:55):
I don't know if this is still true.

Speaker 3 (20:56):
The American Medical Association's position on this has been no, you know,
you can't end life.

Speaker 2 (21:03):
Life is sacred, et cetera.

Speaker 3 (21:04):
Right, Whereas in other countries like the Netherlands, for example,
there are procedures and protocols and guardrails in place. But
if you want to end your life, typically in cases
where someone has a terminal illness and is
in a great deal of pain and distress, you can
do that, right, And you can think of this as
a case where the greater good is on the side

(21:26):
of letting this person end their life if that's what
they want. They're experiencing nothing but misery, and everyone around
them is just watching them suffer. But there's this sense that,
you know, ending this person's life actively is like pushing
somebody off a footbridge, and that is just inherently wrong.

Speaker 1 (21:42):
Our moral instincts evolved for life in small groups. What
happens when you take a brain like this and you
drop it into the twenty first century?

Speaker 3 (21:52):
Right, So the way I think about it is as
a kind of sequel to this famous.

Speaker 2 (21:58):
Parable called the tragedy of the commons.

Speaker 3 (22:00):
Right. So, the ecologist Garrett Hardin had this famous paper
in nineteen sixty eight, and he was writing about overpopulation,
which turned out to be not as big a concern
as he thought it was. But he had this very
nice story that sort of beautifully illustrates the challenge of
life in a group. Right, So he imagined a bunch
of herders living near a pasture, and each

(22:24):
of them has their separate herds, and each of them
says to themselves, well, should I add more animals to
my herd? And they think, well, they're just grazing on
this common pasture, so why not? Bigger herd, more money
when I take my animals to market. So they all
grow their herds, and they all grow and grow and grow,
until at some point there are more animals on the
pasture than it can support, not food enough for any

(22:47):
of them, as they're scrambling to eat the last few
shreds of grass, and they all die. And that's the
tragedy of the commons. And this is the classic problem
of me versus us, where if everybody does the thing
that's in their individual interest, then everybody ends up being
worse off or collectively worse off. And this is the
basic problem that human morality is designed to solve. Right,

(23:09):
So we have positive emotions and negative emotions that we
apply to ourselves and that we apply to.

Speaker 2 (23:15):
Others in order to motivate us to be good herders.

Speaker 3 (23:18):
Right. So if you're a good herder,
you have my gratitude. If you are a cheating
herder who secretly grows your herd, then you know you have
my anger and perhaps even my disgust. But if you
help me out, then you have
my sympathy. And if I did something bad, I might

(23:38):
feel guilty. So that sort of suite of feelings, carrots
and sticks that we can apply to ourselves and apply
to others, governs life on the pasture.
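
To see the me-versus-us logic in miniature, here is a small Python sketch of Hardin's pasture; the numbers and the declining value per animal are illustrative assumptions standing in for overgrazing.

# Toy model of the commons; all numbers are illustrative.
def herder_payoff(my_animals, others_animals, capacity=100):
    total = my_animals + others_animals
    # Each animal is worth less as the shared pasture is overgrazed.
    value_per_animal = max(0.0, 1.0 - total / capacity)
    return my_animals * value_per_animal

# Given what the other nine herders do, adding an animal always looks good...
print(herder_payoff(8, 9 * 8))    # about 1.60 with 8 animals
print(herder_payoff(9, 9 * 8))    # about 1.71 with 9, so every herder grows the herd

# ...but when all ten herders follow that logic, the pasture collapses.
print(herder_payoff(10, 9 * 10))  # 0.0 for everyone
print(herder_payoff(5, 9 * 5))    # 2.5 each if everyone had stopped at 5
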
But in the modern world, we've got many tribes, we've
got many different groups, and there are different ways
that a tribe can get along, right. So you

(23:59):
could have a more individualist tribe, let's say, where
instead of having a common pasture, you just privatize the
pasture and you divide it up into different plots and
everybody cooperates by having good fences and respecting other people's property.

Speaker 2 (24:13):
Rights and things like that.

Speaker 3 (24:14):
Or you can solve the problem by having everybody just
have a common pasture and a common herd, right, and
then you don't have to worry about who's growing their
private herds because there is no private herd. And then
there are questions about how to organize life more generally.
You know, are we going to have collective health insurance
for our humans and our sheep? Or can you defend
your sheep with an assault rifle? Or you know, who's

(24:35):
allowed to be in charge? Can you be a transgender herder?
Or you know, and so on and so forth, right,
And in my little sequel to Hardin's parable, I imagine
something like this. You have a bunch of different tribes
that are living around this forest, and then one hot,
dry summer, there's a fire and the forest burns down,
and then the rains come and there's this lovely pasture

(24:55):
in the middle. And all of the tribes look at
this new pasture and they say, nice.

Speaker 2 (25:00):
Pasture, and they all move in.

Speaker 3 (25:03):
And the question is, what's going to happen when all
of those tribes, with their different ways of life, with
their different religious practices, with their different gender roles, with
their different ideas about violence and peace, with their different
ideas about individualism versus collectivism, etc.

Speaker 2 (25:19):
What's it going to be?

Speaker 3 (25:20):
Is it going to be a bloodbath where it's just
a fight of all against all and the winning tribe
emerges and imposes their tribal culture on everybody else, or
is there going to be some kind of new way
of organizing things that.

Speaker 2 (25:35):
Deals with a modern culture.

Speaker 3 (25:36):
Right, And this is I think exactly what's going on
in the United States right now and what's going on
in other countries, where one solution is essentially to say
blood and soil. This country is for this particular tribe
that got here first, that has historically been in charge.
Let's say we are Whites, we are Christians, we have
European heritage, we have certain ways of doing things and

(25:58):
other ways that we don't do things, and

Speaker 2 (25:59):
This is what this country is really about now.

Speaker 3 (26:01):
Of course, in reality that tribe is not really
a single tribe; it itself is an amalgam of tribes.
You know, the Germans and the Irish didn't always consider
themselves the same people. But you know, at least there's
something closer to a smaller culturally identified US, right. Or
you can try to have a more modern, pluralistic country

(26:24):
where you say, all right, there are many different tribes
with many different cultures, and what we need is something
like what I call a metamorality. That is, whereas a
morality is a system that enables a group of otherwise
selfish individuals to get along as a tribe. A metamorality
is a moral system that enables a group of otherwise

(26:45):
tribalistic tribes, where tribalism is essentially selfishness at the group level, right.
A metamorality is something that enables a group of
distinct tribes, different cultures, different people of different backgrounds,
maybe races or religions, to

Speaker 2 (27:00):
Get along together in a modern context.

Speaker 3 (27:02):
And I think that what we're figuring out right now
in the US, in Europe, in India, in Brazil, in Israel, Gaza,
is, are we going, is there going to be a
big us? Are we going to live in a real
sort of modern democracy where power is truly shared
among groups with different histories and traditionally different moral ideals?

(27:27):
Or does democracy only work when you have a sort
of core dominant tribe and guests, right, as long as
they're well behaved?

Speaker 2 (27:36):
Right?

Speaker 3 (27:37):
And I think that is the big political question that
we're facing.

Speaker 1 (27:41):
Give us an example where our gut reaction sort of
morality is exactly the wrong thing in a modern global context,
large scale policy decisions, things involving climate or pandemics or
risk of AI.

Speaker 3 (27:56):
Yeah, so I would say, you know, cultures can be
more individualistic or collectivist, and I think you see this
playing out in a lot of the issues that you mentioned.

Speaker 2 (28:05):
Take the case of pandemics.

Speaker 3 (28:08):
There's a real trade off there, right, that
the pandemic restrictions really restricted people's individual freedom, restricted their
individual economic freedom, their ability to make money. And in
a country that doesn't have a strong social safety net,
telling people who can't work from home
that they're not allowed to work, I mean that's,

(28:28):
in some ways, like an economic prison sentence.

Speaker 2 (28:31):
Right.

Speaker 3 (28:31):
But at the same time, there was a real disease
and we didn't really understand it, and people were dying.

Speaker 2 (28:36):
And, you know.

Speaker 3 (28:37):
More people interacting with each other would predictably lead to
more death. And so there was a trade off between
saying we have a collective problem and we all have
to make sacrifices to solve it, or saying, well, there
are trade offs here, and we're going to let individuals
or churches or businesses or cities and towns or
whatever it is make their own decisions about how to

(29:02):
navigate the trade off between freedom and public health. It's similar
when it comes to climate change, right. I mean, it's partly,
you know, a debate about the background evidence and whether
or not it's real.

Speaker 2 (29:12):
But I think behind.

Speaker 3 (29:13):
That is a set of different orientations where some people
are very skeptical of the idea that there is this
global problem and we all have to change the way
we live and make sacrifices in order to address it,
versus people who are going to set a very high
bar for the evidence before they give up their individual
freedom to you know, drive the kind of car they

(29:34):
want to drive, or pay a gasoline tax, or
vote for politicians who want to you know, change the
way we get our energy and make electricity prices possibly
higher, at least in the short term.

Speaker 1 (29:46):
So, given everything that you've studied about the brain and
moral decision making, if you were advising a government or
some international body, what would you advise them about decision making?

Speaker 3 (29:58):
So I think there are sort of two levels here, right?
I mean, partly, you know, I'm a person with my own values,
and then there's sort of strategy, whatever your values are. Now,
my values tend to be for the big us.
I am not a big fan of ethnic nationalism,
and I would like to see us be a more

(30:18):
effective pluralistic democracy, right? That's my goal. But, you know,
if you're a true believer in a tribal way of life,
you might just say, well, I oppose that, and I'm
going to fight you every step of the.

Speaker 2 (30:30):
Way, whichever you choose.

Speaker 3 (30:34):
But I'm now speaking from the perspective of sort of
a big tent, big us kind of person. I
think the biggest lesson is you have to do the work. You
have to meet people where they are. You have to
understand that people who have different feelings than you do,
people who have different views about divisive moral issues, they

(30:57):
don't have to be evil to come to a different
conclusion from you, partly because they may just have different values,
and partly because they have made different background assumptions, either
what they've heard from the people they trust about, you know,
particular questions, you know, is climate change real and things
like that, or, you know, the background
values that come from their upbringing, whether

(31:20):
it's you know, secular or religious. And so I think
that, you know, people on the left especially often shoot
themselves in the foot by being maximalist and by saying,
anyone who doesn't meet all of these demands right now
is evil and terrible and, you know, doesn't

(31:41):
care about human rights, doesn't care about the people
whose freedoms, let's say, are in question,
and is just a bad person who needs to be defeated,
right? And I think that that approach is very good
for winning votes within your subset, within your wing of
the Democratic Party, or, in parallel, the rightmost

(32:02):
wing of the Republican Party. But then you have a
hard time bringing the larger us together and speaking to
the sixty, seventy, eighty percent who actually have a
fair amount of agreement on policy and don't want either
what the extreme right or the extreme left is offering.
So my general advice is to be pragmatic and strategic

(32:26):
and be flexible enough to form the kind of coalition
that can actually move things forward and not insist on
total moral victory.

Speaker 1 (32:37):
Excellent. Okay. So far we've been talking about
the complexities in the brain that lead to decision
making in the moral domain. But I want to turn
now to the fact that you've actually been building things
to try to steer moral decision making in a better way.
This is games, platforms, interventions. Tell us about that.

Speaker 3 (33:00):
Yeah, so thanks, this is something I'm really excited about,
and I'm really excited that you're joining the Pods Fight
Poverty program. I'll say more about that and what our
goals are, but let me say a little bit about
the science behind how this got started on my end. So,
one of the things that's most frustrating about
the trolley dilemmas, in particular one like the footbridge dilemma,
is that there is no satisfying solution. If you

(33:22):
say that it's okay to push the guy off the footbridge, yes,
you know you're saving five lives within the world of
this scenario, but it's going to feel like a horrible
act of violence, and you'd say, I wouldn't trust somebody
who would be willing to do that, or at
least feel comfortable doing that.

Speaker 2 (33:38):
Right.

Speaker 3 (33:39):
On the other hand, if you say no, you can't
push the guy off the footbridge, well, then there are
five times as many people dead as necessary, and that's
pretty bad too, right. And I think that as long
as our brains work the way they work, you can
have an answer. But it's never going to be a
completely satisfying answer. In other domains, you actually can find
a satisfying answer. And in particular, what I have in

(34:01):
mind is the domain of charitable giving. So this begins
with a kind of superpower that we have that most
of us don't.

Speaker 2 (34:10):
Realize that we have.

Speaker 3 (34:11):
For those of us who you know, at the end
of the year have an extra few hundred dollars or
a thousand dollars or even more than that, the amount
of good that we can do is enormous, but it
requires doing it strategically. When I first sort of learned
about this, you know, I thought the difference between a
really effective charity and a charity that's not very effective

(34:31):
would be something like the difference between someone who's really
tall and someone who's not so tall. So a really
tall person might be fifty percent taller than someone who's
pretty short. But in fact, the difference between the most
effective charities and ordinary charities is more like the difference
between redwood trees and little shrubs. It can be one hundred
times different, or close to a thousand times different. So
let me give you an example.

Speaker 2 (34:53):
There is a.

Speaker 3 (34:55):
Disease called trachoma that is not common in the US,
but common in other parts of the world, particularly in Africa.
And this is a disease that infects people's eyes and
can cause people to go blind. It's not as common in
the US.

Speaker 2 (35:09):
In the US, people are blind for other reasons.

Speaker 3 (35:12):
And if you want to help a blind person in
the US, one thing you could do is support the
training of a seeing eye dog.

Speaker 2 (35:17):
Training a seeing eye dog costs about.

Speaker 3 (35:19):
Fifty thousand dollars, well worth it for the effect that
it has on someone's life, but fairly expensive as.

Speaker 2 (35:29):
Something you can do to improve someone's life, whereas.

Speaker 3 (35:31):
A surgery that can prevent trachoma in a country in
Africa can cost less than one hundred dollars, which means
that you could fund over one hundred, maybe close to
one thousand trachoma surgeries, preventing hundreds of people from going
blind in the first place, for the cost of helping

(35:53):
someone who's already blind in the United States.

Speaker 2 (35:55):
Now, I'm not.

Speaker 3 (35:56):
Saying that we should just forget about people who are
blind in the United States. These people are humans.
They are part of our community. And you know, I'm
not saying to hell with them, but I think it
would be a moral mistake to ignore the enormous sort
of turbocharged good that we can do by finding the
most effective ways to help people, typically overseas, not because

(36:18):
they're far away, but because the money goes so much
farther and the problems are so much more dire and widespread.
So you know, funding surgeries for trachoma is one example.
Distributing insecticidal malaria nets: for about five thousand dollars on average,
you can save somebody's life. Basically, this is distributing a
thousand malaria nets at the cost of five dollars each,

(36:40):
incentivizing mothers to have their children vaccinated, that can save,
on average, about one life for three thousand dollars. And
then there are things that make enormous improvements in people's
quality of life, so deworming treatments. So in other parts
of the world, children are often beset by parasitic
worms that colonize their digestive tracts, very painful, and

(37:03):
makes it hard to go to school and learn. And
for less than a dollar, you can provide a deworming
treatment that will rid a child of
intestinal worms, at least for a while until they get
their next treatment. For one hundred dollars, that's one hundred
children who are in a better position to go to school.
And when those kids go to school, of course, they're
more likely to earn money later in life and have
long term positive effects. And that's just in the domain

(37:25):
of global poverty and health.
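
To put rough numbers on the redwoods-versus-shrubs point, here is a back-of-the-envelope Python sketch using the round figures quoted in this conversation; actual costs vary, so treat these as illustrative.

# Back-of-the-envelope comparison using the round figures quoted above.
guide_dog = 50_000        # training one seeing eye dog (USD)
trachoma_surgery = 100    # "less than one hundred dollars" per surgery
malaria_net = 5           # per insecticidal net
life_via_nets = 5_000     # rough cost to save one life with nets
deworming = 1             # "less than a dollar" per treatment

budget = guide_dog                 # the cost of helping one blind person in the US...
print(budget // trachoma_surgery)  # ...funds 500 blindness-preventing surgeries,
print(budget // malaria_net)       # or 10,000 malaria nets,
print(budget // life_via_nets)     # or roughly 10 lives saved,
print(budget // deworming)         # or 50,000 deworming treatments.
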
I'll mention one other charity, which is GiveDirectly. This is a charity
that is not focused on a specific intervention (and
those specific interventions often have, in randomized controlled trials,
the most bang for the buck), but something that takes a
little bit more of an expansive view. This is giving

(37:47):
people money directly. And the way this happened was,
GiveDirectly was started by economists who were studying the
efficacy of different types of health and poverty interventions. And
they said, well, we're good scientists. We need a control condition,
like what's standard of care, what's baseline? And they found
that there wasn't one. There was no sort of standard

(38:08):
thing to do. So they said, okay, well, let's just
take as our baseline what would happen if you took
the money that you could use for this program and
just gave it to people directly. And what they found
was that just giving people money directly had better outcomes
than most of the things that people were trying to do,
and so they started this organization called GiveDirectly, which was
superpowered by the advent of digital banking. So you know,

(38:32):
in places where there are no telephone poles, right, but
there are satellites overhead.

Speaker 2 (38:38):
You know, people in.

Speaker 3 (38:39):
A remote poor village in Rwanda, someone there can have
a cell phone which enables them to do digital banking,
which opens up a world of economic opportunity. So GiveDirectly
gives people money directly, and they can spend it
on immediate necessities, on food, on medicine, and then once
those basic needs are taken care of, they know what
to do. They can, you know, invest in infrastructure, fix the

(39:01):
roof on their house, or they can do things that
can enable a more long term income.

Speaker 2 (39:05):
Right.

Speaker 3 (39:05):
So if you want to start a business and you
need a little motorcycle to get around so that you
can sell your goods, you need to be able to
make that capital investment. And so you know what I
like about this, and I think a lot of people
like about GiveDirectly is, you know, it's not giving
someone a fish, and it's not teaching somebody to fish.
These people already know how to fish, so to speak.
This is giving somebody the money to buy, you know,

(39:27):
the fish for today, but also a fishing rod that
they can use, and they already know how to use,
and they just need to get over that economic hump.
So this is an incredible charity and this is one
that we're actively supporting with this program that
we're doing with podcasts. And I'll say a bit about that.
But back to the psychology, right? As I said with
the footbridge case, you know, you just have this dilemma

(39:49):
where there's no satisfying solution. When it comes to charitable giving,
there's the default thing that most people do, which is
to support charities that are personally meaningful to them.

Speaker 2 (39:59):
So you love animals, you support the local animal shelter.

Speaker 3 (40:02):
Your aunt died of breast cancer, so you support a
charity that does breast cancer research, right, and that is
a very good and noble and that's like, you know,
the best of humanity coming out there, right.

Speaker 2 (40:13):
Don't want to say that that is a bad thing.

Speaker 3 (40:15):
However, typically what people feel most connected to is not
as impactful as the kinds of things that I described,
like malaria nets and deworming treatments and trachoma surgeries,
and giving directly to people in poverty. So
the conventional sort of thing to do once you're someone
who's realized we need to be doing a lot more

(40:36):
super impactful stuff is to say to people, hey, instead
of giving to the local animal shelter or supporting the
breast cancer research, you really should do this other thing
that's more impactful. And the problem is that a lot
of people say, yeah, I get it, but this is
my aunt, or yeah, I get it, but what I
really love is animals, right, and you

(40:56):
know, they don't buy it, right. And so
my then postdoc Lucius Caviola, who's now
a professor at Cambridge in the UK, and I thought, is
there a third way here? You know, the moral equivalent
of a third way? And it doesn't exist in the
trolley problem. And we had a simple idea, which is

(41:17):
instead of telling people, instead of doing the thing you
most want to do, do this other thing that's more impactful.

Speaker 2 (41:24):
What if we just said to people, hey, why don't
you do both?

Speaker 3 (41:26):
Right?

Speaker 2 (41:27):
So we started running these experiments, and in.

Speaker 3 (41:30):
The control condition, we gave people the conventional choice, it'd said, Okay,
tell us what your favorite charity is, and you give
us the link, and then you.

Speaker 2 (41:38):
Say, and here's this super effective.

Speaker 3 (41:40):
Deworming charity, where for one hundred dollars you can
deworm one hundred kids, for ten dollars you can deworm
ten kids. We're giving you ten dollars; which do you want
to choose? And still, like eighty percent of people chose
the charity that they identified originally as their personal favorite.
Some people chose to switch to the one that the
experts focused on impact recommended, but most people didn't.

Speaker 2 (42:03):
That's the control condition.

Speaker 3 (42:05):
In the experimental treatment condition, we give those two options,
but we give another option, which is to split the difference,
or instead of doing all the water or all to
the other, you can do a fifty to fifty split
between the charity you love and the charity that the
experts are saying is super effective. And what we found
was that a little bit over half of the people
chose to do the split, right, which meant

(42:28):
that more money was actually going to the super effective
charity by giving people the option to split than if
you force them to make a stark choice.
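
As rough arithmetic on why the split option moves more money overall: the eighty percent and "a bit over half" figures are from this conversation; how the remaining participants divide in the treatment condition is a simplifying assumption in this Python sketch.

# Expected dollars (of each $10 gift) reaching the effective charity.
gift = 10.0

# Control (forced choice): ~80% keep their favorite, ~20% switch entirely.
control = 0.80 * 0.0 + 0.20 * gift           # 2.00 per participant

# Treatment: ~55% take the 50/50 split; assume (illustratively) that the
# rest behave as before, e.g. 35% all-favorite and 10% all-effective.
treatment = 0.55 * (gift / 2) + 0.10 * gift  # 3.75 per participant

print(control, treatment)  # the split option sends more money overall
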
Then we did some research to try to figure out, you know,
what's the underlying psychology here. And what we found was
a kind of trolley-esque dual process story, which is to say,
people have sort of two different urges they're trying to satisfy.

(42:50):
They want to give from the heart. They want to
give to the charity that they feel personally connected to,
but they also like the idea of doing something
super impactful. It's just not their top priority if they're
forced to choose. So what we found is that when
it comes to giving from the heart, it's not about
how much you give. If you give fifty dollars instead

(43:13):
of one hundred dollars to the local animal shelter, that
feels more or less the same. And then
that means if you could give fifty there, then you
could have this other fifty left over to do something
that's super duper effective, and that scratches a different itch, right,
and the overall feeling of satisfaction of doing something, as
we say, you know, smart and from the heart at

(43:35):
the same time, people really like
that combo. So then we thought, okay, so that's cool.
We've got that result, we understand why people do that.
But then we thought, okay, we could publish a paper
saying hey, everybody, you should split your donations like this,
and then it would just die in this journal and
no one would read it, or just a few researchers would.
So we thought, okay, we need some way to get
this out there. Well, what if, you know,

(43:56):
we incentivized people, said, well, we'll add money on top
if you do these split donations, and as you'd expect,
people like it even better if we're willing to add money.
In fact, we found it was like a seventy percent
boost if we said that we would add money
on top. So that's great. But then, of course the
question is where does that money come from? And then
what we said was, well, what if we asked people
who were agreed to split between a personal favorite charity

(44:20):
and this charity that they just learned about, like the
deworming charity, said, what if instead of giving the the
deworming charity, you put that fifty percent in a fund
that will add money on top for the next people,
so kind of pay it forward program to keep this going.
And we found that not everybody but enough people were
willing to do that such that the money they would
put into that fund more than enough to cover the

(44:43):
matching donations for the people who said, no, I'll just
take the matching funds. So we thought, my gosh, this
could be like a self-sustaining virtuous circle where you
have some people who put money into the matching fund
and some people who are incentivized by the money in
the matching fund, and.

Speaker 2 (45:00):
Thing just works.
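
A toy model of that virtuous circle, with parameter values that are illustrative assumptions rather than figures from the study: the fund sustains itself as long as what pay-it-forward donors put in covers the matches that other donors take out.

# Toy model of the pay-it-forward matching fund; all rates are assumed.
def fund_balance_change(n_donors, effective_share=50.0,
                        pay_in_rate=0.4, match_rate=0.5, take_match_rate=0.6):
    inflow = n_donors * pay_in_rate * effective_share                    # paid forward
    outflow = n_donors * take_match_rate * match_rate * effective_share  # matches claimed
    return inflow - outflow

# With these rates, the 40% who pay in cover a 50% match claimed by 60%:
print(fund_balance_change(1000))  # +5000.0, so the circle sustains itself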

Speaker 3 (45:01):
So Lucius and his techie friends, most notably the
amazing web designer Fabio Kun, created Giving Multiplier, which
is our web platform which does this.
And if you go to Giving Multiplier,
what you'd see is, you know, this description of how
it works.

Speaker 2 (45:20):
This is GivingMultiplier dot org.

Speaker 3 (45:24):
You'll see a place where you can find your
favorite charity. So any charity registered as a five oh
one C three in the US, you enter that. And
the second thing is a list of the super effective
charities that we support. So I've named a bunch of
them already, so GiveDirectly and
the Against Malaria Foundation and the Malaria Consortium, and then

(45:44):
other ones related to climate change and animal welfare, but
ones that have a super outsized impact.
You pick one of those, and then we have this
cool slider thing where you decide how you want to
allocate your money between the two charities, and the more
you allocate to the super effective charities from our list,

(46:05):
the more money we add on top. But you could
still give a majority to your personal favorite charity and
we'll still add something on top to both.
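
The slider mechanics can be sketched like so; the match schedule here, scaling up to fifty percent as more goes to the effective charity, is a hypothetical stand-in, since the conversation only says that a larger allocation earns a larger top-up.

# Hypothetical sketch of the allocation slider; the match schedule is assumed.
def allocate(total, effective_fraction, max_match_rate=0.5):
    favorite = total * (1 - effective_fraction)
    effective = total * effective_fraction
    # The added match grows with how much goes to the effective charity.
    match = total * max_match_rate * effective_fraction
    return favorite, effective, match

print(allocate(100, 0.5))   # (50.0, 50.0, 25.0): a 50/50 split earns $25 on top
print(allocate(100, 0.75))  # (25.0, 75.0, 37.5): a bigger split, a bigger match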

Speaker 1 (46:13):
Let me get one thing straight, which is, the money
that I put in, say I put one hundred dollars into
the matching fund, that's going to charities. I just don't
know which ones.

Speaker 3 (46:20):
That's right, it's charities that will be chosen by
other people, future people.

Speaker 1 (46:25):
Excellent, Okay, got it. So tell us about Pods Fight Poverty.

Speaker 3 (46:29):
So we now have over thirty podcasts who have signed
on, and our goal is to raise a million dollars.
We're aiming for three villages in the Bikar
region of Northern Rwanda where.

Speaker 2 (46:45):
People have very little. You know, people are very poor.

Speaker 3 (46:49):
And are kind of stuck in an economic rut
where they don't have the resources they need to
get out of it. And our goal is to lift
seven hundred families out of poverty, giving a little over
one thousand dollars to each family, which can be life
changing for a family.

Speaker 2 (47:05):
There.

Speaker 3 (47:06):
Listeners are encouraged to go to GiveDirectly dot
org slash cosmos, and Giving Multiplier
is adding a fifty percent match while our supplies last.
So we're committed to putting up a half a
million dollars for this campaign, so anything your listeners give

(47:28):
Giving Multiplier will be matching at fifty percent. And the
results with GiveDirectly are amazing. I mean, there
have been twenty five randomized controlled trials, so gold standard
experiments with GiveDirectly specifically, and they find
that these donations cut infant mortality rates in half.
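
For scale, quick arithmetic on the campaign as described, using the figures quoted in this conversation:

# Campaign arithmetic from the figures quoted in the episode.
goal = 1_000_000          # fundraising goal (USD)
per_family = 1_000        # "a little over one thousand dollars" per family
families_target = 700     # stated target across the three villages

print(goal // per_family)            # 1,000 grants at exactly $1,000 each
print(families_target * per_family)  # $700,000 as a lower bound on transfers

# A listener's $100 becomes $150 with the fifty percent match while it lasts.
print(100 * 1.5)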

Speaker 2 (47:48):
And not only.

Speaker 3 (47:49):
Does it help the people who get the money, it
boosts the local economy by a factor of two point five.

Speaker 2 (47:55):
Right, So this is getting back to this.

Speaker 3 (47:57):
You know, a lot of people worry about anti-poverty
mechanisms, especially in poor countries. You say, yeah, this
is just pouring money down a hole, where there's some
temporary relief, but it doesn't really go anywhere. I don't
want to undersell temporary relief if you're starving or
if your child is you know, in danger of dying
of malaria or whatever.

Speaker 2 (48:15):
It is.

Speaker 3 (48:16):
Temporary relief matters, right, but you also want to think
about the long term. And part of what's great about
GiveDirectly is that it gives people the power, the agency,
the flexibility to take the money that they're
getting, that they can use beyond, you know, immediate survival,
to build things that can help them survive. And
we see this in the growth of these local economies

(48:38):
as a result of this. So that's what we're trying
to do. And thank you, David, for being part
of this.

Speaker 1 (48:45):
And are you or someone else doing follow-up studies
to see what happens with these villages over the next
five, ten years?

Speaker 2 (48:52):
Yeah?

Speaker 3 (48:52):
Absolutely. So GiveDirectly, you know, they track the
effects of every campaign they run, all of the
money that they spend.

Speaker 1 (49:14):
Okay, so you know, by the way, Josh, you and
I have known each other for our whole careers in neuroscience,
and it's so wonderful to see you taking all the
stuff you know about decision making and morality and building
moral technologies. So the last thing I wanted to ask
you about is you had a Nature paper earlier this
year about a game that people could play

(49:37):
to bridge political divides. Tell us about Tango.

Speaker 3 (49:40):
Yeah, so let me say a little bit about how
I think all of this stuff fits together.

Speaker 2 (49:44):
Right.

Speaker 3 (49:45):
You know, I was at a conference recently and
I had to have like my little name tag, and
at this particular conference, you had to put like a
one line thing on, like what's your deal?

Speaker 2 (49:54):
Like what are you about?

Speaker 3 (49:55):
And I never had to do that before. And I
thought back to, you know, one of my heroes, Peter Singer,
the philosopher, and his notion of the expanding circle, the idea
that over time humans have gone from, you know, not
just caring about their family and their immediate relationships, but
you know, the circle of moral concern has grown from
the village to the tribe, to the chiefdom to the

(50:18):
nation and beyond nations. And what I sort
of put on that little name tag, which I now
think of as my tagline, is expanding the circle
of altruism and cooperation, as we're
trying to do. And with the trolley stuff, you know,
in this weird way, it wasn't weird to me, but
it's sort of maybe not obvious.

Speaker 2 (50:39):
That was the goal there as well.

Speaker 3 (50:41):
I thought the way to move forward is we need
a better moral philosophy. And I'm a consequentialist utilitarian,
although I don't like the U word. I prefer to
call myself a deep pragmatist. But there are these objections
to that kind of view, like is it okay to
kill one person to save five people, you know, in
the footbridge case?

Speaker 2 (51:00):
And isn't that wrong?

Speaker 3 (51:00):
And I wanted to understand the psychology so that I
could say, look, this philosophy makes sense, but we have
these over generalizations of certain moral instincts that block us
from there. So it was a kind of philosophical approach
to expanding the circle. As I've gotten older, I thought,
you know, instead of like kind of trying to fly
up into the clouds, do some philosophy and then come

(51:20):
back down to earth, I'm just going to drive along
the ground, take, you know, what we think we know
about human nature and pack that up and see
what we can do with it. So Giving Multiplier, I see,
is about expanding the circle largely from nation to world,
in that we are supporting charities where we in the affluent

(51:42):
West primarily can do an enormous amount of good for
other human beings who happen to not be our co-nationals, right,
and then also from the human world to beyond our species.
One of the charities we support is
the Humane League, which, you know, there are billions of

(52:04):
animals that suffer in factory farms every year, billions. Like,
you know, it's hard to get your head around this,
right? If aliens were visiting Earth and saying, like, what's
the greatest moral tragedy here? Depending on what you believe
about animal consciousness, you might think that it's actually factory farming.
That's a whole other story. But Giving Multiplier supports charities

(52:24):
that are looking to end torturous, miserable factory farming, either
through policy or through developing meat alternatives. And so that's
going from nation to world, and from the human world to other
species. Tango goes back to our earlier
discussion about the tragedy of the commons and the tragedy
of common sense morality and going from a tribal us

(52:48):
to a larger, multi-tribal us. So, you know,
when I finished and published my book Moral Tribes,
which you mentioned, you know, I was happy with the
book in a lot of ways, but I also felt
like it was kind of an unfulfilled promise in some ways.
I mean, you look at a book like that, the
title's Moral Tribes, colon, Emotion, Reason, and the Gap between
Us and Them, you might think that that book was

(53:09):
going to give you like practical tools to solve tribalism.

Speaker 2 (53:14):
And I think it falls short in that way.

Speaker 3 (53:17):
It really gives you sort of a kind of guiding
general philosophy and some psychological self-knowledge that could
help you get to that philosophy, but it's not really
immediately applicable tools. And so after that kind of era,
I said, all right, I want to try to try
to fulfill that promise. So I thought, okay, well, what

(53:38):
does it take to solve tribalism? What does it take
to bring groups with distinct identities, and with some animosity
towards each other, together? And I thought, okay, well, I'm smart.
Maybe I'll have some big new theory about how to
do this. And I looked at the existing research, and
what I concluded was that actually, we've got old

Speaker 2 (53:59):
ideas that are pretty good. Like, really good, right?

Speaker 3 (54:02):
So on the biological front, everything points to the idea
that mutually beneficial cooperation is the key. Mutually beneficial cooperation
is the story of life, starting with the primordial soup:
basically, molecules come together to form cells because cells can
survive and reproduce in ways that lone molecules can't.

(54:23):
And cells form more complicated eukaryotic cells, which form colonies,
which form organisms, which form societies.

Speaker 2 (54:30):
And all the way up to tribes and nations, and
occasionally a United Nations.

Speaker 3 (54:35):
Every living system is built and sustained on mutually beneficial cooperation.
It's parts coming together for teamwork because they can accomplish
things that they can't accomplish on their own.

Speaker 2 (54:48):
But there's competition at every level, right.

Speaker 3 (54:50):
Organisms are competing to survive, societies are competing with each
other for

Speaker 2 (54:55):
resources. And so the

Speaker 3 (54:58):
Challenge is can we cooperate at the highest level, right,
whether that's tribes within a country or countries in the world, right,
And that may not come so naturally, right, so we
need tools for that. That's the biological perspective. On the
social science side, it's much the same story. You go
back to ideas from the fifties, like Gordon Allport's famous

(55:19):
contact theory. Basically, what he argued is that the way
you bring groups that are in tension together is, well,
you need to have them come into some kind of
contact, and it has to be under the right kinds
of conditions: essentially, conditions that are conducive to cooperation.

Speaker 2 (55:33):
And you know, Allport wrote

Speaker 3 (55:36):
this all out in the fifties, but really this is
very intuitive. I mean, people have surely recognized for centuries
that if you put people on the same team, they're
more likely to get along, right, if there's really a
shared purpose there.

Speaker 2 (55:46):
Right. So if

Speaker 3 (55:49):
we've known this for decades, if not centuries, then why
have we not solved our human tribal problem?

Speaker 2 (55:56):
Right?

Speaker 3 (55:57):
And I think the answer is twofold. The optimistic part
is, to some extent, we already have. I mean, as
my colleague Steven Pinker has documented, with a lot of
resistance from people who don't like

Speaker 2 (56:08):
this conclusion, but it's very well supported, is that

Speaker 3 (56:11):
humans have, over millennia and centuries and decades, become overall
more peaceful and less violent, right?

Speaker 2 (56:18):
And although there's been a

Speaker 3 (56:21):
reversal in recent years, unfortunately, it's nothing close to the
level of the overall arc of our history.
And really every modern city is a testament to the
idea that people with different cultures, different races, different ethnicities
and religions can view each other more as fellow citizens
and cooperation partners than as enemies to be distrusted.

Speaker 2 (56:44):
Right. So these

Speaker 3 (56:46):
can grow organically, right? And we see this in other
contexts where there's an immediate need. During World War Two,
we needed soldiers in the United States, and there was
a push to have racially integrated units in the military.
But some people thought this would never work: you could
never have white people and what were then called Negroes
fighting alongside each other against a common enemy. And

(57:08):
some people said, well, we've got to try, and we
think this could work. And it worked beautifully, right? The
US military was far ahead of the rest of civilian
life in the US in terms of racial integration. Likewise
in sports: when there's a war to be won or
a game to be won, you want the best players
playing together and working together. And that's a kind of
circumscribed context where that can work. So

(57:29):
part of the answer is that, when circumstances demand it,
you can get cooperation across tense lines of division. The
challenge is, how do you engineer it where the ball
doesn't seem to be rolling in the right direction, and
where we have these divisions and these levels of

(57:50):
distrust and disrespect, how do we engineer that deliberately? So
my thinking was: we need a way to get people
from opposite sides of whatever divide onto the same team.
This can be Republicans and Democrats in the US, or
Jews and Arabs slash Palestinians in Israel and Gaza, or
Hindus and Muslims in India. You've got to get

(58:12):
people on the same team. Okay, how do you do
that? Well, you need to put people on the same
team, and it needs to be scalable and it needs
to be fun. My lab's answer to that was a
cooperative quiz game, which was originally developed with my amazing
grad student Evan DeFilippis, and now this work is being led by

(58:32):
the amazing Lucas Woodley.

Speaker 2 (58:35):
So this game is now called Tango.

Speaker 3 (58:38):
And the way it works is, you sign on to
the game and you answer a few questions about yourself.
In the typical version of this, in the experiments I've
done, you might say I'm a Republican or I'm a
Democrat, or I'm a liberal or I'm a conservative. You
also answer kind of fun get-to-know-you questions: you know, what's
the superpower that you'd most like to have, and things
like that. And you

(59:00):
answer those questions, then you get paired up with your
partner and you have a little get-to-know-you chat, and you
say, oh, I see, we both would like to have
the power of flight or invisibility or whatever it is.
And you can see what the person's politics is. And
then you get into the game. And in the most
interesting case, you're playing the game with someone who is,
let's say in the US, politically

(59:21):
different from you: they're a Republican or a Democrat, and
you're on the opposite side, right? And the game starts
out with questions that are designed to have a kind
of complementarity, but not anything that's likely to be divisive
or controversial. So, for example, Republicans are more likely to
be able to answer questions about the show Duck Dynasty,
and this is not stereotypes; this is validated

(59:43):
with data, whereas Democrats are more likely to know about
Stranger Things or The Queen's Gambit. Right? So you have
questions where one side's likely to be able to help
the other side and vice versa, and that gets people
into the game. Yeah, you're a Democrat, but that's okay.
Yeah, you're a Republican, but that's okay. We're winning, we're
high-fiving, we're playing together. Everything's great. Then you get

(01:00:04):
questions that are about more divisive issues but still grounded
in facts. So you ask: what percentage of gun deaths
in the US involve assault-style weapons? If you ask liberals,
they're likely to say, I don't know, thirty percent, fifty
percent. If you ask people who are conservative, they're more
likely to say no, assault weapons, which

(01:00:25):
they wouldn't even call assault weapons, are like two percent;
it's mostly handguns. Right? So that's a case where the
conservative Republicans are more likely to be correct. But then
if you ask a question about who commits more crimes
per capita, immigrants or native-born Americans, Republicans are more likely
to think that

(01:00:48):
immigrant crime is sky high, when in fact, immigrants commit
relatively few crimes.

Speaker 2 (01:00:53):
That's the case where the liberal is more likely to
be right.
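
To make those mechanics concrete, here is a minimal sketch of the pairing and question-ordering logic as Greene describes it: pair players across the divide, open with light complementary questions, and hold the divisive fact-based questions until the team is already working well. This is purely illustrative; Tango's actual code is not public, and every class and function name below is hypothetical.

```python
# Illustrative sketch only: Tango's real implementation is not public, and
# all names here (Player, Question, pair_across_divide, order_questions) are
# hypothetical. This just encodes the mechanics described in the interview.
import random
from dataclasses import dataclass


@dataclass
class Player:
    name: str
    party: str  # "R" or "D" in the U.S. version


@dataclass
class Question:
    text: str
    favors: str     # which side is more likely to know the answer: "R" or "D"
    divisive: bool  # fact-based but politically charged?


def pair_across_divide(players):
    """Randomly pair each Republican with a Democrat."""
    reps = [p for p in players if p.party == "R"]
    dems = [p for p in players if p.party == "D"]
    random.shuffle(reps)
    random.shuffle(dems)
    return list(zip(reps, dems))  # zip drops anyone who can't be matched


def order_questions(questions):
    """Light complementary questions first, divisive ones later, alternating
    which side's knowledge is favored so both partners get to help."""
    def interleave(qs):
        r = [q for q in qs if q.favors == "R"]
        d = [q for q in qs if q.favors == "D"]
        out = []
        for pair in zip(r, d):
            out.extend(pair)
        return out

    light = interleave([q for q in questions if not q.divisive])
    heavy = interleave([q for q in questions if q.divisive])
    return light + heavy
```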

Speaker 3 (01:00:55):
But at this point in the game, you might think,
oh man, talking about immigration or guns, it would be
a disaster. But by then you're into the game: you're
playing with your teammate, you've worked well together, you've already
gotten to know each other as respectful, decent people. And
people just play the game and try to win those
points, and everyone gets to be surprised and everyone gets to

(01:01:15):
be right, and you have this cooperative experience. And at
the end, people say these teary goodbyes: man, it was
so much fun playing with you, I hope we can
meet in real life, and things like that. And it's
great. What we want to know is, what is the
effect of having this cooperative experience? So we've now done
a series of randomized controlled trials online with Republicans and
Democrats, and what we found exceeded our expectations.

(01:01:38):
We find that playing Tango with someone who's politically different
from you for less than an hour has positive effects
that last at least four months. So we ask people:
how warm or cold do you feel towards the other
party on a scale of zero to one hundred? How
would you divide one hundred dollars between a random Republican
and a random Democrat? Do you respect Republicans or Democrats?

(01:02:01):
Do you trust Republicans or Democrats? And what we found,
to take the best-known measure, which is that feeling thermometer,
is that people play the game, and immediately after the
game we see, on average, a nine-point increase in warmth
towards the other party, where you can think of

(01:02:22):
it as a decrease in coldness. And you say, well,
nine points, what does that mean? That's the equivalent of
rolling back about fifteen years of increased polarization in the
United States. Now, that's the immediate effect. Of course, it's
not magic; it doesn't last in full. But when we
go back to people four months later,

(01:02:44):
we still see an effect that's the equivalent of about
five years of depolarization. And the cool thing about the
game is that you can play it more than once.
It's like Jeopardy: you could play every night, right? And
we also find that people really like it. Our median
enjoyment rating was ten out of ten. Now, these are
research participants who were not expecting to

(01:03:06):
have a lot of fun, so, you know, it's striking
that they really enjoyed it.

Speaker 2 (01:03:11):
But we

Speaker 3 (01:03:14):
see these long-lasting positive effects with this scalable tool, in
a way that people really enjoy. So those are the
main points of the paper that we published in Nature
Human Behaviour this summer.
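
A back-of-the-envelope reading of those equivalences: if warmth towards the other party has been drifting downward at a roughly constant rate, then "nine points equals about fifteen years" implies a drift of about 0.6 points per year, which would make the four-month effect of "about five years" correspond to roughly a three-point lasting gain. A minimal sketch under that constant-drift assumption (the rate is derived from the interview, not taken from the published paper):

```python
# Back-of-the-envelope conversion between feeling-thermometer points and
# "years of polarization rolled back." Assumes a constant drift rate implied
# by the interview's "nine points ~ fifteen years"; the paper itself may
# compute the equivalence differently.
DRIFT_POINTS_PER_YEAR = 9 / 15  # 0.6 points of warmth lost per year (assumed)


def warmth_gain_in_years(points: float) -> float:
    """Express a warmth gain on the 0-100 thermometer as years rolled back."""
    return points / DRIFT_POINTS_PER_YEAR


print(warmth_gain_in_years(9.0))  # immediate effect: 15.0 years
print(warmth_gain_in_years(3.0))  # implied four-month effect: 5.0 years
```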

Speaker 2 (01:03:26):
What we've been doing in the last year or two
is working on getting this out into the world.

Speaker 3 (01:03:31):
We are building an online tool where people can play,
but that's not there yet, and I can talk about
what we're doing and what we need for that. But
our most immediate traction has been in higher education. And
you know, historically, when it comes to psychology research, testing
college students is what you do on the cheap, just
a convenience sample. But now,

(01:03:51):
as you know, especially at a place like Harvard, higher
ed is sort of ground zero for a lot of
our cultural divisions, right? So we have been working with
schools to deploy this, not with research participants, but with
students living their lives.

Speaker 2 (01:04:06):
And our biggest events so

Speaker 3 (01:04:09):
Far have been at orientation for Harvard and Cornell and
Penn State, at Harvard this year, we did the entire
incoming class of twenty twenty nine, so over a thousand students,
and we're using some different questions. It's not so much
about your political party as like more like are you

(01:04:29):
politically liberal or conservative? And we're also asking questions about
things that are more divisive on a campus like Harvard,
so Israel and Gaza sorts of things, in addition to
things like guns and immigration. And what we found is
that playing the game for about twenty minutes had significant
positive effects on acknowledgement that the

(01:04:54):
other side can make valid points, interest in getting to
know people who are different from you, and comfort voicing
controversial opinions on campus. My favorite pair of results is
from Harvard, because I think this speaks to a challenge
that we are really working to address. When liberal students
at Harvard orientation in August played with a conservative student, they

(01:05:17):
were seven points less negative towards conservatives, so that's also
a pretty big effect. And then when conservatives played with
a liberal student, they felt five points more open towards
expressing controversial views on campus. So

Speaker 2 (01:05:32):
this is opening people up.

Speaker 3 (01:05:34):
We also allowed people to take two behavioral steps. So
we asked people at the end, we said: hey, you
played with your partner anonymously; do you want to meet
your partner?

Speaker 2 (01:05:43):
If so, leave your contact info.

Speaker 3 (01:05:45):
And we found that eighty percent of students gave their
contact info. So you've got people shifting attitudes, making campuses
more open, more hospitable to both liberals and conservatives, and
people taking steps in the real world. And then the
most fun part of all of this is that the
winning teams went to Fenway Park for a Red

(01:06:05):
Sox game, and so people went with their partners, whether
they were politically similar or different.

Speaker 2 (01:06:12):
So we are, you know, rolling this out.

Speaker 3 (01:06:15):
And I think of this as kind of the opposite
of divisive online content. I mean, you think of what
internet trolls and operatives have managed to do by spreading
ill will and distrust at scale, using very engaging content.
Fact-checking can't fight that. The opposite of that is not,
you know, an earnest

(01:06:37):
fact check, although we need to do that too. It's
something that competes with it in a positive way, something
that people find engaging, that millions of people can do,
and that spreads respect and trust at scale.

Speaker 2 (01:06:49):
Not that everybody has to agree.

Speaker 3 (01:06:51):
We're not trying to change people's minds about issues, but
to get to the point where people can disagree respectfully
and constructively. So our goal over the next year is
to get this out there and have thousands, and if
we can, millions of people have the positive experience that
people on college campuses and in our experiments have already had.

Speaker 1 (01:07:10):
Good for you. Were you surprised that this works anonymously?
Because the contact hypothesis suggests that you need to get
people together in person to have a conversation.

Speaker 2 (01:07:20):
Yeah, that's a very astute point.

Speaker 3 (01:07:23):
That was one of the things we weren't sure about,
right? One thing we wondered is, do we have to
do this on Zoom, where people give their names and
you can see their faces? And one of the cool
things about this was that you didn't have to have
that conventional name-and-a-face contact to get this to work and
to have the positive experience generalize to other people. It's
possible that things will

(01:07:44):
work even better if we have that kind of more
direct, face-to-face-with-a-name contact. It's also possible that there's something
stealthily effective about having it be anonymous, because it makes
people feel safer, right? You can kind of ease into
it: they don't know who I am, they

(01:08:06):
don't know my name, they don't

Speaker 2 (01:08:07):
know what I look like.

Speaker 3 (01:08:08):
No one's taking a screenshot of me and putting it
online and saying, who is this person? Right? So there's
a kind of safety that comes with the anonymity, and
then people can move from the anonymous context to in-person,
like the students did at Harvard in the dining hall.
Or, when we're building this out online, we might give
people the option, after they've already

(01:08:29):
had a friendly anonymous interaction, to say: okay, here's my
social media handle for this platform or whatever we're both
on, and people can get to know each other online,
in something that's more real life than an anonymous game.
So these are great questions that we want to experiment

(01:08:49):
with when we have the chance.

Speaker 1 (01:08:51):
Great, one last question. If everyone on Earth understood moral
psychology as you do, what is one belief or habit
that you might hope would change, individually and institutionally?

Speaker 3 (01:09:04):
I think that at the psychological level, at the level
of managing one's own mind, people would have a little
distance between their first thought, which may

Speaker 2 (01:09:16):
or may not be right, and what they actually act on.

Speaker 3 (01:09:20):
And you know, there's a great bumper sticker: don't believe
everything you think, which I think beautifully captures this logic,
that people need to recognize that what you feel or
your first thought is not necessarily the right thing to
do. And then I
think we need to have a kind of openness where

(01:09:41):
we recognize that whatever our differences, we have so much
in common.

Speaker 2 (01:09:45):
We all, you know, we all want to be happy.

Speaker 3 (01:09:49):
We don't want to suffer. We care about our families,
we care about our friends. Almost all of us want
to live in a world that's peaceful rather than violent.
We want to live in that larger-us cooperative world, at
least one where we're not harming each other, right? But the

(01:10:10):
terms of that cooperation are what's challenging. And what I
would hope to see is that, when people can let
go of the grip of their prejudices and first judgments,
they would be willing to take a step and act
on their curiosity and be able to say: okay, can
I get to know people who are different, not making
any promises that I'm going to agree with them or
want to be their best friend or go into

(01:10:31):
business together, but at least try to understand? And I
think if we took those first two steps, liberating ourselves
from our intuitions, and then liberating ourselves from our isolation,
from only learning about other people through what bad actors
on social media have to say, and instead encountering people more directly,

(01:10:55):
those things would make us see the humanity in each
other, and I think we would be able to solve
our biggest problems.

Speaker 1 (01:11:10):
That was my interview with Josh Greene. I still have
a bunch to say, but I want to remind you:
if you're able, go to GiveDirectly dot org slash cosmos.
I have that link in the show notes as well.
GiveDirectly dot org slash cosmos. Donate whatever you can.
All the money goes directly to people in deep need
in Rwanda. No contribution is too small. So let's come

(01:11:33):
back to this idea of Josh's that our moral brains
are beautifully designed for a world that no longer exists.
They're exquisitely tuned for small circles, for family, for friends,
for the people whose faces we can see and whose
pain we can imagine. Our brains give us loyalty and

(01:11:53):
indignation and gratitude and guilt. This is why we'll run
into the street to pull a stranger out of traffic,
and also why we can feel more outrage about one
vivid case than about a million small tragedies. But the
challenges that define our modern life, from pandemics to global poverty,

(01:12:13):
to AI alignment to the future of democracies, these aren't
small circle problems. In many cases, they are statistical, they
are long-term, they're geographically scattered. No one is knocking
on our door asking for help. There's no crying baby,
there's no burning building across the street. And so our

(01:12:34):
moral camera, as Joshua might put it, is just aiming
at the wrong scale. This is a design mismatch: we're
running stone-age moral software on a planetary-scale system. That
leaves us with two tasks. At the individual level, the
task is to become more bilingual in our own minds,

(01:12:55):
to recognize our emotional reactions, to recognize what they're good at,
but also to notice when they're steering us wrong; to
be willing, when the stakes are big and abstract, to
flip into manual mode, to ask: okay, what actually helps
the most people? What am I missing because it doesn't

(01:13:16):
feel emotionally salient? At the societal level, the task is
to build better scaffolding around these imperfect brains. That means
institutions and norms and technologies, and even little bits of
choice architecture like the kinds of moral technologies that Josh
is working on, building things that nudge our caring in

(01:13:39):
directions that our emotions wouldn't find on their own. We
need systems to help our local, tribal instincts add up
to something globally wise. That doesn't mean turning ourselves into
cold calculators. Our emotions are part of what we're ultimately
trying to protect: love and loyalty and solidarity. These are

(01:14:02):
the best parts of being human. So the idea is
to use reason and evidence not to erase those feelings,
but to aim them. So the question to carry forward
from today's episode is how can each of us expand
the circle of the people that we care for and
feel responsible for, without losing the warmth of the

Speaker 2 (01:14:24):
small circle we evolved for.

Speaker 1 (01:14:26):
That's a simple question that sits at the intersection of
our psychology, our politics, and fundamentally the future of our species.
Once again, if you can, go to GiveDirectly dot org
slash cosmos. This is one way to turn your moral
intuitions into action and take everything we're learning about the

(01:14:47):
brain and use it to optimize how we go about
trying to heal the world. Go to eagleman dot com
slash podcasts for more information and to find further reading. Join
the weekly discussions on my substack and check out and
subscribe to Inner Cosmos on YouTube for videos of each

(01:15:10):
episode and to leave comments. Until next time, may your
moral instincts be kind and generous. I'm David Eagleman, and
this is Inner Cosmos.