Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
How is your brain like Abraham Lincoln's political cabinet? And
why is it easier to do a drone strike on
an enemy than to stab them with a bayonet? And
what does any of this have to do with Mel
Gibson or the Twilight Zone or mister Spock from Star Trek?
Welcome to Inner Cosmos with me David Eagleman. I'm a
(00:29):
neuroscientist and an author at Stanford University, and in these
next three episodes, we're going to sail deeply into our
three pound universe to understand why and how our lives
look the way they do. So today we're going to
(00:51):
discover that you're not one thing with a single drive,
but instead your brain is a team of rivals. It's
a machine that's built of conflicting parts. This is going
to allow us to understand how we make decisions, which
is what we'll focus on in this episode, and in
the next episode we'll talk about economic decisions in particular,
(01:13):
like how do you choose which ice cream to buy
or which car to buy? And finally, we'll talk about
how we can leverage this sort of knowledge to optimally
navigate our behavior, to make our behavior consistent with our
long term thinking rather than the temptations that sit right
in front of us. So for now, let's begin with
(01:35):
human behavior and why we are all so complicated. So
I'm going to start today with a story that I
told in my book Incognito, and the story is about
the two thousand and six arrest of the actor Mel Gibson.
So he was pulled over for speeding. He was going
almost twice the posted speed limit on the Pacific Coast
(01:55):
Highway in Malibu, and the police officer, James Mee, gave him a breathalyzer test, which showed that Gibson's blood alcohol level was at zero point one two percent, which is very high, well over the legal limit of point zero eight. And there was also an open
bottle of tequila on the seat next to Gibson. So
the officer places Gibson under arrest and asks him to
(02:19):
get into the squad car, and Gibson goes nuts, and
he says Jews are responsible for all the wars in
the world, and he asks the officer are you a
Jew? And the officer, James Mee, was indeed Jewish. So
Gibson refuses to get in the squad car and he
has to be handcuffed. So within a day the website
(02:41):
TMZ leaks a video of this interaction, and there's this
vigorous response from the media and everything gets very heated.
So eventually Gibson writes a note of apology, and it
reads quote, after drinking alcohol on Thursday night, I did
a number of things that were very wrong and for
which I am ashamed. I acted like a person completely
(03:03):
out of control when I was arrested and said things
that I do not believe to be true and which
are despicable. So he goes on to say he's ashamed
and he disgraced himself, and that he's been battling alcoholism,
and he apologizes for his quote unbecoming behavior. But this
didn't really land that well because there was no reference
(03:24):
at all to the anti Semitic slurs. So Gibson then
writes a longer note of apology directed toward the Jewish community,
and he says, quote, there is no excuse, nor should
there be any tolerance for anyone who thinks or expresses
any kind of antisemitic remark. I want to apologize, specifically
to everyone in the Jewish community for the vitriolic and
(03:46):
harmful words that I said to a law enforcement officer
the night I was arrested on a DUI charge. Every
human being is God's child, and if I wish to
honor my God, I have to honor his children. But please know from my heart that I am not an
anti Semite. I'm not a bigot. Hatred of any kind
goes against my faith. So Gibson offered to meet one
(04:09):
on one with leaders of the Jewish community to quote
discern the appropriate path for healing. So he seemed genuinely apologetic,
and Jewish leaders accepted his apology. But here's the question.
Are Mel Gibson's true colors those of an anti Semite
or are his true colors those he showed afterwards in
(04:32):
his eloquent and apparently heartfelt apologies. Well this was a
question that got a lot of people arguing. So one
journalist wrote in the Washington Post an article that he
titled Mel Gibson it wasn't just the tequila talking, and
he wrote, quote, well, I'm sorry about his relapse, but
I just don't buy the idea that a little tequila,
(04:55):
or even a lot of tequila, can somehow turn an
unbiased person into a raging anti semite, or a racist,
or a homophobe, or a bigot of any kind for
that matter. Alcohol removes inhibitions, allowing all kinds of opinions
to escape uncensored. But you can't blame alcohol for forming
and nurturing those opinions in the first place. Then, the
(05:18):
producer of the TV show Scarborough Country drank alcohol on the show until he raised his blood alcohol level to point one two percent, which was Gibson's level on that night,
and he reported quote, not feeling antisemitic after drinking. So,
like a lot of people, the reporter and the producer
(05:39):
suspected that the alcohol had loosened Gibson's inhibitions and revealed
his true self, and the nature of their suspicion has
a long history. An ancient Greek poet had coined a
popular phrase which translates to in wine, there is the truth,
and this was repeated by the Romans as in vino veritas.
(06:03):
A passage in the Babylonian Talmud makes the same point
quote in came wine out went a secret. The Roman
historian Tacitus claimed that the Germanic people always drank alcohol
while holding councils to prevent anyone from lying. Okay, But
going back to Mel Gibson, not everyone agreed with the
(06:26):
hypothesis that alcohol revealed who he really was. A writer
in The National Review argued, quote, the guy was drunk.
For Heaven's sake. We all say and do dumb things
when we're drunk. If I were to be judged on
my drunken escapades and follies, I should be utterly excluded
from polite society. And so would you, unless you're some
(06:48):
kind of saint. The Jewish conservative activist David Horowitz commented
on Fox News, quote, people deserve compassion when they're in
this kind of trouble. I think it would be very
ungracious for people to deny it to him. There is
an addiction psychologist named Alan Marlatt who wrote in USA
Today alcohol is not a truth serum. It may or
(07:10):
may not indicate his true feelings, and Gibson's social circle
publicly vouched for him. Earlier in the day before the arrest,
Gibson had spent time at the house of his friend
Dean Devlin, who was quoted saying, if Mel is an
anti Semite, he spends a lot of time with us,
which makes no sense. Devlin and his wife are both Jewish.
(07:34):
For Devlin, that was proof enough against the arguments of
anti Semitism. So which are Gibson's true colors? Those in
which he shouts anti Semitic comments or those in which
he feels remorse and shame and publicly says I am
reaching out to the Jewish community for its help. Now,
many people prefer a view of human nature that includes
(07:57):
a true side and a false side. In other words, we think
that people have a genuine aim and the rest is
decoration or evasion or cover up. Now that's intuitive, but
it's incomplete. A study of the brain necessitates a more
nuanced view of human nature. And as we're going to
see in this episode, we are made of many drives
(08:20):
under the hood, many different networks of neurons that sometimes
have their own opinions. As the poet Walt Whitman put it,
I am large, I contain multitudes. So Gibson's detractors are
going to continue to insist that he is truly an
anti Semite, and his defenders will continue to insist that
(08:41):
he is not. But both may be defending an incomplete story.
So let's begin in the nineteen sixties, when the pioneers
of artificial intelligence were struggling to build simple robotic programs
that could manipulate small blocks of wood. So the idea
was to identify the blocks and then to grip them,
(09:04):
and then to stack them up in simple patterns. And
this was one of those apparently simple problems that turns
out to be incredibly hard, because for a robot, finding
a block of wood requires figuring out which camera pixels
correspond to the block and which ones don't, and then
you have to recognize the block shape regardless of the
(09:27):
angle and the distance of the block. And then you
have to grab it, which requires visual guidance of graspers
that have to squeeze in at the right moment from
the right direction with the right force, and stacking requires
an analysis of the rest of the blocks and adjusting
to those details. And all these programs need to be
(09:48):
coordinated so they happen at the right times in the
right sequence. And so what these AI pioneers discovered is
that tasks that seem simple are often masking enormous computational complexity.
So some decades ago, the computer scientist Marvin Minsky and
(10:09):
his colleagues started thinking about this problem and they came
up with a really progressive idea. Maybe the robot could
solve the problem by distributing the labor among specialized subagents.
So imagine small computer programs that could each bite off
a small piece of the problem. So one computer program
(10:32):
is in charge of the job find, another could solve
the Fetch problem. Another takes care of the stack block.
And these subagents can be connected in a hierarchy just
like a company, and they can report to one another
and to their bosses. And because of this hierarchy, Stack Block would not try to start its job until Find
(10:56):
and Fetch had finished their jobs. So the idea is that these subagents are totally mindless. But when these subagents
come together, the whole system starts to look pretty smart.
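To make that concrete, here is a minimal sketch, in Python, of what such a society of mindless subagents might look like. The class names, the toy world, and the little coordinator are my own illustrative assumptions, not Minsky's actual programs; the only point is that each piece does one dumb job, and a bit of sequencing makes the whole look smart.

```python
# A toy "society of mind" sketch: three mindless subagents and a
# coordinator whose only job is deciding who runs next. Names and
# structure are illustrative, not taken from the original AI programs.

class FindBlock:
    """Pretends to locate a block: returns its (x, y) position."""
    def run(self, world):
        return world["block_position"]          # knows nothing else

class FetchBlock:
    """Pretends to grasp the block at a given position."""
    def run(self, position):
        return {"holding_block_at": position}   # knows nothing about stacking

class StackBlock:
    """Pretends to place a held block on top of the current stack."""
    def run(self, grasp, stack):
        stack.append(grasp["holding_block_at"])
        return stack

def coordinator(world, stack):
    """The only 'intelligence' here is sequencing: Stack Block never
    starts until Find and Fetch have finished."""
    position = FindBlock().run(world)
    grasp = FetchBlock().run(position)
    return StackBlock().run(grasp, stack)

if __name__ == "__main__":
    world = {"block_position": (3, 5)}
    print(coordinator(world, stack=[]))   # prints [(3, 5)]
```

Each subagent on its own is trivial; whatever looks like intelligence lives entirely in how the pieces are wired together, which is the heart of the society-of-mind idea.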
So this idea of subagents didn't solve the problem entirely,
but it helped a lot, and most importantly, it brought
(11:17):
into focus a new idea about the working of biological brains.
Marvin Minsky suggested that human minds perhaps were collections of
enormous numbers of machine like mindless subagents that collaborate. So
the key idea is that a bunch of small specialized
(11:39):
workers can give rise to something like a society with
all kinds of rich properties that no subagent has by itself.
Each little guy is just doing some simple thing, but together,
when they're connected in the right way, you get something
that looks like intelligence. So the suggestion was that
(12:00):
thousands of little minds are better than one large one.
And if this sounds weird, just think about how factories work.
Each person on the assembly line is specialized. They do
one simple job. No one there knows how to do everything,
and yet complex products get built out of this. And
(12:20):
this is also how government ministries operate. Each bureaucrat has
one task or a few very specific tasks, and the
government succeeds based on its ability to distribute the work appropriately. And on larger scales, this is how civilizations operate. They
reach the next level of sophistication when they learn to
(12:41):
divide labor. They commit some people to specialize in agriculture,
and some to art, and some to warfare and so on.
This division of labor allows specialization and a deeper level
of expertise, so even though no one knows how other
jobs work, as an emergent result, you get civilization. So
(13:04):
it was this idea of dividing up problems into smaller subroutines that
ignited the young field of artificial intelligence back in the
nineteen seventies. Instead of trying to develop a single all
purpose program, the scientists changed their goal. Now it was
to build a system out of smaller, local expert networks
(13:26):
which know how to do one single thing. And in
this framework, the larger system only has to switch which
of the experts has control at any given moment. The
learning challenge now involves not so much how to do
each little task, but instead how to distribute who's doing
what when. So, as Minsky suggested in his book The
(13:48):
Society of Mind, perhaps that's all the human brain has
to do as well, and he noted that if brains
really do work this way as collections of subagents, we
wouldn't have any reason to be aware of the specialized processes.
He said, quote thousands and perhaps millions of little processes
(14:09):
must be involved in how we anticipate, imagine, plan, predict,
and prevent, and yet all this proceeds so automatically that
we regard it as ordinary common sense. At first, it
may seem incredible that our minds could use such intricate
machinery and yet be unaware of it. When scientists began
(14:31):
to look into the brains of animals, Minsky's society of
mind idea opened up a new way of looking at things.
In the early nineteen seventies, researchers realized that a frog
has at least two separate ways of seeing motion. One
system is looking for small darting objects like a fly,
(14:52):
and it directs the tongue, and the other system is
looking for large looming objects like a person, and it
tells the legs to jump. And presumably neither of
these systems is conscious. Instead, they're just simple automated programs
that are burned into the circuitry. So the society
of mind framework was an important step forward. But despite
(15:16):
the initial excitement about it, a collection of experts with
divided labor has never proven sufficient to yield the properties
of the human brain. It's still the case that our
smartest robots are less intelligent than a three year old child.
So what went wrong? I suggested in my book Incognito
(15:37):
that a critical factor was missing from the division of
labor models, and we turn to that now. The missing
factor in Minski's theory was competition among the experts who
all believe they know the right way to solve the problem.
Just like a good drama, the human brain runs on conflict.
(15:59):
In an assembly line or a government ministry, each worker is
an expert in a small task. But in contrast, parties
in a democracy hold different opinions about the same issues,
and the important part of the process is the battle
for steering the ship of state. Brains are like democracies.
(16:22):
They're built of multiple overlapping experts who weigh in and
compete over different choices. As the poet Walt Whitman correctly surmised,
we are large, and we harbor multitudes within us, and
those multitudes are locked in chronic battle. There's an ongoing
(16:42):
conversation among the different factions in your brain. They compete
to control the single output channel of your behavior. So
as a result, you can accomplish the strange feats of
arguing with yourself, or cussing yourself, or cajoling yourself. These
are things that computers simply don't do. So when the
(17:04):
hostess at a party offers you chocolate cake, you can
find yourself on the horns of a dilemma. Some parts
of your brain have evolved to crave the rich energy
source of sugar. Other parts of your brain care about
the negative consequences, like the health of your heart or
the bulge of your love handles. Part of you wants
(17:24):
the cake, and part of you tries to gather the
strength to pass on it. And the final vote of
the parliament determines which party controls what you ultimately do.
In other words, whether you put your hand up or
you put your hand out. In the end, you either
eat the chocolate cake or you don't, but you can't
do both. Because of these internal multitudes, biological creatures can
(17:50):
be conflicted. Now, the term conflicted can't be sensibly applied
to an entity that has a single program. Your car
can't be conflicted about which way to turn. It has one
steering wheel commanded by one driver, and it follows directions
without complaint. But brains can be of two minds, and
(18:11):
often many more than that. We don't know whether to
turn toward the cake or away from it, because there
are several little sets of hands on the steering wheel
of our behavior. Consider this simple experiment that's been done
with laboratory rats. If you put some cheese at the
end of a little corridor, the rat will go towards
(18:31):
the cheese, and you can hook up a little harness
to feel how strongly he's pulling towards that. Now, let's
say instead of the cheese, you put an electrical shock.
Now the rat moves away from the electrical shock, and
with the harness you can measure the force. Okay, Now
you put both cheese and the electrical shock at the
(18:52):
end of the hallway. And what the rat does is
he begins to approach, then withdraws, then finds the courage to approach again. And what happens is he oscillates, conflicted, exactly at the distance where the
two forces cancel out. In other words, he's going towards
a thing, and he's pulling away from the thing, and
(19:14):
both programs are running at once. The pull matches the push.
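Here's a tiny simulation sketch of that equilibrium. The two force curves are invented for illustration (the avoidance gradient is steeper than the approach gradient, in the spirit of classic approach-avoidance conflict models); the point is only that the rat settles at the distance where the pull and the push cancel.

```python
# A toy approach-avoidance simulation. The force functions are made up
# for illustration; only the qualitative behavior matters: the rat ends
# up where the pull toward the cheese equals the push from the shock.

def approach_pull(d):
    """Pull toward the cheese; a shallow gradient (weaker farther away)."""
    return max(10.0 - 0.3 * d, 0.0)

def avoidance_push(d):
    """Push away from the shock; a steeper gradient, strong only up close."""
    return max(20.0 - 1.5 * d, 0.0)

def simulate(start=20.0, steps=300, gain=0.05):
    d = start                          # distance from the end of the corridor
    for _ in range(steps):
        net = approach_pull(d) - avoidance_push(d)
        d = max(d - gain * net, 0.0)   # positive net force moves the rat closer
    return d

if __name__ == "__main__":
    # The rat ends up parked where the two gradients cross
    # (10 - 0.3*d = 20 - 1.5*d, roughly 8.3 units from the end).
    # A real rat oscillates around this point; this toy version just settles there.
    print(round(simulate(), 2))
```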
The poor rat has two pairs of paws on its
steering wheel, each pulling in opposite directions, and as a result,
he can't get anywhere. So brains, whether rat or human,
are machines made of conflicting parts. If building a contraption
(19:37):
with internal division seems strange, just consider that we already
build social machines like this. Think of a jury of
peers in a courtroom trial. You have twelve strangers with
different opinions, and they're tasked with this single mission of
coming to a consensus. So the jurors argue and coax
(19:59):
and influence one another, and eventually the group coheres to a single decision. Having differing opinions is not a drawback of the jury system; it is the central feature. When
President Abraham Lincoln was putting together his presidential cabinet, he
chose to put in adversaries. The historian Doris Kearns Goodwin
(20:22):
described his cabinet as a team of rivals, and we
see this kind of team of rivals all the time.
In Zimbabwe some years ago, the president Robert Mugabe agreed
to share power with a rival that he had earlier
tried to assassinate. And in China in two thousand and nine,
the President Hu Jintao named two opposing faction leaders to
(20:45):
help him craft China's future. I proposed in Incognito that
the brain is best understood as a team of rivals,
and the rest of this episode and the next two
are going to explore that framework: who the parties are, how they compete, how the union is held together, what
(21:06):
happens when things fall apart, how this framework allows us to understand what products we buy, and how we can
use this understanding of the team of rivals to better
navigate our own behavior into the future. As we move along,
remember that different political parties typically have the same goal,
(21:28):
which is success for their country, but they just have
different ways of going about it. So, as Lincoln put it,
rivals should be turned into allies quote for the sake
of the greater good. And for the networks in your brain, the neural subpopulations, the common interest is the thriving and survival
(21:50):
of you, the organism. In the same way that liberals
and conservatives both love their country but can have different
strategies for steering it, in the
brain you have competing factions that all believe they know
the right way to solve problems. When trying to understand
(22:25):
the strange details of human behavior, psychologists or economists sometimes
refer to a dual process account, and you'll know this,
for example, if you've read Daniel Kahneman's book Thinking Fast
and Slow. In that framework, the brain has two separate systems.
One is fast and automatic and below the surface of
(22:48):
conscious awareness, and the other system is slow and cognitive
and conscious, and these two systems are always battling it out. Now,
this is a very good start to thinking about it,
but there's no real reason to assume that there are
only two systems. In fact, as we're going to see,
there are many systems, and there are also different ways
(23:11):
to think about how to divide things up. So in
the nineteen twenties, Sigmund Freud suggested three competing parts in his
model of the psyche. There was the id, the ego,
and the superego. The id was all about instinct, the
ego was realistic and organized, and the super ego was
(23:33):
critical and moralizing. In the nineteen fifties, the neuroscientist Paul
MacLean suggested that the brain is made of three layers
that represent successive stages of evolutionary development. There's the reptilian brain,
which is involved in survival behaviors, and the limbic system,
(23:55):
which underlies the emotions. And in higher animals, there's the
neocortex, which is used in higher order thinking. Now,
the details of both Freud's model and MacLean's model have
largely fallen out of favor, but the heart of the
idea survives, which is that brains are made of competing subsystems.
(24:18):
I'll start with a simple model of competition as a
starting point because it captures one way to see this picture,
but by the next episode we'll see that it's even
more sophisticated than that. So we'll start with a general
statement about the brain's anatomy, which is that some areas
of your brain are involved in higher order operations regarding
(24:40):
events in the outside world. For the cognoscenti, this includes areas like the dorsolateral prefrontal cortex, which is on
the surface of your brain, just inside your temples, so
those are monitoring and assessing the outside world, while other
areas are involved with monitoring your internal state, like your
level of hunger, or your sense of motivation, or whether
(25:03):
something is rewarding to you. And this includes areas like
the region just behind your forehead called the medial prefrontal cortex,
and several areas deep below the surface, so these monitor
what's going on on the inside. Now, again, the real
situation is even more complex than this rough division would imply,
because brains do a lot more than just monitor the
(25:26):
outside and the inside. Your brain also simulates future states
and reminisces about the past and figures out where to
find things not immediately present, and so on. We'll get
into that, but for the moment, this division into systems
that monitor the outside and the inside will serve as
(25:47):
a rough guide, and we can refine this picture later.
Now, to pick two labels that'll be familiar to everyone, we can call these the rational and the emotional systems. These labels are a little underspecified and imperfect, but they capture this starting point about rivalries in the brain. The rational system
(26:09):
is the one that cares about analysis of things in
the outside world, while the emotional system monitors the internal
state and worries whether things will be good or bad.
In other words, as a rough guide, rational cognition involves
external events, while emotion involves your internal state. So you
(26:31):
can do a math problem without consulting your internal state,
but you have to consult your internal state to order
a dessert off the menu, or to prioritize what you
feel like doing next. The emotional networks are absolutely required
to rank your possible next actions in the world. If
(26:51):
you were an emotionless robot who rolled into a room,
you might be able to analyze this stuff around you,
but you would be frozen with indecision about what to
do next. Choices about the priority of actions are determined
by our internal states. When you get home, will you
(27:12):
head straight to your refrigerator or the bathroom or the bedroom.
That doesn't depend on the external stimuli in your home
because those haven't changed, but instead on your body's internal states.
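A toy sketch of that point: the external options in the house never change, only the internal state does, and without some internal valuation there is nothing to break the tie between them. The action names, the needs, and the numbers here are invented for illustration, not a model of any real brain circuit.

```python
# A toy action-selection sketch: the external options are fixed; only
# the internal state changes. Numbers and the scoring rule are invented.

ACTIONS = {
    # each action says which internal need it satisfies
    "go to refrigerator": "hunger",
    "go to bathroom":     "bladder",
    "go to bedroom":      "fatigue",
}

def choose_action(internal_state):
    """Rank the fixed set of actions by how strongly the current
    internal state values each one, and pick the winner."""
    scores = {action: internal_state[need] for action, need in ACTIONS.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Same house, same three doors; different body, different choice.
    print(choose_action({"hunger": 0.9, "bladder": 0.2, "fatigue": 0.4}))
    print(choose_action({"hunger": 0.1, "bladder": 0.8, "fatigue": 0.3}))
```

An "emotionless robot" in this sketch would be one whose internal_state values are all identical: the scoring step would have no basis for ranking, which is the frozen-with-indecision problem described above.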
So the battle between the rational and the emotional systems
can be brought to light by what philosophers call the
(27:32):
trolley dilemma. Here's the scenario. A trolley is barreling down
the train tracks. It's going out of control, and five
workers are making repairs way down the track, and you,
a bystander, realize that they're all going to get killed
by this trolley. But you also notice that there's a
(27:53):
lever nearby that you can throw, and that will divert
the trolley down a different track where there's a single
worker who will be killed. So what do you do?
Do you throw the lever or not? If you're like
most people, you have no hesitation about throwing the lever
because it's far better that only one person gets killed
(28:15):
than five people. Right? Okay, so that's a good choice. But there's a second version of the trolley problem that
presents an interesting twist. Imagine that the same trolley is
barreling down the tracks. The same five workers are gonna
get killed, but this time you're a bystander on a
footbridge that goes over the tracks, and you notice that
(28:38):
there's an obese man standing on the footbridge, and you
realize that if you push him off the bridge, his
bulk will be sufficient to stop the train and save
the five workers. Do you push the man off? If
you're like most people, you bristle at this suggestion of
(29:01):
murdering an innocent person. But here's the thing. What differentiates
this from your previous choice? Aren't you trading one life for five lives in both versions of the dilemma? Doesn't
the math work out the same way? So what is
the difference in these two cases. Philosophers propose that the
(29:24):
difference lies in how people are being used. In the
first scenario, you're simply reducing a bad situation, the death
of five people, to a less bad situation, the death
of one person. In the case of the man on
the bridge, he is being exploited as a means to
an end. So that's a popular explanation in the philosophy literature,
(29:47):
but there's also a more brain based approach to understanding
this reversal in your choice. Neuroscientists Joshua Greene and Jonathan Cohen did brain imaging while people considered these two scenarios,
and what they found was that the difference pivots on
the emotional component of actually touching someone, that is, interacting
(30:11):
with them at a close distance. So if the problem
is constructed so that the man on the footbridge can
be dropped with the flip of a switch through a trapdoor,
many people will vote to let him drop. But there's
something about interacting with the person up close that stops
most people from pushing the man to his death. Why?
(30:32):
It's because that sort of personal interaction activates these emotional networks.
It changes the problem from an abstract, impersonal math problem
into a personal emotional decision. So when people consider the
trolley problem, here's what the brain imaging reveals. In the
(30:53):
footbridge scenario, where you actually have to push the guy,
the brain areas involved in motor planning and emotion become active.
But in the lever-pulling scenario, the only brain areas
involved are those involved in rational thinking. People register emotionally
when they have to push someone. But if you only
(31:14):
have to tip a lever, your brain behaves like mister Spock,
who's the Vulcan on Star Trek, who's all rationality and
no emotion, who says emotions are alien to me. The
(31:42):
battle between emotional and rational networks in the brain is
nicely illustrated by an old episode of The Twilight Zone,
which I saw years ago. Here's how it goes. A
stranger in an overcoat shows up at a man's door
and proposes a deal. He says, here is a box
with a single button on it. All you have to
(32:04):
do is press the button, and I will pay you
one thousand dollars. The man says, what happens when I
press the button? And the stranger says, when you press
the button, someone far away, someone you don't even know,
will die. So the man suffers over this moral dilemma
through the night, and the button box rests on his
(32:27):
kitchen table and he stares at it, and he paces
around it, and sweat is on his brow, and he's
thinking about his desperate financial situation, and finally he lunges
to the box and he punches the button and nothing happens.
It's quiet, anticlimactic. And suddenly there's a knock at the door,
(32:48):
and the stranger in the overcoat is there, and he
hands the man the money and he takes the box,
and the man says, wait, what happens now, And the
stranger says, now, I take the box and I give
it to the next person, someone far away, someone you
don't even know. Now, I loved this story because it
(33:10):
highlights the ease of impersonally pressing a button. If the
man had been asked to attack someone with his hands,
he presumably would have declined the offer. In earlier times
in our evolution, there wasn't really any way to interact
with other people at any distance other than hands and
(33:31):
feet or possibly a stick, and that distance of interaction
was salient and consequential. And this is what our emotional
reaction reflects. But interestingly, as we evolved, the situation began
to change. Generals and even soldiers could very commonly find
themselves very far removed from the people that they were killing.
(33:55):
There's a great line in Shakespeare's Henry the Sixth where
a man challenges a nobleman and mocks the fact that
the nobleman has never known the danger of the battlefield.
The man says, when struck'st thou one blow in the field? And the nobleman responds, great men have reaching hands. Oft
(34:17):
have I struck those that I never saw, and struck them dead. And this is what happens all the time.
In modern warfare, we can launch Tomahawk surface to surface
missiles from the deck of Navy ships with the touch
of a button. The result of pushing that button is
then watched by the missile operator live on CNN minutes
(34:39):
later, when buildings in the enemy's city disappear in plumes. So the proximity is lost, and so is the emotional influence.
This impersonal nature of waging war makes it disconcertingly easy.
I recently heard an argument from an old-time fighter
(35:02):
pilot who was lamenting how easy it is for drone
pilots to do what they do, and he's right. But it should
also be noted that it's easy when you're thousands of
feet in the air. If you watch the footage of
the bombing of Hiroshima and Nagasaki, which was dropping these
ten thousand pound nuclear bombs onto civilian targets, you'll hear
(35:25):
the total casualness of the pilot even though each bomb
wiped out almost one hundred thousand people, and in fact,
the pilot later said quote, I made up my mind
then that the morality of dropping that bomb was not
my business. I was instructed to perform a military mission
to drop the bomb. Now, I'm not making a judgment
(35:45):
about a military pilot's obligation in the middle of a
world war. It was a complex issue that is difficult
to be understood by a modern audience who hasn't just
lived through years of war and is looking for a
way to end it. That I am saying saying it's
surprising to see how easy it appeared to press the
button to open the bomb doors from thirty three thousand
(36:09):
feet in the air, as opposed to if the pilot
had been sent in to murder one hundred thousand civilians,
including women and children, with a knife or with his hands.
I assume it would have been a very different experience
for him. So, in thinking about these issues about how
easy it is to wage war when it's impersonal, one
(36:30):
political thinker in the nineteen sixties suggested that the button
to launch a nuclear war should be implanted in the
chest of the president's closest friend. That way, if the
president wants to make the decision to annihilate millions of
people on the other side of the globe, he'd first
have to physically harm his friend. He'd have to rip
(36:53):
open his chest to get at the button. That would
at least engage his emotional system in the decision making
so as to guard against letting the choice be impersonal.
Because both of these neural systems battle to control the
single output channel of your behavior, emotions can tip the
(37:15):
balance of decision making, and this ancient battle has turned
into a directive of sorts for many people: if it
feels bad, it is probably wrong. Now, there are lots
of counter examples to this. For example, you can find
yourself put off by a particular choice, but still conclude
that it's not morally wrong, like putting your pet down
(37:39):
when it gets too old and sick, even though it
breaks your heart. Nonetheless, emotion serves as a generally useful
steering mechanism for decision making. The emotional systems are evolutionarily
quite old, and they're shared with many other species, while
the development of the rational system is more recent. But
(38:02):
the novelty of the rational system doesn't necessarily indicate that
it's by itself superior. Societies would not be better off
if everyone were like the Vulcan mister Spock, all rationality
and no emotion. Instead, a balance, a teaming up of
the internal rivals is probably optimal for brains. And that's
(38:25):
because the disgust that we feel at pushing the man
off the footbridge is critical to social interaction. The impassivity
that one feels at pressing a button to launch a
Tomahawk missile is probably detrimental to civilization. Some balance of
the emotional and rational systems is needed, and that balance
(38:46):
may already be optimized by natural selection in human brains. Or,
to put it another way, a democracy that is split across the aisle may be just what you want, because a
takeover in either direction would almost certainly prove less optimal.
The ancient Greeks had an interesting analogy for life that
(39:07):
captured this wisdom. The idea was that life is as
though you are a charioteer and your chariot is being
pulled by two horses. One is the horse of reason
and the other is the horse of passion. One horse
is always trying to tug you off one side of
the road, and the other is trying to pull you
off the other side, and your job is to hold
(39:28):
on to them tightly and keep them in check so
that you can continue down the middle of the road.
And that's what we try to do with the brain
networks involved in rationality and emotion. So what we've introduced
in today's episode is the way in which the brain
is a machine built out of conflicting parts. In the
(39:51):
next episode, we're going to explore how this team of
rivals expresses itself in very particular ways, like how we
choose what to buy or how much something should cost.
This is the basis of a new type of economics
called neuroeconomics, so tune into that to uncover the next
juicy bits. That's all for this week. To find out
(40:17):
more and to share your thoughts, head over to eagleman
dot com, slash Podcasts, and you can also watch full
episodes of Inner Cosmos on YouTube. Subscribe to my channel
so you can follow along each week for new updates
until next time. This is Inner Cosmos and I'm David Eagleman.