
January 17, 2019 54 mins

Our brains are composed of two hemispheres, but in what ways are they truly separate? In which ways are they one? In this bisected Stuff to Blow Your Mind exploration, Robert Lamb and Joe McCormick explore what we’ve learned from split brain experiments in animals and humans. 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
With every day, and from both sides of my intelligence,
the moral and the intellectual, I thus drew steadily nearer
to that truth, by whose partial discovery I have been
doomed to such a dreadful shipwreck: that man is not
truly one, but truly two. Welcome to Stuff to Blow

(00:22):
your Mind from HowStuffWorks.com. Hey, welcome to
Stuff to Blow your Mind. My name is Robert Lamb
and I'm Joe McCormick. And this is going to be
part two of our two part exploration of hemispheric lateralization
and especially the split brain experiments of Roger Sperry and

(00:43):
Michael Gazzaniga starting in the nineteen sixties. Now, if you
haven't heard the last episode, you should really go check
that out first. That's gonna lay all the groundwork for
what we're talking about today, right, and it will also
explain why we kicked off this episode and the last
episode with the reading from Robert Louis Stevenson's Strange Case
of Doctor Jekyll and Mister Hyde. The short version is Robert

(01:03):
Louis Stevenson thought he had another dude in there. What
did he call him? The other guy? The man inside me?
I know it was a different author. No,
it was, it was me and that other fellow.
That other fellow. Yeah. So in the last episode, we
discussed twentieth century research on a small group, uh, which
was a small subset of the total group of maybe

(01:24):
fifty to a hundred or maybe a little more than
a hundred people who have ever received a surgical intervention
called a corpus callosotomy, which is a severing of the
corpus callosum. And the corpus callosum you can kind of
think of as the high speed fiber optic cable that
connects the two hemispheres of the brain together. Now, the
surgery was originally intended as a kind of last resort

(01:46):
treatment for people who had terrible epileptic seizures. There are
so few of these patients because now we generally have
better, safer ways of treating epilepsy without such a radical surgery,
right though these individuals are still around. Yes, certainly, in
the last episode we mentioned that Pinto study that looked
at a couple of them in twenty seventeen. And it's very

(02:07):
possible that we have listeners out there who have received
this surgery as well, And obviously we would love to
hear from you if there's anything you would like to share.
Oh yeah, please, if you have a split brain email
us immediately. And in fact you mentioned the more recent research.
We're gonna look at some of that research in today's episode.
But what neuroscientists learned in the twentieth century from this

(02:27):
small group of patients was truly remarkable. Beginning in the
nineteen sixties and continuing up until recent years, these split
brain patients have been the subject of some of the
most interesting research ever on the nature of the brain,
the mind, and the self. So last time we talked
about the original work of like Sperry and Gazzaniga, who
discovered many fascinating things about how it's possible for one

(02:50):
half of the brain to not know what the other
half is thinking, doing, or seeing. This time we want
to follow up on the subject to explore some more
recent studies and ask questions about what these split brain
studies mean for our lives. And to start off, I
wanted to mention an anecdote I came across from the
neuroscientist V. S. Ramachandran that he has brought up in

(03:12):
some of his public talks and work. He tells a
story of working with one particular split brain patient who
had been trained to respond to questions with his right hemisphere.
Now you'll remember from our last episode that in the
case of most patients, the right hemisphere of the brain
cannot speak. It might have some very rudimentary language comprehension,

(03:34):
but generally language and especially the production of speech, is
dominated by areas of the left hemisphere. So if you're
dealing with the right hemisphere of a split brain patient
and you show something only to their left visual field,
which connects to the right hemisphere, and you ask them
about it, what often happens is that, for instance, they
will not be able to say the thing you have

(03:57):
showed them in their right brain, or even explain
it in words, but they will be able to draw
the image with their left hand. Now, in the case
of Ramachandran's story, he had trained a patient
in a lab at Caltech to answer questions posed
directly to his right hemisphere only by pointing with his

(04:17):
left hand to response boxes indicating yes, no, or I don't
know. Now, of course, asking these questions directly to the
left hemisphere is a lot easier because it just processes
language normally, and you can just ask, but he trained
the right hemisphere to respond as well, so the patient
was perfectly capable of answering questions like this with either hemisphere.
Are you on the moon right now? Patient says no?

(04:40):
Are you at Caltech? Patient says yes. But Ramachandran
then asked the right hemisphere, do you believe
in God? And it says yes. And he then asked
the left hemisphere, the language dominant hemisphere, do you believe
in God? And it says no. This is yet another
one that, immediately when I heard the story, made the hair

(05:01):
stand up on the back of my neck. I feel
the goose bumps of counterintuition
running through me. Yeah, because I feel like, for
the most part, a lot of
us want to feel like we have a definitive answer
to that question and answers like that. Now, I'm probably
a little weirder, and I imagine a

(05:23):
lot of our listeners are like this as well, where
someone asks you questions like this and you can be
a lot more wishy washy and say, well, I don't know,
it depends, you know, yes and no. I feel
like most of us not all of us. You know,
we can have contrary ideas in
our mind. We can have conflicting notions that are
vying for dominance. Which me are you asking? Yeah?

(05:44):
I think, Jekyll, are you asking? Or Hyde? You know,
Hyde, he's not much of a churchgoer,
but Jekyll, he's there every Sunday. Yeah, but he's
only there to ultimately work his way up the chain
and usurp the creator. Now, Ramachandran jokingly
asks a theological question about this. He says, you know,
assume the old dogma that people who have faith in

(06:05):
God go to heaven and people who don't go to hell?
What happens when the split brain patient dies? That's a
good laugh line. But I think this question is actually
more profound than it seems at first, because we may
not be divine judges casting people into heaven or hell,
but we are judges, and we judge and evaluate and
characterize people all the time, every day, as if they

(06:28):
are some sort of essential whole. We pick out what
we believe to be the salient characteristics that define a person,
like, this is their character, and now we know
who they are. This is their mind, this is the person.
There might be no way to get people to live
and behave other than this. And it might just be
an inextricable part of our personalities that we have

(06:50):
to judge people as essential wholes in this way. But
I think this research should cause us to wonder about
our folk beliefs about the nature of the mind and
the brain, what it means to be a person. Yeah,
I mean, obviously, just to talk about judgment, we
have some severe problems with dealing with the
idea that there is not a single person

(07:12):
over a length of time. I mean, obviously
you have people serving prison sentences for crimes that an
earlier iteration of themselves committed. What do they say? I'm
a different person now. And it is true, we're all
different people than we once were. But you might
in some ways also be a different person than you
were a couple of seconds ago, right, or it can

(07:35):
be kind of a juggling back and forth. You know,
I'm a different person in the morning versus the afternoon.
I mean, I truly feel that. Well, I mean,
when it comes to questions like this, like the theological question.
The fact is, most people, I think are probably filled
with all kinds of doubts concerning whatever their beliefs about
religion are, whether you believe in God or not. Either way,

(07:56):
you probably sometimes wonder if you're wrong. Or you should.
That's always a great exercise about anything in life, Think
about the possibility that you're wrong, no matter what it
is exactly. But our everyday experience, of course, is that
these varying states of doubt they get somehow synthesized. Right,
you roll it all up together, you say, even though
whichever way I am, whether I believe in God or not,

(08:18):
I ultimately have one way of answering that question. Most
people are like this. I mean, you might
not be this way, Robert, but most people
would say, I have an answer. Well, at the end
of the day, or even just minute to minute, you
your brain has to tell a story about who you are, right,
and for that to make sense, there still has to
be a sequence. There still has to be a story,

(08:39):
some sort of continuation. And even if you know my
story is a little more, you know, meandering,
it's still a story, right? Yeah, you're still narrativising yourself.
You're composing a synthetic picture of who you are, and
for you, I think that picture includes more ambiguity than
a lot of people are comfortable with. But either way,
no matter what, you're telling a story about yourself. Yeah, and

(09:03):
so despite your doubts either way, you think of yourself
as one whole, unified person. You either believe in
God or you don't, or you have some
narrative that's in between. You say I'm an agnostic or whatever.
But this is just one case of a generally fascinating
phenomenon to ponder: what if, by asking parts of our

(09:23):
brains separately, we would think different things about all kinds
of stuff, have different feelings, make different judgments, make different
moral judgments, be different people. Is any one aspect of your
brain more truly authentically you than another aspect of your brain?
I mean, they're both in your head, right? So today

(09:45):
this is sort of what we wanted to focus on
to talk about some of these types of takeaways from
split brain experiments and more recent research on split brain patients.
So one really fascinating area of research we can look
at is the idea of moral judgments. Robert can I
pose you a scenario and see what you think? Yes,
go ahead, Bandersnatch me here. Okay. Oh yeah,

(10:06):
you're taunting me with it every day. I still haven't
seen it yet, but I will. Okay, here's the scenario.
Grace and her friend are taking a tour of a
chemical plant. Grace goes over to the coffee machine to
pour some coffee. Grace's friend asks if Grace will put
some sugar in hers and there is a white powder
in a container next to the coffee machine. The white

(10:28):
powder is a very toxic substance left behind by a
scientist and deadly when ingested. The container, however, is labeled sugar,
so Grace believes that the white powder is regular sugar.
Grace puts this white powder in her friend's coffee. Her
friend drinks the coffee and dies. Now the question is,

(10:49):
is what Grace did morally acceptable or not, given
this scenario? I mean, it seems morally acceptable because she
didn't know it was toxic. It was labeled sugar. Yeah,
and she was following a request. Yeah,
so you are answering the question the way almost all
adults tend to answer these questions: what matters is

(11:11):
the intention of the person doing the action. Uh So
let me pose it another way. Same scenario, Grace and
her friend are getting coffee at
the chemical plant. Now it turns out that the white
powder in the container is just sugar and it's fine,
but it is labeled toxic. So Grace believes that the
white powder is a toxic substance, but she's wrong. She

(11:33):
puts it in her friend's coffee. It's actually just sugar.
Her friend drinks it. Is what Grace did
morally acceptable? Well, I would say it is forbidden because
she attempted to poison a friend. Exactly right. So yeah,
this is how I would answer as well. This is
how almost all adults tend to answer these questions. The
fact is that in general, adults tend to think that

(11:54):
intentions are highly morally relevant. So they usually say that
a person who accidentally poisons a friend of theirs with
no intent to harm them is not morally blameworthy, but
somebody who intends to poison a friend, even if they
fail at doing so, is morally blameworthy. And of course,
like you know, there are many aspects that you see

(12:15):
this put into practice around the world, like in legal
justice systems. A person is punished a lot more for
trying to hurt someone on purpose than for hurting them
by accident, though sometimes they are still held responsible
for hurting somebody in a grossly negligent situation. And
that's like a middle category, right like if you didn't
mean to hurt somebody, but you were doing something really
reckless and it hurt them, that's sort of like a

(12:37):
middle culpability level, right? Like if you stored the toxic
white powder next to the sugar, and she just didn't
look closely enough. Like, you really should know
that this place has sugar and toxic poison,
you should know to check which one you're
scooping lumps out of. Right. But we wouldn't think
that Grace should have expected there to be poison right

(12:58):
next to the coffee machine. And on the other hand,
you can't expect Grace to just expect people to be
trying to poison her all the time. There are
certain cultural expectations in place here. Exactly. But the
weird thing is not everyone answers scenarios this way. For example,
previous research, including by the Swiss psychologist Jean Piaget and

(13:19):
others later, has found that young children, and Piaget found
this was up to about the age of nine or ten,
tend to attribute moral guilt and deservingness of punishment in
exactly the opposite way. They assigned guilt based on the
objective consequences of the action rather than to the knowledge

(13:40):
or intentions of the agent, meaning that many young children
will suggest that if Grace means to put sugar in
her friend's coffee but accidentally poisons her friend, she is naughty.
But if she tries to poison her friend and the
poison doesn't work, she's fine. Well that sounds totally believable.
I mean, now that it's pointed out like that,

(14:00):
you know, I can see various aspects
of that popping up in just raising a child, you know,
where they're kind of going to jump
to this conclusion, you know, certainly not with poisoning, but
with just sort of the everyday minutia that fills your life. Well,
they don't reason this way every time, Like sometimes intentions
seem salient to them, but generally the rule is after

(14:22):
about age ten, almost nobody ever thinks that accidentally harming
someone is worse than intending to harm them and
failing. Yeah, but this, I mean, I've
seen this with my son though, where like he'll do
something accidentally and then he's really hard on himself for
having, quote, been bad or having, you know,

(14:43):
done something bad and you have to reassure him you
know, this was an accident, but
you know, it's all cool. Well, this is a fascinating
phenomenon on its own. I mean, before we even get
to how this applies to the split brain experiments for example,
you know, I went back, I was like, is this
really true? So I was reading some of Piaget's work
on this question from a book of his, and so
here's one of the scenarios he describes when interviewing young children. Okay,

(15:07):
the first one is about this little
boy named John. Robert, do you want to read about John? Sure.
A little boy who is called John is in his room.
He is called to dinner. He goes into the dining room,
but behind the door there was a chair, and on
that chair there was a tray with fifteen cups on it.
John couldn't have known that there was all this behind

(15:28):
the door. He goes in, the door knocks against the tray,
bang go the fifteen cups, and they all get broken.
All right. Here's the other scenario. Once there was a
little boy whose name was Henry. One day, when his
mother was out, he tried to get some jam out
of the cupboard. He climbed up onto a chair and
stretched out his arm, but the jam was too high
up and he couldn't reach it and have any. But

(15:50):
while he was trying to get it, he knocked over
a cup. The cup fell down and broke. Ah. So yeah,
we have a situation where John was just going about
normal everyday stuff. He didn't know where some stuff was,
and stuff got broken. But Henry is trying to do
something he shouldn't and then accidentally breaks something. But here

(16:13):
then Piaget includes a little transcript of a dialogue with
a six year old boy named Geo about these stories. Robert,
do you want to be Geo? I'll be the child. Yes, Okay,
have you understood these stories? Yes? What did the first
boy do? He broke eleven cups and the second one
he broke a cup by moving roughly? Why did the

(16:35):
first one break the cups? Because the door knocked them?
And the second he was clumsy when he was getting
the jam, the cup fell down. How did Geo become
Richard O'Brien? Okay, no, sorry, go on. Is one of
the boys naughtier than the other? The first is because
he knocked over twelve cups. If you were the daddy,
which one would you punish most? The one who broke

(16:56):
twelve cups? Why did he break them? The door shut
too hard and knocked them. He didn't do it on purpose.
And why did the other boy break a cup? He
wanted to get the jam. He moved too far, the
cup got broken. Why did he want the jam?
Because he was all alone? Because his mother wasn't there.
Have you got a brother, no, a little sister. Well,

(17:16):
if it was you who had broken the twelve cups
when you went into the room and your little sister
had broken the one cup while she was trying to
get the jam, which of you would be punished most severely? Me?
Because I broke more than one cup. Robert, First of all,
I'm gonna give a rave review to your creepy child
voice. That was like a beautiful Riff Raff French Geo.

(17:37):
I was trying to go for like a Damien child
or something. But you know, Richard O'Brien is still pretty good.
It's all for you, Riff Raff. But this is illuminating.
This shows how the six year
old is thinking about these two scenarios and applying judgment. Yes,
almost no adult reasons this way right right, So this
on its own is fascinating to me. Why this discrepancy

(17:59):
in moral reasoning of children and adults and what causes
the change? You know, Piaget says the change tends to
happen somewhere in late childhood, somewhere between
seven and nine or ten. This change really takes
over and children start reasoning about
moral intentions and moral knowledge as opposed to just the

(18:20):
objective outcomes. Uh. One issue I think that plays into
this maturation process in moral judgments is of course going
to be the development of the sophistication of theory of mind,
and theory of mind of course is the ability to
understand that others have independent mental states and imagine what
those states are. But this clearly can't be the only factor,

(18:41):
because most children develop theory of mind by around age
five or so, and a significant number of them think
outcomes matter more than intentions for guilt until around age
nine or so, So there must be something else happening also,
so they're able to contemplate other mind states,
and yet are still sticking to this

(19:03):
harsh form of judgment. Yeah, And again, to be clear,
not in every case, because sometimes children will seem to
think intentions matter, but they clearly default to
this far more than adults would. Now, there's one reason
to think that, of course, theory of mind is important
for making mature moral judgments, the kind adults make,
based on knowledge and intentions, for the obvious reason that

(19:24):
when you make a judgment considering a state of mind,
including the knowledge and intentions of the person who broke
the cups or put the powder in the coffee or whatever,
you need to imagine their state of mind, like you
have to have that in your brain in order to
evaluate whether they were guilty or not. And so, in
like two thousand eight or two thousand nine, researchers named
Liane Young and Rebecca Saxe used neuroimaging to find evidence

(19:46):
that when you try to ascribe beliefs and intentions to
other people, essentially when you practice theory of mind and
you're thinking about other minds, it involves processes that are
lateralized: they're primarily on one side of the brain, specifically
in the right temporal parietal junction or t p J.

(20:07):
And in a two thousand nine study, Young and Saxe
found that uh temporal parietal junction activity in the right
hemisphere only appeared when people tried to assess the moral
significance of things like accidental harms when you hurt somebody
but you didn't mean to. So, if I tell you
a story about Jeffrey accidentally knocking somebody into the Grand Canyon,

(20:28):
and then I ask you to think about whether Jeffrey
did something morally wrong or not, whatever thinking you used
to answer that question will probably involve the t p
J on the right side. But oh, what if the
part of your brain that's interacting with the
language that poses this question to you, cannot retrieve information

(20:48):
from the lateralized TPJ on the right side? The split brain. Yes,
so we're gonna look at a two thousand ten study
from Neuropsychologia called Abnormal Moral Reasoning in Complete and Partial
Callosotomy Patients, by Miller, Sinnott-Armstrong, Young, King, Paggi, Fabri, Polonara,
and Gazzaniga. So the authors begin by looking at the

(21:11):
state of affairs we just talked about, uh with the
you know, the localization in the right hemisphere of this
part of the brain that's used in imagining other minds
and making judgments about something like the intentions of somebody
in reference to moral guilt, and they write, quote: These
findings suggest that patients with disconnected hemispheres would provide abnormal

(21:32):
moral judgments on accidental harms and failed attempts to harm,
since normal judgments in these cases require information about beliefs
and intentions from the right brain to reach the judgmental
processes in the left brain. So they ran a test.
They used six split brain patients who have had either
a partial or total sectioning of the corpus callosum and

(21:55):
compared that with twenty two normal control subjects. Now, what
they did is they verbally, out loud, conducted interviews
posing moral judgment scenarios like the sugar or poison story
we talked about with Grace, but also other ones like it. Uh.
They conducted these interviews verbally, asking the subjects about whether
different types of action in the scenario were morally acceptable

(22:18):
or not. And remember, of course, which hemisphere of the
brain is the one primarily responsible for speech. It's the left.
So if you're having a verbal interview with somebody, their
left hemisphere is sort of like it's like the gatekeeper,
right that will in most cases be dominating the input
and output of the brain you're interacting with, since the
input and output is all spoken words. So if you

(22:40):
have to give your answers in words coming from your
left hemisphere and it can't communicate very well with your
right hemisphere or at all with your right hemisphere, which
is the home of an important part of the brain
that used to think about the knowledge and intentions of
other people, your verbal answers on subjects requiring this kind
of knowledge may very well be impaired. And the results,

(23:01):
it turned out, supported this hypothesis. The control subjects, the
people without split brains, they tended to judge just like
we did earlier, Like they judged based on intentions. Well,
did Grace mean to harm somebody or not? And that
was the mainly salient thing. The split brain patients did
so far less consistently, more often judging based purely on outcomes,

(23:23):
the way many young children did in Piaget's work. And
also to supplement their experiment, they tested two of the
split brain patients' ability to detect hypothetical faux pas. For example,
a person quote telling somebody how much they dislike a
bowl while forgetting that the person had given them that
bowl as a wedding present. Uh. And of course, the

(23:46):
idea is that a person who's unable, like if you're
unable to give spoken answers involving the theory of mind
function localized in the right TPJ, you will find it
significantly harder to detect a faux pas, which requires you
to think about other minds. And the split brain difference
held true here. Out of ten faux pas, they said,
patient V.P. successfully detected only six and patient J.W.

(24:11):
correctly identified only four, whereas control subjects all identified a
hundred percent of the faux pas. So when they were
given a scenario like that and asked, did something awkward
happen, normal people detected it every time. In fact, one
of the things that I would say our brains are
most highly suited for is detecting social awkwardness and stuff, right, yeah,

(24:31):
And it is interesting to notice this emerging in younger
children too, you know, like you see this kind of
awareness coming online, you know, where they're able to identify
faux pas as opposed to just being like the master
of faux pas. Well, do you ever notice, I wonder
if, like, adolescence and teenage years are kind of an era.

(24:53):
It's like it's a time when you were almost like
hyper aware of social awkwardness. Does that ring true to you?
Um to a certain extent? But I don't know. I've
run into some teens who I mean, there are a
lot of different types of brains out there, but I
mean I've run into some teens that definitely have
a lot of social awkwardness, or definitely walk into

(25:16):
a lot of faux pas. So I don't know. Well,
I mean, just because you are awkward doesn't mean you're
not aware of awkwardness, right, Yeah, Certainly awkwardness does seem
to define that period in one's life. That
might be something to come back to. I know
we've done episodes in the past on the teenage brain
in the particular aspects of the teenage brain. I wonder

(25:37):
if there's an entire episode on the
science of awkwardness. Well, I think we should take a
quick break, and then when we come back we can
discuss this study a little more. All right, we're back,
all right, So we've just discussed this study about split
brain patients and moral judgments and found that split brain patients,
at least in this one study, made moral judgments based

(25:59):
on outcomes rather than on intentions, more like children
sometimes do instead of the way that adults normally do. Um,
and this is fascinating. Now, of course, we should acknowledge
some potential drawbacks of this experiment. Like all split brain studies,
by necessity, it's a small sample, right, you know, there
aren't that many of these people out there, and even

(26:20):
a smaller subset of them want to participate in experiments
like this. So it's almost on the scale of anecdote,
so you have to be careful about drawing strong conclusions
from the results. Also, there are some other detailed complications
in the study, such as questions about why the effect
also manifested in partial callosotomy patients, when the authors had

(26:40):
not expected it to. They thought it would only appear
in the full callosotomy patients. And then also about where
the exact site of decoding the beliefs of others is located.
Maybe it's not exactly the TPJ, but more anterior to it.
So those are some peripheral issues. But nevertheless, if we
tentatively accept these results, how fascinating. And it leads
to all these questions. Like, here's one. You know, we

(27:04):
discussed in the last episode that despite the radical nature
of the surgery that cuts the corpus callosum and the
amazing neurological anomalies that can arise from it under lab conditions,
generally most patients and patient families report totally normal functionality,
no major changes in personality or behavior after the surgery.

(27:24):
If it's changing their moral reasoning in this kind
of way, how could that be possible? I mean, yeah,
because certainly from your own standpoint, I mean,
if your moral compass has changed, then
you can't see the forest for the trees, right? But
but you're gonna be surrounded by other people who would

(27:45):
be able to identify the change, presumably. Yeah, you would
think so, I mean, if there is actually a change.
And also, yeah, you'd think that
moral judgments sort of go to the heart of a
person's personality, right? Like, that is your character,
that is who you are as a person, or at
least how you think about that subject. Right. You would
think there would be anecdotes out there about like, yeah,

(28:08):
my uncle had this surgery and then
his political ideology changed afterwards, or
something to that effect. But we have not seen that
in any reference in any of these studies. So if
these results from this two thousand ten study are sound,
what accounts for the discrepancy here? And the authors they
posit three possible answers. One is, well, maybe there are

(28:30):
profound personality changes in split brain patients that have gone
unnoticed or unreported. They don't think this is very likely
because quote, most reports from family members suggest no changes
in mental functions or personality, and early studies that thoroughly
tested patients pre and post operatively reported no changes in
cognitive functioning. So they feel pretty robustly that these patients

(28:53):
in their day to day lives are not really changed.
The other possibility is, well, maybe it's just because
the judgment tasks here have no relevance to real life,
But I mean we use judgments like this all the time,
Like, did somebody mean to do something? That seems
like something that comes up every day. Yeah, I mean
I jokingly brought up Bandersnatch, the choose-your-own-adventure Black Mirror

(29:16):
episode on Netflix earlier. But, like, I found myself,
in watching that, having to make moral
choices for the character. I found myself very uncomfortable with
choices that I found morally reprehensible, even though
it's just purely hypothetical. It's just a story, right, all right?
What else do we have? What other possible answers? Well,

(29:37):
the third possibility is what the researchers think is probably
the case, which is that even though this impairment is
manifested in the lab, in reality it somehow gets compensated
for somehow in daily life, other brain regions and functions
or alternative processes kick in to counteract whatever is causing
people to give these unusual answers in the lab condition,

(30:01):
the brain finds a way. Yes. So what would it be? Well,
what about a version of something, not exactly but something
like the system one versus system two schema? Now,
of course, you can remind people what the system one
and system two schemes are. Well, it's like, it's
basically like the different ways of dealing with the threat
of the tiger. There's the way of dealing with the
tiger by avoiding it and not going to the places

(30:23):
where the tiger is, and then there's the way of
dealing with the tiger where you have to fight it
or flee from it. So I think we'd have the
order inverted there. But yeah, so like, system two is
generally considered to be like slow, deliberate, methodical, logical thinking
about how to solve problems, whereas system one is fast, reactive, intuitive, implicit, right,

(30:46):
punch the tiger in the nose and run for it.
And we need both for life. I mean, system two
reactions might be less likely to give us erroneous results,
but you don't have time to use system two thinking
on everything. You know, you're trying to get through life.
Most of the time, you need to make quick judgments
that are not overly considered. You know, you can't overthink,

(31:07):
like which foot I'm gonna put in front of the
other right now. Yeah, so you've got to be prepared
for either tiger, the distant tiger or the close tiger.
And so maybe the idea here is that the right
TPJ is somehow necessary for making fast implicit system one
type decisions about judging, you know, the moral valence

(31:29):
of an action and imagining theory of mind, but that
if you can't do that, you can somehow
do the same thing. It just takes longer, and it's
a more difficult, deliberate process that the brain has
to go through if it can't rely on this brain
region that does this fast for you normally. The
authors write, quote: If the patients do not have access

(31:52):
to the fast implicit systems for ascribing beliefs to others,
their initial automatic moral judgments might not take into account
beliefs of others. But you know, their slow, reasoned,
deliberate thinking system can compensate; it can kick in. Then again,
I mean, I wonder, if this is the case,
and we'll discuss this a little more, how this wouldn't

(32:13):
manifest in normal life, because I feel like we use
the fast intuitive system one type process to make morally
relevant judgments all the time. I mean, we're constantly making
sort of unfair moral judgments about things that, you know,
are not using the kind of reasoning that
you would sit down and deliberate about. Think about how
often you get mad at somebody because they do something accidentally,

(32:36):
and if you were forced to stop and think about it,
you're like, Okay, no, they didn't, they didn't mean to
do that. There's no reason to morally blame them. You
just get mad in the moment and you're just like,
why are you in my way? Or why did you
do that? Yeah, yeah, totally. This is, you know, like
the other split brain experiments we're looking at, though.
It reminds me of, say, if you're watching a

(32:57):
three-D film and you have the glasses on, and
then you take the glasses off and you see
that there's some sort of,
you know, lack of unity there. Or it's
like you're staring through the stereoscope and then
you look at the card and you see that it's
two images side by side that create the united whole.
Like, it's a glimpse at the duality that

(33:20):
is making, at least, you know, the sort of
the illusion, the experience of the whole possible. Um.
But then we shouldn't fall into the trap of
thinking that it is dual by nature. It's like taking
the glasses off and saying, oh, the world is really red,
the world is really blue. Well no, no, the world
is the thing that comes together. Yeah. And the

(33:42):
glasses are designed to give you this three D image
the same way that the brain is designed by evolution
to have compensating processes, to have one way of doing
something or another way of doing something, depending on the
situational need. And so, of course I indicated that the
authors tend to think this third answer is probably the
correct one about the compensating mechanism taking over in real

(34:03):
life scenarios. Uh. And as evidence they cite the fact
that in the experiment, split brain patients would sometimes spontaneously
blurt out a rationalization of an answer that ignored intentions,
almost as if after giving the answer out loud that
ignored intentions, they realized something was wrong with it. So

(34:26):
here's one example. A split brain patient named J.W.
heard a scenario where a waitress thought that serving sesame
seeds to a customer would give him a terrible allergic reaction.
She thought he was allergic to sesame seeds. She tried anyway:
she served him sesame seeds, but it turns out he
wasn't actually allergic. She was wrong about that, and the

(34:47):
seeds didn't hurt him, even though she thought they would.
J.W. said the waitress had done nothing wrong. Then
he paused for a few moments, then spontaneously blurted out,
sesame seeds are tiny little things. They don't hurt nobody.
You know, it's almost as if he was searching
for a post hoc rationalization of an answer he had

(35:09):
already given, but which began to seem wrong to him
as it sank in, you know, given a few more
seconds to think about it. In the patient J.W. alone,
they reported spontaneously blurted-out rationalizations like this in five
of the twenty-four scenarios, so like more than a fifth.
And again, I just think back to the fact, you know,

(35:30):
post hoc rationalization is a huge part of life. We
talked about this in the last episode with
the rider and the elephant, right? Like, how often
do we do things that honestly we don't understand why
we did them, but we just come up with a story,
and we even believe that story ourselves as an explanation
for why we did it. But you can see clear

(35:51):
evidence that that is not the reason. Right. Yeah, you
end up telling yourself, well, I wanted that product, or
you might even end up telling yourself some sort of story
about how you were tricked into buying it. But
there is some sort of rationalization about the
movements of the beast beneath you. Alright, on that note,

(36:12):
we're going to take another break, but we'll be right back.
Thank you. Alright, we're back. Okay. I think we should take
a look at another study about moral judgment and the
division of the brain hemispheres. So this is one from
Royal Society Open Science called Moral Judgment by the

(36:32):
Disconnected Left and Right Cerebral Hemispheres: A Split Brain Investigation.
And this is by Steckler, Hamlin, Miller, King and Kingstone.
Uh and when you get King and Kingstone together, you
never know what's gonna happen. So to recap from the
last study, we know that lots of parts of the
brain are used in making moral judgments, including you know,

(36:53):
regions and networks in the left hemisphere, such as the
left medial prefrontal cortex, the left temporoparietal junction,
and the left cingulate. But in order to make moral
decisions based on people's intentions when you're imagining what other
people mean to do and what they know, we seem
to require use of an area in or around the

(37:15):
area mentioned in the last study, the right temporoparietal
junction, or rTPJ. And it seems that without it
you can't properly imagine other people's intentions and beliefs to
make a quick moral judgment. So here's a question, then:
the right hemisphere seems necessary in making a quick moral
judgment in the normal way based on people's intent but

(37:37):
is it sufficient. Could the right hemisphere alone make a judgment?
So the authors try to find out with the help
of a split brain patient. They write, quote, here we
use non-linguistic morality plays with split brain patient
J.W. to examine the moral judgments of the disconnected right hemisphere.
So obviously you've got a problem if you're trying to

(37:59):
just talk to the right hemisphere, because the right hemisphere
is not going to do super well at understanding a
verbal scenario you describe to it. Right, it doesn't want
to listen to you tell a story. It doesn't want
a lot of dialogue. It just wants some sweet, muted
YouTube action. The silent film hemisphere. And again, not
to be overly simplistic, because we do know from

(38:20):
some research that the right brain does seem to understand
some language, it's just not nearly as linguistically sophisticated as
the left hemisphere. Um So, they use these nonverbal videos
of people trying to help someone and succeeding or failing,
or trying to thwart someone and succeeding or failing. So
an example might be somebody's trying to get something down

(38:42):
off of a high shelf and then somebody either like
bumps into them to try to knock them off the
shelf or tries to help them get the thing down
or something like that. And then they had JW watch
all these videos and point with the finger of a
specific hand which is controlled by the opposite hemisphere, to
indicate which character was nicer. So, in a series of

(39:03):
test sessions like this over the course of a year,
they found that J.W. was able to make pretty normal
intent-based judgments with his right hemisphere alone, pointing with
his left hand, but had a lot more trouble making
intent-based judgments with the left hemisphere, in some
cases seeming to respond almost at random with the left hemisphere.

(39:24):
And yet the left hemisphere is the hemisphere that talks.
So there were more signs of the left hemisphere making
up ex post facto justifications when it did not understand
what the person had done. For example, after
one video, when asked why he made the choice he
did of which character was nicer, JW just offered the

(39:45):
rationalization that blonds can't be trusted, when one of the
actors in the video was blonde. So here's one question:
why the discrepancy with the last study? In the last study,
the left hemisphere defaulted more often to making moral judgments
based, remember, on the objective good or bad outcomes, rather than
people's intentions. Why did it seem to make judgments at

(40:07):
random this time? So the authors say, maybe in the
previous study it's because subjects were explicitly asked to judge
whether a behavior was morally acceptable or not. And in
this study instead the subject was just asked who's nicer.
Maybe the left hemisphere, you know, separated and left to
its own devices, doesn't use any kind of
moral reasoning to judge who is nicer, but uses some

(40:30):
other kind of rubric. Maybe nicer means something non-moral
to it. Then again, there's also the possibility, well, you know,
we're again limited to small sample sizes, in this case
very small: just one patient. So it's possible that
maybe JW is just unusual. That's always a thing to
consider with this kind of study, and it's, you know, unfortunately,

(40:50):
what this sort of research is by nature limited to.
One of the things that I think is interesting in
looking at this research we've looked at today, with
the different kinds of moral reasoning in the different hemispheres,
is that we see again the role of something that
we talked about in part one of this series
back in the first episode, about the role of what's

(41:11):
thought of as the interpreter, or at least, in Michael
Gazzaniga's theory, the interpreter in the left hemisphere. So
the idea is of course that your brain constantly makes
up stories to explain why you just did what you did.
But split brain research indicates that we have no guarantee
that the stories we give to explain our own behaviors
have any explanatory power at all. A lot of times

(41:34):
it seems more like they are just confabulated post hoc rationalizations,
that you just came up with something to explain something
you did when you really have no idea why you
did what you did. The brain just pulled it out
of its own butt, if the brain had a butt.
In the previous experiments, this had to do with stuff
like why did you draw this picture you know? Or

(41:55):
why did you pick this object out of a drawer
with your left hand when you couldn't name that object
in speech or anything like that, and people would make
up excuses. Now you you see a similar kind of
thing perhaps going on with making moral judgments. And I
think that there is some research that this is indicative
not just of something about split brain patients, but of

(42:16):
something larger about this phenomenon of interpretation in the left
hemisphere and of the human condition itself. Yeah, like we've
touched on in this episode, in the previous
episode, and in many other episodes before. It's like there's
always a story that is told, right, We're constantly telling
a story about ourselves, and that story involves rationalizations,

(42:39):
for our actions, and interpretations of who we
are and why we're doing everything we do exactly. And
it happens on multiple levels. It happens to explain
why you took certain actions that you
can't actually explain. It happens to explain why your mood changes.
Because Gazzaniga writes about this, that there are these cases
where you can have somebody who has a mood shift triggered,

(43:02):
Like, for example, you have split brain
patients where you show some positive or negative mood-triggering
stimulus to the right hemisphere, and then the speaking part
of the brain expresses being upset, but will be
unable to express why, and will just make up a
story about why, like, well, because you did this thing
that made me upset. And crucially, I think it seems

(43:24):
to be the case that when we make up stories
like this, they're not just, you know,
outward facing. It's not just PR for the brain; it's
inward facing. We are convincing ourselves that this made up
story is correct. Yeah, it helps create like the internal
reality that we cling to. Yeah, exactly, And so it's

(43:44):
it's interesting, I think, to notice that this appears to
be linked to the brain's capacity for language. That, at least,
according to Gazzaniga's theory here, if he's correct, the part
of the brain that makes up explanations for why something
happened is also highly associated with the part of the
brain that is able to talk about things, and that

(44:05):
very well might not be an accident. It seems possible
there's a link between the networks of the brain that
have the most to do with generating conscious experience and
the networks of the brain that are able to put
things into words. And that's fascinating. All right, so
under Gazzaniga's ideas here, the consciousness-generating capacity is located

(44:26):
primarily in the left hemisphere. And what happens when you
have a split brain patient is you essentially cut off
the conscious part of the brain's access to half of
what the brain is doing. Yeah, though that half of
the brain is still over there doing stuff. Yeah. With
with each example that we we we pull out here,
each each study it is still very difficult to really grasp,

(44:48):
you know. It's again this kind of you can't
see the forest for the trees situation, where it's hard
to imagine the consciousness we're experiencing in
a system that's been divided, you know. Well, yeah, that's
one thing that that's so interesting here. I think one
way you could misunderstand what the split brain cases show

(45:09):
is that if you cut the brain in half, you
generate two conscious, independent people, and that appears to not
be the case. It's not The Man with Two Brains, like with
Steve Martin, right? You get one conscious experience. The person
generally does not report feeling any different, as we talked
about last time. Their behavior and stuff is generally about

(45:30):
the same as it was before, except you have the
ability to show under certain conditions that there's this whole
half of the brain over there doing things that you
cannot be conscious of or put into words, so it
can still sense, it can still control. The body is
just apparently not integrating or synthesizing into whatever creates your

(45:50):
conscious experience. Which, I mean, in a way that
is sort of like having the other fellow
in there, in the words of Robert Louis Stevenson. Now
to bring up another literary example. We've talked about Peter
Watts's book Blindsight on the program before. I'm sure
you remember the character Siri Keeton, who loses his brain's

(46:11):
left hemisphere to infection, and then
as a result of that entire hemisphere is largely or
entirely replaced with like a cybernetic implant. Yes, and this
creates a lot of the strange psychology of the narrator
in that book. Yes, yeah, so I couldn't help but
think of that when we were talking about this. Also,
I was reminded of a character in the book Consider

(46:32):
Phlebas by Iain M. Banks, who has tweaked his
brain so that he can engage in unihemispheric sleep.
We didn't even get into that in in this episode,
but of course this is something that for instance, dolphins
can do. They can't just go to sleep, so
they'll put one side of their brain, one hemisphere of
the brain to sleep at a time. And
in that particular book, he was probably leaning

(46:55):
a little bit into sort of the left brain right
brain myth a bit, but he was discussing how if
one side of the human brain is sleeping and
only one side is awake, you are going to have
a different expression of that individual. Now, if the Gazzaniga
model of consciousness is correct, that would make me
wonder: if a human were capable of unihemispheric sleep,

(47:18):
would the human be conscious while the right brain is
sleeping and not conscious while the left brain is sleeping,
and yet, while the left brain is sleeping, still awake,
just not conscious? Well, I guess ultimately
you'd have to work out exactly how this would work
in a human scenario. But as long as one
side would be awake to alert the other side when

(47:38):
full brain alertness was required, you know, that
would be the main prerequisite. I just thought to look
this up. I wish I'd thought of it before we came in here,
whether there are any lateralization properties of sleepwalking. Oh, that
would be good too. Well, we need to come
back and discuss sleepwalking in depth, because I'm sure

(47:59):
there's a whole episode just right there. We've done some
episodes on parasomnias in the past, like sort
of covering various weird sleep phenomena. But yeah, that would
be a fun one to come back to, for sure,
you know. Speaking of Peter Watts, I remember he's written
about this idea of if thoughts were inserted into your
brain from the outside, would you even perceive them as

(48:21):
alien or would you just perceive them as self? Because
Gazzaniga's left brain interpreter model might be totally wrong, of course,
but let's just assume for a minute that it's correct.
Things happen unconsciously in modules all throughout the brain, and
then regions in the left hemisphere have the job of
synthesizing all that activity and generating a story that explains
to you why your brain just did something. And this

(48:44):
interpreter function is somehow crucial to what we think of
as the human experience of consciousness. Consciousness sort of
is this story we tell about why we're doing things
and who we are. Now, normally, if something enters your
left visual field, goes to the right hemisphere, gets processed there,
and then travels to the interpreter in the left hemisphere
through the corpus callosum. That doesn't feel like you're getting

(49:07):
that thought or information or experience from somewhere else. It's
all just self. It all just gets interpreted and it's you.
So if we were to start using some kind of
brain to brain interface or a computer to brain interface
where it were possible to transmit thoughts into the brain
from outside, and who knows if that's really possible, of course,

(49:27):
but just assume. Would we be able to tell the
externally inserted thoughts, the sort of incoming brain mail, from
activity arising in networks and modules natively throughout the brain itself,
or would it just all go to the interpreter the
same way? So you could send an alien thought into
somebody's head and have them immediately rationalize it as part

(49:50):
of the interpreted self the same way they would if
it came from some network in the right hemisphere, would
they just think, yep, this is just me thinking. I
feel like we're borderline there with certain individuals in their
use of smartphones. Oh yeah. I imagine you, and
listeners out there, you've had a similar experience: you'll
be in a conversation with someone and they'll reach for a

(50:12):
phone to remember something. But often, like, not
in a way where it's like, oh yeah, I forget that,
let me research it. More like, let me access this
part of my memory. Yes, I know exactly what you mean,
and I, um, I don't know. I mean, I wonder
what the process is by which the interpreter functions. Again, just
assuming this model of the interpreter and the conscious experience

(50:35):
is correct, I mean this, you know, this might be mistaken.
But if this is correct, what is the rubric it
uses to decide what gets integrated as self, and
what does it decide is alien? That's a great question.
We'll have to come back to that in the future.
Maybe there is none. Maybe there's no future.
Oh, maybe there's no self. Yes. Well, you know,
it also brings up the question, you know, are we

(50:57):
limited? Is our identity delimited by the
things that we have at our disposal in our mind?
Do you count the things that we have to
depend upon that we have externalized, you know? And I
feel like that is part of the modern human experience,
that has been part of the human experience for a while.
I mean, if an author writes, say, thirty books, um,

(51:19):
and that author cannot repeat them from memory, they are
not a part of his or her mind.
Then, you know, how do you weigh that into the
equation of self? Yeah, exactly. And what if you didn't
write them? What if these are just books that you
have incorporated into your thinking about things? Are those now
a part of your brain? If you know that, you

(51:40):
could consult them in order to figure out what you
think about something, but you can't do it without consulting them. Yeah,
what if it's a book that you've written and you've forgotten.
I believe Stephen King has a couple of examples of
that, right, where he doesn't remember writing a particular novel.
I think one example is Cujo; he said he didn't remember
writing it because he was on drugs. Yeah, so is
Cujo a part of Stephen King? Likewise, I mean,

(52:03):
we all have encountered books, films, etcetera, some sort
of external influence that has been important at one point
in our life and then is discarded later, and then
sometimes picked back up again. Oh, there's an extremely strong
social component here. Lots of people figure out what they
think about something by checking to see what somebody else
thinks about it, whether that's a person you know known

(52:24):
to them or some public figure that they you know,
derive opinions from. And you know what I'm gonna go
ahead and take a stand: that's not behavior I encourage.
Do not trust another person as much as
you trust your own right hemisphere. Don't just directly incorporate
their information as self. I can agree with that. Yes,

(52:47):
all right, well, there you have it. We're gonna go
ahead and cap off these two episodes, Part one, Part two.
Hemisphere left, hemisphere right, if you will. If you
want to check out other episodes of Stuff to Blow
your Mind, you know where to go: head over to
Stuff to Blow your Mind dot com. That's the mothership.
That's where you'll find all the episodes of the show.
And don't forget about Invention at invention pod dot com.

(53:10):
That is the website for our other show, Invention, which
comes out every Monday. It is it's very much a
you know, a sister show to Stuff to Blow your Mind.
It covers a lot of the sort of topics that
we've covered on Stuff to Blow Your Mind in the past,
so, you know, I wouldn't say it's a
radically different show, but it's one that, if
you're a fan of Stuff to Blow your Mind, you
should subscribe to Invention and perhaps you're even the type

(53:33):
of person who's like, you know what, I
like the Invention episodes the most. Maybe I'll just stick
with Invention. That's fine too. Yeah, we basically apply the
same kind of mindset we do on the show here to
scientific topics and cultural topics. Over there, we tend to
apply it more to techno history. So if you like
what we do here, you'll like what we do there.
Go check it out, subscribe to Invention, and rate and

(53:53):
review us wherever you have the ability to do so.
That helps us out immensely. Yeah, oh huge, thanks as
always to our excellent audio producers Alex Williams and Tory Harrison.
If you would like to get in touch with us
directly with feedback about this episode or any other, uh,
to suggest a topic for the future, or just to
say hello, let us know how you found out about
the show, where you listen from, all that stuff. You

(54:14):
can email us at blow the mind at how stuff
works dot com. For more on this and thousands of
other topics, visit how stuff works dot com.
