Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb. And I'm Joe McCormick.
Today is going to be part two of a two
part episode on the illusion of explanatory depth. So if
(00:24):
you have not heard part one yet, it is sort of foundational to the research we're gonna be talking about today, so you should go back and listen to the episode before this one, the first one, about how we really don't understand half of what we think we do. Right. Yeah. I was
reminded in researching this of a particular episode of Adventure Time,
the Fabulous Cartoon Network animated series, in which they encounter
(00:48):
a demon cat. It's kind of a riff on a Dungeons and Dragons displacer beast. It's voiced by Clancy Brown. Clancy Brown, the guy from, what, uh, oh, I'm thinking Shawshank Redemption. Highlander. He was the Kurgan. He's the villain in both movies. And oh yeah, I mean, one is a little more evil than the other. Well,
(01:09):
I mean, I guess it's all subjective. But in this particular piece, he's a demon cat, and the demon cat proudly informs the heroes that he has approximate knowledge of all things. And that's how I often feel here at HowStuffWorks. That is what I'm afraid of. Yeah, I mean, it's kind of what we are: jack of all trades,
(01:30):
master of none. I mean, we're not experts in any given topic, but we are continually diving down, often rather deep, into a variety of topics. You have to be explanatory generalists. And it means, yeah, it means we actually develop expertise in no one thing, except maybe, hopefully, in the process of explaining. But we'll see.
(01:53):
I mean, that process is sticky enough, as we discovered last time. So a brief refresher on what we covered in the last episode: it's this idea of the illusion of explanatory depth. This big two thousand two paper basically showed that people display different levels of accuracy in how confident they are
(02:13):
about their own knowledge in different knowledge domains. So that sounds kind of abstract, but here's what it means. People are pretty accurate in guessing how well they know narratives like movie plots. You can be pretty accurate in saying, I think I know that about a four out of seven, and then you probably do know it about a four out of seven. And the same goes for procedures like how to tie a
(02:34):
bow tie or how to make a pizza. They're a little bit less accurate in how well they know facts like the capitals of countries, and they are much less accurate in their ability to explain the workings of complex causal systems, or what has been called theory-like knowledge.
Can you explain how a toaster works, or how a
(02:55):
cylinder lock works? Or natural phenomena? Can you explain how
tides work or how rainbows are formed? And we just
tend to systematically overestimate how well we understand these latter
types of things. But the research has also shown that
we can be made aware of our own lack of
understanding in a very simple way, just being asked to
(03:17):
explain them. So you think you understand how a cylinder lock works? Can you please explain it? And then you say, oh, yeah, okay. And then you try to explain it, and then if you're asked to re-rate your confidence in how well you understand it, you will rate your confidence lower after having tried to explain. You'll realize there are big gaps in your understanding. Yeah. I mean, the example that
(03:40):
I threw out in the last episode was, you know, something's wrong with the sink. You get out your toolkit because you think you can fix it yourself, and then you quickly realize, oh, my understanding of how this sink works is not really sufficient for the task at hand. Yeah, whoops. Yeah, you realize you've bitten off more than you can chew. Uh. Yeah. And this again seems
(04:02):
to be mostly unique to explanatory knowledge: how complex causal systems work, like how machines work, how natural phenomena work, and maybe some other things we can talk about in this episode, maybe like how policies work. But the same thing does not happen when people are asked to rate and explain how to do something, or to recall the
(04:22):
plot of a movie they've seen. It's specifically with this explanatory understanding of how things work. Now, one of the first things I think we should look at today is how the study we looked at last time talked about adults. But it might also be interesting to ask: is this same thing true of kids in second grade,
(04:44):
fourth grade? Kindergarten? I mean, Robert, you often have wonderful insights about the minds of children, maybe drawn from experience. Oh yeah, well yeah, my son Bastian, it's constant questions about how things work and what things are and why they are that way. Uh, almost to the point of insanity on the parent's part.
(05:05):
But yeah, I'm constantly having to explain things to him. Do you think Bastian would be very confident in his own ability to explain how a toilet works or how a cylinder lock works? Well, he'll occasionally have a bit of overconfidence in his understanding of something when he tries to explain it to us, but then when we correct him on it, he
(05:29):
generally goes with it. He always goes with it. He's willing to admit, oh, I guess I don't know how that works. But like the toilet scenario,
if I were to say, do you know how a
toilet works, and if he said he did, and then
he couldn't explain it, his response would be, let's go
look in the toilet. That's awesome. Well, yeah, that's a
great instinct. Except the sad part is, and this is
(05:49):
one of the realities of parenting. Uh, I told myself this before my son came into our lives, that I would answer all the questions, that I would have the patience to do it. And it's a wonderful thought, but the reality is you just don't. You don't have the time.
(06:09):
Or necessarily always the understanding, right? Right. Well, but like with the toilet, the first two times he asked to look into the back of the tank, to take the heavy top off and see the float system and all, I obliged, because it was fun. But it comes up again and again: hey, can we go take the toilet apart? And it's just
(06:30):
often not time. Well, I think that is an admirable curiosity, to see the guts of the machines that sustain our everyday lives. Uh, yeah. But so my question also would be, you said he's okay to be corrected when you tell him, no, that's not how something works, here's how it works. Does he catch himself, like was described in the experiments here? If he's forced
(06:52):
to explain, does he realize in the process of explaining that he doesn't know? Not necessarily. He doesn't really throw out a bunch of really robust explanations for things, but he'll have sort of a rough one-to-three-point explanation of something. And sometimes he's pretty accurate, uh,
(07:15):
and we have to say, oh, well, that is basically how this thing works. But other times we're like, oh no, no, you're missing a major component here. Well, I think he might be falling in line, actually, with some research we're just about to look at. So one of the authors of the original illusion of explanatory depth study, Frank Keil, along with the psychologist Candice Mills, authored another study published in two thousand four in the
(07:38):
Journal of Experimental Child Psychology, and what they were looking for was to see if they could find evidence of the illusion of explanatory depth in children, the same way Keil and associates had found it in adults. And so, uh, yeah. So this was published in two thousand four, and they start by observing that young children have a lot of metacognitive shortcomings. That's not an insult to rag
(08:00):
on little kids, like, kids are so dumb, but they're very bad at predicting how well they will do at mental tasks. One example that the authors give is that they tend to be overconfident in their abilities. For example, preschoolers and kindergartners will tend to believe that they will be able to recall more than a dozen
(08:20):
items from a list, but then they can only recall
two or three. Yeah. I have encountered shades of this with,
for instance, the question, hey, if you go with us
on this trip, will you be able to walk everywhere
because Daddy's not gonna be able to carry you? And
he might say yes, but then when it comes down
to the actual walk, uh, he's asking to be carried. Yeah.
(08:42):
I guess you could categorize that in a few different ways, but it might line up with what we're talking about: motor categories versus mental. Do you see the same thing with purely mental tasks? Or I guess it probably doesn't come up that often. I guess not. We tend to know what he's capable of. I guess the thing that comes to mind would be his ability to sit still and maintain attention on something. But it's difficult
(09:06):
because it's not like we're saying, hey, you ready to go see the Nutcracker and sit there for two hours, and him saying yes, of course. Like, we know he's not gonna sit there for two hours. I'm gonna have trouble sitting there and watching the Nutcracker for two hours. I guess for me, it would depend on how monstrous the costumes are. Yeah, but then you're only going to get monsters in Act One. Act Two of the Nutcracker is just a bunch of silly dances. Oh, you
(09:28):
know what, I just had an illusion of narrative depth there, where I thought I remembered what's in the Nutcracker, but I opened my mouth to say, and then I'm like, wait, what does happen? Well, because you remember the Act One stuff. That's where all the action is. That's where there's a rat king and sword fights, and then the rest is just, you know, sitting there waiting to go home. Well, I'm sorry, I'm being unfair, uh,
(09:49):
to a wonderful work of Russian art here. Okay, we should get back on track with the study. So the question is, do children show the same illusion of explanatory depth as adults? Or is it manifested slightly differently? Or do they not show it at all? Okay, now, in answering this question, we do have to just quickly remind everyone that children are not inhuman. They're born with a
(10:12):
lot of preloaded cognitive abilities. So every kid is kind of a natural Euclidean. Um, they're born to navigate a three-dimensional world of fixed and movable objects. I mean, those are just levels of cognition you need in order to live in the world. Uh, so you start utilizing geometry before you can even name stuff. And
(10:35):
then there's an innate understanding of basic physical laws. So only adults really believe in magic, uh, while a toddler will see right through all the supernatural. There's an MIT study that even came out that found that young children understand that teleportation is not feasible. So kids essentially have to learn that kind of malarkey
(10:59):
over time. But they're born having a certain idea of how the world works at a very basic level. Well, I mentioned in the last episode the idea that in a lot of cases there is no such thing as magical causation. I mean, mentally. I'm not saying in the real world. I mean, you're not even able to imagine magical causation, because if you're imagining causation,
(11:24):
it becomes in some sense physical and not magical. Magical
just means like the blurring of the concept of causation.
And so I wonder if what it is is that kids have this idea of causation, and as they
grow up they learn to make an artificial distinction where
there's this other thing, magical causation, which in fact is
(11:46):
just not an intuitively real concept. Yeah. It kind of goes back to the helium balloon magic example that I shared in the last episode. Yeah, my son used the word magic to describe it, basically as a descriptive term for something behaving in a way that he did not predict. Yeah, it's causally vague. Yeah. Um, so yeah. So what happened in this experiment? Well, the
(12:08):
experimenters used a modified version of the device test from the original research, if you remember from the last episode. Uh, they wanted to test the illusion of explanatory depth in how well people think they understand devices from around their home, like a toaster or something like that. Uh, and they tested this in a group of kindergartners, second graders, and fourth graders. And then they also recruited adults to independently
(12:32):
rate the explanations given by the children as a measure
of sort of the directional accuracy of the children's adjustments
of their own confidence after giving the explanations. So, uh,
for example, the kids would give explanations of how a
toaster works. And because these were kids, this was done orally instead of in writing. So, I thought these were good
(12:54):
enough to read. Maybe a kindergartner explains how a toaster works: You put something in it. I love "something." You put something in it, and then you press a button, and then you press the button, push it down and leave it there, and then it heats, and then it comes up. Alright, well, yeah, fairly accurate, and not
(13:15):
a lot of detail about the parts and what they do, but okay. Uh, a second grader says: Well, you put the bread in and you push this little lever down, so then there you go. It'll heat rays inside, and it'll make the bread really, really hard and stuff, and it'll just pop out. Okay, that's maybe a little better. And I wish I worked in a restaurant now, because
(13:37):
I would want to make that the terminology in our kitchen. Like, hey, make that bread really, really hard. And then the fourth grader says: Okay, a toaster is made by electricity. You plug it in, there's a cord, it comes electricity, and then you put the bread in. Then you press the button down. When you hit that
(13:58):
button all the way down, red lights, which is heat coming out, which is from the electricity, and it heats the bread, and when it comes out, it's toast. Okay, so they're starting to get there. Yeah, I mean, that explanation, I feel, has some problems. Um, I feel like it's at once more accurate and more confusing. I think that's the process of growing up,
(14:20):
isn't it? Yeah. Well, yeah, and I think that's probably my process with a lot of things we research here, when I'm thinking, this is making a lot more sense, and it's raising so many additional questions at the same time. Gaining more correct knowledge and becoming more confused. Yeah, and this will come back again when we get to politics and policy in a bit. Right. Uh, so the
(14:42):
results from this experiment: what happened when they essentially ran the same test as the original study from two thousand two on these children? Um, well, the older children definitely showed an illusion of explanatory depth. The second graders and the fourth graders showed clear evidence of the illusion of explanatory depth. The younger the children,
(15:06):
the higher they rated their own understanding, by the way. So the kindergartners rated their own understanding of how a toaster works the highest. Uh, kindergartners did overestimate their understanding, meaning independent judges read their explanations and rated them lower than the kindergartners rated their own explanations. But the kindergartners were much less likely to recognize this fact upon
(15:30):
being forced to give an explanation. Unlike most adults and the second graders and fourth graders, sixteen of the twenty-four kindergartners appeared to just remain oblivious to the fact that their explanations were shallow after they gave them. So I thought, that's interesting. By second or fourth grade, you experience this effect where you think you understand something,
(15:53):
somebody asks you to explain it, and then you realize you understand it less than you thought. Apparently kindergartners don't have that realization. They just give a not very good explanation and they're still pretty confident. Huh. You know, I wonder, to possibly put a narrative spin on this. I'm reminded of a scenario that I encounter with my own son, and that my mother, who is
(16:16):
a kindergarten teacher, has encountered with kindergarten students and this,
and I'm sure parents out there can relate to this.
But the kid comes home from school, you ask, what
did you do at school today? And the answer is
nothing, or I don't know. And it's confounding for an adult, because you're like, how can you
(16:37):
not know? Of course you know, you were there, like, just half an hour ago. Uh, so I wonder if part of that is, like, as you get older, you're more willing to just fall back on a very vague idea of what the narrative was. Whereas when you're younger,
when you're a kindergartner or younger, you're more inclined to just say, I don't know. I don't know what I did today. And you know, as adults, maybe we should be more open to that kind of self-reflection. I don't know what I did last week. Oh, that's true. You probably don't. Yeah, you might remember a few things. Yeah.
But if you had to create a timeline, I mean,
we see this all the time in cases where people are called upon to create an alibi in
(17:22):
criminal trials, and they realize, oh, I have no clue. This was like a month ago. You out there listening: what were you doing four Saturdays ago at four p.m.?
People might actually remember that, because it might have been a holiday. I'm not sure. Anyway, okay. So they did a second study in the same research, and it was
(17:43):
the same kind of control for domains of knowledge that
we saw in the research from the last episode. So
they ran the same test again, but instead of asking
them to explain how a device works, they asked them
to explain how to do something, to look at a procedure. Um, so instead: how do you make a cheese pizza? You know, for second graders, fourth graders, kindergartners.
(18:05):
Um, how do you change a flat tire? How do you
you catch a fish with a fishing rod? And the
results were just like with adults: the kids did not show the illusion-of-depth pattern for procedures. In fact, after giving the explanations, their ratings of their initial knowledge were adjusted upward. And that was the same thing we saw with adults. So, kids and adults both: they don't overestimate
(18:26):
how well they know how to do things, though they
might overestimate how well they understand how external things work. Well.
That would line up with my relationship with Bastian, because he's more inclined to say, I don't know, if it's, what did you do today? But if it's something like, how do volcanoes work? He doesn't really have a firm knowledge of how volcanoes work,
(18:48):
but I'm sure he could go on and on about it. Right. Okay. So that's some more replication, and it might give us some things to think about with the raising of children, and how we think about the education of the young members of our species. But we should look at, perhaps, the education of the adults of our species,
(19:10):
because adults, they have the power to do things with
their understanding of causal systems in the world, and in
this sense, the illusion of explanatory depth could actually have
many potential applications. For example, here's a very quick one.
In marketing and consumer behavior: research indicates that people's willingness to spend money to buy a product, I saw this
(19:33):
reported in one of the papers we're talking about, their willingness to spend money on something is related to their belief that they understand how the product works. So, given our differential confidence in understanding based on different types of knowledge domains, uh, and the fact that you
can trigger people to realize their overconfidence by forcing them
(19:53):
to explain it, that could have some real impact on stuff like marketing and consumer behavior. But another potential, and probably much more important, application would be in political extremism. Yes. So perhaps a lot of you are like me. I don't like to engage in political arguments, not with friends and not with family, because arguing about politics is
(20:16):
not terribly fun for me personally, and usually not terribly effective. Yeah, no one's mind is ever changed, especially the more strongly the opinion is held. I wouldn't say ever, but almost never. Almost never, yeah. I mean, nothing's really going to come of it. Uh, and you know, there's a good chance the conflicting argument here isn't even about the thing you're arguing about. You know, there's
(20:38):
some other underlying thing there, or some unspoken assumption about national character, human behavior, what have you. Yeah, the supposed issue under debate is actually just a battleground where you are confronting people with, I don't know, different feelings about different values that go unstated
(21:03):
in the conversation. Yeah. And there's a good chance, and this goes across, this is a bipartisan observation, there's a chance you're not even arguing with that person. You're just arguing with, essentially, bullet points that were covered by a media personality, or even something that came up in a news article, and they're just kind of regurgitating the information. Yeah. This is one of my least favorite
(21:25):
things about political debates: that we tend to argue not with the person sitting across from us, but with people like you. Isn't that horrible? You know, this is how people like you think, and I'm going to argue with people like you instead of with you. Yeah, and you just have Yooks and Zooks out there arguing about which side of the toast the butter
(21:47):
goes on. And plus, on top of all of this, of course, so often the topic isn't even that cut and dried, right? I refer our listeners back to our episode on wicked problems, which gets into how so many of the big problems in society are so complex that any attempt to correct them just creates more problems, etcetera. It's a messy affair. Right. So, in the sense
(22:08):
of tying into our topic today: people like you and me, but especially those you might wish to avoid an argument with, often hold extreme opinions about complex policies, and they mistakenly think they understand the causal processes underlying those policies. Yeah. We have strong opinions about things even without really
(22:31):
understanding those things super well, without understanding the uncontroversial, factual character of those things. Right. Yes. Like, you might have a very strong opinion about Medicare. I don't. I just made that up, because it's a complex government instrument. Um, but if somebody asks you to explain how Medicare works,
(22:53):
you'd be like, well, um, there's the government, and you, you know, whether you're for or against it. You know, it's much easier to generate an opinion than it is to comprehend a complex causal system. Right. Like, I'm always reminded, with politics in general, but especially on this topic, I'm reminded of
(23:13):
the Simpsons Treehouse of Horror episode in which the aliens Kang and Kodos replace Bill Clinton and Bob Dole during the election. It's one of the finest moments in the history of the Simpsons. Yes. There's that wonderful moment, I believe it's Kang, and I forget who he's pretending to be. I think he's pretending to
(23:34):
be Dole, okay. And he boldly announces, I think they're doing a debate, and he says, abortions for all. Boo. He says, very well, no abortions for anyone. Boo. And then he thinks, and he says, abortions for some, miniature American flags for others. And that one goes over. The crowd is pacified. Now,
(23:54):
Kang had an excuse for not understanding the human complexity, right, because he was a tentacled alien. Uh, not every political candidate has that handicap on their performance. But each attempt here by Kang at policy seemed like a decent solution. Now, the rest of us, voters, non-voters, and even some
(24:15):
elected officials, aren't much better off. American voters have an amazing ability to maintain strong political views concerning complex policies and yet at the same time remain relatively uninformed about how such policies would bring about the desired outcome. Right. Your opinion is very strong, but you can't necessarily demonstrate the factual knowledge on which your opinion is supposedly based. Yeah. Well,
(24:38):
like, without even drawing any specific examples, I'm sure listeners out there, no matter which side of the vast, gaping political divide you reside on, can think of an example where the other side has presented a simplistic solution for a complex problem, and the people on the other side, you know, the butter-side-down folks,
(25:01):
they seem to believe that this will fix it, contrary to our understanding that, yeah, that's not really how you fix complex problems. And given the grammar of what you just said, of course, your side does that too. So what's going on here? I mean, you can't just say we're dumb Americans brainwashed by reality television, because that
(25:22):
too is boiling it down to a rather simplistic explanation. And this is where we get into the idea that maybe it is the illusion of explanatory depth that could be contributing to this type of extremist opinion holding. Right. And so that's where this paper comes in: published in two thousand thirteen in Psychological Science, quote, Political extremism is supported by an
(25:43):
illusion of understanding. And this was by Fernbach, Rogers, Fox, and Sloman. So what did they do in this experiment? Well, they set out to see if people really do have unjustified confidence in their understanding of complex policies, and to see if this in turn contributes to polarization. It's the same premise that we've been talking about, except
(26:04):
with causal systems instead of machines. Okay. Well, it would be a different kind of causal system, right? So, policies, or proposals for things that should be done in a country, things that are in their own way a kind of machine, but not a physical device or a natural phenomenon. And this in turn would
(26:25):
affect preferences and behaviors. So if this is all the case, if we can really look to explanatory depth for our solution here, then merely asking people to explain the mechanisms behind their policy ideas would decrease their sense of understanding of those ideas and force them to express more moderate political views. So this would be just like
(26:48):
the Kang scenario. Kang says abortions for all, and he's immediately shouted down, and he realizes, whoa, maybe I don't have a handle on this human abortion topic quite like I thought. I'd better go in the opposite direction. Yeah, yeah, and it leads to this correction effect. Now, part of what they're saying also is that they're drawing on this
(27:08):
idea that decreasing a person's subjective sense of understanding on
a topic will actually lead to moderation. And so this is drawing, of course, on the research on the illusion of explanatory depth. But they also mention that it's drawing on research they cite giving evidence that, quote,
(27:30):
policy when they have less confidence in their knowledge about it, right,
You're more likely to change your opinion when you're less
confident that you know about the subject of the opinion.
Note the opera operative word here is confidence when they
have less confidence in their knowledge, not when they actually
have less knowledge. This it doesn't address how much you
(27:52):
actually know. But if you don't think you know as much about the subject, you might be more persuadable in your opinion about that subject. Uh, and this is the same gap explored in the illusion of explanatory depth research. So that's the basis of their investigation. And the researchers went into this realizing that this theory, and
(28:13):
we'll talk about this in more depth here, but they realized that this kind of runs counter to research that had previously shown that people tend to double down on their crazy ideas, their extreme ideas, when confronted about them. Yeah, sort of the backfire effect. You might have heard about this in politics. If somebody has an extreme opinion, or has an opinion at all, and you try to present them with counter-evidence against their
(28:35):
opinion, or ask them to state reasons for their opinion, or do any kind of confrontation like that, people tend to become more extreme. Right. Uh, an example that has been brought up before is when someone is conned by a con artist. Oh yeah. And you would think, oh, well, the con artist has been denounced, so you're going to denounce them too, right?
(28:58):
put into supporting the con artist, though, they might just double down completely and say, no, they're absolutely right, this is complete malarkey you're throwing out here. Yeah. Well,
I mean, in that case, there's also a sunk cost fallacy involved. Like, you've thrown in with a con artist. You kind of don't want to accept the possibility that you have squandered all of this
(29:20):
time and money and personal reputation getting hoodwinked. Uh, so, yeah, you pretty much have no choice. You've got to double down. You've got to make... yeah, that's not coming off right. Uh, so, yeah. So often, it's hard to talk somebody out of an
extremist position. Confrontation often just leads to them either staying
(29:44):
where they are or becoming more extreme. So the real question is: can we exploit the illusion of explanatory depth, the fact that asking people to explain makes them realize that they understand mechanistic processes less than they thought? Can that perhaps change people's opinions? Well,
(30:05):
let's take a quick break, and when we come back, we will see if this holds any water at all. Alright, we're back. So let's look at the first experiment in this study. Okay. So participants were
asked to rate their understanding of six political policies. One
group of participants provided ratings of their positions both before
(30:28):
and after they generated mechanistic explanations. Okay, so this is a one-to-seven thing, as in previous studies we've looked at: one, strongly against; seven, strongly in favor. And if you're wondering about the policies, they were: Iran sanctions, raising the Social Security retirement age, single-payer healthcare, cap
(30:49):
and trade for carbon emissions, a national flat tax, and merit-based teacher pay. Okay, I think those are good examples, because they're all things that you can easily find a lot of people having strong opinions about, for or against. But they're also complex in terms of detail and in terms of effect, and so a lot of people might not actually know very well how these things are supposed
(31:10):
to work. Yeah, and there's some sort of hidden complexity, I think, to all of those, too. So after this, they were asked to quantify their own level of understanding of these positions. A lot of this is gonna sound familiar, because it's following very similar methodology. And finally, they were asked to provide that mechanistic explanation, and they were asked to then re-rate their understanding
(31:32):
of the policies. So exactly the same essentially narrative flow
to this experiment as we encountered with previous experiments. As expected,
post explanation ratings of understanding were lower than pre explanation ratings. Okay,
so that's the mechanism we've seen before, the
illusion of explanatory depth, exposed by you trying to give
(31:55):
an explanation of a thing. And the same also proved
true with the differences between position extremity scores, though
the authors point out that the social security and merit
pay issues differed the least. Judgments made after explanations were
less extreme than were judgments made before explanation. So
here's their quote, and this is a quote: Our interpretation of
(32:18):
this pattern is that attempting to explain policies made people
feel uncertain about them, which in turn made them express
more moderate views. Well, that is interesting. Now, one thing
we should say, and it's gonna apply to a lot
of stuff throughout the study, but it's
just worth noting that even if the statistical effect shown here
holds true in general, and the study's
(32:39):
results are valid and correct, the effect is not drastic.
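By the way, a concrete way to picture what moderation means in a study like this is to sketch it in code with made-up numbers. This is purely illustrative, not the study's data or analysis: extremity is just the distance of a one-to-seven rating from the neutral midpoint of four, and moderation shows up as that distance shrinking, on average, after the explanation attempt.

```python
# Illustrative sketch of how attitude "moderation" can be quantified.
# The ratings below are invented for the example, not from the study.

def extremity(rating, midpoint=4):
    """Distance of a 1-7 position rating from the neutral midpoint."""
    return abs(rating - midpoint)

# Hypothetical position ratings for five participants, before and after
# they attempted a mechanistic explanation of the policy.
pre_ratings = [7, 1, 6, 2, 5]
post_ratings = [6, 2, 5, 3, 5]  # shifted slightly toward the midpoint

mean_pre = sum(extremity(r) for r in pre_ratings) / len(pre_ratings)
mean_post = sum(extremity(r) for r in post_ratings) / len(post_ratings)

# A positive difference means positions moved toward the midpoint on
# average after the explanation attempt, i.e., they moderated.
moderation = mean_pre - mean_post
print(f"mean extremity before: {mean_pre}, after: {mean_post}")
print(f"moderation effect: {moderation:.1f}")
```

With these invented numbers the mean extremity drops from 2.2 to 1.4, a modest shift rather than a wholesale conversion, which is roughly the shape of the effect the study reports.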
People aren't just, like, floored by the shallowness of their
understanding and completely converted to the opposite view,
or to an undecided position. But it does appear to
show a modest moderation effect. People reduce the extremity of
their opinion. Now, one question that remained here
(33:01):
for the researchers was, are we sure that it's
their attempt at mechanistic explanation at work
here, and not merely reflection, greater consideration of the topic? Right.
So what if instead of making them explain the mechanics
of raising the social security retirement age, we
just asked them to talk about the idea of raising
(33:22):
the social security retirement age? Would that do the same thing?
Or have them give their reasons, or something like that? All right,
so that's where experiment two comes in. Similar approach, except
one group was asked to explain why they held their positions. You know,
prior research has suggested that when people
think about why they hold a position, their attitudes tend
to become more extreme. The researchers predicted less attitude moderation
(33:45):
in the explain-why group than in the explain-how group.
And this is what they found. They said, quote, Experiment
two replicated the results of Experiment one and
showed further that reductions in rated understanding of policies were
less pronounced among participants who enumerated reasons for their position
than among participants who generated causal explanations for them. Okay, so,
(34:08):
stating the reasons why you think we should or
shouldn't raise the social security retirement age moderates your
position less, but explaining how that process would
work moderates your position more. Right. Now, they didn't find
that enumerating reasons for supporting a position led to an increase
(34:29):
in extremism. They said, although an analysis of individual reasons
suggests that it did increase overall attitude extremity when participants
provided a reason that was an evaluation of the policy. Okay,
so that's also not quite in line with some
previous research that said if you give reasons why you
feel a certain way, that makes your opinion more extreme. Yeah,
(34:51):
that wasn't found here. Now, I'm reminded in all of
this of a moment in the wonderful film Return of
the Living Dead. Oh, it's one of my favorites. Wonderful
Clu Gulager. Zombies. Yeah, fabulous film. You know, one
of the best zombie films. The zombies talk in this one. Yeah,
that's what makes this one unique. I mean, the zombies
actually start speaking. They're not especially profound, or maybe they are,
(35:14):
I don't know. Yeah, the zombies can talk. And
so, Robert, you were thinking of this particular scene
where they interrogate a zombie. Yeah, there's a character named
Ernie, and he has like half of a zombie
on the table there, and he's asking it questions,
and he says: You can hear me? Yes.
Why do you eat people? Not people. Brains. Brains only? Yes.
(35:40):
Why? The pain. What about the pain? The pain of
being dead. It hurts to be dead. I can feel
myself rot. Eating brains, how does that make you feel?
It makes the pain go away. Okay, wait a minute.
So they're interrogating it there. Almost like, could you
(36:01):
reduce the extremity of the zombie's position by forcing it
to give a mechanistic explanation of how eating brains reduces
the pain of being dead? Maybe. But here in
this case, is the zombie really explaining? I don't know.
The zombie is forced to provide some level of self-reflective
explanation of its hunger for human brains. But it's
(36:23):
not really mechanistic. It's merely an explanation of
why the zombie holds the position that it does. So
if it's just giving reasons, it might be
staying where it is or actually becoming more extreme. Yeah.
So maybe what you should do, if you want to
get moderation in the zombie, is get the zombie to
explain the, I don't know, the biological process by which
(36:46):
eating brains reduces the pain. Yeah, explain to me how
that could possibly work, zombie, because I'm drawing a
blank. And then present it with a nice, you know,
soft, pre-shaved forehead and see if it goes after
the brain. It might be a little less likely to.
Who knows? Well, that brings us to experiment three, outside
of the zombie film, because, you know, we
(37:10):
can go back and forth on whether the zombie
would eat the brain. But more important here would
be, would the reflective voter vote differently based on all of this?
Might they choose to donate differently to campaigns or organizations? Okay,
so this is tracking the fact that how extreme
you report your position to be might be different from what
you would actually do based on your political feelings. Yeah.
(37:33):
So the same steps were taken as in the
previous two experiments, only this time the subjects were asked
at the end of the experiment to choose whether or
not they would donate a bonus payment to a relevant
advocacy group. I believe the bonus payment was twenty cents.
And they found that, yes, attempting to create a mechanistic
(37:54):
explanation resulted in a lower likelihood of putting money behind
that cause via a donation. Okay, so
if you are strongly against the flat tax, and then
you are asked to explain how the flat tax would
work, and you have to give that mechanistic explanation, you're less
(38:16):
likely to donate money to organizations that advocate against the
flat tax than someone who was against it and didn't
have to explain how it worked. Yeah,
it would be like, I don't know. Let's say
you're in ancient Egypt, and a pharaoh says, we're
gonna build a pyramid and this is going to solve
our problems. And you're like, yeah, build that pyramid.
(38:38):
I'm going to donate, you know, X amount of labor
or whatnot. But then if you were asked, actually,
can you explain how building the pyramid is gonna help
everybody out? And then you draw a blank, and then
you think, well, maybe I'm not gonna donate as much
of my nice service here. Now, I wonder there, because
if you say how the pyramid is going to
help people, that seems like it may enter into some
(39:00):
value judgments. Well, but that might undercut the mechanistic explanation effect. Well, yeah,
I guess I should be more clear and say,
explain how the building of the pyramid is
going to achieve the stated goals. Yeah, what will
it do? Yeah. And, you know, if the
answer is not that compelling, or it seems more
complex than one has a grasp of, then yeah, it
(39:22):
might make you a little less supportive of it in
deeds, or certainly money. Yeah, though I would also
certainly guess, just intuitively, that mechanistic explanations of how something
will work are different from mechanistic explanations of something
that already exists, whether proposed as a
(39:42):
policy and written down, or something that exists in
nature or as an artifact. So in this study, they
summarized it by saying, quote, explanation generation will
by no means eliminate extremism, but our data suggest
that it offers a means of counteracting a tendency supported
by multiple psychological factors. And that tendency, of course, is
(40:05):
political extremism. Which is ultimately heartening, because it gives
us the message that, hey, if you don't want people
to hold ridiculously extreme views on things that are ultimately
going to be hurtful and harmful, where nothing good is
gonna come of them and they're not actually gonna
help any of the problems they're attempting to solve.
(40:26):
It's a matter of education. It's a matter of
presenting people with facts, or at least not even that,
but just making them question. Yeah, not presenting them with facts,
just asking them to explain, not the politicized parts,
but the pure mechanical parts of how what they're saying
will work. Yeah, to just think about it. Think about
(40:47):
the topic, think about this proposed solution and the problem
that it's supposed to solve, and ask some critical questions.
Critical thinking, who would have thought it? All right, we're
gonna take a quick break, and when we come back,
we'll look at a couple more topics before we
wrap up our discussion of the illusion of explanatory depth. Alright,
(41:09):
we're back. Okay. So, one topic that I thought of,
and it has come up in the research as
being related to the illusion of explanatory depth but somewhat
different, is the often cited, very interesting, but also
(41:31):
much misinterpreted and misused Dunning-Kruger effect. So you may
have heard about this before. I think we should do
a whole episode about it at some point, so we're
not going to get into too much depth about it here.
But it's one of those things that I find very interesting,
but am also very annoyed by much of the discussion
about, because a lot of it, I think, boils down to
people liking this effect too much. It's very interesting
(41:54):
in reality, but a lot of people seem to
like to bring it up just to
show that they are mentally superior to others. Yeah. Now,
if you're out there and you're saying to yourself,
I don't actually know what this is,
don't feel dumb about it. I actually wasn't super
familiar with it prior to researching this episode. So,
(42:14):
in brief, the Dunning-Kruger effect is all about cognitive bias.
The idea here is that relatively unskilled individuals feel a
false sense of superiority, as they mistakenly assess their
ability to be much higher than it actually is. Yeah.
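To put rough numbers on that pattern, here's a tiny sketch with invented figures, not data from Dunning and Kruger's paper: actual performance spans the whole percentile range, but self-estimates cluster in the upper-middle, so the overestimation gap is largest for the least skilled group.

```python
# Hypothetical illustration of the Dunning-Kruger pattern. All numbers
# are invented for the sketch; they are not from the 1999 study.

quartiles = ["bottom", "second", "third", "top"]
actual = [12.5, 37.5, 62.5, 87.5]      # midpoint percentile of each skill quartile
perceived = [60.0, 62.0, 68.0, 75.0]   # self-estimated percentile, compressed upward

# Positive gap = the group rated itself higher than its actual standing.
gaps = [p - a for p, a in zip(perceived, actual)]
for name, gap in zip(quartiles, gaps):
    print(f"{name} quartile: self-estimate off by {gap:+.1f} percentile points")
```

The point of the shape: the bottom quartile's self-estimate is off by the most, and the gap shrinks, and even reverses, as actual skill rises, because skill and the metacognitive ability to judge skill tend to travel together.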
So, the very brief summary, and we could get
into more detail in the future if we do a
(42:35):
full episode about this study and its critics and interpretations.
But the basic idea is that people who are very
unskilled at a particular type of task tend to judge
their own abilities as way higher than they are, because being
unskilled at the task usually comes along with a lack
of metacognitive ability. In other words, unskilled people tend to
(42:58):
be unaware of how unskilled they are, and underestimate
the level of expertise that is required to do
something well. Yeah, and this was
first observed by the study authors, David Dunning and
Justin Kruger of Cornell University. In 1999, they said, quote, we
propose that those with limited knowledge in a domain suffer
(43:21):
a dual burden. Not only do they reach mistaken conclusions
and make regrettable errors, but their incompetence robs them of
the ability to realize it. As you said, though, here's
the thing. It's very easy and very appealing to throw
the Dunning-Kruger effect around willy-nilly at everyone you
don't like, you don't agree with, anyone who seems to
be overstating their understanding of a topic. I think it's
(43:43):
a lot like labeling someone a sociopath. It's far too
easy to do if you have just a basic, you
know, surface-level understanding of the symptoms and characteristics. Yeah,
and that's kind of interesting, like, people shallowly engaging with
the idea of a psychological concept that
has to do with shallow understandings of things. Yeah, the
(44:05):
Dunning-Kruger effect lines up with the illusion of explanatory
depth in some ways and runs contrary to it in others.
But it's also susceptible to misuse, in
part because we suffer from the illusion of explanatory depth
on the topic. Yeah. Well, I think we should definitely
come back and do a whole episode about
(44:25):
this subject sometime in the future, especially because I've read
some interesting criticisms of the Dunning-Kruger effect, how
it's applied, and how it might not be all
it's cracked up to be. It's almost as if a
simplistic explanation for human behavior might have some
faults with it. Well, I don't want to entirely knock
it either, because I do think it's interesting research, and
(44:46):
it deserves our attention. But this seems to be
the way things go. If you have a, well,
let's not say simple, but let's say a nice,
streamlined theory for why people do the stupid
things they do, it's probably a little more nuanced.
But the wonderful thing about these theories is
(45:08):
they give us a starting point for the discussion, for
the further discussion of that thing. It's like nailing
a metal stake into the side of a mountain when
you're climbing. It's not the only one. You're gonna
have to hammer in more stakes to make
it to the top. But this is how
you scale the mountain of ignorance. That's a
(45:28):
nice analogy, Robert. Why, thank you. We're always scaling, aren't
we? Never reaching the top? Wait, no, sorry,
if it's the mountain of ignorance, maybe we start at
the top, and we're trying to climb down without falling
off of a precipice. Yeah, I don't know. I'm thinking
of a Divine Comedy version of that. Yeah. Okay. So
the last thing I think we have to do is
(45:51):
offer our own, just based on our reading and
our opinions, a list of advice on how to
avoid illusory understanding. Now, this is not like
a doctor-approved list. As far as we know,
nobody has come up with a foolproof
way to keep yourself from overestimating your understanding of
(46:14):
how things work. But, Robert, I think you and I
can get behind a few recommendations coming from us,
non-experts on this subject. Yeah, just ways to remain conscious
of how the brain works. First thing I would say
is that there's some evidence that simply being aware
of the illusion of explanatory depth does not destroy our
susceptibility to it. That's important to remember. Just because you
(46:37):
know about it now doesn't mean you're immune to it.
One example: back in the original study we talked
about in the last episode, many of the participants subjectively
reported, you might remember this, they said something like, oh,
if only I had gotten a different subset of the
devices on the list to explain, I would have done
much better. Though the effect presented broadly across all
(46:59):
the device cases and the different groups. Thus, even after
being made aware of the gaps in our knowledge and
the fact that we overestimate how well we will do
in explaining things, these people were like, oh, I
would have done much better explaining different things. It probably
wouldn't have been until they got in the moment, trying
to explain them, that they would have realized that they
(47:22):
couldn't do any better on those things than they did
on the devices they originally had to explain. They
thought it was a fluke somehow. So you are
not now inoculated having heard this. You're not immune. Right. Yeah,
it's not just magically going to dispel your
misinterpretation of your own understanding. One of the
(47:44):
main things I would recommend to avoid the illusion of
explanatory depth is: practice explaining things, and be sure to
put them in your own words. Yeah, this is
a big one, and, as we discussed earlier,
this is something that I find in my own
experience a lot, is that I'll be asked by my
wife what we're doing an episode on, and I'll have
(48:05):
to explain it in my own words, and that's sometimes
when I realize that I don't understand something well enough yet. Yeah.
Often I have the experience of, like, I read a
scientific article or something. I'll just read it, and then
I'll think, okay, I read that, I comprehended it,
I can explain it now. And then I
have this problem. I start talking, I get a few
(48:26):
sentences in, and I realize there are big gaps. It's like, wait,
there are parts I didn't understand there, and I didn't
even realize those gaps were there. I'm completely blind to them.
And the way to eliminate them is to essentially summarize
what I have read in writing, to write a summary
myself in my own words. What did I just read?
What was it about? What did it say? And then
(48:47):
the gaps in my understanding become clear and I can
fill them in. So I think that helps a lot
in becoming aware of the limitations of your own knowledge
and comprehension. One of the things coming from the interpretations
of the researchers themselves is: be wary of mental animations.
If you're imagining how something, especially something physical, works, just
(49:11):
because you can play a cartoon in your head of
how this device works doesn't mean that the cartoon you're
playing in your head actually makes causal sense. The
idea you have of something in your imagination is not constrained by
the laws of physics the way reality is. So we're very
apt to run a mental movie of how a can
(49:33):
opener works, or something, that makes sense in our imagination,
and it's not until we try to explain it that
we realize that we're missing parts and it wouldn't actually
work if we tried to put it together the way
we're imagining it. One more I would say is: be
wary of labels and vocabulary. This came up in the
first study also. Just because you know the names of
components doesn't mean you understand what the components actually do.
(49:57):
And I think that also applies to, I
didn't think of this earlier, but now it's
occurring to me, business jargon and terminology, sort of the
synergized backward overflow kind of thing, where you end up
throwing around these terms for things that maybe have
definite meanings, but then they get stripped of those
meanings through repeated use, and they just become kind of
(50:19):
pointless mantras that are, you know, thrown
back and forth. You know, with business terminology, I think
we talked about this in our euphemisms episode, where
I think a lot of the business jargon kind of
avoids saying bluntly things that would not sound so pleasant
if you said them bluntly. But another thing that I
(50:40):
think it maybe does is help give us an illusion
of understanding of the workings of complex systems. I mean,
a business is a complex system. It's
a machine. It's hard to understand how all the parts
are actually working, and even harder, probably, to predict its behavior.
But if you have technical-sounding names for things, and
(51:00):
you know lots of domain-specific labels for business terms
and business phenomena, it might help give you a sense
of understanding and control over a thing that is actually
a wild dragon, and you're just riding it. And then finally,
this one might sound kind of weird, but just
stick with me for a second here. I want to
(51:20):
see how well this would work. What about trying to
embody the causality of a process you're trying to describe? So,
when people are trying to describe processes of things that
they know how to do, procedures, they generally understand pretty
well how well they can explain them. They're pretty accurate. But not
(51:43):
so with explaining external events or external devices, like how
a camera works. So I wonder if that would change
if, when you're trying to understand how a camera works,
you imagine yourself as the light entering the lens, and
you sort of walk through the process in an
embodied, imaginative state, like going to all the places
(52:08):
inside the device and seeing what happens to the
energy and the matter there. I don't know if that's
really possible. Maybe that's just a really harebrained idea,
but I wonder if that would actually make a difference. Well,
I'm not sure about the camera, but
I think this is very valid with human anatomy.
Or at least I've found that in past episodes that
(52:28):
I've done, and articles that I've written, that have
to do with the functioning of various organs and systems,
I always fall back on the Fantastic Voyage scenario, or
the Innerspace scenario, of the miniaturized submarine inside the
human body, because it does put me there.
It transforms a distant, you know, small system into a
(52:53):
place that I can envision myself in, and that does help
in my situation. It helps me understand it. Like,
it can sort of mentally transform an external process into
a procedure. Yeah, I wonder about that. It might help,
It might be worth a try. Who knows? You
know, in all of this, I can't help but be reminded
(53:14):
of a much-touted quote from Timothy Leary. Of course,
most of you've probably heard this one before, but I
think it bears repeating, because it lines up directly with
a lot of what we're talking about here: that to
think for yourself, you must question authority and learn how
to put yourself in a state of vulnerable open-mindedness, chaotic,
confused vulnerability, to inform yourself. I mean, I guess
(53:37):
maybe what that would mean in this context is,
I mean, wanting to understand the causal mechanisms by
which things work, but also just recognizing and sort of
accepting that where you haven't forced yourself to make an
effort to understand things explicitly, you're going to be relying
on more magical understanding than you realize. Yeah, I mean,
(54:01):
in my own handling of of topics here at work,
I've I've tried to to put myself in in that space,
you know, and realize that. You know, however, I think
something works might not actually be accurate, uh, that there
there may be more to it. I mean, it's kind
of like the can open analogy. You keep mentioning that
(54:21):
the shows up as an example in these studies even
as you're mentioning it. I feel, on one hand, I
feel like I know how it can opener works. It's
the you know, the little tooth of metal sticking in there,
and then you you know, I can picture the scenario.
I have the mental imagination, and I, well, maybe you
do on that and I feel like I probably do.
But and then on the other hand, I'm willing to
(54:42):
I'm willing to admit that maybe there's something I'm missing.
Maybe there's an interesting physical property to the can opener,
or there's a there's some some sort of quirk of
physics at work, there's something, maybe there is some mystery
to the can opener and I and I want to
know more. And therefore I'm willing to admit, yeah, I
might not have the can opener down as a as
(55:02):
a human technology. Well, I think in this case, uh,
probably the best strategy for life is to take
a cue from your own story, to go
open up the toilet and see what's inside, to,
you know, interact with the mechanisms that
we think we understand. Always try to open them up
(55:23):
and see what's happening. Yeah, get hands-on with it.
One last thought that I thought might be interesting to
bring up: what is the biological
origin of illusory understanding? Like, would we say
that it's just a cognitive quirk, a byproduct
of other cognitive systems that we need in
(55:44):
order to survive? Or should we think of it as, well,
you know, most of our traits are in some sense
selected for, evolved. Is the illusion of explanatory depth
an evolved trait? Is it something that has some kind
of value in our lives, and is
it a necessary part of our minds, a trait with
(56:05):
real survival value? I would certainly say that it doesn't
pay to misunderstand the world around us. Like, I can't
see any way in which it's a good thing to
not know how things work. But perhaps, given the general
limitations of our understanding, given the fact that we
don't understand how a lot of things work, perhaps
(56:25):
it pays to operate with a sense of confidence that
allows us to interact with complex systems even when we
don't understand them as well as we think we do. Well,
It's like learning to use, say, a computer.
One of the problems that I've encountered before with
individuals, you know, particularly older family members, is
(56:47):
that they're scared of the computer, that they're scared
of doing something wrong. They don't have the blind confidence
that is necessary to just jump in there and make mistakes. Yeah.
And so I wonder if maybe that kind of blind
confidence is actually a trait that's selected for. Maybe
it has a lot of bad consequences. Maybe
it leads to political extremism, when we think we understand
(57:10):
how complex social phenomena and government instruments work better than
we actually do. Maybe it leads to misunderstanding of how
our technology is really functioning, and trouble with how to
fix it, you know, all kinds of problems
like this. But if we didn't have this trait, if
we didn't have this overconfidence about how well we understood things,
we might just be paralyzed. Maybe we couldn't interact with
(57:33):
the world, because we would never have enough boldness,
have enough heart, to just leap into things and live.
We wouldn't be foolish enough to be brave. Now, just
something to think about. All right, well, hey, think about
that, everyone. And in the meantime, while you're mulling that over,
you can head on over to Stuff to Blow Your Mind dot com. That
is our homepage, our mothership. You'll find all the podcast
(57:56):
episodes dating back to the very beginning of time. You
will find blog posts, videos. You'll find links out
to our various social media accounts, such as Facebook, Twitter, Tumblr, Instagram,
and who knows what else. And if you want to
get in touch with us directly, to give us feedback
on this episode or any other, or to request episode
topics for the future, or just let us know what
you think, you can email us at blow the mind
(58:19):
at how stuff works dot com. For more on this
and thousands of other topics, visit how stuff works
dot com.