Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick. And
today we're gonna be looking at understanding a little bit
of the gaps in our knowledge and our metacognition. And
(00:26):
it's going to be the first part of a two
part episode on the illusion of explanatory depth. So if
you like this one, you should also definitely come back
for the next episode we release next time, which will
be a follow up to what we're going to talk
about today, illusion of explanatory depth. So to put that
in simpler terms, we're talking about a situation where you
(00:48):
think you know how something works, you think you have
a working knowledge of the thing, but in reality you don't. Yeah,
and we've talked about the gaps between the feeling of knowing and the actual knowing before. This came up in the episode we did about the tip of the tongue state. You remember that? The sense that you know something is not necessarily coterminous with actually being able to
(01:12):
produce that piece of knowledge from memory. There's a gap
in your mind. So you think you know the name
of the actor who played Tywin Lannister on Game of Thrones. Do you, Robert? Oh, no, I can't. He has that weird name that I can never know, Tywin. Yeah, wait, which one is Tywin? No, I'm thinking of Jaime. I can never remember Jaime's name. What about Tywin? Come on, he's a Lannister. Okay, well, you got it. You
(01:34):
didn't. You didn't have that gap. Though some people might. There was a gap in there somewhere. I did feel a gap. If you're familiar with Game of Thrones out there, some of you were probably thinking, oh,
I know that name, what is it? What is it?
It was Charles Dance. Okay. The other possible, um, gap there is you hear Tywin, and then you're like, is it Tyrion? Is it Tywin? More names. Well,
(01:57):
Character actors often fall into this category. They're, you know, the actors you've seen in tons of movies throughout the years.
They become that tip of the tongue name where you
know the face. You know some movies they've been in.
You know you know the name, but you don't know the name at the moment. It's this: that guy. Oh,
what's that guy's name? Yeah, And so there's this gap,
there's this feeling of knowing, and there's the gap between
(02:17):
the feeling of knowing and the actual knowing itself. But
the interesting thing is that this gap can be applied
to other realms of knowledge. It's not just in trying
to come up with the name for a thing. For example, in a quintessentially HowStuffWorks move, I think we should look at the domain of knowledge that covers understanding how things happen, or really understanding how
(02:42):
things work in causal relationships, because of course we live
in a world of systems. The system is always trying
to get you down. But but there are causal systems
all around us: machines, the coffee maker in the office, the computer you're working on, and animals. Animals are systems of causal relationships. There are natural cycles, like you know,
(03:05):
the nitrogen cycle or the water cycle. Those are causal systems.
And then other natural phenomena, uh, tides, rainbows, I don't know, pooping. All natural phenomena. Well, these are all things that, I mean, too, we have to mention, of course, the famous quote from Arthur C. Clarke, right, that any sufficiently advanced technology is indistinguishable from magic. But you could pretty
(03:27):
much say that about, like, any system. Uh, if it's advanced enough and complex enough, and most systems are, uh, it can seem magical. In the fact that the sun rises in the morning, um, there's a magic to that. We've observed it as magic and felt it as magic since time out of
(03:50):
mind. Sometimes, even though we have the actual scientific explanation for what's happening, you still also have this magical version of the event, uh, paired right beside it on the shelf in your mind. Oh, totally.
I mean, we have strong intuitions to give magical or kind of fuzzy causal relationships. And it's
(04:12):
funny because one way of interpreting the idea of magic
or the supernatural is it's just causal indeterminacy, right? Like, what do you mean when you say something happened by magic or something happened with a supernatural cause? Essentially you're saying, well, the cause isn't clear. It's just kind of like getting vague about
(04:34):
what it means to be a cause. I like this. You know, my son, who's almost five. We try to explain how things work to him, as one should,
but he also has this concept of magic. It's very
loose concept. So the other day he had a new
helium balloon, and it was one of those, uh, those fancy shiny ones that you get. What's the material?
(04:58):
Mylar. Mylar? So it was a Mylar balloon, so it was lasting longer. He's used to
getting these cheap balloons and they the helium goes down
and they're on the floor, but this one was floating
the next day, and he said, hey, my balloon is still floating. Is it magic helium? Um,
Which I think was maybe like his definition of magic
is more in line with what you just said. There's
a mystery there. Uh, like he knows that
(05:20):
this helium is not behaving like normal helium that he's encountered, and he has no other explanation
for it. Yeah. But yeah, so you don't have to
at that point explain how the magic does what it does.
If you did, it would sort of stop being magic.
But yeah. So there are these systems all around us. We sort of naturally feel like they're magic, but
(05:41):
we can come to understand the causal processes that sustain them and that make them work. But as
we've said, understanding and the feeling of understanding are actually
separate things. And whenever you've got two different binary variables
like this, I think it's interesting to try to make the grid table, you know, where you've got one binary variable on a column and one on a row.
(06:02):
So you can think of things that we understand and
that we don't understand, and then you can think of
things that you feel like you understand or that you
don't feel like you understand. So there are things that
we understand and feel like we understand, like a hammer. Yes, you think you get it, you really do get it. Yeah, there's some very simple physics involved here. There's a definite
(06:25):
causal um process going on. Yeah. Then there's maybe how
microprocessor engineering works. That's one where you probably don't understand
it and you probably feel like you don't understand it. Right.
This is one of those where you have a problem
with your computer and you just tell your tech guy or gal, you say, it's all magic to me.
(06:46):
I don't know how this works. Can you help me
fix this problem? Right? So, those are the ones where
our understanding and our feelings are basically in agreement. But what about the other two boxes? What about things that you understand but you don't feel like you understand? That can actually happen sometimes, and I think it's often the starting place of a Socratic dialogue. You know,
(07:06):
the Socratic teaching method is where instead of telling students
what to believe, you ask them questions and sort of
lead them to understand that they already knew the answer,
but they just didn't know how to articulate it. And
so in that case, the child already understands, they just
didn't know how to put the answer with the question
in context. But then there's the other box, the things
(07:30):
you feel like you understand but you don't actually understand.
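The grid the hosts are building can be jotted down as a tiny lookup table. A minimal sketch (the dictionary and its layout are ours; the examples come from the conversation):

```python
# Keys are (actually_understand, feel_like_you_understand) pairs; values are
# examples from the conversation for each of the four boxes.
grid = {
    (True, True): "a hammer",                      # know it, and feel like you do
    (False, False): "microprocessor engineering",  # don't know it, and know you don't
    (True, False): "answers drawn out by Socratic questioning",
    (False, True): "tons of everyday stuff",       # the illusion-of-explanatory-depth box
}

# The illusion lives where the feeling of knowing outruns the knowing itself.
print(grid[(False, True)])
```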
And the research we're going to talk about today is
addressing how there is tons of stuff in this box.
This box is filled to the brim. Uh, toilets are probably in this box for you. What do you think? Unless you're a plumber, or you've
really done some work on your toilet. I bet toilets
(07:51):
are in this box. Yeah, I mean, they're fairly complicated little mechanisms, despite the fact that they maybe haven't, by and large, advanced as much as they should, because
it's kind of one of those technologies that we tend
to think, all right, it's good enough, and we don't
want to put too much extra thought into its design and function. Yeah, here's another one.
(08:12):
What about mirrors? Mirrors is a great one,
and I love this example. I think I've brought it
up before, but yeah, I think it's a perfect example
of an everyday object that we take mostly for granted,
but it is ultimately this insane, freaky mystery in our lives.
I mean, really, it's amazing that we don't just run
around constantly smashing them like maniacs. I think you, like me,
(08:34):
love a good creepy mirror story, like a haunted mirror.
What's the Stephen King one, The Reaper's Image? The Reaper's Image, fabulous short story, one of his best, in my opinion. Uh,
and there are tons of them. Lovecraft wrote one, Clark Ashton Smith wrote one. You could probably fill an entire
book with just creepy mirror stories, and then I would
buy said book. But wait a minute. Of course, we
(08:56):
understand how a mirror works. That's easy. It's just uh, well,
like the light goes in and then it comes back. Right. Well, yeah,
we think we have it under wraps, right,
because we encounter them all the time and we have
this sort of ubiquitous environmental knowledge of them. But when
we're put to the test, that often seems to be
(09:17):
uh, the case. We don't really understand how they work.
And I think this is why we have all these fictional tales about weird, creepy mirrors, because we need that cultural release valve, that psychic release valve for
our uneasiness about them. But in terms of just proving
this out, there was a two thousand five psychological study
(09:38):
from the University of Liverpool and they looked into this
and they asked participants in the study to consider a
draped mirror, so it's, you know, like a haunted mirror
that's been covered up to keep monsters from coming out
of it, and they had to predict at which points in the room they would be able to see themselves in the mirror if the mirror was uncovered. Okay,
so if you really had a solid understanding of what
(10:00):
a mirror is, of how a mirror works, you should be able to predict how you can use it. And they weren't able to do that. They weren't. Another thing they couldn't do is they weren't able
to grasp the fact that your reflection in the mirror
is always half your size, because the mirror is always
halfway between the viewer and the viewer's reflection. So they'd be asked, well, how big
(10:22):
is your head in that reflection, and they would
assume that it was the same size as their own head.
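The half-size claim falls out of simple similar-triangles geometry: your virtual image sits as far behind the glass as you stand in front of it, so the patch of mirror your face covers is always half your face's height, at any distance. A minimal sketch (the function name and the numbers are ours):

```python
def outline_on_mirror(object_height, distance_to_mirror):
    """Height of the region of mirror glass an object's reflection occupies.

    The virtual image is distance_to_mirror behind the glass, so a ray from
    your eye to the image crosses the mirror plane halfway along its path:
    the outline on the glass is object_height * d / (2 * d) = object_height / 2.
    Assumes a flat mirror and distance_to_mirror > 0.
    """
    return object_height * distance_to_mirror / (2 * distance_to_mirror)

# A 24 cm face traces a 12 cm outline on the glass, whatever the distance.
for d in (0.3, 1.0, 5.0):
    print(outline_on_mirror(0.24, d))
```

You can check this on a real bathroom mirror: trace the outline of your head on the glass with a dry-erase marker and step back; the outline stays half-size and keeps fitting your reflection.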
So I would have assumed, yeah. I mean, I really had to read that sentence a couple of times to be like, oh yeah, the mirror is
halfway between me and the spectral doppelganger that has his hair parted on the opposite side. Um,
(10:45):
But the study basically revealed that we tend to assume we know the size of the reflection. We tend to
assume that we know exactly how the angles work for
the reflection. Uh, we're terrible at determining what will be seen
in a mirror based on the observer's vantage point. And
a major example of this is the Venus effect that
we see in so many paintings. Venus. Okay, so you
(11:07):
have Venus in the painting. Venus is looking at her
face in a mirror, and we're looking at the painting
and we see Venus's face. But if she's looking at
her face in the mirror, how does that work? It's like, next time you're watching a TV show or a movie and there's a scene with a mirror, overanalyze it. Really think about where's the camera, where's the camera? What
(11:28):
are they looking at? It really begins to open up your eyes, to the point that you say, wow, I was completely hoodwinked by this, and maybe I
don't have the firmest idea of the optical scenario going
on here. Uh. Slightly related, Also, anytime you're watching a
movie where there's a mirror on the lid of a
medicine cabinet and the person opens the medicine cabinet and
(11:51):
then shuts it, be prepared to see another face in
the mirror behind the person when they shut the lid.
It happens every time. You know. One more, just very
quick optical example is just sight itself. I think we've touched on this, the idea that sight is something that leaves our eyes. Oh yeah, it's like laser vision. Uh,
This is one of those things like I talked about earlier,
(12:12):
where we have this magical unrealistic idea of how it works.
And even if you have the realistic idea of how it works, the idea that light is entering your eyes, you still end up thinking about the world in terms of the fictional scenario. I
think that's sort of a different gap because I think
most people do know really they know that the light
(12:33):
is entering the eyes, that nothing's going out. But you're talking there about the difference between what we
know and what we feel, and I think that where
those two converge, there's room for a lot of confusion.
I think that's absolutely right. Well, I think so today
we're going to look at the one big original study
(12:54):
in the illusion of explanatory depth, and then in the
next episode we're gonna look at some takeaways and
some applications from it. But so I guess we should
get into the study itself, right, Yeah, do you want
to take a quick break before we get into it.
I want to take a quick break, Joe, and then
when we come back, let's get into this study. All right,
(13:17):
we're back, all right. So this landmark study is called
The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth, published in Cognitive Science in two thousand two by Leonid Rozenblit and Frank Keil. And so they start by discussing
the idea of folk theories. Have you ever heard this
(13:40):
concept before, Robert? Folk theories or folk science. Yeah, this is just kind of, it's like folk medicine, right? There's not necessarily any science to it. It's just kind of the general understanding of
how something works or how it's supposed to work. Yeah,
it's what we come up with when our methods are
not rigorous. Essentially, it's what we all do sort of
(14:02):
intuitively all the time. And so they say, you know,
sort of a theory can be defined as a system
of ideas that are designed to explain something observed. The
theory gives an explanation, and theories are a totally common
feature of science and of everyday life. You know, we
we use theories all the time. They might not be
good or correct theories, but we're constantly having theories about
(14:25):
the explanation of the workings of objects and systems. A
great example of this is that blue blood in your veins.
Oh yeah, do you have an explanation for that? Well,
there's the, well, it's because it's deprived of oxygen, right, and that's why it turns blue. That's not correct, is it? No, it's not correct, but it's one of those that is often thrown out there, sometimes by very intelligent people.
(14:46):
You know, I don't mean to mention any of these as an example of intellectual failing, but they just pick up steam. They're passed around, and it's easy to go through life thinking that they're true. And that connects to another thing we should say.
This episode is going to be all about our cognitive
limitations and failures and overconfidence in what we know. But
(15:07):
this isn't to say that people are stupid or you know,
we're not accusing the people featured in the studies or
people in general of being dumb. It's just good to
reckon with the mistakes human brains usually make. Human brains make mistakes continually, and, uh, I mean, the
best you can do is be aware of the limitations.
(15:28):
But one of the things about these folk theories is
that they often feel like they explain more than they
actually do. Take the blue blood in the vein: that seems intuitive. Say you believe that. Okay,
I'm looking at my veins and they're blue, and it's
because the blood turns blue. What if somebody asked you
to write down an explanation of how that happens. Then
(15:51):
you'd start being like, well, wait, so I'm trying to
write the steps down, so the blood is deprived of
oxygen and turns blue. How does that happen? I don't know. You'd start encountering gaps in your knowledge. And the authors of the study write about this. They say, quote, we frequently
discovered that the theory that seems crystal clear and complete
in our head suddenly develops gaping holes and inconsistencies when
(16:15):
we try to set it down on paper. Uh, intuitively,
I think they're exactly correct about that. I've had
this experience plenty of times, or not even on paper.
I bet Robert, I bet you've had this experience too.
I know I have. Here in the podcast studio. In
the middle of a podcast, maybe a tangent comes up
where you briefly want to explain how something works that's
(16:35):
not central to your research, and you think you do,
so you just start talking, and you get a sentence
or two in, and you're like, oh, wait
a minute. I thought I understood that when I started talking,
But now that I'm saying the words, I don't actually
know how this works. And you have to stop and
figure out, Okay, what am I gonna do now? Yeah? Yeah,
(16:55):
you have to make that decision. Do I own up to the fact that I really
don't know what I'm talking about. I'm gonna make everybody
wait while I read about this for fifteen minutes. Or
do I just plow ahead and somehow weasel my way out of it? Um, I think where I encounter this
a lot is in the preparation for a podcast episode.
My wife will ask me what we're recording on this week,
(17:18):
and I'll say, oh, recording on such and such, and she's like, oh really, what's that about? Give me
the elevator pitch and and then I'll start to explain it,
and then I'll realize, oh, you know, it's A plus B equals C, except I can't adequately describe
step B in the scenario. It made sense in your
head until you started trying to use words, and then
(17:38):
that's where it became problematic. And what
it reveals is that, in fact, it didn't actually make
sense in my head. It just felt like it did.
And it's useful for us, because then you know, oh, well, that's what I don't understand. That's what I need. That needs to make sense to me, truly, because if
it doesn't truly make sense to me, it's not going
to make sense to the listener. Right. Okay, So folk
(17:58):
theories stand in contrast to scientific theories, where you've got
scientists trying to constantly hunt down the gaping holes and
inconsistencies in their theories and fix them. With folk theories, uh,
the explanatory systems are sort of produced in the minds
of lay people by non rigorous processes. And uh so,
if you're not a telecommunications engineer, you probably have some
(18:20):
kind of folk theory about how your cell phone works. Right,
You've got some basic skeletal idea of well, there's a
signal in the phone. Maybe you know, it's electromagnetic radiation
that goes from the phone from the antenna part maybe
or the antenna is hidden inside now, But it goes
from part of the phone to a tower. Does it
go to a satellite? I don't know. Does it go
(18:40):
to the cloud? The cloud, that's a big one. And I've been guilty of this too, not really stopping to realize,
oh wait, the cloud. Like, I know that there's not an invisible Wonder Woman's-airplane-type computing system floating in
the sky. And yet somehow I fall back on that idea,
perhaps just out of a
(19:02):
lack of desire to understand, um, the details of our telecommunications system. But yeah, I find myself at least putting the unrealistic version up on the shelf with a more realistic expectation of the technology. Yeah,
and yet nevertheless, you sort of think you understand how
(19:22):
a cell phone works right at a basic level, at
a basic level. And then you start, oh no. But so the authors of the study, they're talking about the problems with the way people hold these theories. So they say, quote, first, they are novice scientists. People in general are novice scientists. Their knowledge of most phenomena
is not very deep. We have shallow understandings. But then
(19:44):
they also say, quote, second, they are novice epistemologists, meaning people
who study how knowledge is generated, how we know things, uh, continuing,
their sense of the properties of knowledge itself, including how
it is stored, is poor and potentially misleading. So we
have both an incomplete understanding of how many things work,
(20:04):
but we also fail to recognize that we have an
incomplete understanding, uh, exhibited by the fact that when we
get put on the spot where we're sort of caught
off guard, we're like, oh, wait, I thought I understood that,
but now I'm realizing maybe I didn't. UM. So their
central thesis in this paper is, quote, we argue here that people's limited knowledge and their misleading intuitive epistemology combine
(20:28):
to create an illusion of explanatory depth, or IOED.
Most people feel they understand the world in far greater detail, coherence,
and depth than they actually do. Um, also, they say
that we're more overconfident about our understanding of some types
of knowledge than others. Specifically, our knowledge dealing with
(20:50):
explanations for how things work, that is
to be singled out. So to test these ideas, the
authors performed a big series of studies. There are actually twelve different studies inside this massive paper, uh, to measure
people's level of confidence in their understanding compared with what
their actual level of understanding is as measured by their
(21:12):
confidence after they've had some calibration, and then, uh,
comparing that within various different domains of knowledge, meaning just
different types of knowing things do you know facts about geography,
or do you know the narratives of movie plots, or
do you know how a toilet works. So there have
(21:34):
been a lot of previous studies about overconfidence, and one
of the things that's important to establish is that a
lot of previous research has sort of focused on general knowledge,
that people might be overconfident about knowledge in general. And
the authors are not into this idea. They don't like the idea of general knowledge. Instead, they like the
(21:54):
idea of breaking out knowledge into these different categories, because,
as they will end up showing in their research, the
brain estimates its own knowledge in different categories with different levels of accuracy. Yeah. I think we all, most
healthy individuals realize that they know a lot about some
things maybe, but certainly little or nothing about other topics. Correct, right,
(22:17):
and especially uh, not just topics, but different types of
things to know. Like, you might be way more confident. If
I ask you, um, Robert, what is the capital of England?
Before you answer, tell me how confident are you that
you know the right answer? On a scale of one
to ten, I would say a ten, Okay, what's the
(22:38):
capital? London. Okay, you're right, there you go. Okay, but
tell me how confident are you that you can explain
how a lightsaber works? Well, uh, not very, because it's essentially a magical device. Yeah. I forgot. And, uh, I also, I actually think I rewrote the intro page for How Lightsabers Work
(23:00):
on HowStuffWorks.com. Yeah, so I have actually worked with an article that explains
how it supposedly works, but I don't recall it at all.
Do you think working on that article would have made
you more or less confident in your own understanding. I
think if I had actually worked on the meat of it.
But that's an article where I think I just spruced up
(23:20):
the landing page. I just sexed it up a little.
Did Tracy Wilson write that one? No, I think it
was an older piece. Well, anyway, let's go onto the study.
So study number one out of this. First thing they
wanted to do was document the illusion of explanatory depth.
If this thing exists, let's see if we can get
some evidence that it is there. So they got sixteen
(23:42):
graduate students from various departments at Yale as the participants. This was done by professors at Yale University, so there's a lot of Yale in this. And the
test dealt with their ability to explain how a bunch
of devices work. So participants were given instructions on how
to rate their level of explanatory knowledge of a device
(24:02):
on a scale of one to seven with the help
of a couple of examples: a GPS system and a crossbow.
So with the example of a crossbow, basically a seven
means you know all the parts and you know how
all of them work together to make the device work.
You know all the causal relationships. You could
almost build the thing yourself if you had all the parts.
(24:23):
Um, a one means you basically don't know anything
more than what it looks like and what it does.
You don't know what the parts are, how the parts
work together. It's almost magic to you. Okay. Then the
participants were given a list of forty eight objects and
asked to rate their level of understanding of how the
object works. So you just go down this list, uh
(24:44):
you know, uh, LCD screen, car battery, a zipper, a speedometer, piano key, can opener, hydroelectric turbine, flush toilet, cylinder lock, helicopter, quartz watch, sewing machine. And you're supposed
to give the number on the scale of one to seven.
How well do you understand what all the parts are, how they work together? How well do
(25:05):
you understand how it works? And just to use a
little terminology, because it will recur throughout all the different studies here: this first rating is known as T one. This thing they give on the first question, their own self-rating of their explanatory knowledge of each item, is T one. And then in the next phase, the students
are asked to write a detailed explanation for half of
(25:25):
these items in the test category,
to explain in detail how a sewing machine works. So
you rated maybe a four on how well you know
how a sewing machine works? Now we need you to
explain it step by step in words. And then
they wrote that detailed explanation. Then they were asked to
rate their initial understanding again. So now that you've written
(25:47):
that explanation, how well did you understand it to begin with? Uh?
And that rating is T two. Uh, then they're given a diagnostic question. For example,
if one of the items they had to explain was
a cylinder lock, the diagnostic question might be do you
know how to pick a cylinder lock? And this question
(26:08):
is designed to force the person to think even more
about what the parts are and how they work together.
And then after the diagnostic question, they're asked to rate
their initial understanding yet again how well did you understand
it to begin with? And then finally the participants got
to read a brief explanation written by an expert of
how the items that they explained worked. And these expert
(26:30):
explanations came from a CD-ROM titled The Way Things Work two point oh. I was hoping they'd use some vintage HowStuffWorks articles. No, no, alas, it was two thousand two. HowStuffWorks existed then, but we
were not here anyway. After reading these expert explanations, they
had to rate again how well they had initially understood
(26:50):
the device, and then how well they understood it now
after having read the explanation. So what are the results?
What does this graph look like? You start with your
initial guess, and then you get adjusted by having had
to make an explanation, answer a diagnostic question, and then
read an expert's explanation. Well, the graph forms a kind
of U shape or an inverted bell shape, where initially
(27:14):
the students rate their level of understanding really high or
relatively high. Not necessarily really high, but it's like, yeah,
you know, I give it a four. I understand pretty well how a cylinder lock works. Then they
have to give the explanation and the ratings drop off significantly.
Now note that this is not somebody coming in from
(27:34):
the outside and telling them their explanation is wrong. This
is their own self evaluation after having had to do
nothing but just put their own ideas into words. Then
it continued to drop again after the diagnostic question, and
then finally shot back up again after reading the expert's explanation. No,
no surprise. If you read somebody telling you how it works,
(27:55):
now you understand how it works. So it's a perfect
story arc. It's kind of like most kung fu movies, right, where you have the young student who is overconfident, and then he gets his rear end handed to him by the villain,
and then he has to learn, he has to accept
what he doesn't know, and then he has to learn the craft from a master, and then
(28:17):
in the end he can defeat the villain. It's a
kind of a Karate Kid situation. Yeah. I think that's
interesting how our narratives play on this
fact about us. It almost suggests that somehow we might
intuitively be somewhat aware of the illusion of explanatory depth.
(28:39):
But so anyway, looking at this graph, so you know
that there were drops from T one to T two,
and then again slightly from T two to T three,
and then pretty much no drop from T three to
T four, and then a large increase from T four
to T five. So one of the things is the
pattern rules out the idea that confidence is dropping merely
(29:01):
because of the elapsing of time in the experiment. Right,
It's not just that people are steadily going lower. You know
that they eventually stopped lowering their own score, and
then it comes back up after they read the expert's explanation.
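That shape is easy to see with some made-up numbers (illustrative only; these are not the paper's data, just the qualitative pattern described in the conversation):

```python
# Hypothetical mean self-ratings at each phase: initial (T1), after writing an
# explanation (T2), after the diagnostic question (T3), re-rating (T4), and
# after reading the expert's explanation (T5).
ratings = [("T1", 4.0), ("T2", 3.2), ("T3", 3.0), ("T4", 3.0), ("T5", 4.5)]

# Successive differences trace the U shape: drop, slight drop, flat, big rise.
for (a, ra), (b, rb) in zip(ratings, ratings[1:]):
    print(f"{a}->{b}: {rb - ra:+.1f}")
```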
So basically you have to confront what you
don't know in order to learn, yes, exactly. And the
(29:21):
interesting thing is that they sort of rate themselves lower, but then they don't keep dropping. You're not in free fall. Maybe that suggests that they're adjusting
more toward real accuracy in their judgment of how much
they knew. There's also an interesting note that they have
though this is not quantified data, but this is just
sort of a subjective report from the debriefing afterwards. You know,
(29:43):
they talked to the people who were in the experiments,
and many participants subjectively said they were surprised and felt
humbled by how much less they knew than they had
originally assumed. But also, and this is really interesting, even
with this new humility. Some of the participants showed that
they were still susceptible to the illusion of explanatory depth,
(30:06):
because here's what they said. If only I had gotten
the cylinder lock instead of the flush toilet or whatever,
then I would have done better overall. So if only
I'd gotten these other devices instead of the ones I had,
And the experimenters say, this judgment seems unlikely to be true,
given that the average level of performance on the two
(30:26):
different device sets used in the test was pretty much identical. Okay,
so, to use a kung fu analogy: it's like the young foolish hero enters into combat with the villain and is defeated in a sword fight, and then afterwards he's like, oh my goodness, I really didn't know how to fight with a sword
(30:47):
after all. If only he had fought me in judo, then truly I would have taken him out. But what if everybody else said that about judo, and this guy has defeated everybody in judo also? Yeah, I mean, if he's wrong about this thing, then could it be true that he's right about everything else? I would doubt it.
Well it is if he's very special. Maybe he's very special.
(31:11):
But yeah, it shows you can still have the blinders on: you've been humbled on this one category, but you're still susceptible to the illusion of understanding in all other aspects of life. Yeah, if only I'd had the toilet, then I would have been golden. Okay, anyway, so that's established here. But this is
a pretty small sample sixteen grad students, also Yale grad students.
(31:34):
That's a pretty rarefied group to draw from. So we need
to do some more experiments of the same type to
try to replicate the results. So they did another one.
Study number two, they repeat the same experiment, same conditions,
with a larger, younger sample, a group of thirty three
Yale undergrads. Undergrads from the same school were picked because, in the words of the authors, quote, conceivably graduate study
(31:57):
leads to an intellectual arrogance, and the illusion of explanatory competence might be less in undergraduates who are still awed by what they do not know. There were some parts of the study where the writing was a little cheeky. I appreciated it. But the thing is, it replicated; they basically got very similar results, producing the same
(32:19):
pattern with respect to the responses over time. They initially rated their own understanding higher than after they had to explain it; they got knocked down by the explanation, then
(32:40):
down again by the diagnostic question, and then up again at the end after they got to read what the expert had to say. But one thing that's interesting about the undergrads is that the effect was actually a little bit stronger with undergrads than with graduate students, not statistically significantly, but a little bit. So the graduate student arrogance theory, we can say, is probably disproved by this. The undergrads actually showed a little more overconfidence about their understanding. I can certainly remember being a weirdly overconfident undergraduate, for sure,
(33:07):
I think we all can. Man, wasn't that a great
time when you knew everything about everything? You know, I do remember the trajectory. In particular, I remember going into some religious studies classes with certain ideas about the values of certain religions over others, and how religion kind of worked, and
(33:29):
it was just completely foolhardy. And then I was opened up to some genuinely basic ideas in religious studies: you know, the importance of worldviews, the similarities between systems, the history of these different religious systems. And I do remember there being this resistance to it at first, then giving in, realizing I didn't know anything, and then a real
(33:51):
excitement that built up from there and has really, you know, continued my entire life. And that's something I'm always trying to keep in mind on our show, because sometimes we
do encounter listeners who have an adverse reaction to studies that we talk about, or different takes on topics, and I always remind myself, well, to put it
(34:13):
in terms of our study here: that descent that occurs, not quite free fall, doesn't feel good. It can feel like a fearful situation at times, and humbling. It's humbling, and the process of being humbled is not necessarily enjoyable.
It's like being beaten by the villain in a karate movie in the first act. But the truly
(34:35):
wise person seeks to be constantly humbled by what they don't know. I agree. I am humbled week in, week out
by the things I don't know. Oh so, how wise
are you then, Robert? Well, that's the thing. I admit that I am not, you know, the wisest guy in the room, but I'm willing to admit that there are a lot of things
(34:55):
I don't know, and I'm continually hungry to fill in those gaps, as we should all be, sir. Okay,
So back to the study. So we've looked at one
sample and then a larger sample of undergrads. Maybe we
need to look at a different university. Maybe Yale students
are just generally more arrogant about their own understanding. So
(35:16):
they figured they should try this at a different university.
Sixteen students from a regional, less selective state university were given the exact same experiment, and they judged the selectivity by comparing the students' SAT scores. The
students at the Southern University had an average of five hundred and forty points less on their combined math and verbal,
and the result was: the pattern of
(35:37):
results was very similar to the first two studies, a
steep drop off in confidence after being asked to explain
what you thought you knew, and then rising confidence and
new understanding after reading the expert explanation. In fact, though
the overall pattern was similar between the Yale students and
the students from this other university, the students at the less selective university actually showed a slightly stronger illusion of
(36:00):
explanatory depth effect, mostly due to the fact that their initial ratings of knowledge were about a point higher than those of the Yale students. And so the results of the first two studies were basically replicated, but you could rule out various different interpretations of what this means. Yeah, I mean again, because we're
all susceptible to this, I think we should try not
(36:21):
to put, you know, moral judgments on people. I know I said arrogant a minute ago, because I think the authors were being a little bit cheeky when they were talking about Yale graduate arrogance. But yeah, it's not that you're a bad person if you overestimate how well you understand the workings of a toilet. We've all
(36:42):
been there. Yeah, especially if you've attempted to fix one.
That's generally where the humbling comes: if something breaks in the house and I think, oh, well, maybe I can fix that, of course I've got an Ikea toolkit, let me at it. And then, you know, hours later you realize, I'm in over my head, I need to actually call an expert. But you don't want to admit defeat, right? Okay, so next study, study number four. Well,
(37:07):
maybe a strange selection of devices is driving the effect. Right,
they're asking people about certain things cylinder lock, helicopter, toilet,
all that stuff. What if it's just particularly strong for
cylinder locks and toilets and helicopters. What if this effect wouldn't show up as strongly for other devices? So they did the same experiment again, got thirty two undergrads, but
(37:28):
with many more options for devices to explain in the experiment. And to keep the experiment time under one hour, the last couple of ratings, T four and T five, were taken off, so participants only did the first three ratings. And the result was, the different devices didn't change anything; the results were the same. So it seems to be
robust across all different types of machines that you would
(37:50):
need to explain the workings of. People generally overestimate their understanding of them, and then the explanation makes them realize that they overestimated. So, next study: well, what if the subjects are just being cautious? This is something I thought about when I was sort of running through this with Rachel on the way to work today. I was like, how well would you think you understand
(38:12):
how, you know, a can opener works or something?
And we discovered that we would tend to just rate
ourselves very low, maybe because we've been primed with the
fact that there isn't an illusion of explanatory depth. So
I'm just gonna start with a two to be safe. Yeah, because if you're asking me, hey, do you know how a can opener works?
(38:34):
I would think you're trying to trick me, right? And
so maybe the experiment is doing the same thing, in that as the experiment goes on, people are just becoming more cautious. They're being put on guard, regardless of the actual accuracy of their original explanatory depth. Does that make sense? Like, they're not adjusting toward how well they actually understand things; maybe they're
(38:58):
just adjusting toward caution, just rating lower in general. So they basically did the same study again, the same test, you know, having people make the assessments. But then they also got some people to subjectively rate the explanations written by the original people in the study. And some really
(39:21):
complicated statistical analysis was required on this one, but the basic result was that, according to the independent raters who read people's explanations and rated them on the scale, the participants initially overestimated their level of understanding, and then their confidence ratings became more accurate when they dropped after being asked to give the explanation and answer the
(39:42):
calibration question. So this seems to rule out the idea that people are just becoming more timid or more modest or cautious as the ratings go on through the test.
According to some independent judges who came in and said, oh wow, this explanation of a can opener is a two: according to these people, the participants are actually
(40:04):
becoming more accurate as the test goes on; they're getting closer to how good their understanding was for real.
Here's another thing related to the priming I was just talking about. Study number six: can you destroy the effect just by warning people that they're going to have to give an explanation for how some of the items work?
(40:25):
So think about it this way, Robert. You know, I ask you, on a scale of one to seven, how well do you understand how a toilet works? And be prepared to explain your answer. Yeah. Another example of
this would be when someone asks you, hey, have you
ever seen such and such movie? Your answer might be
different if you know that the follow up is tell
(40:46):
me the breakdown of the plot. Yeah, because I've had that happen before. Someone says, hey, you know such and such movie? And sometimes you interpret that as: do I know of that movie? Did I see the trailer for it once? Did I watch it twenty years ago? Yes, maybe. But then if you actually have to prove that you know this film, that's sometimes
(41:08):
a different can of worms, right? Yeah. So it could
put you on guard. And so the question is, if the illusion of explanatory depth effect is real, what we would expect is that maybe warning people this way might reduce the effect a little bit, but it shouldn't eliminate it. It shouldn't make it completely go away. Right.
(41:29):
So with thirty one undergrads again, they did the
exact same test, except they added a paragraph warning the
subjects that they were going to have to give a
written explanation and answer a diagnostic question. So what happened here? Well,
the results on this one were pretty odd. The same
pattern presented itself in that the first ratings they gave
(41:50):
were higher, and then they dropped after being asked to
write an explanation, and then again after the diagnostic question,
but the magnitude of the effect was reduced, so the
drop off was much less. There's still a difference between the initial and the later ratings. But the odd part is, it wasn't because the subjects who were warned
(42:10):
initially rated their understanding any lower. That's what you would expect, right? You'd expect that if you've been warned, your first rating would be more cautious. Right, yeah. I mean, if someone warns you, whatever you say, someone's going to call you on it, so don't BS, because you'll be corrected; you'll have to prove your answer. Yeah, but that's not what happened here. Instead,
(42:32):
they were no less confident in their initial understanding. It was because their later self ratings were higher. And that's kind of weird, right? So this seems to reveal that it's certainly not just a conscious matter of thinking, oh, I really don't understand how toilets work, but I don't want anybody
(42:52):
to know, so I'll just tell them I understand. Could it be? I mean, maybe that's what's going on. The authors write, quote, one possibility is that the new instruction changed the way participants used the rating scales.
For example, hearing the explicit instructions may have caused participants
to try to be more consistent with their subsequent ratings
(43:13):
because they had less justification for being surprised at their poor performance. Basically, it's like, you were warned; what excuse did you have for being overconfident in how much you knew? And the fact that you were warned, maybe, I don't know, makes you more embarrassed that you were overconfident, and thus you're less
(43:33):
likely to admit how overconfident you were initially. I don't know. That's an odd result here, so that's worth keeping in mind. But at this point the study considers the initial effect basically satisfactorily replicated for
how we understand the mechanical workings of devices, and then
it's gonna move on to other things, other types of
(43:56):
knowledge and what the researchers called different domains of knowledge. Does the same illusion of knowledge hold true for things other than explanations of causally complex phenomena, like how a machine works, how a device works? Does it exist for facts? Does it exist for narratives, for procedures? Can it be lumped in with general overconfidence effects?
(44:18):
Or is the illusion of explanatory depth its own thing?
So maybe we should take a quick break and then
when we come back we will get into the rest
of this study. All right, we're back. So, study number seven.
One of the things is what if people are just
generally overconfident about what they know, regardless of the type
(44:41):
of knowledge. What if it's not just explaining things. What
if everybody's overconfident about all their knowledge. Yeah, I could see it being sort of like the scenario in which the brain just sort of convinces you that you have an answer to a question just so you don't have to worry about it. Because the brain is ultimately an economic system, it doesn't need to waste resources. I've read, for instance, of individuals who have
(45:04):
been quizzed on where they were and what they were doing during the September eleventh attacks. People have very specific answers, saying, I was wearing a blue shirt, I was eating Honey Nut Cheerios. But in many cases what seems
to be happening is, you're in this state of fight or flight, really. You're not sure how
(45:27):
you're gonna survive on some level, and your brain just goes ahead and makes up an answer for you, as if to say, don't worry about it, it was a blue shirt. Why would you worry about your shirt when there's this awful catastrophic event taking place? Don't worry about the cereal. Bam, I'll just check something off. Don't even fret. Yeah, I've heard about this too,
like memory confabulation in these momentary memories, you know,
(45:51):
the flashbulb memories from some big event in your past. Yeah. So it's like, if I go into the bathroom and on some level wonder, do I know how a toilet works? My brain is kind of saying, yeah, you know how a toilet works: you use the restroom, and then you flush it, obviously. Yeah, don't worry, you've got other things to do. Stop worrying about the toilet. Okay, so let's
test some basic geography here. So specifically, what they did
(46:12):
was naming the capitals of countries around the world. Experimenters came up with a list of forty eight countries, and they split it roughly in thirds: countries where it's easy to know the capital, or at least where you would expect American students to easily guess the capital. How about England? We hit that one already, you know. Man, genius here. Then the ones where they were
(46:35):
moderately likely to know the capital, and then the ones where they were very unlikely to know. So, split into thirds. Robert, what's the capital of Brazil? Oh,
this one. It's like Brasília, but my Portuguese is not good. You are correct. I was hoping I'd trick you into saying Rio de Janeiro. Well, you would have caught me with various states for sure, because yeah,
(46:58):
you think of what's the most famous city from that country or US state, and then you assume that's the capital. If I recall correctly, the Brasília and Rio de Janeiro confusion is actually a major plot point in one of the I Know What You Did Last Summer sequels, which I am here publicly admitting that I've seen. But
(47:19):
then also, here's a hard one: what's the capital of Tajikistan? Yeah, that one is one that I probably should have a leg up on, because I remember taking a course in college about former Soviet states in that region. Yeah, I'm drawing a complete blank on that one. I think it used to be called Stalinabad, but now it's Dushanbe. Okay.
(47:44):
But yeah, anyway. So, participants: fifty two college undergraduates, and they were first shown a list of all the countries. So here are all the countries you're gonna have to know the capitals of; go down and rate them on the same seven point scale. Rate your confidence in how well you know the capital of each of these countries. And then they're asked to actually list the names of the capitals for half the countries, and then asked to
(48:05):
re-rate their knowledge. So essentially it's the same thing, except instead of giving an explanation of how something works, you're just listing the capital. Then they're told the real names of the capitals and asked to re-rate their knowledge.
So what are the results here? Well, compared to a combined group from studies one through four, and the authors
(48:26):
justify combining them into one group in their discussion, the students who were tested on the facts showed a different pattern: in the same direction, but with less magnitude. So confidence dropped off significantly between the first and second rating; after people had to answer the questions, they were less confident. But it stayed almost the same for the final rating. And though the drop off from T one to T two is statistically significant, it's significantly
(48:50):
smaller than the drop off for explanations. So essentially, with facts,
we're seeing a pattern that's going in the same direction,
but it's just much smaller. The line graph shows a
slight decrease, but it's closer to being flat than the
graph line for the device explanations. So we've got some
overconfidence with capitals, and I think we've all got to
(49:10):
be that way given our schooling, right? How many capitals did you have to learn in school? Why do they do capitals, you know? I remember exercises where we had to, of course, remember the states and their capitals, but I also remember just a lot of geography quizzes that had no substance to them, like you'd have to memorize all the nations
(49:33):
of Africa. And yes, some of the nations you were learning about: everyone's learning about Egypt, everyone's learning something about South Africa. But then there are all these other African states you're not even asked to know anything about, except for their name, and so they're useless facts, because there's no substance behind them. Yeah,
I feel like it would be much more useful,
(49:55):
if you're doing that, instead of learning capitals, to learn like primary languages, main ethnic groups, and main exports. But I guess there's only so much time in a day. Then again, I guess I won't complain. I don't know. It's good being able to produce a capital. You still get that third grade rush. Oh yeah, I did it: Brasília. Okay, so next test, study
(50:20):
number eight. Let's look at a different domain of knowledge. So we've looked at explanations for causal phenomena with devices, and we've looked at facts.
What about procedures? So this type of knowledge in a way is very similar to explaining how devices work. It involves explaining how you do something. How do you
(50:40):
tie a tie? How do you bake chocolate chip cookies from scratch? How do you drive from New York to New Haven? This makes me think of all the wonderful wikiHow explanations out there, often with pictures that explain and yet don't explain the thing you looked up.
Those things are my favorite. Looking up obscure eHow
(51:00):
articles used to be one of my favorite games on the Internet. The best one I ever came across, I swear this is true, was an eHow article, because I think it doesn't exist anymore, but it was called How to Pray for Money. Oh really? Yeah, for money. So it would be like, it had instructions like: pray, ask for money. Well, I think it was
(51:22):
actually kind of complex, because it was like, recognize that money might not be the most important thing. That was like step number five, I guess. Okay, well,
these are the kind of things that I guess occur when writing assignments are going out, you know,
(51:43):
lickety split, based purely on, you know, search engine terms. Right, right. Okay. So, this part: they're going to run the same test they've done before, all the same rating steps, everything's the same, except instead of explanations, it's going to be, how do you do this? Here's a procedure; write down the steps and what order they come in and how they work together. The results are very interesting:
this pattern was completely unlike anything we've seen before. So
(52:05):
instead of the ratings dropping off between T one and T two, your first guess and then your adjustment after you have to write the answer out, the ratings actually showed a slight but statistically non significant increase from one to two. And so after you have to give an account of how to bake chocolate chip cookies, you're actually more confident in your knowledge than you were
(52:27):
before you wrote down the steps. And I thought
that was interesting, but it also sort of makes sense. I can see how that would be true. Like, you think you probably know how to do something, then you write down the steps to do it, and looking at them, you're like, oh yeah, I was right, I knew. And so you're a little more confident. Yeah, especially if it's something like, take chocolate chip cookies as an example:
(52:50):
so often you're going off a recipe, or at least I am, not being a real baker. So I'm gonna look up the recipe, and then I'm gonna follow the recipe with no intent of memorizing it. And then afterwards I may be able to recall those steps and list them out and say, all right, that looks accurate. But then if I actually take that list and compare it to
(53:12):
the recipe, I'm bound to have left out some key steps. Well, like baking them? Like something, I don't know. Like licking the raw egg laden spoon. Oh, that part I got down. Oh yeah. So the authors
also report, again, this is some non-quantified information, just some post-test debriefing, that the students
(53:34):
didn't show any of the now characteristic surprise and all the other stuff. You know, after the test they'd be like, wow, I can't believe how much I didn't know. Instead, they seemed perfectly aware of how much or how little they knew about how to do things. On the other hand,
I guess, yeah, I would say, this isn't really all
that surprising. Our mental process of remembering how to do
(53:55):
something is very different from our mental process of remembering
how something external to the self works, because when you're remembering how to do something, it's usually a first person memory. You picture yourself doing the thing. Yeah,
often it's like an unlanguaged experience. I've had this experience with Legos recently, because I'm building Legos with
(54:19):
my son. I haven't built anything out of Legos since I was a kid, and I'm realizing, I'm sure there are industry terms for all the different blocks and the sizes of blocks, but I do not know what those terms are. And, of course, the instructions are wordless, so I don't have any,
(54:39):
or I have very little, language to describe the steps that are taking place. But I can, you know, picture myself doing it. Yeah. And there are
also plenty of things, like something you know how to do through muscle memory, that would be difficult to put into words. Like, could you explain the steps of how to ride a bicycle? Right. Or a big one is tying a long necktie. Yeah,
(55:00):
like, I can tie one on myself; I cannot tie one on another person. That happens all the time with people, where if they're going to tie a tie for someone, they have to wear it themselves. Well,
at least that is interesting, and I think that's true in my experience, but it does not seem so much borne out in these results. It seems like people are, or maybe actually it's not that people were perfect at
(55:21):
being able to explain how to do procedures; they were just very accurate in predicting how well they would be able to explain them, because it's based on an actual memory of doing the thing or trying to do the thing. Yeah. So yeah, next study: what about a different type of knowledge? We've looked at facts, we've looked at procedures. How about narratives?
(55:43):
One of my favorite things to recall is what happened in that part of Big Trouble in Little China after the monster first pokes its head out. I don't know. Anyway, so yeah, recalling a narrative: if the plot of a narrative is basically realist in terms of genre, you're not talking about El Topo or something, there is a causal logic
(56:04):
to the events that take place in it, right? In the sense that a narrative, like the plot of a book or a movie, is a kind of machine. It's a structure built out of causal relationships that can be labeled, explained, and summarized. So let's look at the machine
of movie plots. Thirty nine students were given a list of twenty popular movies, selected to be things college
(56:28):
students were likely to have seen. I think Forrest Gump was one of them. And they're asked which of the movies they had seen, and then asked to rate their understanding of the plots of five of the movies they'd seen. And then, after their T one first ratings, they had to describe each of those five plots and then re-rate their original understanding. So the same procedure we've
(56:49):
seen the whole time, except instead of explaining devices or procedures, it's plots of movies. And then finally they read, not reviews, but summaries of plots from a professional movie review website, compared those to what they had written, and rated again. And the results
were that the pattern was closer to the one for
(57:09):
procedures than the one for devices. I thought this was interesting, because with a narrative, you were recalling something that's not something you had to do with your body. So it's taking that element out, but it's still closer to the pattern for procedures. There's no significant drop off from T one to T two; people were pretty accurate at predicting how well they knew narratives. That's interesting,
(57:31):
because just thinking back on movies I've seen, like you mentioned Big Trouble in Little China, I instantly started trying to, in my mind, sort of piece out a timeline of that movie. And it's a movie I've seen a lot, and that I have a lot of love for, but I think there are some definite holes in my attempt to reconstruct it. You know, at what point do they go to the import export business there, and then
(57:54):
do they come out, and then go back in? And when did this encounter fall in line? You know, they
ran a different study to test different devices, just to make sure the device list they had wasn't peculiar. I wonder if they should have run another test with different movies. Like, what if the movies they had were unusually perspicuous and clear in terms of plot relationships? Yeah, like, say,
a romantic comedy, say, like the movie Amélie. Despite the
(58:18):
fact that I've seen Big Trouble in Little China far more than I've seen Amélie, I'm far more confident in my ability to just rattle off the plot points and the basic movement of the narrative for Amélie, because one thing follows from another. Right, there's a basic blueprint for that sort of film. And I'm not saying Big Trouble in Little China doesn't
(58:41):
follow a very basic blueprint as well. Now, I think some kind of random things happen in it; you wouldn't necessarily infer from one scene what's going to happen in the next. Yeah. Anyway, so next study, let's look at
one more different type of knowledge. So we've looked at
explanations for machines, facts about geography, procedures on how to
(59:02):
do things, and narratives from movie plots. How about explaining
natural phenomenon. Natural phenomena are complex causal systems. In a way,
they're very much like devices or like machines, except they're
you know, they're not made by humans. But in other senses,
they are like that. They have causal relationships, different components
that work together, and they in the end they make
(59:23):
something happen. Uh. So participants were thirty one Yale undergrads
and study was identical to the ones before, except instead
of explaining how device works, you explain how tides occur,
how why comets have tales, how earthquakes occur, how rainbows
are formed, things like that. So, just like in the
(59:44):
first four studies, they gave the initial confidence rating, you know, rainbows? Oh, I'm a six on rainbows. Then they had to explain how they're formed, yada yada, and the results were, it's a jackpot. The results distribution from the explanations of natural phenomena was very similar to those for devices; they were closest to devices. So,
(01:00:05):
whether it's tides or whether it's toilets, we think we understand how things work, but when we try to explain it, we realize there are lots of gaps in our understanding.
So, to summarize the results across all these studies: we've seen that people are significantly overconfident in their understanding of how devices work and how natural phenomena occur,
(01:00:25):
that asking for an explanation makes this overconfidence apparent and reduces it, that people are somewhat overconfident, but less so, about their knowledge of facts like capital cities, and that people are fairly accurate at judging their own knowledge of procedures, how to do stuff, and narratives, what happened in a movie. Now, the big question is, and this is the thing we haven't
(01:00:47):
gotten to yet, why? So, what's causing these differences in metacognition across different domains of knowledge? Why are we more overconfident about some types of knowledge than others?
What is it that would make us more confident about knowing how a toilet works than about knowing the plot of Forrest Gump? Well, my initial answer would be
(01:01:09):
that we assume a certain simplicity of its design. Like, without even really reminiscing too much on Forrest Gump, given the fact that it was a big blockbuster summer movie, I'm assuming it has a pretty simplistic structure. And as
for the toilet, I mean, it has such a
(01:01:30):
low standing in the household when it's functioning that you just assume it couldn't be that complicated. Why would it take high technology to simply dispose of human waste? Yeah, I guess I
can see that. I mean, so one of the answers that was given in the piloting they did for this, you know, when they were trying to think, what
(01:01:51):
are some good hypotheses that could explain this difference, is complexity of the device. And so the hypotheses that they ended up wanting to test were: how about confusion? They called it confusion of environmental support with internal representation, and what that means is confusing the fact that
(01:02:13):
you can see the parts and how they work with the idea that you can represent the parts and how they work in your mind. Um. And so what this predicts is that devices like bicycles and can openers, stuff that is very clear and perspicuous, are the things that we're most likely to overestimate our knowledge of the workings
(01:02:37):
of, because when we can just look at them and see all the parts, there's no sensation that the workings are being hidden from us. But in your mind, try to draw a picture of a bicycle right now. I think actually, even if you've seen lots of bicycles, chances are you just mentally illustrated a bicycle that could
(01:02:57):
not work. I don't know, I feel a
sense of perhaps false confidence here. Well, maybe. I mean, maybe it's just because I assembled one on Christmas. Well, if you've actually assembled a bicycle, then you might be in a different category. But I'm willing to accept that I'm completely foolhardy on this. The bicycle might be, if you've actually worked on them with your hands,
(01:03:17):
it might be more in the procedures category. But with an IKEA tool kit, so... Well, I mean, a
lot of people who would try to draw a bicycle, they'd have, like, you know, a single bar running from the spokes of both wheels, and that would make it impossible to steer the bicycle and stuff like that, or they'd have, um, you know,
(01:03:39):
the chain running to the front wheel and the back
wheel or something like that. Well, it also makes me
think that one of the scenarios here is that you
feel that one should be able to understand a bicycle. Yes,
I'm reminded... that's the ease of representation. Yeah. Like, I'm reminded of a bit in Zen and the Art of Motorcycle Maintenance where, uh, it's one of the parts where
(01:03:59):
he's talking about motorcycle maintenance. He says that the motorcycle is a perfect vehicle, just a perfect system to have control of, because it is a complex system, but it's not so complex that a single individual can't master it and care
for it. And whereas if you get into progressively more
(01:04:19):
complicated mechanical systems or just systems in general, then it
becomes increasingly difficult for one person to be able to
have a grasp of it. Can you be the master of your Prius? I don't know. I mean, I'm sure there are people that can. I could not be the master of a motorcycle. And I'm willing to admit that I probably can't even be the master of my son's bicycle at this point. Well, I bet
(01:04:42):
you're more master than me, now that you've actually used
your hands on it. I know now. I was primed; I actually had to think about what's in a bicycle and look it up. But I'm quite confident that if I had just been asked to draw a bicycle and the different parts and what they do, I would not have been able to do it correctly if I hadn't thought about it ahead of time. Okay, another hypothesis: what about
(01:05:04):
confusing higher and lower levels of analysis? Basically, this
just means, uh, if you've got an idea of the
causal relationships at a high level, you know the big
parts of a machine and basically what the machine does,
you assume you have an understanding of the things at
the lower level, even if you don't. So think
(01:05:25):
about car brakes. Car brakes slow the spinning of the wheels by applying friction. I understand how car brakes work. But there are tons of things involved in the brakes. You've got some kind of hydraulics, probably, or, you know, fluid of some kind. How is the pressure applied from the brake pedal to the brakes? What are all the different little gears and connections and parts?
(01:05:47):
There's tons of stuff there that you're not even thinking about.
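For what it's worth, the hydraulic part the hosts are gesturing at can be captured in one line of physics: Pascal's law says pressure is transmitted equally through the brake fluid, so a small pedal force on a small piston becomes a larger force on a larger caliper piston. A rough sketch, where the forces and piston areas are invented round numbers, not real brake specs:

```python
# Pascal's law: pressure (force / area) is the same everywhere in the fluid,
# so force_out = force_in * (area_out / area_in).
# The numbers below are invented round figures, not real brake dimensions.

def hydraulic_force(force_in, area_in, area_out):
    """Force delivered at the output piston for a given input force."""
    pressure = force_in / area_in   # same pressure throughout the fluid
    return pressure * area_out      # larger piston => larger force

# A 100 N push on a 1 cm^2 master-cylinder piston driving a 5 cm^2
# caliper piston comes out as 500 N at the brake pad.
print(hydraulic_force(100.0, 1.0, 5.0))  # → 500.0
```

That ratio of piston areas is the whole trick; the rest of the brake system is plumbing, linkages, and friction material, which is exactly the hidden detail the hosts admit to glossing over.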
But at the high level you basically know what it does,
and so that makes you assume that you know how
it works. It's a confusion of the what with the how. Yeah. Like, examples that come to mind: a chainsaw. You know how the cutting occurs, but do you really know all the intricacies of the saw itself? Or a hydraulic
(01:06:11):
press, like the one at the end of The Terminator. Oh yeah, you know, but it's a pretty simple concept: the two pieces come together and flatten the Terminator. But there's a lot more involved there with the hydraulic system and everything else.
Like, I don't even have a firm enough understanding of how the intense pressure
(01:06:31):
is applied via hydraulics. Yeah, yeah, um, that's a good one. How about another explanation? What if the problem is that explanations of how things work, unlike facts and stuff like that, have indeterminate end states? Say I ask you the capital of a country: how confident are you that you know the capital of
(01:06:53):
that country? Whether or not you're right about the answer,
you know what the answer will look like. It will be, you know, a short word, and you think you probably know what that word is. Um, with an explanation, it's just very open ended. You know, you don't exactly know what the answer should look like. Exactly how detailed is it supposed to be? Um, you know,
(01:07:14):
what are all the things that would be in it? It's more amorphous in terms of structure, even if you haven't colored inside the lines yet. And then
the final hypothesis is: what about rarity of production? Robert, here's one where you and I might be different than a lot of people. Not to say we're better; we're probably worse. But most people don't have to give explanations
(01:07:38):
of how things work very often. But we do often have to give recountings of facts, narratives, and, uh, procedures. Right, these are things that are common for
everybody to explain, but it's not all that common for
people to explain how things work, and this may make
us overestimate our performance at it. Yeah, I think that's reasonable.
(01:07:59):
I mean, we are in kind of a privileged situation where we are constantly having to confront the things we don't know and research them and form an understanding of them ourselves, and then share that understanding with listeners or readers or viewers, what have you. Yeah, we're practiced enough to know how little we know. Hopefully. No,
(01:08:20):
we probably don't know how little we know. We foolishly think we know how little we know. But yeah, we have an illusion of depth of understanding our own ignorance. Uh, hopefully we have a leg up on the situation.
That's the main hope. Maybe. Well we try, we try,
probably fail, but we try. Okay, well, to examine how
(01:08:43):
these hypotheses figure in, there are a couple more studies, two more, in this research. So one of them, study number eleven, asks: what if the difference in the different knowledge types is just that some knowledge types are more socially desirable than others? I thought about that.
That's kind of interesting. What if we're more likely to overestimate our knowledge of, say, uh, devices, because it's much
(01:09:06):
cooler to know how a toilet works, uh, and thus much more socially desirable, and thus we're sort of internally bluffing on the most socially important categories? That could be possible. So twenty-four Yale undergrads participated in this. They rated on a seven-point scale how embarrassed they would be if they had to
(01:09:28):
admit they were ignorant about certain things. And the things
on the list were pulled from a combined master list
of the contents of previous experiments. So you're asked, like, for each item: please rate how embarrassed you think you would be if someone asked you to explain the item and it turned out that you did not have a good understanding or knowledge of that item. So apply
(01:09:50):
what I just said to a flush toilet, the plot of Forrest Gump, how to tie a bow tie, the capital of England, how rainbows are formed. And the results
are that people were the least embarrassed to be ignorant
about how devices worked. They were moderately embarrassed to be
(01:10:10):
ignorant about facts, procedures, and natural phenomena, And then this
was crazy. They were the most embarrassed to be ignorant
about narratives. Interesting, because it seems like you would have the most plausible deniability there: I haven't seen it in a while, or I haven't seen it, or I didn't like it all that much. I guess for
(01:10:31):
college students, having seen certain movies carries a lot of social cachet. I don't know, it's, you know... anyway. So this response pattern does not show a correlation between overconfidence in a knowledge domain and the social desirability of the knowledge domain. People are not bluffing themselves on the important stuff, or they would be convinced they know way
(01:10:51):
more about what happens in movies than they actually do. Last study in this research: so what exactly is correlated
with overconfidence? Having established that people are the most overconfident about their understandings of devices and natural phenomena, and ruling out the idea that this is because those domains of knowledge are socially accepted or desirable, uh, the experimenters were
(01:11:14):
trying to measure what other factors are correlated with overconfidence in understanding. So they returned to our old friends, the devices: the cylinder lock, the flush toilet, the grand list from studies one through four. Now, this
tested a lot of different correlates, like familiarity with the item,
the ratio of visible versus hidden parts, the number of
(01:11:36):
mechanical versus electrical parts, the total number of parts, and
the number of parts for which one knows the names. Uh.
There was a lot of complicated analysis on this one as well, but in the end the researchers found that the visible-to-hidden parts ratio explained the most of the variation in overconfidence. In other words, a
(01:11:58):
device with visible, transparent mechanisms, in their words, seems to be the most likely to trick you into thinking you understand how it works, when in fact you would discover yourself unable to explain it. So like we were talking about, the can opener or the bicycle, things that seem very clear and easy to look at and think you understand, are the most likely to make people
(01:12:19):
overconfident in their understanding. Um. They also believe that
the results indicate that the quote levels of analysis confusion and the label-mechanism confusion may contribute to feelings of knowing. So that means confusing the higher level with the lower level, you know, confusing what it does with how it does it at the granular level, and then also
(01:12:41):
knowing the names for parts of a thing might make
you overconfident in thinking that you know how the thing works.
A little knowledge is a dangerous thing, it certainly is.
And I can definitely think of this in like biology.
Remember in high school when you learned the names of all the parts of the cell? Maybe not high school, I don't know when, but some science class you have in school. You learn all the parts of the
(01:13:02):
cell, and maybe there's an art project involved, and then you think you know how the cell works. I
don't know. You don't know how the cell works? Are
you a fool? Yeah? I mean the same can be
said of the human body, right? I mean, you learn all these different anatomical parts, the different organs. But to say you know what a liver is is
(01:13:23):
a different thing than saying you know how the liver works. Exactly right. So, they say in their final discussion, uh,
they're thinking about, you know, the explanations for what causes the illusion of explanatory depth, and they're sort of focusing a lot on this idea of the environmental support, being able to look at an object and see the parts and confusing that for an understanding. And
(01:13:45):
I thought this was a good passage. They say, quote,
it would be easy to assume that you can derive
the same kind of representational support from the mental movie
that you could from observing a real phenomenon. So that's like when you play a movie of a thing in your head. Uh, the idea is that you could confuse that with the same level of information that you get from looking
(01:14:06):
at the object actually working, right? Quote: Of course, the
mental movie is more like Hollywood than it is like
real life. It fails to respect reality constraints. When we
try to lean on the seductively glossy surface, we find
that the facades of our mental films are hollow cardboard.
That discovery, the revelation of the shallowness of our mental
(01:14:28):
representations for perceptually salient processes, may be what causes the surprise in our participants. And that seems very plausible to me.
Like, you try to put together a mental movie of how the can opener works, and you're playing the cartoon in your mind, and because you can do that, you're like, oh, okay, I know how it works. Like,
(01:14:49):
I just made the parts work in my mind, so I know what all the parts are and what they do. And it's something, uh, it's something about this trick where our imagination is less vivid than we think it is.
Like I'm picturing it in my head, I can see
it in my head, but then you try to explain
it and you realize there are blind spots in your
own imagination that you do not realize are there. Yeah,
(01:15:12):
our mind kind of tricks us into thinking we've filled in all those little gaps. Um, like, I was having a similar situation just with Big Trouble in Little China. I feel like my memory of it, when I summon it, is more just a flash of images, and, uh, probably leaning heavy on just the film score,
(01:15:34):
just all these different ideas, scenes, and sounds from the film that I have encapsulated as my memory of the film. I think that's true for a lot of movies with me, yeah. And yet for some reason, people are generally
better at predicting how well they'll be able to describe
a narrative. So that's one of the outliers for me. I'm wondering what that really means. Well, I mean, you
(01:15:54):
could certainly take that apart and say, well, a lot of it has to do with the way that we make sense of our lives as narratives when they're really not, looking for the story shape in everything from, you know, your personal life to current events. We're continually bashing our heads up against the reality that things do not play out with the economy or the form of a traditional narrative. Well, unless you have anything else, Robert,
form of a traditional narrative. Well, unless you have anything else, Robert,
I think we should wrap up this first part, and
then when we return next time, we can look at
some of the applications of the fact that we have
an illusion of understanding, an illusion of explanatory depth about
the world around us. How can this knowledge be brought
to bear in various domains of life. Yeah, we'll consider
(01:16:40):
the children, we'll consider politics. Um, we might even consider zombies a little bit. We'll see. Uh, so. One thing, though, I do want to keep in mind about this is that you shouldn't just take this as pessimistic, right? Uh, like, oh, we don't actually understand things as well as we think we do, how horrible. You could be pessimistic. You could say, why do we know so much less about how things work than we
(01:17:02):
feel like we do. Or you could look at this in a very optimistic way and instead ask the question: how are we so good at surviving in and traveling through and manipulating the world when our models for understanding causal relationships are so skeletal and bare-bones? Like, why are we so good at life compared to how
(01:17:25):
absolutely, uh, sparse the mental imagery that animates our understanding of the workings of things is? Yeah. And
I can think of two other positive spins. Hey, if I forget details of the plot and the narrative structure of Big Trouble in Little China, that means the next time
I see it, a lot of stuff's gonna be new again.
(01:17:47):
Oh, and then we talked about our own privileged place of continually exploring new topics and confronting what we don't know and learning more about the world around us. We should also point out, at risk of sounding like I'm pandering, uh, that our audience probably is in much the same boat. The mere fact that you listen to Stuff to Blow Your Mind, um, that you engage in,
(01:18:09):
uh, educational podcasts, it just means, it means that you realize... Hey, like, for instance, we recently had an episode on butter. Some people might say, I know how butter works, I'm not gonna listen to that. But the people who did listen to it, they realized, well, I think I know how butter works, but if they did an episode on it, then I guess there's more
(01:18:29):
to the scenario than I give it credit for. That "oh, I guess there's more" moment, I
feel like it's very central to what we do. Yeah. Uh, but don't let it go to your head, Robert. You and I, and you out there listening: we're no better. We're no better. We just strive to understand the depths of our ignorance. All right. And if you
(01:18:50):
want to strive to explore the depths of your ignorance,
head on over to Stuff to Blow Your Mind dot com. That's where you'll find all the podcast episodes, videos, blog posts, you name it, links out to social media accounts. We're on Facebook, Twitter, Tumblr, et cetera. But the mothership is Stuff to Blow Your Mind dot com. And of course, as always, if you want to email us directly to get in touch about this episode or any other, you can hit us up at blow the mind at how
(01:19:13):
stuff works dot com. For more on this and thousands of other topics, visit how stuff works dot com.