All Episodes

February 1, 2020 65 mins

If science were a candle in the dark, we’d need only spread its light to combat climate change denial and vaccine conspiracy theories. But what if the problem is more complex than that? What if a quirk of human cognition enables us to remain willingly in the dark, even as we hold the very candle? In this episode of Stuff to Blow Your Mind, Robert and Joe explore the concept of motivated numeracy. (originally published 11/15/2018)

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to Stuff to Blow Your Mind. My name
is Robert Lamb and I'm Joe McCormick, and it's Saturday.
Time to go into the old vault. This time we're
looking at an episode that we originally released in November
of Yes. Uh, this was motivated numerousy and the politics
ridden brain. It's so, it seemed like a good good
year to roll this one back out in. Oh boy,

(00:26):
let's dive right in. Welcome to Stuff to Blow Your
Mind from how Stuffworks dot com. Hey, you welcome is
Stuff to Blow your Mind. My name is Robert Laham
and I'm Joe McCormick. And Robert, I want to hit
you with a quote. I'm sure you've heard this one

(00:48):
a million times before. It's a quote from the American
writer Upton Sinclair. Uh, and the quote goes like this.
He says, it is difficult to get a man to
understand something when his salary depends upon his not understanding it. Well,
that's pretty apt. I'm not sure I've actually heard that
one before, but but that certainly has a ring of

(01:08):
truth to it. Really, you never heard that. I think
I've heard that one. People roll that out all the
time when they're talking about, you know, industry shills, paid spokespeople,
pr types. Um. Yeah, yeah. So. Upton Sinclair ran for
governor of California in the nineteen thirties, and he claimed
in a campaign retrospective that he used to tell his
rally audience is this and it's a great line. There's

(01:29):
plenty of truth to it, right. Yeah. By the way,
for anyone who's not familiar. Upton Sinclair lived seventy eight
through nineteen sixty eight, and he was the author of
The Jungle and perhaps more known to some of our
listeners for his story Oil, which was loosely adapted into
the two thousand seven film There Will Be Blood. Always
going to be remembered for a movie first, But did

(01:50):
he also write Boogie Nights the original version? Maybe so, John,
maybe so? But but no, not just an author but
also a politician. Yeah. So he was used to talking
about issues of public policy. I mean, he was a
politically concerned writer. I think a lot of times people
put him in categories like like with Charles Dickens. You know,
somebody who's known for writing fiction but also for exposing

(02:12):
the plight of the politically disadvantaged. And so, yeah, this
quote comes up a lot, like if you're talking about
a lawyer representing big tobacco back in the day, who
would come on TV and say the science isn't settled yet,
there's no proof cigarettes cause cancer, or maybe a coal
industry lobbyist, maybe literally the same exact person comes on
TV a few decades later and says, don't listen to

(02:33):
the climate alarmist, that there are scientists on both sides.
You know, climate change isn't settled yet. When you're hearing
from people like this who are like paid to represent
a particular point of view, you obviously don't have to
be a super skeptic to realize you shouldn't just take
their word for it. Um, But people who get paid
to tell you that the grass is pink and the
sky is green are going to keep saying that. You know,

(02:55):
you're not going to change their mind by offering them
evidence or making good points or something, because they're not
here to figure out what's true. They're here to say
their lines. Yeah, I'm I'm always reminded of the The
Doctor character who would inevitably show up in the late
night infomercials for various products. Um, you know, clearly they
didn't just do a cold call and get get somebody

(03:16):
in there to uh to to shill for this product.
Only Marrow Burrow stimulates your cue zone when it comes
to people like that. I guess this is kind of
a tangent. But when it when it comes to like
people who shill for a particular you know, point of
view or spokespeople for some kind of line on TV,
I always kind of wonder like, do they end up

(03:37):
really truly believing the thing that they're paid to say
or is there some kind of cognitive dissonance in their brain.
I don't know what it's like to be in that mind. Yeah,
that's a great question though, because I mean it's one
thing for just like an individual to endorse a product,
you know, yeah, like reading an ad yeah you know,
or or even saying hey, I tried out this product.
It's really great. You guys should give it a try

(03:59):
as well, which obviously we do on the show. But
but but when you get to that level where you
have an expert, when you have say a medical doctor, um,
appearing on an infomercial or appearing even um you know,
in some sort of governmental body and saying yes, I
stake my reputation on this, I stay my professional um expertise.

(04:20):
Uh I put it on the line in support of
this product or this industry and directly contradicting what appears
to be the preponderance of the evidence. Right. That that's
what these industry shills come out to do, Right, They
come out to tell you that the scientists are wrong.
But anyway, given evidence that has emerged in recent years,
I think maybe later on in this episode we should

(04:42):
come back and try to do an updated version of
this Upton Sinclair quote, because I think that the scope
of this quote is actually too limited by just focusing
on the salary. So so we'll come back to this.
But today we're gonna be talking about a form of
motivated reasoning, a form of motivated reason thing called motivated
numerous ee and specifically how that relates to the idea

(05:05):
of identity protective cognition. And this has come up on
the show before. We talked about it in an episode
a while back called Science Communication Breakdown. I think that
was like a year and a half ago or so
I believe so, But it was based on when you
had gone to the World Science Festival and seen a
talk that included the work of the Yale psychologist Dan Kahan,
who is he does a lot of really interesting research

(05:27):
about biases and motivated reasoning and the ways in which
our brains failed to be rational in one way, sometimes
by being uh sort of subversively rational in another way. Yeah,
isn't it interesting how we sometimes uh as seem to
outsmart ourselves in these matters? Yeah? So I want to
start by thinking about two different kinds of disagreements that

(05:51):
come up when people talk about politics. There are obviously
lots of different ways people can disagree about politics. Here
here are two different kinds of currently politically re ev statements.
One is somebody who says the government shouldn't have a
right to tax my income. Right you might talk to
like a libertarian who says that. And then here's a
different politically relevant statement, human activity is the primary driver

(06:14):
of global climate change. Now, people have political arguments over
statements like both of these two all the time, But
these are not at all the same kind of statement.
One big difference is that the first statement is a
statement about values, like you can't do a bunch of
empirical experiments to determine if it's correct or not. That
the government should be allowed to tax people. That's just

(06:37):
a question about what you believe should be the case.
What about values and priorities, and about the priorities of
the person making the statement, right, it's a it's a
it's a commentary on how you think, or how one
group thinks politics should work, or how government should work. Rather, uh,
And we shouldn't be confused by the idea of political science.
Political science, though a serious field, is a different matter

(07:02):
compared to the natural sciences. Well, it's certainly true that
with questions about like whether or not you should tax income,
you can approach that question from the point of optimizing
for certain goals, Like if you specify a goal and
you compare different methods of achieving that goal, then you
can do that. But like, absent all of that kind
of framework, that's just a statement about values. On the

(07:23):
other hand, you've got the human activity is the primary
driver of global climate change. That statement is not like that.
There simply is a fact of the matter, either human
activity is the primary cause of global climate change or
it isn't. And you can do empirical experiments to test
this hypothesis. And of course the answer is that yes,
we now know that it is the primary driver of

(07:45):
global climate change with like a you know, ninety something
per cent certainty. It's we really, really strongly know this. Now,
this is undoubtedly the scientific consensus. Even though this question
is politically controversial, it's not scientifically controversial. And if you
doubt this, you actually have the ability to go look
up the evidence yourself. Especially, that's one thing that the

(08:06):
internet is great for. You can go read the most
recent I p c C report, you can read the
thousands of individual studies. You can look at the data
and read the climate scientist's own words about how their
conclusions are drawn from the data of their experiments. And
if you actually do that, I think any reasonable person
should be able to conclude, of course, human activities the

(08:26):
primary cause of climate change. And yet that's not what happens,
is it? Questions like this remain politically controversial, with people
often judging the answer in a way that aligns with
their political identity. Now, speaking of politics, I just want
to throw in a quick fact Lloyd here about this episode.
We were recording this on election day. It will be

(08:49):
published after election day. So yeah, so we don't know
what the outcome is going to be yeah. So so
none of this, none of this is a commentary on
things that have not yet occurred as of this recording,
and it's not really a commentary on politics per se.
It's a commentary on psychology really that that is going
to be at play and people of all political persuasions.

(09:10):
So I think we should turn to look at the
big paper that we're going to be focusing on in
this episode. The the lead author was was Dan Kahan,
but the other authors include Ellen Peters, Rika Cantrell Dawson,
and Paul Slovak. And it's called Motivated Numerousy and Enlightened
Self Government, published in Behavioral Public Policy, I think first
published in Revised in seventeen, and they start off by

(09:34):
observing the same kind of thing we've just been talking
about that obviously there are questions where people can argue
about their political values, but that politics is also full
of these arguments about purely empirical questions, many of which
are no longer in fact empirically controversial, like is climate
change driven by greenhouse gas emissions? The answer is yes,

(09:55):
but this is still politically controversial. Other questions like this
that they give a big list of them. One would
be like could we improve public safety by storing nuclear
waste deep underground? And that one is a yes as well.
I believe that's the one that was brought up in
the panel of World Science Festival that Kahan spoke on,
and that was one that actually I seem to be

(10:16):
more divisive. Um, they kind of pulled the audience there
at the World Science Festival, so you know, for the
most part of very informed and curious bunch, but even
they were not as well informed on this issue as
they were on some of these other issues we're talking
about here. Yeah, Now, not all of these questions are
going to be as settled with as much confidence as

(10:37):
other ones are. So like, we have a very high
confidence now that greenhouse gas emissions are driving climate change,
but there could be other questions that are in theory
empirical even if we don't have a scientific consensus yet.
I honestly don't know where this this next question falls in,
whether it's more settled or less settled. But other questions
would include things like, uh, do gun control measures reduce

(11:01):
violent crime or increase it? Uh, does public spending in
the aftermath of an economic recession increase the length of
the recession? Or shorten it. And so with some of
these questions, we don't always yet know the correct answer,
but they are at least empirical. You can do tests,
and you can gather data, and you can find with
some degree of confidence that there is a correct answer.

(11:22):
It's not just going to be an endless contest of values. Yes,
it's in the domain of science, and science can have
at it. One of the interesting things about a lot
of these questions is that they, for some reason almost
always seem to concern questions or perceptions of risk. I
guess maybe that's just what politics is about. Yeah, I
think there is a lot of risk analysis in politics.

(11:44):
I mean, obviously there's there's there's always a certain amount
of fear mongering as well, Like how do you how
do you capitalize on the sort of risks that that
voters are considering? How do you potentially stir up the
flames or uh or or or tamp them down a bit,
depending on what kind of reaction you're looking for. Well,
I guess you could look at many major policy decisions

(12:07):
as um as conflicts between perceptions of different kinds of risks, right, Like,
so somebody will say, well, there's a certain amount of
risk we're running by not doing anything about global climate change.
Here the things that could result and somebody else's yes,
But if we do something about it, we risk I
don't know, we risk not making enough money or something
or or per perhaps it's yeah, we risk hurting ourselves

(12:30):
in the short term or a lot of a lot
of times, the short term risk versus long term risk,
immediate risk versus more you know, elusive risks. Yeah. Now, obviously,
when you look at these questions that have been pretty
convincingly answered with empirical evidence, and yet intense disagreement persists
in politics, this obviously isn't helpful. Like there's enough under

(12:50):
dispute over what values should drive public policy that it
really doesn't help to add to that, like unnecessary dead
end disputes about underlying empirical facts, when the science or
the facts are actually pretty clear. So the question is
why how come you can have a question where the
evidence is very clear, such as the cause of climate

(13:12):
change being related to the burning of fossil fuels, but
the public not being in general agreement about it. And
this This paper looks at two major competing hypotheses to
explain this, like why people don't accept the facts when
the facts are pretty clear. And the first one is
the hypothesis they call the science comprehension thesis or the SCT,

(13:35):
and basically it goes like this, the public in general
has a pretty weak understanding of science. We are likely
to misunderstand what scientists are telling us. If you put
a scientific paper in front of us, we're probably not
gonna understand it. Thus, we're likely to be misled by
people who are trying to deceive us to their own advantage.
And I think unfortunately, or well, I don't want to

(13:56):
preempt what we get to in a bit, but I
guess we could say unfortunately. This hypothesis is pretty common
among skeptics and science enthusiasts and even scientists themselves, and
I feel myself very drawn to it because if you
accept that the problem is, um, we're just not scientifically
literate enough to understand what's being talked about, in a way,

(14:17):
this is actually kind of hopeful, especially if you're an
educator or a science communicator, because the problem is simply
a lack of knowledge. There's just a deficit that can
be made up. And so if you just you know, community,
you give people better scientific education better communication of the
scientific reality. Under this hypothesis, if you just teach people

(14:38):
better scientific literacy skills, they will finally see the light
and come around and accept the empirically verifiable facts. Yeah,
there's hopingness because you can you can teach people about science.
You can you can teach people more about logical thinking
as well. Um, and though of course I think that's
clearly part of scientific literacy as well. But but I
can't help but think back to, for instance, harl Sagan's

(15:00):
discussion of on the Bologny detection kit, Like the problem
is people don't have the kit online, right, or they
don't have all the tools and the kit for instance,
just to just to blow through these really quickly. He
goes into far more detail in the demon Haunted World.
But the nine tools are again abbreviated Number one. Whenever possible,
there must be independent confirmation of the facts. Facts and

(15:23):
quotations uh. Number two encourage a substantive debate on the
evidence by knowledgeable proponents of all points of view. Number
Since number three, arguments from authority carry little weight. Authorities
have made mistakes in the past, they will do so
again in the future, and science there are no authorities.
At most there are experts. Number four, spin more than

(15:43):
one hypothesis. Number five. Try not to get overly attached
to a hypothesis just because it's yours. Number six. Quantify.
If whatever it is you're explaining has some measure, some
numerical quantity attached to it, you'll be much better able
to discriminate among competing hypothes seas. This is why numbers
are often useful in science. Ye exactly. Number seven. If

(16:04):
there's a chain of argument, every link in the chain
must work, including the premise, not just most of them.
Number eight Acam's razor. This is basically, when you have
um two hypotheses that explained data equally well, you choose
the simpler of the two. Right, So like a dream
or a hallucination is probably a better explanation for your
alien abduction experience than aliens coming here. Exactly. And then finally,

(16:27):
the knife tool in the Bologna detection kit always ask
whether the hypothesis can be at least in principle falsified
Propositions that are untestable or unfalsifiable are not worth much.
That's a really good kit, and I think Carl Sagan
I don't want to put words in his mouth, but
I do think he he seems to operate from that

(16:47):
kind of hopeful scientific comprehension thesis point of view. At
least as best I can tell, it seems like he thinks,
you know, the problem with the lack of scientific skepticism
among the people is just that they need access to
better tools like this, and if we can communicate those
tools to them, they can bring them online and then
they'll be more protected against the titular bologna. Yeah, I

(17:09):
think so. Now back to this paper, the authors write
that on this hypothesis on the science comprehension thesis, the
lack of comprehension skill causes people to over rely on
what's calling what's known as system one thinking when judging
empirical scientific questions like perceptions of risk. Now we should
mention a little bit about the difference between these concepts

(17:30):
of system one thinking and system to thinking. This is
big in the works of people like Daniel Kanemon who
have written about behavioral economics and the psychology of bias
and stuff. That's right. It was key to his two
thousand and eleven book Thinking Fast and Slow. Um, And
we've talked about system one thinking system to thinking on
the show before I Think, I think so Yeah. The
basic explanation here, System one thinking is all about fast, automatic, frequent, emotional,

(17:55):
stereotypic and unconscious thinking. This is the theory. This is
rule by heuristics, you know, shortcut ways of thinking. When
you when you look at two piles of things and
want to know how many, you know which pile has
more things in it? If you just judge by I
don't know your eyeball, it that system one system to

(18:16):
thinking would be what maybe you count the things in
the pile? Right? It is slow, effortful, infrequent, logical, calculating,
and conscious. This reminds me a lot of the two
fear networks that were recently discussed on the show Yeah
and the Slayer episode Yeah. System two is all about
avoiding the tiger haunted thickets. Well, if you rely on
system one and then you're more of a tiger racer,

(18:37):
a tiger boxer, or just I guess, just a straight
up tiger denier. And you know, both of those systems
are necessary actually because we don't always have time to
do deliberate, slow logical calculating conscious thought a lot. You know,
if we did that about every decision we made, we
couldn't live. That would be no way to survive. You

(18:58):
have to be fast and reactive and unconscious about all
kinds of things. And so the question is how do
you choose which types of decisions and scenarios to apply
these two different thinking schema to. On the science comprehension thesis,
I think the idea is that people are relying on
system one thinking to answer empirical questions about science that

(19:19):
are politically relevant, whereas they should be using their system
to thinking to get through the get through the fast, reactive,
stereotypic kind of thinking and come to the correct answer.
Fun fact, we used to be owned by a company
that called itself System one UH, named after this um
this mode of thinking. But that's not the only hypothesis

(19:40):
on offer. That's the science comprehension thesis. The other hypothesis,
the rival hypothesis, is what if the problem with controversies
over empirical questions is not that they're caused by a
deficit of knowledge or cognitive skill UH. And this other
idea the authors called the identity protective cognition thesis or
the I C t and they write, quote whereas s

(20:03):
CT attributes conflicts over decision relevant science two deficits in
science comprehension I set sees the public's otherwise intact capacity
to comprehend decision relevant science as disabled by cultural and
political conflict. In other words, it's not that people can't
understand the science, it's that they could understand the issue

(20:26):
if they were not politically charged. And it is specifically
the political charging of the issue that makes it impossible
for them to understand what they otherwise might be able to.
All right, so I have to try and put this
into tiger terms. Okay, So it's like having the capabilities
to avoid tiger kill zones but refusing to do so
for political reasons. Right, Yes, all your friends around you

(20:50):
maybe are saying, like, oh no, that the people who
say that the tigers hang out in the jungle are dumb.
They are the bad people, real people, really, the good
people all know that there are no tigers in the jungle,
that the tigers are somewhere else. I do admit I
love it anytime we can put things in terms of
big cat attacks. That always just seems to really help

(21:10):
explain the topic. You should know, I'm picturing not a
real tiger. But Tony the tiger. Yeah, Tony the tiger
mauling and killing people. All right, that works for me. Okay,
So here's the question. If this hypothesis is correct, why
would it be the case that political charging of issues
would make us unable to use our normal reasoning faculties. Well,

(21:31):
first of all, I mean, think about the Uptons and
Claire quote. It's difficult to make a person understand something
when their salary depends on it. Here we're not talking
about a salary, but about something else of immense psychic
and material value, and that is your membership, status and
standing within a social group that is in part defined
by its commitment to certain moral and political values. Well,

(21:54):
I think that's very much like salary. I mean, salary
is money, Money is life, money is happiness. I mean
we say it's not it is, uh, and then uh
and then but but it is the thing that allows
us to eat and live and be in most circumstances,
certainly in the world that we've we've we've made and
remade for ourselves. And likewise, in a more primal sense,

(22:15):
belonging to a group, being part of a group, that
is that is survival for for the Homo sapiens. Yes,
that is how we have historically and prehistorically managed to live.
It's psychically necessary to us. It's necessary for us to
have good mental end. In fact, I think in some
ways good physical health to be a member in good

(22:35):
standing of a social group and a social network. But
if you want to go into our you know, our
our evolutionary history, it is literally materially necessary to be
accepted as a member of the end group. If you're
driven out of your hunter gatherer tribe that things are
not looking good for you, you're just waiting to fall
into a tiger thicket at that point, right. And so,
if all your friends and allies believe one way about

(22:58):
any politically charged issue climb change or gun control or whatever,
and you put yourself at huge personal risk by advocating
a position that that group disagrees with, you could be
alienated from your social group. You could lose connections that
you depend on for mental health and survival. Thus, you
could definitely see identity protective cognition as a kind of

(23:18):
mental immune system. It protects the brain from beliefs that
could potentially cause you immense harm if you were to
express them. The brain detects a belief or an idea
that is a threat to your social identity, and it
puts up a wall against that belief and doesn't let
it in because it could hurt you, you know, And
I think we can all relate to this on one

(23:39):
level or another. You know, how many times have any
of us said, well, I refuse to believe that, or
I find that hard to believe. Um. And of course
there are a lot of examples that come up in
which they're the issues relate more clearly to personal belief
and and or just pure opinion and artistic value. For instance,
of a movie reviewer television review or tells me that

(24:01):
an upcoming Coen Brothers movie isn't worth seeing, I generally
find that hard to believe until I see it for myself,
and say, in the case of Inside Lewyn Davis, I
end up agreeing with what Inside Lewyn Davis. You know
it was wonderfully made. Prepare to be ostracized, but you
know it was wonderfully made. But it was just not

(24:21):
my cup of tea. Oh I loved it. I love
Oscar Isaac. It was Oh man, he's such a great
singer to the music and it was wonderful, the music
was was great. It just did not It did not
make me happy or make me sad in an interesting way.
You know, I will, I will do my best not
to fully alienate you and throw you out into the cold. So,

(24:44):
but that's one thing, right, ultimately coming down to art
in personal opinion. Uh and and there are I think
there are going to be certain areas where you are
going to be so attached to certain artistic values that
you're going to feel reluctant to state it because of
how it might affect your standing in a group. Oh yeah,
So that's a different kind of variation, Like there are
some unpopular aesthetic opinions that you're not really scared to

(25:08):
voice because you could abandoned them if you needed to. Maybe,
but I really deeply held aesthetic preference that would be
unpopular you maybe just don't even bring up. Yeah, Like
I imagine a band abandoning suddenly abandoning your favorite rock
band in high school, you know that sort of thing.
But but clearly, you know, a lot of these other
issues are also are going to be different. Matters, say,

(25:29):
matters of hearsay or something that's just not completely a
provable one way or another, uh say, some bit of
dirt on a political candidate that can need to be
confirmed or denied. But then we have to come back
to those empirical questions, the ones where science can and
does weigh in on the matter. Yes, and fortunately, as
the authors point out, not that many empirical questions are

(25:49):
really likely to trigger identity protective cognition. Only empirical questions
that are unfortunate enough to get tagged as politically significant
along partisan lines really acquired this taint. For example, you know,
there's been a partisan divide over the HPV vaccine, probably
because it has some kind of perceived relevance to sexual

(26:09):
morality and young people. But there's no partisan divide on
the use of antibiotics to treat bacterial infections. And most
questions are more like the antibiotics. There's just there's not
a partisan divide about it. What you know, temperature, water boils,
or scientific questions. There's just not really a partisan divide
on though. To come back to antibiotics, I see, I

(26:31):
see a dark future. I see there could be a
time where if members of one major political party but
not the other, happen to start talking about antibiotics. I
think you could quite easily see partisan associations arise, and
antibiotics could go from an issue that's non politicized where
pretty much everybody agrees to an issue that suddenly is

(26:51):
divided along partisan lines. Now that that seems sadly like
the kind of thing we would do. But to come
back on the other side, Okay, wait a minute, don't
be will also have an incentive to have correct beliefs obviously, right,
I mean right, yeah, I mean we It definitely pays
off to have a working, realistic model of how the

(27:12):
world works that you live in. But it pays off
in some ways that are much more personally immediately relevant
than others. Uh, depending on the issue. Think about it.
In policy relevant empirical questions like the impact of carbon
emissions or the impact of gun control policies, the consequence
of one individual person being wrong is vanishingly small. But

(27:35):
for that one person, the consequence of being alienated from
their identity group is potentially massive. So on one decision,
you potentially cast one vote out of millions for a
poorly reasoned public policy, and on the other decision, you
could alienate or weaken your most important friendships, your work relationships,
and even your sense of self. Um and so the

(27:57):
author's right quote, persistent conflict over risks and other policy
relevant facts reflects a tragedy of the science communications commons,
a misalignment between the individual interests that culturally diverse citizens
have informing beliefs that connect them to others who share
their distinctive understanding of the best life, and the collective

(28:19):
interests that members of all such groups share in the
enactment of public policies that enable them to pursue their
ends free from threats to their health and prosperity. Okay,
maybe we should take a quick break and when we
come back we can take a look at how we
can compare these two hypotheses. Thank alright, we're back. So, yeah,

(28:40):
we're gonna look at ways to compare these two hypotheses. Now,
of course, in all of this, I can't help but think, well,
why can't it be both? Why can't what we can
we have like both of these, uh, these these reasons
in play? You mean that? So we've got the two apotheses,
the science comprehension thesis, which says that people come to
incorrect beliefs about scientifically are politically relevant empirical questions because

(29:05):
they lack the scientific literacy skills to understand the issues.
And then the other one says, it's not that they
lack the skills to understand the issues, it's that they
are being selectively blinded from proper reasoning by identity protective
cognition that is socially conditioned. Right, the idea coming back
to Segand's tool kit, it's like, do I not have

(29:25):
the tools or is there just this like this, there
is a social and psychological reason for not using the
tools that I have. Well, I think technically you could
have both in a way. So the question would be, um,
can you show that these are are mutually exclusive, and
that would come through in the evidence. But you certainly
could have a population that has fewer science comprehension skills

(29:48):
than it could and so you could educate people in
science better and we would have higher scientific comprehension skills.
But also within that population, identity protective cognition could be
highly salient. And so that's a good question. But if
you want to pit these two hypotheses against each other,
you can create just create conditions where they're obviously going

(30:09):
to be antagonistic as far as the data is concerned.
So here's one idea. If the science comprehension thesis is correct, right,
the problem is a deficit and understanding science. People who
are better at drawing correct conclusions from scientific data will
be better at it, whether or not the data concerns
politically relevant issues. Right, So it should mean that if

(30:33):
the s CT is correct, the science comprehension thesis, it
should mean that if you have scientific understanding skills like
numerous E, which is skill at using numbers and drawing
conclusions from from quantitative data. If you have high numerous E,
you should be better at drawing the correct conclusions from data,
whether or not that data flatters your political perceptions. Um.

(30:57):
On the other hand, if the identity protective cognition these
this is correct, people who are better at drawing correct
conclusions from scientific data will see this skill significantly hampered
by the introduction of a political identity threat. All right,
so I have a feeling we're gonna we're gonna look
at some experiments. Yes, so the experiment is big sample
of one thousand, one hundred and eleven demographically diverse and

(31:20):
ideologically diverse US adults. Uh, and you sort them according
to a couple of major factors. One is political ideology,
so they're sort of on on a scale of how
liberal or conservative they rate themselves. And then the next
is their numeracy skills, determined by a numeracy test. The
author's right quote a well established and highly studied construct
and numerousy encompasses not just mathematical ability, but also a

(31:43):
disposition to engage quantitative information in a reflective and systematic
way and to use it to support valid inferences. So
it's not just being good at math, but it's being
able to say, look at data in a study and
figure out what that data should tell you. So the
authors came up with a couple of fictional experiments, and
they took the results of these fictional experiments and asked

(32:05):
the participants to draw conclusions based on the results they
showed them. Now, both the results of the fictional experiment
and the topic of the experiment were manipulated to create
different test conditions, so the same results were offered in
the context of either being about quote the effectiveness of
a new skin rash treatment or quote the effectiveness of
a ban on carrying concealed weapons in public one of

(32:28):
those is going to be more controversial than the other. Right,
so what they're saying is they they expect that the
skin rash treatment is not going to have any partisan
significance unless I don't know, major Republicans or Democrats start
talking about skin rashes a lot, but at this point
it was not politically relevant. The other is of course
being about guns, which is one of the most highly

(32:49):
charged politically charged topics where people break down along partisan lines. Okay,
so imagine you're one of the people who's a subject
in this experiment. They will give you a table of
results to look at, and it might say it's say
it's you're in the skin rash condition. It might You'll
have a table of four numbers, and the different numbers
represent patients who did use a new skin cream and

(33:12):
patients who did not use a new skin cream. And
then the other axes of the table will be patients
whose rash got worse and patients whose rash got better.
And then you need to determine, based on the numbers
and the table, whether the skin cream is more helpful
or more harmful, and then substitute in the exact same
thing for instead of using patients using a skin cream,

(33:34):
cities that did or did not ban carrying concealed handguns
in public, and instead of the rash getting worse or
the rash getting better, it's crime went down or crime
went up. So the authors had three hypotheses three that
they would test here. One is that they guessed subjects
scoring high in numerousy would be more likely to get
the right result in both skin treatment conditions. And this

(33:57):
is pretty straightforward. Basically, they're saying people who have higher
numercy skills are more likely to use deliberate system to
thinking to work out the covariance between the results and
draw the correct conclusions, they're more likely to get the
skin rash thing right. Hypothesis too, is based on the
science comprehension thesis, So if the science comprehension thesis is correct,
they predict that subjects scoring higher in numeracy quote would

(34:20):
be more likely to construe the data correctly not only
when it was consistent with their ideological predispositions, but also
when it was inconsistent with them, and thus they were
likely to display less ideological polarization than subjects lower in numeracy.
In other words, on the science comprehension thesis. If you're
better at understanding quantitative science, your interpretation of the results

(34:43):
of the gun band thing should be less affected by
political bias. And then finally, they have a third hypothesis
based on the identity protective cognition thesis. Quote, ideological polarization
in the gun band conditions should be most extreme among
those highest in numerous E. Under this hypothesis, people high
and numerous E are not immune from identity protective cognition

(35:06):
and will, like everyone else, always seek ways to affirm
their existing political beliefs. But using their NUMEROUSY skills, they
can use system to thinking to draw correct but counterintuitive
inferences from the data when it flatters their beliefs, but
detect that they should skip this and use quick heuristics

(35:27):
to arrive at the wrong answer when that flatters their beliefs.
So quote, if high NUMEROUSY subjects use their special cognitive
advantage selectively only when doing so generates an ideologically congenial answer,
but not otherwise, they will end up even more polarized
than their low numerous EY counterparts. And so here we

(35:48):
get to the results. So first thing worth noting is
that detecting covariance is difficult if you're not experienced in it,
so across all test conditions, most people got the answers wrong.
All test conditions combined, fifty nine percent of subjects supplied
the incorrect answer uh. And this is probably because if
you just look at the numbers and use a quick

(36:09):
heuristic or system one thinking you're likely to draw the
opposite of the correct conclusion, you'd actually have to do
the math and compare some ratios to come up with
the correct answer. But the results found hypothesis one, which
was that if you're high in numerous E you're you've
got a better chance of getting the skin rash results correct.
That was supported by the data. The better yard at numeroucy,

(36:30):
the more likely you are to draw correct inferences from
politically neutral data, though most people were not very good
at this um hypothesis too, and which would be consistent
with the scientific comprehension thesis that people high in numeracy
will show less polarization on the gun band condition, this
was not supported by the data. Conversely, hypothesis three was

(36:53):
supported by the data, and and that one was that
people with high NUMEROUSY skills will show even more ideologically
holarized judgments about the results in the gun band condition.
And so what the authors conclude is that high numerous
E partisans use their skills selectively. When a laborious system
to calculation will yield results that are flattering to your

(37:14):
political point of view, you'll do it. But when it
threatens your point of view, you'll skip it. You'll skip
system to reasoning and just draw incorrect heuristic conclusions. Uh.
And so a few takeaways here I think we should
think about while we're discussing this. One is that I
should stress this study doesn't show that science education and

(37:34):
science communication efforts are pointless or bad or anything like that.
Science comprehension skills, including numerous E are crucial for answering
all kinds of questions accurately when a system one heuristic
model would cause you to come to the wrong conclusion.
So it's kind of the baseline, right, you've got to
have scientific comprehension skills. But if these results are valid,

(37:56):
what they do show is that science comprehension skills are
not necessarily protection against getting politically charged science questions wrong
because the brain uses its science comprehension skills selectively it's
more likely to bring out the big guns if they
will help it protect its identity, and it's more likely
to surrender to heuristic thinking if that's what protects your identity.

(38:17):
Another way of putting it, political identity can make you
selectively bad at math, even if you're normally good at math.
And so in this week, this is where we get
into some of these areas where we see, say, you know,
an individual um that that has a scientific background or
PhD or what have you, that you see showing up
on the side of say climate change deniers, or or

(38:40):
even something more ridiculous like a like a like a
flat earth belief system. Yeah, I almost never see it
with flat earth beliefs, but you do see it with
climate change, with definitely what you notice with climate changes
that like um Sometimes people come up with lists of
scientists who don't agree with the consensus on climate change,
and usually almost none of them work in fields relevant

(39:03):
to climate change. Uh, you know, they're they're not like
climate scientists. I'm not saying there are no climate scientists
that disagree, but they're almost none. They tend to be
somebody like one example that often comes up, and I
honestly can't remember to what extent his disagreement is with it.
But say, Freeman Dyson is an individual of note who
has at least at times cast some doubt in the area,

(39:26):
but is brilliant. Is Freeman Dyson isn't was? He's not
a climate scientist, right, It's it tends to be people
commenting outside their area of expertise, and yet they still
have the aura of credibility because it's like, well, these
are smart people, they're scientists, right. Uh So, you know,
you'll see a list of scientists who don't accept the
consensus on climate change, and they might be like petroleum

(39:48):
engineers and stuff like that. You know, so it's like,
not like petroleum engineers aren't smart. I mean, I'm sure
all all these people are very smart people. But it's
just that having scientific comprehension skills does not tech to
you against arriving at malinformed bad conclusions that support your identity. Now,
of course, one of the tools and seconds tool kit

(40:08):
I had to do with replication. Yes, uh so that's
always a big question. And in fact, I found one
thing that I wanted to explore real quickly. If you
follow psychology research and you saw something about motivated numeracy
failing replication in a recent study. I think that's probably
a reference to a conference paper draft presented in seventeen

(40:29):
that claimed, as part of its findings to fail to
replicate the motivated numeracy effect. And then Dan Kahan and
Ellen Peters, two of the original authors of the first
paper we were talking about, in response, defended their paper
as best as I can tell, quite successfully by pointing
out that the study that failed to replicate the motivated
reasoning effect UH number one had a very small sample

(40:52):
size and fifty five, and was ideologically homogeneous. It was
basically liberal, and in a paper called rumors, the non
replication of the motivated numeroucy effect are greatly exaggerated, uh
Kahan and Peters. They so they argue against this supposed
failed replication, and they also present the results of their

(41:12):
own replication attempt with a with a sample size of
fife in which they did successfully replicate the findings of
the original very closely. And so, as far as I
can tell, motivated numerousy through identity through identity protective cognition
is still pretty solid. It looks solid to me. And also,
as far as I can tell, that's not just me

(41:33):
defending a cherished belief that's important to my identity through
motivated judgment, because in fact, I find I strongly dislike
the idea of identity protective cognition. I think I would
much rather live in the world of so many of
our anthropogenic climate change accepting peers, and where, you know,
it's the world where if you could just educate people
enough with better science literacy skills, these dead end public

(41:57):
disputes over pretty solid empirical science could be resolved. What
means you could essentially win an argument over these issues
by presenting facts, presenting data. And that's how a lot
of these you know, like science people want it to
be like that, right, science people want to say, well
I can, I'll just bring more evidence. You'll show up
with even more references next time, and that'll get them.

(42:19):
But I'm afraid the evidence seems to be coming in
that it doesn't necessarily work that way. And maybe, and
you know, we shouldn't be all or nothing in the
way we talk about things. Different different types of appeals
will work with different people, but on average that does
not appear to be how people work. All right, Well,
on that note, we're going to take a break, and
when we come back, we're gonna expand on the the
concept a little bit and talk about what can possibly

(42:42):
be done and talk about Scott Steiner. Thank, alright, we're back. So, Joe,
were you familiar with the Scott Steiner before I mentioned
him to you? I was not tremendously familiar. But you
sent me the best video I've seen all week. Yes,
So this was a video, and this is readily available

(43:02):
online because it kind of went viral and became its
own meme. But it's a video of professional wrestler Scott Steiner,
a k A. Big Papa Pump. Okay, Yeah, well I
think I knew him better by that name. Yeah that
was Yeah, that was a moniker he adopted at one point. Uh,
And it's This is a clip from a wrestling promotion
that was known in two thousand eight is t n A.

(43:25):
The promotion is now called Impact, and Steiner launched into
a backstage promo that, in typical pro wrestling fashion, is
all shouty and laced in macho pravada, but in a twist,
it's also full of math and statistics. So he makes
the highly rigorous yes yes, and in this particular promo,
He makes the following claims, I'm just gonna roll through

(43:47):
these in a normal human voice. Okay, So he points
out that normally a wrestler has a fifty fifty chance
of winning a match, all else being equal. Sure, okay, yeah,
but given his uh big Papa pump superior genetics, um,
his opponent Samoa Joe only has a chance of winning.
But it's a three way match as well, and it

(44:08):
involves Kurt Angle. So each participant here has a thirty
three and a third percent chance of winning. But he
but since Kurt Angle, according to to Steiner, knows that
he cannot win, he won't try. Uh So Steiner presses
the following point quote, So, Samoa Joe, you take your
thirty three and one third chance minus my twenty percent chance,

(44:30):
and you have an eight and one third chance of
winning at Sacrifice, sacrifice being the name of the pro
wrestling event. But when you take my seventy five percent
chance of winning, if we were to go one on
one and then add sixty six and two thirds per cents,
I got one and forty one and two thirds chance
of winning at Sacrifice. See Samoa Joe. The numbers don't

(44:52):
lie and they spell disaster for you at Sacrifice? Did
you watch Sacrifice? Were you there? I did not? I
not there. I did watch some clips from it looks
like it was, you know, pretty hard hitting match. Interestingly enough,
um Samoa Joe one oh Man. However, Kurt Angle was
injured and had to be replaced by another wrestler, so

(45:14):
one assumes that would have changed the equation somewhat despite
having a negative forty one chance of winning one. So yeah,
but as Steiner says, the numbers don't lie or do
that is this admittedly ridiculous example? Is this is this
Scott Steiner falling prey to a lack of understanding regarding

(45:34):
numeracy or is it motivated numeracy? Is he just so
highly motivated by his dislike of Samoa Joe and his
belief in his own superior genetics that he just so
uh you know, readily mishandles them. That might be a
better example of a mathematical incarnation of the Dunning Krueger effect. Sure,
but this is where you believe that you have more

(45:54):
fluency in a particular area than you actually do. Yes,
the we we should we should get into it at
one time the Dunning Kruger effect because there's a I know,
there is a more nuanced understanding of it than you
usually see when it's deployed in the media and stuff.
But the basic idea is that with the Dunning Kruger effect,
if you are not very good within a skill set

(46:16):
or within a knowledge domain, you also lack the meta
cognitive capacities to understand what would make somebody good at it. Thus,
you fail to grasp your own shortcomings. And thus people
who are very low skilled or very low knowledge and
a certain domain tend to vastly overestimate their skills or
their knowledge because they can't know they can't know what

(46:39):
they don't know. All right, Well, I realized that this
example was was maybe more entertaining than helpful. Still my
only opportunity to really work Scott Steiner into an episode.
Come on, we've been plowing through a psychology paper. We've
gotta have a little wrestling to lighten the load. Alright, Well, well,
now that we've lightened the load, let's let's come back
to like the big remaining question you have. If motivated numeracy,

(47:01):
uh is the key thing that's happening here? If this
is the the enemy, the threat, then how do we
deal with Yeah? Like, what what can be done? And so?
One thing I would take away from this research is
that good science education and science communication are necessary, but
not sufficient. Necessary but not sufficient to produce a correctly

(47:22):
informed citizen. Read. You can't have people making good judgments
without understanding the facts. But the better they understand the facts,
the more they'll use their understanding to support their identity
derived point of view. So Kahan and others proposed that
the way to beat motivated reasoning is not necessarily to
improve the reasoning, but to remove the motivation. To remove

(47:45):
the motivation, I like that. That reminds me so much
of Krishna's words to Argina in the Hindu epic the
Baka vad Gita. Uh oh yeah, yeah, yeah, if if,
if I may, I'd like to read, you know, because
having come from the quoting Scott's Ironer, I obviously want
to move on the other high literature. Yes, uh so
this is these are the words of Krishna, that man

(48:08):
alone is wise, who keeps the mastery of himself. If
one ponders on objects of the sense, there springs attraction
from attraction grows desire, Desire flames to fierce passion, Passion
breeds recklessness. Then the memory, all betrayed, lets noble purpose
go and SAPs the mind till purpose, mind and man

(48:31):
are all undone. But if one deals with objects of
the sense, not loving and not hating, making them serve
his free soul, which rests serenely, Lord Low, such a
man comes to tranquility, and out of that tranquility shall
rise the end and healing of his earthly pains. Since
the will governed sets the soul at peace. Oh, I'd

(48:53):
say the will governed as much as you're said, than done,
isn't it. Oh? Yeah, I mean that's why we've clearly
we're still struggling with it. And you know, and I
don't want to, you know, obviously this is a this
is a work of immense literary significance and in deep philosophy.
But but yeah, this idea of of acting without passion
seems to to line up reasonably well with this idea

(49:14):
of tackling various um uh, you know, innumerable um problems
without bringing in this political motivation. Yeah, though, of course
it seems very unfortunate that I think a lot of
this motivation comes in unconsciously right, because I mean we
we I guess we haven't really addressed this so far.
But you have to assume that people are not generally

(49:35):
and you probably know from your own experience at least
if it's like mine, they're not generally thinking like, Okay,
how should I trick myself right now to come to
the wrong conclusion because it would be socially acceptable. It
doesn't feel like that to think about political issues that
are you know, are empirical issues that are politically relevant. Um,
it just feels like, well, I'm just trying to figure

(49:56):
out what's right, but obviously I must be doing this
at least sometimes Yeah, which just kind of where we're
often just we're swimming through life. We're not necessarily thinking
about the individual strokes. You know, it all kind of
comes together and we end up making these mistakes and
cognition and to reemphasize what the authors of that original
paper we're talking about, I mean, in a way, this
is rational. It's rational in a perverse way, not in

(50:19):
a good way that ultimately creates the most benefit, but
in a kind of short term perversity. It is rational.
Like you will sometimes hear people talking about or lamenting
in politics, how others just won't do what's rational. But
given a certain interpretation of rational self interest, this irrational
relationship with empirical questions makes perfect sense. The author's right quote,

(50:40):
what any individual member of the public thinks about the
reality of climate change, the hazards of nuclear waste disposal,
the efficacy of gun control is too inconsequential to influence
the risk that that person, or anyone he or she
cares about faces. Nevertheless, given what positions on these issues
signify about a person's finding commitments, forming a belief at

(51:02):
odds with the one that predominates on it within important
affinity groups of which such a person as a member
could expose him or her to an array of highly
unpleasant consequences. Thus, like, we know that it's radically consequential,
what in general public policy is about climate change or
gun policy or something. You know, these are hugely important questions,

(51:24):
but the impact of one individual person's opinion feels small
enough that you basically the consequences of that are almost irrelevant.
It's like, what's really relevant is how is this affecting
me in my day to day and how it's primarily
affecting you in your day to day is the social
consequences of the beliefs you express. But obviously that's not

(51:45):
what we want, right, Like, we want everybody making rational decisions,
having correct empirical information to reason from. Of course they're
still gonna argue about political values, but at least having
everybody except the same set of correct facts when correct
facts are on the table, right, I mean, a lot
of it comes kind of comes down to the fact
that we are a short sighted species that can, you know,

(52:07):
barely see beyond our own horizon. But but we are
attempting to see beyond that horizon. We are trying to
to to maintain a world or create a world that
can be sustained in some fashion. We you know that
the the old addage, of course, is making thinking about
your children and your grandchildren when when you're making decisions

(52:27):
such as these. But historically it's not the sort of
thing that we're great at as a species. And yeah,
and so it's clearly not enough just to tell people like, well,
here's a problem with how you're probably thinking. You're probably
doing identity protective cognition, and you need to stop it.
You know that that's just obviously not going to work,
as you're just asking somebody to shut their mind their

(52:49):
ears off. Like, oh yeah, they're really going to
listen to you now, buddy. Yeah, I mean, and they're
probably not even doing it on purpose, right? I mean,
you and I are doing it sometimes, and we're not doing
it on purpose. The people who do this, they're not
doing it out of a will to deceive themselves. It's
just happening as part of what the brain does, even unconsciously.
So the question is, could you do something external? Could

(53:12):
you create a state of affairs that would change the
incentive structure? Do what the authors said and somehow change
the motivation. If you can't change the reasoning in motivated reasoning,
maybe you can change the motivation in motivated reasoning. So
here's one thing I'm thinking about: most politically relevant
numeracy is basically recreational, right? Like you need to get

(53:34):
the numbers right when you're calculating your bank balance. But
if you get the numbers wrong when you're talking about
gun control or climate change, there's no immediately detectable consequence
to you, as long as you get them wrong in
the way that your social group approves of. And this
is not true of every person in every context. For example,
why do scientists working within their own fields tend

(53:57):
usually to get the numbers right? Of course, not always,
but usually, like, regardless of whatever their political opinions are,
if they're doing work within their field, they tend to
get it right most of the time. Well, because there are
gonna be other scientists attempting
to perform the same experiment to see if
they get the same results. There are going to be people reading it,

(54:18):
and if they see an error, they are going
to correct them on it. I mean,
that's part of the process. Yeah, there's a strong incentive
to get the numbers right. Failed numeracy in your own
published research is potentially a major blow to your credibility,
to your career, to your standing among your professional peers
and stuff. So I wonder if it's possible to change

(54:38):
the incentive structure for non-scientists to somehow be more
like that. This might be just a completely impossible fantasy, but
is there a way you could make it so that
getting the factually correct answer is incentivized in
the social situations of lay people, and arriving at conclusions
in agreement with your social group is not especially

(55:00):
incentivized? Or is that just a totally unrealistic hope?
Can human nature change that much? And it does sound
kind of daunting. Like, what kind of structure or
system would enforce that? And how do you
roll it out successfully? I'm
sure some tech billionaire has some kind of nightmarish idea
for an app that would do that, but in fact it

(55:20):
would just destroy everything. There are all sorts of
Black Mirror-esque solutions that come to mind, but they
all have, like, a Black Mirror-style twist where you
can see how it would screw things up, or where
people would essentially rebel against it and say, like, you
know what, I don't really want Facebook or
Twitter or what have you coming along and calling me

(55:41):
on things that I've said that were incorrect in the past.
Maybe I'll just wipe my account instead of suffering that embarrassment. Yeah, okay,
here's another idea, maybe some way to fight the motivation:
perhaps it's social support networks and structures that are not
dependent on ideological agreement. Like, if people really strongly felt

(56:01):
confident that their friendships and their work and family relationships
were safe and would not suffer at all, no degree
of alienation or weakening of relationships from disagreement over political issues.
Maybe that would remove the incentive. Does that make sense?
Like if people felt that they could disagree with their
social group and not risk anything by doing that,

(56:22):
then there would no longer be a protective motivation
in what beliefs you hold. So you're saying, basically, make
the social groups more open
to free discussion, more accepting of disagreement. I guess
so. I mean, that at least seems like a possibility. Um.
And maybe one way of addressing that

(56:43):
is not that you can really change the nature of
people's family and friendship relationships all that much,
but that you could have, I don't know, supplemental
social dynamics. Like, this may be one thing that community-style
groups like church congregations and things like that
are useful for, in that they provide, sort of,

(57:05):
outside of the family and the small friend group, they
provide, like, a backup social situation where you can
retreat if you are feeling down in your other relationships.
Though that's not to say that certain church congregations have
never made people feel alienated for disagreeing. Oh yeah, I mean,
I guess that's the thing. But you know, I'm just saying,
like, supplemental social safety nets, I guess, right. Well, I

(57:28):
could see where different groups, I mean, different social groups
can serve as the backup depending on what's happening in
your life. I mean, I can imagine a scenario in
which certainly a church could be the fallback,
but also scenarios in which a work social group could be
the fallback, or just, you know, your
home life, your family, can at

(57:48):
times be the fallback. You know: well, my friends are
mad at me because of what I said about
nuclear waste, but at least I'm doing okay at work.
I don't know. Like, one of the ideas that
comes to mind here is that you'd almost
want to have social groups that
are more adherent to scientific consensus. I hate to come

(58:09):
back to that, but ultimately, if
that is not present in one of
these social structures, there's going to be
a high possibility that some other factor is going
to be more pressing in the worldview, and certainly one sees
that in religious groups. I mean, not all religious groups,

(58:30):
but there are certainly religious groups out there that
have beliefs that run very counter to scientific consensus.
Now, do they do so in a detrimental fashion? I
mean, that's going to depend. Yeah, again, I don't know.
I mean, as with all these questions, is there
any way to actually engineer that, or is that just impossible? Well,

(58:50):
I know, I think we need to create
a new religion. That's what it's coming down to, you know.
An open-discussion, science-first religion. It can
just sweep across the land from shore
to shore and make a better world
for the future. Well, I'll let you carry the crook
of priest and prophet on that one. But okay, here's

(59:13):
maybe one more way. Basically, I'm just offering different
ways you could approach the motivation problem; I don't know
of any specifics that you could create. But here's another
way of approaching it: what if there were a way
to shield facts from acquiring, in the first place, what
Kahan and co-authors call, quote, antagonistic cultural meanings? In

(59:34):
other words, if you can't fix public understanding by
making people better at science comprehension, and you can't program
people not to be incentivized first and foremost by a
sense of partisan social belonging, maybe the best way to
protect facts is to find a way to never let
them become politically charged in the first place. If there's

(59:55):
a way, if somebody could figure out how to do
that, or at least lessen the probability that it would happen,
that also seems like a very useful thing, a good
way to fight this problem. But it may also be
impossible because, again, there's political incentive for people to politicize
certain issues. Yeah, I believe Kahan has definitely talked
about this before. I believe he touched on the idea

(01:00:16):
of not necessarily, like, outright preventing it, but identifying
when it is beginning to take place and finding
ways to intervene and keep it from being so highly politicized,
because it's like, you know, barnacles building up on a
ship or something. Right. Yes, like maybe
you have a process for when you detect that

(01:00:36):
an empirical scientific question is starting to become an issue
of political significance, suddenly what you want is to get
all the politicians and political actors to stop talking about
it immediately and instead get politically neutral celebrities and spokespeople
and stuff to talk about it. Yeah. I feel like

(01:00:56):
that's a pretty good idea. I think it probably has
a thirty three and one third percent chance of success.
But if you add that to the forty six and
one half percent chance, then you're really getting Steiner-rific. Yeah,
you might get up to two thirds chance of winning.
You know, one of the things that Kahan et al. write
in their paper that I thought was really

(01:01:17):
interesting is that they point out that people, even when
they're experts in other fields, are primarily, as humans, experts at, quote,
identifying who knows what about what. That sort of is
the main way our brains work, right? Like, our
primary capacity is figuring out who knows about what things. Right? Yeah,

(01:01:38):
I mean, to come back to Sagan's point of view,
you know, it should certainly be less
about trying to figure out who's the authority, and more about
looking at who is the best expert in a
given field and being able to sort of weigh what
they're saying and why they're saying it. But oftentimes we
use this capacity of looking at who knows what about
what not to figure out who's

(01:01:58):
got the best expertise to offer, but which expert
is saying what I want to hear said. Exactly. Yes,
who is saying what I want to hear said, or
what my social group believes, in the best way, so
I can say it the same way? Anyway, you geniuses
out there who can think of more specific and
possibly effective ways to undercut the motivation part of motivated

(01:02:21):
reasoning in politically relevant empirical questions, let us know
what ideas you have. Indeed, this is one
of those areas where this hypothesis is so new
I don't think we even have the science fiction
to level at it yet. So you, the listener, will be
creating the science fiction that might in some way

(01:02:41):
inform what we actually do about it. Yeah, and this
whole field of identity protective cognition is in a way still developing,
so more research could change what seems to be true
about it today. But I don't know. It's one of
those where I feel like I'm very interested in this research,
but it's not necessarily encouraging. I want to go back
to the science comprehension thesis world. I want to live

(01:03:04):
in the place where you can just tell people more,
share more knowledge with more enthusiasm,
model the correct kinds of critical thinking and all that,
and bring people aboard. But it's just not
that easy, is it? Right, or it's just not enough.
I mean, it kind of comes back, though, again to
the Gita and other older works that taught

(01:03:25):
about, like, self-awareness, because ultimately what we're talking
about is new ways to become aware of how our
brains are working, and how in some cases our
minds are tricking us into clinging
to beliefs that simply don't hold up. Yeah. Oh,
and one of the things, of course, we've always got

(01:03:45):
to mention: we mention this pretty much anytime we
talk about bias or something. You're sitting out there thinking,
right now, yeah, this is what other people do.
But we can all look to examples in our
own lives, big ones, small ones, ones you
can't recognize and don't even know you do. Yeah, exactly,
I've got to remove that plank. All right. Well, on

(01:04:08):
that note, we're gonna go ahead and close out this episode.
As always, head on over to stuff to Blow your
Mind dot com because that is our mothership. That is
where you will find all the podcast episodes. You'll find
links out to our various social media accounts. There's also
a tab at the top of the page where you
can go to our store and get all sorts of
cool merchandise. Some of it's show specific, like the

(01:04:29):
Great Basilisk or the Cambrian Life shirts. Other stuff
just has to do with our logo, but it's a
great way to support our show if you want to
spend a few bucks and get something cool to stick
on your laptop or your shirt. On the other hand,
if you want to support the show without spending a dime,
just rate and review the show wherever you have the
power to do so. Thank you so much to our

(01:04:50):
wonderful audio producers Alex Williams and Tari Harrison. If you
would like to get in touch with us with feedback
on this episode or any other, with your ideas of
how to take the motivation out of motivated numeracy and
motivated reasoning, if you want to let us know where
you listen from, how you found out about the show,
or to suggest a topic for the future, if I didn't already

(01:05:10):
say that, either way, you can email us at blow
the mind at how stuff works dot com. For more on this
and thousands of other topics, visit how stuff works
dot com.
