Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick, and Robert, I've got a trivia question for you. All right, hit me. This is for all of you out there listening as well.
(00:23):
When Marie Curie died, was she older or younger than twenty-seven years old? Think about your answer: older or younger than twenty-seven? Okay, well, I have to say she was definitely older. But I have to admit that I read an excellent glow-in-the-dark book about her a few years back, titled Radioactive: Marie & Pierre Curie,
(00:47):
A Tale of Love and Fallout, which, by the way, heading into Valentine's Day, is an excellent Valentine's Day book to give somebody. Wait, hold on, so this is going to be about them, but it's glow in the dark, signaling that the book is radioactive and will poison you and your fingers will fall off? Well, when you put it like that, it doesn't sound very romantic, but it's a very romantic book. But
(01:08):
I know from having read that that she would live significantly longer than her twenties. Well, okay, so most people probably do know that. But here's another chance: just guess what age she died. How old was Marie Curie when she died? Just think about it. This becomes a little harder for me, because I can clearly picture a photograph of her. I'm gonna say,
(01:33):
very close. Marie Curie died in nineteen thirty four at the age of sixty six. So yeah, very close. Now, you listening at home, how close were you? Did you overshoot? I assume not many people undershot the age. If you did, that's okay, no shame in it. I tried this on somebody yesterday and she guessed forty four. I did the same thing. I said, older or younger than twenty-seven? What's the age?
(01:56):
And when I told her the answer, that it was actually sixty six, the person I was talking to said, oh, well, I've seen pictures of her that looked older than that, but I guess I assumed it was from all the radiation. Okay, I see where you're going with this, though. The question you asked, by putting twenty seven in there, you're sort of lowering their expectations. It could be. Yeah,
(02:19):
maybe something's going on there. I've got another one for you. When Sean Connery took a role in the film Highlander II: The Quickening, one of his finest choices. Yes, the planet Zeist. When he took that role, what was his salary for the role? Was it more or less than thirty one million dollars? Okay, this one's tough for me
(02:41):
because I love movie trivia, but I'm not very good with the economic movie trivia, so I don't even have a very good starting point. It seems to me, though, that that sounds like an awful lot of money, especially for Highlander II. Yeah, like that's some big... that's some Tom Cruise money right there,
(03:02):
I would guess. So you're saying lower, maybe. But then, this was... you have to put yourself in a pre-Highlander II era. So, Highlander II: can't imagine it, unable to process, cannot compute. It's true. I don't think I watched Highlander... I got excited to watch Highlander one after I saw trailers for Highlander II. I believe that's how that went down. But yeah, not knowing
(03:26):
what we know now about the public reception of Highlander II, one could easily say everyone was just totally optimistic. It was a follow up to Highlander, which was arguably, you know, one of the greatest films of its generation. Okay, so, guess: what was his actual salary? All right, you're
(03:47):
asking, was it thirty-one... I'm going to halve that and say fifteen million. Is that a lot for Sean Connery? Oh,
you're way over the mark. Now, I do have to admit that my answer comes from a sketchy looking website, which is the only place I could find an answer, called, like, The Movie Time. So maybe this is wrong,
(04:09):
but the answer I could find said he was paid three point five million. Okay, well, yeah, I was way over the mark then. Yeah, but worth every penny, really, and then some. Exactly right. But notice how far off the mark you were, given those starting questions I asked: was she older or younger than twenty-seven? Or was it more or less than thirty-one million? And I wonder
(04:32):
to what extent those questions changed the kind of answer you gave to your ultimate guess on her age at death or on the movie salary. What would you have said if you hadn't received those questions to start with? Well, in the case of the Highlander question, I was just kind of trying to reverse engineer an answer. I think
(04:54):
I still would have missed the mark, but I think I would have probably said something like five or six million. A lot closer. A lot closer, certainly less of an exaggeration. But you were pretty much on the money on Marie Curie, right. Yeah, but that was an area where I have read about her, and I think I did a podcast that talked about her a while back, so
(05:16):
I had some sort of... I had some level of expert information there, but I had nothing really to go on for the Highlander II one. Okay. So this effect that we've just been demonstrating is what we're going to be talking about today. And this is a psychological effect.
It's been written about a lot in the field of
behavioral economics, but it's fundamentally a psychological phenomenon known as
(05:39):
the anchoring bias. And I would argue it's one of the most powerful, most well known, and most easily exploited vulnerabilities in our minds. And for that reason, I think it's something that really everybody should know about, because it's something that people will constantly be using to try to get the upper hand on you for the rest of
(06:02):
your life. Indeed, this is definitely a topic that will change the way you think about everything from salary negotiations to just haggling at the market. Totally, yeah. And not just economic matters, too, I want to note. Though it's mostly been tested in terms of estimating numbers, and
(06:22):
especially economic type numbers, prices, things where you're trying to
determine a reasonable figure for something. I would posit that
I think it's very likely this type of thinking also
biases all kinds of judgments we make, such as judgments
of people's reputations, judgments of the confidence we place in
the outcomes of events, all of which is going to
(06:44):
be enormously important for the rest of your life in
myriad ways. Yeah. Though certainly a lot of the more readily available examples are gonna involve economics. They're gonna involve things like massive discounts. Do you remember Deep Discount DVDs? Or I guess it was Deep Discount DVD. I think it was a
(07:05):
website that had its time in the sun there, with deeply discounted DVDs. And it seems like everybody I knew was just like, oh my goodness, these deals are too good. You're practically losing money if you don't order these movies. Right, the more you buy, the more you save. And it's easy to fall into that mentality.
It's like I didn't really want to pick up this
(07:25):
video game or this movie or this book, but when you slice the price that much, I guess I'll bite. Yeah, man, seems irrational, right? But back to the questions I asked earlier: what age did Marie Curie die? How much was Sean Connery paid for Highlander II? I actually did a brief, non-scientific email survey. I say non-scientific because these were very small samples, not truly random. I just basically
(07:50):
randomly emailed coworkers in two different groups and asked them to estimate answers to those questions. Now, I had Group A, where I just asked them: how old do you think Marie Curie was when she died, and how much do you think Sean Connery got paid for Highlander II? No anchors, right? No starting numbers, no higher or lower than. And in that group, the average answer that
(08:14):
people gave was that they thought that Marie Curie died at fifty three, and they thought that Sean Connery got paid three point two million. Three point two, that was their answer, without any anchoring. Right, without any anchoring. So that's very close, very close to the three point five, if that website is correct. Who knows? Then Group B,
I did the same anchors I just gave you. So
(08:35):
I asked them, did she die older or younger than twenty-seven? What age did she die? The average answer for that group was forty eight point three. A good bit lower, yeah. And then also I did the same thing. I said, higher or lower than thirty one million for Sean Connery? The average guess in Group B was that Sean Connery got
(08:58):
paid nineteen point three million dollars. Unbelievable. Nineteen point three million for Highlander II. And these are coworkers. These are smart people, you know. They should be good at making estimates of these kinds of things off the top of their head.
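The group comparison described here can be sketched as a toy model: treat an anchored guess as an unanchored estimate pulled partway toward the anchor. Only the unanchored Group A averages (fifty three, and three point two million) come from the episode; the blending weight is an illustrative assumption, not something measured.

```python
# Toy model of the anchoring effect: an anchored guess is modeled as the
# unanchored estimate pulled partway toward the anchor. The 0.4 weight is
# an illustrative assumption, not a measured quantity.

def anchored_guess(unanchored, anchor, weight=0.4):
    """Blend an unanchored estimate toward the anchor value."""
    return unanchored + weight * (anchor - unanchored)

# Group A's unanchored mean guesses from the episode:
curie_age = 53.0      # Marie Curie's age at death (true answer: 66)
connery_pay = 3.2e6   # Connery's Highlander II salary (reported: 3.5 million)

# Group B saw anchors of 27 and 31 million; the model pulls the
# estimates in the same directions the informal survey found.
print(anchored_guess(curie_age, 27))      # pulled down, below 53
print(anchored_guess(connery_pay, 31e6))  # pulled up, well above 3.2 million
```

The point is only directional: a low anchor drags the average guess down and a high anchor drags it up, just as Group B's averages of 48.3 and 19.3 million moved relative to Group A's.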
But in this non scientific way, I feel like we've
(09:19):
just demonstrated that just putting a number out there biases people, even if the number is totally unreasonable. Marie Curie didn't die at twenty seven, and Sean Connery did not get thirty one million for this movie; that doesn't make any sense. But even if you put these unreasonable numbers out there, they seem to bias people's answers toward the
(09:40):
numbers you've thrown out. Well, it really takes me back to, like, pop quizzes and grade school tests, right, and the famous adage: the answer is in the question. Because what do you do if you don't really know the answer? Well, if it's multiple choice, you look at the available answers and see which one leaps out at you the most, which one feels
(10:02):
true or stirs your memory? And then, failing that, you look to the question itself. Is there some sort of information in the question? Essentially, you're looking for a leak in the question. You're looking for a flaw
in the Riddler's strategy. Yeah, there are test taking skills, which are essentially meta skills. They are skills
(10:25):
that are not really about the subject of the test, but skills at determining how to interrogate the style and format of a test to exploit it for better scores in the end. Right. But then I also think there's kind of a social connotation to this as well. Like, an example would be: you have a friend who comes up and says, hey man, have you heard this
(10:47):
latest album by... I don't know, name an active band. Kansas. Kansas! Have you heard this new album by Kansas? How awesome is that album, man? So now I have to frame my answer around awesome. Is it pretty awesome? Super awesome? It was okay, reasonably awesome.
When you say it was okay, what that means is you hated it. But you have to adjust up because of the fact that they started with how awesome is it. Right?
And this is a case, though, where it's not a situation where you're gonna appear stupid or uninformed on a topic, except maybe on the topic of Kansas. It's not a situation where you have any monetary stakes, but there is kind of a social stake in play there. If your friend is a huge
(11:30):
Kansas fan, you don't want to say, oh, I think Kansas is awful. You want to adjust your answer so that it's the appropriate balance of truth and politeness. Yeah. So, in the same way that my email survey was not really scientific, these examples we're talking about are not really scientific either. They're just anecdotes, and they've got all
(11:52):
these contaminating factors, like the social dynamics you were just explaining. So your response might not be truly influenced by just the presence of the word awesome as much as it is by the fact that you're trying to maintain a relationship with the person who said this, you know what I mean. So it's not divorced from this contaminating context. Now, the
(12:15):
anchoring effect that we're going to be talking about today has been thoroughly demonstrated in fully scientific contexts, so it's not always just this social kind of stuff going on. You can test it ten ways to Sunday, and it has been tested not just ten but a million ways to Sunday, and this thing works. This anchoring effect is a known
(12:38):
robust exploit of the human mind that works almost all the time. It is. It is scary how often it works. Yeah, there's no shortage of papers about this, that's for sure. Now, I guess we should try to define it just a little bit more. So, to define the anchoring effect: it is an example of what's known as a cognitive heuristic. And if you're like me... I can remember,
(13:00):
I think, back when I was in college, I went a long time hearing the word heuristic and just sort of nodding without really knowing what it meant. Anytime you hear the word heuristic, you can just substitute the phrase rule of thumb or mental shortcut. I still picture a
hair shirt, no matter what. I mean, do you ever have words like that, where it's completely illogical, but you can't help but picture this thing in your head? I have no idea why, but I tend to imagine a philosopher in a hair shirt. Melisandre put on her roughspun heuristic. Pretty much. But in reality, a heuristic is a rule of thumb or a mental shortcut. It's essentially a fast and easy process
(13:41):
that your brain uses to come up with some kind of output. You need a piece of information or a judgment about something, and you don't really have time to sit down and work out all the details, so instead you use a heuristic. And heuristics can lead to relatively good output. Sometimes they're good for a fast and loose judgment on the fly. Or they can lead to
(14:03):
relatively bad output. And there are all kinds of heuristics we use. One example of an extremely common and extremely bad heuristic is judging what you think of somebody by how they look. Extremely common heuristic. It's a shortcut. You don't want to do the work of, like, talking to them for hours and figuring out what you really think about them and their reliability as a
(14:26):
person and their values and all that. So instead you can just look at them and make a crude judgment. This is a great example, because it's also a process that is not necessarily taking place at the surface level of cognition. It's implicit as opposed to explicit. Yeah, it very often is. And so this is one of these really bad heuristics that we're just plagued by.
(14:48):
You know, everybody should recognize that it's a destructive way of thinking, that it's not really good for society that people do it. But people just keep doing it, because they're naturally vulnerable to it. It's like taking a shortcut through the woods. It makes sense, unless there's a monster there, or it rains, or you get lost.
I mean, really, that can be said about a lot of shortcuts. When we call them shortcuts, they can help you out in the short term. But if everybody does it, it breaks the system. Or if you do it too often, you're more likely to run up against the pitfalls of taking that shortcut. Another bad heuristic,
of course, is the anchoring heuristic, the one we're talking
about today. Uh. It might not be bad in every
single case, because maybe in some off chance it will
(15:29):
bias you toward a correct answer. But most of the time,
the way the anchoring heuristic is going to be deployed
in your life is by people who are trying to
get you negotiated toward their position on something, and they
will use the anchoring bias in order to exploit your
mind and make you come closer to a position that
(15:51):
benefits them. Right. So again, this is haggling for something at a marketplace. This is negotiations over a contract, or what have you. Exactly right. So I think we should take a quick break, and then when we come back, we will discuss the origins of the idea of anchoring and some research in psychology and behavioral economics on how
(16:11):
it applies. Alright, we're back. I should mention that one of our main resources in discussing the anchoring effect is a two thousand eleven literature review from the Journal of Socio-Economics by Adrian Furnham and Hua Chu Boo, which collects and synthesizes all of the major research on the subject over the past forty years or so, up until
(16:34):
about two thousand eleven. This paper is a great resource.
It puts it all in one place, and so that's
going to be sort of our guide for discussing it
as we go. One question is where does the idea
of anchoring come from. Obviously people have been using it
before it was understood and codified as a principle in
behavioral economics, right? But the anchoring and adjustment effect was
(16:55):
most influentially described and articulated by Tversky and Kahneman in nineteen seventy four. And according to them, it is, quote, the disproportionate influence on decision makers to make judgments that are biased toward an initially presented value. So what that means,
(17:15):
in effect, is that when we're trying to make a reasonable guess or a judgment about something, any piece of information you get before you make the judgment is likely to bias your thinking in the direction of that piece of information. So, if you're shown a car and asked how much you would pay for it, you might say, what,
(17:36):
I don't know, ten thousand dollars. That seems about right.
But let's say instead you are shown the same car
with the price sticker on it that says sixteen thousand dollars.
According to the anchoring and adjustment hypothesis here, you would
be more likely in this scenario to offer more for
the car, more than you would have if you just
saw the car and tried to think, how much would
(17:57):
that be worth to me? Because now, oh, now that
it has a sixteen thousand dollar price tag, I think
maybe it looks worth about twelve thousand. You're still coming
down from the offer, but the offer has biased up
your initial judgment of how much it's worth, or, in
other words, the anchor of the initial price has adjusted
your offer higher than you naturally be willing to pay
(18:19):
if that price hadn't been presented to you. It's kind
of like if you have a ticket for a concert
and then you realize you can't go, and so you
try to sell that ticket, just you know, online to
some friends. Maybe you'll often include how much you paid for it. And what you're really saying there is: I paid thirty bucks for this ticket, so I'll take
(18:40):
whatever I can get, but the closer you get to thirty, the better. Yeah, you're not asking how much is it worth for you to see Kansas. You're saying, given that I paid five hundred dollars for front row seats to Kansas, how close can you get to that number? I have no idea how much Kansas tickets actually cost. I assume they're mega in demand.
(19:01):
But by simply mentioning five hundred dollars, you made me think about it. Well, you know, they're Kansas. I know of Kansas, so they're a big enough name. It makes sense that someone would pay a lot of money for a first row experience. You know, we're dust in the wind, we only live once. You might as well go see Kansas, even if it costs a pretty penny. Yeah,
it's crazy, like you said, how just through observation
(19:24):
you can tell how powerful the anchoring phenomenon actually is. Right. But we don't have to go anecdotal, because this has been proved up, down, left, right, sideways to Wichita and back. It is a thoroughly, thoroughly demonstrated principle. Our minds just work this way, and
so there are some qualifications. The anchoring bias can be
(19:47):
affected by some variables, we think, and there is actually debate over what explains it, why it happens in different scenarios. But what there's really no debating is that it happens. This has been proven a million ways, and it is considered a thoroughly robust bias and a fundamental part of how the human
brain works. Yes, and as you said, there is no shortage of papers to back this up. I would say that one of the problems is that these are some of the stuffiest academic papers you could hope to read. I mean, they're breaking apart a phenomenon best studied through numbers and figures and estimates of value. So it's
(20:29):
not as sexy as you have somebody in a room
pulling a lever to shock somebody in the next room.
You know? I feel like maybe what anchoring needs is one really good but kind of superficial study that's just based on saying, how much do you think Tom Cruise was paid for this film? Something that will generate headlines, that will be relatable
(20:51):
in a slightly different way, and that could help explain anchoring more to the general public. Yeah, like a popular, sensational demonstration. But it's been demonstrated. I mean, part of the problem is you don't need to demonstrate it anymore. It's been demonstrated with, like, hundreds of questions.
It's been demonstrated on right, what is the freezing point
(21:13):
of vodka? That's one that they ask people, and the anchor makes a difference there. What is the height of Mount Everest? What age was Amelia Earhart when she disappeared attempting to pilot a plane around the world? So there are just all these studies that ask questions like this and use anchoring to bias the answers of participants. But it also works
(21:35):
in things other than just like giving a basic informational
guess about something. That's what we've been doing so far,
Like you know, can you guess a fact about history?
It also works in contexts like what percent chance would
you give of a thing happening? What's the percent chance
you would give of a certain athlete scoring a certain
(21:56):
number of points in an upcoming game. So it influences
our judgments of probabilities. Yes, and we certainly see this in political elections, for instance. Absolutely. Numbers get thrown out about the chances of a particular candidate winning, and then you end up adjusting your expectations of the future based on those percentages. Yeah, and those percentages could be based
(22:19):
on something in reality. I mean, if you're looking at good, well conducted poll data that's reflecting information about reality, you might want to adjust according to that, right, if it's good information. But somebody could also bias you with bad information, just by using the anchoring effect. If they just put a ridiculous number that's not true in front of your face, chances are that this will
(22:42):
actually influence your self-synthesized probability judgment. Yeah. Like, say there's a
poll that comes out that says some huge percentage of wizards think Voldemort will be a great ruler of the Earth, you know, and then you're like, well, I don't know, that seems
(23:02):
kind of high. It's probably more like sixty. When really, most wizards... maybe only a small share of wizards think Voldemort is great. I don't know. I leave that to the Potter fans. Yeah,
I don't know what the percent is, but yeah, you
could be anchored and biased that way. So it affects
these probability estimates. I know one thing they tested it
on was like likelihood estimates of nuclear war. You can
(23:25):
bias people's answers with anchors there. It has been shown to influence legal judgments, like sentencing and liability for punitive damages. It's been shown to influence, and this is a huge one, valuations and prices: how much you'd be willing to pay for something. That's a really common example.
(23:46):
It's been used in forecasting examples, like how much you would expect to spend at a restaurant.
And here's a really weird thing. The types of anchors
that influence people don't have to seem credible. People can be influenced, these studies have shown, by things that obviously
(24:07):
shouldn't be influences. They don't have to frame this anchoring number that they prime you with as coming from some reasonable authority or something like that. They can just prime you with a random number that doesn't matter
at all. Some studies have people spinning a wheel to
get a random number, and the random number still biases
(24:27):
your answer toward it. So even a random approval rating for Voldemort, like that super high approval rating among wizards. Apparently you could spin a wheel in front of people, so that it's entirely clear to them that the number is random and it's not coming from real data, and still showing that
(24:48):
higher number from the random spin of the wheel would
bias people's estimates toward the number. But we're getting ahead of ourselves, because I think we should take a moment to talk about the different theories about what explains the anchoring effect. Obviously, this thing's there: if you put a number in front of somebody's face, it's going to bias their estimate or their answer toward that number. But
(25:09):
why does this happen? Now, we mentioned the idea was very popularly explained by Kahneman and Tversky in nineteen seventy four, and their original proposal was adjustment, going up or down from a given anchor. Their idea was, you start with the anchor when you're trying to reason out the answer to something. So I say, you know,
(25:31):
what was Sean Connery's salary in Highlander II? Was it thirty one million, or above or below? The way people reason about that is, they'd start with thirty one million and say, is that reasonable? And then most people would say, no, it can't be that much. So then they'd work their way down from thirty one
(25:51):
million to a place that starts to feel reasonable. And so, in that sense, you're sort of biasing yourself up toward the utter top of whatever you might consider a reasonable range of answers. Does that make sense? Yes, yeah, definitely.
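Kahneman and Tversky's anchor-and-adjust account, as just described, can be sketched as a little simulation: start at the anchor, step toward your range of plausible answers, and stop as soon as a value first feels acceptable. The plausible range and the step size below are illustrative assumptions, not figures from the literature.

```python
# Sketch of anchor-and-adjust: start at the anchor and adjust stepwise,
# stopping at the first value inside your plausible range. The result
# lands at the edge of the range nearest the anchor, which is the bias.

def adjust_from_anchor(anchor, low, high, step):
    """Step from the anchor toward [low, high], stopping on entry."""
    value = anchor
    while value < low:    # anchor below the plausible range: adjust up
        value += step
    while value > high:   # anchor above the plausible range: adjust down
        value -= step
    return value

# Suppose your plausible range for Connery's salary is 1 to 8 million.
# A 31 million anchor leaves you at the top edge of that range; a 100
# dollar anchor leaves you near the bottom edge.
print(adjust_from_anchor(31e6, 1e6, 8e6, step=0.5e6))  # stops at 8 million
print(adjust_from_anchor(100, 1e6, 8e6, step=0.5e6))   # stops just past 1 million
```

The asymmetry is the whole effect: the same person, with the same plausible range, ends up at opposite edges of it depending on where the anchor started.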
But this explanation does have problems. People have attacked it
in the literature because anchoring, for one thing, is often
(26:12):
shown to be unconscious. So if you're not doing this consciously, it's kind of hard to explain how that whole process could work itself out. Now, that's not to say that it's not ever conscious, because clearly, if someone's going into negotiations over a price, you might go into it saying, I paid thirty dollars for this ticket. If I could get forty, that would be great. So I'm gonna
(26:33):
start at forty, knowing that they'll work me down closer to what I actually expect to get. Yeah, you're totally right. Sometimes it clearly is conscious, and in those conscious scenarios, I think Kahneman and Tversky's explanation might be right on the money. But it also in some cases is clearly unconscious. And also, it affects judgment whether or not the anchor is anywhere close to the realm of a reasonable range. So if I said, was Sean Connery's salary in Highlander II: The Quickening eight million dollars? Or if I said, was it ten billion dollars? Either way, that kind of thing has
(27:16):
been shown to bias your answer toward it, whether it's within a somewhat reasonable range or not. Yeah,
and if you throw out one of those figures... I'm thinking it's either exceedingly high or it's pretty small for Sean Connery. Like, if you'd asked me, did Sean Connery receive less or more than a hundred dollars for his role in
(27:38):
Highlander II? That would make me begin to think, well, maybe he was paid an exceedingly small amount of money, and there was some sort of special studio deal about it, or he just did it for the love of the franchise. Yeah, he just wanted to support Highlander. He said, I'll just take fifty thousand, that's all I need. Yeah, that would still bias you way down from the true answer. Now, a different
(28:01):
hypothesis for explaining what causes the anchoring effect is something that we're all very familiar with. It's often called the selective accessibility hypothesis, but really this is just explaining the anchoring effect through confirmatory hypothesis testing, a.k.a. confirmation bias.
Oh yes, this is a big one. This is like
(28:21):
the bugbear of scientific study, or just critical thinking. Exactly. So in this model, when you're trying to find the answer to a question, you mainly seek reasons to justify belief in the answer you already suspect. So
if a detective is trying to solve a murder and
he's got a gut feeling that Eugene did it, he's
(28:42):
going to unconsciously give greater weight to any piece of
evidence that makes Eugene look more guilty, and unconsciously ignore
or give less weight to evidence that points to somebody
else or exonerates Eugene. So instead of openly and inductively
just gathering evidence for all possibilities, he's subconsciously, without realizing it,
trying to build a case for the suspect he already
(29:06):
hypothesizes to be guilty. That's confirmation bias. Another way of explaining it is that a lot of times, when we think we're working like an investigator, we're really working like a prosecutor. Right. An example that's come up recently
on the podcast is that of scientific studies into the effectiveness of prayer, because you can see
(29:29):
how it's easy for an individual to go into this thinking that they are being completely objective. But if it's part of their worldview, even if they're not, like, a hardcore believer... if it's a part of their past, a part of their history, then that could be a stumbling block to a truly objective exploration of prayer as having some sort of an influence on the real world. Yeah,
some sort of an influence on the real world. Yeah,
but of course we we would. We should say that
this doesn't mean things like prayer studies are do because
you can certainly design I mean, this is what science
is for. This is why you design experiments. You try
to make them so that your your biases don't matter.
You structure an experiment to try to exclude the possibility
(30:13):
of your bias interfering with the results. But I think that the other takeaway here is that there are two types of bad prayer researchers. Essentially, there's the researcher who is just objectively bad, who is saying, I believe prayer is real, and I'm gonna bend and break every rule to quote unquote prove it
(30:35):
in the study. And I think that sort of researcher tends not to exist. But then there's the second level, and that's the individual who, if you were able to peer into their mind, believes they are doing the objective thing. They honestly think they're doing a good job, probably, but they are still leaning into their bias. Yeah, they're prosecuting the truth rather
(30:58):
than than investigating all open possibilities. Uh. Yeah. But then again,
like I said, I don't want to automatically tar anybody
who does a prayer study with that, but that clearly
is probably happening in some cases. Yeah. But the prosecution
example is great too because it brings up the idea
of leading questions, and the anchoring seems to indicate that
(31:20):
any question with a figure in it, with some sort of a number in it, is kind of a leading question, since I'm giving you a starting point for you to determine the value. Yeah, exactly. I
mean, you could say that leading questions are something similar to the anchoring effect. You're trying to give people a place to work from in the content
(31:42):
of the question. Now there's a third explanation for how
the anchoring effect works, apart from the anchoring-and-adjustment
theory of Kahneman and Tversky, and apart from the confirmation
bias or selective accessibility model. The third one is
often known as the attitude change model. And,
to think about the simple version of this: essentially, in
(32:04):
the attitude change model, the anchor is treated as something
that changes your attitude towards the nature of the question.
In other words, the anchor is treated as a kind
of hint. Now, a lot of people might have reacted
to the stuff I said at the beginning of the
episode that way, like, oh, if you said, um, you know,
(32:25):
did Marie Curie live to older than twenty seven? Oh,
she probably lived more than that, but I bet that is like a cue
or a hint that she died young. Does that make sense? No,
I think that makes perfect sense. That
is the way I tend to think about trivia
questions if one's pulling out some trivia cards, just, you know,
with friends or family. Like, one example: there's a wonderful
(32:47):
little card game called Are You Smarter Than a Box
of Rocks? And for each trivia question, the answer is
going to be zero, one, or two, and you shake
a box of rocks, and the answer will be
based on the random way that the rocks fall
together, a zero, one, or two, so that you're playing
against a box of rocks. But you go into every
(33:07):
question knowing that the answer is going to be low.
It cannot be greater than two, right. So in that case,
you are being primed with an anchor each time you
play, with something that is informationally relevant. Like, it actually
is useful information that's going to bias
your answers toward correct answers. But in the case
(33:28):
of anchoring, there is plenty of evidence that you can
bias people's answers towards incorrect answers, obviously incorrect answers, answers
they would never give unless they've been given this anchor
before making the judgment. You know, another area I think
one runs into this a lot
is the area of star ratings for things. You know,
(33:48):
if you see a five star rating for a particular service, podcast, movie, book, game,
you name it, that is going to serve as
an anchoring point for your evaluation of the
product, or one star. Yeah, well, I think there
clearly is, for example, a critical herding effect, about
(34:10):
if you look at the way critics' opinions pour in
for movies and video games and things like that, especially
in any system, maybe less so for things like books, where
there's not as much of an organized numerical rating system
that people use. But yeah, for, like, movies, the Rotten
Tomatoes score or whatever. I do really get the feeling
(34:31):
that once you've seen that lots of other critics like something,
you're more likely to give it a fair shake. Like
you might just pay more attention when you're watching it
and think, Okay, this is something interesting going on here.
You might have watched the same movie otherwise and just
kind of been checking your phone and been like, oh,
it was okay. Yeah. And it kind of opens your
mind to the possibility for wonder in something,
(34:56):
in something that is as low stakes as a film is
for most of us, you know, unless you're a professional
in the industry. For the most part,
like, that's a good thing. Why, I'm all for finding
the wonder in a terrible film. But when you
apply that to other areas, to find the wonder in
a terrible automobile, to find the wonder in a terrible
(35:16):
political candidate, like, the stakes are higher. I'm a
real devotee of cult B cars. We need
like a Mystery Science Theater of household appliances. Yeah, where
the silhouettes are all missing fingers, in that example. Okay,
I guess we should move on, and we're still
(35:36):
working mainly from that two thousand eleven paper I mentioned earlier,
to mention a few of the factors that have
been found to affect or influence the anchoring effect, one
of which is mood. I thought this was kind of
interesting because it actually runs counter to some of the
ways that mood affects other types of judgment. Here's how
(35:56):
it goes. Being sad has been found to generally make
you more susceptible to anchoring. This is odd because the
general understanding is that people reason better when they're in
a sad mood than when they're in a happy mood. Yeah,
it's kind of the idea you want your shoppers happy, right,
Like a happy shopper is gonna enter and leave with
(36:20):
a smile on their face. But this makes it sound
like the opposite, that you want sad shoppers. Yeah. Despite
the fact that information is generally processed more efficiently when
judges are in a sad mood, it's the
opposite for the anchoring effect. To quote from the paper
I mentioned, the two thousand eleven paper, quote: However, an exception
to this rule is judgmental anchoring. Bodenhausen, and Englich and
(36:43):
Soder, found that participants in a sad mood were more
susceptible to the heuristic bias of anchoring in comparison to
their counterparts in a neutral or happy mood. From the
attitude change perspective, sad mood causes people to engage in
more effortful processing, where people interpret information through elaboration on
their existing knowledge and determine the claim to be acceptable
(37:06):
or unacceptable. So maybe the idea here is that people
in a sad mood are more likely to spend more
time reading into the question on anchoring, doing that attitude
change thing, looking for a hint in the question, and
this hint can bias them way off the mark. Okay,
well what about the knowledge of the participants? This comes
(37:28):
back to your initial question. Like, I
had read this book, right, had researched this topic before,
so I felt like I had a leg up on
the question. Yeah, if you've just been reading about Marie
Curie's life, you probably knew the right answer, and that
anchor wasn't going to throw you, right. So there are
some cases where obviously knowledge can make a difference, but
in general, knowledge of a subject area has not been
(37:51):
shown to be a strong way of undercutting the anchoring effect.
Even if you're knowledgeable in a subject area, you're still
susceptible to anchoring. Examples that have been tested here
are that, for example, car mechanics and car dealers were
influenced by anchors on car prices. Real estate agents adjust
their property value estimates towards anchors. Even if you know
(38:15):
what you're talking about, anchors will probably still affect you. Huh. Well,
I mean, on one level, this makes sense, because
we of course have the adage, a little knowledge is
a dangerous thing. And certainly one can be
knowledgeable in a field or whatnot without being
an expert in that area. There are gonna be holes
in your knowledge. There's gonna be room for doubt, and
(38:38):
where there's doubt, there seems
like there's a susceptibility to anchoring. I'm sure that's not
always the case, but I think sometimes you've got some
sort of analog of the Dunning-Kruger effect going on,
where people who have more knowledge are going to be
a little more cautious, and people who have less knowledge are just
kind of like, yeah, whatever, I'll give this answer. Well,
I mean, there's more room for ego to get involved
(38:59):
too. Like, take the Highlander Two question. As
I said, trivia about the budget of a film or
the gross of the film, like, that's not an interest
area for me, and I'm not really hesitant
to be way off the mark on it. But if
it were a question about, like, a particular actor in
the film, like who played the villain in Highlander Two,
(39:21):
which was of course Michael Ironside. But if
that name wasn't instantly coming to my head, I would
be less brave about just blurting something out,
you know, because this is something I should know, so
I'm gonna be more cautious. Right. Well, the idea of
ego does introduce something about motivation, right, You can have
(39:42):
differential motivations in how you try to answer questions.
Maybe the problem is, um, people just don't care enough
to really try to answer these questions, right. So
what happens if you give people incentives to get the
answer right? The answer is not much. Incentives and payoff
for accuracy have not been shown to correct the
(40:02):
anchoring effect. People are still affected by anchors. And of
course this comes back to the idea that it is
an implicit process. Yeah, exactly. Here's one that should be
a total deal breaker. Here's how you
defeat the anchoring effect, right? Forewarning people. You say,
there's this thing called the anchoring effect, and we're going
to give you a number, and that number is probably
(40:23):
going to contaminate the way in which you answer
the question, so that number is going to bias your
answer towards that number. Be aware of the anchoring effect. Unfortunately,
studies have shown this doesn't work. Even when you explain
the anchoring effect to people and warn them that it
may be biasing their thinking, they are still vulnerable to it.
(40:44):
I want to try one out. This is just off the cuff,
just shooting from the hip here. How many dwarfs are
in the Disney movie Snow White and the Seven Dwarfs?
More or less than thirty eight? Like, just running it
through my mind, I feel the contamination of that question,
even though the answer is obvious, even though there should
(41:05):
be no rational reason to gravitate towards thirty eight, it
begins to introduce, like, weeds of doubt. Yeah, yeah, totally.
I mean, in the same way that, I don't know
if you've ever had this experience of, like, reading
an obvious, like, fake news article on the internet, like,
(41:26):
somebody posts something, it's like from a conspiracy theory website
or, you know, one of those fake news websites, or
something that's just obviously made up, is not from a
reputable news source. Even though you know this is obviously untrue,
you can kind of feel it sort of, like, yeah,
creeping in. Like, you don't honestly put any credence in it being true, but just
(41:49):
the fact that the words appear on the screen has
some kind of, like, magical conjuring effect on your mind
that makes you sort of start, like, entertaining doubts about reality.
Yeah, yeah, no, I've felt the same thing. And
you see that too with just straight-up tabloid coverage
and slanderous statements. Like, the mere fact that it is
(42:10):
pumped into a headline gives it a certain life that
it shouldn't have. Okay, but what about on the individual level?
Are some individuals just going to be
more susceptible than others? It does appear, based
on some preliminary research, that that is the case. But
this is not as solid as some of the
other research. Preliminary research says that participants with high
(42:33):
conscientiousness, and that generally means things like self control and
self discipline, and high agreeableness, that's how
well you get along with others, and low extraversion, meaning
people who are introverted, those three things, coupled
with high openness to experience, which, these are all getting
(42:56):
into the Big Five personality traits, make people
more susceptible to the anchoring effect. But like I said,
the study cautions that these are not super
solid results. This is just sort of, like, something that
appears to possibly be true. Now, the question would be,
why those traits? Why would those things lend susceptibility
(43:17):
to the anchoring effect? To quote from the two thousand
eleven study, quote: individuals with high conscientiousness engage in more
thorough thought processes before judgments are made. Those with high
agreeableness take the provided anchors seriously, and high openness to
experience influences individuals who are more sensitive to anchor cues. Also,
(43:40):
they say that low extraversion is possibly explained through
a correlation with sad mood, which apparently increases susceptibility to
the anchoring effect, as we explained earlier. Huh. Now,
the high openness to experience, that rings
true for me as well. And I feel like I've
seen that represented in other studies looking at you know,
(44:02):
individuals with liberal or conservative viewpoints. Someone might ask,
are you open to new experiences? I see you're
into, you know, extreme sports and
other novel things in your life. Sure. Well, are
you open to the idea that Voldemort would make a
great president and Harry Potter was a terrorist? And maybe
(44:23):
you are. You know, you're open to alternative viewpoints, alternative worldviews, right.
And that kind of mind can be
a dangerous thing, because it kind of runs both ways:
if you have a closed-off mind, good information
is not getting in, but also maybe bad information is
less likely to get in. So, like I said,
that aspect of the argument definitely, I think,
(44:45):
rings true for me. Okay, here's another one. What about
analytical intelligence? Will people with greater cognitive abilities be
better at avoiding the effects of anchoring? This is
one where research is divided on the topic; at least
at the time this meta review was undertaken,
there were conflicting results. Essentially, some studies seemed to find
that those with greater cognitive abilities were more resistant to anchoring,
(45:07):
and another study found, nope, not the case. Okay. Well,
I mean, we've seen plenty of studies before that show
that very intelligent people can be deceived and can be
self-deceiving. So it would make sense that your, you know,
cognitive level would only have so much influence on your
susceptibility to anchoring. Yeah, I mean, it's one of the
(45:28):
things we talked about in our Science Communication Breakdown episode,
is that being a smart person does not necessarily protect
you against radicalizing yourself with untrue beliefs on a partisan basis.
Maria Konnikova has an entire book dealing with
con artists, and one of her key points is that
very intelligent people can be duped by things like this. Yeah,
(45:50):
smart people are vulnerable to con artists. She's got a
great story, and, well, it's not a great story, it's
a sad story, but it's about, what is it, a
nuclear physicist who gets taken in on this
bizarre drug running scheme? Yes, I believe so. I'd have
to revisit it to make sure I got the details right.
But that's a good book. It's worth reading, by
the way. All right, we're gonna take a quick break
(46:11):
and when we come back, we'll give you a little
advice on how to avoid the anchoring effect. All right,
we're back. So the question you're obviously wondering about:
we've gone through all these reasons that the anchoring
effect appears incredibly robust; despite the fact that people want
to be able to avoid it and not have it
influence their thinking, it just seems to work every time.
(46:35):
So how do you get around it? Well, this
comes up in the two thousand eleven paper
we've been discussing, and the results are not great. There
is not a whole lot of hope to be offered.
One of the strategies that has been
put out there as something that might work is what's
(46:56):
known as the consider-the-opposite strategy. Now, this is
effective at some types of debiasing. Debiasing is
the process of, you know, trying to remove your personal
bias, and the consider-the-opposite strategy actually seems
pretty simple, but it's worth learning how to do. When
you think something is true, just sit there and come
(47:20):
up with a list of reasons it might not be true.
I think this is reasonable. Yeah, I mean, a sort
of science fiction example would be Star Wars, looking
at the Empire. Is the Empire good or
is the Empire bad? You're told that they're bad, but
sometimes it's helpful to entertain the opposite viewpoint. Maybe the
Empire was good. I don't know what the arguments for
(47:42):
that would be, but okay. I don't know if it
holds up anymore, but I feel like there was a
time when the argument was more convincing, or
at least, I couldn't see that the Empire is good, but
I could see that the rebellion is also evil. Yes,
I could say that the Empire and the rebellion
are both evil. Yeah. I feel like they're leaning into
(48:03):
that more with the recent films, right? Maybe, I don't
know. But anyway, it comes back to a popular bit
of advice that Timothy Leary gave everyone. Right. Yes, though
a lot of people say that, and I think a
lot of times they just think that means, like, don't
believe what the man tells you. That is a part of it.
But another very important part of thinking for yourself is
(48:24):
questioning your internal authority, questioning what seems reasonable to you
at this moment. And a good way to do that,
apparently is to try this consider the opposite strategy. Just
honestly do your best to come up with a list
of reasons why what you're thinking is probably wrong. And
then you consider that list and you think about are
(48:46):
these reasons reasonable? And so this has been shown
to be effective at some things, some types of debiasing,
but apparently it is not shown to be very effective
with anchoring. Well, that's not good at all. Nope. Another
thing: I want to read a quote from the paper.
Quote, in their popular book on behavioral economics, Belsky
(49:08):
and Gilovich warned people that they may be prone to
confirmation biases and anchoring if they make spending and investment
decisions without research. They are especially loyal to certain brands
or investments for the wrong reasons. They find it hard
to sell investments for less than they paid for them,
and they rely on the seller's price rather than assessing
(49:29):
the value themselves. They advise people to avoid the pitfall
of anchoring by broadening their board of advisors, so listening
to more people, doing more thorough research before making economic decisions,
so not just relying on one anchor you're seeing in
the store, but trying to get as much information in
front of you as possible, looking at trends, being realistic
(49:54):
and taking the longer view, and showing a little more
humility when it comes to one's own judgment. Now,
all of this seems like good advice to me, but
I don't know if it actually proves effective at overcoming
the anchoring bias, right, because in all of these cases,
if you just had this checklist in your pocket,
you would still be employing it
(50:16):
explicitly, trying to counter something that is occurring implicitly. Yeah.
Now, there are a few other ideas I was just
thinking about. These are not tested, but I was
trying to think, well, what could you do, given how
robust the anchoring effect is? Here's one: whenever possible,
avoid the anchor. Like, in situations
(50:38):
where you're going to have to make a judgment and
you know that you may be exposed to an anchor
that works against you, just try to protect yourself from
being exposed to it. Do whatever you can to avoid
actually encountering that anchor. Huh. This sounds like a
potential role for an Internet browser filter, like an
(50:58):
anchor filter, where it will take out any leading
numbers in whatever you might be reading. Yeah, but then again,
it's hard to know how to do that, right? Like,
you don't want to cut yourself off from incoming information
that may actually be useful to you. True, and you
can't just remove all numbers from your news feed. That
sounds a bit extreme. Yeah. Here's another one that is
much more directly related to price negotiations:
(51:24):
be preemptive. Set your own anchor before your negotiating
opponent has a chance to set an anchor for you. So,
if you want to pay a lower price on something,
apparently a good way to do that is to be
the first person to say something and set your really,
really low estimate, or high estimate, right, if you're trying to get paid. Yeah, exactly.
(51:45):
This sounds like the art of the deal
right here. I don't think this exactly is the art
of the deal, um, but yeah, you can use anchoring to
your advantage. Most of the time people are going to
be trying to use it against you, but there are
cases where normal people who are not in advertising
or sales or whatever can try to use this. For example,
(52:05):
studies have actually been conducted and found that
if you're trying to get a higher salary at work,
you're trying to negotiate your pay up, salary negotiations
that open with a very high request are more likely
to end up with a higher salary offer in the end,
even if the opening anchor you request is way too high.
So going into every negotiation saying thirty million dollars, just
(52:29):
go for the Sean Connery money right off the bat.
I don't know if thirty million dollars will work, I mean, maybe
it will, I don't know. Then again, I
feel like if you're negotiating with a business person,
they've probably been trained to some extent about some version
of the anchoring effect. But then again, as we've discussed earlier,
knowing about it, yeah, knowing about it doesn't make it
(52:50):
not work on you. Hey, if you want to check
out more Stuff to Blow Your Mind, head on over
to stuff to blow your mind dot com. That's where you'll
find all the podcast episodes, blog posts, and links out
to our various social media accounts. Big thanks as always
to our audio producers Alex Williams and Tari Harrison. And
if you want to get in touch with us directly,
as always, you can email us at blow the mind
at how stuff works dot com. For more on this
(53:21):
and thousands of other topics, visit how stuff works
dot com