Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio.
Speaker 2 (00:12):
Hey, welcome to Stuff to Blow Your Mind. My name
is Robert Lamb.
Speaker 3 (00:16):
And I'm Joe McCormick. And in today's episode, we're going
to begin a series in which we look at a
phenomenon studied in psychology, economics, and decision theory which is
called ambiguity aversion. Ambiguity aversion is when we prefer risks
with known probabilities over risks with unknown probabilities, even if
(00:40):
you have no reason for thinking that the unknown risks
will be worse than the known risks. And we'll get
to some concrete examples and foundational research on this in
a bit, but to an approximation, a good way to
think about this is that ambiguity aversion is the sentiment
expressed in the saying, better the devil you know than
(01:01):
the devil you don't. This is an aphorism I feel
like people often quote when they're about to justify making
a really bad or indefensible decision. And the whole point
of this saying is that the devil you know is
already bad. That's usually implied, and there's no indication that
the devil you don't know will be worse. It could
(01:22):
well be a kinder, gentler devil with a less pointy pitchfork.
But still we often find ourselves thinking, I'd just rather
stick with the one where I know how pointy it
is upfront.
Speaker 2 (01:35):
Yeah, it's often a form of indecision, like refusing to
do anything about the current devil situation, but you know,
at least I know this devil. The next devil could be worse.
I don't want a worse devil. That's right.
Speaker 3 (01:47):
This is another one of those aphorisms. We were talking
about this just recently in the Saint Swithin episode. It's
one of these aphorisms that I think is only wisdom
given certain assumptions or conditions that are not really part
of the saying, Like as stated, it could be purely
terrible advice, like what if the devil you know is
really bad? And what if going with the devil you
(02:10):
don't know gives you a really good shot at improving
your stay in hell? But also I think it does
have a kind of wisdom if you don't think of
it as advice, but rather as an ironic observation about
human nature, in which case I think it's sort of
dead on. This is often how we think, and
that is the core observation also of the ambiguity aversion literature.
Speaker 2 (02:33):
Yeah yeah, I mean because you can also ask, well,
what if we just didn't have a devil? Is that
on the table? Can we just not have a devil?
Why do we have to worry about the devil we
have versus the devil we don't have yet? It's, you know,
it can be, and I guess it's rather condemning of human
nature in that aspect as well, where the answer is, well,
you've got to have a devil. We're humans, we make
(02:54):
our own, and you've got to have one in play.
Speaker 3 (02:56):
You should have thought of that before you double-parked, Rob.
Speaker 2 (02:58):
Yeah, now you're here in hell with.
Speaker 3 (02:59):
The rest of us.
Speaker 2 (03:00):
There's a rather keen Chinese saying that goes along these
lines as well. It's a well known idiom that states
that it's quote easy to dodge the spear in the open,
hard to avoid a stab in the dark.
Speaker 3 (03:12):
Oh that's interesting, because yeah, that's the same idea. It
isn't necessarily true in all circumstances, like in the dark,
the person stabbing at you also can't see you very well,
so it might be hard to dodge, but it's also
hard for them to hit you. But in the dark,
you just don't really know, you don't really know what's
going on, so it just seems scarier.
Speaker 2 (03:34):
Yeah. Absolutely, And it reminds me a bit of another
Chinese saying, better to be a dog in times of
tranquility than a human in times of chaos, And this
has some of the same sentiments as a fourteenth century
Islamic saying that is sometimes misattributed as being of Chinese
origin itself, and that is better a century of tyranny
(03:55):
than one day of chaos.
Speaker 3 (03:57):
Also, I don't know if I actually agree with that.
Speaker 2 (04:00):
Well, I don't know. It's I think about it a
lot for various reasons, and I do. I do agree
with the basic sentiment of it. When you find yourself
dipping into chaos. And I'm not talking about like capital
C chaos, like absolute chaos, but these little moments of
(04:21):
chaos and disorder, you can easily line yourself up with
the idea of like, well, yeah, this is why people,
you know, fall for tyranny. This is because they have
these moments of chaos and uncertainty and they're like, well,
I would just rather have this sort of known top
down oppression in place rather than some sort of ambiguous
chaos bubbling up around me.
Speaker 3 (04:42):
Better something that's definitely bad than something I don't
understand and can't predict.
Speaker 2 (04:47):
Yeah, Now, ambiguity and chaos, we should note here, are
not the same thing. Ambiguity is a lack of clarity, while
chaos is a state of disorder, and the former may
well lead into the latter. And chaos may be seen
as a state of absolute ambiguity and uncertainty as well.
So in that respect, it becomes even more difficult when
(05:07):
we try and weigh chaos and ambiguity in our concerns
about the future.
Speaker 3 (05:12):
I'm just trying to think of familiar, everyday examples of
ambiguity aversion, and I think I came across one internally
in the now commonly acknowledged sentiment of not wanting to
go out and try anything, you know. So, like, do
you ever find yourself trying to find a way to
(05:34):
avoid going through an experience that is new and unfamiliar
and thus unpredictable to you? It seems like, I
don't know, it just seems kind of fraught, like a hassle.
And instead of doing that, choosing to have an experience
that you pretty much know you will probably not enjoy.
Speaker 2 (05:54):
Yeah, I think I know what you're saying here. Yeah,
I mean, life is perpetually a situation where
it's like, am I going to stick with what's comfortable
or am I going to say yes to adventure? Am
I going to try something new? And I often find myself,
you know, at the crossroads of that moment, because there's
(06:14):
a huge part of me that doesn't want to try
new things, and I get anxious about the various experiences
aligned with those kinds of quests. But on the other hand,
I have to recognize that like most of the really
fulfilling things in my life have been because I said
yes to adventure and because I took a chance on
something and I went out of my comfort zone. But
(06:37):
then again, we call it our comfort zone because it
is comfortable. It is a nice place, it is a
comforting place. So it's, you know, give and take.
Speaker 3 (06:45):
Yes, I think that's absolutely correct. But I think the
really weird thing is that sometimes your quote comfort zone
isn't comfortable. I mean, sometimes we prefer an experience that's
not even very nice. It's still like something that we
know we won't like, but it's just it's you know,
it's familiar. We at least know what we're getting with it,
over something that is just question marks.
Speaker 2 (07:05):
Right, right. We'll choose a known okayness over a potentially
amazing experience.
Speaker 3 (07:12):
So one of the most important early writings on ambiguity
aversion is a classic paper in the Quarterly Journal of
Economics from nineteen sixty one by the American economist, whistleblower
and political activist Daniel Ellsberg. The paper was called Risk,
Ambiguity and the Savage Axioms. That's savage with a capital S.
(07:35):
That's a person's name. It's not describing the axioms as savage.
And just to go ahead and mention it now, because
I also used this as a source in writing about
Ellsberg and this paper's impact, I was also reading a
paper called Ambiguity and Ambiguity Aversion in the Handbook of
the Economics of Risk and Uncertainty. This is by Mark J.
(07:58):
Machina and Marciano Siniscalchi, published by North-Holland in twenty fourteen.
This is a sort of general overview of the concept
of ambiguity aversion and a survey of a bunch of
different models of it. Now, before we get into the
idea that Ellsberg explored in this very important paper, Rob,
you had a bit of bio on Ellsberg, didn't you?
Speaker 2 (08:18):
Yeah. Yeah. Daniel Ellsberg, quite an interesting fellow, nineteen
thirty one through twenty twenty three. In addition to Ellsberg's paradox,
which we're getting into here, he's also famous, perhaps
more so in a general sense, as the Rand Corporation
employee who photocopied and released what would come to be
(08:39):
known as the Pentagon Papers. This is a United States
Department of Defense history of the US's political and military
involvement in Vietnam from nineteen forty five through nineteen sixty eight.
Speaker 3 (08:51):
The key point being that Ellsberg believed that getting these
papers out there was important in informing the public about
things that they didn't know about how the Vietnam War
was being prosecuted and represented. Right, right.
Speaker 2 (09:02):
It was very much a whistle blowing act on Ellsberg's part,
and these papers were published by the New York Times
in nineteen seventy one. This would be a decade after
the publication regarding the Ellsberg paradox. And yeah, they detailed
secret developments about the scope of the US military efforts
in Vietnam, the real sticking points being the deception involving
(09:25):
the Gulf of Tonkin incident, unreported expansion of the war,
the true objective of the war post nineteen sixty five,
which basically was more about containing China and preventing a
humiliating US defeat rather than any specific objectives to help
the people of Vietnam, and the lack of a clear
plan for victory as well. So I imagine a lot
(09:47):
of you have at least heard of the Pentagon Papers,
even if you're not familiar with the full story, because
this incident led to conspiracy and espionage charges against
Ellsberg that were later dismissed, greatly fueled the anti
war movement as well as distrust in the government at
the time, and led to the Nixon White House's use of
(10:08):
the infamous White House Plumbers to discredit Ellsberg. And of course,
all of this ends up feeding into the Watergate scandal,
which is another one of those major historical touchstones from
the time period that again, if you're not super familiar
with Watergate, you've at least heard of Watergate, and you've
heard about the other different gates out there that allude
(10:28):
back to what a scandal it was.
Speaker 3 (10:31):
Now everything gets a gate suffix.
Speaker 2 (10:33):
Right, no matter how deserved or undeserved. Now, Ellsberg
remained an outspoken activist and whistleblower throughout his life, particularly
regarding the US's military actions under multiple presidents, including the
current one, the Russian invasion of Ukraine, and also in
support of various other whistle blowers. He was a vocal
(10:54):
opponent of the use and the threat of nuclear weapons
and the rhetoric around their use, and he is sometimes
described as a nuclear war planner turned anti-nuclear activist,
and his work at the Rand Corporation did involve such contemplations. Yeah.
Speaker 3 (11:11):
I think a big part of his pitch in his
anti war and anti nuclear weapons activism was like, look,
I've been there where the power players are executing you know,
global military and nuclear strategy. You should be worried. Things
are not okay.
Speaker 2 (11:27):
Yeah. One of his major overarching criticisms of nuclear weapons,
and I feel like we're probably preaching to the choir
on this one, I can't imagine we have any big
nuclear war fans out there, was that their use as
a deterrent, one of the major positive spins that is
often given about having a vast nuclear arsenal, this idea
(11:47):
of the nuclear deterrent, was itself highly unreliable and unstable.
There were simply too many variables human, technological, situational that
could lead to not only catastrophic but existential unintended consequences.
Speaker 3 (12:02):
Yeah, too many ways for it to go wrong, and
if it does go wrong, the consequences are too great.
Speaker 2 (12:07):
By the way, Ellsberg has been portrayed, I believe twice,
in major motion pictures, once by James Spader and once
by Matthew Rhys.
Speaker 3 (12:24):
So coming back to the Ellsberg paper from nineteen sixty one, again,
that's Risk, Ambiguity, and the Savage Axioms. This is the
really important one. I looked it up. I think it
had, like, I don't know, like tens of thousands of citations.
It's a very, very famous and influential economics paper.
In this paper he illustrates the idea of ambiguity
(12:45):
aversion with a number of thought experiments, and this is
one of them. So this is the three color urn.
Imagine that there's an urn like a jar that is
filled with ninety little ping pong balls that are a
mix of different colors. You cannot see inside the urn,
it's opaque, But you're going to play a game where
(13:06):
you reach into the urn and you pull out one
ball at a time, and you can win prizes based
on the color of the ball that you pull out.
You know that thirty of the balls inside the urn
are red, So since thirty out of ninety are red,
there's a one in three chance that a random ball
is going to be red. The odds there are pretty clear.
(13:27):
The other sixty balls, however, are a mix of yellow
and black. And here's the really important part. You don't
know the composition of that group. So those sixty balls
could be thirty yellow and thirty black. It could be
sixty yellow and zero black. It could be sixty black
and zero yellow, or any mix in between. Since you
(13:49):
don't know the mix, you don't know the probability of
drawing a yellow ball or a black ball ahead of time.
For red, it's one in three. For either yellow or black,
it's anywhere between zero and two out of three. Now,
imagine it is your turn to draw a ball out
of the urn, and you can take one of two bets.
(14:09):
Bet A, you get one hundred dollars if your ball
is red, So if you pick this option, your chance
of winning one hundred dollars is one in three. In
bet B you get one hundred dollars if your ball
is black. Since you don't know the mix of yellow
and black, you don't know your odds of winning, but
they are somewhere between zero and two out of three,
(14:33):
with the midpoint being one out of three, which is
the same as the odds on red. Which one do
you pick?
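The odds in the urn game can be made concrete with a few lines of Python. This is an editorial sketch, not anything from the episode: it just enumerates every possible composition of the sixty unknown balls and compares bet A's known chance with bet B's range.

```python
# A sketch of Ellsberg's three-color urn: 90 balls, 30 known red,
# and 60 balls split between yellow and black in an unknown mix.

def win_prob_bet_a():
    """Bet A pays on red: always 30/90, i.e., one in three."""
    return 30 / 90

def win_prob_bet_b(num_black):
    """Bet B pays on black; its odds depend on the unknown mix."""
    return num_black / 90

# Enumerate every possible composition (0 to 60 black balls).
probs = [win_prob_bet_b(b) for b in range(61)]

print(win_prob_bet_a())          # the known odds, one in three
print(min(probs), max(probs))    # the ambiguous range, zero to two thirds
print(sum(probs) / len(probs))   # averages out the same as bet A
```

Under any symmetric prior over the possible mixes, bet B's average odds come out equal to bet A's, which is why preferring red has no objective payoff justification.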
Speaker 2 (14:39):
Well, it sounds like bet A is the surer thing, right.
Speaker 3 (14:43):
If somebody offered me this game, I would probably pick
bet A. Yeah, it just seems safer. It's like, okay,
I know what I'm dealing with there: one in three.
The other option, your odds could be better than one
in three, but you just don't know.
Speaker 2 (14:56):
They could be worse.
Speaker 3 (14:56):
They could be zero, they could be worse, could be better.
You have no way of knowing either way, Ellsberg suggested,
and later experiments would empirically demonstrate that most people would
bet on red instead of black, even though there is
no objective reason to think that red is a better bet.
Following the pick red strategy will not necessarily earn you
(15:19):
more money in this game, But people really do, in fact,
just like it better to pick red. It feels right,
that's just what we think we should do. That in
itself is kind of weird and interesting, like why do
we prefer to avoid ambiguity even when there is no
clear advantage in doing so. Then there's a really interesting wrinkle.
(15:41):
Imagine you're playing the same game. You are drawing a
ball out of an urn, and this time you're given
assurance that the mix of the balls is the same
as it was the first time. So whatever was the
case with the mix of yellow and black in there,
it's going to be the same as it was with
your last draw. And you get a chance to make
a second bet. You can take either bet C, where
(16:03):
you get one hundred dollars if the ball drawn out
is either yellow or red, and this one has an
unknown chance of winning, because of course you don't know
how many yellow or black balls there are. You know
that your chance of winning is somewhere between one in
three, if there are, you know, the thirty red balls
and zero yellow balls, and one hundred percent if there
(16:25):
are sixty yellow balls. Or you can take bet D,
where you get one hundred dollars if the ball is
either yellow or black. This one has a guaranteed two
thirds chance to win, because we know that together yellow
and black make sixty of the ninety balls, even though we
don't know what the mix is. In this second trial,
people will tend to pick bet D once again, because
(16:48):
the probabilities are known. With D, it's a guaranteed two
thirds chance to win. With bet C, it's somewhere between
one in three and one hundred percent. Picking bet
D allows them to avoid placing bet C, which, again,
has ambiguous chances. Here's the really interesting thing that
Ellsberg pointed out. There's actually a contradiction in people's behavior
(17:13):
between the first and second round of this game. Assuming
you are actually doing your best to win the money.
In round one, if you bet on red instead of black,
that implies you believe that less than thirty of the
unknown sixty balls are black. Right? That is, it
implies that red has a better chance of winning, so
(17:34):
less than half of them have to be black. However,
in the second round, if you take the second option
picking yellow and black, it implies that you think that
more than thirty of the unknown balls are black. Otherwise
the red and yellow combination option would have the better
chance of winning. Right, These two bets in combination make
(17:56):
no sense because they imply self contradictory beliefs. It says
that I believe that more balls are black and fewer
balls are black at the same time. Obviously that can't
really be the case. But there could be another issue
driving this behavior, which is simply that people don't like ambiguity,
(18:18):
and they will pay up for known odds, even if
doing so implies mutually exclusive assumptions. I was trying to think,
you know, as I said a minute ago, I think
I would probably pick red if given the option to
play this game. Obviously, I sort of already knew the
deal here, so I can't know what I would do
going into it blind like most people, but I think
(18:41):
I would, for no rational reason, prefer the uncertainty reducing options.
I think the most likely reason for that is even
if I were given assurances that this game was on
the up and up, I would suspect some kind of trick,
Like if I'm being asked to bet on uncertainty, There's
some part of me that just kind of rises up
(19:02):
and says, uh, there's a scam here that you are
not seeing.
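The contradiction described above can be checked mechanically. Here's an editorial sketch, not from Ellsberg's paper itself: treat a person's belief as a single subjective probability of drawing black and scan for any belief that justifies choosing both bet A and bet D.

```python
# A sketch of the Ellsberg paradox logic: let p_black be your subjective
# probability that a drawn ball is black, with red fixed at 1/3 and
# yellow making up the remaining 2/3 - p_black.

def prefers_a_over_b(p_black):
    """Bet A (red, 1/3) beats bet B (black) only if p_black < 1/3."""
    return 1 / 3 > p_black

def prefers_d_over_c(p_black):
    """Bet D (yellow or black, a known 2/3) beats bet C (yellow or red)
    only if 2/3 > (2/3 - p_black) + 1/3, i.e., only if p_black > 1/3."""
    p_yellow = 2 / 3 - p_black
    return 2 / 3 > p_yellow + 1 / 3

# Scan beliefs from 0 to 2/3 looking for one that supports both choices.
consistent = [b / 90 for b in range(61)
              if prefers_a_over_b(b / 90) and prefers_d_over_c(b / 90)]
print(consistent)  # empty: no single belief justifies picking both A and D
```

The list comes back empty because the two preferences require the same number to be both below and above one third, which is exactly the self-contradiction Ellsberg identified.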
Speaker 2 (19:06):
Yeah, yeah, I mean you kind of expect that within
the constraints of a testing environment, and you definitely expect
that out of the world. You know, whatever form this
was taking, you'd assume that somebody had a vested interest
in manipulating how you were going to behave, how
you're going to respond to the two possibilities.
Speaker 3 (19:25):
Yeah. Ellsberg also, in this paper, by the way,
came up with a second illustration of the same principle.
So if you go reading about this, you'll read about
the three color urn and then the two urn experiment.
The second illustration used two urns instead of one, because
it made the contradiction between the different bets even clearer.
And this self-contradictory behavior is now called the
(19:47):
Ellsberg paradox. Ellsberg himself didn't call it that; that's
other people, they put his name on it. Anyway, Ellsberg's
paper about ambiguity aversion was not just about observing a
weird little quirk of human behavior. It was especially of
interest to the fields of economics and decision theory modeling
(20:10):
how people make decisions, because it sought to undermine some
of the major underpinnings of the fields, specifically providing evidence
against some of the tenets of a framework called subjective
expected utility theory or savage as axioms, named after the
American mathematician Leonard Savage who formulated them. Subjective expected utility
(20:35):
theory or SEU theory is a way of modeling how
people make decisions when we don't have perfect or complete information, which,
of course in life we rarely do. Right, So, you
know this is going to be relevant in understanding how
people behave in most real world situations. Most situations in
life are not a casino table game where you have
(20:58):
known odds of winning.
Speaker 2 (21:00):
Or, you know, you can easily imagine
a situation where you're playing some sort of a game,
be it a tabletop role playing game or some sort
of like, you know, video game, and you're given a
choice between two items for your character. Right, you have
the stats of those items right there. Yeah, you know
what kind of enemies you go up against in the game.
Maybe you've played through the game before. You have something
(21:20):
at least approaching perfect knowledge of the simplified world, and
our real world is not so simplified and there's so
many variables to it.
Speaker 3 (21:29):
That's exactly right. So in the real world, we're constantly dealing
with uncertainty, ambiguous situations where we don't have all the
information to judge what outcomes are most likely, and yet
somehow we navigate this world making decisions all the time.
The question being addressed here is how do people make
those decisions when they don't actually know what the relative
(21:50):
likelihood of different outcomes are. Subjective expected utility theory has
several rules, but a simplified version is that it says
that when people don't know the objective likelihood of
an outcome, they form beliefs about how likely that outcome is,
and then they act consistently with those beliefs in order
(22:12):
to maximize their personal benefit. I came across a passage
by the American economist Frank Knight which I think summarizes
this idea well. Knight writes, quote, we must observe at
the outset that when an individual instance, i.e., a
one-time event only, is at issue, there is no
difference for conduct between a measurable risk and an unmeasurable uncertainty.
(22:36):
The individual, as already observed, throws his estimate of the
value of an opinion into the probability form of a
successes in b trials, a slash b being a proper fraction,
and feels toward it as toward any other probability situation.
So essentially we form a belief or a feeling
(22:58):
about how likely something is, and then we act as
if that was, you know, like we know the
odds of a coin flip coming up heads are one
in two, whether or not we're actually right about those
feelings or beliefs. Ellsberg used the principle of ambiguity aversion
to say, actually, no, people do not always behave as
(23:20):
if they have consistent beliefs about what is more or
less likely to happen. And you can prove this because
they make bets that would imply self contradictory beliefs in
the same situation, meaning they either don't actually have or
act on consistent beliefs about a situation, or they don't
always act to maximize their own monetary benefit. Ellsberg's point
(23:44):
here was that we can have obscure, previously unacknowledged motivations,
like the desire to avoid dealing with ambiguity, and that
particular desire is so strong in some cases that it
overrides our consistent thinking and a simple dollar value understanding
of rational self-interest. And by the way, the technical
(24:08):
name of the Savage axiom that Ellsberg was attacking here
is called the sure thing principle. Another thing to
understand about Ellsberg's paper was that he was arguing that
we need to make a stronger distinction between two different concepts.
Those are risk and ambiguity. Ellsberg gives the definition that
(24:30):
risk is when we have a stake in an outcome
and we know the theoretical likelihood of that outcome. So
this is like the casino games or a coin toss.
You know, a fair coin toss is fifty to fifty.
We know what the outcome likelihood is, and we can
take a risk. A fair die roll is one in
six to get a particular number. Ambiguity is when we
(24:52):
don't have an objective way of knowing what the probability
of our desired outcome is. To Ellsberg, ambiguity was, quote,
the nature of one's information concerning the relative likelihood
of events, a quality depending on the amount, type, reliability,
and unanimity of information, giving rise to one's degree of
confidence in an estimation of relative likelihoods. And so it's
(25:16):
worth noting that you can have confidence or even what
feels like certainty, or act as if you have confidence
or certainty without actually being correct. You can go through
life having high confidence in objectively low probability things. One
of the actually really useful things about SEU theory, the
subjective expected utility theory, was that it made things simple
(25:39):
and easier to understand because it treated risk and ambiguity
the same. So a person making bets on a die
roll will know that the probability of rolling a specific
number is one in six, and they will behave accordingly.
And a person who has a stake in an outcome
with an unknown probability will, according to Savage's axioms, mentally
(26:02):
even subconsciously form an internal belief about the probability of
that outcome. So, for example, you imagine there is a
one in four chance it's going to rain today. That
might not have anything to do with reality, but you
just take that belief on, and you make all of your decisions based
(26:24):
on that belief. They will act according to that belief
the same way they would act knowing that a
coin flip is fifty to fifty. And Ellsberg argued, no,
that is not how we make decisions. SEU theory with
Savage's axioms failed to account for other complexities in human behavior,
for example, our deep desire to get away from ambiguity,
(26:47):
Like, we dislike ambiguity so strongly that we will make
decisions that imply self contradictory beliefs just to avoid it. Now,
there have been a lot of arguments over the years
about how to interpret ambiguity aversion. Is it actually rational
in ways that were not previously recognized, or is it
more like some kind of systematic error or cognitive bias.
(27:12):
We might come back and revisit that debate in a
subsequent part of this series, but we've got a couple
other things we wanted to talk about today. First, Rob,
I know you have a really interesting place you want
to take this, but first I wanted to mention the
idea just of experimental confirmation, because one important thing to
understand is that Ellsberg's original paper did not have an
(27:34):
experimental component. It was a thought experiment, and he realized
the phenomenon needed to be tested with real human subjects.
In the decades since his paper, it has been tested many,
many times, many different ways, and generally it has proved robust.
There are some nuances and exceptions, but generally people behave
(27:54):
the way Ellsberg predicted they would. A majority of people
structure their choices to avoid ambiguity, even when there's no
clear objective benefit to doing so, and they even make
bets that would imply, according to SEU theory, self-contradictory
beliefs, if those bets can get them out of
dealing with ambiguous probabilities. So, I don't have to mess
(28:16):
with that. Regarding those experiments, my main source here is
again that survey article by Machina and Siniscalchi. Again, that's
Ambiguity and Ambiguity Aversion from twenty fourteen. The authors
here list a whole bunch of experiments where people carried
out versions of like the three color urn or this
(28:38):
other experiment that Ellsberg described, the two urn experiment. They
list Fellner in nineteen sixty one, which found that people
would rather take fifty to fifty odds than unknown odds,
which could be better or worse. They also cite Becker
and Brownson sixty four, MacCrimmon sixty eight, Slovic and
Tversky from nineteen seventy four, Curley and Yates from eighty nine,
(29:00):
among others, all generally finding confirmation of Ellsberg's predictions about
ambiguity aversion. Quote, although most of these experiments used students
as subjects, researchers such as MacCrimmon sixty five, Hogarth and
Kunreuther eighty nine, Einhorn and Hogarth eighty six, Viscusi and
Chesson ninety nine, Ho, Keller, and Keltyka two thousand and two,
(29:24):
and Maffioletti and Santoni two thousand and five have examined
the ambiguity preferences of business owners, trade union leaders, actuaries, managers,
and executives with the same overall findings, so it appears
quite robust. The authors here also report some interesting findings
of MacCrimmon and Larsson from seventy nine, where they looked
(29:46):
into it and found that while the majority of
people did try to avoid ambiguity, some minority of people
did not and even just chose to embrace it. They
also found, as you might expect, that our relative tolerance
for ambiguity went up or down depending on how good
the known odds were in the known odds bet condition. So,
(30:10):
for example, if you do this three color urn experiment,
if you take the number of red balls that's the
ones where you know how many there are, If you
take that down to zero or even just down to
five or ten, most people will take the gamble with
the unknown odds on yellow and black because the odds
on red are clearly very bad. If you make the
odds on red really good, even more people will stick
(30:32):
with the known odds, and ambiguity aversion becomes even stronger.
So that shouldn't be all that surprising. But yeah, in
the middle category, where the odds on red are somewhere
in between good and bad, you're just kind of like,
I don't know. But still people tend to go for
the known odds rather than the unknown odds. However, the
authors also found that even when the idea of subjective
(30:57):
expected utility and choice consistency was explained to the
people doing this experiment, explained to the subjects, or even
in cases where the known odds were bad, some subjects
just stubbornly stuck to red and avoided betting on any
ambiguous conditions. So most of us seem to dislike ambiguity
(31:19):
in the majority of cases, and some of us just
don't like it at all and will not tolerate it. Now,
as is often the case with stuff in decision theory
and economics, you start off playing these little,
(31:42):
like, gambling games with, like, balls in a jar or
something like that, and so it seems like, well, how
consequential could this really be? But actually, the idea
of ambiguity aversion, number one,
has been applied to very consequential situations in real life,
and thus understanding it can have major consequences not only
(32:03):
for people's individual lives, but for world events.
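The MacCrimmon and Larsson finding mentioned above, that tolerance for ambiguity tracks how good the known odds are, can be sketched numerically. This is an editorial illustration with an assumed setup, not the experiment's exact design: a ninety-ball urn where n_red balls are known red and the rest are an unknown yellow/black mix, averaged over every possible mix.

```python
# Sketch: how changing the known odds shifts the comparison between
# the known-odds bet (red) and the ambiguous bet (black), assuming a
# 90-ball urn with n_red known reds and the rest an unknown mix.

def known_odds(n_red, total=90):
    """Chance of winning the bet on red."""
    return n_red / total

def ambiguous_avg_odds(n_red, total=90):
    """Chance of drawing black, averaged over every possible mix."""
    unknown = total - n_red
    return sum(b / total for b in range(unknown + 1)) / (unknown + 1)

for n_red in (5, 30, 60):
    print(n_red, known_odds(n_red), round(ambiguous_avg_odds(n_red), 3))
# With only 5 reds, the ambiguous bet is clearly better on average;
# at 30 the two tie at one third; at 60 the known bet clearly dominates.
```

This mirrors the reported behavior: when the known odds are plainly bad, most subjects take the gamble, and the interesting middle region is where the two are comparable and ambiguity aversion does its work.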
Speaker 2 (32:07):
That's right. And that brings us back to Ellsberg, who, again,
in addition to crafting this highly influential paper, was
also a highly vocal critic of nuclear weapons and again
the rhetoric surrounding nuclear weapons. And I was reading a
good bit from his book The Doomsday Machine, which came
(32:28):
out in twenty seventeen, which is quite a good read
if you want a much deeper dive into his thoughts
on all of this. But you know, in that book
the subject of ambiguous data comes up numerous times. How
will a nuclear power react in response to ambiguous data
about a potential incoming attack? And additionally, it talks about
(32:51):
the purported advantage of ambiguity in intent slash response, namely
in the case of madman theory. This is very
closely associated with Richard Nixon, but also tied to various
other figures domestic and international, including the current US president.
A kind of supposed perceived madness or volatility that would
(33:16):
lead adversaries to second-guess any plans to move against them.
Speaker 3 (33:20):
Right, and we're going to get to some major criticisms
of the madman theory strategy in a bit. But
the idea of it is that you can leverage a
reputation for unpredictability to your advantage in negotiations.
Speaker 2 (33:35):
That's right. You know, this is very much that nobody
knows what I'm going to do approach. That's the message
you put out there. And part of the idea here is that it makes logically empty or extreme
threats seem more possible. Now, madman theory, by the way, is a term that Nixon even used himself when talking about this approach. It was outlined by Daniel Ellsberg
(33:57):
and Thomas C. Schelling as well, but the basic concept is
pretending to be erratic, irrational, or unhinged to influence the
other side of a bargaining table. This goes back a
long ways. Machiavelli wrote about it in fifteen seventeen, and
it's probably as old as the first prehistoric human to
like go around hitting trees randomly with a club, you know,
(34:17):
and say, who knows what Throg is going to do? Why would you mess with Throg on this?
Speaker 3 (34:22):
Yeah, we better just give Throg what he wants.
Speaker 2 (34:25):
Yeah. So again, the Ellsberg paradox highlights a core flaw in human decision making: the aversion to ambiguity. The madman
theory as a political strategy seeks to exploit this by cultivating a reputation of irrationality, creating an ambiguous threat. The adversary, in their aversion to this ambiguity and the unknown
(34:45):
risk of an unpredictable response, may be coerced into backing down. Or at least this is the idea, this is the logic behind the approach. Yeah.
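That logic can be put in toy numbers. This is my own illustrative sketch, not a model from the episode or the literature: all the payoffs and probability ranges are invented, and the worst-case rule is one standard way (maxmin) of modeling an ambiguity-averse decision maker.

```python
# The counterparty chooses between accepting a mediocre deal and
# refusing it. Refusing risks escalation. Against a predictable leader
# the escalation probability is known; against a "madman" it is only
# known to lie somewhere in a range, and an ambiguity-averse (maxmin)
# counterparty evaluates that range at its worst case.

DEAL_PAYOFF = -10          # accepting the bad deal: a known, modest loss
ESCALATION_PAYOFF = -1000  # catastrophic outcome if things blow up
STATUS_QUO_PAYOFF = 0      # refusing with no escalation

def value_of_refusing(p_escalation: float) -> float:
    # Expected value of refusing, given an escalation probability.
    return (p_escalation * ESCALATION_PAYOFF
            + (1 - p_escalation) * STATUS_QUO_PAYOFF)

# Predictable leader: escalation probability is known to be low.
predictable = value_of_refusing(0.005)       # -5.0, better than the deal

# Madman: the counterparty only knows p is somewhere in [0, 0.05],
# so the maxmin rule evaluates refusal at the top of the range.
madman_worst_case = value_of_refusing(0.05)  # -50.0, worse than the deal

print(predictable > DEAL_PAYOFF)        # True: refuse the bad deal
print(madman_worst_case > DEAL_PAYOFF)  # False: take the bad deal
```

The same long-run odds could sit behind both cases; all the madman posture changes is which probability the ambiguity-averse opponent uses, and that is enough to flip their decision toward the concession.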
Speaker 3 (34:55):
The classic example of madman theory in practice in
real life is that the Nixon administration could, for example,
have Kissinger call up his counterparts in the Kremlin and say, look,
you know, we're all trying to calm Nixon down, but
he's boiling mad at you. He's borderline crazy. You've got
to do X, Y and Z. You know, you've got
to give him these concessions he wants, or we don't
(35:16):
know if we'll be able to control him.
Speaker 2 (35:19):
Yeah. Kind of a good cop, bad cop thing too, I imagine.
Speaker 3 (35:21):
Yeah. And I think a big part of it was that it was a way of making the nuclear deterrent feel once again like it had force in matters other than just deterring a first strike.
Because the problem with using nuclear weapons as leverage in negotiation between two nuclear armed powers is obviously, if the
(35:44):
weapons get used at all, everyone loses. There is not a winner in mutually assured destruction. So if the White House was trying to get the Kremlin to do something like I'll nuke if you don't do it, that was not a credible threat, because both parties knew that the other party knows that war would be the destruction
(36:04):
of them both. So no rational actor would ever strike first. Between rational actors, I'll nuke if you don't do XYZ is an empty threat. The goal of madman theory
is to gain leverage in negotiations by introducing some amount
of fear that your counterparty may not be a rational actor.
(36:25):
He might be crazy enough to do it, we don't know. Therefore,
the madman strategy makes direct use of the opponent's ambiguity
aversion as a tactic to get leverage over them. The
madman player in this game is betting that the fear
and uncertainty about how to handle their unpredictability will cause
(36:45):
their counterparty to make concessions they might not make otherwise,
Like you end up thinking, well, I'll just take a
bad deal rather than wander into territory with unclear levels of.
Speaker 2 (36:56):
Risk, right right, and not to derail the point here. But
I want to bring it back to Ellsberg's other criticisms
of nuclear weapons and the rhetoric surrounding nuclear weapons, that being,
there are all these other uncertainty factors and ambiguous data that would be coming in about what the enemy is or is not doing, and therefore any given
(37:17):
situation like this is going to be just rife with potential ambiguity, potential missteps, and potential mistakes. So like
if it were only as easy as we're laying it
out here, it would almost be a different situation totally.
Speaker 3 (37:29):
I mean Ellsberg's point about all of the uncertainty in the actual command and control of, you know, nuclear strategy, that introduces real levels of existential risk into playing games like this. There are a lot
of reasonable criticisms of madman theory, but I think
(37:50):
a major one, at least from my point of view,
is that at best it is a strategy for short
term gain at the expense of long term stability and
negotiating power, because one way or another it undercuts your
credibility and your ability to be seen as an honest
and reliable broker in future negotiations, either because you come
(38:13):
to be seen as actually irrational, in which case people don't want to deal with you, and they may be motivated to do something bad, to strike first against you because you're too dangerous to be let loose, or they may try to find ways to cut you out of negotiations, so that could be one consequence, or just because you are revealed
(38:35):
to have been bluffing, in which case you lose your credibility.
Speaker 2 (38:38):
Right right, and yeah, this is this is all valid.
I was looking at a couple of papers on madman theory, which, like ambiguity aversion itself, is something that has been written about a lot. There's no shortage of papers out there, but I looked at just a couple. I looked at one titled Crazy Like a Fox? Are Leaders with Reputations for Madness
(38:59):
More Successful at International Coercion? by Roseanne W. McManus. This came
out in twenty nineteen from Cambridge University Press, and in
this the author points out that a reputation for madness
would seem to be more often harmful than helpful in
international coercion, so it undercuts the leader's ability to make
(39:20):
believable peace commitments, treaties and so forth, which, and it really almost feels absurd that we need to stress this, are all vital for peaceful relations between nations. No one knows what I'll do easily slides into No one knows what I'll honor, No one knows what treaties I will actually stand by, and so forth. You know,
(39:41):
to bring this back to Dungeons and Dragons, I think
any D&D player worth their salt knows that while you might well enter into a pact with a lawful evil devil, you never enter into a pact with a chaotic evil demon, because the lords of the Nine Hells are going to stick to the letter of the contract, if not the spirit of the contract. But demon lords such as Demogorgon will honor nothing, and there is
(40:02):
no coherence even within themselves. They're just pure ambiguity, and you can't strike any sort of deal because they won't stand by it no matter what. Whereas the devils, being lawful evil, they may look for that wiggle room, they may find ways to, you know, to avoid the spirit of the deal, but they're still bound to the letter.
Speaker 3 (40:22):
Right. So if you discover that your counterparty is actually just chaotic evil, all you can do is roll for initiative. Like, there's no making a
Speaker 2 (40:29):
Deal. Yeah, and then you slide into chaos, pure chaos.
Back to McManus's paper, though. She found that madman theory may be helpful in crisis bargaining, but only under certain conditions, namely when employed by militarily weak leaders. So we're not talking about a true superpower here that can engage in like true mutually assured destruction. Rather, we
(40:53):
would be dealing with a state that could do a lot of damage, but in striking out would just thoroughly destroy themselves. So the idea here is that no one would take this hypothetical nation or the leader of this hypothetical nation seriously unless they presented an air of madness, and then perhaps, hey, they might do it anyway if provoked.
(41:16):
They, you know, they're unhinged, and thus it can be a form of asymmetric leverage. So we're talking low probability but high consequence. And McManus also points out that there would also seem to be more of an advantage if you had a mild reputation for madness rather than an extreme one. You don't want to come off like, you know, a complete eye-rolling maniac.
(41:41):
In one of these cases, the idea would be like, well,
you know, we can, we can still get this individual
to the negotiation table, but we just have to be
hyper sensitive and I guess again, kind of avoiding that
complete fall off into chaotic evil demonhood and instead dealing
with devils that can still be bound by some sort
of law. All right, I have a hypothetical example here.
(42:03):
So we have the demigod hero Hercules and we have his mortal cousin Eurystheus, and they're grabbing lunch at the local gyro place. Great. So Eurystheus warns his cousin that if
he tries to steal any of his fries, he's going
to flip the table and all their food's going to
wind up on the floor, and he acts just really
(42:24):
sensitive and unhinged about the whole thing. Yes, Okay, Hercules
obviously knows that Eurystheus can't take him in a fight
and might not even be able to flip that table
over before Hercules stops him, and rationally, Eurystheus should realize
this and just let big Herc have a few extra fries if he wants. Like, what does it matter?
(42:46):
But from Hercules's standpoint, does he really want to risk
his lunch winding up on the floor. Maybe he should
just let Eurystheus keep all of his fries. Okay. The
downside of course here is that maybe Hercules just won't invite Eurystheus out for lunch next time, and he's certainly not going to pick up the check.
(43:07):
And on top of all that, how are they going
to work together to defeat the hydra? Because, as you
remember from past episodes and from Greek mythology in general,
Hercules can't do that on his own. He has to have his cousin's help to burn the stumps after each head of the hydra is cut off.
Speaker 3 (43:23):
Very good points. I mean, apart from any otherwise moral considerations or honesty considerations about deploying something like madman theory, just from a strategic point of view,
it seems like it is probably best for situations where
like your counterparty has to deal with you, it is
not optional for them, and you don't have to like
(43:47):
or respect each other or ever work together on anything,
and you don't care about long term goals. You are
only interested in this situation right now, in extracting a
short term advantage.
Speaker 2 (44:00):
That's right, that's right. But some papers out there that
have crunched the numbers on all this and done some
experimentation, questionnaires and so forth, do acknowledge that there are multiple dimensions to any of these situations. Namely,
and this is something I think everyone can relate to, Like,
you can have a leader that is engaging in an
international situation, but there is still the domestic view of
(44:26):
that situation. There's still the domestic response, the domestic relationship.
And so one of the papers that I was looking
at, this one is titled Madman or Mad Genius? The International Benefits and Domestic Costs of the Madman Strategy by Joshua A. Schwartz, published twenty twenty three in Security Studies,
and in this the author found that the Madman approach
(44:46):
can work in negotiations with foreign adversaries, but quote entails
significant domestic costs that potentially erode its efficacy. He also
points out that madman theory may simply not work
against major powers, because while it might make a threat
more credible, that doesn't necessarily make the threat more effective
and make the adversary willing to cave to those demands.
(45:09):
So he points out that this is perhaps why Nixon's use of madman theory didn't work against the Soviets. Like, okay, it made Nixon's threat more credible, but did it actually make it more effective? Did he actually get what he wanted out of these bluffs, these threats? And then you have the domestic side of things, with the hypothetical
(45:30):
leader's own citizens not loving the heightened stakes because you know,
obviously I mean, I say obviously, but this is one
of those things that we often have to be reminded
of in a case of mutually assured destruction, or even
a case of asymmetric exchange involving nuclear weapons or some
other kind of just you know, horrible weapon of mass destruction,
(45:51):
a leader is always bargaining with the lives of their
own citizens. You are the chips on the board in
a no one knows what I'll do wager. Yeah, And
there also notes that, okay, you know, this is going
to also differ depending on how much of a voice
the people have in a given nation versus how much
power the ruler has, and there's going to be less
backlash in places where the people have less of a
(46:13):
voice in the ruler has more absolute power.
Speaker 3 (46:16):
That's right. But one thing I did want to clarify
is that the sense in which you are the chips on the board in a no one knows what I'll do wager is not just limited to, like, the worst possible scenario of nuclear warfare. I mean it's also
the case in say, like trade negotiations or something like that,
like your economic prospects are in a way, those are
(46:39):
the chips on the board, Like the citizens' fates and
futures are the things that are being negotiated with.
Speaker 2 (46:46):
That's right. Yeah, in contemplating potential nuclear exchanges, everything is
a bit more stark and a bit more black and white.
But obviously, you know, anyone can think of any number of scenarios where the stakes are still quite high for the individual.
Speaker 3 (47:01):
So yes, it's not hard to see at all why a citizenry can easily become upset and annoyed if their leader, who's supposed to be representing their interests in negotiations, is acting unreliable and unpredictable. Like, the leader may think that they can get good gains out of that in the short term, but the citizens are probably thinking about a lot of these downsides, even if not, you know, thinking
(47:23):
specifically about them, just thinking intuitively: it seems like there's a lot of downside to this.
Speaker 2 (47:28):
Yeah, and again coming back to what we mentioned earlier,
the idea too that if you swing around the sword of madman theory too much, then it makes it more difficult to engage in the other highly important tools of statecraft, peace treaties and agreements and so forth. So it's like you can only swing that
(47:50):
sword around for so long, and then you have to be able to engage in these other acts as well. Again, short term gain with madman theory, not so in the long term. But you know, unfortunately this ends up rolling out into so many things about human experience and human perception: we are so focused on the
(48:12):
short term and we don't think about the long term.
Speaker 3 (48:15):
So we've just been talking about a lot of reasons
that it might actually be bad and not as clever
as it first seems to try to leverage knowledge of
ambiguity aversion offensively in negotiations. But I think one way
that it can definitely be useful is to have knowledge
of ambiguity aversion to use defensively to think about in
(48:35):
analyzing your own behavior and being aware of your bias
to avoid ambiguity and sort of check yourself and think, like,
wait a minute, am I actually making the right decision here?
Is this actually what's rational? Or am I just having
an irrational bias against situations with unknown probabilities that are
(48:57):
having an outsized scariness in my mind because of the
ambiguity involved. Like the example we talked about earlier with
people being afraid to try new experiences because there's some
amount of ambiguity.
Speaker 2 (49:08):
Yeah, absolutely. Again, there's always ambiguity in any new experience. There are so many ways it could go wrong, but there are so many ways it could go right.
You know, a lot of us do tend to focus
more on the ways that could go wrong.
Speaker 3 (49:20):
The devil you know is not necessarily better.
Speaker 2 (49:24):
Yeah, the devil you don't know could be really fun. Give them a shot.
Speaker 3 (49:27):
Sometimes the devil you don't know could be like a chaotic good devil. Those exist, right?
Speaker 2 (49:33):
So, you know, maybe it's possible under the new rules, but it's
Speaker 3 (49:40):
Actually just a friendly tiefling that
Speaker 2 (49:42):
You mistook. You could have a chaotic good tiefling. There you go, it's entirely possible. All right.
Speaker 3 (49:48):
Well, that's going to do it for part one of
our look at ambiguity aversion, But we're going to be
back again next time to talk about how this applies
to some other domains of life and maybe some subsequent research. I might get into some of the different ways that the ambiguity aversion observations have been interpreted in terms of different
(50:09):
types of decision theory. Like, is it an error or is it actually a type of rationality that needs to be better understood? So yeah, we'll talk about that kind of stuff next time.
Speaker 2 (50:18):
Who knows what angles we'll actually discuss in the next episode.
You can tune in to find out in the meantime,
certainly write in, we'd love to hear from you, and we'd like to remind you that Stuff to Blow Your Mind is primarily a science and culture podcast, with core episodes on Tuesdays and Thursdays, short-form episodes on Wednesdays,
and on Fridays. We set aside most serious concerns to
just talk about a weird film on Weird House Cinema.
Speaker 3 (50:40):
Huge thanks as always to our excellent audio producer JJ Posway.
If you would like to get in touch with us
with feedback on this episode or any other, to suggest
a topic for the future, or just to say hello,
you can email us at contact at stuff to Blow
your Mind dot com.
Speaker 1 (51:01):
Stuff to Blow Your Mind is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple podcasts, or wherever you listen to your favorite shows.