Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome, deep divers. Ever feel like you're on
autopilot, sometimes making choices without really, you
know, choosing? Why do you always grab that same
parking spot? Or I don't know.
Why does that brand of cereal always end up in your cart?
It's a great question, and often it's not just random chance.
There are these sort of invisible forces at play shaping
those everyday decisions. Absolutely.
(00:20):
And today, we're going to really pull back the curtain on one of the most influential ideas about this: the concept of the nudge.
We're diving deep into the invisible architecture that
guides your choices, often without you even noticing.
Yeah, it's fascinating stuff. So much of what we do is
influenced by the context, the way choices are presented to us.
Our guide for this journey is a really groundbreaking book
(00:44):
simply called Nudge. It was co-written by Richard H. Thaler, who actually won a Nobel Prize for his work in behavioral economics, and Cass Sunstein, who's a major figure in legal
scholarship and behavioral science.
Right. And this book, it wasn't just an
academic exercise. People like Michael Lewis, you
know, the Moneyball guy, called it utterly brilliant, said it
was one of those rare books that fundamentally change the way I
(01:06):
think about the world. High praise indeed.
Totally. It's described as terrific,
something that can really knock you off your feet.
And what makes it so powerful, I think, is how it connects these
like complex psychological insights to very practical real
world stuff. It's not just theory, it's about
how we can actually design things better.
OK, so let's get right into it. What exactly is a nudge?
(01:29):
The authors define it pretty clearly.
Yeah, they say a nudge is basically any small tweak in the
choice architecture, that's just the environment where you make decisions, that predictably changes your behavior. But, and this is key, it does it
without taking away any options or really changing the economic
incentives in a big way. It's not about forcing you or
(01:50):
banning things. Exactly.
It's about making the good choices or the desired choices a
little bit easier, maybe more appealing.
It's like gentle guidance, not a shove. It's all about understanding how we actually make decisions, yeah, not how economists used to think we made them.
Precisely. It accepts that we humans have
our quirks, our mental shortcuts, our, you know, lazy
(02:11):
moments. And then it asks, OK, how can we
design systems that work with that reality?
They kick off the book with this great story about a friend,
Carolyn, who's in charge of school cafeterias for a big
city. Hundreds of thousands of kids
meals every day. Right.
And she's a nutrition expert, but also kind of creative.
So she wonders, what if we just changed how the food is laid
(02:34):
out? Didn't change the menu, just the
arrangement. So in some schools, desserts
were maybe put first in line and others last. Sometimes the healthy stuff like carrot sticks was right at eye
level. Other times, maybe the French
fries were easier to grab. And the results?
Pretty dramatic. Just rearranging the food with
no bans, no price changes, significantly shifted what the
kids actually chose to eat. That's a perfect nudge.
(02:56):
Subtle. Keeps freedom of choice intact
but still effective. It really drives home this idea
they call no neutral design. Like an architect designing a
building. They have to decide where the
bathrooms go, right? And that decision, seemingly
small, affects how people move, who they bump into.
It shapes the whole experience of being in that building.
(03:18):
Exactly. And the same applies to choice
environments. There's no such thing as a
neutral presentation. Every website layout, every
supermarket aisle, every form you fill out, it's all designed,
intentionally or not, and that design influences you.
So we're all choice architects in some way, even if we don't
realize it when we, say, organize a meeting agenda or
(03:39):
arrange items on a shelf. Yep, we're constantly shaping
the contexts in which others make decisions.
OK, this leads to a term they coined that sounds a bit odd at
first. Libertarian paternalism?
What's the deal with that? Huh.
Yeah, it definitely sounds like a contradiction, but it's
actually the philosophical core of nudging.
The libertarian part means liberty preserving.
(04:00):
Meaning people are always free to choose whatever they want,
free to opt out. Precisely the authors really
stress this. When we say liberty preserving,
we really mean it. The ideal nudge should be easy
and cheap to avoid, like minimalhassle if you want to go your
own way. OK, so that's the libertarian
part. What about the paternalism?
That usually means telling people what's good for them.
(04:20):
Right. And here it means gently guiding
people towards choices that are likely to make them better off
according to their own judgment. Especially when our gut reactions, our automatic systems, might lead us down a less
beneficial path. So it's like a helpful
suggestion for your own good, but one you can totally ignore
if you want to. Exactly.
(04:41):
It respects your autonomy while still trying to lend a helping
hand, particularly for complex or tricky decisions.
This whole philosophy is built on a really fundamental
distinction they make between two types of beings, econs and
humans. Yes, the classic econ. Traditional economics for a long time assumed we are all Homo economicus. These perfectly rational
(05:01):
creatures. Think Mr. Spock, but maybe even
more logical. God like intelligence, perfect
memory, unbreakable willpower. Right.
The kind of person who thinks like Einstein, remembers every
single price they've ever seen, and has the willpower of Gandhi.
Does anyone actually know someone like that?
Probably not. That's the point, the authors
(05:22):
argue. We are, in fact, humans, Homo
sapiens. We struggle with long division
sometimes. We forget birthdays.
We definitely don't always resist temptation.
We make predictable mistakes. We rely on shortcuts, right?
Sensible ones mostly, but shortcuts nonetheless.
Exactly. We use rules of thumb to get
through the day because we can't possibly analyze every single
(05:44):
decision down to the last detail.
And that's the core idea driving the whole book: design for real people with all our flaws and limitations, not for these
mythical, perfectly rational econs.
It's about acknowledging realityand building systems that help
us navigate it better. So today, deep divers, that's
our mission. We're going to explore this
(06:04):
contrast, how our very human nature makes us susceptible to
nudges, how society shapes our choices, and how these nudges
are being used to tackle some really big issues, from saving
for retirement to even climate change.
It's going to be a fascinating look under the hood of decision
making. Strap in, it really does change
(06:25):
how you see the world around you.
OK, so let's dig into why we actually need these nudges in
the first place. It really comes down to
acknowledging how our minds work, or sometimes how they kind
of trick us. The authors use a great visual
example, Shepard's tabletop illusion.
Oh yeah, that's classic. It shows how our visual system,
which is amazing for most things, like, you know, recognizing your
(06:45):
friend in a crowd or reading this text.
It can also be systematically fooled.
So the illusion shows two drawings of tabletops.
One looks long and skinny, the other looks, well, much shorter
and wider. They look completely different
in shape and size. Right. Your immediate perception, your gut feeling says no way are those the same. But then comes the reveal: if you actually take a ruler and
(07:06):
measure the surfaces of the two table tops in the drawing.
They're identical, exactly the same dimensions.
It's wild. And the authors say if you were
fooled by that, and most people are, you are certifiably human.
It's not about intelligence. Einstein would probably be
fooled too. It just shows our brain takes
shortcuts. Yeah, these sensible shortcuts,
(07:28):
as they call them. We can't sit in the grocery
store trying to calculate the absolute optimal combination of
items to maximize nutritional value per dollar while
minimizing, I don't know, packaging waste.
No, we'd starve before we finished aisle one.
We grab what looks familiar, what's on sale, what we remember
liking. We take shortcuts.
And this fallibility, this reliance on shortcuts, ties
(07:51):
directly into a really key concept, drawing heavily on
Daniel Kahneman's Nobel winning work, the idea that we have two
different systems of thinking. Right, System 1 and System 2.
Exactly. System 1 is the automatic
system. It's fast, intuitive, emotional,
effortless. It's your gut reaction.
Ducking when a ball comes at your head, smiling at a puppy,
(08:12):
getting a bad vibe from someone.That's system 1.
It operates without conscious thought, really.
It's the ancient part of our brain, the lizard brain, as some
call. It kind of, yeah.
And then you have system 2, the reflective system.
This is your conscious thought process.
It's slow, deliberate, analytical, effortful.
So, doing complex math, planning a trip to a new place, weighing
(08:32):
the pros and cons of a job offer.
That's System 2. Working hard.
Precisely. The authors use a great analogy.
Think of your reflective system as the logical Mr. Spock, calm
and calculating. Your automatic system is more
like Homer Simpson. Impulsive, emotional, focused on
the immediate. Five days? But I'm mad now! That's pure Homer.
(08:52):
Pure automatic system. Or my own internal Homer.
When the alarm goes off versus my Spock, who knows I should get
up. We all have that internal
dialogue and the famous bat and ball problem shows how easily
the automatic system can jump in with the wrong answer.
Oh yeah, remind us of that one. OK, a bat and a ball cost $1.10 in total. The bat costs $1.00 more than
the ball. How much does the ball cost?
(09:14):
Right, and the immediate automatic answer that pops into
most people's heads is $0.10.
out the easy intuitive answer. But if you engage system 2, your
reflective system, you pause andthink, wait, if the ball is
$0.10 and the bat is $1.00 more,the bat will be $1.10 and $1.10
+ 10 cents is $1.20 cent, not $1.10.
(09:38):
So the reflective system calculation shows the ball must
cost $0.05 and the bat $1.05.
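For anyone who wants that reflective-system check spelled out, the algebra behind the answer is one line, with x standing for the ball's price (nothing here beyond what the hosts just said):

$$x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05, \qquad \text{bat} = x + 1.00 = 1.05$$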
Exactly. It requires overriding that
initial automatic impulse. It shows how often we humans
rely on that quick gut reaction without necessarily stopping to
think it through. So if our minds make these
predictable errors, what are the common patterns?
The authors talk about biases and rules of thumb heuristics.
(09:59):
Right. Yes, these mental shortcuts are
generally useful, but they can lead us astray in systematic
ways. One big one is anchoring.
Anchoring like dropping an anchor on a boat.
Sort of. It means our judgments get stuck
on or heavily influenced by the first piece of information we
receive. Even if that information isn't
totally relevant, That initial piece acts like an anchor.
OK, give me an example. The book uses the taxi tipping
(10:21):
example. You're paying by credit card and
the screen pops up with suggested tip amounts 15%,
twenty percent, 25%. Ah yeah, I've seen that.
Those percentages act as anchors.
Even if you normally tip, say 15%, seeing 20% right there in
the middle makes it seem like the norm, the easy choice.
So people end up tipping more on average just because those
(10:44):
anchors were presented. It's easier to just tap the
button to do the math yourself too.
Definitely. But there's a fascinating twist
called reactance.
default tips were set really high, like 20 percent, 25
percent, 30%, while it did increase the average tip, it
also caused more people to leaveno tip at all.
Why? They felt pushed around.
They reacted against the perceived manipulation, the
(11:06):
feeling of being told what to doand just refused to tip.
So nudges have to be careful notto feel too forceful.
Interesting. OK, what's another major bias?
Loss aversion. This is a huge one.
Basically, losing something feels about twice as bad as
gaining the equivalent thing feels good.
So finding $20 feels good, but losing $20 feels like really
(11:28):
bad. Twice as bad.
Roughly, yes. They proved this with
experiments like the coffee mug study.
People who were given a university mug demanded about
twice as much money to sell it, compared to what people without
a mug were willing to pay to buy one.
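The hosts describe loss aversion in words; one common way to write it down is a simplified, roughly linear version of Kahneman and Tversky's value function with a loss-aversion coefficient of about 2. The linear form and the value of lambda are approximations from the wider literature, not figures quoted in the episode:

$$v(x) = \begin{cases} x & x \ge 0 \\ \lambda x & x < 0 \end{cases}, \qquad \lambda \approx 2 \;\Rightarrow\; v(+\$20) = 20,\;\; v(-\$20) = -40$$

So a $20 loss carries roughly twice the weight of a $20 gain, which is exactly the asymmetry the mug experiment picks up.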
Endowment effect, right? Once you own it, it feels more
valuable and losing it hurts more.
Exactly. Once you have a mug, you don't
want to give it up. This is why charging a tiny fee
(11:50):
for plastic bags like $0.05 is often way more effective at
getting people to bring reusable bags than offering a $0.05 discount for bringing one. Avoiding the loss is a stronger
motivator. And that ties into status quo
bias, doesn't it? Our tendency to just stick with
what we're doing. Absolutely.
It's partly loss aversion. Changing feels like losing
something even if the new thing is better.
(12:11):
But it's also just inertia, laziness, lack of attention, as
the authors put it. Sunstein's magazine story is
hilarious and painfully relatable.
He got 5 free magazines for three months.
But didn't realize he had to actively cancel or he'd be
charged automatically. And he ended up paying for over
a decade for magazines he hardly ever read and mostly despised.
(12:35):
Why didn't he just cancel? Too much hassle finding the
number, calling, maybe waiting on hold.
It was easier to just let it ride.
He called it the yeah, whatever heuristic.
Oh man, I think we all have some yeah whatever subscriptions
draining our bank accounts. This is why default options are
so incredibly powerful, right? Hugely powerful.
Whatever the default is, that's what most people will end up
doing simply because switching takes effort or attention.
(12:58):
Think about software settings oreven just the default ringtone
on your phone. Most people stick with it.
OK, one more bias: framing. Framing is all about how
information is presented. The same facts described
differently. It can lead to very different
choices. The medical example is striking,
telling a patient of 100 people having this operation, 90 are
(13:18):
alive after five years. That sounds pretty good.
Reassuring. Right.
But what if the doctor says of 100 people having this
operation, 10 are dead after five years?
It's the exact same statistic, the same outcome, but the
negative frame feels much scarier and might make someone
refuse the surgery. It's amazing how just changing
the words the frame can shift perception so dramatically.
(13:41):
It really is. And understanding these biases,
anchoring, loss aversion, status quo bias, framing, and others like
availability and optimism is absolutely fundamental to
designing effective nudges. It's about anticipating how we
humans actually think and react.OK, so we understand why we need
nudges. Our brains aren't perfect
calculating machines. Now let's dive into one of the
(14:04):
biggest human struggles, self-control, or the lack
thereof. The authors have this great way
of framing it as an internal battle.
Yeah, the planner versus the doer, it's like you have these
two conflicting cells inside you.
The planner is the farsighted one, right?
Your inner Mr. Spock, thinking about long term goals, saving
money, eating healthy, getting enough sleep.
Exactly. The planner wants what's best
(14:24):
for you in the future. But then there's the doer.
The doer, my inner Homer Simpson, lives entirely in the present
moment, driven by immediate temptations, emotions, the
automatic system. That's the one The planner sets
the alarm for 6:00 AM to go for a run.
The doer hits the snooze button five times because the bed feels
(14:44):
so good right now. I know that battle well.
My planner lays out workout clothes the night before, full
of good intentions. My doer, come morning, argues
very persuasively for just five more minutes under the duvet.
Classic planner doer conflict, and a key reason for this
struggle is what behavioral economist George Loewenstein calls the hot-cold empathy gap. Explain that one.
(15:07):
When we're in a cold state, calm, rational, maybe full after
a meal, we make plans and resolutions.
We think, sure, I can resist those cookies at the party
tomorrow. But then tomorrow comes, You're
at the party, you're hungry, everyone else is eating cookies.
You enter a hot state. Exactly.
And in that hot state of temptation or craving, our
desires and behavior change dramatically.
(15:28):
We underestimate when we're coldjust how powerful those hot
states will be. That's the empathy gap.
We can't accurately predict or empathize with our future hot
self. Which is why New Year's
resolutions often fail by February.
Our cold state planner makes promises the hot state doer
can't keep. Precisely.
And this is where commitment strategies come in.
(15:50):
The ultimate example is from Greek mythology, Ulysses and the
Sirens. Right.
Ulysses knew he wouldn't be able to resist the Sirens' beautiful,
deadly song when he sailed past their island.
He'd be in a hot state. So in his cold state, before
they got close, he made a plan. He had his crew tie him tightly
to the ship's mast and ordered them to plug their own ears with
wax so they wouldn't hear the song.
(16:10):
He told them no matter how much he begged or commanded them
later when he was hot, they were not to untie him. He used a precommitment to constrain his future tempted
self. His planner outsmarted his
future doer. It's the perfect illustration of
a commitment device, something you set up in advance to prevent
your future self from succumbing to temptation. And the book has some great modern examples.
(16:31):
Yeah, like the clocky alarm clock.
Oh, clocky. Yes.
Yeah, it's brilliant. It's an alarm clock on wheels.
If you hit snooze, it literally rolls off your night stand runs
away and hides somewhere in yourroom, beeping annoyingly.
So your doer who just wants to stay in bed is forced to
physically get out of bed and hunt the thing down to turn it
off. Exactly.
The planner sets the snooze limit, maybe just one snooze
(16:53):
allowed. The doer then has no choice but
to engage physically, making it much harder to just drift back
to sleep. It's Ulysses' mast for the
chronically sleepy. Beyond gadgets, they talk about
using financial bets as commitment devices.
Thaler's story about his colleague David and his PhD
thesis is fantastic. It really is.
David was procrastinating badly,so Thaler got him to write a
(17:16):
series of $100 checks payable to Thaler himself.
And the deal was if David missed a deadline for submitting a
thesis chapter, Thaler would cash one of the checks.
But here's the genius part. Thaler promised to use the money
to throw a party that David wouldn't be invited to.
Ouch. That adds social embarrassment
and exclusion to the financial penalty.
(17:37):
Exactly. The authors point out that the
specific immediate pain of having Thaler cash the check and
picturing that party was a much stronger motivator for David
than the abstract long term benefit of finishing his PhD.
And it worked, right? He finished in record time.
Four months never missed a deadline, although apparently he
often printed chapters out just minutes before they were due.
(18:00):
It shows how personalized salient consequences can really
drive behavior. It also makes you think about
those seemingly irrational Christmas savings clubs.
Right, from a purely rational econ perspective, they make no
sense. You deposit money weekly, earn
almost 0 interest, and you can'twithdraw it until just before
Christmas. Why would anyone do that?
But for us humans. For humans, the cost of lost
(18:22):
interest and inconvenience is tiny compared to the benefit the
assurance of having enough moneyto buy gifts at Christmas.
The key feature, the inability to withdraw the money easily,
isn't a bug, it's a feature. It's the planner stopping the
doer from dipping into the Christmas fund for a new gadget
in July. Exactly.
It's a voluntary commitment device, an adult Piggy Bank.
(18:46):
This also relates to mental accounting, doesn't it?
This idea that we treat money differently depending on where
we mentally file it. Yes, it's this implicit system
we use. We mentally label money for
different purposes. Rent money, vacation money,
grocery money, even though rationally a dollar is a dollar.
The Dustin Hoffman anecdote is perfect.
As a young, broke actor, he had Mason jars on his counter
(19:08):
labeled RENT, Food, Entertainment, some with cash in
them. But he asked a friend for a
loan. Why?
Because his food jar was empty. Even though he had cash in the
rent jar. Right.
Rationally, money is fungible, interchangeable.
But in his mental accounts, the rent money couldn't be touched
for food. It's like budget silos in a
company. It might seem irrational, but it
(19:30):
helps us control spending by creating boundaries.
It does raise a slightly worrying point, though, about
how markets can exploit these frailties.
The Cinnabon example ah. Yes, the Airport Challenge, a
vendor selling healthy salads and sandwiches set up right
across the walkway from a Cinnabon, which, as they do, was
pumping that irresistible cinnamon smell into the air.
(19:52):
And the Cinnabon always has the longer line right?
Even though people know the salad is better for them.
It perfectly illustrates the principle more money can be made
by catering to human frailties than by helping people to avoid
them. It's easier to sell temptation
than discipline. Which is why bars often do
Better Business than Alcoholics Anonymous meetings.
Sadly, yes, and it's why well designed nudges aimed at helping
(20:15):
our planner win those battles are so important in a world full
of temptations designed to appeal to our doer.
OK, so we've looked inward, at our own minds, our biases, our
self-control issues. But we don't make decisions in a
vacuum, right? We're intensely social
creatures. Part 3 of the book dives into
the massive power of social influences.
Absolutely. Unlike those isolated, purely
(20:38):
rational econs, we humans are profoundly affected by what
others around us are doing and thinking, sometimes even when it
makes no logical sense. The authors identify 2 main
types of social influence. What are they?
First, there's information. We look to others for clues
about what's right, what's smart, or what's effective.
If you see a crowd running in One Direction, you might assume
(20:59):
there's danger and run too, evenif you don't see it yourself.
Or like the recycling example, if everyone on your street puts
out the recycling bin, you figure OK, this must be the
thing to do. Exactly.
The second type is peer pressure.
We want to fit in, to be liked, to avoid disapproval, so we
conform to what we perceive the group norm to be.
Even if, as the author's note, we sometimes have this mistaken
(21:21):
belief that they're paying some attention to what you were
doing, we think people notice our choices more than they
actually do. So true.
Think about mask wearing during the pandemic.
In some communities, peer pressure strongly encouraged it.
In others, it discouraged it. Social norms were hugely
powerful. And the examples they give are
just staggering. Teenage pregnancy rates
(21:44):
influenced by peers, college roommates affecting grades, even
judges being swayed by their colleagues.
It's everywhere. It really is, and this isn't
new. Classic psychology experiments
showed this decades ago. Solomon Asch's line experiment
from the 50s is mind blowing. Remind us of that one.
Participants had to judge which of three lines matched a
(22:04):
standard line. A really easy visual task.
Right, but the catch was everyone else in the room was
secretly working for the experimenter.
They were Confederates, and on certain trials all the
Confederates would confidently give the same wrong answer
before the real participant got their turn.
And what happened to the actual participant?
People buckled under the pressure.
They gave the obviously wrong answer more than 1/3 of the
(22:26):
time. Nearly 3/4 of participants
conformed at least once, literally going against the
evidence of their own eyes because the group disagreed.
Wow, it's like I say, people could be nudged into calling a
dog a cat if everyone else did it first.
And this isn't just an old American finding.
Right. It's been replicated across 17
countries. Conformity is a deeply human
(22:48):
trait, even when the truth is staring us in the face.
Another classic study is Muzafer Sherif's, using the autokinetic effect. Yes, in a completely dark room a
tiny stationary pinpoint of light will appear to move
slightly because your eyes have no fixed reference point.
Sure. Sherif had people estimate how much the light moved.
(23:09):
And when they did it in groups. Their individual judgments
started to converge towards a group average, but even more
interestingly, if a confederate stated their judgment first,
confidently and firmly, it had ahuge influence pulling the whole
groups estimate towards their number.
So the lesson for choice architects?
Confidence matters. A consistent, unwavering voice
can really shift group perception and behavior.
(23:31):
Definitely. It's why in meetings, it's often
smart for senior people to hear from junior colleagues before
stating their own strong opinionto avoid inadvertently anchoring
everyone else. This power of social influence
can also lead to unpredictable outcomes like information
cascades or fads, Right? The music download experiment is
a great illustration. Yeah, that was clever.
(23:53):
Researchers set up different online worlds where people could
listen to and download unknown songs by unknown bands.
In some worlds, people saw how many times each song had already
been downloaded by others. In the control world, they
didn't. And seeing the download counts
made a difference. A huge difference.
People were far more likely to download songs that had been
previously downloaded. What got popular early on, even
(24:15):
just by chance, tended to becomemuch more popular.
The same song could be a huge hit in one world and a total
flop in another just based on those initial social signals.
Success was really unpredictable.
It makes you think about best seller lists or viral videos.
Early popularity can snowball. Exactly.
And sometimes these Cascades canbe based on, well, nothing real
(24:36):
at all. The Seattle windshield pitting
epidemic of 1954 is just bizarre.
Tell me about it. OK, so reports started surfacing
in a small town in Washington state about tiny pits appearing
on car windshields. Then the report spread like
wildfire to Seattle. People panicked.
Theories flew around. Maybe it was fallout from H bomb
tests. Maybe cosmic rays, Maybe some
(24:57):
weird atmospheric thing. The mayor even pleaded for help
from the governor and President Eisenhower.
But what was actually causing the pitting?
Nothing. Experts eventually concluded the
damage was just normal, everydaywear and tear that people had
never noticed before. But once the idea of windshield
pitting was out there, amplifiedby media reports and social
chatter, people started looking for pits, finding them and
(25:20):
attributing them to this mysterious epidemic, a
collective illusion fueled entirely by social contagion.
Wow. That shows how easily
misinformation or just heightened awareness can spread
through social channels. And it can have serious
consequences, like the example of mass psychogenic illness
following HPV vaccinations in Colombia, which caused a
dramatic drop in vaccine uptake due to unfounded fears spread
(25:43):
socially. Advertisers and politicians
definitely try to leverage this,saying most people prefer our
product or most people are turning to our candidate.
All the time. It's a direct appeal to social
proof. Another angle the book explores
is identity, how our sense of who we are, what groups we
belong to, shapes our behavior. Yeah, Choice architects can tap
(26:05):
into this. Think about nationality,
political affiliation, being a fan of a certain sports team.
We often act in ways consistent with what people like us do.
The Don't Mess with Texas campaign is a fantastic example.
Absolutely brilliant. They wanted to reduce littering,
particularly among young men aged 1834 who were the worst
(26:25):
offenders and didn't respond to typical do your civic duty mess.
So. Instead of guilt trips.
They enlisted Texas icons, Dallas Cowboys football players,
country music stars like Willie Nelson and framed anti littering
as a matter of Texas pride and toughness. The message wasn't please don't litter. It was: real Texans don't mess with Texas.
And it worked incredibly well, right?
(26:46):
Became a cultural phenomenon. Massively successful. Littering dropped significantly. It aligned the desired behavior, not littering, with a powerful existing identity.
Being a tough Texan. What about situations where
everyone is conforming to a normthey secretly dislike?
That's pluralistic ignorance, right?
Exactly. Everyone thinks everyone else
(27:08):
supports the practice so they goalong with it, even if privately
they wish things were different.Nobody wants to be the first one
to break ranks. It's like the Emperor's new
clothes. Everyone sees the Emperor is
naked, but nobody says anything because they assume everyone
else must see the clothes. Until the little kid speaks up.
That simple, honest declaration acts as a nudge, a permission slip that breaks the illusion of consensus.
(27:30):
It shows everyone else that theyaren't alone in their thinking,
and suddenly the cascade reverses.
We see this in modern social movements too, right?
Like hashtag me too or hashtag Black Lives Matter.
For sure visible actions, socialmedia campaigns.
They allow people to see that others share their views or
experiences, revealing long silenced anger and outrage,
giving the green light for widespread change.
(27:51):
It overcomes that pluralistic ignorance.
Even something as seemingly mundane as tax compliance can be
nudged socially. Yeah, the UK experiment was
simple but effective, sending letters to late taxpayers saying
9 out of 10 people in the UK pay their tax on time. You are currently in the very small minority. That significantly
(28:12):
boosted payment rates. Especially when they localize
it. 9 out of 10 people in Manchester pay on time.
Yep, making the norm more specific and relevant increase
the effect. A cheap, easy social nudge with
a big payoff for the government.The authors also expressed
surprise at how quickly attitudes towards same sex
marriage shifted in places like the US.
Yeah, they call it a rapid cascade.
(28:32):
Think about it. In 2008, Barack Obama opposed
it. By 2015, it was legal nationwide
with relatively little backlash.How did that happen so fast
according to them? Largely through disclosure, more
LGBTQ plus individuals coming out combined with those
informational and reputational cascades, suddenly expressing
support for same sex marriage went from being potentially
(28:53):
punished socially to being accepted or even rewarded.
The perception of the norm shifted rapidly, creating a kind
of self fulfilling prophecy. It really underscores the point.
Social influences are incrediblypowerful.
They could be harnessed for goodor ill, but choice architects
ignore them at their peril. There's huge potential there for
positive change, definitely. OK, we've established we're
(29:15):
fallible humans, easily swayed by biases and social cues.
So if you're a choice architect,someone designing a system, a
form, a cafeteria layout, how doyou actually do it?
Well, what's the toolkit? The fundamental principle, the
sort of golden rule they propose, is straightforward.
Offer nudges that are most likely to help and least likely
(29:37):
to inflict harm. Aim to make people better off, as judged by themselves. Makes sense, but how do you know
when people actually need a nudge?
Are there specific situations where we're more likely to mess
up? Yes, they identify several types
of choices that are particularlyfraught with potential error.
Nudges are most needed when decisions are infrequent or
(29:58):
unfamiliar. Like choosing a mortgage or
retirement plan, you don't do itoften, so you don't build up
expertise. Unlike, say, navigating your
usual route to work. You don't need a nudge for that.
Exactly. Also when decisions are
difficult or complex with too many options or hard to
understand trade-offs, or when there's a lack of prompt feedback.
You invest in a stock but you won't know for years if it was a
(30:20):
good decision. Compare that to touching a hot
stove. The feedback is immediate and
painful. And crucially, when decisions
require scarce attention, we're all busy, easily distracted.
We forget appointments, miss bill payments, don't read the
fine print. That's where technology can be a
huge help, right Smartphones giving well timed prompts,
(30:41):
calendar reminders for appointments, alerts for bill
due dates. Although, as the authors point
out, it's funny how credit card companies who make money from
late fees often don't provide those helpful reminders unless
you specifically ask for them. Funny how that works indeed, but
simple prompts can be powerful. The Get out the Vote study is a
great example. Just asking potential voters 3
(31:01):
quick questions about their plan.
What time will you vote? Where will you be coming from?
What will you be doing beforehand?
Significantly increased turnout.Why did that work?
It forced them to engage their reflective system for a moment,
to mentally rehearse the act of voting.
It moved it from an abstract intention I should vote to a
concrete plan, making it more likely to happen.
It nudged attention and overcameinertia.
(31:24):
We also need nudges for investment goods, things with
immediate costs but delayed benefits, like exercising or
flossing. Right.
Our inner doer hates the immediate cost, getting sweaty,
the hassle of flossing, and discounts the future benefit,
better health. So we tend to err on the side of
doing too little. And finally the knowing what you
like problem or the translation issue.
(31:46):
Yeah, choosing between chocolateand vanilla ice cream?
Easy. You know what you like.
But choosing a mutual fund described as capital
appreciation focused with dynamic dividend reinvestment,
or picking a health plan based on complex co-pay structures for
diseases you might never get? It's hard to translate those
abstract choices into the actualexperience you'll have.
How will that fund perform? Will that plan cover my weird
(32:08):
allergy? In those cases, a helpful nudge
or a good default is really welcome.
And this highlights why markets don't always solve these
problems. Remember the snake oil salesman?
They thrived by exploiting humanhope and ignorance.
And similar scams persist onlinetoday.
As the authors say no one could make any money telling people
(32:29):
not to buy snake oil. Often companies profit more from
our weaknesses than from helpingus.
Which brings us back to the needfor thoughtful choice
architecture. Given that we will make
mistakes, a core design principle should be expect
error. Like designing a system that
anticipates user mistakes and isforgiving.
Exactly. The old Paris Metro ticket
(32:49):
example is perfect. It was physically impossible to
insert the ticket the wrong way.The design anticipated and
prevented the error entirely. Brilliant.
Contrast that with frustrating designs like Norman doors, door
handles that look like you should pull but you actually
have to push, or those terrible TV remotes where the button to
change the input and lose the picture is huge while the volume
(33:10):
buttons are tiny. Yes, those designs ignore basic
human factors, but good design can anticipate error.
Modern cars are full of helpful nudges, seat belt warnings, fuel
low alerts, lights left on buzzers, backup sensors.
These things are saving lives bypreventing common predictable
errors. Think about ATM's that spit your
(33:32):
card back out before giving you cash so you can't forget it.
Or photocopiers where you have to retrieve your original before the copies come out, a forcing function.
Even in high stakes areas like medicine, anesthesia errors were
often due to connecting the wrong gas hose.
The solution? Redesign the connectors so they
physically cannot be plugged into the wrong outlet.
Error proofing. It seems so obvious, but it
(33:54):
requires anticipating the ways humans can mess up, like
medication adherence, a huge problem.
Simply designing medication regimens with fewer doses per
day makes it much easier for people to comply, expect error,
and design to minimize it. What about when choices are just
overwhelmingly complex, like choosing an apartment from
thousands of listings, or a mutual fund from hundreds of
(34:14):
options? Right.
For simple choices with few options, we might weigh all the pros and cons, a compensatory strategy, but with massive choice sets that's impossible. So we simplify.
We use strategies like elimination by aspects, right?
Like saying OK, apartment hunting must be under $2500 a month and
have a commute under 30 minutes and allow pets.
(34:34):
You set cut offs and eliminate anything that fails on one key
criterion. Exactly.
Even if a fabulous apartment exists that misses one cut off
by a tiny bit. We simplify.
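As a rough sketch of the elimination-by-aspects strategy the hosts just described, here is the apartment example in a few lines of Python. The listings, field names, and cutoffs are invented for illustration; the point is that each aspect is a hard filter, so an otherwise great option that misses one cutoff by a tiny bit is dropped rather than traded off:

```python
# Hypothetical listings; rent in dollars, commute in minutes.
listings = [
    {"name": "A", "rent": 2400, "commute": 25, "pets": True},
    {"name": "B", "rent": 2300, "commute": 45, "pets": True},   # fails the commute cutoff
    {"name": "C", "rent": 2520, "commute": 20, "pets": True},   # misses the rent cutoff by $20
    {"name": "D", "rent": 2450, "commute": 28, "pets": False},  # fails the pets cutoff
]

# Aspects are checked one at a time; failing any single one eliminates the option.
aspects = [
    lambda apt: apt["rent"] <= 2500,
    lambda apt: apt["commute"] <= 30,
    lambda apt: apt["pets"],
]

survivors = listings
for passes in aspects:
    survivors = [apt for apt in survivors if passes(apt)]

print([apt["name"] for apt in survivors])  # ['A'] -- C is gone despite being close on rent
```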
And the more complex the choice set, the more influence the
choice architect has. How so?
Because how they structure thosechoices matters immensely.
Think about paint colors again. If a store just listed thousands
(34:56):
of colors alphabetically, it would be useless.
Names like roasted sesame seed tell you nothing.
But they organize them on paint wheels, grouping similar colors
together. That structure makes the choice
manageable. Or think about online retailers
like Amazon or Netflix. Their success is partly due to
immensely helpful choice architecture.
They provide filters, sorting options and crucially,
(35:16):
collaborative filtering. That's the people who bought this also bought, or because you watched, recommendations?
Yep, they're trying to curate the vast number of options for
you based on the behavior of people like you.
It helps cut through the noise and overcome information
overload. This idea of curation seems
really important, especially now.
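Here is a minimal sketch of the "people who bought this also bought" idea mentioned a moment ago, in Python. The purchase data and titles are invented, and real recommenders at Amazon or Netflix are far more sophisticated; the sketch only shows the core move of the nudge, using other people's choices to curate an overwhelming option set:

```python
from collections import Counter

# Hypothetical purchase histories (who bought which titles).
purchases = {
    "alice": {"nudge", "thinking_fast_and_slow", "misbehaving"},
    "bob":   {"nudge", "misbehaving"},
    "carol": {"nudge", "freakonomics"},
    "dave":  {"thinking_fast_and_slow", "freakonomics"},
}

def also_bought(item, top_n=2):
    """Rank other items by how often they appear in baskets containing `item`."""
    counts = Counter()
    for basket in purchases.values():
        if item in basket:
            counts.update(basket - {item})
    return counts.most_common(top_n)

print(also_bought("nudge"))  # e.g. [('misbehaving', 2), ('freakonomics', 1)]
```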
(35:36):
How do physical bookstores even survive against Amazon, which
sells practically every book ever published?
They can't compete on having everything.
Their value proposition has to be different.
So the answer is good curators. Exactly.
Successful independent bookstores or specialized
restaurants like those Hawker stalls in Singapore that perfect
just one dish, and win Michelin stars for it.
(35:58):
They thrive by selecting. They offer a well chosen,
meaningful experience. They simplify the choice for you
by doing some of the choosing for you.
Curation adds value in a world of infinite options.
Finally, in this toolkit section, there's the delightful
fun theory. It's not just about making good
choices easy, it's about making them enjoyable.
Right. This complements the Make It
(36:20):
Easy principle. Think back to Tom Sawyer
tricking his friends into paying him for the privilege of whitewashing the fence. Ha ha, yes, he framed a
punishment as a fun, exclusive activity.
Brilliant psychological manipulation, really.
Or look at those Volkswagen Fun Theory videos that went viral a
while back. The piano stairs.
Oh yeah, they turned a subway staircase in Stockholm into
(36:42):
giant working piano keys and suddenly 66% more people chose
the musical stairs over the escalator right next to it.
Even if the long term practicality is debatable, it
shows the principle. Make the healthy or desired
behavior fun and people are morelikely to do it.
What about the speeding lottery?Another clever one.
Cameras catch speeders who get fined as usual, but the money
(37:04):
from those fines goes into a lottery pool, and drivers who
are obeying the speed limit are automatically entered to win.
So good behavior gets a chance at a reward funded by the bad
behavior. Carrot and stick.
Exactly. Or China putting lottery numbers
on restaurant receipts to encourage customers to ask for
receipts, which makes it harder for businesses to evade sales
tax. Or programs in England rewarding
(37:27):
people with points redeemable for discounts based on how much
they recycle, leading to a 35% increase.
The takeaway seems simple. As the authors put it, make it fun.
And if you don't know what fun is, well, maybe you need more of
it. OK, we've covered the toolkit
for good choice architecture. Expecting error, curating
choices, making things fun. But oh, there's a dark side,
(37:49):
isn't there? Let's talk about sludge.
Yes, sludge. This is a term the authors coined, and it's become quite widely used.
It's basically the evil twin of a helpful nudge.
So what is it exactly? They define sludge as any aspect
of choice architecture consisting of friction that
makes it harder for people to obtain an outcome that will make
them better off by their own lights.
It's friction used for bad essentially.
(38:12):
Give me some examples, I feel like I encounter sludge daily.
Oh, you probably do. Think of a ridiculously long and
complicated financial aid form for college.
Or needing to go through four rounds of interviews just to get
a student visa, or websites for booking COVID tests that seem
designed to make you give up, orendless paperwork to get
reimbursed for a simple expense that's all sludge.
(38:34):
And it can be intentional, right?
Like companies deliberately making it hard to cancel
something. Definitely.
That's intentional sludge designed to retain customers or
discourage people from claiming benefits.
But sometimes sludge is unintentional, just the
byproduct of bureaucratic processes or maybe overly
cautious program integrity efforts that add tons of
(38:54):
hurdles. There's also the term dark
patterns for sneaky online designs.
Right, that specifically refers to user interfaces designed to
trick or manipulate you, often by making it easy to sign up for
something, but incredibly difficult to opt out or avoid
charges. Pure sludge.
The classic example is the unsubscribe trap.
Easy to sign up, nearly impossible to leave.
(39:14):
Exactly. Actually, that asymmetry is key.
Auto renewals themselves aren't necessarily sludge if canceling
is straightforward. The sludge comes when they make
canceling a nightmare. One of the authors had a
personal experience with this, trying to cancel a London
newspaper. Yes, he was told he had to make
a long distance phone call, not just e-mail or click on line,
(39:35):
specifically so the agent could make him aware of the vast scope
of its coverage before allowing him to cancel.
That's blatant, an intentional retention policy, as they call
it. And gyms and cable companies in
the US are infamous for this, requiring you to show up in
person or send a certified letter.
Some gyms, even after COVID shutdowns, were still demanding
in person cancellations. Talk about sludge with an extra
(39:59):
helping of germs. Thankfully, some places are
cracking down. California and New York now
require online subscriptions to be cancellable online.
Seems obvious. It should be the standard.
The authors advocate for systemswhere quitting is always easy
and free. What about mail in rebates?
Are those sludge? Kind of.
They are definitely a hurdle. You have to save the receipt,
cut out the specific barcode from the packaging, which is
(40:22):
often hard to find. Fill out a form, find a stamp,
mail it in it's considerable sludge.
Why do companies do it? It's mainly price
discrimination. Only the most price sensitive,
organized, or frankly people with the most free time will
actually bother to jump through all those hoops to get the lower
effective price. And it plays on our optimism
(40:42):
bias, right? We think we'll definitely mail
it in. Absolutely.
Research shows people are really confident they'll redeem the rebate. Like 80% sure, but the actual redemption rates are way, way lower.
Marketers know this and exploit that gap between intention and
action. If they made it easy, like just
scanning AQR code, dimpen rates would soar.
They would. A study proved that.
(41:03):
But businesses are unlikely to adopt easy rebates because it
undermines the whole point, which is to offer a low headline
price while most people end up paying the full price due to the
sludge. That brings us to shrouded
attributes, hidden costs. Yeah, this is when the price you
see understates the true cost because extra fees or ongoing
costs are hidden or hard to find.
(41:25):
Like those mandatory resort feesat hotels that aren't included
in the advertised room rate, or printer ink that costs a fortune
compared to the cheap printer. Exactly.
Or bank accounts advertised as free checking but then hit you
with fees if your balance dips below a certain level, or for
using another bank's ATM. These are shrouded costs and
they make comparison shopping really difficult.
(41:47):
And competition doesn't always fix it, because, again, more
money can be made by catering tohuman frailties than by helping
people avoid them. We might not notice or
anticipate these costs until it's too late.
Sometimes sludge is even used within customer service.
Think about getting your credit card annual fee waived only if
you call and threaten to cancel.They offer a special deal just
(42:08):
to the disloyal customers, whilethe loyal, less attentive ones
just pay the fee. OK, let's look at some bigger
examples, business travel expenses.
Often excessively sludgy, the authors say.
Complicated forms, needing original receipts for tiny
amounts, long delays for reimbursement.
It's a pain. Contrast that with Netflix's
(42:28):
famous no rules policy under Reed Hastings.
Right. Their expense policy was
basically spend company money as if it were your own, act in Netflix's best interest. That's it.
It drastically reduced sludge, empowered employees, and
probably improved retention. Who wants to fight over $10 taxi
fare reimbursement? College admissions is another
huge area of sludge, particularly affecting low
(42:51):
income students. Definitely, the complexity of
applications and especially financial aid forms like the
FAFSA creates massive barriers. Talented low income students who
would get into top universities and receive full aid often don't
even apply because the process is so daunting.
The University of Michigan experiment tackled this head on. Yeah, they proactively
identified high achieving, low income students based on things
(43:13):
like high school meal eligibility and sent them a
simplified application and a letter guaranteeing 4 years of
free tuition and fees if admitted.
No complex aid forms needed upfront.
They aggressively removed sludgeand the result?
Application rates from those students more than doubled compared to a control group, 68 percent versus 26 percent, and enrollment significantly increased too. Removing the sludge made a huge
(43:36):
difference. And governments create sludge
too, right? Not just businesses.
Oh. Absolutely.
The US government alone imposes an estimated 11 billion hours of
paperwork burden on citizens and businesses annually. Filling out forms, complying with regulations; that time is a real
cost. A hidden tax, a wall blocking
(43:56):
access to benefits, permits healthcare.
Are there any government sludge Busters?
Things that reduce friction. Yes, programs like Global Entry
or TSA pre-check for airport security.
They require an upfront application.
Some sludge there. But once you're in, they save
travelers hundreds of millions of hours per year by drastically
cutting down waiting times. That's a win.
But sometimes attempts to help can backfire and create sludge
(44:20):
like the EU cookie notices. Yes, the intention was good.
Require websites to get active consent before placing tracking
cookies. But the implementation often
results in mountains of sludge. Those annoying banners with
confusing options, tiny fonts, endless questions.
Yeah, you just end up clicking accept all to make it go away,
right? Exactly.
(44:40):
Users either give up and consentwithout understanding or just
leave the site. It rarely leads to truly
informed choice, and websites often use nudges within the
notice to push you towards accepting.
And then there's tax sludge, probably the biggest one for
most people. It's notoriously complex.
Economist Austin Goolsbee proposed a simple solution years
ago, similar to what countries like Sweden do.
(45:02):
The IRS already has most income information for many taxpayers.
Why not send them a pre filled tax return?
So most people could just check it, sign or click approve online
and be done. Exactly.
It would eliminate vast amounts of sludge for millions of
Americans. So why hasn't it happened?
Well, the tax preparation industry, think TurboTax.
H&R Block lobbies heavily against it.
(45:24):
Their business model depends on the complexity.
They offer free filing, but often that comes with its own
sludge. Upsells difficult interfaces for
the truly free version. And the complexity means people
miss out on benefits they're entitled to, like the Earned
income tax Credit, just because they don't navigate the forms
correctly. Precisely.
Reducing sludge isn't just aboutconvenience, it's about access
(45:47):
and fairness. The takeaway is clear: sludge imposes real costs, and actively looking for ways to reduce it, especially using technology, can make a huge positive difference.
Right, let's shift gears now from the problems of sludge to
the successes of nudges. We've talked theory.
Now let's see nudges in action, tackling major real world
(46:08):
challenges. Retirement savings is a huge
one. Absolutely.
This is probably one of the biggest success stories for
nudging. The challenge with plans like 401(k)s or similar defined contribution plans is that it's
up to the individual to decide to join, how much to save, and
how to invest it. And we humans being humans, we
procrastinate. We find it confusing.
(46:29):
We save too little. We invest too conservatively.
Exactly. The traditional approach was opt
in. You had to actively fill out
forms to join the plan. Participation rates were often
disappointingly low, especially among younger or lower income
employees. Then came the game changing
nudge, automatic enrollment. Simple, but revolutionary
(46:49):
companies started making joining the plan the default.
You are automatically enrolled unless you actively took steps
to opt out. And the impact?
Massive, huge effects on outcomes.
As the research showed, participation rates shot up
dramatically, often from below 50% to over 90%.
Wow. But there was a potential
downside to just defaulting people in, right?
(47:10):
Yes, the initial implementationsoften had flaws.
The default contribution rate was typically set very low,
maybe 3% of salary, and the default investment was often
something super conservative like a money market fund, which
isn't ideal for long term growth.
So people were in the plan, which was good, but they might
still be saving too little and investing poorly if they just
stuck with those defaults. Correct.
(47:32):
Which led to the next brilliant behavioral innovation, Save More
Tomorrow, developed by Thaler and Shlomo Benartzi.
This one is genius. Explain how it works.
The core idea is to overcome inertia and loss aversion by
linking savings increases to future pay raises.
Participants agree in advance toautomatically increase their
contribution percentage each time they get a raise.
(47:54):
So you commit today when it doesn't feel painful to save
more later when you get more income.
You never see your take home payactually go down.
Precisely, it leverages present bias, committing future you is
easier, and loss aversion you don't feel the pinch in your
current paycheck. Remember Saint Augustine: God, give me chastity, but not yet. And did it work?
(48:14):
Phenomenally, in the first company they tested it, 78% of
employees who had previously been reluctant to save more
agreed to join the plan. And over about 3 1/2 years,
their average savings rates nearly quadrupled, going from
3.5% to 13.6%. That's incredible.
Way better than just telling people to save more.
Far better. And this concept, often now
(48:34):
called automatic escalation or auto escalation, has become
really widespread. About 70% of US firms that use
automatic enrollment now also include an automatic escalation
feature, usually as an opt out default.
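For readers who like to see the mechanics, here is a tiny Python sketch of the auto-escalation idea just described. The salary, raise, and step sizes are illustrative assumptions, not figures from Thaler and Benartzi's study; the point is that the contribution rate only steps up when a raise arrives:

```python
salary = 50_000.0
rate = 0.035                 # starting contribution rate (3.5%)
step, cap = 0.03, 0.136      # bump per raise, capped near the 13.6% mentioned above

for year in range(1, 5):
    salary *= 1.035          # assumed annual raise of 3.5%, at least as big as the step's effect
    rate = min(rate + step, cap)
    take_home = salary * (1 - rate)
    print(f"Year {year}: saving {rate:.1%}, take-home ${take_home:,.0f}")
```

Because each increase is timed to a raise, the doer never sees a smaller paycheck, which is exactly why the planner can get away with committing to it in advance.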
And they fixed the default investment problem too.
Yes, regulators approved things like target date funds or
balanced funds as qualified default investment alternatives.
(48:58):
So now if you're auto enrolled and don't choose your own
investments, you're put into something much more suitable for
long term saving than just cash. And this isn't a US thing,
right? Auto enrollment has worked
elsewhere. Definitely studies in Denmark,
the UK with its Nest system, they all confirm it leads to
significant increases in saving,mostly new saving, not just
(49:18):
shifted money. And opt out rates are
consistently low, typically under 10%.
It's a global success story. Amazing.
OK, another area where nudges have been discussed intensely is
organ donation. There's a huge shortage.
A critical shortage? Yes.
Each deceased donor can potentially save up to 8 lives
through organ donation and many more through tissue donation.
The debate often boils down to opt in versus presumed consent
(49:42):
systems. Right.
Opt in is common in places like the US and UK.
You have to actively sign up to be a donor, like on your
driver's license. Presumed consent or opt out is
common in many European countries.
The idea is everyone is considered a donor unless
they've actively registered their objection.
And there was that famous graph showing much higher donation
(50:03):
rates in presumed consent countries, right?
Seemed like clear evidence for the power of defaults.
It did seem that way, and it certainly confirms that defaults
have a huge impact on the elicitation of preferences.
People tend to stick with whatever the default is.
That graph heavily influenced policy debates.
But the authors argue it's more complicated than that graph
suggests. They're very much so.
(50:24):
They dig deeper and reveal A crucial nuance.
Almost no country actually practices hard presumed consent
where they would take organs regardless of the family's
wishes if the person hadn't opted out.
So even in presumed consent countries, they still ask the
family. In almost all cases, yes, they
operate under soft presumed consent, where family agreement
(50:45):
is still sought at the time of death.
And, the authors argue, this actually imposes cruel and
unusual punishment on families, asking grieving relatives to
make this incredibly difficult decision, often unexpectedly.
So the default mainly affects whether someone is registered as
willing, but the family often still has the final say.
Correct, and think about the chances for a yes.
(51:07):
In an opt in system. If the person didn't register,
the family can still consent. That's two bites of the apple.
In a soft presumed consent system where the default is yes
but the family is asked, there'seffectively only one bite.
The family's decision. If they say no, that overrides
the default. So the term presumed consent is
actually kind of misleading, as they put it.
The big difference in donation rate shown on that graph is more
(51:30):
about the default influencing registration rates, not about
organs being taken against family wishes.
Largely, yes. The default powerfully nudges
registration intentions, but thefinal hurdle is often still
family consent and practice. So if presumed consent isn't the
magic bullet it seemed, what areeffective behavioral nudges for
increasing organ donation? Prompted choice seems to be key.
(51:53):
Actively asking people at relevant moments if they wish to
be a donor, like when they're getting or renewing their
driver's license at the DMV, making it an easy required
choice. Yes or no?
And the Belgium example sounds amazing.
It really was. A popular TV show in 2018 called
Make Belgium Great Again launched a campaign.
They combined powerful, emotional stories with very
(52:14):
clear, easy calls to action. They got hundreds of
municipalities to open their registration offices on a
special Sunday just for organ donor signups.
Made it incredibly convenient. Super convenient.
They also promoted online registration, had a federal
truck visiting schools, used local elections as sign up
opportunities. It was a multi-pronged, high-visibility effort. And the result?
(52:35):
Over 26,000 new registrations. That was compared to maybe 7 or 8 thousand in a typical year. It effectively tripled the number of registered donors that had been added since 2009, in just that short campaign period. It shows that active, well
designed, easy to access nudges can work wonders.
Fascinating. OK, let's tackle the biggest
challenge of all, climate change.
(52:57):
The authors call this the mother of all free rider problems.
Yeah, it's arguably the toughest collective action problem we
face. Why?
Because the benefits of reducing emissions are global, but the
costs are local or national. So if my country cuts emissions
it helps everyone, including countries that don't cut theirs.
This creates a huge temptation to just free ride on others'
(53:20):
efforts. Exactly.
It mirrors the public goods game.
In experiments, people are often conditional cooperators. They're willing to contribute, or reduce emissions, if they see
others doing the same, but if they see others free riding,
their own willingness to contribute plummets.
Though interestingly, just allowing communication between
players can significantly boost cooperation in those games.
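Here's a tiny, illustrative simulation of that conditional-cooperator dynamic. The rule "match last round's group average, while free riders give nothing" is an assumption for the sketch, not the exact design of any particular experiment.

```python
# Illustrative sketch (not from the book): conditional cooperators in a
# repeated public goods game. Cooperators contribute roughly what the
# group averaged last round; free riders contribute nothing, dragging
# the average (and cooperation) down over time.

def simulate(n_cooperators, n_free_riders, endowment=10, rounds=10):
    players = n_cooperators + n_free_riders
    avg_last = endowment  # assume everyone starts optimistic
    history = []
    for _ in range(rounds):
        contributions = [avg_last] * n_cooperators + [0] * n_free_riders
        avg_last = sum(contributions) / players
        history.append(round(avg_last, 2))
    return history

print(simulate(n_cooperators=4, n_free_riders=0))  # stays at 10
print(simulate(n_cooperators=3, n_free_riders=1))  # decays toward 0
```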
(53:40):
So what kind of policy approaches do the authors discuss, viewed through a behavioral lens?
They touch on things like carbon taxes from a behavioral perspective. They suggest maybe a "Green More Tomorrow" approach: start with a relatively low carbon tax, but schedule gradual, automatic increases over time.
Like Save More Tomorrow, but for carbon pricing. Leverage present
(54:02):
bias. Future costs seem less painful.
Exactly. Make it easier to accept
politically now by pushing the bigger increases into the
future. Sweden actually did something
like this, quintupling its carbon tax since 1991.
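As a rough sketch of what a "start low, escalate automatically" schedule could look like, here are a few lines of Python. The dollar figures and step sizes are hypothetical, not Sweden's actual rates.

```python
# Illustrative sketch (hypothetical rates): a carbon tax that starts low
# and rises automatically each year, in the spirit of Save More Tomorrow.

def tax_schedule(start_rate, annual_increase, years):
    """Return (year, rate) pairs with automatic yearly step-ups."""
    return [(year, round(start_rate + year * annual_increase, 2))
            for year in range(years)]

# e.g. start at $10/ton and add $5/ton each year for a decade
for year, rate in tax_schedule(start_rate=10.0, annual_increase=5.0, years=10):
    print(f"year {year}: ${rate:.2f} per ton of CO2")
```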
Subsidies for green tech like electric cars or solar panels
are another tool, though maybe more of a patchwork approach.
(54:22):
And regulatory mandates like energy efficiency standards for
buildings can overcome the energy paradox where consumers
underinvest in cost-saving efficiency measures.
What about simpler nudges, like just providing information?
Information feedback can be powerful. The US Toxics Release Inventory (TRI) is a great example.
Passed in 1986, it simply required industries to publicly
(54:45):
disclose the amounts of hazardous chemicals they
release. No bans, no taxes, just
disclosure. Just disclosure.
But making that information public created huge reputational
pressure from communities, investors and environmental
groups. Companies dramatically reduced
their toxic emissions as a result, often finding it was
cheaper to clean up than face the bad press.
Could something similar work forclimate change?
(55:05):
The authors propose exactly that: a mandatory greenhouse gas inventory (GGI), requiring major emitters to disclose their
emissions publicly. They argue it would not be
especially costly and would focus attention, create pressure
and make emissions trends much more salient, especially as
extreme weather events linked to climate change become harder to
(55:25):
ignore. But perhaps the most potent
environmental nudge is the greendefault.
Yes, the power of defaults again. The idea is simply to make the environmentally friendly option the standard, the automatic choice, unless someone actively opts out.
Because as we know, people tend to stick with the default.
Immensely so. The authors emphasize that these
architectural solutions, making things easy or automatic, can
(55:48):
have a much bigger impact than asking people to do the right
thing. The German electricity
experiment is the killer example here.
Absolutely stunning. Some electricity suppliers
switched their default offering. Instead of defaulting new
customers or existing ones at renewal to standard fossil fuel
based electricity, they defaulted them to slightly more
expensive green energy from renewable sources with the
(56:11):
option to opt out and switch back to the cheaper, dirtier
power. And what happened?
A remarkable 69.1% of customers stuck with the green energy
default. Compare that to an opt in system
where customers had to actively choose green energy.
Only 7.2% did so. Wow.
So just changing the default massively increased green energy
(56:32):
consumption. Massively, leading to a lot less dirty air and a lot lower greenhouse gas emissions just from flipping the switch on the default.
It shows the incredible potential for the world to similarly become automatically green through smart default choices in energy, transportation, maybe even food.
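Using the two percentages quoted above, a quick back-of-the-envelope sketch shows how large the default effect is. The customer count is an arbitrary assumption added only to make the comparison concrete.

```python
# Sketch using the figures quoted in the discussion (69.1% keep a green
# default, 7.2% actively opt in); the customer count is assumed.

customers = 100_000

green_if_opt_in  = customers * 0.072   # customers must actively choose green
green_if_default = customers * 0.691   # green is the default, opt-out allowed

print(f"opt-in:  {green_if_opt_in:,.0f} green customers")
print(f"default: {green_if_default:,.0f} green customers")
print(f"ratio:   {green_if_default / green_if_opt_in:.1f}x more")
```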
It's clear these nudges can have real, significant impact, but
(56:53):
inevitably when an idea gets this big, it attracts criticism.
The authors dedicate a fair bit of space to addressing the
complaints department. They do.
And since Nudge became influential, shaping
policies worldwide, it's faced critiques from all sorts of
angles. Economists, psychologists,
philosophers, legal scholars across the political spectrum.
Some arguments seem almost semantic, right?
(57:15):
Like debating whether libertarian paternalism is
really libertarian or if it's really paternalistic.
Yeah, some of it gets into definitions, but the authors reiterate their ideal: libertarian means choice-preserving. Their gold standard is one-click paternalism, a nudge that's incredibly easy and cheap to avoid if you want to. Think about GPS directions. They suggest a route, a nudge,
(57:39):
but the system doesn't yell at you if you ignore it and go a different way. It's a perfect
nudge. But if a nudge makes it harder
to opt out, like needing to fill a long form, then it starts
moving towards sludge, right? Exactly, and those costs of
opting out need to be considered.
It's not black and white, it's a spectrum.
OK, what about the slippery slope argument?
This is a common one. If we allow these seemingly
harmless nudges today, won't it inevitably lead to more
(58:01):
heavy-handed government control tomorrow?
Justice Scalia's broccoli mandate argument.
Right, the "if they can make us buy health insurance, soon they'll make us eat broccoli" line of reasoning.
The authors are pretty blunt in their response.
Critics do not provide any evidence of an actual slope.
They argue history doesn't really support these fears.
(58:22):
Yeah, they point out that opponents of women's suffrage
predicted disaster: masculine women and effeminate
men, which obviously didn't happen.
Alcohol prohibition in the US was enacted and then repealed.
Marijuana is being rapidly legalized now after decades of
prohibition. Social trends are unpredictable.
Slopes aren't always slippery, and sometimes they even slope
(58:43):
upwards. And their core defense is the
definition of nudging itself. Yes, nudging, properly defined,
requires maintaining freedom of choice.
As long as society commits to that principle and demands that
nudges remain easy to avoid, there's no inherent reason it
has to lead to coercion. Although they do acknowledge
something called nudge creep. Yeah, the idea that a successful
(59:04):
nudge might become the expected norm, shifting the middle
ground. But that's different from
outright coercion. Another critique.
Shouldn't we just force active choosing instead of relying on
defaults? Isn't it always better for
people to make a conscious choice?
The authors think active choosing is great for simple
decisions, like should your utility company automatically
(59:24):
enroll you in paperless billing? Maybe making you actively choose
yes or no is fine there. But not for complex things.
Right. Forcing everyone to actively
choose, say, their entire investment portfolio from
hundreds of mutual funds, like in the early Swedish pension system. They argue that's dubious and actually
pretty paternalistic in its own way, because it ignores the
(59:46):
reality that many people don't want to make those complex
choices. They respect the choice not to
choose. Exactly.
Many people prefer to rely on a well designed default or expert
guidance for things they don't understand well.
The Swedish experience showed most people declined the offer to manage their own pension portfolio when given the active
choice. We don't expect people to be
their own doctors. Why force them to be expert
(01:00:08):
financial managers? What about education?
Why not just teach people better decision-making skills (boosting) instead of nudging them? Some critics implied nudging is
for people who can barely read and write.
The authors push back strongly on this being an either/or
situation. Why do we have to choose one
over the other? They're both professors.
They love education, and many nudges are educational:
(01:00:31):
disclosures, warnings, reminders.
But they're also realistic aboutthe limits of education alone.
Very realistic. They suggest maybe high school
should teach statistics and basic household finance instead
of trigonometry, which might be more useful.
But even then, how much do we retain long term from school?
Studies show financial literacy training often has effects that
(01:00:52):
completely disappear after just two years.
Ouch. So education is best when it's
timely. Exactly, just-in-time education.
Providing relevant information right before someone makes a
decision, like mortgage advice for first time homebuyers, is
much more effective than trying to teach everything years in
advance. Nudges and boosting should work
together. Then there's the transparency
(01:01:13):
issue. Do nudges only work if they're
secret or hidden? That's a common claim, but the authors say it's been repeatedly shown to be false.
Many nudges work perfectly well when they're visible.
Like the cafeteria layout? Yeah, you can see the food
arrangement even if you don't consciously think, "they put the salad first to nudge me." Right.
Or think about advertisements or political speeches.
(01:01:34):
They use behavioral insights all the time, quite openly trying to persuade you. We're not naive about their intent.
And sometimes transparency can even help the nudge.
Definitely. Telling people why they've been auto-enrolled in a retirement plan, for example, "to help you capture the employer match and get tax benefits," can actually increase participation
(01:01:56):
compared to just enrolling them silently. Explaining the reason
behind the nudge can make it more effective by conveying
valuable information and building trust.
This ties into their ethical guideline, the publicity
principle. Yes, borrowing from the
philosopher John Rawls, the principle is simple.
No choice architect, whether public or private, should adopt a policy that she would not be able or willing to defend
(01:02:18):
publicly. Why is that so important?
Partly practical: it avoids embarrassment if your sneaky
tactic gets exposed, but more fundamentally, it's about
respect. Acting secretively or
manipulatively fails to show respect for the people you're
trying to influence. So things like government
regulations, fuel economy labels, nutrition facts, graphic
(01:02:40):
cigarette warnings, these are all proposed, publicly debated,
explained. They meet the publicity principle.
Even clever signs, like one at the Lollapalooza festival that said "Drink more water. No one wants to be that guy," invoking loss aversion around social standing, are transparent in their persuasive intent. But something like subliminal
messaging would violate this. Absolutely.
(01:03:02):
Because you can't publicly defend a tactic that works below
the level of conscious awareness, it fails the respect
test. OK, two more critiques. One is
that nudges are just too weak. They tinker around the edges
when we need stronger measures like bans or mandates.
The authors see this as a false dichotomy, a debater's point.
Nudges aren't meant to replace jackhammers and bulldozers.
(01:03:22):
They're like pocket knives, useful tools in the larger
toolkit. They give the example of
Scandinavia and alcohol. Right.
Those countries simultaneously have high alcohol taxes (a big economic incentive), run public campaigns nudging people not to drink and drive (a social, informational nudge), and impose stiff penalties for drunk driving (a mandate with punishment).
(01:03:42):
They do all three. Nudges complement
stronger measures. They don't necessarily replace
them. Finally, when is it OK to go
beyond easy opt outs and impose more significant costs like
mandatory waiting periods? They argue this is justified for
certain fundamental decisions, often made on impulse,
particularly when 2 conditions are met.
(01:04:04):
A, people make the decision very infrequently, so they lack experience, and B, emotions are
potentially clouding judgement. Like cooling off periods for
door to door sales. You sign up impulsively, but
then have three days to change your mind.
Exactly. Or mandatory waiting periods for
divorce. In many places, these aren't
about preventing the choice, butabout ensuring sober reflection
(01:04:25):
before making a potentially lifealtering decision in a hot
state. This complexity comes up again
in the health insurance salad bar example.
Oh yeah, that study is sobering. Employees were offered a choice
of like 48 different complex health plans.
Way too many options. A salad bar approach.
What happened? A majority of employees selected
(01:04:45):
a plan that was unequivocally worse than another available
option. On average, they paid 28% more
in combined premiums and out of pocket costs than they would
have in a financially superior plan offered by the same
employer. Actively chose to lose money.
How? Often due to deductible aversion. People hate deductibles, so they pick a plan with a low
(01:05:06):
deductible even if its high premium makes it more expensive overall than a high-deductible plan.
Regardless of how much medical care they actually used, the low
deductible plan was financially dominated.
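Here's a minimal sketch of how a low-deductible plan can end up dominated. The premiums and deductibles are made-up numbers chosen only to illustrate the pattern, not figures from the study, and the cost model ignores coinsurance and other real-world plan details.

```python
# Illustrative sketch (invented numbers): total annual cost of two plans
# across a range of medical spending. Because the premium gap exceeds the
# deductible gap, the low-deductible plan costs more no matter how much
# care you use -- it is financially dominated.

def total_cost(premium, deductible, medical_spending):
    # Simplified: you pay the premium plus your costs up to the deductible.
    return premium + min(medical_spending, deductible)

low_ded  = dict(premium=3000, deductible=500)    # "feels safer"
high_ded = dict(premium=1500, deductible=1500)   # cheaper in every scenario here

for spend in (0, 250, 500, 1000, 2000, 5000):
    a = total_cost(low_ded["premium"], low_ded["deductible"], spend)
    b = total_cost(high_ded["premium"], high_ded["deductible"], spend)
    print(f"spend ${spend:>5}: low-deductible ${a}, high-deductible ${b}")
```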
It shows how too much choice, especially complex choice
without guidance, can lead to really poor outcomes even when
people have freedom. Exactly.
It underscores that sometimes simplifying choices or providing
(01:05:28):
stronger guidance, perhaps even limiting the number of options,
can be more helpful than just maximizing raw choice.
It's a tricky balance. Wow, what a journey through the
world of Nudge. We've really dug deep into how
our choices are shaped, often in ways we don't even realize, from our own internal planner-doer battles.
To the incredible pull of the crowd and social norms.
(01:05:50):
And those frustrating hurdles of sludge, all the way to the potential for designing smarter, more helpful choice
environments. It really does change how you
look at everything from supermarket layouts to
government forms to website designs.
You start seeing the choice architecture everywhere.
So for you, our deep divers listening in, what are the big
takeaways here? I think the first one has to be
(01:06:12):
awareness is power. Just recognizing that every environment is designed, and that design is nudging you, is the
first step towards making more deliberate choices.
Absolutely. And 2nd, embrace your humanity.
It's OK that we're not perfectly rational econs.
Acknowledging our predictable biases, like loss aversion or that optimism about mailing in rebates, lets us be smarter.
(01:06:33):
We can set up our own commitment devices, use mental accounting
consciously, look for good defaults, or just be more
skeptical of tempting offers. 3rd, maybe we should demand
better design. When you encounter sludge, that impossible-to-cancel subscription, that confusing
benefit application, maybe call it out.
Advocate for systems, both public and private, that remove
friction and make the beneficial choice the easy, obvious choice.
(01:06:56):
We deserve choice architecture that works for us.
And 4th, harness social influence wisely.
Understand its power. We can use insights about social
proof and identity to encourage positive actions in our
communities. Whether it's recycling, voting,
or supporting local initiatives, knowing how cascades work helps us both start positive ones and resist negative
(01:07:16):
ones. Ultimately, the promise of nudging, as Thaler and Sunstein present it, isn't about manipulation.
It's about enhancing our freedom and well-being by making good
choices easier, sometimes more fun, and less vulnerable to our
predictable human errors. It's designing for reality.
Like a good building works with how people naturally move, a
good policy or product design works with human nature, gently
(01:07:37):
guiding us without forcing us. So as you go about your week,
start noticing. Pay attention to the layout of
the store, the default settings on the app, the way options are
framed in that e-mail. How are you being nudged?
And maybe even think, how could you apply these ideas?
How could you, as a choice architect in your own life or
work, nudge yourself or your team or your family towards
(01:07:59):
smarter, happier, healthier choices?
The decision environment is always there.
The fascinating question is how will we choose to shape it?
Keep on deep diving.