
June 4, 2025 · 20 mins

What is the impact of algorithms on the price you pay for your Uber, what gets fed to you on TikTok, and even the prices you pay in the supermarket?

Cass Sunstein, professor at Harvard Law School, is co-author of the new book “Algorithmic Harm: Protecting People in the Age of Artificial Intelligence.” Previously he co-authored “Nudge” with Nobel laureate Dick Thaler. We discuss whether all this algorithmic impact is helping or harming people.

Each week, “At the Money” discusses an important topic in money management. From portfolio construction to taxes and cutting down on fees, join Barry Ritholtz to learn the best ways to put your money to work.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios. Podcasts. Radio. News.

Speaker 2 (00:26):
Algorithms are everywhere. They determine the price you pay for
your Uber, what gets fed to you on TikTok and Instagram,
and even the prices you pay in the supermarket. Is
all of this algorithmic impact helping or harming people? To
answer that question, let's bring in Cass Sunstein. He is

(00:49):
the author of a new book, Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, co-written with Oren Bar-Gill. Cass is also a professor at Harvard Law School and is perhaps best known for his books on Star Wars and for co-authoring Nudge with Nobel laureate Dick Thaler. So, Cass,

(01:12):
let's just jump right into this and start with a definition: what is algorithmic harm?

Speaker 3 (01:20):
Okay, so let's use Star Wars. Let's say the Jedi Knights use algorithms, and they give people things that fit with their tastes and interests and information. If they're interested in books on behavioral economics, that's what they get, at a price that suits them. If they're interested in a book on Star Wars, that's what

(01:41):
they get, at a price that suits them.

Speaker 3 (01:43):
The Sith, by contrast, use algorithms to take advantage of the fact that some consumers lack information and some consumers suffer from behavioral biases. So we're going to focus on consumers first. If people don't know much, let's say, about a healthcare product, an algorithm might know that they're likely not to know much and might say, we have a fantastic baldness

(02:06):
cure for you, here it goes, and people will be duped and exploited. So that's exploitation of an absence of information. That's algorithmic harm. If people are super optimistic and they think that some new product is going to last forever, when it tends to break on first use, then the algorithm can know those are unrealistically optimistic people and exploit

(02:29):
their behavioral bias.

Speaker 2 (02:31):
So I referenced a few obvious areas where algorithms are at work. Uber pricing is one; the books you see on Amazon are algorithmically driven. Clearly, a lot of social media, for better or worse, is algorithmically driven, and even things

(02:53):
like the sort of music you like on Pandora. What are some of the less obvious examples of how algorithms are affecting consumers and regular people every day?

Speaker 3 (03:07):
Okay, so let's start with the straightforward ones and then we'll get a little subtle. Straightforwardly, it might be that people are being asked to pay a price that suits their economic situation. So if you have a lot of money, the algorithm knows that, and maybe the price will be twice as much as it would be if you were less wealthy. That, I think, is basically okay. It

(03:31):
leads to greater efficiency in the system. It's like rich people will pay more for the same product than poor people, and the algorithm is aware of that. So that's not that subtle, but it's important. Also not that subtle is targeting people based on what's known about their particular tastes and preferences. Let's put wealth to one side, and so

(03:52):
it's known that certain people are super interested in dogs, other people are interested in cats, and there we go. All that is very straightforward, and it's happening. If consumers are sophisticated and knowledgeable, that can be a great thing that makes markets work better. If they aren't, it can be a terrible thing that gets consumers manipulated and hurt.

(04:14):
Here's something a little more subtle. If an algorithm knows, for example, that you like Olivia Rodrigo, and I hope you do, because she's really good, then there are going to be a lot of Olivia Rodrigo songs that are going to be put into your system. And let's say no one's really like Olivia Rodrigo, but let's suppose there are others who are vaguely like her, and

(04:35):
you're going to hear a lot of that.

Speaker 3 (04:37):
Now that might not seem like algorithmic harm, that might seem like a triumph of freedom and markets, but it might mean that people's tastes will calcify, and we're going to get very balkanized culturally with respect to what people see and hear. So there are going to be Olivia Rodrigo people, and then there are going to be Led

(04:57):
Zeppelin people, and there are going to be Frank Sinatra people. And there was another singer called Bach, I guess; I don't know much about him, but there's Bach, and there would be Bach people. And that's culturally damaging, and it's also damaging for the development of individual tastes and preferences.

Speaker 2 (05:14):
So let's put this into a little broader context than simply musical tastes, and I like all of those, so I haven't become balkanized yet. But when we look at consumption of news media, when we look at consumption of information, it really seems like the country has divided itself

(05:38):
into these happy little media bubbles that are either far left-leaning or far right-leaning, which is kind of weird, because I always learned the bulk of the country sits in the traditional bell curve: most people are somewhere in the middle. Hey, maybe they're center-right or center-left, but they're not out on the tails. How do

(06:00):
these algorithms affect our consumption of news and information?

Speaker 3 (06:07):
About fifteen or twenty years ago, there was a lot of concern that through individual choices, people would create echo chambers in which they would live. And that's a fair concern, and it has created a number of, let's say, challenges for self-government and

(06:26):
learning.

Speaker 3 (06:27):
What you're pointing to is also emphasized in the book, which is that algorithms can echo-chamber you. An algorithm might say, you know, you're keenly interested in immigration, and you have this point of view, so boy, are

(06:43):
we going to funnel lots of information to you, because clicks are money and you're going to be clicking and clicking and clicking. And that might be a very good thing from the standpoint of the seller, so to speak, or the user of the algorithm, but from your standpoint it's not so fantastic. And from the standpoint of our society it's less than not so fantastic, because people will be living in algorithm-driven universes that

(07:10):
are very separate from one another, and they can end up not liking each other very much.

Speaker 2 (07:15):
Even worse than not liking each other, their views of the world aren't based on the same facts or the same reality. Everybody knows about Facebook and, to a lesser degree, TikTok and Instagram, and how they very much balkanize people. And we've seen that

(07:36):
in the world of media. You have Fox News over here and MSNBC over there. How significant a threat do algorithmic news feeds present to the country as a self-regulating, self-determined democracy?

Speaker 3 (07:56):
Really significant. And there are algorithms, and then there are large language models, and they can both be used to create situations in which, let's say, the people in some city, let's call it Los Angeles, are seeing stuff that creates a reality that's very different from the reality that people are seeing in, let's say, Boise, Idaho. And that can be a real problem for

(08:20):
understanding one another and also for mutual problem solving.

Speaker 2 (08:26):
So let's apply this a little bit more to consumers
and markets. You describe two specific types of algorithmic discrimination.
One is price discrimination and the other is quality discrimination.
Why should we be aware of this distinction? Do they
both deserve regulatory attention?

Speaker 3 (08:47):
So if there is price discrimination through algorithms, in which different people get different offers depending on what the algorithm knows about their wealth and tastes, that's one thing, and it might be okay. People don't stand up and cheer and say hooray. But if people who have a lot of resources are given an offer that's not as, let's

(09:09):
say, seductive as one that is given to people who don't have a lot of resources, just because the price is higher for the rich than the poor, that's okay. There's something efficient and market-friendly about that. If it's the case that some people, let's say, don't care

(09:30):
much about whether a tennis racket is going to break after multiple uses, while other people think that a tennis racket really has to be solid, because I play every day and I'm going to play for the next five years, then some people are given the, let's say, immortal tennis racket, and other people are given the one that's more fragile.

(09:51):
That's also okay, so long as we're dealing with people who have a level of sophistication: they know what they're getting and they know what they need. But if it's the case that, for either pricing or for quality, the algorithm is aware of the fact that certain consumers are particularly likely not to have relevant information, then everything goes haywire. And

(10:14):
if this isn't frightening enough, note that algorithms are in an increasingly excellent position to know: this person with whom I'm dealing doesn't know a lot about whether products are going to last, and I can exploit that. Or, this person is very focused on today and tomorrow,

(10:35):
and next year doesn't really matter; the person's present-biased, and I can exploit that. And that's something that can damage vulnerable consumers a lot, either with respect to quality or with respect to pricing.

Speaker 2 (10:48):
So let's flesh that out a little more. I'm very much aware that when Facebook sells ads, because I've been pitched these by Facebook, they can target an audience based on not just their likes and dislikes, but their geography, their search history, their credit score, their purchase history. Like,

(11:11):
they know more about you than you know about yourself. It seems like we've created an opportunity for some potentially abusive behavior. Where is the line crossed, from hey, we know that you like dogs, so we're going to market dog food to you, to we know everything there

(11:34):
is about you, and we're going to exploit your behavioral biases and some of your emotional weaknesses?

Speaker 3 (11:42):
Okay, so suppose there's a population of Facebook users who are super well-informed about food and really rational about food. They particularly happen to be fond of sushi, and Facebook is going hard at them with respect to offers

(12:03):
for sushi and so forth. Now let's suppose there's another population, who know what they like about food, but they have kind of hopes and false

(12:16):
beliefs about the effect of food on health.

Speaker 3 (12:21):
Then you can really market to them things that will lead to poor choices. And I've made a stark distinction between fully rational, which is kind of economics-speak, and, you know, imperfectly informed and behaviorally biased people, also economics-speak, but it's really intuitive. There's a radio show, maybe this will bring it home, that I listen to when I

(12:42):
drive into work, and there's a lot of marketing about a product that is supposed to relieve pain. And I don't want to criticize any producer of any product, but I have reason to believe that the relevant product doesn't help much. But the station that is marketing this

(13:02):
product to people, this pain relief product, must know that the audience is vulnerable to it,

(13:10):
and they must know exactly how to get at them.

Speaker 3 (13:13):
And that's not in the interest of... that's not going to make America great again.

Speaker 2 (13:18):
To say the very least. So we've been talking about algorithms, but obviously the subtext is artificial intelligence, which seems to be the natural extension of the development of algos. Tell us, as AI becomes more sophisticated and pervasive,

(13:39):
how is this going to impact our lives as employees, as consumers, as citizens?

Speaker 3 (13:48):
ChatGPT, chances are, knows a lot about everyone who uses it. So I actually asked ChatGPT recently, I use it some, not hugely. I asked it to say some things about myself, and it said a few things that were kind of scarily precise about me, based on

(14:10):
some number, dozens, not hundreds I don't think, of engagements with ChatGPT. So large language models that track your prompts can know a lot about you, and if they're also able to know your name, they can, you know, instantly, basically learn a ton about you online, and we need to have privacy protections that are working there. Still, it's the case that AI broadly
language models that track your prompts can know a lot
about you, and if they're able also to know your name,
they can, you know, instantly, basically learn a ton about
you online, and we need to have privacy protections that
are working there. Still, it's the case that AI broadly

(14:36):
is able to use algorithms, and generative AI can go well beyond the algorithms we've gotten familiar with, both to produce the beauty of algorithmic engagement, that is, here's what you like, here's what you want, we're going to help you, and the ugliness of algorithms, here's how we can exploit

(14:58):
you to get you to buy. And of course I'm thinking of investments too. So in your neck of the woods, it would be child's play to get people super excited about investments which AI knows the people with whom it's engaging are particularly susceptible to, even though they're really dumb investments.

Speaker 2 (15:17):
Really, really interesting. So since we're talking about investing, I can't help but bring up both AI and algorithms trying to increase so-called market efficiency. And I always go back to Uber's surge pricing. As soon as it starts

(15:38):
to rain, the prices go up in the city. It's obviously not an emergency, it's just an annoyance. However, we do see situations of price gouging after a storm, after a hurricane; people only have so many batteries and so much plywood, and sellers kind of crank up prices. How

(15:58):
do we determine the line between something like surge pricing and something like, you know, abusive price gouging?

Speaker 3 (16:08):
Okay, so you're in a terrific area of behavioral economics. We know that in circumstances in which, let's say, demand goes way up because everyone needs a shovel and it's a snowstorm, people are really mad if the prices go up, though it might be just a sensible market adjustment.

(16:31):
So as a first approximation, if there's a spectacular need for something, let's say shovels or umbrellas, the market inflation of the cost, while it's morally abhorrent to many, and maybe in principle morally abhorrent, from the standpoint of standard economics

(16:49):
it's okay. Now, if it's the case

(16:52):
that people under short-term pressure from the fact that there's a lot of rain are especially vulnerable, in some kind of emotionally intense state, so they'll pay kind of anything for an umbrella, then there's a behavioral bias which is motivating people's willingness to pay a lot more than

(17:13):
the product is worth.

Speaker 2 (17:14):
So let's talk a little bit about disclosures and the sorts of mandates that are required. When we look across the pond, when we look at Europe, they're much more aggressive about protecting privacy and making sure big tech companies are disclosing all the things they have to disclose. How far behind is the US in that generally, and are

(17:36):
we behind when it comes to disclosures about algorithms or AI?

Speaker 3 (17:41):
I think we're behind them in the sense that we're less privacy-focused. But it's not clear that that's bad, and even if it isn't good, it's not clear that it's terrible. I think neither Europe nor the US has put their regulatory finger on the actual problem. So let's

(18:03):
take the problem not of algorithms figuring out what people want, but of algorithms exploiting a lack of information or a behavioral bias to get people to buy things at prices that aren't good for them.

Speaker 3 (18:17):
That's a problem. It's in the

(18:19):
same universe as fraud and deception, and the question is
what are we going to do about it? A first line of defense is to try to ensure consumer protection, not through heavy-handed regulation (I'm a longtime University of Chicago person; I have in my DNA not liking heavy-handed regulation), but through helping people to know what

(18:42):
they're buying and helping people not to suffer from a behavioral bias such as, let's say, incomplete attention or unrealistic optimism when they're buying things. These are standard consumer protection measures which many of our agencies in the US, homegrown America, have undertaken, and that's good, and

(19:03):
we need more of that. So that's the first line of defense. The second line of defense isn't to say, you know, privacy, privacy, privacy, though maybe that's a good song to sing. It's to say: a right to algorithmic transparency. This is something which neither the US, nor Europe, nor Asia, nor South America, nor Africa has been very advanced on, so this is

(19:27):
a coming thing, where we need to know what the algorithms are doing, so it's public. What's Amazon's algorithm doing? That would be good to know, and it shouldn't be the case that efforts to ensure transparency invade Amazon's legitimate rights.

Speaker 2 (19:45):
Really, really fascinating. Thanks, Cass. Anybody who is participating in the American economy and society, consumers, investors, even just regular readers of news, needs to be aware of how algorithms are affecting what they see, the prices they pay, and the sort of information they're getting. So with a little

(20:09):
bit of forethought and the book Algorithmic Harm, you can protect yourself from the worst aspects of algorithms and AI. I'm Barry Ritholtz. You're listening to Bloomberg's At the Money.
