
May 27, 2024 54 mins
Host:
Richard Foster-Fletcher, Executive Chair, MKAI.org


Guest:
Professor Alex Edmans, Professor of Finance at London Business School and Mercers' School Memorial Emeritus Professor of Business at Gresham College

Guest Bio:
Professor Alex Edmans is renowned for his work on finance and misinformation. With a significant following from his TED talk "What to Trust in a Post-Truth World" and presentations at forums like Davos and Google, he brings critical insights into the impacts of misinformation in various sectors. His book "Grow the Pie" and his recent works delve into the complexities of misinformation in finance, politics, and health.

Episode Overview:
In this compelling episode, Professor Alex Edmans joins Richard Foster-Fletcher to unravel the intricate web of misinformation and bias that influences public opinion and decision-making in today’s society. Their discussion pivots on the subtle yet profound ways misinformation permeates various aspects of daily life and decision-making, from social media interactions to academic research.

Key Topics of Discussion:
  1. The nature and impact of cognitive shortcuts and how they contribute to the spread of misinformation.
  2. Challenges in distinguishing credible information from misleading data in the digital age.
  3. The role of biases in shaping public perceptions and opinions, particularly in the realms of social media and academic publishing.
  4. Strategies to foster critical thinking and discernment in evaluating information.
  5. The ethical responsibilities of researchers, corporations, and media in reporting and disseminating information.

Key 'Takeaway' Ideas:
  1. The importance of understanding the underlying biases and structures that facilitate misinformation.
  2. Effective strategies for individuals and communities to cultivate a more discerning and questioning approach to information consumption.
  3. The critical role of education in empowering individuals to navigate the complexities of misinformation and bias in a post-truth era.

This episode is a must-listen for anyone interested in understanding the dynamics of misinformation and seeking ways to foster a more informed and discerning society.


Become a supporter of this podcast: https://www.spreaker.com/podcast/the-boundless-podcast--4077400/support.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome, Alex Edmans. The Professor is a prominent figure in finance and misinformation studies, based at London Business School. His influential talk, What to Trust in a Post-Truth World, has reached over two million views, and he has presented at prestigious forums such as Davos and Google. A former tenured professor at

(00:21):
the Wharton School, he was named MBA Professor of the Year by Poets and Quants in twenty twenty-one. Alex writes extensively for major publications and is the author of the critically acclaimed book Grow the Pie. His latest book addresses the pervasive issue of misinformation across various sectors, from politics to personal health. Wow, that's

(00:42):
an amazing bio, Alex. And moving on to our chair, Richard Foster-Fletcher, whom many of us know, but his introduction is really crucial. As the executive chair of MKAI.org, Richard leads initiatives to equip leaders with the knowledge to ethically harness AI. His work addresses the mitigation of risks

(01:02):
like bias and privacy infringements in AI applications. Richard is a renowned speaker and advisor who engages with governments, businesses, and educational institutions to promote ethical technology use. His contributions to data science and the impacts of AI are widely recognized in academic and professional circles. Wow, back to you, Richard. After

(01:26):
this, Alex is basically talking to us, I believe, about cognitive shortcuts. And it's an absolutely fascinating topic because, Alex, these are actually the reason that we're here today, in a social sense. We've survived millennia as a species primarily because of these incredible social skills. But the downside of them is

(01:47):
that we have to take these cognitive shortcuts. Am I right? Yeah, that's absolutely the case. So I think the cognitive shortcuts can be helpful in many situations, but shortcuts are shortcuts, and so what I'm trying to highlight is the dangers of these shortcuts. And these dangers are here and now, in the present. So what's changed? These are the things that kept us alive before: we had to make split-second judgments about life and death, and

(02:07):
who to trust, and whether it's fight or flight, or whatever else. But now, in the business context, they're potentially very harmful. Could you say a little bit more about why that is? Yeah. So what has changed? I think it's just very easy to produce misinformation now, so anybody with a platform can share it on social media. It can be that a regular person starts some conspiracy

(02:28):
theory, and if that conspiracy theory feeds into people's biases, then people are going to be willing to lap it up and to share it. And what's also changed is who is producing information. It is not only reliable sources; companies will now start releasing their own reports. So the likes of BlackRock and McKinsey are getting involved in research, but their goal might not

(02:52):
be scientific inquiry. Their goal might be brand building, and often they release research in order to do marketing. People don't realize this, because people think that these are scientific studies, but they're actually just done for marketing and brand building. And in the book, you do a bit of debunking, don't you?

(03:13):
Yeah. What I tried to do is highlight some of the issues with various types of research. And one of the books that you mentioned is The Spirit Level, I think, isn't it? Yeah, that's the one about whether more equality is better, and a lot of my own thinking is premised on that book, so it'll be interesting to explore that later on. And you also take a shot at Malcolm Gladwell, I think, the ten thousand hours, which I happen to agree with you on as well. So we'll touch on that after your presentation,

(03:37):
and then we'll come back around someof the black and white thinking that you
want to give us maybe some tipsand ideas for how we can overcome.
But would it be a great timenow to have a look at your prepared
comments. Yeah, absolutely, thanksvery much, So let me just share
my slide. So I'm just goingto give you an overview of some high
level themes or a book, andthen that can hopefully then spark the Q
and A that the attendees will havethis book about data and statistics and evidence.

(04:02):
And for me, this is mylife, this is my day job,
and for many of you this iswhat you're you're working on as well.
But then things changed. This wasnot my entire life. After my
son Kasper was born, and asI'm sure many of you will have experienced,
the birth of a child is anamazing event. But then after we
brought him home, I sent Ishared the news with some friends and one

(04:25):
of my friends is a father ofthree, and he wrote back saying now
it's over and now it starts.So we had to think about how to
look after this little guy, andin particular we had to think about what
to feed him. Now to us, there was something which was really clear.
We would feed him based on onlybreastmilk. Why because we'd learned from

(04:46):
the National Childbirth Trust, a parenting course that we took, that breastfeeding is the only thing that you should be doing. So most of the workshop was about how to look after a child: how to change a nappy, what to do when they're sick. But they gave some sessions on breastfeeding, and while most of this was on how to breastfeed, how to ensure a good latch, they started off by saying why breastfeed. Because breastfeeding is really

(05:14):
tough. It's difficult to do, and so if they could convince you of the benefits, then people would make sure that they would persevere even if things were quite difficult. And so they quoted research from the best sources, such as the World Health Organization, recommending exclusive breastfeeding for

(05:39):
the first six months, without water or any other fluids. And this was backed up by a lot of research, so we thought this had to be true. But as somebody who deals with data and evidence, I thought, let me not accept this at face value; let me do some research myself. And so I looked at Google. This was in the days before ChatGPT, and

(06:00):
there are many things that breastfeeding could be good for, such as child health, mental health, mother-child bonding. But I thought, let me measure the link between breastfeeding and IQ. And you find a lot of studies which suggest that breastfeeding improves IQ. They're from the BBC, so this is a reliable source. You can scroll down and you're going to find other articles saying this.

(06:23):
You can also come back another day and Google the same thing, and Google will again tell you there is this link between breastfeeding and IQ. So this gave a clear message to my wife and me, which is: only breastfeed. That was our plan. If we were to breastfeed him for the first six months, he would be cleverer than anybody

(06:45):
else in his nursery class, based on this evidence. But then, as the famous breastfeeding expert Mike Tyson said, everyone has a plan until they get punched in the face. So what was the punch in the face for us? It was that breastfeeding didn't work. So even though we had a good latch, my wife just wasn't producing enough breast milk. You don't in the

(07:09):
first few days. So after our son had drained her dry, he still wanted more. And you might think that it is obvious that you should feed a hungry baby, but it's not obvious when the World Health Organization is saying don't give anything other than breast milk. We wondered whether we should just let him go hungry, and indeed, often as parents, you don't give your

(07:29):
kids what they necessarily want. So I went back to the information that I looked at previously. I went back to this article, and even though all the articles I found on Google had the same headline, I thought, let's go beyond the headline. So let me click on the article and let's see what it says. And it says exactly the same thing as the headline.

(07:54):
Google wasn't doing any faking; they were accurately reporting one statement from the article. But here's the important thing. If you read two lines down, it says this: because of the small sample size, we could not confirm the significant difference between the breastfed and bottle-fed groups. So this means that

(08:18):
the difference was so small it could be due to luck and nothing to do with breastfeeding. And it gets worse, because if you look at the top, it tells you about another study, on five thousand five hundred children, finding that breastfeeding has no significant effect on intelligence.
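(To make the "small sample size" point concrete, here is a minimal sketch in Python; the numbers are invented rather than taken from the study. It shows how the very same mean gap can be indistinguishable from luck in a small sample yet statistically significant in a large one.)

# Illustrative only: simulated numbers, not the BMJ study's data.
# The same 5-point IQ gap (with SD 15) often fails to reach
# significance with a small sample, but usually does with a large one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def gap_p_value(n_per_group):
    breastfed = rng.normal(105, 15, n_per_group)
    bottle_fed = rng.normal(100, 15, n_per_group)
    return stats.ttest_ind(breastfed, bottle_fed).pvalue

print(f"n=10 per group:  p = {gap_p_value(10):.3f}")   # typically > 0.05
print(f"n=500 per group: p = {gap_p_value(500):.3f}")  # typically < 0.05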
Now this was surprising to me, because everybody tells you breast is best. You hear from doctors and nurses and pediatricians

(08:41):
that we should use breast milk. So how could this be the case, when it went against all of the information that I'd been told? Number eighteen: here there's a reference. Let's click on footnote eighteen. You get a study published in the British Medical Journal. Now, I'm not a medicine expert, but I thought even I should be able to read this article. It's six

(09:01):
pages long, and it's full of a lot of long words which I don't understand. But I do understand numbers, and if I look at the table, which compares breastfed versus bottle-fed kids, it finds there's a difference in IQ of nearly five points. That's not major, but it's also not small

(09:22):
either. If you can add five points to your child's IQ just by having a different feeding method, that's something everybody will take. But as the astute among you will notice, there is this word: unadjusted. So what does unadjusted mean? What might you adjust for? Breastfed and bottle-fed kids differ across

(09:45):
many dimensions. So breastfeeding is tough; it's difficult to do without family support. So maybe the mothers who are able to breastfeed have a supportive partner at home; they might have help around the house. And it could be those factors, not the breast milk, which are leading to the differences in IQ. And so when you control for all of these other factors, once you

(10:09):
strip out the differences in IQ which are attributable to things like the mother's education, the mother's IQ, whether the mother smoked, you are left with peanuts. The difference in IQ between breastfed and bottle-fed kids is only zero point five. It's so small it might be due to luck.
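(As an aside, the unadjusted-versus-adjusted logic can be seen in a small Python simulation; the numbers are invented, and breastfeeding is deliberately given no true effect, yet the raw comparison shows a gap that vanishes once the confounder is controlled for.)

# Illustrative only: a confounder ("background") raises both the chance
# of breastfeeding and the child's IQ; breastfeeding itself does nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
background = rng.normal(0, 1, n)
breastfed = (background + rng.normal(0, 1, n)) > 0
iq = 100 + 5 * background + rng.normal(0, 10, n)   # no breastfeeding term

raw_gap = iq[breastfed].mean() - iq[~breastfed].mean()   # unadjusted

# Adjusted: regress IQ on feeding method AND the confounder.
X = np.column_stack([np.ones(n), breastfed, background])
coef, *_ = np.linalg.lstsq(X, iq, rcond=None)

print(f"unadjusted gap: {raw_gap:.2f} IQ points")   # several points
print(f"adjusted gap:   {coef[1]:.2f} IQ points")   # close to zero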

(10:31):
And so this is striking, right? The idea that we should only feed breast milk is something which can guilt-trip mothers into thinking, I'm not a good mother if I'm giving anything other than breast milk. Sometimes, when you're extremely exhausted, you might think, I need to use breast milk, I can't reach for the bottle. So what did my wife and I do afterwards? We decided to have combination feeding, so we still used breast milk as the first

(10:54):
option, as there are lots of benefits such as mother-child bonding, but we would not be afraid to use formula if my wife needed a break while our son needed a top-up. So why is this the example that I want to use as the opening example here before the Q and A? It is to highlight four things which I think are themes of misinformation.

(11:18):
First: research is everywhere. My day job is to review academic papers and to write academic papers. But this is probably not your job, and so I really appreciate you giving up an hour to listen to me when your job is not about scrutinizing academic papers. But what I'm highlighting is that even if we don't read academic papers, we are affected by research in our

(11:41):
day-to-day lives. When you choose whether to breastfeed or bottle-feed your kid, that's based on research. If you pick up a copy of Women's Health or Men's Fitness or Runner's World, you are acting on the basis of research. Let's say last month you watched the London Marathon; you're inspired, and you want to run a marathon yourself. Do I do high-intensity interval

(12:03):
training or low-intensity steady state? That is based on research. What if you read self-help or self-improvement books? If you want to use the ten thousand hours rule, which Richard asked about, and which I'm sure we'll talk about in more detail, that is something which claims to be based on research. Perhaps the best-selling book of the current day is Atomic

(12:24):
Habits. If you want to develop atomic habits, should you really listen to that book? Is it based on research? A famous book about ten years ago said you could develop a four-hour work week; you could just work for four hours a week if you just bought Tim Ferriss's book. Is this research true? Maybe you don't like reading books. Maybe you're using your eyes all the time in your day job. Maybe you like to listen in

(12:48):
your spare time, and if you listen to talks, isn't there anything better than a TED talk? The second most viewed TED talk of all time claims that if you are in a difficult interpersonal situation, you should start by doing some power poses, like having your arms stretched in a victory salute. Do this before a job interview or public speaking and you'll ace that talk. But that

(13:11):
research was hugely debunked. So these things matter. The idea of being discerning with information matters irrespective of what your job is; this is something which we deal with in daily life. The second point is: yes, we know about misinformation, but isn't the solution to check the facts? So if people claim that Barack Obama is not a natural-born US citizen, you can just

(13:35):
check the facts, look at his birth certificate. But what I'm trying to highlight here is that even if something is one hundred percent accurate, it is still sometimes misleading. So it is one hundred percent true that breastfed participants have higher IQ. This is not a wrong statement; as a government, you can't prosecute somebody for saying this. This is correct. But what

(13:58):
is not correct is the implication that it's breastfeeding that leads to the higher IQ; it could be other things going on, like the different backgrounds. So we need to do more than just check whether something is true. We need to make sure the inferences are accurate. But you might think, isn't it easy to check whether the inferences are accurate? Shouldn't we know that correlation is

(14:20):
not causation? Shouldn't I know this as a finance professor? And shouldn't I have known to click through, look at what the full study said, and get the full picture? But the reason why I didn't do this is something that Richard mentioned in his opening questions, which is the concern with biases. So I am a biased person. I wish I wasn't, but I suffer from

(14:43):
biases. And the bias which mattered here was confirmation bias. This is the idea that people have a view of the world, and if we see some information that confirms that view of the world, we switch off our critical thinking faculties and we accept it uncritically. And why do we do this? We're not bad people; we're not trying to misinform ourselves. But we're human,

(15:07):
and as humans, we have a brain, and we have the striatum. When we see something which we like, news that we like, it triggers the striatum to release dopamine. It feels good to see information that confirms what we want to be true. And when you think about bias, you might think this applies to huge things, like being biased for or against gun control, or

(15:30):
immigration, or abortion. But my point here is that even subtle biases can matter. So I'm not a huge breastfeeding advocate, but I'm biased: I'm led to believe something natural is better than something man-made. We learn this as kids; we learn to eat natural flavors, not artificial flavors. So when I was given the evidence that breast milk is always better, I accepted it

(15:52):
uncritically. So the final point is: do we need to check everything? So what is the solution, to check stuff? And you might think: to check everything, is that even possible? Aren't there hundreds and hundreds of types of misinformation out there? So one of the reasons I wrote the book was to try and categorize all those types of misinformation into just four, to make it easy for the

(16:15):
reader or the user of information. And this is something that I illustrate in something I call the Ladder of Misinference. Why use a ladder? Because when we start from data and then form conclusions, we are climbing up the ladder. So the first step, the first mistake we make when climbing up the ladder, is that we confuse statements with facts. So even if something is stated,

(16:41):
it might not be a fact; it may not be accurate. So we saw those Google previews: they were correct, but they were not accurate, because they were not giving the full picture. One more serious case in which this has happened is the opioid epidemic, which has killed millions of people around the world via opioid overdoses, in large part because of some research claiming that addiction

(17:03):
is rare, in the New England Journal of Medicine, cited by over one and a half thousand papers. This must be true! Let's look at the research. Here it is: that's the first page. Where's the second page, third page, fourth page, fifth page with the tables and the data? There's none. This was a letter to the editor. So without even checking,

(17:26):
people saw there was a headline in the New England Journal of Medicine. They cited it, claiming there was no concern with addiction, and they overprescribed opioids. This is a big concern. And even if the letter was not lying, let's say this was something which is accurate, it just looked at having one narcotic. Maybe one dose doesn't lead to addiction, but two or three doses might.

(17:49):
And it looked at people in hospital. In hospital, when you're given this in a controlled environment, you're not going to get addicted. But it may be that if you prescribe this to outpatients, there could be a serious problem. So check what you're looking at, and not only check the facts; check the broader context. Is this something where it's a letter,

(18:10):
not a study? Is it something only in hospitalized patients? The second step of the ladder: facts are not data. They may not be representative. So what do I mean by this? Let's say you've checked the fact; you know something is one hundred percent accurate. It could still be misleading, because it could be a hand-picked example, the exception that does not prove the rule.

(18:32):
So let's take the third most viewed TED talk of all time, by Simon Sinek: the idea that if you have a passion, a purpose, if you start with why, you'll be successful. And he backs this up with facts. Apple is successful, indisputably; this was the first trillion-dollar company. Wikipedia, in the nonprofit sector, is hugely successful; it has surpassed the Encyclopaedia Britannica as the source for

(18:56):
knowledge. Third, the Wright brothers, hugely successful, were the first ever people to launch a flight. They beat Samuel Pierpont Langley, even though he was much richer. But these are just hand-picked examples, even if they are completely true. There could be hundreds of other organizations that started with why and failed,

(19:18):
but you're never going to hear about them, because they don't support Simon Sinek's case. What you'd want is something like a clinical trial. You have some people; you give some a drug; you see how many get better and how many get worse. Others get a placebo: how many get better and how many get worse? Let's compare the two. Importantly, our data does contain people who took the drug and got worse, companies that start with

(19:42):
why and fail, but a Simon Sinek book is never going to tell you about them. Also, it could be that the data set contains people who didn't start with why and still succeeded, patients who didn't have the drug but still got better. But again, Simon Sinek never tells you about this; he only tells you about the cases that support his story. Even though they're

(20:04):
completely accurate, this is misleading.
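(To see the hand-picked-examples problem in miniature, here is a toy Python table with invented counts; success stories are a single cell of a two-by-two table, and the comparison needs all four cells, just like treatment versus placebo.)

# Illustrative only: invented counts. Anecdotes show one cell
# ("started with why AND succeeded"); the claim needs all four.
counts = {
    ("with_why", "succeeded"): 30,   # the only cell the anecdotes show
    ("with_why", "failed"): 270,
    ("no_why", "succeeded"): 40,
    ("no_why", "failed"): 360,
}

def success_rate(group):
    s = counts[(group, "succeeded")]
    return s / (s + counts[(group, "failed")])

print(f"started with why: {success_rate('with_why'):.0%}")  # 10%
print(f"did not:          {success_rate('no_why'):.0%}")    # 10%
# Identical success rates: in this made-up data "why" adds nothing,
# yet there are still 30 success stories available to cherry-pick.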
You might think the solution is to see the full picture, to look at all the data, but this is not a solution, because data is not evidence. It may not be conclusive; there could be a correlation without a causation. So what is the difference between data and evidence? We hear the word evidence in a criminal trial. Evidence is

(20:27):
only evidence if it points to one suspect. It could be that Tom, Dick, or Harry could have committed the crime, and if the evidence suggests that a man committed the crime, that is not evidence, because it's consistent with all the suspects. So the big data we had about breastfeeding, that is not evidence,

(20:47):
because it could be that breastfeeding causes higher child IQ, or it could be that something else, family background, both causes somebody to breastfeed and leads to higher IQ. Then what do we do as a busy person who confronts data and evidence and information? What we need to do is not to use PhDs in statistics,

(21:08):
not to use statistical pyrotechnics. In my book, I don't have a single equation; it's just to use common sense. So how do we trigger the common sense? If you see something you want to be true, like this, imagine you have the opposite result. So imagine breastfeeding led to lower IQ. That feels wrong to us. We know that something which is

(21:30):
natural should be good. So try to think about how we would knock it down. We would say, maybe the women who are breastfeeding are poor, that's why they can't afford formula, and maybe it's poverty which is causing the lower IQ. So, now that we've alerted ourselves to an alternative suspect, poverty, ask ourselves whether the same explanation might still hold true even if the

(21:57):
result is in the direction we want. So this highlights that often we have the discernment within ourselves. We are able to show discernment and ask critical questions, but we only do this when we find a result that we don't like. So by imagining the opposite, I'm encouraging us to exercise the same discernment when there's a study that we do like. Which leads to the fourth and

(22:18):
final thing: evidence is not proof. It may not be universal. So even if evidence is cast-iron, rock-solid, it only applies in that case. If Tom did kill Sarah, and Tom was the husband, that doesn't mean in every case when a woman dies it's always the husband. But a proof is universal; it applies everywhere. And so let's end with the ten thousand

(22:41):
hours rule, because that's something that Richard mentioned. So Gladwell says that in any kind of field, if you work for ten thousand hours, that will allow you to be successful. But the evidence he cites is evidence on violinists, and so what's true in violin may not be true in chess or neurosurgery.

(23:02):
He's claiming a general rule. So let me wrap up; there's more stuff, but as I said, I'm going to end with the ten thousand hours. So what I'm looking at here: if we're given a statement, check: is this fact what was actually said? Is it a letter? Is it a study? What's the broader context? Were they looking at only hospitalized patients? If you're given facts, are they cherry-picked? Are you being shown the full

(23:23):
picture, both those with the secret sauce who failed and those without the secret sauce who succeeded? If we're given data, is it evidence? Is there only one possible conclusion, or are there rival theories, alternative suspects? If we imagine the opposite result, how would we try to knock it down? And finally,

(23:44):
even if the evidence is rock solid, check the context. Is it something which claims to be about neurosurgery, but the evidence is only on violin playing? Okay, so thanks very much. That was just a quick overview, so let me get to the various questions. I think Richard's prepared a lot, and obviously if you've got some in the audience, I see there's a lot of them there. Alex, thank you so much, on behalf of this group as

(24:06):
well, for preparing that presentation that you've shared with us today. It's illuminating, and there's been quite a lot of talk recently about the biases in academia and how a lot of these studies can be debunked. When you see that, of course, it's vital in your career that you have something interesting, conclusive, worthwhile, otherwise what's the point of it? Now, coming to your ten thousand hours

(24:27):
comment, I have a personal story that aligns with what you're saying, which is that when I was young, like a few people at school, I played the drums, and I practiced. I had a kit and I was playing every night. And I played in the local orchestra and the local band, and I worked really hard at it, and I had lessons, and I played for years and years, and I had a beautiful kit; I was very lucky. And then I traveled up to see some family friends.

(24:48):
I was only young, about thirteen, fourteen at this age. And the son of the friend who I was meeting had just got this new kit about three weeks before, and it was battered and held together with gaffer tape, because they didn't have that much money. And he said, yeah, I've just started playing. I said, go on then, play. He picked up his sticks and he played these drums. Remember, he'd only been playing for three weeks. And I just thought, damn. I was like, that's how

(25:11):
you play the drums. And I knew I could never play the drums like that. We were just simply built differently. And he went on, by the way, to play in some of the biggest orchestras around the world. So I was correct: he was outstandingly brilliant at the drums. And so I think these myths are harmful. Just to come back on to topic, Alex, and especially in the West, because we're supposed to be a product of our

(25:34):
choices, aren't we? Everybody's got the same opportunity to be as successful as you want to be; the only reason you're not is that you haven't made the best choices. And it's a very harmful myth, I think, that's propagated by, like you're saying, ten thousand hours: you just haven't tried hard enough, you haven't done it for long enough. Are we on the same page with this, Alex? Yeah. So you might think, why does something like

(25:55):
the ten thousand hours idea spread? Because we love to have control of our lives. You like to think that you can do anything you put your mind to. We tell this to our kids; we say practice makes perfect. And so the idea that we have complete freedom is great. And this is an empowering book, because if the book tells you it's just genetics, there's nothing you can do about it, nobody's going to buy the book. But if you

(26:17):
buy the book because the book promises that if you follow my particular plan you'll be successful, then you're going to get a lot of buyers. And this book was extremely successful. But then the flip side is that if you're not successful, then it was your own fault. And obviously sometimes you should have personal accountability, and that is important. But the idea here, that you can do anything that you want to, is not necessarily supported

(26:40):
by the data. Yes, okay. I want to bring Paul in for a comment. Welcome, Paul. Hello. I'm in a slightly noisy place, so sorry if there's a bit of background. That's okay. One thing I'm conscious of today, as we talk to Alex, is that we spend a lot of time reflecting on artificial intelligence. When you're hearing Alex's presentation and some of these comments, what does it bring to light

(27:02):
for you around the exacerbated, accelerated risks that we might see from data and AI, as what's being described? I have to say I find Alex's approach very refreshing and much needed in the AI field. And I guess one other reason I'd love to put this to Alex is, for me, there's a rhetoric around AI, Richard, which is that somehow bias is disgusting, and that the main

(27:26):
benefit of AI is that it's going to reduce the disgusting nature of bias in all too weak human beings, so bias checking and so on. And my own view, as a philosopher, is that bias is absolutely beautiful, fundamental, and describes the human condition. So human beings express preferences all the time. Some

(27:48):
are more or less conscious, preferences for life and so on. When they're less conscious, they can lead to either benevolent or very damaging behavior. So we can express a preference, something that goes into our behavior, and good comes from it and everyone's happy. A stand-up comedian can have a bias towards certain kinds of humor, and everyone laughs, and the result is rather good comedy. But equally, some of these preferences, if they're not

(28:11):
conscious to them in a context, could cause harm to other people, because their preference is damaging. But my belief, and this is what I wanted to put to you, Alex, and I'm going to come back to the Middle Ages around the word bias, is that bias is treated as a kind of substance. So human beings express preferences; we're supposed to. It's part of

(28:32):
life. In fact, it makes up pretty much all of who we are. The result of preference is bias. Bias isn't in human beings; it's not in the human brain. It is the outcome of preferential behavior that's more or less conscious and more or less damaging. So I guess the thing I'm asking really, and maybe it's a leading question, is: is there a danger

(28:52):
that if we locate bias in human behavior and forget the word preference, we throw out the beauty and the value of bias actually being the outcome of human behavior, and not something to be eradicated from the brain, usually by AI? It's a really good point and a really good challenge. I've given

(29:14):
quite a few talks, as the book's only been out a couple of weeks, and in giving those talks it's just great to have questions like this, which I haven't had before; it's really illuminating. I fully agree with what you're saying. I actually disentangle bias and preferences into two separate things. So the preference is: what outcome do we want? And it might be that I want a different outcome, I have different goals from you. They're not wrong,

(29:38):
they're just my preference. So my preference might be an easy life rather than huge career success and making a lot of money. That is not a bias; we're just different people, we have different objectives. When I talk about bias, it's specifically about how we use information to achieve particular objectives. So when I talk about unbiased use of information, I'm

(30:02):
not talking about the preference or the objectives. But if my goal is to achieve that outcome, then am I using the best information available? Or am I latching onto something which is correlation and not causation? And I think you're absolutely right, Paul. This is really important for me to emphasize, particularly in future talks: often we conflate bias and preferences, and when we're

(30:22):
calling somebody biased, they actually just have different preferences from us. So if, let's say, I was a strong Remainer in the Brexit referendum, and somebody is a Brexiteer, I could say they're really biased. But it could be that their goal is not just economic prosperity; it might be sovereignty, and it might be other objectives. And then they're not wrong, because they just have

(30:45):
different preferences from me. So with any bias, when I speak about confirmation bias, it's how we respond to new information. Do we use this information to help us achieve our objective, recognizing that our objective might be different from other people's objectives? Yeah. And I guess I'd just ask: do you find in AI all the time that one of the biases may
And do you find in AI allthe time that one of the biases may

(31:06):
be out there? Is that biashas become quite dogmatic in the air.
I feel it has been seen asa default bad thing. So it's certainly
if it's preferences, then it's notbecause that's good. That's just how we
are. And I think it'd bedifficult to make a value judgment that your
preferences are wrong and my preference isright. Secondly, some biases, if

(31:29):
they're shortcuts, they can work.So let's say are heuristic could work.
It could be that if you arewalking on the street and if you see
a fifteen stone but strong tattooed man, you might cross the streets because you
might think he could come and beatyou up. Now, is that biases?
Sometimes you're crossing the street unnecessarily,but sometimes statistically you might be in

(31:52):
more danger if encountering somebody of thatdescription. Then if it was a more
elderly citizen. So while I'm tryingto highlight here is that there are reasons
why we have biases like confirmation biasand black and white thinking, which help
us in many cases, but thereare particular cases where it might lead to
the wrong results. And I'm tryingto highlight that sometimes those cases can be

(32:13):
important ones, and these are not small things, like the opioid example or the breastfeeding example. Thanks, Paul. And these are the weaknesses, aren't they, Alex, the vulnerabilities almost, in our system. And I just want to mention, although we've covered it in great detail as an organization and community, of course, social media, and how these weaknesses that we're describing were systematically exploited with algorithms:

(32:38):
the black-and-white thinking, the biases, the wanting something to be true. And then of course social media found the opportunity to push us into filter bubbles or echo chambers where that could be reinforced, and that was very powerful. So we've seen some of the dangers with AI, which is the amplification of this and the acceleration of these dangers. But we have seen a great

(33:00):
narrative change in social media that's helped us to become much more aware of that. A lot of people now know what I've just been describing. And there's also been a narrative change around data as well. It wasn't that long ago, and actually I spoke to Richard Taylor, the famous author, and even he said at the time, maybe two years ago: I'm okay with these large tech companies having access to my data so long as they're doing something

(33:22):
useful with it for me, aslong as it's helping me. And that
was quite common to hear a fewyears ago, and now it's less common
and people are more likely to say, I'm not sure I can trust these
organizations when they have access to mydata. So how does that come about,
Alex? Is that part of thisthe kind of work that you're describing,
where we do confront these biases ofwhat we want to be true and

(33:43):
we do challenge our thinking. Howdoes that show that it's possible that we're
changing our perceptions, like in thatarea around data privacy. So I think
privacy and the bias I'm talking aboutare somewhat different. So I think data
privacy is just what an organization isdoing with your data. I think that's
different from how do we respond toit to information? And I just like

(34:04):
that just comment on those things thatare coming in the chat. So so
biases are human traits, but ifthey're human traits that lead us to not
using information correctly, then we shouldbe concerned about it. So absolutely,
if it's differences and preferences, thatis completely fair and that's not something that
we want to eliminate. But ifit means that I am going to not
pay attention to a particular piece ofinformation because I'm going to stuggle it out,

(34:30):
then that's something which can lead usto not achieving our objective in the
most efficient way. Thank you youmentioned some of the comments. Perhaps this
would be in a moment if youcould just call out some of the great
comments that we're seeing in the chat. Absolutely, I have few comments and
let's go through those. So Ithink coming back to breastfeeding and that IQ

(34:51):
associated to it, we had acomment from Alex and he said IQ wouldn't
be the major benefit of breastfeeding formore people. It's bonding, immune system,
physical health for both mother and baby. I would never have even considered
IQ as a benefit. So that'sjust a comment from somebody who feels differently.

(35:13):
But this is an important point.But if you look at the studies,
as I mentioned at the start,there are these other benefits. But
if you take those other benefits aswell, and you control for those other
factors, a lot of those benefitsgo away as well. So those are
benefits that actually don't necessarily come fromthe beast its milk itself, but from
these underlying causes. So I pickedup our iq as as one example.

(35:36):
But if you do this, and the Emily Oster book Cribsheet actually does, she systematically takes all of the claimed outcomes. Now, actually, some of those outcomes still survive after those controls. So I think she starts with fifteen, and after you do the controls, about eleven go away and four still stay. So there are some benefits which will still survive the controls, but they're

(35:57):
far less extensive than one might have thought. So it's not as black and white as people would argue. Thanks, Alex, for adding that. Marcus has said, and I think he has given a suggestion or an idea of how he looks at this: I quite like looking or searching for what isn't there but maybe should be; it often seems to give more information compared to the published facts.

(36:23):
And then Paul has added: bias is a form of preference; preference is often beautiful; there's nothing wrong with bias per se; bias is only important to control if the preferential behavior is subconscious and somehow harmful to self or others; reducing bias as a compulsive default behavior is actually a bias in itself, and harmful

(36:45):
to self or others. And then Dennis has added: bias is a coined word in layman's language and not technical. But Marisa feels otherwise: bias that can cause discrimination is problematic, because we understand technically how confirmation bias can be added to the data as well. And now there is one more comment that has

(37:07):
come in: one of the problems is that models, and in particular single-factor models, will never be able to represent or replicate reality; yet because it carries the scientific or research flag or label, humans tend to conclude, oh, it must be true. Then, I think, those are some of the

(37:28):
comments that have come in. Richard, we have a few questions, if you allow me; if we have the time, then we'll add those. There was also, I think, a good question from Nicholas Strong. Could I speak to that? Nicholas asks how you approach cultural variations. I think this is really important, Nicholas. So often we take a study, and the study is done on a small set of people, and then we claim a

(37:52):
universal proof. We extrapolate from it and we believe it applies to all cultures. So often we have these studies which are done on WEIRD people. You might think I'm quite offensive calling these people weird, but this stands for Western, educated, industrialized, rich, and democratic, and that is where a lot

(38:12):
of the research funding is, in the US. We find something which is true in the US, and then we claim that this is going to exist everywhere. So there's various games which people play in psychological settings, like looking at your approach to fairness, and because of some conclusions which are made, people think this is true everywhere in the world. And let's go back to

(38:32):
one parenting example, because I started with that. I'm sure many of you will know the famous marshmallow test, where you give a kid a marshmallow and you say, if you do not eat this marshmallow and wait for fifteen minutes, I will give you a second marshmallow, and that apparently predicts success in kids. And this is so important, this is such a famous finding, that it's

(38:54):
put into school curriculums. Even Sesame Street had the Cookie Monster teaching kids about the importance of delaying gratification. But this was done on Stanford University children. So they're not only Western, but they're rich and they're educated; they're kids of very wealthy parents. In other settings, people can't actually trust the

(39:16):
researcher when they say they're going to give you a second marshmallow later. Why? In some cultures it may well be that there's just not enough food on the table, and if your brother tells you, hey, give me your cookie and I'm going to give you two back, you just don't believe him. Why? Because those are settings where food is less abundant. So, in fact, what is it which has caused the outcome? It's not the fact that the

(39:37):
children are delaying gratification and they're patient, but that children from a tougher environment to begin with are the children who are more likely to eat the marshmallow now, because they've just been brought up in an environment where you should eat whatever food is on the table, because it might not be there tomorrow. So I think Nicholas's point is really important: we can't over-extrapolate from evidence to proof. That

(39:59):
was the last rung of the ladder: a proof is something which is universal. And of course we want diversity, Alex. I remember when I was karting a few years ago, and I kept banging the back of the kart of the guy in front of me, one of the friends that I was with, who I'd just met. And afterwards we compared notes, and I said to him, what's your job? And he said, I'm a pharmacist. He said, what's yours? I said, I work in the commercial sector. I thought, that's why we're hitting each other at every corner: you're slowing down and I'm speeding

(40:22):
up. But we need both types in society. We can't just have people who want to delay gratification; we need others that jump in and try things and take risks before they're ready. And that's all part of the wonderful soup of people that makes up what we need. So there's nothing inherently wrong, I think, in saying, I'm going to grab this first marshmallow and I'm going to get on with it, because I want to make things happen today, even though it

(40:42):
might be better tomorrow. Paul, a question or comment? I guess I'm just concerned that if we dramatize bias as a bad thing per se, with drawings of the brain and so on, we end up with Generation Z. And there's lots of academic, particularly social science, data around this, where this is the

(41:04):
generation that fears to say yes or no. They are the culture of maybe. When you ask them what they think, they say, I'm not sure; they say maybe, because they're terrified that if they come down on one side or the other, they're going to fall into the disease of bias. And I think that's really sad, and I think AI is exacerbating it if it treats bias as a kind of medical-type disease to be eradicated. And I think

(41:30):
we see the harms of this. I saw an article from The Economist where they put a questionnaire in front of young people, and one of the questions was: do you believe that the Holocaust was a myth? And then they reported that twenty percent of young people believe the Holocaust was a myth, which again is clearly nonsense. First of all, who are these people, and why would that mean anything, ever? And second, why would you ask a

(41:51):
question where you put the words Holocaust and myth in the same question? You're obviously going to prompt some people to say yes, because it's a terrible, stupid question in the first place, designed to get that kind of output, so then you can go, oh my god, young people think the Holocaust didn't happen. It doesn't take much, Alex, does it? No, exactly. And I think often things will just have particular connotations attached to them. And so what I'm talking

(42:15):
about, when we want to absorb information, is to look at the information. We obviously start with a belief, and that's fine, but what we want to do is what's called Bayesian updating. So there was a comment earlier that bias is often used in a loose way and it's not technical, and I agree with Dennis's point. So what would the technical definition be of a fair reaction to information? It is

(42:37):
to take our earlier belief, respond to the new information in what's known as Bayesian updating, and then we'll have a new belief. And so this doesn't mean that everybody ends up with the same belief after looking at data, because we all start from different places. So even if we all see the same data, we're not going to all reach the same outcome. So absolutely, we still want to have this diversity.
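(For readers who want the mechanics, here is a minimal Bayes' rule sketch in Python with invented numbers; two people update rationally on the same evidence but, starting from different priors, land on different posteriors.)

# Illustrative only: Bayes' rule with invented numbers.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(claim | evidence) given a prior P(claim)."""
    numer = p_evidence_if_true * prior
    return numer / (numer + p_evidence_if_false * (1 - prior))

# Suppose the evidence is twice as likely if the claim is true.
for prior in (0.2, 0.8):            # a sceptic and a believer
    post = update(prior, 0.6, 0.3)
    print(f"prior {prior:.0%} -> posterior {post:.0%}")
# prior 20% -> posterior 33%; prior 80% -> posterior 89%.
# Both move toward the claim, but one datum does not make them agree.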

(42:59):
But what I am saying is that when we see the information, we will take the information based on its accuracy, rather than immediately dismissing it because we don't want it to be true, or immediately latching onto it because we do want it to be true. We've got about ten minutes left, Alex, so I want to give you a couple of the prepared questions. So, as we mentioned, in your book you critically examine the misinformation that can be perpetuated by well-known figures,

(43:22):
institutions, and so on. Personally, what sort of challenges have you faced when confronting widely accepted yet potentially flawed truths within, particularly, the academic and business communities? Thank you very much for raising this, and let me just give a concrete example. There are a lot of studies out there by McKinsey and BlackRock claiming that more diverse organizations perform better. So they claim

(43:45):
this conclusively proves that diversity improves financial performance. Unfortunately, these studies are extremely flimsy; they make even more basic errors than the ones I highlighted in my talk. I'm not going to go through them, but they're on the May Contain Lies website, along with the concerns that there are. And so one of the problems with speaking out is that people often conflate the objective with the approach.

(44:09):
So if you say, oh, these DEI studies are problematic, people think you must be anti-DEI, you must be racist or sexist, when actually maybe I am supportive of DEI, but I think the better way to look at this is not to reduce somebody to just their gender and ethnicity, because that suggests that if you're a white male you can never add to diversity, when

(44:31):
you can: you can have a different way of thinking. You could be the first in your family to ever go to university. You could be from a different regional background; there's a lot of regional inequality in the UK, and just by having a different accent, people might already make inferences. So then, my own research finds that when you look at a more holistic measure of diversity,

(44:52):
which is not just gender and ethnicity, that does lead to better financial performance. But if I didn't have that other research, I might have had much more pushback, and if I was a white male, I would face more pushback. And I think this is a problem. I think people should respond to what I say based on the evidence, not the color

(45:13):
of my skin. So I've received a little less pushback just because I'm an ethnic minority; they know that I'm not anti-diversity, and because I have a lot of pro-diversity research. But I think we should be able to, and have the freedom to, highlight critiques of studies, even if we are from a particular demographic group, and even if we don't have other

(45:34):
research on other benefits of diversity. And how do you balance skepticism and cynicism in your work? I think this is a great question. What I'm trying to encourage is discernment and healthy skepticism rather than cynicism, and there's obviously a fine line between these. So I think with cynicism,

(45:55):
you might be questioning people's objectives, or questioning the whole movement. When I wrote my articles about diversity, I'm saying I'm only speaking about this particular study. I'm not saying that McKinsey is a bad management consulting firm, and I'm not saying there are problems with DEI in general; it's just limited to this particular

(46:16):
study. And even if this study doesn't hold water, it could be that absence of evidence is not the same as evidence of absence. And I think here, the phrase in football is: you play the man, not the ball. Here I'm actually trying to play the ball and not the man; I'm trying to focus on the particular study rather than the people behind

(46:39):
it. Thank you. Nicola asked a question about the diversity research, and there is some indeed in the book. So the leading example of chapter five is on diversity, but it's relatively brief, so let me just give in the chat where there is more work on it. So what I did on

(47:00):
the website is, because there are lots of things which were in the book but got cut just because of space, or which came out after the book was published, I have tried to put them on the website so people can read them. So here is just a collection of articles specifically on diversity. But on the website more generally there is some of this broader type

(47:21):
of research, on things like the standard marshmallow study; that was something that got cut from the book but is on the website. I think I did go on to your website. I'm so sorry, I forgot to mention that you've got a website for the book. So that was nice; it's a great resource, so thank you for reminding me of that. A few minutes left: public policy and democracy, I want to talk to you for a moment about that. With misinformation identified as such a significant threat to democracy and public policy,

(47:45):
what sort of policy changes do you have in mind, or could you recommend to governments and international bodies, that might safeguard the integrity of our information? I know that's a big question, but some thoughts on that, maybe. The big question is a great question, and I think it's a question because the answer is not obvious. So often people say, shouldn't it be the government who's out there to ban misinformation? Or shouldn't it be the publisher's job to not publish

(48:09):
books with misinformation? Or should universities not kick out professors who produce misinformation? That would be true in an ideal world, but I don't think it's practically realistic, because these forms of misinformation can be really subtle. So, as I highlighted earlier, it could be that you have a statement which is one hundred percent

(48:30):
true, but it's the inferences which are misleading. So I go back to this statement here. As a government, you could never prosecute anybody for saying this; this is true. It's just that the implications that flow from it are misleading. And even if they went on to say, we believe this is because breastfeeding causes higher IQ, you can't prosecute that; that's one interpretation of

(48:52):
the data. And in keeping with Paul's comments earlier, we can allow for people to have their interpretation, so you can't ban somebody for an interpretation that you disagree with. So this is why I think the onus needs to be on the individual. And that's the reason why I wrote the book: to empower any reader to think about the ladder, to think about the

(49:12):
different ways in which we can be misinformed, so that this is something that you can do yourself. So it is suggestive, though, Alex, isn't it? It could be. But is it any more suggestive than saying countries that begin with U have higher GDP, like the United States and the United Kingdom? People would not then say, well, if we are Somalia, let's call ourselves United

(49:36):
Somalia, and then we're going to have higher income. So this is just a statement about what the data finds, and it might be due to my bias that I thought that this was leading. So then I think the role of public policy is to encourage the critical thinking that I've just gone through in our short time together. And I think this can be taught to children. So we now have literacy, we have AI literacy, we have IT

(50:00):
literacy that we're trying to teach to kids, and I think it's not unrealistic to think about statistical literacy. Now, often people think statistics is what we learn at A level when we're sixteen or seventeen. But I haven't presented a single equation today. All I'm saying statistical literacy is about is thinking about alternative explanations for the same data. So with kids, we read murder mysteries to them,

(50:22):
where it's often not the most obvious person who committed the crime; they know to look beyond the obvious. And similarly, if we see some data, a correlation, think about: are there non-obvious explanations? And I also put a few brain teasers in the book, which are ways in which we can encourage kids to think about alternative explanations. So I do think the role of policy,

(50:43):
rather than regulating misinformation, is to give people the tools to spot it and to make sure that they're not falling for it. Yeah, thank you. Vital this year, with all the elections coming up. My last prepared question, and we've touched on it, but it's more of a summary in that sense as well, because you propose various strategies to combat misinformation; it's pretty much been the focus of our whole topic today. Could you elaborate on and summarize some of

(51:06):
the most effective approaches individuals can adopt to discern truth in the age of misinformation? And as a second part to that, there's a lot of technologists obviously on this call: what role do you see technology playing in aiding these efforts without exacerbating the problem? Thanks, Richard. I think the tool of imagining the opposite is useful. Why? Because this harnesses our discernment and makes sure that we

(51:29):
are applying the same discernment to something that we like as to something we don't like. And I think, more broadly, just try to look for different viewpoints on contentious issues. So in the Brexit referendum, I was extremely pro-Remain, and I thought that anybody who supported Brexit had to be uninformed or xenophobic. So then I forced myself to go to some talks by Brexiteers, and I realized they

(51:51):
were very logical. So even if I didn't agree with the conclusion, I could see there was a logic to how they reached their conclusion. There were some really illuminating talks out there that I saw, and I was just embarrassed afterwards at how narrow-minded I had been about this issue, just seeing one side of it. I still came down on Remain, but I realized that this was not as open-and-shut an issue as I had been led to believe.

(52:13):
Then, what is the role of AI out there? I think the role could be useful. So as you highlighted, Richard, when you looked at ChatGPT on this breastfeeding question, you got a different answer to what I got in Google, because I looked at Google before the age of ChatGPT. So the idea here is that if indeed generative AI can look out fairly and can find the scientific consensus, so it's not just swayed by one paper,

(52:37):
Then this will help us, andthis will hopefully avoid the issue that
you can always find one paper tosupport whatever you want to support. So
if you want to believe the vaccinationalleast autism, you can quote Andrew Wakefield's
paper. But maybe chatch ept wouldfind that this paper was debunked and actually
scientific consensus is something quite different.I'm with you for numerous times about Google

(52:58):
search hoping to find an article willreinforce my view on something for sure,
and statistically, just to reinforce yourpoint. Statistically, we are living in
a simulation. There's no doubt aboutthat if you look at the maths.
But I don't believe that we areinside a computer program. I counter the
statistics in that particular area. Alex, We've come to thirteen fifty nine here
in the UK, very conscious ofyour important time. Huge thanks for many

(53:22):
things, for one, for writinganother brilliant book, and we continue to
follow your own journey and story andsupport you anyway that we can. Of
course, for spending some time withus and being so full of your normal
energy and positivity that we've seen fromyou before and hope to see again.
So thank you for what you do, and thank you for your time today,
Alex, and thank you for yourtime, and thank you to everybody

(53:44):
for giving up an hour to listen to my views. And thank you to you, Richard, and Jason.