Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Dan Ariely is a globally recognized professor of psychology and behavioral
economics at Duke University and the founder of the Center
for Advanced Hindsight. His research delves deep into the intricacies
of human behavior, particularly focusing on the irrational patterns that
influence our decisions in areas such as finance, health, and
(00:21):
personal habits. Dan's journey into behavioral economics was profoundly shaped
by his personal experiences overcoming severe injuries, which sparked his
interest in understanding and improving decision making. Dan's work has
reached a vast audience through his books such as Predictably
Irrational and The Upside of Irrationality, and his engaging TED talks,
(00:45):
which have collectively garnered many millions of views. These platforms
allow him to communicate complex behavioral science in accessible ways,
helping people apply these insights to enhance their lives. As
a leading voice in the field, Dan continues to influence
public discourse on how behavioral economics can inform and improve
(01:07):
real world decisions. His work not only advances academic understanding,
but also has a tangible impact on how we navigate the
challenges and opportunities in everyday life. Wonderful introduction. I mean,
I'm so glad to meet you here. Thanks Richard for
this opportunity.
Speaker 2 (01:27):
Definitely, you're hugely welcome.
Speaker 1 (01:30):
And there we go for the introduction of Richard. Richard
Foster Fletcher is the executive chair of
MKAI dot org, a global community dedicated to advancing ethical
AI development and implementation. With a profound commitment to fostering
responsible technology use, Richard leads initiatives that empower leaders to
navigate the complex landscape of AI with integrity. His work
(01:55):
focuses on critical areas such as bias mitigation, privacy protection,
and the broader societal impacts of AI. As an influential speaker, strategist,
and advisor, Richard collaborates with governments, businesses, and educational institutions
to promote ethical standards in AI. His insights into the
ethical dimensions of AI and data science are sought after
(02:17):
across various sectors, and his contributions have been widely recognized
for driving meaningful change in how technology interacts with society.
Richard's mission is to ensure that AI technologies are developed
and deployed in ways that prioritize human values, inclusivity, and fairness.
Through MKAI and his broader efforts, he's committed to shaping
(02:39):
a future where AI serves the greater good, fostering a
more equitable and just world.
Speaker 2 (02:46):
And let's talk about Dan's new book. It's got a four
point four rating at the moment, which is pretty impressive.
Speaker 1 (02:55):
Dan Ariely's latest book, Misbelief: What Makes Rational People Believe
Irrational Things, explores the psychological underpinnings of why seemingly rational
individuals come to embrace irrational beliefs. The book is particularly
timely as it delves into the spread of misinformation and
the psychological mechanisms that make us susceptible to it. Drawing
(03:16):
on Ariely's own experiences with misinformation and conspiracy theories directed
at him, the book examines how stress, cognitive biases, personality traits,
and social influences can lead people down a funnel of misbelief.
This process gradually distorts their perception of reality, often leading
to a breakdown in trust and social cohesion.
Speaker 3 (03:40):
Fantastic. Everything you've said has been so succinct and complimentary.
I would like to say that my journey down misbelief
started before the most recent AI revolution. But it came
toward the end of my kind of trying to understand
(04:01):
the psychology, and it got me to be very concerned
about what the world of AI would do. Kind of
in this book, I basically describe a machine, a virtual machine,
that takes people and gets them to start believing in
all kinds of things that are not good for them.
They're not good for society. It attacks emotion and stress, cognition, personality,
(04:27):
the social system. And the main kind of finding I
think in the book is those misbeliefs are not for nothing.
They are a response to a need. People are stressed,
they don't know what is happening. They're trying to find
a solution, and they're looking for an answer. And I
(04:49):
can give you some examples of this. For example, take
tribes of fishermen that fish in the ocean versus those
that fish in a lake. The ones who fish in
the ocean have a more unpredictable life and they have
more superstitions. You know, when life is kind of
out of control, we have a need to control the world.
(05:13):
If you're an animal in the jungle and you think
there might be a tiger, you hone your system
to look at every movement of the leaves, because maybe something
is hiding behind there. So we're tuned to finding stories
that explain what's happening in our very complex life. And
we've created a system collectively as humanity that is taking
(05:38):
those moments of weakness and pushing people into a very
very bad, bad direction. And of course AI could vastly
accelerate all of that. And of all
the things to talk about with AI, you know, one of
the things that I am very concerned with is this:
imagine that I tailored for each of you the exact
(06:00):
story that would get you to take one wrong step
in terms of your beliefs. I would kind of stitch it together.
I would know what you believe in. I would start
with things you believe. I would add one thing in
the wrong step. I would identify the point in time
that you're feeling stressed and you're looking for an alternative understanding. Right,
you can see how that effort to derail people
(06:24):
could be very, very successful.
Speaker 2 (06:28):
Do you think we acknowledge how cognitively vulnerable we all are?
Speaker 3 (06:32):
You know, we don't. I've been studying irrationality in general for a
very long time. But one of the most
interesting irrationalities is that we don't accept our irrationality. You know,
we can say, oh, it's these other people that make mistakes,
(06:55):
and you know, I ask people, I say, do you
have a clear opinion about global warming? And lots of
people say yes. And then I say, and what have
you actually read? And then some people say, the UN
report. And I said, did you really read the
UN report? And most people say no, I read the summary.
(07:19):
Some people say, I haven't read it, I read something about it.
You know, the reality is that the world is
very complex, and we need lots of trust. We
don't read all the information about everything. We need lots
of trust. And as this trust is breaking, lots of
(07:41):
things are breaking. By the way, the story of misbelief,
so the psychology of that is interesting and that's what
I mostly do in the book. But the wrapper around
it is a wrapper of trust. Imagine I asked you the following question:
imagine the person you're dating or married to started believing
(08:04):
in some conspiracy. Which conspiracy would be the least harmful?
Most people say, believing that the Earth is flat. Why
because, you know, if you misbelieve about cancer treatments or
vaccines and so on, you can actually make bad decisions
in life because of that. If you believe the Earth
(08:24):
is flat, like you can't change the curvature of the
Earth with your belief. It's harmless. But it's not really harmless.
And why is it not harmless? Because misbelief has two components.
It's about believing in something that is not true, but
it's also adopting it as a perspective from which we
look at life. And the people who believe that the
(08:48):
Earth is flat also believe that NASA is lying to them,
and the US government is lying to them, and every
pilot knows the truth, and they're lying to them, and
every government is doing it, and there are no satellites.
By the way, some extreme versions also believe that
there's no Australia, who knows why anyone would invent that. But anyway,
(09:11):
but just think about somebody who believes that there are
so many powerful organizations, NASA, governments, pilots, and they all
lie to them. Now you wake up in the morning
and it's not just that you say the Earth is flat,
You say, what else are they hiding from me? What's
(09:32):
their intention? Right? So now you start doubting everything,
and you're frightened, and you think there are big forces
in the world that are going to influence us. And
it's a very very frightening proposition. By the way, it's
very good for online media because now you have a
cycle that maintains people's engagement and creates a
(09:55):
social group and all kinds of things like that. But
for humanity it's a very bad effect.
Speaker 2 (10:01):
That feeling of powerlessness where you try and falsely create order.
I mean, yeah, I mean when you look at it
and step back logically, I mean, I have a relative
who believes that they've been putting nanofibers on the
end of COVID testing kits to get them in your
brain and track you. Well, first of all, that technology
doesn't exist, yeah, you know. Then, following that, it would
(10:22):
cost somewhere between sixty and one hundred billion dollars to
roll that out across the COVID tests. Thirdly, the cooperation
required from the governments is unreasonable and impossible. And fourthly, come on,
I mean, it's barely possible to get anything done in the world,
let alone organize that in a few weeks.
Speaker 3 (10:36):
But you know, just think about the
damage that a belief like this is causing. Not just
about you know, would they take the vaccine or not,
but what does it do to other vaccines? Like I've
seen people because I'm spending so much time in the
dens of misinformation, I see lots of people who are
(10:58):
now deciding not to take cancer treatment. We just spoke
to somebody like that, right? The moment you lose trust,
what else would you not do?
So, you know, it's very hard when we start not
knowing what to trust. And there's a story, or
(11:21):
a metaphor that fish don't know that they are in
water because they're surrounded by water. To some degree, we're
surrounded by trust. We trust that the bank will give us
our money back, and that somebody inspected the elevator, and
that if we buy a salad, somebody washed the lettuce,
(11:42):
and we trust that somebody inspected our brakes. And, like, if
you think about it, we trust lots and lots and
lots of things. As this trust erodes, lots of bad
things happen. And if you think about it, imagine we kind
of rolled the time ten years back, and I
asked you to kind of give me the list of
the five main problems the world needs to solve. I
(12:06):
don't think that disinformation or trust would have been one
of the top five. I think it is one of
the top five. Now why because I don't think we
could take any action in any direction until we solve
that problem. Just imagine any direction we would want to move.
The first thing would happen is that this would be
(12:28):
a political question, and then of course it will be
very hard to move forward.
Speaker 2 (12:36):
What's the story about AI, do you feel, at the moment?
Speaker 3 (12:41):
Well, I think there are lots of stories about AI. I
don't think it's just one. On the lighter side, one
of the people in my research lab at Duke tried
to get one of the LLM models to give her
a title for her paper, and I said, don't do it.
(13:03):
I said, it's such a creative task. You'll feel so
much pride in your paper if you did it yourself.
We're not in a game of efficiency here. Take
the time, think about it. If you don't find anything
in a week, okay, get some help. But I said,
don't take the joy away from yourself. Of course she
didn't listen, she was too tempted. But I
(13:26):
think that's on the lighter side. I think there's a risk
that we will take shortcuts in
places that could give us a lot of human satisfaction.
You know, I haven't studied this, but I think,
you know, if I wrote a poem, versus if I got
(13:47):
an AI model to write a poem for me, I
suspect that the poem that was written for me would
not feel the same.
Speaker 2 (13:56):
You know.
Speaker 3 (13:56):
I know that people take credit, even for things
they haven't done, quite easily, but still, I think there
would be a difference. So I think
the pride of creation, the feeling
of ingenuity, I don't think it's just about mechanically making
(14:17):
things faster. So that's, let's call it, the
problem of the rich, right? It's something
that I think would reduce quality of life for
the people who have the time. I think we don't
want AI to substitute for the things that give us pride
(14:38):
and meaning and the sense of accomplishment.
Speaker 2 (14:40):
And so on. So let's use that as a means
to dive into our main topic. Because it can do
that in education, can't it? It's going to add, it's
going to take away. We have a technology that's very
much out of the box. It's used apparently by seventy
percent of students already and thirty percent of teachers. You've
(15:02):
mentioned trust on this call already. So we've got two
parts of this, you know, one is the use of
AI in education, and then the second part is the
trust that we have to use that AI in education
knowing that it's a freight train that we can't necessarily
slow down or even stop. So as we get into that,
what's the state of education today? If we look at,
(15:22):
maybe, just up to age eighteen for me here to
start with, in terms of the schooling part, like, how
big a problem do we have? What are we actually
trying to fix here when we think about these technologies?
Speaker 3 (15:34):
So recently, I have a group here in
Israel where we have created a survey technology to check
on how students are doing, and we use it for
SEL, social emotional learning. Is there bullying? What's happening in
the class? It's a system where students go in and
(15:54):
answer a few questions. It's
automated, and the teacher gets, say, a map, not
focusing on the means, but focusing on the outliers. We
want to understand the kids who are not in a
good state. Anyway, so we had the system. It's been
working for a while, and we recently added an AI
(16:15):
component, and we added math and reading skills, and the
AI component we use to generate the reading tasks. And
this is fantastic, because I can take you, I can
find out which topics are interesting for you,
and I can create it. Like, why shouldn't we create
(16:36):
a good reading comprehension text for you that is based
on something that you're interested in, rather than something you're not interested in?
And then, on top of that, we can grade. An
AI model can grade the result, and it's fantastic. It
cuts an unbelievable amount of time for teachers. And also
the grading is much more nuanced. It's not just how
(16:57):
are you in reading? It has multiple dimensions of how
you are in reading and in math, and where exactly, and now we
can create the next version for you. So I think
on the grading side, and
on the text, the material generating function, I think
(17:17):
I'm a huge fan of AI. I can see all
the difference it makes.
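[Editor's note: a minimal sketch of the kind of pipeline described above, generating a reading passage around a student's stated interest and asking for a multi-dimensional grade. The function names, the prompt wording, and the call_llm placeholder are hypothetical illustrations under those assumptions, not details of the actual system Dan mentions.]

```python
# Hypothetical sketch of the personalized reading-task pipeline described above.
# `call_llm` stands in for whatever model API the real system uses.
import json
from typing import Callable

def generate_reading_task(call_llm: Callable[[str], str], interest: str, grade_level: int) -> str:
    """Ask the model for a short passage plus comprehension questions
    built around a topic the student actually cares about."""
    prompt = (
        f"Write a 200-word reading passage for a grade {grade_level} student "
        f"about {interest}, followed by three comprehension questions."
    )
    return call_llm(prompt)

def grade_answers(call_llm: Callable[[str], str], passage: str, answers: str) -> dict:
    """Request a rubric-style grade with several dimensions rather than one score."""
    prompt = (
        "Grade the student's answers to the passage below on a 1-5 scale for "
        "comprehension, inference, and vocabulary. Reply as JSON with those three keys.\n\n"
        f"Passage and questions:\n{passage}\n\nStudent answers:\n{answers}"
    )
    return json.loads(call_llm(prompt))

# Example wiring with a stub model, just to show the flow end to end.
if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:
        if "Grade" in prompt:
            return '{"comprehension": 4, "inference": 3, "vocabulary": 4}'
        return "A short passage about dinosaurs..."
    task = generate_reading_task(fake_llm, interest="dinosaurs", grade_level=4)
    print(grade_answers(fake_llm, task, "The student's written answers go here."))
```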
Speaker 2 (17:23):
To be more efficient, or to be more effective, or both?
Speaker 3 (17:26):
Like you know, one of the things we learned in
COVID was that autonomy was incredibly important. That you know,
during COVID, kids were studying at home. If you gave
them material they didn't care about, they didn't read. If
you gave them material that they loved, you know, you
got them involved. And there's the idea that
(17:49):
if I have a class of thirty kids and everybody
needs to read Chekhov, everybody needs to read Chekhov. But
if I can find a paragraph of Chekhov or a
chapter that they would like more than others, or maybe
include a few other authors, everybody could benefit. So I
think that increasing motivation, quality of grading, consistency, saving time
(18:13):
and effort for the teachers, making it more interesting for
the students. I don't see a downside to that.
What I do worry about is
the system where students
are going through the mechanics but don't come away with a better
understanding of the world. And I think the same thing
(18:37):
happened a little bit in math, where, you know, at
the end of the day, when you say,
how much is seven times eight, or whatever
it is that you're thinking about, you have kind
of a mental model of math. What's the derivative? You know,
you certainly don't have an accurate answer in
(18:59):
your mind, but you have kind of an idea of
what's happening with a circle and a square, and what
happens with time and distances and dividing, and you have
kind of a general model, almost like a tool that
you can apply. And when kids are just using calculators,
they don't develop this model, so they can't tell you
(19:24):
more or less what something is. Like, I love
the idea that if I say, please multiply
seven by eight, but only tell me the range, you
can tell me the range. And why can you tell
me the range? Because you say, okay, I've figured out
kind of a general sense. You know, how much is
three hundred and twenty nine multiplied by five hundred and
seventy two? Give me a range, like you know, we
(19:46):
can do that, right, We can have a sense of
where these things are. And we call this a mental model,
and we develop mental models of the world,
and that helps us navigate how the viruses work. We
have a mental model, and how food works on our bodies.
We have a mental model, and what surgery does, and
(20:09):
what's democracy. We have a mental model. But to get
to a mental model, you need to have a holistic representation.
And if all you do is you say, I'll feed
the input, give me the output, and I'm not going
to go on the journey, you're not going to get
a mental model. And that's one of the things that
(20:29):
worries me. Right, So imagine that we did the whole
semester on democracy, and imagine that you used some LLM
model and you gave all perfect answers. And let's also
say that you not just gave perfect answers. Let's just
assume you read them. You read them, but you didn't
(20:50):
go through the process of wandering, you didn't make mistakes,
you didn't go into wrong corners. What do you really know?
Do you really understand democracy? I don't think so. I
think that what I want you to do in democracy
is to understand what is yes and what is no,
what there is in the gray areas of democracy. I
(21:14):
want you to have a mental model of democracy. And
if I just give you an instant of an answer,
even if it's a perfect answer, I don't think you'll
get there. And now you can ask yourself, why.
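[Editor's note: a worked version of the range estimate Dan describes above (329 multiplied by 572), the kind of rough bounding a mental model of arithmetic supports:]

```latex
% Round both factors down and up to bound the product, then compare with the exact value.
\[
300 \times 500 = 150{,}000 \;\le\; 329 \times 572 \;\le\; 350 \times 600 = 210{,}000,
\qquad 329 \times 572 = 188{,}188.
\]
```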
Speaker 2 (21:25):
Do you care?
Speaker 3 (21:28):
I care because I think people can't think critically this way.
I think people can't think about the boundaries correctly. They
can't think about changes they want to make. So I think, now,
can we solve it? Can we create an LLM model
that would help with the mental model? Maybe. But in
(21:49):
the current state of affairs, I think that the idea
that it's a question-and-answer equation is not correct. It's
the question and the wondering that is the important part, and
we're not going to experience that in the current way
we're using LLMs, or students are using LLMs.
Speaker 2 (22:06):
But it's an opportunity and it's a threat. I mean,
you know, we've all played around with ChatGPT. You
can ask it questions that you've been wondering since the
dawn of time. You can get on to Blinkist, read
a bit about a book you're interested in, jump on
and just start chatting about that book. It's a phenomenal
resource for comprehending and understanding things. Now, it's a bit
(22:29):
left leaning, it's a bit woke, it won't really give
you the good stuff sometimes because it's programmed to never
insult anybody in any form possible, which makes it a
slightly dull intellectual sparring partner. But anyway, of
course ChatGPT is not the only tool in the world,
and there are others with more freedoms, so it offers that.
But just staying zoomed out a little bit, I mean,
(22:49):
to what extent is what you're describing? How is it
disrupting or actually breaking the existing processes we have in education?
The standard process of trying to get information understood in context
and grade that understanding, does that still work now
that we have these tools?
Speaker 3 (23:05):
So I think there are some things where I
just want you to remember facts or understand some story,
and for that it's perfectly fine.
Speaker 2 (23:13):
Right.
Speaker 3 (23:14):
So if I say, hey, please tell me you know
how ancient Greece was structured in terms of the court system,
and you say to ChatGPT, please do this,
and you get an essay and you read it three
times and you even remember it. Maybe it's just
the facts that I want you to remember, and that's enough,
(23:35):
or a starting point for a conversation, that's enough. But there
are very few things like that, you know, we're not
really in a memorization world. What I really want you
to do is I want you to think critically about democracy.
I want you to be able to use your understanding
about democracy to think about who do you want to
(23:57):
vote for, and what you care about? Which organizations do
you want to give money to, and what are the
threats to society? And I want you to figure out
what do you want to learn next? And I think
that without I'll give you another analogy. I wrote some books,
(24:21):
and every time I write a book, there's about twenty versions.
I write one version, and there's another version, there's another,
and you know, you could look at the last version
and you can say, why didn't I just write version twenty?
Like why did I go through all the agony of
writing nineteen versions early on? But there's something incredibly important
(24:44):
in the journey. There's something incredibly important in trying to
figure out what do I really think? What do I
really understand? Convincing myself that this is actually not a
very useful direction. And I think this meandering around and
trying to figure out what is right, what is wrong?
What do I care about? What is going on? Is
(25:07):
is incredibly. It's incredibly important for a real representation of knowledge.
And I think that that if we just give people
the final answer, I think we will we will miss that.
So I want people to fumble around.
Speaker 2 (25:26):
So how much is that happening at the moment?
Speaker 3 (25:30):
Not that much. I mean, there is...
Speaker 2 (25:32):
It doesn't sound like the state of it is great at the moment.
Speaker 3 (25:36):
Look, the current state of education anywhere in the world
is not that exciting. So it's not as if, you know,
we have a very high bar that we're fighting against.
But there are some approaches to knowledge structure,
and there are some demands to think about this and
(25:57):
so on, and creating this, I think, will
make it even harder for people to do it. Now,
should we ban AI? Of course not. But should
we pick the places that we want people to have
a good representation of the world and work on it?
I think the answer is absolutely yes.
Speaker 2 (26:19):
How concerned are you? How blind do you think some
of the leaders in the educational space around the world
are to think about all the work that you've done
over the years to understand the way that people act
and the incentives and the disincentives and cheating and lying
and bribes and all those things that you've spoken about
in the past, which have been phenomenal talks and books
and so on, like, you know, how much of that
(26:39):
do you think is being considered as they rush to say,
my goodness, we've got to get AI now, we've got
to get our teachers using it. We've got to, you know,
worry about it.
Speaker 3 (26:51):
Your first question today was how aware
are we of our biases? And the answer is not
that much. And I think that it is very hard
to predict the long term process. So you know, I
walk around the world with this very funny looking half
(27:13):
a beard, so no hair here and here, and the
story of this half a beard is it
has three components. I was badly burned, so all of
this is scars, so there's no hair on this sign.
At some point I went on the month long hike,
so I got the half a beard, and people started
(27:34):
telling me thank you for the half a beard because
they felt it gave them some courage with their own injuries.
For example, there was a woman who told me that
she had a car accident when she was seventeen. She's
now over fifty, and she never wore a dress and
now she's going to start. But the really interesting thing
(27:54):
that happened to me was that about four months into
this half a beard, I started feeling more accepting of
my own injury. There was something about, like, I think
there was something damaging about the ritual of shaving
every day. I would start the day with smooth skin,
no hair, little black dots here, and then, once I stopped shaving
(28:17):
every day, it felt less asymmetrical, and letting go of
that was very healing, like a really interesting journey
of self acceptance. But I couldn't have predicted it. It
happened for other reasons. Things that happen to us gradually
over a long time, we are really terrible at predicting.
(28:40):
So if you say, what will happen to the education
system, forget a different AI, just the current version of AI, five years
from now? We have no capacity to predict it. It will
certainly not be what it is today. Right? In the
same way that when we think about social media and
we say we now have kids that are twelve that
(29:04):
had four years of social media, it's not something we
would have predicted when social media started. So I think
I think we're not even close to grasping the implications
of the current technology, and of course not close to
grasping the implications of what GPT-17 is going to be.
Speaker 2 (29:30):
But we can learn something from social media and I
won't put words in your mouth, Dan, but it has
been very harmful in many ways, some of which we
probably had sight of. I say we, meaning the people
that were developing it, and some that we didn't. But overarchingly,
it's hard to say this has been a net positive
thing for mental wellness. What are the lessons then for this?
Speaker 3 (29:52):
So I think, look, this is a
very good analogy, because it wasn't too long
ago that we thought that Facebook was bringing democracy. Remember
the Arab Spring in Egypt? Right? We thought Twitter and
Facebook are advancing democracy. The world is going to be
a very different world with them, and of course it
(30:14):
turned out to be quite the opposite. So, you
know, what are the lessons? I think
the lessons are that unsupervised technology is not necessarily going
to be advancing for the betterment of humanity. You know,
(30:34):
it's true for everything. We would never allow medicine
to say, you know what, doctors just develop whatever you
want and sell it to people. No, we have amazing rules, right.
It takes about a billion dollars to pass the FDA tests.
We would never say to car manufacturers, make whatever cars
you want, no regulations. You know, I think
(31:01):
it looks like it has really good promise, but we
have to be incredibly careful. I worry about the
lack of regulation. Okay, my analogy for this is cars.
Every year we have more rules for cars, and people
keep on making mistakes. Think about cars, crazy amount of regulation,
(31:22):
seat belts, anti lock brakes, bumpers, lights, horns, things that blink,
mirrors on the side, things that don't let you close
this distance. I mean, you know, basically, the engineers of
cars have figured out that we're useless drivers, that we
can't even focus for fifteen minutes without help. And they
(31:45):
did not come and change people. They didn't say, oh,
you know what, let's do more driver's ed. They said,
people are useless drivers. Let's accept it, and let's make
cars that would compensate for how useless we are as drivers.
Speaker 2 (31:58):
And to your point earlier, we think of ourselves as
brilliant drivers despite what you just said.
Speaker 3 (32:04):
Yeah, one of the biggest tests of overconfidence is how
good of a driver are you? And the vast majority
say they are above average.
Speaker 2 (32:10):
I am above average, But I take your point about others.
Speaker 3 (32:13):
Yeah, yeah. And I think, if we're
useless drivers, what are the odds that we are not
useless information consumers? Very low. Now, with driving, at least
we see the bodies on the side of
the road when somebody makes a mistake. It's very hard
to not see it. When people make information mistakes, they
(32:40):
stop taking a treatment, they reduce doing X, they start
doing Y, they spend their money on, you know, whatever
fake medicine. We don't necessarily see it in
the same amount, but it accumulates, it
has bad effects. So I think that there is no
area of human endeavor that has flourished without thought for regulation.
(33:07):
And I think for something as extreme as this, I
think it would be crazy to think that it will
advance well without serious regulation. And then
the question is what is the regulation? How fast will
the regulation come? You know, regulation usually doesn't just pop up.
Usually people do something bad, you know, drive into trees,
(33:30):
and then we say, okay, let's do something about that.
With AI, you know, with information technology in general,
we have been very slow to react. Like, for example,
the damage of social media has been very clear
for quite a while, but we're very, very slow in
(33:53):
reacting to it. And time. Time is very expensive
in the domain of information. You know, just think about
this year twenty twenty four. I forgot the number, but
about half the world is going to vote. You know,
incredibly important decisions are going to happen in an
(34:15):
information environment that is very unfavorable.
Speaker 2 (34:21):
People say to me, you know, are you scared of
the damage that AI could do? Am I scared of
the harms? And I think, well, we were doing a
pretty good job of completely ruining the world before AI
came along. We didn't need AI's help for that. We've
almost lost the insects. Now, that's not a good one.
Speaker 3 (34:41):
I'll tell you what I think is different.
And my story for this goes back to Coursera. So
Coursera is a really wonderful platform trying to create university
courses for the whole world. And when Coursera started, I
was very excited and I recorded what I think is
(35:02):
a very good course. I hired somebody to do a video.
I didn't just sit in the class. We went outside
a little bit. I really worked very hard to create
a very good course. And I taught it twice, and
every time there were about two hundred thousand people, which was
a lot for the time. Maybe now it's a smaller amount.
(35:24):
And every time from the two hundred thousand people, I
had one bad apple. And the first time I taught it,
there was one person who basically sexually harassed women on
the discussion boards of the class. And on the second
(35:45):
time there was a guy that every time there was
a bug in Coursera, he wrote my university and complained
that I was doing experiments on them without permission. Of
course I wasn't doing experiments without permission, but every time
the university gets a complaint like this, they have to investigate.
(36:05):
So here I am spending these weeks teaching two hundred
thousand people, lots of work, and all of a sudden,
almost every week, I have another investigation that starts and
I have to divert energy and attention. And so it
was terrible. And after that I talked to
Coursera and I said, look, I'm not going to teach
again if you're not going to allow me to kick
(36:27):
people out. And they said, but our rules are that
we allow anonymity, and if you kick somebody out, they can
come back again under a new name. And I said, I
can't do it. I don't know why,
but in my class, every time, I have
one person who is really spoiling it for everybody, and
we couldn't agree. Now, I think that, going back
(36:49):
to your question, I think that information technology is giving
disproportional power to bad actors, and I think that one
of the things that we need to do as a
society is to think about how do we protect ourselves.
I love Cossera. I want to create an open system.
(37:12):
But at the same time, if we create an open
system and we have bad actors and it spoils it
for everybody, we're not really providing the value we want to.
Speaker 2 (37:22):
But these are people with different motivations to us,
rather than bad people, and those motivations can lead to
bad outcomes.
Speaker 3 (37:30):
That's right. But the same thing, I guess, and the
reason I'm saying it is the same thing is happening
online in all kinds of places. Right? The
amount of bots that are roaming the world, the amount
of funding for all kinds of videos. You know, these
(37:53):
are people with very different intentions, and what we're doing
is we're giving them this very powerful platform and making
it very cheap for them to create tremendous damage. You know,
democracy is very, very beautiful and very, very fragile.
If you have somebody who is elected in a democratic
(38:17):
system but decides not to play by the rules, they
could do lots of damage. We are creating very fragile systems.
Speaker 2 (38:28):
Yeah, I mean, and they are fragile. The applications that
we're talking about, to name them: the Facebooks, the
Xs, the Instagrams. You know, the algorithms are basically quite dumb,
if you like, and so it's quite easy to see
what you have to do to get a post to
get picked up by the algorithms. We know all this
stuff, we've talked about it loads of times: the kind of
reaction that you want to gain from a post will help
(38:51):
it to succeed. So it's a bit like people are
writing their CVs now knowing that they're going to get
read by an AI, not a person, and they get
the AI to write the CV that's read by an AI.
So now we're just second-guessing algorithms, trying to work
out how we break the system or trick the system.
Speaker 3 (39:08):
Yeah, and you know, if you want
to get the job, you're not a bad actor necessarily,
but you could imagine that there could be bad actors
as well. So, you know, this is why, when
I talked about misinformation, one of
the things that worries me is that now, with AI,
(39:29):
I can create tailor-made misinformation for you,
that would push all the right buttons, that would understand
the moment that you're stressed, that would push all the
right buttons and do the job.
Speaker 2 (39:45):
And that's quite easy as well, it's not even difficult,
that's right.
Speaker 3 (39:49):
So, you know, how do we deal
with that? So, I worry about taking
joy out of life. This was the story of my student.
I worry about people not having a good working model
of the world, which will not allow for productivity and
creativity and true understanding and so on. And then I
(40:10):
worry about bad actors.
Speaker 2 (40:16):
So let's think about that for a second, and I'll
come to Jason in just a moment. I'm sure there are some
questions that are burning from the chat. You know,
let's just take ChatGPT, one of the most common
tools used across the world now by teachers and students alike,
run by a company with a board of directors of, I think,
you know, three men, three white men, all very wealthy,
(40:37):
all multimillionaires, or guys who happen to be very,
very good at making money, and the company
has produced a general platform that knows nothing. It's effectively
a well trained parrot. It doesn't specialize in education or
protecting students or anything like that. But it's getting picked
up and used across the board as if it was
(40:58):
a tool built for education. But
it's going to get used, Dan. So what kind of
advice would you have, knowing that they are going to
use it, they are going to implement it? It wasn't built
for this, hasn't got the safeguards in place. It could
destroy creativity. It helps students skip to the outcome rather
than the learning process. How on earth do we navigate this?
Speaker 3 (41:21):
I think there's a couple of approaches. One is to say,
you know, I really love explainable AI. Explainable AI basically
says that we don't want just the answer. We want
to understand how the system got to that answer. That's
one thing, and very hard to do, by the way, and
when you realize how hard it is to get explainable AI,
you understand how much we don't understand about how this whole
(41:43):
system works. But one approach would be to try and
get any of those companies to create an educational module
that has a version of explainable AI and it doesn't
give answers without an explanation of where it came from,
and that talks back to the mental model. How are
(42:05):
things working, right? Why did I get to that conclusion?
I don't care just about the answer.
I want to know something about the building blocks of
how we got there. That's on the side of
what we could ask the companies to do differently.
I think if I was a teacher with students using AI,
(42:27):
first of all, I can think about how to
make it more difficult to work with AI, which is
to basically say, if you're working with AI, fine, but
I also want you to explain to me every part.
I want you to highlight every assumption that was made,
(42:51):
and I want you to rate how accurate you think
this assumption was. So I can say, okay, I can't
fight you doing that, but I'll demand extra work from
you that would do that. Another thing I like,
by the way to do is and that I always do.
I ask people to always do a search for what
they think, for what they're looking for, and also the
(43:14):
opposite search, and you could also ask for
the opposite essay, right? So you could say, okay,
give me an essay about, you know, how
Rome was great, but also give me
an essay on the opposite, and then do the contrast. So
I think we can work a little bit
(43:36):
against that by trying to make it more work,
trying to get them to understand the structure better, trying
to argue both sides of the equation, and even
if they don't do all the work, it
would mitigate some of the harmful thing.
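[Editor's note: a minimal sketch of the classroom exercise described above, asking a model for an essay, the opposite essay, and a self-audit of assumptions with accuracy ratings. The prompt wording and the ask_model placeholder are hypothetical, not a description of any particular tool.]

```python
# Hypothetical sketch of the "argue both sides and surface assumptions" exercise.
# `ask_model` is a placeholder for whatever chat model a class might use.
from typing import Callable

def both_sides_assignment(ask_model: Callable[[str], str], claim: str) -> dict:
    """Collect an essay, a counter-essay, and an assumption audit so the student
    has to contrast perspectives rather than accept a single answer."""
    essay = ask_model(f"Write a short essay arguing that {claim}.")
    counter = ask_model(f"Write a short essay arguing the opposite: that it is not true that {claim}.")
    audit = ask_model(
        "List every assumption made in the essay below and rate how accurate "
        f"each one seems on a 1-5 scale.\n\n{essay}"
    )
    return {"essay": essay, "counter_essay": counter, "assumption_audit": audit}

# The remaining work, in the spirit of the conversation, stays with the student:
# read both essays, question the rated assumptions, and write the contrast themselves.
```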
But going back to the joy part, you know, I
(44:02):
write slower all the time. Like, I'm writing slower
than when I wrote my first book, and I'm enjoying
writing more. It's kind of like drinking
wine, I can't think about it too much. It's
a little slower, and I'm enjoying it more.
(44:23):
By the way, I just finished writing my first
kids' book. It's not out yet, but...
Speaker 2 (44:29):
Thought you were reaching for a glass of wine for
a second. But what are we going to see the
kids book?
Speaker 3 (44:38):
I don't know, but it's a kids'
book on procrastination. But you know, it takes
some time to learn to enjoy the things that
we do, and I think that's
one of the things that like, Okay, this is like
life philosophy. Now. I think that there's lots of things
(45:00):
in life that we tell people this is going to
be painful and miserable, and it will always be painful
and miserable, and you have to do it anyway. I
think eating healthy is like that, and exercising, and
sometimes education is phrased like this, and I think that
this is just not a recipe for good behavior. I think
(45:20):
that we need to figure out how do we help
people find joy in things? Right? So I think that
the education system can also look at itself and say
how much fun are students having and if everything we
do right now it just looks like miserable tasks to
them that they have to do anyway. Maybe we
(45:43):
need to look internally as well and figure out how
do we make learning more fun? How
do we find joy in it? What is
the exciting thing about it? Right? How do we make
learning how to read and write something that people feel
is as good as playing tennis? You know, why is tennis, yes,
(46:05):
and reading and writing, not? I mean, there are
some answers to it, but I think it's
also a good time to say, you know, we had this
run of telling people it's miserable and you have
to do it anyway, I'll give you a bad grade
if you don't, and we have to move to a
model that makes it more fun.
Speaker 2 (46:23):
It's the same with your career. I look around this
room, Dan, and I know a lot of people in
this room have worked really hard to work out how
do they enjoy this, how do they make this work
thing fun?
Speaker 3 (46:33):
But, by the way, working hard... so, human motivation
is kind of amazing. You know, we're not really designed
to want to sit on the beach drinking mojitos. It's like,
you know, that's not the human achievement. We actually enjoy
climbing mountains and writing poetry and doing startups and helping
other people and trying to understand difficult things. Right. Human
(46:59):
motivation is really not about being like a goldfish at rest.
Human motivation is very magical. But we've abandoned lots
of things in kind of human flourishing in education, and
we've made it more and more joyless. We've made it
too much so. So, you know, the dangerous thing is that
(47:23):
if it's unpleasant, if education is unpleasant or it takes
a long time to start seeing it, kids will take
shortcuts and never get the benefits. That's the danger.
Speaker 2 (47:36):
But there's a dichotomy there. You're taking, you know,
the Buddhist approach almost, aren't you? That life should be
hard and that we should enjoy that, in a sort
of stoic way, which of course makes sense. And now
you're saying it's got to be fun. Can you have both?
Speaker 3 (47:49):
I think of joy, not fun. The separation for me is, look,
climbing Mount Everest is painful. I've watched a lot of
people running marathons. Nobody ever smiles, you know, it's not
about fun. It's about a sense of meaning. It's about
a sense of achievement. It's about connection to ancient Greece.
(48:13):
There's a lot of things about motivation. I mean, stoicism
is a different story, but there's lots of things about
human motivation that is not about sitting on the beach
drinking mojitos. It's about, look, we're all here
tackling complex problems, Like we could all be sitting somewhere
else drinking beer and watching some sitcom, right, but we
(48:35):
choose to think about challenging, complex questions. Now, for
a long time, I think the education system has had
the privilege of not having to worry about joy.
But now that students can take so many shortcuts, I
think it's our time. I think we need to do
(48:58):
it in food. How do we get people to enjoy
eating healthy food? I think we need to do it
with exercise, and we need to do it with education.
We can't just say to students, stop
using AI if what we're telling them is very boring
and we're not helping them see why they're studying what
they're studying. You know, one of the things I wanted
(49:18):
to do, and have failed at so far, is I wanted each
kid from first grade to say what they want to
do when they grow up. And I wanted the school
to call it the dream measurement. And I want schools
to be compensated and rewarded and not based on standardized testing,
(49:43):
but based on what dreams their kids have. And I
wanted us to say to the kids, you want to be
an astronaut? Here are the things you have to study.
You know, basically, we haven't really,
and I am saying the education system, we haven't really done
a good job of recruiting the students. We've kind of
(50:05):
used our force to get them to do things,
like prisoners, rather than people who are truly interested.
And I think this is a good time to say,
you know, let's think of them as people that
we need to help motivate in that direction.
Speaker 2 (50:22):
But if we're always going to be bad drivers, aren't we
always going to be bad learners?
Speaker 3 (50:26):
It is true, we are never going to make it as fun,
in the same way that, you know, no cucumber is
ever going to be as interesting in the moment as a
potato chip. But we can get people to change the
ratio of, you know, cucumbers versus potato chips. And I
(50:48):
think we can also change the basics of enthusiasm
about school.
Speaker 2 (50:56):
I still think we should just have an app that
locks your car doors and pops up: no, get the
bus, you lazy sod, you just can't get in
the car. Short term.
Speaker 3 (51:06):
Short term, it's a better strategy. Long term...
Speaker 2 (51:08):
One which just locks the fridge so you can't open it. Uh,
we've got five minutes of our prescribed time left, and
we'll see if you'll permit us just a few more minutes, but...
Speaker 3 (51:18):
You'll understand, I sadly have another meeting.
Speaker 2 (51:20):
But that's absolutely fine. So I think we can take
at least one question from the chat. Can you read
the one that you've spotted that will make the most sense.
Speaker 1 (51:27):
Absolutely. I'm going through the questions, and I think
one of the most important questions I feel for this
forum is, how do we stop misinformation and gullibility of
the public in general. It's by Mike.
Speaker 3 (51:44):
Okay, so there's lots to do. This book
I wrote, Misbelief, I promised the publisher a chapter on solutions.
I don't have a chapter on solutions. I have lots
of little sections I call hopefully helpful, and it's because
I think that there's lots of things we can do individually,
(52:07):
but the real big solutions need to be government based.
And I will not get into the individual solutions here,
but i'll tell you kind of one answer. We have
this bias to trust people. And the reason we have
a bias to trust people is we grew up, evolutionarily
(52:28):
speaking, in an environment of a small village. And imagine
you live in a small village and you're going to
live with those people for a while. You have really
good reasons to trust those people. Why because you're going
to keep on interacting with them, and if somebody misbehaves
toward you, you'll gossip about them and everybody will learn
(52:50):
that they are not good people, and everybody will mistreat them,
and that behavior would die away. Gossip and revenge are
our intrinsic version of a police force and the judicial system.
So we grew up, evolutionarily, in an environment where it
was the right choice to trust people. People were trustworthy.
(53:13):
Then we moved to big cities, then we moved online.
We have no reason to trust people anymore, but we're
still trusting people. Now, should we change people? I don't
think we can. I think the evolutionary forces for trust
are just too large. I don't think we can fight
against it. I also think it's a beautiful thing. But
(53:35):
now what we need to do is to say, people
are trusting. Let's create information systems that are trustworthy. Again,
people can't pay attention for fifteen minutes; let's make cars
that don't demand that they pay attention too much. We
are trusting, it's part of our human nature. Let's not
create systems that assume that we will have good
(54:00):
judgment and will not be overly trusting. So
for me, and I'm talking as a social scientist, everything
starts with what can we expect the human mind to do?
And now let's create systems that are compatible with that, right,
so that is the approach, and
(54:20):
I think if we use that, we would create
lots of improvements. There are other things, of course,
but that's one. Just to give you a couple of
other tricks. I think we need to separate statements about
truth from opinions. Right now, when somebody is saying something,
you don't know if it's truth or it's an opinion.
(54:45):
I think we need to really think about anonymity. I
think there's a good chance that we should not allow
for anonymity to happen. That people should really think very
carefully about their reputation, and there's some other things like that.
Speaker 2 (55:03):
You know.
Speaker 3 (55:03):
I think it's such a big problem that if we
decided to deal with it, I think we could.
But we have to decide to deal with it. And
you know, somebody asked, is there something that is for
sure true? I think the answer is yes, the Earth
(55:28):
is not flat. I think there are things
that are true. But I'm also okay with people being skeptical.
Being skeptical is fine. It is to say, I'm ninety
five percent sure that vaccines are actually very good for humanity.
There's a chance they're not, but I'm quite confident that
they are, and I'm going to make decisions based on that.
(55:51):
But I'm not one hundred percent sure. I think this
is fine. I have to run. Nice meeting all of
you, talk to you soon. Huge thanks, Dan, thank you.