Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jean Gomes (00:03):
What are humans for in an automating world? This is a question of huge significance today. It can be approached at many levels: societal, economic and philosophical. What constitutes a good life when robots permeate every aspect of your work, home, even nature? We've had some lively debates on this topic, and our guest in the
(00:25):
show will help you to form your thinking on this question. Further still, he's been a shaping force in the realities that we now face. Neil Lawrence has been both an academic AI pioneer at Cambridge and a commercial leader heading up Amazon's AI efforts. He's just written The Atomic Human. Its big idea is that when you strip away everything that machine
(00:48):
intelligence can automate, you are left with the core of humanity that we need to better understand and amplify so that our world gets better, not worse. So tune in to a fascinating and important conversation on The Evolving Leader.
Scott Allender (01:24):
Hey folks, welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Scott Allender
Jean Gomes (01:34):
and I'm Jean Gomes.
Scott Allender (01:36):
How are you
feeling today, Mr. Gomes?
Jean Gomes (01:37):
I am feeling the anticipation of getting on a plane to New York to come and see you. After the weekend, I am feeling exhausted. After this week, I've been traveling non-stop. I got locked in a train for several hours. Wouldn't let anybody out. I had to make an escape through the driver's
(01:58):
cabin, which is exciting.
Where were you?
Apart from that, I'm feeling great.
Scott Allender (02:02):
Well, I need to
know more about the train.
Jean Gomes (02:04):
I'll tell you when I
see you.
Scott Allender (02:05):
Okay, leave our leaders in suspense. I'm feeling contented. Had a holiday weekend here in the States, and so I've had family around, and feeling really grateful to have spent time with them, and feeling really grateful and filled with anticipation as well, to be with you and Emma next week in New
(02:31):
York, which is going to be delightful. And I'm feeling particularly enthused about our conversation, because it's such an important one we're going to have today. We're joined by one of the world's most influential thinkers on the future of machine intelligence and the implications for what it means to be human. He is the DeepMind Professor of Machine Learning at the University of Cambridge and a visiting professor at the University of Sheffield. He's written a book called The Atomic
(02:55):
Human, which explores the differences between AI and human intelligence. And Jean, I know from your summer reading recommendations, this was listed as one of your favorite reads of the year. And his experience isn't solely confined to academia. He also spent three years as director of machine learning at Amazon. Neil, welcome to The Evolving Leader.
Neil Lawrence (03:16):
Thanks very much for having me, Scott, Jean. It's great to be here.
Jean Gomes (03:19):
Neil, welcome to the
show. How are you feeling today?
Neil Lawrence (03:22):
Good. Been a tiring week, a busy week. I haven't been stuck on any trains, but it's been lots of work. These last three months have been very busy with teaching, doing workshops, policy work, and talking about the book. But it's nice that it's Friday. As Scott says, it's not a holiday weekend here, but I'm looking forward to hopefully watching Sheffield
(03:44):
United beat Sunderland tonight on the TV, so that'll be nice.
Jean Gomes (03:49):
Excellent. Well, in this conversation, we're really interested in what leaders need to consider in how they think about AI and how they create businesses and organizations where humans thrive. So could we start with why you wrote The Atomic Human? Because you've written it not just for a specialist
(04:09):
audience, it's a wide audience. What does it help the person in the street to understand?
Neil Lawrence (04:15):
I think it's a great question, and part of me hopes that it's about the person in the street getting confidence in what they already know, because I feel the conversation has been such poor quality, but involving some very intelligent people, that the person in the street, whether that's a leader
(04:37):
or a regular person, is having their instincts about what it means to be intelligent, what it means to be a human, undermined. And I think that's deeply problematic.
(04:59):
Everyone's instinct, I mean, is the sense that it's probably a little bit more complicated than what we're hearing from, say, big tech or startup companies. I mean, you're both experts in this, probably to a greater extent than me. But what I felt I could bring to it is, look, even if you do this stuff in machines, it's much more
(05:21):
complicated than what you're hearing. And the idea of the book was to use the machine and the way it does things as a place to stand and look back at who we are and sort of marvel at what we are, rather than, I think, the prevailing narrative, which is one about how we're all going to be made irrelevant and redundant by this technology, which I think is
(05:44):
utterly wrong and very undermining of people.
Scott Allender (05:48):
So let's, let's unpack that a little bit, because first I want to understand: instead of AI, you use the term machine intelligence. Can we start with that? I'm curious to know more about why you prefer it that way.
Neil Lawrence (06:00):
Yeah. And I think, you know, you do these post-book reflections, and you also wonder whether intelligence is the right term. But the reason for machine intelligence was because, I think, the artificial intelligence term has particular meanings in science fiction. It causes people to think of certain things that we expect, these sort of, I don't know, Robbie the Robot-type entities that can
(06:23):
communicate with us and are never wrong, or Data from Star Trek.
(07:34):
It's an approach that triggers something in people, because it's something that I think we've historically thought of as unique to us. So you're effectively being told this thing, that intelligence isn't unique to you. So one of the things I try to do in the book is say, well, okay, if you want
(07:54):
to call it that, and I mean, it's not explicit, but as you know from the book, it sort of highlights this: if you want to call machine intelligence intelligence, then there's a bunch of other things you have to think of as intelligence, whether that's immune systems or social insects or just the ecology around us. And I'm getting more comfortable with maybe that's the way we go, that if you want to use that term for machines,
(08:18):
let's use that for a bunch of other interesting decision-making systems that surround us, that are not the same as us, but work in different ways and do interesting things.
Jean Gomes (08:29):
So, you know, I love the book. It's brilliant. It's incredibly ambitious as well, but it's setting out to change the way that we think about AI. So let's dive into a little bit more detail about what we're getting wrong about the way we're thinking about its role in our futures, across all domains: business, society, you know, the media. What does that look like for people?
Neil Lawrence (08:56):
I think the primary thing is that we have a tendency to anthropomorphize. And when we see an entity that is making decisions that are of the form a human might make, we assume that those entities are making the decisions in the same way. So, you know, right at the beginning of the book, one of the messages is that that's not what's happening. And you know, I use a few different
(09:18):
analogies in the book, but since talking about the book, I've got further analogies: that the rate at which machines are consuming information, versus the way that humans exchange information, is the difference between walking pace and light speed, that machines go 300 million times faster than us in their ability
(09:39):
to consume information. And the absurd thing we're hearing from some very intelligent people, people I respect, colleagues and interesting people, is, oh, that we're entering an era of artificial intelligence where you might have machines that are orders of magnitude more intelligent than a human. But
(10:00):
that's an undefined concept. Intelligence, you know, as I think you both know, is multifaceted. It has different aspects. And one of the beauties of being a human is the way we collaborate with others who have different strengths, and build teams; as a leader, that's key to how we build on those capabilities in the team. So the notion that you can talk about one intelligence being orders of magnitude better than another is nonsensical.
(10:22):
Intelligence is always contextual. But in terms of information exchange, we can definitely talk about that, because it's a sort of very fundamental quantity, like energy. We can already say that machines are eight orders of magnitude faster than us, the difference between walking pace and light speed, in exchanging information, and that's true before AI. That's true before ChatGPT. That's the reality of
(10:45):
our lives, and that's the big transformation. Well, this is a big transformation too, but we're already dealing with the consequences of that and what it has done to modern, digitized societies, where leaders have often been separated from their information ecosystem, because increasingly businesses have become
(11:08):
digitized. In the past, any leader would be able to go and open the books and see what's going on in the accounts if they really wanted to. And of course, with digital technology, in some ways that's become easier, but in some ways that's become much harder. The complexity, for example, of Amazon's information ecosystems, managing the supply chain, was well beyond any
(11:28):
individual to understand, and it wasn't easy to answer a vice president's question about why something had gone wrong. Yeah, Amazon are more advanced than anyone else. I mean, that's why I went to work there. I wanted to see, well, how do the best do it? And the answer is: not very well, but better than everyone else. It's that old story of the guy
(11:49):
who's putting on trainers after they've seen, I never remember if it's a lion or a cheetah, and the other guy says, you're not going to outrun the cheetah in those, and the first guy says, I don't have to outrun the cheetah, I just have to outrun you. You know, that's how the best digitized companies operate. They're just outrunning other companies. They haven't got this stuff sewn up.
(12:12):
And, yeah, I think that's creating a really interesting world, where we're already seeing the damaging effects of that. And the question is, well, where are we going next with this technology? Are we going to wield this technology in a way that ameliorates, that makes things better, or are we going to deploy it in such a way that these problems get worse? And my
(12:37):
answer is, I don't know, but I believe that getting more people involved in the conversation and confident about their understanding is part of the way that we ensure that the best outcomes occur, in as widely distributed a manner as possible.
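(A rough back-of-the-envelope check of the walking-pace-versus-light-speed figure Neil quotes above. The two speeds are standard physical values; the human and machine information rates are illustrative assumptions added here, not numbers from the episode.)

```python
# Sketch of the comparison: light speed vs walking pace, and machine vs human
# information-exchange rates. The speeds are physical constants; the bit rates
# below are illustrative assumptions, not figures quoted in the episode.

walking_pace_m_per_s = 1.5     # typical human walking speed, metres per second
light_speed_m_per_s = 3.0e8    # speed of light in a vacuum, metres per second (approx.)

speed_ratio = light_speed_m_per_s / walking_pace_m_per_s
print(f"light speed / walking pace ~ {speed_ratio:.0e}")   # ~2e8, roughly 8 orders of magnitude

# Assumed information rates: spoken language carries on the order of tens of
# bits per second; a commodity network link moves on the order of 10 Gbit/s.
human_bits_per_s = 40.0        # assumption
machine_bits_per_s = 1.0e10    # assumption

info_ratio = machine_bits_per_s / human_bits_per_s
print(f"machine rate / human rate  ~ {info_ratio:.0e}")    # also ~1e8, matching "300 million times"
```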
Scott Allender (13:01):
So you mentioned the anthropomorphizing quandary here, where we tend to ascribe to AI our human qualities. And I'm assuming that's because we don't yet know how to think about it, right? We don't really know what we want with it, like, what is the problem we're trying to solve with it? What are the opportunities? So we ascribe, in
(13:22):
its mysteriousness, ourselves. So what are some ways we can start to reframe how we think about it?
Neil Lawrence (13:30):
Yeah, it's interesting. I have a colleague in Zurich who was talking about trying some interesting things, like getting people to do sort of mental exercises before they engage with chatbots to remind themselves that,
(14:07):
If you get in a position where you don't have to spend any
(17:18):
money on the marketing, it's not that you can fire your marketing department. You just have to find a new way of
(17:39):
differentiating, one that indicates we are serious about this product. So this form of disruption, which initially appears to, I think, naive business leaders as, oh, this is great, I can get rid of the marketing team, means now you're going to have to get really imaginative about how you express to people that the thing you're providing is a cut above
(18:02):
what everyone else is providing, because those standards by which we were doing it in the past, getting human creative types to produce this form of material, have been undermined by this technology.
Jean Gomes (18:14):
I want to come back to the point you were making about a colleague describing a way of almost like priming yourself to be able to interact with bots or ChatGPT or whatever, because there is a parallel, you know: that part of our brain switches off when we use sat nav. You know, we've lost the capacity to actually, you know, find our direction.
(18:35):
You know, a lot of people just don't even look where they're going anymore because they're relying on automation. So how do we avoid that? How do we avoid losing some of the qualities that elevate us above the machine?
Neil Lawrence (18:47):
Yeah, so I should say that this is work that's coming from Mennatallah El-Assady, who's at ETH Zurich. But I think the parallel that you're talking about is bang on, because there's a sort of a lot of circumstantial evidence that suggests that when we're planning, I mean, and you both
(19:08):
perhaps know more about this than I do, but when I'm thinking my way through a problem, I'm leveraging mechanisms that initially evolved for navigation. So the hippocampus fires in sympathy with the prefrontal cortex. And you know, it sort of feels like that, because often, if we talk about a problem, we navigate our way through it. We use narratives to explain. Narratives feel like journeys. So this has a lot of intuitive sense. I don't know what the, you
(19:31):
know, latest evidence for it is, but if it were true, then exactly as you said, Jean, we're headed for some trouble, aren't we? Because we already have all these stories about people who drove into rivers or harbors, or drove to a Paris in the wrong country or state because they didn't check their sat nav and
(19:52):
drove for 18 hours without querying where they were going, which is driven by this tendency to accept instruction from a machine, which I think is also quite widely studied. And if we have that with this extraordinary technology, and let's be clear, it is a totally extraordinary technology, I
(20:14):
mean, you can do some amazing things with it, but I think, you know, I feel very lucky. For example, I wrote the book without touching a large language model for creating text, and I was really explicit, I wanted to do that. I worked hard to find my voice. You know, do that thing where it's a written version of you
(20:36):
that feels like a spoken version of you, which is what I was going for. And it took a lot of rewrites. And imagine, if you have access to this technology in the future, are people going to put that effort in to find their voice? And if they don't, I find that incredibly sad, because, of course, it has extraordinary voices already. I mean, its voice is copied from
(20:58):
humans from the past, but its voice is copied from the very best humans of the past. And then tying this back to, well, navigation of a problem. So if we think about the professions and the time it takes for a lawyer or a nurse or a doctor or anyone to start gaining the intuition they need for their subject area, a lot of that, like navigation, requires having
(21:21):
made the mistakes on the route, having gone down the wrong road under supervision and understood not to do that again, and then going forward so you remember which is the right turn. And just as sat nav gives you the sense of that experience, there's some really great work, and I think it was Matt Wilson at MIT who was talking about this at the NeurIPS conference
(21:46):
10, 11 years ago, about how you can show in rats that if you have one rat towing another rat in a cart behind, the rat in the cart behind doesn't learn the maze. It's the rat at the front, trying to work through the maze, that learns it, and we're becoming the rat in the cart. And what are the consequences of that going forward? I think
(22:10):
these are major issues around how we educate, how we train, how we bring forward the next generation. Of course, there are immense possibilities in terms of sharing knowledge if this technology is used well. But yeah, it's a really big issue, this sort of, I think some people call it, mental atrophy,
(22:32):
or, yeah,
Scott Allender (22:34):
Do you have thoughts on how we can start to do that? Because I'm really, really interested in what you're saying, this idea that we're going to lose this capacity, and already have. I feel like my sense of direction is getting worse already. These things are so convenient, right? So it's really difficult to sort of say, hey, as the world gets more automated and you have more and more conveniences, you should watch out and not always
(22:56):
engage with them. Like, how do we get people to wrap their heads around this as an issue, and then what would you suggest they do?
Neil Lawrence (23:04):
It's a really key question, because, you know, let's talk about the positive side of GPS. I don't so much anymore, but I used to run a lot, and every time I went to a new city, I would try and run a half marathon. Now, this turned out not to always be a great idea, because when I went to Baltimore I ran around the bay, and I showed it to my friend later, and they said I went through some dodgy places.
(23:27):
Seemed fine to me, but they were pretty freaked out about where I ran. But actually, at the time, you couldn't do Strava following of other runners, right? So I was mapping out a route. I wanted to go around the bay, and I didn't want to double back on myself, so I went inland. And so anyone who lives in the Baltimore area knows exactly where I went running, and it was fine. As a runner, no one bothered me. It
(23:49):
was fine. I enjoyed it. And actually, I remember those places more than I remember the bay, because I was running through some extraordinary places with some extraordinary people, and it sticks in my mind. And of course, I was somewhat navigating because, you know, I'm not going to be getting my phone out every two minutes. I was making
(24:12):
mistakes on my own, and I've done that in Kampala. I went running down and ended up running into the market area, and it was like, not many tall English folk run through that market area in Kampala. And that sticks in my mind as well. And I couldn't have done these things without GPS. I'm not saying that people should be doing all these things, but there are ways of giving
(24:34):
yourselves experiences that you just can't imagine, that still plug into these errors, that still plug into your understanding, because it's taking you that much further than you could have gone before. So I think that's part of it. But I think another part of it is, I think one of the problems we have in this space, with the
(24:54):
conversation at the moment, and this is one of the things the book's trying to address, is that all the voices we are hearing in the press, or, you know, celebrated as Silicon Valley founders, are people who are most confident about what the future is going to look like. But the only thing we can confidently say is that people who are confident about what the
(25:15):
future is looking like are going to be wrong, and what we need is an ecosystem where those who don't have confidence are coming forward, because we need that diversity of understandings. They may not know about AI, but they know about the thing that they do, and if they're taught a little bit about AI, they can explain how it's going to manifest in their area and what
(25:37):
changes we might need. And if they feel confident and informed, they can be the early signals that something's going wrong. What really concerns me is that a lot of these signals are difficult to measure in the classic ways that we might measure things, like economic returns or efficiency
(25:58):
performance, because so much of this stuff is about the core of a human. And to me, I don't talk about this so much in the book, but I've just written an op-ed that hopefully will be in the Financial Times next week, where I sort of try and highlight that, as we cut away capabilities that we formerly thought of as human
(26:20):
capabilities but now can be done by the machine, we get close to the core of what I call the atomic human, what the book's about. The book doesn't really talk about this idea much, but it's something I feel quite keenly: there's almost like an uncertainty principle. What do I mean by that? Any facet of human
(26:41):
productivity or attention that is easy to measure is going to be a facet that is easier to replace by a machine. Any facet of human capability that is hard to measure is going to be something that's difficult to replace by the machine. So as we slice away at humans and take things and give them to
(27:05):
machines, whether that's mental or manual labor, we're making the human contribution harder to measure, and this is a deep problem, because we aren't then seeing how things are failing, because you're really getting to the core. And you know, the
(27:25):
example I give in the op-ed, that I don't talk about in the book: because I'm doing a lot of work trying to work in AI and education, people say things like, oh, well, you know, soon you won't need teachers, because these chatbots already know so much, you can just learn from that. And I say, go and watch the video of Ian Wright, the England national footballer, meeting the teacher who turned his life
(27:49):
around when he was living in an abusive home and no one believed in him, and one teacher understood that, despite the fact he was playing up at school, there was something in that little boy, and that teacher believing in him entirely turned his life around. And no one's been measuring that. That
(28:12):
one act doesn't come out in the statistics, but it's absolutely vital to someone who became a very productive and admired citizen's life. And these are the pieces, the atomic human. But how we preserve and sustain and build on these
(28:36):
things, I think, requires more sophistication in how we approach businesses and the public sector in particular. It's what I think of as the core of human capital that is difficult to measure. You want to measure human capital in terms of various measures of productivity, fine, but that will always be somehow doable by a machine. You want to measure
(28:57):
human capital by what we might sometimes call hidden labor, which is a useful term to use in this sense. Hidden labor is this sort of character, and it is rarely recognized properly in businesses. That is the thing that will remain, and we have to work out ways of rewarding, sustaining and celebrating that
(29:17):
type of work in businesses today, because I think we've gone too far in the direction of where we will monitor everything. Of course we need to monitor, of course we want to make things efficient and make sure we're spending money wisely, et cetera, et cetera. But going too far in that direction is really making these
(29:38):
weird predictions true, that humans are redundant because the machine can do everything better. Well, yes, it probably can do everything we can measure better. So that means the immeasurable becomes more important, but that leads to this uncertainty of, what is that immeasurable? And that, you know, sorry, coming all the way back, Scott, to your question, that means we have to respond as
(30:00):
we respond in the presence of uncertainty, which is: we gather a more diverse group of voices, and we carefully monitor what we're doing, and we carefully listen to how that's panning out. Which is, you know, for many businesses, for example operationally in the supply chain in Amazon, not what you're used to doing. We need to make a decision now, someone needs to make that decision, we don't have time to
(30:22):
listen to what everyone thinks. And that's fine as well. That is how you make sure that you've got stock on your shelves. But like I tried to encourage us to do in Amazon: sure, do that Monday, Tuesday, Wednesday. I had this thing, Thursday is thoughts day. We're going to sit back on Thursday and we're going to listen to diverse voices and hear what went wrong, and shift
(30:44):
our way of thinking to something that is picking up on these problems before they grow into things that are really challenging for us to deal with.
Scott Allender (30:55):
Did that help?
Did that work?
Neil Lawrence (30:57):
Did that work?
You know, I don't know, because I started introducing it, then I took my new job and left.
Scott Allender (31:03):
Okay,
Neil Lawrence (31:04):
So I suspect it didn't. But, I mean, how it came about: I was so admiring of the leaders early on, because initially I would be looking at them thinking, these people are doing superhuman jobs. I can't understand how they're managing this large supply chain, with sort of hundreds of millions of dollars being spent every week. But then at some point, you notice, yes, but
(31:28):
they're running a playbook. The way they manage the superhuman nature of the job is they're running a playbook, which is a tried and tested playbook that they understand. But it's a playbook, and what would go wrong? And I realized, as the sort of, you know, scientist in the team, what I would get wrong is I would enter with a move that's outside
(31:50):
the playbook in a meeting when they're running the playbook, and then they would find that very difficult to see. It's this difference between, like, focused attention on the problem in front of you and, what I like to talk about in the book with the picture of Newton from Blake, the sort of scanning, the sort of marmot looking around the place. And I don't think you can expect these leaders, when they're
(32:12):
operationally attentive, making sure that you've not just lost $50 million or whatever it is that week, to also do the horizon scanning. So I started introducing this because I realized, well, look, Monday and Tuesday we were doing weekly business reviews; you can't expect them to be that. Wednesday, you're sort of doing the work, you know,
(32:36):
you're sort of chasing down all the things you suggested. So Thursday felt like a good day. But like I say, I think to transform a company the size of Amazon at that time, I mean, I would have had to move to Seattle and dedicate a large amount of time to that. And we started working it up. I don't know, it was a big organization, 1,200 people. I think it would have taken a lot. I think we
(32:59):
could have done it, but, um, you see what I mean? You can't expect the leaders to be context switching in a given day. You want them to be going in in the morning: Thursday, oh, today we're going to run a different playbook, we're going to run the playbook where we listen, rather than Monday, Tuesday, Wednesday. Even though it's not quite the Amazon culture, because they're operational, they tended to run a much more closed mindset, and it was just interesting to
(33:22):
understand why they were doing that, and see that they are doing the right thing operationally, but it affects the business's long-term ability to see problems before they arise.
Jean Gomes (33:38):
Why I was particularly excited to have this conversation with you is exactly the point you were making a moment ago about the atomic human: when you slice away all the things that can be automated, what you're left with is the hidden labor, the intangible assets, the things that are incredibly hard to measure. That is really where human beings are going to get elevated with AI, as opposed to
(33:58):
commodified. So I'm interested to just get a sense, because you talk about the fact that we're, and you've illuminated some of these things, laboring under myths and misinformation, particularly within, you know, leadership roles. Can we talk a little bit about trying to help leaders in
(34:21):
this conversation understand how we're currently looking at AI in a way that might stop us from seeing how to move forward in its adoption?
Neil Lawrence (34:33):
Yeah, so I do quite a lot with the Judge Business School teaching on this, and these things are less in the book. I mean, I'm glad business leaders do seem to enjoy the book, but it doesn't do the thing that they always teach me to do at the Judge, which is, at the end, you have to say, here are the three things you need to do tomorrow; it's nice that you did all this philosophy. And the way that I tend to do that is, I
(34:56):
think, it's inattention bias. And for the moment I've forgotten the original authors of the work, but there's this famous sort of video that lots of business schools, I'm sure, use, of the gorilla walking into the basketball thing. So there's people passing the basketball, you have to count, it's difficult. And because you're so focused on the counting of the basketball, you don't see this gorilla walking
(35:18):
in. And one of the things I say in those sessions, because my idea is to support leaders in lifting their vision: the problem I think leaders have at the moment is that they've come through the business without this technology existing. So where their business instincts are operational is not in the area of what might happen if you, and
(35:42):
let's just not talk about AI, you know, this is also true of sort of digitization, and I used to think of the challenges of digitization as twofold. One is actually making sure that your systems are reflecting the outside world, which Amazon was amazing at. But then there's a second problem, that once you've got that data inside the company, how are you tracking and making sure your secondary statistics actually are representing what you care about? And that turns out to be a lot harder, and that was the
(36:05):
sort of area where things might go wrong in Amazon. But like I say, I think they're ahead of anyone else in that regard, you know? And this is not a problem that comes up for Google or Facebook, right? Because their whole world is virtual. This is about the transition between the physical and digital world. Now, what I would say to people is that the reason you're a senior
(36:27):
leader is not because you're an expert in machine learning or AI or digitization; it's because you have a sense of what does and doesn't work in the business. And what tends to happen to senior leaders is they feel the obligation. And I saw this with leaders in Amazon, as we introduced machine learning technology. Even though they're
(36:48):
pretty technically advanced, you'd see the same thing. The leader knows that this is important. I mean, we had a whole Bezos remit saying everyone has to explain how they're using machine learning in their systems. They're then told it's very technical. And there's this sort of technical presentation where it's like, well, actually, it's really hard how these things work, let me explain. And what do they do?
(37:09):
They start counting basketball passes. They're distracted from what they're actually there for, which is gorilla spotting, yeah. And this is an enormous problem, and this is where so many businesses go wrong. And the effect you sort of see is the following one: that they've lost their ability to
(37:31):
make a calibrated judgment about when and where to ship. And as a result, they tend to be all in or all out. And it tends to go this way. They're initially all in. That project utterly fails, because no one brought the sound business judgment of what would and wouldn't work, because the people often pushing the
(37:52):
project are new to the business. They're keen, they understand the technology, but they don't understand the business. And then after that initial failure, the leader never wants to touch that stuff again. And of course, the right reaction is the calibrated one: this is an interesting idea, but it won't work on that product, it might work on this one, and what we're going to do is we'll do a pilot
(38:13):
study, and we'll understand what the problems are in deployment before we go big. Because the problem that really occurs is, and this is a really high pressure on particularly large companies at the moment, there's pressure on the board to say what they're doing around AI. So there's pressure on the C-suite executives to sort of get the attention of the CEO around
(38:33):
executives to sort of get theattention of the CEO around
who's doing what. And thenthere's pressure on the orgs
below. And that means that,instead of these ideas being
allowed to incubate andintegrate with the processes of
business, the person who's mostconfident and who sounds like
they know what they're talkingabout. This is a pattern. Here
(38:57):
is the one that makes picked upby the senior vice president,
often to the exclusion of thepeople who actually do know how
it goes wrong, because they willpresent a more nuanced point of
view. And the projects that arecelebrated at the top become
what I think of as Potemkinprojects, ones that basically
(39:18):
can't be allowed to fail becausethey've been shown to the CEO,
and anything which has the CEO'seyes on it as an ongoing project
is actually doomed to fail ifyou're uncertain about how it's
going to be going on. So whatneeds to be happening, but this
has to be a bottom up process,is that there's interest in
(39:41):
deploying in these projects, butthere's also a culture of
sharing how they're going wrong,and that has to percolate in
some way, such that middlemanagers know which ones to
bring up and share how to dothat in individual businesses.
Varies, but, but one of thefirst things people get wrong is
(40:02):
that if you haven't got yourdata infrastructure right, you
ain't going to get your AIproject right. And the one area
of a business that likely hasits data infrastructure right,
and I say this to CTOs, CEOs,CIOs and CDOs, a lot. Who has
the bigger department? You arethe CFO. Always the CFO. What
(40:22):
does the CFO do, other than dealwith data with dollar signs in
front of it? How big would yourdepartment need to be? Because
very often your data is largerthan the CFOs data, but the
business doesn't know how toquantify the value of your data,
(40:43):
so it's not willing to invest inthe accounting and the
provenance around that data. Butwhen it comes to actual
accounting, that's just datawith dollar signs. Now, if you
want to be at the standard ofthe CFO How big is your
department going to be andthat's not going to happen. But
how do you get there? Well, youactually have to be careful
(41:05):
about using data in projects in such a way that allows you to start quantifying the value and improving its quality. That is probably the biggest cultural change most businesses face. So how do you launch data science projects in the business? Where do you start? In the CFO's department. You start with projects where you have good data, because it's financial
(41:27):
data, and you do that in such a way that the CFO is interested in the project. They're not determining it, but they're agreeing this is an interesting one, in an open way, so that you're hearing about what the failings of the project are, that you've got the CEO integrated, that you've got your senior tech team, and you've got a top data scientist that is given the cover to talk honestly about where and when this is working
(41:48):
and not working. And you maybe have that as your thoughts day thing. And if you're not doing that, you know you are just constantly in this danger of having a bunch of people who are overstating their claims. My end thing on the slide is: see the gorilla, don't be the gorilla. Because what tends to happen,
(42:11):
particularly with male leads, is they don't like the fact that they don't know, and they operate in this alpha male way. And they don't listen to the junior people. They don't see the problems before they happen. They operate like a male gorilla would. And therefore, you know, the project is successful, whether it's
(42:32):
successful or not, until it totally blows a hole in the company. So the senior male leads, on that thoughts day, have to tone down their alpha male nature, get into listening mode, build the confidence of the team at the coalface, start to learn where things work and where things don't work, and keep their awareness of, you know, the real insight.
Jean Gomes (42:58):
It's incredibly helpful.
Scott Allender (42:59):
What are some other ways that leaders can be more human and help manage the overwhelm that people are feeling at this, what seems to be a technology that's beyond most of our comprehension?
Neil Lawrence (43:16):
Yeah, I think it's particularly hard, because there's interesting things about, you know, being leaders in organizations. When we're doing that, we're actually at the forefront of the removal of the atomic human from process. What do I mean by that? Well, you know, there was a time before we developed writing and got together in cities, whatever,
(43:40):
5,000, 6,000 years ago, when people didn't need laws; they engaged in moral labor and listened to each other and made decisions based on history. And it's incredibly cognitively demanding. You know, my wife's from southern Italy, and I would say in southern Italian culture they engage in much
(44:02):
more moral labor than we do in English culture. There's a lot more unspoken obligation that they understand intuitively to family. Things like, everyone's extremely offended if I land in Naples and I try and get a taxi or a train, you know, if I don't have a member of the family pick me up.
(44:23):
I think for UK culture, people are like, oh, I won't bother you. But in Italy, it's considered offensive that I didn't ask a member of the family, because, of course, they do that for me. So it's quite complex and it's quite hard work. And what sort of happens when we develop cities is we develop processes and administration. The Code of Hammurabi, 1700 BC, tries to
(44:44):
codify some of our ideas about what moral labor might involve, in, I think, the second set of laws. Of course, when we're a leader, we're in the same position. So if I'm being asked to cut an organization which has 1,000 people, and I'm being asked to cut 300 of those people, I'm not engaging in the
(45:06):
moral labor of interacting with whether someone's a single parent or whether they have a family member with health problems. I'm just following a process that the company says, and the country says, often in the law: this is how you have to deal with this, you might have to pay some compensation. You know, it varies country to country; there's no
(45:28):
right answer on this. And we accept as a society that those processes are allowed to occur, that people do things that individually we might find morally reprehensible, but that allow for efficient process, allow us to create products that are better, allow us to collaborate in ways that we wouldn't be able to collaborate if we were relying
(45:49):
on direct connection. So, first of all, you have to realize, oh, that's part of what we're being told to do. You know this expression, it's just business? It's basically like, oh, suspend your normal human moral compass. And as a result, what you get is companies often behave like children, because
(46:12):
the obligations we put on companies are not the same as the obligations we put on adults or public institutions, such as universities; we expect those institutions to behave in a more morally upstanding way, and there are benefits from that. So I think this is where it sort of gets tricky, right? Because,
(46:35):
given that we clearly accept that there are some aspects of the atomic human that we're prepared to sacrifice for efficiency and sort of improved productivity, these measurables, it becomes the case that it's not clear where the dividing
(46:57):
line will fall in the future. So when we're thinking about a leader's role, I mean, do we really believe a leader who has no human aspect is what we want in the future? I don't think so. But we're also, in some sense, asking leaders, any leader you
(47:18):
know, to make decisions which we kind of know are compromising them in ways as a human being. So I think, and I don't know the answer to this, and you probably both have more sophisticated thoughts on how we support leaders in dealing with this, but I think that the problem we
(47:38):
get into is that people move into a switch-it-all-off mode. The switch-it-all-off mode being, well, if I'm being asked to make those morally difficult decisions, I'm just going to follow the process, and I'm not going to put a piece of myself into the equation, because that's too disturbing, and I'm going to tell everyone that's just business. But if that's the
(47:58):
way we're going, then you are replaceable by a machine. And I don't think that's what we want out of this. I think, actually, you know, a good leader... My boss, Andrew Hamel, said this to me once. I was saying, oh, I don't know, I got some good people, I think I just got lucky. And whether he was right or not, the point he was making
(48:18):
was great. He sort of said, look, you don't get lucky by attracting good people; you've inspired them in some way. And whether that's right or not, I mean, I think it's potentially circumstantial, I'm not saying I was somehow some mega-inspiring leader, I have all sorts of flaws as a leader, but I do believe what he said is correct, and part of that inspiration is bringing
(48:40):
yourself as a human to the team and sharing a vision, which sometimes actually goes beyond anything the company is giving you. Because everyone knows, I mean, well, they don't know that; they tend to believe that the company is, you know, all with them and everything else, right, until, you know, that day when you get told, well, you're out on your ear. Which companies will do. But a leader within that organization can
(49:00):
inspire more from the people within that organization, in the service of a wider organization that, at the end of the day, doesn't have that human component in it, apart from through those leaders. So it's an incredibly complex and somewhat self-deceiving system, but it feels to me incredibly
(49:20):
important that that human piece is in there.
Sara Deschamps (49:28):
Evolving Leader friends, if you're curious to get more insights directly from our hosts, consider ordering Jean's book, Leading in a Non-Linear World, which contains a wealth of research-backed insights on how to see and solve our greatest challenges, as well as Scott's book, The Enneagram of Emotional Intelligence, which can help you unlock the power of self-awareness sustainably in
(49:49):
every dimension of your life.
Jean Gomes (49:53):
As we come to the end of this hour, I'd like to finish with a bit of a thought experiment. So in order to create this healthy symbiosis between the atomic human and all of the amazing things that machine intelligence might be able to do over the next decade or two, how do we do that? If you were going to start up a new business and you
(50:16):
wanted to amplify human value creation and not have it undermined by automation, what are the kind of things that you might start to do in designing and building that business?
Neil Lawrence (50:29):
It's a great question, and this is, I mean, it's already something I'm thinking of actively working on with a few colleagues, because, I'm not saying I'm a great business lead, but I kind of think it's interesting to try and put your money where your mouth is, or your effort. And I think that, if I were to go for a single
(50:51):
problematic point, I would say it's the notion of artificial general intelligence, which I think is nonsensical. I mean, the way I talk about that sort of post the book, and I hint at this in the book, is, you know, it's like talking about an artificial general vehicle. Which is it? Is it an aircraft, or is it a Brompton bicycle? Or
(51:12):
is it that train you got stuck in, Jean? You know, it's totally contextually dependent. You might have wished you'd had a Brompton bicycle when you were on that train, indeed, but at the outset it probably seemed like a train was a better idea. And the interesting thing is, there are, of course, general principles to vehicles. I mean, the reason you go in a train is because it reduces friction, but it sort of undermines the
(51:35):
ability of that vehicle to sort of go wherever you want it to go. There's air resistance, there's wheels, there's wings, there's all sorts of things that could be applied in different ways to vehicles. So it's not that there aren't general principles to vehicles, but there is no such thing as an artificial general vehicle. It's an absurdity. We have to know the context before we talk about what a good vehicle might look
(51:55):
like. And you know, if people are trying to persuade me that intelligence is less complex than a vehicle, well, I think they're wrong. And, in fact, I think there's a strong parallel, because of the navigation examples we were using before: how do you want to go about your problem? You know, what are you trying to do, what are you trying to scale, what's the uncertainty level, what's the time frame you're looking at? So stepping
(52:16):
back from that, I think it's just an incredibly damaging and simplistic and incorrect concept that, when we trace it back, is eugenic in origin, because the term general intelligence comes from the eugenicists. Okay, so what's my fix? Specialized intelligence. We don't actually want entities. We want to be
(52:38):
able to communicate with these things, which is just an extraordinary thing about these chatbots, that they suddenly allow normal humans to communicate with the machine for the first time. That is transformational. Forget AGI, that's transformational. I mean, you don't want to indulge in hyperbole, but it seems at least as transformational as any other
(53:00):
revolution, even on the back of this information revolution we're on. But what about artificial specialized intelligence? What about just building things that do the job that you want them to do, and being able to build them quickly? So you don't need a generalized thing, but you do rapidly want to compose something that, I don't know, supports me with my
(53:22):
birdwatching hobby, or supports me in my ability to run a football team or whatever it is I'm doing. And I think the possibilities for that are emerging. And I think, even with these very general tools, you know, they are certainly general tools, and they have interesting general
(53:42):
properties, sort of trying to approach it from a business perspective of, actually, we see them as tools, things that are there, at the end of the day, to reflect and express the will of some human operator, that feels like the right way to be going. And I think if we don't go that way,
(54:05):
then I think societally we end up in a lot of trouble.
Jean Gomes (54:11):
Do you think we'll need to invest even more effort in education, and I don't mean just the kind of traditional academic, but also craft-based and so on, to help human beings to do that, you know, to be capable of being the atomic human? And I'm also thinking about things like, you know, our
(54:33):
self-awareness, our metacognition, so that we can actually understand how to interact with these things.
Neil Lawrence (54:38):
I think absolutely, and probably utterly redefine what we think of as education. I think the other thing is that this technology is so transformational that the way we've decomposed society, how we've separated into different separations of concerns, you're a teacher, you know, I'm a student, is tailored around a stable society where the assumption is some group of people know what to do and other people don't.
(55:01):
Let's be clear: no one knows what to do, and the only way you can start understanding what to do is by bidirectional communication across society. That means learning as much as we can from, say, school students about how they're using these technologies and feeding that back to the teachers so they can
(55:22):
better understand how to teach, or learning as much as possible about what a nurse's job actually involves, and how we eliminate the fact that they're spending all their time doing data entry and no time with patients, and accepting that even though I, you know, might be seen as an AI expert, I am not an expert in how we should be applying this technology in the domain of a
(55:43):
busy hospital. And the amazing thing is that this technology should be able to help us with that. So all the previous disasters, like the UK ones, the Horizon program, they were centrally deployed digital systems that didn't talk to the people who were affected. Someone in the center thought they knew, and then they deployed it on people without talking to them about the
(56:04):
effects. We cannot afford to make those errors. And those errors are the errors that are being communicated to us by big tech companies; that's what they want to do. Here, we're going to run the AI, we'll deploy it, you know, don't worry. Like that worked in the past; like we all love Word, don't we? That's really helped. You know, they keep promising they're going to solve the
(56:26):
problems in society, but they fail to, because they actually don't engage with those people who are most familiar with those problems. Now, these problems are never going to go away. Problems in health, social care, education, you know, security, these are so-called wicked problems. But what we can do is empower the people who are chipping away at them, trying to reduce their size. They're always going to be there, but for the
(56:46):
people who are trying to chip away at them, it's: what's the tool you need to do that chipping? And listen to them, rather than having some central person, some bureaucrat who, however well intentioned, actually doesn't understand the problem and doesn't understand the technology, imposing this technology on them. And that means, like, a revolution in the
(57:06):
way we educate, and a revolution in the way we understand who knows what. But it's actually really exciting. And we're doing a lot of it in Cambridge at the moment, with academics. We're starting to work with local government. We're trying to understand how to do this with teachers. It's a lot of peer-to-peer work, a lot of us listening to what their problems and their solutions are, and supporting
(57:28):
them and building confidence in which solutions are working, so ensuring they're in an environment where they can deploy these things in a safe and ethical way by talking to, you know, critical friends. And then when they've learned how that works, that hopefully frees up a bit of their time and enables them to go out and teach someone else how to do it, a sort of, you know, see one, teach one, organize one approach, because
(57:51):
that scales, and it's like a productivity flywheel that builds on the human capital in the system. It doesn't require that it be translated back into money and then reinvested to sort of get the productivity flywheel going; it requires that we get a little bit of time back for those nurses, doctors, teachers, whoever, and then hopefully we
(58:14):
can persuade those people that it's worthwhile spending a little bit of that time, it doesn't need to be too much of it, re-engaging and spreading the word. And you know, you can see what I'm trying to do here. I'm trying to create something that scales exponentially but is entirely dependent on the human capital, and building on these notions of the atomic human,
(58:36):
that should vary in each area of deployment, right? It shouldn't be that someone central decides how it looks. It should be that those who understand how they bring themselves to their job are the ones that are spreading that message.
Jean Gomes (58:47):
I have one final question. I can't help it. Sorry, I know we're running over, but it's kind of leading us to families. So you've got kids, Neil, and we've both got kids as well. So what's the advice that we should be thinking about in terms of our children being an atomic human in this future?
Neil Lawrence (59:09):
I mean, I'm very lucky with my kids in that, I mean, they're both quite academic, and they're both passionate about what they do. And, you know, of course, my oldest one, who's starting to look at the job market, he's like, this is annoying, I really want to be a chemist, and no one wants to give chemists a good job. And so what he's looking at doing now is, okay, can I build some more skills on top of my chemistry so that I
(59:31):
can, you know, some data science skills, so I can do the two together. So I find it's been relatively easy for them, because I think the technical skill sets are still important. And I think understanding an area deeply, technically, is really useful, and you'll be able to combine it with the sort of more generic AI and data science skills and steer in whichever
(59:52):
direction. I think the really interesting and potentially troubling thing is for people who are in the creative sector. Yeah, where, when you look at the incredible skills, and in fact, I haven't published this yet, but we often work with graphical scribes, and I just think what they do is amazing, summarizing a meeting in terms of drawings. And one of my favorite
(01:00:15):
scribes, I asked him to draw the book, so he's done an image for every chapter, and we haven't sat down and talked about it yet, but, you know, I know that he and other scribes are concerned with, well, this skill I have, is someone just going to say, well, here's a computer doing that? Well, I feel not, because I think everyone really admires and
(01:00:35):
enjoys the skill of a human in the room doing that, and they can go and talk to them. But there are a number of creative disciplines, whether that's sort of in entertainment, film or whatever, or even just doing traditional CGI, which are going to be displaced. And these are already quite, I mean, fragile jobs. In many respects, lots of people want to do
(01:00:57):
creative stuff, and not all of them get selected, so I think it's very disruptive in that field. I think it's hard to know what to say, other than people should follow their passions, but try and be pragmatic. And you know, I always say, if you've got the ability to make a decision that keeps your options
(01:01:18):
open, do that. But of course, I get to say that because I feel like I've spent my whole career doing that. Look at me now. I still get to do all this diversity. I never had to choose a job.
Jean Gomes (01:01:31):
Yeah, you've pivoted quite a lot within this.
Neil Lawrence (01:01:35):
But I do think that we must be optimistic, and we must make them optimistic, because the thing I most strongly feel is pessimism is self-fulfilling, right? If we all agree that this is all over, you know, then it is. It genuinely is, and the only way it isn't is by
(01:01:56):
continuing to, and I'm not religious, right, but I almost get spiritual at the end, because you realize, oh, it's actually sort of a matter of faith. It's a matter of faith and belief in other human beings, and that's important. And you know, if there's a loss of faith in other human beings, that is real, that's horrific. And so although
(01:02:18):
I've gone through my life not thinking so much about matters of faith and how I feel about them, I feel very strongly that we need to bring up the next generation to be optimistic, to be confident, and to demand the power to do these things themselves, so that they can steer many of the decisions that are coming up. They have the
(01:02:41):
ability to do that, the understanding to do that, and the confidence to do that, and the optimism to do that together.
Scott Allender (01:02:49):
Well, that's a lovely place to land the conversation. Neil Lawrence, thank you for your time. Thank you for the work you're doing. Thank you for writing an incredible, incredible book. I loved every minute of this conversation. We really appreciate it.
Neil Lawrence (01:03:01):
Oh, thanks, Jean and Scott. I mean, I always feel like I could have learned so much from both of
Jean Gomes (01:03:04):
No, not at all, that's what you're here for.
you. I always ramble too much, so apologies for doing that.
Scott Allender (01:03:10):
Absolutely. And folks, if you do not have your copy of The Atomic Human yet, stop what you're doing right now and place your order, because it is worth your attention. You're gonna love every minute. We've gotten in some good content here, but there's so much more to get from that book. So do, do pick it up, and until next time, remember: the world is
(01:03:30):
evolving. Are you?