Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:00):
Welcome to Leading
People with me, Jerry Mermay.
This is the podcast for leaders and HR decision makers who want to bring out the best in themselves and others.
Every other week, I sit down with leading authors, researchers, and practitioners for deep-dive conversations
(00:22):
about the strategies, insights, and tools that drive personal and organizational success.
And in between, I bring you One Simple Thing: short episodes that deliver practical insights and tips for immediate use.
Whether you're here for useful tools or thought-provoking ideas, Leading People is your guide to better leadership.
(00:48):
If you're a regular listener to Leading People, you'll know that I've been exploring the topic of AI quite extensively, particularly in the short One Simple Thing episodes, and more recently through that deep-dive conversation with Dr. John Finn on how to train your brain for this new AI era.
(01:09):
My guest today is Simon Horton, who's been examining an intriguing question: how could AI help us end conflicts, whether on a global scale, inside our organizations, or in our personal lives?
We go into this in some depth, and what keeps emerging is the potential for a genuinely symbiotic relationship between
(01:33):
humans and AI.
If we can harness that and use it for the greater good, well, the possibilities are remarkable.
So let's get into it.
Here's Simon Horton talking about his new book, The End of Conflict.
Simon Horton, welcome back toLeading People.
SPEAKER_00 (01:55):
Hi Jerry, how are
you doing?
Great to be back.
SPEAKER_02 (01:58):
We'll have to stop
meeting this way.
SPEAKER_00 (02:01):
People are talking
about it.
SPEAKER_02 (02:03):
People were talking
about it because you got a lot
of downloads on the previous episodes, so I'm hoping for the same again.
So you've been a guest twice before. Every couple of years you write a book, so you'll have to stop writing books, or else I'll have to keep having you on the podcast.
So the first time you were on, we were exploring negotiation.
You'd written a few books on negotiation as just really a
(02:26):
part of life, and then later you came on with your great book, Change Their Mind, which was largely about influencing and convincing people.
And today we're here to talk about something, and let's call it an evolution from those books.
But for those who don't know you yet, could you briefly share the
(02:47):
journey that's brought you from your background in IT and AI, even way back in the day, through negotiation and collaboration to this new, and dare I say, provocative work on the end of conflict?
SPEAKER_00 (03:01):
Yeah, yeah,
absolutely.
So I've been teaching negotiation skills and collaboration and conflict resolution for the last 20 years or so, working across sectors: a visiting lecturer at Imperial College, taught at Saïd Business School.
And as you say, I've written a few books on the topic.
(03:21):
But my first career was in IT.
I spent 12 years in the City designing derivatives trading systems and money market trading systems and that kind of stuff.
And I got into AI, the big buzzword of the day.
I got into AI in 1989, but just at a kind of a
(03:43):
micro level.
But my interest grew with Moore's Law.
And by 2005, Ray Kurzweil brought out a book called The Singularity Is Near, which was a really amazing book, and it blew my mind.
So basically, everything that we're going through now in terms of AI, etc., it predicted 20 years ago, and it put dates to
(04:07):
it, and it predicted a whole load more as well.
And I read this 20 years ago and thought, oh my god, this is amazing.
If this is what we're going to go through, and it was very credibly written, then I've got to watch this space.
So ever since then, I've been obsessed with AI.
And so, if my field is negotiation and conflict
(04:30):
resolution, and my background is IT, and I'm obsessed with AI, and I write books, you might expect me to write a book on AI and negotiation.
And so that is what I did.
And thank you, Jerry.
You're better at marketing than me because you've got a copy of my book on display over your shoulder, The End of Conflict.
(04:52):
I don't even have a copy with me at the moment.
So it's about AI and negotiation and conflict resolution, and the amazing things that are already happening in that field. When I looked into it in depth a couple of years back, it really blew my mind, both about what was going on and what is likely to be going on
(05:15):
moving forward as well.
SPEAKER_02 (05:17):
Okay.
So I suppose the first thing out there is for all those people who thought they discovered AI when they found ChatGPT two or three years ago: they're probably all feeling very disappointed now to know that its origins date way, way back, to 1989.
Some of the people listening to this podcast might not even have been born at that point.
(05:39):
And maybe we'll get into some more of those predictions later on, because that could be something that we could explore when we get into the topic in a bit more depth.
Let's just talk a bit about the book now itself.
It's called The End of Conflict:
How AI Will End War and Help Us
(05:55):
Get On Better.
Wow.
So, what's the central idea behind it, and what inspired you to believe that technology, especially AI, might actually help humans get on better rather than divide us further?
SPEAKER_00 (06:15):
Well, well, firstly,
to say that there is the
potential for it to divide us further, too.
AI is a tool, and it could go the wrong way.
The title is The End of Conflict. Well, it could accelerate conflict.
It all depends on how we use it.
And so that was one of the main inspirations behind the book: that AI is this tremendously powerful tool, and it can be
(06:40):
used for good and it can be used for bad.
And unfortunately, I think our current trajectory is going down some not-so-good routes.
And so that's why I wanted to write the book: to spread the word that there is a better way of using it, in order for us to use it that way.
So the premise, the core premise, is that AI is this
(07:05):
tremendously powerful tool.
It can model human best practice and then make that best practice more widely available.
And it can do this with things like playing chess or radiography; these are some of the famous examples.
Well, it can do it with conflict resolution as well.
(07:25):
And humans do know best practice when it comes to conflict resolution.
You just have to look at examples like Northern Ireland or South Africa or Colombia, or France and Germany, even, that were at war for hundreds of years and are now, you know, absolute best of friends, kind of thing.
(07:45):
So we do know how to do it as a species, but unfortunately, that knowledge isn't widely spread.
But AI can model that best practice and then make it more widely available.
And so we can then tap into that as a resource to use to
(08:08):
resolve our conflicts, or even to avoid, to preempt, our conflicts.
And I'll give you a couple of examples: one quite simple and one, well, very, very powerful and very, very hopeful.
So, you know, at its simplest, ChatGPT can help you with your
(08:30):
negotiations.
So, you know, just if you've got a meeting with the FD coming up or something like that, and you're thinking, hmm, I need some more budget, but I bet you they're not going to give it to me.
How should I go about this?
And you type the details into ChatGPT, and it's going to give you some pretty decent advice.
(08:52):
It's not bad at all.
And obviously, the more details you put into it, the better advice you're going to get, and you can ask follow-up questions and so on.
And by the way, I've got a better version of that on my website.
I've trained a bot.
If you go to my website, which is theendofconflict.ai, on
(09:13):
the bottom right-hand corner, there's a little web chat icon.
Click on that, up pops a window, and that's a negotiation bot.
You can ask it questions and it will give you advice.
Now that bot has been trained on my material, and so it's been trained on best-practice negotiation material; it's got special system prompts, etc.
(09:36):
So it's much more specialized in negotiation expertise.
But here's the thing: if you use that once for an upcoming meeting, for example, and you think, oh, actually that's quite useful, then the next time you have a meeting, you'll probably use it again. You go, it gave me some good advice last time, I'll try it again, and
(09:58):
then the next time, and the next time, and you'll start doing this as a habit.
But after a while, you won't need to go to it, because you'll know what it's gonna tell you.
If it keeps giving you this good advice, after a while you'll pick it up and you'll learn the best advice from it.
(10:19):
And, sorry, it won't just be in that context with the finance director or whatever that you'll be using it; you'll be taking it out into other conversations, other conflicts, other negotiations and collaborations.
You'll take it out into the world outside of work.
And if other people start doing this, now we're just beginning
(10:43):
to build a very, very harmonious, collaborative world.
And whereas AI learned its methods from us humans, well, we can learn those same best practices back from the AI as AI becomes more embedded in the world, in our background
processes.
SPEAKER_02 (11:09):
On Leading People,
the goal is to bring you
cutting-edge thought leadership from many of the leading thinkers and practitioners in leadership today.
Each guest shares their insights, wisdom, and practical advice so we can all get better at bringing out the best in ourselves and others.
Please subscribe wherever you get your podcasts and share a
(11:31):
link with friends, family, and colleagues.
And stay informed by joining our Leading People LinkedIn community of HR leaders and talent professionals.
I've been pretty experimental and to a large degree enthusiastic about the potential of AI.
And of course, I've read your book, and I have also been
(11:55):
working with AI in the context that you talk about, and I hadn't actually really made that connection.
You know, the aha moment there is that it becomes a tool to teach us: because we teach it, we feed it best practice, we interact with it, and then it becomes like our
(12:17):
best personal advisor, and as you say, is able to teach us.
So, therefore, collectively, it would strike me that in a corporate or organizational context, if people pool that into a more central kind of AI support tool, they're going
(12:38):
to get very deep and very rich learning support.
SPEAKER_00 (12:44):
Massively,
massively, totally agree with
that.
Now, Jerry, do you mind if I come back to that?
Because there was another example that I wanted to give from the book, which will also feed into the work example as well.
So before we get into the work context, which I'm sure we'll go into in depth, the other example was
(13:06):
one of the very first people I interviewed for the book, a guy called Colin Irwin.
And he's a research fellow at Liverpool University, and he does something called peace polling.
And he was involved in the Northern Ireland Good Friday Agreement.
And what he did then, basically, was him and his team would walk the streets with their clipboards, walk down the
(13:28):
high street, stop people as they were shopping, ask them a whole load of questions, and ask them basically what they wanted to see in the agreement.
And he would then ask them some demographic questions: age, gender, religion.
And he would then capture all of this information and feed
(13:49):
it back to the negotiators.
And the negotiators would be able to see: ah, so the Catholics really want this, they'd accept this, but they'd never accept that.
Okay.
And the Protestants really want this, they'd accept this, but they'd never accept that.
Right.
I think we can see where the deal is going to be.
And they could see from that information what the
deal is likely to be.
And he did this with every issue.
And he would then go back onto the streets with the next one, and so on.
And in the end, the negotiators were able to come up with an agreement that they were confident, when it went to a referendum, would be supported by the people.
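The peace-polling logic Colin Irwin describes, poll each community on what they want, what they'd accept, and what they'd never accept, then look for the overlap, can be sketched as a simple set calculation. This is a toy illustration with hypothetical proposals and groups, not his actual method or data:

```python
# Toy sketch of peace polling: each group's poll results are "want",
# "accept", and "never" sets of proposals; the likely deal is whatever
# every group could live with. All proposals here are hypothetical.

def acceptable(responses):
    """Proposals a group wants or could accept, minus its red lines."""
    return (responses["want"] | responses["accept"]) - responses["never"]

def likely_deal(*groups):
    """Proposals every polled group could live with."""
    result = None
    for group in groups:
        ok = acceptable(group)
        result = ok if result is None else result & ok
    return result

group_a = {"want": {"power sharing"}, "accept": {"cross-border bodies"},
           "never": {"unification now"}}
group_b = {"want": {"consent principle"}, "accept": {"power sharing"},
           "never": {"cross-border bodies"}}

print(likely_deal(group_a, group_b))  # {'power sharing'}
```

Iterating this over every issue, as described above, narrows each round of polling toward an agreement both communities could back in a referendum.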
And it turns out that this community input is really,
(14:34):
really important to peace agreements working.
So, as a counterexample, the Oslo Accords, round about the same time, give or take a few years: the negotiators there struck a very good deal.
Everybody in the room was very happy with the outcome of the Oslo Accords.
(14:55):
Unfortunately, because it was done in secret, they didn't include the communities, and they didn't get any community input.
So then, when it was put to the communities, they didn't buy into it.
And so it died a death.
And that's, you know, obviously, in retrospect, quite tragic.
These peace agreements require the community
(15:17):
input.
Now, the problem with that is that it's quite difficult scaling these things.
So there are AI-supported platforms now, in a field called deliberative AI, which is a fast-growing field, that enable these kinds of conversations to be had at scale, cheaply and quickly, and
(15:43):
reaching nuanced agreements across thousands of people, even on divisive topics.
So Colin Irwin, basically, is still doing the same around the world, but he uses platforms like this.
And so, for example, in Libya in 2020, at the end of the civil war, there was a ceasefire.
(16:04):
To be honest, nobody expected the ceasefire to last long.
There were all kinds of warlords and tribal gangs, all of whom were armed to the teeth and all of whom hated each other and wanted revenge.
They tried to form a government of national unity.
And nobody expected this to happen whatsoever.
(16:27):
But what they did is they conducted a conversation, using one of these AI platforms, between a thousand people, randomly selected but representative of the demography of the country. And a thousand people had a conversation about what they wanted to see in the
(16:48):
government of national unity.
The conversation took two hours.
It was conducted live on national TV, and a third of the country watched it.
And whereas social media is optimized for disagreement, basically, these platforms are optimized for agreement.
And they came to an agreement in the two hours of what they
(17:11):
wanted the government of national unity to be.
And because it was watched and observed by a third of the country, and everybody felt part of that process, five years on, that government of national unity still exists.
I'm not saying it's perfect in Libya, but it's infinitely better than it was before.
And that's very much part of this AI process that enables
(17:33):
conversations to be held at scale, quickly, between thousands, even on divisive topics, and find nuanced agreements.
Really, really powerful and really, really hopeful.
SPEAKER_02 (17:46):
Yeah.
And of course, some of the people out there thinking about this would instantly identify a concept that is, you know, quite well known in corporate or organizational circles: stakeholder buy-in to change.
And I come from a country, Ireland, although I don't live
(18:10):
there anymore, but I come from a country which in the last 10 to 15 years has transformed itself from the country I knew as a young man, and certainly the country my parents grew up in.
And one of the key things they adopted was this notion of the citizens' assembly: to first get input from a cross-section
(18:32):
of the population into issues that had never, you know, could never get debated, could never get anywhere in the past.
They just created division in society.
And this is for me very encouraging: that now you can actually scale that up, and that there's evidence out there to demonstrate that it actually is an important and valuable part
(18:54):
of attempting to get agreements.
And we know at the present moment there are many conflicts around the world where perhaps, hopefully, they might start to consider some of these tools.
And having said that, let's get to the workplace. Let's just take the stakeholder concept now, and we touched on it
(19:16):
earlier a little bit, about how maybe teams of people could start to harness the power of AI in the workplace.
You've long argued that better collaboration leads to better results. What might the end of conflict look like in the workplace?
And how could AI-supported collaboration actually improve
(19:36):
organizational performance or relationships between teams?
SPEAKER_00 (19:39):
Yeah, yeah.
So, well, it's interesting, actually.
Just last week I came across a figure.
I was talking to a very senior executive at ACAS, and they told me that their research shows the UK economy loses 28 billion pounds a year because of conflict
within organizations.
And personally, I think that's an underestimate.
I think that's probably based on formal disputes that go to ACAS, but I suspect there's an awful lot more lost because of the micro-disputes, the hidden disputes that we don't really pick up on.
SPEAKER_02 (20:20):
Can I just ask for
our non-UK listeners what ACAS is?
Can you just explain briefly what that is?
SPEAKER_00 (20:27):
Yeah, sorry.
What does it stand for?
It's the arbitration service, basically. It's a governmental body set up for resolving business disputes rather than going to court; classic ones being trade unions versus employers, they will always go through
(20:49):
ACAS kind of thing.
Okay.
SPEAKER_02 (20:51):
Sorry, I interrupted
your flow there.
Sorry.
SPEAKER_00 (20:53):
No, that's okay,
that's okay.
Uh thanks for the clarification.
So I do believe that an organization is built on its collaborations; it's built on its micro-negotiations that take place every day.
I think every meeting you have is a kind of a negotiation,
(21:14):
every conversation with your colleague at the coffee machine is a kind of negotiation.
And the organization that can conduct those interactions, those micro-negotiations, quickly and smoothly to a win-win outcome, that is going to be a high-performing organization.
(21:34):
And unfortunately, most organizations can't conduct these well.
So traditionally, in fact, if they're aware of this, the approach, the intervention, is: let's get in a negotiations trainer or a collaboration skills trainer or something like that.
(21:57):
And they put everybody through a one-day program.
Or, if they haven't got the budget or whatever, a half-day program.
And this is kind of my bread and butter, my work, but even I would be one of the first people to say: what can you expect from a day's work or a half day's work?
(22:17):
You know, there's gonna be some improvement, some effect, but you're not gonna have a deep cultural transformation on the back of that.
But the follow-on training or follow-on coaching that's gonna be required to really embed these skills and attitudes, so that you do get the deep cultural transformation,
(22:39):
it's gonna be too expensive, typically.
So I believe the organization perhaps might run the program, but then, rather than have follow-up one-to-one coaching, which will be expensive, it uses the bot as its negotiation advisor.
(23:00):
And so everybody can have access to that.
And everybody at work thinks, hmm, got a meeting with my finance director.
Or what did that guy say?
Oh, he said, yeah, use the bot.
Okay, I'll give that a go.
And they'll ask the question of the bot, and the bot will give good advice, and they'll go through that process that we just discussed.
And then slowly, or maybe even quickly, that organization
(23:24):
will become a much more collaborative organization, and as such, will be much higher performing.
Now, then there's the other thing that you talked about, employee engagement: the many-to-many negotiations that we were talking about with the deliberative AI, that we talked about with, like, scaling up citizens' assemblies, that
(23:46):
kind of stuff.
Well, again, as you said, we all know, and we've known for a long time, that if our staff are fully bought into a policy, because they felt they have contributed to that policy,
(24:08):
that they feel some kind of ownership of it.
Well, again, that staff, that workforce is going to be
much more motivated.
But typically the forms of employee engagement we've used in the past have been things like town halls, which generally means the loudest get heard and nobody else, or some
(24:33):
kind of employee survey where there's a set of multiple-choice
questions.
The questions are never really the right questions, and the multiple-choice answers aren't really the right answers either.
So it's always been a little bit underwhelming.
But those platforms that we were talking about can be used in the
(24:53):
employee engagement situation, in that kind of context.
So it really can be all employees contributing, and all contributing fully in free-text form.
And, you know, you can imagine, typically, if it's a free-text input, it almost instantly becomes
(25:14):
unmanageable because of the amount of information that needs to be consolidated and reconciled.
Well, AI can do all of that, so we really can have thousands of people putting in their full opinion, their nuanced opinion, and then the AI making sense of that and coming up with an agreement that, let's say, two-thirds of the people
(25:35):
support.
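The consolidation step described here can be sketched in miniature. Real deliberative-AI platforms use language models and clustering to make sense of free text; this toy version just tallies hypothetical themes by keyword and reports any theme that clears a two-thirds threshold, purely to illustrate the shape of the idea:

```python
# Toy sketch of consolidating free-text employee input at scale.
# Real platforms use ML; here a hypothetical keyword-to-theme table
# stands in for the "making sense of it" step. All data is made up.

from collections import Counter

THEMES = {  # hypothetical keyword -> theme mapping
    "flexible": "flexible hours",
    "remote": "remote work",
    "office": "office days",
}

def consolidate(responses, threshold=2 / 3):
    """Return themes supported by at least `threshold` of respondents."""
    counts = Counter()
    for text in responses:
        seen = set()
        for keyword, theme in THEMES.items():
            if keyword in text.lower() and theme not in seen:
                counts[theme] += 1
                seen.add(theme)
    return [t for t, n in counts.items() if n / len(responses) >= threshold]

responses = [
    "I want flexible start times and some remote days",
    "Remote work matters most to me",
    "Flexible hours please, plus remote Fridays",
]
print(consolidate(responses))  # ['flexible hours', 'remote work']
```

The point is the pipeline, not the keyword matching: thousands of nuanced free-text opinions go in, and a ranked set of broadly supported positions comes out.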
But what's interesting with these processes is that when people go through them, even the dissenters, the people who would have voted against it, they end up supporting it, because they say: this was a very fair process.
I like this process, I was listened to, I did contribute to it, I was heard.
I'm gonna support this outcome, even though I would have
(25:57):
preferred something different.
So really, really powerful.
Then we've got our staff fully behind whatever policy it is, and they're gonna make it happen.
SPEAKER_02 (26:06):
Yeah.
That is probably a fantastic example of how the AI-based tool can save people a huge amount of time.
And there's a couple of things coming out of what you just said there which I'd like to go a little bit
deeper into with you in a second.
However, the point about the surveys: you know, they tend to be fixed questions and fixed answer choices, and there's this idea that you can just get people to give freeform answers, right?
And do something meaningful with it.
(26:48):
I want to share a little example from this summer, when we were in the United States and we were traveling to visit some good friends of ours. And there's a particular, like, what would you call it? Like organic food, you know, one of these natural
(27:08):
food supermarkets which has lots of great produce in it, and it's quite a drive from their house.
could you stop by the store andcould you perhaps get us some of
these?
Oh, and then by the way, maybewe'd like a little bit of this
and a little bit of that, and alittle bit of and it was one of
these messages with like bitsand pieces of quantities and
(27:31):
everything else, and other phrases in there.
And we're going into the store and we were looking at this message, going, oh my god, how are we going to decipher this?
And, you know, luckily I have got late-teenage girls, like kids, and they said, just drop it into GPT and ask it to turn it into a shopping list for you.
Brilliant.
(27:52):
It not only did that, but it organized it by category within seconds, and we were able to walk around the store: in the veggie section, get the veggies; in the deli section, get the deli stuff.
And so what looked like a sort of mess turned into a very structured and very beneficial experience for us.
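The shopping-list trick above is essentially a categorization step: map each item to a store section so the list comes out grouped the way you walk the aisles. An LLM does this from free text; the tiny sketch below fakes that with a hypothetical item-to-section table, just to show the structure it produced:

```python
# Toy version of "turn my messy message into a shopping list grouped
# by store section". The item-to-section table is hypothetical; in the
# anecdote above, an LLM inferred these categories from free text.

SECTIONS = {
    "carrots": "veggies", "kale": "veggies",
    "hummus": "deli", "olives": "deli",
    "oat milk": "dairy alternatives",
}

def shopping_list(items):
    """Group items by store section, preserving item order."""
    grouped = {}
    for item in items:
        grouped.setdefault(SECTIONS.get(item, "other"), []).append(item)
    return grouped

print(shopping_list(["carrots", "hummus", "kale", "oat milk"]))
# {'veggies': ['carrots', 'kale'], 'deli': ['hummus'],
#  'dairy alternatives': ['oat milk']}
```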
(28:12):
So, I mean, I'm not sure if people are starting to explore this, but that's certainly a brilliant example of, you know, don't get hung up on the fact that you have to administer these very fixed-question and fixed-answer type surveys.
Now it's possible to have an alternative.
But more importantly, what I wanted to get at was: people who
(28:34):
are listening now, some of the listeners out there, might be thinking, well, this all sounds a bit utopian, you know, and what about the human?
Okay, so I have my bot, and I just talk to my bot, and my bot's going to advise me; what if the bot doesn't recognize the human factors at play here?
And I think you have some quite strong arguments to make
(28:58):
around how this still remains human.
So I'll let you go with that.
SPEAKER_00 (29:03):
Yeah, so look,
again, it goes back to that
thing of it being a tool, and it's an early tool; it's in its early stages of development.
So, as we move forward, we've always got to be very careful about how we use it.
In terms of the human benefit, human welfare, I think
(29:29):
we've absolutely got to make sure we use this for human welfare, and for everybody.
And I'm saying human as in the human race, rather than just the tech bros, kind of thing.
You know, we really have got to make sure that we do use it for everybody's benefit.
I think keeping people, keeping humans, in the loop is
(29:54):
really important.
Any process that you're doing using AI right now, absolutely keep humans in the loop: A, for that human factor, but B, because it's still an early technology and it can go wrong, and it can still give you completely wrong advice unless you check it.
(30:15):
And so, you know, I really encourage people to use the advice sensibly, with a questioning eye on it.
Don't take everything that it says for granted.
You will definitely come unstuck.
I wouldn't like to see your recipes if you just took the AI's advice without actually
questioning it.
So definitely always keep humans in the loop, always consider the human benefit, and potentially human non-benefits, and always double-check any answer it comes up with.
SPEAKER_02 (30:52):
Yeah, I would concur
with that.
I have created a whole prompt sequence for helping me produce this podcast, and generally speaking, it's pretty effective, very reliable, and I've caught it out now a few times doing stupid things.
And the one great thing about it is you can
(31:13):
tell it it has done stupid things, and it won't get upset with you too much.
Well, we'll find out how that progresses in the future.
And it is true that if I was just to take it for granted that everything it suggested to me was correct, then this podcast would probably come out back to front sometimes. But at least I check it and I say, this doesn't look right, the
(31:36):
same way you would in real life.
But one of the more interesting things, or more important things, I've found, and you can comment on this in a minute, is this: even if you're finding the AI quite useful, both in terms of efficiency and effectiveness, so it's helping you address the most important
(31:58):
things, but it's also helping you do it very, very
efficiently.
I have found that the kind of leverage variable with AI, and I did a previous short podcast on this, was, for me, context is really, really critical.
And a lot of people, I have found, are, you know, going to AI and
saying, but I asked it this andit gave me stuff back that I
didn't like or I didn't thinkmade sense.
I have found that if youcontextualize things quite well,
so don't just go in there andsay, I want you to help me with
negotiation, I've got to, I'vegot to get this price down or
I've got to get a better price.
What can I do?
Because that's not going to leadanywhere.
If you explain the context, though. And what I wanted to say in the
(32:41):
context is, I always still apply those basic principles of negotiation.
What's in it for them?
What's in it for the other person?
What about stepping into their shoes?
Which, when I teach this stuff in the classroom, and you've done it yourself, is probably the bit that challenges people: how can I see things from the other person's point of view in a way
(33:03):
that then helps me also get my needs met? But maybe we can find a way to get both parties' needs met in such a way that we can walk out of a room feeling that we got something mutually beneficial.
So maybe talk a little bit about this area of, you know, how do you actually teach and instruct AI to be more human, and to be more reliable?
SPEAKER_00 (33:30):
So the quality of
the prompt is so important.
And as you say, you can say, oh, I've got a negotiation, can you help me?
And it'll tell you one thing, but that's not necessarily going to be that helpful.
Whereas if you say, oh, I've got a negotiation with my finance
(33:51):
director later on this afternoon, this is the situation, and this is the backdrop, and this is what I'd like to achieve from it.
And then you put in as much as you can. I end up writing quite long prompts to give it that information, that context.
And then the more context that you can give it, the more
(34:11):
likely it's going to give you an accurate and relevant answer.
And then, of course, you can always, as we were saying, not take that answer as God-given, but question it. You can say, hmm, you mentioned this, what do you mean by that?
Or you can even say: you just suggested for me to say it in this way.
(34:34):
That's really great, but I'm not comfortable I could do that.
Can you give me different examples of me phrasing that? That kind of thing.
And so, in other words, you keep it as an ongoing conversation with it.
It's not just a one-shot go.
And you can always, if you are worried about it hallucinating,
(34:54):
strangely enough, say things like, please do not hallucinate, or, take your time and think about this; it's less likely to hallucinate and it's more likely to give you a thought-through answer.
You can set up your own system prompt.
So I think I'm right in saying, in ChatGPT and most of
(35:14):
them, there's a place in your settings where you can say: add this onto every single question that you ask.
So, in other words, you can give it a persona of, I want you to be a thought-through, reliable person who doesn't make things up, or that kind of stuff.
(35:35):
And all of that will just help you get better answers.
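The prompting advice here, a persona-style system prompt plus a context-rich request, can be sketched as the messages structure that chat APIs such as ChatGPT's generally expect. The wording and the `build_messages` helper are illustrative, not a fixed recipe or Simon's actual bot:

```python
# A sketch of the advice above: a reusable persona/system prompt, plus a
# user prompt that spells out counterpart, situation, backdrop, and goal.
# All names and wording here are hypothetical examples.

SYSTEM_PROMPT = (
    "You are a careful, reliable negotiation adviser. "
    "Take your time, do not make things up, and say so when unsure."
)

def build_messages(counterpart, situation, backdrop, goal):
    """Assemble a context-rich prompt in chat-API message format."""
    user_prompt = (
        f"I have a negotiation with {counterpart}.\n"
        f"Situation: {situation}\n"
        f"Backdrop: {backdrop}\n"
        f"What I'd like to achieve: {goal}\n"
        "What's in it for them, and how should I approach this?"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    counterpart="my finance director, this afternoon",
    situation="budget review for next quarter",
    backdrop="last quarter we overspent on contractors",
    goal="a 10% increase for training, or a phased alternative",
)
print(messages[1]["content"])
```

The follow-up questions discussed earlier ("what do you mean by that?", "give me different phrasings") would simply be appended to this same messages list, keeping it an ongoing conversation rather than a one-shot go.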
SPEAKER_02 (35:39):
One of the things I
will tend to do, like you, is I try to make the briefing as comprehensive as possible.
I like to treat it as though it was somebody in the room with me, and I would say things like: is everything
clear?
Do you know what to do next?
Have I missed something?
(36:00):
If I have, what would it be?
Yeah.
Could you just clarify for me, before you go and do this, what it is you're going to do and how you're going to do it?
Which, if you're teaching somebody new on your team, for example, with something new that you want them to go do, would be best practice: just making sure that that person, for their sake, is clear leaving the room, and, for my sake, that when
(36:24):
they've gone off and spent a day on it, they come back with something that is close enough, or maybe even better than I thought it would be.
So I find it helps to treat it more like a conversation partner, somebody you might ask to go and do something for you, asking it to come back with questions, and then, when it makes mistakes or adds some new insight, asking it
(36:47):
to remember that.
SPEAKER_00 (36:48):
Right.
Yeah.
SPEAKER_02 (36:50):
Just the same way
you might say to somebody now,
for the next time, let's remember to add that step,
right?
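The ongoing-conversation pattern Jerry describes, keep the full history and fold corrections like "next time, remember to add that step" back in as ordinary turns, can be sketched as follows. The function names are illustrative, and the stub model stands in for a real chat API call:

```python
# Sketch of the "ongoing conversation" pattern: keep the full history,
# and feed corrections back in as ordinary turns so they persist.

from typing import Callable

def make_session(ask_model: Callable[[list], str]) -> Callable[[str], str]:
    """Return a chat function that remembers every prior turn."""
    history: list[dict] = [
        {"role": "system", "content": "You are a careful assistant. Before a "
         "multi-step task, restate your plan and ask if anything is unclear."}
    ]

    def say(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = ask_model(history)          # e.g. a call to a chat API
        history.append({"role": "assistant", "content": reply})
        return reply

    return say

# A stub model so the sketch runs without an API key:
chat = make_session(lambda hist: f"(model saw {len(hist)} messages)")
chat("Summarise this report for the board.")
chat("Good, but next time remember to add a risks section.")  # correction persists
```

Because every turn stays in `history`, the correction travels with all later requests, which is the transcript's "remember that" in code form.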
SPEAKER_00 (36:56):
Yeah, yeah.
Yeah.
And as you say, it's a bit like treating it as a very intelligent and hardworking intern, a recently graduated intern, or something like that.
So they're good, they're clever, they will work hard for you, but often there are just some things that we take for granted that
(37:20):
they miss.
And so we need to double-check that their understanding is the same as our understanding.
SPEAKER_02 (37:27):
Okay.
So maybe something else that's on the mind of any listener out there is this: these tools are becoming more accessible by the day.
You and I are both exploring and using them; you've been doing it since 1989, since you were at school, when you were a little boy, right?
And I maybe have to admit that I've been doing it a
(37:51):
little bit less long than that.
However, people out there might be saying, yeah, but these two guys are just getting off on this because they like using it.
They might be saying, but what about all this stuff about bias and miscommunication?
Are we not in danger of just amplifying a lot of BS and a lot of negative stuff?
(38:13):
So, from your experience, in terms of the research you've done, and given that you're actively out there with your bot and doing a lot of trainings: what should leaders and managers be doing to make sure AI enhances things like empathy and understanding rather than eroding them?
And how can they use it to bring out the best in their people, which aligns with one of the key tenets of this podcast,
(38:36):
which is how do I bring out the best in myself and others?
So maybe share with my listeners some aspects of that.
You're listening to Leading People with me, Jerry Murray.
My guest this week is Simon Horton, negotiation expert and author of The End of Conflict:
How AI Will End War and Help Us
(38:58):
Get On Better.
Still to come.
We're diving into one of the biggest questions surrounding AI.
Can it help reduce human bias, or are we at risk of amplifying it?
(39:19):
Simon shares what his research uncovered, why he's cautiously optimistic, and how the future of AI may depend less on the tech giants and more on the rest of us.
That's coming up next.
SPEAKER_00 (39:33):
Well, to address the
bias thing specifically, and you're absolutely right, there have been some quite high-profile cases showing that AI can be biased.
And yes, the danger is that it will amplify that bias.
Where does the bias come from?
(39:54):
Obviously, it comes from humans, because it's trained on human data, and humans are biased.
That has got into the training data, and so it thinks in the same biased kind of way.
Now, I'm optimistic on that, and I appreciate I am an old white male, so it's easy for me to be optimistic on that.
(40:16):
But I am optimistic for a number of reasons.
Firstly, because it has come out in the open and work is being done on it, both in a top-down way and a bottom-up way.
So the makers of the big frontier models, OpenAI and Anthropic and Microsoft and Grok and all of these, they are doing work
(40:39):
on it.
They are doing work on how to identify biases and then how to de-bias the models.
And in fact, there are platforms out there that you can use to run any text you've written through.
If you've written an essay, or a speech you're going to give, and you think, hmm, I wonder if I've got any biases in here, you can run it through and it'll check
(41:02):
for the known biases at least.
Obviously, there is always the danger of the unknown biases, but the known biases it'll check for, and it'll point them out in your text.
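Simon is describing dedicated bias-checking platforms; a rough home-made approximation of the same idea is to ask a general chat model to audit your text against a list of named bias categories. The category list and prompt wording below are illustrative assumptions, and, as he notes, this can only surface the biases you think to name:

```python
# Rough approximation of a bias check: build a prompt asking a chat model
# to audit a text. The bias list and wording are illustrative assumptions.

KNOWN_BIASES = ["gender", "age", "ethnicity", "ability", "socioeconomic"]

def bias_audit_prompt(text: str, biases: list = KNOWN_BIASES) -> str:
    """Build a prompt asking a model to flag possible biases in `text`.
    It can only surface the categories you name; unknown biases, as the
    conversation notes, remain a risk."""
    categories = ", ".join(biases)
    return (
        f"Review the text below for possible {categories} bias. "
        "For each issue, quote the phrase, name the bias category, and "
        "suggest a neutral rewording. If you find none, say so.\n\n"
        f"TEXT:\n{text}"
    )

prompt = bias_audit_prompt("The ideal candidate is a young, energetic salesman.")
# The prompt would then be sent as an ordinary user message to any chat API.
```

A dedicated platform would go further, but even this one-prompt version makes the check repeatable before a speech or essay goes out.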
So there's the top-down route, but then there's the bottom-up route as well, involving a lot of the different demographic
(41:26):
groupings that perhaps might suffer from these biases.
Whether that's based on race or ethnicity or gender or sexuality or whatever, a lot of those groupings are building their own models, working and collaborating with
(41:48):
the larger frontier models, and building their own models very much based on the de-biased perspective they bring to it.
So there's a lot going on at that level.
So I think this combination of top-down and bottom-up, and the collaboration between the two, is very, very hopeful.
(42:11):
Interestingly, I mentioned that the premise of the book is that AI models human best practice and then makes it more widely available.
And then humans can learn that best practice back from the AI.
I think the same might happen, and I'm hopeful that it will,
(42:32):
with bias.
Our society is biased for deep reasons, deep genetic reasons, evolutionary reasons, etc.
And it's quite difficult to de-bias an individual, or to de-bias a society.
But it's easier to de-bias AI, it's easier to identify
(42:55):
biases in AI and then do something to correct for them.
So I am hopeful that, again, the AI will learn best practice regarding bias from humans, and it will become less biased, and then humans, our society, our wider
(43:16):
society, will become less biased because of the benefits of the AI.
SPEAKER_02 (43:23):
Yeah, and one of
my questions here was really
about where you see the real synergy between human ways of approaching negotiation and conflict, and artificial intelligence.
So, would this be one example where you see potential for synergy, where we can reduce bias in society?
SPEAKER_00 (43:46):
Yeah,
I think the potential for
synergy between AI and humans is huge, because all the technologies we've invented in the past, whether we're talking about the steam engine or electricity or the plough or whatever, have amplified our strength or our
(44:09):
speed or our reach, basically physical capabilities.
But humans have always had the intelligence bit, and that's been amazing, it's served us tremendously well.
It's built all of these things we see around us.
However, it hasn't been quite good enough in many ways, hence the inequalities in society, hence the wars that
(44:31):
still exist, hence poverty, etc.
And AI, as a technology, isn't an amplifier of our strength, it's an amplifier of our intelligence.
And so I think those issues that humans have found just outside our reach, how to build fair societies and such,
(44:52):
I think AI will help us with that.
In fact, take something like climate change: if it's left to the humans, I just don't think we're going to do it, basically.
We've been trying for 50 years, and people are still eating meat, people have still got the radiators on at 24 degrees, etc.
We're just not going to do it.
(45:13):
But I think it's AI, with its greater intelligence, working with humans, not at the expense of humans, that really will solve the deep human problems we face, whether that's happiness or equality or diversity or wealth, abundance,
(45:35):
health, longevity, etc.
I think AI is the thing that is really going to help us solve those things.
SPEAKER_02 (45:43):
Okay, so there might
be a few people recalling the
expression, where there's a will, there's a way.
Yeah.
And now we know there's a way.
Maybe this brings us back full circle to what you said very early on in this conversation about the potential for good, and the potential perhaps for evil, in anything
(46:06):
new in society, particularly in a technology like this, which is almost exponential in the way it grows and infiltrates our day-to-day life.
There's always money behind these things, and we see the massive amount of investment, to the point where, at the moment, it's only a few of the really big guys who can
(46:29):
get into this and afford to waste money, if nothing else, you know.
And if we look at the negative side, and think about this will, and the influence money has on our willingness to do things: are the tech bros, as they're called, sufficiently
(46:50):
motivated to actually do good with this?
Or, based on what we see at the moment, are they in danger of falling into the sort of evil pathway?
Or is there something in society that might be able to prevent that from happening, in your observation?
This might just be an opinion from you, or maybe you've seen
(47:12):
some data to encourage you.
SPEAKER_00 (47:16):
So my opinion is:
you're absolutely right.
Right now it's the money, it's the tech bros who are driving the conversation.
And not just driving the conversation, they're driving the technology and how it's used, and therefore they are driving the future of society and the future
(47:37):
of humanity.
And do I trust those tech bros?
Nope, I do not trust them.
I do not trust Google as far as I could throw Google.
I was going to say I do not trust Elon Musk as far as I could throw him, but I might be able to throw him at least a couple of feet, whereas I couldn't throw Google very far at all.
It's very, very big.
So I don't trust them in the slightest.
But there is this alternative route possible,
(48:04):
and this is exactly why I wrote the book.
I think the reason they got into this position is that they have been running the conversation for the last 25 to 30 years, whereas pretty much everybody else has been shying away from it.
Most people, for a long time, didn't
(48:25):
hear about AI, as you said, until the ChatGPT moment in November 2022.
And if they had, most people would kind of go, yeah, I don't really know much about it, I'm a little bit worried about it, and I don't want to talk about it.
So this is why I wrote the book: to say, listen, we've got to reclaim the conversation.
And I do think, look, there are only seven chief execs of big
(48:50):
tech companies.
There are eight billion of us.
And if there was a fight between us, we'd win.
Right?
If there was a wrestling match between us eight billion and them seven, we'd win.
So I do think there is still the possibility for us to have this conversation, put pressure on the government, put pressure on the tech companies themselves, and just
(49:14):
raise that conversation, so that we use it in the right way.
And whilst you're right about the money being the main driver, and they've got all of the billions to build these data centers and do all of the research, a lot of AI is actually being driven bottom-up.
(49:37):
You know, you said that you've been playing around and knocked up some things, and I've been playing around and knocked up some things.
Well, people who are more technical than you or I are doing the same in their living rooms, or wherever, and they're knocking up some really interesting things.
And I think the future is there as much as it is in the
(50:00):
frontier-model, top-down kind of approach as well.
So I do believe it is up to us, by spreading this conversation.
It's up to us what outcome we get.
Do we get tech bros living on Mars after blowing up Earth?
Or do we get a utopia for everybody, and all humans
(50:25):
flourishing?
That is up to us.
SPEAKER_02 (50:30):
So a question
that just comes to mind here is:
how did AI help you write this book, or did it?
Did you sort of get it involved, and provoke it, and ask it questions?
SPEAKER_00 (50:42):
So I did use it.
I started writing the book a couple of years ago, and AI wasn't at its best then, but it was still useful.
In a couple of ways.
So, for example, I remember one chapter where I was thinking,
(51:06):
oh, I've got to write about this.
This was a bit of an outlier chapter in the book.
I thought, well, I don't know anything about this, but I need the chapter in there, so what am I going to write?
So firstly, I asked it to write me a dozen subsections within that chapter, and it came up
(51:27):
with a really good plan for the chapter.
And I ended up doing that for every chapter, though not in quite the same way: I would typically do my thinking first, then I would get it to do some thinking, and then I would marry the two.
But with this particular chapter, with each of those subsections, I then said, okay, can you take that first subsection
(51:51):
and write me, let's say, 1,200 words on it?
And it did.
I printed it off and read it, and I thought, oh, this is quite good, but it's not really my voice.
So I said, can you do the same content but in my voice?
A little bit more of a joke,
(52:11):
that kind of stuff.
And again, it came back with something, and I printed that off.
And I thought, oh, this is really good.
There were literally some laugh-out-loud jokes, and there were some great analogies and metaphors in it.
So, brilliant, I know what to do for the rest of the chapter.
So I basically did the same for every subsection, printed them all off one after another as I was
(52:34):
uploading the next one, and eventually I got them all from the printer and started reading them.
And unfortunately, every single subsection was identical, with the identical jokes and the identical metaphors.
So I thought, oh well, this isn't going to work then.
So I didn't use it for that kind of thing.
(52:54):
But how I did use it was, you know, for those times when you think, hmm, I'm saying it in this way, and maybe you read it out loud and it sounds really clunky, or there's a word you're missing, what's that word, or whatever.
Or sometimes I
(53:14):
would get it to come up with catchy titles for a subsection, that kind of stuff.
So I used it more as a writing coach, specifically around language, around a phrase or a word, in that kind of context.
SPEAKER_02 (53:31):
Yeah.
I can ask you that question because you've written several books without ChatGPT being around.
And I was sort of smiling to myself when you mentioned the humor thing, because one of the things that characterizes your books is that little bit of dry humor that sneaks its way in.
And I'm not sure that, even with
(53:54):
training, you can teach the AI, at the moment anyway, to reproduce that turn of phrase, that one little thing that comes from something in your experience or some way you've looked at the situation, because it's very personal, isn't it?
Humor, that kind of observational humor, can be very personal.
And so it's actually intriguing to listen to you talk
(54:17):
about how you experimented with the AI, and in the end you still wrote the book yourself.
It became a writing assistant, a sort of sub-editor: you could throw something at it, and it might suggest new ways of doing things.
So, for all those people out there thinking, well, I need to write a book, or I want to write a book, and I'll just fire up ChatGPT now: beware, it doesn't work quite
(54:38):
like that.
Okay, so before we close off, I have a couple of short questions for you.
And by the way, you're going to have to read the book now to find out which chapter Simon was talking about.
(54:59):
So, if there's one simple idea or insight you'd like listeners to take away from this conversation, or the book, what would it be?
SPEAKER_00 (55:06):
It's that AI can
be very helpful: it can
help you resolve your conflicts, it can help you be more collaborative, and you will get better outcomes in your life.
And the people you're negotiating with will get better outcomes too, and you'll improve the relationships in your life, if you use it in that way.
SPEAKER_02 (55:28):
Yeah, actually,
there's a great chapter in the
book about relationships, because we're more focused on the leadership aspect here.
I mean, you also talk about world peace, but there's a great chapter in the book about just our day-to-day family stuff and partners and all of that.
So if you're curious out there, I would encourage you to grab a copy of this book.
(55:49):
And what's next for you, Simon?
Are there any more ideas or experiments emerging from The End of Conflict that you're excited to explore?
SPEAKER_00 (55:58):
Yes, well, all
of this stuff that we're
talking about, I am building on, the bot, etc., so that organizations can use it in the context we were talking about, making them more collaborative.
Also, I'm talking to a lot of organizations about using deliberative technology within their organization to get
(56:21):
it more collaborative, to get greater employee engagement and therefore a more motivated workforce.
I'm also looking at using that in the community context, so maybe local councils making policy decisions and suchlike using these technologies.
And lastly, infrastructure companies.
(56:42):
Just very briefly: infrastructure companies have classically lost billions of pounds and gone years over budget on their projects.
Why?
Largely because of community resistance to the new infrastructure project, whatever it is, whether it's a road
(57:02):
or railway or housing development.
The NIMBYs, the not-in-my-backyard people, say, yes, of course we need housing, but not here, not next to my house.
Well, it turns out that most people aren't NIMBYs, they're MIMBYs: maybe in my backyard, maybe under certain conditions,
(57:25):
if you were to do it this way, and if you were to take this into account, and if you listen to my voice and my opinion, then yes, I would say yes.
And these platforms that we're talking about, the deliberative platforms, enable those kinds of conversations to be had amongst communities.
So again, they reach a nuanced agreement that everybody is
(57:48):
behind.
The infrastructure project can now go ahead quickly, on time, and on budget.
SPEAKER_02 (57:54):
So I'm going to put
links in the show notes.
And do you have any special offers for my listeners here today?
SPEAKER_00 (58:03):
That's a
good negotiation, Jerry.
So yes, there is.
If anybody wants to reach out to me on LinkedIn and say that they heard me on your show, I'm happy to give them a discount on the book.
I think the book is normally £14.99.
(58:23):
You can get the PDF for £9.99 on my website.
I'm happy to make it £7.99, so that's two pounds off, or indeed seven pounds off the Amazon price.
But I'm also willing to negotiate further if they wish.
(58:44):
So if they go to my website, theendofconflict.ai, on the buy-the-book page there's a bit where they can negotiate with me, and I'm willing to give further discounts, down to a minimum price of £3.99, depending on what they
(59:05):
bring, what value they bring, what contribution they're willing to make.
But if they just connect with me on LinkedIn and, as you say, mention that this is where they came across me, then I'll give them a discount from there anyway.
SPEAKER_02 (59:20):
Okay, so for the
brave of heart who want to
negotiate with a negotiation master, you now have your choice of how to do it.
And Simon, it's always a pleasure having you on Leading People.
And I'd like to thank you once again for sharing your insights, wisdom, and tips with me and my listeners here today.
SPEAKER_00 (59:40):
It's always a
pleasure to be here, Jerry.
Thanks so much, and I just hope that your listeners really enjoyed it.
SPEAKER_01 (59:49):
Coming up on Leading
People.
And my question always is to people in L&D: do you want to deliver some content, or do you want to make a real impact?
Now, they're always going to say a real impact.
Therefore, you're going to have to do something more than just deliver content.
And it's that 'more' bit that's where my expertise is, and the stuff in the books and so on:
(01:00:09):
what else do you have to do as a wrapper around the content to make it have an impact?
SPEAKER_02 (01:00:15):
That's Paul
Matthews, one of the most
practical and respected voices in learning and development.
In our next episode, we dive into what really drives capability at work, why so much training fails to transfer, and how to shift from delivering content to delivering impact.
We also explore the five components of capability, what
(01:00:37):
leaders need to understand about learning, and the hidden engine of informal learning that keeps every organization alive.
You won't want to miss it.
And remember, before our next full episode, there's another One Simple Thing episode waiting for you: a quick, actionable tip to help you lead and live better.
(01:01:01):
Keep an eye out for it wherever you listen to this podcast.
Until next time.