Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome, IT change makers, to the DEX Show with Tim Flower and Tom McGraw. Let's get into it.
Speaker 2 (00:08):
If manners maketh man, as someone said, then he's the hero of the day. We can hear it in his accent when he talks. He's an Englishman in New York. Which is my way, ladies and gentlemen, of welcoming our special guest today, an Englishman in New York: Terence Mauri. Hey, Terence.
Speaker 3 (00:25):
Hey, Tom, great to be on the show. Did you like that?
Speaker 2 (00:29):
Maybe not my rendition of it, which may have been just off, but did you ever like that song? I always quite liked that song, you know. I always thought it was very evocative.
Speaker 3 (00:36):
Sting's song is great. It's one of my favorites.
Speaker 4 (00:38):
And I grew up in Stratford-upon-Avon, the birthplace of Shakespeare. So I think you have an inner poet in you.
Speaker 2 (00:45):
Thank you. I was going to say, it's about time someone put Stratford-upon-Avon on the map, right? Exactly. And I have to say it's great to have him, because I feel like, Tim
Speaker 5 (00:56):
can maybe vouch for this, right?
Speaker 6 (00:57):
But if I was to think of a theme, a meta-theme, for the first quarter of DEX Shows of twenty twenty-five, I'm going to say disruption. You know, we've done a lot of stuff around Assist two point zero.
Speaker 2 (01:12):
As listeners will know, we launched the Transformation Report about AI transformation. And speaking of poets, he is something of the poet of disruption, we might say. Terence, why don't you tell people a little bit about your work?
Speaker 3 (01:30):
I love the framing, and it's true.
Speaker 4 (01:32):
I think disruption is one of these words that's misunderstood.
It's overused, often becomes a buzzword and loses its specificity.
But I'm interested in the upside of disruption because I
think on the other side of disruption, whether that's technology
change or talent scarcity or industry convergence, is the upside.
(01:52):
And the upside, the opportunity for reinvention, could be a new service, a new product, a new platform. And I think one of the big takeaways today that I want to explore with you guys, with your listeners and viewers, is this idea that we always overestimate the risk of doing something new and always underestimate the risk of standing still. That's really the central call to action
(02:14):
around disruption. Even job titles are being disrupted, by the way. I had a meeting at the Department of State, the State Department, recently, and we exchanged business cards, and on the business card it said Head of Uncertainty.
Speaker 2 (02:34):
Okay. Maybe you'd be the King of Uncertainty.
Speaker 7 (02:40):
But I do think, you know, particularly with AI, Terence, right? Like, I'm sure what you say is right in general, but tell me if you disagree: with AI, there is a consensus that people need to move, and as much as there's that urgency about it, there's also a sense of an awareness. Yes,
(03:01):
you can jump too late, but you can also jump too early, right? It's a push and pull.
Speaker 4 (03:08):
I think we have to avoid what I call artificial idiocy, and that's a play on the words AI. What I mean by that is that I agree AI is here to stay, a secular and structural trend presenting tremendous risk but also tremendous opportunity.
Speaker 3 (03:28):
I think one of the things we're missing
Speaker 4 (03:30):
right now with AI is that it's largely an automation story, for many organizations and the media as well. And I'm interested in an elevation story. In fact, in my new book, The Upside of Disruption, I talk about really two narratives around AI: warm AI, which is humanity first, enabled by AI, and cold AI, which is machine first,
(03:52):
often at the cost of loneliness or well-being. And I feel that a lot of capex is going into cold AI at the moment, at the expense of warm AI.
Speaker 2 (04:05):
And you know, okay, so think about that: what warm AI looks like in real terms, right? You're seeing a lot of AI launched at the moment for non-technical users. And I mentioned Assist two point zero; we did a series of shows about that, which listeners should check out if they missed them. It's effectively taking, you know, highly sophisticated DEX insights
(04:27):
and democratizing those insights effectively. What kind
Speaker 8 (04:32):
of opportunities and risks, Terence, does that kind of democratized power of AI offer to organizations, in your view?
Speaker 3 (04:43):
So I think, you know, in terms of the key difference here, it's one between
Speaker 4 (04:47):
value extraction and value creation. It's one between AI that actually creates an ultra-processed world, where we're drowning in information and drowning in data, and a world where we actually achieve what I call ROI, which is not just return on investment but a new human metric for a post-AI world: return on intelligence. And the research is unequivocal.
(05:12):
Around seventy percent of our time every week is spent on bureaucratic activities at the expense of intelligent activities. If there's one thing that AI could do to help us be more productive, lead more meaningful lives,
Speaker 3 (05:24):
higher-value and more intelligence-driven lives,
Speaker 4 (05:26):
it would be to democratize, but also to delete and disrupt: automate, remove, subtract a lot of this bad bureaucracy that we have to deal with, whether that's too many meetings, too many processes, duplicative structures, or just ways of thinking that
(05:48):
have gone off, like yogurt in the fridge.
Speaker 5 (05:52):
And so, Tom, I'll agree with you. I think in the first part of the year there's been a lot of talk about disruption. In my travels around the world, I've seen not just excitement about the opportunities of AI; I would characterize it as nervous excitement. And I think it covers a lot of what you're
(06:12):
referencing, Terence, which is: is the fear of moving forward justified, or is it the traditional FUD, right? Fear, uncertainty, and doubt. It's something new, it could be dangerous, but we're not quite sure. I agree, too, that a lot of it has been fairly simplistic in the way it's talked about, leveraging
(06:36):
it mostly for automation. But AI, in my view, is really an advanced way to get information or answer a question, right? We've used Google forever. You google your question, you get tons of results back, but this gives you very specific, detailed answers to your question. But you talk about it in the context of next-level decision making.
(06:59):
How do companies prepare for that shift? Is data-driven decisioning already ingrained in us, so it's not that big a deal? Are we already prepared, or do we need to do something else to get ready for that next-level decision making?
Speaker 3 (07:13):
Yeah, I love the question. It's a catalytic question.
Speaker 4 (07:17):
It's a question that doesn't just make us feel good but makes people think hard, and that's important. I think AI is completely misunderstood. The CEO of IBM said today that ninety-nine percent of data has not been touched by AI in enterprises, and two of the biggest challenges we're facing right now are data quality and
(07:37):
speed to insight. And so obviously organizations are drowning in multiple layers of complexity. For example, the average number of apps for a large enterprise in the S&P five hundred is over six hundred.
Speaker 3 (07:52):
So we've got
Speaker 4 (07:53):
legacy, we've got tech debt, we've got different levels of hierarchies, and this creates all sorts of complexity, which is a tax on our ability to really harness the full intelligence and insight of our AI platforms. So I think that's the challenge, and that's the opportunity, actually. And
(08:14):
I agree with you. I think most organizations are trapped between the panic zone and the complacency zone. And I think we need to look at a third space, a third zone, which would be the learning zone, the sandbox zone, the iterative zone, the zone where we're
(08:35):
really experimenting. A simple call to action would be: what's a simple experiment we can do with AI over the next thirty days to create our own AI use case and get started on this journey? That would be a great call to action straight away.
Speaker 5 (08:53):
So some of the things I've been hearing as I travel are kind of divergent thoughts. The first is: we won't need experts anymore. I'll just ask AI. AI will be my intelligence. I won't need intelligent people, I'll just need the artificial intelligence, and I'll rely on that more and more over time. You see that even in Microsoft as they move to using AI to do the majority
(09:16):
of their coding, this notion that AI is going to be my full-time intelligence. But on the other hand, the divergent alarm that some experts are starting to sound is this concept of AI eating its own tail: that the model will eventually collapse, that the data it's
(09:36):
consuming to make decisions will be the data that was generated by AI, and that data isn't always accurate. We see it often that the questions that are asked of AI result in bad data, or it leverages the wrong information to give you bad answers, and that eventually AI and the models will collapse because it's just all regurgitated bad data. I'm curious what your thoughts are on
(10:01):
that soft skill of asking the great question and deciphering the data that it returns to determine if it's good or bad. What do you feel about the skills that are actually needed to leverage AI to its fullest?
Speaker 4 (10:18):
I think one of the interesting perspectives here, and I think about the research by Ethan Mollick, for example, a great researcher on AI, is this idea that too much information leads to a poverty of attention. And I think
(10:38):
how we harness this intelligently, so that, you know, the querying, for example, the questioning, is
Speaker 3 (10:45):
a real imperative.
Speaker 4 (10:47):
I agree. I think in terms of the upside, and I talk a lot about the upside, using AI as a consultant as a service, for example, not just as a copilot but as a co-thinker, as a co-strategist, is where the upside is. But I also think there's a risk of the curse of sameness. And what I mean by the curse of sameness is
(11:09):
this really interesting paradox: when everyone's got access to the same cheap AI, the same data, how do you actually differentiate yourself?
Speaker 3 (11:20):
That's the real dilemma here.
Speaker 4 (11:21):
It's great, we've all been democratized, we've all got access to AI. But then how do you differentiate yourself? It's a genuine question, as an individual, as a business, as an enterprise, when we've all got access to
Speaker 3 (11:34):
the same zero-cost AI.
Speaker 4 (11:36):
And by the way, we know that the cost of AI models has dropped over two hundred and eighty x in the last two years. So that's the dilemma that I'm interested in exploring as well: what will become more scarce when everyone's got access to the same data, the same information, the same democratized AI? Well, actually, what will become scarce is human intelligence, human imagination,
(12:00):
and empathy, deep trust as opposed to shallow trust. The two takeaways here: one, be aware of the curse of sameness; and two, what the world needs right now more than ever is willful contrarianism.
Speaker 5 (12:17):
I couldn't agree more about the need for empathy, the need for creativity, both in how we interact initially with it as well as how we analyze the results and what we're going to do with it. I think that's what differentiates us. If we've all got access to the same models, the same data, it's going to be how the human interacts with it, leverages it, and actually uses it to its fullest.
(12:41):
You know, these intelligent systems are becoming more conversational, more embedded. I've told this story before: not long ago, preparing for a roundtable that I was participating in, I asked ChatGPT, what roles will start to leverage AI more and more? What new roles will develop? And it came back,
(13:02):
and one of the items it said was DEX strategist, and in parentheses next to it, it said wink wink, because that's my title. And it just made me sit up and say, now, who coded, who programmed GPT to have that level of interaction and almost humor? Right? So that
Speaker 3 (13:23):
is it.
Speaker 4 (13:26):
What's exciting about that, Tim, is this idea that AI is the worst it will ever be today. AI is the slowest it will ever be today. So imagine another three years, five years out; we don't even have to speak about ten years.
Speaker 3 (13:38):
I'm talking about the next thirty six months.
Speaker 4 (13:41):
So I think that's really interesting. And I think alongside that, what we're not talking about enough is the psychological revolution. We're so focused on the technology revolution and the AI revolution, and it's exciting, it sucks up all the oxygen in the room, but we can't actually harness the full potential of that without a psychological revolution. What I mean
(14:01):
by that is, if I go into most organizations today, they're still using the same organizational playbooks that they used one hundred years ago, quite literally, in the way that they approach the working day, meetings, committees, and the bundling and unbundling of jobs. So that's what's interesting to me, and that's the risk and opportunity again,
(14:23):
which is: we've got this incredible access to a superpower now, only limited by our own imaginations, but without the psychological and organizational revolutions to support it, we risk artificial idiocy. It's a bit like getting a Ferrari or a Porsche and actually still, you know, only going five miles per hour in it because we don't know how to use it.
Speaker 5 (14:46):
And I think that idiocy is learned over many decades of ingrained process, ingrained hierarchy in our organizations. You know, we're stuck in the status quo, so we look at things in the context of what we've done versus what's possible. And to get back to this
(15:07):
concept of taking those old ideas, those old methodologies, and disrupting them: there are some very visible tech leaders right now who have a strong disruption mindset, but when it's done incorrectly, we can see that disruption can really cause severe havoc, right? So you've said that
(15:29):
mindset is a strategy, and mindset can really change how technology is viewed, how it's designed, how it's implemented. Talk about mindset. What mindset do these digital leaders need? How do they lead effectively through this disruption and avoid the chaos?
Speaker 4 (15:50):
Interesting question. A pivotal question right now, a question of our times. And you know, I'd spent some time working with Revolut recently, and we know Revolut for, you know, being really a sort of future-ready frontier company in banking, in neo-banking, but actually they're going
(16:12):
to start disrupting the telecoms market next. They've decided that, and this is the interesting thing about these pioneer disruptors: they're not just focusing on one vertical, on one industry. They can actually disrupt multiple verticals. And that's the scary truth for anyone listening today: your next biggest competitor could be a Revolut,
(16:33):
could be a VC-backed company you've never actually heard of. So I think in terms of those collective mindset shifts that we're talking about: economies of scale to economies of sustainability and economies of light speed; ego-system to ecosystem; preservers of the status quo to challengers of the status quo; Homo sapiens to techno sapiens; data
(16:58):
silo to collective intelligence; and wait and see to learn and explore. Do faster than doubt would be a key call to action right now. I spent some time with Jesse Armstrong, the chief writer behind Succession. One of my favorite sayings from that brilliant show, by the way, is you can't make a Tomelette without
(17:21):
breaking some Greggs. And Jesse Armstrong iterated the script over a hundred times with his team, and one of their mottos is start before you're ready or don't start at all. And I think in this moment of extreme liminality, in this moment of high-speed unreality, for many, actually, not taking a
(17:41):
risk is a risk when it comes to AI.
Speaker 2 (17:46):
Interesting. And extreme liminality is super cool, Terence. I know you've also written about trust as a kind of currency in this emergent world. Maybe describe your ideas about trust and why it's so significant, and then relate that, if you can, to how technology teams
(18:09):
and organizations can build trust as they start to roll out these very disruptive new tools.
Speaker 4 (18:15):
If trust was on the dashboard right now, it would be flashing red. The writer George Orwell would love these times, but not in the right way.
Speaker 3 (18:23):
We have meme warfare, deepfakes.
Speaker 4 (18:26):
We heard recently about one of the finance team at Arup, the engineering company; this was at their Hong Kong office. He saw a deepfake of his CFO, and it was so believable that he transferred twenty-five million dollars, and he did it in twenty-five transactions. And here's an interesting question for you: if you saw your CEO on the screen, how
(18:50):
confident would you be that it was him or her and not a deepfake? Last year it was about sixty-six percent confidence. This year it's thirty-three percent confidence. But the actual number is irrelevant.
Speaker 3 (19:01):
At some point, very soon, you won't know the difference.
Speaker 4 (19:05):
And if cybercrime and truth decay and trust and ethical breaches were a country, it would be the third fastest growing country on the planet right now: over eleven trillion dollars of global GDP, growing at a compound annual growth rate of over twenty-eight percent. We've seen some big examples in the last eighteen months: Snowflake, healthcare companies. The average data breach
(19:29):
will cost you between six and ten million dollars. It will affect your stock price by seven to nine percent. And trust is the number one driver of enterprise value, but it's never been more tested and more under attack. Trust is the ultimate human currency in this age of AI acceleration and automation. And so for me, if
(19:53):
I break down trust, you know, we're going through different evolutions of trust: institutional trust, local trust, then distributed trust, which is enabled by technology, and then auto-sapient trust, which will be, you know, you'll have the equivalent of a ChatGPT that will be your
(20:13):
user interface, and you will decide what to do through that particular chatbot. Auto-sapient trust: the organizations that own the trust own the ecosystem and own the value chain. So trust is a big deal. It's time to rethink trust, physical, digital, ethical, and I think it's never been under more pressure.
Speaker 2 (20:37):
I've already fallen for just about every phishing scam going, Terence. So you know, they don't need to invest so heavily to pull me in. Who's to say I wouldn't send twenty-five million dollars to some
Speaker 3 (20:51):
company?
Speaker 4 (20:52):
And you know, I was at Microsoft recently. Just Microsoft, just one company, they're dealing with seven thousand password attacks a second, and that translates into six hundred million a day, over two hundred billion a year. Now compound that across different companies, and, you know, we go beyond the cognitive limits of these numbers. But this is the interesting
(21:17):
paradox again. As AI gets more and more advanced, exponential speed and scale, so do deepfakes,
Speaker 3 (21:24):
so does cybercrime, so do data breaches. They're in parallel together.
Speaker 2 (21:30):
That's wild, I have to say. I've kind of touched upon it already, but what's going to separate, what's going to distinguish, the technology leaders or organizations who can drive genuine, enduring change from those who just react to it and dole it out?
Speaker 5 (21:50):
You know, take it on the chin.
Speaker 3 (21:52):
I think what you said there was exactly the right word.
Don't react, but respond.
Speaker 4 (21:57):
And it's about what I call the pros: be proactive, prosilient, pro-
Speaker 3 (22:02):
experimentation, pro-growth.
Speaker 4 (22:05):
So it's really the pros. And I spoke about this dilemma that most companies are trapped between the panic zone and the complacency zone. Recognize where you are and get into the learning and growth zone. That's the first call to action. The second call to action would be: be aware of the rubber band effect. Right now, we're all growing,
(22:26):
we're stretched, we've got aspirations, but our rubber bands snap back into place very easily.
Speaker 3 (22:33):
As Tim said right at the beginning, you
Speaker 4 (22:35):
know, we're kind of prisoners of orthodoxy.
Speaker 3 (22:39):
We're prisoners of a ton of years of doing things
Speaker 4 (22:43):
the always-done ways. And you can't achieve the full potential of the technology revolution without a leadership revolution. In fact, AI is not just a technology challenge; that's one of the biggest mistakes we can make. It's a leadership challenge.
Speaker 2 (22:58):
What about beneath the leader, below the leadership level, Terence? What about all of us taken together? What's the most important mindset shift that you think is most urgently required on a mass scale as we all seek to successfully navigate this fast-changing world?
Speaker 4 (23:17):
There's a lot of fear out there. We've got fear of missing out, FOMO. We've got fear of becoming obsolete, FOBO. We've got fear of messing up, FOMU. Apart from that, we've got burnout, record levels of burnout, costing a trillion dollars a year.
Speaker 3 (23:33):
We've also got boreout.
Speaker 4 (23:35):
I was at an organization in Paris recently where employees successfully sued their company for boreout, cognitive underload.
Speaker 3 (23:42):
Watch out for
Speaker 4 (23:43):
the rise of boreout as more and more parts of our roles become automated. Boreout, cognitive underload, being bored at work, is going to become a real thing. And let's make sure we are rebundling roles in a way that creates new challenges, new meaning,
Speaker 3 (24:02):
new purpose.
Speaker 4 (24:03):
So, you know, a couple of days ago Gallup released its latest annual engagement survey, and the scores are going down again. I think it's at record levels now, in the wrong direction. And my point is we can't achieve these incredible upsides of disruption without motivated workforces, resilient workforces, workforces that have hope. And look,
(24:28):
as somebody who comes from Stratford-upon-Avon, it's important today to introduce a new word into the conversation, an ancient English word that hasn't been used for a long time, but I'm going to share it with you for the first time today.
Speaker 3 (24:39):
It's the opposite of despair.
Speaker 4 (24:41):
It's called respair, and respair is hope in the future, fresh optimism, fresh hope. Respair. And we need an agenda of respair now, in twenty twenty-five, because despite all this disruption around us, there's an upside.
Speaker 2 (25:01):
You have a remarkable repertoire of neologisms and brand-new acronyms, but you speak a lot of sense, my friend, and it's been a great pleasure to have you on. Very stimulating. Where can people find
Speaker 3 (25:15):
Any upcoming or very recent books you want to plug?
Speaker 2 (25:18):
Where can people find you and contact you?
Speaker 4 (25:20):
Well, first of all, thanks for the opportunity to have a great double-espresso conversation. And it's refreshing to not just talk about AI but also some of the risks around it. I think it's those with the vision and responsibility, and those that have ethical leadership as well, that will really harness the upside. First of all,
(25:42):
my new book, The Upside of Disruption, is worth checking out. And number two, my website, terencemauri dot com.
Speaker 3 (25:49):
I use it.
Speaker 4 (25:50):
I'm on LinkedIn all the time, so join my community,
and I'm often speaking around the world.
Speaker 3 (25:56):
Tomorrow I fly to Stockholm.
Speaker 4 (25:58):
to speak to two thousand business leaders from around the world, all about the future-readiness muscle. The following week, Barloworld in Johannesburg, Sandton City. So I'm always traveling every week to different parts of the world. I'll be in Boston soon as well, at MIT, so catch me at a conference as well.
Speaker 2 (26:16):
Beautiful. And I'll correct myself: I think I said neologism; neologisms is what I meant. I'm sorry, I apologize. I was going to clear it up earlier, but I thought I'd let it slide with that malapropism. Why not?
Speaker 5 (26:30):
Why not?
Speaker 2 (26:30):
One last nod to Shakespeare. Terence, it's been a great pleasure. Maybe see you in Boston, and see you again soon.
Speaker 1 (26:38):
To make sure that you never miss an episode, subscribe to the show in Apple Podcasts, Spotify, or your favorite podcast player. And if you're listening on Apple Podcasts, make sure to leave a rating for the show. Just tap the number of stars you think the podcast deserves. If you'd like to learn more about how Nexthink can help you improve your digital employee experience, head over to
(26:58):
nexthink dot com. Thank you so much for listening. Until next time.