Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Scott Allender (00:26):
Hi folks, welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Scott Allender
Arjun Sahdev (00:35):
and I'm Arjun Sahdev.
Scott Allender (00:37):
I'm delighted to be hosting this with you today. Arjun, how are you feeling?
Arjun Sahdev (00:41):
Scott, I'm feeling fantastic. I am. I'm actually missing being on vacation. I've just come back from Turkey, so I'm feeling refreshed. I've come back to a cold, wintry London, feeling like I'm missing a bit of sun, but feeling refreshed, feeling positive, really excited to be
(01:03):
talking to yourself and to Atif today.
Scott Allender (01:06):
Excellent. Well, I am feeling the beauty of autumn right now. It's my favourite time of the year, so I am feeling energised, and like you, I've been excited about this conversation. So today we are joined by Atif Rafiq, and he is the former C-suite operator who led large scale digital
(01:27):
change at McDonald's, Volvo and MGM Resorts. And today he's the CEO and co-founder of Ritual, a workflow AI product focused on upstream problem solving. Atif, welcome to The Evolving Leader.
Atif Rafiq (01:40):
Thank you so much, Scott and Arjun, it's great to be here with you.
Scott Allender (01:43):
So you were officially recognised as the first ever Chief Digital Officer in Fortune 500 history. That's a big deal. So what lies behind this? How are you different to other corporate tech leaders at the time? Give us your story.
Atif Rafiq (01:57):
Yeah. So let's go back to 2013. Mark Andreessen, the venture capitalist, coined the phrase, software will eat the world. It actually did. It still is doing that.
Unknown (03:44):
the mistakes they're making when it comes to decision making in ambiguity, the number one pattern is around alignment. Premature alignment is the number one symptom of the problem, or actually, it's the root cause of the problem. And what I mean by that specifically is,
Atif Rafiq (04:05):
you know, suspending, like, the discovery and exploration process. So people might ask, like, why do you ask so many questions in a meeting? You're an executive. You should know what you want from us. And I do know what I want, but I would like to
Unknown (04:22):
really fill my knowledge gaps. I would like to hear the unknowns that are not in my head that people see. I think of people almost like sensors in a self-driving car. They see different things than what I see. I'd like to get all those cards on the table and for us to see the same picture, so that we could draw the right conclusions together.
(04:43):
So I came up with a mantra, which is exploration before alignment, and that really corrects this human factor pitfall in organisations that we see, which is alignment before exploration.
Scott Allender (04:57):
Is that the idea of going upstream, essentially? So you know, many leaders try to build speed by pushing work downstream, but as I mentioned in the opening introduction, you suggest that real speed comes from going upstream. So can we hear a little bit more about that?
Unknown (05:13):
Sure. Well, I mean, execution does matter. And I think this idea was really put in the centre of, you know, leadership by Jack Welch, right? So now we're talking about the 1990s or something like that, and it has persisted for a good amount of time, probably for about 25 years, till this sort of Mark Andreessen phase of software eating the
(05:37):
world. And then everything changed and became, you know, less stable, basically. And so then the world became more complex. So upstream work is essentially a phrase I coined to basically go from, like, you know, very high level
(05:59):
altitude, 90,000 foot, big ideas or big problems to solve, and then how to purposefully learn. Basically learn in a very purposeful way so that you have a high quality, essentially, knowledge base to then come up with recommendations. I have found that in organisations, there's no method to the madness for this, like, how do you go
(06:20):
from the promising idea or the big problem to solve to a strong basis for drawing conclusions, making recommendations, aligning people. We all want those things. I'm not saying don't align and whatnot. But I'm just saying that to get to the point of alignment, there is a way of working to do this high quality
(06:41):
learning so that you can explain the direction that you're proposing to take with any kind of big idea or initiative.
Scott Allender (06:53):
So I can't have a conversation about decision making without talking about the role of self awareness and emotions that factor into our decision making. I've had several conversations about this, and my mind always goes to the same place, which is: we all think we're good decision makers, right? We don't intentionally make bad decisions. But, you know, the stock market does better when
(07:13):
it's sunny outside, right? There's no rationale for it, but people make decisions based on mood and all of that. So I'm curious, with your methodology
Unknown (07:23):
and the sprints that you're running and going upstream, what are the guardrails or some thoughts you have around taking out bias? I think my core idea is actually that the system sits above human factors. And a lot of what you're talking about, Scott, are human factors. And the number one way to avoid the pitfall of human factors, by the way, humans are not bad, it's just they have upside and downside.
(07:46):
And the human factors that are downside are things like bias, or just being too strongly wedded to your ideas for whatever reason. And so when we look at questions, the part, or beat, of decision sprint is essentially building a question list and the right breadth and depth of questions. When we take that as a step in our process, we
(08:07):
make this objective and neutral. So that's very different than either extreme, because questions that are used to be sceptical about the promise of an idea are very different than neutral questions, and then ignoring critical questions just because you want to move on and
(08:29):
start doing the thing is the other extreme. But when we're centred and grounded with the work we must do in problem solving, which is essentially the knowledge work we do in companies, in the form of, you know, using the idea of like a sprint, where in decision sprint there is a
(08:50):
discrete step for sourcing questions and building the right breadth and depth of questions, then we made that a neutral, objective step, which then kind of cancelled out the human factors and bias, at least for that part.
Scott Allender (09:06):
I think that's critical. Can we talk about how AI is changing decision making? I'd love to get your thoughts on this, because I can already see some signs of people using AI in ways that actually only reinforce maybe some bad decisions. So maybe starting at the top of the org, how can we be thinking about using AI as a partner to help us do all the methodologies that you've
(09:28):
outlined for us today? Yeah, I think we're in the era where human-machine interaction will totally redefine how we run companies, and we're not there yet. I mean, I think a lot of AI adoption is quite tactical, quite operational, quite into
Unknown (09:51):
well defined workflows, something that's back office, customer service, customer operations, right? But it can absolutely redefine how we do strategy from the top of the organisation. So an example of that would be: what are your strategic pillars? How do you break those big rocks into smaller rocks? If you were going to do problem
(10:14):
solving on the smaller rocks, are you essentially using AI and human collaboration to define the problems crisply, surface the right unknowns, build these question lists? Or how are people contributing? That's going to change as well, because we will look at contribution in very different ways in five years. You know, today it's very loose, like, oh, what was someone's
(10:38):
contribution? But in the future, adding a good question to the mix is going to be a really great contribution. You'll get recognised for that, because machines will be able to connect the dots between outcomes and projects that win and succeed, and why they succeeded, and often we'll be able to trace it
(10:59):
back to who contributed great questions that helped us stand, you know, on stronger footing when we came out the other side with specific recommendations, specific decisions. We'll be able to connect decisions with the inputs all the way from the start, and this will redefine everything. So, in your
(11:22):
current capacity as a board advisor, as a CEO and co-founder, what skills do you think are becoming non-negotiable, things like, you know, brilliant questions or assumption busting? What skills are becoming non-negotiable for future leaders in
(11:46):
an AI world? It's definitely problem solving and critical thinking, and those are the heart of any knowledge work. And what does that look like day to day? It can be everything from, you know, things like asking questions or, you know, drawing conclusions and coming up with recommendations that aren't obvious. You know, a lot of
(12:09):
times when we have to make decisions, it's not one decision, right? Like there's a big idea for McDonald's and Volvo. It's not one big call. There are very layered things. And so you can elevate a
(12:29):
recommendation by adding something to it, for example. But this is, I think, the heart of the skill sets that matter now, which is basically: how do you improve the decision confidence of your organisation? There's a lot of ways to do that, but whatever you can do to contribute to decision confidence, or to increasing the confidence level of a decision within your organisation, that
(12:53):
is the skill set of the future, and it does come back to critical thinking along the way.
Scott Allender (13:00):
Maybe some final thoughts? What do you want to leave our listeners with, something to take with them at the conclusion of our conversation?
Unknown (13:08):
I think that, you know, I always ask this existential question: does knowledge work mean the knowledge worker? Those are two different things. Companies will always need knowledge work, right? Because if you're Amazon or Google, you want to be in four more businesses that you're not in today, and you're going to need the knowledge to get into those markets. The knowledge worker, however, is a different thing. It's human. And a lot of people think
(13:31):
that we won't need the knowledge worker and we'll be able to do amazing knowledge work. And my response to that is
Atif Rafiq (13:42):
that's not true.
Unknown (13:45):
I think what we're looking at is the fact that, as companies, we haven't really taught people how to do high quality knowledge work for the last three decades, mainly because, coming back to this Jack Welch era, everything was execution, very stable, and a lot of people in big companies were essentially doing knowledge administration versus knowledge creation. So I would say, to create sustainable jobs,
(14:10):
to be rewarded, you know, to feel more secure in your work, what are the things that you can do to contribute to knowledge creation? Because we will actually, obviously, have AI at the centre of that. But for human contribution in an era of thinking machines, I believe there is a way forward, you know, centred on some of
(14:31):
those ideas. Decision sprint is just one way for humans to make that contribution. I'm sure there are a lot more.
Scott Allender (14:38):
Atif, thank you for sharing your insights. We'll put all the links to you and to your website, to your book, all of that will be in the show notes. And we know our listeners have appreciated your wisdom. So thank you. Much appreciated. Until next time, folks, remember: the world is evolving. Are you?