Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Most executives see AI as just automation.
That's wrong and explains why their AI strategies fail.
Today on CXOTalk #900, Sangeet Paul Choudary, advisor to over
40 Fortune 500 CEOs and author of the book Reshuffle, explains
(00:22):
why AI is actually coordination infrastructure and what that
means for your organization. I'm Michael Krigsman.
Let's get into it. There's a lot of hype about AI
at the moment, and there are some fairly, you know,
consistent patterns that we see in how strategy and systems
(00:44):
change when new technologies come in.
And when we're in the midst of hype, we sort of miss that whole, you know, understanding of exactly how some of these things will play out. So the reason I wrote Reshuffle was to kind of extract us away from how we think about AI in terms of the hype today, but really to think about how enduring change happens because of new technologies.
(01:04):
And that's really what inspired me to kind of bring this book
together. Your book describes a different
paradigm or a different way of looking at AI, so tell us about
that. I work with a lot of executives,
and when they say that they're implementing AI or they're
(01:28):
thinking about AI, they typically mean that they're
automating tasks. They're automating workflows in
their organization, they're deploying technology towards
automation. And my key argument in the book
is that the real value of AI plays out not through automating
tasks in the system that you're playing in today, not through
(01:49):
just automating what already exists, making it cheaper,
better, faster, but really reimagining your business around
the capabilities of AI. Because the real advantage or
the real impact of any technology plays out when
economic activity gets reorganized based on the new
capabilities of the technology that is made available.
So even when we say, you know, when executives today say we
(02:11):
need an AI strategy, when I ask them about it, very often what
they mean is we want to figure out what should we do with AI?
And that sort of traps them within thinking about, here's
our business. How do we apply AI into it to
get things cheaper, better, faster.
But that's not really what strategy is about.
Strategy is fundamentally about answering two questions.
(02:31):
Where do we play? How do we win?
And so when we put AI into the mix, the questions we should be thinking about are: in a world where AI comes in, how does that change our playing field? How does that change, you know, who we compete with, how we compete, how we create value, how we capture value?
(02:52):
And then how does it change advantage within that playing field? How does it change how we win? So it's really about reimagining advantage and your business given that the world is changing, rather than thinking about how to apply AI within the confines and assumptions of your current business. OK.
So take that down another level for us because you're describing
(03:16):
the limitations of viewing AI as a technology that can make things essentially better, faster, and cheaper. And I think that's how most of us view technology in general. So you need to unpack this. In my book Reshuffle, I make a
distinction between thinking about the impact of technology
(03:37):
at the level of tasks and activities, which is, you know,
you do a certain set of activities and run a certain set
of workflows to create value as a business.
You're bringing AI in to speed those things up.
So that's the automation view of thinking about how you think
about AI or technology in general.
But if we look at, you know, previous technology waves, and I
(03:57):
take the example of the shipping container. I use that example
because today we are very obsessed with whether AI is good
enough or not, whether AI is intelligent enough or not.
My point is that even with its current capabilities, AI is
massively underutilized because we are still thinking about
speeding up things in the existing system and that
(04:20):
prevents us from seeing what's possible around the capabilities
of AI. So if I very quickly just take
us through, you know, what happened with the shipping container, because it's an interesting parable to
understand what's possible with AI today.
When the shipping container was introduced, its first-order effects were seen as automation, because prior to that we used to
(04:42):
live in a world of break-bulk cargo. So you needed dock workers to go up and down a ship, moving cargo off and on a ship. And that lack of standardization made port operations slow.
So when the shipping container was invented, people assumed
that the port would get automated because cranes could
(05:02):
move, you know, cargo on and off a ship.
So the first order effects were certainly that of automating the
task and making it better, faster, cheaper.
And the first order impact of that was in the loss of the jobs
of the dock workers. And that's where we are, you know, if you think about AI today: we're always thinking about what can AI do that a human is currently doing?
(05:23):
And hence those jobs are going to get lost.
And what are we doing right now which we could do better, faster, cheaper. But the parable, the story of the shipping container, really plays out through the second- and third-order effects.
Because what happened next was that trucks, trains and ships
agreed on a common standard for the shipping container.
And that totally unlocked logistics at a global scale,
(05:46):
because now you could move shipment from source to
destination completely seamlessly.
And that made logistics reliable.
And because shipping now became reliable from source to
destination, the logic of manufacturing changed. Because the previous assumption of manufacturing, and I'll come back to this, you know, point about assumptions, because we had to
(06:08):
rethink our assumptions with AI. The previous assumption of manufacturing was that shipping and movement of goods is unreliable. So you want to either keep manufacturing internally vertically integrated or you want to keep your suppliers co-located.
But once shipping became reliable, suppliers could be
anywhere in the world. And so global supply chains came up, and the jobs created and lost were not because of what happened at
(06:31):
the port, but what happened because of these global supply
chains. Manufacturing moved from vertically integrated models to component-based manufacturing. So new jobs were created because of that component-based manufacturing and the competition that emerged, and eventually even countries rose and fell based on
how they plugged into this new system of global supply chains.
(06:52):
So my point is, a lot of our knowledge work today is
structured around certain assumptions of scarcity of
performing knowledge work. And just like there was an
unreliability of shipping, or the cost of shipping was very high, the cost of performing knowledge work has traditionally been very high. But with AI, certain forms of
knowledge work, not all, but certain forms of knowledge work,
(07:15):
the cost of performing them dramatically collapses.
For example, the cost and speed of translating a document was
very high just a few years back, but today it's completely
collapsed. So when that happens, the
assumptions on which your business is structured
fundamentally change. And with that, you have to
reimagine your business. You have to reimagine what kind
of competition will come in. You have to reimagine what's the
(07:37):
basis of advantage. And that's my key point, that
unless we think in those terms, we're missing out on the real
potential of AI. We have a couple of interesting
questions that have started to come in relating to these issues
that you've just been describing.
So let's jump to those and I just want to tell everybody that
(07:58):
you can and you should ask questions.
If you're watching on LinkedIn, pop your question into the
LinkedIn comments. If you're watching anywhere
else, go to X/Twitter and use the hashtag CXOTalk.
I urge you take advantage of this.
When else can you ask Sangeet Paul Choudary pretty much
(08:20):
whatever you want. So take advantage of it.
So to begin, on LinkedIn, Huawei Yi Lee says that when it
comes to doing what you're describing with the least impact
on structural unemployment, she believes that the right thing to
(08:42):
do is not teach employees how to use Gen AI for their existing roles, but to imagine how human-agent teams will be structured and where employees will fit into this new framework, and make these teams part of the test-and-learn
(09:05):
process. How far off is she?
She's asking. Learning how to use AI is important, but it's table stakes. It's not going to give you an advantage. It's just going to help you run. It's not going to tell you where to run. And so you're not going to get anywhere just by learning how to use AI.
So this whole, you know, adage of AI won't take your job, but
(09:28):
someone using AI will, is grossly incorrect. Because it sort of forces the listener to assume that adoption of AI, learning how to use Gen AI, is sufficient.
The key, I would say there are two or three things that are
really important. The first thing that's important
is that whether you're an individual thinking about your
job or career, whether you're a team leader, thinking about your
(09:50):
team, whether you're CEO, thinking about your company, you
have to think through what is the state of the playing field
going to be out there a year from now, six months from now,
whatever time frame you feel comfortable with. It can't be five years because the rate of change of things is too fast. So you have to, let's say, take
a year from now, what's the state of the playing field going
(10:12):
to be, and, you know, given those assumptions, what's going to give you advantage? So you have to use some level of foresight, even as an individual planning your career; that's the only way you can have agency in a time of rapid change. So the first thing is apply some
foresight. Think about what you know the
future map is going to look like and where you're going to play
(10:34):
over there, how you're going to win, make some bets and then
start executing, start moving in that direction.
And as you move in that direction, you'll learn what's
working, what's not working, which of those bets made sense or did not. And maybe the models will
improve and some of those bets no longer hold.
So keep updating your bets and your map on the basis of that.
But if you don't have a vision, if you don't have some stake in
(10:57):
the ground, that this is what my game is going to look like, whether I'm an individual or a team member, and this is how my team will fit in the organization at that point. Unless you have that stake in the ground, you don't have any thesis that you're testing and updating and validating as you move forward. So that's the first thing I
would say. The second thing that's
important is that we very often think about this duality of
(11:22):
automation and augmentation, and we feel that automation is a bad
thing for humans. Augmentation is good because
it's helping you get better. But there's a fallacy over there, because augmentation is not inherently good. Automation means the human effort gets substituted on a task. Augmentation means that it gets
complemented. Now, complementarity does not
(11:44):
guarantee that you will continue to retain your salary and your
agency associated with that work.
It just means that if AI complements humans, we will
emerge in a system where the new division of labour is made
between AI and humans, so that AI gets what is best done by AI
(12:04):
and humans continue to do what they can best do.
That's the only thing that it means.
It just means that we're going to have new workflows, new
division of labour where the two work together.
It does not say anything about whether you will be able to
retain your ability to capture value and command a premium for
your skill with that work. In many cases you may not, and
(12:24):
it absolutely doesn't say anything about whether you will
have the agency that you enjoy today.
So the second thing that I would say is that don't assume that
just because in your job you're starting to see higher
productivity because you're using AI, that is always going
to be a good thing three, six, nine months from now.
You have to, you know, always be aware of the fact that there's a
(12:46):
new division of labour that's going to come.
And that's the whole idea of a reshuffle in this instance. And so the final thing that I'll
just point out over here is that somewhere in the question there
was this point about, you know, should we be looking for
guidance and direction from how the organization is thinking
about implementing this new division of labour etcetera.
(13:08):
I think there's never been a more important time than today to have agency over our career choices or how we want to move forward. And I think we're at the start of a complete reshuffle in how we think about our careers.
We're at a point where what we are learning through the
traditional models of learning is getting decoupled from what's
(13:30):
being rewarded. And that's only going to, you
know, I won't say accelerate, but it's going to get
increasingly decoupled and increasingly thrown into disarray. So if you don't have agency with
the first two things that I mentioned, relying on your
organization or some other structure to tell you where you
fit in is not going to be a winning solution.
(13:53):
How do you recommend that business and technology leaders
develop that broader perspective that you're describing and that
vision when they don't know what's coming down the Pike?
And we have a question relating to this from Twitter.
And this is from Anthony Scriffignano, who's been a guest
(14:16):
on. He's a great data scientist. He's been a guest on CXOTalk a number of times and he says
this. What are the blind spots that
you see in the more mature organizations that lead as
you're suggesting? The two questions are related,
but they're slightly distinct. So I'll just talk about the
(14:37):
first thing, you know, how do you think about these issues? So there are certain clear ways and clear structures on how you think about it, which I explained as the framework for
how you see the reshuffle. So the first thing is that
technology always moves value. Value does not stay exactly
where it was today. So the first thing that you want
(14:58):
to think about is what are the assumptions on which my work and
my business is based, and what are those assumptions associated with value. So typically when I say assumptions associated with value, it's about what is the
assumption about how I differentiate myself versus
competitors and what's the assumption about what helps me
capture that premium against my competitors.
(15:20):
So a lot of knowledge work, as I mentioned, was structured on the assumption that access to talent is expensive, and hence access to performance of knowledge work is expensive.
But if that performance of knowledge work becomes cheap,
then suddenly that assumption changes.
A second set of assumptions related to this is that, you
know, traditionally knowledge was very siloed in the sense
(15:44):
that if you had really good knowledge about chemistry, you
would have that trapped inside certain industries.
If you had really good knowledge about energy cells, you'd have
trapped that inside certain industries.
But with AI coming in, it's possible to now package that
knowledge and make it available beyond industry boundaries.
And that has implications both for firms and individuals
(16:07):
because as individuals, we were siloed to the training and
career path that we had. And today we could potentially
access knowledge that we would never have had access to.
If we had to access it, we would have had to hire a team. And that was prohibitively expensive for most individuals. But now you can have, you know,
a team of agents working on your behalf, getting that knowledge
(16:27):
for you so that you can then figure out how to apply it to your
work. So the fact, or the assumption, that knowledge is tied to a silo starts going away as well.
So when these assumptions change, my key point is, and
it's not just these two assumptions, they're
illustrative, but when these assumptions change, alongside that, what happens is that value used to be tied to these
(16:50):
assumptions. So when the assumptions change,
value moves somewhere else. When value moves somewhere else,
you have to think about, you know, what's going to be newly
valuable. And hence in order to capture
that, what should I be doing? Should I be changing what I do
completely and moving to a different part of the value
chain in order to go there? You know, how do I differentiate
myself if I've never played in that part
(17:13):
of the value chain before? So these are the things that,
you know, you need to think about.
I'll just give a very simple illustration from a previous
technological shift. So when digital cameras came,
the cost of taking photos went down to 0.
And when the cost of taking photos went down to 0 and you
were no longer restricted by the capacity of the film, the value
(17:34):
of storage initially, and storing and tagging your photos,
became more valuable. So you know, Flickr had a good
valuation, then sharing of photos became more valuable.
So Instagram got a good valuation.
But my point is that it was not that Kodak did not become
digital. Kodak was the number one digital
camera manufacturer and seller in the US in 2006 and then went
(17:54):
bankrupt in 2011. But that was because value was
no longer in digital cameras or in selling cameras, and neither
was it in managing and printing photos.
It was in sharing photos. And so Kodak saw that value is
moving, but it couldn't do anything about it.
On the other hand, Fujifilm saw that value is moving and it
realized that its business had always been about chemistry,
(18:17):
about chemicals and photos just happened to be a way to make
money with those chemicals. So then it took the same
capabilities and moved to other playing fields.
It took the same capabilities, moved into pharmaceuticals,
moved into cosmetics, and essentially used its
capabilities in chemistry to start making money over there
and survived that whole shift very well.
(18:39):
So that's my key point that you need to think about the new
playing field. You need to think about whether
you'll go for new capabilities or leverage your capabilities to
another playing field. You need to think about these
aspects, whether as an individual or even as somebody
managing your own career. So this is from Arsalan Khan.
He says AI requires imagination and reimagination of the entire
(19:01):
organization. And here's the crux.
If a CXO reached the C-Suite without AI, why should they
care? Are there organizational existence issues? But CXOs don't last in their position that
long? It's a somewhat loaded question
in the sense that the term CXO is
(19:23):
very broad. So in certain functions, maybe
if you're a CFO you can afford to not get affected by it a little longer than if you are, say, you know, the CEO, for instance, who's responsible for driving competitiveness and
growth and value for the organization.
But that aside, even as a CXO, if you want to continue
(19:45):
staying relevant, it's not whether your organization is
taking a certain posture to AI or not.
The entire playing field is shifting.
So within four to five years, with any technological shift, but more so with AI, because the rate of change and the impact are more widespread than in a lot of
(20:08):
previous technological shifts, the playing field constantly changes. When the playing field changes, the way companies compete changes.
When the way companies compete changes, the capabilities they
value change. And so the point is not that you
should be learning how to use AI.
Maybe you should, maybe you shouldn't.
That should be subservient to the question of which
(20:28):
capabilities are going to be valued going forward.
And will I continue to have those capabilities, the
capabilities I have today, will they still be valued when the
way companies compete in my industry will fundamentally
change? So whether you're a CXO or not,
the value of experience as crystallized knowledge will
always stay. But the value of experience as
(20:50):
the ability to just understand a lot of, you know, just domain knowledge, is going to collapse dramatically. Instead, you'll need to
have the ability to complement your crystallized knowledge with
constantly learning new domain knowledge based on how the
playing field is changing. So if you're a CXO and you're
thinking of retirement, maybe that's different.
(21:11):
But if you're a CXO and you want to continue working somewhere
else in order to be relevant, your capabilities should still
be valued. And so it's less about learning AI and becoming AI literate. It's more about building a very clear lens of what's going to be
valuable. How is the, you know, changing playing field going to change my job or my role
(21:35):
or, you know, the capabilities that I have?
Will they increase in value or decrease in value?
Those are things that you need to think about.
So if I can paraphrase what you just said, if a company or an
individual does not expand their skills and broaden their
perspective on how AI will affect the roles, company,
(21:58):
markets and so forth, then they're basically screwed.
I would say that that's pretty much on point.
It's less about the skills, it's more about the perspective.
If you have the perspective, you'll know which skills to move
after, but you need to have that perspective.
And this is from Maya Cunningham on LinkedIn, and she has a short
question. So my hat's off to Maya
(22:19):
Cunningham. By the way, folks, now would be
an excellent time to subscribe to the CXOTalk
newsletter. Go to cxotalk.com right now,
subscribe because we have incredible shows like this and
we'll notify you of what's coming up.
And on Monday you can go to the site and see the
(22:43):
edited version of this conversation.
So it's being recorded. So subscribe to our newsletter.
Do that now, please. All right, Maya Cunningham says,
what is one early sign that an organization is truly shifting from task automation to institutional rearchitecture
(23:04):
with AI? And I would also add to that,
what are signs that an organization is not making that
shift? There are some leading
indicators in terms of the types of directives that are being
passed along internally. So let's take a simple example.
You know, AI can be used to accomplish work faster, better,
cheaper, yes. But AI can be used to redesign
(23:27):
work as well, reorganize work as well.
So even before we talk about institutional redesign, think
about the average team inside an organization.
Are they thinking about how to use AI to speed up tasks?
Are they mapping out their tasks on a whiteboard and saying,
here's how our tasks flow and here's where we can use AI and
move things faster. And instead of 10 days, we'll do
(23:48):
it in three days? Or are they looking at their
flow of tasks and asking themselves why does that flow
exist? Because a workflow looks like a
sequence of tasks, but a workflow is structured around a
constraint in the system. A workflow is set up to solve a
problem for the larger organization.
So if you get distracted by looking at the tasks, instead of
(24:10):
asking what was the problem in the organization this workflow was set up to resolve, and does that problem still exist now that AI comes in. That's the way the team should be thinking about redesigning their work: they should be reimagining the workflow by asking first whether that constraint still exists.
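This task-first versus constraint-first distinction can be sketched in code. A toy model, with invented workflow names, constraints, and numbers purely for illustration:

```python
# Toy model: a workflow exists to solve a constraint. Two responses to AI:
# (a) task-first: automate tasks and speed up the existing flow, or
# (b) constraint-first: ask whether the constraint still exists at all.

workflows = [
    # (name, constraint it was built around, task days, constraint still binding?)
    ("manual_translation_review", "translation is slow and expensive", 10, False),
    ("expert_contract_triage",    "legal review capacity is scarce",   6,  True),
]

def task_first(days, speedup=3):
    """Automation view: same workflow, each task just runs faster."""
    return days / speedup

def constraint_first(name, constraint, days, still_binding):
    """Redesign view: question the constraint before touching the tasks."""
    if not still_binding:
        return f"{name}: constraint '{constraint}' is gone -> reimagine or retire the workflow"
    return f"{name}: constraint still binds -> automate, {days} days becomes {task_first(days):.1f}"

for w in workflows:
    print(constraint_first(*w))
```

The point of the sketch is only the branching: a team that never asks `still_binding` keeps optimizing a workflow whose reason to exist may have disappeared.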
So that's a leading indicator in terms of, if you're looking at a
team and organization, how they're thinking about
(24:31):
redesigning their work is, you know, one of the leading
indicators of whether the organization is taking a task
first approach or an institutional redesign approach.
I think the second piece is in terms of how the organization
communicates internally, at every level, how to think about AI.
If the narratives are, we need X percent AI adoption by every team
(24:57):
that does this and Y percent by every team that does that, then you know, they're probably driving a
certain mandate which will help with AI adoption, but they've
not really thought through why. And if instead the CEO has
made it a point to reimagine the future posture of the business and, given that, has asked his direct reports to think
(25:20):
about what that means for their business units.
And they have, in turn, asked their teams to think about what
that means for their team. You know, it has to be
bidirectional, and, you know, you need to have a
constantly learning organization.
It cannot be top down. It cannot be localized.
If everybody's constantly thinking about, given everything that's happening in the industry, what does that mean
(25:41):
for my team a year from now? Will my team still have the power it does in the organizational structure?
Or does it need to rethink what it does?
If they're thinking in that way, if they're thinking with foresight, future-first, and working back from it, then you know they are on the path of institutional
redesign. Kristen Wilson on Twitter asks
(26:02):
another short question. So thank you to Kristen for
this and a really interesting one.
What can legacy companies learn from startups and vice versa
when it comes to AI strategy? Let me talk about what start-ups
can learn from legacy companies. And what they can learn from
(26:23):
legacy companies is really understanding why legacy
companies continue to win and continue to hold advantage when
new technological shifts come. And the reason they typically do
is because they have existing sources of advantage and they
bundle the new technology or, you know, whatever new value
they're creating with that. So a very simple way to think
(26:43):
about it is what happened with the Internet of Things, when, you know, the Internet of Things, or IoT, first happened.
There were a lot of start-ups that came out and that came into
IoT with the consumer Internet mindset, which was that let's
build a user base and we'll figure out how to make money.
And while that works in consumer Internet, in IoT it doesn't work
(27:04):
because what they were doing was they were embedding a chip
inside a consumer device, a physical device, and giving away
the device below cost, hoping that they would make money with
data eventually. But it was a dramatically loss-making model because every unit, you know, device shipped out was
going out at a loss. What the incumbents did instead
(27:25):
was they made the device more valuable by giving the service
and data away for free. Any analytics around that
data, any, you know, additional personalization and services and
remote management. All of those things were bundled
with the device. So the device became more
valuable and the services got commoditized.
So one of the things that start-ups should think about
(27:46):
when looking at incumbents is, what's the logic around which
the incumbents in my industry run their business?
And does the technological shift reinforce that logic or does it
dismantle that logic? So that's the first thing you
should be thinking about. Pretty much the opposite thing
happened when we moved from, you know, on-premise software to SaaS, where the incumbents' logic was dismantled because it was
(28:10):
structured on a fundamentally different, you know, financial
structure of upfront revenue collection, etcetera.
And it was structured on a very different technological
architecture. Now, what can incumbents learn
from start-ups? I think one of the key things
that incumbents can learn from start-ups with any technological
shift, but more so with AI, is that the real opportunity with
(28:33):
the technological shift is to ask what is fundamentally
changing about the way businesses have been built so
far? What is the, you know,
architectural shift that's happening in the industry with
this new technology. So a very simple example to
illustrate that is that in the mid-2010s, Facebook,
(28:55):
Instagram, and YouTube were declared the winners of social networking, and all of them were structured on a simple
logic of the social graph. You needed to follow people in
order to get your feed populated.
And TikTok came along and three things happened around that
time. You know, TikTok was mobile
first, so was Instagram, but then in addition to that, with
4G connectivity, TikTok was also streaming videos, and AI had
(29:19):
improved. And so, combining the three things, mobile, video, and AI, TikTok made the video duration less than one minute. And with that, it was able to
capture much more data with people scrolling videos on their
mobile, and just using that data, it was able to create an
interest graph or a behaviour graph through which it was able
to populate feeds. And that's why it was able to
(29:41):
build a social network without ever having to build a social
graph to start with. So that's the idea.
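The interest-graph idea described here can be sketched without any follow graph at all: rank candidate videos purely from accumulated engagement signals. A minimal sketch, with made-up users, topics, and watch fractions:

```python
from collections import defaultdict

# Engagement events: (user, video_topic, fraction_watched).
# Short videos mean many events per session, so these signals accumulate fast.
events = [
    ("ana", "cooking", 0.9), ("ana", "cooking", 0.8), ("ana", "travel", 0.2),
    ("ben", "travel",  0.95), ("ben", "cooking", 0.1),
]

def interest_graph(events):
    """Aggregate watch behaviour into per-user topic affinities (no social graph)."""
    g = defaultdict(lambda: defaultdict(float))
    for user, topic, frac in events:
        g[user][topic] += frac
    return g

def rank_feed(user, candidates, graph):
    """Order candidate videos by the user's learned topic affinity."""
    return sorted(candidates, key=lambda c: graph[user].get(c["topic"], 0.0), reverse=True)

graph = interest_graph(events)
candidates = [{"id": 1, "topic": "travel"}, {"id": 2, "topic": "cooking"}]
print(rank_feed("ana", candidates, graph))   # cooking ranks first for ana
print(rank_feed("ben", candidates, graph))   # travel ranks first for ben
```

The feed is personalized entirely from behaviour, which is the structural difference from a follow-based social graph: no "who you follow" table appears anywhere in the sketch.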
You know, when a new technology comes in, it provides you the
ability to completely challenge the dominant assumption on which
the incumbents were working. So that's what I would say, you
know, the two sides should learn from each other.
Such an interesting point because also one of the things
(30:02):
that TikTok did is it shifted popularity from the social graph to interest. With Facebook, it's your friends: if your friends are posting something, then you respond.
So if you see a short video that you're interested in, and as you
said, because they were now short videos, TikTok was able to
(30:24):
gather all of this data. And that has changed
dramatically how social media works.
That's right. We have another interesting
question. This is from Ronald Saldana on LinkedIn. And he says, and you started
addressing this, he says, how can organizations look at AI
(30:51):
holistically? In other words, OK, people are
listening. They say, fine, you know, we
need to do something. So how do you do it?
If you have to be holistic, you need to ask the what question: what is the future model going to look like with AI coming
in? So let me give a simple example
to illustrate this. We talked about TikTok, right?
(31:12):
And now what worked for TikTok was that because the video
duration was short, you could watch 10 videos in one session versus the one or two videos on YouTube that some users would have watched in one session. And that gave it a lot more data and a much wider scope of data.
Now, if you are a manufacturing company, you would say, well,
(31:34):
what do I do with this? This is TikTok.
I can't really do anything with it.
Well, what happened is that there was a manufacturing
company that decided, well, we can apply the lessons from
TikTok into manufacturing. And so you saw the rise of a
company called Shein coming out of China.
Now, when we think of Shein, we very often think of it as
(31:55):
somebody polluting the environment and, you know,
creating a lot of waste. But really what Shein did to
manufacturing was what TikTok did to social media.
So Shein, you know, traditional fashion used to work in the
model where you would have experts go to Milan and Tokyo,
sense the trends, build out a collection for the next season,
(32:16):
put out that collection and thencome back and then do the same
thing for the subsequent season.Very slow, long cycles and you
did not know what was going to succeed versus not.
So not always a very good ROI onon the entire business.
Now what Shein does is what TikTok did to social media.
It constantly collects data from the Internet, from social media,
(32:38):
from TikTok incidentally as well, and it uses all of this
data to then determine what people are interested in.
It finds micro styles, so it's again learning in small
batches. It creates these micro styles,
and based on them, it creates new designs very quickly.
It sends these designs to a network of manufacturing
partners and asks them to run very small production runs of 50
(33:02):
to 100 units. Then it takes them out, puts
them in the market, tests them, and sees if they sell.
Based on that, it either doubles down on
the design or it moves to the next design.
And all through, its algorithm is learning which designs are
working versus not, and hence improving its ability to
design. So my point is that a
manufacturing company looking at TikTok would have said, this is
(33:23):
irrelevant. Tell me something about what's
happening in manufacturing. Tell me what the biggest
manufacturing company is doing. But if you're just stuck in
that, you're not thinking about AI holistically.
So initially it's less a question of how; it's more a
question of, what does this mean for me?
You need to take off your blinders.
You need to take off the assumptions that are trapping
you and think about, you know, how does all of this apply to
(33:47):
me? What are the assumptions
that TikTok unlocked there, which I can start unlocking in
manufacturing? So I would say that's really the first
thing to do, before anything else, to take a holistic approach to
AI. We have another great question
from Greg Walters on LinkedIn. He says most advice about
implementing AI seems to be on the same talk track from 1990.
(34:10):
How should organizations implement AI today in light of
the broader set of issues that you're describing?
The challenge is that most talk about implementing AI is from
the 1990s, because a lot of the people who are implementing AI as
vendors are the people who cut their teeth implementing
automation, you know, robotic process automation and other
(34:33):
forms of automation in the past. And they bring that same
automation hat over here and sell the same thing to
clients. And that's the real problem,
because if AI is really an opportunity for system redesign,
you should be talking to fundamentally different people
who are actually systems thinkers, who are thinking about
what this means in terms of competitive forces, what it
(34:54):
means in terms of our internal capabilities, and
really rethinking on that basis. So the first thing I would say
for organizations is think hard about whether yesterday's vendor
is still the right vendor to help you tomorrow.
If you're thinking hard about whether yesterday's talent is
the right talent to help you tomorrow, and you're cutting
jobs on that basis, then the least you should be doing is ask
(35:14):
whether yesterday's vendor is the right vendor for tomorrow.
This is from Chris Peterson. He says:
how do we plan for both technology and cost changes of
AI services to reflect the provider's actual cost?
Is a one-year ROI estimate even possible?
Cost changes you probably can't figure out in a one-year ROI,
(35:35):
but system redesign and really reimagining your business
model, all of that is not a one-year ROI, and you should actually
be accounting for it separately.
It's more of an insurance cost of staying
relevant. It's more of, if we don't do
(35:55):
these other things, we may not even be in business tomorrow.
So the whole point of cutting costs won't make sense if the
game in which we are cutting costs is not being played
anymore. This time it is from Gus
Speckdash, and he says: technologies don't create value
quickly in terms of normalized
performance. Electricity took 70 years,
computers took 40, the Internet 25. How long will this be for
(36:21):
AI? The reason technologies don't
create value immediately is because technology in itself
does not create value. It's the larger socio-technical
system that creates value. What that means is every
technology needs complements, which all need to come together
before value can be created. And I'll give a very quick
example of how this plays out.
I take the example of the rise of the barcode, and Kmart and
(36:43):
Walmart adopting it, in my book. Kmart adopted it to improve
checkout speed at the register. But Walmart combined the
technology of the barcode with complements.
It had invested in the satellite system.
It had invested in its own internal supplier management
system. It had invested in EDI and
managing data across the supply chain.
(37:04):
So it used the barcode data to aggregate data
across its stores and used that aggregated data to negotiate
with suppliers, because before that, stores used to individually
negotiate with suppliers, and the suppliers, which were the brands,
had more power. Walmart was the first retailer
that flipped that power equation, because it created a new socio-
(37:25):
technical system around the
So you have to think about what's that largest system?
You can't just keep saying, here's AI.
When are we going to see the value with that AI?
You have to think about what's the largest socio technical
system that needs to be created in in in your industry for that
value to play through, just likeWalmart did.
This is from Sasan Kumbulpuri, who says: if AI
(37:46):
learns by sensing and adapting continuously, what stops our
strategies from doing the same? That's on LinkedIn. And on
Twitter, Elizabeth Shaw says: you say AI will change the
playing field, or the holistic systems in which companies act,
(38:06):
and that an AI strategy must address this.
So what kinds of changes of systems are you talking about?
So we're really getting to the heart here of the strategic
shift. Yes, AI learns.
But if your socio-technical system does not learn, the
larger system within which you're setting up AI,
(38:27):
if that does not learn, and it's only the model that's
learning, then it's not going to be valuable.
The reason, you know, the Shein model works is because it takes
a complex digital-physical system and ensures learning
happens across the board. I'll give an example that
also illustrates the second question, but illustrates the
first point again as well.
(38:48):
So think of what's happening in the materials industry today.
Materials industries are plastics, rubber, all of these different
industries. And in these industries,
traditionally what was scarce was the ability to come up with
a new formula that you could patent.
And then from that formula, you created a material which could
then be used in cars and doors, etcetera.
(39:09):
With AI, new formulas can be created much faster.
So the constraint around lab testing and all of that goes
down. But whether that formula fits
the end use case and whether it's valid and whether it's
validated, that becomes a new constraint.
And in order to solve it, labs, production lines and end
(39:30):
channels need to work together to keep testing, gathering data
from the market and keep reinforming the labs and the
production lines on that basis. Just like Shane does it it, it
learns from the market and then it, it improves its future run.
So as a whole industry, these industries will have to change
To bring back to the first question, that whole learning
cycle in and in in order to linkit back to the second question.
(39:51):
What we're seeing is the reason you need the learning cycle is
because the playing field has changed.
Creating that initial compound is not expensive anymore.
What happens when the cost of creating compounds goes down is
you see patent trolls, because anybody can use AI to generate
compounds, file patents, and sit on them.
But those patents become useful only if they can be applied to a
(40:12):
certain end use case, for which you need to learn at an industry-
wide scale. So that's the thing: in a
world of AI, you simply cannot say that the
learning will happen in the model.
Every single part of the socio-technical system, every single
part of the industrial system will
have to learn in order to stay competitive. So you're not
optimizing just for that technology, you're
(40:34):
optimizing the broader system where that technology
fits: your organization, all of its capabilities, and so
forth. You're actually redesigning the
system so that it can learn, because the
traditional manufacturing system cannot learn.
But when Shein redesigns it, it takes a small batch, tests
it in the market, and then on that basis updates its
algorithm about which designs worked and which did not.
(40:56):
That's how the system is learning: by redesigning
first. On LinkedIn, Sufin Ben Sabor asks where
should AI not be applied? Does everything look like a nail,
or, you know, are we trying to boil the ocean?
So when do we apply AI and when don't we apply it?
AI should not be applied where it should not be
(41:18):
applied. What I mean by that is you
should never start with AI to figure out whether it
should be applied or not. You should try to understand,
given not just AI but everything:
think about all the forces shaping your industrial playing
field, right? AI, tariffs, globalization,
everything that's happening.
How is that changing how your industry will work a year from
now? And, you know, how are the
(41:39):
boundaries of your industry changing?
That's why I used the phrase playing field.
How is that changing where you play and how you create
advantage? Start over there, because AI is
only one of the ingredients, even though a very important
ingredient, shaping that.
Once you figure out what's changing, you have to figure out
what your future posture looks like.
And then on that basis, you have to ask yourself, well, which
(42:00):
capabilities will be valuable if you become that: if you become a
learning materials company versus just a compound-creating
company, what new capabilities will be valuable?
And on that basis, you then start to think about, if these
are the new capabilities, which of these can AI help me with?
So you have to really work backwards.
You can't just take AI and say we're going
(42:20):
to start applying AI because everybody else is.
I mean, again, it's a bit like you want
to run from one
corner of the US to the other, but you're just hitting the gym
and you don't have a map of which direction to start running
in. So you need to know what you are
doing before you start applying AI.
(42:42):
In other words, don't just follow the hype.
Absolutely. I mean, that's sort
of the unsaid subtext to everything. Let's go to another
question here. Really, this is a good one.
They're all good. But this one, from Mayena K
Empoyo, says: could we consider AI an accelerator of ecosystems, or
(43:07):
the missing link ecosystems had been waiting for to
rise up to their strategic potential?
I'm delighted that he asked this, because as I read your book, I
kept thinking about this in terms of ecosystems.
So tell us about that. This is a topic close to my
heart, because before I started writing about
(43:29):
AI and Reshuffle, for the last 10-15 years I've talked about
ecosystems and platforms and, you know, written a very popular
book on that topic, Platform Revolution.
Now, the way ecosystems have worked.
So what are ecosystems? First of all, ecosystems are
essentially actors with misaligned incentives that have
(43:50):
complementary capabilities, which need to be brought together so
that value can be created. So the whole point of creating,
you know, managing an ecosystem is to align the incentives, just
as the capabilities are already aligned.
And so when you're thinking about how you organize an
ecosystem, take the example of the container again,
(44:12):
you know, the container shipping example
I started with. The whole global logistics ecosystem,
trucks, trains, ships, manufacturers, aligned because of
standards. They agreed on common standards,
they agreed on a unified contract.
So that's what I typically think of as coordination, which starts
with consensus. You need the consensus between
the players to align them, and then the coordination starts.
(44:34):
Now, in the case of AI today, I believe there's an opportunity
to start coordinating and aligning the ecosystem even
without consensus, because AI can take information from various
players, even unstructured information, and make sense of
it and use that to then guide decisions and actions between
different players. A very simple way to think about
(44:55):
it is that when you, you know, book a trip, you get your
flights from one place, you get your hotel booking from one
place, you have your itinerary on a spreadsheet, and then you
you manage your activity somewhere else in your e-mail.
And there was no way for all of these things to talk to each
other. But today you can throw all of
them in into an LLM and ask it to create a structured
(45:19):
itinerary. And that's obviously a very
simplistic way of how that alignment happens.
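[Editor's note: the itinerary example above can be sketched in code. This is a minimal, hypothetical illustration, not something from the conversation: a rule-based `extract_item` stands in for the LLM call that would parse each unstructured snippet, and every name, flight, and date here is invented.]

```python
import re
from dataclasses import dataclass

@dataclass
class ItineraryItem:
    date: str    # ISO date pulled out of the snippet
    kind: str    # "flight", "hotel", "activity", or "other"
    detail: str  # the original unstructured text

def extract_item(snippet: str) -> ItineraryItem:
    # Stand-in for an LLM extraction call: a real system would send the
    # snippet to a language model; here a regex and a keyword scan suffice
    # for the demo inputs (each snippet must contain a YYYY-MM-DD date).
    date = re.search(r"\d{4}-\d{2}-\d{2}", snippet).group()
    kind = next((k for k in ("flight", "hotel", "activity")
                 if k in snippet.lower()), "other")
    return ItineraryItem(date=date, kind=kind, detail=snippet.strip())

def build_itinerary(snippets: list[str]) -> list[ItineraryItem]:
    # Coordination without prior consensus: the sources never agreed on
    # a shared format, so we extract a common structure from each one
    # and then order everything by date.
    return sorted((extract_item(s) for s in snippets), key=lambda i: i.date)

# Unstructured inputs from three different "players" (all invented).
snippets = [
    "Hotel confirmation: Grand Plaza, check-in 2025-06-11",
    "Flight AA123 departs 2025-06-10 08:30",
    "Activity: museum tour booked for 2025-06-12",
]
for item in build_itinerary(snippets):
    print(item.date, item.kind)
```

Sorting ISO-8601 date strings lexicographically is the same as sorting them chronologically, which is why no date parsing is needed for the ordering step.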
But just imagine if there was a way for all of these players to
start using agents to communicate with each other.
They would not even have to work on the same interfaces and align
beforehand. You would have an ability
for an agent to go out, identify the right information from
(45:41):
different sources, and manage your trip alongside you.
And so that's the idea of coordination without consensus.
Once enough consumers, or once enough demand, is aggregated,
because this coordination without consensus has happened,
then the consensus can come into place.
So the primary way I believe that AI will change how
ecosystems work is that it will not have to wait for consensus
(46:05):
before coordination can happen. And so in a significant
part of the economy where coordination is broken because
that consensus is difficult, we will probably see solutions
coming in over there. Let's talk about
jobs. Describe the division of labor between AI and people and
(46:27):
the implications. There are two or three things
that we need to think about over here.
So the first thing is that when people say AI won't take your
job, but someone using AI will, that's a complete fallacy
because it assumes that substitution of labour is a bad
thing and complementarity is always a good thing.
Now, we've obviously seen in the case of Uber drivers that the
drivers who used to work before Uber came about, you know, the
(46:49):
London cabbies and so on, they did not benefit from
complementarity because once Google Maps came in, mapping
helped the less experienced drivers get access to the same
pay and reduce the pay of the more experienced drivers.
So in general, complementarity can actually reduce your pay
while retaining your job. So complementarity is not always
(47:11):
a good thing. And the second thing is that you
don't always need substitution or automation to remove the job.
And I take the example of the typist in the book.
When the word processor came in, people would have said the word
processor won't take your job, but someone using a word
processor will. And typists would have tried to
figure out how to learn, you know, how to use a word
processor. But what that
(47:32):
confuses, again, as I talked about, is that it focuses on the task.
It assumes that the typist's job exists because of the task of
typing. But that's not really true.
The typist's job existed because of a constraint in the system,
which was that before the word processor, managing edits on a
document was very expensive. And so the value of typing,
(47:53):
or good typing, was higher because it reduced the cost of
edits. Now, the moment the cost of edits
collapsed to zero, go back to the point: if the
assumption changes, the job goes away.
In this case, the job of the typist went away because
suddenly inefficient typing was no longer expensive.
Anybody could type with the word processor.
And today, the task of typing never got automated, but
(48:15):
the job of the typist went away. So my point is complementarity
is not always good, and substitution is not, you know,
always the only reason for jobs to go away.
So we have to think about this a little more, you know,
with a lot more nuance. And the one thing that I
would say we need to think about is: stop thinking about your job,
or jobs in general, in terms of tasks and what AI can do to the
(48:38):
tasks. Because if you think of it in
terms of tasks, you'll always say, what can AI not do today?
And I'll start doing that now. The two fallacies with that
are it assumes AI will not improve, and it assumes the
system will not change. And both those assumptions are
wrong. So focusing on tasks and trying
to reskill based on the tasks is actually a fallacy.
What you should be doing instead is focus on the constraint.
(49:01):
And the constraint is important over here because when AI
improves execution, everybody goes into this execution mode.
You know, everybody's vibe coding, vibe marketing, they're
just executing without thinking about anything.
The value shifts to whoever can manage the constraint.
I'll give another example over here.
You know, think of the anaesthesiologist.
I mention that in the book: with an anaesthesiologist, you know,
(49:25):
every single task he performs in the operating room is performed
by a machine. It's automated.
He's managing the machines, but the reason he's paid so highly
is not because he's a really good machine manager, but
because he manages the risk in the room of ensuring that the
right amount of anesthesia is going out at the right point to
the patient. And so a very common constraint
(49:46):
that commands a lot of value is risk.
If you can manage the risk in the system, you can command a
lot of value. The final point I will say, and
I'm not saying risk is the only constraint, I'm just saying
don't look at the task, look at the constraint.
When AI comes in, the old constraint goes away, but new
constraints will emerge somewhere else.
The final point I will say is that in addition to what we've
(50:09):
talked about with AI, if we go back to the, you know, Google
Maps and Uber example, the pay of the driver did not go down
just because more drivers came in.
The pay of the driver went down because the pay was now set by
an algorithm. In the case of Uber, drivers are
now being managed by algorithms. So there's increasingly a
distinction between two types of jobs: above-the-algorithm jobs,
(50:31):
those, you know, like the Uber data scientists who create those
algorithms and are paid very well in equity, and below-the-
algorithm jobs, which have very little agency.
And one of my key points in the book is that once AI comes
in, it will have the same effect that Google Maps had on driving.
It will have the same effect on knowledge work.
Because the more you complement people with AI, some jobs get
(50:55):
increasingly hollowed out to the bare minimum.
And at that point, they become amenable to being allocated by
algorithms. They get pushed below the
algorithm. And at that point you don't have
agency anymore. Elaborate more on
the jobs above and below the algorithm, because there are
profound implications for all of us and many different kinds of
(51:18):
roles. The first question came in from
Wei Yi, I think. She's followed my book, and once she
read the book, she reached out to me saying that in her role,
which was paid social and, you know, paid programmatic
advertising, and data analytics as well,
a lot of jobs that used to be above the algorithm have
(51:39):
increasingly moved below the algorithm, partly because, you
know, very sophisticated knowledge workers have
constantly worked alongside these algorithms and trained
them, so that a lot of the knowledge
of allocating that work has gone into the algorithm.
And what has remained with the humans has pushed them below the
algorithm, and so have the levels of
(52:02):
pay. I hope I'm doing her example
justice, but my point is that's a very good example of a
knowledge job that five years back was valued really well.
But in the meantime, the workers were training the algorithm, and
the work that they were left with went below the algorithm.
The agency went away from them and into the algorithm.
And this increasingly will happen with AI coming in,
(52:23):
because the issue is not whether AI can perform a task or not, and
whether, you know, you have any tasks left to perform or not.
The issue is whether it takes away so much from your work,
whether it standardizes your work to such
an extent, that you become very commoditized.
Anybody working alongside AI can suddenly deliver the output that
(52:47):
you do. And there have been many studies
over the past two to three years which show that less skilled
workers, when complemented by AI, are able to improve and deliver
much better output than higher skilled workers when
complemented by AI. Which essentially means that
even though AI helps everybody, it leads to a flattening across
the board. It leads to less of a divergence
(53:09):
between high skilled workers and less skilled workers, which
compresses the skill premium. But then the more they get
complemented by AI, the more the work that they are left with,
the work uniquely performed by them and not by their peers,
gradually starts shrinking. And if it reaches a point where
you know like a delivery worker today is not doing anything
(53:29):
which involves tacit knowledge because Google Maps is doing the
navigation for them. When it reaches that point where
all the differentiated work has been absorbed into the
complementary technology, then you get pushed below the
algorithm. What advice can you
offer to individuals, to all of us, in our careers, to address
(53:50):
this? Very fast: please don't get into thinking
about new skills without first
thinking about what the new system is, because everybody is on
this reskilling treadmill, but they don't really know what
they're reskilling towards. So think about what is going to
be valuable in the future and think about how you can reskill
towards capturing that value. But if you don't think about the
future system then you might just be reskilling in the wrong
(54:12):
direction. Most people have no idea how
to think about a future system. That's why we focus on skills.
So what do we do? You will intuitively know
that there are certain ways that you can play and certain ways you
can't play. So the example I use in the book
as well, which I find pretty interesting, is that when the
Internet came, the value of a
magician's skills went down dramatically, because the
(54:34):
magician is paid well, and the audience pays him
well, because of the sense of wonder that the trick creates.
But with the Internet, you immediately had these people who
would deconstruct and push out the secret behind any new
trick. So the cost of creating a trick
is very high, and the cost of deconstructing and
spreading it to everybody is very low.
(54:54):
So the value of a trick dramatically collapsed.
So what you see magicians doing today is they very rarely create
fundamentally new tricks. They take the old tricks, they
repackage them with new stories,with new spectacle, with new
narratives to recreate the wonder for which they used to be
paid. So a magician today is not
necessarily paid for the wonder created by his skill at the
(55:16):
trick, but for the wonder created by the spectacle that
they are now packaging around the trick.
So my key point is that when your main source of competing
actually gets commoditized as well, you should really think
about why your stakeholder was paying you in the 1st place,
what about it was valuable for them, and how you can
reconfigure that value in a fundamentally new way.
(55:38):
Finally, two last questions, in sort of tweet-length
bundles. What advice do you have for
companies? I mean, you want it in 140 or
280 characters?
Well, really, you know, it can be: don't look
inside your company to figure out what to do with AI.
Don't look at what you're doing today to figure out what to do
(55:59):
with AI. Think about how AI will change
your playing field and how it will change what advantage looks
like. That will tell you what the new
competitors are going to look like.
That will tell you what new industries you're going to have
to enter, compete in, etcetera. That will tell you which
capabilities will be valuable inthe future.
So the more inward-focused you are, and
this is not only about AI, but in a world of constant change, the
(56:20):
more you focus inward and you say, this is what our core
competency is, the more you are blinded to the opportunity that
lies out there. And finally, what advice
do you have for government policy makers and regulators?
The most important thing is to really understand that
(56:42):
there are a lot of mechanisms of protection,
institutional mechanisms of protection and value
distribution, that used to exist, whether it was unions or whether
it was taxation. I'm not saying that those
are all efficient mechanisms, but what all these
players need to understand is that the rate at which those
(57:05):
institutions move is, you know, a very tiny
fraction of the rate at which innovation moves.
And so, you know, if organizations have to
become learning organizations and not just deploy learning
models, regulation has to become learning regulation. You know,
it has to be more distributed, with multiple points of understanding
(57:29):
what's happening, versus a single top-down system through which
regulation is being delivered. So we need a fundamentally new
way to think about policy. The current model will not keep
pace with what's happening. All right, well, this has been
a very action-packed episode.
An enormous thank you to Sangeet Paul Choudary.
He's advisor to over 40 Fortune 500 CEOs, and he's written this
(57:54):
new book called Reshuffle. Sangeet, thank you so much for
being here with us today. Thank you so much.
This was so much fun. Thank you, and a huge thank you
to everybody who participated.
Your questions, as always... you guys are amazing.
Your questions are great.
Before you go, subscribe to the CXO Talk newsletter and connect
(58:15):
with Sangeet and with me on LinkedIn.
Thank you so much, everybody. We have incredible shows coming
up. We'll see you again next time.
Have a great day.