Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Many companies are stuck in AI pilot purgatory.
Endless experiments, but nothing scales, nothing gets done.
Today on CXO Talk number 898, Tim Crawford, a top CIO
strategic advisor, reveals how CIOs and IT can break through to
(00:23):
real ROI. I'm your host, Michael Krigsman.
Let's get into it. A lot of the work I do is advisory
with CIOs, but then I do a fair amount of advisory as an
industry analyst with vendors as well.
Tim, you spend so much time with CIOs. When you look at AI, what
(00:45):
is the impact of AI on the CIO role today?
It's massive from a number of different lenses, number of
different directions. When you look at the role itself
and what the CIO needs to think about, there are impacts to how
they work, how they operate, how they conduct themselves, how
(01:06):
they inform themselves. And then also outwardly how they
engage with the others within their organization, whether that
be the enterprise leadership team, whether that be the board,
whether that be customers, as well as their own organization.
And so AI is playing a pretty significant role from a number
of different dimensions for the CIO both today, but then as we
(01:29):
go forward in terms of giving them the insights that they're
going to need to be successful. What does this mean on a daily
basis to the life of a CIO and to IT?
One of the things that is coming up again and again is how
do we get to the insights to make really good business
(01:51):
decisions. And so there was a lot of manual
labor, a lot of manual work that went behind those
decisions, whether it be through spreadsheets and documents,
conversations, kind of just experience even coming into the
mix. And so today you can automate
(02:12):
and bring a number of those insights based on core data to
the surface that you never had access to.
And so what that means is you can now make decisions faster
and more accurately in terms of where you go and what you do.
And that impacts both the organization, the IT
organization, whether it's from a cyber perspective and
(02:34):
understanding what threats are coming your way or how to
prioritize those. If it's from a service
perspective, being able to do things like ticket deflection.
And we see that even in the contact center too, being able
to use AI to address some of these core issues that we had no
other choice but to put people in front of.
(02:55):
And so from a day-to-day perspective, the CIO can
leverage AI from a number of different dimensions.
And they really should, because this is really one of the
attributes that's going to differentiate those CIOs that
succeed into the future and those that really get held back.
Are we talking about driving efficiency, saving money, or are
(03:20):
we talking also about innovation opportunities for the CIO to
forge a new or really improved relationship with the business
partners? Now let's take a moment to learn
about Emeritus, which is making CXO talk possible.
(03:40):
If you're a business leader navigating change or driving
growth, explore executive education programs with
Emeritus, a global leader. They offer programs developed in
collaboration with top universities, designed for
decision makers like you. There's a program tailored to
your goals, whether that's AI and digital transformation or
(04:04):
strategy and leadership. Find your program at
www.emeritus.org. At the end of the day, it's
allowing us to do things as IT leaders that we couldn't do
before. We just didn't have the means
and which to do before. But before we get ahead of
ourselves there, we can definitely use it from an
(04:26):
efficiency standpoint to help understand what are some ways
that we can leverage AI to do things quicker, to use the
multiples that come from using AI tools.
And this is where the copilots and even the ChatGPTs come into
play. Not just the chatbots, but
(04:48):
when you get into agents and agentic, there are some ways
that you can use that from an efficiency standpoint to kind of
catapult the process and automate some of your day-to-day
work. But once you get past the
efficiency pieces, you really cannot stop there.
You have to keep your eye on the ball, which is really what are
those big transformational ways that you can bring innovation to
(05:12):
the table to transform the business.
Because at the end of the day, tech is tech.
And whether it's red, blue, green, yellow, it doesn't
matter. It does not matter.
And when I talk to CIOs, they just exemplify that point.
It's not about the tech, it's what you do with the tech to
address those business outcomes. And that's really where AI can
(05:36):
help kind of catapult and move you forward.
But what about this notion of AI pilots and your term AI pilot
purgatory? What's going on with that?
Yeah.
There are a number of studies that are out there.
There's the IBM CEO study, which they run every year.
(05:59):
They've been running that for a number of years now.
And the latest one kind of touches on success of AI efforts
and the perception that CEOs have of how successful those AI
efforts have been. And in that particular
study, only 25% of AI projects, according to those CEOs
in the study, are believed to be proving out the ROI, and only 16%
(06:22):
are actually going to scale. That's a pretty low number.
And then just in the last month or two, you saw the MIT AI
study, which a lot of folks want to quote because the numbers are
so dramatic, says that only 5% of AI projects are proving out
their worth. Take the MIT AI study with a
(06:46):
grain of salt. There are some very specific
aspects that it focuses on that you need to make sure that you
understand when you're looking at those numbers.
Don't just read the headlines on it.
But I think, you know, kind of back to your question, Michael,
it's really a question of do you understand what it is
(07:06):
you're trying to accomplish with AI, or are you just doing some
sort of random experiment? And to date, a lot of folks are
doing just random experiments as a means to learn, which is good.
You know, we have to learn about new technology and new
innovation. This is new for all of us.
But they're not making that transition from
(07:29):
understanding how to turn it from an experiment into
something that is meaningful for the business.
And I'd say the business very specifically, because this
cannot be just another technology project.
It has to have some business outcome.
And when you are focused on those business outcomes and
understanding the outcome that this project or this
(07:52):
experiment could potentially have, then that helps you
understand very clearly what are the metrics you should be using
to measure the success or failure of that effort.
And then kind of working backwards from there, you need
to quickly understand, is this going to prove out your
hypothesis or not? And if not, dump it and move on.
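That work-backwards discipline can be sketched in code. This is a minimal, hypothetical illustration; the metric names and thresholds below are invented for the example, not taken from the show:

```python
# Hypothetical go/no-go check for an AI pilot: define the success
# metrics for the business outcome up front, compare measured actuals
# against their targets, and decide quickly whether to scale or dump.

def pilot_decision(targets, actuals):
    """Return ('scale', []) only if every metric meets its target,
    otherwise ('dump', [metrics that missed])."""
    misses = [metric for metric, target in targets.items()
              if actuals.get(metric, 0) < target]
    return ("scale", misses) if not misses else ("dump", misses)

# Example: a ticket-deflection pilot reviewed after a few weeks.
targets = {"deflection_rate": 0.30, "csat": 4.0}   # agreed before launch
actuals = {"deflection_rate": 0.22, "csat": 4.2}   # measured in the pilot

decision, misses = pilot_decision(targets, actuals)
print(decision, misses)  # dump ['deflection_rate']
```

The point is less the code than the ordering: the targets exist before the experiment starts, so the dump-or-scale call is mechanical rather than political.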
(08:12):
And this is where I think people get caught up, is a couple of
things. Number one, they're not making that determination quickly
enough. Number two, they're not thinking
about what the outcome is that they're focused on,
and they're not aligning it from
a business standpoint; they're looking at it from a
technology standpoint. And then number three, they're not considering
(08:36):
that AI and the effectiveness of using AI requires another
component that we haven't necessarily had to do in the
past. In the past, when we brought
innovation to bear, we just kind of dropped it in and people
started to adopt it and be able to use it and leverage it.
AI is very different. We're very quickly
learning that without the training to help us understand
(08:59):
how this changes how we do work, there's a very high likelihood
that that AI effort is going to fail.
And we've even seen this with relatively simple things like
Copilot, where when you start to drop it in, it changes how
people work. And if we don't help them along
that journey through training and other means, it's really
(09:19):
going to struggle. I want to remind everybody that
you can ask your questions. If you're watching on Twitter X,
use the hashtag CXO Talk. If you're watching on LinkedIn,
just pop your question into the LinkedIn chat.
And we love your questions. You guys are so smart.
(09:41):
So ask your questions. Take advantage of this
opportunity. So David Morales Weaver makes
the comment that he scaled a pilot by proving ROI, not
chasing features, and it saved months.
That's a really interesting
(10:01):
point right there. Absolutely.
But again, he's probably looking at the ROI from the perspective
of what is this really going to do for my business or for that
effort. This isn't about
technology. You cannot look at this as
tech for tech's sake.
You have to look at this from the context of what is the
(10:24):
return on investment. And what that means is what's
the business value that this change or this technology is
going to bring. And you have to be able to
demonstrate that very clearly. And I don't mean demonstrating
it with a whole bunch of dots that you have to connect.
You have to be able to connect those in pretty short order.
Now let's quickly hear from Emeritus, which is making CXO
(10:46):
talk possible. If you're a business leader
navigating change or driving growth, explore executive
education programs with Emeritus, a global leader.
They offer programs developed in collaboration with top
universities, designed for decision makers like you.
There's a program tailored to your goals, whether that's AI
(11:09):
and digital transformation or strategy and leadership.
Find your program at www.emeritus.org.
Our mutual friend Isaac Sacolick, he says this is the
elephant in the room: is it an AI bubble or are CIOs
(11:31):
moving too slowly? We are definitely in a bubble,
but I also think that we are moving too slowly in the
way that we are adopting the innovation.
So what does that mean? So we're in a bubble in the
sense that the AI conversation is incredibly frothy, right?
(11:51):
Everything's now with AI. My toaster has AI in it now.
But is that really going to move the needle for our business?
No. Do we really care?
Not really, but it's the marketing buzz.
And so you kind of have to parse through that when you get down
to kind of the, you know, core building blocks of what
(12:14):
this could do and you start looking at the opportunities for
efficiency. Are we really kind of embracing
those? No.
And here's the reason why. And I know Isaac really well and
I know Isaac's probably rolling his eyes going Yep, Yep, Yep.
And that is that we're not rethinking how we have to change
our business and how we operate. And this is something that has
(12:36):
to happen within the organization as much as with the
CIO. You know, Isaac talks a lot
about what it means to be agile within the organization.
That has to play out with AI. We have to think very
differently about how we work, how we operate, our processes,
our workflows. You can't just take AI and apply
(12:58):
it to the way you've always done things.
You have to simplify, understand how you can optimize, and then
figure out how AI can help catapult that.
And that's very different. That is very different.
But love the question. I totally agree, and you run the
CIO think tank, so you're actually walking
(13:19):
the walk. But let's go to Craig
Brown's question. He says this: once the GenAI
pilots have started, how do you monitor processes and
people to justify the cost of the GenAI
tools and show the value? How do you show the value and
what metrics are key to monitoring and how do you
(13:40):
compare it against current operations?
I mean, basically what he's saying is how do you justify the
cost, show the value and how do you compare it and measure it?
Depending on the specifics of the type of work you're doing,
the answer is going to change a little bit, but you have to
(14:01):
go back to... let me use some examples.
Let's say you're using Copilot, which I mentioned earlier. How
much efficiency is that really kind of driving into the
organization? And what that means is you have
to spend the time to understand how much time is actually being
saved. You can't just assume. You
are going to make some assumptions along this process,
(14:25):
but you can't start with assumptions.
You have to start by understanding actuals: of
the 10 people in a given department, how much time are
they really spending? And you have to spend the time
with each of them. And then you can extrapolate
from there. But it requires more footwork to
truly understand that. And then you can extrapolate to
(14:47):
your organization of 20,100 thousand, 250,000 people when
you're starting to build out discrete non efficiency types of
projects, but rather innovative projects, those have very
discreet outcomes because now you're talking about doing
things you couldn't do before. And in those cases, you
typically have some dollar value that's tied to the outcome.
(15:11):
Maybe it's a new product you can come out with, maybe it's a
new service line or a new market you're going after.
And so there are very clear metrics that you can track as
part of that. In addition to that, on the back
end, companies like Apptio from IBM and the TBM Council, they're
trying to get their arms around this, as much as even just the
(15:34):
FinOps Foundation from a cloud perspective and an AI perspective
of how do you start to understand what you're actually
spending on these efforts to be able to factor into that value
equation? Because that's really what
you're asking here, is what's the value that drives the ROI?
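The measure-then-extrapolate arithmetic Tim describes, sampling a real department and then scaling up against the tool's cost, can be sketched as follows. Every number here is a hypothetical placeholder, not a figure from the episode; swap in your own measured actuals:

```python
# Hypothetical Copilot-style ROI extrapolation: start from measured
# actuals for a small sample of users, then scale to the wider org.

def extrapolated_net_value(hours_saved_per_user_week, org_users,
                           hourly_rate, annual_tool_cost,
                           weeks_per_year=48):
    """Annual value of time saved across the org, minus tool cost.
    48 working weeks/year is an assumption; adjust per company."""
    annual_hours = hours_saved_per_user_week * weeks_per_year * org_users
    return annual_hours * hourly_rate - annual_tool_cost

# Measured with 10 people in one department: ~1.5 hours/week saved each.
net = extrapolated_net_value(hours_saved_per_user_week=1.5,
                             org_users=20_000,
                             hourly_rate=60.0,
                             annual_tool_cost=20_000 * 360.0)  # ~$30/user/month
print(f"net annual value: ${net:,.0f}")  # net annual value: $79,200,000
```

Note the caveat from the conversation: the hours-saved figure has to come from watching real people work, not from an assumed productivity multiplier.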
(15:55):
And a lot of folks focus on the cost, but cost for AI is
not clear. It can be convoluted.
And so there are efforts that are starting out, but we're in
the very early days of this too. And so this is where I'd
encourage you to figure out with your partners, whether that's the
(16:15):
SIs, the GSIs, but also the vendors, the enterprise vendors
that you're working with. Work with them and help them
understand what it is that you need help with in terms of
clarity around this, because they're trying to figure it out
too. Let's go over to Twitter to X
and an interesting question fromArsalan Khan.
(16:37):
He asks about AI failures. It's interesting.
Well, we haven't really been
talking about AI failures, except I guess pilots not succeeding,
which I guess we could say is an AI failure.
And yes, that MIT study is about POCs failing.
Arsalan says this: do these AI failures occur because of lack
(16:59):
of vision of non-technical executives, or because tech
executives just don't know how to convince their peers that AI
is a holistic solution? Both are true.
There has to be an understanding around what is the value of the
project, what is the value of the experiment, and you have
(17:23):
to tie that to a business outcome.
And one of the problems that I often see is that, and
this kind of goes back to something, Michael, that we were
talking about kind of at the top of the show.
But when you start talking about the role of the CIO and how the
role is evolving. And I'm sure many folks haven't had a chance
(17:43):
to read it, but I just published a blog post talking a little bit
about this last night on my blog at avoa.com.
But as the CIO role transforms from traditional to
transformational, and what that means is from a technology focus
to a business focus, what that does is it gets the IT
(18:04):
leadership to be thinking more about those business outcomes,
not technology outcomes. Many of those failed projects,
they're technology projects, they're not business projects.
And this is one of the problems is when you're looking at
efficiency, it's efficiency of what?
And so what is this project really intended to accomplish
(18:28):
and how do you get to that business outcome?
But first you have to understand what those business outcomes
are. And many folks in IT don't have
the level of depth that they need to have around what those
business outcomes should be, what their business objectives
are. Like, for example, do you know
(18:49):
if you're in IT, are you crystal clear on understanding what your
company's objectives are, what the business objectives are that
the executive leadership team is oriented around?
And if the answer is no, well, OK, there's step number one:
understand what the nature of your business is.
Step number two is understand how your business operates.
How do you make money? How do you spend money? Step number three
(19:11):
is understand your customer. What does your customer do?
How do they engage with your company?
Why do they engage with your company versus your competition?
Now some in technology might go,well, wait a second.
I work in IT, I don't work in marketing or sales.
But guess what? With AI, you have to start
understanding this. We've needed to understand this
(19:32):
earlier, but with AI it's absolutely necessary.
Why does AI force someone in IT or force a CIO to think more
broadly across the company? Because I was under the
impression that CIOs should have been doing this since, so,
(19:54):
basically forever in modern times.
I don't want to go down a rat hole on this.
I have a whole talk track on the anthropology of IT and how we
started by focusing on business outcomes in the 60s.
And we kind of lost track of that.
And we started to say, look, you know, we've got this problem, we
can figure it out, solving these business problems.
(20:15):
And we started to create this chasm between what IT did and
what the business needed, and the business kind of lost touch
with what technology was doing behind the scenes.
We have now kind of come full circle where, guess what, that
chasm needs to be closed. And the reason for that is
because what is AI really doing? It's changing how we operate our
(20:36):
business, whether it's from an efficiency standpoint, whether
it's from how we engage with our customers.
It is changing the way that we operate our business.
And the only way you're going to know or understand the impact of
that to your business is by understanding your business.
And unfortunately, as you said, many folks in IT don't have more
(21:00):
than just a basic understanding of how their company operates.
You know, how it spends money, how it makes money, how it
engages with customers. And Simone Joe Moore on LinkedIn
really wants to emphasize this point that we need to not focus
on, quote unquote, just efficiency and really think
about the effectiveness of work for organizations and people.
(21:26):
Completely agree. The real money, the real
opportunity is going to be when you start to move into what I
call innovation projects. These are things that you could
not do before AI. That's different than an
efficiency project. Efficiency projects will have
value and they're a great way to learn, but very quickly you have
to figure out how to shift gears and start adopting some of those
(21:47):
innovative efforts, some of those things that you couldn't
do before AI. And I completely agree.
That's where the real gold is for companies.
The efficiency is important, but the real gold is
when you can do things that will allow you to open doors you
couldn't open before, provide insights that you never had
(22:09):
access to in the past, allow you to engage with customers in ways
that you couldn't before. And here's the kicker: it
starts to differentiate you from your competition.
And so this is where the big race is: how do you start to
differentiate your company from your competition and being able
to address customers where they are and where they're going.
(22:30):
So looking ahead, not behind, but also looking around the
corner. Rocky Vienna asks about the
security risks in using what are, in essence, he says, public
LLMs. Where do you see movement in the
development of platforms to provide a security envelope
around corporate use of AI? It's a very important question.
(22:54):
We absolutely need to be thinking and talking more about
not just security, but governance.
And this is something that all of you should be hounding on
your vendors and providers to be talking about, is how they think
about governance of data. So there are different aspects.
Rocky's asking about security around the actual models
(23:16):
themselves. And there's an
interesting MIT study, different than the MIT AI study, that talks
about meek models. And so probably just look for
MIT meek models. And it's an interesting
read talking about how the changes in big models versus
(23:36):
smaller models are going to playout.
When you start to think about security, you have to think
about the data. You have to think about the
outcome of how you're using that data and protecting that data.
I think this is where there are going to be problems with big
models that we haven't solved. Here's a
great example. I make a public statement that
(23:58):
says, you know what? Michael Krigsman is awful.
He's a terrible person. Mind you, none of this is true.
Oh, it is true. I didn't know it's true, but go on.
No, no, no, no. I've known Michael for years.
He's a great human being and love working with him.
But the problem is when a model picks that up, there is no way
to remove that from the hive. There's no way to back that out.
(24:21):
So even so, once we go down this path, the problem is you're
going to start ending up with more misinformation and
disinformation. And so that's going to be a
problem. But you have to also think about
governance around the data. You do have access to an
especially proprietary customer data.
And even some of those challenges, when you start to
think about user governance models as well as AI agent
(24:45):
models, as well as knowledge models, and then the different
governance models that come from the different systems of record,
how do you start to bring all that together to make sense of
it? And in my conversations with
some of the largest enterprise vendors on the planet, they're
still working that out too. So hang on, it's going to be a
rough ride, but we've got some pretty interesting
(25:07):
challenges ahead of us. And of course, we're dealing
with LLMs that have an insatiable desire and greed to
suck up everything in their path.
Yeah, and you have to be careful that the configuration of your
interfaces to those LLMs excludes the sucking up of your corporate
(25:30):
data. There's that too.
And so I think this is where you have to think about your data
strategy. You have to think about how
you're protecting what is important to your organization.
And that's probably a totally different conversation, but AI
starts to factor into that when you start to think about the
governance of what AI, and especially as we move into
(25:50):
agentic with learning models, reasoning models, discovery, A2A,
MCP. I could go on and on just on that alone around the challenges
to govern data through those processes.
Greg Walters has been patiently waiting on LinkedIn.
And then we're going to jump back to Twitter because we got
questions stacking up over there too.
(26:12):
He says the AI failures seem to occur when AI is dropped into
existing processes instead of the processes adapting, being
digested by AI. But he says, a ray of
hope, that it seems that finally we're looking to focus on
(26:33):
deliverables versus adherence to the process.
But I will just mention I've heard the same conversation
going back, you know, 25 years. This is not new.
There's a whole other talk track around just policy
development. You know, how did we create some
of these business policies and business processes in the first
(26:53):
place? And usually it's because
something didn't go right or we wanted it to work a very
specific way based on the parameters at that point in
time. Well, guess what?
Those parameters have been changing in some cases over
decades, but the company never changed how they did things.
And so one of the things that you're kind of alluding to,
(27:17):
and really needs to be thought out for success, is how do you
start to understand that process first?
So set AI aside for a minute. How do you understand that
process? How do you optimize that
process? And then how do you bring AI
into the mix? Because if you just simply layer
AI on top of existing processes, you're going to end up with a
(27:39):
mess pretty darn quickly, and especially as you move into an
agentic framework. Arsalan Khan comes back.
He says, as a recovering enterprise architect, we should
aim for AI taking our
jobs. Some enterprise architects are
OK with that, but some are not. How much of the AI race is just
(28:02):
self preservation? The problem is this is where the
marketing is really kind of settled in.
AI is not directly going to take your job,
but AI could take your job. And what I mean by that is
there's going to be a difference between those folks that use AI
and those folks that don't leverage AI.
(28:25):
And the folks that leverage AI are the ones that are going to
succeed and excel. And the ones that don't, and
this goes all the way to the CIO, are the ones that are going to
perish. It's the same reason why
traditional CIOs are in decline and transformational CIOs
are in demand. It's because of what they focus
(28:46):
on and how they leverage the tools at their disposal.
And also how they can provide greater value over time to their
particular role in their particular organization.
Chris Peterson on Twitter makes a couple of interesting points.
He says, number one, there was a recent study from Anthropic
(29:07):
that said only 250 bad documents are sufficient to poison almost
any size of model at training time.
So getting to your point about the bad data, basically poison
data, and Chris asks this, he says that he agrees about the AI
(29:28):
bubble and the value conversations that need to
happen. How do we factor in the
potential cost explosions when venture capital and circular
deals go away? As customers, can we even figure
out our sustainability metrics? I think we'll see more smaller
(29:48):
models and leverage the large LLMs less.
It's kind of crazy when you look at the whole OpenAI, Oracle, Meta
thing, and they're just driving toward bigger, bigger, bigger.
And this is one of the reasons why, kind of to the earlier
question about a bubble, I think there is
definitely a lot of bubble taste to what we're seeing happen.
(30:13):
At the end of the day for the enterprise, they need something
that is more performant and also something that's more
specific to their industry or their particular company.
And that's where you get to specific data, and data, frankly,
you can control. So that's one aspect.
The second one, which you bring up around sustainability, this
(30:35):
is a real problem. This is a real problem.
And I've tweeted about this. I'm sure, Chris, you've seen me
talk about this, but we have stopped talking about
sustainability. And I think some of this is
somewhat politically driven for the time being.
But we're going to come back and look at this in spades, which is
(30:56):
how do we start to understand the requirements of natural
resources and the impact that AI is having, whether that be from
an energy perspective, right. Amazon got asked not to
bring more workloads into Ireland or something along those
lines. We're going to see more of that,
whether it be for water reasons or power and driving costs.
(31:19):
Communities that have data centers, they're seeing power
costs go up. But that conversation is largely
kind of subsided for the time being.
We're going to have to come backto that.
And I think that's also going to drive our need to get smarter
about how we use this technology too.
A question from LinkedIn from Edward Monroe who says how are
(31:43):
CIOs approaching the shift from SEO, search engine optimization,
to GEO, generative engine optimization?
And are there early priorities that can guide the development
of a road map for this shift from SEO to GEO?
(32:06):
Number one, the CIO doesn't care, and I don't say that to be flip,
rather you have to look at what the CIO focuses on and you're
talking about something that is farther into the organization
and probably in the CMO's org specifically around how these
technologies are used. Keep in mind the CIO moving
(32:27):
forward is very much focused on business outcomes, not specific
technologies. You know, I often say it doesn't
matter whether it's blue, green, yellow, red, it doesn't
matter whose technology, whose badge, which
technology you're using. What matters is how is this
going to move my business forward?
(32:48):
But that particular question I'd have to punt a little bit on
because that's that's outside ofthe realm of the CIO.
From the CIO perspective, this is an internal set of operations
of tasks, and IT can help support that with the infrastructure.
(33:09):
But once you get inside marketing, really is that the
job of the CIO or is that the job of somebody in marketing who
has real deep expertise in that specific process?
The other piece that you're starting to see more of is
partnerships between organizations.
So CIO and CMO coming to the table together, CIO and
CRO, CIO and CFO, CIO and COO, CIO and CPO. This is something that came up
(33:36):
this morning, and our conversation within the think
tank was around the role of the CIO and the Chief Product
Officer. And so you have to keep in mind
that the level in which these folks are focused and the things
that are most important to them are very much into these very
(34:00):
strategic business outcomes that move the needle for the
organization and for the customer.
When you get into the how of it, kind of into the details,
that's when you start working into the organization a little
further. And also I would assume that's
when you start working with detailed experts who may
(34:22):
live inside it as well, but who have very narrow domain
knowledge that's deeper on specific topics.
That's right. That's absolutely right.
So we have another question from Twitter, from X, a really good one
going back to the POCs, to the proofs of concept.
(34:44):
Elizabeth Shaw says how do you and when do you terminate an AI
proof of concept, especially because doing so is rife with
politics. So unpack that one for us.
The issue there is politics sometimes can govern decisions
(35:04):
and it's kind of the
work through. If you've ever worked in large
enterprises, as I have most of my career, you have to have the
right balance. But this is also where
relationships play a role. You have to think about what's
the impact of spending the money and spending the resources,
(35:27):
because it's not just about money, it's about resources that
are going toward that effort when they could be applied
somewhere else, and ensuring that effort is aligning with
those business outcomes. And I'm going to sound like a
broken record, but business outcomes and business objectives
need to rule the day. Now, the politics will play a
(35:47):
role in that. And you have to make sure that
you have the right balance, healthy balance, and play the
role of the politician in it. But you have to understand that
they have motivations and incentives as much as you do.
And so if you can understand what their incentives and
motivations are of why they're driving that and pushing that to
(36:09):
continue, that might give you some insights as to how you
could drive the focus in a different direction.
So you can pull the plug and move on.
But in sum, you have to figure out how to quickly
ascertain is this experiment going to succeed or fail?
And if it's going to fail, you need to pull the plug as quickly
(36:29):
as possible. If it's going to succeed, you
need to be able to demonstrate success in short order.
I had a conversation with someone who said, oh well, we're going to be looking at a 12-month run rate, 12 months out, before we can actually show progress. And I said, those days are behind us. You have to show progress in very short order, in some
(36:50):
cases weeks and months, not years.
On this topic of succeeding with these AI projects or failing, Isaac Saccolic comes back with a real definition. He says failure occurs when AI exposes data, or when there's a lack of testing to validate results, or a lack of change leadership.
(37:14):
He's absolutely right. You have to think about that. You have to think about how failure is going to play out. And to the blog post that I alluded to earlier, that I wrote last night, there's a significant change coming. There's a study that just came out on executive leadership: 6 out of 10 executives across the
(37:37):
spectrum, so globally, across the spectrum, are expected to change roles within the next three years. That's going to drive a phenomenal amount of change when you start to boil it down into technology and what folks are going to be looking to do.
So change is in our future. If you don't already see it today, expect more of it. If you are seeing it today,
(38:01):
expect even more of it, and it's going to come at you faster.
And so you have to figure out how you can accelerate your organization, your thinking, and your decision making to be able to accommodate that. Just recently, I was giving a presentation to a group of executives, and one of the statements I made kind of caught a few off guard. I said,
(38:22):
look, today is the slowest your company will ever operate.
Think about that for a minute. Today is the slowest your
company will ever operate. We're only going to get faster.
Strap in and hold on. On the subject of the human
(38:43):
relationships that provide the fabric, if you will, relating these projects to the business in the strongest possible way: Arsalan Khan comes back and says on Twitter, should tech executives invite non-tech executives to our echo chambers?
(39:08):
What incentives are there beyond just cost and efficiency for non-tech executives? So how do we broaden this
pie, so to speak? I often talk about in my past as
an IT leader, some of the first relationships that I built when
I walked into a new organizationwere with my chief legal counsel
(39:32):
and head of audit. Now, you might go, well, those
are kind of odd for a CIO to build relationships with.
But here's the thing. You're going to run across those
folks from a legal perspective and from an audit perspective
before you know it. And if you are at least building the relationship and setting the groundwork so you can understand, listen to, and respect where they're coming from, then
(39:58):
guess what? When it's time for you and the
other person to sit down and have a conversation because
you've been breached or you have a legal issue you have to
navigate through, you already understand where they're coming
from. You're not trying to figure it
out in the heat of battle. And so it's a way to build
respect. It's a way to build those
(40:18):
relationships. And I can't emphasize enough how important those relationships are.
When things get tough, it's the relationships you will have to
lean on. It's not going to be some other
factor or technology whiz bang. You have to rely on the
relationship. So don't just look at the value of what you get out of that particular transaction of a
(40:42):
relationship. A relationship is something you build over time, something you invest in over time. And when you think about it that way, it changes your paradigm in terms of why you invite these folks to your staff meetings or to present to your organization.
On LinkedIn, Simone Joe Moore makes the point that applying AI
(41:06):
on top of current processes is no different from automation questions: if your base isn't good, we'll just do the wrong thing faster. And speed does kill, until we slow down enough to make the speed sustainable.
And Greg Walters comments that this is all classic needs
(41:26):
assessment and system analysis stuff. When you do apply automation, whether it be RPA, whether it be agentic frameworks on the other end of the spectrum, all you're doing is just automating an existing process.
(41:46):
There's a danger with that, and this is something that people don't necessarily think about or understand. Number one: have you taken the time to understand how that process works? Number two: you're taking the human
out of the loop, and we underestimate the value of the
human. So when you tell a human to do a certain task, if there are situations that come up that are
(42:10):
outliers to that task, the human has the cognitive capability to be able to say, that doesn't pass the sniff test; I better ask someone, because it's not my process, it's not my capability.
processes, all of those checks go away.
And so that becomes incredibly dangerous where you start
(42:30):
automating bad processes. And so that's something you have to think about, and why it's always important to understand what you're trying to work with first, before you start layering this in. And frankly, is this the right place to start? I mean, that's a whole other aspect, which is prioritization. And then, Michael, the second
question. He's saying this is classic,
(42:53):
classic stuff. You need system analysis.
Go ahead. Yeah.
So I will sort of agree with that. I think you need more business analysis than you need system analysis. And the reason I say that is it all starts with how you operate, how your business functions, how the work gets done. Understand that first, and then
(43:14):
you can go into some systems analysis to be able to ascertain how you should adapt that or change it accordingly when you start thinking about infusing AI into the mix. But you should definitely start with business.
And the reason why I make what may seem an overly subtle
(43:36):
distinction is because if I were just to agree and say, yep, system process, which you're kind of right on, too many people would go, oh, OK, so I just apply my normal system process: take the system the way it was built, understand how it was built, and refactor it for AI. And that would be a mistake. Elizabeth Shaw asks, who should
(43:58):
own the AI ROI as an issue? And Arslan Khan asks, why is the
onus always on technology folks to know about the business
processes? And I think these are related
issues. Who's responsible for AI ROI, and
(44:18):
why is it always on IT? The easy answer for Elizabeth's
question is there's no one person that's responsible for
it. And This is why some
organizations are putting together an AI council that can
be driven maybe by the CIO, sometimes by the CEO of the
company. But there's someone that leads
(44:39):
the council that's a cross functional representation of the
organization and they collectively, not individually,
collectively are responsible forprioritization and ensuring that
guardrails are put in place and prioritization and, and costs
are, are being managed. And then I think the, the second
(45:00):
piece of that is just understanding how you evolve that over time, right?
And so it's an evolutionary process.
We're still learning, but you have to be able to accelerate the decision-making process in terms of what you work on first, second, third, and then evolve that, right? Fail fast.
(45:21):
We've heard these terms before; we're just applying them to AI. Let's just briefly talk about vendors: technology vendors and services vendors. Historically, there was a real
problem with services because companies would outsource that
(45:41):
work and the expertise would lieoutside the company with the
third party vendor that the professional services vendor.
And it created a cycle where IT never developed the expertise.
And so they were, it was this cycle of dependence.
What's your view when it comes to AI now, today?
(46:05):
Number one, you cannot do it yourself.
You can't do this yourself. You will need help.
Wait, wait, wait. Sorry.
So, just so I'm clearly understanding you: you're now advocating outsourcing knowledge elsewhere so that you don't have to worry? Is that what you're saying? No, no, that is absolutely not what I'm
(46:25):
saying. OK, sorry, I missed that. What I'm saying is, when you go down the path of these larger projects, so moving beyond efficiency and getting into the more innovative projects, you won't be able to do them all internally.
You will need help with that, but you have to understand where
those partners play the most powerful role for your
organization. To your point, if you are
(46:48):
outsourcing your organization to a third party and you're
basically paying for them to build the institutional
knowledge and maintain the institutional knowledge and the
learning, that's a mistake. That's a critical strategic
mistake. But the way that you can
leverage a third party is by using them to backfill some of
(47:09):
the more mundane things and help your organization, help your team learn, and augment some of that with that outside thinking
so that you can help them accelerate their own thinking.
But at the end of the day, you want that institutional
knowledge inside your organization, not outside.
So I'm not advocating that it be outside.
What I'm advocating is you need to leverage outside resources to
(47:34):
bring that expertise and help train and educate your team so
that they can move more quickly through this learning process,
right? They're not reinventing the
wheel, but maybe they get a framework and then they can
reinvent it for their own needs.But then you're also not
building that that center of excellence outside of your
organization. You're building that squarely
(47:55):
within your company and that's not always the case,
unfortunately. I ran into one organization
where half the organization was outsourced.
And the problem with that is a lot of that innovation was being
done by those outsourced staff folks.
And yeah, it's an expensive venture and really
(48:17):
detrimental to your organization, to the company and
your customers. So you're not at all suggesting
handing off responsibility for your institutional knowledge to
a third party and then letting them run with it, but you're
saying prioritize carefully, use them to help jump-start.
(48:39):
One of the commenters on LinkedIn, and I forget which one, spoke about this prioritizing.
So you're saying pick your battles very carefully?
Yeah. And if you are abdicating responsibility to a third party, you need to go. You seriously need to go, because that puts the organization
(48:59):
at risk. And I've seen it happen. And unfortunately, I've seen situations where the senior-most person in IT is essentially, my words, holding the company hostage because of some of those strategic mishaps.
And it never ends well. But it's going to force boards
(49:21):
and CEOs to make some very tough decisions.
And yeah, it's going to get rectified one way or another.
But yeah, just don't go down that path.
In one sentence, Chris Peterson on Twitter asks, do you see a specific C-suite role for AI alongside the CEO, like some
(49:43):
organizations have done with the CDO, or would that stay within the CIO? I'm thinking of, like, a Chief AI Officer, for example. So, very quickly.
I have long since said, and I will continue to say, if you are looking at a Chief AI Officer or Chief Digital Officer, a CDO or CAIO, you probably don't have the right CIO.
(50:05):
It's not to say that you don't need a VP of AI or a VP of data, but at a chief officer level, no, absolutely not.
Why? Why do you say that?
At the end of the day, is it really a role that is separate
from the CIO? No.
And then, is it a role that is going to operate amongst the
(50:29):
executive leadership team on par with the CEO, the CFO, CMO, COO, CRO, CHRO, CIO? And the answer is no.
It's a very specific task. That's why I say VP of AI or VP
of Data is a much more appropriate role.
And then that would roll into the CIO, the strategic CIO, because
(50:51):
again, the CIO, their closest relationship is going to be with the CEO, and then secondarily with the rest of the ELT. You're not going to see those other roles taking a similar path.
conversation as to more of the reasons why and how that plays
(51:11):
out, but that's kind of the top line view on it.
Tim Crawford, thank you so much for spending time and sharing
your expertise with us today. I'm very grateful to you.
Michael, thank you so much for the invitation to join and love
the questions coming from the audience.
This has been great. Everybody, now is the time for
you to subscribe to the CXO Talk newsletter so that we can notify
(51:35):
you about upcoming shows. You can be part of our community
and you can participate. This show is all about the folks
who are listening, so go to cxotalk.com right now, subscribe
to the newsletter, and we'll see you again next time.
We have, as we always do, incredible shows coming up. Next week, there's no show.
(51:56):
Two weeks from today, we have two members of the House of Lords in the UK talking about big tech and policy around AI
and big tech. So that's going to be really
interesting right from the horse's mouth.
So join us. Thank you so much.
Thanks to Tim Crawford. We'll see you again next time.