
February 18, 2025 56 mins

Join Michael Krigsman on CXOTalk as leading CIO advisors Tim Crawford and Isaac Sacolick unpack actionable strategies for Chief Information Officers to thrive in the AI era. This episode dives into critical challenges and opportunities, offering insights on:

- Governance & Data Strategy: Prioritize robust AI governance frameworks and invest in clean, scalable data to drive reliable innovation. Sacolick stresses early integration of compliance and ethics, while Crawford underscores the need for a holistic data strategy to avoid "garbage in, garbage out" pitfalls.
- Change Management: Proactively educate teams and collaborate with HR (CHRO) to secure training budgets, empowering employees to adapt as AI reshapes workflows in IT, sales, and customer support.
- Innovation vs. Efficiency: Focus on AI initiatives that transform business models and customer experiences, not just productivity gains. Align pilots with clear OKRs, balancing agility with measurable outcomes to escape "pilot purgatory."
- Collaboration & Risk Mitigation: Engage legal and audit teams early, building cross-functional councils to navigate regulatory demands and ethical AI use.
- Cultural Shifts: Embrace automation and upskilling, balancing Shadow IT's creativity with security guardrails to fast-track innovation responsibly.

This discussion is perfect for IT leaders navigating digital transformation and equips CIOs to harness AI's disruptive potential. Like, subscribe, and share your questions in the comments to join the conversation shaping the future of enterprise AI!

🔔 Don't forget to like, subscribe, and share for more thought-provoking conversations with top industry leaders.
🔷 Newsletter: www.cxotalk.com/subscribe
🔷 Read the summary and key points: https://www.cxotalk.com/episode/cio-ai-lessons-from-early-adopters-in-the-enterprise
🔷 LinkedIn: www.linkedin.com/company/cxotalk
🔷 Twitter: twitter.com/cxotalk


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
How can Chief Information Officers survive and thrive
amidst the turmoil, madness, and opportunity of AI?
Today on episode 869, we're getting advice from two of the
world's foremost CIO advisors, Tim Crawford and Isaac Sacolick.

(00:23):
Isaac, let me jump to you first. How can early adopters create
an effective AI strategy? And we're talking to CIOs.
The very first thing I think of when I think about AI strategy
actually is our own governance. You know, I need to make sure
that my staff understands what data they can work with, that

(00:46):
the data has enough volume, that it's cleansed enough to be used
inside some kind of AI model that I'm going to use it with,
that people understand what some of our high-level objectives are
in terms of what we're trying to go after.
So I'm actually starting with governance as opposed to just
going after and trying out new things.

(01:07):
I think the second thing, and this is where a lot of
organizations do fall off, is I'm going to talk about change
management very early on. A lot of the early successes in
AI have been happening in the IT groups.
If I want to start bringing AI into my sales, marketing, and
customer support groups, where there's a lot more value, I need

(01:27):
to be thinking about how I'm going to educate the groups,
where I'm going to give them some leeway to use some tools
and work with them. So I'm going to really be
thinking about change management much earlier in an AI strategy
than maybe I was able to get away with in the past.
And then lastly, I'm looking for value other than just
productivity. There's a lot of talk about AI,

(01:49):
how it lets us be more productive. I can't fund transformation
efforts, or AI for that matter, on just productivity
improvements. So I want to hear where it's
transforming our business, I want to hear about where workflow is
going to be improved and in what ways, what value we're delivering,
where we're driving quality. So I'm looking for value beyond
just productivity to be part of my strategy.

(02:12):
Tim, thoughts on this set of issues?
Isaac has just raised a bunch of topics, from innovation to
culture and how AI is different from traditional IT.
Thoughts on this? You have to begin with the end
in mind when you start down this path.

(02:33):
But as part of the prerequisites for AI, you have
to start to break things down. You know, Isaac talked about
transformation. I think about it similarly, but
I use a couple of different terms in defining different
types of AI initiatives. One type is around innovation

(02:53):
and the other type is around efficiency.
And as Isaac mentioned, you know, efficiency becomes kind of
hard because how are you really driving toward efficiency and
what metrics are you using and how are you actually measuring
that? There's a lot of gamification
that goes into it and a lot of question around whether you can
actually achieve reasonable efficiency gains considering the

(03:16):
amount of effort you're putting in.
As opposed to innovation efforts, which have a much
higher output and much higher value to the organization than
efficiency in the long run. And so therefore, they tend to
have lower hurdles to get over because efficiency is something
that technically you could do with more people, but you're

(03:39):
using technology as a means to advance your position.
Innovation, by definition, the way I'm looking at those types of
efforts, is these are things that you couldn't do any other way.
These are insights you could not get by throwing more people at
the problem. And then the last component that
you have to think about before you even start to lift a finger

(04:00):
on AI is you have to start thinking about your data
strategy. You have to have a comprehensive
data strategy in place because if you don't have good data
going in, you're going to have bad results coming out.
And so this is going to require a different way of thinking for
a lot of organizations in terms of just simply how they think
about data, not just where the data is and how they can start

(04:22):
to bring it together. But also, as Isaac mentioned,
where's that governance? How are we start to think about
governance? There's a lot of regulation
that's coming about over 400 components across just the US
states alone, something like 40 in California.
So there's a lot to get your arms around there.

(04:44):
And I think you have to start with the end in mind and be real
methodical about how you traverse down this path.
On the efficiency side, I think we're at a point where CIOs can
start looking top-down and start reinventing workflow.
How are we hiring people? What does this mechanical
operation look like if we built it from the ground up using data

(05:09):
analytics, AI, and automation?
A lot of what we've been doing over the last two years in
fitting AI in has been taking a piece of that process, or taking
a process that we haven't done well, and saying we're putting in
a technology or a step in here that's going to either make it
more efficient or more scalable or improve quality.

(05:29):
We're really at a point where we can start reinventing our
workflow. That's going to rattle a lot of
heads. We're always used to how things
are running today. We're always used to what our
job is in that function. And so that flows back into what
I was saying earlier around change management.
We need programs for our organization, training programs,

(05:51):
learning programs so that they start understanding and
embracing what their work is going to look like as AI is
brought in more fundamentally into processes.
And when I think about innovation, Tim, I go back to
the days when we launched mobile technologies.
In the early days of mobile, it was, you know, we were taking

(06:12):
web interfaces and slapping them on a small screen and saying, hey,
we have a mobile site. And then, you know, the
technology got a lot easier to do mobile-first capabilities.
We started seeing app stores come out, we started seeing
mobile-first capabilities, and then we started reinventing the user
experience. We haven't seen that so much

(06:34):
with AI yet. Even our agents, the agentic AIs
that we're talking about bringing into workflow, and most
of the talk around that and most of the value we're seeing is
inside our organization. But look over the next year or
two and ask, how are we going to do something completely
different now that we can put an agent in place, and how does that
change the customer experience? If you're watching on Twitter,

(06:58):
pop your question onto Twitter using the hashtag #CXOTalk.
If you're watching on LinkedIn, just ask your question, share
your comment in the LinkedIn chat and ask these guys pretty
much whatever you want. It's a great opportunity.
You can get free consulting right now.
So take advantage of it. So you're both describing

(07:21):
attributes of IT, culture, deployment, and so forth that are
not really unique or new. Culture is a human-nature phenomenon,
and change management and all of that.
Is there anything that's really new about AI that changes

(07:44):
the CIO role that has a big impact here that's different
from what we've had in the past? We have to get more comfortable
with how we trust technology and also how we bring automation
into the fold. I mean, even with technology, a
lot of applications still rely on a human component.

(08:04):
And with artificial intelligence and some of the newer
technology that's coming out, it requires us to get more
comfortable with automation in ways that we haven't had to deal
with in the past. And that is new.
That's very new because we haven't necessarily had to get
focused on our processes such that we need to make sure that

(08:28):
they are sound, they are clear. And this touches into some of
the things that Isaac mentioned around change management.
You know, do we have good processes in place and good
change management in place? And if you just turn around and
say, oh, we're just going to automate it, that's a very
dangerous decision to make. And so one of the things that's

(08:48):
new is it's causing us to kind of go back and rethink how we do
things to speed it up to ensure that we are getting more
accurate output and ensuring that we're not automating bad
processes. And what that means is that
there's a cultural shift to embrace technology more fully

(09:08):
than we ever have. And that is change.
And it's a hard cultural change for a lot of
organizations, especially larger organizations, that have
maybe been more accustomed to incremental change.
You know, making these small improvements
doesn't suffice anymore.

(09:29):
You have to make tectonic improvements.
You have to make transformational improvements
because that's exactly where your competition is going.
And if you don't make those, you'll be left in the dust.
One of the differences, Michael, is that it's happening to
employees in our back offices a lot faster than what automation
did. You know, their jobs are

(09:50):
changing, and it's looking like the things I used to do, I maybe
no longer have to do at all, or I do fundamentally differently
than I've ever done before. Take software development:
I mean, if you asked any of us 4-5 years ago could machines
code for us, we probably would have said no.
I have a blog post around this: now somewhere around 20 to 30%

(10:10):
of code is being written by AI, being accepted by the
development group, and being pushed into production.
So it's changing fundamentally
how our employees are working. And for some, you know, they
embrace it, they learn it. Or for a lot of other people,
they're still scratching their heads and saying, what do I want

(10:32):
to do now that AI is able to do some of the things that
I'm doing? I think of that as a writer,
right? Am I writing for people, or am I
writing for LLMs that are going to bring my data in as
content and use it to answer questions?
So it's a, it is a fundamental shift.
And then with the CIO role, I mean, we've seen this a little
bit in the past, but it's getting harder.

(10:54):
You know, every time there's a whole new discipline that we had
to learn, you know, whether it was digital or data, and now AI,
if we didn't learn it enough as a CIO and build up enough
competency around us to lead these areas, the board, the
executive group, would look at it and say, we're going to go hire
a chief digital officer. We're going to go hire a chief

(11:15):
data officer. Now we're saying we're going to
go hire a chief AI officer. And learning AI is fundamentally
a little bit more difficult. We have to scale our ability to
learn from the organization. So I think the goal is for
CIOs to move really fast and make sure that our
lieutenants and our high potentials are getting

(11:37):
enough leadership and getting out there and learning enough so
that they can inform me as a CIO about how to invest, and not
me going back to them and saying
this is the area that I want to focus on.
On Twitter, Mike Boyson says that to take advantage of these

(11:57):
AI opportunities, he says you would need completely new
business models to weave with AI, and those business models
will drive what your data needs are.
And then on LinkedIn, Greg Walters says that this is a long

(12:18):
question, so I'm going to shorten it.
I'm going to ask you guys to ask short questions so it's easier
for the moderator (that's me) to ask.
So I'm going to shorten this. He says we're transforming from
manual thinking to artificial thinking.
Does this mean we will proceed by ignoring silos and
hierarchies to change everything and ensure fully integrated,

(12:39):
organization-wide adoption of AI? And I think the common
thread here is the power of AI to drive disruption.
But what kind of changes do we as CIOs need to make in order
to take advantage of these opportunities?
It kind of comes back to is AI going to drive changes to

(13:01):
organizational structure and how we think about organizations
and operate as organizations. I mean, let's not forget an
organization is a living being, right?
It's an evolving living being and it changes over time and
culture is just one component ofthat.
I do think that AI is, and we already have examples of how

(13:22):
it is changing how we think about structures within an
organization. Has it gone so far as to break
down the silos broadly? Not yet.
I think at some point that might start to happen.
I mean, you could envision a position that we get to sometime
down the road where an AI type of solution, a magical, mystical

(13:46):
solution, looks at how we work as an organization, starts to learn
how we engage with customers, how we engage with employees,
what the market dynamics are, what the environmentals are from
geopolitics to weather, and is able to actually tell us how to
run our business more efficiently.

(14:07):
That's actually something that we can envision and get to.
And then kind of building off of Isaac's earlier
point. Now just imagine you need
software to be able to do it. And that same system can then go
and start to build some of this software.
There's already talk amongst vendors, big, huge vendors with

(14:29):
really big complicated applications and platforms
around. Can we use AI to essentially
rewrite the entire platform? And how can we learn from the
way people work to improve upon that?
And then kind of feed that into the software.
So the short answer is not yet broadly, but we can completely

(14:54):
envision that. I do think that AI is already
having an impact though on org change.
Join our community, go to cxotalk.com Sign up for our
newsletter so we can notify you of other live shows.
We have a question now from Arsalan Khan on Twitter relating

(15:15):
to this. And since you're the one who
first brought up culture, let me address this to you.
He says, who decides what's a good process versus a bad
process? Most standard operating
procedures become archaic since people find other ways of doing
things that are not even documented.
And I'll ask you, Isaac, to keep this pretty short because

(15:37):
the questions are stacking up.
Our notion of process is changing dramatically.
We used to think of linear process and linear handoffs.
We used to think about how to automate pieces of this.
And you know, what's changing is our ability to put intelligence
in place to handle all the nuances in complex decision-making.

(15:57):
We want to be able to ask questions knowing that the
analytics, the data behind it, is far more complex than what can
go into a linear process. And I'm going to ask a question:
you know, given what's happening in the United States over the
last few weeks, how should I evolve my supply chain?
What are my options? This is not a linear process

(16:18):
anymore. This is about having the right
data in place to make tactical decisions as things change and
evolve quickly. And going back to what Tim was
saying, I mean, we may not be able to handle strategic
questions at that scale just yet, but we certainly can get
our data ready for that. And I think that's a big
challenge for organizations to think about today.
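Isaac's "get your data ready" point can be sketched as a minimal readiness check. This is a hedged illustration, not anything shown in the episode; the records and field names below are purely hypothetical:

```python
# Hypothetical supplier records; field names are illustrative only.
rows = [
    {"supplier": "A", "region": "US", "lead_time_days": 12},
    {"supplier": "B", "region": "EU", "lead_time_days": None},
    {"supplier": "C", "region": "US", "lead_time_days": 30},
    {"supplier": None, "region": "APAC", "lead_time_days": 45},
]

def readiness_report(records, fields):
    """Share of non-null values per field, so gaps surface before any AI work."""
    return {
        f: sum(r[f] is not None for r in records) / len(records)
        for f in fields
    }

report = readiness_report(rows, ["supplier", "region", "lead_time_days"])
print(report)  # {'supplier': 0.75, 'region': 1.0, 'lead_time_days': 0.75}
```

A report like this does not make data "ready" by itself, but it makes the gaps visible before a model is pointed at them.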

(16:39):
Martin Davis asks this question. He says, how does a CIO avoid
pilot purgatory with AI? So who wants to explain briefly
pilot purgatory and then give the antidote to pilot purgatory?
I just read a statistic that said 69% of companies are

(17:03):
running 10 or more POCs and 10% are doing 50 or more, and to
some extent I don't have an issue with that.
I mean, I think pilots and POCs, you know, not all of them
are going to make it into production.
A lot of them are learning exercises.
A lot of them are really fishing into the underlying data.

(17:24):
Can the data provide enough of a backing to support the
hypothesis around the AI? And even when it does, does the
AI that you've created drive enough business value to put it
into production? So when you put that all into
play, it's not surprising to me at least that a good number of

(17:45):
these are not making it into production.
I think the real issue is that organizations don't have a
process to make that happen when they really have a winning
POC, a pilot that's delivering. Do they have the change
management in place to impact the people who are going to be
affected by it? Can they bring that AI out at
scale? Can they manage the operations

(18:07):
around the ModelOps and the MLOps that go into monitoring a
model and making sure it's still relevant?
So I think there's, you know, you do so much work just to
figure out what POCs to go after, what data to go after.
But even when you have a winning formula, you haven't done enough
productionalizing to figure out how to make it work at
scale. Tim, maybe you can help me make

(18:29):
sure that I understand what Isaac said.
So let me read that back to you and you tell me if this is
right. OK.
So I think what Isaac just said is there's a lot of people
playing with AI. They don't really know what
they're doing. They haven't thought through.
Their organizations aren't ready.
They don't have the data ready. But you know they're spending

(18:50):
money because, well, it's not clear why.
It's absolutely true. And every time I turn around, I
see another survey, some more anecdotal information that
comes in that basically supports that.
Now, I don't want to come down on the idea of
experimentation. I mean, having a culture of

(19:11):
experimentation, especially within a technology org is
really, really important. And that's important because you
want people to use their imagination.
You want people to bring that creativity in because ultimately
that's what leads to change within an organization and
potential ways for your organization to differentiate

(19:31):
yourself from your competition. And so CIOs know that;
leading CIOs know that they need to find ways to really
explore these new areas. But as Isaac said, you do need
to be somewhat methodical with it.
And as we've gone through the last 12, 18, 24 months,
there's been a market shift where the bar has gone up, such that

(19:55):
before you even start an AI experiment, you need to have
some very clear business outcome, and you need to understand how
it ties to one of your company's objectives.
And typically that falls into one of three camps.
It's either around customer experience, employee experience,
or business operations and supply chain.
It's one of those three. So I think the thing,

(20:17):
Michael, to consider there, and to Martin's original
question, is you just have to think about what you're doing
and what makes the most sense. But don't put everything
together and expect it to be
buttoned up from the start. You still have to experiment,
but you need to more quickly come to an outcome to determine

(20:40):
whether this is a project that has legs or whether it's
something you need to pull the rip cord on.
Tim, I like that part of it. I think that's what companies
miss a lot: how long do you give a team to experiment? And
even if the experiment is a little bit open-ended, when do
they come back with their vision?
How is it aligned to the strategy?

(21:00):
I mean, you can give people some time to go out and learn what
the data is telling you, but you can't give them a mile-long
runway anymore. The expectation is that
an AI project is going to be delivered in about 8 months, and
the ROI is going to be measured in about 13 months.
That's pretty quick. I mean, we couldn't do apps at
that rate 10 years ago. And we're saying we're doing all

(21:23):
this data work, we're doing all this change management work.
We're putting all this AI in place.
We still have a skills issue here, and we're trying to get to
value between 8 and 13 months as boards and CEOs
are looking at it and saying, you know, the honeymoon's over, I
better start seeing some value from all this investment I'm
making. Liz Martinez on LinkedIn is

(21:43):
screaming out during this conversation.
Business case. Business case.
Business case. I agree there needs to be some
connection to a business outcome.
I would just caution on using the phrase business case. And the
reason why, and we might be agreeing or disagreeing on this,
Liz, is because a lot of times when people think about the

(22:06):
business case, they put together this multi-page document that
could be 20 to 30 pages defining why they want to make this
investment into this experiment and what the potential outcome
is. And that's not what we're
talking about. We're talking about these very
quick wins. But I agree it needs to have
some connection to the business as I mentioned earlier.

(22:29):
So as long as that business case is very short and direct and to
the point, and as you start down the path you can show a
connection, kind of connecting the dots, if you will,
to the outcome in short order, in the time frame that Isaac
mentioned, then I'm good with that.

(22:50):
Just don't look for the fully put-together business case,
business plan, the three-year plan, the full
P&L. No. Tim, I use a one-page vision statement, and
it's got, you know, information around customers.
It's got information on
strategy. You have to project the
timeline, you have to project what OKRs you're impacting,

(23:13):
very important. You've got to have a value
proposition for the end user or customer that's benefiting. And
anyone who wants it can reach out to me;
I'll be able to share it with anybody who's interested in it.
But I agree with you, it's got to be simple.
And most importantly, what we used to call a business case is
really about alignment: making sure that the people who are
working on this, the stakeholders, and the

(23:35):
executives know what the objective is.
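Isaac's one-page vision statement could be sketched as a simple structure. The exact fields and the example values below are assumptions, based only on the items he names (customers, strategy, timeline, OKRs, value proposition), not his actual template:

```python
from dataclasses import dataclass, field

@dataclass
class VisionStatement:
    """One-page alignment artifact; fields follow the items named above."""
    initiative: str
    customers: str               # who benefits
    strategy_link: str           # which company objective it supports
    timeline: str                # projected delivery window
    okrs_impacted: list[str] = field(default_factory=list)
    value_proposition: str = ""

# Hypothetical example, not from the episode:
vision = VisionStatement(
    initiative="AI-assisted customer support triage",
    customers="Tier-1 support agents and their customers",
    strategy_link="Improve customer experience",
    timeline="Pilot in Q3, measure through Q4",
    okrs_impacted=["Cut median first-response time 30%"],
    value_proposition="Agents resolve routine tickets faster",
)
print(vision.strategy_link)
```

The point of a structure this small is the same as Isaac's: if the initiative cannot be stated in these few fields, it is a business plan, not a vision statement.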
I'm going to bring 2 questions together, and this is from
Joseph Puglisi and Ashish Parulikar.
I hope I have your name correct. And Joseph says, speak to the
need for governance. What lessons have you learned

(23:56):
around early efforts to use AI? And Ashish asks, how can the
company communicate its AI strategy and progress to
employees and stakeholders in ways that are inclusive and drive adoption?
AI governance is an extension of what we've been doing with

(24:16):
data governance. There are some nuances to it that
are important, around bias, around ethics in
AI. But the main lesson learned,
I find, is that we treated data governance and data
security as very separate tracks from our innovation, and very
often lagging. So we'd figure out what the
innovation was around data, we'd cleanse our data, and then we

(24:39):
start looking at what are the governance implications around
it. But with AI, you need to flip
that. And even with data, the advice
I've given to groups I work with is, you know, I do everything in
agile. I have my data scientists
working on the model. I have my application developers
working on the user experience. I need my data governance

(24:59):
specialist there. I need my AI governance on that
agile team to make sure that as we're working through this
problem, we're doing things that align with compliance and
regulation. As Tim mentioned earlier, these
are changing very rapidly. There's a lot of depth to them,
so not everybody has the answers around them, but I'm putting
people who are experts around this right on my team so that I

(25:20):
can do governance in parallel with my innovation.
A couple of things to add on that second question, briefly:
number one, make sure you have good relationships with your
head of legal and head of audit if you're the CIO.
Those are some of the first relationships you should be
making within your C-suite, within your peer network.

(25:44):
And then the second part of the question, around
communicating this out. I mean, many organizations have
had success with creating a governance body or a council.
And so what that does is two things.
One, it starts to bring different ideas and perspectives
into the mix. So you get those different
personas engaged, but it also has a dual purpose, which is

(26:07):
starting to communicate those shared objectives out to the
rest of the organization. And so by bringing that
together, it's not just sitting with the CIO or one individual,
whether it's the head of audit, head of compliance, whomever,
depending on your industry, but rather it becomes a shared
responsibility. And I think that's probably the

(26:27):
best way to go at this stage of the game.
I'm going to add one person to that, and that's the CHRO, who I
think is really important to have there when we talk about change
in people's jobs, and that change happening so quickly.
And a tip for CIOs: if you're looking for budget around
training and learning and development programs, a lot of
enterprises have that with the CHRO, right?

(26:50):
And so bring them into the conversation around how AI is
going to impact the business. And we have a question now on
data from Michelle Clark on LinkedIn.
And let me just remind everybody that this is your opportunity to
ask your questions and get free consulting.
If you're watching on LinkedIn, pop your question into the chat.

(27:13):
I urge you to put these guys on the spot.
These are two smart guys, so ask the hard questions, not
to me, but to them. Michelle Clark says, how do you
get clean data? How do you know that your
medical diagnosis data, for example, isn't full of racial
and gender bias? And this is really, really

(27:36):
important for every business. It's usually been a lot of financial
services industry companies that have been using these data
platforms much longer and with a lot more skill than in other
industries. And because so many of us are
investing in AI and looking at our data, we need to have those
platforms in place. If you don't know if your data
is cleansed enough, or is healthy enough, or is

(27:59):
trustworthy enough, then go look at the platforms and see how to
tune them for your type of data. Completely agree.
Use the technology to your advantage.
They're getting far more sophisticated now, where
they're actually showing ongoing real-time dashboards about the
health of your data. So leverage that.
And I would just add, I mean, you have to also bring the human

(28:20):
component into the mix too, which is helping people understand
things like bias: how they're bringing that data in, how
they're putting data into these systems, how it starts to impact
things downstream. Let's go back to Twitter.
A really important question from Naya on Twitter, who asks, what

(28:43):
are the top three challenges organizations face when
pioneering AI apps? There are two sides to this, OK.
One area where you're going to see a lot of pioneering is built
directly into the platforms that people are already using.
So CRM and HR platforms, they are all putting agents in place.

(29:07):
They're all trying to say, put more data in our environment, or
make more data accessible in my environment, because I'm going to
bring the AI to you. And so the real
question there is understanding whether your data is ready for
this. One of the things I don't see in some of the
platforms is they can go, you know, do sales forecasting or

(29:31):
they can go do assistance around hiring, but they're not telling
you enough about whether the data in your platform is ready
for that. Ask your people. All right, this
is a great place to engage people: let them go use the
platforms that you've already sanctioned to try out these
agents and come back and say,
where's it delivering value? Where does the data need to be

(29:52):
improved, and go from there. When it comes to building your
own agents, right? And I know a number of companies
that are starting to explore this.
It really comes down to a build-versus-buy discussion and thinking
about where you have proprietary data, proprietary
value to customers, that you're going to start investing in,
building out an AI agent with such good data that it's going to

(30:15):
really be a game changer for you.
And so that's what I'm looking for first: where is there
something that's going to be a game changer in our industry or
in our company, or where do I have data that's worth investing in
building up that capability? I'm going to counter Isaac a
little bit on this one, because I do agree with some of the
points, but one thing is that people, people, people is

(30:39):
one of the biggest challenges that organizations have.
And the lighthouse companies that started to roll their own
with AI learned very quickly that they just do not have the
capability to truly do that and maintain it. Because let's
keep this in mind: it's not a one-and-done project.
And so what you're seeing now is companies starting to adopt AI

(31:03):
technology built into those existing products, as Isaac
mentioned. The one thing that I will
counter with is I would highly recommend at this point that
organizations and CIOs think about buy
over build, especially with this technology and where it is

(31:23):
today. And the reason for that is there
are so many variables and parameters, and very, very
few organizations, IT organizations or line of
business organizations, are truly capable of getting their arms
safely around the technology. You potentially run into more
risk than reward when you start to go down the build path unless

(31:46):
you can truly get your arms around it, which few can.
So I would encourage buy over build at this stage.
Of course, that's going to vary depending
on the particular use case, scenario, and organization we're
talking about. I agree that today it's buy over
build, but what CIOs have to keep in mind is that build is
going to get easier and cheaper. You know, we saw that with

(32:09):
mobile technologies: if you built too early, you were building a
lot of proprietary code, getting into technical debt
around that, and getting into user experiences that needed to
get rewritten. It's going to get easier to
build your own agents. It is, but the other
piece to this that hasn't come up in our conversation is
the adoption piece. And we're already seeing a lot

(32:31):
of dragging of the feet and just not really good adoption numbers
for simple things like Copilots within your existing
productivity applications. So this kind of gets back to the
people and the cultural pieces, but adoption of these
tools is going to be really key. Ashish Parolekar comes back and

(32:52):
he says, are organizations adjusting their OKRs
and metrics to assess the impact of AI on business
outcomes, or how are they planning to measure the ROI?
This comes directly back to the question of alignment between AI

(33:18):
initiatives and business strategy that you both alluded
to. Tim, you want to jump on this
one first. It depends on which OKR, which
particular strategy you're talking about, because in some
cases, it shouldn't impact them whatsoever.
Whether you use a blue technology, a yellow technology,
red, green, it doesn't matter. What matters is the business

(33:40):
outcome. And so there are different ways
that you're going to accelerate or hinder your business, but you
need to stay focused on what that outcome is, whether it's AI
or not. When you get into some of the
operational pieces and you start to want to measure things like
before and after of bringing new applications or technology into

(34:02):
the mix, I could see those changing.
But you need to distinguish between those business strategy
pieces and the ones that are kind of more in the weeds.
And that's a really big differentiator to answer that
question. Isaac, here's another question.
Let me direct this one to you. This is from Chris Peterson on
Twitter. And he says, regarding POCs, to

(34:27):
what extent are legal and audit functions actually ready to be
involved in the development and innovation and not just be
reactive after the POC? So, in other words, to what
extent are legal and audit functions involved and should be
involved? They're understaffed to be able

(34:48):
to keep up with the pace of change.
The technologies aren't transparent enough for them.
That might change when you look at what's happening with agents
being able to take natural language input and be able to
share that natural language thinking,
a natural language output, where when it has a conversation with

(35:09):
another agent, we're now starting to be able to create
an audit trail in English that an auditor or somebody in legal
can start following. So it's still early in this
path, but because we're taking our services, moving off APIs
and moving into natural language interfaces, and they're sharing
their thought streams and where they're passing other questions

(35:31):
to other agents, we're going to start being able to have a lot
more audit controls and legal controls around it.
They have to be right up there at the front of the line.
So at the beginning of the conversation, audit and legal
have to be part of the conversation as part of your
data strategy. But the other thing is there's
been a lot of concern around the black box nature of these LLMs,

(35:52):
and that's starting to change. We just saw OpenAI come out and
start to share some of the reasoning.
And I think DeepSeek is driving some of this.
But now you're starting to see companies like OpenAI that are
saying, OK, we're going to actually expose some of the
reasoning that went into the answer that we gave you from
your prompt. And I would expect to see more

(36:13):
of that. This is from LinkedIn from Jason
Gutierrez, who says quick wins take smaller bites
out of the problem you're trying to solve.
And here's the question, what KPI are you trying to influence?
Is your dev team skilled enough to deliver an AI app quickly, or

(36:37):
are they still upskilling? Which raises the very important
question of talent. You guys spoke about build
versus buy for technology earlier. What about talent, and developing
talent in house versus going out on the market and recruiting

(36:58):
people who have those skills? How do you think about that
balance? That skill set has changed from
knowing as much as you can and how to do something into knowing
what to do, and whether what's being built is being built
securely and robustly and has high performance.

(37:20):
It's a shift in mindset into knowing how to ask the right
question rather than knowing how to roll up your sleeves and getting
something done. And that's perplexing,
particularly for us engineers and those working in IT to be
able to think about this. But that's the nature of what AI
is allowing us to do. It's not just, am I

(37:42):
more productive? It's am I able to do things that
I wasn't able to do because AI is providing the assistance
around it. Last summer, I did coding for
the first time, but I didn't write a line of code.
I had AI write the code.
And I just played the person asking the questions, saying,
I need help to be able to do this function.

(38:03):
How can I get some code to be able to do this?
And after 5 or 6 prompts, I was able to do this.
So I think the same thing is happening inside our IT
organization. I think there's a real question
for CIOs in particular. I saw a data point from McKinsey
saying that IT is more than twice as advanced in using gen
AI as other departments. But the question is, you know,

(38:25):
when we use the word productivity, when we use the
word capability and we start asking the question around ROI,
if we're not showing where that's delivering value,
we're going to be asked to give costs up.
Jason Gutierrez comes back to
both of you guys, and maybe, Tim, you can grab this. He says, sure, but doesn't that

(38:46):
necessarily mean more OpEx? There isn't a one-size-fits-all for
every organization. I think one of the things
I back away from is trying to get too granular on that
answer. I think what's important for the
CIO, one of the things I'm looking at is how do I start to

(39:07):
measure my organization's value and impact to my business.
And when I say my business, I don't mean a particular
department of the company. I mean to the business and our
company's customers. And so I'm looking at how I tie
what we do as an organization in IT with our business

(39:29):
partners in HR, with our business partners in finance and
operations and engineering. And I'm putting together OKRs
and metrics around our impact and performance against those
objectives that we all share. That's where you have to start.
Now you can delve further into developer productivity and call

(39:52):
center productivity, and that's great.
But when you start to get further into the weeds, that's
where a lot more variables come into play.
And there is no one-size-fits-all answer at that
point. Isaac, we have a question from
Derek Butts, and it relates to risk and security.

(40:13):
And he says AI risk guidelines and standards are continuously
being developed. Can you recommend any AI
frameworks to securely roll out and mature the value of AI tools
for your business operations without increasing the risk
across your business culture? So I think really fundamentally

(40:38):
we're talking about risk culture and rolling out at scale.
When you're doing innovation, when you're driving change, you
by definition are increasing risk, right?
There's no way around that. If you want to keep doing what
you did today and continue to perfect it, that's, you know,

(41:00):
you're going to be in that box, and that can be disrupted.
So as soon as I'm innovating, as soon as I'm looking at any new
capability or bringing change to my organization, I'm taking on
risk. And the question is, you know,
are you taking smart parallel measures around that risk?
Are you identifying it? Are you bringing the right

(41:20):
people in who are going to ask the
questions from a risk management perspective?
Tim's brought up legal, audit, security.
Am I bringing those people into the right conversations so that
we're asking those questions earlier?
What's going to be the impact if this data leaks?
Should we be using this data for this purpose?
How are we securing this data? Is our data masked so that if

(41:42):
anybody comes into our organization and starts using
that data in a new way, that data is masked for PII?
These are the kinds of things
that are building blocks when we're doing innovation that we
have in place so that we can do those things securely.
Tim, this is something that's orthogonally related, and this
comes from Arsalan Khan on Twitter, who says, how do you

(42:08):
address shadow IT from frontline employees versus executives?
Should we encourage shadow IT? And the reason I think it's
orthogonally related is because one of the traditional arguments
against shadow IT, of course, has been, oh, we're going to
increase our risk footprint. So what about shadow IT?

(42:32):
And especially in this age of AI, where everybody's using ChatGPT.
I actually support shadow IT, and there are a lot of folks
that think it is a dirty word or that it can be problematic for the
organization. But you have to go back and
understand why is shadow IT coming into your organization to
begin with. And typically it's because

(42:54):
someone isn't getting what they need in the way they need it.
That's the simple answer. The longer answer may have a
lot more components to it when it comes to AI.
Yeah, sure, there's potential for invoking more risk
into the equation. However, if you put the right
guardrails in place, you actually can enable shadow IT and

(43:17):
that creativity, especially in business units that understand
the business better than your organization.
And you can embrace that. And organizations have been
highly successful in creating a culture around shadow IT.
So it becomes more co-development, it becomes more
collaborative in nature as opposed to us and them.

(43:40):
But it starts with the culture and then just kind of fans out
from there. Yes, of course you have to think
about the data governance pieces.
Yes, of course you have to think about cybersecurity and
those kinds of components. And you also want to think about
simple things like how many different versions of generative
AI do you want running in your organization and all the data

(44:02):
that goes behind it. So all of those do come into
play, but I would start with what is your current stance with
regards to shadow IT and then let it fan out from there.
Let's go to another question. This is from Mark P. McDonald,
who's been a guest on this show.
He's a Distinguished Vice President and research fellow

(44:24):
at Gartner. And he says, who are the AI
leaders, companies that we can follow, that CIOs see as the
leaders right now, that we can study, who are doing AI well?
Any thoughts on that, either of you?
There are some companies that are doing some amazing things.

(44:47):
Can I talk publicly about them? No.
And the unfortunate piece to that is because we are in the
very early days, they're using this as a differentiator for
their business strategy. And so they're finding ways to
really kind of change the game, not just change the chess
pieces but changing the whole game, and using a very different

(45:08):
approach with it. So we're not quite there yet.
You do see some public examples, smaller examples, of how AI is
being used. You know, everything from
code generation to summarization. You know, think back to the
earlier conversation about legal, you know, being able to
summarize the body of work around legal.

(45:29):
I mean, that's a massive opportunity.
So there are some of those opportunities that come into
that efficiencies space, but the ones that I'm familiar with, the
really big demonstrable ones that would be kind of those
lighthouses, those beacons of opportunity to follow in their
footsteps, those are pretty close to the vest still.

(45:50):
Is it close to the vest because they have figured out the magic
silver bullet? Or is it close to the vest
because they're crying in the corners and don't want anybody
to see it? You know, there's probably some
of the crying in the corners too, to be honest.

(46:14):
No, hey, listen, I understand nobody wants to cry out in
public, right? I mean, we want to get back to...
Lick our wounds.
Yeah. I mean, let's face it, you've
got to have a few cuts to
have success, right? And the two examples that
I'm thinking of, they're pretty
significant examples. But again, I kind of wish I

(46:37):
could even give you some context, but I don't know how
to do that without exposing who it is.
Look, I'm looking for the small examples, OK?
And the reason they're slow to announce them is their fear of
people coming in, using the technology to find the gaps in
what the AI isn't doing well. And then they're going to be in
the headlines for the thing it shouldn't have been doing in the

(46:58):
first place. So they're rolling these things
out slowly. But I'm starting to see customer-facing
agents come out. I've got one in an email this
morning from a bank about being able to go through an agent
around a car loan. We hate going for car loans.
It's a horrible experience. So when you start seeing that
being publicized, little things that people are doing, that

(47:19):
agents are starting to help, well, you know that some of that
is changing in the marketplace. Tim, Jason Genovese comes back
and he says, regarding shadow IT, which you're in favor of:
as shadow IT steps in, at what point does enterprise
architecture become a concern? And really quickly,
please, because we're running out of time.

(47:40):
Enterprise architecture has to still be there.
I will say a lot of enterprise architecture organizations get a
little too big for their britches and so there has to be
a bit of a check and balance there.
But I think if you're taking the right approach with
EA in such a way that it becomes more modular and more of a

(48:00):
framework as opposed to a heavily over-architected
structure, then it can accommodate shadow IT.
Tim, here's one I'll direct to you.
And this is from Ashish Bhatak on LinkedIn.
He says buying AI may come at a cheaper cost, but what about
the fear of data privacy and security as the data is governed

(48:24):
by the AI service provider? What are the key parameters to
be kept in mind when choosing an AI service provider?
This kind of touches on what I was going to say in response to an
earlier question, which is why many companies are choosing to
buy and use AI built into their existing enterprise applications

(48:44):
as opposed to building. Because those companies
understand the challenges around data management, data
governance, and they're able to put the right guardrails in
place. Most enterprise organizations
don't necessarily know how to navigate that.
This is all new for them, but those enterprise application
vendors actually have the girth, or scale, to be able to

(49:07):
do it for many different companies.
And so a good way to think about this is I can kind of build off
the backs of those organizations as opposed to
having to figure out how to do the basics myself.
And unfortunately, there's a lot of assumptions that IT
organizations make. We saw this in data centers

(49:27):
thinking, oh, my data center is super secure.
Guess what? Of all the data centers I have
done assessments on, they are not as secure as you think they
are. And so the problem is you have
to take those assumptions out. And unless you're willing to do
that culturally, it's really hard to ensure that things are
buttoned up to the same degree. Isaac, Jason Genovese just wants

(49:50):
to be clear that he disagrees. Well, I'll say this to
both of you: he disagrees with shadow IT.
He thinks there's too much risk with data exfiltration.
If you're going to allow it, you must have appropriate security
controls in place. And he does agree, however, that
responsible sandboxing of apps for testing or a POC is

(50:15):
necessary. You just can't forgo security
controls and it's a shame we're not in a bar 'cause we could all
step out and fight. I don't think we're fighting.
I don't mean... Well, OK, I'm trying to
instigate a fight, then, and it's not working.
I don't agree that shadow IT is a good thing.
But what I think CIOs have to understand is that shadow IT and

(50:36):
now shadow AI is happening, OK, and that they can't put up the
walls and prevent it from happening.
And then you can go back to Tim's comments and say, OK,
what can we learn from what people are trying to do that
we're not servicing well or they're not learning enough in
terms of what are the risks or what are the technologies that
we've already put in place that they can go out and

(50:58):
use. So I think the real question is,
well, how are CIOs monitoring for this and responding to it?
All right. So you're on Jason's side.
I'm on Tim's side because personally I think that if an
organization has shadow IT, it means that the people
out in the trenches are not getting what they need, and so
they're bypassing and they're just doing it themselves.
But that's why I say I wish we were together and we could go

(51:19):
out and fight about it. Just to be clear on that, I'm
not saying a shadow IT free-for-all is OK.
So just so we're clear, I'm talking about a managed
approach, an integrated, managed approach to shadow IT.
I'm not talking about the free-for-all that many people think
it is. So Tim makes clear that he's not

(51:40):
talking about technology run amok and he won't allow it.
As a CIO, you have a responsibility to the
organization, right? And so on one hand, you need to
make hard decisions around what you do
and how you support the organization and how you engage
the organization. And so that's why I think shadow

(52:01):
IT can be really powerful in a collaborative sense.
But you're right, I mean, if it's run amok, just going off
and doing whatever they want to do and however they want to do
it, sure, that's risky. That's not a good thing, but
again, that requires some mature thinking from a
leadership standpoint to be able to get to that point.

(52:23):
Martin Davis, who says AI as a service has the potential to
turn all business apps into simply databases with an AI
layer above to handle all business logic in one place, no
longer siloed to functions, purposes, etcetera.

(52:43):
How should we prepare and plan for this future?
And I think it's a great place to end. And so, Isaac, very
quickly, how should we prepare for the AI future that lies
before us all? Very quickly, please.
It's a great vision. I don't think we can get there

(53:03):
that easily. We've been having a vision of a
utopian connected system for a very long time.
I think to execute on that vision really comes down to
developing your people. The technology is changing
so rapidly. When I talk about
transformation, transformation evolves every 18 to 24 months.

(53:24):
This is just the next wave of transformation with its own
languages, with its own risks. And I think it really comes down
to having the leaders in place that are looking for the
opportunities, where to spend time on where to do the
experiments and how they're going to evolve.
Not just the cost equations and the efficiencies that we're

(53:44):
working on, but how are we going to really change our business
model and evolve, because AI is a brand new capability for us to
consider. Tim, you're going to get the
last word. I actually agree.
I think one of the things that you need to think about is
how you can accelerate the rate at which you're transforming.
Transformation is an ongoing process.

(54:05):
It doesn't have a start and stop to it, but you
need to look at how you can accelerate not just the
technology you're using and the innovation you're using, but
also how your organization, both within IT and outside, is
evolving too. So that's going to require
change in terms of the people, the personas, the skills.
Reskilling and upskilling are going to be heavily engaged in this

(54:28):
process as well as the relationships you have both
inside and outside the organization.
That's a huge remit for CIOs, and it's a different remit
than they've necessarily had at this scale, at this pace in
the past. All right.
We have covered a lot of territory today.
In this hour, I want to thank Tim Crawford and Isaac Sacolick.

(54:52):
Thank you both so much for coming back and spending your
time with CXO Talk. Really, really appreciate you
both. Thanks for having me.
Thanks for having me, Mike. Great show.
And thank you to everybody who watched.
You guys are an incredible audience, with the questions and the
insights that you have. But before you go, join our
community. Go to cxotalk.com, sign up for

(55:16):
our newsletter so we can notify you of other live shows.
You see the discussions we have. So join us now.
In two weeks, we are speaking with AT&T's President of
Consumer. She owns the lion's share of
AT&T's revenue and business, so join us.

(55:39):
Then you can ask her questions and share your comments.
And with that, I hope everybody has a great day, and we will
see you again next time.
Take care, everybody.