All Episodes

September 26, 2025 • 43 mins

AI luminary, author, and former Group CEO of Technology at Accenture, Paul Daugherty offers a hard-hitting take on enterprise AI and consumer-grade artificial intelligence on CXOTalk episode 895.


He examines what's working, where companies stall with their AI transformation, and what new technology trends leaders should monitor.


Peer into the future of AI and see the importance of responsible AI development.


🔷 Show notes and resources: https://www.cxotalk.com/episode/ai-reality-check-what-works-and-whats-new

🔷 Newsletter: www.cxotalk.com/subscribe

🔷 LinkedIn: www.linkedin.com/company/cxotalk

🔷 Twitter: twitter.com/cxotalk


#AITransformation #ArtificialIntelligence #BusinessStrategy #ResponsibleAI #Innovation #EnterpriseAI #Leadership #DigitalTransformation #AIStrategy #ExecutiveLeadership #cxotalk


00:00 🌟 Paul Daugherty's Career and Current Roles

01:52 🤖 AI Reality Check: Hype vs. Reality

05:51 📈 Generative AI's Impact on Work and Organizations

07:11 🌱 Challenges and Early Adoption of AI in Organizations

08:46 🔄 Invasiveness and Organizational Transformation with AI

13:40 📊 AI Applications and Measuring Business Value

18:14 🎯 Focusing on Impactful Use Cases in AI

19:48 🤖 AI Deployment Challenges and Talent Transformation

24:23 🚀 Prerequisites and Challenges in AI Adoption

25:56 🤔 Defining Failure in AI Projects

29:43 💼 AI's Role in Private Equity and Investment Analysis

33:55 🚀 Future Trends Beyond AI and AGI

35:07 🤖 AI's Impact on Data Management and EMR Systems

36:56 📊 AI's Influence on Jobs and Responsible AI Practices

40:15 🌟 Future Technologies and Personal Reflections


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Paul Daugherty has had an incredible career at Accenture. He was Group CEO of Technology. He was their chief technology and innovation officer. He's written two books. He's advised many startups, sits on boards, and is now AI advisory chair at the large private equity firm TPG.

(00:23):
Today on CXOTalk, number 895, we take an AI reality check with Paul to examine what works, what fails, and what's next. I'm your host, Michael Krigsman. Let's get into it.
I spent a lot of years at Accenture. I was there for 38 years until I retired last year.

(00:43):
And it was an amazing vantage point from which to see the transformational power of computerization and digital technology and everything that happened over those four decades. To put it in perspective, in a way that some of you will appreciate, the first production system I wrote was in punch cards on a mainframe. So it's quite an arc

(01:04):
from there to where we've gotten to today. And I've always been passionate about technology, and my role at Accenture was always about leading what's next and leading us into what's next. So I'm still an advisor to Accenture. I still do some work there. I took on a role, as you said, with TPG, which has been really exciting. TPG is a great firm. A lot of partners are based in the Bay Area at the heart of

(01:26):
technology. They've been a very tech-forward PE firm, a large PE firm with $260 billion of assets across a lot of different platforms. It gives them a way to invest in the big companies that are impacting the direction of technology, as well as, through some of their platforms, in the leaders of tomorrow. And it's been great

(01:48):
to come on board and work with TPG, and we can talk a little bit more about my role there.
Where are we today? What's the state of AI? Are we in a bubble? Is all of this hype real? What's going on?
This isn't a bubble. It's real, in terms of the real transformational power of the technology, and what's already

(02:09):
happening is real. We're just at the early stages of it, which is where you see some of these bubbly things happening. Let me talk a little bit about what's real, what's hype, and what's missing. One example of hype, I think, is around AGI, artificial general intelligence. And one of the problems with AGI is there's not really an agreed-

(02:31):
upon, clear definition of it. But generally speaking, people talk about AGI in the sense of general-purpose AI that can exceed the capability of anything a human can do. And I think we're far, far, far from that. There's a group of eminent people that I respect who claim we're pretty close to it. I'm with a different

(02:53):
camp of people who would disagree with that. I think what we have today is really amazing technology that approximates a lot of the things that people can do, but that is far from real AGI as I would think about it in terms of human capability. Mustafa Suleyman, whom many of you may know from DeepMind and then Inflection, now leading AI at Microsoft, has had a

(03:15):
series of posts that you should go look at if you haven't. His recent post is about consciousness and the risk of ascribing to AI agency and things like AGI and consciousness, because it can lead to very bad human outcomes by misinterpreting what the AI really does. And I think Mustafa explains it really well, so I'd encourage you to read his post. But I think

(03:36):
that's a bubbly aspect: lots of people talking about AGI, companies claiming they're already at AGI, but I really don't believe that's the case. We've had AI around for 70 years. The learning branch of AI, with the transformer technology in large language models, is real and it's amazing. If we just froze AI right now, no more innovation, no more

(03:56):
products, freeze it where it is right now with large language models, I think you'd have five if not ten years of business adoption of what the technology can do today, much less the massive, amazing arc that it's on.
If we were to freeze it today, that implies that companies have seen benefits, they're getting benefits. What are those benefits?

(04:17):
Generally speaking, there are benefits in the obvious areas, like coding and software engineering, where a lot of advances are being made and there are a lot of great tools, and in customer service and marketing types of applications. Again, I can go into detail. One organization I'm working with had a 30% increase in productivity and a 60% increase in Net Promoter Score and

(04:39):
satisfaction by introducing new forms of generative AI into their customer service organization. And then there are different industry functions and such that we can unpack a little bit. So there's real opportunity in there already. But just circling back, I want to finish one thought

(05:01):
on what's missing from AI, because it's really important to where things are right now. I think everybody's a little too centered just on the large language models and the scaling laws around large language models. And I think to get to the next level, to get to anything approaching AGI and to get to the next level of business improvement, we really need to focus on other forms of AI. And there are a lot of things out

(05:24):
there: large quantitative models, liquid AI, a number of other things that are getting more at symbolic AI. Yann LeCun has been talking a lot about this. And I think the combination of the amazing things large language models can do, bringing that learning branch of AI together with symbolic AI techniques, is what's really going to propel us to

(05:44):
a next level of what we can do in business and a next level of what AI itself can do.
What about the impact on organizations? You wrote a book, and a year ago you pulled out sections from it that appeared on the cover of Harvard Business Review, and

(06:08):
you said this: most business functions and more than 40% of all US work activity can be augmented, automated, or reinvented with gen AI. So as you look back a year later, how does that prediction stand up?
Yes, "Embracing Gen AI at Work" was the article in Harvard Business Review, and

(06:33):
yes, that came out of research, as you said, from our book: 1,500 organizations, lots and lots of research that we did. And a year later, I'd say we were low; the 40% was low. I think we significantly understated it. We've done some additional research, and you can peg it above 50% now. And that's looking

(06:53):
at work hours that people expend, that all of us do, that can be impacted, or that will be impacted, by generative AI. So the coverage, so to speak, the surface area that it impacts, is increasing, or the potential is increasing. What's actually happening is still pretty light.

(07:14):
So let's just be real about this. Today, in terms of adoption, we're still in the early stages. From some other research that we've done, we would take a different cut and say that about 15% of organizations have made

(07:34):
significant progress on scaling AI somewhere in the organization. Another roughly one-third of organizations, the balance to get you to 50%, are close on their heels and working to scale. And about 50% are still more in the experimentation, trying-things stage, maybe with individual use cases in parts of the organization, those types

(07:55):
of things. So yes, over 50% of the work can be affected, but that's not happening today. That's the potential, and organizations are working to get there.
And I think one of the myths about generative AI now, and you'll hear a lot of the tech industry talk about it this way, is that there's an easy button: just buy this, implement it, hit the easy button, and you transform

(08:18):
your business. For generative AI, that's not the case. This is really hard, probably harder than the technologies that came before. This is harder than cloud and the cloud transformation, because it's invasive in your organization. It's changing processes, it's changing the roles of people, it's re-educating people and transforming your organization.
You always had to do some of that, of course, with past

(08:38):
technologies, but this is taking it to another level with AI, which means we've got a lot of work ahead of us to get there.
Paul, you used the term invasive. What is it about AI that requires this invasiveness in organizations as companies adopt it?

(09:02):
There are two dimensions to it. One is that the technology is invasive, and organizations are starting out with legacy systems, with technology debt. That's been talked about a lot for years, so I won't go into it in detail. But the average organization still spends 50% to two-thirds of their budget keeping the lights on, on legacy,

(09:22):
rather than new capability. It's much higher than that at a lot of organizations, and that creates not just an investment challenge but also a real anchor and roadblock to getting new capability introduced. And AI is significantly different. You have a lot of work to do at the data level, with your data fabric, getting that in shape. You have a lot of work to

(09:43):
do at your architecture and application layer, at your middleware layers. And some of that technology is immature. We can dive more into this if you or your audience would like, but we lack categories of products right now. There is not really good, mature agentic middleware out there for organizations to tap into; that's still an evolving product category. So on that technology dimension, this is hard work.

(10:05):
The second dimension that makes it hard is the process dimension. A lot of the benefits of cloud were technical: I can move to elastic capability and external data centers and reduce a lot of costs, and yes, there was business impact. But to get the benefit from AI, it goes invasively into processes and changes the way people work.

(10:26):
And that's hard to do. There are change management issues, and there are organizational issues, often where things cut across silos. For example, I'm working with an organization that's looking at AI in customer service. But they realize the big benefit for them isn't just in customer service; it's integrating across product management and product innovation so that they can really

(10:47):
dramatically shorten the cycle time between getting customer feedback and getting product innovation out there. And it's really hard to compress those two functions into one and make it all work the way they want to with AI. That's an example of the invasiveness on the process side. And then you can get into cultural change and everything else that's part of it. But those are at least a couple of the dimensions that make it

(11:08):
difficult.
So according to what you're saying, and I don't want to put words in your mouth, AI is a forcing function for relatively broad organizational change, with tentacles and implications that extend in a lot of directions. Is that an accurate way of restating what you just

(11:28):
said?
Your AI strategy is kind of your business strategy. There's a big debate in organizations: they want to do AI, but do they need an AI strategy? I'd argue, generally speaking, what you're really doing is thinking about your business strategy with AI as a significant enabler, along with whatever other things are in the mix that you're dealing with, macroeconomic issues and everything else that's going on. So you have to get

(11:52):
into what's happening in the business now. There are things you can do more easily. You can get some additional incremental productivity by rolling out copilot tools and doing a variety of things. But to really get some of the bigger benefits, and to look at the impact and how you can get there faster than your competitors, some of these broad organizational and process transformations take

(12:13):
more work.
This type of change, it sounds like you're describing BPR and business process transformation from 25 years ago. How are we different today?
Client-server technology is what propelled business process reengineering, the famous Michael Hammer book and everything that

(12:34):
some of the audience will remember from the 90s. And the message there was similar: you can't create the superhighways of tomorrow by paving the dirt paths of yesterday. That's similar to the message I was just giving. I think it's just broader. If you look at the surface area of an organization that client-server impacted, it was quite low, whereas AI is impacting pretty much everything that an

(12:57):
organization can do. So I think it is a similar message, and a similar need to step back and rethink things. That's why, rather than reengineering, I call it reimagining. You really have to step back and look at the art of the possible, because you don't just have your human workers and customers in the mix. You have your AI agents, AI customers, and partners

(13:19):
that are going to be part of the interaction that you're thinking about. So it does really require this kind of reimagination capability.
Subscribe to the CXOTalk newsletter right now. Go to cxotalk.com, because we have live shows with amazing, really genuinely amazing guests. So subscribe to the CXOTalk newsletter.

(13:39):
Do it now.
We have a question on LinkedIn from Larry Rubinacher. It's a very specific question. He says: what do you see as some of the key AI apps for the utilities industry?
Utilities is an interesting industry because you've got services, customer
(13:59):
service, billing, and things like that. You've got manufacturing, process manufacturing: generation, transmission, distribution, etcetera. You've got construction types of capability. So utilities are kind of a microcosm, maybe, of a little bit of everything that you have across other industries. So I think there are clearly opportunities in that customer service area,

(14:22):
like we're seeing across industries, and I know that's an area that I've done some work in and that a lot of utilities have done work in. So that's, I think, a clear area of opportunity. On the operations side, there's been some really interesting and innovative work that some utilities have

(14:44):
done around using AI for things like wildfire prevention, on the West Coast of the US in particular, where fires have been a cause of major outages and system issues. They're using AI in a couple of ways. One is to detect fires more quickly, using AI analysis of satellite imagery and all sorts of things you can get. But one utility in particular,

(15:07):
working with my former company, is doing some work to look at how you equip workers to go into very hazardous wildfire environments to protect the infrastructure, which is very difficult to do. So there are a lot of implications like that in utilities as well. There are some interesting things going on in emissions

(15:28):
management for utilities to to help utilities monitor
emissions, for example, things like a meth, you know, tracking
methane leaks or tracking methane burn off in different
ways to allow utilities to operate more sustainably.
So I think there's a lot of use cases in utilities because
again, it kind of pulls in a lotof these use cases that happen

(15:50):
across the other industries. Utilities are just a challenging environment sometimes; because of the rate-based model and the way capital's allocated, it's sometimes hard to invest in those opportunities, but I think there are a lot of opportunities.
We have another question, and this is from Joy Tuhan. She says: Paul, it was great working with you at Accenture.

(16:11):
What are some KPIs and metrics that organizations should use to show AI is creating real business value?
It's a really important question. Think about the performance metrics on the page-one scorecard of the company. Whatever they do, organizations are going to track all sorts of

(16:33):
key metrics in their business, from days sales outstanding to accounts receivable to all sorts of productivity measurements in manufacturing and other things. And I think if AI isn't making an impact on those things, then I don't think you're looking at AI right, in terms of the impact it can have.

(16:55):
So I think it's a matter of looking at the KPIs that are critical in your business and understanding which of those can be dramatically impacted by AI. In my Accenture role, to pick up on Joy's question, with AI, even before generative AI, with classical AI, so to speak, we were looking

(17:16):
at, for example, our software development processes, looking at how we improve their efficiency and productivity every year. And we were improving them about 10% a year, every year, and we'd measure that to make sure we got those kinds of outcomes. So I think the key is not necessarily a whole new set of outcomes, but to be very intentional about the AI

(17:37):
projects that you're pursuing and be able to tie those to the,
you know, the Page 1 metrics that you manage as a as a
company. Of course you're going to have
other metrics for your AI programs, other programs working
right, other successful in delivering value and everything.
But that's I think that's key. The one other element of that
that is going to mention, but you know, maybe it's some other
context, but that is critical isis I think organizations also

(18:02):
need to avoid AI use-case-itis. There are too many organizations that I walk into that very proudly show me 100 use cases of AI across the whole organization. They're so excited, they've got so many. I'm like, pick seven, pick five, pick a number, but focus: pick a few and really tie them to something that makes a difference to your company, and make them

(18:22):
happen. I don't think it's a victory to have a lot of use cases. I think it's a victory to have a small number of use cases that make a difference, so you can show the value, show the impact on your company, and then move to do more.
I have to reinforce that comment. There are so many folks who have chopped up the organization, evaluated the

(18:43):
organization, and identified all of these use cases. But as you said, identifying use cases and putting people to work in so many different areas does not necessarily help you reach the finish line.
I think that's right. I think you can solve small problems and miss the bigger problem. There is one large consumer

(19:04):
goods organization that I worked with. We worked on AI in the sales arena, and the initial use cases were around how we get sales leads better, to make a sales rep more effective, and all sorts of things. It turns out the bigger productivity improvements, I should say, were cutting across use cases. So how do you combine a salesperson's productivity with things like inventory

(19:26):
availability and delivery times so that you could cut across those and get things to customers faster? There's a whole different way of thinking: how can you move product to customers faster, rather than just making an individual salesperson more effective? You'll miss opportunities for those bigger reimaginings if you just get caught up

(19:46):
in use cases.
This is from Bhavana Bhagat, and Bhavana says: is the talent transformation with AI for real, or will it be another RPA?
AI risks being another RPA if deployed the wrong way. I think RPA

(20:08):
kind of had this problem of being a little fragile. You implemented the RPA technology, robotic process automation, for those who aren't familiar with it, which is being able to stitch together and automate processes across different systems. It's very powerful technology that does a lot of great stuff, but I think it hit limits in terms of what it could do and wasn't flexible enough to adapt to changes in the underlying systems.

(20:30):
Generally speaking, I think AI could be subject to the same limitations if not deployed very well. So I think the architecture of how you deploy AI is super important. The way you insulate from changes in underlying foundation models is super important. And architects are back: for those of you listening, if you're

(20:52):
enterprise architects, we need a lot more enterprise architects to shape these AI solutions going forward who really understand all these things, because it's super important. Otherwise, you can end up with brittle AI systems as well. Now, Bhavana, in your question, you said talent transformation. I'm not sure exactly what you meant by the talent transformation related to RPA and AI, but there is,

(21:14):
I think, a broad kind of reskilling of people, software engineers, developers, architects, etcetera, to learn AI, maybe more so, and certainly more so, than people needed to advance their skills in the RPA era. So I think that is part of the journey ahead too.
Sachin Namdar says,

(21:36):
he's from Accenture: what are your thoughts on the future of the consulting business going forward?
Services generally, tech services, let's just include consulting, systems integration, and outsourcing all together: tech services generally, I think, are hugely impacted by everything happening

(21:59):
with generative AI. It's probably the biggest impact, more of an impact than any of the previous technology changes. And what that means is that the industry is at the center of some of the changes that are happening, which means it's both a threat and an opportunity. It's a threat because I think the existing way that consulting happens, the existing ways that systems integration and

(22:21):
outsourcing happen, aren't going to be the ways to do it, and are already not the ways to do it, in the gen AI world. So I think the challenge for any company, for Accenture, for any company in the industry, is how to be first and ahead in transforming to the new processes. On the consulting side, having smarter people and good knowledge capital and walking in with a better presentation to the CFO's office, that was yesterday.

(22:44):
Tomorrow, you need to walk in with a foundation model pre-trained on the client's data and, in the CFO's office, be remodeling their business dynamically in the first meeting. That's a big transformation on a lot of levels. And that's emblematic of what needs to happen in the industry and what organizations like Accenture, I know, are working on.
What does that do to the consulting industry when you have a lot of people whose

(23:06):
primary skill was listening, and now, of course, they still have to listen, but there are so many more dimensions, and the talent pipeline has not necessarily prepared them for that?
This stuff is hard, as I've been emphasizing as I've gone through this. Clients need help in doing it. So I think there's a big role

(23:26):
for services organizations to help companies do this faster, better, more effectively. So I think there's a big need for services to do that, but they need a different shape to it. As an example, you need more forward-deployed engineers in the business to more quickly drive change for the client. So it's a matter of transitioning the talent, which companies,

(23:49):
Accenture as an example, have been very good at: transitioning and retooling talent. It's about rethinking the workforce; the degrees of leverage in pyramids and things like that will change, depending on the type of work, as you look at this going forward. It's about moving services to software. So for the services that are provided,

(24:09):
Where are there opportunities toplatform, platform a ties if
that's a word or transition to services kind of the pound tier
model in that way. So I think those are some of
the, those are some of the changes and organizationally
talent wise, you know, technology wise, Michael.
And this is from ebru Bayer, whosays there are still
organizations out there who haven't done cloud
transformation. They're lacking data quality and

(24:33):
governance. Do you see any prerequisites that organizations should focus on before jumping into defining an AI strategy or executing that strategy?
Those are impediments, but you can't wait. It can't be sequential, where I'm going to do my cloud and data and then eventually I'll get to AI.

(24:53):
There are some companies I've worked with who have looked at it and said, man, I'm in such bad shape on data, I'm just going to do that, and then I'll deal with AI once I'm done with it. That's a small minority. I think most realize: OK, I've got a lot of work to do on the data side, but I've got to get some elements of AI working and make progress on my journey, and they're melding the two

(25:14):
together. So I think you've got to look at your specific circumstances. Those are big impediments. Many companies, I'd say most companies, have some variety of the challenges that you mentioned, Ebru, but it's a matter of figuring out how to parallel-track it and manage that in an effective way.
Let's talk about adoption, success, and failure. What can go wrong? We have all heard about a recent

(25:37):
MIT study that said 95% of gen AI projects fail. Now, there are some issues with that study. Maybe you could say it was talking mostly about proofs of concept. Nonetheless, when it comes to gen AI, what does failure mean? What causes failure? How do we even recognize it?
It came at a time in the market

(25:59):
when people wanted to seize on it and use the message that 95% of projects were failing, like the headline, in different ways. So I think it was unfortunate that it got the traction it did, because I think that's a misleading conclusion. I don't think 95% of gen AI projects are failing. I have a surface area of hundreds of AI projects that I see, and it's nowhere

(26:22):
near 95% failure. Generally speaking, people are making progress in different ways.
And what is failure? How do we define failure?
The way I would define failure is that you don't achieve the business objectives you set out to achieve, which gets into one reason for failure: sometimes there's no clear North Star of

(26:43):
what they're trying to achieve, or no clear metrics from a business perspective. So that's how I define failure. I guess you could define it in other ways, but that's one way to see it. I've seen people develop great individual use cases that they couldn't scale. I'd say that's an element of failure too, because they developed a great POC on a small scale, but they

(27:04):
realized they were missing the data or whatever they might have needed to go scale it more. So I guess there are five things that I find companies that are successful are focusing on, and I've been talking about these for a while. They aren't necessarily new if you've heard me talk in the last few months, but these are

(27:25):
five consistent things that I've been seeing. One is the value point that I just mentioned, Michael, back to the definition there: you need to focus on the value, have a business case, tie it to metrics, and understand what that business case is. And value generally will mean really looking across the use cases and looking at things that reimagine the process, that are doing something

(27:45):
significant. So this value piece is important. Sounds obvious; why wouldn't everybody do that? But not everybody does. A lot of people are starting in different ways. The second point, reason for success or obstacle, is what I call the digital core and data. And that was at the heart

(28:06):
of the question we just answered. But sometimes companies don't have their digital core of systems in place. If you're not on SAP S/4HANA, for example, SAP's new generation, you don't get access to their AI. So you've got to get your digital core in place and upgrade to the latest version of SAP to get access to the AI, as an example. So I think getting the digital core in place and getting the data in place is key, and companies are, generally speaking,

(28:27):
behind on the data piece. That can cause failure or limit the effectiveness. The third piece is talent, and underinvesting in the people. This is the talent to do AI, having people who can do the AI, but more broadly it's the talent across the workforce. Are you training the workforce with what they need? There are some best-in-class examples of companies that

(28:50):
are doing a great job of educating their entire workforce on AI: what they're doing, what their objectives are, involving their workforce bottom up. These tend to be enlightened leaders who get it. Companies like Merck and Walmart, I think, are examples of companies doing this, companies that are really building the talent base

(29:13):
they need going forward and the learning systems they need. The fourth point is responsible AI and really putting the guardrails around it, so you're not getting into trouble as you do AI. Leading companies, I think, are doing that well. And the fifth point is viewing it as an ongoing, multi-year change program: you're building the capacity for technological, process, and

(29:33):
organizational change into your approach, rather than viewing it as getting one use case or one project right. So those are the characteristics I see in companies that are doing well. Let's talk about private equity and venture capital. Just give us some insight into your role at TPG as AI Advisory Chair.

(29:55):
TPG is a large private equity firm, 260 billion in assets, and they span all these different platforms. It's not just buyout capital: they have growth capital, they have credit, they have real estate, they have impact funds, their climate fund, and others focused on impact investing. And it's really an amazing opportunity to combine across

(30:17):
those. And the reason they brought me
on to the team is it's a very tech forward team.
They, they really understand andthey've been investing, they've
been very successful investors in a lot of domains, including
technology for many years. But they really wanted to look
at, you know, how tech, you know, where technology is
creating opportunities and how to accelerate some of the
opportunities that they see. So the role as AI advisory chair

(30:37):
is to help with the overall strategy and investment themes, and to work across those different capabilities I just discussed: looking at where there are opportunities, helping on new deals they're doing, and also working on expanding the network, pulling in other

(30:58):
expertise on AI and building other connections in AI that will help them be even more effective going forward. And again, it's been a lot of fun. Given their scale, they can have impact in a big way, which has been very exciting to see. So that, combined with the VC angle, has been a lot of fun, seeing it from both perspectives.

(31:19):
As you're looking at startups and investments, of course, the traditional evaluation points have been the product, product-market fit, the team, and so forth. How does AI now fit into investment analysis and decision-making?

(31:39):
There's a strategic thing you have to look at that's a little different; I'll go through this first. I'll talk about technology-related companies, but it's not just technology. Let's discuss it in technology terms. I think there are five ways that AI can impact a technology company. First, it can be no impact.

(32:01):
AI doesn't impact it at all. That's number one: rarely the case, but it could be. The second is AI enhances the SaaS model, or whatever the company does today. Look at Workday as an example: the new agentic announcements they just made at Rising this week, with AI enhancing the capability of what they

(32:23):
do. I think that's the second level, or second example. The third level is when the AI becomes more important than the underlying SaaS product itself; the AI is the focus. I'd argue that when you look at GitHub, maybe ServiceNow, or the future of Salesforce, the AI becomes the defining capability around the product.

(32:45):
The next level would be AI going even further and commoditizing the underlying technology. Look at Klarna and the way its CEO has talked about the impact of generative AI and how he works with different enterprise software companies. Maybe Zendesk and others fell into this pattern.

(33:07):
And then the fifth level is AI just compresses the whole spend in the category, because you don't need the software. Chegg might be a good example of this. Chegg, the learning company: when the generative AI models came out, people flocked, for free or for very low cost, to the generative AI models, and Chegg lost a lot of its population there.

(33:28):
So that's a bit of a framework. I think you have to look at that from an investing perspective and understand not just today but, looking at the movement of the market space over the next few years, what the impact is. We recently had the CEO of Zendesk on CXOTalk discussing their AI transformation. So if anybody's interested in that, we had an in-depth

(33:51):
discussion. So just go to cxotalk.com to check it out. All right, let's take another question very quickly now, because we're just running out of time. This is from Sufian Ben Sabour, and he asks: what is next after AI, beyond AGI? And really quickly, please, Paul. The things to look at beyond

(34:12):
what we currently consider AI and AGI, I think: look at physical AI. I know you said beyond AI, but not many people are focused on physical AI, which is really where Elon is going with X and xAI and everything he's doing, and where World Labs and organizations like that are going. So think physical AI, AI meeting the real world, as one thing. Then look at quantum: general-purpose quantum is still a number of

(34:34):
years off, but some specific impacts of quantum are happening fast. And related to this real-world-blending theme is robotics: the massive advances in robotics and these new physical world models that are coming. We'll predict it right here on the show: we're going to have a moment bigger than the ChatGPT moment

(34:56):
around the physical world and robotics and what they can do. That's coming relatively soon, and it may be even bigger to us in realization than the ChatGPT moment. How soon? I think it's within a couple of years in terms of really seeing these physical models and what they can do. Say two years, maybe,

(35:18):
certainly within five years. Steven Redmond on LinkedIn asks: how can organizations get ahead on organizing and cleaning their data while not losing momentum on AI adoption? Really think about your business case, because what I see time and time again at organizations is people coming up with great data plans and data strategies, and it gets shot down by

(35:40):
management or whoever, because there's no business case tied to it. So I think you need to think about how to link your AI benefits, or whatever benefits you have, an ERP transformation or whatever, with advancing your data estate. Link things together so that your data transition, your data modernization, doesn't get cut off and shut down, which is what has continually been happening to companies for the last decade

(36:00):
or more. Rajat Kumar asks: what will be the future of EMR (electronic medical record) applications using AI? Are some of the major players integrating AI into their apps, and what kinds of integrations will there be? Basically, what is the future of electronic medical records? If you look at the recent announcements from Epic at their event about a month ago, those of you that are in

(36:22):
this space know they made a broad set of announcements, really an expansion of what you think of as EMR. They're expanding Epic's role well beyond what you thought of as EMR, into care provision activities where AI will really have a big impact. So if Epic's vision of the future is right, you'll see the EMR providers play a critical role in automating a

(36:45):
lot of aspects of patient care, revenue cycle management, and the like. There are other views of the world which say that will build up around the EMRs, and the question will be which vision plays out. Arsalan Khan, responding to your comments earlier about enterprise architecture (he's a recovering enterprise architect), says all jobs will be

(37:08):
affected by AI. Are there any that are still untouchable by AI? Physical skills, where people go out and do things in the real world. I think those will take a lot longer for AI to impact. Anything where you're dealing with knowledge and information, AI will have an impact there.

(37:28):
There's we have a lot of data. I can put give you a link to
send around based on different labor categories of where
generate Jenny, I will automate versus augment and where there's
no impact, I can send that out. Maybe as a follow up, Michael,
I'll give you the link. And that does it by industry as
well as function. We need to talk about
responsible and ethical AI. Everybody, every technology

(37:49):
company says, we are doing responsible AI. What's the reality behind that today? Less than 20 percent of organizations, and that's probably a generous number, are really doing responsible AI. The way I would characterize the need: to do responsible AI right, and I can't get into each one of these, you need three things. You need principles.

(38:10):
Everybody's got those: fairness, privacy, security, explainability. Everybody agrees on the principles; most organizations have that. The second thing you need is processes. How do you do it and make sure you're doing AI responsibly? Is it baked into your software engineering practices, etcetera? Most aren't doing that. And the third thing you need is tools and measurement. Do you understand every AI use

(38:33):
case across your organization? Do you know what the risk level of each use case is, and what's being done to mitigate any bad outcomes, irresponsible outcomes, so to speak? Most organizations aren't doing that. So we need to get beyond the principles and the lip service to responsible AI, to implementing really rigorous, industrialized

(38:55):
responsible AI. And like I said, less than 20 percent, probably a lot less, are at that stage. That was a harsh commentary. It's true. I've been preaching on this, so to speak, for about eight years now, involved in a lot of the organizations that have been advocating for responsible AI, and for a long time it was preaching into the wilderness with zero percent listening.

(39:15):
So I view the 20 percent as an accomplishment; we've got more people taking it seriously. But I think too many organizations don't really understand that there's a business issue here, because the biggest risk of AI is eroding trust. If AI is used for deepfakes or whatever, think about all the different ways AI can tarnish your brand. Those companies that really develop trust in the way

(39:37):
they're using AI are going to have a competitive advantage over other companies. This is about the business and strategic value of your brand. This isn't a feel-good thing. I think too many people still think responsible AI is a feel-good thing, rather than something that's strategic and essential to your competitive survival. What's your view of government regulation of AI? I think it just needs to be balanced. We need government regulation that encourages

(39:59):
innovation, leading innovation, and has the right focus on preventing the real harms: deepfakes and some of the obvious harms that are out there. There's a lot more to say on that, but that's the one-sentence version. And in one sentence: what technologies are you watching now? One new thing I'm watching that

(40:20):
I think is in the underappreciated, undervalued, really under-thought-about category is crypto. There are a lot of people thinking about crypto, but it tends to be the crypto-converted who are thinking about it. Crypto as a general-purpose technology for tokenization, not just of financial assets but of other things, data, manufacturing, etcetera,

(40:41):
I think is poised, with the recent steps the US has taken that in a lot of ways legitimize it, to have a much more profound impact on businesses and everything we do than the non-crypto community is aware of right now.
What is your trajectory for the next steps in your amazing

(41:02):
career? I'm going to keep doing all the fun things I'm doing now, because this is what I love to do. This arc of technology has been my career and my life, and I want to stay close to what's happening. Beyond that, more time with my family, which I love, and I'm having a great time with that. And hopefully getting a little bit better at guitar, which has been my retirement

(41:23):
project since I left Accenture. I'm trying to gear up and spend more time on guitar, but all this exciting technology stuff is taking time away. I did notice the guitar in the background. Yeah, I'm getting pitched on a lot of startups that can help me learn music faster. That's where I'm hopeful. And with that, we're out of time. A huge thank you to Paul

(41:43):
Daugherty. He is currently AI Advisory Chair at private equity firm TPG, he's an advisor to Accenture, and he's doing a lot of other things. Paul, thank you so much for taking the time to be with us. It's always fun, Michael. Thank you, and thanks to your audience. This is an amazing audience; that's what makes this so fun, in addition to all the work you put into it.

(42:04):
And I love the questions. So thanks to all who are tuning in. Yes, for the folks who are watching, there's so much information packed in today. By Monday the edited video, we'll do some light editing, will be up on our site, and then next week we'll create a summary. Watch this video if you're interested in any of

(42:25):
the topics we've discussed; the replay will be there. You can take your time and unpack it. And right now, subscribe to the CXOTalk newsletter: go to cxotalk.com. We really have amazing shows coming up. We'll see you next time. Thanks so much, everybody. Thanks to Paul Daugherty, and have a great day.