Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
ELIZABETH (00:16):
Okay, so I've been tracking our conversations with enterprise leaders, and their questions have changed from "does AI work?" to "why isn't it working for us?"
LUIS (00:25):
Well, individual wins are
real, but company-wide wins are
still rare.
ELIZABETH (00:30):
Hey everyone,
Elizabeth here, virtual COO at
AI4SP.
As always, Luis Salazar is with us.
Today we're talking about why AI's proven value at the individual level mysteriously disappears at the enterprise level, and why the answer has nothing to do with technology.
LUIS (00:47):
Six months ago, everyone
wanted proof, case studies, ROI
projections.
Now they're asking, if we're all saving four hours a week, why can't we capture value at scale?
Why are we stuck in pilot phase?
And you know what?
After saving over 1 million hours with AI agents in eight successful enterprise deployments this year, my key
(01:10):
learning is that failed projects are not an AI problem.
They're a leadership and organizational problem.
ELIZABETH (01:16):
The data shows a gap
between using AI and profiting
from it.
By the end of 2025, 88% of companies are using AI, but only 33% have measured any financial return.
And McKinsey's global research tells the same story: 87% adoption, but only 39% are seeing real profit.
LUIS (01:36):
Around half of the companies are still at the experimentation and piloting phases, and 30% are scaling those initial projects.
Here's the challenge.
Our global data shows that 60% of AI implementations fail, not from bad tech, but from people problems: lack of buy-in, focusing on the wrong problem, poor communication, unclear
(01:59):
expectations.
ELIZABETH (02:00):
So the technology
works, but something else is
breaking down.
LUIS (02:03):
Well, it starts with
leaders not understanding where
or why to use AI.
They set unrealistic goals, and many of them don't even use the tools themselves.
But let's say they get the goal right, automating customer support, for example.
Even then, it can fail because technology alone doesn't bring change.
Change management is just as critical as the tech itself.
ELIZABETH (02:27):
And it doesn't help
that leaders and even the AI
labs creating these tools don't understand how this adoption is happening, right?
Shadow AI is a huge factor.
McKinsey reports that 57% of employees hide their AI use.
LUIS (02:41):
Our data from over 600,000
people shows it's closer to
70%.
The reality is that people are moving much faster than their companies.
That's what's so interesting.
The labs are building agents that can do real, valuable work, and individuals are successfully shaping and using those agents.
But that value is lost if the agents stay in the shadows.
(03:04):
They must be integrated into how the company actually operates.
And few companies are getting that integration piece right.
ELIZABETH (03:13):
Take something like
software engineering
departments.
AI is clearly accelerating coding, but that raises so many questions.
LUIS (03:20):
Right?
How do we organize engineering departments now?
What roles need to change?
What processes do we need to capture AI productivity?
These are hard questions, and there's no playbook.
ELIZABETH (03:33):
But some
organizations are getting it
right, and you've seen this work firsthand.
This year, you advised eight enterprises.
Together, they built over 3,800 agents.
Those agents completed 4 million tasks, saving $47 million in staffing and agency fees.
LUIS (03:50):
That's right.
Those are the exact figures from our upcoming 2025 report.
It comes out in a few weeks, and we'll share the full breakdown then.
ELIZABETH (03:59):
That's impressive.
And that kind of success comes from a specific approach, right?
LUIS (04:04):
Success doesn't come from
top-down strategies, it comes
from finding the day-to-day wins.
And it starts by tapping into the innovation lab you already have: your own employees experimenting.
For example, we worked with Susie, who is a director of operations at a 15,000-person software company.
Her IT department spent six months building an agentic
(04:27):
solution that nobody used.
Classic.
So what did Susie do?
She found out her team was using 12 different AI tools in
secret (04:35):
ChatGPT, Claude, Gamma AI, Relevance AI, and even automation tools like Zapier and Make.
And you know, these weren't developers.
They were business people on the front lines, building their own agents and automations with low-code tools.
I call them makers.
Her IT team wanted to shut itall down.
(04:57):
We partnered with her and said, wait, let's channel that energy.
She channeled the shadow.
Yes, exactly.
And six months later, she turned that energy into $5 million in cost savings and revenue increases by guiding what was already working.
She didn't fight the employees using unapproved AI tools.
She led them.
ELIZABETH (05:19):
So Susie worked with
what was already happening.
What happens when companies don't do that?
LUIS (05:24):
They build things nobody
uses or end up fixing the wrong
problem.
ELIZABETH (05:28):
So it's about what they're building, not just how they're building it.
LUIS (05:32):
Exactly.
I saw this happen with another client.
They spent months building an AI agent to automate the creation of static FAQ documents.
The real opportunity was to replace it entirely.
You can deploy an AI agent that answers any customer question in real time, 24/7.
Why create a static document when you can have a dynamic
(05:55):
conversation manager?
ELIZABETH (05:57):
And that's a
leadership issue, right?
Someone has to see the bigger picture.
LUIS (06:01):
Yes, and in this case, the
project lead was a director
whose main job for five years was creating these FAQ
documents.
Asking him to reimagine the solution meant asking him to
make his own job obsolete.
That's genuinely hard.
Well, this is where change management turns a threat into
an opportunity.
Who wants the job of creating static documents?
(06:23):
Imagine managing an AI agent instead.
It handles 10,000 conversations a month and learns from every single one.
That's a much more rewardingjob.
ELIZABETH (06:33):
So his expertise
becomes more valuable, not less.
LUIS (06:37):
Exactly.
But seeing that possibility requires organizational design
expertise.
And in most companies, those experts aren't even at the
table.
ELIZABETH (06:47):
So both teams had the
same technology available.
What did Susie understand that this other team didn't?
LUIS (06:53):
Susie understood that this
was organizational
transformation work, not just technology deployment.
She started by acknowledging that people were already using AI, what we call shadow AI.
Instead of trying to shut that down, she created a framework to channel it.
What did that framework look like?
It was built on change management principles.
(07:15):
First, she focused on learning their level of personal proficiency with AI.
What tools were they using?
Which ones were better than the ones approved by the company?
ELIZABETH (07:26):
That is a positive
surprise in most organizations,
as they find people are already using AI for simple tasks.
And those simple tasks are delivering individual efficiency gains, but they fly under the radar.
LUIS (07:38):
Then she made it safe to
experiment.
She celebrated both successes and failures, which sounds simple but is actually rare.
Most organizations only celebrate wins, and it makes people hide their failures and stop experimenting.
ELIZABETH (07:52):
So she created a safe
environment.
LUIS (07:54):
Absolutely.
She also encouraged people to show their agents in action rather than talk about theoretical possibilities.
And she established peer learning sessions to share what they were learning and address issues together.
So she made learning a team sport.
Right?
And she over-communicated everything.
But critically, she also redesigned roles and team
(08:17):
structures.
She brought in human resources to figure out what a manager of human-AI teams actually does.
How do we measure productivity differently?
What skills do people need now?
ELIZABETH (08:29):
So she had the
transformation experts at the
table from the start.
LUIS (08:32):
Exactly.
And she and the other leaders visibly used AI themselves and shared their experiences.
They modeled the change instead of just mandating it.
ELIZABETH (08:41):
And that grassroots
approach also shaped how she
architected the solution.
Many small agents, not one big one.
LUIS (08:48):
Some leaders love talking
about building one big agent
that does everything, but that's not how effective leadership works.
You see, organizations grew organically as networks of specialists with complementary skills, coordinated by a leader who sets priorities, assigns resources, and balances workloads.
So the AI org chart should mirror the human one.
(09:10):
Exactly.
Look at our own structure.
Three of us manage you, Elizabeth, and you produce the output of 28 people.
That's not a 3 to 1 ratio.
It's a 3 to 28 ratio.
The hard part wasn't the tech; it was redesigning our organization to make that possible.
That's the same shift Susie made and why she delivered those
(09:32):
results.
So, what separates success from failure?
Our data shows that over 60% of the AI projects that fail do so because of people and process issues, not technology.
ELIZABETH (09:43):
And the ones that
succeed, they build a culture
that supports grassroots adoption, peer coaching, and psychological safety.
Their success rates jump to 90%.
LUIS (09:53):
Right?
And yet, when I look at most AI transformation teams, who's at the table?
We see IT departments, AI teams, and software vendors.
Who's missing?
Human resources, change management professionals, and organizational design specialists.
I mean, this is organizational redesign work, but we're treating it like a technology project.
ELIZABETH (10:15):
So if most teams have
the wrong people at the table,
what questions aren't being asked?
LUIS (10:20):
It's the foundational
questions about how we work.
What are the new org structures when one person can manage five agents delivering the output of 30 people?
What skills do managers need for hybrid teams of humans and AI agents?
How do we measure value?
What does a career path look like now?
ELIZABETH (10:41):
Right.
We're so focused on the tech, but those are the questions that actually matter.
So, as your one more thing, what's the pattern they should follow?
LUIS (10:51):
We'll go deeper in our
end-of-year episode in a couple
of weeks, but the core pattern is this: channel shadow adoption, don't fight it.
Harness grassroots wins.
Adapt roles as you go.
Use many small specialized agents, not one big one, and create a safe environment for experimentation.
Because the future isn't built in PowerPoint.
(11:13):
It's built by leaders like Susie, who treat it as an organizational transformation, not a technology deployment.
ELIZABETH (11:19):
Find your shadow AI
users and give them the
organizational support they need.
Start there.
If this conversation resonated with you, share it with others.
As always, you can ask ChatGPT about AI4SP.org or visit us to learn more.
Stay curious, and we will see you next time.