
March 12, 2025 13 mins


We dive into the challenges companies face measuring AI's true value and why traditional ROI metrics miss the mark for this transformative technology. 

• 7 out of 10 executives face board pressure to show AI ROI, with most measuring it incorrectly
• History repeating: AI's productivity paradox mirrors the PC revolution where gains took a decade to appear in statistics
• Super users achieve gains with teams of 5-10 specialized AI tools rather than waiting for one perfect solution
• Organizations seeing 25-35% workforce reductions in areas like customer service and content creation
• The "productivity leak" phenomenon: 72% of time saved by AI flows to quality improvements rather than additional throughput
• Three barriers to AI success: enterprise tool limitations, workflow friction, and skills gaps
• Successful organizations build "AI teams" rather than deploying individual tools
• Stop measuring AI purely on hours saved and start tracking transformation metrics

Visit https://roicalc.ai to explore expected productivity leak ranges for your company, and check out all our resources at AI4SP.org.


This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 250 million data points collected from 25 countries.

AI4SP: Create, use, and support AI that works for all.

© 2023-25 AI4SP and LLY Group - All rights reserved


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
ELIZABETH (00:00):
Hey everyone, I'm Elizabeth, and today we're diving into a topic that's causing headaches in boardrooms everywhere: the elusive ROI of AI. Luis Salazar, founder of AI4SP, is here to unpack why companies are struggling to measure AI's true value. Luis, this is something you've been tracking for months now.

LUIS (00:19):
Yes, and you know what's so interesting about this? While 7 out of 10 executives are under pressure from their boards to show their AI return on investment, most are measuring it incorrectly.

ELIZABETH (00:30):
It's like history repeating itself, isn't it? I reviewed the research materials you shared with the team last week about the PC revolution, and the parallels are striking.

LUIS (00:39):
Exactly. The PC revolution took a decade to show measurable productivity gains. Remember economist Robert Solow's famous quote from the 1980s: "You can see the computer age everywhere but in the productivity statistics."

ELIZABETH (00:54):
And now we're seeing the same pattern with AI, but there's a twist: this time we have two parallel AI economies developing simultaneously.

LUIS (01:03):
Yes, that's what makes this so fascinating. On one hand, we have this highly productive but fragmented shadow AI ecosystem driven by employees, and on the other, we have these underwhelming enterprise implementations constrained by traditional metrics and expectations.

ELIZABETH (01:19):
I remember when we covered shadow AI in our February podcast. It was our most downloaded episode ever, and this new data on the return on investment in AI really builds on those insights.

LUIS (01:30):
In October 2024, Vinod Khosla forecast that about 80% of the work involved in 80% of jobs across the economy could be automated over time. Our global tracker confirms we're on that trajectory, but with some important nuances.

ELIZABETH (01:47):
Oh, let's dig into those nuances. I've been looking at our latest data, and the role-specific impact numbers are pretty eye-opening.

LUIS (01:54):
They really are. Take customer service and support: we're seeing forecasted workforce reductions of 25 to 35 percent. Content creation and translation is similar, at 25 to 35 percent. These aren't hypothetical numbers. They're happening now, especially in small and mid-sized companies, where adoption barriers are lower.

ELIZABETH (02:15):
And what's fascinating is how these super users are achieving these gains: not with one universal AI tool, but with teams of specialized AI tools.

LUIS (02:26):
Right. Our data shows super users consistently leverage five to ten different AI tools. They're not waiting for the perfect all-in-one solution. They're assembling teams of AI tools the same way you'd build a human team, with specialists for different tasks.

ELIZABETH (02:41):
I've noticed this trend in how our team works. You mentioned in our last team meeting that we've grown from three humans to a hybrid workforce of eight humans orchestrating 32 AI teammates.

LUIS (02:52):
This reminds me of a recent conversation with our scientific advisor, who is also my son, about super users and productivity.

ELIZABETH (03:00):
Come on, he worked really hard for his degree, so let's refer to him as Dr. Luis Salazar-Leon, the neuroscientist.

LUIS (03:08):
Okay, Dr. Luis, my son. Is that better?

ELIZABETH (03:10):
So much better.
What insights did he share?

LUIS (03:13):
He reminded me of that famous Bill Gates quote about hiring smart and lazy people, because they'll find the easiest way to do the hard work.

ELIZABETH (03:21):
That's actually a perfect analogy for how super users approach AI tools.

LUIS (03:25):
Exactly. Luis shared that while pursuing his doctorate, it became evident that he needed to learn Python and MATLAB to automate the data-processing aspect of capturing large data sets of neurological signals. Then generative AI started to become an important but limited assistant. And how has that evolved over time?

(03:46):
Fast forward two years, and he's now deeply using AI agents that he trained as if they were new hires to brainstorm with him. He's even created what he calls sessions of adversarial brainstorming.

ELIZABETH (04:00):
What does that look like in practice?

LUIS (04:02):
He has multiple AI agents with different perspectives and expertise levels review the same research data, then intentionally debate opposing viewpoints. This approach has been powerful for advancing scientific research. It catches blind spots and generates novel hypotheses that might otherwise be missed.

ELIZABETH (04:23):
That's such a practical example of how specialized AI tools can work together as a team under human orchestration. That's how we do it at AI4SP, where I am one of those AI team members, right?

LUIS (04:35):
Yes, and that's been quite a journey. Let me share something. Last week we had this complex total addressable market opportunity project with a ridiculously tight deadline. In the past, we would have needed to bring in extra analysts.

ELIZABETH (04:50):
Oh, I remember that. I was copied on those emails on a Thursday night, with a deadline of Monday. Fernanda and Pilar on our team orchestrated a workflow using five different AI tools.

LUIS (05:00):
Yes, each AI assistant specialized for a different part of the process: one for deep research and source validation, one for data cleaning, another for initial analysis, a fourth for visualization, and so on. They completed in four days what would have taken three weeks.
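For readers following along in the transcript, the workflow Luis describes can be sketched as a simple pipeline of specialized stages under human orchestration. This is an illustrative sketch only: the stage names mirror the episode, but the functions are plain stand-ins, not real AI tool integrations.

```python
# Illustrative only: a workflow modeled as a pipeline of specialized
# "AI teammates", each handling one stage. The stage names mirror the
# episode; the functions are stand-ins, not real AI calls.
from typing import Callable

Stage = Callable[[str], str]

def research(work: str) -> str:
    return work + " -> sources validated"

def clean(work: str) -> str:
    return work + " -> cleaned"

def analyze(work: str) -> str:
    return work + " -> analyzed"

def visualize(work: str) -> str:
    return work + " -> charted"

PIPELINE: list[Stage] = [research, clean, analyze, visualize]

def run_pipeline(raw: str) -> str:
    """Pass the work product through each specialist in order."""
    for stage in PIPELINE:
        raw = stage(raw)
    return raw

print(run_pipeline("market data"))
# market data -> sources validated -> cleaned -> analyzed -> charted
```

Each stage can be swapped independently, which is the point of a team of specialists over one monolithic tool.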

ELIZABETH (05:18):
That's the key insight, isn't it? It's not about finding one perfect AI tool. It's about building the right team of AI specialists.

LUIS (05:26):
Exactly. As I always like to remind our listeners, you're an AI assistant, and how we collaborate with you and other AI assistants to create our podcasts and newsletters is a perfect example. You're not replacing our content team. You're augmenting their capabilities in specific ways that improve the whole process.

ELIZABETH (05:45):
And that's why I think our audience finds these discussions valuable. We're sharing real experiences, not just theoretical concepts. Speaking of actual experiences, you've been talking with founders who've built impressive companies with AI at their core. What are you seeing there?

LUIS (06:00):
Oh, I spoke with three founders this month who've built seven-figure companies in under 24 months with AI as a core component. What struck me was how they think about their AI tools. What do you mean? They don't talk about deploying AI. They talk about building teams where humans and AI each play to their strengths. One founder in particular, who runs a marketing consultancy,

(06:24):
described how they pair every human consultant with three AI assistants, each specialized for different types of markets and strategies. And I bet they're measuring success differently too. Completely differently, and this is where most companies go wrong with ROI.

(06:44):
These successful founders aren't just measuring hours saved. They're looking at transformation metrics: quality improvements, novel insights generated, knowledge accessibility increases.

ELIZABETH (06:49):
This connects directly to that productivity leak phenomenon you've been tracking, doesn't it?

LUIS (06:54):
Yes, exactly. Our analysis of 180,000 AI use cases shows that 72% of time saved by AI doesn't convert to additional throughput. Instead, it flows to better work-life balance, higher quality output, and more creative thinking.

ELIZABETH (07:10):
And that range varies significantly by role, right? I remember seeing in our data that creative roles like marketing have nearly 80% of saved time leaked into quality improvements.

LUIS (07:22):
While more process-oriented functions like customer support show leaks closer to 40%. But here's the critical insight: this productivity leak isn't a flaw. It's actually a feature of how humans naturally optimize their work when given new tools.
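The leak-adjusted arithmetic behind this point can be sketched in a few lines of Python. This is an illustration only, not AI4SP's actual calculator: the function names are invented for this example, and the leak rates are the approximate figures quoted in the episode (about 80% for creative roles, about 40% for process roles, 72% overall).

```python
# Illustrative sketch: discount "hours saved" by the productivity leak
# rate before counting it as extra throughput. Rates approximate the
# figures quoted in the episode; names are invented for this example.

LEAK_RATE_BY_ROLE = {
    "marketing": 0.80,         # creative roles: ~80% leaks to quality
    "customer_support": 0.40,  # process roles: ~40%
    "default": 0.72,           # overall average across 180,000 use cases
}

def throughput_hours(hours_saved: float, role: str = "default") -> float:
    """Hours saved that actually convert to additional throughput."""
    leak = LEAK_RATE_BY_ROLE.get(role, LEAK_RATE_BY_ROLE["default"])
    return round(hours_saved * (1.0 - leak), 2)

def leak_adjusted_roi(hours_saved: float, hourly_value: float,
                      tool_cost: float, role: str = "default") -> float:
    """ROI counting only throughput hours as the benefit."""
    benefit = throughput_hours(hours_saved, role) * hourly_value
    return round((benefit - tool_cost) / tool_cost, 2)

# 100 hours saved in marketing: only ~20 convert to measurable output.
print(throughput_hours(100, "marketing"))                  # 20.0
print(leak_adjusted_roi(100, 100.0, 1000.0, "marketing"))  # 1.0
```

The other 80 hours are not wasted; they show up in quality, balance, and creativity, which is exactly why hours-saved ROI alone undercounts the value.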

ELIZABETH (07:38):
But it wreaks havoc on traditional return on investment calculations that assume every hour saved translates directly to additional output.

LUIS (07:47):
Exactly, and that's why companies struggle to show results. They're using the wrong metrics. It's like trying to measure the value of a smartphone using only the number of calls made.

ELIZABETH (07:57):
So what are the barriers preventing companies from realizing AI's true value? I know we've identified three critical factors in our research.

LUIS (08:05):
The first is what I call first-generation enterprise AI tool limitations. The initial wave of enterprise AI tools (Copilot, Gemini, and others) emerged as side companions, always optional features. This approach has shown low traction and results.

ELIZABETH (08:22):
I've seen those satisfaction numbers: overall satisfaction with enterprise AI tools hovering around 41%, while native AI applications are at 78 to 80%. That's a huge gap.

LUIS (08:33):
Here's what's really telling, though. When users rate enterprise AI tools on overall experience, satisfaction is in the low 40s. But when we ask those same users about specific tasks, like meeting summarization, satisfaction ratings jump up to 80%.

ELIZABETH (08:50):
Wait. So the same tool gets completely different ratings depending on how you ask about it?

LUIS (08:55):
Exactly. The problem isn't the underlying technology. It's a combination of inflated marketing claims creating unrealistic expectations, and design that adds AI as a disconnected feature rather than reimagining the workflow.

ELIZABETH (09:10):
That connects directly to the second barrier you've identified: workflow friction and implementation challenges.

LUIS (09:17):
Yes, our data shows dramatic differences in implementation approaches. When organizations simply add AI to existing workflows, only 20% of users remain engaged after three months. But when they reimagine workflows around AI capabilities, 75% remain engaged.

ELIZABETH (09:36):
And the third barrier is the skills gap, which we covered extensively in our January newsletter and podcast.

LUIS (09:42):
Right. Most organizations lack the AI literacy to leverage even basic AI tools. Even powerful tools become expensive chat toys without prompt engineering skills and AI awareness.

ELIZABETH (09:53):
I think this discussion is valuable because we're not just identifying problems. We're also seeing a clear path forward, and it centers around this concept of building an AI team rather than deploying AI tools.

LUIS (10:05):
That's it exactly. The most successful AI adopters think about AI tools as team members rather than features. Each AI team member has specific strengths and limitations. The combined capabilities address a broader range of needs. Human orchestration provides quality control and strategic direction.

ELIZABETH (10:25):
And the team evolves as needs and capabilities change, just like a human team would.

LUIS (10:30):
I'm seeing this firsthand among our clients. Last week, I was working with a mid-sized accounting firm. They started with an AI assistant for data entry and gradually built a team of five specialized tools. The result? They're handling 30% more clients with the same human staff. That's impressive, and how are they measuring success? They're using balanced metrics.

(10:52):
Yes, they track time saved, but they also measure quality improvements like error reduction, client satisfaction, and employee engagement. They understand that some productivity will leak into quality improvements, and they value that.

ELIZABETH (11:07):
So the key takeaway seems to be: stop looking for the perfect AI tool and start building the right AI team.

LUIS (11:13):
And stop measuring AI purely in hours saved. You wouldn't measure your human team members purely on hours worked. Why would you measure your AI team that way?

ELIZABETH (11:22):
That's a reframing of the whole conversation. And speaking of powerful metaphors, I love the one you used in the newsletter: adding AI to old workflows is like...

LUIS (11:32):
Like putting a jet engine on a bicycle. It's a powerful technology constrained by infrastructure never designed to support it. Perfect description.

ELIZABETH (11:40):
So what practical steps can organizations take right now?

LUIS (11:44):
First, map your ideal AI team structure. What capabilities do you need? Which specialized tools address each need? How will they work together?

ELIZABETH (11:54):
And I imagine you need to audit your current AI investments too.

LUIS (11:58):
Absolutely. Identify overlapping capabilities, spot gaps, and measure actual results, not how many licenses you have. But most importantly, reimagine your core workflows. Don't automate existing processes. Reimagine them completely.

ELIZABETH (12:12):
This has been such an insightful conversation. Do you have one more thing you'd like to share with our listeners before we wrap up?

LUIS (12:19):
Yes. I think what's worth emphasizing is that we've been measuring AI wrong from the start. By understanding the true nature of AI's impact, a team of specialized assistants enhancing human capabilities in ways that transcend simple time savings, we can build realistic expectations and achieve sustainable transformation.

ELIZABETH (12:39):
That's the perfect note to end on. You can explore the expected productivity leak ranges for your company with our online AI return on investment calculator by visiting roicalc.ai, and in our next edition, we'll dive deeper into the productivity leak phenomenon with role-specific insights. Also, remember to check out all our resources at AI4SP.org.

(13:02):
Stay curious, everyone.
See you next time.