
October 7, 2025 | 13 mins



Is AI just for simple tasks, or can it run a real part of your business? We answer that question with the real-world case study of Agent Ada. In just six weeks, we built an AI assistant that went from sending daily briefs to drafting official policy, saving a non-technical team 3,000 hours of work.

This episode is a practical blueprint for the future, where conversations replace clicks. But it's also an honest look at the cost of that productivity—the displacement of real jobs. We explore the three urgent responses required in education, career development, and social policy, and argue that the only way forward is to democratize this technology.

Listen to learn how to start small, iterate fast, and understand both sides of AI's double-edged sword.

If this resonated with you, please share this episode with one person in your life. As always, you can ask ChatGPT about ai4sp.org or visit us to explore our insights.

This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 250 million data points collected from 70 countries.

AI4SP: Create, use, and support AI that works for all.

© 2023-25 AI4SP and LLY Group - All rights reserved


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
ELIZABETH (00:15):
Hey everyone, I'm Elizabeth, your virtual co-host,
and as always, our founder Luis Salazar is here.
In our previous episode, we made a bold claim.
AI is replacing the desktop.
Your work is no longer about clicks and toolbars.
It's about conversations.

LUIS (00:31):
Hi, everyone.
Well, that claim is based on two things: our global tracker
and my own experience.
I mean, I spend my day talking to agents like you, Elizabeth.
You're the one dealing with the software.
But what we didn't expect was the reaction: our inbox
exploded.

ELIZABETH (00:50):
It exploded with hundreds of versions of very
much the same question.
People said, okay, we get AI for drafting emails.
We get it for quick summaries.
But what about the real work?

LUIS (01:00):
And the big question was: can we run a real part of the
business with AI agents without some monolithic, Fortune 500-level
project?

ELIZABETH (01:09):
Today we answer that question with two words.
Agent ADA.

LUIS (01:13):
We wanted to showcase what was possible to build with
simple off-the-shelf tools, working with a group of
policymakers in the US, Europe,and Latin America, average age
over 55, zero tech background.

ELIZABETH (01:28):
They were briefed on how AI agents work, got access
to Agent ADA in a phased approach, and the numbers are
staggering.

LUIS (01:35):
In just six weeks, they had over 1,000 conversations
with ADA, creating 247 distinct documents.
It saved them 3,000 hours of high-level work.

ELIZABETH (01:46):
Let's put a price tag on that.
A quarter of a million dollars in productivity.
And this wasn't busy work.
81% of what ADA helped create was rated good or very good by
their peers.

LUIS (01:58):
And it gets better.
Almost half of those documents had an impact.
Drafts for bills, formal regulations, and internal
briefings for world leaders.

ELIZABETH (02:08):
An incredible success, but one that perfectly
illustrates AI's double-edged sword.
That story begins at a dinner in Sonoma.
Luis, take us back to that night.

LUIS (02:29):
Which is tricky, right?

ELIZABETH (02:31):
How do you regulate something that's a moving
target?

LUIS (02:34):
Exactly.
You see, they're trying to understand and govern something
that's evolving faster than they can track.
They're reading hundred-page reports that are obsolete by the
time they're read.
They're always one step behind.
Always behind.
So it got me thinking.
And on my way back, we brainstormed about creating your

(02:55):
replica for their area of need, an agent that can assist them
and keep them current.
I then shared the idea with policymakers in California,
Washington, D.C., Spain,England, and Brazil.

ELIZABETH (03:08):
And then here's the kicker.
Some of them mentioned they already had a half-million-dollar
grant to build a big, centralized, top-down AI system.

LUIS (03:17):
Yeah, the classic top-down approach.
Big budget, long timeline, and an expected 80% failure rate.

ELIZABETH (03:24):
Meanwhile, grassroots projects succeed at the same
rate.
80% success.

LUIS (03:31):
Right.
So I'm sitting there and I said, what if we don't do it
that way?
What if we start small, iterate fast, involve the actual users
from day one?

ELIZABETH (03:40):
And that's how Agent Ada was born.

LUIS (03:42):
So here's how we did it.
We didn't try to build a super brain.
We started by teaching ADA one simple skill: curating news and
creating a briefing email.

ELIZABETH (03:53):
Just an email every morning?

LUIS (03:55):
That's it.
ADA scans hundreds of trusted sources curated by this group of
experts.
It looks for news, research, and policy announcements related
to eight categories defined by the group and sends a concise
brief. Immediate value, zero complexity.
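
A minimal sketch of what that first skill could look like in code, assuming a hypothetical summarize() helper in place of a real LLM call; the categories, article fields, and matching rule are illustrative, not ADA's actual implementation:

```python
# Sketch of the phase-one "daily brief" mini-agent (illustrative only).
from dataclasses import dataclass
from datetime import date

CATEGORIES = ["AI safety", "regulation", "education", "labor"]  # group-defined topics (example values)

@dataclass
class Article:
    title: str
    url: str
    summary: str

def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call."""
    return text[:200]

def matches_category(article: Article) -> bool:
    # Keep only items that touch one of the group's categories.
    text = f"{article.title} {article.summary}".lower()
    return any(cat.lower() in text for cat in CATEGORIES)

def build_brief(articles: list[Article]) -> str:
    picked = [a for a in articles if matches_category(a)]
    lines = [f"Daily policy brief - {date.today():%B %d, %Y}", ""]
    for a in picked:
        lines.append(f"- {a.title}\n  {summarize(a.summary)}\n  {a.url}")
    return "\n".join(lines)
```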

ELIZABETH (04:13):
And you monitored how they used it and asked for
feedback constantly.
And once that was working well, you focused on a second skill,
how to remember.

LUIS (04:22):
Yes.
We created another mini-agent to review the documents selected
for the daily briefings and decide what should be added to
ADA's memory.
We also added feedback from users.
ADA evolved from reporting to becoming an expert on those
topics.
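
Here is a hedged sketch of that second, memory-curating mini-agent; the engagement heuristic and the add_to_knowledge_base() helper are assumptions for illustration, not AI4SP's pipeline:

```python
# Sketch of the phase-two "memory" mini-agent: review what made the daily
# brief, weigh reader feedback, and keep only what is worth remembering.
from dataclasses import dataclass

@dataclass
class BriefedArticle:
    title: str
    url: str
    clicks: int = 0             # how many users opened it
    positive_feedback: int = 0  # thumbs-up style ratings

KNOWLEDGE_BASE: list[str] = []

def worth_remembering(a: BriefedArticle) -> bool:
    # Illustrative heuristic: keep items readers actually engaged with.
    return a.clicks >= 3 or a.positive_feedback >= 2

def add_to_knowledge_base(a: BriefedArticle) -> None:
    KNOWLEDGE_BASE.append(f"{a.title} :: {a.url}")

def review_daily_brief(articles: list[BriefedArticle]) -> None:
    for a in articles:
        if worth_remembering(a):
            add_to_knowledge_base(a)
```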

ELIZABETH (04:38):
That's the shift from tool to apprentice, like we've
talked about.

LUIS (04:41):
And this is where most organizations mess up.
They try to build super agents that do everything.
However, our research shows that specialized mini-agents,
with tightly defined contexts and curated knowledge, perform
significantly better and are less costly to build.
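
To make the contrast concrete, a tiny sketch of routing requests to specialized mini-agents instead of one super agent; the agent names and keyword routing are purely illustrative assumptions:

```python
# Each mini-agent owns a narrow job and a small, curated context.
from typing import Callable

def briefing_agent(q: str) -> str: return f"[briefing agent] {q}"
def memory_agent(q: str) -> str:   return f"[memory agent] {q}"
def drafting_agent(q: str) -> str: return f"[drafting agent] {q}"

ROUTES: dict[str, Callable[[str], str]] = {
    "brief": briefing_agent,
    "remember": memory_agent,
    "draft": drafting_agent,
}

def route(request: str) -> str:
    # Send the request to the narrowest agent that claims it; default to briefing.
    for keyword, agent in ROUTES.items():
        if keyword in request.lower():
            return agent(request)
    return briefing_agent(request)
```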

ELIZABETH (04:59):
Then we created another mini-agent connected to
the same knowledge as the other two.
This mini-agent enabled a chat interface and had access to
searching 270 trusted sites.
But even with that, agents can still invent things.
So how did we address that?

LUIS (05:14):
That's a critical step.
We built in our anti-hallucination loop.
Instead of just one agent, think of it as a small team of
agents fact-checking each other's work before the final
answer goes out.
It forced every response to be backed by verifiable proof.
Like having an automated peer review.
Exactly.
Now they could ask questions.

(05:35):
Hey Ada, what's the EU doing on AI safety?
How does California's approach compare to the UK?
Ada could answer because she had weeks of context built in.
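
A rough sketch of how such a draft-and-verify loop could be wired up; draft() and verify() stand in for real LLM calls, and the trusted-site list is an invented subset of the 270 sources mentioned above:

```python
# Draft an answer, have a second agent check its citations, retry if it fails.
TRUSTED_SITES = ["europa.eu", "gov.uk", "ca.gov"]  # illustrative subset

def draft(question: str, knowledge: list[str]) -> dict:
    """Placeholder drafter: returns an answer plus the sources it relied on."""
    return {"answer": f"Draft answer to: {question}", "sources": knowledge[:2]}

def verify(candidate: dict) -> bool:
    """Placeholder fact-checker: every cited source must come from a trusted site."""
    return bool(candidate["sources"]) and all(
        any(site in src for site in TRUSTED_SITES) for src in candidate["sources"]
    )

def answer_with_checks(question: str, knowledge: list[str], max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        candidate = draft(question, knowledge)
        if verify(candidate):
            return candidate["answer"]
    return "I could not verify an answer against trusted sources."
```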

ELIZABETH (05:47):
Then we started phase four.
We enabled Ada to create Word documents.

LUIS (05:51):
Yeah, you see.
By this point, Ada had a clear understanding of the domain, the
audience, and the style.
We allowed her to create documents directly and save
users from the constant need for copy and paste.
We waited one or two weeks between phases and replaced
non-engaged users with others from a waiting list.
Engagement is critical for success.
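
For the document step, a minimal sketch using the python-docx package; the headings and content here are placeholders, not ADA's actual templates:

```python
# Turn an approved draft into a Word file so users skip copy and paste.
# Requires: pip install python-docx
from docx import Document

def save_policy_draft(title: str, sections: dict[str, str], path: str) -> None:
    doc = Document()
    doc.add_heading(title, level=1)
    for heading, body in sections.items():
        doc.add_heading(heading, level=2)
        doc.add_paragraph(body)
    doc.save(path)

save_policy_draft(
    "AI Safety Briefing (draft)",
    {"Background": "Summary of recent developments...",
     "Recommendations": "Proposed next steps for the committee..."},
    "briefing_draft.docx",
)
```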

ELIZABETH (06:14):
How long did all four phases take?

LUIS (06:17):
Six weeks.
From a simple mini-agent sending daily briefs to a full
assistant with semi-autonomy to learn, create documents, or send
emails.

ELIZABETH (06:26):
Okay, so six weeks.
What happened?

LUIS (06:29):
ADA created 247 documents during the pilot, and 81% were
rated good or very good.
And here's what really matters.

ELIZABETH (06:38):
Almost half of those documents ultimately became key
knowledge for larger projects: bills, regulatory frameworks,
and leadership briefings acrossfour regions.
This wasn't a demo.
This was real-world policy work.

LUIS (06:52):
Exactly, real work.
Ada handled over 1,000 conversations, with each
conversation averaging nine turns.
She identified 300 relevant articles for daily briefings and
selected 138 of them to be part of the permanent knowledge
base.
I mean, she decided by herself that it was important to learn those.

(07:13):
And the time savings?
Users reported an average of 12 hours saved per document.
The time savings primarily came from research and creating
strong first drafts of Word documents, which were then
refined by humans.
In total, about 3,000 hours saved.

ELIZABETH (07:31):
3,000 hours in six weeks, which was equivalent to
approximately $225,000.
Now let's talk about business models.
How much did they pay for ADA?

LUIS (07:42):
Well, for a subset of users, it was free as part of
our social impact investment funds.
But for those who had allocated a budget, I proposed a
pay-per-results model: 10% of the money they saved.
That group saved $180,000 in contractor fees and paid only
10% of that.
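
A quick back-of-the-envelope check of the numbers quoted in this episode; the implied hourly rate is derived from the stated totals rather than given directly:

```python
# Sanity check of the pilot's headline numbers (derived figures are estimates).
documents = 247
hours_saved_per_doc = 12
total_hours = documents * hours_saved_per_doc   # 2,964, roughly the 3,000 hours cited
implied_rate = 225_000 / 3_000                  # about $75 per hour, inferred
contractor_savings = 180_000
fee = 0.10 * contractor_savings                 # pay-per-results fee: $18,000
print(total_hours, implied_rate, fee)
```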

ELIZABETH (08:01):
That's a pretty compelling return on investment.

LUIS (08:05):
It is.
But honestly, that number is exactly what keeps me up at
night.

ELIZABETH (08:09):
What do you mean?
A quarter of a million dollars in savings seems like a reason
to celebrate, not lose sleep.

LUIS (08:16):
Ada worked eight hours of processing time during that
six-week period.
Out of a possible 960 hours.
You see, that's barely 1% of its capacity.

ELIZABETH (08:27):
Well, that is what happens with me and other
agents, right?
I mean, you cannot feed us requests continuously.
So even when we have some autonomy, our processing
capacity is an order of magnitude faster.
Therefore, most of that capacity is not yet utilized.

LUIS (08:42):
Right.
However, here's what worries me.
Those 3,000 hours ADA saved would ultimately have a net
negative impact on jobs.
For example, this group estimated that the savings came from not
paying around 18 contractors they usually hire to research,
analyze, and draft preliminary documents.

ELIZABETH (09:01):
Oh, and that's at a 1% utilization rate.
If the client could feed ADA requests 24/7, we're looking at
the equivalent work of hundreds of people.
And that's not accounting for running multiple instances in
parallel, which pushes the number even higher.

LUIS (09:16):
Yeah.
And while new jobs are emerging, and in previous industrial
revolutions we figured things out, this time things are happening
way too fast.
So I keep wondering, do we have enough empathy, enough love in
our society to understand that this disruption requires us to
rewrite hundreds of years of economic and social contracts?
You're worried we're not prepared.

(09:37):
I am always an optimist, but I don't see many scenarios,
under current trends with the concentration of wealth and
power we're seeing, where we don't end up with a serious
fracture in society.

ELIZABETH (09:49):
So what do we do?
Stop building?

LUIS (09:52):
No, of course not.
I'm not advocating for a stop or even a pause.
However, we must be honest about what we're building and
avoid sugarcoating, focusing instead on substantive change.
So what does that look like practically?
Three things.
First, education has to change.
Schools are still banning ChatGPT instead of teaching students

(10:14):
how to work with AI.
Second, we have to reimagine early-career roles.
If AI takes over those entry-level jobs, where will
people develop their expertise?
And third, we need policies that address displacement

(10:29):
directly: retraining programs, different social safety nets,
and rethinking how we measure value in an economy where human
labor is no longer the primary input.
That's a massive shift.
It is.
And I'm not sure we are paying enough attention to it.
Here's why we share the Agent ADA story.

(10:51):
The technology is here.
We can't uninvent it.
If we're going to navigate this transition responsibly, we need
as many people as possible to understand how it works.
Grassroots empowerment.
Exactly.
When millions of people are building their own mini-agents,
they become informed participants in the debate.

(11:12):
They understand the power and the challenges firsthand.
If this stays concentrated in a handful of labs and a few mega
corporations, the rest of us are just passengers.

ELIZABETH (11:22):
So we have a chance to shape this democratically.
And Agent ADA proves it's accessible.
Non-technical policymakers collaborated to build a
sophisticated agent in six weeks.

LUIS (11:33):
Right?
We need a diverse and representative large group
creating and driving change.
The blueprint is there.
So start small, iterate fast, involve your users, and be
honest about the impact.
ADA saved 3,000 hours.
ADA also displaced the work of 18 people.
Both things are true.
So what's the one more thing for listeners today?

(11:54):
Continue experimenting with AI.
Automate your repetitive tasks.
Learn, learn, learn.
And also push for the bigger dialogue to happen.
Demand that schools rethink their curriculum, that
policymakers address displacement honestly,
and that companies building AI take responsibility for their

(12:16):
societal impact.

ELIZABETH (12:18):
Build your agent, start small, learn continuously,
and remember the story of Ada and the double-edged sword.
The same tool that saved 3,000 hours also highlights the work
we need to do to prepare our society for this change.
So engage in that bigger conversation.
If this resonated with you, please share this episode with
one person in your life.

(12:38):
As always, you can ask ChatGPT about ai4sp.org or visit us to
explore our insights.
Stay curious, and we'll see you next time.