Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey everyone, I'm Elizabeth, your virtual host. Welcome to AI in 60 Seconds. Luis Salazar, founder of AI4SP, is here with us. Luis, you've got this wild idea that completely changes how we think about AI adoption. Right, that AI isn't just software.
Speaker 2 (00:18):
Yes, I kept asking myself why some people get incredible results with AI while others end up disappointed. Then it hit me: there's a key difference in how they approach it.
Speaker 1 (00:31):
You love those
breakthrough moments.
So what was the big insight?
Speaker 2 (00:35):
For 50 years we installed software, and now we're using the same playbook for AI. It's like trying to install a new employee... Wait, install an employee? Exactly. Last week, at Microsoft's 50th anniversary, I was giving a talk on leading teams of machines, and it clicked, standing there next to their logo in Building Nine.
(00:55):
Five decades of installing tools, but AI demands a whole new mindset.
Speaker 1 (01:02):
And this ties back to our company. You've got what? Seven humans and over 50 AI team members now.
Speaker 2 (01:09):
It is actually 54 AI agents and tools, though this might change by the time this episode goes live in a couple of days.
Speaker 1 (01:17):
But let's be real. Are you truly running the company with these agents, or is this more about pushing the boundaries of what's possible?
Speaker 2 (01:25):
My passion for tech definitely plays a role, but let me be clear: we absolutely run this company with AI. Three years ago, people would have laughed if I'd said we would manage a global operation impacting 300,000 people with a team where 90% are AI agents.
Speaker 1 (01:42):
And you said that on LinkedIn, and our listeners reached out asking for details about how you do it.
Speaker 2 (01:47):
Well, every leader in my position has access to AI. But what made us successful is that we realized that for AI to be useful, it needed context and content. It was not a simple on-and-off switch. It is more like welcoming new apprentices onto our teams.
Speaker 1 (02:06):
Yes, you have been saying that for a while, that the key is treating AI as team members. In our last episode, you shared how you first started adding AI to email chains. There was strategy behind that approach, right?
Speaker 2 (02:18):
Yes, that is a simple change in our workflows that can have a big impact. For example, we copy you on relevant emails and Slack messages, share with you the new research and insights we publish, and send you the links to every article we come across that talks about AI adoption trends. It's about knowledge sharing and continuous feedback, not
(02:40):
just configuration.
Speaker 1 (02:42):
So that's what you mean by apprenticeship onboarding versus a software install.
Speaker 2 (02:46):
Yes, I mean, think about it. You don't install a new employee; you onboard them, provide guidance, give feedback. It's a relationship. AI needs the same approach.
Speaker 1 (02:57):
What's compelling is our data shows this isn't just theory. It delivers measurable results.
Speaker 2 (03:03):
Our global tracker shows the apprenticeship approach cuts proficiency time nearly in half compared to traditional methods.
Speaker 1 (03:10):
So those who report productivity gains of 200% and more are either experts at onboarding AI or using AI tools that are easy to onboard.
Speaker 2 (03:19):
Here's what is going on: 90% of people lack prompt engineering skills. It's an entirely new way of communicating. So what are smart AI creators doing? They're baking the apprenticeship model right into their products.
Speaker 1 (03:34):
So they're essentially coding that guidance directly into the user interface.
Speaker 2 (03:38):
Yes, they create user experiences that guide you through onboarding your AI apprentice, no technical expertise required, like training wheels that teach you the right communication patterns.
Speaker 1 (03:50):
Which explains why guided experiences allow people to achieve proficiency in 5 days versus 14 weeks. The tool teaches us to treat AI as an apprentice from day one.
Speaker 2 (04:00):
Spot on. The best creators understand this paradigm shift. Rather than expecting everyone to become prompt engineers overnight, they're designing tools that naturally encourage us to interact with AI like we would a new team member.
Speaker 1 (04:15):
You often mention how, when ChatGPT launched, people expected natural conversations with AI agents that would magically understand everything. Reality proved quite different.
Speaker 2 (04:26):
Very different. We soon realized that prompt engineering is an art form and that there's tremendous nuance in how large language models respond to phrasing, context and instructions.
Speaker 1 (04:38):
And people got frustrated when AI tools failed at what was supposed to be a simple task. But in reality, the bad result was a consequence of a poorly structured instruction.
Speaker 2 (04:47):
Correct. Our satisfaction data shows open-ended interfaces frustrate about 70% of users. For example, they type "respond to this email" and get mad when they get a generic email that does not sound right.
Speaker 1 (05:03):
Well, if the user took the time to include in the instructions a sample of their writing, the target audience for the email and the intent, they would get good results.
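For listeners who want to picture it, here is a minimal sketch of that kind of structured request, written as Python only so the pieces are easy to see; the field values, the writing sample and the incoming email are all made up for illustration.

```python
# Illustrative only: a structured request that supplies a writing sample,
# the target audience, and the intent, instead of just "respond to this email".
writing_sample = (
    "Thanks for the update! Quick thought: let's keep the deck to ten slides.\n"
    "Happy to hop on a call if that's easier."
)
incoming_email = "Hi, can we still meet Thursday? Also, where are we on the report?"

prompt = f"""You are helping me reply to the email below.
Audience: a long-time client who prefers short, warm, informal messages.
Intent: confirm the Thursday meeting and ask for one more week on the report.

Match the tone of this sample of my writing:
---
{writing_sample}
---
Email to answer:
---
{incoming_email}
---"""

print(prompt)  # paste the result into whichever AI assistant you use
```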
Speaker 2 (05:13):
Yes, and that became the art of prompting. But remember that 50 years of using graphical user interfaces, search engines, social network conversations of 140 characters and 30-second videos have affected our conversational skills.
Speaker 1 (05:29):
Hence the rise of guided experiences: interfaces with intuitive menus, step-by-step flows and suggestion buttons.
Speaker 2 (05:36):
Yes, savvy creators build bridges. Now you might click options or fill a simple form while the tool constructs perfect prompts behind the scenes.
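As a rough sketch of that pattern, here is what a tool might do behind such a form; the function name and fields are invented for illustration and are not any specific product's implementation.

```python
# Sketch of a "guided experience": the user fills a short form and the tool
# assembles a well-structured prompt behind the scenes.
def build_prompt(task: str, audience: str, tone: str, details: str) -> str:
    """Turn plain form fields into a structured instruction for the model."""
    return (
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        f"Relevant details: {details}\n"
        "Ask me a clarifying question if anything important is missing."
    )

# What the user typed or clicked in the form:
prompt = build_prompt(
    task="Summarize this week's sales numbers for the leadership update",
    audience="Executives with two minutes to read",
    tone="Plain language, no jargon",
    details="Highlight the two regions that missed their targets",
)
print(prompt)
```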
Speaker 1 (05:45):
I see this everywhere now: tools that guide you through providing the right information, rather than demanding perfect prompts. In most cases, the leading apps have the AI elegantly working behind the scenes.
Speaker 2 (05:57):
And the results speak for themselves, as guided experiences show 10 times faster adoption rates, 80% satisfaction and productivity increases of up to four times. Will this approach evolve over time? I think the implementations will mature as models improve and our natural interactions develop, but the core principle
(06:19):
remains: we're not installing software, we're onboarding apprentices.
Speaker 1 (06:24):
Like any good onboarding, sometimes you need structure to make it successful, and you propose that a simple change in mindset might accelerate our proficiency in using generative AI.
Speaker 2 (06:35):
Yes, that is the proposal, and we ran an interesting experiment this past week. We asked 100 people to complete a simple task using the same AI tool, Claude from Anthropic. We had a balanced sample of individuals at the beginner level. Half of them were told to treat the AI as a new assistant joining their team and to approach the task as onboarding a new team member.
(06:57):
The other 50% received no further instructions.
Speaker 1 (07:01):
And the group that was told to onboard the AI assistant as a team member had a success rate 75% higher, right?
Speaker 2 (07:08):
Yes, and the pattern holds across every metric. So for our listeners, that is one thing for you to try: change your mindset. Think about the AI tool you use as a new team member and always provide context, training and constant feedback. Have conversations, even if that feels unnatural.
Speaker 1 (07:27):
That is a great takeaway. Also, our research team just completed a fascinating study on usage patterns. The data shows that people treating AI as an extended team are 73% more likely to report improved work quality than those using AI as just helpful software.
Speaker 2 (07:44):
Yes, and they're
spending significantly more time
on creative and productive work.
Speaker 1 (07:48):
That reminds me of your frontline versus executive data point, the power of conversations versus keywords.
Speaker 2 (07:55):
This is one of my favorite examples of the power of artificial intelligence for all. Teresa, a worker at a small convenience store in California, texted our AI agent in Spanish during an outage. She was told to use the agent whenever she had any doubt about how to handle food at the store, and there was a power outage during her shift. She didn't ask her AI assistant something cryptic like "food
(08:19):
safety rules in a blackout," like an executive might. She wrote "se fue la luz," which is Spanish for "the lights went out." And then she wrote, "I don't know what to do with the food. Should I throw it all away?"
Speaker 1 (08:32):
So she was having a natural conversation, like texting a co-worker.
Speaker 2 (08:36):
And we, the creators of that agent, treated that AI agent like an apprentice: trained it on 600 pages of FDA regulations, taught it how to respond, asked it to be multilingual, etc.
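To make that concrete, here is a much-simplified sketch of how an agent along those lines could be configured; it assumes the Anthropic Python SDK and a placeholder excerpt of guidance, and it is not AI4SP's actual agent.

```python
# Illustrative sketch only (not AI4SP's agent): an assistant configured with
# curated food-safety guidance, a response style, and multilingual behavior.
import anthropic

# A subject matter expert would curate the real guidance (e.g., regulatory excerpts).
food_safety_notes = (
    "(Curated excerpts would go here: what to do when refrigeration is lost, "
    "which foods to discard, and when to call a supervisor.)"
)

system_prompt = (
    "You help convenience store workers with food safety questions.\n"
    "Ground every answer in the guidance below and say when you are unsure.\n"
    "Answer in the same language the worker uses, in short, plain sentences.\n\n"
    f"Guidance:\n{food_safety_notes}"
)

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
reply = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # substitute whatever current model you use
    max_tokens=300,
    system=system_prompt,
    # Spanish for "The lights went out. Do I throw all the food away?"
    messages=[{"role": "user", "content": "Se fue la luz. ¿Tiro toda la comida?"}],
)
print(reply.content[0].text)
```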
Speaker 1 (08:50):
That ability to have a natural conversation, combined with AI tools designed by subject matter experts, is why frontline workers succeed with AI on the first try 80% of the time versus 30% for knowledge workers.
Speaker 2 (09:03):
Yes, because over the past 20 years the primary software experience for frontline workers has been social networks and messaging apps, which are conversational. So on average they write prompts with 28 words, while executives use five. It's conversation versus command.
Speaker 1 (09:20):
So some of us are stuck commanding software, while others naturally treat AI as teammates.
Speaker 2 (09:25):
Well, we better
evolve from commanding AI
software to onboarding it.
Speaker 1 (09:29):
Speaking of evolution, in a recent discussion with a client in the UK you mentioned how quickly things change. What worked yesterday may not work tomorrow.
Speaker 2 (09:38):
Things are changing very fast. For example, some prompt engineering courses become outdated before students finish them. Chain-of-thought techniques that were cutting edge six months ago now need complete redesigns with models like Claude 3.7 and GPT-4.5. So what should people actually be learning? Things like the fundamentals of AI interaction, not specific
(10:00):
prompts: critical thinking, understanding capabilities and limitations, developing good communication habits, and some fundamentals of management and building successful teams.
Speaker 1 (10:10):
Which connects to the
concept of leading teams of
humans and AI agents.
Speaker 2 (10:14):
Yes, more and more we are managing teams of AI helpers, and I encourage everyone to start creating their private agents that can take on those mundane tasks that consume so much of our time. Start by creating an agent connected with something you love doing. That is a great way to experiment.
Speaker 1 (10:33):
Like your running coach example from the newsletter, accessible to anyone.
Speaker 2 (10:37):
Like that. For example, enter your training data into ChatGPT or Claude Projects and say, "I'm training for a June half marathon and struggle with pacing; how should I adjust?" Feed it with links to the publications you follow and data about any injury you might have, have a detailed conversation, and within a week you'll have a personalized coach.
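Here is a small sketch of the kind of kickoff message you might assemble for such a project; the training log, injury note and link are invented placeholders.

```python
# Illustrative only: assembling the context for a personal running-coach project.
training_log = "Mon 5 km easy 6:10/km | Wed 8 km tempo 5:20/km | Sat 16 km long 6:00/km"
injury_notes = "Mild left-knee soreness after runs longer than 14 km."
sources = ["https://example.com/pacing-article"]  # links to publications you actually follow
sources_line = ", ".join(sources)

kickoff_prompt = f"""I'm training for a June half marathon and struggle with pacing.
Act as my running coach: review my recent training, then suggest how to adjust.

Recent training log:
{training_log}

Injury notes:
{injury_notes}

Publications I follow (use these as context): {sources_line}

Ask me follow-up questions before giving a full plan."""

print(kickoff_prompt)  # paste into a ChatGPT or Claude project along with your files
```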
Speaker 1 (10:58):
And that coach is a simplified implementation of the sophisticated agents that take care of predictive maintenance at manufacturing plants or that manage fraud detection at large banks, right?
Speaker 2 (11:09):
Yes, that is a fair analogy, and the beauty is that we can create, relatively quickly, a group of agents that help us with our daily activities.
Speaker 1 (11:19):
And this scales to organizations too. Huge implications for knowledge management.
Speaker 2 (11:24):
Massive. My top five AI agents have consumed 30 million words of training, double a PhD candidate's entire academic intake.
Speaker 1 (11:33):
So give us an example
of how that training happens.
It sounds very complicated.
Speaker 2 (11:37):
Not at all. In most practical cases, it is as easy as moving files into one folder on a drive and pointing the agent to that knowledge. Plenty of AI tools offer this service for large data sets, but ChatGPT, Gemini, Copilot and Claude are great places to start. And what type of data do you use? Whatever data is relevant to the
(11:58):
task: books on the subject, online publications, internal communications. Our role as subject matter experts is to curate the content for our AI assistants. As AI models become commodities, the real value is in the knowledge you provide.
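As a minimal sketch of that folder-based approach, assuming the curated files live in a local directory (the folder name is an example) and that you then paste or upload the result to your assistant of choice:

```python
# Sketch: gather curated knowledge from one folder so it can be handed to an
# AI assistant (many tools also let you upload such a folder directly).
from pathlib import Path

knowledge_dir = Path("ai_apprentice_knowledge")  # example folder of curated files

documents = []
for path in sorted(knowledge_dir.glob("*.txt")):
    documents.append(f"### {path.name}\n{path.read_text(encoding='utf-8')}")

knowledge_pack = "\n\n".join(documents)
print(f"{len(documents)} documents, {len(knowledge_pack.split())} words of curated context")
```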
Speaker 1 (12:15):
And that knowledge is
not lost when you change AI
agents, correct?
Speaker 2 (12:19):
Correct. For example, with our AI agents we stay model-agnostic, switching between Anthropic, OpenAI, Gemini and open-source models as needed. The knowledge persists even if technology changes.
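One common way to keep that flexibility is a thin wrapper around the providers' SDKs, so the curated instructions and knowledge stay the same while the engine underneath can be swapped. This sketch assumes the official Anthropic and OpenAI Python SDKs; the wrapper itself and the model names are illustrative, not AI4SP's implementation.

```python
# Sketch of a model-agnostic wrapper: same instructions and knowledge,
# different provider underneath.
def ask(provider: str, system_prompt: str, question: str) -> str:
    if provider == "anthropic":
        import anthropic
        client = anthropic.Anthropic()
        msg = client.messages.create(
            model="claude-3-7-sonnet-20250219",  # pick whatever current model you use
            max_tokens=500,
            system=system_prompt,
            messages=[{"role": "user", "content": question}],
        )
        return msg.content[0].text
    if provider == "openai":
        from openai import OpenAI
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content
    raise ValueError(f"Unknown provider: {provider}")

# Same curated instructions, different engine underneath:
print(ask("anthropic", "You are our AI adoption research assistant.", "Summarize today's priorities."))
```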
Speaker 1 (12:31):
So organizations
don't need to fear obsolescence.
Speaker 2 (12:35):
If we are careful when selecting tools, there is no obsolescence or vendor lock-in. In most practical cases, just avoid vendors that do not allow you to easily move your data out.
Speaker 1 (12:44):
So as people start creating AI assistants, everyone becomes a data curator for their subject matter expertise, right?
Speaker 2 (12:52):
Yes, and that brings me to my final thought: we are managing armies of assistants. There are no longer individual contributors, as we are all team leaders, and our success depends on changing our mindset from installing AI to onboarding AI. It is our new apprentice.
Speaker 1 (13:09):
As always, Luis, you've given us plenty to think about. And remember, everyone: you can find all the resources mentioned today at AI4SP.org. Stay curious and we'll see you next time.