Episode Transcript
(00:00):
(inquisitive music)
- Hello and welcome to "Work Week,"
the podcast where we answer one big question
about the rapidly evolving workplace,
discuss relevant research about the topic,
and explain what it all means for you.
I'm Dr. Kelly Monahan,
Managing Director of the Upwork Research Institute.
What you're hearing is a digital proxy of my voice
(00:22):
that was created by our team with the help of AI.
In a previous episode of "Work Week,"
we explored the question,
"Is your workforce preparedto manage AI agents?"
During that episode, wediscussed a new vision of work,
one in which AI agentswork alongside humans,
and every employee becomes an agent boss.
(00:43):
As organizations across industries
continue to integrate AI into their processes and systems,
this raises another question.
What's the difference between AI agents and agentic AI,
and how will each impact the future of work?
To the uninitiated,
this might seem like we're splitting hairs.
Are AI agents and agentic AI not the same thing?
(01:04):
They sound similar, after all.
But if you're a business leader
trying to understand which tools
will truly enhance your team's productivity,
or if you're a worker trying to future-proof your career,
understanding the difference
between these tools is important.
As artificial intelligence
becomes more embedded in how you work,
the kind of AI you build and deploy will shape its impact.
(01:27):
Your decisions will determine whether you and your people
are simply automating tasks
or creating systems that collaborate, learn, adapt,
and operate more independently and responsibly over time.
To understand the implications of these different models,
let's break down the core concepts,
discuss recent research about AI agents and agentic AI,
(01:48):
and look at what it all means
for you, your team, and your organization.
Let's start with some high-level definitions.
AI agents are systems designed
to perform tasks autonomously,
often acting on behalf of a user.
Think of a customer service chatbot
that can resolve issues without human intervention,
or an AI scheduling assistant that manages your calendar.
(02:10):
These tools are reactive.
They respond to prompts, inputs, or predefined triggers.
AI agents can search, summarize, analyze, and automate,
but they don't decide what they want to do.
Their functions are completely bounded
by the instructions we give them,
either through prompts or programming.
Agentic AI, on the other hand,
(02:32):
represents a broader and more emergent paradigm.
These systems often involve multiple AI agents
and go beyond executing tasks.
Their programming is so complex and layered
that they simulate intentionality.
They can delegate, prioritize, pursue goals over time,
deconstruct problems into subtasks,
and even evaluate past decisions
(02:53):
to iteratively refine strategies.
Agentic AI is what happens
when an AI doesn't only answer a question,
but asks the right questions
to solve problems on a user's behalf.
A recent research paper from the Cornell University
Department of Environmental and Biological Engineering
and University of the Peloponnese
compared the difference between AI agents and agentic AI
(03:16):
to a smart thermostat versus a smart home.
As the authors describe,
"Smart thermostats have some autonomy.
They can adjust to the user's schedule
and limit the use of the air conditioning system
while a house is empty, for example.
But smart thermostats operate in isolation
and are focused on the task of controlling the temperature.
(03:37):
Smart homes, on the other hand,
use multiple specialized agents
to simultaneously manage many tasks,
including the temperature, lighting,
energy pricing optimization,
security monitoring, and entertainment."
In sum, a smart home ecosystem,
the agentic AI in our example,
allows many AI agents to communicate and work toward goals
(03:58):
that are larger than any single AI agent could.
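To make that contrast concrete, here is a minimal Python sketch of the smart-home analogy. The class and method names are purely illustrative assumptions, not drawn from the paper or from any real agent framework: a single bounded agent reacts to one input, while an orchestrator delegates subtasks to several agents and keeps a history it could later use to refine its strategy.

```python
# Illustrative sketch only: one bounded AI agent versus an agentic
# system that coordinates several agents toward a shared goal.

class ThermostatAgent:
    """An AI agent: reactive and bounded to a single task."""
    def act(self, home_state: dict) -> dict:
        # Limit the air conditioning while the house is empty.
        return {"hvac": "eco" if not home_state["occupied"] else "comfort"}

class LightingAgent:
    def act(self, home_state: dict) -> dict:
        return {"lights": "off" if not home_state["occupied"] else "auto"}

class SecurityAgent:
    def act(self, home_state: dict) -> dict:
        return {"alarm": "armed" if not home_state["occupied"] else "standby"}

class SmartHomeOrchestrator:
    """Agentic AI in miniature: delegates to specialized agents and
    merges their actions in pursuit of a goal no single agent owns."""
    def __init__(self, agents):
        self.agents = agents
        self.history = []  # past decisions, kept so strategies can be revisited

    def pursue_goal(self, home_state: dict) -> dict:
        plan = {}
        for agent in self.agents:  # delegate each subtask
            plan.update(agent.act(home_state))
        self.history.append((home_state, plan))
        return plan

home = SmartHomeOrchestrator([ThermostatAgent(), LightingAgent(), SecurityAgent()])
print(home.pursue_goal({"occupied": False}))
# -> {'hvac': 'eco', 'lights': 'off', 'alarm': 'armed'}
```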
Many companies, including big names in the industry
like Anthropic and OpenAI,
are exploring early forms of agentic AI.
They're experimenting with tools that could act
as co-pilots or executive assistants in digital form,
autonomously managing workflows
(04:19):
and making decisions based on their users' changing goals.
According to research from Anthropic,
agentic AI is rapidly moving from concept to reality.
However, Anthropic's large-scale analysis
of millions of Claude interactions
suggests something more nuanced.
It found that 57% of AI use cases
(04:41):
involve augmenting creative tasks
like brainstorming, content creation, or rapid analysis,
not replacing the person, but better enabling them.
These tasks increasingly involve AI agents
doing things with humans rather than for them.
In other words,
these tools are collaborative, not standalone.
While that's an important step toward agency,
(05:02):
it implies that the industry
hasn't quite achieved its goals.
As AI systems become more agentic,
meaning they can chain actions together,
adapt their strategies and handle ambiguity,
they demand something new from the people who use them.
It isn't enough for people to act as operators.
People need to become orchestrators.
(05:23):
As we discussed in our earlier episode
about managing AI agents,
uniquely human skills will become even more valuable
in a future of work that includes AI agents.
This will also hold true in a future of work
that includes agentic AI.
And Upwork Research Institute data recently found
that freelancers outpace full-time employees
in nearly every human-centric skill,
(05:45):
such as problem solving, clear communication,
critical thinking, and adaptability.
This makes sense when you consider the skills
that freelancers need to be successful.
Freelancers are constantly navigating ambiguity,
adapting to different client environments,
and learning on the fly.
These are exactly the traits
required to collaborate effectively
(06:07):
with both AI agents and agentic AI.
So if you're a business leader,
this might shift how you think about hiring.
Instead of prioritizing technical AI skills,
you might start screening for empathy, resilience,
and abstract reasoning,
skills that don't show up in a GitHub profile,
but matter immensely when you're working
alongside semi-autonomous digital collaborators.
(06:31):
From an organizational lens,
recognizing the difference between using AI agents
and building or adopting agentic AI
will influence how leaders structure workflows, manage risk,
and even conceptualize productivity.
Here's how.
First, automation is no longer the end goal.
Many business leaders today
are thinking about AI as an automation tool,
(06:53):
a way to reduce repetitive tasks or streamline processes,
but agentic AI shifts the paradigm.
These systems are more than task executors.
They're decision makers.
They can re-plan, revise,
and optimize workflows dynamically,
often without human input.
This is both powerful and risky.
(07:14):
Leaders will need to rethink traditional governance models
and address questions like,
who's responsible for the output of a system
that has set its own goals?
Second, new skill sets are emerging.
Deploying AI agents requires engineering prompts
and designing workflows,
but managing agentic AI demands something deeper:
AI behavior design and fluency in human-AI collaboration.
(07:38):
This means understanding
how data storage influences AI decision loops,
how to audit reflection mechanisms, how to manage risk,
and how to build ethical boundaries
that persist even when objectives shift.
Managing agentic AI is about more than managing people.
It's about managing systems that can manage themselves.
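As one hedged illustration of what an ethical boundary that persists across shifting objectives might look like in code, here is a hypothetical Python sketch. The action names and the guardrail itself are invented for the example and aren't taken from any specific product or framework.

```python
# Illustrative only: a standing guardrail that is checked on every
# proposed action, no matter how the agent's objective changes.

FORBIDDEN_ACTIONS = {"share_customer_pii", "exceed_spend_limit"}

def within_boundaries(action: str) -> bool:
    """Standing ethical check applied regardless of the current goal."""
    return action not in FORBIDDEN_ACTIONS

class GoalDrivenAgent:
    def __init__(self, goal: str):
        self.goal = goal

    def set_goal(self, new_goal: str) -> None:
        self.goal = new_goal                # objectives can shift...

    def execute(self, action: str) -> str:
        if not within_boundaries(action):   # ...but the boundary does not
            return f"BLOCKED {action!r} while pursuing {self.goal!r}"
        return f"DONE {action!r} while pursuing {self.goal!r}"

agent = GoalDrivenAgent("reduce support costs")
print(agent.execute("auto_close_stale_tickets"))
agent.set_goal("maximize upsell revenue")
print(agent.execute("share_customer_pii"))  # still blocked after the goal shift
```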
(07:59):
Third, productivity metrics will need a rewrite.
Productivity has long been measured as output over time,
but how do you measure the effectiveness of an AI system
that can adapt its approach to changing circumstances,
rewrite its own objectives, or invent new workflows?
This introduces the need for a new layer of explainability,
(08:20):
a concept becoming central in enterprise AI.
Explainability is the ability to both understand
and explain the reasoning behind AI decisions,
such as hiring decisions
as we covered in a previous episode.
Because these reasons can be complex,
this creates challenges
with troubleshooting systems and building trust.
(08:40):
Systems that exhibit agentic traits
must be auditable, not just functional.
You need to know
and to be able to explain to stakeholders
how decisions were made, not just what was done.
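Here is a minimal sketch of what "auditable, not just functional" could mean in practice, assuming a simple decision log: every action is recorded with the objective in force, the inputs used, and the reported rationale, so the trail can be explained to stakeholders later. The field names are illustrative assumptions, not a standard.

```python
# Illustrative only: a decision trail that can be replayed and explained.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    objective: str   # the goal the system was pursuing at the time
    inputs: dict     # the information it considered
    action: str      # what it actually did
    rationale: str   # the reasoning it reported for the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    def __init__(self) -> None:
        self.records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self.records.append(rec)

    def explain(self) -> str:
        """Render the full decision trail, not just the final outputs."""
        return json.dumps([asdict(r) for r in self.records], indent=2)

log = AuditLog()
log.record(DecisionRecord(
    objective="clear the support ticket backlog",
    inputs={"ticket_id": 4812, "sentiment": "frustrated"},
    action="escalated to a human agent",
    rationale="confidence below the threshold set for refund decisions",
))
print(log.explain())
```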
Now that we've laid that groundwork,
let's talk about what different AI agent
and agentic AI scenarioslook like in the workplace.
(09:00):
In our first scenario,
let's consider the AI agent as a task assistant.
Right now, the AI industry has developed early-stage tools
that can execute repetitive tasks
with some degree of autonomy.
A customer support service
might use a chatbot to resolve basic tickets,
or a recruiter might use an AI assistant
to schedule interviews based on availability.
(09:23):
These are AI agents.
They extend human productivity by handling volume.
In our second scenario, the AI agent becomes a collaborator.
This is where distinctions begin to blur.
You may have an AI tool that helps prioritize your inbox,
draft responses, track follow-ups,
and alert you to time-sensitive tasks.
(09:44):
The agent is making choices
based on your patterns and preferences
without you explicitly telling it what to do each time.
This is moving toward agentic AI
because the system is doing more than simply reacting.
It's starting to initiate.
It's demonstrating the ability
to work with context and continuity.
Our third scenario imagines that the agentic AI
(10:06):
has advanced to a digital project manager.
In more advanced versions, AI could supervise workflows,
assign tasks to team members, human or AI,
optimize timelines, and learn from mistakes.
It may flag inconsistencies across departments,
recommend changes to processes,
or even handle end-to-end coordination
(10:27):
of cross-functional initiatives.
Imagine that.
AI that can not only help you complete a task,
but can also redefine the process entirely.
This is a seismic shift.
But it won't work unless the humans in the loop
can guide these tools with vision and nuance.
This will require something more
than people who know how to write prompts.
(10:48):
It will require people who can define purpose.
As AI moves from tool to teammate,
here are four things you should consider.
First, consider redefining your skills frameworks.
Job descriptions built around task execution
are becoming less and less effective.
Focus instead on capabilities.
Can your team synthesize information, manage ambiguity,
(11:11):
and co-create with dynamic systems?
Second, consider rethinking team composition.
As agentic AI takes over project coordination,
the people on your team will shift
to higher-order, more strategic functions
such as storytelling, culture building,
and ethical reasoning.
Don't staff for technical skills alone.
Build teams that can use their human-centric skills
(11:34):
to guide machines and each other.
Third, consider experimenting with role augmentation.
Start small: deploy agentic AI in a single function
like customer operations or content marketing.
Let the tool manage a process from end to end.
Then ask, how did it perform? What context did it miss?
(11:54):
Where did human judgment still matter most?
Finally, consider engaging freelance talent for flexibility.
Upwork Research Institute data shows
that freelancers stand out with their uniquely human skills.
The data also shows that freelancers
are already ahead of full-time employees
when it comes to embracing AI tools and learning AI skills.
(12:16):
Engaging external experts
can help you quickly test new workflows,
experiment with emerging AI technology,
and build internal muscle
without overhauling your entire organizational structure.
The future of work won't be built on rigid control,
but on thoughtful collaboration with each other
and increasingly with the intelligent systems we create.
(12:39):
You don't need to know everything
about AI agents or agentic AI today.
You just need the courage to stay curious,
ask the right questions,
and lead your teams into the future with integrity.
Agentic AI doesn't mean replacing people.
It means partnering with systems
that can grow in sophistication as our teams develop,
(12:59):
and that means new roles for people,
AI mentors, goal setters, system debuggers, and ethicists.
(inquisitive music)
As we do every week, let's wrap up this episode
with an action you can take immediately
along with a reflection question to think about.
For your action this week,
pick a decision that's made weekly
at your organization or in your specific role,
(13:22):
such as approving budgets, triaging client requests,
or prioritizing projects.
Map out how that decision is currently made.
What information is used?
Who is consulted? Which values are weighed?
Then ask, "Would I trust an AI to do this?"
If not, what's missing?
(13:42):
This is a simple way to start preparing for agentic AI,
not just by learning new tools,
but by understanding the judgment behind your workflows.
Once you've mapped out how the decision is made,
share your findings with your team.
This can spark a powerful conversation
around clarity, consistency, and collaboration
in an AI-enabled future.
(14:03):
As for this week's reflection question, ask yourself,
if your AI tools could make decisions
on your organization's behalf or your own behalf,
would they reflect your values, goals, and standards
or simply automate what's easy?
This is the heart of the shift from AI agents to agentic AI.
The tools we build and deploy
(14:25):
will increasingly operate with autonomy,
not just efficiency.
The more you understand your own decision-making processes,
what matters, why it matters,
and how trade-offs are made,
the better you'll be at guiding agentic systems.
That's it for today's episode of "Work Week."
I'm Dr. Kelly Monahan, and today we discussed
(14:45):
the difference between AI agents and agentic AI
and how both impact the future of work.
If you enjoyed this episode,
share it with a colleague or friend
and subscribe for more insights
from the Upwork Research Institute.
(inquisitive music)