Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
David Rice (00:22):
stuck in like pilot mode. They roll out training modules, they call it a day. What's the biggest lever that actually moves an organization toward fluency?
Glen Cathey (00:33):
If you are going to create a culture of defaulting to AI, it's leaders that demonstrate, first and foremost; they set the vision. They actually live it and show it so they can serve as examples. You can have all the training in the world, but if you don't actually have leadership that's fully aligned, you're not gonna
(00:54):
drive a culture where more people are defaulting to AI and exploring all the ways that it can actually improve their work.
David Rice (01:02):
How do we teach 80% of the workforce to suddenly become AI people leaders?
Glen Cathey (01:09):
We've never really solved management as a skill. That's also not something you can just take a course and say, I'm now magically a fantastic manager of people, or AI. It is something that people have to turn into somewhat of a habit to be able to start thinking about those steps. What am I trying to solve? What resources do I have? What are the capabilities? How do I provide enough instructions so that they can actually do a good job for me?
(01:30):
And then how do I provide feedback?
David Rice (01:32):
roles almost out of tradition rather than necessity?
Glen Cathey (01:36):
are relatively new, like LinkedIn's hiring assistant. It has a separate model for searching, outreach, screening. You can enter in information in natural language. We're talking about the automation of tasks in recruitment. Understanding a job, translating that job into search and match requirements. So you have solutions available in the market today that can actually do that.
(01:57):
And then it makes me wonder about why does the role of the recruiter exist anyway?
David Rice (02:05):
Welcome to the People Managing People Podcast — the show where we help leaders keep work human in the era of AI. I am your host, David Rice. And on today's episode, I'm joined by Glen Cathey. He is the SVP of Talent Advisory within Randstad Enterprise. We're gonna be talking about the journey to AI fluency and building real skills to get there. Glen, welcome!
Glen Cathey (02:26):
I'm happy to be here. Looking forward to the discussion.
David Rice (02:29):
So I kind of wanted to start us off talking about online training. Obviously this is something that everybody's thinking about, because we've gotta develop all these skills that we don't quite have, or in some cases we don't even know which ones we need yet. So when we were talking before this, you said to me, like, online training creates knowledge, but
(02:50):
not necessarily skills. So my question for you is, like, what would it take for companies to actually close that gap, and how do we stop treating AI like a sort of course to pass when what we really need is to think of it more like a culture that we're gonna live?
Glen Cathey (03:04):
It makes sense for any company to really lean heavily into online training because it's the easiest to scale to any organization. You know, you create a course and then you can deploy it to hundreds of thousands of people. But as you mentioned, I do believe that primarily online learning teaches people knowledge, and we're looking to essentially not just give people knowledge, but
(03:27):
the ability; there's behavioral change that's actually critical as a part of this. So I don't think it's wise to treat generative AI as a tech deployment. It really is more of a change management exercise, because it involves getting people to think differently about how they work. And it involves them changing their behaviors, and then
(03:49):
eventually you want those behaviors to turn into habits. And you do that mostly through experience, like hands-on learning. So I tend to recommend, and something that we've done internally and externally as well, is essentially developing hands-on experiential learning workshops where people roll up their sleeves, so to speak, get their hands dirty, get repetitions in so that they're actually working with it,
(04:10):
with role-relevant exercises. So it's not just an idea or a concept. It becomes real. It's still a challenge, because training is just an event. So whether it's online or a workshop, you really have to focus on the reinforcement of that, and that's how you build mindset first, then behavioral change, and then turn that into habits.
David Rice (04:30):
It's like watching a fitness video doesn't make you fit, right? So like, you gotta do it. I don't know why in some cases we don't really think of it the same way here, but we ought to, I think. Sort of, we've gotta build it into daily workflows, right? Like, this isn't something that HR can just sort of force on people, right?
(04:50):
Like, you've gotta embed AI practices. Even like, say, embedding AI prompts in project templates, just something simple like that, making it a part of the everyday experience at your company. You know, and then rewarding experimentation when it happens. I think all that is kinda key to how we get to that point.
Glen Cathey (05:09):
that meets once a month. I've been doing this for almost two years now, and there was a really nice quote that I won't be able to quote exactly, but the essence of it was, you know, the magic of generative AI comes from when you thoughtfully embed it into workflows. And so, you know, the word that I tend to use is being prescriptive. So sometimes it's still even difficult for me to wrap my
(05:32):
head around how to really explain the challenge of when you roll out a gen AI tool set to your employees, like a Copilot or a Gemini or ChatGPT, which is the fact that these tools are capable of literally anything when it comes to knowledge and cognitive tasks. Now that sounds exciting on one hand, but it poses a challenge on the other hand, which is, where do I start?
(05:54):
So like you talked about prompt libraries, it is very helpful for people to have access to things to say, oh, well, that's an interesting idea. I hadn't thought of that. But what's also more important is to also help people even think about their work in a way where they will pause and reflect and realize, this is an opportunity for me to use AI. So on one hand, it's helpful to be very prescriptive
(06:17):
and say, hey, in this particular workflow for this particular role, here are the five points where we expect you to use AI. And here are some starter prompts. That's very important. I think it's also equally important to help people just develop the skill of being able to identify opportunities where they can leverage AI on their own. So it's not an either-or. It's an and.
David Rice (06:36):
And they gotta have the psychological safety to feel like, it's okay for me to just try this and see what happens, and if I fail, it's not that big of a deal. That's kind of how knowledge becomes skill, right?
Glen Cathey (06:47):
People learn by doing, and there's a really interesting book that shows that people actually learn most, even at the brain level, if there's some struggle involved, which is kind of fascinating. I think the book I read on that was called The Talent Code, and it just talks about how people actually build skills, and skills are often built through trial and error, which means you have to have some mistakes along the way, and by making
(07:08):
the mistakes and then overcoming the mistakes, you're really solidifying your knowledge on how to do that particular thing. To me, that's very fascinating. I love the fact that you touched upon the psychological safety. What we find is that there's an existing pressure for people to perform in their current role, and if you're going to take the time to slow down with regard to how you're thinking about your
(07:30):
work and then inviting AI in and doing some experimentation, it will slow you down. And many people are nervous that, well, if I slow down and I start experimenting with this, which isn't always going to work, I'm not gonna be an expert at it right away. That's a concern. People might say, well, I don't really feel safe spending some extra time when I'm supposed to be as productive as possible.
(07:51):
So I think it's really important for companies to recognize that when there is change involved, people are gonna have to slow down to speed up. And like you said, they're going to have to experiment, and no one is going to go from a noob to an expert in a day. It takes a process, and you have to give your people space and time to experiment safely so that they can actually go from where they are to where you want them to be.
(08:13):
But I feel like that's something that a lot of companies don't focus enough on: making sure that people have the space to experiment, that it is okay to do so, and that they have time to do that. And if it slows them down initially, that's also okay, because it's expected.
David Rice (08:26):
guests on where I've talked about the fact that people are very attached to, like, tasks, because we've always taught them that's their value, right? It's their ability to complete tasks. We're moving into this era where, like, it's really more about learning and kind of what can you be rather than what you are. And I think part of the challenge here
(08:46):
is, like, unlearning. We all have habits. We all have, like, things that we do in our work, and we've gotta kind of unlearn that and then relearn it in a different way with AI. And I'm curious, in your opinion, is experience sort of a type of baggage? How do you convince seasoned employees that their hard-won habits might actually hold them back?
Glen Cathey (09:08):
You know, we hadn't talked about that before. It's interesting, because I think it's unconscious baggage. I don't know if we've chatted about this before: the four levels of competence, right? So you know, at the top, most people, if you're not new to your role and you've been doing your job for a while, you are unconsciously competent, which means that you're not actually thinking about what you do, you just, you're in the flow
(09:29):
of work, you get it done. That's how we all pretty much work, unless again, you're new to a role, new to the workforce. The reality is, when you implement a tool or a solution like generative AI, people have to actually kick themselves back down to the first level, which is, hopefully, you know, the four levels of competence actually starts with being unconsciously incompetent, which means that people don't even know what
(09:51):
they don't know, and then they have to move to the next level, which means that I'm consciously incompetent, which is like, okay, so I'm now aware of AI, but I realize I'm not very good at using it or incorporating it into my daily work. And then you move to the next level where you're consciously competent, which is, I'm kind of good at it, but I have to really think about using it properly. The goal really is to get everyone from autopilot,
(10:14):
unconscious competence, where everyone's just doing their work every day. To me, I actually would call that kind of like a baggage that you have to get rid of, because you really have to rethink the way you work. It is a rethinking of the way you work. You have a resource that's there, available 24/7, to be able to help you with any knowledge or cognitive task at any time. And that's something that people don't have, is a
(10:35):
resource just sitting next to you waiting to be guided, delegated to, collaborated with. And it is a huge mental leap for people to say, how do I go from the way that I currently work, where I'm not even consciously thinking about working, to slowing down and becoming conscious of my work process, and then also realizing I might have to ask some key questions about how can I use AI to help me improve the quality?
(10:58):
It's not just efficiency. It could also be the quality of the work or the decisions and the outcomes, or even customer experience. So there's lots of other variables, even beyond efficiency and, like, cutting down on tedious tasks.
David Rice (11:10):
Like, I think when people hear it, the first thing that it sort of triggers is this sense of, like, they're gonna abandon their expertise or something. But I think it's more like just positioning it like a software update: you're not erasing this muscle memory. You're just sort of adding some new reflex. The research shows that the folks who know how to do
(11:33):
this stuff, and now they're applying AI, they tend to be more successful in their roles and have more impact than folks who are either just getting started and using AI or have just outsourced to it. So I think it's kind of just creating a better perception around what it means to unlearn and relearn. I'm curious, you know, working with AI agents, it has some
(11:56):
commonalities with managing a person at times, right? Delegating tasks, setting goals, giving feedback, things like that. If most people have never been managers, how do we teach 80% of the workforce to suddenly become AI people leaders, I guess we'd say?
Glen Cathey (12:12):
the past few months, is making that connection. I think the right way to look at working with AI is that it is a resource to be managed and collaborated with. It is challenging, because you have the majority of the workforce that's never managed anything before. Or certainly people may have collaborated with other people, but it's not something that people can
(12:32):
collaborate with another person any second of the day. You usually have to set up meetings for that. So it does require some rethinking of your work, and also training people around what management skills are. And the first one is problem identification, and even deconstruction, which sounds super fancy, but it's like, what are we actually trying to do?
(12:53):
And to our part of the discussion just a moment ago, most people are just kind of on autopilot. They know what they're focused on every day. This requires you to slow down and figure out, what am I actually trying to solve? And then what resources, you know, if I'm a manager, what resources do I have available to me? Well, now we have AI as a resource that I can either collaborate with or delegate to. It's like, well, if you have this resource,
(13:13):
now you have to provide expert-level instructions. You have to provide enough context for it to understand what it is you're trying to do, who you are, what you need, why you want it, how you want it. And then once it actually performs some work, you have to provide feedback like a manager would to an employee. You did this work. Some of this is okay. I need you to focus a little more on this area. These are things that the individual contributor
(13:34):
has never done before. So, although we've never really solved, like, management as a skill, that's also not something you can just take a course and say, I'm now magically a fantastic manager of people or AI. It is something that people have to turn into somewhat of a habit and a routine, to be able to start thinking about those steps. What am I trying to solve? What resources do I have?
(13:55):
What are the capabilities? How do I provide enough instruction so that they can actually do a good job for me? And then how do I provide feedback so we can continue to get better and better output?
David Rice (14:04):
And teaching the workforce to do that. I think there's some opportunities around, like, little simulations where employees sort of coach an AI to improve its outputs, just like a manager giving feedback, and you can give feedback essentially on how they're coaching it and build that skill. You know, the clarity, the specificity, sort of the overall feedback loop.
(14:26):
It's the same traits that good managers have, but it's kind of ironic that it might be AI that teaches us to do it better in some ways.
Glen Cathey (14:35):
And it's funny, it wasn't until you said that, it gave me an idea that there is a way to be able to create a prompt. Or if it's in Gemini, a Gem, or if it's ChatGPT, like a custom GPT, where you can give it instructions where it is there to ask you questions. So it isn't just waiting for your input, it's actually there to kind of coach you through it. So that's something that you could totally implement today
(14:58):
with almost any commercially available solution. It's basically helping you manage it better, and it is kind of ironic.
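[Editor's note: here is a minimal sketch of the "coach" setup Glen describes, written against the OpenAI Python SDK. The model name, instruction wording, and helper function are illustrative assumptions, not anything from the episode; the same instruction text could be pasted straight into a custom GPT or a Gemini Gem.]

```python
# Illustrative sketch only: instructions that make the assistant interview
# you before doing any work, per the "coach" idea discussed above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in
# the environment; all names and wording here are invented for the example.
from openai import OpenAI

client = OpenAI()

COACH_INSTRUCTIONS = (
    "You are a coach for my work tasks. Before producing any deliverable, "
    "ask me clarifying questions one at a time: the goal, the audience, the "
    "format, the constraints, and what good looks like. Only start the task "
    "once you have enough context, and say so when you do."
)

def coach_turn(history: list[dict]) -> str:
    """Send the conversation so far and return the assistant's next message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{"role": "system", "content": COACH_INSTRUCTIONS}, *history],
    )
    return response.choices[0].message.content

# First turn: expect a clarifying question back, not a finished draft.
history = [{"role": "user", "content": "Help me write a job post for a data engineer."}]
print(coach_turn(history))
```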
The reality is, sometimes when I do trainings in this area, I think that people need to fully appreciate how powerful these solutions are, how capable they are, how human-like, and even sometimes superhuman, in terms of even human skills like empathy
(15:22):
and emotional awareness, which sounds strange, that AI can actually outperform people. But multiple studies have shown that in blind research, people will judge the output of AI to be more empathetic and more emotionally aware. Which is absolutely mind-blowing, but that does mean that if you have this resource that you can work with, it's not just technologically strong. It's not just strong from a reasoning and intelligence
(15:43):
standpoint, it can actually help you be even better as a person, with the soft skills that you would never have assumed AI could help you with.
David Rice (15:50):
I just keep saying that to myself, like, hey, never thought I'd see this. So when we were talking before this, you know, you mentioned to me that deploying AI at scale, it's like doubling your workforce overnight. But you know, if that's true, what's sort of the hidden cost of that, or risk, I guess, not just cost, of suddenly having twice as
(16:11):
many workers, half of whom never sleep, but also never think without being prompted, never think independently, right?
Glen Cathey (16:18):
I find that way of looking at it to be helpful for companies, to help motivate them to understand that this isn't just like deploying software licenses that people may or may not use. I feel that if you look at it this way, as if you're doubling your workforce, it creates a little more, I'd say, awareness, and maybe a feeling of responsibility
(16:38):
to actually make sure that these resources are working. Because if not, they're just sitting idle. And I try to use that framing because if you really were to double your workforce, you would have a lot of pressure and anxiety around making sure that your new employees are as productive as possible. But I don't feel that companies are thinking about that when it comes to working with generative AI solutions.
(17:02):
They should have that mentality. And that goes back to the management training, which is, if you're gonna deploy this to people, the majority of your employees being individual contributors, that's the missing link: figuring out, you have hired, quote unquote, these new resources. They're AI, they can actually perform real work today, augment, and even work from an agent or automated perspective.
(17:23):
But you have to be thinking, we have these new resources. Who is managing them? And then who's capable of managing them? And then how do we upskill people to make sure that these resources are gonna get up to speed? I guess this framing for me, until I find a better
(17:44):
one, is really more around creating anxiety around the fact that we're underutilizing a workforce asset that we have. And there's probably a better way of looking at it, but so far that seems to resonate with some people, to say, okay, if I really did double my workforce, I would be responsible for making sure that those resources were up to speed and as productive as possible. Whereas right now in a lot of companies, it's sitting idle, and even though it's much less expensive than a person, that doesn't mean that you shouldn't be thinking about
(18:05):
it and having that level of anxiety of making sure that your people are leveraging this tool. I'll say, you know, more and more companies are deploying these at an enterprise scale, and that means that having access to the tools is not the competitive advantage anymore, which then means your competitive differentiator is how well you're enabling your people to use the tech, which goes back to the whole point of our discussion. And hopefully
(18:27):
that also creates a little bit of pressure and anxiety for people to make sure that, hey, are we doing enough in this space to make sure that our people are enabled to get maximum benefit from these tools that we're deploying?
David Rice (18:38):
harder, like, either the idea that, you know, you get a competitive advantage out of it, is disappearing a little bit because it is becoming so standard. I think the thing that I would challenge folks to do is, you mentioned there, you know, making sure that they're prepared. You've gotta think of this as, like, management overhead, right? Like, every new agent, every new thing that you deploy, it
(19:01):
adds cognitive load somewhere. So there's more things to brief, more things to check and trust. You wanna build it into something you can trust, because it sometimes has false confidence. It never sleeps, right, but it never pushes back. So it can generate wrong answers very confidently. And if you don't take the time, you
(19:22):
might put it out there. So it's, like, one of those things, especially for me, I write a lot. That's one of those things I'm always aware of. But I think there's an opportunity there in terms of, like, governance, and sort of thinking about it like AI operations, the same way you think about it as an operation, right? There's rules for, like, quality control, ownership. It's about putting those things in place and thinking of it in that framework.
Glen Cathey (19:42):
And there's, like, centralized governance, and then there's also, I think, governance that extends to every employee that's using these types of tools. That goes back to training, which is making sure that people understand the dos and don'ts of using these types of technologies. And yes, there's opportunities for hallucinations, or what people call AI slop, but what I would say when it comes to
(20:04):
AI slop is that it's not an AI problem, that's a people problem. Because if you are perpetuating slop, that means you're allowing it to get out. And I think that just calls out that one of the most critical things that people are bringing to the equation is critical thinking. That involves the review, and it goes back to thinking as a manager, because if you were to delegate work to an employee, you're gonna review that work.
(20:26):
You're gonna make sure that before it goes out externally or is used for some important purpose, someone who knows better has an opportunity to review it and make sure that the quality is there and it's representative of what you're trying to achieve. But when you think of this being distributed to a lot of your employees, you are placing a lot of trust in that every individual is going to follow that.
(20:47):
That's why training is important, and it's not just, you know, I like to say training is an event. You might go through a course or you might have a workshop, but you also have to perpetuate that kind of a culture, to make sure that you're always over-communicating these types of, what we could call, governance, which is constant reminders of making sure that people are checking on the quality. Fact-checking anything that needs to be fact-checked, but
(21:08):
also making sure that it's the appropriate quality and tone of voice that you want it to be, in terms of however you're using the output. It's very easy to write a simple prompt and get somewhat decent output these days, but that's gonna be somewhat generic. And you know, I hesitate to call it AI slop, but I see a lot of comment around that. But I try to help people understand, that's not AI's fault, that's our fault.
(21:28):
If it gets out, it's because you allowed it to get out.
David Rice (21:31):
And the number that I want you to keep in mind this week is 64%. According to EY's latest AI Pulse survey, 64% of senior leaders say that fear of replacement by AI is stifling adoption within their organizations. Yet only 24% identify employee resistance as a major barrier. So let's sit with that contradiction for a minute.
(21:54):
What this tells us is that leaders know their people are afraid. They see it, they acknowledge it. But they're not naming it as the core problem. Instead, they're talking about data readiness, cybersecurity, and regulatory concerns. But what I see really happening here is that we're watching a transformation where the human cost is being coded as a technical challenge. Two-thirds of leadership recognize that their workforce
(22:15):
fears obsolescence, but that fear isn't being treated as the crisis that it actually is. The question that we should be asking isn't, how do we overcome resistance to AI? It's, what does it say about our organizations that our people are terrified of being replaced and we're still barreling forward? Because here's the uncomfortable truth: when 64% of leaders see
(22:36):
fear, but only 24% treat it as a barrier worth addressing, we're not solving for human wellbeing. We're solving around it. The truth is, this isn't a change management problem. It's a fundamental question about what role humans will play in a world that we're actively making less dependent on them. And right now, I don't think we're having that conversation, honestly.
(22:57):
And with that, back to the show. I always say it's never actually the technology. It's how we choose to use it. And if you choose to use it in a lazy way, well, you're gonna get lazy results. You know, you're gonna get slop, which is the same thing you'd have if you had a lazy, whatever position it is. It doesn't matter. They're gonna have sloppy work that they turn in, and
(23:17):
it's gonna be the same thing. So, you know, it's about learning how to prompt it, how to make sure that it's getting the result that you want.
Glen Cathey (23:25):
that's something that I try to impress upon others, and I try to be very transparent with myself, so I'm on the same journey everyone else is, trying to figure out the best way to use these tools. And there are times where I'm working with AI where I start to get annoyed with how long it's taking me to provide enough context in a prompt.
(23:46):
And I have to always catch myself and realize that, yes, I'm getting annoyed because it feels like it's slowing me down, but I have to remember, so I'm even reminding myself every day, just a few more minutes of appropriate context will yield much better results. And so I try to tell people that you should think of yourself as a manager of these tools. And as such, don't be a lazy manager. And I'd also say that if you have managed people, I'd
(24:07):
say most good managers take the time to make sure that when they're delegating tasks or assigning work to people, people fully understand what it is that they're looking to do. They probably even have examples of what good looks like, right? You probably also allow them to ask questions. So say, I've explained what I want you to do. Is there anything you need to know?
(24:28):
What questions do you have for me? That's, like, good management. That's another funny thing that we do in our trainings, is just remind people that you can have AI ask questions, and it's such a simple, non-technical aha unlock: at the end of your prompts, you can always just say, you know, ask me a few questions, one at a time, that would help you provide the best output for me.
(24:48):
And again, it's treating it more like a person. So I just feel like if you're gonna manage, be a good manager; you need to manage AI as well as you would manage a good employee. And I feel that some people, they're using AI and they're not really managing AI. Using AI with minimal inputs, configuring things, typing in, you know, one sentence, and you'll get okay results, but you're not gonna get the best results.
(25:09):
But that's the same as if you gave that little amount of context to a human; they would never be able to do a good job nine times outta 10. So you have to have the same standard level, I think, is what's helpful for people to keep in mind.
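[Editor's note: a sketch of the "have AI ask you questions" unlock Glen mentions, appending one question-eliciting line to an ordinary prompt. Again, the SDK usage, model name, and wording are illustrative assumptions, not from the episode.]

```python
# Illustrative sketch: append a question-eliciting line to an ordinary
# prompt so the model interviews you before drafting anything.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the task is invented.
from openai import OpenAI

client = OpenAI()

task = "Draft an outreach message to a passive software engineering candidate."
ask_first = (
    "Before you write anything, ask me a few questions, one at a time, "
    "that would help you provide the best possible output for me."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": f"{task}\n\n{ask_first}"}],
)

# Expect a clarifying question back (role, seniority, tone, what we know
# about the candidate) rather than a finished draft.
print(response.choices[0].message.content)
```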
David Rice (25:21):
I think a lot of people have this kneejerk reaction when you say, like, treat it like a human, where they're like, I don't wanna do that. But I'm like, I don't mean you talk to it about your problems, really. I mean, you gotta give it the same level of understanding that you would give a human. It's not inherently a genius. Yes, it has a massive amount of computing power and it
(25:41):
can think really fast and it can do all these things, but it still needs the context. It still needs to understand why you want this. Otherwise it's just gonna blindly spit something out that it thinks you want, but it may not be that thing.
Glen Cathey (25:54):
You know, I used to try to tell people that it's almost garbage in, garbage out when it comes to prompts. It's not really true, because even with a basic prompt, gen AI tools today, and they'll only get better tomorrow, still produce, for what minimal input you provide, output that's still strong. The issue is, it's not its full capability.
(26:16):
So I'd say garbage in is actually better than garbage out, but it's not its best output. And like you were saying, you have to provide enough information for it to do its best job. But I still think it's nice, and hopefully anybody that's listening to this will use that whole concept of just ask AI to ask you questions; say, what else do you need to know to do your absolute best on this particular task?
(26:38):
And I will guarantee you, you will be surprised by the thoughtful questions. You can automatically understand, of course, that is helpful to know if it's going to do this task for me, but you didn't think of it first. It had to pull it out of you by asking you questions. So to me, that's a fun little thing: just have AI ask you questions and say, if you had to do the best job on this,
(26:59):
what else do you need to know? It will ask you really good, insightful questions, which would then in turn allow it to do the absolute best work that it can do. But most people aren't even aware. One more thing I'll slip in is, I tend to tell people that if you're ever unimpressed with the output, or you think it's average or mediocre, you have to look in the mirror; you're probably at least 50% of the problem.
(27:21):
So take ownership. If the output is like, ah, it didn't really do a good job, you need to be like, well, what could I have done better to help it do a better job? I feel that not a lot of people are thinking about it from an accountability perspective, of how responsible they need to be in terms of providing input, but it's bizarre in a positive way. We're at the stage where we can use natural language to
(27:41):
communicate with systems. And prior to three years ago, yes, you had voice-to-text, but it wasn't like generative AI. I feel that people do get a little bit lazy with their minimal inputs, without realizing that, yeah, just take a few more minutes and provide it with all the context it needs to do a fantastic job for you, whether it's typed or voice. Just something I still feel that some people are not taking advantage of with these tools is
(28:03):
talking to it like a person, actually using the microphone so you don't have to sit there and type. So if you feel like you're slowing down to type, just click the microphone and start talking to it.
David Rice (28:13):
It's an incredible ability that it has.
Glen Cathey (28:15):
Yeah. We take it for granted now, but in the future it's kind of crazy. Like we have mobile phones now, and some of us are old enough to remember corded phones that stuck you to the wall in your house.
Glen Cathey (28:27):
Right? Or records, and now we have digital music. At some point I think we'll be laughing at the fact that we actually used our fingers to touch little buttons to communicate with computers. That's a significant barrier, when you really realize that touching keys is a barrier to communication, and now that barrier is literally gone, 'cause you don't have to type anymore, you can just speak.
David Rice (28:48):
all your friends do, so.
Glen Cathey (28:53):
It never seems to get annoyed if you ramble a little bit. And I'll also say it does a really good job even with, if you're fast and furious typing notes in meetings. Even if you use a note taker, sometimes I still like to take my own notes, and it's like, this is not gibberish, but it's not well outlined. It's amazing how well it can understand even those things and then polish them up into something that just looks
(29:13):
like, yes, this is exactly what I wanted 'em to be. But if you looked at my original notes, you wouldn't be able to make sense of them. But AI can. It's just absolutely fascinating.
David Rice (29:22):
at literacy though, right? They get stuck in, like, pilot mode. I've been hearing this a lot lately, and even we recently launched a free community account that you can create, and people list in that their reason for joining, and a lot of people are in the stage of running AI pilots. We're seeing this a lot, but a lot of people get stuck in there.
(29:43):
They roll out training modules, they call it a day, right? I'm curious, in your opinion, what's the single biggest lever that actually moves an organization toward fluency?
Glen Cathey (29:55):
have to say is leadership. Because if you are going to create a culture of defaulting to AI, or making sure that people are more conscious about bringing AI into their work thoughtfully, where it does make a difference, that's not going to be accomplished
(30:16):
just through training. So you can have an online training course, you can have your experiential hands-on learning workshops, which I think are absolutely necessary. That would be my number two, but number one would be leadership. Because it's leaders that demonstrate, first and foremost; they set the vision. This is what we're trying to do. They actually live it and show it so they can serve
(30:36):
as examples, and then it's leadership's chain of command to hold people accountable at all levels. So you have leaders who lead leaders holding them accountable for people actually making changes in their daily work. So I feel like you can have all the training in the world, but if you don't actually have leadership that's fully
(30:56):
aligned, and holding all the different levels of leadership, from the very top all the way down to line managers that manage all of the individual contributors, you're not gonna drive a culture where more people are defaulting to AI and exploring all the ways that it can actually improve their work.
David Rice (31:13):
Yeah, I think it's, like, just understanding what the characteristics of fluency are too. Like, leaders modeling it is gonna be sort of the thing that creates fluency, where it looks like employees are willing to redesign a workflow on their own, for example, without being told to. And part of it too is moving mentality from sort of completion rates of, like,
(31:36):
you know, that productivity. It's more like capability shift. Who's using it? What for? And, like, what changed in the outcomes? There's just some mentality shifts that gotta happen from leadership, and part of it is just using it, to be honest. I've talked to multiple people now who have said the same thing. Like, they talk to leaders and they're like, no, we're an AI organization, but I don't use it. Huh?
Glen Cathey (31:58):
I think when you have any large population of employees, whether it's hundreds or tens of thousands or hundreds of thousands, you're gonna have that. I think your normal distribution, you're gonna have your early adopters; they're going to lean into it anyway, regardless of any training. They're gonna get it. Then you have some people that don't trust AI, and even if you're really enthusiastic about it, I should remember there are people on the other end of the spectrum that are doubtful.
(32:20):
They're distrustful, they're worried about it, or maybe they're just reluctant to learn something new. And you have a lot of people in the middle. And to be able to move that, again, training is great, but leadership, it's not just communicating it and demonstrating it. It's holding people accountable. And although that sounds hardcore, sometimes it can be as simple as making sure it's a part of every team
(32:41):
meeting that you're discussing. How are we using AI, giving people opportunities to share some of their use cases so that other people can hear about that. Documenting those use cases, sharing them with others. You can make it a part of performance reviews as well, and I don't mean that in a scary way. It's not just a stick, it's also a carrot. If you tell me, hey, Glen, you know, when we have our performance reviews, I'm going to be asking you
(33:03):
about how are you using AI? As simple as that. Now I know that I'm going to be asked, and so I'm going to do things because I know I'm going to be asked. Sometimes it's as simple as that. It doesn't have to be anything more complex, but it's that leadership chain that actually starts to help kind of pull people towards the right behaviors, 'cause they know it's gonna be discussed and talked about, and people are gonna be asked, are you using it?
(33:25):
'Cause if not, then it'll come out. And then I would say it doesn't really look good for you. It's almost like saying, no, I don't wanna use the internet, like when the internet was coming out. Like, at some point you just have to be like, this is the future and I have to lean in.
David Rice (33:37):
I mean, well, especially if you have a job where you sit behind a computer, like, this is just the reality, you know? It's like saying that you don't want to use the internet. You wouldn't even think that today. Yeah, it would be absurd. Like, the absurdity of that, we wouldn't even be able to comprehend it. So I think we are gonna get to that point with this as well.
Glen Cathey (33:57):
And I think even with any technology, like, sometimes I use Excel as an example. So Excel's been out for a long time, but there are still things about Excel that, like, 99% of people don't know about. So there's so much that you can do. There are people who make careers on just training people on how to use Excel, and I don't wanna say it's just Excel, but it's a very powerful technology and tool.
(34:19):
Most people can get by with using 10% of the functionality, and it does what they need. You just have to realize there's 90% of other functionality there. So I feel like we will get more people to be more AI native. I don't want to use that term, but I can't think of a better one at the moment, where people are more defaulting to it. But just because they're using it, it doesn't mean
(34:40):
they're using all of it. Just like in Excel or a Word doc, there's still features. There's Windows shortcuts today that I just learned, like, two months ago. I'm like, how did I not know that? So it's like this will never be done, just like leadership hasn't been solved. Just because there's training doesn't make everyone magically, you know, awesome leaders. There's still growth for everybody to have in that area. I think it'll be very similar when it comes to AI, but as
(35:02):
long as people are leaning into it and realizing that this is the new way of working. And it's, I think, the fear of being left behind can be leveraged positively, because it's also reality. I mean, imagine, I know I'm using, like, old-school analogies, but imagine when, like, Microsoft Office came out. If people were just like, I don't wanna use spreadsheets and word processing, it's like, at some point you're left behind.
(35:24):
You made a choice to not see that this is something that's gonna be integral to work, and that's going to hurt you. It's the same thing with AI. It's no different than MS Office. It's just gonna be a skill that everyone's gonna have to have. So if you don't lean into it, then I feel like that's something people are gonna do to their detriment.
David Rice (35:43):
for example, right? Like, AI could take over tasks from sourcing to shortlisting, even messaging people. Where's the line, sort of, between augmentation and replacement? 'Cause that's something a lot of people fear, right? And I'm wondering, do you think we're clinging to certain roles almost out of, like, tradition rather than necessity?
Glen Cathey (36:04):
I have some thoughts, and it might be controversial for some. I'm still thinking this through, like many people are, but first off, let's look at a few solutions that are relatively new, like LinkedIn's hiring assistant. So LinkedIn's hiring assistant is a legitimate multi-agent model. It has a separate model for searching.
(36:25):
It has a separate model for outreach. It has a separate model for screening. And so you can enter in information in natural language, and there's other solutions that do this too, right? So we're talking about the automation of tasks in recruitment: understanding a job, translating that job into search and match requirements, finding people, engaging people, pre-screening people.
(36:46):
So you have solutions available in the market today that can actually do that. And then it makes me wonder about, well, why does the role of the recruiter exist anyway? This is at least my journey, is that at a certain size, you have really small companies, like startups, where they don't have dedicated recruiters, right? So who hires? Managers, like, they recruit their own people.
(37:06):
But you get to a point where your job as a manager of whatever it is you're managing, let's say software engineering, I don't have time to be a full-time recruiter. So that creates the need for a person to be able to take over those tasks. What happens when you have technology that's capable of performing those tasks? You could have a scenario in the future, and I'm not saying it will come, but you could, and the technology already
(37:28):
exists today, where who is to say that the end user of LinkedIn's hiring assistant isn't the hiring manager? So I know that sounds scary, but people need to be aware. We can't put our head in the sand and say, no, it can't. No, technically that's totally capable today. Right? So let's be aware of that. We're not saying it's going to happen. Then you have to figure out, if I'm a recruiter, where
(37:50):
do I fit in the future? In that type of scenario, not all hiring managers are going to want to self-serve using technology. So they'll probably still, many of them will want to outsource to a human recruiter. So I said, like, if I'm a recruiter and I manage this truly multi-agent system, where do I add value? It sounds cliche, but it's true. It's, like, relationship building, the actual recruiting, which
(38:13):
is persuasion and influence. How do I take a passive candidate who wasn't really thinking about making a change, or someone who's actively interviewing at other companies, how can I listen to them empathetically and match what it is that they're interested in, in terms of their skills, motivations, and aspirations, and match them to the next opportunity? And how do I align what I'm hearing them tell me with
(38:35):
how our company is actually perhaps the best match for them, if that's actually true? Relationships with hiring managers, being consultative. I don't know if we'll get to the point where people are a hundred percent relying on AI to be a talent advisor. But it could happen. But I still think there's definitely a role for people to play from a talent advisor perspective, both on the hiring manager side and the candidate side.
(38:56):
So it's just some of my thoughts in that space, and realizing that the capabilities are there, and to say that we can't be replaced is actually not accurate. But just because we can doesn't mean we should. And I also think, the last thing I'll say on this, is this will be, I think, a strategic decision for every company. One company might say, we are going to automate this portion,
(39:16):
and we wanna use AI here, but we're gonna use people here. Every company is gonna figure out what their fingerprint is, so to speak, in that space, to figure out where do we want AI and where do we want people? And where you put people might actually end up being one of your competitive advantages from an experience perspective, but also where you use AI could also be a competitive advantage from an experience perspective.
(39:37):
Because not everybody even gets feedback, right? And you don't, nobody has enough recruiters to provide feedback to all applicants. So AI could be perfect at doing that and filling that gap that we can't accomplish with people. It's just a few of my thoughts on that subject.
David Rice (39:50):
somewhat underestimate how much of work is mechanical in this particular case, in the case of recruiters, right? I think, you know, it can handle quite a bit, but the portion in that, it's still, you know, where does the human add value? It's the judgment, it's storytelling, empathy, that kind of thing. And I think it's weird, because, like, the role could persist because of
(40:11):
identity rather than utility. But what is the value in that? I think the future value is what you mentioned there. Not every candidate gets feedback, but candidate experience would be a lot better if that's what they were getting. And so, like, I think maybe reshaping candidate experience, and thinking about how you use it to do that, is a really high-value use case.
(40:32):
It's more like, not, is it gonna replace me; it's, what part of my work deserves more attention, and how is this gonna allow me to do that, is really what it is. I come to this realization in my own work, right? Like, I'm realizing, oh, actually maybe some of that stuff that I was really attached to in the past, or that I spent a lot of time doing, I don't really have to do it that much.
(40:54):
I can use these prompts, get this result, and then I can just kind of pour more into things that, like, I always wanted to give more attention, the things that actually need my specific attention, you know, versus what anybody can do. And it's like, okay, well, now I finally can. So it's finding those moments and the opportunities to do that.
Glen Cathey (41:14):
I think there's an opportunity to not just look at the existing recruiting lifecycle, or even talent management lifecycle, and say, where can we apply AI? That's legitimate. I also think we have an opportunity to reimagine things, which means I think you do have to wipe the slate clean at some point and say, now that we have these
(41:36):
additional capabilities, how would we design this experience? How would we design this process? That's a different thing than saying, here's our existing process and we're inserting AI. Now it's, well, if we were to reimagine it, what would it look like? And that to me really captures my imagination, because there are already companies thinking about doing that.
(41:57):
But as time moves on, more and more companies are gonna be forced to say, yeah, we can't just keep doing things the old way and inserting AI and it will work. It's not like it won't work. But it won't be, I think, the full realization of what could be until you kind of say, maybe we do have to rethink everything, because we have completely new capabilities
(42:17):
we didn't have before. And it does change things. So it's not just insertion of tech into our existing process. It, I think, also involves the re-imagination of what a new process should look like with our new capabilities.
David Rice (42:28):
Right. Well, Glen, I want to thank you for coming on the show today. I really appreciate it. Thanks for giving us some of your time and your insights.
Glen Cathey (42:37):
Thank you.
David Rice (42:39):
If you haven't done so already, head on over to People Managing People, create that free community account, and get signed up for the newsletter. We'll see you next time.