
March 17, 2025 39 mins


AI is transforming talent acquisition by automating repetitive tasks and allowing recruiters to focus on strategic partnerships with hiring managers. We explore how implementing the right AI solutions can dramatically improve time-to-fill, candidate experience, and quality of hire.

• AI implementation should start with identifying specific business problems to solve rather than adopting technology for its own sake
• Look for highly repeatable processes that consume significant time—screening calls can take 30-50% of a recruiter's workweek
• AI screening tools provide 24/7 availability for candidates, especially valuable for roles with non-traditional schedules like healthcare
• Quality of hire improves when AI can screen entire candidate pools simultaneously rather than sequentially
• Four compliance pillars: data security, legal considerations, internal governance, and model integrity
• Only 30% of workers fear AI replacing their jobs, while 45% recognize AI proficiency is critical to job security
• Automation doesn't eliminate recruiter jobs—it elevates them to focus on higher-value strategic activities

===========================
Links & Mentions:
===========================
➡︎ U.S. Workers Are More Worried Than Hopeful About Future AI Use in the Workplace
➡︎ 73.6% of All Statistics Are Made Up
➡︎ The Future of Hiring: Integrating AI into Recruitment Strategies

===========================
Connect with our Team of Huemans:
===========================
➡︎ Website: https://www.hueman.com/
➡︎ Podcast: https://www.youtube.com/@huemanps/podcasts
➡︎ LI: https://www.linkedin.com/company/hueman-people-solutions

Don't forget to subscribe to the Hueman Resources Podcast Channel for more valuable insights on talent acquisition, recruiting, and workforce planning and management.

Visit Hueman.com to learn more about our recruiting services.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:06):
Welcome to Real Talk on Talent, a Hueman Resources podcast where we talk about talent acquisition, recruiting and all things hiring.
Hey Dina. Hi Hilary, welcome back.
Thanks, pleasure to be here.
So this is actually a really unique podcast.

(00:27):
It is, for all the reasons. This is our first time having a guest and we're super excited about it.

Speaker 3 (00:34):
Of course, we've got to start out with introductions. I'm on the staff at Hueman People Solutions, and I am also our executive sponsor for our AI implementation and strategy projects and programs.

Speaker 2 (00:50):
Woo, see, fancy. But can you tell us what that actually means as it relates to AI? Because that's why you're here. We want to talk about AI and all of those elements. Dina and I have attempted this discussion before we got here, so be very generous. Yeah, yeah. So, great title, thank you, but what does that mean?

Speaker 3 (01:09):
It means that, as Hueman has gotten further and embarked on our AI solutions: how do we incorporate it into our processes and how we work, both internally and client-facing? How do we do that? What are we putting in place? What tools are there going to be?

(01:30):
How do we use it? What do we think the impact is going to be from that? And then, how do we set up all the structures around it so it's used in an appropriate way?

Speaker 4 (01:39):
So, you know, Hueman is unique in that we are recruiting 40,000 individuals every year on behalf of our partners, almost every industry, almost every job function you could imagine, with various volumes. How did you approach where to even start with this?

Speaker 2 (01:56):
It just seems like, well, not only that. Not only the amount of recruitment and the type of recruitment, but AI is this kind of ubiquitous topic, right? It's like, what is AI? We've talked about this, ChatGPT.

Speaker 3 (02:09):
It's taking over the world. So walk us through that first conversation when they're like, hey, we know we need to do this. It's funny that you say that, because that is something we hear a lot in conversations that we've had with clients and partners: we want to do AI. And people have said that we have AI. What does that mean, exactly?

(02:33):
And we'd like to do it, and what does that mean? And we have tools and it's underpinned, so how are we actually using it? I think about AI, at least in our context, as two different types of tools. Okay. There's either an AI tool for automation, so ways in which we can make what we do faster, more efficient, better,

(02:55):
Yep. And then there are AI data processing and insight tools. Okay, data processing and insight tools. Okay. So, you know, how can we take the data that we have and learn from it, see patterns, get suggestions in a way that we don't today, because we aren't naturally connecting all of those data dots.

(03:16):
Yeah, okay. The one that we started with was around automation.

Speaker 2 (03:20):
Okay, so was there any reason that you picked
automation first?

Speaker 3 (03:23):
We were really focused on, and this comes back to, what's your vision, what's your goal? What's the problem that you're trying to solve?

Speaker 2 (03:30):
with AI.

Speaker 3 (03:31):
AI. It's fun to have AI and say that you're AI-enabled but, like anything else, it's a tool, and so what's the issue that you're trying to solve?

Speaker 2 (03:39):
Yeah, I think that's a really interesting callout, because so often, and I say this all the time, a lot of people want to go tactics before strategy. I've made this criticism of companies where there's this race to AI integration, where it's like, oh, just plug AI in there. Sometimes it's the right fit, but sometimes it's also just so you can say you're AI-enabled. That's my interpretation of it, and so I love that idea of

(04:02):
saying, okay, we could have gone 17 different ways, but the goal was really thinking about efficiency and how a tool could tie into that process element. Yes, data analysis could be that piece, but for us today, that was not the business case that we wanted to prioritize.

Speaker 3 (04:17):
And, to be clear, data analysis can help with efficiency, and I could argue that all day. We could have gone that direction, but what's the lowest-hanging fruit? The other thing is, what can we do today and where do we want to go, or what's the first thing that we should put in place? And so it is so critical to have a thought of what it is

(04:40):
you're trying to solve, because not only does that change the use case that you're thinking about using it for, but it then informs what types of capabilities you are looking for when you go out to market to see who's my vendor partner and what is it that we need to put in place. Because there are nuances between how different ones work. And so that's where we started.

(05:01):
We took a look. We were saying, okay, the focus for us is we want to get more efficient, and so we're going to focus on the automation side of the house. The second piece we did was we said, all right, AI: one of the things that is necessary for automation to be useful from an AI perspective is it needs to be a highly repeatable task with little variation between times, because

(05:25):
the nice thing about AI, and it's different from robotic process automation, also known as RPA, or just historical automation, like if you think of a phone tree or something, there isn't a whole lot of room for variation. AI does have some room for variation, but you still need a repeatable process.

Speaker 2 (05:43):
But when you think about that repeatable piece and the efficiency, the way you described that made me think of the compliance side of the house, because I think the question we probably get the most, like with our healthcare partners, is okay, there are all of these compliance questions and concerns, huge potential

(06:03):
liability there. And I think one of the benefits of the repeatability is not just scalability but control. Yeah, I would assume. Yeah. So it's like, if you know this is where we want AI to live, it's repeatable and it's scalable, but it's also something where we can navigate the complexity of the decision making.

Speaker 3 (06:22):
Exactly to that point. One of the most important things with AI, well, truly with any tool, but especially true with AI, is deciding what kind of guardrails you want in place. And in order to know what guardrails you want in place, how much risk you're willing to assume and how much variation

(06:43):
you're willing for the tool to make on your behalf, you need to understand what you want it to do. Exactly, to that point, what that is. Yeah, okay. So with the repeatable process, we first looked at what is a

(07:03):
repeatable process, and the second thing that you want to look at is how much time does that repeatable process take within a day in the life?

Speaker 2 (07:11):
Yep, the more time... When we say day in the life, clarify: like a day in the life of the recruiter? Of the recruiter, or whomever you're thinking about? Anything.

Speaker 3 (07:19):
How much time does this take? Oh, see, I'm straight into Hueman's use case right now.

Speaker 2 (07:22):
Well, that is more like... that's nice. You're thinking about AI, but I want to think about here.

Speaker 3 (07:26):
Let's go. We did end up looking, at Hueman anyway, at a day in the life of the recruiter. Yeah, yeah. But for any automation use case, does this take up a lot of time? Human time, specifically. Because if it does, the more time something takes, the more benefit you will get

(07:48):
when it's automated, when a human gets their time back, and so we were looking at where can we find a repeatable process that takes up a lot of time. So then we went to look at Hueman's day in the life and how recruiters were spending their time, and what we found was screening, particularly for specific types of roles, was just an enormous portion, like between 30 and 50 percent of their time

(08:11):
on an average week. And by screening you mean picking up the phone, calling up a candidate, asking them very black-and-white questions about... Everything, from looking at a resume and saying should I pick up the phone and call this candidate, to picking up the phone and calling the candidate. And that distinction is important for us, particularly because when we were looking at what tool and what use case did

(08:32):
we want, there's one tool and type of model that looks at something that is static on paper and parses that and understands how to interpret it, and there's a different type, conversational AI, that can have this interpersonal reaction in a way that feels human-like. Two different capabilities, and so when we were looking at how

(08:55):
is it that we want to deploy this, we were looking very specifically at both of those together, and we found a lot of tools were really great at one of them, yeah.

Speaker 2 (09:04):
That is something we've talked a little bit about. When we think about the capability of tools, when you look at what's in the marketplace today, but let's say the next five years, how much do you think that's going to change? Are we going to pivot from the vendor that we've chosen? Is that something you think they'll evolve into? We're still very much in the infancy of AI technology.

(09:26):
So, from your exposure, what do you think AI tech will look like in five years?

Speaker 3 (09:32):
It is moving really quickly. I think the easiest thing to say is that I definitely can't comment on what the underlying foundation models will look like in five years, but what they will be able to support will change. For example, one of the reasons that we also started with screening is, from our look at the market,

(09:52):
the tools that support screening are a lot more advanced than, say, sourcing. There are a lot of sourcing tools out there, but as we were looking at how high a level we need this to operate at, sourcing hasn't quite hit the same mark as screening yet, but that's evolving incredibly rapidly.

(10:14):
You know, it takes us a while to evaluate a tool and then deploy it. By the time we're done deploying the first one, the second one could be right there with it. So it really is moving incredibly quickly. And I think what I'm seeing is the evolution in how many different scenarios something can work in, like sourcing. What that looks like, and the different types of roles, or

(10:35):
where you're getting them from, or how it's finding those candidates, is evolving incredibly quickly in terms of what the AI capability can support.

Speaker 2 (10:53):
Yeah, that makes sense. Interesting. We talked about the business case. I do want to kind of address the elephant in the room a little bit, like compliance. Well, I don't care about that right now. I do want to talk about compliance, but what I want to talk about is, we talked about the Hueman value, which is efficiency, being able to deliver for our clients at a better level. I do want to talk about the client side, which would be

(11:14):
compliance, but to me, the elephant is what the recruiters think. 100%. So I want to talk about that recruiter benefit because, I mean, we hear it all the time. I think there was a recent study, we're going to have to look this up, where like 50 to 60 percent of individuals believe that their jobs are at risk because of AI, and that's industry agnostic.

(11:34):
If I'm remembering the data point correctly. We might have to bring back the correction section. Oh my God, oh yeah, I think that is a throwback. Although I'm very confident in what you just said, 78% of data points are made up on the spot. Did you not know that? Yeah, I think it's actually 87%. I think you're right. I'm sorry, I apologize. Recruiters.

Speaker 4 (11:54):
Yeah, it's the recruiters and the candidates. And the candidates, interesting. Okay, and specifically the tech that we're looking at, it's clearly AI that is engaging with the candidate. Yeah, yeah.

Speaker 3 (12:07):
When we think about our stakeholders for this, there's clearly Hueman. There's also our partners, and the thing that we find that they are thinking about the most is the compliance side. Then there's our recruiters, who are thinking about how does this impact my day in the life, but also, am I in jeopardy? Is this going to replace me? And then our candidates: how is this interacting with them?

(12:28):
How does it impact their experience? We'll go back to the compliance piece. It's a much larger question. When it comes to the recruiter experience, we firmly believe that this is a tool that can be used by recruiters, not replace recruiters. There's a lot of discussion and there are different takes on

(12:50):
this. Where Hueman stands today is that agentic AI, which means an AI that can do sort of everything from end to end, standing in the place of a person, isn't actually a direction we want to go from a compliance perspective. Okay.

Speaker 2 (13:07):
Interesting. I would say not even a compliance perspective, but the service we provide is so catered to that white-glove, Hueman experience, right? Literally, like literally, H-U-E-M-A-N experience. Yeah, yeah, so a hundred percent.

Speaker 3 (13:21):
True. I bring up the other one because, in terms

(13:54):
of, okay, but you could change your mind on that any day. For there to be a human involved in the decision making, having oversight over this type of tool and activity, we think that the administrative work, the taking notes and loading them in from a screening call, the trying to

(14:15):
schedule a screening call to happen... Things move.

Speaker 2 (14:19):
Funny you mention this, the scheduling of the call, because I knew that was a benefit: with the AI technology we've selected, the candidate can pick their own time for their screenings. That removes it from the recruiter. But historically, that is such a pain point for us that we actually, with our client hiring managers, set up blocks of time that are pre-booked.

(14:40):
So when we're looking to schedule calls and such, we actually pre-build time to make it easier for recruiters. So this is just an extension of, we know this is already a pain. Yeah, let's make that easier.

Speaker 3 (14:51):
Well, and there's scheduling in two ways, right. So there's scheduling to have the phone screen with one of our recruiters, and then there's, after we've moved a candidate along, scheduling between the candidate and the hiring manager. So we have scheduling sort of in two places. That is a huge pain point and a huge use of time, and so the goal here is that less time spent doing that is more time

(15:15):
spent really understanding candidate qualifications and having those relationships with our hiring managers to be able to move forward candidates who we think are the best fit for those roles, and so that really puts time back in the recruiter's day to be able to do those higher value activities

(15:35):
and spend less time on the stuff that no one likes to do anyway. Yep, yep. From a candidate perspective, Hilary, you touched on this right out of the gate with scheduling. Yeah, yeah, AI is available 24-7. It doesn't sleep, it isn't tied to a time.

Speaker 2 (15:55):
Oh, that's why the robots are taking over.
Here we go.

Speaker 3 (15:58):
And so, you know, for us, where we work a lot in healthcare, right, you might be hiring a nurse and they may have just gone off a 7 am shift, or, you know, they have a day shift and are only available in the evening. A recruiter may or may not be available to take that recruiting call at that time. This means that a candidate can take that screening call

(16:19):
whenever is most convenient for them. They can take as long or as short as they want and really fill in everything that they want to say, and that goes back to the recruiter for review. And so, we think that we hadn't been able to be flexible or to think about the

(16:39):
candidate experience in that way in the past. We now can, and so it's a totally different way of thinking about candidate experience.

Speaker 2 (16:46):
So, Dina, I want to ask you this. Because your world is very different. We're implementing AI on the RPO side right now, correct? Yes. And you live in that direct hire world, with a very different type of structure? Yes. When you think about this, or the conversations you've had, where do you see this changing your conversation with your

(17:06):
clients?

Speaker 4 (17:07):
Yeah, so, gosh, great question. So I think, for us, I don't know how this is going to change my conversations with the clients, because we plan on adopting some of the same technology, and everything we're doing is under the Hueman brand. So I think it's probably more in our sales pitch, to be candid. It's telling

(17:28):
clients, being able to have this tool. At our gig level, we have AI which makes us more efficient, which makes scheduling your interviews easier. So I think, leaning into how it makes us more efficient, able to identify candidates, a little bit.

Speaker 2 (17:42):
You think you could hire recruiters more easily because of this? Could I hire recruiters more easily? Like, is that a part of our employer value proposition now, to say we remove the burden of administrative work?

Speaker 4 (17:55):
So I will be so interested to see what our internal adoption is of this platform. I'm excited for the launch that we have and kind of the rollout, but I think the internal adoption is really going to tell us how we need to position this for recruiters. If I was a recruiter and I didn't have to paper screen all day...

Speaker 2 (18:12):
I guess that's a question, because we've seen... What's the response been of our people and our candidates with our pilots?

Speaker 3 (18:18):
So far it's been really positive. Obviously, with any rollout and new technology, there are always sort of bumps in the road, but we've been able to solve them really quickly and the response has been really great. We're actually excited. In a couple of weeks we're doing a panel with a couple of our recruiters and the teams that have launched it so far, and

(18:43):
we'll be able to hear directly from them on what their experience has been to date. And on the direct hire side of the house, we're really excited from an AI perspective because, to this highly repeatable process piece, one of the things that makes AI valuable is high volumes, and in direct hire, or for the partners that we typically service in direct hire, their volumes may not be as high, which might make it harder

(19:06):
for them to use AI on their own for it to actually be valuable. Now, through Hueman direct hire, or through the tool that we can offer, for example, across a PE portfolio company or, sorry, across a full PE portfolio, we can now get to the levels of volume that they might need to actually be able to leverage AI.

Speaker 2 (19:29):
Yeah, and that's an interesting pivot into that client side. So will you talk to us a little bit about the value that our current clients are seeing or hoping to see, and that could be efficiency but also beyond. And then those conversations around compliance. I know you've mentioned before that different types of organizations have different compliance concerns, so I'd just

(19:49):
love to hear kind of your insight into the discussions there.

Speaker 3 (19:52):
Sure, so I'll start with the values that we think that this brings for our clients. One obvious one is increasing time to fill.

Speaker 2 (20:06):
Yeah, that is quite the pitch, Emily. We'll make your time to fill larger. Crushing your job today, keep it going. Love it, love it. AI tools for everybody.

Speaker 3 (20:17):
We... so, making the speed to fill a lot faster, yeah. So that's one obvious one. And how we get there is also, I think, fairly straightforward. You're able to screen candidates faster, schedule those calls faster. It just gets a candidate through that pipeline a lot quicker.

Speaker 2 (20:37):
And one of the things I do want to clarify is, in this, we're not kicking out, we're not deleting candidates. This is not limiting their talent pool. Everyone who applies still lives within their system. This is just a way to be able to review and bubble up talent in a more effective way.

Speaker 3 (20:50):
Correct, and truly it happens faster. If you think about the time it takes for a human being to look at a resume and scan through it, it's at least a minute. Yeah. Truly, the tool can do it in seconds, and so just the order of magnitude is enormous. Same thing with a screening call. Right, if we can do screenings after hours and scheduling

(21:13):
automatically, that means that the screenings have already happened when a recruiter comes back to work the next day, and now they can review all of that instead of then conducting all of those calls during the day.

Speaker 4 (21:24):
I mean, if you think about it from a recruiter perspective, if I post a customer service job at 5 pm, I'm going to come into work at 9 am the next morning and I'm going to have 100 candidates in there, and that is my entire day. Yeah, and I'm lucky if I can get 10 of them on the phone. That's a really great example. So you're getting rid of that entire part. Here it is.

Speaker 3 (21:43):
These are candidates who have been preliminarily screened, vetted, and I mean, game changer. Instead of coming in, posting at 5 pm, coming in at 9 am and having to now go through all of those, try to schedule, get a couple of them on the phone, I come in at 9 am and I already have 20 of those screens done

(22:03):
and now I'm reviewing those instead. The second piece that is a huge benefit to our clients is we think quality of candidate will also increase. Because if you think about that speed in reviewing all of those resumes and applications, you have a natural lag where you might go

(22:26):
through the first 10.

Speaker 2 (22:27):
100%, and we find five and we pass them on. They hit your desk, you look at them, pick the best ones of those top 10.

Speaker 3 (22:33):
Here's who satisfies those, and so let's pass them on to the hiring manager. Now we review all 200 at once.

Speaker 2 (22:39):
Yes, you know, it's interesting because I hadn't even considered that element of it. And, to your point, you are going to end up with a higher pool of candidates from which you can find quality, but it also opens up a more equitable screening opportunity. And I know that compliance and bias are things we've talked a lot about as it relates to AI, but I hadn't even

(23:02):
considered that. If AI can do a consistent screen of all 200 candidates who hit your pipeline, then you actually are giving all 200 people a fair shake at that assessment, as opposed to whoever were the first 10 in the door getting the most opportunity, and then the next five, and maybe the next three. I hadn't even considered that.

Speaker 4 (23:24):
I'll be interested to see candidate responses to this, because ultimately, we are going to be handling and addressing more candidates. How many candidates right now complain about ghosting? A ton of them. Everybody complains about it. So, in theory, ghosting is going to be... it'll be gone. However, what that means is you're now engaging with an AI

(23:44):
technology as opposed to a recruiter, so there's a trade-off. You're going to get your information, but it may not be with a human.

Speaker 2 (23:50):
So I'll be interested. But I do want to say the human, the person, the recruiter still has to decide whether they're going to move forward or reject. Correct. So you could get a response but then not know if you're actually being considered or moved forward. So there's still the onus on the recruiter to make that decision of, yeah, you had the time to take the call and talk to our AI assistant, but you still... I agree

(24:13):
with you, but I think we can't remove the responsibility of the recruiter. Yeah, yeah, no, I didn't think about that either. You get more of a direct response.

Speaker 3 (24:20):
Well, and some of what we've heard so far is both. We have seen an improvement in time to fill and in the quality of candidates. Where we've been able to look at this, and where our technology partner has been able to look at this, we've seen an improvement in retention, because the quality of the hire is a better fit for the role, based on that ability

(24:42):
to look across a larger candidate pool.

Speaker 2 (24:46):
Is that the first, like, 60, 90 days? I would have to look at the specifics, but we're not looking at long-term retention yet because we're still early. It'll be a bit.

Speaker 3 (24:54):
Yeah, yeah. On retention, one of the challenging pieces of measuring it is that you have to wait a long period of time. Correct. People who were going to leave have to leave for us to know that somebody stayed instead, but we have some leading indicators that show that this is actually helping with alignment. The other piece that is a potential indicator of quality is the submission-to-hire ratio.

(25:16):
So as we put forward candidates that have been screened for interview, a higher percentage of the candidates who are being put forward then ultimately get hired.

Speaker 4 (25:28):
Love that yeah.

Speaker 3 (25:31):
On the candidate experience side, so far we've had really positive reactions. Same thing with our technology partners. So we have seen like four and five star ratings out of five, some feedback on the candidate experience across different roles, and within healthcare, where sometimes

(25:53):
people, I think, are particularly nervous about different roles and how they might respond to that experience, and we've gotten some commentary back. I think we had one, I particularly liked this one, that was like, it was a seven out of five experience, which I particularly enjoyed. Very specific rating.

Speaker 4 (26:10):
I think that goes to the increased quality of the service that AI is providing, because if you look at the AI chatbots a year ago, or not even chatbots, the, what do you call it, conversational AI, if you look at that a year ago, it wasn't great. But the tech that we have is absolutely incredible, and so it just continues to evolve.

Speaker 2 (26:31):
It's like, a year ago it was like talking to HAL from 2001: A Space Odyssey, and now it's like an actual... It's like talking to Emily. Emily actually is AI.

Speaker 3 (26:44):
She is a robot.
It is so fun to play plot twist clips for people.

Speaker 2 (26:49):
Oh, it's awesome.

Speaker 4 (26:50):
Yes.

Speaker 2 (26:50):
But also, this is where, sorry, it is, because this is where I see the recruiters get the most nervous. Oh, you're 100% correct. When they're like, oh, it is almost too cool. So that's why I wanted to start with the recruiter, saying, yes, it is unnerving, but think about the fact that you don't have to have all those phone calls. Like, this is now...

Speaker 4 (27:08):
Yeah, it is completely transforming the day in the life of a recruiter, and it is just giving an opportunity to upskill.

Speaker 3 (27:20):
And I think the best way to think about it, and the way I encourage recruiters to think about it, is, first, it's coming regardless, so how do we deal with it? But then, after that, what it means is less that it is replacing my role and more that the role of a recruiter might evolve, and so what it looks like to be a recruiter might just shift from a little bit of some of the activities we

(27:40):
know today to a different set of activities which, in my opinion, are much higher value activities, or a larger percentage are those higher value activities. I agree.

Speaker 4 (27:49):
This is, if we layer on our favorite TA maturity cycle model, because we love the Zena Lloyd maturity cycle thing. I mean, when you look at kind of the pinnacle of a recruiter, it is not that you are a person who is paper screening resumes and sending them over; it is that you are a strategic partner to your hiring manager. How do you become a strategic recruiter?

Speaker 2 (28:09):
A recruitment partner. Like, you are bringing candidates to your hiring managers, to your business. Yeah, you can't do that if you spend all day... Exactly, if you're just pushing paper. Yeah.

Speaker 3 (28:18):
Well, one of the things that ultimately this AI... I love it, just want to say.

Speaker 2 (28:23):
Love it. Hurrah to Emily.

Speaker 3 (28:35):
One of the things that we'll ultimately be able to do, to be a strategic partner to both candidate and client partner, is, let's say, you have a candidate come through and they're not a great fit for the role they've applied for. Well, the AI knows the rest of the roles that are open, and it knows what screening questions and what sort of criteria the partner is looking for for that role, and it can say, I don't think this is going to work out for this role, but we

(28:55):
have this opening over here. Would you consider that one? And so this ability to recycle talent into a place that's actually a better fit for them, better fit for both candidate and client partner, I think, is another major ability of this and value. You know your book of business.

Speaker 4 (29:15):
You don't necessarily know what your fellow recruiter is working on, and then you leave talent on the table.

Speaker 2 (29:20):
Well, you know your book of business, but sometimes you forget what's on your to-do list when you're knee-deep in doing the work. I know that we've... sorry, I want to... we have to hit compliance, because I know we could do this forever and I'm just keeping an eye on time. Compliance, just give us the 101.

Speaker 3 (29:39):
It is. I thought this is where you were going earlier with the 100-pound gorilla in the room. It's really a scary thing to a lot of partners, and it's one of the reasons why people hesitate on AI. The first reason is, I don't know how to use this, or where to put it, or how to deploy it.

Speaker 2 (30:03):
The second is, oh my God, there's a lot of risk associated with this. And I think you should have a healthy concern about compliance, because you have to take it seriously.

Speaker 3 (30:08):
It is totally a valid question, and it's something Hueman takes really seriously and that we have been deploying in lockstep, in parallel, with our development of the solutions that we've put into place. When I think about compliance, there are four pillars I really think about. The first is data.

(30:29):
In order for AI to work, it functions off of data, and so what is happening with the data that it's ingesting? A lot of it, particularly in the talent space, is PII. It's personal information, and there's a lot of regulation around personal information: where that's housed, how candidates have access to it, right to privacy. Right to

(30:51):
privacy, exactly. And so what are you doing with that data, and how are you keeping it secure?

Speaker 4 (30:57):
Yes.

Speaker 3 (30:57):
Okay.
The second is around legal, and I think a lot of people have question marks on this, particularly because it's an evolving landscape. Unlike a lot of other places within talent, this is not a particularly mature legal field. We don't know which direction particular cases will go. There are new laws coming out in different jurisdictions all the

(31:19):
time, and so how are you monitoring and knowing that the way that you are using the AI, because that's part of it as well, it's not just what the tool is, but how you are actually using that tool, applies in a particular case? And so, for us, we have a partner who's actively monitoring it on a global, national and local jurisdiction level to let us

(31:40):
know both how things are evolving, is there a new law that's coming into play, and how is what they know about our use cases relevant for that, and where are there risks associated with something that's changed? And then also, what adjustment do we need to make in order to be able to remain in compliance with how that law is being presented?

Speaker 2 (32:01):
So we've actively pursued an expert in this field of AI legal, who has this broad understanding to help guide us and interpret, and we have that ownership of AI for our business case. But then we say, look, we can't possibly do this on our own, so we make sure that we have someone to help be that guard in

(32:22):
that way.

Speaker 3 (32:23):
As anyone in compliance knows, and I'm sure a lot of our healthcare partners know this as well, it's ever-evolving, and so it's not just about getting it right once. It's about making sure that it's continually right as things evolve and change. The third place that I would think about when setting up

(32:44):
governance and compliance is around your internal governance. What policies are in place? How are you evaluating the tool usage? How are you handling any kind of issue that may arise? And, you know, at what frequency are you testing your AI to make sure

(33:04):
that it's operating the way that you expect it to be operating? Everything around that: what is your internal policy and governance? And then, how is your team trained to understand, this is our policy, this is our governance, this is how we expect to use the tool? And the final piece, and this piece, I think, is part of what diverges from traditional data or IT security in particular, is the underlying

(33:27):
models and the use case. How are you using the tool, and how does the risk that you are taking on change with the way that you're using the tool? For example, if the tool makes a decision, it's a lot higher risk than if the tool presents information that impacts a

(33:47):
decision, and so, really, an important distinction, I think. And then, similarly, to the underlying model piece, how is it trained? Is it using real candidate data or, sorry, synthetic data? Or is the way that you are training it evolving in real time?

(34:08):
Or are any changes to the model something that, I would call it a suggestion, and it doesn't get deployed unless there's a human intervention to say, yes, deploy that change? Got it.

Speaker 2 (34:19):
Yeah, got it.
So give us, in short, those four pillars again.

Speaker 3 (34:22):
All right, so we've got... I'm going to do these not in the same order. That's fine. Governance. Legal.

Speaker 4 (34:28):
Okay.

Speaker 3 (34:31):
Use and underlying model.

Speaker 4 (34:33):
Okay.

Speaker 3 (34:34):
And data. And data. And so there's a lot to think about there, which is why one of the things around AI, for I think some of our partners, is so scary. The nice thing, when you have a partner like Hueman, is that we've done all of that and it falls into place. You don't have to think about it.

Speaker 2 (34:52):
This is all you've essentially done for the past
little while.

Speaker 3 (34:56):
There's just so many little components on that one. There's a lot to think about there. That's awesome. And it's as small as, when you set up the AI, you know, it calls a candidate, the candidate picks up, and it says, hello, just so you know, this is AI, right? Little things that need to be in place to make sure that you are compliant with a

(35:18):
variety of different laws, security, transparency, et cetera. Totally.

Speaker 2 (35:24):
I would love to keep rolling on this, but I think that we should probably wrap it up.

Speaker 3 (35:29):
My job is not yet automated by AI, so... Sure.

Speaker 4 (35:33):
I'm just hoping to get there. I mean, I'm just... wait, I just need a moment. I'm just so impressed right now. Like, wow, Emily.

Speaker 3 (35:41):
We've... Thank you. Wow, we've come a long way, and the team, it wouldn't have been possible without how much

(36:08):
work that they have put in, and all of them have other responsibilities as well, so they have been just an incredible support to this project.

Speaker 2 (36:20):
That's awesome, very cool.
Final thoughts.

Speaker 3 (36:25):
What is the most important thing you want to tell us about AI that we've not asked about? I think it's exciting, it's evolving quickly, and the biggest hesitations come from how much mystery is around it. So the thing I would like to say to our client partners is, if you find yourself hesitating, ask the questions that are making you hesitate, because it's probably something

(36:47):
that made us hesitate as well.

Speaker 2 (36:49):
We've pursued that to understand what the solutions are. Or, I would even say, and you got me thinking, sometimes you don't know what questions to ask, so we can send Emily in to be like, here are our pillars, let's talk about them.

Speaker 4 (37:03):
Yeah, I think framing up the four pillars is super helpful.

Speaker 2 (37:08):
We're doing a live correction section, Dina. Correction section! I have missed that. It has missed me.

Speaker 4 (37:14):
Emily actually knows how to sing now.

Speaker 1 (37:24):
Yeah, but we're not going to ask her because I like
the octoon.
Maybe the headbutt nailed it.
So, Hilary, your guess was pretty close to how many people think their jobs are threatened by AI. Okay, it's actually a little lower, though. Sorry, I went doomsday, 60 percent. What's interesting is, 30% of workers fear that AI might replace their jobs. And what's interesting is, people are pivoting into

(37:45):
accepting that personal proficiency in AI is actually critical to their job security.

Speaker 2 (37:52):
Oh, interesting, interesting. Okay, interesting. Wait, wait, wait. Well, say those numbers again, because, first of all, I was way off on the 60%. Thanks for trying to throw me a bone on that one.

Speaker 1 (38:02):
So 30% actually fear that, and 45% want to become better in AI. Now I'm going to contradict those stats by saying 73.6% of all statistics are made up.

Speaker 2 (38:16):
There we go, there we go. That one is official. Boom. But I do think it's interesting. The most interesting thing about that, I think, is that we see this decrease in "oh, AI is going to take over my job" and more "AI is a threat to my job if I don't know how to use it."

Speaker 3 (38:35):
Yeah, which, I mean, we could play that over and over again in what we've seen in the last 20 years, right? Computers? Oh, absolutely. Yeah, I think it's just a continued evolution in the way that we work.

Speaker 2 (38:48):
I still use carrier pigeons for my business
communication.
Emily, thank you for being our guinea pig.

Speaker 1 (38:53):
This is awesome. First guest! Appreciate it. All right, yeah, have a great day, guys.

Speaker 2 (38:55):
First guest, I appreciate it. All right, yeah, have a great day, guys. Woo, how do we normally end these things?

Speaker 1 (39:01):
I have no idea, you just go like.
It's been a pleasure to be here.
Bye.