
April 16, 2025 • 33 mins

In this episode, Kyle Lagunas of Aptitude Research discusses the crucial distinction between "human-in-the-loop" and truly "human-centric" AI approaches in HR. Drawing from his extensive 15-year career studying innovation cycles in HR tech, Kyle explains why adoption rates for AI in HR are stalling despite executive support, and offers practical strategies for building AI literacy within organizations.

Topics Covered:

  • The difference between human-in-the-loop vs. human-centric AI design
  • Why HR departments struggle with AI adoption and tech literacy
  • Practical ways to increase AI literacy within HR teams
  • Evaluating AI use cases across impact, risk, and complexity
  • Success stories and low-risk starting points for HR AI implementation
  • Moving beyond the "bias boogeyman" in AI evaluation

Discover how to evaluate AI implementations and learn why many HR departments are starting their AI journey with conversational AI. This episode provides a nuanced framework for HR professionals looking to move beyond fear-based decision making and implement AI solutions that genuinely augment human capabilities.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Changing State of Talent Acquisition, where your hosts, Graham Thornton and Martin Credd, share their unfiltered takes on what's happening in the world of talent acquisition today. Each week brings new guests who share their stories on the tools, trends and technologies currently impacting the changing state of talent acquisition. Have feedback or want to join the show?

(00:21):
Head on over to changestate.io. And now on to this week's episode.

Speaker 2 (00:27):
All right, and we're back with another episode of the Changing State of Talent Acquisition podcast. Super excited for our next guest, Kyle Lagunas, currently at Aptitude Research, with an incredible background that I'll let Kyle share a little bit more about. Kyle, we'd love to hear a little bit more about your career journey from Aptitude and, more recently, your role as the founder of the human-centric AI

(00:48):
company.

Speaker 3 (00:49):
Yeah, I've got a really weird job. I absolutely love it. I think it's super cool, but I had no idea that it existed, and I wouldn't believe it if I didn't have this day job. But so my job is to study innovation cycles in the world of HR and tech, and here at Aptitude we focus exclusively on

(01:10):
trends that span human capital management, from talent acquisition to talent management, learning development, yeah, you name it. And I've been in the biz for almost 15 years, so since I was three years old.

(01:30):
But I started as a blogger in the space, as a Gen Y that was struggling to find gainful employment. I had a lot of opinions and issues that I wanted to suss out with HR. But as I got into it, I started to realize that some of those issues and frustrations that I had were rooted in real problems and not just a lack of care, and I got hooked.

(01:52):
So now my job is to try and help people navigate some of the crazy stuff that's going on, some of the wild innovations that are coming through, and, yeah, just to kind of find a path forward. It's a pretty cool job. I absolutely love it.

Speaker 4 (02:07):
Well, welcome, Kyle, we're thrilled to have you. You know, I guess one question I have just to start, because you were saying your job, which you didn't know existed perhaps before this, is to study innovation cycles. And on a recent episode of ours we were talking about AI, specifically as it relates to the Gartner hype cycle, which I imagine you're familiar with, and we

(02:28):
were struggling at that time to understand whether the Gartner hype cycle had much to offer in terms of insight about the moment we're currently seeing with AI. I know that's potentially a big question, but I'm just wondering if you could comment, if you have any thoughts about that.

Speaker 3 (02:40):
Yeah, so I haven't actually looked at the most recent Gartner hype cycle, but I am familiar with the model. I've looked at it before, and look, there is, I think, an abundance of hype around the opportunity for impact that AI has in the world of work, and especially within HR functions

(03:01):
and processes.
But all of that hype is, like, I mean, it is just hype if we don't know how to make the most of these capabilities. And I really do see more pressure to pursue opportunities than I see, like, real confidence and intentionality. So I'm not, like, the hype police, but I'm personally a hype

(03:27):
police adjacent.

Speaker 2 (03:29):
Yeah, well, I do think, like, you know, we'll pick on Gartner one last time, and then I want to dive into some of these pieces. They also just put out a study, and it was, hey, what percentage of CEOs are comfortable talking to or going to HR leaders for strategic guidance as it relates to AI? And the answer was less than 1%, and that is wildly scary, right? A, if you're an HR leader, but also B, if you're an HR

(03:54):
leader that goes to some of these orgs for strategic guidance in AI and innovation. And so I think there's a huge opportunity for HR, and I think that's part of the reason why we're just super excited to have you on the episode too, Kyle.

Speaker 3 (04:07):
There is an opportunity, but I think that there's also... I mean, like, dude, that is like a gut punch, isn't it? Like, less than 1%? But, candidly, why would people come to HR to understand what's going on with AI? I think that, I mean, look, I've been doing this for 15 years, which I think is a long time.

(04:28):
I've seen a lot of tech trends come and go, and through it all, the one constant is that HR as an organization, and I'm not saying no one in HR knows anything about tech, but organizationally, like, functionally, most people got a pass for not being very tech savvy or tech literate, and that is coming back to bite us now, where we are.

(04:51):
There are a lot of questions about how these capabilities work, what they can and can't do, what our organization's, what our company's, approach is to the use of AI. And, yeah, I think that HR is starting from much further behind than maybe our colleagues in sales and marketing and

(05:12):
operations.
I think that we still have a chance. We still have a good opportunity to catch up, but it's going to be a sprint to that finish line.

Speaker 4 (05:21):
Well, Graham will tell you, don't get me started on talking about HR folks and their tech savviness or data centricity. I think all of our audience has heard enough about that, but I'm glad to see that we've got a like-minded guest on the show today, so that's cool. I wonder, you know, some of what you said, Kyle, really resonates with me, and perhaps with a lot of our

(05:43):
audience, which is, I've only been in this space for, what, five, six years now, Graham, and I feel like we've been talking about AI since I started, since I first entered the space. It's something that had so much promise. It seems to have so much promise, it still does, but it just seems like the moment is pregnant with possibility, but nothing ever arrives, or at least that's my sense, other than what

(06:05):
might be described as automation, better and better automation. And I think one of the challenges I have with it is just not knowing, not having language, and not being able to get more nuance in having a conversation about it, which brings us to the recent report you did, which I thought was amazing, called Rethinking AI in HR: Balancing Innovation, Risk and Human-Centric Solutions. Kind of a mouthful, but what I

(06:27):
really wanted to start with here is, I think you do meaningfully move the conversation forward by making a distinction between what you call human-in-the-loop AI approaches and maybe truly human-centric AI approaches. Those are lots of big words. I'm wondering if you could simplify it, and just, let's hone

(06:47):
in on this distinction that you're making between human-in-the-loop and truly human-centric AI. What is the difference you see there, and how does this inform the current moment?

Speaker 3 (06:58):
Honestly, I really appreciate you for calling me out on it. I mean, because my intention is to create more clarity and to present some more practical, better practices for approaching the way that we think about, utilize, and design AI solutions. And, all right.

(07:18):
So, we say a lot about, like... because one of the things I think people are worried about with AI, and have been for probably the last five or six years with AI, is AI running off the rails and what happens when AI goes rogue. And we've seen a lot of reassurance being offered around keeping humans in the loop.

(07:39):
So, you know, in New York City there's this law that governs what they're defining as automated employment decision-making technologies. So something that is moving somebody forward automatically in an interview process could be considered an automated decision. Right, we are saying, no, no, no, no.

(07:59):
Let's make sure we keep humans in the loop in all of these, air quotes here, moments that matter, and, yeah, we definitely should have that. But what I think I'm seeing is that we are really only crippling automation when we're putting all of these blocks and checks along the way. It's like running fast and

(08:21):
screeching to a halt, running fast and screeching to a halt. We find that implementing some of these easy buttons around, like, offering a relevant matching score to, like, a candidate's application, as an example. We see somebody who's like, oh, they're an A, let me move them forward, I'm going to present them to the hiring manager. I'm not going to actually... that's technically human in the loop. I'm going to see that the AI has proposed that they're an A

(08:45):
match, and so I'm just going to click the button to pass them forward.
Human-centric AI solutions are actually designing the implementation of AI specifically to augment the human work that does need to remain human, and automating the

(09:06):
rest of the work that can be repeatable, that we can trust AI to do, and I'll give you an example. So if we are looking at, let's stay with matching and scoring as an example. It's a really popular use case for an efficiency gain and moving faster in the recruiting process. It's also a really great solution for being more

(09:27):
data-driven in recruiting processes. But what we've observed is that a lot of people are, like, that human in the loop is not really doing any quality control. You give somebody an easy button, especially when they're inundated, if they're overburdened, they're just going to click that easy button, right? With a human-centric solution, I'm actually going to...

(09:47):
I'm going to say, I'm going to look and see: for, like, our conversion rates for A and B candidates in this job type, like, the submission-to-acceptance ratio, is 90%. So I'm actually going to design this in a way that, like, I'm going to go ahead and have the AI that has scored somebody as

(10:08):
an A or a B candidate, I'm going to invite them to a screen, I'm going to move them forward to screen, because the margin of error here is pretty good. And I'm still going to let humans go in and move anybody else forward that they want to, but I'm going to unblock that top of the funnel and just keep people moving. The reason why that's more human-centric is it is enhancing and

(10:31):
augmenting that human worker, and it is repeating the tasks that already occur. It's also, I think, giving us, like, a dual approach. We are immediately activating on the information that we have, we're making data-driven decisions, and then we are also still maintaining human interaction in that process.

(10:52):
And so I can still go through, and recruiters love to find those diamonds in the rough, they can still do that. We can flag a candidate, or mark a candidate, that has been moved forward by the AI versus a candidate that's been moved forward by a human, and we're going to make sure that between the two of us we've got everything covered, and I think it just helps us move faster. But it doesn't put everything on the recruiter.

(11:14):
They don't have to go through and approve every single thing. Instead, they can focus on those diamonds in the rough or those differentiated candidates, the underdogs, et cetera. So, yeah, it's a long way of saying, like, human in the loop can be quality control, QA, I guess, like, limited QA. Human-centric is actually, like, doing that deep process mining and

(11:36):
journey mapping to find where is the best way to use AI here to achieve a greater outcome, not just improve efficiency, but to have a better outcome without displacing the human worker.

Speaker 4 (11:50):
Interesting, interesting.
Well, it's appreciated to have a more nuanced take on this and help us with some language here, which I think we sorely lack as an industry. So what I'm hearing from you is almost a paradox, which is often what you find when you have powerful ideas, I think, which is this idea of human in the loop, which sounds incredibly human-centric. There's a way in which, because maybe we don't trust AI, or it's
There's a way in which becausemaybe we don't trust AI, or it's

(12:18):
just a black box and everyone'sa little bit scared, maybe
there's some fear of having ourjobs taken.
You know, all these thingsconspire to creating these human
in the loop approaches wherehumans, they're really
functioning as needlessgatekeepers.
You know the AI can sprintahead, but then it has to stop
every 10 yards to check in withthe human and make sure it's
still doing.
We're still feeling comfortablewith it and I think maybe, if
I'm understanding you correctly,what you're saying is and

(12:40):
obviously it's case by case, we're talking about screening here, but in the case of screening, there's a strong case to be made that the better, more human-centric design is actually to remove the human gatekeeper at a lot of those checkpoints.
Is that a fair summary?

Speaker 3 (12:52):
It is, but you know what, like, as you're talking, a metaphor that comes to mind is, in this example, we are not keeping humans in the loop, we're putting humans into an assembly line, right? And, like, the machine's going to, you know, move the product to us, and then we're going to put some bolts in it, and we're going to feel good that that was a

(13:12):
human-built, like, outcome. Instead, we could be using AI to move faster and smarter and keep our capacity open. It's like, do I want to remain in the assembly line, or do I want to step up and be more strategic in every single rec that I'm managing, do you know? Like, the work that I do becomes different, more elevated

(13:35):
, because I'm not just clicking approve buttons and not making any real decisions.

Speaker 2 (13:41):
Yeah, I think that's great, and, Kyle, I'd say, like, you know, that's probably a good, you know, kind of segue into another piece of the report, and that's, hey, like, there's a lot of executive support for AI in HR. It's probably never been stronger, but adoption rates are stalling, and so, much like, you know, we're running into, you know, barriers or checkpoints with using AI.

(14:01):
You know, I think we're, you know, running into similar hurdles when it comes to adopting AI. And so, you know, what do you think is the disconnect between HR leaders and executive support, and, like, you know, why are we seeing adoption rates for AI, you know, stall?

Speaker 3 (14:18):
Yeah, there's... it's a great question, Graham. I think there's a couple of different layers, at least that I'm observing. The first is the most fundamental, which is that there is a lack of trust in AI amongst most of the rank and file, and even leadership, in a lot of HR organizations, and the lack of trust comes from concerns, fear of, what can, you

(14:42):
know? We mostly hear what goes wrong. I mean, you still hear about the Amazon example of, I mean, we're going to stay with matching and scoring for a second, because it's just such a great use case. The Amazon example of, they built a machine learning algorithm that was going to help them screen candidates, like, or

(15:02):
help them to funnel through applicants, right? And we still hear about it only moving forward men, and that was, like, 12 years ago, nine years... it was a long time ago.
And so we are really stuck on what we're afraid of, and that's okay. I mean, that has been, like, one of the core principles of HR: to manage and mitigate risk

(15:24):
in our workforce, to, like, avoid risk and create, like, you know, more equity in our employment practices. I totally get that, but a lot of that fear and concern is also stemming from what is a lack of literacy, a lack of functional understanding of how these capabilities work and how they

(15:46):
don't, what these things do and what they won't, and how we can actually build meaningful governance, even at the use case level, that will make sure that the guardrails are high enough that this thing can't go off them, right. That in very practical ways, by the way, I'm, like, very drag-and-drop rule setting, like, we

(16:07):
are not talking about having to be, like, an AI, like, IO. Instead, you can just identify some of the rules and boundaries you want to set on these things. But we don't know that. I don't think that we quite understand that, and so instead we do what we usually do when we're faced with something risky: we just avoid it, and, you know what I mean, and that's what

(16:31):
we've done, and so it's a hard habit to kick, because it's worked more or less so far. But the times they are a-changing, kiddos, and I think that the scale of impact that we're worried about, with AI going rogue on us or something, the other scale of impact, what I think we need to be worried

(16:52):
about, is us falling behind. And I think that the other, like, the risk that's looming over us, because AI is coming whether we are guiding that train or not, is that AI is going to happen to us, and it is going to be implemented in a way that is prioritizing efficiency and it's prioritizing cost cutting, and

(17:15):
it doesn't... it's not going to end up being particularly human-centric, it's going to be business-centric. And that is not doom and gloom. I think that is a very real scenario that is in front of us.

Speaker 2 (17:28):
Yeah, no, I completely agree, and I'd say, like, you know, on this thought of AI illiteracy, right, you know, so we said earlier that, you know, accurate or not, like, HR teams are seen as having maybe less of a technical background, you know, a little less tech savvy, you know. I think that kind of probably amplifies the, you know, sort of gap in terms of skill set, you know, needed to really, you know,

(17:51):
be comfortable with AI tools. How can organizations really invest in building AI literacy within not just HR teams, but maybe broader teams? And then, I suppose, that's probably a pretty logical jumping-off point to talk a little bit more about this human-centric AI council that you're involved with too, Kyle.

Speaker 3 (18:10):
Yeah, yeah. So I think that there are a couple of things, and, by the way, Graham, none of these are, like, massive lifts. I'm coming at this, I'm trying to be as practical in my recommendations as I can. The first is, whether you are a team manager or a functional manager, in your standing meetings with your direct

(18:33):
reports, create space to share AI best practices, to share a use case that you found, whether it is directly in HR or it's something that you like. Today, this morning, I was having my coffee, I was planning my 40th birthday, like, a literally detailed itinerary, with ChatGPT. I call her Chatty G, and she was there for me, honey.

(19:00):
It was extremely helpful, and I guarantee you that there are so many people in the HR organization who do not know just how much these tools can do for you, because they have been told they probably should avoid them because they're too risky. But that's, like, head-in-the-sand kind of stuff. So, first thing is, create space in your standing meetings for

(19:25):
sharing just what we're doing with these things and how these things can work, and, I think, like, getting our hands on it and playing with it and encouraging us to do it in safe and compliant ways. Like, there's a lot of opportunity there.
The second: these feel-good KPIs and, you know, um, the data that is, like, soft ROI, like, that's... yeah.

(19:57):
We're still going to have our touch point on the business results from our engagement, but I'm actually going to start creating, maybe once a quarter, it depends on how frequently you're running your business reviews with your vendors, but create space to learn. Say, Marty, Graham, ahead of our QBR

(20:18):
in June.
I want you to know that a huge priority for me with my organization is to drive AI literacy, and I would really like to know how your organization is approaching AI. I want to know if you have ethical standards and commitments that you're making that are guiding some of your product design. I want to know how some of your most innovative and fearless

(20:42):
customers are leveraging AI to partner with you. Bring me more than just our vanity metrics next week, and I'm going to create space for that. We're going to come with questions. It's going to be a great way for us to reinforce our partnership. Like, those are two standing meetings that happen under any program, right, for any HR organization.

(21:04):
We have QBRs with our vendors and we have team meetings. Like, let's start creating space in our ways of working to build up, like, confidence and literacy in AI. And the third one, which is a little bit more unique lately: get out of the office, go to a conference once a year.

(21:26):
Go to a conference and reallytry to have conversations with
people.
Attend sessions, yes, but thenfollow up with the people who
were in that session.
Be like, hey, can I pick yourbrain for a little bit?
I will buy you coffee.
I just want to talk.
What you said wasn't completelyirrelevant for me because we're
in a different industry, but Iliked the way that you're able
to approach it or whatever it is.

(21:47):
So I think all three of those are really practical. I'm not having to fight for a million-dollar L&D overhaul, like, to get this really crazy consultant to come in and do all this training. Like, maybe you end up getting there because you get really serious, but I feel like all three of those are just super approachable and practical.

Speaker 2 (22:06):
So I want to oversimplify it, kind of, like, I almost wonder, it sounds like what you're describing, too, is a lot of companies maybe just have a company culture problem, if people are just scared to even think about using AI because they're going to get their hands slapped, they're not being encouraged to learn, or they're not being encouraged to

(22:27):
go to conferences or see what other people are doing. Are we kind of painting with a broad brush and saying, hey, maybe HR needs to rethink the way they go to market, or the way they market themselves, to be a more, arguably, inclusive department?

(22:48):
And, hey, is HR just sitting too much in their own little house, without going around and walking the neighborhood to see what other people are doing? That's probably a bad analogy, but that's where I'll leave that one.

Speaker 3 (22:59):
It might make sense. No, I think it's fair. I mean, there are cultural barriers, operational barriers, that we do need to acknowledge. It's not just a lack of trying. There are other significant barriers that we are going to encounter. But I guess that's where I'm thinking about trying to propose

(23:20):
ways that, it doesn't matter if you sit at the top of the food chain or you're down in the rank and file, I still think that there are ways for us to improve. I mean, sometimes we're going to have to try and move mountains, but other times it is just stacking up a couple rocks and calling it art. I think that there is some of that really, really big stuff

(23:43):
that I think the HR executives who are tuning in need to be prepared for, and I would love to sit down with them and talk about what I'm seeing working, what's not. But, honestly, Graham, I think that this is something that the entire HR and talent organization needs to be leaning into, and so I don't think we can just wage the battle at the top. I think we need to be doing this, like, more ground-level

(24:05):
stuff.

Speaker 2 (24:06):
Yeah, yeah. Well, I mean, you know, I think back to, you know, I remember when, you know, COVID hit five years ago, and you couldn't meet with people in person and you couldn't do anything. Like, we started taking a lot more virtual coffees, and, you know, I would argue and say, you know, some of the more interesting, you know, conversations that we had, you know, were really with people that, you know, were working on things very much outside of HR, and then you

(24:28):
start to think of, you know, applications to your own business, right? [...] what people are talking about at

(25:00):
that table. And so I think there's so much value to spending time with people outside of HR, too, that I think would probably maybe spur people in TA, spur people in HR, to think about broader applications of AI, I suppose. Yeah, I think it'd be great to talk a little bit more about... let's talk through some success stories from your report.

(25:22):
Maybe give us one good example, Kyle. We talked about organizations getting AI implementation wrong, we can tack Amazon to the wall for that one again, but what's a good success story, where organizations got AI implementation right?

Speaker 3 (25:38):
I mean, there's a lot of examples. There really are. Not all of them are very public, that's the hard part, but there are quite a few. One thing that comes to mind is that we need to be evaluating AI differently than we evaluate different types of tech. Like, if you go out to market to buy an ATS, you're going to

(25:59):
have an RFP that's going to have 60 or 80, I don't know how many, lines of features and functionalities and capabilities that the company has, the vendor has, or it doesn't have. But if you take that approach with AI and say, hey, can you support my 16 different interview types, they're going to say yeah, because in their demo environment they can build that up and make it look real nice and shiny. But what they can't say for

(26:22):
certain is what the data governance and systems governance of your IT organization looks like. I don't know that your IT organization is going to give me the level of access that I need to Microsoft 365 or Google Suite to make this work, right? I don't know if it's going to give me the level of permissions that's going to give me access to your calendar, so I can find out if you're actually busy or if you're just holding this for

(26:44):
interview.
And so we need to be evaluating these capabilities much differently. And in the report we actually identified three different pillars of a more effective evaluation: looking at AI use cases across impact, risk and complexity. And what we find is that a lot of us perceive that any AI use

(27:04):
case is high risk. Like, if you look at the EU AI Act, they're saying that any AI use case in HR is high risk. And, guys, there's literally millions of use cases, and they are not all high risk. So I think it's being able to build a better framework for evaluating the compliance risks and ethical risks and

(27:24):
operational risks that a use case might present for you. It is looking at the technical complexity, the, maybe, change management complexity, looking at the solution complexity of a use case. And then also, for impact, like, what is the potential ROI? Am I impacting efficiency, am I impacting effectiveness, am I

(27:46):
impacting experience? And by looking at these three different dimensions of AI use cases, I mean, yeah, you still need to ask them if they can support your 16 interview types, but by going this level deeper, you're actually going to get a little bit closer to what makes these capabilities, these solutions, work within your organization. And so, I think, to answer in a roundabout way, examples of

(28:20):
things that go well: there's a reason why so many companies are starting their HR AI journey with conversational AI for employee experience, to help, like, with employee inquiries around HR services, or conversational AI for candidate concierge, to support candidates with inquiries. It's going to create a high touch. It's going to be a pretty good resource. It's more dynamic than your FAQ, it's going to connect people to some actions and some resources, and it's not going to

(28:40):
bog down your HR organization.
We identify that as a pretty low-risk use case, because it's not going to tell somebody to worship Satan. I mean, although it is okay if you worship Satan, that is a legitimate, you know... I'm just saying, like, it's not going to tell you to commit, you know, crimes, but it is going to point you to the place in the

(29:05):
employee handbook where it talks about the company's approach to freedom of religion, like, whatever it is, and it is going to, like, tell you how to file for leave. It's not going to point you to somebody else's handbook and share somebody else's policies. It just doesn't work that way. And at the same time, the impact of that use case is

(29:25):
moderate to significant, because it's saving a lot of the time spent answering the same questions again and again, and it is reducing the time to resolution for employee issues or for candidate questions, and it's enriching, it's impacting the experience of those stakeholders. So that is a use case that I think is having a lot of early

(29:48):
adoption, because risk is low, impact is high, and complexity is pretty straightforward.

Speaker 4 (29:54):
Yeah, that makes a lot of sense, and I think that's a great example that helps bring this to life and makes it a little more real. The pillars, I think, are also very useful. I mean, we've spent a lot of our conversation here talking about risk. I think that's because that's where most HR people are in this process, but it's helpful to have other dimensions to consider, you say, impact and complexity being two of them.

(30:14):
I mean, we can't just look at this from a risk point of view. We certainly need to look at it from a risk point of view, but if that's all we do, it's probably a recipe for staying stuck, like we've been.

Speaker 3 (30:26):
Or it's also, like, just assuming risk. That's the other part there. I talk in the report about getting beyond the bias boogeyman, 'cause we're just like, oh, we've got to be worried about AI introducing bias here. It's like, okay, well, where in interview scheduling use cases are there even opportunities for bias? Because the way that this works is the bot identifies when a

(30:48):
candidate is available for an interview, and then it identifies when the interviewer is available for an interview, and then it schedules an interview, and none of that has to do with any protected class. So, do you know what I mean? But people really do get questions about bias in any AI use case.

(31:08):
So, sorry to jump on my high horse there for a minute, but I think that it's actually evaluating risk, and not just assuming it.

Speaker 2 (31:17):
Yeah, Kyle, I know we're short on time, you know. I think you bring up a great point, though, too, and I would encourage anyone to look at the Aptitude Research report, you know, because you do a great job of calling out things like, to your point, interview scheduling: low risk, high impact, a great place for organizations to start. And I think that's, you know, the impetus of this conversation. It's, boy,

(31:37):
we're really getting stuck in a slow adoption of AI. There's a lot more that HR leaders can do. How can we make it a little bit easier? So, you know, really encourage everyone to, you know, take a look at the report. You know, I guess, you know, the last question, Kyle, is our easiest, and again, I wish we had a little bit more time, but where can people find more about you online?

Speaker 3 (31:58):
I mean, this is so boring, but I check my LinkedIn every single day. LinkedIn is a much better, safer space. Unless you want to look at my liberal memes and my cat memes, then follow me on Instagram. I would recommend Instagram. It's just, it's a very special place.

Speaker 2 (32:18):
That's awesome.
Well, we'll link the Aptitude full report and the details. You know, a lot of great content coming from Aptitude Research these days, too, and obviously, you know, we'll share your LinkedIn, Kyle. We might not link the Instagram, we'll see. Maybe we do have a lot of cat lovers, who knows. But really appreciate you joining for an episode.
It's been fantastic.

Speaker 3 (32:38):
Thank you so much, guys.
Thanks for having me.

Speaker 2 (32:42):
All right, thanks for tuning in. As always, head on over to changestate.io or shoot us a note on all the social media. We'd love to hear from you, and we'll check you guys next week.