Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Announcer (00:01):
The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data, and people analytics can have on your organization. This podcast
(00:24):
is sponsored by Salary.com, your source for data, technology, and consulting for compensation and beyond. Now here are your hosts,
David Turetsky and Dwight Brown.
David Turetsky (00:38):
Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky, alongside my best friend, co-host, and partner at Salary.com, Dwight Brown. Dwight Brown, how are you?
Dwight Brown (00:47):
I am wonderful!
How you doing, David?
David Turetsky (00:50):
I'm okay. Well, we just got over some health scares, which is good, because today we're talking to one of the most brilliant people we've actually had on the HR Data Labs podcast: Martha Curioni. Martha, how are you?
Martha Curioni (01:03):
Hi, thank you for having me back. And I am good. It's been sunny these days, so I'm enjoying the sun while it lasts.
David Turetsky (01:10):
Yes, yes, we're getting into winter. Well, we're actually getting into fall, which, for a lot of us, turns directly into winter with very little lag. But for those of you who don't remember Martha, Martha and Dr. Adam McKinnon were on many moons ago, and they were talking to us about how we can use machine learning to fix data problems in HR, and it was one
(01:34):
of the most popular episodes. And we're gonna actually have a link back to that episode in the show notes, but we're also going to have a link to the code that Martha had built. It's on GitHub, so easily accessible and extendable, and we're going to probably speak a little bit about that today, but more so we're going to get into another topic. But Martha, why don't you explain to some of our newer
(01:57):
guests who you are?
Martha Curioni (02:01):
Hi, yeah, so, let's see, where do I start? I have an extensive background within the HR space, having started in recruiting and worked my way through kind of talent, I guess, workforce strategies space, and recently, or, not that recently anymore, time flies, a few years back, I decided to train myself as a data scientist. So that's when I
(02:23):
learned how to code and build AI and machine learning models and so forth. And now I am working as a people analytics consultant: I do advanced analysis, and I support implementation of people analytics tools, looking at processes around AI as HR organizations are looking to
(02:45):
implement that and so forth. So that's kind of where I am today.
David Turetsky (02:48):
And one of the more interesting things about Martha... Martha, where are you located?
Martha Curioni (02:52):
I am based in Italy, which you cannot tell by my accent, because I'm originally from California, but I moved to Italy about four years ago.
David Turetsky (03:03):
Hashtag jealous, one of my favorite places in the world. So Martha, what's one fun thing that no one knows about you?
Martha Curioni (03:11):
I don't know if I would say no one knows, because of the whole class of people that know. But being in Italy and being an expat and working remotely, there are days where the only other adult I speak to is my spouse, which, I love him, but sometimes you need to speak to other adults. So I decided to sign up for a theater class, which is all in Italian.
David Turetsky (03:35):
Wow!
Martha Curioni (03:36):
And it happens once a week, and, you know, it definitely brings me out of my comfort zone, even if it were in English. And so then, it being in Italian takes it to a whole new level, but at least the extrovert in me gets a little bit of social interaction once a week. So I'm enjoying it.
David Turetsky (03:54):
That's
wonderful.
Dwight Brown (03:55):
That's cool.
David Turetsky (03:56):
That is really
cool.
Dwight Brown (03:57):
Now, are you
fluent in Italian, Martha?
Martha Curioni (04:00):
In a social setting, yes. When it comes to work, I would say a very good level; I wouldn't say fluent. But in a social setting, yeah, I can have a conversation.
David Turetsky (04:10):
Well, now you're
going to test that boundary!
Martha Curioni (04:14):
Uh oh, in the class? Yes. I mean, I thought you were going to have...
David Turetsky (04:17):
In the class! Not here. Oh gosh, no.
Dwight Brown (04:20):
I was waiting for
that too. I'm like, yeah, and?
David Turetsky (04:26):
No, that's about my limitation on Italian. No, we're good. We're good. So that's really cool. So we're gonna see you win a Tony Award at some point soon?
Martha Curioni (04:37):
I don't know. Maybe, we'll see. Or whatever the equivalent is in Italy. I don't know, I don't know what kind of awards they have.
David Turetsky (04:42):
Actually, it would be the Tony Award, because that's Italian, right? The Anthony Award. Hey, Anthony, how's Martha doing? She's great. She's really great.
Announcer (04:54):
If you guys can see,
Dwight Brown (04:59):
Oh, who let you out of your cage today?
David Turetsky (05:07):
Sorry. Hashtag dad humor. So let's transition to the topic now, because this is the reason why we love doing what we do. We're going to talk about a really cool, very, very important topic for today, and that's the responsible implementation of AI in HR.
(05:33):
So Martha, let's talk about it.
What does it actually mean to implement AI in HR in a responsible way?
Martha Curioni (05:40):
Yeah, so to start, let's just define what responsible AI is, for anybody that doesn't know or is not familiar with the term. Essentially, it involves the design, the development, and the deployment, or implementation, if we want to use that word interchangeably, of AI in a way that's going to help you to minimize risks that could happen
(06:02):
with using AI, and other negative outcomes. So if we translate that into an HR setting, there are some HR use cases that are lower risk, right? Maybe automating tickets and some of that kind of stuff. But there are many HR use cases, at least all the ones you hear about if you go to an HR technology conference, right,
(06:25):
that are things like: who do we hire? Who do we promote? In some cases, who do we fire, if companies are looking to lay off employees? Or how much of a salary increase; I've seen people use it to inform salary increase recommendations. So, you know, minimizing risk and other negative outcomes, I think we'd all agree, are extra important given these use cases,
(06:49):
right? And this is why I think companies really need to take the appropriate steps to ensure that the AI they're going to be using is implemented in a way that is transparent, that minimizes the influence of bias, supports fairness, and really empowers employees and managers to make better decisions, right?
(07:11):
So that, to me, is what responsible AI means. And that doesn't only mean picking a model, or ensuring the model, or developing a model that offers these things, right? That's only the design and development side. The deployment side is then taking the extra steps, or the
(07:32):
additional steps, to make sure that people are using the model in the way that it's intended, to be able to ensure that these things are happening, right? You can't just put the tool in people's hands and trust that they're going to use it the way that they're supposed to. That never happens.
David Turetsky (07:48):
Is there, is there another aspect to it which also goes to the data that you're going to use to train the model on? You know, what data are we using, to the point I made before? Has it been cleaned? Do we have faith in it? Do we trust it? The decisions that were made using the data, are those things we want to actually be basing our
(08:09):
forward-going decisions on? Does that come into it?
Martha Curioni (08:13):
For sure, you know, that becomes, you know, one of the key points, right, in selecting the model, or picking a model, and the data that's going to be used. So some vendors out there, maybe they train it on their own data, and then they want to unleash it on your future decisions. Okay, well, I don't know if that's going to work. Many
(08:33):
organizations don't have their data in a place where they can do it with their own data, so there ends up needing to be a lot of kind of data cleaning, a lot of data preparation and so forth that's needed. And really understanding, you know, even doing a descriptive analysis before you get to that point, to understand, you know, looking at, maybe we use the example of
(08:57):
promotions, past promotion decisions: do we see that there are groups that are maybe, you know, getting promoted less or more, or what have you, in proportion, obviously, to their share of the overall headcount, right, the overall population? Really understanding your data first is important, I
(09:18):
would say, for sure.
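A minimal sketch of the descriptive check Martha describes here, before any model is involved: compare each group's share of promotions to its share of headcount. The tiny inline dataset and the column names are illustrative assumptions only, not a real schema.

import pandas as pd

# Hypothetical one-row-per-employee extract; "group" and "promoted" are made-up columns
hr = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "C", "C", "C", "C", "C"],
    "promoted": [ 1,   0,   1,   0,   0,   1,   0,   0,   1,   0 ],
})

summary = pd.DataFrame({
    "headcount_share": hr["group"].value_counts(normalize=True),
    "promotion_rate":  hr.groupby("group")["promoted"].mean(),
})

# Compare each group's promotion rate to the overall rate; ratios well below 1.0
# are the groups worth investigating before training anything on this history.
overall = hr["promoted"].mean()
summary["ratio_vs_overall"] = summary["promotion_rate"] / overall
print(summary.sort_values("ratio_vs_overall"))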
David Turetsky (09:19):
One of the other considerations that I'd ask about is: is there also a potential issue with where the model is located? Meaning, is it on our premises, or is it in the cloud, or is it on the premises of the application provider or the model provider? And the reason I ask that is because of the wild,
(09:40):
right? Having our data and having our model and having our decisions in the wild, and who would have access to the data, the decisions, the outcomes. Is that something that comes into this conversation as well, or is that really just kind of a, you know, don't worry about that, David, that's down the road, it's not an issue for right this second?
Martha Curioni (10:02):
No, I think it's definitely, so, I would think it's separate from responsible AI in the way that I'm defining it. But when it comes to AI in general, you know, it's definitely important, right? Even, you know, for example, I don't recommend somebody saying, oh, let me use my personal, you know, account with ChatGPT or Claude or what have you, take all this employee data and
(10:23):
upload it and, you know, ask it to analyze the data for me, right? Because there are a lot of risks. But that's more of a data security and privacy issue, as opposed to, you know, making sure that, to your point, the data is appropriate, the model does not have biases, and then it's being used as intended.
Dwight Brown (10:44):
It would seem that, yeah, part of that data quality aspect of things is just understanding where your data is coming from, where it's pulling from. What are the data sources you can control? What are the data sources you can't control?
Martha Curioni (11:03):
For sure. And the other thing I would add, and I've gotten on a high horse about this lately, it's something that I bring up anytime I can in a conversation, are the processes that are capturing data. I think so many times there are processes that are designed, or sometimes just haphazardly come together, and then there's data. And a lot
(11:25):
of times, the people that are designing the processes don't think about the data implications. Or, you know, it's kind of, here's the process, here's what we're doing, and the data is an afterthought. And so what that means is, for example, you know, if I want to look at mobility for my organization, for whatever reason, but mobility moves within the
(11:47):
company are not captured consistently in a way that allows me to then map those, then it makes it almost impossible for me to do that kind of analysis. And you can take that to, you know, promotions. If promotions are not captured correctly, were they promoted, or did they apply for another job, and that job became a promotion,
(12:08):
right? And then, if you're going to use that to inform future promotion decisions, how are you going to do that if you're not capturing the data consistently?
David Turetsky (12:18):
Well, that gets to Dwight's favorite topic of data governance, right? And making sure that HR has a good data governance model.
Dwight Brown (12:26):
And that's exactly it, because it really gets to that data trust factor. I think that's one of the pieces with AI that's a little bit scary, the fact that, you know, there's a big aspect of this that is just sort of a black box. You don't know how the data is being put together. Sometimes you don't even know all the data sources that you're
(12:48):
dealing with. So, you know, it really gets to that data trust factor, and how do you get that? I think that's a key question.
Martha Curioni (12:58):
So for me, one of the ways to address the trust factor is when you have explainable AI as part of the interface, right, or the model output. So, you know, some models inherently have it, right? With regression models you can look at a driver analysis, or in
(13:19):
other cases, you might have to put additional tools on top, right? So there's SHAP, there's LIME, and probably others that are coming out, to be able to offer that transparency, so that you say, okay, you know, we're recommending David for a promotion, and here is why.
(13:39):
Here are the reasons that we are recommending him. That way, the user can then look at those and either agree or disagree, right? Oh no, that's not true about him. Or, yes, that's true, but that's not a factor that we want to consider in this case, whatever it might be. But that's how you, A, address some of the
(14:00):
trust issues, and then, B, again, it goes back to: the AI shouldn't be making the decision. The human should be making the decision, and by empowering them with that information, that's how you ensure that that happens, so that, again, they're using the AI as intended.
Announcer (14:19):
Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now, back to the show.
David Turetsky (14:30):
Well, why don't we talk about that as part of the second question now, which is: for HR organizations that are planning to actually implement some kind of artificial intelligence, what are the most important steps that they have to take to ensure that it's actually going to be implemented responsibly?
Martha Curioni (14:46):
So the first step is something we've already covered a little bit, which is: first, check your model, right? Don't just trust the vendor or the data scientists that you hired; make sure that they're taking the steps necessary to make sure that it's a good model, right? Make sure it's transparent. Make sure the end users can
(15:06):
understand the output. Ideally, it will have explainable AI, so it's not that black box that Dwight mentioned. And test the model yourself, right? Run through it, see what recommendations come out. And, you know, do you notice bias? Are you seeing bias come through? Do the recommendations make sense? That's how you want to test it before you
(15:28):
implement anything. Once you've done that and you say, okay, the model is good, you know, I'm good, I like the recommendations, then you want to be clear about your goals and objectives: how are we going to be using this model? What are the outcomes that we expect to have? Is it, you know, more fair decisions? Is it saving time for managers? Whatever it may be,
(15:53):
define those ahead of time, so that over time you can track those measures and decide, is it working? Is it doing what we wanted it to do? And if not, why? And should we keep using it, right? Because otherwise you're just going to keep using something that maybe is making things worse.
The next part is, and this one I can't emphasize enough: you
(16:17):
need to redesign your process around the AI. Don't just bolt it on top of an existing process, because if you do that, there's a really big risk that it's not going to be used as intended, or, you know, it's not going to get used at all, right, which is also a shame if it is something that you're hoping can help make better decisions. So, you know, work
(16:40):
through, from beginning to end, what the new process should be: incorporating AI, incorporating checks and balances, making sure that there are points where the users are being prompted, to make sure that they're not just auto-clicking through things and so forth. It actually reminded me, it reminds me of an example I heard. I was listening to a
(17:04):
podcast, oh gosh, I can't remember who it was, but anyway, NASA, because they've been using automated systems for years, right, they build in these kind of, I guess, faults that everybody knows are there, so that you don't go on autopilot, because you know
(17:24):
that there are going to be random, like, bad things coming up, or things that you shouldn't trust, so that people don't just go on, like, autopilot and do things, right? So can you build a process incorporating something like that? I don't know; this is an idea that came to mind. But, you know, with designing that process then comes proper training, right?
(17:45):
You don't give somebody a car without teaching them how to drive it.
David Turetsky (17:49):
I don't know about that. You haven't driven in the US, maybe, for a few years, but...
Martha Curioni (17:55):
Ideally, you would teach them how to drive it. You know, the risk...
David Turetsky (17:59):
Ideally, yes,
Martha Curioni (18:00):
Yeah, the risk and everything, right? What to do, and if this happens... And then you run a pilot. One recommendation I have would be to have, you know, one group do it with the model, another group do it without, and then compare the outcomes, right? And understand, again, are
(18:21):
we achieving the objective that we want to achieve? And then, over the long term, continue to monitor not only the outcome, but also how people are using it, right? As much as you can.
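A rough sketch of the pilot comparison Martha suggests: one group works with the model's recommendations, a control group works without, and you compare the outcome metric you defined up front. The metric and the numbers here are made up, and with samples this small the test is only illustrative; a real pilot needs enough decisions per arm to detect the effect you care about.

import numpy as np
from scipy import stats

# Hypothetical outcome per decision: 1 = the promotion "stuck" a year later, 0 = it did not
pilot_outcomes   = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1])   # decisions made with the model
control_outcomes = np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0])   # decisions made without it

print("pilot success rate:  ", pilot_outcomes.mean())
print("control success rate:", control_outcomes.mean())

# Welch's t-test on the difference; in practice you would also size the pilot up front
t_stat, p_value = stats.ttest_ind(pilot_outcomes, control_outcomes, equal_var=False)
print(f"p-value for the difference: {p_value:.3f}")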
Dwight Brown (18:28):
Yeah, one of the things that I think about with this is that it seems like the possibility of over-trusting the data is probably, you probably see it more with AI. Because if you think about AI output, you know, for instance, if I go to ChatGPT and put in a query, what it outputs
(18:50):
really sounds good, you know? And a lot of times it seems on point. But if it's a topic that I don't know much about, I could be kind of starstruck with all the output and, you know, the way that it words things, and it's easy to forget just exactly what the potential pitfalls are with
(19:12):
that. And I think that gets to what you're talking about, where there's got to be some education around it. There's got to be some understanding around it that's there upfront; otherwise we end up just sort of blindly trusting it.
Martha Curioni (19:30):
And there are a lot of studies that show, there's one in particular that comes to mind, I couldn't tell you where to read up on it, but you could probably Google it, where there was a building, there was an alarm, like a fire alarm or something, and they had a robot that was clearly taking everybody in the wrong direction. People knew what the
(19:51):
correct direction was, but they still followed the robot, right? And so people definitely tend to get overconfident in the output of AI, because they think, oh, well, you know, this is technology, it's been trained, and it should know better than me, which is, you know, faulty, or what have you. So for sure, you
(20:12):
definitely have a lot of that.
David Turetsky (20:14):
And let me, let me expand on that a little bit. We know and have seen that some of the answers that have been coming out of ChatGPT are actually lies, and that ChatGPT actually doesn't know the answer. It's making guesses which are wrong, and there are kids in school who have been using
(20:35):
verbatim the stuff that comes out of ChatGPT, and it's just wrong, because whatever it's pulling from is just not true, or it doesn't have enough answers, so it makes shit up. Pardon my French. And the one thing I want to talk about in the description that you just gave and the six steps is: you're
(20:57):
basing your career on this, you're putting your company at risk by developing a model, and you need to make sure that the thing it's doing, it's actually doing correctly. Now, you mentioned before, Martha, sorry, I'm going a little all over the place here, but you mentioned before that some of the ways in
(21:18):
which AI has been implemented are bots doing a specific task. Like, is this form filled out? No? Send it to the right person, get it filled out, and then it sends it on, either to another bot or to a person, when it has the correct information. And in that way, you can actually check it: you know what steps it's trying to follow, and you can make sure
(21:40):
that it's accurate, and you can QA it. Some of these more interpretive models, some of the more sophisticated models, the steps you mentioned, they're gonna have to be pretty complex, aren't they? You're gonna have to do a lot of QA work to make sure that the model is actually generating what it's supposed to!
Martha Curioni (21:57):
Yeah. I mean, that's where explainable AI comes in, which obviously is not available with all models, right? With a large language model, explainable AI becomes a lot more difficult; those tend to be a lot more black box, right? So let's first address the models where you can have an excellent explainable AI component. With those ones, you make
(22:19):
sure that when they get the recommendation, they're also getting, you know, kind of the reasons behind the recommendation. And then maybe in the process there's some kind of step to make sure that they're reading that, where they, you know, agree or disagree, or they have to add in
(22:40):
some comments, or whatever it may be. I'm also a big fan of human-centered design, right? So you're going to work with the, you know, the practitioners, the employees, the managers, whoever, to understand from them what is going to be the best way to design it so that it's not annoying to them, because then you end up getting
(23:01):
practices of people just, you know, putting in a space in that text box just to bypass it, or what have you, while also making sure that you're achieving your objective. So with those types of models it becomes a lot easier to go through those steps of, hey, you know, let's build in some of these checks along the way to make sure that people understand
(23:21):
the recommendation, agree, and know that they have every power to disagree with the recommendation. When you get into some of these more black box models where the explainable AI is not as accessible, then it becomes, to Dwight's point, a lot more about the education side, right? Helping them understand
(23:43):
that, you know, it could make mistakes, or it can make things up, or whatever it may be. And maybe that's where the NASA example comes in, right? Where you say, look, we are going to randomly give you fake answers to keep you on your toes, right, and make sure you're checking your sources, or what have you. You know, I don't have all the answers for that.
(24:05):
Again, you have to work through the specific use case, and your organization, the culture, and so forth. But education is key.
David Turetsky (24:14):
It seems like what you've outlined, and I'm not trying to demean it, seems a very expensive process. And yes, it should be, because we're building in a new technology, but the six steps you mentioned, you know, there's going to be a real cost involved with not only
(24:35):
implementing this, the training, the education, the pilot, even just the technology and the data itself; that's a lot of investment. Or are you thinking that this could be relatively small things, small samples, and it doesn't need to be that expensive?
Martha Curioni (24:54):
I would say that for the technology, the cost of that and so forth, you know, that depends on the technology. Or maybe you have an in-house team that's building something. But when it comes to, you know, redesigning a process, that's obviously a lot of work, as you mentioned, plus training and a pilot. But a pilot, by nature, should be small scale, right? So
(25:17):
if you're able to do it small scale, test your assumptions, make sure it works, and tweak the process, because once you put it into place, inevitably there's always something that doesn't end up working quite as you imagined it would, right? And then you tweak it and so forth before rolling it out to, you know, the broader organization or a business unit
(25:40):
within the organization; you can still scale it out slowly. But by doing it in a pilot setting, you definitely minimize the cost. So then maybe you have, you know, one person within your team who is responsible for, you know, kind of this whole, these six steps, right,
(26:01):
and the workshops around the process, and the workshops around the training, and everything else. But the end goal, or, shall I say, the reason why knowing what your goals and your objectives are and measuring that is so much more important, is because you want to make sure that it's worth the investment, and you want to make sure that you can accurately gauge whether
(26:25):
the pilot was successful or not before you roll it out and start to spend more money, or, in some cases, put your organization at more risk, depending on the use case.
David Turetsky (26:34):
Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half-hour call with me about any of the topics we cover on the podcast or whatever is on your mind. Go to Salary.com/hrdlconsulting to
(26:56):
schedule your free 30-minute call today.
Let's get to question three, which is, to me, kind of one of the things that we've been talking about most of the episode, which is: we all know that HR data, if you've ever listened to this podcast, you know that HR data is far from perfect, and if the AI is trained on that bad data,
(27:19):
there are real risks that can make the AI generate, or inform the AI to generate, bad recommendations. How do those steps that we just outlined in question two help with that challenge?
Martha Curioni (27:31):
Listen, I think it would be great to have AI that makes perfect recommendations, but we all know that that's unlikely, right, because we don't have perfect data, like you said. So my question to the both of you, or even to the listeners, is: can the goal just be to make better decisions than humans are going to make alone?
(27:53):
Right? I've done a lot of research in previous roles on DEI, and I just, I really don't trust people to make good decisions if you leave them to kind of their own devices, aka biases, because that's why a lot of times the data is so bad, right? Because in the past, they've made decisions with these biases and so forth. You know, the good news is that
(28:16):
there is a lot that can be done to address bias or bad data in models, right? You can clean the data. You can test it for biases in many, many different ways. And let me be clear: it's not just, let me take gender and race and age out of the model. There are other data points that can act as proxies for those. So you take the appropriate steps to address some of those things.
(28:40):
Then you add on top the explainable AI factor, or the transparency factor, and then you start to have kind of a model that hopefully can make recommendations that are going to be better than a person making them themselves. But, you know, again, as I mentioned before, you need to think beyond
(29:03):
the model. You also need to think about, are people using it as intended? And so that's where the redesigning of the process and the training really comes in, to make sure that people are using the model, that the human in the loop is not just a term, but people, you know, are actually doing it. Because, let's be honest, people are busy
(29:25):
and, in many cases, just lazy, if they can just, you know, take the recommendation. How many people managers are managing way too many people, because so many companies have tried to increase span of control and all of these other things to save costs? And then you put this tool in their hands that makes recommendations, and they're like, hey, now I don't even have to think about this. I
(29:47):
can just, you know... right? The model says to promote...
Dwight Brown (29:51):
Go on autopilot.
Martha Curioni (29:53):
Exactly. So, you know, that's where, again, the process, the training, and so forth come in, and monitoring its usage to make sure that it's being used appropriately.
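One of the checks Martha alludes to a moment earlier, testing for proxies, can be sketched like this: if a simple model can recover the protected attribute from the remaining "neutral" columns, then dropping gender, race, and age from the features was not enough. The data and column names below are synthetic assumptions, with one proxy deliberately built in.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
gender = rng.integers(0, 2, n)   # protected attribute, held out of the feature set

features = pd.DataFrame({
    "job_family": rng.integers(0, 5, n),
    "part_time":  (rng.random(n) < 0.15 + 0.25 * gender).astype(int),  # deliberate proxy
    "site_code":  rng.integers(0, 10, n),
})

# If cross-validated accuracy sits well above the majority-class baseline,
# the supposedly neutral features are leaking the protected attribute.
acc = cross_val_score(LogisticRegression(max_iter=1000), features, gender, cv=5).mean()
baseline = max(gender.mean(), 1 - gender.mean())
print(f"proxy-recovery accuracy: {acc:.2f} vs. baseline {baseline:.2f}")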
Dwight Brown (30:05):
And that continuous feedback loop, that when things are discovered about the data, there's a way to feed that back, so that people who are using the data, using the AI to pull the data, kind of get a more refined lens the longer that they're doing this.
(30:28):
Because, you know, I think that helps to bring up the blind spots that might otherwise be missed, and it's kind of an overall process that just keeps going. As opposed to there being a defined starting point and a defined end point, there really isn't a defined end point. It's just a loop, much like data
(30:51):
analysis with Excel and everything else.
Martha Curioni (30:55):
Oh, for sure. I mean, that could be part of your process, right? You position it not as a, hey, we're putting this step in here to make sure you don't go on autopilot. You position it as, hey, this is how you give us feedback. We recommended you promote David; you look at the reasons, and if you don't agree, you need to tell us
(31:17):
why, so that in the future we can make the model better, right? Or you do agree, and so forth, and that's the feedback loop. And honestly, I don't know that there are too many tools out there now, ChatGPT obviously, you know, has the little thumbs up and thumbs down at the bottom and stuff like that, but within the HR space, just thinking of tools that I've
(31:39):
used, I don't know that I've really seen, aside from, like, do you like this job recommendation or not, too many opportunities to give that feedback. So it's definitely something, any HR tech vendors that are listening, you know, something to think about.
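A minimal sketch of what capturing that feedback could look like on the tool side: every recommendation a manager acts on gets logged with an accept/override flag and a free-text reason, so the model owners can monitor usage and fold the feedback into retraining. The field names and file format are assumptions, not any vendor's actual design.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RecommendationFeedback:
    employee_id: str
    recommendation: str       # e.g. "promote"
    manager_decision: str     # "accepted" or "overridden"
    reason: str               # required when overridden, optional otherwise
    logged_at: str

def log_feedback(fb: RecommendationFeedback, path: str = "feedback_log.jsonl") -> None:
    # Append one JSON record per decision; downstream jobs can aggregate these
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(fb)) + "\n")

log_feedback(RecommendationFeedback(
    employee_id="E123",
    recommendation="promote",
    manager_decision="overridden",
    reason="Missing the certification required for the target role",
    logged_at=datetime.now(timezone.utc).isoformat(),
))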
David Turetsky (31:57):
Well, we've been, we've been trained on this a little bit, Martha, on the consumer side. Because if you look at, you know, Netflix or other streaming services, they do the thumbs up, thumbs down, you know: did you like this recommendation? Did you like this series? Did you like this movie? Thumbs up? Yay,
(32:17):
okay, well, I'm gonna recommend more movies like this. So we're definitely getting that on the consumer side. And I think that does inform how the feedback loop can help with these recommendations, because at least you're getting that immediate feedback of, did this help you? Did you use this? Was this additional information enough to make your recommendation? And in
(32:38):
that way, we can actually get at least some understanding about whether it was good or not. But it would be better if we actually had a little bit more, like: are you kidding me? You're promoting David? He's a terrible performer! Why would you do that? Or: yeah, I think David's a bad recommendation because he doesn't have the skills or experience necessary for this. So it would be better if it were more verbose, but at least the thumbs
(33:00):
up, thumbs down is something, I think, that we've gotten a little bit more used to.
Martha Curioni (33:06):
No, for sure. And I think, to your example, you know, another thing to consider is not just giving it to managers, but making sure the HR business partner has the same model output, so that they can also, as part of the process, hold the managers to account. You know, hey,
(33:27):
Dwight wasn't on the recommended-for-promotion list; talk to me about why. You know, not in a way that's challenging them, right? Because you don't want them to feel like, oh, I have to work from the list. But, you know, talk to me about Dwight, like, well, you know, what's going on there? Or, to your example, you know, David, why are you suggesting that you promote him? He's a terrible
(33:48):
employee based on other things you've said about him before, which I don't think is true, David, but to extend your example. And so when you empower the HR team with kind of the same information, it positions them to be able to have those conversations, to challenge where appropriate, and to, you know, again, help make sure that you achieve your objectives.
David Turetsky (34:10):
In many cases, these might be self-service tools...
Martha Curioni (34:10):
Yeah. And, you know, you might say, okay, well, if the manager requests kind of a slate of successors, let's just say for a job, then that HR business partner should probably get a console that tells them that the manager did make a request for a slate of successors, so at least
(34:33):
they have that good understanding, to be able to check. Because otherwise they'd have to find out kind of after the fact, instead of knowing, you know, here are the alerts of things that my managers have requested, and here's what the results were, so I can at least be informed, as well as be a good business partner to them making those decisions and
(34:55):
inserting myself to be able to provide context for that decision, if that makes sense. Now, where's the HR business partner going to get the time to do that? Well, in their case, instead of them having to dig through the data and trying to make that list, all they're doing is validating the list and having some conversations around it, right?
(35:16):
They skip that first step.
David Turetsky (35:18):
So it might be part of the loop, part of the workflow that we defined in the six steps.
We could talk about this all day. Do you have a couple more hours so we can continue?
Martha Curioni (35:38):
Unfortunately, no. I have a few more minutes, but hours? I have to go soon.
David Turetsky (35:42):
No, I'm just kidding. Yeah, I myself am getting hungry for lunch. So I think what we're going to have to do, Martha, if you don't mind, is we'll have to come back to this. Because there's going to continue to be an evolution of AI in the world of HR. You know, we've been talking about
(36:02):
it for years, but this year especially, and most especially if you hear some of the episodes that we have from the HR Technology show in 2024, pretty much AI was everywhere. And so I think we're going to have to bring you back again, if we can get you back, to talk a little bit more about it. Not just the ethical nature of AI and the implementation of
(36:28):
responsible artificial intelligence, but also then what happens when it goes bad, or some other outcomes, and kind of the lessons learned from that, if that's okay?
Martha Curioni (36:40):
Yeah, I'd love
that.
David Turetsky (36:41):
Well, Dwight, thank you very much.
Dwight Brown (36:43):
Thank you. Thank
you for being with us, Martha!
Martha Curioni (36:45):
Thank you for
having me!
David Turetsky (36:46):
Martha, thank you very much. You're awesome. It's always a pleasure to talk to you. I always learn a ton from you, and that's the reason why we love having you on the HR Data Labs podcast.
Martha Curioni (36:57):
Thank you.
David Turetsky (36:58):
Thank you all
for listening. Take care and
stay safe.
Announcer (37:01):
That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.