Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello, welcome to the
Breakthrough Hiring Show.
I'm your host, James Mackey.
Thanks for joining us today.
We're back on the AI for Hiring series.
We've got Elijah, our co-host, with us today.
Elijah, what's up?
Hey, James, happy to be here.
Yeah, it's great to have you back. And Wes Winham is the founder and CEO of Woven.
He's joining us today to tell us all about his product.
Wes, thanks for joining us.
Speaker 3 (00:19):
I am stoked for the
conversation.
Speaker 1 (00:22):
Yeah, we are as well.
It's going to be a lot of fun. And, just to start us off, we'd love to learn more about you, about your background and how you came to founding Woven, and then getting into your primary value prop.
I think that would be a great place to start.
Speaker 3 (00:35):
I came to talent acquisition from being a hiring manager.
I was a software engineer and joined an early startup and became the leader, and that meant I needed to hire. And I thought I had a gut that could spot talent.
So I made three hires.
It was great, I was great, I could just see when you have it.
And then I made my fourth hire and it was not great, and it was
(00:59):
my fault.
I hired someone who was trying really hard.
I put them in a seat where they were not going to be successful.
It was really bad for our biggest customer, for my team, and that was my wake-up call that I don't have a gut, and actually you need to be good at this thing, and it's hard.
And I read Thinking Fast and Slow, and talked to a lot of other engineering managers, and read I-O psych research, like what does
(01:21):
science say?
And my insight was: if you're going to hire dancers, you should probably watch them dance.
Doing the job predicts doing the job, but that was really hard to do.
And then when I sold that startup, I founded Woven to make it easier to assess folks in a real-world manner in an engineering context for tech, because it's not easy.
Speaker 1 (01:43):
Yeah, it doesn't sound like it.
I know it isn't, and so does Elijah.
It's definitely a science, not an art, and it takes a lot of process, repeatability, iterating, and there's a lot of nuance.
Right, it's not just that best practices will get you a really long way; it's also understanding the nuance of the employer, right, like the specific requirements they have,
(02:06):
their environment, their strengths, their weaknesses.
A lot goes into it, right? Definitely a pretty complex process. But I think what's great is the products coming out these days seem to be handling, absorbing some of that complexity.
So I would love to learn more about your product and figure
(02:26):
out exactly what problem you're solving, and maybe we can go ahead and get a layer deeper as well.
Speaker 3 (02:34):
Yeah, so Woven is a human-powered technical assessment.
So, a pre-employment assessment for tech roles, mostly software engineers, data engineers, data science, that sort of thing.
You're either coding or maybe coding in a spreadsheet, and we are human-powered because that allows you to evaluate the things that you actually care about.
(02:55):
It's not, can you write code that passes some automated test?
It's, can you handle a messy real-world situation?
Here's this pull request. How do you prioritize it?
Here's this system that's broken.
Here's this email from a colleague where they're not even clear what they're asking you, but you're supposed to make a technical business decision. Respond back to them.
Because we're human-powered, we can evaluate that messier work,
(03:18):
which means candidates like it more, because they like doing stuff that's like the job.
That's why they have that job and that's why they stick it out.
And then it also creates more signal on who's going to pass, especially for those folks who don't quite have the prestigious resume.
That's what got me fired up about this: hiring managers have opinions, right, and some of them are informed by the real world.
(03:40):
Some of them are just opinions about what they like.
And if an assessment can change a hiring manager's opinion about what resume and background really matter, take a candidate from "probably not" to "yeah," that's where I feel like we can make a big impact, like someone who just needed a shot.
And this assessment gives someone a way to get a read on
(04:02):
their actual skills without having to commit to a one-hour interview with them.
Because y'all have seen it, you can only do so many speculative interviews with hiring managers before they start to be like, eh, maybe I don't trust this person's judgment.
So that's what we do. That's our core business.
And we've also started a small RPO arm for some of our customers on the
(04:23):
smaller end who don't have recruiting, who have intermittent hiring, and there we do what's best described as an AI-powered resume matching engine.
I don't like the word matching, but that's what the market calls it.
But generally, how can you find out if this candidate meets your requirements in a consistent way when you've got a thousand applicants in your ATS and you are one person trying to
(04:45):
read through those?
Speaker 1 (04:47):
Yeah, okay, yeah.
So I have a few follow-up questions, dialing back to the product aspect of the human-powered technical assessments.
Can we get more granular into what's run by the product, how people on your team are incorporated into that, and what their roles are?
(05:07):
Can we just get a little bit more detailed into that part?
Speaker 3 (05:12):
Absolutely.
So y'all are familiar with other technical assessments; HackerRank is probably the brand that has the most recognition.
It's something where, at some point in the process, a hiring manager or recruiter will pick some assessments to match the role.
Then the candidate gets invited over email from the ATS.
They go and they take some series of tests.
That's the same with us: you pick assessments.
(05:34):
What is different is we're able to offer different types of assessments, things that are more free-form.
And then on the backend, where the humans come in, is we actually have two engineers blind-evaluating that candidate's work.
So it's not just some automated thing, it's two engineers looking through that work, blind to anything about the candidate, blind to each other, scoring independently and then
(05:57):
creating feedback for that candidate.
So every candidate that goes through gets feedback on things they did well and things they could have done better.
The folks that you're advancing are going to get a feedback email, like within a day after completing the assessment, that's praising them for the things they did well.
The folks that maybe you're not going to advance this time are at least going to get something useful out of that assessment.
(06:18):
They're going to get an area for improvement so they can level up their career.
That's what you get when you use humans; with automated tests you can't really do that.
Speaker 1 (06:27):
Yeah, for sure.
I'm just thinking, to better understand where in the interview pipeline this sits: is this after a phone screen?
Or do people go through the resumes, decide the top applicants, and send them this? Where does it fit within the interview process?
Speaker 3 (06:44):
Yeah, great question.
Like all good questions, it depends.
The typical spot is: a recruiter has done a resume screen, probably a first screening call, then Woven as an assessment before the hiring manager or technical interview. That allows the recruiter to take more shots on maybes and saves
(07:04):
hiring manager time while giving them some more signal.
It depends, because if you're hiring for a more entry-level role, sometimes you can skip that recruiter screen and just have a more rigorous application.
Sometimes you might be hiring for a VP of engineering and actually it's worth doing an extra call before you send the assessment.
So it moves around a little bit, but typically it's after the recruiter screen.
Speaker 1 (07:28):
Okay, yeah, I would assume every company runs their process a little differently, right?
So it makes sense to me.
As long as the candidate engagement's high enough, it's always better to do it earlier in the process and save the hiring team time.
Speaker 3 (07:37):
Right, yeah. One of my controversial opinions is that in recruiting, it's easy for a job to feel like the activities I'm doing are the value.
And when it comes to recruiting, reading resumes isn't the value, screening candidates isn't the value; getting a great candidate to a conversation with a hiring manager is the value.
And if you can skip any of those other things, you can do
(07:59):
more of the valuable things.
So if you can skip the screening step, like you said, and still have an engaged candidate.
We're seeing more and more customers do video recordings.
That first five minutes of a call, where you just repeat over and over your mission, your vision, why your founder is awesome? You record that in a Loom and send it to any candidate that passes your knockout questions or your early screening, and you
(08:21):
can get a lot of candidates that are very engaged without needing to schedule that screening call, which slows things down.
Most folks have jobs. It's hard to schedule it during the day.
So I think there's a lot of exciting things happening in candidate engagement that aren't jump on another 30-minute screening call where you smile at somebody.
Speaker 1 (08:41):
Got it. Okay, so you've got the resume matching aspect, right, as people call it, and then you're sending out screening questions too, prior to assessments?
Or is the company doing that?
Is that run through your product?
Can your product say, okay, here, ask the high-level screening questions to candidates and then set them up with the assessment?
Because you mentioned something about screening questions, so I
(09:01):
just want to double down on that.
Speaker 3 (09:03):
Yeah. So product one is that technical assessment, the human-powered technical assessment.
Product two is application screening and matching for technical roles, and here this is selecting.
So what we cover is making sure the requirements are correct,
(09:23):
and this is the most important part, and I don't see a lot of people talk about this.
I like to see what's the state of the art in RPO and staffing, so every once in a while I'll make a hire with a staffing agency just to see how the process is.
Recently I went through one with a company everyone has heard of; they have $20 billion in revenue every year in staffing, and I was looking for a front-end engineer for a
(09:46):
one-off project.
I was looking for React experience, and they rejected a bunch of candidates for me that were like senior React engineers because they didn't list CSS on their resume.
And if you're not a technical recruiter: everyone who does React does CSS.
You cannot do React without CSS, but because that got into the
(10:07):
requirements list, those become candidates that get rejected for no good reason.
So we have a tool to get the requirements list solidified, and we actually take a lot longer on this than others do, because no one likes to do this part.
They like to see candidate resumes, but saying whether CSS is a nice-to-have or a must-have is
(10:28):
really important.
Because no one asked me that question.
It was obviously a nice-to-have, but it got into the must-have list, and job descriptions are crap.
So we essentially turn those requirements into evals for an LLM, so yes/no questions.
And then we create a hierarchy of those evals, and then we can
(10:49):
run those evals against a resume and application plus knockout questions that we generate, and that allows us to sort candidates into qualified and unqualified buckets.
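To make that flow concrete, here is a minimal sketch of requirements turned into yes/no LLM evals and run against an application. The structure, prompt, and function names are illustrative assumptions, not Woven's actual implementation, and the LLM call is passed in as a generic callable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Requirement:
    question: str        # a yes/no eval, e.g. "Does this application show React experience?"
    must_have: bool      # decided by a human up front, before any screening happens
    weight: float = 1.0  # optional weighting for nice-to-haves

def run_eval(ask_llm: Callable[[str], str], req: Requirement, application: str) -> bool:
    """Ask the LLM one yes/no question about one application."""
    prompt = (
        "Answer strictly 'yes' or 'no'.\n"
        f"Question: {req.question}\n"
        f"Application:\n{application}"
    )
    return ask_llm(prompt).strip().lower().startswith("yes")

def screen(ask_llm: Callable[[str], str], application: str, reqs: list[Requirement]) -> dict:
    results = {r.question: run_eval(ask_llm, r, application) for r in reqs}
    qualified = all(results[r.question] for r in reqs if r.must_have)
    nice_to_have_score = sum(r.weight for r in reqs if not r.must_have and results[r.question])
    # The per-question results double as the documentation trail discussed later in the episode.
    return {"qualified": qualified, "nice_to_have_score": nice_to_have_score, "results": results}
```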
Speaker 1 (11:02):
Right. And on the resume side, how much of that is product versus human review?
Speaker 3 (11:11):
It's product.
We do a human review right now just because we want to learn from any mistakes.
But we're not building this to be a ranking algorithm where the conceit is, oh, it's just ranking, it's just your top 50 out of a thousand, it's not really a hiring decision because it's just ranking.
But we all know those other 950 folks are not going to get
(11:33):
looked at the same way.
I am not super stoked, and this is maybe a little controversial, but I'm not super stoked about the EEOC decision on when is it a hiring decision: if they have applied, it's a hiring decision; if you're doing outbound, it's not.
So that's the thing we're all skating by on.
But ranking, when the other people are not going to get looked at, I feel like that's a bit of a cover.
(11:54):
So I feel like, as vendors, we need to be building systems that pass scrutiny.
So Workday is getting sued right now. Have y'all seen that lawsuit?
Speaker 1 (12:08):
Yeah, I don't know what the latest is.
Have there been any recent updates in the past month or anything?
Speaker 3 (12:13):
I think June or July was the last time I saw something new.
Yeah, and Workday's defense is essentially, hey, we're not an employer, we're not a staffing firm, don't hold us to any criteria, and I think that is not the approach we should be taking as technology firms.
We should instead be thinking of crypto: there were all these
(12:36):
fly-by-night crypto companies, and then Coinbase stepped forward and said, we're going to be regulated and we're going to lean into it.
We're going to ask for regulation, we're going to meet the standard of goodness and lack of shadiness. In this case it would be lack of bias, and that's the system we're building, one that can be run automated because you put the human effort up front, at the requirements. Does this house need a basement
(12:59):
or not?
If you decide later on you want to change that, it's very expensive.
But if you spend the time up front saying, okay, this human signed off, it doesn't need a basement, then you've created the paper trail that the EEOC needs.
And then all the LLMs, all the AI is doing is like data entry; it's just doing dumb things like matching companies against criteria, or looking for not keywords but skills,
(13:22):
because LLMs are already better than recruiters at most of the skill matching. Everyone's still using examples like the keyword things in a resume: oh, it's Kubernetes, and someone said K8S. Of course the robots get that.
They are already better than most, better than me, better than most tech recruiters.
In one of our testing data sets, we were looking for an engineer with TypeScript and someone came
(13:46):
through with Next.js experience.
I didn't know that Next.js is a platform only written in TypeScript, but the robot knew, so it marked that candidate as a pass.
The bots are already better at data entry; we should let them do data entry.
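A rough sketch of the kind of skill matching described here, where aliases and implied skills are expanded before a requirement is checked. The alias and implication tables are small illustrative assumptions, standing in for what an LLM or a curated knowledge base would know.

```python
# Illustrative only: tiny alias and implication tables.
SKILL_ALIASES = {
    "k8s": "kubernetes",
    "nextjs": "next.js",
    "next.js": "next.js",
}

# Skills that directly imply other skills (the Next.js -> TypeScript example from
# the episode; treat these implications as assumptions, not ground truth).
SKILL_IMPLIES = {
    "next.js": {"typescript", "react"},
    "react": {"css", "javascript"},
}

def normalize_skills(raw_skills: list[str]) -> set[str]:
    """Map aliases to canonical names and add directly implied skills (one level)."""
    skills = {SKILL_ALIASES.get(s.strip().lower(), s.strip().lower()) for s in raw_skills}
    for skill in list(skills):
        skills |= SKILL_IMPLIES.get(skill, set())
    return skills

def meets_requirement(raw_skills: list[str], required_skill: str) -> bool:
    return required_skill.lower() in normalize_skills(raw_skills)

# e.g. meets_requirement(["Next.js", "K8S"], "TypeScript") -> True
```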
Speaker 1 (14:02):
Yeah, for sure.
Yeah, it's going to be interesting to see what happens with Workday and the precedent that sets as well.
Could you, can we double back? You were talking about the EEOC and the analogy in terms of not needing a basement.
Could you help explain that a little bit more, like how you think these products and tools are going to protect against
(14:22):
that and what the best practice is going to be?
Is there any more you could share there?
Speaker 3 (14:28):
Yeah. So I have a background in computer security, and one of the things you learn early on in computer security is that regulatory compliance is a documentation game.
Yes, there's some stuff you should do, but it's obvious
(14:49):
stuff.
Same thing with the EEOC.
The stuff you should do is obvious.
It's not like they're asking for crazy stuff.
They're just asking for documentation that a human made this decision, that they made it for good reasons, they can justify it, and you have a trail showing you were then following that criteria. And you could do this manually.
You build a big resume rubric or application rubric with 17
(15:12):
rows, you weight the criteria, you fill out zeros and ones for every application.
It takes nine minutes per candidate, and no human would do that, because you're immediately like, oh, I can't do this, I'll just use my deep learning network that's between my ears to make a judgment. And there's a carve-out for a human making a judgment: they're probably not biased,
(15:32):
can you prove they were biased? Whereas for the robots you have to prove that they weren't biased. But there's already a way to do that.
The thing about robots is they don't get bored.
They will fill out that 17-item rubric and they will do it better than a human if you build the tech right.
Hallucinations are one thing when you're asking something to look up information; when you're asking it to search a small document for a very specific
(15:54):
criterion or enrich a document with LinkedIn data, hallucination is not the problem.
That's just a lot of plumbing.
So for the EEOC, you need to say this requirement is job-related, it's bona fide, there was a person that made that decision, and here's how that requirement traced through, and here's why we rejected that person based on the requirement.
(16:14):
There's nothing about knockout questions in any of the EEOC guidance.
Everyone uses knockout questions, but there's not like a carve-out or exception for automated knockout questions.
There's just: is this a job requirement?
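As a sketch of that documentation game, here is what a weighted, human-approved rubric with a per-candidate paper trail could look like in code. The fields and rubric rows are hypothetical illustrations, not a statement of what the EEOC actually requires.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RubricRow:
    criterion: str       # e.g. "Has professional React experience"
    weight: float        # decided and signed off by a human up front
    must_have: bool
    approved_by: str     # the person who decided this criterion is job-related

def score_application(candidate_id: str, rows: list[RubricRow], answers: dict[str, int]) -> dict:
    """answers maps each criterion to 0 or 1, however it was filled in (human or LLM)."""
    missing_must_haves = [r.criterion for r in rows if r.must_have and not answers.get(r.criterion)]
    total = sum(r.weight * answers.get(r.criterion, 0) for r in rows)
    return {
        "candidate_id": candidate_id,
        "scored_at": datetime.now(timezone.utc).isoformat(),
        "rubric": [asdict(r) for r in rows],   # who approved each criterion and its weight
        "answers": answers,                     # the zeros and ones, per criterion
        "score": total,
        "rejected_for": missing_must_haves,     # the specific requirement a rejection traces to
    }

# Persisting each returned record (for example as JSON) is the paper trail being described.
```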
Speaker 1 (16:27):
So that's what it is: is it a job requirement?
As long as it's clearly posted that a human came up with the job requirement, and if the AI is then making the evaluation by essentially matching to the job requirement, say as a knockout question, and there's a documentation trail of that, that's probably not something that the EEOC is going to flag.
Speaker 3 (16:46):
Yeah, will you get sued?
If you're Workday, you're going to get sued. Anybody can get sued in this country.
Speaker 1 (16:51):
We love suing each other.
It's like our favorite thing, it's like our favorite pastime in business.
Speaker 3 (16:55):
But will you win that lawsuit before it goes to actual trial, because you can dump this amazing documentation?
Yeah, as long as you don't do something like, if you put on that form, is white male, then advance.
Okay, yeah, you're going to go to jail. Good luck. I'm glad we have this documentation trail.
That's progress.
Speaker 1 (17:15):
Yeah, yeah. I think for a lot of these products too, it's like limiting the data to make sure there's no personally identifiable information or really anything that could be used.
And for some products that gets a little more challenging.
The wider in scope the AI is, the more data the AI is
(17:38):
evaluating, then it could be a little bit harder. Somebody could have had short tenure because they had a baby or something like that.
Then there could be something that you wouldn't think could be
(17:58):
related that could be discrimination, or considered discrimination, that could slip in, or it could be looked at in a different context.
And that was an interesting counterpoint.
He's just like, yeah, you've got to be careful, because sometimes there might be things that are introducing bias or discrimination or whatever else into the process.
You just have no idea. It's just hard to catch everything.
But the more limited, I feel like, the scope is,
(18:20):
like just looking at a resume, but again, there's the tenure, there's always things.
I think the more limited the scope, the more we can prevent against that, at least. Steve is a very smart guy.
Speaker 3 (18:31):
Gem is, well, a lot of our customers use Gem.
They get a lot of value from it.
He's not wrong from his perspective, and I'm going to take the opposite point here.
You can come up with these "but what about X?" cases. From my perspective, we're not comparing to some perfect recruiter who can spend five minutes per resume and notice,
(18:52):
that was a gap, but then notice, oh, this is a woman, so that gap was probably that.
That's not what happens.
Like, you look at the data: recruiters get, depending on which study you use, between 15 and 30 seconds per resume.
That is not enough time to take all of that in.
Nobody is that good. As recruiters, we get good at it, we feel good at it, we can do it easily.
(19:12):
That doesn't mean we're actually effective at it.
That doesn't mean the result is suitable for purpose.
That's different. Expertise needs a feedback loop; it doesn't mean it just feels easy.
It feels easy for us because we do it a lot. That doesn't mean you're good at it when this is studied.
So interviewing.io has the best study on this.
They did one 10 years ago and they just did a new one.
They asked tech recruiters working at tech companies:
(19:34):
hey, here's some resumes.
Categorize them based on their likelihood to pass a technical interview.
Easy, right?
These are tech recruiters. That's what they do all day, that's like their main job.
They were slightly better than a coin flip. Slightly better.
And when they did some post-hoc analysis on what predicted a recruiter picking a resume, it was underrepresented status,
(19:56):
recruiters really do care about diversity, and it was prestige of previous employer, and specifically name recognition of previous employer. Not, was this employer selective? It's, have I heard of this employer?
That's what mattered.
Speaker 1 (20:10):
They must have come from enterprise companies.
I feel like a startup recruiter would go crazy if they heard a hiring manager request that type of experience.
Speaker 3 (20:19):
Well, the thing is, hiring managers don't usually request this.
This is a common thing with recruiters. This is my opinion, I would love to have the pushback, you would know more than me; I have never hired a recruiter, I've only partnered with them.
Hiring managers want people who are good, and they would like to live in a world where they don't have to confront the reality that there are a thousand resumes and a lot of them look good.
(20:40):
A recruiter has to pick, and so they have to pick on something.
And what they tend to pick on, regardless of whether they admit it, if you look at this study and others, is brand name recognition. Like, oh, you worked at Airbnb, cool.
The problem with that? It actually is pretty effective.
A lot of name brands are selective institutions.
(21:01):
The folks you pick from there really are more likely to pass your interview.
It's not wrong. It feels icky.
The problem is it's incomplete, because there are selective institutions, a startup you have never heard of, a scale-up you have never heard of, because now we're all recruiting remote and there are all these companies we've never heard of, all across the country and the world, that are better than the ones you've
(21:23):
heard of.
Who is more selective than Airbnb?
So there's this prestigious resume that super-predicts passing your tech screen, because they have a harder tech screen than you do, but you've never heard of that company.
So you as a recruiter, in a hurry, you have to pass on them because you just don't recognize it.
The robot can build a list of what the selective schools are, what the selective employers are, and match against that list.
(21:46):
So you're doing the same thing, you're just doing it better, and we can talk about how to fight against prestige bias itself; that's another topic.
But at the start, if we're going to do the same thing humans are doing, let's just do a better job at it.
Let's swap the prestige list for one that is more complete and hits the people who haven't worked at Google but have worked at the most selective startup, a fintech startup in New York
(22:10):
that recruiters have never heard of but is incredible at selecting developers.
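A minimal sketch of that idea: matching a candidate's employers and schools against an explicit selectivity list instead of a recruiter's name recognition. The list, scores, and threshold below are invented purely for illustration.

```python
# Illustrative stand-in for a curated, maintained selectivity list.
SELECTIVITY = {
    "google": 0.95,
    "airbnb": 0.93,
    "unknown-nyc-fintech": 0.97,   # hypothetical startup nobody has heard of, but highly selective
}

def max_selectivity(history: list[str]) -> float:
    """Return the highest selectivity score across a candidate's institutions (0 if none known)."""
    return max((SELECTIVITY.get(name.strip().lower(), 0.0) for name in history), default=0.0)

def passes_prestige_requirement(history: list[str], threshold: float = 0.9) -> bool:
    # The threshold is an explicit, human-chosen requirement rather than a
    # recruiter's 30-second gut call on whether they recognize the logo.
    return max_selectivity(history) >= threshold

# e.g. passes_prestige_requirement(["Unknown-NYC-Fintech", "Some Agency"]) -> True
```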
Speaker 1 (22:15):
Yeah, I find it kind of depressing that senior recruiters would overemphasize pedigree or where people worked, because it's much more relevant to look at: does the person come from a relevant environment?
Speaker 3 (22:29):
Right.
Speaker 1 (22:29):
What does their team look like?
What is the technical stack, everything that might go into it?
Okay, what size customers, what industries do you service?
All of the nuanced things that get into it.
All right, looking at it from a technical perspective, not engineering technical, but from an analytical perspective of what the actual environment looks like, and matching that is much more
(22:52):
critical.
I go through this all the time with my customers that are in the startup and growth-stage phase.
I've got a customer in HR tech, right, and they are a startup or growth-stage company, probably around 50 employees, 400-plus customers, and the primary thing they were looking at was: okay, we need to hire salespeople.
(23:12):
Oh, let's get people from LinkedIn.
And I'm of the opinion you don't really sell at LinkedIn. Sorry, it's a monopoly business.
People come inbound, you're shuffling papers around. It is what it is, right?
A lot of the time you don't have to develop very strong sales skills, and it's a totally different motion than working for a startup or growth-stage company that nobody's ever heard
(23:35):
of, that doesn't have every resource available under the sun, that isn't heavily automated, that doesn't have every possible technical stack thing in place; the motions, the consultative, strategic motions of everything, and knowing what it's like to work for a startup, being spread thin, the work ethic, everything that goes into servicing customers.
It's just way different.
(23:56):
So, yeah, I want the no-name startup that's growing fast.
I want people that come from the same environment.
I don't want people that I think could do the job; I want people that have done the job, and where I can get references from previous direct managers.
I don't really care where you worked, as long as the environment fits.
So I think it's just like, if you're enterprise and you're
(24:17):
going to another enterprise, yeah, you could do that.
But if you're enterprise going to a startup, I don't care where you come from, I see that as a riskier hire. It's just riskier.
Like even an engineer working at a big company.
Now, there are situations where, and this is the nuance, right, they were working at a bigger company, but it was on a smaller team.
Was it in a subsidiary? Was it on a new kind of project? Did they have fewer resources than the parent company?
(24:39):
Or were they off doing their own thing over here, so they were doing a lot more?
Sometimes, like you'll see, even on the engineering side, as a startup engineer, if you're one of the first 10 or first 20, your scope of what you might be doing is a lot wider, right?
The technologies you might be working on are a lot more recent.
Speaker 3 (24:59):
So there's no onboarding docs; you're figuring it out on your own.
At Google, you have six months to onboard with this pristine process.
It helps to be a PhD to navigate that environment, but that's not what a startup needs.
But that Google resume? I've got to show the hiring manager 10 resumes.
Am I going to skip the Google resume? Because they'll be excited about that Google resume.
Speaker 1 (25:18):
Yeah, it's not... they'll be excited until the person flops.
An engineer from Google probably isn't going to flop; they're obviously going to be incredibly sharp.
So, like, dialing into software engineers, yeah, if we could afford the guy or gal from Google, if they're not like double our comp range, then yeah, maybe
(25:38):
we should consider them.
Speaker 3 (25:39):
But yeah, there's also nuance on the role, right? It's all about the nuance aspect too.
Yeah, and my belief is that the people who have the best strategic thinking around this hire around the role, around the company's position, around their budget, realistically. They should put their effort at the very front, defining the requirements and getting really uncomfortably specific.
So prestige is something nobody likes to talk about unless
(25:59):
you're doing outbound. Like, all the outbound tools have that filter.
They have that prestige top 1% filter, top 20% filter, but no one's built that on the inbound yet, because it feels gross.
I don't see a lot of scorecards that are like, must be from a top 20% institution, anymore.
But the reality is your recruiters often are having to
(26:20):
make decisions and they're using prestige.
So why not make it an explicit requirement and get to decide: is it a must-have? Is it a nice-to-have right now?
Let's put that thinking up front.
Have the hard conversations and don't let it get into the squishiness of the recruiter with 30 seconds trying to figure out if this person gets to have a screen or not.
Speaker 1 (26:39):
Yeah, for sure. Elijah, I don't know if any questions are coming up for you.
I know I've been monopolizing our side of the conversation here.
Speaker 2 (26:46):
All good.
I'm curious: so if, let's say, the hiring manager or the recruiting team used AI to actually generate the job description in the first place, including some of those must-haves and nice-to-haves, do you think that's considered a hiring decision relative to the EEOC, because they, like, reviewed it afterwards,
(27:08):
before doing something with it?
Does that make sense?
Speaker 3 (27:13):
I think it's a gray area.
So I think if you copy a job description from online and cargo-cult it, and then that's the thing that's just on the job page, and then you do something totally different, and you can't show that you're tracing your actions and screening criteria to something relevant, whether it's a job description or another document...
(27:34):
I prefer having another document that is not the job ad, that has the actual requirements and rules.
To me, job description: I think companies who see that as an advertisement perform much better than companies who see it as a job description.
I think that's a distinction that I make, and not everyone does.
But at least you have to show that, whatever the thing is,
(27:54):
whether it's another document or the job ad, you are tying your actions to that. And then, for the EEOC, that means you need documentation.
So that's the key: write something down somewhere that you can send to a lawyer.
Not only is that good for the lawyer, but that's good for us, so we don't lie to ourselves about having actually looked at this.
Speaker 1 (28:16):
Wait, so your recommendation, and sorry if I missed something here, but I know we're covering a lot of ground, and I really appreciate your advice here, I think a lot of people are, at least I'm very interested in this stuff.
So should these products be helping companies create the job descriptions, so the company can type in or share role requirements and then it can refine JDs?
(28:36):
Because a lot of these products are doing that too.
A lot of products right now are actually, it seems like, almost helping shape role requirements. And so is that something where maybe people should be staying away from having AI craft requirements, and it's more about giving AI very clear
(28:57):
requirements and then...?
Speaker 3 (29:05):
Personally, I worry more about the AI helping with the requirements.
No one likes writing job descriptions.
It's a marketing hat. Whenever that's not your thing, nobody likes it, and so I get why that's an early target for Gen AI.
I worry more about Gen AI crafting the requirements, the what-we'll-be-excited-about-in-you, the must-haves.
I worry more about that than I do about Gen AI ranking resumes,
(29:25):
frankly, because the first thing is where, if you get that wrong because you just cargo-culted someone else's, everything else is going to be wrong.
Yeah, it's like the foundation to everything you're doing.
Speaker 1 (29:37):
Yeah, yeah, I like your distinction, too, about the job ad versus the JD.
That's really cool.
Speaker 2 (29:44):
Yeah, the only problem with that, because I've used that at previous companies, is you're then trying to manage multiple documents. And there is a certain level of transparency in whatever goes online being the actual requirements, right? If you have shadow requirements, things you're not telling people, personally,
(30:05):
I just struggle a little bit with that, and it gives you more things to manage.
If the job description is essentially the core requirements, and then, I don't know, maybe AI is going to create a job advertisement that's just a few bullet points and is more of the marketing, I guess
(30:32):
that's fine.
But yeah, I've tried to manage both, and I think it can be a huge challenge to try to maintain multiple documents.
And then maybe there's risk, right, when those get out of alignment.
A requirement changes on the job description, nobody updated the job advertisement, and then how does that work with the scorecard, right, that's been created, and then any of the
(30:53):
questions, right, that are trying to pull certain responses or examples to fill out the scorecard.
I just, yeah, I think there's a lot of inconsistency: if there's a job description, a job advertisement, a scorecard, questions that are aligned with the scorecard,
(31:14):
those rarely seem to all line up in this beautiful, consistent way.
Speaker 1 (31:15):
Elijah, what if, when people make changes to the job description, it automatically updates the job ad?
Oh yeah, 100%, right? If it could automatically do that.
Speaker 2 (31:25):
Right now, none of the technology does that.
The ATSs are all set up this way; tell me if you've seen one that's different, where there's a job ad that you can edit and it's not also the job description.
If you're going to have a job description somewhere else, usually it's Google Docs or a Word file, but then you have to remember to go update the ATS, because the ATS is where the
(31:46):
scorecards are housed, which is going to be what the recruiters and the hiring teams are using to actually evaluate the candidates.
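One way to picture the fix being discussed is a single structured source of truth for the requirements that regenerates the job ad, scorecard, and knockout questions together, so they cannot drift apart. This sketch is hypothetical, not how any particular ATS works; all names and fields are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    title: str
    must_haves: list[str]
    nice_to_haves: list[str] = field(default_factory=list)

def render_job_ad(role: Role) -> str:
    bullets = "\n".join(f"- {r}" for r in role.must_haves)
    return f"{role.title}\nWhat we're looking for:\n{bullets}"

def render_scorecard(role: Role) -> list[dict]:
    return [{"criterion": r, "must_have": True} for r in role.must_haves] + \
           [{"criterion": r, "must_have": False} for r in role.nice_to_haves]

def render_knockout_questions(role: Role) -> list[str]:
    return [f"Do you have {r}? (yes/no)" for r in role.must_haves]

# Editing the Role once regenerates every downstream artifact:
role = Role("Frontend Engineer", must_haves=["professional React experience"],
            nice_to_haves=["CSS depth"])
print(render_job_ad(role))
```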
Speaker 1 (31:53):
I think we're going to see that more with startups, like AI-native companies that are doing some of this generation stuff.
I think a lot of it will come down to what happens with Workday, right, and some of the requirements as that gets more clear.
If, to your point, Wes, there becomes this distinction on who's defining requirements, like people
(32:14):
that are writing requirements and people reviewing requirements, then I could see the product roadmaps building out this distinction of here's the JD and here's the job ad.
But then, Elijah, I think you touched on this too, I wonder how it's going to be viewed.
Think about transparency laws, right, around compensation. Are there also going to be questions about how much of the requirements need to be publicly facing too, with two different documents?
(32:37):
That's... I don't know, it'll just be weird.
I think we have to keep product roadmaps a little bit loose; you try to guess.
I think the way you're thinking about it seems very logical to me.
Speaker 3 (32:54):
I think I agree with what you're saying.
I'm very autistic; it is the autistic approach to resume screening, but systematized, for better or worse.
And yeah, we already have the knockout questions, scorecard, interview questions, job description.
We already have four things we're keeping in sync, and the other thing adds an additional one, which is hard.
It's hard to keep those things in sync.
I'm excited about vendors like Poetry that seem to be targeting
(33:17):
this problem of reusability, reusable things; maybe you can use one thing to generate another thing.
I don't think they have this yet, but I would love for them to build it.
And Ashby, as far as ATS vendors to compliment, they have a version of creating requirements that are separate from your job description.
They're suggested based on the job description, and then you
(33:41):
have those as a separate thing that you can add or remove.
I think that's a good advancement, and you can use LLMs to score them.
You're on your own for prompt-engineering the stuff.
It feels very, you know, beta, but I love that they're doing it and putting it out there and letting folks get the power of the technology, because the LLMs are at least as good as a very busy person, in my opinion.
Yeah, I know there's a lot of concern around LLMs being involved in the hiring process, but I get it.
Speaker 1 (34:08):
It could be like bias at scale.
I don't know, I think it's going to be significantly better and less biased than people.
Yeah, it seems very obvious to me now, and it didn't at first, before I was really diving into AI and LLMs, before I really knew, and I don't think any of us really knew, a whole lot about the technology that came out a couple of years ago.
Like maybe you did, but a lot of us were like, well, how
(34:30):
the hell does this really work, and what are the...?
But now that I've learned a fair amount, it just becomes more and more clear to me.
I understand the fear.
It's Apriora; their CEO, Elijah, he came on the show and we were talking.
They do another AI kind of product, but what they were essentially talking about is the analogy to autonomous cars, and
(34:53):
so it's like, statistically it's safer, but people are still scared of it, that concept.
I think it's the same with AI here.
Statistically, we should be able to create this tech, like in the near term, to be significantly less biased.
Bias is a huge problem in the United States. It's massive.
Okay, so this is an opportunity to make it significantly better.
(35:16):
So, you know, there's all this fear, and I get it, but I don't know.
I think the more that people become a little bit more comfortable with it, it's pretty clear.
It's pretty clear that this is going to be so much better, so much better than people.
Speaker 3 (35:31):
We're going to mess it up along the way.
People are going to mess it up.
They're going to use it for the wrong reasons.
They're going to just say, hey, who should I hire? And they're going to hire that person and be like, what went wrong?
We're going to do dumb things. That's how we get through new technology.
But I think anyone looking closely at the advancement is saying, we're not going to give it this A-to-Z problem.
We're going to give it B and C and E and F. And Z?
(35:53):
Wait, that's outside the scope.
You give it the pieces that it's going to be really good at, and I think the drudge work of data entry, does this application match this criteria, that's a really good use case right now.
Speaker 1 (36:06):
I think it is.
I think in different parts of the process too, not even just top of funnel, if you get specific enough, right?
That's where I say wider in scope: if it has access to demographic data and stuff like that, then of course the likelihood of bias creeps up significantly.
But if it's very dialed in, looking for very specific information, and also, I think, the prompt engineering
(36:26):
and what's happening with the AI and all these types of things, people are writing in things already to prevent bias.
It's an ongoing thing where there's already a lot happening to train these systems to not be biased, and it's going to be a lot more effective than your employee taking a compliance class once a year,
(36:46):
where they're mentally checked out.
It's the last thing they want to do after working 40 hours a week.
I think hopefully... there are going to be mistakes made.
But it's the same thing with the autonomous cars: just because one crashes, we shouldn't say, oh, it's not safe, if statistically it's safer, and then we need to keep refining it. And there's never an okay amount of bias or
(37:07):
discrimination, ever.
Anything above zero is bad, but if we can move in the right direction and it has significantly less, yeah, let's do that, and I think it will.
Speaker 2 (37:16):
I think one of the best things I've seen, going back to the job description stuff, is that concept from Lou Adler called performance-based hiring.
Lou's this great older gentleman who's been using this for years, and basically he says to start with KPOs, key performance objectives.
Every time I've done this, when I can get the hiring manager to
(37:38):
really partner with me and be specific, the whole search goes better.
So you basically get, I think it's three to five: what are the top three to five things that need to be accomplished within the first, let's say, 12 months to determine whether or not this was a successful hire?
So you're tying it together, almost like the performance evaluation at day 365 after their start date, to figure out
(38:03):
how they are going to be evaluated and how we are going to know that we made a successful hire a year in.
And then you're determining those: needs to close, let's say, I don't know, $800,000 in new business if it's a sales role. And you go through these three to five key performance objectives.
Then you use that to build the job description and the
(38:27):
requirements and anything else.
And then the scorecard, right, in the evaluation, the key performance objectives are also on the scorecard.
So you're basically trying to figure out, can this person do what we need done?
And, as James alluded to earlier, have they done it? Do they have examples of doing that before in similar contexts
(38:51):
to this?
I'm a big fan of that performance-based hiring.
When you can get those performance objectives, everything else is more consistent and clear throughout the whole rest of the process, including their first one-year performance eval.
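A small sketch of that performance-based-hiring flow, reusing the same KPOs for the scorecard questions and the one-year review. The field names and example objectives are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class KPO:
    objective: str            # e.g. "Close $800K in new business"
    timeframe_months: int = 12

def build_scorecard(kpos: list[KPO]) -> list[dict]:
    # Each interview question asks for evidence the candidate has done this before.
    return [{"kpo": k.objective,
             "question": f"Tell me about a time you accomplished something like: {k.objective}"}
            for k in kpos]

def one_year_review(kpos: list[KPO], achieved: dict[str, bool]) -> bool:
    """The same KPOs decide, a year in, whether the hire was successful."""
    return all(achieved.get(k.objective, False) for k in kpos)

kpos = [KPO("Close $800K in new business"), KPO("Build a repeatable outbound playbook")]
print(build_scorecard(kpos))
```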
Speaker 1 (39:06):
Yeah, I see, I totally agree with that.
And one thing you'll hear too is sometimes people say, okay, it can be challenging based on the role. And if it's challenging, then you don't know the role well enough.
Yeah.
Speaker 2 (39:17):
Don't hire for it yet. If you're not going to be able to know whether you made a good hire 12 months in, are you actually ready to spend X hundreds of thousands of dollars hiring that person?
Probably not.
Speaker 1 (39:27):
So hiring, like, hiring should be looked at as an investment, right?
So what is going to be the return on that investment?
We should have a very clear ROI in mind. Right, I don't know why this is randomly coming to mind, but this is Sam Jacobs.
I don't know if you guys know him, he's the founder and CEO of a networking group called Pavilion, in the tech industry.
(39:52):
One of the things, it was Revenue Collective before, yeah, the Revenue Collective.
Yeah, so it's a cool group.
But Wes, I don't know if you've seen, they have a CEO group. That might be interesting for you guys.
But anyways, yeah, one of the things he always talks about is head count is not scale, right? Scale is unit economics, your revenue, your margin, scaling at a rate where you're making more money because of certain investment decisions.
And he often says a company's mistakes scale with just growing
(40:15):
teams, which is really just a cost burden often.
And so it's getting into that mindset of ROI surrounding hires, and if there isn't a clear way to track back how this hire is helping the company achieve the North Star metric, why is that hire being made?
(40:36):
And you're right, that's how we should be thinking about everybody, how we should be thinking about putting together job descriptions.
There should be a very real ROI that's tangible, and it doesn't have to be a sales role, it should be any role within the company.
Speaker 3 (40:46):
That should be your requirement, especially for executive roles, because they vary so much depending on your context.
That's such a good exercise. I like the A Method for Hiring, that's the one, but it sounds very similar to what you're describing, Elijah, where you ask: what are the accomplishments that you want this person to have?
And for me, one time I went through that exercise and realized what I wanted to hire.
I was like, I'm going to hire a VP of marketing.
(41:06):
And then I was like, no, I don't need a VP of marketing to do these things.
I need an entry-level person.
It'll be way cheaper, they will actually like their job, versus if I get a VP to try to update AdWords, they're going to hate me. And it saved me $100,000 and a lot of pain, just because...
Speaker 1 (41:22):
Oh yeah, the opportunity cost.
Getting an executive hire wrong is literally a seven-figure problem, at least.
For a small to medium-size company, it's a seven-figure problem.
For a bigger company, you're talking multi-million dollar.
Yeah, it's just nuts.
On the flip side, what's also really interesting is that when I'm evaluating talent, one of the things that I also look for if I'm
(41:44):
hiring for my team is: does the person I'm hiring understand how their role directly impacts North Star metrics?
Do they understand the correlation to their activity, why it actually matters and how it's driving the business forward?
And are they aware of their environment in terms of how they might be doing that?
Or do they have ideas on maybe more efficient ways to do that,
(42:05):
how they think the team could operate? And I do this for ICs.
Even if they're not in a strategic role, I want to know if they have that self-awareness, business acumen to some extent, and if they really understand the impact that they need to have, right, versus just tactically operating day to day.
Speaker 3 (42:24):
Yeah, that's how you level up an organization: make sure everyone knows how they impact the level above.
And that's hard to do.
I read a book called Turn the Ship Around.
It's about a submarine, among other things, and his approach is called leader-leader, where basically the person that's reporting to you, they should do the thing and tell you they're doing it while they're doing it, so you can correct it, but they just go and
(42:45):
do it.
So it shows that they know the next level.
And one of the things in the book is asking: do people know what they're connected to?
And I was like, yeah, I'm crushing this.
And then I went and asked that in every one-on-one I had for the next two weeks, and I was not crushing it.
Speaker 1 (42:58):
Like you're like oh
everyone knows, of course.
Speaker 3 (43:00):
They know how this goes to revenue or more customers, happier customers? Nope.
Speaker 1 (43:10):
About half the people did, and half did not.
It's hard to do.
Yeah, one of my go-to questions when I was scaling out my team aggressively in 2022, we were hiring recruiters every month, and I would ask recruiters with like two years of experience: hey, if you were made CEO of your current employer tomorrow, what would be your top three initiatives and why?
What would you double down on, what would you change, what would you discontinue?
And I feel like I got so much value from that, just again
(43:34):
feeding into their awareness, their understanding of their role within the company, other people's roles within the company. To me, it makes a huge difference.
So it goes both ways.
The hiring team needs to understand people's point of impact, how it's impacting North Star metrics, and the best candidates are going to understand people's point of impact too.
Speaker 3 (43:49):
Yeah. To go all the way back to resumes: if CSS ends up in the job requirements and no one can trace why CSS on someone's skills list traces to them being effective in their first 90 days, their first year, then probably we should remove that
(44:11):
and stop looking at it on resumes.
Speaker 1 (44:14):
Yeah, for sure, for sure.
Look, this has been a really fun episode.
I definitely learned a lot.
I really enjoyed the EEOC conversation, and you definitely had me thinking about some different problems and challenges and opportunities in a new way today.
So I'm sure I'm not going to be the only one as people are tuning in here.
It's definitely a lot of value that you've shared with us today.
Wes, thank you very much for taking the time to educate us
(44:38):
and our audience on everything that you're working on and know.
It's definitely really impressive, and we're really thankful that you've come on the show today to talk to our community and help us out.
Pleasure is mine. Thanks, guys.