Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:30):
So anyways, Roli, welcome to the show.
Thanks for joining us today.
Speaker 2 (00:39):
Thank you guys, hi, and thanks for inviting me. Very excited to be here.
Speaker 1 (00:44):
Yeah, we're really pumped to host you. Can we start? It'd be great to learn a little bit about you and the product. What are you building right now?
Speaker 2 (00:52):
Absolutely. So I'm here representing Babblebots.ai. This is a generative AI plus voice product and we're based out of Mumbai, India, so very happy to be on the show and speaking to people worldwide. What we are building is basically an AI recruiter agent: an agent that allows companies to autonomously
(01:12):
interview, screen, and assess their candidates very quickly at the top of the funnel and just get them the best candidates really quickly.
So I think speed is definitely one of the key things that we are going after, but it is pretty much the speed to the right talent, the time to hire; these are some of the key metrics that we are going after. And, yeah, I think that's what we are doing.
(01:34):
In a nutshell, we've been around for two and a half years, and we have a ton of customers in India and also now some in the US and UK.
Speaker 1 (01:43):
Wow, that's awesome. And just for context, for our listeners and for us, that'll help us ask questions too: could you tell us a little bit about your customers? Are they SMB, mid-market, enterprise? What industry might they be in, as well as the types of roles that your product helps them hire for?
Speaker 2 (02:03):
Yeah, absolutely. So, like a lot of startups in the early stages, we are also trying to figure out exactly which customer segment is, in some sense, going to benefit the most. We have seen it actually getting adopted right from startups that are very small, like a one- or two-person startup where it was just the founders and they didn't want to spend a lot of time even onboarding interns or junior engineers, to
(02:26):
companies that are quite large; I think the largest company we are working with has more than 10,000 people. So the spectrum of customers is very wide. We've worked with more than 100 customers so far, across this whole spectrum.
One of the things that I have found most magical (and I believe, like, we have built the product, but I still find
(02:49):
sometimes, wow, this is possible to do with AI now) is that the same platform can be used to actually screen all kinds of roles.
So because we are using generative AI, and because we are using a lot of the intelligence that has been built, the foundational intelligence that has been built in the world, you actually don't need to be very narrow in what you are
(03:11):
using Babblebots for.
So we have seen it being used for technical roles, non-technical roles, field engineers, sales, investor relations, engineering, design. I think at this point there are probably more than 500 different roles that our customers have used us for, and I'll honestly confess that some of the roles I didn't even know
(03:32):
existed; we just had customers create those interviews and conduct them, and I'm like, oh my god, I'm just learning what is possible if you have powerful tools like these. So, yeah, it is a very versatile, universal tool that can be used to assist any recruiter in their top-of-the-funnel madness, honestly, yeah.
Speaker 1 (03:52):
Are you seeing interest primarily from, like, the tech industry, or is there a specific industry interest that you see?
Speaker 2 (03:56):
Yeah, sure, yeah, you had asked that question.
Of course, the tech industry is very interested to try anything new, but what I've also seen with the tech industry is that they've had assessment tools in the past, right? They've had coding challenges and they have other kinds of assessment tools. So for them it's like an evolution of some of those things that they're already using.
I think the customers that find it completely unique, and it's
(04:20):
changing the game for them, are people who are not coming from completely tech backgrounds, right? So, more traditional industries. We have customers from real estate, we have customers from construction, we have customers from insurance and finance and financial services.
We have, I'm sure I'm forgetting some, telecom; we have customers from companies who support telecom
(04:43):
towers in India and, of course, staffing companies now as well. And what we realize is that staffing companies also have not had a lot of tools that they have traditionally used for the spectrum of roles that they normally hire for. Companies that are not purely tech seemed
(05:14):
to just really find this liberating, in terms of how they can actually change their workflows and how they can in fact bring some standardization to how they were doing things.
Speaker 3 (05:19):
That's awesome, thank you. Elijah, any thoughts right now? Follow-up questions?
I was curious, with some of the large language models that you're using, you mentioned they're really versatile, right, and you can use them for a lot of different positions. Have you found any limitations where the large language models struggle with, I don't know, a certain industry or a certain type of position, or any little nuances, where the LLMs still need to
(05:41):
develop that knowledge set in these little niche areas, or anything like that?
Speaker 2 (05:46):
Yeah, that's a really
good question.
In fact, what we have seen is that out of the box, these LLMs are about 80% good, pretty much across the board. But in our industry, 80% good is not good enough, because you're assessing people and it is a real opportunity for a person, right? It's not like a little video or a reel that
(06:08):
you're putting out there where it can have errors. The expectation from the customers is of a very high quality output, right? But out of the box you may get only about 80.
So our team consists of a lot of annotators and people who are making sure that we are delivering the right sort of quality to the customer.
(06:30):
So there is this path that you have to traverse to make these outputs completely consumable by the customer, which includes a lot of annotation and just making sure that the speech-to-text, text-to-speech and all those things are doing what they're supposed to do. But having said that, even after that, what we have realized is that in one particular case, the customer had very hyperlocal requirements.
(06:51):
Right, they wanted the candidates to actually know very specifically about a neighborhood, about the sort of shopkeepers in that particular neighborhood. So, for example, if you are running a bike showroom in your area, or even, let's say, there's a doctor's clinic in an area and you are a medical representative, and you want to
(07:14):
test somebody on the local knowledge, LLMs are not very good, because that amount of local knowledge has not been published about that little neighborhood that they are looking to hire in. So I think, when it comes to hiring in very hyperlocal pockets, unless you are assessing somebody for some very basic skills, you're going to see a limitation that I don't
(07:38):
think LLMs will be able to address in the near future, or even in the far future, because it's not worth it to have that level of detail in an LLM at any point in time. I'm not foreseeing that path very quickly. But yeah, for the same candidates, if you wanted to assess them on something that was a little bit more standardized and didn't need that hyperlocal knowledge, then
(07:58):
it would be fine. But that knowledge itself, the hyperlocal knowledge, is not there in LLMs.
Speaker 3 (08:04):
Do you think something like Google's Gemini, using Google listings and Google Maps and things like that, might be the one that has the most local knowledge that could be integrated into an LLM and used as training data?
Speaker 2 (08:22):
It's a great idea. I think it is not so much just the information, though, that is missing, right? I think what's missing is that a lot of this knowledge about the nuance of a neighborhood is resident in people's minds.
So, for example, in a particular area, if a doctor prefers to give a certain kind of
(08:44):
medication to a patient because of the local nuances of that place, or maybe that's why they have a better reputation there, versus another doctor in the same area who, for the same ailment, may be prescribing something else, they're actually both right, because the patients are getting fixed, right? But there's no single right answer in that case.
So I think some of this knowledge seems very subjective,
(09:07):
and I think it's not just the lack of knowledge. I wouldn't say it's a lack of judgment either, but that bridge between the knowledge and the judgment is not necessarily written down anywhere in a very clear way. If I just ask, what's the term for it, basically: can I see a pathway to that discovery?
(09:28):
It looks a little unlikely.
Speaker 1 (09:31):
Yeah, okay. So I guess one of the follow-up questions I have on that real quick is, yeah, actually, I think it was, Elijah, was it on Ben's episode, of BrightHire, where he was talking about like 80% out of the box and then it's the 20% that?
Speaker 3 (09:50):
I think he's working
on.
Was that him or was that?
I think it was Ben.
Yeah, okay nice.
Speaker 1 (09:55):
Yeah, yeah. So I'm curious to learn a little bit more about that. I think our audience would also be curious: like, that 20% that just doesn't work immediately from leveraging an LLM like OpenAI's API, what is actually being done in that remaining 20%? Like, how are companies actually training the LLM at that point?
(10:18):
Like, where is the work essentially going? What is the work going toward?
Speaker 2 (10:22):
Yeah, I'll give you an example of what seems to be difficult to do with the high amount of accuracy that is needed. For example, in India, you can ask what somebody's current salary is and what their expected salary is. Now, this is a very simple question. In almost every interview everywhere in the world, they may
(10:43):
not be asking your current salary, because it is against regulation in, like, the US and a lot of places, but almost everybody wants to know the rate at which you are willing to work, right, full time or part time.
Now, the way that this information is shared by candidates is quite contextual, right? Somebody may say, like, in India, and in India I will talk
(11:06):
in lakhs per year, right; one lakh is the equivalent of 100,000. Not $100,000, but just the number 100,000. So now, say somebody says, hey, I'm looking for something between three and four. Okay, now in a human context, the recruiter will absolutely immediately know that this person is talking about a salary between 3 and 4 lakhs. Versus if the same person says,
(11:31):
hey, I'm expecting a salary between 70 and 80,000. Now, in a human context, you will absolutely again know that this is a per-month salary that they are looking for, because of the role.
And now there is actually nothing wrong with the LLM, because it's picking up exactly the thing that the person is saying. But this is where the human context is just very wide,
(11:54):
because this recruiter has done hundreds of interviews and they know, when these numbers are being used without any units, what they mean, right?
So now, I think the LLMs are becoming smarter and picking these things up, but the accuracy level is not high enough. Because now, if you make an error on this, and the
(12:17):
company shortlists only people who are within that acceptable budget that they have, then if you make an error, the candidate may miss out on a chance because of your error. So those kinds of errors are absolutely unacceptable.
So there are these little models that we are creating
(12:38):
which have their own rules that, okay, this model is for India; if a candidate is talking about it in India, this is what they would mean: since this is a mid-level position, 70 to 80 probably here, or definitely here, means a per-month salary in INR rather than a USD salary. So these kinds of fine-tuning things are things where,
(13:00):
as we get more and more refined in terms of talking to human beings the way they talk to other human beings, we'll get more accurate. But they're not there out of the box today.
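(To make the salary example concrete, here is a minimal, hypothetical sketch of the kind of contextual rule Roli describes; the thresholds, field names, and RoleContext type are invented for illustration and are not Babblebots' actual implementation.)

from dataclasses import dataclass

@dataclass
class RoleContext:
    country: str          # e.g. "IN" or "US"
    seniority: str        # e.g. "junior", "mid", "senior"
    pay_period_hint: str  # "annual" or "monthly", taken from the job description

def normalize_salary(low: float, high: float, ctx: RoleContext) -> dict:
    """Guess the unit a candidate most likely meant for a unit-less range like '3 to 4' or '70 to 80,000'."""
    if ctx.country == "IN":
        # Small bare numbers in India almost always mean lakhs per year (1 lakh = 100,000 INR).
        if high < 100:
            return {"currency": "INR", "period": "annual",
                    "low": low * 100_000, "high": high * 100_000}
        # Five-figure numbers for a junior/mid role usually mean a per-month INR salary.
        if 10_000 <= low <= 200_000 and ctx.seniority in ("junior", "mid"):
            return {"currency": "INR", "period": "monthly", "low": low, "high": high}
    # Otherwise fall back to the job description's own currency and pay period.
    return {"currency": "USD" if ctx.country == "US" else "LOCAL",
            "period": ctx.pay_period_hint, "low": low, "high": high}

print(normalize_salary(3, 4, RoleContext("IN", "junior", "annual")))        # 3 to 4 lakhs per year
print(normalize_salary(70_000, 80_000, RoleContext("IN", "mid", "annual"))) # 70,000 to 80,000 INR per month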
Speaker 1 (13:10):
That's super helpful, and I think this salary conversation is a great example. I haven't actually thought about that; it didn't even occur to me that that would be such a difficult question to ask leveraging an LLM. But yeah, you're absolutely right. I haven't really thought about it too much, but there are probably, like, a hundred different variations of ways that somebody could answer that question, maybe not that many,
(13:33):
but, like, that's right, and you feel like it's so innocuous and everybody's asking that question in every job interaction.
Speaker 3 (13:41):
But it is surprising how many flavors it can take and what all people can mean. Yeah, and different countries too, right? Like, there are other countries where that same context, you know, a local recruiter would know that, but yeah, the LLM isn't going to pick that up.
Speaker 2 (13:59):
Yeah, I'll give you an example. It happened yesterday with one of our UK customers. We were testing something and the person said that they're looking for 2,000 pounds or something. They just said they're looking for 2,000 pounds, and the LLM picks it up as actually the weight, the pounds weight, like they're looking for 2,000 pounds in weight. Oh my God. So you just don't want to share this transcript with the
(14:19):
customer, because it's funny. But if it happens too often, then they won't like you.
Speaker 1 (14:25):
That's funny. Hey, so what's been your experience? It sounds like you have worked with a lot of international customers. What about, like, the language barrier, having the LLM operate in different languages? Is that something that's pretty easy out of the box, or is that difficult? I'm wondering, too, if you're doing any kind of prompt engineering, setting parameters and whatnot, whether in translation that starts to get jacked up a little bit.
(14:48):
I mean, I don't know; it sounds like you most likely are working in several different languages, right? How is that going? Is that a big part of that 20% that you need to refine, or is that actually pretty simple?
Speaker 2 (15:00):
No, actually, we do quite a few languages which have been in deployment, and at this point we can probably do more than 20 languages at any point in time. So I think language per se is not an issue, and there are a lot of LLMs that have solved it. Of course, if the corpus of that language is large, then the accuracy is high, and that's why English is so much more
(15:20):
accurate than maybe any other language. But the big languages in India also have a fairly large corpus, because we've had a lot of literature, and there are movies and some other media that are all getting consumed in creating these LLMs. So I don't think the language itself is a very big barrier.
Surprisingly, many of the LLMs are very good at mixed language.
(15:44):
They actually get that context quite quickly. For me personally, it was a surprise, because, if you see, in any country where English is not the first language but people are working in some sort of white-collar environment, the languages are getting mixed up now. So there's always going to be some smattering of English with
(16:04):
your local language, and it's like its own flavor, its own chutney of language, that is appearing in a lot of places. So we were not sure how we would be able to dial up or down this mixed flavor, which is actually the colloquial language in most places. But LLMs are quite good at that, at least with the popular languages. And I don't want to generalize too much,
(16:26):
because we have not looked at every language, so I don't know if French and English mix well together; Spanish and English, I think, probably mix better. But yeah.
But our experience across Hindi and some of the South Indian languages has been pretty good with these mixed-language interactions. And actually, by the way, I think one of the beauties of these AI recruiter agents is
(16:48):
that traditionally there used to be a barrier between what the recruiter was comfortable speaking in versus what the candidate would have been comfortable speaking in, and this technology actually removes that barrier.
So if you are hiring, you can always give candidates the option to speak in a language that they are more comfortable in and still be able to represent themselves with as
(17:09):
much confidence as they would, in a way that maybe English was not giving them otherwise. So I think this has opened up, in fact, the options for candidates to be a little bit more authentic in these job interactions, these job interviews.
Speaker 1 (17:25):
It's super helpful. I was just taking some notes on that. I want to make sure to mention it in the description or title of the episode, just some of the nuance there, because, I don't know, that's just some really good insight, I think, for folks to think about: the language aspect, as well as what you mentioned about training the system on aspects related to salary or otherwise. I think there are a lot of companies coming out right now
(17:46):
trying to add this type of functionality to their recruiting tech product or whatnot, and I think it's interesting. It's something that people are going to have to think about: how far along are companies, essentially, in training systems on these types of things? So that's really cool. Elijah, are you good to move on to a slightly different topic? I've got something in mind, but I want to make sure.
Speaker 3 (18:08):
Yeah, yeah, go ahead. I have a question I can ask later.
Speaker 1 (18:11):
Okay, cool. I guess in terms of where to go next: it sounds like you're starting at this, like, top of the funnel, so it's like the initial screening conversation. So are your customers typically leveraging this before candidates speak with recruiters, or what's the workflow there?
Speaker 2 (18:29):
Yeah, yeah, I think, if companies use it perfectly as designed, AI recruiter agents should be your first conversation with the candidate. And because it's a digital interaction which is almost human, you can actually talk to a lot more candidates
(18:51):
now. And because the candidates can speak to them whenever they are available, which is typically nights and weekends, everybody can speak to these agents, right? So you are just not bound by time, bandwidth, scheduling, nothing.
So I feel like the best use of this is, in fact, when companies are engaging immediately, as soon as
(19:13):
the candidate has shown an interest. And I think, if you think about it, just like in sales, right, an inbound queue is always a little bit nicer than an outbound queue, right? So, same thing: for the candidates who looked for you and have applied, because you have these AI agents now, you can immediately, pretty much in real time, do an interview.
(19:33):
So imagine you are on a company's career page as a candidate, and you apply for a position of, I don't know, supply chain manager or whatever. You apply there, and instead of just uploading your resume, now, immediately, you can actually talk to an AI recruiter right there, so you can
(19:55):
just finish your first round of interaction with the company, instead of uploading your resume and then waiting for somebody to call you back, and then setting up a call and going through that mechanism that we've all been used to for the last 20 years or so, 20 years in my own memory. But yeah, you don't need to do all that.
So I think one of the best places to use this is also the place where actually the most drop-off
(20:18):
happens. You speak to a hundred candidates, but you end up liking only 10, so effectively 90 conversations were not very productive, for various reasons. It could be something about the company, something about the candidate. It's not like it's always bad candidates; it just was not a good match for various reasons.
So wherever there is the most, in some sense, wasted human effort
(20:40):
is a good place to put something like this, because you also want to be fairly standardized at that layer, right, and reduce any kind of screening biases that may come into the picture, because, like, you couldn't reach out to someone and you just thought, hey, this resume doesn't look as great as I'd like. There are so many things, you know. I mean, I think we can have a whole episode on just the little biases that we all live with.
(21:00):
But yeah, with agents, people argue, but in fact I think in this use case we will see more and more that AI agents are actually less biased; they make things just more uniform for everyone. A lot of companies, you will see, are like this. For example, we don't even tell you the gender, and I know I'm
(21:23):
speaking to the US, and gender itself has become a big topic, so I don't want to go very far into it, but in general, we are not using voice to determine whether this person sounds like a male voice or a female voice or whatever.
We are not processing anybody's name to determine what could be their background.
(21:43):
We are not using any other location information that they have given, like, this person is from the Bay Area, so maybe they may be something different than somebody from New York or whatever. So we are not using any of this demographic data to assess a candidate. I feel it's more democratic than a lot of other processes.
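(A minimal, hypothetical sketch of what "not using demographic data to assess a candidate" can look like in code: identity fields are dropped before anything reaches the scoring step. The field names are invented for illustration, not Babblebots' actual schema.)

DEMOGRAPHIC_FIELDS = {"name", "gender", "voice_profile", "location", "photo_url"}

def redact_for_assessment(candidate_record: dict) -> dict:
    """Return only the fields the assessment step is allowed to see."""
    return {k: v for k, v in candidate_record.items() if k not in DEMOGRAPHIC_FIELDS}

record = {
    "name": "A. Candidate",
    "location": "Bay Area",
    "transcript": "I scaled our Postgres cluster by adding read replicas...",
    "answers": {"expected_salary": {"low": 300_000, "high": 400_000, "currency": "INR"}},
}

print(sorted(redact_for_assessment(record)))  # ['answers', 'transcript'] -- identity fields never reach the scorer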
Speaker 1 (22:02):
Yeah, that makes a
lot of sense, yeah.
Speaker 3 (22:05):
Do you see... yeah, I was just curious. Do you think that these voice agents are going to be more readily accepted, because you talked about inbound versus outbound, by, like, the inbound applicants versus reaching out to candidates? And then the candidate, they're a passive
(22:27):
candidate, they're open to a conversation, but do you think they'll take conversations with an AI agent versus a recruiter who can answer all their questions? Or do you think the AI agent really is going to be able to pitch the company really well? I'm just curious about AI agents for recruiting being used for that inbound versus outbound on the sourcing side.
Speaker 2 (22:50):
So I think first we'll solve for the inbound. Outbound will also happen, because there are AI SDR agents whose full job is to do the selling. So it isn't like AI agents are not being used for selling, right? Here we're talking about combining two characteristics: one is selling the opportunity and the other is assessing
(23:11):
somebody's fitment for a particular role, depending on whatever the custom criteria are that the company has given, right?
So we are not coming up with the criteria. The companies are giving us a job description, and that is being used with their sort of blessing. In some sense, they are the ones who say, okay, this is making sense, this interview is making sense to me. So I feel like it's not very far away, Elijah, that we will be
(23:34):
able to do outbound also with the same effectiveness.
But right now, if you're looking for really senior candidates, where you have very few candidates and you have to look for them on LinkedIn or other places and there are only five that you found, probably you're not gaining much by using an agent at that time. So companies also need to be smart about what they are really
(23:55):
solving for. But if you have an existing database, for example, and many companies have that, and we are working with many companies, in fact, just to harness that existing database now.
So you have an existing database. These applicants may have come to you many years ago, they may not even remember, but you could just use these agents to now reach back out to them and say, hey, it looks like this is
(24:16):
a new role and you may be suitable for it; do you want to just have a quick conversation about it? So those things are quite possible today, and they will be quite effective even today.
So I think, unless you are in a very, very niche area where you have very few candidates, where, in fact, more of the work is going into scouting them rather than evaluating them, then I
(24:36):
think these AI agents will work with different levels of effectiveness, but it won't be very bad anywhere.
Speaker 1 (24:41):
That's actually a really good question, Elijah, really good. So in a similar vein, there are some talent acquisition leaders that do not like the idea of an AI screening interview of any kind, written, voice, whatever, as the first interaction with a candidate.
(25:04):
I've even heard from some leaders that they would be open to AI managing an interview a little bit further down the funnel versus at the top of the funnel. So I'm curious; I'm assuming in some cases you've received some pushback in terms of the AI being the first touchpoint. Do you have customers that are leveraging your product a little
(25:27):
differently, where maybe they're not actually pushing it out as the first touchpoint, but they're actually just leveraging it for a later round? I'm just curious if you see people trying to plug in your product at different parts of the workflow.
Speaker 2 (25:40):
Yeah, yeah, actually great questions, guys. I'm loving this conversation; it feels like a deep dive, so very nice. Yeah, so, absolutely. I think what we have seen in some cases is customers would request their recruiters to still have that first call with the candidate, just to say, hey, I'm calling from this XYZ
(26:01):
company and you've been shortlisted for this role. Are you broadly interested? If you have any questions, you can ask me now. And they have this five-minute conversation, right? And the person says, hey, this sounds great, what's the next step? So this recruiter will now say, okay, I'm just going to send you a link to a little interview with our AI recruiter. Please go through that, and that is how we move forward in the process.
(26:23):
So this is a very good hybrid approach, because it also takes care of a lot of candidates who were maybe really not very interested and may have applied without thinking so hard. But what it has done for the recruiter is actually saved a ton of time. They did a five-minute interaction; I don't think you need anything more than that. That conversation was pretty much as I described; there is not a lot more to it.
(26:44):
So they don't have to make any notes; they don't have to, in some sense, go through a detailed interview in the first round, because that is what the AI recruiter could do, which goes deeper into the skills and other things that, in fact, many times the recruiters themselves are not really trained to evaluate. Right, because you're a recruiter; you have not written code or done a lot of supply chain
(27:07):
stuff or anything else. Things get even wonkier when it is, say, clinical research and things like that, right? Those are regulated industries, so not every recruiter is trained to interview for these. But you are still able to make that first touchpoint with a candidate and then send this out as, I would say, a step one level down in the funnel.
(27:28):
So there are companies who are using it that way as well. In our particular platform, we also have second and third rounds of interviews; so if you wanted to set up a second round of interviews, you could do it. We talk a lot about the voice agents, but our interaction also has more to it. In fact, we have a fairly large question bank where you can do
(27:50):
a small assessment, like multiple-choice questions, as well. You can do an EQ test, a personality test. You can do a basic logic test, a basic math test, or other technical skills that you wanted to do a little code test on. So all this is actually part of the platform; it's just that we talk more about the voice agent.
But in our case, actually, we have seen multiple
(28:14):
customers that have reduced their number of interview rounds, because you are getting so much signal about a candidate very early in the process.
Speaker 1 (28:23):
That's really interesting. Evaluation tests, that's interesting. Like when you're talking about these logic, math, EQ, maybe behavioral, types of assessments, right? Are those just basically out of the box from the LLM? Or how structured are they, and how much are you essentially optimizing the LLM to be able to handle those types of questions and interviews or
(28:45):
assessments?
Speaker 2 (28:47):
We are very cautious with actually using LLMs in real time for multiple-choice questions. Because, again, I think in this particular interaction, being a little bit cautious is actually helpful, because we want to be very candidate-friendly. Just because you're trying to solve a particular problem of efficiency, or of reaching them early, it
(29:08):
shouldn't make it difficult for them to get selected, right? So we've done a bunch of testing, and what we have seen is that it's better to use LLMs to create these multiple-choice questions in advance, rather than doing them in real time. So we have used them to create really large question banks, along with a lot of other information, but we don't create them while the interview is
(29:33):
going on. So there are a little bit of guardrails around how we are using LLMs in the assessment part, like the multiple-choice questions.
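(A minimal, hypothetical sketch of the guardrail described here: the LLM builds the multiple-choice question bank offline, humans review it, and the live interview only samples from the reviewed bank. The llm_complete client and human_approved queue are placeholders, not Babblebots' pipeline.)

import json
import random

def human_approved(question: dict) -> bool:
    # Stand-in for an annotation/review queue; in practice a person vets every item.
    return True

def build_question_bank(role: str, skill: str, llm_complete) -> list[dict]:
    """Offline step: ask an LLM for MCQs, then keep only the human-approved ones."""
    prompt = (
        f"Write 20 multiple-choice questions testing '{skill}' for a '{role}' role. "
        'Return JSON: [{"question": ..., "options": [...], "correct_index": ...}]'
    )
    candidates = json.loads(llm_complete(prompt))  # llm_complete is whatever LLM client you use
    return [q for q in candidates if human_approved(q)]

def pick_for_interview(bank: list[dict], n: int = 5) -> list[dict]:
    """Online step: sample only, so a candidate never sees an unreviewed, freshly generated question."""
    return random.sample(bank, k=min(n, len(bank)))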
Speaker 1 (29:40):
That sounds a little bit different than, was it BrightHire, Elijah? Because, I remember, with Ben I was trying to dive into this. This is a little different, but there's a parallel here, so just hang in there with me. When we were talking with Ben, okay, they were basically generating questions based on a job description that the system would also generate.
(30:01):
Right, this is just basic LLM, out-of-the-box stuff, and they would basically generate custom questions for the interview. And then their product will sit in on the interview, do all the note-taking and then do the evaluation and see what's been asked and what gaps they have and that kind of stuff. And so one of the things they were saying is, we'll generate
And so one of the things theywere saying is we'll generate
(30:21):
our own questions, but then alsothe hiring team can put in any
questions they want.
And so my question was likewhat if they don't ask something
that they should like,something obvious, like, for
instance, salary range, and thenthe LLM just doesn't realize,
it just doesn't ask, it doesn'tthink to ask.
So I was like how do you solvefor that?
(30:41):
It didn't really seem, and I'm sorry if I'm wrong, but I don't know, Elijah, if you remember, it didn't really seem like they had a solution for that, specifically to ensure that the LLM wasn't missing stuff if the hiring team missed it and the LLM missed it out of the box. So I guess that's just something to keep in mind as well. Do you have any thoughts on that, and how? That's where you're saying there are these predetermined
(31:04):
question banks.
Speaker 2 (31:05):
I don't know if somehow that's leveraged. So those predetermined question banks are only for the multiple-choice question part. Most of the interview is a conversation like the one you and I are having, right? You are asking me something, I'm responding, and you are picking up on what I have said to ask a probing question. That's how all interviews go, right?
(31:25):
You build on what the candidate has said, and you go deeper and deeper to some extent and then come back to some baseline. So, like what you're saying exactly, you can have a whole interview, but if you end up not asking about the location preference or the salary or the permit to work in a certain country, then actually that interview was not complete in that sense, right?
that sense, right.
So the way the conversations aredesigned is we make the
(31:50):
candidate a little bitcomfortable.
We ask them a little bit aboutthe background, like what they'd
like to do in the free time orsome question around some
lightweight question, just sothat they get used to talking to
an AI agent, and then wetypically we dive deeper, a
little bit into their background.
Maybe it could be educationalor most of the recent projects
that they have done.
Then it goes into what whateverwas asked in the job
(32:12):
description or whatever wasneeded in the job description
and these questions are notactually pre-defined because the
job descriptions are changingand even if you have a starting
question, that is the samebecause the job description is
the same.
The follow on questions the L2sand L3s are pretty much
dependent on what the candidateis saying, right?
(32:33):
So if you could ask a candidateabout how they scale a database
, there can be 100 differentanswers to this.
So the follow on question willdepend on whatever this person
did in terms of scaling adatabase versus what another
person did.
So it's really so.
That's where actually llms comein.
Is that the probingconversation that happens with
the voice conversation that'shappening.
(32:54):
That is actually not preset.
It's just that only later on,if we ask them any multiple
choice questions, those I'msaying are not dynamically
generated.
Those come from like a, aquestion bank, and they are
sourced for that particular role, but they are not being
generated on the fly.
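(A minimal, hypothetical sketch of the dynamic probing described above: the opening question comes from the job description, while the L2/L3 follow-ups are generated from whatever the candidate just said. The llm_complete client is a placeholder, not Babblebots' actual API.)

def follow_up_question(job_requirement: str, candidate_answer: str, llm_complete) -> str:
    """Generate one probing question grounded in the candidate's last answer."""
    prompt = (
        f"The role requires: {job_requirement}\n"
        f"The candidate just said: {candidate_answer}\n"
        "Ask ONE short follow-up question that probes a specific detail of their answer. "
        "Do not introduce topics the candidate did not mention."
    )
    return llm_complete(prompt).strip()

# Two candidates answering "how did you scale a database?" get different probes, e.g.:
#   "We sharded Postgres by customer ID"  -> a question about cross-shard queries
#   "We added read replicas and caching"  -> a question about cache invalidation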
Speaker 1 (33:12):
But then there would be, or should be, from a prompt engineering perspective on the backend, something saying, hey, make sure you're collecting this and this. So it's not a prerecorded question per se, but there should be, it would seem, something in place to ensure that the LLM does cover it.
Speaker 2 (33:29):
Yeah, yeah, absolutely, because, see, finally, it's a business case we are solving for; it's an interview that you are doing. An interview has some must-haves, so of course we have to make sure that those are getting covered, yeah.
Speaker 1 (33:42):
Yeah, yeah, because I
was just wondering how that's
done.
Speaker 2 (33:44):
There's like a broad skeleton to the conversation, and within that skeleton you can move around a lot.
Speaker 1 (33:55):
The one thing that I've been thinking through is, you know, the idea is to help evaluate more fairly, more objectively. But part of being thorough is filling in for the gaps that the hiring team has, and part of being thorough is filling in for the gaps that the LLM has. And if the LLM has a gap and the hiring team has a gap,
(34:17):
it's like, how do we actually help our customers not miss stuff? And so that's where it gets into training the system; maybe that's more on the 20% side. It's not just about asking, okay, you've got to know how to collect the salary; there are certain things that we have to know to fully evaluate a candidate, whether the hiring team realizes it or not, or whether the LLM
(34:40):
is going to figure it out or not.
Speaker 2 (34:41):
That's some of what I've been... No, you can't have the LLM figure this out, okay. So this is more around the product design. We were building this product even before we were working with LLMs.
So Babblebots started with the idea of solving for asynchronous conversations, or asynchronous interviews.
(35:03):
So if you go back a little bit in history, just five or six years, conversational AI has been around since 2015, 2016; if you see, people had started to claim that chatbots had become more conversational. And, if you know, Google had this tool called Dialogflow. Now, Dialogflow was something that made it really easy for you
(35:25):
to create, like, a tree of a conversation. That is called a decision tree: at every step, you are giving some two or three decisions that this tree can take, and the conversation goes down those paths. So the paths are very well set, right?
Now, you can actually chat with ChatGPT or Claude now, or
(35:49):
like Sonnet or anyone, the whole day; you can keep asking questions and it'll keep answering. There is no agenda to that conversation, right, because there is no goal that you're trying to achieve together. You are curious about something and it is answering.
The way Babblebots is designed is something in between. You have a structure to the conversation because you're trying to achieve a business outcome, which is fully
(36:11):
understanding a candidate's fitment and interest and, sort of, suitability for a particular role, as much as is possible to evaluate, and then giving that in a very structured way to the company so they can decide whether this person is getting shortlisted or rejected.
We do not make the decision; the decision is always made by them. But this structure is really important, because otherwise how
(36:33):
will they compare the candidates? So you can't have a completely free-flowing conversation, because that's not useful. It's not that it's not possible; it's just not useful.
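(A minimal, hypothetical sketch of the "structure in between a decision tree and a free-flowing chat": an ordered skeleton of stages with must-have fields the interview is not allowed to end without, while the wording inside each stage stays dynamic. Stage names and fields are illustrative, not Babblebots' actual schema.)

INTERVIEW_SKELETON = [
    {"stage": "warm_up",    "must_collect": []},
    {"stage": "background", "must_collect": ["recent_project"]},
    {"stage": "role_fit",   "must_collect": ["skill_evidence"]},
    {"stage": "logistics",  "must_collect": ["expected_salary", "location_preference", "work_permit"]},
]

def missing_must_haves(collected: dict) -> list[str]:
    """Everything the interview still has to cover before it is allowed to wrap up."""
    return [field
            for stage in INTERVIEW_SKELETON
            for field in stage["must_collect"]
            if field not in collected]

collected = {"recent_project": "...", "skill_evidence": "...", "expected_salary": "..."}
print(missing_must_haves(collected))  # ['location_preference', 'work_permit'] -- keep probing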
Speaker 1 (36:44):
Yeah, that's super helpful. I know we're coming up on time here, so I guess we should probably stop; I feel like we could definitely keep going for a while. Roli, are there any other final thoughts you wanted to share, or anything at all, before we jump off today?
Speaker 2 (37:00):
I just think we're living in a very magical time. I feel recruiting is going to be fundamentally shifting, and I think we've been talking about these things, like time to hire and all these metrics, for a very long time, but I don't think substantial movement actually happened in those particular metrics in the last 10 to 15 years.
(37:22):
But I just don't see a world where recruitment stays the same.
It is just like the whole marketplace of the world, where you have people looking for opportunity and there are opportunities available anywhere in the world. That's fundamentally shifting, and I don't think that shift is even very far away. And all this is happening because now you are not bound by somebody's time, availability, language, or assessment
(37:46):
style.
It's very democratic, and I think it's going to be a more beautiful world, with basically the right kind of people getting the right kind of opportunities. That's what my hope is, and yes, we are very excited to help get us to that future.
Speaker 1 (38:01):
I love it. This has been great. Elijah, any other thoughts on your end before we jump off?
Speaker 3 (38:07):
No, it's a beautiful vision. I love that too. Thank you for sharing that, Roli.
Speaker 1 (38:13):
Well, Roli, this has been great. I learned a lot. We were able to get into some nuance that Elijah and I weren't able to get into in the past couple of podcasts with the BrightHire and Pillar founders and CEOs. Thank you for explaining some of this nuance and getting a little bit more technical with us, which is great. Probably, given your background, we were able to get into some more nuance, which is, I think, very helpful for our audience.
(38:34):
It's truly a great episode.
Thank you so much for joining us today.
Speaker 2 (38:38):
Thank you.
Speaker 1 (38:38):
Thank you, this was really good. Thank you, everyone, for tuning in, and we'll talk to you next time. Take care.
Speaker 2 (38:44):
Bye.