Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, welcome to the Breakthrough Hiring Show. I'm your host, James Mackey. We've got Sal Megas today. He is the co-founder and CEO of MetaView. Sal, it's really good to have you back on the show. Thanks for joining us.
Speaker 2 (00:11):
Thanks for having me, James. Pumped to dive in again.
Speaker 1 (00:14):
Yeah, absolutely. We've also got Elijah here, our co-host. What's up, Elijah?
Speaker 3 (00:19):
Hey, James, good to be back.
Speaker 1 (00:21):
All right, so MetaView. A lot of folks in the recruiting space have heard of MetaView. You guys have been around for what, six, seven years? Is that right at this point?
Speaker 2 (00:29):
That's right.
Yeah, that's right.
Speaker 1 (00:31):
Cool. So yeah, it's a name in the industry that a lot of people have heard of, and I know, particularly over the past couple of years, you're getting some really incredible momentum and traction, even more than what you already had, and you were already pretty well known. So I'm excited to dive into that. But for folks that maybe aren't as familiar with you and
(00:52):
MetaView, it'd be great if you could just give us a bit of an intro and background on yourself, your founding story, that kind of stuff.
Speaker 2 (00:59):
Yeah, totally. So I started MetaView really off the back of my experiences hiring at a really high-growth company. I was not in recruiting, but I was a hiring manager at Uber. I was a product manager there, contributing to hiring more product managers, engineering managers, product marketing managers, design managers, everything you'd expect.
(01:19):
My co-founder was an engineering lead at Palantir and was deeply involved in hiring there, and he and I would jam on things that maybe we could solve in the world. One of the things that we thought was a really big (a) problem and obviously (b) opportunity was this: we were part of these companies, Uber and Palantir, that take hiring incredibly seriously and are doing everything they can to get
(01:41):
world-class results in their hiring process. But still, there's a lot of guesswork involved. It's very lossy. You're losing a lot of the information that you're receiving during the hiring process. It's just generally a messy thing, and everyone knows this, right? That's not new, so I don't need to labor it too much. But the thing that we thought was changing in the world was this: given it's so clearly obvious that the most important data when you're
(02:02):
trying to hire someone is the information you get from them when you're having a conversation with them, are we now in a world where you can capture and do something with that data, instead of just asking your however many hundred people in your company who run interviews to write that information down and hopefully do something with it that way? That was what we thought had changed six or seven years ago, as more interviews started to move onto digital formats.
(02:23):
As you mentioned, we built a product that some customers loved, I would say. Whereas now, in the last two or two and a half years, since that GPT moment, suddenly everything on our roadmap that we thought was maybe ten years away just got dragged forward, and you can just do magical things with this unstructured data that
(02:45):
make recruiters' lives far easier, hiring managers way more confident in their decisions, and recruiting leaders far more informed about what's actually going on in the hiring process. There's just a ton of downstream impact when you are capturing that data. So that's a bit of a long background to say: what we really do is focus on capturing the data from within the interview process so that organizations can radically
(03:07):
increase their efficiency and their precision when they're hiring.
Speaker 1 (03:11):
Love it, and I'm really excited to learn about what direction you want to continue to build in as well. I was looking at the website and wanted to get a pulse on something. I don't know, maybe this is almost going too specific up front, but one of the things you mentioned is looking at different reports, right? I'm curious what type of reporting and analytics you're doing within the product,
(03:32):
because that sort of also informs the product roadmap, right? Like, I think, Elijah, when we were talking with, was it Ben over at BrightHire, they were doing some analytics around how well the hiring team was interviewing, like giving the interviewers a score, and then I think Pillar was going in a different direction with some of their analytics. I can't remember exactly, but it seems that people are thinking
(03:53):
about measuring interviewing success in different ways and leveraging this technology in different ways. So I'm curious, when it comes to actionable insights and reporting, what's the focus for your team right now?
Speaker 2 (04:05):
Yeah, things around interview quality are important. We have various features and functionality, and various ways you can query the AI to help inform you on that. I think there's way more that we can all do there. Where MetaView is really strong on the reporting side is, I would say, more on the tactical elements. It's understanding who are the candidates in my pipeline who have said that they've closed deals over a certain size
(04:27):
and are willing to relocate to this area. Like, I'm trying to fill this req, MetaView knows you're trying to fill this req because you were part of the intake meeting, either with the hiring manager or with the client, and these are the people that actually match up to that. So it's a little bit more tactical, connected to the thing that is stressing out a recruiter or a hiring manager on that day. We do think of it as reporting, because the output is
(04:48):
often a list or a chart of, well, these are the number of people that match these conditions that you've seen over time. But in some cases it's almost just a very, very sophisticated AI filter over all of the conversations you've had on the platform.
So really common ones we see are things like the one I mentioned there, which is AEs who match certain conditions on the types of deals that they've closed, and maybe even the previous
(05:10):
companies that they've worked for, so I can go and source from those companies if I want more people like that. People also use MetaView a lot for what we think of as dynamic salary reporting. What I mean by that is, of course, there are really robust data sets you can get for salary benchmarks, but they're not real time.
(05:32):
While you're getting that data, let's say once a month or once a quarter when it refreshes, you're actually getting hundreds of data points every day of what candidates are telling you their salary expectations are. Now, that doesn't mean that's definitely what they're getting paid, but you're at least learning what their expectations are. That's a really interesting data point if you can look at it over time and, again, see how it differs by geography or by role and all these different things. So that's a really common one that people do as well.
(05:54):
And then the last one I'd say is quite related to one of the other ones you mentioned. A lot of the time people want the very specific "hey, this is a good interview and this is exactly how this interview needs to change." I actually don't think we're there yet, because I think conversations with candidates are rightfully more nuanced than that, and we don't want these super robotic "hey, ask this,
(06:14):
then this" formulas, which I think is the limitation of having reporting that focuses on that too much. You get false positives, right? You call someone out for being a bad interviewer and actually they're not, they get flagged as a bad interviewer, and then you just lose credibility and you lose trust in that platform. So we really try and avoid that stuff. But what we can do really well is flag anomalies, something
(06:35):
that is clearly out of line with how interviews are being run in the rest of the company. People also use MetaView for that. You introduced this question or this segment with the word reporting, and we do call it reporting, this is our AI reports functionality, but actually sometimes, when I'm talking about this with customers, I talk about it more as an AI scout. You can think of it as you now have a colleague, a coworker
(07:00):
, an AI frankly, in all of these conversations, and it flags whatever it thinks you should really know about in any one of these conversations, or any trend that emerges across them, whether it's "hey, it looks like there's a really low talk time over here" or "a really high-interrogation interview over here that then resulted in the candidate rejecting the offer." It's about fusing these different data points, and again, it's not necessarily a report in terms of
(07:22):
something you can show to your senior leadership team every month of "hey, this is how we're getting better over time." Some of that stuff would be cool as well. It's more: listen, there was a problem yesterday, why don't we do something about it right now so that we can save that candidate?
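To make that "sophisticated AI filter" idea concrete, here is a minimal sketch in Python of filtering candidates by facts extracted from their conversations. The schema and field names (largest_deal_usd, willing_to_relocate, and so on) are hypothetical illustrations, not MetaView's actual data model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateExtraction:
    """Structured facts pulled from one interview conversation (hypothetical schema)."""
    name: str
    role: str
    largest_deal_usd: Optional[int]      # e.g. parsed from "my biggest deal was about 150k"
    willing_to_relocate: Optional[bool]  # stated willingness, if mentioned at all
    expected_salary_usd: Optional[int]   # stated expectation, not verified pay

def tactical_filter(candidates: List[CandidateExtraction],
                    min_deal_usd: int = 100_000,
                    must_relocate: bool = True) -> List[CandidateExtraction]:
    """Return candidates whose stated answers match the recruiter's conditions."""
    return [
        c for c in candidates
        if c.largest_deal_usd is not None
        and c.largest_deal_usd >= min_deal_usd
        and (not must_relocate or bool(c.willing_to_relocate))
    ]

# "AEs in my pipeline who closed deals over 100k and are willing to relocate"
pipeline = [
    CandidateExtraction("A. Rivera", "AE", 150_000, True, 120_000),
    CandidateExtraction("B. Chen", "AE", 80_000, True, 110_000),
]
print([c.name for c in tactical_filter(pipeline)])  # -> ['A. Rivera']
```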
Speaker 1 (07:37):
So that's really helpful context, and the reason I led with the reporting angle is really just to try to show a fresh perspective and start off the episode a little differently than maybe we have in the past, so I wanted to dive into that. But it's interesting, because you're saying what recruiting is usually asking the AI agent is essentially, hey, in the last three months, who are the people that had an average deal size over 100k, or whatever you just said.
(07:59):
I have been thinking a lot about candidate rediscovery lately as a use case for AI, and just different use cases for AI in general. And I'm wondering, is there a broader application there? If somebody was interviewed two or three years ago, would you be able to ask the MetaView agent, hey, can you pull a list of people for all
(08:21):
the revenue jobs, or for this role or whatever, for the past three years that fit the following criteria or that answered questions in this way? Can it be used further back for candidate rediscovery as new reqs are coming up?
Speaker 2 (08:34):
Yeah, the only reason I don't give an example around, like, three years ago is, one, because, as you mentioned, a lot of our growth has been in the last two years, so most of our customers don't have three years of data. Even just conceptually, it's not something they can go and do right now, because we're getting more new customers every month and that sort of thing. But I think there is some time limitation as well. One thing that's really amazing about starting to
(08:56):
capture and harness conversational data, which every recruiting leader should be doing, in my opinion (I think it's irresponsible not to at this point), is that you're making the half-life of your data much longer. Right now, the data you have in your applicant tracking system about a candidate, once that hire is made, you're probably almost never going to look at some of
(09:16):
that data again. So even by getting three months or six months or a year's usage out of it, you're really improving things. I would say there might be some diminishing returns with a silver medalist candidate from three years ago or something, just because so much will have changed in their life by the time you're looking at them again three years later. But of course, there could be cases where it's interesting. There's no limitation on the timelines other than
(09:36):
what the customer imposes. Some customers like to delete data regularly and they might get rid of it. But that's on them.
Speaker 1 (09:42):
Yeah, for sure. I guess that also depends on where they want the data stored, right?
Speaker 2 (09:47):
Yeah, yeah.
Speaker 1 (09:48):
Okay, cool. Elijah, I could keep going here, but any questions top of mind for you right now?
Speaker 3 (09:52):
I just wanted to mention, I think the compensation piece is huge, right? For every tech startup that I've worked with over the past seven or eight years, that's one of the key data points they want from almost every candidate interview: what are their expectations? We've got to be careful, because New York City and different
(10:13):
locations have rules around compensation and asking candidates what they're making now, so we usually ask it in a more nuanced way, like, what are your expectations for a role like this? But that data is really important, and especially, I don't know if you're seeing customers use it like this, but we've had a number of recruitment leaders or senior
(10:33):
recruiters wanting to change the comp band with the comp and benefits team, and they need that data to give pushback to the comp and benefits team, to say this is what candidates want at the experience level that we're looking for, this is what they're asking for, and here's our range, right, and we're not going to get the talent that the hiring manager needs and that
(10:56):
we need. Are you seeing it used in kind of a similar way with your customers?
Speaker 2 (11:03):
Oh, 100%, 100%. Yeah, all the time, and there's one bit of flavor that I'd add to that. The ability to extract that data even though you haven't structured it yourself, that's the big thing that's changed, right? Your company has been bombarded with all of these words by these candidates, and you can use AI to make sense of some of that without having to have your team members structure
(11:24):
it manually. That's obviously the magic there. But there is something that's actually quite powerful even just about the basic concept of the audio or the video of the candidate explaining their compensation expectations, to share with folks in, let's say, comp and benefits. Because suddenly you've gone from "hey, the recruiting team is saying that they're not hitting their headcount goals because of comp, and they're just complaining about it,"
(11:44):
which is very easy to distance yourself from, to actually hearing one or two audio clips or video clips of candidates saying it. It just brings a level of richness and a level of realness that creates a much better conversation internally to solve it. So, yeah, that's really common. There are all sorts of cases beyond just straight-up
(12:06):
recruiting, like other HR functions, where you see benefits and learnings from within this data as well, things like the operationalization of values. Whenever an HR team leads or works with the executive team on really re-instituting or even sometimes redesigning values, one of the first places that gets pushed to, of course rightly,
(12:27):
is the interview process, but it's not necessarily recruiting's job. There's still this handover moment where it's an HR slash leadership thing, and then it's "now we need to operationalize it in the interview process." But again, there's really high-quality feedback in this data about the salience of that cultural messaging, of those values, that is actually really important to the C-suite
(12:49):
and the HR team and things like that. So there's a lot of value beyond just the recruiting team in the recruiting data.
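A minimal sketch of the "dynamic salary reporting" idea from earlier, aggregating stated expectations by role and location so a recruiter has something concrete to bring to a comp and benefits conversation. The record shape below is hypothetical, not a real MetaView export.

```python
from collections import defaultdict
from statistics import median

# Each record is one salary expectation captured from a conversation
# (hypothetical fields: role, location, expected_salary_usd).
expectations = [
    {"role": "AE", "location": "NYC", "expected_salary_usd": 140_000},
    {"role": "AE", "location": "NYC", "expected_salary_usd": 155_000},
    {"role": "AE", "location": "Austin", "expected_salary_usd": 120_000},
]

def expectation_bands(records):
    """Median stated expectation per (role, location) pair, to compare against internal comp bands."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["role"], r["location"])].append(r["expected_salary_usd"])
    return {key: median(vals) for key, vals in buckets.items()}

print(expectation_bands(expectations))
# e.g. {('AE', 'NYC'): 147500.0, ('AE', 'Austin'): 120000}
```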
Speaker 3 (12:55):
Yeah, that's great. One more quick question. Okay, so this is a very direct question, but I think it's super relevant for your buyers, because I've been in that seat myself, and a lot of talent acquisition leaders are looking into interview intelligence tools, or whatever category, however you want to phrase that.
(13:17):
You've got your generic tools like Fathom or Zoom AI, the not-purpose-built tools that people, recruiters, may be using. So the first part is, how is MetaView, and even the category of interview intelligence, different and better than those more generic tools?
(13:38):
Because there is a premium, right, versus the generic tools. And then the other part of that question is, how is MetaView different than maybe other players or competitors in the space, and where do you see MetaView going? Do those two parts make sense?
Speaker 2 (13:54):
Yeah, for sure. I think there are three parts to the answer, and I probably don't need to separate out generics from other folks in the space. Before I get into the specifics, which, don't worry, I will, the core of the differentiation between something like MetaView and Zoom or Microsoft Copilot, these obviously amazing companies who are putting hella effort into
(14:17):
building amazing AI products, the key difference is obviously focus, right? We are actually truly thinking about what the day in the life is like in a recruiter's world or a hiring manager's world. What's the outcome they care about? Really importantly, what are the other pieces of context that we can infuse our AI with to make it outperform a generic AI? That can be everything from as simple as how we fine-tune or
(14:41):
how we build our system prompts within our product, all the way through to knowing that this candidate is going for the senior software engineer role, and this is the job description for that role, and this is the scorecard for that role. There's just a bunch that our AI can do to more successfully create really high-quality notes out of that conversation, notes that are really relevant to the recruiter or the hiring manager
(15:03):
or the approver of the candidate, than a generic tool can. And so if those big guys decide that they also want to do that work of sucking in other job context, then that's what it would take to match the accuracy, which is really what I'm getting at there, the accuracy of the notes that we can achieve within a recruiting context. So that's really the first one: accuracy.
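As a rough illustration of what "infusing the AI with role context" can mean in practice, here is a sketch of assembling a note-taking prompt from a job description and scorecard. It is a generic pattern assumed for illustration, not MetaView's actual system prompt or fine-tuning setup.

```python
def build_note_prompt(transcript: str, role: dict) -> str:
    """Assemble a note-taking prompt that folds role-specific context
    (job description, scorecard) in alongside the raw conversation.
    Illustrative only; real products tune this far more carefully."""
    scorecard = "\n".join(f"- {item}" for item in role["scorecard"])
    return (
        "You are a recruiting assistant. Summarise the interview below into "
        "hiring notes, organised against the scorecard.\n\n"
        f"Role: {role['title']}\n"
        f"Job description:\n{role['job_description']}\n\n"
        f"Scorecard:\n{scorecard}\n\n"
        f"Interview transcript:\n{transcript}"
    )

# Hypothetical role context pulled from an ATS or intake meeting.
role = {
    "title": "Senior Software Engineer",
    "job_description": "Own backend services; mentor engineers.",
    "scorecard": ["System design depth", "Mentorship evidence", "Ownership"],
}
prompt = build_note_prompt("Interviewer: ... Candidate: ...", role)
```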
(15:24):
Now, when I compare MetaView to other interview intelligence tools, I really think the difference comes down to the fact that we're really an AI company first. So much of the starting point is broadly similar, which is, let's start by capturing the conversation, because that's the
(15:44):
new thing, that's the new data source that's suddenly useful. Actually, I think a lot of the downstream, and I think people are seeing that already if they check out various products, ends up being quite different. For us, accuracy is actually the number one thing. You cannot build any exciting AI applications on top of this conversational data if your AI does not have an accurate
(16:07):
understanding of the context. These are very tactical things I'm about to explain, but they're actually really important. If your recruiting AI, which is obviously what we are for many of our customers and what we're increasingly trying to be, doesn't understand that when that candidate said "my comp is $200" they meant $200,000, that's a problem. It's really important for the AI to understand, within this
(16:29):
context, what's being talked about here. What's the currency? What's the actual number they're referring to? Same thing when they're talking about what programming languages they're familiar with. When they say "C sharp," they don't mean "I see sharp things." Obviously, they're talking about the programming language. I know it almost sounds funny to think about it this way, but that's literally the work that is
(16:49):
going on in our company multiple times every week, where we're realizing, okay, we need to bake in this additional context, add things to our knowledge base, our library, so that our AI is smarter about these things. Not just because we want the notes to be perfect, which is really important, but also because if I want to run a report on that data later, I can't do it if it hasn't actually understood the context correctly.
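A toy sketch of the kind of normalization being described, interpreting shorthand like "my comp is 200" as an annual figure. A real system would lean on much more context (currency, region, pay cadence, the surrounding sentence), so treat this as an illustration of the problem rather than how MetaView implements it.

```python
import re
from typing import Optional

def normalize_comp(utterance: str) -> Optional[int]:
    """Interpret shorthand like 'my comp is 200' or 'around 85k base'
    as an annual USD figure. Heuristic sketch only."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*(k)?", utterance.lower())
    if not match:
        return None
    value = float(match.group(1))
    # "200" in a comp conversation almost always means 200,000
    if match.group(2) == "k" or value < 1_000:
        value *= 1_000
    return int(value)

print(normalize_comp("my comp is 200"))   # 200000
print(normalize_comp("around 85k base"))  # 85000
```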
(17:09):
And increasingly, as you want the AI to get more and more agentic and proactive, again, if it doesn't have the right understanding of the context, it won't be able to do those things. So the way that differentiation manifests itself right now is that MetaView has by far the most accurate notes on the market. The great thing about that is it's also the thing that saves recruiters and hiring managers the most time, because if
(17:31):
something's not accurate and you have to fix it, then you're not saving yourself time at all. You're doing the opposite. Another differentiation from a generic tool is workflow integration: the fact that MetaView will speak to your applicant tracking system, your HRIS, your scheduling systems. All this makes it a whole lot easier to not only adopt but
(17:51):
also administer. Right now, if you wanted to make Gemini your note taker of choice within your recruiting workflow, you'd have to rely on your interviewers remembering every time to click the little button to make sure they captured the interview. It's just not really a reliable way to capture what is really important data for you. Whereas when you take a recruiting-
(18:12):
specific approach through a tool like MetaView, you can administer this centrally. You decide which conversations get captured and which don't. You might want every interview stage for the senior software engineer role captured, all the way up to the final call, which you specifically don't want captured unless it's got this person on it, in which case you do. You can actually orchestrate the AI and capture what you want, as opposed to this generic approach
(18:34):
of, well, everyone has their own Gemini and they can use it as and how they wish, which has value in other cases, but I think recruiting is enough of a snowflake to warrant its own thing. So that's the here and now: it's really accuracy and workflow.
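To illustrate what centrally administered capture might look like, here is a hypothetical capture policy and check. The policy structure, stage names and attendee-exception rule are assumptions made for the sake of the example, not MetaView's configuration format.

```python
from typing import List

# Hypothetical capture policy, administered centrally rather than relying on
# each interviewer to remember to turn a note taker on.
CAPTURE_POLICY = {
    "Senior Software Engineer": {
        "capture_stages": ["recruiter screen", "technical", "hiring manager"],
        "skip_stages": ["final call"],
        "capture_skipped_if_attendee": ["vp.engineering@example.com"],  # exception
    },
}

def should_capture(role: str, stage: str, attendees: List[str]) -> bool:
    """Decide whether this conversation gets captured under the central policy."""
    policy = CAPTURE_POLICY.get(role)
    if policy is None:
        return False
    if stage in policy["skip_stages"]:
        return any(a in attendees for a in policy["capture_skipped_if_attendee"])
    return stage in policy["capture_stages"]

print(should_capture("Senior Software Engineer", "final call",
                     ["vp.engineering@example.com", "candidate@example.com"]))  # True
```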
I'd say in the future, the way you'll see things developing, and I obviously can't speak to other folks' roadmaps, of course, but what I know we care most about is that, as much as workflow
(18:57):
integration is important, most of what we think about, because we're really focused on the people that actually do the hiring, the recruiters, the hiring managers, the interviewers, the VPs who approve roles and who obsess over who's actually in the company, is that a lot of the work they do when they're hiring doesn't actually live in any system right now. A lot of the time it's going on in their head, trying to digest,
(19:18):
well, how does that person compare to this person previously? Or it's a hallway conversation they have with a colleague, or a side Slack thread, or a debrief, which is a really common case as well. Basically, a lot of hiring doesn't really happen in any system at all, and so what we really care about is building AI tooling and enabling, essentially, a workspace for a lot of that work, as opposed to the things that are currently
(19:39):
done in other systems but could be improved by AI. We're a little bit less focused on that, partly for strategic reasons, because, you know, there are other good companies that can probably do some of that stuff, but mainly because we actually think the other part is the most important stuff. The most important part of hiring is the stuff that's happening between two people's ears and when they're trying to communicate with each other, and that's the
(19:59):
stuff that doesn't really have a home. So you'll see more and more from MetaView where it really helps people in a very hands-on way with their next steps after many of these conversations, whether that's the next step after an intake meeting and the thing that a recruiter will do after that, the next step after a debrief and whatever the hiring manager or the recruiter will do after that, or, of
(20:21):
course, the next steps after interviews and what that means for downstream interviews and how we should change those in order to get the information we need by the end of the process with the candidate.
Speaker 3 (20:30):
Yeah, I appreciate you sharing all that. It makes me think about weekly syncs. I know a lot of recruiters use weekly syncs. Is that something people sometimes use MetaView for, to almost have a history of the role, like how changes are being made, and be able to reference it? "Mr. and Mrs. Hiring Manager, you said three weeks ago we were
(20:51):
changing the focus of the role and you didn't want us to source more candidates like that, or we removed a requirement." But we've done that manually, right? I've done that in paper documentation before. Is that a use case that people sometimes use MetaView for?
Speaker 2 (21:06):
Yeah, yeah, and in fact, we've gone one step further now. Whether it's in your weekly sync or in any meeting, really, if it's really clear that the role context has changed, then MetaView's AI will now go and literally suggest changes to your job description. I think this is a great example of what I mean by
(21:27):
proactive, agentic AI: listen, we know you've just had a conversation about this role, and there's a bunch of things you're going to do as a result of that conversation. One of the things that sometimes gets left behind is, well, hang on, the JD, we actually wrote that two years ago. It's now nothing like the role that we're focusing on, and that's the type of thing that MetaView can stay on top of
(21:48):
very easily, very cheaply for you. So, yeah, that happens all the time. It's actually another great example, Elijah, of, again, when you're thinking about, do you go generic, or do you want a unique intelligence layer? That's maybe the way I would put it, rather than a unique note taker. I don't really care who the note taker is, it's more about the intelligence layer. That's a great example, right?
(22:08):
Because meetings don't run as "we're going to have this meeting, we're only going to talk about this topic, and we'll stop the meeting when that topic's finished," so that you can break it up that way. It's obviously much more fluid than that, and so having that consistency, almost that lineage, of we know how this role has gone over time, and actually we talked about these three candidates last week and this week we're talking
(22:29):
about these four, two of which were also present last week. So if I want to aggregate the data about those two candidates split across these three meetings, all of that stuff, MetaView's not there on every single aspect of that yet, but that's exactly the type of world that people should expect, right? They should expect the AI to know, actually, in this meeting with James, you were talking about this candidate. Because, yeah, the AI is never going to be as
(22:52):
smart as you. It's never going to be able to help you as much as you could help yourself if it doesn't actually have all the context that you have, which is often built through your conversations with hiring managers and whoever else.
Speaker 3 (23:03):
Yeah. And it could even suggest, right, if it has the context of the job posting or the job description, it could say, hey, based on this context, we recommend these two changes to these two bullet points, do you want to accept that? And then potentially even push that to the ATS at some point. That's what it does? Oh, okay, great, yeah.
(23:25):
Hey, I'm done. James, go ahead.
Speaker 1 (23:27):
Oh yeah, it is kind of incredible how much training the AI really does take to get all the nuance. It's this endless build and refinement, and the same, of course, on the product side, building out the features. Customers continue to ask for more and more. There's definitely just 10x more value going with an industry-specific tool, and you really start to see that when
(23:47):
you're talking with your customers and they're asking you to build different features or better features, or you're going through the motions of training the system. Right now with my company, June, at a very early stage, we're essentially doing candidate pre-screening with our AI agent, June, and, I think I mentioned before we recorded, it's inbound, outbound and candidate rediscovery. We're doing it in those three different use cases.
(24:09):
As we're going through and training June, every week it's, oh no, we have to train this, we have to add this, we have to find this, or, okay, wait, June's responding this way, why is June doing that? Okay, now we have to go add more. And then, of course, the product and feature workflow, there's just so much to do, right, so many different things to make it work really well within a specific
(24:31):
use case, versus just a generic one.
Speaker 3 (24:33):
I'm just curious, why isn't there a recruiter-specific LLM that all these recruitment products could leverage, as a layer between GPT or Gemini or whatever, where it's maybe just a slight extra cost on the
(24:55):
API call, and you're putting it through this recruiter layer? Maybe it's open source and a lot of people are building it, because there are so many use cases for recruitment to build really good recruitment products. It'd be nice. A lot of recruitment products use People Data Labs for data, but if there was this
(25:15):
recruiter-specific LLM, and then maybe there's one for sales, that'd be cool. I don't know, I don't know who would...
Speaker 2 (25:22):
I think we're a very capable team and we've got our hands full building out the slice, the hopefully ever-growing slice, of the stack that we really care about. So I think, frankly, the resources, and by that I mean team but also compute resources, to build that layer would actually be pretty high. I'd imagine the more efficient approach is probably the one we have, which is teams like
(25:44):
ours leveraging this superintelligence and then internalizing, obviously, the domain expertise within our company.
Speaker 1 (25:51):
Yeah, yeah. Elijah, from a baseline proficiency perspective, the LLMs are smart enough to understand what recruiting is, right? Sure, sure. So it already has a base, but then, yeah, it's going to vary very much.
Speaker 3 (26:02):
So, specifically based on the use case. But if it's 30% of the way there, and then you're saying MetaView has to do another 70% to get it a hundred percent of the way there, I don't know. It would just be nice.
Speaker 2 (26:17):
I wouldn't say it's... I think it's probably more Pareto-like.
Speaker 1 (26:20):
It's like the last 20% is the hardest, but for any one domain it matters a lot.
Speaker 2 (26:25):
And then I would also say, in some cases it's nothing to do specifically with the intelligence of the reply from the LLM and how recruiting-appropriate it is. Again, it's to do with the context that you're able to infuse it with. And so what a company like mine thinks about a lot
(26:50):
is how we can get the right of admission to context. Now, our bet is that the most important context to be able to give an LLM is all the information the candidate just told you in a conversation. That's in many cases going to be the most important thing. And if you combine that with even an off-the-shelf LLM, you're going to get better outputs about who that person is and how I can get information about them than you would without that context. So really, the word we use most internally is context, a lot of the time, or what other context we can add.
(27:11):
Sometimes that is literally just the context of getting an artifact, I've mentioned it already, like a job description. But sometimes, actually, the more important context, because I used that a little bit as a throwaway, the more important context is if we can get you collaborating with the AI on top of the creation of those notes or the creation of that JD. That's also context.
(27:31):
The AI has the context of, okay, they didn't like it when I did this, they did like it when I did this. They wanted to remove this suggestion, but they accepted this suggestion. That's all this layer upon layer of proprietary, user-level and company-level context that you're building up, and that's how you're going to get these truly magical experiences. And so I think maybe the answer to your question, which
(27:53):
initially was, why doesn't it make sense to build this recruiting layer on top of the LLMs, is probably just that I think the really exciting things to build are more related to what context you can gather, what guardrails you can put around it, and how you can encase the context such that you get just magic out of whatever models the geniuses at Anthropic and OpenAI produce for us next.
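A small sketch of how accepted and rejected suggestions could be accumulated as user-level context and folded back into later prompts. This is an assumed pattern for illustration, not a description of MetaView's internals.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserContext:
    """Accumulates per-user preferences from how they edit AI output,
    so later generations can be steered by that history (illustrative sketch)."""
    accepted: List[str] = field(default_factory=list)
    rejected: List[str] = field(default_factory=list)

    def record(self, suggestion: str, accepted: bool) -> None:
        (self.accepted if accepted else self.rejected).append(suggestion)

    def as_prompt_snippet(self) -> str:
        """Render the history so it can be prepended to the next prompt."""
        return (
            "Previously accepted suggestions:\n- " + "\n- ".join(self.accepted) +
            "\nPreviously rejected suggestions:\n- " + "\n- ".join(self.rejected)
        )

ctx = UserContext()
ctx.record("Group notes by scorecard attribute", accepted=True)
ctx.record("Add a numeric rating per answer", accepted=False)
print(ctx.as_prompt_snippet())
```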
Speaker 3 (28:16):
Yeah, that's great.
Speaker 1 (28:19):
I'd just say, let's talk about the future as the tail end of our episode here. It'd be cool to hear your thoughts in terms of the current roadmap, as much as you feel comfortable sharing. And then I would love to hear about some of the more recent conversations you've had with customers. What features and products are they most interested in?
(28:39):
Where are they saying, hey, can MetaView do X? What do those types of conversations look like right now?
Speaker 2 (28:46):
Yeah, sure. So we base our roadmap around three principles. These are very much principles that we have arrived at by working alongside customers, but also things that we just think are the right way to build the future of the workspace for hiring, the AI workspace for hiring. Pillar number one is precision:
(29:06):
what can we do to help people make far more high-probability decisions when they're hiring? The second is efficiency: what can we do to enable them to spend far less time on the undifferentiated work within the hiring workflow? And the final one is adoption, which is less flashy but so important, because hiring is such a team sport.
(29:28):
If you only get the recruiting team on something like MetaView, it's good, don't get me wrong, it's a really good place to start, but you're not getting the full value. You need to actually work out how to get these hundreds of people in the company, many of whom I don't actually know by name and have no relationship with whatsoever, to use this thing as well, so that, A, we can make their lives easier and they can do a better
(29:48):
job at hiring, because they want great colleagues, and B, we can start to harness this fresh corpus of data, which is all of this unstructured data we get from candidates. So those are the three: precision, efficiency and adoption. On the precision side of things, one of the things we talked about previously in this conversation is
(30:10):
starting to identify, either based on historics or based on your design, how well interviews are matching up against our expectations of what a good interview looks like. That's something we hear from customers sometimes, and it's something we're working on at the moment. Where we get really excited is where you can start to use that data to evidence-match. How can you find, from this conversation, given the JD,
(30:31):
given the scorecard, given the rubric, the elements that seem to align really well? These are the things we should be including in the scorecard because they seem to match up really well with the thing we're saying we're looking for. So those are the types of things we mean when we're talking about precision: things that can really give you pause or can really accelerate your thinking when you're trying to figure out, you know what, I really like that candidate, but I can't quite put my finger on why, which is so common, or the vice versa
(30:53):
sometimes.
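As a toy illustration of "evidence matching" a conversation against a scorecard, the sketch below uses plain keyword overlap so it stays self-contained; a production system would presumably use an LLM or embeddings rather than anything this naive.

```python
from typing import Dict, List

def match_evidence(transcript_sentences: List[str],
                   scorecard: List[str]) -> Dict[str, List[str]]:
    """Collect candidate quotes that appear to support each scorecard item,
    using simple word overlap as a stand-in for real semantic matching."""
    matches: Dict[str, List[str]] = {item: [] for item in scorecard}
    for sentence in transcript_sentences:
        words = set(sentence.lower().split())
        for item in scorecard:
            if words & set(item.lower().split()):
                matches[item].append(sentence)
    return matches

sentences = [
    "I led the system design for our billing platform.",
    "I mentor two junior engineers every week.",
]
scorecard = ["system design depth", "mentorship of engineers"]
print(match_evidence(sentences, scorecard))
```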
I think there's a really interesting climate at the moment around hiring as well, where we've gone through this phase of being so anti-gut: you should not be using your gut at all, your gut is terrible, never listen to your gut. And obviously a lot of the, frankly, most successful people in the world would often say, no, you should probably listen to your gut a little bit. So I think there's going to be an interesting clash
(31:14):
between those two things. My personal take is you only call it your gut because you haven't thought of the words to be able to articulate it yet. So there's probably something there, and if you had the time, you could spend it really unpacking it and realizing, yes, that's the thing that I didn't think was on point with that candidate. And yes, I know I didn't include that in my
(31:35):
specification initially, but I'm only human, I didn't realize that was going to be something, so now we should include it. I think those things are totally fine, and we should accept that that's part of how you operate anyway. The point is, that's some of what we think about within precision. Within efficiency, as I mentioned, there are a lot of things that happen off the back of these conversations. After an intake meeting, you might go and redraft a job
(31:58):
description, or you might go and create an interview plan, or create a list of sample candidates in order to calibrate with the hiring manager. There are all these bodies of work that often result in one-, two-, three-week delays before you even start to see a candidate. You go from an intake, then you take a week to get back to them with a list, or maybe you take a couple of days to get back with a few example
(32:20):
candidates. They get busy, they don't reply to you right away with what they think of those people, and suddenly a week has passed. Then you get in the second batch. Yeah, cool, we're calibrated, now let's start sourcing. Oh dang, we haven't actually got a JD, let me spend a bit of time on that. So getting to a first
(32:42):
candidate can often take four, five, six weeks, which is, I think, one of the reasons people almost don't rely on hiring as a key way to achieve their goals in a given quarter or half year. You're like, my team for this half year has been decided already, because it's going to take me three or four months to hire anyone. I think we can change that really considerably. So that's the stuff on the efficiency side: a lot of the work product that recruiters and hiring managers have to put out there.
(33:02):
They're things that can be much more deeply assisted by AI.
And then on the adoption side, it's probably not as fun to talk about, but there's just a lot we can do around getting people more familiar with some of our, I don't want to say more expert features, because I think that's almost a bad excuse for us to have, but essentially there are some really powerful ways that you
(33:23):
can collaborate with the AI within MetaView. For example, you can instruct the AI exactly how you want your notes laid out, and you can attach different sets of instructions to different roles in your database. You might be working 10 different roles, and obviously you want a different structure of notes for each one. You can tell the AI which set of note-taking
(33:44):
instructions applies to which roles, which technical skills you're most interested in for which different roles, and therefore which to pull out and flag and create as an attribute within the candidate profile. There's a ton of these really powerful things you can do which, I'll just say, not enough of our customers are doing. So we think we can do a lot more in the product just to get people adopting those things. So, yeah, that's what's top of mind, which I think is very
(34:05):
connected to what we're hearing from customers as well. So, yeah, I think that covers it. That's awesome.
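A minimal sketch of attaching different note-taking instruction sets to different roles, as described above. The instruction text and role names are made up for the example, not MetaView's actual templates.

```python
# Hypothetical mapping of roles to note-taking instruction sets.
NOTE_INSTRUCTIONS = {
    "default": "Summarise the conversation chronologically in short bullets.",
    "Senior Software Engineer": (
        "Structure notes as: 1) system design, 2) coding depth, 3) mentorship. "
        "Pull out programming languages mentioned as a 'skills' attribute."
    ),
    "Account Executive": (
        "Structure notes as: 1) deal sizes and sales cycle, 2) quota attainment, "
        "3) territory and relocation preferences. Extract stated comp expectations."
    ),
}

def instructions_for(role_title: str) -> str:
    """Pick the note-taking instructions attached to this role, or fall back to the default."""
    return NOTE_INSTRUCTIONS.get(role_title, NOTE_INSTRUCTIONS["default"])

print(instructions_for("Account Executive"))
```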
Speaker 1 (34:11):
I think we have time for one more question, or I have one more question. Elijah, I don't know if you have one more, but I'll jump in quick. Are you seeing any specific industries or customer segments outperform others in terms of where you're seeing the most traction? Where's the demand for this product really coming from?
Speaker 2 (34:29):
We tend to focus on startup, mid-market and maybe small enterprise tech, so that's quite a broad range, and you obviously get slightly different behavior across it. The startups and the mid-market were quicker to realize and understand the huge productivity gains associated
(34:49):
with not having their people worry about being the ones responsible for capturing this data anymore, and so they reacted really well just to the efficiency you could buy very cheaply by not having your people do this. I think now you're seeing upper mid-market and small enterprise think a lot more about some of the strategic stuff like the reporting, and almost, "it's my responsibility as a talent
(35:09):
leader to be capturing this data, because there are already things I can do with it, and who knows what I'll be able to do with it in future as well." So now you're starting to see that manifest, and it's a slightly different catalyst for adopting, and it was slightly more delayed, I would say, as well. So, yeah, that's been interesting. I would say, outside of tech, executive search and,
(35:30):
not boutique staffing firms, but highly focused staffing firms, have also been really interesting; their adoption has been really impressive. Frankly, I would say a lot of our most expert users are exec search folks, because they have these really high-quality products they like to give to their clients, which is, well,
(35:50):
this is my write-up on this candidate, or they are turning up for a meeting with a client, and their clients are paying them a lot of money, so they really want to be able to represent those candidates effectively and have information at their fingertips when they're asked for it by their often C-suite client, whatever it might be. So they actually value it. I think they're really on the bleeding edge of needing the
(36:13):
functionality as well. So I think that's been a surprise. It wasn't a focus for us. It now really is, because we have a pretty big business there and a lot of great customers and whatnot, and also they obviously relate very closely to tech. A lot of them do executive recruiting for tech companies, so it's all sort of one ecosystem really. But yeah, that's probably been the, I wouldn't say surprise,
(36:34):
it's pretty intuitive when you think about it, but I've been really impressed by the organic pickup.
Speaker 1 (36:41):
That's awesome. Thank you for sharing. Elijah, do you have any other questions?
Speaker 3 (36:45):
I just wanted to compliment the PLG motion. It seems like you guys have been running with the freemium model, right? So you do have a $0-a-month plan so people can try it out.
Speaker 2 (37:00):
Is it up to maybe 24 conversations? It is, I think, 20 per month. And yeah, we used to operate as a free trial, so you could just try it out for a short period and then you had to decide whether you wanted to keep using it or not. We have recently switched to a free plan, so you can now keep using the product ad infinitum for free if you want to. There are obviously really good reasons to upgrade to a team
(37:20):
plan or a professional plan, but the free product is really, it's definitely the best thing, like, no excuses. Basically, if you're a recruiter and you're using a generic note taker, let's say, and the reason for that is because it was cheaper than MetaView, which the professional version of MetaView, rightly, given it's a professional tool, is more expensive than some of those generics, then there's no longer that excuse, because actually the free plan on
(37:41):
MetaView is really, yeah, really powerful.
Speaker 3 (37:44):
Yeah, so you're giving away a lot of value there. And then also, forgive me if I'm wrong, but don't you do something where, if people within a company are using it, you highlight to someone who signs up that they actually have a bunch of colleagues also using MetaView, or something like that?
Speaker 2 (38:05):
Yeah, we do that partly because, obviously, hiring is a team sport, so sometimes it's good to know if your colleagues are on it. But actually the main reason is that one of the other conditions of the free plan is you have a personal usage limit, but there's also a company usage limit. It's good for you to know how many other people in the company are on the product, because if there are 20 of you using it, you're probably going to hit your company limit quite soon, and therefore you might want to have a conversation internally, or reach out to one of our team and say, hey, let's
(38:27):
move to the actual team plan, because we're all going to get blocked soon.
Speaker 3 (38:31):
I think it's social validation as well, though, in a positive way. When I was testing it out and using the product, I looked on there and I saw a few other people. I was like, oh, this is great, there's other people using it. It was a good decision for me to sign up for the free trial or whatever. I think there's a lot to that social side as well. Just seeing your colleagues on it makes you feel like you made
(38:54):
a good decision, and you know who to ask if maybe you need some help, so I think that's a great move.
Speaker 2 (39:00):
Nice, thanks.
Speaker 1 (39:03):
Awesome. This has been a great episode, definitely, as always. I feel like I've learned a lot, and I wanted to just say, Sal, thank you so much for joining us today and sharing your insights and everything you've learned over the years building MetaView. I'm really excited for you and your team. It's really great to see how well things are going. Just looking at your website, your team's progressed a whole lot since last time we connected.
(39:23):
It's amazing to see those logos and to see the functionality and what you've been able to build out. It's really cool, really impressive. So, thank you, I appreciate you coming on today.
Speaker 2 (39:33):
Oh man, thanks so much for the kind words, and yeah, anytime.
Speaker 3 (39:41):
Always, always, always pumped to chat with you. Hey, you too, Elijah, it's good seeing you too.
Speaker 1 (39:43):
I'm glad you could make it. Always a good time. Cool. Hey, for everybody tuning in, thank you so much for joining us, and we'll talk to you real soon. Take care. Bye.