
October 27, 2023 21 mins

Imagine having the power to expose gaps in the workplace misconduct screening process and make your company a safer and more inclusive place. That's exactly what you'll be able to do after this engaging chat with Micole Garatti and Brendten Eickstaedt of Fama. We dig deep into how Fama is revolutionizing the background screening process, using AI to track misconduct indicators that often slip through the cracks of the criminal justice system. Understand the importance of pinpointing a vendor's definition of AI, the potential risks of mislabeling, and the intricacy of using markers in this process.

Delve into a robust conversation that dissects the complexities of AI regulation, keeping the recent New York case at the forefront.  Strap in for a stimulating discussion that promises to equip you with the knowledge to navigate the ever-evolving landscape of AI and workplace safety.

Special mini series recorded with Oleeo at HR Tech 2023 with hosts Ryan Leary, Brian Fink, and Shally Steckerl


Listen & Subscribe on your favorite platform
Apple | Spotify | Google | Amazon

Visit RecruitingDaily
Twitter @RecruitingDaily
Join the Secret Sourcing Group
Learn more about #HRTX Events


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
We are back at the HR Tech Expo on the conference
room floor, live at the Oleeo booth.
As Ryan would say, we are powered by Oleeo.
Today, it's the second day.
It's nearing... well, it's past lunch, so things are starting to
slow down a little bit.
However, it might be getting a little more fun now, since some

(00:31):
of the more, let's just say, serious people have already made
their way to the airport.
So here we are.
As our guests now we have Fama, which is Latin for
fame, also Spanish for fame.
I have Micole Garatti and how about you try saying it? Brendten

(00:54):
Eickstaedt.
Okay, I'm the chief technology officer of Fama and Micole is the
director of product marketing.
Not the director of product, but the director of product
marketing.
And what is Fama?

Speaker 3 (01:07):
Yeah, so we do online screening for misconduct, and
we do that because background screening technology is the
oldest technology in the market.

Speaker 1 (01:15):
Background screening yeah.

Speaker 3 (01:17):
It's the oldest technology in the market and
it's gotten really, really good at doing checks quickly and
efficiently, but it's also missing some key signals of
misconduct, because most misconduct is never recorded,
right, in the criminal justice system.
But just because something isn't recorded in the criminal

(01:38):
justice system doesn't mean that it's appropriate for work.
And so 95% of global background screening vendors actually
partner with us to help close the gaps and make workplaces
safer and more inclusive and improve quality of hire and
retention.

Speaker 1 (01:53):
Okay, so it is a technology that covers the gap
between what you are supposed to legally check for and what
might make a difference that you might be missing.

Speaker 3 (02:10):
Right. For example,
most harassment is never recorded as a crime.

Speaker 1 (02:16):
Right, yeah, a lot of it.

Speaker 3 (02:17):
Unfortunately, most intolerance is never recorded as
a crime, and even most violence is never recorded as a crime.
But that doesn't mean we want employees coming to work and
being violent toward others.

Speaker 1 (02:28):
Being borderline violent, but not violent enough
to have gotten caught. Right, right. Or threats.

Speaker 3 (02:35):
Like most people making threats are not going to
jail for it and being prosecuted. Right. And so...

Speaker 1 (02:42):
Bigotry and otherwise nefarious behavior.

Speaker 3 (02:45):
Right.

Speaker 1 (02:46):
Racism and things that.
Yeah, wow, that's kind of heavy.
I'm depressed now.

Speaker 2 (02:53):
Sorry about that.

Speaker 1 (02:55):
Please don't run my name.

Speaker 3 (02:56):
The good thing is that we help prevent companies
from hiring workers that are doing that. Right, right. And so
it's more of, like, the absence of those things.

Speaker 1 (03:06):
But the companies aren't hiring you.
They're hiring some kind of background check company, and you
are their power, for the most part, on the online screening
front.
Right. Oh, okay, so what other front is there?

Speaker 3 (03:16):
Well, I mean, there's identity checks and
verification.

Speaker 1 (03:19):
Oh, okay.

Speaker 3 (03:19):
You know what I mean.

Speaker 1 (03:21):
There's other components that we don't do, but
we complete the... Yeah, gotcha.
So there's also that somebody has to run to the local office
or whatever.

Speaker 3 (03:30):
Yeah, that's not happening anymore, hopefully.

Speaker 1 (03:32):
Yeah, I've heard it has.

Speaker 3 (03:34):
Still.

Speaker 1 (03:35):
Yeah, because there are some jurisdictions that
don't have their records online.
So yeah, which really delays the process because, as you said,
a lot of the vendors nowadays offer 24-hour turnaround.

Speaker 3 (03:46):
Right.

Speaker 1 (03:47):
Except for when you have to literally send a runner
to the local office, and they have to hire a PI that has the license
to go in and say, hey, no, we really got this person's
permission to run their background check.
We're not just here Snoopy Loopy.

Speaker 3 (04:02):
Yeah, the most running we do is from a
treadmill desk while we, like... Basically, yeah.

Speaker 1 (04:08):
I want a treadmill desk, all right.
So HR technology. Have you had a chance to kind of look around
and see what's going on around here?

Speaker 2 (04:19):
Oh, yeah, a little bit.

Speaker 1 (04:21):
Yeah, when you walk around, what do you see that is, in
your opinion, just really truly innovation in technology?
Have you seen anything that's like, wow, that's new and I like
it, and it's...

Speaker 3 (04:36):
That's really interesting.

Speaker 2 (04:40):
I think from my perspective I've seen a few
things that I think are innovative, but I'm distinctly
interested in helping companies find and hire good people.

Speaker 1 (04:55):
And so that's partly what we do. Find and hire, okay.

Speaker 2 (04:59):
So looking at other companies that are doing similar
things, not exactly the way we're doing them, but different
components of really finding quality people, quality of hire.
So I'm seeing some technologies out there that are doing that,
and I think that's really interesting.

Speaker 1 (05:15):
And that's good.
Okay, what have you found is the kind of, I don't know, the
common denominator across what you're seeing here today?

Speaker 2 (05:29):
I mean, everyone's talking about.

Speaker 3 (05:30):
AI right.

Speaker 1 (05:31):
So, what does that really mean?

Speaker 2 (05:35):
I mean it could be as simple as just a bunch of if-then
statements.

Speaker 1 (05:41):
Right, that's a decision tree.
That's how I feel about it.

Speaker 2 (05:45):
As to be as complex as LLMs and things like that,
where they really truly are sortof quasi thinking on their own.
We're seeing, I see, all kindsof things going on on the floor
here that run the gamut of that.

Speaker 1 (06:03):
I wish, yeah, if I could tell every attendee, not
the vendors, but every attendee, to do one thing,
it would be to actually ask the question: what exactly is AI to
this company?
Because it's just kind of a blanket term, and it's really not
a common denominator.

(06:23):
It's just a common word, but it could be completely different
things that they're just calling it, and it used to be other
things.
Semantic search was huge a few years back and essentially, in
some ways, this is also that.
You need an ontology for some of the models to work, or, if

(06:44):
you don't have one, you trained it, and when you trained it, you
created an ad hoc ontology.
The CTO is shaking his head yes, so I must have nailed that one.
In the industry that you're in...
So part of what Fama does is look for indicators, in a way, so

(07:05):
there must be some markers that become these indicators,
right? Is there an AI application there?
Or is that too dangerous, because the AI might, like,
mislabel or misinterpret the indicators?

Speaker 2 (07:23):
So I think there is an AI component there and we do
use it.

Speaker 1 (07:26):
The categorization of it.

Speaker 2 (07:28):
Yeah, absolutely, to categorize and classify the
content, and a lot of it is to weed out the vast majority of
things that aren't problematic.

Speaker 1 (07:40):
Which, let's face it, is hopefully the most.
The majority should just be regular, non-risk.

Speaker 2 (07:45):
Absolutely.

Speaker 1 (07:46):
I mean, we've filtered out 95-plus percent of
everything, because you're just looking for the risks, not the
non-risks. Exactly. And then once you get into the classification
of things that are problematic, that's when you start talking
about...

Speaker 2 (08:04):
what are you concerned about in terms of bias
in the AI or things like that, and to me, a lot of it comes
down to the fact that there's still a person involved.
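
The assistive pattern described in this exchange can be sketched in a few lines. This is a hypothetical illustration, not Fama's actual system: the toy keyword rules stand in for a real trained classifier, and all names here are made up. The point it shows is structural, the software weeds out the non-risk majority and queues the rest for human review; it never makes the hiring decision itself.

```python
# Hypothetical sketch of the assistive screening pattern discussed
# above: a classifier filters out the non-problematic majority and
# surfaces the rest for a human reviewer -- it does not decide.
from dataclasses import dataclass

# Toy keyword rules standing in for a real trained classifier.
RISK_CATEGORIES = {
    "threats": ("threat", "hurt you"),
    "intolerance": ("bigot", "slur"),
}

@dataclass
class Post:
    author: str
    text: str

def screen(posts):
    """Return only the posts a human reviewer should look at."""
    flagged = []
    for post in posts:
        lowered = post.text.lower()
        categories = [
            name
            for name, keywords in RISK_CATEGORIES.items()
            if any(k in lowered for k in keywords)
        ]
        if categories:  # surface it; a person decides what it means
            flagged.append((post, categories))
    return flagged

posts = [
    Post("a", "Great team lunch today!"),
    Post("b", "I will hurt you if you cut me off again."),
]
for post, cats in screen(posts):
    print(post.author, cats)  # b ['threats']
```

The design choice mirrors the conversation: the vast majority of content falls through untouched, and everything flagged still passes through a person.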

Speaker 1 (08:14):
There's a fact checker.

Speaker 2 (08:15):
There's a fact checker.
Yeah, I was going to ask youabout that.

Speaker 1 (08:17):
Yeah, yeah. So you're using, let's call it AI, but
really, you're using the technology, the large data sets
and the ability for machines to do something that they can do
very well, that humans can't do, which is to process a lot of
information.
You're using that as an assistive technology.
It's helping bring to the surface the things that you

(08:39):
should look at.
And then someone looks at that and makes a decision.
Because that's not quite where AI is yet; it's not
decision-capable yet. Our AI does not make decisions, doesn't decide.

Speaker 2 (08:49):
Right, it really just says: this is something you
might want to look at.

Speaker 1 (08:53):
And that is a huge factor with the New York case.

Speaker 3 (08:58):
Right, yes.

Speaker 1 (08:59):
And what has transpired there, and is probably
going to continue to have repercussions throughout the whole ecosystem,
is because, just generally speaking, it's hard to get
legislation through, and governing bodies tend to be very...
I don't want to say lazy, but they tend to...
If something has already been done, they tend to sort of carry

(09:22):
that on, right?
So they're going to take that case and move it forward and
apply it in other ways. Exactly.
And the real essence, in my experience, of that case, correct
me if I'm wrong, is essentially letting the automation make the
decision.
Absolutely.
That's exactly what it is.
Which is what you're not supposed to do. Correct.
So that regulation I'm glad for. Yes, yeah.

(09:43):
But then if they over-regulate, they're going to make it so
that you cannot apply the technology.
So therefore it will be regulated or illegal or
prohibited to actually use the assistive technology for the
things that it's good for. Exactly.
So that's the other side: if we over-legislate the
protections, totally yeah, and don't take advantage of the

(10:06):
machine's power.

Speaker 2 (10:08):
I love the idea of it being an assistive technology.

Speaker 1 (10:10):
Yeah.

Speaker 3 (10:10):
That's really what it is.

Speaker 1 (10:12):
That's what it is.

Speaker 2 (10:12):
That's how we should be using it, because there's
just nuances of human judgment that you can't encode yet.
Maybe someday we will be able to.
Yeah, I don't know.

Speaker 1 (10:21):
I mean, there's...
I have a background in non-verbal communication and
intercultural communication, and I've just always been a huge
proponent or advocate of the fact that you have differences,
and not just nuances but outright differences, in
semiotics and the meaning of things, and that you just can't...

(10:43):
A machine is not going to be able to determine something
that's new, because it hasn't had experience with it.
Correct, when humans can make that logical leap. In the future,
the AI that really is able to create cognition may be able to,
but right now we don't have anything even remotely close to

(11:04):
that. Exactly right.
What about the case of mistaken identity?
Michael Smith, right?
How is AI handling that?

Speaker 2 (11:19):
Or not?
Is it poorly handled?
Again, it's assistive.
So we do have AI that helps us identify the people, using,
basically, algorithms that triangulate based on data about
a person: their name, their address, their general location,
those types of things.
So it helps us weed in and weed out again, but it ultimately

(11:41):
comes down to a person again making the decision about: is
this actually the person that we're looking at, or is this a
different person?
Because we're Fair Credit Reporting Act compliant, we have
to have at least three markers for each profile that we
identify.
So you know, is one of them visual?

(12:05):
Yes, it can be.

Speaker 1 (12:06):
It absolutely can be, so we can look at a picture.
We can look at a picture. Address?

Speaker 2 (12:11):
Address, phone number, email, picture.

Speaker 1 (12:15):
Yes, correct, that correlate.

Speaker 2 (12:18):
In order to say this is that person,
the data we already know is true about them. Got it, yeah.
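
The marker-triangulation idea described here, at least three independent markers matching data already known to be true before a profile is attributed to the subject, can be sketched as follows. This is a hypothetical illustration under that stated rule, not Fama's actual matching algorithm; the field names and threshold are taken from the conversation, everything else is assumed.

```python
# Hypothetical sketch of marker triangulation as described above:
# attribute a found profile to the subject only when >= 3 markers
# (name, address, phone, email, picture) match verified data --
# and even then, a person makes the final call.
REQUIRED_MARKERS = 3

def matching_markers(known: dict, profile: dict) -> list:
    """Markers where the candidate profile agrees with verified data."""
    return [
        field
        for field in ("name", "address", "phone", "email", "picture")
        if field in profile and profile[field] == known.get(field)
    ]

def is_likely_same_person(known: dict, profile: dict) -> bool:
    return len(matching_markers(known, profile)) >= REQUIRED_MARKERS

known = {"name": "Michael Smith", "address": "12 Oak St",
         "phone": "555-0100", "email": "msmith@example.com"}
candidate = {"name": "Michael Smith", "address": "12 Oak St",
             "phone": "555-0100"}
other = {"name": "Michael Smith"}  # common name, nothing else matches

print(is_likely_same_person(known, candidate))  # True  (3 markers)
print(is_likely_same_person(known, other))      # False (1 marker)
```

The threshold is what guards against the "Michael Smith" mistaken-identity case: a name alone never clears the bar.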

Speaker 1 (12:23):
Do you need their permission?

Speaker 2 (12:26):
We do yes.

Speaker 1 (12:27):
For this consent-based.

Speaker 2 (12:28):
Our case is consent-based.
Again, back to the Fair Credit Reporting Act stuff, we do have
to have consent. Absolutely.

Speaker 1 (12:35):
What about Americans with Disabilities Act?

Speaker 2 (12:40):
That's a good question.
In what way?

Speaker 1 (12:45):
Well, utilizing sort of the aggregation of
information may potentially, and you may not pass this to the
employer, which would protect me from that, but may potentially
reveal that perhaps I am diabetic.

Speaker 2 (13:01):
Yes, I see what you're saying.
So, yeah, we weed that out.
That's part of how we help companies with compliance,
because instead of them going on... exactly.

Speaker 1 (13:13):
No, that's right.

Speaker 2 (13:14):
Instead of them going on and looking at it themselves
and seeing protected classinformation.

Speaker 1 (13:19):
We actually... Look, Shally's wearing a Jewish star,
exactly.

Speaker 2 (13:23):
We actually filter that out.
We only present the things that are relevant, and so if it's
relevant to misconduct, misconduct behavior, or if
it has anything to do with... Acts, things they've done.
Yeah, exactly, exactly. So
we're never going to show their religion.
We're never going to show that type of
thing.
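
The filtering described here, protected-class information is dropped before anything reaches the employer, and only conduct-relevant items are surfaced, can be sketched like this. A hypothetical illustration only: the tag names are made up, and the tags are assumed to come from an upstream classifier like the one discussed earlier, not from this snippet.

```python
# Hypothetical sketch of the protected-class filtering discussed
# above: content tagged as protected information (religion, health,
# etc.) is dropped before reporting; only conduct-relevant items
# are passed along to the employer.
PROTECTED_TAGS = {"religion", "health", "disability", "age"}
RELEVANT_TAGS = {"threats", "intolerance", "violence", "harassment"}

def report_items(tagged_items):
    """Keep only conduct-relevant items; never pass protected info on."""
    report = []
    for text, tags in tagged_items:
        if tags & PROTECTED_TAGS:
            continue  # employer never sees protected-class content
        if tags & RELEVANT_TAGS:
            report.append(text)
    return report

items = [
    ("Photo at religious service", {"religion"}),
    ("Post threatening a coworker", {"threats"}),
    ("Vacation photos", set()),
]
print(report_items(items))  # ['Post threatening a coworker']
```

Note the two-sided filter: irrelevant content is dropped for signal, protected content is dropped for compliance, so the employer only ever sees conduct.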

Speaker 1 (13:41):
And what about weed?
It's kind of a thing, right? It's a pass.

Speaker 2 (13:48):
We do flag for cannabis,
but if the company wants to know, if the company requires it,
yeah, yeah, right. And it's...
You know, in places like California it doesn't matter,
right?

Speaker 1 (14:00):
some companies don't ask, and won't, and won't care,
exactly.
And then there's the conflict of, like, well, in
the state in which they reside it's legal for recreational use,
and in the state that the company is the employer of
record, it isn't.
So you know which, right, which one has jurisdiction, and so
that's why some companies simply say we're not gonna ask, because

(14:23):
it's getting cloudy.
Yeah, but then there's also, before all of this, the fact
that I might be, you know, there might be a picture of what
could potentially be considered misconduct, in that I'm, like,
smoking a doobie at a party, and that may be lack of judgment, you

(14:44):
know, poor judgment, but it's also medical.
So you got the, like...
Well, you know, how do you know
Shally doesn't have a, whatever, license, you know? Right,
right.
So that's where the human comes in and goes...

Speaker 2 (14:57):
Okay, this is not relevant, because... exactly. And
it's not only our humans at Fama that are reviewing it, then, you
know. Because we don't score or say this person is good or bad.
We just... It shows it, doesn't show it.
We just surface the information.
Okay, the hiring team can then use that information to make... got

(15:18):
it.

Speaker 1 (15:18):
So it's like, hey, you need to look at this and determine...
Okay, you talk to Shally and say, hey, Shally, do you...

Speaker 2 (15:24):
you know, are you a medical marijuana patient?

Speaker 1 (15:28):
I would be... none of my business, again, but maybe it is, in
certain... yeah, right.
So that's interesting.
So that's an application...
In my opinion, that's an application of technology that
people haven't really talked a lot about.
Right, is this... I don't like "classifier"; classification is
not a good example.

(15:48):
Aggregation. Aggregation of information, right.
When you look at the models like ChatGPT, for example,
there's a lot of information that was fed into it
that is very non-homogeneous.
It's law, medicine, you know, books, fiction, nonfiction,

(16:08):
whatever.
But when you look at the kind of information you guys are
looking at, it's homogeneous in that it's all
social content.
So you can more tightly define the railings that you can
operate in, and I don't see enough of that.
Have you seen other organizations out there that are
really tightly defining the rails?

(16:29):
I don't see enough of that.
I see a lot of generally talking about AI, and it seems
like a lot of it is just connecting to ChatGPT or
whatever, right? And that's that non-homogeneous information.
Do you see the other side a lot?

Speaker 2 (16:44):
Not really.
No, not so much, I think.
I think it's something we're gonna see.
You know, obviously a general model of intelligence is
something everyone's going after, yeah.
And you see it has applications, it has applications,
absolutely. But I think that, you know, the application of
more specific models that are much simpler than LLMs...

Speaker 1 (17:08):
But that do a good job, much simpler, very, very well,
for a particular domain.

Speaker 2 (17:13):
You're gonna see that continue, I think, and
hopefully become more used, because I think it's
more accurate in a lot of cases.

Speaker 1 (17:24):
That's right, yeah, in particular for a particular use
case. One of my mentors in the whole artificial intelligence
space, and automation particularly, before it was
called AI, you know...
I mean, it's before this whole new AI thing has come out.
So like, let's just say, prior to ChatGPT, we used to call it
automation, and I was really big on writing and using it for

(17:45):
practical reasons.
But the point is that I had this mentor that told me...
He explained this in a way that really stuck with me.
He said the robots, the machines, the programs, the
technology can do something very, very well, in a way that is
faster, better, more accurate than a human could, if it sticks
to that one thing.

(18:06):
So, for example, he used the robot hammering a nail.
If you have a nail-hammering robot, that nail-hammering robot
is going to hammer nails faster, more efficiently, more
accurately than a human ever possibly could.
But don't ask it to make you a cup of tea, right?
Right, exactly. And so that's what I'm seeing is that when you
need to hammer nails and use a nail-

(18:29):
hammering robot, you've got an advantage. But when you need to
create a compelling story and use a nail-hammering robot, now
you're like... just, you're...
You're really just applying AI just for the sake of applying it.

Speaker 2 (18:42):
You're not taking advantage.
Yeah, absolutely, absolutely.
I think that's, you know...
I think it's natural that everyone, you know, gravitates
towards what's the newest, coolest thing, but there's
plenty of AI out there.
There is, yeah, that's been around for much longer than
ChatGPT, that does really good, very specific stuff.

Speaker 1 (19:04):
And so...
We really don't talk about social media much anymore,
because it's just media, like it's now everywhere, right? But
when you guys are out there looking for evidence of behavior,
social media is a big component of it.
Right, because, let's face it, if I intentionally wrote an
article on LinkedIn, I meant for that to be seen.
I'm not gonna be somehow leaking it.

(19:27):
You know, there have been other companies before.
There was one called Crystal.
You might have heard of it. Sure. That was, like, interpreting
all of your profile information, trying to make some sort of
prediction about your personality and stuff like that.
Yep. How close is this to that?

Speaker 2 (19:46):
It's actually something that we're working on, and
we're definitely looking to say, how do we get a... how do we get
fit for a position, right? Right now...

Speaker 1 (19:56):
It's flagging potential risk, correct, but
there's also this finding potential matches, which is
not the opposite, but it's another part of the
spectrum.
Now you're not, like, trying to protect someone from risk.

Speaker 2 (20:11):
Now you're saying, let's find someone that actually
might be a really good match based on this. Totally, and it's
sort of the positive and negative being put together in a
way.
Yeah, you can say, okay, this person isn't partaking in
any misconduct, and his personality fits this.
The conduct they are partaking in is a good match for...

(20:32):
Right, absolutely, yeah. So that's...

Speaker 1 (20:35):
That's something we're working on.
That's the future.

Speaker 2 (20:37):
Yeah, okay, probably in six months or so
you'll see something like it.

Speaker 1 (20:43):
Micole.
What do you have to say about that?
You've been very quiet.

Speaker 3 (20:45):
Yeah, I've been...
Just, I've been enjoying the ride of this podcast and the
turns.

Speaker 1 (20:50):
Yeah, yeah, what he said. Yeah, no, I mean...

Speaker 3 (20:53):
Brendten is always... he's one of the smartest people,
but he's very...
He doesn't talk a lot.

Speaker 1 (21:00):
So getting him to talk is a big win.

Speaker 2 (21:02):
Absolutely, oh right.
I don't think he's ever talked enough to lose his voice in his
life.

Speaker 3 (21:16):
So I've just been happy listening to him talk
about AI and tech, and that's what he loves and he's
really, really, really good at. It definitely comes across, and I
appreciate your perspective on it, especially, you know.

Speaker 1 (21:26):
We have that to look forward to.
That's exciting, and it's a lot less boring than all the, like,
general...

Speaker 3 (21:35):
Yeah, to me it's...

Speaker 1 (21:36):
It's got a real application, so... exactly.
Well, thanks so much for being here today.
We are live at the HR Tech Expo in the Oleeo booth with the
folks from Fama.
Thank you very much.
Thank you.