Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You've found the podcast Go Beyond the Brief, where we take a deep dive into the societal currents shaping our lives. Together, we'll explore the often unseen forces at play. We'll examine the research, dissect the data, and most importantly, if you're seeking to understand what's shaping our society, this is the place.
Speaker 2 (00:18):
For hiring, you definitely know the promise of AI. You know, unparalleled speed, efficiency.
Speaker 3 (00:24):
Right, these automated systems. They can screen thousands of resumes instantly,
use predictive analytics.
Speaker 2 (00:30):
They seem to reduce human error, ostensibly. It really sounds like the silver bullet for staffing. It does, but here's
what we really need to wrestle with today. Every layer
of automation you introduce, well, it's also a layer of
magnified systemic legal risk. Yeah.
Speaker 3 (00:46):
Absolutely. If your algorithm is biased, and many can be,
you're not just risking one discrimination lawsuit, you're potentially discriminating
at scale. And that massive risk is exactly why regulators
are moving so fast. We're diving into a really crucial
and frankly complex topic today, navigating this rapidly emerging patchwork
(01:06):
of US AI employment regulations.
Speaker 2 (01:09):
It's a patchwork because there's no central federal law governing
this yet.
Speaker 3 (01:12):
Exactly, which makes understanding these well, these disparate local and
state rules absolutely essential for anyone operating, especially across state lines.
Speaker 2 (01:21):
Okay, so our mission for this deep dive is to
get into the two jurisdictions that seem to be setting
the nationwide standard, or at least influencing it heavily. First, New York City's Local Law one forty four. That one's already an operational reality; it's been in effect for a bit.
Speaker 3 (01:37):
Since July five, twenty twenty three.
Speaker 2 (01:39):
Yeah. And second, the really big regulatory foundation being laid
in California. Their Title 2 revisions to the Code of Regulations. They're set to dramatically reshape HR tech by October first, twenty twenty five.
Speaker 3 (01:53):
So we need to figure out the scope first. What technologies are actually covered? I mean, is it just fancy AI or simple stuff too?
Speaker 2 (02:00):
And maybe more importantly, what specific non negotiable compliance actions
are now required? You know, what do you have to
do to stay legally defensible?
Speaker 3 (02:08):
Okay, let's start right there with definitions. Because these laws
they're intentionally written to be technology agnostic.
Speaker 2 (02:14):
They have to be, right? They need to last longer than the current buzzwords, the current generation of AI tools, exactly.
So when the law talks about AI, are we only
talking about like deep learning and complex neural networks or
does a simple keyword filter count? What's the real scope here?
Speaker 3 (02:30):
Well, the scope is intentionally massive, designed that way to
ensure durability. California, for instance, uses the term automated decision
system, or ADS. ADS, okay. And that definition is incredibly broad.
It covers basically any computational process, whether it's derived from
machine learning, statistics, even classic AI, that makes or, and
(02:50):
this is key, merely facilitates a decision regarding an employment benefit.
Speaker 2 (02:55):
Facilitates? Wow, okay, that sounds like the widest possible net.
Speaker 3 (02:58):
It pretty much is. That means hiring, promotion, performance reviews, even termination recommendations could fall under it.
Speaker 2 (03:04):
So give us some examples of an ADS under California's definition.
Speaker 3 (03:07):
Sure, think about systems that screen resumes based on specific
patterns or keywords, or maybe tools that direct job ads
to target specific demographic groups online. It also includes systems
analyzing audio or video recordings to score a candidate's, you know, fit or communication style, or even using digital games
or puzzles to assess skills or personality traits.
Speaker 2 (03:30):
So if data is processed in any way to help
make a key employment decision.
Speaker 3 (03:34):
It's likely covered in California, yes.
Speaker 2 (03:35):
Got it. Now, how does that compare to New York City's definition? Is it just as broad?
Speaker 3 (03:41):
NYC's term is automated employment decision tool, or AEDT. And it's actually a bit narrower in its required function, which is interesting. The tool has to substantially assist or replace discretionary
decision making, specifically for hiring or promotion.
Speaker 2 (03:59):
Okay, so substantially assist or replace. What does that mean practically?
Speaker 3 (04:03):
Well, imagine a simple third party background check tool. If
it just delivers raw data like criminal records or credit history,
that might not qualify as an AEDT because a human
still makes the entire decision.
Speaker 2 (04:14):
It doesn't replace the discretion exactly.
Speaker 3 (04:17):
But contrast that with an automated ranking system that scores
candidates and say auto rejects everyone outside the top ten percent.
That definitely qualifies because it's removing or heavily replacing human
discretion in the screening process.
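Just to make that concrete, here's a minimal sketch, with entirely made-up candidate names, scores, and a ten percent cutoff, of the kind of auto-rejecting ranker being described; none of this comes from the law itself, it only illustrates why such a tool replaces discretion.

```python
# Hypothetical scores for five candidates; names and numbers are made up.
candidate_scores = {"A": 0.91, "B": 0.62, "C": 0.88, "D": 0.45, "E": 0.73}

# Rank candidates by score, highest first.
ranked = sorted(candidate_scores, key=candidate_scores.get, reverse=True)

# Keep only the top 10 percent (at least one candidate); auto-reject the rest.
keep_n = max(1, round(0.10 * len(ranked)))
advanced = ranked[:keep_n]
auto_rejected = ranked[keep_n:]

print("Advanced without human review:", advanced)
print("Auto-rejected without human review:", auto_rejected)
```

Because the cut is applied automatically, nobody ever looks at the rejected group, which is exactly the replaced discretion the AEDT definition is aimed at.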
Speaker 2 (04:31):
That makes sense. Okay. One point from the sources that I just can't get past is the applicability threshold in California.
Who does this actually apply to?
Speaker 3 (04:39):
Yeah, that's huge. The regulations apply to nearly all businesses
operating in the state that regularly employ five or more employees.
Speaker 2 (04:46):
Five or more. That's tiny.
Speaker 3 (04:48):
It is, and that low applicability bar is why these regulations
are such a potential game changer. It means small businesses
maybe using an affordable off the shelf recruiting platform, they're
now suddenly under the same complex legal scrutiny regarding algorithmic
bias as the biggest tech firms in Silicon Valley. It
democratizes the compliance burden.
Speaker 2 (05:07):
In a way, right. And that scrutiny is tied directly to California's existing anti-discrimination laws. Let's shift fully to California's focus now, because this really seems to be
where the liability rubber meets the road. The Title 2 revisions explicitly extend the Fair Employment and Housing Act, that's FEHA, and its anti-discrimination protections directly to the use of AI
(05:30):
and these ADS tools.
Speaker 3 (05:31):
And what's really critical here is that the prohibition covers
both intentional discrimination, what lawyers call disparate treatment.
Speaker 2 (05:38):
Like deliberately setting a tool to exclude a certain group.
Speaker 3 (05:41):
Correct, but it also covers practices that have a
disparate impact, meaning they unfairly screen out a protected group,
even if the intent behind the algorithm or its use
was completely neutral. California is really trying to smoke out
those hidden biases that might be baked into the training
data or the system logic.
Speaker 2 (06:00):
And that brings us directly to this specific ban on proxies.
This sounds complicated. How does a seemingly innocent piece of
data become, well, legally toxic?
Speaker 3 (06:10):
Yeah, proxies are a major focus. A proxy is essentially
a neutral seeming data point that happens to be highly
correlated with a protected characteristic like race, national origin, gender, age, disability.
So the law prohibits using something like say zip codes
or maybe even the type of undergraduate institution someone attended,
if that data effectively stands in as a substitute as
(06:33):
a proxy for a protected basis you absolutely cannot legally
consider directly.
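As a rough illustration of how a team might screen for that kind of proxy, here's a sketch that flags input features strongly correlated with a protected characteristic; the column names, sample data, and 0.5 threshold are all hypothetical choices, not anything the regulations prescribe.

```python
# Hypothetical applicant data; column names, values, and the 0.5 threshold
# are illustrative choices, not anything the regulations specify.
import pandas as pd

applicants = pd.DataFrame({
    "zip_code_income_index": [0.2, 0.3, 0.8, 0.9, 0.85, 0.25],
    "years_experience":      [3, 5, 4, 6, 2, 7],
    "protected_group":       [1, 1, 0, 0, 0, 1],  # 1 = member of a protected group
})

# Flag any input feature that is strongly correlated with the protected
# characteristic and so might be acting as a proxy for it.
for feature in ["zip_code_income_index", "years_experience"]:
    corr = applicants[feature].corr(applicants["protected_group"])
    if abs(corr) > 0.5:
        print(f"Review {feature}: correlation with protected group is {corr:.2f}")
```

A high correlation doesn't by itself prove the feature is acting as a proxy, but it's the sort of signal these rules expect employers to investigate rather than ignore.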
Speaker 2 (06:38):
So, like, using zip code as a stand-in for race, potentially?
Speaker 3 (06:41):
Precisely. But the concept, and this is where it gets even more complex and, frankly, a bit terrifying for developers, goes one layer deeper: the proxy for a proxy.
Speaker 2 (06:52):
A proxy for a proxy. Okay, unpack that.
Speaker 3 (06:54):
Where algorithms start making connections that sometimes humans can't even
easily trace back or wouldn't consciously make. Is when systems
analyze seemingly benign data points, maybe a candidate's vocal tone
in an interview, recording their facial expressions during a video assessment,
or even reaction time in a screening game.
Speaker 2 (07:11):
Things that seem objective on the surface.
Speaker 3 (07:13):
Exactly. But these things, seemingly neutral, can easily correlate with protected characteristics. Think about vocal tone potentially correlating with national origin or a non-native accent, or reaction
time potentially correlating with age or a neurological disability.
Speaker 2 (07:28):
Ah, I see. So a system might penalize someone for, say, a slightly slower reaction time, not realizing it's linked to a protected condition.
Speaker 3 (07:37):
Precisely. And that is where the law steps in and says, hold on, liability can be assigned. The employer has
to understand, or at least try to understand and account
for the correlations their models are making, even the indirect ones.
Speaker 2 (07:50):
That really forces a serious compliance conversation inside companies, doesn't it?
Speaker 3 (07:54):
Absolutely.
Speaker 2 (07:55):
So in California, with all this focus on proxies and disparate impact, are companies legally mandated to perform bias testing on these ADS tools before they use them?
Speaker 3 (08:05):
Okay. This is a key difference between California and, say, NYC. In California, it's not a direct legal mandate that says you must audit, but it's a very powerful de facto requirement.
Speaker 2 (08:16):
A de facto requirement? How so?
Speaker 3 (08:17):
Because the regulations clearly state that the evidence of anti-bias testing, or, perhaps more perilously, the lack of evidence of such testing, will be a key factor when courts or regulators assess discrimination claims.
Speaker 2 (08:28):
Ah. Okay, so not doing the testing makes you look
bad if a claim arises.
Speaker 3 (08:32):
It does more than look bad. The regulations say the
absence of testing can and likely will weigh against employers.
It weakens your defense significantly.
Speaker 2 (08:42):
So it's like regulating governance through the back door. Why not just mandate the testing upfront like NYC?
Speaker 3 (08:48):
Well you could see it that way. It's arguably a
strategically brilliant aspect of the regulation. It leverages the existing
powerful framework of discrimination law. By saying the absence of testing hurts your case, they effectively force businesses into adopting proactive, auditable governance models, not because of a
(09:09):
direct order, but simply as a necessary step for building
an affirmative legal defense if they're ever challenged. If you
haven't done the work, your position in court is just much,
much weaker.
Speaker 2 (09:18):
That's a really powerful incentive model. Wow. And to support
that increased scrutiny, the sources also note a huge change
in data governance, specifically record keeping.
Speaker 3 (09:28):
Yes, that's another big operational shift. California has doubled the
required record preservation period for all ADS related data.
Speaker 2 (09:34):
Doubled from what to what?
Speaker 3 (09:37):
From two years, which was standard for many employment records, to four years. And this covers everything: scoring outputs, information about the training data's provenance, the audit findings, if you did them.
Speaker 2 (09:48):
And why four years?
Speaker 3 (09:48):
Specifically, to align directly with the state's statute of limitations for filing civil rights claims under FEHA. They want the evidence to be available for the entire period someone could potentially bring a lawsuit.
Speaker 2 (10:00):
Okay, that makes sense. Four years is a long time
for data retention in this context.
Speaker 3 (10:03):
It is. It requires robust systems and processes.
Speaker 2 (10:07):
All right, let's pivot now to New York City's Local Law one forty four. Because this isn't theoretical, right? It's been in force since July five, twenty twenty three. This is
the current operational reality for many.
Speaker 3 (10:20):
That's right, this is happening now.
Speaker 2 (10:21):
So what does NYC demand operationally? What are the must dos?
Speaker 3 (10:25):
NYC lays out three essentially non-negotiable mandates for employers using AEDTs. First, there's the independent bias audit.
Speaker 2 (10:33):
Independent? Okay, what does that entail?
Speaker 3 (10:35):
It must be conducted annually, and crucially, it has to
be performed by a truly independent auditor. That means the
auditor cannot be affiliated with the employer or with the
vendor or developer of the tool being audited.
Speaker 2 (10:48):
So you can't just have the vendor audit their own
tool or do it internally.
Speaker 3 (10:52):
Absolutely not. It has to be external and unbiased.
Speaker 2 (10:54):
And what specific data must that audit actually reveal? What
are they looking for?
Speaker 3 (10:59):
It requires calculating very specific metrics, primarily the selection
rate for different demographic categories, and then the impact ratio.
Speaker 2 (11:07):
Impact ratio.
Speaker 3 (11:08):
Yeah, that basically compares the selection rate of a specific
demographic group to the selection rate of the highest selected group.
It's looking for disparities across categories like sex, race, ethnicity, and, importantly, intersectional categories, like comparing black women to white men, for instance.
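To put numbers on those two metrics, here's a minimal sketch with made-up applicant counts: each group's selection rate is selections divided by applicants, and its impact ratio is that rate divided by the highest group's rate.

```python
# Made-up applicant and selection counts per demographic group.
applicants = {"group_a": 200, "group_b": 150, "group_c": 120}
selected   = {"group_a": 60,  "group_b": 30,  "group_c": 18}

# Selection rate: share of each group's applicants who were selected.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each group's rate relative to the highest group's rate.
highest_rate = max(selection_rates.values())
impact_ratios = {g: rate / highest_rate for g, rate in selection_rates.items()}

for group in applicants:
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {impact_ratios[group]:.2f}")
```

In a real Local Law one forty four audit, these figures would also be broken out by the intersectional categories just mentioned, not only by single attributes.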
Speaker 2 (11:23):
Okay, so you get this audit, you calculate these ratios.
Now here's the part that seems counterintuitive. The source material
says NYC doesn't actually mandate that the employer stop using
the AEDT if bias is found in the audit. Is
that right?
Speaker 3 (11:37):
That is correct. And it seems odd on the surface.
Speaker 2 (11:39):
Doesn't it? Yeah, if the audit proved your tool is biased,
you don't have to stop using it. What's the point
of the audit then?
Speaker 3 (11:46):
Ah, but here's the legal catch. It's quite clever, actually. Once you have that independent audit report documenting bias, well,
you now have explicit, documented knowledge of that bias.
Speaker 2 (11:57):
Right, you can't claim ignorance anymore.
Speaker 3 (11:59):
Precisely, so, if you continue using that tool known to
be biased, your potential liability under broader federal anti-discrimination laws like Title VII of the Civil Rights Act increases
dramatically because it starts to look less like unintentional disparate impact,
which might have some defenses and potentially more like intentional discrimination.
(12:20):
Because you knew about the bias and used the tool anyway, it creates this massive kind of self-imposed legal risk
if you ignore the findings.
Speaker 2 (12:26):
Wow, that's fascinating. Sort of weaponizes the audit data against
the user if they don't act on it. Okay, what
about the other two pillars of the NYC law?
Speaker 3 (12:34):
Pillar two is public transparency. A summary of those annual
bias audit results, including the key metrics like impact ratios
and the data sources used for the audit, must be publicly posted on the company's website.
Speaker 2 (12:46):
Publicly? Like anyone can see it?
Speaker 3 (12:48):
Anyone. No logins, no hiding it behind a careers
portal paywall. It needs to be easily accessible. This makes
it instantly discoverable by the city's enforcement agency, the Department
of Consumer and Worker Protection, or DCWP, and of course
by potential plaintiffs and their lawyers or even just job seekers.
Creates market pressure too.
Speaker 2 (13:08):
Okay, public transparency. And the third operational demand, that one focuses on the candidate experience, right?
Speaker 3 (13:14):
Yes. Pillar three is advance notification and accommodation. Candidates or even
current employees who are based in NYC must receive notice
at least ten business days before an AEDT is used
on them.
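As a rough illustration of that lead time, here's a small sketch that computes the earliest date a tool could be used after notice goes out, assuming business days simply means weekdays and ignoring public holidays; the function name and example date are made up.

```python
from datetime import date, timedelta

def earliest_use_date(notice_date: date, business_days: int = 10) -> date:
    """Count forward the required number of weekdays from the notice date.

    Assumes 'business days' means Monday through Friday and ignores holidays.
    """
    current = notice_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

# Notice sent Friday, March 1, 2024 -> tool usable no earlier than March 15, 2024.
print(earliest_use_date(date(2024, 3, 1)))
```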
Speaker 2 (13:25):
Ten business days? That seems like a long lead time in fast-paced hiring.
Speaker 3 (13:28):
It is, and that notice has to
be specific. It must explain what the tool does, what
job qualifications or characteristics the tool is assessing, and critically
provide clear instructions on how the individual can request an
alternative assessment method or a reasonable accommodation.
Speaker 2 (13:45):
That ten-business-day waiting period, I can see how
that creates a really significant, almost unavoidable operational burden, especially
for high volume industries like retail or hospitality.
Speaker 3 (13:56):
Absolutely, it slows things down considerably. It's probably the most
operationally disruptive piece for many companies using these tools at
scale in NYC.
Speaker 2 (14:05):
So let's zoom out. We have this conflict, or maybe tension: NYC's mandatory annual public audit versus California's powerful legal incentive model for auditing, combined with strict proxy rules and longer record retention.
Speaker 3 (14:18):
Right, different approaches, different pressures.
Speaker 2 (14:20):
If you're a multi-state company operating in this, well, legal minefield, how do you even begin to manage this, especially when you factor in vendor relationships for the tools themselves?
Speaker 3 (14:30):
Yeah, the complexity for multi-state employers is immense, and it's not just California and NYC. You have other states
like Illinois, Maryland, Colorado, all with their own emerging rules
and timelines. It's constantly evolving, so you can't.
Speaker 2 (14:44):
Just comply with the lowest common denominator state.
Speaker 3 (14:46):
Absolutely not. That's a recipe for disaster. And this complexity
is compounded by how California in particular is handling vendor
liability through its codification of agency theory.
Speaker 2 (14:57):
Agency theory. Okay, does that mean if you outsource the AI tool to a third-party vendor, you also outsource the liability? Or are businesses finding vendors just won't take on that risk?
Speaker 3 (15:08):
California has made it very clear: third-party AI tool
developers and vendors are legally defined as agents of the
employer using the tool. Meaning? Meaning the buck stops with the business, the employer. Even if you outsource your entire
resume screening process to a fancy AI vendor, you, the employer,
are still legally responsible for the discriminatory impacts of that tool.
(15:28):
Under FEHA, you cannot use the vendor as a legal shield.
Speaker 2 (15:31):
So thorough vendor due diligence isn't just a best
practice anymore. It sounds like a foundational legal necessity.
Speaker 3 (15:38):
It absolutely is. This requires really digging into your vendor contracts.
You need explicit provisions guaranteeing their anti-bias testing protocols, transparency about data provenance, and, critically, clear indemnification agreements where the vendor takes some financial responsibility if their tool leads to FEHA violations. Getting vendors to agree to strong indemnification, though,
(16:00):
can be tough.
Speaker 2 (16:01):
I bet. So for any large or growing company listening
right now trying to navigate this fragmented, complex regulatory environment,
what's the single most viable strategy? How do you stay
compliant everywhere?
Speaker 3 (16:15):
Honestly, the only truly viable path forward, especially for national employers,
is to aim to exceed the highest standards.
Speaker 2 (16:20):
Meaning comply with the strictest rules out there, even
if you don't operate there yet.
Speaker 3 (16:25):
Essentially, yes, you need to build a unified compliance framework
that meets or ideally surpasses California's most stringent requirements because
they are currently among the most comprehensive.
Speaker 2 (16:35):
So that means things like...
Speaker 3 (16:36):
That means implementing robust, ideally external, bias audit protocols, even if
your primary jurisdiction doesn't explicitly mandate it yet like NYC does.
It means training your teams, understanding proxy risks, and it definitely means strictly adhering to that four-year record-keeping rule from California for all ADS-related data, everywhere you operate.
Speaker 2 (16:57):
Because if you comply with California's high bar...
Speaker 3 (16:59):
You are likely compliant, or very close to compliant,
across most other jurisdictions currently active or proposing similar regulations.
It's about building for the strictest environment to achieve broader compliance.
Speaker 2 (17:11):
Okay, the lesson here seems crystal clear, then. The era of treating your automated hiring or HR systems as some kind of impenetrable black box is definitively over, gone. Whether you're motivated by NYC's explicit audit mandate and public shaming potential or California's threat of a significantly weakened legal defense if you don't do the work, proactive, transparent, and auditable
(17:34):
governance is simply the new standard requirement for staying in
business and using these tools legally.
Speaker 3 (17:39):
And I'd argue this is actually far more than just
avoiding fines or legal threats, though that's obviously crucial. How so? Using that audit data proactively, those selection rates, those impact ratios, allows companies to genuinely assess and potentially improve
their actual diversity, equity, and inclusion outcomes.
Speaker 2 (17:59):
Ah, turning the compliance burden into something strategic.
Speaker 3 (18:01):
Exactly. It transforms a legal burden into a potential
strategic advantage by helping ensure your talent pipeline is actually
selecting the most qualified and diverse pool of candidates possible,
rather than inadvertently filtering people out based on flawed algorithms. Yeah,
you can use the data to make better, fairer systems.
Speaker 2 (18:20):
That's a great point. Okay, so here's our final thought for you, the listener, to chew on, stemming from this cross-jurisdictional conflict and the operational realities. Given that California might soon pass even more legislation, like the proposed No Robo Bosses Act, which could mandate human oversight for critical employment decisions like discipline or termination, how might that mandatory
(18:41):
ten-business-day waiting period, the one currently required by NYC's advance notification rule, permanently change the expected speed and flexibility of modern high-volume hiring pipelines across the entire country? Will AI hiring ever truly be fast again once these notification and oversight rules become more widespread?
Something to think about.