Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:01):
Hey, it's Mark, and welcome back.
Today we're talking about when your boss is a robot: understanding AI in the workplace and your rights.
Sigmund Freud lived between 1856 and 1939 and was therefore witness to the surge of technology that resulted from the Industrial Revolution.
(00:23):
While he acknowledged the usefulness of the technical innovations of his day, he was also somewhat skeptical of them.
Freud famously commented, "Man has, as it were, become a kind of prosthetic god."
He argued that humans, through technology, have created artificial limbs and tools that amplify their abilities, making
(00:44):
them godlike, but also creating new troubles.
Freud had no idea what was coming.
The science fiction future that was unimaginable in Freud's day has arrived.
And it's reviewing your job application.
Artificial intelligence is no longer just something we see in movies.
It's making real decisions about real people's livelihoods every
(01:09):
day.
And while AI promises efficiency and objectivity, it's bringing some very human problems into America's workplaces: discrimination, privacy violations, and a fundamental shift in the balance of power between workers and employers.
If you've applied for a job recently, there's a good chance
(01:29):
an algorithm screened your resume before any human eyes ever saw it.
In fact, about 65% of companies now use some form of AI or automation in their hiring process.
That's not necessarily a bad thing, except when the algorithm is making biased decisions that would be illegal if a human
(01:50):
manager made them.
Here's a comforting thought.
Computers can't be racist, sexist, or ageist.
They're just following their programming, right?
Unfortunately, it's not that simple.
AI tools learn from data, and if that data reflects historical discrimination, the AI will perpetuate that
(02:13):
discrimination into the future.
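The mechanism described here can be sketched in a few lines of code. This is a deliberately toy model with hypothetical data, not how any real hiring system works: it "learns" per-keyword hire rates from biased historical decisions, then reproduces that bias when scoring two equally qualified new candidates.

```python
# Toy illustration: a "model" trained on biased historical hiring
# decisions reproduces that bias on new candidates.
# All data and keywords below are hypothetical.

def train(history):
    """Learn the historical hire rate for each resume keyword."""
    stats = {}
    for resume, hired in history:
        for word in resume:
            seen, yes = stats.get(word, (0, 0))
            stats[word] = (seen + 1, yes + int(hired))
    return {w: yes / seen for w, (seen, yes) in stats.items()}

def score(rates, resume):
    """Score a resume as the average learned hire rate of its keywords."""
    return sum(rates.get(w, 0.5) for w in resume) / len(resume)

# Historical decisions skewed toward resumes using "executed" -- a stand-in
# for terminology that happened to appear more often on one group's resumes.
history = [
    (["executed", "python"], True),
    (["executed", "java"], True),
    (["led", "python"], False),
    (["led", "java"], False),
]
rates = train(history)

# Two equally qualified candidates differ only in one word choice.
a = score(rates, ["executed", "python"])
b = score(rates, ["led", "python"])
print(a > b)  # prints True: the model prefers the historically favored phrasing
```

Nothing in the code mentions a protected characteristic; the bias rides in on a proxy (word choice) correlated with past outcomes, which is exactly why this failure mode is hard to spot.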
When Amazon deployed an AI hiring tool, the tech giant discovered their algorithm was discriminating against women.
The system had learned from the company's past hiring patterns, which favored men, and so was essentially programmed to continue that bias.
Think about that.
(02:34):
One of the world's most sophisticated technology companies, with virtually unlimited resources, couldn't create an AI hiring system that didn't discriminate.
If Amazon struggled with this, what are the odds that the automated system reviewing your application is fair?
The resume scanner that dings you for not having the right
(02:55):
keywords might be eliminating qualified women because men's resumes historically use different terminology.
The video interview AI that analyzes your facial expressions and speech patterns could be filtering out candidates based on race or ethnicity.
The chatbot that asks pre-screening questions might
(03:19):
create barriers for older workers who are less comfortable with the technology, even when tech proficiency isn't required for the job.
Here's what every worker needs to understand.
"We were just following the algorithm" is not a legal defense.
Under federal antidiscrimination laws, you
(03:41):
don't need to prove your employer intended to discriminate against you based on sex, race, religion, national origin, disability, age, or another protected characteristic.
You only need to prove that their policies had a discriminatory effect on your employment.
Or, as the Supreme Court recently held in Muldrow v. City of
(04:03):
St.
Louis, you experienced some harm in the terms and conditions of your job.
This principle applies whether the decision was made by a biased manager or a biased algorithm.
In 2023, the EEOC, the Equal Employment Opportunity Commission, settled its first-ever AI hiring discrimination case, recovering $365,000 for a group of job seekers.
(04:26):
That settlement sent a clear message: employers remain liable for discriminatory outcomes even when those outcomes are produced by automated systems that they purchase from third-party vendors.
The legal landscape for AI in employment has become dramatically unclear, and that should concern every working person in America, me included.
(04:50):
On his first day in office, President Trump rescinded Executive Order 14110, which had directed federal agencies to address AI-related risks, including bias, privacy violations, and safety concerns.
The EEOC removed key guidance documents explaining how Title VII and the Americans with Disabilities Act applied to AI
(05:10):
tools.
The Department of Labor has signaled that its prior guidance on AI best practices may no longer reflect current policy.
In other words, the federal government has largely stepped back from regulating AI in the workplace, leaving workers with far less protection than they had just months ago.
(05:31):
Fortunately, several states have stepped into the vacuum.
New York City's Local Law 144, which took effect on January 1st, 2023, requires employers using automated employment decision tools to conduct independent bias audits and provide notice to job candidates.
Illinois recently amended the Illinois Human Rights Act to
(05:52):
prohibit employers from using AI in ways that lead to discriminatory outcomes based on protected characteristics.
California has introduced several bills aimed at regulating AI in employment, including one whose title I like: the No Robo Bosses Act, SB 7, which would require
(06:12):
employers to provide 30 days' notice before using any automated decision systems and mandate human oversight of employment decisions.
Over 25 states introduced similar legislation in 2025.
For workers in Connecticut andNew York, the current situation
is particularly frustrating.
Connecticut saw a bill fail thatwould have protected employees
(06:36):
and limited electronicmonitoring for employers.
While New York City hasprotections, New York State has
yet to pass comprehensive AIemployment protections beyond
those affecting state agencies.
While much attention focuses on AI in hiring, the technology is being used throughout the employment relationship, often without workers' knowledge or consent.
(06:59):
AI systems are increasingly used to monitor employee productivity, track keystrokes, analyze work patterns, and even predict which employees are likely to quit.
These tools raise profound privacy concerns.
AI systems often require access to employee communications, performance records, and personal information, and companies may
(07:22):
unknowingly cross legal boundaries that could result in lawsuits for privacy violations or breach of employment agreements.
Illinois' Biometric Information Privacy Act, BIPA, has been particularly impactful.
Companies have faced multimillion-dollar settlements for BIPA violations related to AI systems that analyze employees'
(07:44):
facial features, voice patterns, and other biometric identifiers without prior consent.
Some proposed legislation would address AI-driven workplace surveillance.
California's AB 1221 and AB 1331 would require transparency and limit monitoring during off-duty hours or in private spaces like
(08:06):
a bathroom.
But in most states, employers have broad latitude to monitor workers using AI tools, often without their knowledge.
As I have said before, employers are little private governments, and they can do whatever they please.
And really, there's not much the state and federal governments can do unless it's a flagrant error or a violation.
(08:31):
The Stop Spying Bosses Act, introduced in Congress, would prohibit electronic surveillance for certain purposes, including monitoring employees' health, keeping tabs on off-duty workers, and interfering with union organizing.
However, this legislation has not yet been enacted into law.
AI tools aren't just screening job applicants; they're making
(08:55):
recommendations about who should be promoted, who should be disciplined, and who should be laid off.
And because machine learning systems become more entrenched in their biases over time, discriminatory patterns can become a vicious cycle.
The more AI makes biased decisions, the more that bias becomes embedded in the training data for the next generation of
(09:16):
AI tools.
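The vicious cycle described here can be shown with a deliberately stylized simulation. The numbers and the "squared-odds" amplification rule below are pure assumptions for illustration, not a model of any real system: each generation of the model is retrained on the pool its predecessor accepted, so a small historical skew compounds.

```python
# Stylized feedback loop: each model generation trains on the previous
# generation's accepted pool. The squared-odds update is a hypothetical
# amplification rule chosen purely for illustration.

def retrain(share_a: float) -> float:
    """Next model's preference for group A, amplified from group A's
    share of the last accepted pool (squared-odds rule)."""
    return share_a**2 / (share_a**2 + (1 - share_a) ** 2)

share = 0.55  # a modest 55/45 skew toward group A in the historical data
trajectory = [share]
for _ in range(5):
    share = retrain(share)
    trajectory.append(share)

# After a handful of retraining cycles, the modest skew has drifted
# toward near-total exclusion of group B.
print([round(s, 2) for s in trajectory])
```

The point of the sketch is only that the loop has no self-correcting force: a perfectly balanced 50/50 pool stays balanced, but any initial imbalance feeds on itself with each retraining cycle.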
Employee privacy rights don't disappear simply because an employer is using AI technology.
Under both federal and state employment laws, employers have an obligation to protect employee information and notify workers about monitoring or data collection practices.
This is the old question: is your employer recording you, and giving you
(09:40):
notice of it?
Some states require that employers give notice, and the law is relatively new in that respect.
Notice requirements for AI, obviously, are separate and different in each state.
Many jurisdictions require explicit employee consent before collecting or processing personal data for AI training
(10:02):
purposes.
Simply updating the employee handbook may not be sufficient.
Specific agreements addressing AI data use may be required.
However, workers often face a coercive choice: consent to extensive AI monitoring and data collection, or lose your job.
(10:22):
The practical reality is stark.
AI systems learn from the data they are fed.
If that data includes your communications, performance records, and personal information, your employer may be using your private information in ways you never imagined, and potentially in violation of your privacy rights.
If you're concerned about AI affecting your employment, here's what you need to understand.
(10:45):
Discrimination based on race, sex, religion, national origin, age, disability, or genetic information is illegal, whether the discriminatory decision is made by a person or an algorithm.
Retaliation for complaining about discrimination is also illegal.
So know your rights.
Ask questions.
You have the right to know if AI tools are being used
(11:08):
to make employment decisions about you.
While not all states require disclosure, asking the question puts employers on notice that you're paying attention.
New York City employers, for example, must provide notice at least 10 business days before using an automated employment decision tool.
Document everything.
If you suspect AI discrimination, document the
(11:30):
circumstances that you find.
(11:58):
Employers should provide alternatives to AI tools when necessary.
Be aware of data privacy.
Understand what employee data your employer collects and how it's used.
In some states, you have rights regarding your personal information.
Illinois workers, in particular, have strong protection under
(12:19):
BIPA, as we discussed before, for biometric data.
Don't assume the decision is final.
Just because an AI rejected your application or recommended disciplinary action doesn't mean that decision was correct or legal.
Automated tools make mistakes, and they can be challenged.
And as I've talked about in the past, there's the
(12:41):
case of an employee who sued Workday on that same premise.
The future of work is here, and it's increasingly automated, but workers still have rights.
The fact that an employer is using sophisticated technology doesn't give them permission to discriminate, violate policy, or
(13:01):
ignore employment laws that have protected workers for decades.
As legislatures continue to grapple with how to regulate AI in employment, the fundamental legal principles remain unchanged.
Employers cannot discriminate based on protected characteristics.
They cannot retaliate against workers who assert their rights,
(13:22):
and they must respect employee privacy within the bounds of applicable law.
The AFL-CIO put it well in supporting proposed federal legislation, quote, "working people must have a voice in the creation, implementation, and regulation of technology," end quote.
That voice includes understanding when your rights are being violated and taking action when they are.
(13:45):
The paradox identified by Freud in his quote earlier, about humans becoming prosthetic gods, is nowhere more evident than in the realm of AI.
While the technology indeed gives humans godlike powers, Freud also noted that these technologies are processes that were not naturally grown and therefore can cause problems for
(14:07):
the human condition.
Freud questioned whether these tools truly lead to happiness, even as they increase human power.
In the American workplace, we all have to grapple with this paradox as AI becomes increasingly common in everyday life.
If you believe you've been discriminated against by an AI hiring tool, unfairly monitored by an automated surveillance
(14:28):
system, or subjected to biased AI-driven employment decisions, you don't have to accept it.
The law is evolving rapidly, but your fundamental rights as a worker remain protected.
The employment attorneys at Kerry and Associates understand both technology and the law.
We've been following these issues closely and are prepared to help workers navigate the new frontier of employment law,
(14:49):
which it certainly is.
Whether you're facing discrimination in hiring, unfair AI-driven performance evaluations, or privacy violations through workplace surveillance, we can evaluate your situation and advise you on your legal options.
Hope you enjoyed this episode, and talk to you soon.