
October 23, 2025 16 mins

Comment on the Show by Sending Mark a Text Message.

Your next performance review might be scored by a model you’ve never met. We dig into how AI is reshaping hiring, promotion, discipline, and workplace surveillance, and we explain what that means for your rights under anti-discrimination and privacy laws. From the promise of efficiency to the reality of bias, we unpack why intent isn’t required for liability and how disparate impact applies whether a manager or a machine makes the call.

We walk through real examples, including Amazon’s abandoned hiring tool that learned to prefer men, and the EEOC’s first AI hiring settlement that signaled employers can’t outsource accountability to vendors. We also trace the policy whiplash: federal agencies stepping back from guidance, while states and cities step up. New York City’s bias audits and applicant notices, Illinois’s expanded protections and BIPA enforcement, and California’s “No Robobosses” proposals point to a patchwork of rules that matter the moment software touches your resume, your video interview, or your keyboard.

Surveillance is expanding too. Keystroke tracking, productivity dashboards, and biometric tools promise insight but raise serious questions about consent, data handling, and monitoring off-duty or in private spaces. We share practical steps: ask if AI is used in decisions about you, request accessible alternatives, document outcomes that don’t add up, and remember that retaliation for raising concerns is illegal. The technology may be new, but your core protections are not. Subscribe for more clear guidance on navigating AI at work, share this conversation with a colleague who needs it, and leave a review to help others find the show.

If you enjoyed this episode of the Employee Survival Guide, please like us on Facebook, Twitter, and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player, such as Apple Podcasts. Leaving a review lets other listeners know that the content of this podcast is important in the area of employment law in the United States.

For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.

Disclaimer: For educational use only, not intended to be legal advice.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:01):
Hey, it's Mark and welcome back.
Today we're talking about when your boss is a robot: understanding AI in the workplace and your rights.
Sigmund Freud lived between 1856 and 1939 and was therefore witness to the surge of technology that resulted from the Industrial Revolution.

(00:23):
While he acknowledged the usefulness of the technical innovations of his day, he was also somewhat skeptical of them.
Freud famously commented that man has, as it were, become a kind of prosthetic god.
He argued that humans, through technology, have created artificial limbs and tools that amplify their abilities, making

(00:44):
them godlike, but also creating new troubles.
Freud had no idea what was coming.
The science fiction future that was unimaginable in Freud's day has arrived.
And it's reviewing your job application.
Artificial intelligence is no longer just something we see in movies.
It's making real decisions about real people's livelihoods every

(01:09):
day.
And while AI promises efficiency and objectivity, it's bringing some very human problems into America's workplaces: discrimination, privacy violations, and a fundamental shift in the balance of power between workers and employers.
If you've applied for a job recently, there's a good chance

(01:29):
an algorithm screened your resume before any human eyes ever saw it.
In fact, about 65% of companies now use some form of AI or automation in their hiring process.
That's not necessarily a bad thing, except when the algorithm is making biased decisions that would be illegal if a human

(01:50):
manager made them.
Here's a comforting thought.
Computers can't be racist, sexist, or ageist.
They're just following their programming, right?
Unfortunately, it's not that simple.
AI tools learn from data, and if that data reflects historical discrimination, the AI will perpetuate that

(02:13):
discrimination into the future.
When Amazon deployed an AI hiring tool, the tech giant discovered their algorithm was discriminating against women.
The system had learned from the company's past hiring patterns, which favored men, and was essentially programmed to continue that bias.
Think about that.

(02:34):
One of the world's most sophisticated technology companies with virtually unlimited resources couldn't create an AI hiring system that didn't discriminate.
If Amazon struggled with this, what are the odds that the automated system reviewing your application is fair?
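To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon's system; the data, the proxy feature, and the coefficients are invented for illustration. It shows how a model trained to imitate historically biased hiring decisions can learn to penalize a proxy for a protected characteristic even though that characteristic is never an input:

```python
# Hypothetical illustration only: synthetic data and an invented proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

years_experience = rng.normal(8, 3, n)
# Proxy feature correlated with gender, e.g. a resume keyword like "women's chess club".
proxy_keyword = rng.integers(0, 2, n)

# Historical hiring labels reflect past bias: at the same experience level,
# candidates with the proxy keyword were hired less often.
logits = 0.4 * (years_experience - 8) - 1.5 * proxy_keyword
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# A model trained to reproduce those historical decisions learns the same bias.
X = np.column_stack([years_experience, proxy_keyword])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy keyword gets a large negative weight
```

The model never sees gender, yet the historical pattern flows straight into its predictions, which is exactly the problem described here.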
The resume scanner that dings you for not having the right

(02:55):
keywords might be eliminating qualified women because men's resumes historically use different terminology.
The video interview AI that analyzes your facial expressions and speech patterns could be filtering out candidates based on race or ethnicity.
The chatbot that asks pre-screening questions might

(03:19):
create barriers for older workers who are less comfortable with the technology, even when tech proficiency isn't required for the job.
Here's what every worker needs to understand.
“We were just following the algorithm” is not a legal defense.
Under federal anti-discrimination laws, you

(03:41):
don't need to prove your employer intended to discriminate against you based on sex, race, religion, disability, age, or another protected characteristic.
You only need to prove that their policies had a discriminatory effect on your employment.
Or, as the Supreme Court recently held in Muldrow versus City of

(04:03):
St. Louis, that you experienced some harm in the terms and conditions of your job.
This principle applies whether the decision was made by a biased manager or a biased algorithm.
In 2023, the EEOC, the Equal Employment Opportunity Commission, settled its first-ever AI hiring discrimination case, recovering $365,000 for a group of job seekers.

(04:26):
That settlement sent a clear message: employers remain liable for discriminatory outcomes even when those outcomes are produced by automated systems that they purchase from third-party vendors.
The legal landscape for AI in employment has become dramatically unclear, and that should concern every working person in America, me included.

(04:50):
On his first day in office, President Trump rescinded Executive Order 14110, which had directed federal agencies to address AI-related risks, including bias, privacy violations, and safety concerns.
The EEOC removed key guidance documents explaining how Title VII and the Americans with Disabilities Act applied to AI

(05:10):
tools.
The Department of Labor has signaled that its prior guidance on AI best practices may no longer reflect current policy.
In other words, the federal government has largely stepped back from regulating AI in the workplace, leaving workers with far less protection than they had just months ago.

(05:31):
Fortunately, several states have stepped into the vacuum.
New York City's Local Law 144, which took effect on January 1st, 2023, requires employers using automated employment decision tools to conduct independent bias audits and provide notice to job candidates.
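As a rough sketch of what such a bias audit measures, here is a hypothetical impact-ratio calculation: selection rates are computed per demographic category and compared against the most-selected category. The numbers below are invented, and the actual Local Law 144 audit methodology is defined in the city's implementing rules.

```python
# Hypothetical applicant and selection counts, for illustration only.
selected = {"men": 120, "women": 60}
applicants = {"men": 400, "women": 300}

selection_rate = {g: selected[g] / applicants[g] for g in applicants}
highest_rate = max(selection_rate.values())

# Impact ratio: each category's selection rate relative to the most-selected category.
impact_ratio = {g: selection_rate[g] / highest_rate for g in selection_rate}

for g in selection_rate:
    print(f"{g}: selection rate {selection_rate[g]:.2f}, impact ratio {impact_ratio[g]:.2f}")
```

An impact ratio well below 1.0, for example under the traditional four-fifths (0.80) rule of thumb, is the kind of result that signals possible adverse impact worth scrutinizing.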
Illinois recently amended the Illinois Human Rights Act to

(05:52):
prohibit employers from using AI in ways that lead to discriminatory outcomes based on protected characteristics.
California has introduced several bills aimed at regulating AI in employment, including, and I like this title, the No Robobosses Act, SB 7, which would require

(06:12):
employers to provide 30 days' notice before using any automated decision systems and mandate human oversight of employment decisions.
Over 25 states introduced similar legislation in 2025.
For workers in Connecticut and New York, the current situation is particularly frustrating.
Connecticut saw a bill fail that would have protected employees

(06:36):
and limited electronic monitoring by employers.
While New York City has protections, New York State has yet to pass comprehensive AI employment protections beyond those affecting state agencies.
While much attention focuses on AI in hiring, the technology is being used throughout the employment relationship, often without workers' knowledge or consent.

(06:59):
AI systems are increasingly used to monitor employee productivity, track keystrokes, analyze work patterns, and even predict which employees are likely to quit.
These tools raise profound privacy concerns.
AI systems often require access to employee communications, performance records, and personal information, and companies may

(07:22):
unknowingly cross legal boundaries that could result in privacy violations or breach-of-employment-agreement lawsuits.
Illinois's Biometric Information Privacy Act, BIPA, has been particularly impactful.
Companies have faced multimillion-dollar settlements for BIPA violations related to AI systems that analyze employee

(07:44):
facial features, voice patterns, and other biometric identifiers without prior consent.
Some proposed legislation would address AI-driven workplace surveillance.
California's AB 1221 and AB 1331 would require transparency and limit monitoring during off-duty hours or in private spaces like

(08:06):
a bathroom.
But in most states, employers have broad latitude to monitor workers using AI tools, often without their knowledge.
As I have said before, employers are little private governments, and they can do whatever they please.
And really, there's not much the state and federal governments can do unless it's a flagrant error or a violation.

(08:31):
The Stop Spying Bosses Act, introduced in Congress, would prohibit electronic surveillance for certain purposes, including monitoring employees' health, keeping tabs on off-duty workers, and interfering with union organizing.
However, this legislation has not yet been enacted into law.
AI tools aren't just screening job applicants; they're making

(08:55):
recommendations about who should be promoted, who should be disciplined, and who should be laid off.
And because machine learning systems become more entrenched in their biases over time, discriminatory patterns can become a vicious cycle.
The more AI makes biased decisions, the more that bias becomes embedded in the training data for the next generation of

(09:16):
AI tools.
Employee privacy rights don't disappear simply because an employer is using AI technology.
Under both federal and state employment laws, employers have an obligation to protect employee information and notify workers about monitoring or data collection practices.
This is the old question: is your employer recording you and giving you

(09:40):
notice of it?
Some states require that employers give notice, so the law is relatively new in that respect.
AI notice requirements, obviously, are separate and different in each state.
Many jurisdictions require explicit employee consent before collecting or processing personal data for AI training

(10:02):
purposes.
Simply updating the employee handbook may not be sufficient.
Specific agreements addressing AI data use may be required.
However, workers often face a coercive choice: consent to extensive AI monitoring and data collection, or lose your job.

(10:22):
The practical reality is stark.
AI systems learn from the data they are fed.
If that data includes your communications, performance records, and personal information, your employer may be using your private information in ways you never imagined, and potentially in violation of your privacy rights.
If you're concerned about AI affecting your employment, here's what you need to understand.

(10:45):
Discrimination based on race, sex, religion, national origin, age, disability, or genetic information is illegal, whether the discriminatory decision is made by a person or an algorithm.
Retaliation for complaining about discrimination is also illegal.
So know your rights.
Ask questions.
You have the right to know if AI tools are being used

(11:08):
to make employment decisions about you.
While not all states require disclosure, asking the question puts employers on notice that you're paying attention.
New York City employers, for example, must provide notice at least 10 business days before using an automated employment decision tool.
Document everything.
If you suspect AI discrimination, document the

(11:30):
circumstances that you find.

(11:58):
Employers should provide alternatives to AI tools when necessary.
Be aware of data privacy.
Understand what employee data your employer collects and how it's used.
In some states, you have rights regarding your personal information.
Illinois workers, in particular, have strong protection under
(12:19):
BIPA, as we discussed before, for biometric data.
Don't assume the decision is final.
Just because an AI rejected your application or recommended disciplinary action doesn't mean that decision was correct or legal.
Automated tools make mistakes, and they can be challenged.
And as I talked about in the past, there's the decision involving an

(12:41):
employee who sued Workday under that same premise.
The future of work is here, and it's increasingly automated, but workers still have rights.
The fact that an employer is using sophisticated technology doesn't give them permission to discriminate, violate policy, or

(13:01):
ignore employment laws that have protected workers for decades.
As legislatures continue to grapple with how to regulate AI in employment, the fundamental legal principles remain unchanged.
Employers cannot discriminate based on protected characteristics.
They cannot retaliate against workers who assert their rights,

(13:22):
and they must respect employee privacy within the bounds of applicable law.
The AFL-CIO put it well in supporting proposed federal legislation: “Working people must have a voice in the creation, implementation, and regulation of technology.”
That voice includes understanding when your rights are being violated and taking action when they are.

(13:45):
The paradox identified by Freud in his quote above about humans becoming prosthetic gods is nowhere more evident than in the realm of AI.
While the technology indeed gives humans godlike powers, Freud also noted that these technologies are processes that were not naturally grown and therefore can cause problems for

(14:07):
the human condition.
Freud questioned whether these tools truly lead to happiness, even as they increase human power.
In the American workplace, we all have to grapple with this paradox as AI becomes increasingly common in everyday life.
If you believe you've been discriminated against by an AI hiring tool, unfairly monitored by an automated surveillance

(14:28):
system, or subjected to biased AI-driven employment decisions, you don't have to accept it.
The law is evolving rapidly, but your fundamental rights as a worker remain protected.
The employment attorneys at Carey & Associates understand both technology and the law.
We've been following these issues closely and are prepared to help workers navigate the new frontier of employment law,

(14:49):
which it certainly is.
Whether you're facing discrimination in hiring, unfair AI-driven performance evaluations, or privacy violations through workplace surveillance, we can evaluate your situation and advise you on your legal options.
Hope you enjoyed this episode, and talk to you soon.