November 12, 2024 65 mins

This conversation features an interview with Hilke Schellmann, author of "The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, Fired, and Why We Need to Fight Back Now." The host, Nola Simon, shares her personal experiences and concerns about AI in hiring processes, which led her to Schellmann's work.

Key points discussed include:

  1. The increasing use of AI in hiring processes, especially for high-turnover positions.
  2. Potential biases and inaccuracies in AI hiring tools, such as:
    • Favoring certain names or keywords unrelated to job performance (illustrated in the sketch after this list)
    • Misinterpreting data and making incorrect inferences
    • Potentially replicating existing workforce inequities
  3. Lack of transparency and oversight in AI hiring systems, with many companies unaware of how their tools actually make decisions.
  4. The need for thorough testing and scrutiny of AI hiring tools to ensure fairness and effectiveness.
  5. Concerns about how AI might disadvantage certain groups, including immigrants, non-native English speakers, and those with speech differences.
  6. The tension between efficiency in hiring processes and finding the most qualified candidates.
  7. The importance of accountability and responsible use of AI in hiring practices.
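
The episode's examples of tools latching onto irrelevant signals (a screener that rewarded the name "Thomas," another that rewarded "baseball" over "softball") usually trace back to naive statistics over the current workforce's resumes. Here is a minimal sketch of that failure mode, using invented toy data rather than any vendor's actual system:

```python
from collections import Counter

# Toy "training data": resumes of people currently in the role (labeled
# successful) versus other applicants. All names and words are invented.
successful = [
    "thomas python sql customer service baseball",
    "thomas data analysis excel baseball",
    "maria python machine learning sql",
]
other = [
    "aisha python sql statistics softball",
    "chen data analysis excel softball",
    "priya customer service communication",
]

def token_rates(docs):
    """Fraction of documents in which each token appears at least once."""
    counts = Counter(tok for doc in docs for tok in set(doc.split()))
    return {tok: counts[tok] / len(docs) for tok in counts}

pos, neg = token_rates(successful), token_rates(other)

# Score each token by how much more often it appears among "successful"
# resumes. Real screeners use fancier statistics, but the failure mode is
# the same: a first name and a hobby float to the top even though they say
# nothing about ability to do the job.
scores = {tok: pos.get(tok, 0.0) - neg.get(tok, 0.0) for tok in set(pos) | set(neg)}
for tok, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{tok:15s} {score:+.2f}")
```

Run as-is, the top "predictors" are a first name and a hobby, which is exactly the kind of spurious keyword auditors describe finding in real resume parsers.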

Key Questions Raised:

- How accurate and fair are AI hiring tools really?
- What data are these systems using to make decisions?
- How can job seekers know if AI is being used to evaluate them?
- Are companies doing enough due diligence on the AI tools they use?
- How can we ensure AI doesn't perpetuate existing biases in hiring?

Action Steps for Employers:

1. Thoroughly test any AI hiring tools before implementation
2. Regularly audit AI systems for biases and inaccuracies (a minimal example of such a check follows this list)
3. Maintain human oversight and don't rely solely on AI rankings
4. Prioritize finding qualified candidates over speed of hiring
5. Be transparent with candidates about use of AI in hiring process
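
On the auditing point (step 2 above), one common first-pass check, which Schellmann discusses later in the episode, is the EEOC's four-fifths rule: compare each group's selection rate to the highest group's rate. Below is a minimal sketch with made-up numbers; as she notes, this kind of audit misses intersectional effects (for example, Black women versus white men):

```python
# Four-fifths (80%) rule check on selection rates by group.
# The applicant and hire counts below are invented for illustration.
groups = {
    "group_a": {"applicants": 200, "hired": 40},   # 20% selection rate
    "group_b": {"applicants": 150, "hired": 18},   # 12% selection rate
}

rates = {g: d["hired"] / d["applicants"] for g, d in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "ok" if impact_ratio >= 0.8 else "below 4/5 of the top rate: investigate"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

A ratio below 0.8 is treated as a signal worth investigating, not proof of discrimination.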

Action Steps for Job Seekers:

1. Be aware that AI may be used to evaluate your application
2. Focus on clearly communicating relevant skills and experience
3. Consider how AI might interpret information on your resume
4. Prepare for potential AI-powered video interviews
5. Advocate for transparency in hiring processes

Key Takeaways:

- AI hiring tools often have hidden biases and flaws
- More scrutiny and testing of these systems is urgently needed
- Job seekers have little visibility into how they're being evaluated
- Companies need to balance efficiency with fairness and accuracy
- Human oversight remains crucial in hiring processes

Hilke Schellmann is an Emmy Award-winning investigative reporter and assistant professor of journalism at New York University.

As a contributor to The Wall Street Journal and The Guardian, Schellmann writes about holding artificial intelligence (AI) accountable. In her book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, And Why We Need To Fight Back (Hachette), she investigates the rise of AI in the world of work. Drawing on exclusive information from whistleblowers, internal documents and real‑world tests, Schellmann discovers that many of the algorithms making high‑stakes decisions are biased, racist, and do more harm than good. 

Her four-part investigative podcast and print series on AI and hiring for MIT Technology Review was a finalist for a Webby Award.

Her documentary Outlawed in Pakistan, which played at Sundance and aired on PBS FRONTLINE, was recognized with an Emmy, an Overseas Press Club Award, and a Cinema for Peace Award, among others. In her investigation into student loans for VICE on HBO, she uncovered how a spigot of easy money from the federal government is driving up the cost of higher education in the U.S. and is even threatening the country’s


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.

Nola Simon (00:24):
Thank you so much for joining me.
I'm Nola Simon.
I'm your host of the Hybrid Remote Center of Excellence, and joining
me today is Hilke Schellmann.
Did I pronounce your last name right?
Yes.
So it's

Hilke Schellmann (00:33):
Hilke Schellmann.
Hilke.
Yeah, it's a German first name.
Very hard.

Nola Simon (00:39):
Sorry, my dad is actually German, but I don't actually speak German.

Hilke Schellmann (00:41):
The funny thing is Germans don't know the name either.
So they're often like Heike, Zilke, what?
So it is, it is an enigma for anyone I encounter.

Nola Simon (00:51):
Okay that's fine.
I'm glad I asked.
So thank you.
She's the author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, Fired, and Why We Need to Fight Back Now.
And so it was actually interesting how I came across your book, and I
want to tell the audience a little bit about how I became aware of your work.
And so it started, honestly, I started on my own noticing like AI

(01:14):
being a trend back when I started my podcast back in like December of 2021.
And I interviewed a guy from Eightfold AI and he told me all about the way
they were using it in hiring and firing.
But also he was telling me about how they can identify transferable skills.
And I found this very interesting.
They have contracts with government to really identify transferable skills that

(01:37):
existed in long-term unemployed people,
with the goal of actually getting them back to work.
And I'm like, wow, that's fascinating.
I never considered that was a possibility.
And he came up with this example that, if you have somebody who has
gone to university, they have a math degree, but they have always
worked in customer service, you could literally get them into data analysis,

(02:00):
because the skill sets are really the same, but data analysis is
really more future focused and also pays a lot more, right?
So people are sitting on skill sets that are extremely valuable.
They don't even realize what the value is of those skill sets.
And that really appealed to me because I have a degree in math and I spent

(02:21):
20 years working in customer service.
Now that's how it started.
So then I also was aware of a company called Plum who worked with Scotiabank
to actually replace their resumes.
And so basically nobody submits a resume for Scotiabank in Canada.
They use this AI profile called Plum.
Now, I ran a personal profile for myself, and it came up with all kinds

(02:46):
of interesting things, except one of the lines was, basically, you're better
doing work that doesn't really take any initiative, that it's repetitive,
and you need structure, and you need to be told what to do, basically.
And I'm like, Yeah, that's not me.
Thank you.
I'm working for myself.
And my podcast, I've got, I'mnearing a hundred episodes.
So it's yeah, nobody's beentelling me to do my podcast.

(03:09):
Thank you very much.
So I wanted, but what I wanted to know, and why I had approached the CEO of Plum,
was, what does AI actually know about me that is pulling that result?
Because if I'm running my own profile and I notice an inaccuracy
in it, so what's causing it?
Where is it pulling from?
How can I influence that result so that I'm going to draw something

(03:33):
that's more accurate, but alsowhat are the repercussions of that?
inaccuracy, right?
And how do you get it removed?
How do you get it fixed?
And how do you do the due diligence ondoing like on any of that information?
Because if I applied for a job andthat came up and the job is about
innovation and, taking initiativeand, every job that I'd be interested

(03:58):
in, and that's what's coming out ofthat profile, that's gonna shoot you
dead in the water right there, right?
Yeah.
And then I could not getan answer from the company.
She didn't initially respond to mebecause I'm like, I want to interview you.
And then she goes to me and she evenactually walked by me on a stage.
I had told her that I would be there andshe walked by me and wouldn't talk to me.
So it's okay.

(04:18):
So I, I've given up on, on that, butI was fascinated with reading your
book because you focused on both ofthose companies and I was like, Oh,
somebody else noticed this and putit together and put it into the book.
And that's where I knew I reallywanted to interview but we started,
there's one more piece of the story.

(04:39):
I didn't, I'd introduced you tosomebody else that I had as a guest on
my podcast, and that's Swetha Redney.
She's a career coach up in Sudbury.
And I had noticed Swetha trying to get.
Information and trying to get media toactually interview her about concerns
that she had about AI and immigrants.

(05:00):
If you're, if English is not your first language, how does
that influence an AI interview?
These one-way video interviews, do you have accommodations, right?
If you have a stutter, if you have a lisp, all of these things
factor into how you perform.
And personally, again, my own personal story going back, there's
something about my voice that automated systems do not like.

(05:22):
So Siri and Alexa hate me.
My kids used to razz me because they could get Alexa to talk and
Alexa wouldn't respond to me.
So they would harass me using Alexa.
I've turned Siri off because it so reliably does not understand my voice
that it's more hassle than it's worth.
And one time I had a bad accident, they actually... sorry, I've

(05:43):
got music playing somewhere.
Can you just pause for a moment?
Yeah.
Hang on a second.

(06:14):
Oh my God.
It was an electric.
It started playing in my ear.
Now it heard you.
Finally.
Now it hears me.
This is why I hate it so much.
Because, literally, it was played likethe theme song from the Mickey Mouse Club.
I'm like, did I say anything atall about the Mickey Mouse Club?

Hilke Schellmann (06:33):
Wait till it starts again.

Nola Simon (06:34):
Oh, God.
Apparently that's dangerous to talk about.
Yeah, so I used to have to, I was checking the status because
I was waiting for my license to be reinstated after the accident.
And I had to wake my eight year old up to say the letter S,
because the system would not recognize me saying the letter S.
I literally have concerns about, if I ever had to do one of those video interviews,

(06:55):
how would that actually affect my voice?
English is my first language.
I don't have a concern otherwise.
But there's something about how automated systems recognize my voice that I don't
trust that they're going to get it right.
So I have a lot of concerns.

Hilke Schellmann (07:11):
So we could totally test that in different ways.
Yeah, exactly.
That's right.
We usually have people who havemaybe a dialect or an accent, right?
Or have a speech disabilitythat we mostly concerned about.
I've never had somebody who isa native English speaker who
has automatic systems that.
Don't want to listen to you.

(07:32):
I know

Nola Simon (07:32):
it's so aggravating and so annoying.
And I actually I was working for a bankbefore my job was restructured and they
had an automated system as well, too.
And I had run tests for themdemonstrating like how badly you
couldn't recognize me and they didn'tknow how to, Handle that right?
And again, I volunteered to be a subjectfor them so they could test it out.

(07:53):
But I got restructured beforethat project could go anywhere.
But yeah, no, I find that fascinatingto me because again, if this can
happen to me, then how does it happento, how does it affect people who
have bigger barriers than I face?
Yeah.
And that's where it's the only thingthat I know how to do anything about is
to ask questions and to talk about it.

(08:16):
And so that's why I'm very gratefulto have you on the podcast so we can
amplify this conversation because I dothink that it's extremely important.
Because it's only the tip of theiceberg where we are right now.
Yeah.
So before we really get into it, you're a professor at New York
University and you're in the journalism

Hilke Schellmann (08:33):
department, teaching journalism.
I'm in, I'm a professor in thejournalism department, teaching
journalism, what I love.
So I have the dream job.
I get to teach what I love to do.
I get to do journalism.
So I'm I feel very blessed with that job.

Nola Simon (08:47):
Yeah.
And the AI really fell in yourlap too, when you were actually
in a cab and this guy told youhe had an interview with a robot.
And that started, that questionreally just started opening up the
whole Pandora's box for you, right?

Hilke Schellmann (09:02):
Yeah, totally.
We always wonder like where dojournalists get their ideas from and
this was literally like me in the backof a Lyft ride talking to the driver
and he was like, I've had a weirdday that doesn't ever happen to me.
It's usually they say I'm fine.
How are you?
And he's yeah, I've had a weird day.
And I was like, Oh, yeah, what happened?
He's I was interviewedby a robot for a job.
And I was like, robot?

(09:22):
He's yeah, I got a call from a robot.
So probably like a pre recorded voice.
And he had applied for a baggagehandler position at at a local airport.
And he was just really just almostspeechless about the process, because
he was like, that has never happenedso weird and I took note of that and
was like, Oh, I've never heard of that.
And then, I went to an AI conferencea few weeks later, and somebody talked

(09:43):
about AI and algorithms being used tocheck people's calendars and absences.
And she also did mention, oh, they use it for hiring.
She told me about the company HireVue, which is one
of the largest providers in this space, and it just started this.
And when I started looking into it, I waslike I didn't know how ubiquitous it is.
And I think talking to yourpoint of feeling like, whoa,
like if, It's inaccurate for me.

(10:04):
What is it for peoplewho have way less agency?
And I think that in introduction ofalgorithms and AI and hiring does shift
the power balance, it's always beenmore power with the employers, right?
Because make the final decision,of course, but now it feels like
that, you could like Think aboutwhat am I put in my application?
Who do I list as a reference?
So you were a little bit of thecurator of your passport, maybe.

(10:26):
But now companies assess you with AIor without even without knowing, right?
Like you may be thinking you're doinga video interview, little did you
know that they use AI on you or yousend in your application material.
And if you send it through like LinkedInor any of the large job platforms,
they all use AI, no recruiter wants tohave just a folder with 2000 applicants

(10:47):
Transcribed Their resumes, right?
There's all kinds of ranking inthe way this ranking happens.
It's like really curious, right?
Like, how do we know that the personranked at number one is better than
the person ranked at 100 and thoseare like feel interesting questions.
And I started looking into that.
And at the time, like a lot ofthe I was first used on people.

(11:07):
What the industry calls likehigh turnover high volume job.
So we hire a lot of people and youhave a lot of turnover often for like
retail positions, fast food service.
Yeah, customer service call centershave a really high turnover too.
So we see, in one of the in one ofthe tests that we did the company
is actually for a call center.
So the video is like somebody, youhave this irate person on the phone.

(11:29):
How are you going to calm themdown with one of the tests, right?
So we see that.
And I would say that folks who are looking for those kinds of jobs probably have
the least power in the workplace and the least time, and often don't make
a living wage or anywhere near a living wage.
And they're being subjected to this first, whereas CEOs and other sort
of leadership positions very rarely get subjected to these kinds of tools.

(11:52):
We've seen it like climbed a little bit,I've seen it used for flight attendants,
for teachers now I've seen it being usedfor a lot of recent graduates because a
lot of people feel hiring managers feellike, Oh, there's so many people, they
all have great bachelor's degree, butthey don't have a lot of work history.
So they all look alike.
So let's do a skills capability assessmentbecause we don't know a lot about them,

(12:14):
so we see it used mostly for that.
And we see it, I think what happenedwith the dawn of job platforms and
now with generative AI companiesare flooded by applications.
And I think that is,there's a real need there.
IBM said they get about5 million applications.
Google get about 3 millionapplications per year.
Goldman Sachs said for theirsummer internship alone, they

(12:35):
got over 220,000 applications.
There's not enough humansto go through all of them.
And we know that humans are biased too.
So a lot of companies turn totechnology because they, they're
drowning under applications.
We now more than ever, they're drowningbecause, a lot of job applicants use
generative AI to generate a cover letter,to generate Resume and now we have

(12:56):
a I that can actually apply for you.
So you don't even have to do anything.

Nola Simon (13:00):
The battle of who has the best AI.

Hilke Schellmann (13:03):
it is AI versus AI out there and, at one point, you might
have to ask okay, what's still real here?
What are we talking about?
What are the capabilities?
How do we actually hire the best people?
And I don't think wenecessarily have an answer there

Nola Simon (13:16):
well, and how many people are actually avoiding that whole
process and just working the networkand, That then becomes problematic too,
because then it's like, who, and interms of equity and, equality, that's,
Automatically problematic because, peopletend to have closed networks, right?

(13:38):
Like they, they like peoplewho are like them, right?
So if you're building a process, that'sautomatically going to be biased,
then is it's example of systemsworking as systems are designed
to work and that's really what theoutcome is that everybody wants.

(13:58):
Yeah, we don't get a diverse workforce.

Hilke Schellmann (14:01):
Yeah.
That's a

Nola Simon (14:02):
question that you have to ask, right?

Hilke Schellmann (14:03):
Yeah.
And especially like whenyou build tools based on the
current workforce that you have.
Yeah.
Most companies are not diverse, right?
So the the problem is thatyou might replicate, right?
The historical inequities that you'vebuilt into your workforce for hiring
more men or more white people, right?
Over time, right?
And then the tools pick up.
Their facial expressions, the wordsthat they use, their manners, their

(14:26):
way of behaving and game playingmight just hire more white men
which I don't think is anyone'sintention, but is a likely outcome.

Nola Simon (14:36):
Because they the systems have access to your hierarchy
through performance reviews oranything that's written, right?
You have an example in the book about,this AI had learned that the name Thomas.
It was, yeah, it was getting more points.
My hypothesis is that this was built.
This is a resume screener.
And if the resume happened to include thename, Thomas, it was rated more, it was,

Hilke Schellmann (14:55):
it was, yeah, it was getting more points.
That my hypothesis is that this was built.
This was a resume screener.
that it was probably built with a bunchof resumes of people who are currently
successful in the role, which often meansthe people who are doing the job now.

Nola Simon (15:09):
Yeah.

Hilke Schellmann (15:09):
So you give the tool like thousand resumes of the
people you have employed right now,or you have been recently employed
the last six months or so made itto the last round of interviews.
And you say those peopleare the successful people.
And then the tool doesa statistical analysis.
And apparently in this Statisticalanalysis, the word Thomas came up and
became statistically significant, sothen the tool gave people who had the

(15:31):
word Thomas on their resume more points.
So obviously, any human knowsthat Thomas, the word Thomas,
doesn't qualify you for anything.
The machine obviously doesn't havea conscience and doesn't have any
ethical ideas and I should also say,Humans are also problematic, right?
We know from like social sciencethat like we send resumes with more
Caucasian sounding names versusAfrican American sounding names, right?

(15:54):
There's a lot of human bias too,that we know that people get fewer
callbacks who have African Americanfirst sounding names, but like a machine
was supposed to be objective, right?
That's what.
These AI vendors sell thatit like democratizes hiring.
It has no bias.
It is absolutely fair.
And that has not been the case.
And I was really surprised when Italked to industrial organizational

(16:16):
psychologists who said like all of thetools that you looked at had problems.
And another employment lawyer told me, like, every fourth tool he looked at,
resume parsers, had problematic keywords.
He found in one of them,the word Africa and African
American were used as keywords.
Another tool.
The word baseball.
If you had the word baseball onyour resume, you got more points.

(16:36):
If you had the word softball on your resume, you got fewer
points, which is

Nola Simon (16:40):
which is an excellent reason that you don't want to put your
extracurricular activities on your resume.
And I don't

Hilke Schellmann (16:47):
know what happens for the people who
don't have any hobbies, right?
We don't know if they getpenalized or not, right?
But that's the problem.
A tool will look at everything that'son a resume, doesn't know that maybe
Python, a programming language, is a moreimportant skill for a software developer
job than hobbies, which really shouldbe, not be part of the decision making
at all, because hobbies say more about, like, your socioeconomic background.

(17:10):
If you put snowboarding or skiing,probably means in some societies
that you have more money than othersthat you come from a privileged
background because you can afford it.
That means more that tells you moreabout your background than your actual
skills and capabilities to do the job.
Unless it's a skiing instructor,they probably need skiing skills.
But other than that it doesn'thave any bearing on the job.

(17:31):
And we see this again and again, and in alot of these tools and that does worry me.
And we have so little oversight wherein this case is the vendors themselves
didn't find the problem, right?
It was like when a vendortalked and talked to an employer
and, about using the system.
And then some of the employersdo their due diligence.
They bring in outside counsel,they start the system, and

(17:54):
then they find the problem.
And a lot of times companies usedeep neural networks, which you don't
need to know what that means, but itmeans that you have training data.
to build the model, and you can look atthe results, but you don't necessarily
know what happens inside the machine.
We can ask the machine whatexactly happened, but most
companies don't do that.
So a lot of AI vendors don't knowwhat is the machine inferring upon?

(18:17):
And what we see a lot of machine learning,there's been like a famous example
where I think somebody built a tool thatwas supposed to understand what's the
difference between huskies and wolves.
and fed a lot of pictures in the model.
And, the model miraculously learnedwhat's a husky and a wolf because if
you send in a new photo it was like,oh, husky, yes, until it didn't work.
And then the the folks asked thetool what did you infer upon?

(18:40):
Was it the nose?
Was it like the fur ofthe husky or the wolf?
How did you know the difference?
And the tool highlighted the snowin the background of the huskies.

Nola Simon (18:50):
Because it turns out

Hilke Schellmann (18:50):
the Huskies had like snow in the
background, the Wolves did not.
So the tool that actually didn'tknow what's the difference
between Husky and Wolves, what?
The Wolves didn't have snow?
Or the Huskies didn't have snow?
I don't remember, actually.
I would have expectedWolves to have more snow.
One of, one of them didn't have snow.
Yeah.
And that was and that'swhat the tool learned.

(19:11):
And so that's what wecall, and this is actually.
More prevalent than you think inthese systems, like we think it
does X, but when we tested, welearned that it doesn't do anything.
So there's another example with The COVIDcough, like during the pandemic, there
were a lot of companies that tried tobuild AI systems where you like cough into
the phone to your doctor, the tool willtell you, do you have COVID or not, right?

(19:33):
And that never worked out.
And what we saw in even like academicliterature where people said, Oh, we
found the COVID cough when you tested it.
What it actually found was that ithad the people who were, had the
COVID cough and most often in the ICU.
So you heard the beeping of themachines and the people who didn't
have COVID were in, whateverscenario where they were coughing.

(19:54):
So the tool had learned that COVID meansmachines beeping in the background.
So obviously it hadn't learned anything about COVID.
So we see this again andagain, it's a real problem.
And I'm
not too hopeful that the hiring space is spared
from this problem, because I found this problem now many times.
And that's just, as you said,the tip of the iceberg, right?

(20:14):
I haven't looked at all the tools.
I don't have access to all the tools.
So we need a lot more scrutiny.
We need a lot more testing.
So I hope that people in talentacquisition or in hiring will hear this.
Or hear me speak, like, please test these tools and do these, like,
super cheap tests that I do.
If I can speak German to a tool and stillget a six out of nine English proficiency

(20:35):
score that the tool probably doesn't work.
Do some kind of testing and to reallyunderstand what does this tool do?
And if it doesn't do what it'ssupposed to do, you really might want
to rethink using this for hiring.
It matters who gets a job.
It matters to job seekers.
I'm relying on making money to putfood on the table to have an apartment.
I'm nervous before a job interviewbecause it matters if I get the job.

(20:58):
I know for employers, often itfeels ah, so many candidates.
We reject them anyways, but it doesn't matter.
And I'm sure it mattered to thehiring manager and the talent
acquisition manager at one pointthat they did get the job, right?

Nola Simon (21:10):
And there's legal ramifications as well too.
So if you're using tools that havebuilt in discrimination, like you
said, like they're learning thingsthat aren't necessarily relevant.
An example would be like justuniversity related to like
social economic class, right?
If you're filtering out yourAI tools, filtering out,
Harvard or whatever university.

(21:31):
There's long time studies and proof thatshow that those aren't necessarily tied
to skills and ability, but those aredefinitely tied to socio economic class,

Hilke Schellmann (21:41):
right?
Yeah.
Yeah, and I feel especially with the resume parser there was one example where it
had highlighted, had learned that Syria and Canada are predictors of success.
So everyone else was fascinatedwith that one since I'm

Nola Simon (21:52):
from Canada.

Hilke Schellmann (21:55):
Exactly.
I was like, look, the Canadians, likethey're, moving forward at a faster clip.
But you know what this means forpeople from other places that
they would be potentially getdiscriminated against, which is
illegal in the United States, right?
Like they're illegal place.
Yeah.
I think what's a little bit problematic islike that the tools themselves obfuscate
some of the decision making, right?
Because we don't actually know that Canadaand Syria were indicators of success,

(22:19):
which we shouldn't be using, becauseunless somebody digs deep into the system,
we just see the results of the ranking.
And they look very convincing.
Even when I did tests, I know thatthese tests are bogus that I was like,
okay, there's no science behind them.
But when you still see your score,you're like 78%, blah, blah, blah.
It's really hard to not see that scoreand feel like, oh, that means I'm this.

(22:43):
You still put meaning towards it, eventhough, it's actually meaningless.
And I think that's.
What's really problematic, right?
As a hiring manager, you get this list of 1,000 people, and
you're going to call the first 20.
I'm not going to call person number 998, even though I have no idea how this
ranking was really put together, right?
I assume the machine, somebodydid their due diligence, and these

(23:06):
machines work and pick the best people.
But I think it was interesting from onesurvey that Joe Fuller did at Harvard.
We know that he surveyed like2200 or so people in leadership
when their company use an AI tool.
Almost 90 percent said, Oh, we knowthat they reject qualified candidates.
So we know that they don't actuallyfulfill that promise that they

(23:28):
find the most qualified candidate.
But we still use it because I thinkthe tools make hiring so much more
efficient, so much easier, cut downthe number of days that you're hiring.
And that is something that companiescare about, which I think is
probably the wrong incentive.
Who cares if you foundsomebody in two days?
Are they the most qualified?
Can they actually do the job the best?

(23:48):
If it takes three weeks maybe that's okay.
That isn't necessarily theincentive that is being used,

Nola Simon (23:54):
right?
Yeah, exactly.
That's right.
And it comes down totrust and accountability.
Really?
Are you walking the talk?
And are you beingresponsible for what it is?
You're doing and that seemsto be lacking in companies
that are actually using these.
These AI systems.
And again, I don't know that'stheir intention, but it is.
I don't think it's

Hilke Schellmann (24:13):
at all their intention.
Yeah, this is like a tale of likegood intentions, maybe gone bad or
having unintended consequences, right?
I don't think actually anyonebuilding these systems or using
these systems has any ill intent.
I think everyone.
Really does probably want todiversify the workforce, want to
give more people a chance but it'snot really fully understanding how

(24:34):
are we actually using this system?
How do these systems make decisions?
Yeah, because they are literally blackboxes, like that, what we always say,
and I think that's, really problematic.
And I think, a lot ofcompanies come into this.
They want to have they want to save money.
They want to have moreefficient hiring technology.
So they don't want to necessarilynow hire someone to oversee these

(24:55):
systems and test them continuously,like they're easier and faster.
So they're not going to maybe applylike a lot of scrutiny to this.
They're just like, okay, it works.
It cuts down hiring.
We need to employ fewer people.
That's all I want.
And I

Nola Simon (25:09):
mean, it's basically software as a service, right?
So like the vendors could be updatingthe software as they go, right?
So they really should keep on top of that.
Just from a compliance point ofview, you would think that you'd want
to make sure that you're reviewingthe terms and conditions every time
there's an update to the software.
I don't understand what therisk people are like, where's

(25:30):
the cybersecurity and the riskassessment that goes along with this.

Hilke Schellmann (25:35):
Yeah.
And I'm sometimes wondering aboutthis too, because, like we reported
very early on in 2018, after I met theLyft driver and the next year I was
working for the Wall Street Journal.
And we did this like a 10 minute,as a video investigation, because we
felt like, oh, this is, it's aboutvideo interviews mostly and other
AI and hiring, but it felt like,oh, we want to show this in video.
And we looked at HireVue and HireVue atthe time still use like facial expression

(25:59):
analysis, intonation of voice analysis.
And it also transcribedthe words that you used.
And, when I started digging deeper,at first I was like so surprised.
I was like, oh, I didn't know thatfacial expression and job interviews
are predictive of success in a job.
What Interesting way to measure things.
And then when I dug deeper andI talked to expert, they're
like, there's no science here.

(26:20):
I was like.
Wait, what?
There's no science here.
They're like, yeah, wedon't have any science.
What facial expressions you have to havein a job interview that predicts how
good you should be at any given job.
And I was like, Oh but they're using it.
And then I talked to folks, there are the affective computing companies
that put out the tools.
They also say we havethe same six emotions.

(26:40):
If you're smiling, you're happy.
If you're frowning, you're angry.
And when I talked to psychologists,they said Oh yeah, that's 60 years ago.
We thought that, but we knownow that like different cultural
backgrounds, different societies,we don't have the same emotions.
Like we are likely to havesimilar emotions, but not always.
And so this is notpredictive at all either.

(27:00):
And I was like, Oh, I was like, thisis really problematic, but large
fortune 500 companies were using thetechnology without questioning it.
From what I know.
So I was really surprised that I was like,I do wonder, like, where are those people?
Where are the compliance people?
I don't know if they reach into that
hiring space or they feel like becausethere is like almost no way for

(27:25):
people to get access to these systems.
If the person who builds the systemdoesn't know what the tool infers
upon, like the people using it,the companies don't know it, job
seekers are never going to find out.
So maybe the risk is very low.

Nola Simon (27:37):
Yeah.
There's no transparency.
Like even as a job seeker, it's onlybecause of press releases and whatnot.
Like I knew that Scotiabank wasusing Plum, for example, and
that's why I was researching Plum.
But most of the time they don't identifywho's providing the documentation.
the services, right?
Yeah,

Hilke Schellmann (27:55):
I think after, like some sort of scandals, right?
Amazon tried to build an algorithmictool that scans resumes, and they built
a tool that was downweighing women.
If you had the word woman or women onyour resume, it would downweigh you.
So I think, They got a lot of backlashfor that as I think since that companies
have also been very careful to publicizeanything bad about these tools, right?

(28:18):
Like I have talked to, like afterthe book came out, I got lucky.
Before too, but like really after thebook came out, I had talked to a lot
of CHROs, like chief human resourceofficers, chief people officers,
and Oh yeah, we use that tool.
We have the same questions andproblems that you uncovered.
And then we like stopped using it.
And I was like that's great.
I'm glad to hear it.
Did you tell anyone?

(28:39):
Because Company X is still using it.
Could you publicly say that youfound out these tools didn't work?
And they're like, No, we can't publiclyacknowledge that because they're
afraid of class action lawsuits.
They're afraid of press thatthey use the tool that's flawed.
You can understand it.
But the problem is we don't reallyhave much progress then, right?
No one calls out the bad actors, or notthe bad actors, but like when tools don't

(29:03):
work, they don't call out the companies.
And, I think the vendors are unlikelyto dig deep and really check out
their systems and change them.
They do want to market and sell, right?
And there's still companies thatmerely believe them that this works.
So yeah.

Nola Simon (29:19):
So what's the solution?
Like you've been on record sayingthat, regulation is needed.
Government involvement is needed.

Hilke Schellmann (29:27):
Yeah.
We just did a recent election inthis country, so I'm not sure.
under a Trump administration, therewill be more regulation, right?
In fact, we've seen a lot of deregulationunder the last Trump administration.
So I'm not very hopeful.
We have seen some forward movement under the Biden administration.
There hasn't been an AI law, but there's been hope, like we

(29:48):
should really look into this.
And there has been like progress, but we really haven't seen an agency stepping in,
starting to approve these tools or test them or anything.
I think that is, there isn't a wholelot of political will, I think, to
do that, and maybe also not a lot of.
knowledge because it is tricky to, and Iwouldn't recommend that government agency

(30:09):
do this like little testing that I do.
I sometimes do this thing where bring aquestion to like computer scientists and
sociologists and we do larger Studies.
Yeah.
That's how we found the problem with the hallucinations in Whisper.
That's also how we found out that some AItools who take your personality from your
LinkedIn or your Twitter feed don't workand that you have multiple personalities

(30:32):
and even the same AI tool has differentthings finding out about your personality.
, if it looks to you at your LinkedInor Twitter, which is, against the
theory, like if you use personality,you would assume that it is a stable.
Thing, right?
Otherwise, if it changes every5 minutes, then you really
shouldn't use it for hiring.
So we had also

Nola Simon (30:49):
Social media is a filter to begin with, right?
Who you are on LinkedIn may be verydifferent from who you present yourself
to be on Twitter or threads or whatever.
I absolutely agree.
You adopt personas, orat least some people do.

Hilke Schellmann (31:02):
Yeah but I think the companies would say you
didn't write this to get a job.
Especially your tweets, sothere's something about you
that you're not hiding, right?
This idea to look underthe hood of a job seeker.
It is pretty prevalent.
I think the same idea with checkingyour facial expressions like you
get something up about yourselfthat you weren't even knowing about.
And we know from history ofcompanies analyzing like handwriting

(31:26):
and other things and hiringthat, there is no science there.
There is no science.
But we've seen this again andagain, because it does feel.
So intuitive, right?
It does feel intuitive that something inyour writing says something about you.
So I could analyze that and findthe real you that you can't hide.
But so we fall into that again.
Those are those are assumptions that,that, that worry me in this space.

(31:47):
Let me see.
Yeah.

Nola Simon (31:50):
But Europe isn't having the same sort of problem
because of the GDPR, right?
And the privacy?

Hilke Schellmann (31:57):
Yes and no.
I do think that Europe is like amore regulated place on earth, right?
Like we see much more regulationcoming out of the European Union.
And we see, they have, we callthose like omnibus legislation
because there are large laws.
So one is GDPR, the General Data Protection Regulation, which even the
UK adopted, although it's no longer in the European Union.

(32:19):
And I was, like, lucky to find who I call, like, patient zero, Martin Birch, who
had to take a Plum assessment, like a gamified personality test.
To really basically describe it: you
get some, I don't know, you do some Tetris-type games,
and you have to answer a couple of questions about
what you would do in the workplace if you encounter this or that, and
they come up with the personality and capability analysis of you.

(32:43):
So he was he had appliedfor a job in data.
And he was like, I was datascraping all day at my current job.
So he was like, I was really surprisedthat, I wasn't, asked to do an assessment
on my capability of data analysis anddata scraping, which I do in my job,
which I applied for at Bloomberg.
But I was asked to answer thesequestions about my personality and
puzzle Tetris and other things.

(33:05):
And he was also surprised thathe pretty much got an instant
within a day, a rejection.
He was like, Huh.
For a job that I'm already doing thatI applied for at this other company.
That's interesting.
So he was in a journalismadjacent job, right?
He was for a news organization.
He wasn't like a reporter,but he knew what reporters do.
So he thought about the GDPR.

(33:26):
He asked for his data.
And then he got the data and he scored really low, and then he found
out that Bloomberg had automatically rejected him, which, under some
other laws, is not legal; you have to disclose that if you use automatic decision-making,
and Bloomberg didn't disclose that.

(33:46):
So he started, he started likea inquiry, a legal inquiry.
And he settled with the company, but,it's one of the first people that I ever
heard from that like actually had thedata to look at and ask about it exactly.
And I think that happens so rarely.
And it doesn't happen veryoften in Europe, right?
Because a lot of Europeans don'teven know that they have this
right or they know how to get it.

(34:07):
And then when you get the data, it'salso really hard to really understand it.
Okay.
Capability or not, I'm, I don'tknow if it's right or not.
There's no authority to go toand say I don't think it's right.
Who would you call?
We see a little bit more and now we seethe EU AI Act, which actually outlaws
outright the emotion recognition.
So that kind of tools wouldnot, will no longer exist in

(34:28):
the European Union legally.
If you use it, it would be illegal.
And there's a high bar for hiring,which I'm really happy about because
I've been saying for a while now that I think hiring is high risk decision
making, like it is important to understand who gets hired, and we're not.
So it should be up there withlike other high risk things about
who gets a mortgage, who gets aloan, who goes to jail, like those

(34:49):
are or how long you go to jail.
Those are also high risk decisionmaking and hiring should be one of them.
So there is a little bit morescrutiny that companies have to face.
We don't exactly know how it's going to be interpreted in every country and
every like sort of societal system.
There's like general guidance.
How is this actuallygoing to be implemented?
It's going to be interesting.

(35:10):
So we're not 100 percent sure,but we see, a little bit of that.
We've seen promising things in California, but they've recently been turned down
by the governor there.
With the Trump administration,I'm not seeing that we'll
have a lot of regulation.
Maybe on the state level,but not on the federal level.
Probably not.
So we'll see what happens.
I do think there needs to be moreregulation, but I think actually, I don't

(35:30):
know if every regulation will catch
every problematic use case.
So I think that like companies needto speak out and push the vendors to
do better, or we need to have more not-for-profit initiatives to build
tools in the public interest, right?
I think as the barriers of buildingsome of these tools come down and
make it easier for a lot of people.

(35:52):
to do, the question is: could we, could civil society build tools in the public
interest that are better than someof the commercial companies put out
there and we can actually make publichow the tools were built, what the
assumptions are built in and all of thosethings that that we criticize that we
don't know from commercial suppliers.
That's all I do.

Nola Simon (36:12):
Are you familiar with Amy Webb?
She's a futurist.

Hilke Schellmann (36:14):
Yes.
Yes.
I've interviewed heractually for an article.
Oh, have

Nola Simon (36:17):
you?
Okay.
What do you think about her point?
Her point is that she thinksthat regulation is going to be
so challenging, especially to getit consistent across the globe.
And really it has to be a financialincentive for companies to really build it
ethically and Responsibly and if companiesthen hire the are used the AI produced

(36:41):
by that company, then they get a rewardfor choosing the ethical AI company.
Do you think that's viable?

Hilke Schellmann (36:49):
Yeah.
I don't know if it'sviable because I think.
I do agree that regulationcan't catch them all, right?
Like it's not really going to work out.
It might work for like the Europeanunion, but that doesn't necessarily
transfer to all these other jurisdictions.
So it is really problematicand it can't catch them all.
So if we rely on industry itself, likemaybe a system where we reward, or
we have some sort of certification werequire we publicly shame companies

(37:14):
who don't use certification.
But the question is like.
does the certification work?
And I would argue that we've seenin some industries that we have
like fig leaf certifications.
We've seen this about diamonds, where diamonds come from.
We see this, we callit greenwashing, right?
That companies like buy a certificationthat they look like a green, organic,

(37:34):
super environmentally friendlycompany, but then they dump their
trash in the landfill and theyactually don't recycle, whatever
their promise it's actually not true.
They're just buying.
The certification.
So I am worried about that too, that wethen have another level of that looks
like this is certified and this is agood algorithm, then no one looks at it

(37:55):
and knows is this actually good or not?
And the, I wish we had like clearstandards on how to judge algorithms.
They're being developed in the EuropeanUnion, but there's still a lot of
we don't know a lot of things here.
So I hope we will have standardsand maybe if we had a little
bit more transparency and demandaccountability and explainability.

(38:15):
A lot of times when I do some ofthese tests, like talking German to
the tool and then, I get a, I got asix out of nine English proficiency
and I talk to the developers.
Oh, before we publish, we always go back.

Nola Simon (38:25):
Yeah.

Hilke Schellmann (38:26):
Ask for comment and also maybe we made a mistake, right?
Like we want to know.
So we go back to the developers.
And I was just, I waslistening and listening.
I was like, wow.
He was like, it's maybe because Germanand English is in this 5D space.
The languages overlap.
And I was like, 5D, I was like,I'm really not understanding.
I was like, I can't follow.
I was like, can you just explain tome if you were in front of a judge,

(38:49):
how, why I was scored six out of nine?
Can you just explain how youwould explain in front of a judge?
And he couldn't.
And I was like, I, 5Ddoesn't mean anything to me.
It seems very complex.
Can you just explain it?
Like how the score came together.
And I think if people can't dothat tells us something like if you
can't explain why this person wasscored this way, we have a problem.

(39:13):
So I think we need much moreexplainability, transparency.
If he could get that through governmentregulators, that would be great if he
could have like benchmarks standards,we've now seen in New York, there is
a law that companies who use AI andhiring or algorithms and hiring they
do have to get an audit every year.
It's up to the companiesto actually decide how much

(39:34):
AI and algorithms they use.
So that's flawed.
And then the audits
are also unclear; they usually follow, a lot of people follow, the
four-fifths rule, which is actually already
a guideline by the Equal Employment Opportunity Commission in the US.
So it's like just auditingwhat's already being audited.
And we know that those kindsof audits are really flawed.

(39:55):
They just look at differentgenders, like male and females and
different sexes against each other.
They don't look at the intersectionof Black, Black women versus
Caucasian men, for example.
We know at those intersections where marginalized identities cross over, that's
where I see most of the problems in the system, because marginalized groups
are underrepresented in the workforce and in the training data.

(40:19):
Then they become underrepresentedin the tools and the models.
And then the problem multiplies.
And so we don't look at that.
There's no mandate to look at that.
And we've actually, when somebodylooked at that, we found problems.
I'm hopeful ish.
That this will help.
A little bit.

Nola Simon (40:37):
Let's talk about the whisper network.
So you just published anarticle in the Associated Press.
I read it this morning.
And it's basically about howyou found problems in terms of a
transcription having a hallucination.
And honestly, this is 1 of theuse cases that I use AI for often
when I do podcast, I'll take thetranscript and I'll just put it into.

(40:58):
perplexity.
I prefer that.
And it spits out my shownotes and I read it briefly.
And usually it's I don'treally have a concern about it.
Also it's, that doesn'thave a lot of effect.
Couldn't tell you anybody who'sever called back to me saying you
wrote really great show notes.
So I don't spend a lot of time on it.
But Whisper is actually being used in hospitals and this is

(41:20):
where it can be very interesting in termsof transcripts for medical use that.
Are then actually they'redeleting the audio.
And so the only record thatremains is this transcript.
And you found in your test that there arehallucinations and the transcripts are
including text that actually was never

Hilke Schellmann (41:41):
said.
Not just ChatGPT, but this is a transcription tool, right?
Like a benign
tool that companies use every day, people use every day.
It's very ubiquitous.
So let me take one step back and tellyou how we got to this because it
actually relates to AI and hiring andit relates to you know that I work with
computer scientists and sociologists.
So I've looked into AI.

(42:03):
one way video interviews, right?
And even regulators have asked me, andI've wondered about this too what happens
with people who have a speech disability,or have a dialect, or have who have an
accent, exactly what we talked about.
If they use these systems, or arebeing asked to use these systems
the tool then takes the audioand transcribes it into text.
Is it fair?

(42:24):
To people with different dialectsor accent and speech disabilities.
So I was asked by regulators and othersand I'm like, why are you asking me?
I'm just a lone journalist.
But I was like I guess noone is looking into this.
So I talked to a computer scientistand a sociologist and I was like,
I think we should really test this.
So I asked the companieswhich tools do you use, right?

(42:44):
Because most companies in thespace don't build their own tools.
You buy one on the market becausethere's large companies that use those.
So we took those tools and then, aswe were doing, we found a a database
with audio recordings of people whohave a speech disability and people
who don't have a speech disability.
And we had about, 13, 000recordings as a whole.

(43:04):
And we ran it through thesetranscription software.
And as we were starting to do theexperiment, we also heard about open
AI snooze, new whisper tool, whicheveryone was like raving about.
And millions of people werealready using it's open source.
It's very cheap.
It has a very high accuracy ratethat everyone was raving about.
It can also use it can translate betweendifferent languages and be like, okay,

(43:25):
this becomes, seems to be like one ofthe market leaders, we should also.
And when we looked at the scatter plots,we did see that for Whisper, like a
lot of, there were very low error ratesand some, but there were some plots
that had like ginormous error rates.
And we were like, how do you geta thousand percent error rate?
Like everything must be wrong in thesetranscripts compared to the ground truth.

(43:47):
So we did a human
intelligence investigation and looked at these two because, you
know, we were like, did we upload a wrong clip, like, what happened here?
And when we looked into it we sawthat it like in some instances
just hallucinated whole storieslike made up whole paragraphs that
weren't there in the first place.
And when we did a close analysis, like, it also had racial commentary,

(44:10):
it, it had hallucinated a newmedication, like all these new
things that like, were never there.
A medication that didn't actually exist.
Yes, it was, wasn't it psychoactivated?
Yeah, it was a hyper, hyperactivated.
Antibiotics.
It also like hallucinated violenceand all kinds of things, because, it

(44:30):
learned from everything on the internet.
So like the stuff that we puton the internet and in books
and stuff, it learns from that.
But we hadn't seen this ina transcription software.
And when I started looking at GitHub,where a lot of like open source developers
congregate, there were just like somany entries of what is happening?
What is this tool doing?
I was like, Okay, we're not the only one.
So it wasn't like a problemthat only we encountered.

(44:51):
This is like much, much bigger.
And then I found this company called Nabla that does, we see this
a lot now, medical transcription,
so where doctor-patient conversations are being recorded.
And then a system transcribes it and summarizes it and
does these doctor notes, right?
So your doctor can spend more timewith you instead of documenting
everything and taking notes.
It sounds like a great idea, but some companies, they use Whisper and they're not the

(45:16):
only one that uses Whisper, but it is the, I think the, the only one that
I talked to. So they work with about at least 30,000 or so medical
providers in the US and large
systems and hospital systems.
And, they built their own toolon top of the whisper interface.
And it summarizes the recordingsand what the tool also does, they,

(45:39):
it, because of privacy concerns, itthrows away the underlying recording.
So if I am a doctor, I look at my notes and I'm like, was that really said?
I can go back to the transcript, butthe transcript might not be accurate
because there might be hallucinationsin there, and I can't go back to
the actual recording to check.
Was that actually said did anyonetalk about antibiotics or, whatever

(46:00):
the hallucination might be.
And, to a lot of people who look atAI, this was really problematic that
a company was using that, not, andI also asked them, I was like, we
know that whisper hallucinates andthey're like, oh yeah, we know too.
We try to mitigate it.
And I was like, oh.
Were you able to get rid of it 100percent because I've never heard of
anybody being able to mitigate it100 percent and they were like, no,

(46:22):
but I think we have it under control.
And I was like, okay Sowhat are the, what are

Nola Simon (46:26):
the impacts of having a medical transcript that is inaccurate
and includes a hallucination?
So that could be, it could go to court.
It can be used as evidence to actuallydeny coverage for insurance claims.

Hilke Schellmann (46:42):
Yeah, it also is in your presumably in your
electronic medical records forever.
So you might have so itcould lead to misdiagnosis.
Yeah it could have alifelong implications.
You might also get a prescriptionfor the wrong medication
all kinds of consequences.

Nola Simon (46:58):
It can even, honestly, with the changes in law, if it picks up somehow
that you had an illegal abortion, doesthat then lead to legal consequences?

Hilke Schellmann (47:09):
Yeah,

Nola Simon (47:09):
if that's not something that haven't happened.

Hilke Schellmann (47:14):
Yeah, and I think that's, it could be that it hallucinates
something, but it could also be like whathappens if the transcript is actually
accurate and the transcript is used formore, it's not protected by HIPAA, right?
Which is the privacy law in the United States, because that
mandates that medical informationstays private between a patient
and a doctor, but it's also nowshared with other companies.

(47:37):
And what we've seen is like nowpatients are asked to sign a
release to actually sign away.
Their HIPAA rights that, says Iacknowledge that the transcripts or the
end of recording sometimes is being sharedwith some large tech companies, these
providers and, that does worry me too,that when we go to the doctor, we get
this tablet where you just sign thingsand, just sign here, you don't even read

(47:58):
it, and people aren't even aware of thepotential consequences that these very
personal I think a lot of things thatwe tell our doctor, we might not want
to tell others very private informationgets shared with large tech companies.
Then they use it against for fortraining data and other things.
And who knows who uses it, who is beingsold to the next time what happens if

(48:19):
a company goes bankrupt, like that isit's usually very valuable material
that then the next company buys.
And, yeah, or even just

Nola Simon (48:26):
like you run the test and go, tell me what you know about
this person and all of a sudden yourprivate medical records show up in
like a general inquiry that anybodyin the world could actually ask.

Hilke Schellmann (48:39):
Yeah, and I find that, they say it's anonymized,
but we also know that anonymized data can be re-identified
pretty easily, right?
There's a lot of data on us out there.
So it does worry me.
And I know of companies that use this kind of medical data to then predict
what is going to happen to people.
They need a therapist.

(49:00):
They need back surgery.
We already know it's not
100 percent protected.
I'm not saying it's the specific transcript, but there are like data
lakes of medical data on Americans and other people out there that are already
used for these kinds of predictions, and things that I think people would find
very creepy if they knew why it's being suggested to them that everyone has a bad

(49:21):
day and they should see a therapist.
When my company sends me a nudge like that, I might be like, oh yeah,
maybe I'm having a bad day. But if I knew that the company had derived this
information because I took my spouse off my medical benefits, and that might be a
signal of divorce and I need a therapist,
I might feel very different that my company shares that kind of

(49:42):
information with a third party provider.

Nola Simon (49:45):
Yeah, it's so concerning.

Hilke Schellmann (49:47):
Yeah, that's probably going to be the subject of
my next book: AI and health care.
Yeah, that's fascinating because we see a lot of AI also moving
into the workplace, right?
That can find out like, are we depressed?
Are we anxious?
And I think there is something that, if you share this with
your doctor, I hope gets protected.
We just talked about how the protection isn't ubiquitous, but also what

(50:10):
happens in the workplace when we have these kinds of tools, are they
used only for the benefit of employees?
Are they also being used against them?
That was

Nola Simon (50:18):
one of the questions in your book that I found the most fascinating,
because it goes back initially to myinterest in the eightfold, which was,
if you identify transferable skillsthat are valuable, but that Employee
doesn't necessarily want to use thatskill because they find it draining or
they have trauma associated with, thatwork in the past or whatever reason

(50:39):
they don't want to do it anymore.
What are the repercussions for turning down what the company
perceives to be an opportunity
to benefit from your skill?
Yeah.

Hilke Schellmann (50:49):
Yeah.
And I think also what I thought is interesting about Eightfold is like,
it has this career laddering: people in your position did
this, they became vice president in five years because they did X, Y and Z.
Here's some LinkedIn Learning or whatever learning platform that
you can use to get those skills.
And that's interesting.
But it could also be used for managers to find like high performers

(51:12):
and it could also be used to find
maybe what the platform might see as low performers, right?
People who didn't advance in five years to vice president, who took longer.
But the platform doesn't tell you why that is. Are they slow learners, or
did they have a difficult pregnancy, or did they have to take care of their
parents, and you know, didn't want to.
Yeah, there's multiple reasons why you might.

(51:33):
And like,
when I asked them that question, I was like, it could also
be used to penalize people.
They're like, everyone knows about it.
The employer knows about it, about our assessment of them,
and the employee knows as well,
and you could question it. But I don't think they use it that way.

Nola Simon (51:49):
If you're in a state that has at will employment and
you don't like the answer to thatparticular question, then you could
also use it as a reason to terminate.

Hilke Schellmann (51:58):
Yeah, totally.
Exactly.
Exactly.
And I think there isn't a whole lot of protection.
It could also be that maybe you used another learning platform, right?
To gain new skills, and that never gets registered in the company's
system, which maybe uses another vendor, or you read a book.
Yes, and that never is part of the system either, right?

(52:18):
Like we can think of so many usecases that are actually not part of
the what a company collects on us.
And there's no way to like manuallyentry that and be like, no, but I
did get these skills somewhere else.
And we don't necessarily know howcompany executives or managers
actually use these systems.
And I think that opens itup for some doubtful things.

(52:39):
And we've seen this, unfortunately, again and again, that companies
collect data for one thing, and then they want to analyze it for something else.
And then there are layoffs, and they want to look at the data
that they collected for one thing.
It's actually not supposed to be used
for layoffs, but they still use it because they have the data, right?
Once you have it, you know it.

(53:00):
Exactly.
And I think that's another problem.
Once you have the data, once you do the analysis, like if I think that
you are a flight risk, meaning you're going to leave the company,
there are these indicators that say this person is 80 percent likely
to leave the company in a year.
If I am a manager and I want to keep you, maybe I'll give you a promotion.
Did you deserve it?
I don't know.

(53:20):
Probably somebody else is not gonna get a promotion or a raise, because
there's only a limited pot, and I only do that based on this
flight risk indicator.
How can I unsee that?
Am I going to put you forward for leadership training if I
think you're already halfway out the door?
I don't know.
But the question is, you don't even know that the tool predicted that
you're one foot out the door, and you may or may not be, right?

(53:42):
Like, right.
Exactly.
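[Editor's note: to make concrete what a "flight risk indicator" like that 80 percent figure might look like under the hood, here is a toy sketch. The features, training data, and model choice are invented for illustration; real vendor systems are proprietary, and their inputs are exactly what the conversation says is hidden from employees.]

```python
# Toy illustration only: how an HR analytics tool might produce a "flight risk"
# probability. All features and numbers here are invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per employee:
# [months_since_last_promotion, commute_minutes, recent_engagement_score]
X_train = np.array([[36, 50, 2], [6, 15, 9], [48, 60, 3],
                    [12, 20, 8], [30, 45, 4], [3, 10, 9]])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = left the company within a year

model = LogisticRegression().fit(X_train, y_train)

employee = np.array([[40, 55, 3]])               # one current employee
risk = model.predict_proba(employee)[0, 1]       # probability of leaving
print(f"Predicted flight risk: {risk:.0%}")      # a manager sees only this number,
                                                 # not the personal context behind it
```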

Nola Simon (53:43):
There might be something keeping you there like
a personal relationship that isn'tgoing to show up on any tool.

Hilke Schellmann (53:48):
Yeah.

Nola Simon (53:49):
You follow the advice to have friends at work.
I

Hilke Schellmann (53:53):
mean, there are so many things connected to this.
It's just, it's wild.
So

Nola Simon (53:57):
I wanted to make sure that we were going back to the book.
You've written a fabulous book.
I highly recommend it, you really did a great job.
Thank you.
What are the successes thatyou've had with the book that
you really want to highlight?
Because it's done so well.

Hilke Schellmann (54:13):
Yeah, I got really lucky, in that I think it was
like a topic that is really in the zeitgeist, right?
And I think a lot of job seekers also feel like, I've heard of
algorithms, how do they really work?
And I think that people in talent acquisition are also
thinking, do these tools work?
They may have similar questions.
So I think it was also really interesting for them.
So I've been lucky to actually talk at HR and talent acquisition conferences.

(54:37):
And I'm very grateful for that.
Cause I think those are the people that, maybe more
than job seekers, actually have the power to investigate these
tools and maybe make changes. I've also spoken at data science and technology
companies, for the people who build these tools, to be like much more
thoughtful and think things through and talk to different people and not just assume

(54:59):
that they're domain experts about hiring.
And really, they have no clue how to really hire, what their best case
scenarios are, and how to not discriminate.
So I also tell them like, here's what you should avoid.
And here's exactly how you avoid it.
Don't do this, don't do that.
So I'm really grateful that I feel like people who built the technology and use
the technology are actually listening.

(55:21):
And then I love just connecting with like readers from all over the
world. It's amazing when you get a message from like Brazil and the next
day from Sweden, somebody read your book and it was really meaningful to them.
Like, I was just very honored.
And then recently I gave a TED talk about the subject.
Oh, I listened to the podcast.
Yeah.
And finding new audiences.

(55:41):
So hopefully it's also reached some lawmakers.
So hopefully there will be change.
And I'm just so grateful for people who pick up the book,
spend so much time with it.
And I hope to illuminate a new world that maybe they didn't know about.
And it's endlessly fascinating to me how we quantify human beings.
So I hope that rubs off on people too, that it is interesting, like
how we try to make sense of the world,

(56:03):
in different ways.

Nola Simon (56:05):
Yeah, no, it's fascinating.
And it validated me, because I'm like, I don't have a journalistic background
at all, but I went back and I'm like, oh look, I was already hearing a lot of
similar things, and I'm like, exactly.
And I think that's what I want to do, use
journalistic instincts to say, hey, I'm noticing these things.
This is good.
Yeah.

Hilke Schellmann (56:22):
Welcome to our world.
Like, write the next book about AI and hiring and the world of work.
But I actually think, I always want to encourage people.
I tell them, like, steal my methods.
Like some of them are super basic.
Some of them are more elaborate, but yeah, these methods to scrutinize
these AI tools, because as we can see, like technology companies just put
out these tools, and, for example, OpenAI knew about the hallucinations.

(56:47):
In fact, they put it in the paper.
They did say in one of their disclosures, it shouldn't be
used for high-risk use cases.
Lots of people don't read the model card.
So they ignored the advice from the company
they took it from. It's not like they didn't know, but you wonder like,
why did they release a flawed system?
There's all kinds of questions that we can ask about this.
So I feel like
we need to be much more skeptical users, testers of

(57:10):
these technologies and push back.
So I want everyone to steal my methods and, you know, invent your own ones, test
these tools, push back against the companies so we build better tools.
Because I do think that AI can be a transformative technology.
The jury is still out if we use it to predict the future of job seekers.
It's really a question if AI can actually help with that.

(57:32):
But I think it's really helpful in other use cases.
But in some, maybe we shouldn't use it, and we should push back and
like highlight how it doesn't work.
So I think everyone can do the work.
We don't have enough journalists to do all this work.

Nola Simon (57:43):
Yeah.
This is my fourth podcast episode about it.
So

Hilke Schellmann (57:49):
I'm doing my part.
Thank you.
Yeah.
And if anyone wants to be in touch with me, I'm on LinkedIn.
You can find my email on my NYU faculty page.
There's only one Hilke Schellmann.
Please be in touch.
Like, I'm happy to, I always love to hear from job seekers or people
who work in talent acquisition.
I love talking to folks.
It might take a few days to respond.

(58:09):
And, I have a bunch of obligations.
I also have a four-year-old kid I like to spend time with.
And she demands scavenger hunts for sweets with her mom.
But I love to be in touch with people, and I'm so grateful for
everyone who's listening and has written to me and has engaged with the book.
It's just wonderful.

Nola Simon (58:26):
Okay.
That's good.
I'll make sure that everything is linked in the show notes.
And I really appreciateyou making the time.
So thank you so much.
I'll see you next time.