
October 29, 2025 · 29 mins

Corporate America has bet on AI to make work faster and cheaper. Companies like Meta and Microsoft are laying off employees, hoping it will save them money. But a new study has found that there's a growing wave of "workslop," and AI is actually making more work for the people left in these organizations. It's also costing companies millions. Dexter talks with one of the study's authors, Kate Niederhoffer, who gives us an inside look into the details of the study, shares her advice for organizations and individuals dealing with this new technology, and explains why AI workslop is a symptom of a much bigger problem.

Got something you’re curious about? Hit us up killswitch@kaleidoscope.nyc, or @killswitchpod, or @dexdigi on IG or Bluesky.

Read + Watch: 

Kate’s paper, AI-Generated “Workslop” Is Destroying Productivity: 

https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
One of the first examples that I saw was a
really long memo with this style that we talk about
as purple prose.

Speaker 2 (00:18):
It's like when people.

Speaker 1 (00:19):
Just use really flowery language that's elaborate and long.

Speaker 3 (00:25):
Kate Niederhoffer is a social psychologist and the vice president
of BetterUp Labs, which recently conducted a study about a
new phenomenon in the workplace: AI workslop.

Speaker 1 (00:34):
We ask people to share examples of workslop that they've received.
They'll tell us like, there's this weirdness, like there's something
off about this, and feeling so confused and unsure what
the content is really about. But then it's like I
don't know what to do about this. I don't know
if I should just redo it myself, start from scratch,

(00:55):
tell the person, ask the person. Sometimes there's a power
dynamic that makes it really complicated to even engage, and
so it leaves people sort of paralyzed. It's like I
don't know what this is, what it means, or what
to do about it, and I still have to do
the work.

Speaker 3 (01:11):
What Kate's study found is that, contrary to what AI
companies have been promising, AI is creating more work for
people. We're having to spend time reading emails written by
ChatGPT, watching automatically generated PowerPoint presentations, and dealing with
computer code that looks okay but doesn't actually run properly. Meanwhile,
Mark Zuckerberg is telling his workers to use AI to

(01:32):
work five times faster, and a bunch of companies are
still laying off employees under the assumption that they can
save money by just using AI to do that work.

Speaker 1 (01:41):
What we're doing is slashing the human and eliminating that
whole productivity potential. I think it's backfiring because you're having
a well-being tax on the people who stay, who
have lost their peers, who are taking on all of
this work. It's just a compounding negative effect.

Speaker 3 (02:02):
This study went kind of viral and made some headlines,
and if you just read the headlines, it can seem
like the results prove that AI shouldn't be used in
the workplace at all, But that wasn't really the point
of the story. BetterUp's main thing is that they give
advice to companies. Usually that advice isn't just given out
for free in public. Companies pay money for this insight.

(02:22):
So my interview with Kate was an opportunity to see
what kind of conversations are actually happening behind the scenes
at workplaces that are experimenting with AI, the kind of
conversations that might also be happening where you work. Kaleidoscope

(02:44):
and iHeart podcasts.

Speaker 4 (02:46):
This is Kill Switch. I'm Dexter Thomas. I'm sorry, goodbye.

Speaker 5 (03:29):
How would you define workslop? Workslop is AI-generated text
that looks like it completes a task, but

Speaker 1 (03:40):
Upon further inspection reveals that it lacks context and all
the cognitive effort that is really necessary to complete a task.

Speaker 3 (03:50):
So, in other words, it's when you get sent an
email or a presentation that's very clearly written by ChatGPT,
when there's a bunch of em dashes and bullet points,
flowery language, phrases like delve into, the stuff that at
first glance kind of looks like a well thought out
piece of work, but if you keep reading, it's obviously robotic.

Speaker 1 (04:09):
We see a lot of examples of emails, reports, code,
a lot of like vibe coded code, even apps. So
people talked about the creation of prototypes, for example, that
are based on prompts, and so they're sort of brittle
experiences that don't really have substance underneath them.

Speaker 3 (04:28):
Where did this phrase workslop come from?

Speaker 1 (04:30):
My colleague Jeff Hancock at the Stanford Social Media Lab.
And we had been studying the ways that managers make
decisions about delegating tasks to humans versus AI, and we
knew about AI slop, you know, that already existed, and
so we were thinking about, like, but what is it
in the workplace and why is it so different? So

(04:52):
Jeff came up with that first definition of AI generated
content that fulfills the appearance of completing the task, but
it doesn't actually have this like substantive cognitive work or
the decision making that a task really requires. And so
we went out with this definition we started to study it.

Speaker 3 (05:10):
Kate and her team worked with Stanford Social Media Lab
to study this. They surveyed over one thousand workers across
fields from finance to medicine to government.

Speaker 1 (05:18):
We start by kind of like funneling them into it
by saying, like, we have some questions about your experience
with AI at work. We explain what AI is so
we're all on the same page, and then we ask
them how often do you use AI at work? And
we ask if people in their company have required them
to use AI. As you know, many people are mandated
to use AI, so we want to get a sense

(05:40):
of what their policy is. And then we start asking
questions about how often they send or share work with
colleagues that uses AI to help produce it. And then
we say, how much of the AI generated work that
you send to colleagues do you think is actually unhelpful,
low effort, or low quality?

Speaker 2 (05:57):
And people readily admit to that.

Speaker 1 (06:01):
And then we say, well, how much of the AI
generated work that you send to colleagues do you think
is helpful and high quality? So you know, we have
to make sure that we're not being biased. And then
we say our most important question to give us the
prevalence is in your job, have you received work content
in the last month that you believe is AI generated
that looks like it completes a task at work, but

(06:23):
is actually unhelpful, low quality, or seems like the sender
didn't put in enough effort.

Speaker 3 (06:29):
So you asked that question, which is essentially your definition
of WORKSLOP without actually using it for them so as
not to bias them. How many people answer yes.

Speaker 2 (06:38):
To that question? Forty percent.

Speaker 3 (06:41):
Forty percent in the last month. That's a really high number.
I can only imagine that's going.

Speaker 1 (06:46):
To go up.

Speaker 2 (06:47):
Yeah, I agree.

Speaker 1 (06:47):
It's something that we're really interested in tracking. We also
wanted to know how frequently people experience it. So if
you think that that's the prevalence that forty percent of
people have experienced it, then how much of the work
that they receive do they think fits this description? And
it's about fifteen percent. So fifteen percent of the work

(07:09):
that you receive is AI generated work that looks like
it completes the task but really doesn't.

Speaker 3 (07:19):
Okay, so this sounds a little annoying, but aside from
the mild annoyance, or the deep annoyance somebody might feel
from getting an email that pretty clearly is written by ChatGPT,
or seeing a presentation that was written by ChatGPT, what
else are the effects we should

Speaker 4 (07:36):
Think about here?

Speaker 1 (07:37):
So it's a lot of time wasted, that's the first thing.
It has an emotional impact that's pretty strongly negative,
emotions like being annoyed, frustrated.

Speaker 2 (07:49):
You've been confused.

Speaker 1 (07:51):
And then I think the most insidious impact of all
is this interpersonal judgment and evaluation that you think that
the person who sent you the workslop is less capable,
less creative, less trustworthy. It's like you're immediately casting judgment

(08:12):
on your ability to work with this person over time
and to collaborate effectively. And that's so important. That's the
most important thing in the workplace is that we believe
that our colleagues are competent, capable people with shared goals
that we can align with and do our best work.

Speaker 3 (08:28):
So the respondents to your study are telling you
that when they get workslop from somebody, they don't trust
that person anymore.

Speaker 1 (08:34):
They trust them less.

Speaker 3 (08:36):
Yeah, you know, so I've gotten pitches asking me to
cover things. So it's, hey, Dexter, I've seen that you
covered X. Here is this thing that I do, or
here's this thing that my client does. And it was
always in the same format. It's like a three paragraph format,
and it would be a brief sentence telling me how

(08:57):
cool I am, and then a couple of paragraphs telling
me what their client's product or project was, and the
third paragraph telling me why it's so interesting and giving
me suggestions for how I could cover it. And I
was seeing the same format over and over and over again.
And that made me not only dislike the PR person,

(09:19):
but dislike the person who I'm supposed to be covering it.
So I start to think, whoever this person is, I
don't want to have anything to do with them because
they're having some PR person who's just running their stuff
through perhaps ChatGPT, perhaps Claude, whatever, and so I
start to Yeah, I start to have a pretty negative
opinion totally.

Speaker 1 (09:38):
I think that's the first phase of it. It's like
you recognize some linguistic cues that seem repetitive or templated,
and like, to me, I experience it as so deceptive.

Speaker 2 (09:49):
There's something first.

Speaker 1 (09:51):
That's empty about the text and hard to read. But
then that second experience, which is so emotional. It's like
I'm confused, I'm frustrated, I'm annoyed, angry, I feel deceived.
And then there's like that third process, that interpersonal process
that kicks in and you're like, I don't want to
work with this person, I don't trust this person, I
don't like this person in your case, and that's when

(10:13):
we saw those results in our study. That's when I
was like, oh God, we're in for something bad here.

Speaker 3 (10:20):
The feeling of anger or frustration that you get when
someone dumps a bunch of AI generated slop on your desk.
This is what Kate would call the emotional tax of
work slop. About half the people surveyed said that when
they receive workslop from a colleague, their opinion of that
colleague changed. At an interpersonal level, this is obviously a
bad thing, but this is also the sort of thing

(10:40):
that a company would want to avoid because it translates
into less money for the company.

Speaker 1 (10:46):
People are spending just under two hours on each instance
of workslop, and so if you think about, you know,
the forty percent of people who have experienced it and
the rate with which they experience it, it can cost
up to nine million dollars for an organization. So it's

(11:06):
a very very costly phenomenon right now. In addition to
the cost of using these tools.
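The arithmetic behind a figure like that can be sketched in a few lines. The roughly two hours per instance and the forty percent prevalence come from the conversation above; the headcount, monthly incident rate, and hourly labor cost below are illustrative assumptions of mine, not figures from the study:

```python
# Back-of-envelope sketch of the organizational cost of workslop.
# Only HOURS_PER_INSTANCE and AFFECTED_SHARE come from the conversation;
# the other inputs are illustrative assumptions, not study figures.

HOURS_PER_INSTANCE = 2     # "just under two hours" spent per workslop item
INSTANCES_PER_MONTH = 2    # assumed items received per affected worker per month
AFFECTED_SHARE = 0.40      # 40% of workers reported receiving workslop
HOURLY_COST = 60           # assumed fully loaded cost per employee-hour, USD
HEADCOUNT = 10_000         # assumed large-organization headcount

affected = HEADCOUNT * AFFECTED_SHARE                 # workers hit by workslop
annual_hours = affected * INSTANCES_PER_MONTH * 12 * HOURS_PER_INSTANCE
annual_cost = annual_hours * HOURLY_COST              # invisible tax per year

print(f"~${annual_cost:,.0f} per year")               # ~$11,520,000 with these inputs
```

With these made-up inputs the invisible tax lands in the same ballpark as the figure Kate cites; the study's own estimate depends on its salary and frequency data.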

Speaker 3 (11:14):
The MIT Media Lab recently reported that despite thirty to
forty billion dollars being invested in generative AI, ninety
five percent of organizations are getting zero return on that investment.
And on top of that, Kate's study suggests that it's
actually even worse. A lot of businesses are losing money
to workslop. So, two questions: why is workslop

(11:37):
happening, and whose fault is it? The second one you
might think you know; the first one, the answer's a
little deeper. That's after the break. So is there an
age demographic that's more likely to be getting or receiving

(12:01):
workslop in your study?

Speaker 1 (12:03):
I don't have a demographic breakdown right now. It's something
that we're working on. We know from our previous research
that older people trust AI more, and we know that
trust in AI, especially when it's a sort of blind trust,
can lead to this overreliance and can produce more workslop.

Speaker 3 (12:22):
So that right there is really interesting because the tendency
with any kind of new technology, almost any new
technology in the past, we've expected young people to
be the early adopters, and I think societally we
start to blame young people for all of the ills
that that new technology brings. But what you're saying about
trust in AI is really interesting.

Speaker 1 (12:43):
So I think there is something really interesting happening with
age here. You know, agency, understanding how to use the
tools in a discerning way, knowing when to use them,
how to use them. That's a really good positive predictor
of not producing workslop, and so it's really possible that
maybe a more tech native person would have higher agency

(13:06):
and be less likely to over rely on the tools.
There's this really interesting distinction in the way that people
use AI. We call it the pilot mindset. So people
who are high agency, high optimism in the way that
they use AI, and we see that pilots are not
producing as much workslop as the inverse, which is passengers.

(13:29):
Passengers are low agency, low optimism. They're either not using
AI at all still or they're using it to create
a whole bunch of workslop.

Speaker 3 (13:40):
So this is something that people are telling you because
you also asked not only have you received workslop, but
you're asking people fairly directly, are you generating AI work
that maybe is unhelpful and just passing it off like
it's your own work? And people are saying yes to this. Yeah.

Speaker 1 (13:57):
What's incredible is that even with the self-report
biases that exist, people are still admitting that, you know,
just under twenty percent of their work is workslop.

Speaker 3 (14:10):
And I am going to assume that there are people
who are doing this and they're just not willing to
admit it.

Speaker 2 (14:16):
I completely agree.

Speaker 1 (14:18):
I will say that what we're starting to see is
the biggest predictor of creating workslop is actually being kind
of burned out. So people say that, like, the reason
why they're generating it themselves as authors of workslop is
because these tasks that they're using AI to do are

(14:39):
one of many things that they're responsible for doing, so
they have a really full plate. You know, there have
been layoffs in the past few years, and so people
have increased span of control. They're having to do the
work of multiple people, and that makes it really tempting
to try to get a tool to do work on
your behalf. Also, I don't know if this feels true

(15:01):
for you, but a lot of what we hear from
our members and organizations is that, like everything feels really urgent,
and that's another predictor of producing workslop. It's just
the pressure to perform right now is really high, and
people are really depleted and haven't recovered from COVID.

Speaker 2 (15:21):
It's really tempting.

Speaker 1 (15:22):
When you have a very powerful tool to rely on
it to help you out. It's just that people are
doing that in a really blind way.

Speaker 3 (15:32):
I mean, I would go further than saying it's tempting.
I would say it's the logical decision.

Speaker 1 (15:37):
Right.

Speaker 3 (15:38):
If you were at a job that you're watching people
getting laid off, you think you might be next, and
you're doing multiple people's jobs and you're not getting paid any more,
why not use AI to get some of that stuff
off of your plate? What we're talking about here is,
fundamentally, I think, an empathy thing, which

(16:01):
is: workslop is maybe just the sign that, yo, this person's overworked,
that's what I should think. But my immediate knee-jerk
reaction might be this person's lazy, they're a bad person,
they don't respect me, right? And how do you even
get around that?

Speaker 1 (16:17):
It's such a compassionate reframe on the situation, and maybe
you just came up with a new intervention. We could
try to do an experiment on that of just you know,
when people, you know they're about to start judging people,
that maybe we could just introduce the idea that people
are producing workslop because they're overworked, because everything feels urgent

(16:38):
and important, and maybe just by telling people that it
can initiate a dialogue that's more about hey, can I
help you approach these tasks in a different way.

Speaker 2 (16:48):
Can I take something off of your plate?

Speaker 3 (16:50):
I love your optimism.

Speaker 2 (16:53):
My collaborator is Canadian. I can't help it.

Speaker 3 (16:58):
I mean, I mean, real, just real. Because maybe the
workslop thing isn't necessarily a sign that everybody's using ChatGPT,
or that ChatGPT is bad or Claude is bad.
Maybe that's not really it. Maybe it's more of a
deeper, fundamental societal thing, which is that, yo, people are overworked.

Speaker 2 (17:19):
I think that's exactly right.

Speaker 1 (17:20):
Like that has been the focus of our research for
the last year, is about this crisis that we're in.
We've seen this over the past ten years in our
data that people are decreasing in the amount of resilience
and agility and these foundations of well-being they have.
We're at an all-time low. So people really lack motivation, optimism, agency,

(17:42):
and we're just hungry for compassion and we just need
someone to refuel us so that we can be motivated
and feel like we matter and our work matters.

Speaker 3 (17:52):
Again, basically, the conditions that are causing the problem here,
like, we are going to continue to create those conditions.
The conditions produce the output, and the output feeds back
into the conditions. This sounds like a spiral to

Speaker 2 (18:09):
Me, a vicious cycle. I think there's a way out.

Speaker 1 (18:14):
We are thinking a lot about what type of interventions
are possible to prevent workslop, and there are a
few things that are possible.

Speaker 3 (18:23):
We'll get into that after the break. So Sam Altman said,
I believe, paraphrasing him, that this year we may see
AI basically quote unquote join the workforce. You've studied how

(18:48):
actual employees feel about basically AI kind of joining the workforce.
Where are we in terms of that right now?

Speaker 2 (18:56):
Actually, you may be surprised.

Speaker 1 (18:58):
I think that people are really excited about AI joining
the workforce.

Speaker 2 (19:03):
When we ask people about.

Speaker 1 (19:05):
This idea of managing a hybrid workforce of humans and agents,
for the most part, they express positive emotions like excitement, optimism, confidence.

Speaker 3 (19:15):
Well, when you say people here, are we talking managers?
Are we talking CEOs, bosses? Are we talking workers?

Speaker 1 (19:21):
Okay, that's a great question. Primarily when we ask managers,
they're more excited than individual contributors about AI joining the workforce.
And what people are wanting is training, And so we
have done a lot of work trying to understand what
type of training people need in management skills to best

(19:42):
manage AI, and for the most part, relational skills training,
a much more human type of training, providing context, listening,
empathy, is far more effective than task-based training, where
you're giving directions on how to prompt, how to specify style,

(20:04):
how to evaluate the AI's output accurately. There are all
these courses you can take right now in AI literacy.
They're all about this task-based training, hard skills to
learn how to do this. We find that if you
teach people to be really good managers, they're going to
do a really good job managing the AI and have
better results.

Speaker 3 (20:27):
So really a big part of reducing workslop falls on
the managers and the CEOs. They're the ones responsible for
creating an environment where people can be comfortable with how
AI is being used, or where people can be comfortable
speaking up if something's wrong.

Speaker 1 (20:42):
One of the things that we're seeing is that having
an environment that's psychologically safe, where people have the ability
to ask questions and take risks in a way that
feels safe, so they don't have to be worried about
whether using the AI is demanded or appropriate or permissible,
you know. So, I think part of that is like,

(21:03):
can I try this out? Can I disclose that I've
used it here? Can you give me some feedback on
whether this quality is high enough? Also training those mindsets,
so the pilot mindset, teaching people to have these skills
to be agentic over their usage of AI, and also optimistic,
so curious and confident and willing to explore different ways

(21:24):
to use it to be more creative as opposed to
feeling like you have to use it. And then I
often like to say that AI is a multiplayer tool,
multiplayer game. And if we remind people that there is
a human recipient on the other end of your work,
and your goal is to collaborate with another human, and

(21:45):
you think about the potential negative impact of producing thoughtless
work that's AI generated, then I think even that
psychoeducation and the salience of knowing that there are humans
on the other side of the tool can be really
helpful to having positive collaboration.

Speaker 3 (22:02):
Again, I think the study in some ways is a
kind of a warning. And basically, right now I can
totally understand why somebody would generate workslop, and it
would be great if their bosses or their company would
recognize the root cause of why they're using ChatGPT

(22:25):
to write code or write a presentation that would be amazing.
Let's be realistic, that probably won't happen. And so your
study finds forty percent of people have received workslop
in the last month. I see that going up. Let's
talk about five years in the future. What

(22:45):
does that look like.

Speaker 1 (22:46):
Yeah, it's a really complicated moment that we're in right now,
and I think the pressure to do more with less
is increasing, and there are so many unknowns about the
future of humans and the way that we work together and.

Speaker 2 (23:03):
The tools that we use.

Speaker 1 (23:04):
And I don't mean to be Pollyannish in thinking about,
you know, the way that we can help and the
way that we can train people. I do think there
are deeper issues to bring, like, politicians and policymakers in.
But I have to say that, like, we've been through
this before. So think back to the early two thousand

(23:27):
and tens, when social media came around, and people, meaning
organizations and the people within them, had to learn
to communicate and collaborate in a new way.

Speaker 2 (23:38):
You know, it reshaped the whole.

Speaker 1 (23:40):
world of, just as an example, marketing and advertising. It
was a totally enormous shift in the way that we
do work, the way that we relate to each other,
the way that we use tools to get our work done.
And I think that gives me hope that you know,
this is a really, really powerful tool. We can create

(24:01):
a ton of value. We may need to slow down
a little bit, but you do have to stay relevant
with the tools that we all have access to right now,
to test them out, to try them out, to figure
out if, you know, in the same way that years
ago we learned to code, maybe that's no longer as relevant, but

(24:22):
I think you have to figure out how to use
these tools.

Speaker 3 (24:26):
I mean, yeah, that's kind of the wild thing, is
I mean, I remember when a journalist would lose their job,
you know, somebody would get up on Twitter and say, hey,
learn to code.

Speaker 1 (24:33):
Ha ha ha ha ha.

Speaker 4 (24:34):
Right.

Speaker 3 (24:35):
Ironically, some of those same people I think are probably
now looking for a gig along with the rest of us,
because entry-level coding gigs, I mean, Claude and ChatGPT
kind of have that on lock. It may be
generating terrible code, but it may be quote unquote good enough,
or as you say, it may be workslop, and then
somebody's got to come in and clean it up. But

(24:57):
the company hasn't quite figured that out yet.

Speaker 1 (25:00):
Right, So maybe there's a new industry of people who
have strong critical thinking skills and the ability to evaluate
the quality of code and to detect these signals and
to point that out. You know, I could see that
being an unexpectedly useful ability. I noticed that, Like, my
kids are way better at detecting AI-generated images whenever

(25:21):
there are those games.

Speaker 2 (25:22):
You know, AI generated or not.

Speaker 1 (25:24):
I get them wrong every time, and my kids know
every single time, Like it's so obvious.

Speaker 2 (25:29):
So maybe there are.

Speaker 1 (25:30):
Some cues that you know, younger demographics, people who have
more hourly jobs can develop to help them evaluate and
work on their critical thinking skills to help the rest
of us prevent it.

Speaker 3 (25:43):
I mean, I'm starting to wonder if maybe the one
prediction that we could make is that there might be
some kind of new category of job created, which is
basically workslop manager, like somebody who can catch
the workslop and say, I know what that is,
let me fix that workslop. A workslop janitor. You gotta
be kidding me, man. You're right, I think some people

(26:05):
are that right now.

Speaker 2 (26:06):
I think they are, for sure.

Speaker 3 (26:09):
It's a little bleak, but maybe it's necessary. Maybe this is
where we're headed. Maybe not everybody figures out a way
within their company to make it so the workslop isn't
being generated, and so just understanding that this stuff is
going to be generated, and then we got to have
somebody clean it up before it gets put out. You know,
Kate and I didn't realize this when we were talking,

(26:31):
but it turns out that the phrasing of janitors for
AI workslop is actually already a thing. I was just
searching and there is a site called code janitor that'll
charge you a couple thousand dollars to clean up the
AI generated code so that your app will stop crashing
and actually work. This is a niche job that I
think is probably going to spread out to other industries,

(26:51):
maybe even the industry that you work in, to which
you might say, well, wouldn't it just be better to
have hired somebody who's good at their job to do
it right there the first time? Yes, probably, but if
you're not the boss, that decision isn't always up to you.
Thank you for checking out another episode of kill Switch.

(27:12):
If you want to email us we're at kill Switch
at Kaleidoscope dot NYC, or we're also on Instagram at
kill switch pod. And if you like the show, which
hopefully you do because you're all the way at the end,
think about leaving us a review. It helps other people
find the show, which helps us keep doing our thing.
And once you've done that, did you know that kill
Switch is on YouTube? So if you like seeing stuff

(27:34):
as well as hearing it, you can have the link
for that. In the show notes, kill Switch is hosted
by me, Dexter Thomas. It's produced by Sena Ozaki, Darluck Potts,
and Julia Nutter. Alexanderveld also helped with production on this episode.
Our theme song is by me and Kyle Murdoch and
Kyle also mixed the show. From Kaleidoscope, our executive producers

(27:54):
are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart,
our executive producers are Katrina Norvell and Nikki Ettore. Oh,
And one last thing, I don't want people to get
the impression that workslop is just people turning in bad work
to their bosses. The study shows that workslop also flows
down the organizational ladder too. Kate, give me an example.

Speaker 1 (28:16):
There are some people who talk about receiving them from
their ceo.

Speaker 2 (28:21):
Here's an example.

Speaker 1 (28:23):
The CEO of our company sent over some notes for
fundraising strategy. It was just a whole lot of words
saying nothing. The ChatGPT puke he sent over would have
been so much more helpful as a paragraph written in
his own words. The style of writing, the reliance on bullet points,
the fact that our CEO readily admits using GPT for everything.
He had just gotten back from vacation. He was asking

(28:46):
GPT for parenting advice. So it's just like.

Speaker 2 (28:48):
Going on and on.

Speaker 3 (28:51):
I'm sorry, ChatGPT for parenting advice is, you know,
I'm gonna leave that alone because that's an entirely different episode.
I don't want to touch that right now. Wow, I really
hope that we do not need to make an episode
about people using ChatGPT for parenting advice. Please do
not make us make an episode about that. Anyway.

Speaker 4 (29:12):
Catch you on the next one. Goodbye.
