
October 11, 2025 34 mins
The source documents expose a significant ethical crisis regarding the hidden labor required to train and moderate advanced Artificial Intelligence models, focusing specifically on Kenyan workers in Nairobi’s "Silicon Savannah." These individuals, often highly educated but facing high unemployment, are employed by outsourcing firms like Sama and Remotasks to perform critical data annotation and content moderation for global tech giants, including OpenAI, Meta, and Google. The material reveals that while these jobs are marketed as a pathway out of poverty, workers are often subject to exploitative wages, earning as little as $2 an hour, and are forced to review extremely graphic and traumatic content, leading to severe psychological scars and PTSD. Ultimately, the texts argue that this business model represents a form of "AI colonialism," wherein the vulnerable labor of the Global South subsidizes massive profits and technological advancement in the Global North, prompting workers to organize for better pay and legal protections.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Deep Dive. Today we're navigating a really profound paradox right at the heart of, well, the modern tech revolution: this huge quest for advanced artificial intelligence.

Speaker 2 (00:10):
Yeah, and we're aiming to cut through all the buzz,
you know, the funding rounds, the massive valuations you hear
about with generative AI.

Speaker 1 (00:17):
Exactly, to expose the hidden costs, the human costs, of actually creating it.

Speaker 2 (00:22):
It's the infrastructure story that honestly, almost everyone misses. We
tend to think of AI as just code, algorithms, silicon.

Speaker 1 (00:29):
Chips, right, pure tech.

Speaker 2 (00:31):
But our sources, the material we've dug into, reveal that the foundation is intensely human labor, and often it's exploitative: absolutely necessary, but also deeply emotionally grueling work.

Speaker 1 (00:44):
So our mission for this deep dive is pretty clear. We're examining some really extensive source material. It details the lives of these essential workers, primarily in Kenya, who are doing this foundational, frankly brutal labor that makes models like ChatGPT safe, usable, and, let's face it, commercially viable.

Speaker 2 (01:00):
And it's a story set in a place that's been branded internationally with this huge promise, Kenya. But the reality on the ground reveals severe peril for the people involved.

Speaker 1 (01:12):
The setting is Nairobi, Kenya, a city often championed globally as the Silicon

Speaker 2 (01:17):
Savannah. That label, Silicon Savannah, it's not just aspirational, is it? It reflects a very deliberate national strategy. Kenya has really positioned itself as Africa's tech frontier, and

Speaker 1 (01:28):
They've been successful in attract the investment right, billions pouring
in for the digital.

Speaker 2 (01:32):
ecosystem. Absolutely, billions in foreign investment. They're pushing this digital revolution narrative hard, which

Speaker 1 (01:37):
the government, President William Ruto's administration, is heavily promoting. We see tax incentives, pushes for IT education,

Speaker 2 (01:45):
and positioning AI as the key driver: high-tech jobs, economic growth. It's meant to be the model for African economic advancement, supposedly.

Speaker 1 (01:53):
And that's why the sources paint such a stark, almost painful conflict, because alongside this national promise, the country is grappling with this overwhelming youth unemployment crisis.

Speaker 2 (02:04):
Yeah, that vulnerability is key. It creates, frankly, the perfect environment for exploitation when these supposed jobs of the future actually arrive.

Speaker 1 (02:13):
Let's put some numbers on that economic pressure, because it's intense. Every single year, roughly a million Kenyan youth enter the job market, many with university degrees, looking for formal work.

Speaker 2 (02:24):
A million, and the competition is just crushing. If you're
under thirty five in Kenya, your unemployment rate is hovering
near forty percent.

Speaker 1 (02:32):
Forty percent, that's nearly half the young skilled workforce desperate
for an opportunity.

Speaker 2 (02:37):
So when you're facing statistics like that, almost any stable job,
especially one that seems modern tech related, it looks like
a genuine lifeline. Even if the pay isn't great.

Speaker 1 (02:48):
It's seen as a way out of precarious informal labor, which is the reality for so many.

Speaker 2 (02:53):
Exactly. And these aren't unskilled workers. They're motivated, they're educated, often English proficient. They see these AI training jobs, initially at least, as a real opportunity, a step up.

Speaker 1 (03:03):
But the sources we've looked at, they indicate what's actually being offered is, well, anything but the future they imagined. We're talking poverty wages for work that is emotionally devastating.

Speaker 2 (03:15):
Workers earning as low as, what, two dollars an hour, sometimes less, sifting through the absolute darkest, most toxic corners of the Internet.

Speaker 1 (03:24):
It really frames this as more than just a labor issue.
It feels like a structural ethical crisis.

Speaker 2 (03:29):
It absolutely is. This rush, this incredibly profitable rush to build these intelligent machines and then sell them primarily to the global North, it's being subsidized, subsidized by an underclass in the global South.

Speaker 1 (03:42):
The source material uses some really powerful language here. It
describes how this labor model transforms the lives of these
workers into collateral damage.

Speaker 2 (03:50):
Collateral damage. That phrase just hits hard, doesn't it.

Speaker 1 (03:53):
It does. It suggests the harm, the emotional harm, the financial harm, it's just seen as an acceptable cost, maybe even a necessary one, for technological progress and

Speaker 2 (04:01):
profit. Which forces us to ask a fundamental question: can we really call this innovation if its foundation rests on such deep structural injustice?

Speaker 1 (04:10):
So if we look closer at the actual work, what does it entail? You mentioned it's surprisingly manual, cognitively demanding, the humans in the

Speaker 2 (04:20):
Loop, Yes, humans in the loop. Let's unpack that blueprint
of AI labor. What are these critical functions?

Speaker 1 (04:26):
Okay, let's define it.

Speaker 2 (04:28):
Primarily it falls into two main buckets: data annotation and content moderation. Data annotation is, well, it's the grunt work, teaching the AI to see and understand the world.

Speaker 1 (04:39):
So that's tagging images, categorizing text, identifying things in

Speaker 2 (04:44):
data. Exactly, tagging vast amounts of data, labeling images, classifying text, drawing boundaries around objects.

Speaker 1 (04:49):
We saw specifics on that, right, the bounding boxes. What does that actually look like for a worker?

Speaker 2 (04:53):
Imagine you're looking at a complex video feed or maybe a satellite image. The worker has to painstakingly draw these precise digital boxes, sometimes complex polygons, around every single object: a person, a car, a piece of luggage, maybe even specific facial features.

Speaker 1 (05:09):
And then categorize each one?

Speaker 2 (05:10):
Correct. It requires incredibly high accuracy, intense focus, often for nine hours straight.

Speaker 1 (05:15):
That sounds unbelievably monotonous.

Speaker 2 (05:17):
It is monotonous, yet critical. If you mislabel an object, you corrupt the training data; the whole AI model's performance can suffer. So the pressure for accuracy is immense.
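To make that concrete, here is a minimal sketch of what a single bounding-box annotation record might look like, loosely modeled on the widely used COCO-style layout. The field names and category labels are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """One labeled object in one frame: pixel coordinates plus a category."""
    x: float        # left edge of the box, in pixels
    y: float        # top edge of the box, in pixels
    width: float
    height: float
    category: str   # e.g. "person", "car", "luggage" (illustrative labels)

# A single image or video frame can need dozens of these records,
# each one drawn and categorized by hand by an annotator.
frame_annotations = [
    BoundingBox(x=104.0, y=32.5, width=58.0, height=172.0, category="person"),
    BoundingBox(x=310.0, y=90.0, width=140.0, height=75.0, category="car"),
]
```

Multiply records like these across hundreds of frames per shift, each needing near-perfect accuracy, and the nine-hour grind described above comes into focus.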

Speaker 1 (05:26):
High accuracy demanded, repetitive work under time pressure. Okay. And the other category, content moderation, that's where the psychological toll really comes in: flagging toxic content. Why is this human filtering so absolutely non-negotiable for these big AI models?

Speaker 2 (05:44):
Well, the fundamental reason is where the training data comes from. These algorithms, especially early models like GPT-3, they're trained on just massive amounts of raw, unfiltered Internet data, petabytes of it.

Speaker 1 (05:56):
The Internet is, well, it's the Internet.

Speaker 2 (05:58):
Exactly. It contains the absolute worst of humanity alongside the best: hate speech, graphic violence, illegal material, deep-seated biases. It's all in there.

Speaker 1 (06:07):
So without humans teaching the AI what's unacceptable, what's harmful.

Speaker 2 (06:11):
The model would just spew it all out. As the sources put it, it would spew unchecked biases and horrors scraped from the web. Without humans labeling what is violence, what constitutes abuse, what's harmful misinformation versus fact, the AI is essentially useless or even dangerous.
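As a rough illustration of what that labeling work feeds into, here is a hedged sketch of a toy safety classifier trained on moderator-labeled snippets. The categories, the placeholder texts, and the choice of scikit-learn are all assumptions for illustration; this shows the general pattern of turning human labels into an automated filter, not OpenAI's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each snippet a moderator reviews gets a category label.
# These example texts are bland placeholders, not real training data.
labeled_snippets = [
    ("a recipe for vegetable soup", "safe"),
    ("a friendly greeting between neighbors", "safe"),
    ("a detailed threat of physical violence", "violence"),
    ("a graphic account of a violent assault", "violence"),
]
texts, labels = zip(*labeled_snippets)

# Train a simple text classifier on those human-supplied labels...
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# ...which can then flag similar content automatically.
print(classifier.predict(["a violent threat against a neighbor"]))
```

Every row in a training set like that is something a person had to read and categorize first; that is where the human cost discussed here lives.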

Speaker 1 (06:30):
So these workers, they're basically the purification system, content scrubbers, cleaning the engine so that we, the end users, get this polished, seemingly safe output from something like ChatGPT.

Speaker 2 (06:42):
Precisely. They are the essential but invisible sanitation crew for the digital world. And this labor, it's outsourced globally. Who are the main corporate players actually benefiting from this setup? It's not some fringe operation.

Speaker 1 (06:54):
No, not at all. We're looking at the biggest names in tech. Global giants OpenAI, Meta, Google, Microsoft, even eBay are mentioned in the sources as relying heavily on this outsourced

Speaker 2 (07:05):
model. And to execute it, they contract with specialized outsourcing firms, many based right there in Nairobi.

Speaker 1 (07:10):
The sources point to two major players being central here: Sama, which used to be called Samasource.

Speaker 2 (07:15):
Right, Sama and Scale AI's gig platform Remotasks. At their peak, these two firms employed over three thousand locals in Nairobi just doing this kind of annotation and moderation work.

Speaker 1 (07:28):
Sama is particularly interesting, though, because its history presents this ethical complexity. It's a San Francisco-based B Corp, a benefit corporation, founded on a specific social mission, wasn't it?

Speaker 2 (07:40):
That history is absolutely crucial. It helps explain the deep sense of betrayal many workers felt. Sama started in two thousand and eight with this stated commitment to ethical practices. Their claim was they wanted to lift fifty thousand people out of poverty through digital jobs.

Speaker 1 (07:55):
They really branded themselves around ethical AI, a pathway out of poverty using technology.

Speaker 2 (08:00):
Very powerful morally compelling narrative. But the sources show a
really critical pivot point, a shift that fundamentally undermined that
original mission. Around twenty nineteen, Sama shifted it moved towards
a more aggressive, traditional for profit model. They heavily expanded
their AI data services to meet the absolutely massive demand
coming from big tech.

Speaker 1 (08:20):
Oh, so chasing the

Speaker 2 (08:20):
boom, exactly. And the sources argue that this dual identity, keeping the humanitarian B Corp branding while pursuing profit margins through these low-wage, high-volume contracts, created the perfect storm, the very conditions for the exploitation we're

Speaker 1 (08:36):
discussing. And for the educated young people in Nairobi, landing a job with a company like Sama, initially it must have seemed like that promised opportunity finally arriving.

Speaker 2 (08:45):
Oh, absolutely. We have that specific story, the anecdote about Naftali Wambalo. He's a mathematics graduate, a father of two. His experience really humanizes this whole abstract discussion. What was his initial feeling? He was, quote, elated when Sama hired him in twenty twenty two. He talked about dreaming of a stable income, something to support his family properly.

Speaker 1 (09:06):
But instead of the high-tech career path...

Speaker 2 (09:07):
Instead, he found himself locked into that nine-hour daily grind, drawing those bounding boxes, classifying videos, labeling text snippets so the AI could understand. And pretty soon he'd be dealing with the kind of content that would just shatter his peace of mind.

Speaker 1 (09:22):
Okay, so we've established the type of labor: intense, often disturbing. Now the core question: why would someone with a math degree, fluent in English, accept maybe two dollars an hour for a nine-hour shift doing this?

Speaker 2 (09:38):
And the answer really digs into the harsh economic reality
of Kenya. The local context is absolutely essential to understand
the leverage these companies have.

Speaker 1 (09:47):
These jobs are attractive despite the pay.

Speaker 2 (09:49):
They are genuinely attractive, even at rates way below what you'd see elsewhere, simply because the alternatives are so grim. You mentioned the informal sector earlier; that employs something like eighty-three percent of the Kenyan workforce. Formal jobs are incredibly scarce.

Speaker 1 (10:04):
About average earnings for those who do have work.

Speaker 2 (10:06):
Average monthly earnings for many Kenyans barely break two hundred dollars. On top of that, Kenya's labor laws are, well, relatively lax in enforcement and scope. There's no universal minimum wage
covering all sectors.

Speaker 1 (10:17):
What is the baseline wage in Nairobi?

Speaker 2 (10:19):
For example, for a typical job, say a receptionist in Nairobi, the official baseline is only about a dollar fifty-two an hour.

Speaker 1 (10:26):
Okay. So when these AI data jobs come along, offering, say, a dollar seventy-five or two dollars an hour, plus the promise of stability, working with technology, it

Speaker 2 (10:36):
Looks like a significant step up, especially for IT and
engineering graduates who might otherwise face unemployment or informal work.
They pursue these jobs aggressively.

Speaker 1 (10:46):
It plays right into what the civil rights activist Nerima Wako-Ojiwa said. She critiqued this dynamic perfectly.

Speaker 2 (10:52):
She did. She noted that these big American tech companies
advertise these as tickets to the future, but in reality
they're just a new form of exploitation, using the promise
of tomorrow to mask the grim reality of today.

Speaker 1 (11:05):
Let's really drill down into the specifics now, the pay disparities, especially for that incredibly fraught contract, the work Sama did for OpenAI to make GPT-3 safe.

Speaker 2 (11:14):
Yes, that specific contract, starting in late twenty twenty one, provides perhaps the most damning evidence of this structural imbalance. The sources all confirm GPT-3 was initially, quote, notoriously toxic, just full of the bad stuff from its training data.

Speaker 1 (11:27):
So OpenAI needed to implement safety guardrails urgently.

Speaker 2 (11:32):
Urgently. And they outsourced this intensely traumatic moderation work to Sama in Nairobi.

Speaker 1 (11:37):
Now, what was OpenAI actually paying Sama for this work?
Per worker per hour?

Speaker 2 (11:42):
Okay, this is critical. OpenAI paid Sama a significant, globally competitive rate: twelve dollars and fifty cents per hour per worker. That figure isn't disputed. It shows the tech giant was willing to pay a fair global rate for this essential and frankly dangerous labor.

Speaker 1 (11:58):
Twelve fifty an hour, paid by the tech giant to the outsourcing middleman, Sama. Now the crucial part: what did the workers, the actual people enduring the trauma, what did they take home?

Speaker 2 (12:09):
This is where the money just seems to evaporate. Payslips were reviewed by journalists, and they show a staggering disparity: junior workers taking home a base monthly salary around one hundred and seventy dollars. Once you factor in mandatory deductions, fees, whatever else, that base rate equated to only about a dollar thirty-two an hour.

Speaker 1 (12:25):
One dollar thirty two cents.

Speaker 2 (12:27):
Senior workers could climb up to two dollars an hour,
but only if they consistently hit extremely high, often unrealistic
productivity targets.

Speaker 1 (12:34):
That is a colossal margin. OpenAI pays twelve dollars and fifty cents; the worker gets, let's be generous, one dollar fifty to two dollars on average. That's, what, a six-to-nine-times difference. Where does the other ten dollars fifty cents per hour go?
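The gap is easy to verify from the figures the sources cite. A quick back-of-the-envelope check (the rates come from the reporting above; the script itself is just an illustration):

```python
rate_paid_to_sama = 12.50  # USD per hour, the reported OpenAI contract rate
worker_rates = {"junior": 1.32, "senior": 2.00}  # USD/hour take-home, as reported

for level, wage in worker_rates.items():
    print(f"{level}: keeps {wage / rate_paid_to_sama:.0%} of the rate, "
          f"a {rate_paid_to_sama / wage:.1f}x gap, "
          f"${rate_paid_to_sama - wage:.2f}/hour staying upstream")
# junior: keeps 11%, a 9.5x gap; senior: keeps 16%, a 6.2x gap.
```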

Speaker 2 (12:47):
Well, that huge margin covers Sama's administrative costs, their operational overheads, their profits, and crucially, it subsidizes the massive profits of the tech giant OpenAI by letting them secure this absolutely essential labor at a tiny fraction of what it would cost them in the US or Europe. It's a financial insulation layer. Keeps the

Speaker 1 (13:06):
Cost off opening eyes, direct books.

Speaker 2 (13:08):
And off their conscience, perhaps. Naftali Wambalo, the worker we mentioned, his direct protest about this really puts it in sharp perspective.

Speaker 1 (13:16):
What did he say?

Speaker 2 (13:16):
He was completely unequivocal when he spoke to the press. He connected the pay directly to the work's global value. He said, they pay thirty dollars in the US, but here it's scraps.

Speaker 1 (13:24):
Scraps.

Speaker 2 (13:25):
Yeah. He completely refuted the idea that just because Kenya is a lower-income country, two dollars an hour is somehow sufficient pay for highly skilled, psychologically damaging labor that is fundamental to a multi-billion-dollar product.

Speaker 1 (13:39):
His point is clear: they're doing the exact same work, the same foundational, crucial task, as a moderator sitting in Dublin or California, but earning maybe one-fifteenth of the rate.

Speaker 2 (13:51):
It's exploitation hiding behind geographical wage differences. And it wasn't just the traditional contractor model like Sama's causing precarity. Gig economy models like Scale AI's Remotasks platform had similar pay issues, arguably with even less job security.

Speaker 1 (14:08):
How did Remotasks work?

Speaker 2 (14:10):
Remotasks was pitched as this flexible pay-per-task model, but the rates often worked out to under two dollars an hour, sometimes significantly lower depending on task complexity.

Speaker 1 (14:20):
And the biggest complaint there wasn't just the low pay, right? It was something more insidious.

Speaker 2 (14:25):
Yes. The most common complaint, echoed by dozens of workers in the sources, was about accounts being frozen, specifically accounts being frozen right before payday.

Speaker 1 (14:33):
So you do the work, you put in the hours, you see your earnings accumulate, and then, poof, the company could just freeze your account and effectively zero out

Speaker 2 (14:40):
what you're owed. Exactly. It seemed like a structural way to avoid paying for labor already completed, a built-in loophole for wage theft, essentially.

Speaker 1 (14:48):
And this precarity, it had very real consequences for those workers.

Speaker 2 (14:52):
Absolutely. Amid a rising tide of complaints and negative press coverage, Remotasks just abruptly shut down its Kenyan operations entirely in March twenty twenty four. Just pulled the plug. Left potentially thousands of workers instantly stranded, no severance, often with unpaid earnings just locked away in those frozen accounts.

(15:12):
This happened after Meta also terminated its contract with Sama back in January twenty twenty three, which led to around two hundred sudden redundancies and major legal fights over severance pay.

Speaker 1 (15:23):
It really highlights how vulnerable these workers are. The companies
can just pack up and leave, leaving skilled people destitute
because there are few formal protections holding them accountable locally.

Speaker 2 (15:32):
And when you zoom out from these specific cases and connect this model to the broader global dynamics, researchers have given it that specific, really potent name we mentioned earlier: AI colonialism.

Speaker 1 (15:43):
AI colonialism. It sounds dramatic, but is it just hyperbole?

Speaker 2 (15:46):
The researchers argue it's not hyperbole at all. They assert that this outsourcing model, contracting middlemen in low-wage hubs to handle essential, often toxic labor, directly echoes historical colonial extraction patterns.

Speaker 1 (16:01):
But instead of extracting minerals or crops...

Speaker 2 (16:04):
Exactly. The commodity being extracted here is the cheap, compliant, educated labor of the local population. Kenya's educated, English-proficient youth become the resource.

Speaker 1 (16:14):
Their labor fuels the billion-dollar, sometimes trillion-dollar valuations of these Northern tech companies, while the workers themselves survive on just fractions of the revenue generated. Pure extraction.

Speaker 2 (16:25):
It creates this profound structural imbalance. The global South provides
the critical human input, bears the emotional and psychological cost,
and effectively subsidizes the innovation and massive wealth accumulation happening
entirely in the global North. The economic distance, the layers of contracts, they insulate the tech firms from direct ethical responsibility.

Speaker 1 (16:46):
While the economic side is devastating, the sources make it incredibly clear that the psychological toll, the trauma incurred in these content moderation trenches, is perhaps even more profound and often completely ignored. We really need to spend some time on this.

Speaker 2 (17:00):
Part of the story. It really reveals the darkest truth
about how these sophisticated AI models were cleaned up. These workers,
especially those on that open AI contract starting late twenty
twenty one, they weren't just tagging pictures of cats or cars.
They were intentionally immersed in the absolute depths the underbelly
of the Internet, specifically to train the AI's safety algorithms,

(17:24):
to teach the machine what horror looks like so it
could avoid generating it.

Speaker 1 (17:27):
And the specific content they had to review, it was harrowing. The sources list it out.

Speaker 2 (17:32):
The list is just devastating to read: graphic depictions of child sexual abuse, bestiality, torture, extreme violence, human rights violations, detailed descriptions of suicide, murder, sexual assault. This was no accidental exposure. It was sustained, close engagement with psychologically toxic material as a job requirement.

Speaker 1 (17:50):
There was that one quote from a Sama employee cited in the investigation. It's unforgettable.

Speaker 2 (17:57):
Yeah, the one describing recurring visions after reading a description of a man having sex with a dog in front of a child, and then adding, that was torture.

Speaker 1 (18:05):
That's not just a stressful job. That is sustained, professional, forced exposure to human depravity and immense suffering, day after day.

Speaker 2 (18:15):
And it wasn't brief exposure, either. We're talking nine-hour shifts. Sama officially claimed that the quotas were maybe around seventy snippets per shift.

Speaker 1 (18:24):
But the workers reported something different.

Speaker 2 (18:26):
Workers reported they were actually processing between one hundred and fifty and two hundred and fifty snippets of this complex, disturbing content daily. Think about that pace. That's relentless psychological exposure.

Speaker 1 (18:37):
What about the working environment itself to pressure for management?

Speaker 2 (18:40):
Nathan Nkunzimana, another worker from Sama, he described the atmosphere as constant anxiety. His words were, we were walking on

Speaker 1 (18:47):
eggshells. Walking on eggshells.

Speaker 2 (18:49):
Unrealistic deadlines, complex labeling or classification tasks that needed to be done in mere seconds, and this pervasive, constant fear of being fired. Fired if they complained, fired if they fell behind on quota, fired if they seemed too affected by the content.

Speaker 1 (19:06):
So no room to even process what they were seeing.

Speaker 2 (19:09):
Absolutely not. It prevented them from taking necessary mental breaks. Fasica Berhane Gebrekidan, another moderator, her recollection was just raw horror: I looked at people being slaughtered all day long. This combination of high-speed exposure to trauma, high stakes, and the company claiming this ethical mission, it's deeply hypocritical.

Speaker 1 (19:29):
So if this work is globally recognized as necessary but also psychologically toxic, I mean, it's often classified as a high-risk occupation in the US, what actual mental health support was provided, especially by Sama, a company claiming to be a B Corp committed to ethical labor?

Speaker 2 (19:44):
Well, support was promised, naturally, but workers consistently reported it was conditional, inadequate, and sometimes felt like a cruel joke given the severity of the exposure.
Speaker 1 (19:52):
What did they actually offer?

Speaker 2 (19:53):
Generic wellness perks: things like group counseling sessions, not individual therapy; guided meditation apps; apparently even KFC rewards for hitting high-speed targets on particularly grim tasks.
Speaker 1 (20:07):
KFC rewards for processing trauma faster.

Speaker 2 (20:10):
It sounds unbelievable, but that's what the sources report. The core issue was the lack of access to meaningful, personalized care. Workers said their requests for crucial one-on-one therapy sessions, the kind you'd actually need to process severe trauma, were often denied.

Speaker 1 (20:25):
Denied? Why? What was the reason given?

Speaker 2 (20:27):
The reason boiled down to productivity. It was directly linked back to the profit-driven structure of the contracts. One-on-one therapy sessions would take a worker off the production floor for too long; that would impact the productivity quotas, the number of snippets processed per hour, which were set by the contracts with clients like OpenAI and Meta. The company prioritized fulfilling the volume demands of

(20:48):
the contract, speed and volume of toxic content processing, over the individual mental well-being of the person doing that

Speaker 1 (20:55):
processing. Profit over people, essentially, digitized.

Speaker 2 (20:58):
Starkly. And the resulting mental health crisis for many of these educated, often young workers was absolutely devastating.

Speaker 1 (21:06):
What kind of symptoms were reported?

Speaker 2 (21:07):
The symptoms are classic signs of severe, prolonged psychological trauma: PTSD, post-traumatic stress disorder, chronic insomnia, crippling anxiety, and, tragically, the complete breakdown of personal relationships, family relationships, due to irritability, emotional numbness, withdrawal.

Speaker 1 (21:26):
We have that specific, tragic story of Richard Mathenge. He was thirty-seven. After just eight months working on the OpenAI contract, he spiraled into a profound depression.

Speaker 2 (21:37):
His words were, quote, the work destroyed me completely. That depression led to him losing his family. He was actually one of the four former Sama workers who signed that open letter to Kenyan authorities back in twenty twenty three, the

Speaker 1 (21:47):
Letter detailing the psychological.

Speaker 2 (21:49):
Harm exactly explicitly detailing the harm they suffered on the
job and demanding government intervention, demanding accountability, and as.

Speaker 1 (21:57):
If the psychological toll wasn't enough, the source add another layer.
Many of these workers commute from Nairobi's huge, dense informal
settlements slums, essentially where mental health is still a profound taboo.

Speaker 2 (22:10):
Yeah, that social reality just compounds the crisis immensely. These workers endure horrors for nine hours a day, and then they go home to communities where there's often no safe space to even talk about it. Mental illness carries a heavy stigma, so they

Speaker 1 (22:24):
Can't talk about it at work for fear of being fired,
and they can't talk about it at home because of
the stigma.

Speaker 2 (22:29):
Exactly. The silence drives the trauma deeper, which brings us back to Nerima Wako-Ojiwa's blunt summary. It remains tragically accurate: these are AI sweatshops, with computers instead of sewing machines. The exploitative structure where the worker's well-being is utterly secondary to production targets, it's just been digitized. Same logic, new technology.

Speaker 1 (22:49):
It's a devastating picture. But importantly, the sources also detail
a growing resistance, an organized fight back. We need to
shift focus now to those voices, the workers fighting back
and the avenues they're exploring to demand some form of justice.

Speaker 2 (23:03):
Yes, the fight for dignity is real, and it's becoming highly organized. Let's go back to Naftali Wambalo. After he voiced concerns internally about the pressure, the unreasonable conditions, he was summarily fired.

Speaker 1 (23:17):
Just fired for speaking up.

Speaker 2 (23:18):
Fired. He recalled the kind of corporate thanks they'd get for their efforts: they used to say thank you with a soda. That was the level of appreciation. But Wambalo isn't silent. Now he's actively suing Sama and also Meta, alongside about two hundred of his former colleagues. The lawsuit alleges unreasonable conditions that led directly to documented psychiatric harm.

(23:39):
They want accountability.

Speaker 1 (23:41):
About the workers on the gig platforms like Remo tasks?
They faced that specific injustice with the frozen accounts? Did
that view resistance too?

Speaker 2 (23:48):
Absolutely. The source material is full of examples of that instant precarity. Ephantus Kanyugi, earning a dollar fifty an hour, his account just abruptly closed before payday, wiping out his week's earnings. Joy Minayo, a single mother specializing in labeling complex medical images, talked about literally skipping meals to afford the data bundles she needed just to do the gigs.

Speaker 1 (24:09):
These aren't abstract complaints. These stories highlight the profound personal
sacrifices these educated professionals are forced to make just to
participate in this supposedly futuristic economy.

Speaker 2 (24:20):
And for those who simply couldn't endure the toxic labor anymore,
like Duncan Coach who moderated gore content and quit after
developing severe panic attacks.

Speaker 1 (24:29):
What recourse did they have?

Speaker 2 (24:30):
Often the only recourse was to walk away, to sacrifice
their only source of formal income because the job was
literally destroying their health. There was no safety net.

Speaker 1 (24:40):
The global reaction to these stories, once they started coming out, it was pretty rapid, wasn't it? Especially on social media?

Speaker 2 (24:46):
Oh, the outrage gained traction incredibly quickly

Speaker 1 (24:49):
online.

Speaker 2 (24:50):
There was this one viral post in twenty twenty three from the Twitter handle @AfricaFactsZone. It just perfectly encapsulated the injustice.

Speaker 1 (24:56):
What did it say?

Speaker 2 (24:57):
It said Kenyans were paid two dollars just to make ChatGPT safer while wading through feces for foreign elites. That post got something like six thousand six hundred likes. It focused intense international scrutiny right onto this hidden AI supply chain, and

Speaker 1 (25:13):
The conversation quickly broadened beyond just the low wages, didn't it.
People started connecting it to bigger structures.

Speaker 2 (25:19):
That's right. Online threads exploded, connecting the dots. Commentators, like one user, @SeashellStorm, explicitly called out the imperial undertones of AI development, arguing this whole extractive model was primarily benefiting Western imperialism.

Speaker 1 (25:35):
So the narrative shifted from just a labor dispute in Kenya to a critique of global power dynamics in the digital age itself.

Speaker 2 (25:44):
Precisely. Now, let's address the core corporate rationale here. Why this whole outsourcing model in the first place? Why isn't this essential, if unpleasant, labor being done in California or Dublin, where these tech companies are actually headquartered?

Speaker 1 (25:57):
What's the strategic advantage for them?

Speaker 2 (25:59):
The primary driver seems crystal clear from the sources. It's about corporate evasion: evasion of regulatory oversight and financial responsibility. Tech firms outsource these toxic, low-wage tasks specifically to sidestep stringent US and EU regulations. That includes avoiding state minimum wages, like California's fifteen-dollar-an-hour minimum, but

(26:20):
maybe even more importantly, avoiding the robust mandatory mental health and safety protections that apply domestically to workers exposed to traumatic content.

Speaker 1 (26:30):
So by creating geographical distance, they create legal distance, ethical distance, plausible deniability. And when they're actually confronted with these findings, how do the companies respond?

Speaker 2 (26:41):
They engage in this highly coordinated game of deflection. It's quite predictable, according to the sources. Take OpenAI, the end client, the one paying the twelve dollars and fifty cents. They deferred all management responsibility entirely to Sama. They basically said, Sama is the contractor, we just buy the service, working conditions aren't our direct

Speaker 1 (26:59):
responsibility. Washing their hands of it?

Speaker 2 (27:01):
Completely. Then Sama, the middleman, when challenged about the one-dollar-thirty-two-cent wage, they justified it by citing local living wages and the Kenyan labor context, completely ignoring the massive disparity with the thirty-dollar-plus hourly rates their US counterparts earned for the exact same function, and ignoring the twelve dollars and fifty cents they themselves received.

Speaker 1 (27:24):
It's a neat trick: adhere to the bare-minimum letter of local law while ignoring the global ethical context of a multi-trillion-dollar industry built on this labor.

Speaker 2 (27:34):
That corporate distance allows them to maintain that ethical facade while benefiting from the exploitation. And that is precisely why the fightback is now focusing squarely on accountability. What are the actual legal and structural pathways these workers are pursuing for justice?

Speaker 1 (27:48):
It seems that the resistance is pretty formidable now and organized.

Speaker 2 (27:51):
It really is. The labelers formed their own association, the Data Labelers Association, or DLA, and crucially, they've successfully allied with established trade unions in Kenya.

Speaker 1 (27:59):
Why is that alliance important?

Speaker 2 (28:01):
It gives them much greater leverage for collective bargaining. They're
not just asking for better pay anymore. They are demanding
critically specific stipends, money earmarked specifically to compensate for and
treat the psychological trauma they suffer doing this work.

Speaker 1 (28:16):
So compensation for the harm done.

Speaker 2 (28:18):
Yes. And they aren't just relying on slow legal systems, either. We mentioned the lawsuits, but they're also taking their case directly to the companies profiting from their labor.

Speaker 1 (28:27):
That open letter in May twenty twenty

Speaker 2 (28:29):
four, exactly. Nearly one hundred labelers signed that coordinated open letter addressed directly to the big US tech firms. It stated their conditions incredibly plainly: we watch murder and child abuse for less than two dollars an hour. They're demanding direct accountability from the giants at the top of the chain: OpenAI, Meta, Google.

Speaker 1 (28:49):
And legal activists are backing this up, right? Challenging the outsourcing loophole itself.

Speaker 2 (28:53):
Yes. Figures like Cori Crider from the legal nonprofit Foxglove, which is supporting the Kenyan workers' lawsuits, argue that because this labor is absolutely indispensable to the tech firms' products, these workers are effectively employees in all but name, meaning the tech firms bear direct moral, and potentially legal, responsibility for their well-being, regardless of the contractual layers they've

(29:16):
set up. If your multi-billion-dollar product cannot function
without this specific labor, you are responsible for the conditions
under which that labor is performed. You can't just outsource
the responsibility away.

Speaker 1 (29:29):
That's a fundamental challenge to the entire outsourcing model that
props up so much of the tech industry.

Speaker 2 (29:35):
It absolutely is. And it raises the question of broader systemic responses. What regulatory changes are being suggested globally to stop this kind of exploitation from just moving somewhere else?

Speaker 1 (29:46):
Are there calls for international regulation?

Speaker 2 (29:48):
Yes, increasingly so. Organizations like the Partnership on AI, that's a nonprofit focused on AI standards, they're urging mandatory independent audits of the entire AI supply chain, similar to the kind of audits done for ethical manufacturing, looking at labor conditions.

Speaker 1 (30:03):
And think tanks are weighing in too. The

Speaker 2 (30:04):
Brookings Institution, for example, advocates for global supply chain regulations that would specifically ensure ethical labor practices, treating the human labor component of AI just as critically as, say, conflict minerals in hardware supply chains, making it transparent and accountable.

Speaker 1 (30:22):
And what about locally within Kenya itself? What needs to
change there?

Speaker 2 (30:26):
Significant changes are needed in the legal and governmental framework
to protect these workers who are really at the cutting
edge of the digital economy but lack specific protections.

Speaker 1 (30:35):
What are the calls for action?

Speaker 2 (30:37):
The sources call for immediate action from Kenya's Ministry of
Labor to actually enforce the existing protections more robustly. But
more importantly, there's a call to establish a specialized minimum
wage for digital workers.

Speaker 1 (30:49):
A digital worker minimum wage.

Speaker 2 (30:50):
Yes, a wage that doesn't just reflect the baseline poverty levels of the informal sector, one that acknowledges the specialized skills, the essential nature, and especially the often toxic and psychologically damaging nature of data labeling and content moderation work. It needs a specific, higher floor.

Speaker 1 (31:07):
We should also acknowledge there has been some corporate response, right? Sama made a change after the intense scrutiny.

Speaker 2 (31:13):
Yes, Sama did make a concession. Following the negative press and the loss of major contracts like Meta's, they announced a pivot starting sometime after twenty twenty three. They stated they would move away from the most intensely toxic content

Speaker 1 (31:26):
annotation and focus on what instead?

Speaker 2 (31:29):
Focus more on less harmful tasks, like computer vision annotation, training self-driving cars for instance, or other forms of data labeling that don't involve constant exposure to graphic violence and abuse.

Speaker 1 (31:40):
But as the sources clearly argue, that pivot doesn't address the past harms, does

Speaker 2 (31:45):
it? Not at all. It does nothing for the trauma already inflicted on workers like Richard Mathenge or Naftali Wambalo, and it doesn't guarantee structural reform across the industry to ensure this kind of exploitation doesn't just continue, perhaps under different company names or in different regions, within Kenya's Silicon Savannah or elsewhere. Accountability for the past and prevention for the future are still missing.

Speaker 1 (32:06):
So this deep dive, it really throws into sharp relief this profound, immediate contradiction right at the heart of the AI revolution we're all living through.

Speaker 2 (32:15):
It does. The incredible eloquence, the polish, the seeming intelligence of models like GPT, it masks this brutal, silent scream, a scream emanating from thousands of screens in places like Nairobi.

Speaker 1 (32:28):
Let's just recap the core findings again. We're talking two-dollar wages, maybe less; shattered psyches from constant trauma; stolen futures for educated young people. This is the hidden, subsidized foundation of so much advanced AI.

Speaker 2 (32:42):
And it means every time we marvel at a generative
AI output, every time we use these tools, we really
have to remember the human trauma and the deep economic
injustice that's literally embedded in its training data. It's built
on that sacrifice.

Speaker 1 (32:54):
So the mandate for true ethical innovation, it seems, has to demand something much more radical: equity.

Speaker 2 (33:00):
It has to. Fair pay that actually reflects the global value of this essential work, not just local poverty levels; robust mental health support systems that genuinely prioritize well-being over relentless production quotas; and, fundamentally, the real inclusion of the global South, not just as cheap labor pools, but in sharing the staggering profits being generated by this technology.

Speaker 1 (33:23):
Because the current model, it just feels structurally unsustainable and profoundly unethical.

Speaker 2 (33:29):
Deeply unethical. This investigation really compels you, the listener, to
reconsider the entire ethical foundation of AI as it's currently
being developed.

Speaker 1 (33:38):
That final thought from the source material really resonates: that the future of AI isn't truly coded in silicon or algorithms alone. It's forged in human dignity.

Speaker 2 (33:48):
Hmm, that's powerful.

Speaker 1 (33:50):
And until that dignity is universally respected, valued, and actually
paid for, this whole glittering industry rests upon a really shaky,
unjust foundation.

Speaker 2 (33:59):
Indeed, and for anyone listening who wants to know more,
we encourage you to research the Data Labelers Association in Kenya,
look into the work of organizations like Foxglove fighting these
legal battles, and follow groups like the Partnership on AI
pushing for industry standards and regulation.

Speaker 1 (34:16):
Because this fight for fair labor in the digital economy,
it's not some niche story happening far away. It's truly
a battle for the soul of this next technological frontier.

Speaker 2 (34:25):
It affects us all, absolutely, so next time you interact
with an AI model, maybe pause for a second. Consider
that invisible human labor chain, the one stretching back to
Nairobi that brought that seamless, perhaps even helpful or entertaining
experience to your screen.

Speaker 1 (34:41):
Because the human suffering, as we've learned today from these sources, it's been the ultimate hidden subsidy powering this digital revolution. And that is absolutely something to mull over and perhaps explore further on your own.