
October 18, 2025 • 31 mins
The sources, which include excerpts from a discussion with Professor Elena Martinez and supporting data from organizations like the McKinsey Global Institute and the World Economic Forum, explore the complex relationship between artificial intelligence (AI) and the future of labor. Martinez argues that the central concern is not mass job displacement, which she sees as unlikely due to AI's limitations and its capacity for augmenting human work, but rather the equitable distribution of economic value. She contends that AI is concentrating wealth among a few corporations, necessitating policy interventions such as taxing AI profits and redefining compensation models to ensure workers are fairly rewarded in an increasingly automated economy. The sources ultimately advocate for a shift toward a human-centered economic framework that values uniquely human contributions like emotional intelligence and creative judgment.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the deep dive. For years, really, the conversation
around artificial intelligence, AI, and the future of work, it's
been stuck, stuck on this one big worry. Will I
be replaced?

Speaker 2 (00:12):
Mm hmm, that's the headline fear, isn't it? Totally.

Speaker 1 (00:15):
We picture these automated systems taking over everything, transactions, reports,
driving cars, you name it. But honestly, that fear, and
it's a real fear, it kind of makes us focus on, well,
maybe the wrong thing economically.

Speaker 2 (00:28):
Exactly. It's understandable. But the real tension here isn't just
about jobs disappearing. It's about how the money gets distributed,
how compensation is radically restructured. Right, we're kind of past
the point of asking if humans will work with machines
that's happening. The huge question, the one that's going to
define this century, is whether people will still get paid for,

(00:48):
you know, financially and socially for what they do contribute,
especially as those contributions change in this AI powered economy.

Speaker 1 (00:55):
And that is exactly what we're digging into today. That's
the mission of this deep dive, and we want to
get past all the noise, the hype, the fear and
really ground this discussion. We're using excerpts from a really
key text, AI, Labor, and the Value Distribution Dilemma, and
we're pairing that with insights from Professor Elena Martinez. Her
Stanford lecture series The Future of Work in an Algorithmic

(01:17):
Age is just, well, phenomenal.

Speaker 2 (01:20):
She really nails down the core issues, she does.

Speaker 1 (01:22):
So our goal here is to give you, the listener,
a shortcut, a way to understand the economic mechanics at play,
to move beyond just robots stealing jobs and into this
much trickier challenge of, well, how do we make sure
human value is still valued.

Speaker 2 (01:37):
And how compensation itself needs to evolve. It's not just
about you know, learning a new skill for a new job.
That's part of it, but it's much bigger. It's about
redesigning the whole market so that this incredible wealth AI
can generate, well, it actually benefits everyone, not just the
people who own the code the capital.

Speaker 1 (01:53):
Okay, so let's start maybe with a bit of history,
because that context helps explain why this anxiety feels both
new and somehow really old at the same time.

Speaker 2 (02:02):
It's a great place to start. This fear, technology making
humans obsolete, it's not new at all. It's like this
deep cultural script we keep running.

Speaker 1 (02:09):
You mean, like the Luddites.

Speaker 2 (02:10):
Exactly, Like the Luddites early nineteenth century England. These weren't
just random vandals smashing machines. They were skilled weavers, artisans.
Their entire way of life, their economic value was being
completely overturned by automated looms.

Speaker 1 (02:26):
They saw the writing on the wall.

Speaker 2 (02:28):
They absolutely did. They knew mechanization meant either unemployment or
maybe worse, being pushed into these low wage, dangerous factory jobs.
And you hear echoes of that exact fear today, right
when we talk about algorithms replacing customer service reps or
the potential impact of self driving trucks on millions of drivers.

Speaker 1 (02:47):
So that pattern leads to this idea. Economists talk about
the Luddite fallacy, the belief that tech automatically leads to
permanent mass unemployment.

Speaker 2 (02:55):
Right, and historically that fallacy hasn't held up. New technologies
do destroy old jobs, no question, but they also eventually
create new industries, new roles. They absorb the displaced labor,
though often that transition is painful and slow.

Speaker 1 (03:08):
But Professor Martinez says AI might be different this time.

Speaker 2 (03:12):
She draws a really crucial distinction here. She's cautious. She says, Look,
the Luddite fallacy might have been wrong about the total
number of jobs in the long run historically, but it
could still be painfully right about wages and how value
gets distributed. Okay, so, she argues, AI is fundamentally different
from, say, steam power or electricity. Those replaced human muscle power. AI,

(03:37):
especially modern AI large language models. They're starting to replace
or augment cognitive tasks, white collar work, and they do
it with a speed and a scalability we've just never
seen before.

Speaker 1 (03:47):
So it's not just getting rid of tasks, it's changing
the remaining human tasks in a really fundamental way.

Speaker 2 (03:53):
Precisely, it redefines them. It takes over some parts, augments
human abilities and others, creates some entirely new roles we
haven't even thought of yet. But the key is it
shifts where human effort is most valuable and therefore where
our compensation should come from.

Speaker 1 (04:06):
It feels like the common idea is that tech just
substitutes for humans, right, like a machine does the whole job.

Speaker 2 (04:11):
Yeah, and that's a misunderstanding, according to Martinez. She insists
AI is primarily an augmentation tool, not a total replacement tool,
at least for now and in most complex domains. Okay,
it takes over the grunt work, sifting through massive data sets,
the repetitive calculations, the complex predictions. This basically pushes the
human worker up the chain. You're forced to operate at

(04:34):
a higher level, focusing on things like strategy or complex
communication or ethical oversight. These are areas where AI is
still frankly pretty weak.

Speaker 1 (04:43):
Which brings us to her definition of work itself.
You mentioned she redefines it.

Speaker 2 (04:47):
Yes, and this is critical. If we only think about work
as, like, hours spent typing or producing things, we miss
the entire point in an AI world. She has this quote,
and it's worth really thinking about: Work is not just tasks.
It's the creation of value, economic, social, and cultural. AI
doesn't erase value creation. It redistributes it.

Speaker 1 (05:06):
Okay, redistributes it. That sounds like the core idea.

Speaker 2 (05:09):
It really is. That concept of redistribution is central to
this whole dilemma. Think about it. Values being created incredibly fast,
and AI can analyze a billion data points in, what,
the time it takes us to sip our coffee. But
the traditional places where a human worker plugged in doing
the input, processing the data step by step, those steps

(05:30):
might shrink or vanish. The human input might now be just
setting up the system, writing the initial prompt, or maybe
doing the final check the ethical sign off at the end.

Speaker 1 (05:40):
So the human is still crucial, maybe even more crucial
for that high level part, but the actual time spent
might be tiny compared to the AI's output.

Speaker 2 (05:48):
Exactly. The leverage is immense. So if our pay systems are
still based on hours worked or linear input, we're completely
misvaluing that crucial, high leverage human contribution.

Speaker 1 (05:58):
And the economic benefit of all that redistributed value just
flows upwards to the owners of the AI, the shareholders.

Speaker 2 (06:04):
That's the risk she highlights. The compensation structures haven't caught
up with the technology's ability to scale value nonlinearly. We
need to figure out how to pay for that initial
spark of creativity, or the vital ethical check, or the
complex communication needed to actually use what the AI produced.

Speaker 1 (06:20):
Okay, So that reframing helps us get past this myth
of total automation, as Martinez calls it, this idea that
humans will just be completely replaced everywhere.

Speaker 2 (06:29):
Right. That narrative, while dramatic, really misrepresents what AI can actually
do well and maybe more importantly, what it can't do.

Speaker 1 (06:36):
So let's get specific. What does AI excel at?

Speaker 2 (06:40):
It's brilliant at tasks that are structured, repetitive, and data heavy.
Think sorting huge spreadsheets, running high frequency trading algorithms, generating
standard legal clauses, basic coding, maybe first tier customer
support using scripts. It's essentially a super powerful pattern matching machine.

Speaker 1 (06:58):
But where does that pattern matching fall short? Where do
humans remain essential?

Speaker 2 (07:02):
It falls short where the world gets messy, unpredictable, and
requires real human understanding. AI struggles badly with truly novel
situations it hasn't seen data for. It has no real ethical compass,
no deep grasp of context, certainly no emotional intelligence.

Speaker 1 (07:19):
Can you give an example?

Speaker 2 (07:20):
Martinez uses a great one from law. AI tools now
are amazing. They can draft a pretty solid initial contract
based on precedence, or sift through millions of documents or
legal discovery way faster and cheaper than a human paralegal.
But that AI cannot sit across a table and negotiate
a tense, high stakes deal. It can't read the room,
the body language, the subtle cues. It can't understand the

(07:41):
nuances of a courtroom atmosphere or genuinely console a client
who's just received devastating news. The human lawyer's value shifts
upwards to those uniquely human skills.

Speaker 1 (07:50):
So the AI handles the routine, the human handles the complex
and the human.

Speaker 2 (07:55):
Exactly. The human lawyer provides the strategic thinking, the ethical judgment,
the emotional connection, the persuasion, all things needed to apply
the law in the real, messy world. We see the
same pattern in healthcare. AI diagnostic tools are getting incredibly
good at analyzing medical scans like X rays or MRIs.
Sometimes their accuracy rivals or even beats human radiologists for

(08:17):
specific tasks.

Speaker 1 (08:18):
So does that mean radiologists are out of a job?

Speaker 2 (08:20):
Not necessarily. What happens is the AI takes over the
high volume routine screening. This frees up the human radiologist's limited,
expensive time to focus on the really complex, ambiguous cases,
to collaborate with other specialists like oncologists, and critically, to
communicate sensitive diagnoses and treatment options to patients. That human
touch remains paramount.

Speaker 1 (08:42):
So the AI raises the baseline and pushes the human
expert towards more complex problem solving and communication.

Speaker 2 (08:48):
Precisely, and you even see hints of this in creative fields.
AI tools can generate images, music, even text. Now they
can handle the technical rendering the variations.

Speaker 1 (08:56):
Right, we see those tools everywhere now.

Speaker 2 (08:58):
But the original concept, the cultural insight, the emotional core,
the why behind the art that still largely comes from
the human. It becomes more of a collaboration, human creativity
guiding machine precision.

Speaker 1 (09:09):
Is there data to back this up? This idea that
maybe jobs won't disappear overall, but just change.

Speaker 2 (09:15):
There is some optimism in the data, yes, regarding overall
job volume. The McKinsey Global Institute, in a twenty twenty three report,
they acknowledged that yeah, maybe around thirty percent of current
work activities could potentially be automated by twenty thirty. That
sounds scary, it does, but they also projected that during
that same period, millions of new roles will likely emerge,

(09:36):
roles centered around developing, deploying, maintaining, and critically using AI
systems effectively.

Speaker 1 (09:42):
And the World Economic Forum said something similar.

Speaker 2 (09:44):
It did. Their twenty twenty three Future of Jobs report
also predicted a net positive impact on job numbers overall.
They specifically pointed to the growth of roles like AI trainers,
people who teach the AI; data explainability specialists, let's say,
that's someone who can actually look inside the AI black box,
audit its decisions, and make sure they're fair, unbiased, and

(10:04):
follow regulations crucial for trust. And also roles like human
AI interaction designers, people focused specifically on making these complex
systems usable and effective for people. These jobs are all
about that interface between human goals and machine capabilities.

Speaker 1 (10:22):
Okay, so the evidence suggests maybe we won't see mass unemployment,
but a massive shift in what jobs exist.

Speaker 2 (10:28):
Exactly. It seems to mitigate the worst fears of the Luddite
fallacy regarding job numbers. However, and this is Martinez's crucial
warning, the pivot point, really: job creation by itself isn't enough.
I mean, it's not enough if those new roles are,
as she puts it, low wage, precarious, or undervalued.

Speaker 1 (10:45):
Ah Okay, like the gig economy.

Speaker 2 (10:47):
That's probably the clearest most immediate example of this risk
playing out right now. Think about ride sharing drivers, food
delivery workers, content moderators, even some freelance coders on platforms.

Speaker 1 (10:57):
They're definitely working with AI systems. The apps manage everything,
I think: routes, prices, assignments.

Speaker 2 (11:02):
Absolutely, the platform's AI is the dispatcher, the price setter,
the performance manager. These workers are generating huge amounts of
value for the tech companies that own those platforms, yet
their compensation is often driven down by the algorithm itself.
They face precarious work, usually no benefits, wages that barely
cover their costs.

Speaker 1 (11:22):
So they're contributing massively to the value creation, but they're
not seeing the rewards.

Speaker 2 (11:28):
Precisely. The structure of the system allows the vast majority of
that new AI enabled value to flow to the owners
of the platform, the capital owners. The workers are augmented
by AI, yes, but their compensation is often suppressed by
it too.

Speaker 1 (11:41):
That gap between the value they helped create and the
compensation they receive. That is the value distribution dilemma.

Speaker 2 (11:47):
That's it exactly. Let's nail down this core problem a
bit more. Our traditional ways of paying people are labor markets.
They evolved mostly around compensating linear contribution. Hours worked, pieces produced,
lines of code written, things you could easily count. AI
breaks that linearity completely because it can generate value at
scales we've never seen before, often with very little additional

(12:09):
human input needed for each new unit of output.

Speaker 1 (12:12):
You mentioned the revenue paradox before. Can you break that down?
How does that work?

Speaker 2 (12:16):
Sure? Think about an AI trading system making millions of
profitable micro trades every second, or a large language model
answering millions of customer questions instantly generating personalized responses. These
systems create enormous revenue, often in real time, but operating
that system doesn't necessarily require a proportional increase in human

(12:36):
labor hours or wages. Once the AI is built and trained,
the marginal cost of it doing one more trade or
answering one more query is practically zero.

Speaker 1 (12:46):
So the scale is almost infinite, but the human input
needed to operate it at that scale is minimal.

Speaker 2 (12:50):
Relatively minimal, yes, compared to the output. Maybe you need
engineers for upkeep and monitoring. But if the core value
generation is algorithmic and scales exponentially, how do you fit
that into a wage system based on paying someone for
forty hours a week of linear effort.

Speaker 1 (13:05):
The machine provides the scale, but the pay structure doesn't
recognize the human role in enabling or overseeing that scale.

Speaker 2 (13:12):
That's the crux of it. The economic value gets captured
primarily by the capital, the algorithm itself, the servers, the
intellectual property, rather than flowing proportionally to the labor that
might have built it, trained it, or now manages its application.

Speaker 1 (13:27):
And you mentioned history has warnings about this, it does.

Speaker 2 (13:30):
Martinez draws these really striking parallels to the First Industrial Revolution.
That era undoubtedly created immense national wealth, things previously unimaginable,
but for the average worker it led to decades of
brutal wage stagnation, terrible working conditions, the destruction of communities,
all because they didn't own the new means of production,

(13:51):
the factories, the machines.

Speaker 1 (13:53):
It was a period of huge inequality, right? Wealth soared
for the factory owners, while workers often lived in poverty.

Speaker 2 (13:59):
A massive disconnect, a moral failure as much as an
economic one, and Martinez warns AI could easily trigger similar,
maybe even more extreme inequalities if we're not proactive.

Speaker 1 (14:08):
Now, because the wealth generation is even faster, potentially even
more concentrated.

Speaker 2 (14:12):
Exactly, she explicitly notes this, the wealth generated by AI
is concentrating in fewer hands, primarily tech companies and their shareholders.
It makes intuitive sense. If you own the algorithm that
scales infinitely, you capture the lion's share of the value.
If you're just a user, or maybe someone whose data
helps train the algorithm, or a worker managed by the algorithm,

(14:36):
you risk seeing your economic contribution continuously devalued.

Speaker 1 (14:39):
And this isn't just a prediction, is it? There's data
showing this shift is already happening.

Speaker 2 (14:43):
There is, It's not just theory. If you look at
long term data like from the US Bureau of Economic Analysis,
the trend is pretty clear. Between roughly two thousand and twenty twenty,
the share of US national income going to labor, that
means wages, salaries, benefits, it actually declined. It dropped from
about sixty three percent down to fifty nine percent. Now,
four percentage points might not sound like a catastrophe, but

(15:04):
over two decades in an economy the size of the
US that represents trillions of dollars shifting away from workers'
compensation and towards corporate profits, dividends, capital gains.

Speaker 1 (15:14):
And is that shift more pronounced in AI heavy sectors?

Speaker 2 (15:17):
Yes, that's where it's often most visible. In tech, in finance,
in logistics, sectors that were early and aggressive adopters of
automation and algorithmic processes. It suggests the efficiency gains aren't
being broadly shared with the workforce enabling those gains.

Speaker 1 (15:30):
Wow. Okay, so the takeaway is stark. We're not just
theorizing about the future. We're seeing a structural shift in
wealth happening now.

Speaker 2 (15:37):
Which leads directly to Martinez's central mandate. The answer isn't
to try and stop the technology that's futile and likely counterproductive.
The answer is to consciously redesign our economic systems. We
need rules, policies, and compensation models that ensure a fair
payment and redefine how we even measure and reward human contributions.
In this new era, we have to prioritize human well

(15:59):
being alongside corporate efficiency.

Speaker 1 (16:02):
Okay, so if we don't intervene, Martinez paints a pretty
bleak picture a two tiered society, A small group of
highly paid tech elites designing and managing the AI, and
then everyone else.

Speaker 2 (16:14):
A potentially vast underclass. Yeah, stuck in low wage, precarious gig work,
their labor priced and managed by algorithms designed to minimize
costs. That's the risk of inaction.

Speaker 1 (16:24):
So what's the alternative? What are the policy pillars she suggests?

Speaker 2 (16:27):
To prevent that, She outlines a really comprehensive four pillar approach.
It's about intervention on multiple fronts simultaneously.

Speaker 1 (16:34):
Let's take them one by one. Pillar one, upskilling and
education reform. This sounds like the most obvious one, helping
people adapt.

Speaker 2 (16:42):
It is crucial, but it needs to be much more
than just a few coding boot camps. We need a
societal commitment to genuine, lifelong learning. As entire job categories
get reshaped by AI, people will need ongoing, affordable, maybe
even subsidized access to training.

Speaker 1 (16:59):
Training in what specifically.

Speaker 2 (17:01):
Coding is part of it, sure, but maybe not the
most durable part. The real focus should be on skills
AI doesn't do well: data literacy, understanding how to work
with data; AI ethics, knowing the risks and biases; human
AI collaboration, how to effectively use these tools; and those
uniquely human skills, complex creative problem solving, critical thinking, advanced communication, empathy.

(17:27):
These have a much longer shelf life than any specific
programming language.

Speaker 1 (17:30):
Are there any good models for this kind of large
scale retraining?

Speaker 2 (17:34):
Martinez points to Germany's Industry four point zero initiative as
a strong example. It wasn't just a government program, it
was a national strategy. It brought together government funding, industry participation, unions,
educational institutions all focused on retraining their existing manufacturing workforce
for highly automated, AI driven factories. What made it effective?
They specifically targeted mid career and older workers, recognizing their

(17:56):
existing expertise was valuable but needed updating. They offered modular
training certifications that built towards new AI augmented roles. It
was about investing in their human capital to maintain their
competitive edge, not just letting people fall behind.

Speaker 1 (18:10):
That makes sense. Okay, so education is key, But what
about pay itself? That brings us to Pillar two, redefining
compensation models. If hourly wages don't work well with AI scale,
what does?

Speaker 2 (18:23):
This is where we need real innovation. We have to shift from just paying
for input like time spent, to finding ways to compensate
for output scale or maybe shared ownership. Martinez talked seriously
about exploring things like cooperative ownership models.

Speaker 1 (18:36):
What would that look like?

Speaker 2 (18:38):
Imagine if the gig workers, the platform contributors actually owned
a stake, even a small one, in the platform they
help operate and generate value for. They wouldn't just get
an algorithmically set low wage. They'd share in the exponential
profits generated by the AI systems they're interacting with.

Speaker 1 (18:53):
That feels like a pretty radical change to how companies
are structured.

Speaker 2 (18:57):
It is, but maybe necessary and for the huge number
of freelancers and gig workers who don't have one single employer.
Another absolute necessity is creating portable benefits systems.

Speaker 1 (19:07):
Meaning benefits that follow the worker, not tied to
a specific job.

Speaker 2 (19:12):
Exactly. Right now, if you jump between five different
freelance contracts in a year, you often lose access to healthcare,
retirement contributions, paid sick leave every single time you switch.
It creates massive insecurity. Portable benefits tied to the individual,
perhaps through centralized funds, union agreements, or mandatory contributions from platforms,
would ensure people have that basic safety net regardless of

(19:35):
their employment status that week or month.

Speaker 1 (19:36):
Does anyone do this well? Martinez mentioned Scandinavia.

Speaker 2 (19:40):
Yes, several Nordic countries have models that move in this direction.
Often it's not just government mandates, but strong collective bargaining
agreements that set standards for independent workers too. They negotiate
contributions into shared funds for things like healthcare, pensions, unemployment support.
By making benefits universal or near universal and tied to
the person, they significantly reduce the precarity that often comes

(20:03):
with non traditional work. It makes freelancing or gig work a
more sustainable option.

Speaker 1 (20:08):
Okay, so better education, new pay models, portable benefits. But
all of this costs money, significant money, which leads to
Pillar three, taxation and redistribution. How do we fund this
without killing the innovation AI brings?

Speaker 2 (20:24):
This is probably the most contentious area. The fundamental goal is to capture
some of that massive wealth being generated by AI driven
productivity and redirect it back into society to pay for things
like universal basic income, better social safety nets, public health care,
or those retraining programs we just talked about.

Speaker 1 (20:38):
The idea of a robot tax comes up a lot here.

Speaker 2 (20:41):
It does, taxing the actual robots or the software licenses.
But Martinez actually cautions against a simple robot tax.

Speaker 1 (20:50):
Why? Wouldn't that directly target the automation?

Speaker 2 (20:54):
Her concern is that it might unfairly penalize companies simply
for investing in efficiency, which can have broad benefits. It might
discourage useful automation. She suggests a more nuanced approach might be better,
perhaps taxing the massive profits generated by data driven business models,
or taxing the sheer scale and speed of transactions that
AI enables, like high frequency trading.

Speaker 1 (21:15):
So taxing the outcome the outsized profit or value capture
rather than the tool itself.

Speaker 2 (21:20):
Exactly target the value generated by that near zero marginal
cost replication. The revenue from that kind of tax could
be much larger and potentially less distorting to innovation.

Speaker 1 (21:29):
Are there any real world experiments with this?

Speaker 2 (21:31):
South Korea is an interesting case as a world leader
in deploying industrial robots, They've experimented with adjusting tax incentives
related to automation investments. Specifically, they reduced some tax breaks
for companies investing heavily in automation, with the stated goal
of using those funds to support workers potentially displaced by
those same systems. It's an early example of trying to

(21:54):
explicitly link the gains from automation to social support mechanisms.

Speaker 1 (22:00):
That makes sense. Finally, Pillar four, ethical AI development. This
isn't just about money. It's about the rules of the
game, right? Making sure the AI itself isn't exploitative.

Speaker 2 (22:10):
Absolutely essential, because AI isn't just processing data anymore. Increasingly,
it's managing people think about hiring algorithms, performance monitoring systems,
wage setting platforms. We need regulations that mandate these systems
are designed and deployed with human well being, fairness, and
transparency baked in from the start.

Speaker 1 (22:28):
You mentioned algorithmic wage suppression earlier. Can you explain that again?

Speaker 2 (22:32):
Yeah, it's a major risk, especially in the gig economy.
The platform's AI has access to real time data on
worker supply, customer demand, competitor pricing. It can constantly adjust
wages downward to the lowest point workers are likely to accept,
just to maximize the platform's profit margin. It's not necessarily
about efficiency. It can become a tool for actively depressing
wages across an entire sector.

Speaker 1 (22:53):
So the algorithm itself becomes a force for inequality.
How do we stop that?

Speaker 2 (22:58):
Transparency and accountability are key. Workers need to understand how
these systems make decisions that affect their pay and livelihood.
They need avenues for appeal. Martinez points to the European
Union's AI Act as a potential global benchmark here.

Speaker 1 (23:10):
What does the EU AI Act do?

Speaker 2 (23:13):
It sets up tiers of risk for AI applications for
high risk systems like those impacting employment or access to
essential services. It mandates things like human oversight, rigorous testing,
clear documentation, and the ability for individuals to challenge algorithmic decisions.
It's about putting guardrails in place, making sure the black
box isn't used unfairly against people.

Speaker 1 (23:34):
Okay, those four pillars education, new compensation, fairer taxation, and
ethical design feel like a comprehensive plan. But Martinez also
talks about something less tangible, doesn't she? The purely human
element.

Speaker 2 (23:46):
She does. She emphasizes that beyond the economics, we
need to remember the inherent value of work that AI
simply cannot do. The roles built on empathy, genuine human connection,
nuanced understanding. Caregiving, teaching, counseling, community organizing, jobs
where emotional intelligence, ethical judgment, mentorship, compassion aren't just nice

(24:09):
to have, they are the absolute core of the value provided.
AI can process data infinitely, but it can't offer genuine
compassion or build deep trust.

Speaker 1 (24:18):
And yet, paradoxically, these are often the jobs that our
current economy undervalues the most, monetarily speaking.

Speaker 2 (24:24):
That's the critical irony we have to fix. Martinez argues
very strongly that as AI takes over more routine tasks,
we must use policy to deliberately elevate and properly compensate
this care economy.

Speaker 1 (24:35):
How can AI help there, if at all?

Speaker 2 (24:36):
AI could actually be hugely beneficial by
automating the administrative burdens in these fields. Imagine AI handling
scheduling for nurses, doing initial drafts of lesson plans for teachers,
managing insurance paperwork for therapists. That frees up the human
professional to focus more time on direct human interaction, on
the part of the job that truly matters.

Speaker 1 (24:57):
But that only works if their pay reflects that higher
value work.

Speaker 2 (25:01):
Right, precisely. If AI reduces a teacher's grading time, their
salary shouldn't stagnate or fall. It should arguably increase because
their time spent mentoring, inspiring, or addressing individual student needs
becomes even more central and valuable. Policy has to ensure
the monetary compensation matches the societal value.

Speaker 1 (25:20):
What about the creative sector. That seems like another area
where the human element is key, but AI is making
huge inroads.

Speaker 2 (25:26):
It's a fascinating and complex area right now. Tools like
Midjourney for images, or various AI music generators, or
text models like xAI's Grok, they are definitely transforming creative workflows.
They can generate content incredibly quickly.

Speaker 1 (25:40):
Does that devalue human creativity?

Speaker 2 (25:42):
It changes the landscape. The AI can handle rendering, variation,
maybe even initial brainstorming. But Martinez argues, and I think
most creators would agree that genuine originality, deep emotional resonance,
cultural relevance, that spark still primarily comes from the human creator.
The AI is a powerful tool, but the intent and

(26:04):
the unique perspective are human.

Speaker 1 (26:05):
But there's a risk of exploitation there too, isn't there?
If AI models are trained on existing human created art
without permission or compensation.

Speaker 2 (26:13):
A huge risk. That's why new compensation models are desperately
needed here too: things like clear ownership rules for AI
assisted works, robust systems for royalties, and strong legal protections
for creators against platforms that just scrape their work to
train competing models without paying for it. The value chain
has to acknowledge and reward the original human source of creativity.

Speaker 1 (26:33):
Okay, one last big piece: the global picture. Everything we've
discussed is complex enough within one developed country. What does
this look like for developing economies?

Speaker 2 (26:42):
This is where the stakes are potentially even higher, presenting
both enormous risks and maybe some unique opportunities.

Speaker 1 (26:49):
What's the main risk?

Speaker 2 (26:50):
The biggest risk is rapid, large scale job displacement in
sectors that have been crucial for development in recent decades.
Think about the huge call center industries in places
like the Philippines or India, or manufacturing hubs across Southeast
Asia that rely on relatively low skill, repetitive tasks. AI
powered automation could wipe out millions of those jobs.

Speaker 1 (27:12):
Very quickly, which could reverse years of economic progress.

Speaker 2 (27:16):
It absolutely could, potentially triggering social instability and widening the
gap between developed and developing nations even further as the
AI capital concentrates in wealthier countries.

Speaker 1 (27:26):
But you said there's an opportunity too?

Speaker 2 (27:28):
Yes. Martinez highlights the potential for some developing nations to actually
leapfrog traditional stages of industrial development. Instead of needing
to build massive physical factories or infrastructure first, they could
potentially jump straight into the global digital economy, enabled by AI.

Speaker 1 (27:44):
How would that work?

Speaker 2 (27:45):
She gives the example of Kenya's Ajira Digital Program.
It's a government initiative focused on training young people in
specific AI related digital skills, things like complex data labeling
and annotation, which is crucial for training AI models or
even algorithmic auditing.

Speaker 1 (28:02):
So they become participants in the AI supply chain itself?

Speaker 2 (28:06):
Exactly. They move from potentially being displaced by automation to providing
the high value human input needed to build and refine
AI systems globally. It allows them to sell specialized cognitive
labor on the world market.

Speaker 1 (28:18):
That sounds promising, but is it scalable? Is it enough?

Speaker 2 (28:22):
It's a start, but it requires investment in infrastructure, and
Martinez has a critical warning. Without significant global cooperation, this
potential will likely go unrealized. Wealthy nations with the existing
AI infrastructure, the massive data sets and the capital will
likely just dominate, potentially turning developing economies into mere suppliers
of cheap data or low level annotation labor.

Speaker 1 (28:43):
So another form of digital colonialism almost.

Speaker 2 (28:46):
That's the risk to avoid that. She advocates strongly for
international frameworks, things like promoting open source AI tools to
level the playing field, establishing global funds for retraining initiatives
in poorer nations, and setting international standards for fair data
usage and compensation. It requires a conscious effort to share

(29:07):
the benefits of AI globally, not just hoard them.

Speaker 1 (29:10):
Wow. Okay, we've covered a huge amount of ground here, from Luddites
to algorithms, from job fears to global economics.

Speaker 2 (29:18):
It's a complex picture, it really is.

Speaker 1 (29:20):
But if we had to boil down the core message
from Professor Martinez, the main takeaway, it feels like this:
a fundamental shift in focus. Stop obsessing only about job loss
and start demanding fair value distribution.

Speaker 2 (29:33):
That's exactly it. The nature of work is changing dramatically,
but the value that humans contribute, whether it's writing code,
providing empathetic care, having a creative insight, or making a
critical ethical judgment, that value needs to be recognized and
crucially compensated fairly, and our systems need to be redesigned
purposefully to achieve that.

Speaker 1 (29:51):
Her roadmap isn't just philosophical, is it? It's quite practical,
very pragmatic.

Speaker 2 (29:55):
It requires action from everyone. Governments need to step up
with smart regulation, fair taxation and robust social safety nets.
Businesses need to look beyond pure short-term profit and
explore things like profit sharing and ethical AI deployment. And individual
workers? We all need to embrace continuous learning and adaptation, the

(30:15):
goal being an economy where the incredible
wealth generated by AI actually lifts society as a whole,
where it provides stability and opportunity for everyone, not just
for a tiny fraction at the top.

Speaker 1 (30:27):
It really puts the power and the responsibility back on us,
which leads perfectly into the final thought from Professor Martinez
that you wanted to share.

Speaker 2 (30:35):
Yes. She concludes one of her lectures with this really
powerful statement, and I think it's the perfect challenge for
all of us listening. AI doesn't determine our future. We do.
Let's make sure it's one where everyone gets a fair share.

Speaker 1 (30:46):
AI doesn't determine our future. We do. That really lands.

Speaker 2 (30:50):
I mean, the final challenge for you, the listener building
on that is to really reflect on your own work,
your own skills. Ask yourself, what is the unique, perhaps
irreplaceable human value that I bring. Is it my ability
to connect with people, my ethical judgment, my creative spark,
my skill in synthesizing complex ideas. Identifying that value clearly
is the first step. The next much harder step is

(31:12):
demanding that our economic systems recognize and compensate that value
fairly in this rapidly changing world. That's the work ahead.