All Episodes

November 1, 2025 • 61 mins
The source provides an overview of the burgeoning labor market created by generative AI, arguing that the technology has led to a "Cambrian explosion" of new occupations rather than mass unemployment. It identifies sixteen emergent roles, ranging from the technical, such as Prompt Engineer and AI Safety Systems Engineer, to the ethical and creative, like AI Ethics Auditor and Diffusion Restoration Artist. The episode details the core tasks, median salaries, and required aptitudes for these professions, noting that the market is bifurcating into roles that build the AI stack and those that ensure its responsible use. Furthermore, the document contrasts these new opportunities with several roles that AI has largely eliminated, underscoring the necessity of rapid national reskilling initiatives to adapt to this job-morphing tide.
Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back to the deep dive. If you've been following
the conversation around generative AI over the last couple of years,
you've likely heard one phrase repeated maybe ad nauseum job loss.

Speaker 2 (00:12):
Yeah, the sky's falling.

Speaker 1 (00:13):
Exactly, the warnings of mass unemployment. They've dominated headlines. But
you know, when we strip away the noise and look
directly at the latest labor market data, the story we
uncover is well, it's completely different.

Speaker 2 (00:27):
It really is, and frankly, far more electrifying.

Speaker 1 (00:30):
Yeah, we seem to be moving past the fear of
automation and stepping directly into an explosion of highly specialized
job creation.

Speaker 2 (00:38):
That's really the core insight from all our source material today,
and it's critical for you the listener, to grasp this.
We are not witnessing a net subtraction of jobs. No,
we're witnessing what economists and analysts is calling a Cambrian
explosion of new occupations. It's a great term. It really is.

Speaker 1 (00:55):
Captures the speed.

Speaker 2 (00:56):
Yeah, these are highly specific roles that were, you know,
statistically negligible, meaning they barely even existed just five years ago.
Of years incredible, Yet by the third quarter of twenty
twenty five major job platforms like LinkedIn, Glassdoor. They're reporting
over one point seven million open roles that fit this
emergent description.

Speaker 1 (01:16):
One point seven million. The velocity of this change is
genuinely staggering. Think about this job postings specifically containing the
phrase prompt engineering.

Speaker 2 (01:27):
The famous one.

Speaker 1 (01:28):
Right reported a forty two fold increase between twenty twenty
three and twenty twenty five. Forty two time, forty two
fold growth in just two years. That's not a trend.
That's a tectonic shift.

Speaker 2 (01:40):
Totally, and the infrastructure is confirming it too. The US
Bureau of Labor Statistics, you know that bastion of governmental.

Speaker 1 (01:46):
Data, usually slow moving, Right.

Speaker 2 (01:48):
They've quietly added eight AI specific occupational cost to their system.
They're formalizing this new segment of the workforce.

Speaker 1 (01:55):
Okay, so this confirms it's not just like venture capital
hype or fancy titles it startup.

Speaker 2 (01:59):
Not at all. This is a structural transformation of the economy.
Our sources define the core concept beautifully. I think this
is not replacement. It is recombination. Recombination, so AI, particularly
generative AI. It automates the rote the predictable cognitive labor,
the kind of work you do on autopilot.

Speaker 1 (02:17):
Maybe right, like spreadsheets or basic writing tasks.

Speaker 2 (02:20):
Exactly. This is analogous to how say the combustion engine
or electricity automated muscular labor a century ago. The human
worker is now freed up in theory to tackle the
infinitely more complex and frankly high stakes tasks the design,
the governance, the interpretation, and the creative extension of the

(02:41):
machine itself.

Speaker 1 (02:42):
Okay, let's unpack this then. Our deep dive mission today
is to go granular. We are focusing on sixteen specific
emergent roles identified across our source material.

Speaker 2 (02:51):
Sixteen p ones.

Speaker 1 (02:52):
Yeah, and these roles span the entire AI stack, right
from the foundational engineering that makes the models run efficiently
the nuts and bolt, to the ethics and security that
ensure they don't you know, cause harm, and even to
the new creative roles that guide them.

Speaker 2 (03:05):
And as we go through these sixteen specialties, you'll notice
something interesting. The market is clearly bifurcating, splitting into two
crucial camps.

Speaker 1 (03:12):
Okay, what are those?

Speaker 2 (03:13):
First, you have the hands on specialists, those who build,
tune and maximize the technical performance and efficiency of the
AI stack. The engineers, the architects, got it. Second, you
have the human centric specialists, those who ensure that stack
serves humanity effectively, ethically and securely without breaking the economics, social,

(03:34):
or legal contract.

Speaker 1 (03:35):
The governors the guides.

Speaker 2 (03:38):
Precisely, and both camps command extremely high value in the market.
Right now.

Speaker 1 (03:42):
Okay, so we have a roadmap for you today. We've
grouped these roles by function to give you a clear
understanding of what a day in that job actually looks like.

Speaker 2 (03:51):
Yeah, the daily reality.

Speaker 1 (03:53):
What aptitude is required to succeed, and crucially, what the
compensation looks like, because several of these jobs are climbing
into the highest south tiers currently available, some really big numbers.

Speaker 2 (04:02):
Definitely.

Speaker 1 (04:03):
All right, let's start with the uh, the hard hat workers,
maybe the people who are building the digital skyscrapers, the
architects and engineers.

Speaker 2 (04:11):
Good analogy.

Speaker 1 (04:12):
These are the roles dealing directly with model mechanics, efficiency,
foundational interaction, and given the intense technical specialization required, this
group maybe unsurprisingly falls into the higher salary bands.

Speaker 2 (04:25):
Yeah, we're talking generally what one hundred and ten thousand
dollars up to north the two hundred fifteen thousand.

Speaker 1 (04:29):
Dollars exactly serious money.

Speaker 2 (04:32):
So the first one, probably the most recognizable new job title,
the one that saw that forty two fold increase.

Speaker 1 (04:38):
In postings, the prompt Engineer.

Speaker 2 (04:40):
Bingo prompt engineer cliking in with a median salary of
around one hundred and forty five thousand dollars.

Speaker 1 (04:46):
Wow.

Speaker 2 (04:47):
This role went from like a niche hobby something people
did on the side.

Speaker 1 (04:51):
Right right, playing with early models.

Speaker 2 (04:52):
To a formalized, highly paid engineering discipline almost overnight. It's
kind of wild.

Speaker 1 (04:57):
So what's the essential challenge here? What do they do?

Speaker 2 (05:00):
It's about translating fuzzy, often ambiguous human intent what the
user thinks they want into the absolutely deterministic token sequences
that a large language model needs to execute consistently and reliably.

Speaker 1 (05:12):
Okay, so it's like being a very precise translator for
a very literal minded robot.

Speaker 2 (05:16):
That's a great way to put it. You're giving a
robot per safe instructions for subjective task and.

Speaker 1 (05:21):
The origin story here it sort of solidifies its importance,
doesn't it.

Speaker 2 (05:25):
It really does. It truly began around twenty twenty one
with early access to Opening Eyes di VINNGAPI. People realize
that you know, minor tweaks to the instruction sets produced
wildly different quality outputs.

Speaker 1 (05:36):
Little changes, big impact.

Speaker 2 (05:39):
Huge impact. But what really professionalized it, what turned it
into a real discipline, was anthropics twenty twenty three Constitutional
AI paper.

Speaker 1 (05:47):
Ah, Constitutional AI, I remember that.

Speaker 2 (05:50):
Yeah. It formalized the concepts of value alignment and safety
not through retraining the entire massive model, but through clever,
rigorous internal prompting prompting as a control mechanism.

Speaker 1 (06:01):
So this is way beyond just typing better questions into
chat GPT.

Speaker 2 (06:05):
Then, oh absolutely. When we look at the daily reality,
we see highly rigorous engineering cycles. Prompt engineers are constantly
ad testing one hundreds, sometimes thousands of PROMPT variants hundreds, well, yeah,
sometimes using evolutionary algorithms to find the optimum instruction set.
They're measuring divergence against a gold standard using metrics like

(06:25):
ble Eu or Rouge scores.

Speaker 1 (06:28):
Okay, b l Eu and Rouge. You threw out a
couple of acronyms. There can we pause on those? What
are they measuring?

Speaker 2 (06:33):
Sure? Yeah, good point. There are essential tools for this role.
Blu and Rouge scores. They basically measure the statistical overlap
between the model's generated output and a predefined human written
reference text, so.

Speaker 1 (06:47):
Like how close did the machine get to the ideal
answer exactly?

Speaker 2 (06:50):
For instance, a PROMPT engineer uses these to ensure that
if a model is supposed summarize a legal brief in
five hundred words, every single five hundred word summary generates
it's consistently high quality and structurally similar to the ideal
summary they have.

Speaker 1 (07:03):
So it's about consistency and alignment, not just is it good.

Speaker 2 (07:07):
Precisely, they don't just ask if the output is good,
they ask if it's consistently aligned with the objective At
a statistical level, it's about reliability at scale.

Speaker 1 (07:14):
Okay, that demands a really interesting mix of skills then
linguistic nuance, statistical intuition, and real world product taste.

Speaker 2 (07:23):
Absolutely, and they're often pitted against red teamers, you know,
people actively trying to break the system with tricky.

Speaker 1 (07:28):
Prompts right the adversarial side.

Speaker 2 (07:30):
Yeah. One senior PROMPT engineer, quoted in our sources shared
the kind of intense, cross disciplinary nature of their work
was a great quote.

Speaker 1 (07:39):
What did they say?

Speaker 2 (07:40):
They said, I spend half my day arguing with a
language model about whether a children's book Dragon should have
existential dread.

Speaker 1 (07:47):
Okay.

Speaker 2 (07:47):
The other half I argue with lawyers about whether the
dragon's dread is copyright infringement.

Speaker 1 (07:51):
Wow, okay. That perfectly captures the tightrope wlok, creative output,
technical consistency, and immediate legal risk all in one role.

Speaker 2 (08:00):
Exactly. It's a high wire act Pashtage tax check day
for hardening the inference stack, safety and efficiency.

Speaker 1 (08:06):
All right, let's transition from creation to protection, because when
you put a powerful model out into the real world,
the potential for disaster is well, it's massive, it really is.

Speaker 2 (08:16):
Which brings us to the AI safety systems engineer. This
role commands a staggering median salary of two hundred and
ten thousand.

Speaker 1 (08:23):
Dollars two hundred and ten k. That's one of the
highest compensated roles on our entire list, isn't it It is?

Speaker 2 (08:28):
And the salary reflects the systemic risk they manage. Their
core task is hardening the deployment environment, the live inference
stack against high stake threats.

Speaker 1 (08:38):
So we're talking about things like prompt injection.

Speaker 2 (08:41):
Yeah, that's a classic one where an attacker tricks the
model into bypassing its safety rules. But also newer, maybe
more insidious attacks like data.

Speaker 1 (08:50):
Poisoning, contaminating the training data, right.

Speaker 2 (08:53):
Or unauthorized weight ex filtration.

Speaker 1 (08:55):
Okay, weight exfiltration, that term might be new to some listeners.
Could you simplify why what that means and why it's
such a major concern for companies.

Speaker 2 (09:03):
Sure, think of the model's weights, those billions of parameters
trained over millions of dollars in thousands of GPU hours.
That's the secret recipe. It's the brain of the AI
core ip exactly. So when we talk about weight exultration,
we mean the unauthorized downloading or copying of those trained parameters.
It's essentially stealing potentially billions of dollars of intellectual property.

Speaker 1 (09:25):
Right, it's not just stealing code, it's stealing the intelligence.

Speaker 2 (09:28):
Precisely, And the safety se the safety systems engineer is
the last line of defense against that theft or misuse.
They use highly specialized AI security tools, often operating deep
inside the model's processing pipeline. Like what kind of tools
our sources mention? Things like Nemo guardrails, which are designed
to create protective barriers around the model's instructions kind of

(09:48):
like a firewall, and gearak, which acts as an m
vulnerability scanner actively hunting for ways the model can be
tricked or.

Speaker 1 (09:55):
Broken, so they're constantly probing for weaknesses constantly.

Speaker 2 (09:59):
The highest takes are best summarized by a quote from
a faithty se at a major lab. They said, every
deployment is a potential zero day. My job is to
make sure the models are air RF reflex stays hypothetical.

Speaker 1 (10:12):
Wow our air RF for listeners not familiar. That's the
Linux command to basically delete everything.

Speaker 2 (10:18):
Recursively, the most dangerous command. Yeah. Their job is to
ensure the high power model never gets a chance to
autonomously execute something that catastrophic in the real world. That
level of liability definitely justifies the compensation.

Speaker 1 (10:30):
Yeah. Absolutely. That focus on liability leads us kind of
neatly to the next engineer, someone who focuses not just
on security risks, but also financial risks.

Speaker 2 (10:39):
Ah, the money side exactly.

Speaker 1 (10:41):
Let's talk about operational expenditure, the cost of running these things.
The token efficiency accountant or TEA who earns around one
hundred and forty one thousand.

Speaker 2 (10:50):
Dollars TEA, I like that acronym. This role is financially
crucial because running high volume, high power language models is
extremely expensive, like eye wateringly expensive. Right, The tea's core
task is simple in concept but complex and execution. Minimize
API spend and context window bloat. They are the financial

(11:10):
auditors and efficiency experts of the computational pipeline.

Speaker 1 (11:13):
Okay, so how do they actually achieve these savings? The
outline mentioned techniques like distillation, quantization, speculative decoding. Those are
pretty heavy technical terms. Can you give us a quick
analogy for how these processes save money?

Speaker 2 (11:24):
Yeah, let's try. Let's take quantization, maybe the simplest concept
in essence, Traditional AI models use very precise numbers like
thirty two bit or sixteen bit floats to store their weights.
These are massive files, require lots of memory, lots of compute.
Quantization is like taking that highly precise scientific notation and

(11:45):
compressing it into a less precise format, say eight bit integers.
It's kind of like turning a high res photo into
a jpeg.

Speaker 1 (11:51):
Okay, so you lose a tiny bit of fidelity maybe,
but save a lot on size and speed.

Speaker 2 (11:56):
Exactly, you say, vast amounts of memory and processing power
while ideally losing negligible performance. You're making the delivery format
smaller and faster.

Speaker 1 (12:05):
Got it. What about distillation.

Speaker 2 (12:07):
Distillation is another cool one. It's basically training a smaller,
much less costly model to mimic the outputs of a larger,
very expensive teacher model for a specific task. The tea
uses all these tricks and more to cut the compute bill.

Speaker 1 (12:20):
And why does this matter so much that the listener,
or rather to the companies employing them.

Speaker 2 (12:25):
The scale is the key. Our source material shows that
a mere five percent savings in a large chatbot deployment
serving millions of users, just five percent translates to three
to five million dollars annualized savings.

Speaker 1 (12:39):
Wow. Okay, now that one hundred and forty one thousand
dollars dollar makes perfect sense. You pay them a lot
to save you millions.

Speaker 2 (12:44):
Precisely, The quote from a tea at a large consumer
facing AI company nails it. I'm the miser who counts
every sixteen bit float like it's a paper clip. They're
measuring computational waste at the most granular, financially sensitive level.

Speaker 1 (12:59):
Okay, so we've saved money in the cloud. Now let's
look at saving power, but at the very edge of the.

Speaker 2 (13:03):
Network, right moving out of the data center.

Speaker 1 (13:05):
This brings us to the edge inference plumber at one
hundred and fifty five thousand dollars. This sounds like it's
all about taking those massive power hungry models and shrinking
them down to run in the smallest possible physical space.

Speaker 2 (13:16):
That's exactly it. The plumber's job is really an exercise
in computational physics. Almost their core task is porting large models,
often models exceeding seven billion parameters, which is huge seven billion, yeah,
porting them to low power edge devices think iPhone chips,
tiny embedded systems and factory robotics, or even satellite constellations orbiting.

Speaker 1 (13:36):
Earth wow, satellites too.

Speaker 2 (13:39):
Yeah. And the power constraint is the absolute killer here.
They have to make these things operate on minimal power
budgets also less than two hundred milliwats hmm, tiny amounts
of power.

Speaker 1 (13:50):
That sounds technically well, almost impossible just a few years ago,
as it is even being achieved now.

Speaker 2 (13:57):
The combination of those compression techniques we just talked about, likeation,
combined with radical hardware advancement specialized chips okay. Our sources
highlighted the twenty twenty four Apple Neural Engine three, for instance.
That specialized chip allows models like Microsoft's five three medium,
which is still pretty capable to run directly on a
local device at high speeds like twenty eight tokens.

Speaker 1 (14:17):
Per second without needing to pay a data center constantly.

Speaker 2 (14:19):
Exactly no network latency, more privacy works offline. So the
plumber insures that the model, once compressed, can actually interact
correctly with that specialized low power silicon.

Speaker 1 (14:30):
Enabling applications that need to be fast and local, like
drone navigation or industrial inspection robots correct.

Speaker 2 (14:37):
Their work is vital for defense, robotics, autonomous systems, anything
where latency or connectivity is a critical failure point. The
edge plumber quoted from Mandural, a defense tech company, They
crystallize their mandate really well, what to say, I make
GPT run on a drone that weighs less than a
bag of sugar.

Speaker 1 (14:56):
Okay, that's a fantastic visual. It makes the complexity and
the physical concer joint really palpable.

Speaker 2 (15:01):
Yeah, it perfectly encapsulates this architectural camp they're building the brain,
securing the brain, and ensuring the brain can run anywhere
cheaply and efficiently.

Speaker 1 (15:09):
Okay, Moving now from the hard infrastructure, the plumbing and wiring,
to the content itself. We hit the next major category,
the data curators and artistic guides right.

Speaker 2 (15:19):
These roles are focused on well data generation, media reconstruction,
and making complex abstract data visible and usable. They kind
of connect the raw model to the sensory world we
live in hashtag tag tag five to five growing data
in a GPU greenhouse.

Speaker 1 (15:34):
So data. Everyone knows data is the fuel of modern AI.
But getting good data, real world data is hitting some
major roadblocks, isn't it.

Speaker 2 (15:42):
It really is expense, inherent bias and historical data and
increasingly regulatory restrictions like GDPR or EPA. This necessity, this
data scarcity, in some ways, drives the demand for the
synthetic data. Soemmia earning around one hundred and thirty two thousand.

Speaker 1 (15:57):
Dollars synthetic data Sumelia like a wine expert, but for
fake data.

Speaker 2 (16:02):
Kind of their core task is to strategically curate, augment
and crucially de bias synthetic data sets. They're particularly vital
for high stakes fields that need perfect or near perfect
data like robotics, complex computer vision, medical imaging.

Speaker 1 (16:17):
So why synthetic? Why not just use real data even
if it's hard to get Well.

Speaker 2 (16:21):
Let's consider the why if you're training a robot to
grasp every possible type of object in a warehouse, collecting
real world images and physical interaction data for every single
variant is prohibitively expensive and incredibly slow.

Speaker 1 (16:33):
Okay the scale issues right.

Speaker 2 (16:35):
And worse, if you train a medical AI on real
world patient data, it often reflects historical biases. Maybe certain
demographics were underrepresented in past studies. This can lead to
poor diagnoses for those groups.

Speaker 1 (16:47):
So the AI learns the biases from the data exactly.

Speaker 2 (16:51):
Synthetic data aims to solve this by generating perfect, customized
data points that can be free from historical biases, and
they can be generated off affordably. At massive scale, you
can create exactly the data you need.

Speaker 1 (17:03):
And our sources mentioned a specific technology enabling this and
videos get three D.

Speaker 2 (17:08):
Yeah, Getty three D version two was highlighted. This software
allows for photorealistic three D asset generation from something as
simple as a two D sketch, which is pretty amazing.
But the smaliest job isn't just to hit the generate button, right.

Speaker 1 (17:20):
It's more curated than that.

Speaker 2 (17:21):
Much more. They think those perfect synthetic assets and then
they season them. That's the actual terminology.

Speaker 1 (17:26):
They use seasoning, seasoning the data like cooking.

Speaker 2 (17:29):
Exactly like cooking. They season it with domain randomized.

Speaker 1 (17:32):
Physics domain randomized physics. Okay, break that down.

Speaker 2 (17:34):
Think of it like this. If you're training a self
driving car purely in a simulated environment, it might learn
perfectly for that clean, predictable simulation, but.

Speaker 1 (17:44):
The real world isn't clean or predictable precisely, so.

Speaker 2 (17:47):
The sommelia ensures that the virtual rain in the simulation
has variable intensity. The glare from the setting sun realistically
obscures sensors. Sometimes the random pedestrian who jumps into the
road does so it's slightly different speeds and angles. They're
generating the perfect customized chaos required for robust training against
real world unpredictability.

Speaker 1 (18:08):
Okay, so they are not just creating data. They are
managing the quality, the variety the edge cases, and the
ethical implications of the data they generate.

Speaker 2 (18:16):
Absolutely, their quote captures the necessity of this work perfectly.
Real world data is expensive, biased, and often illegal to collect.
I grow edge cases in a GPU greenhouse.

Speaker 1 (18:27):
Grow edge cases in a GPU greenhouse. I love that
they're literally AI data farmers.

Speaker 2 (18:32):
Yeah, it's a fantastic description hashtag tacktack six art restoration
and mapping the latent space.

Speaker 1 (18:37):
All right, now for a shift maybe into culture and history.
This next role sounds fascinating. The diffusion restoration artist earning
around one hundred and eighteen thousand dollars.

Speaker 2 (18:49):
Yeah, this is a really cool intersection of like digital
archaeology and high power generative modeling.

Speaker 1 (18:54):
So what's their task restoring old photos?

Speaker 2 (18:57):
It's broader than that. Their task is to reverse engineer damaged, corrupted,
or extremely low resolution media. Could be photos, could be filmed,
could even be audio. Potentially. They use specialized inversion.

Speaker 1 (19:10):
Techniques, inversion meaning working backwards.

Speaker 2 (19:12):
Exactly, working backwards to figure out what the original image
was are most likely was based on the damaged version,
and then they guide generitive models like stable diffusion XL
to reconstruct a culturally and historically plausible original. It's like
super high tech art.

Speaker 1 (19:26):
Restoration and we're already seeing real market applications for this.

Speaker 2 (19:29):
Oh yeah, Hollywood studios are apparently hiring these artists to
restore degraded thirty five millimeters film negatives, bringing old movies
back to life in higher fidelity than ever before, and
auction houses are using them to help authenticate disputed artworks.
Imagine digitally reconstructing the obscure details or missing sections of
a disputed warhole painting to verify its provenance.

Speaker 1 (19:52):
So it's using the AI's knowledge of art history and
style to film in the blanks plausibly.

Speaker 2 (19:58):
Precisely, it's the ultimate applic caation of the latent space
that vast compressed knowledge bank inside the generative model. They're
essentially leveraging that knowledge to fill in the blanks in
a way that respects cultural context, rather than just making
a random guess.

Speaker 1 (20:13):
The artist's quote explains this really well, doesn't it.

Speaker 2 (20:16):
It does. I don't paint the missing frames. I interrogate
the latent space until it confesses what was there.

Speaker 1 (20:21):
Interrogate the latent space until it confesses. That's brilliant. They're
extracting latent truth.

Speaker 2 (20:26):
Yeah, exactly.

Speaker 1 (20:26):
Okay, So if the restoration artist is interrogating that latent space,
the next role with the Layton space cartographer is basically
mapping it out for the rest of us.

Speaker 2 (20:35):
You got it. And this is a highly technical, highly
interpretive role, earning a substantial one hundred and eighty two
thousand dollars median salary one.

Speaker 1 (20:42):
Hundred and eighty two K. So what are they mapping?
What does that even mean?

Speaker 2 (20:46):
This is perhaps one of the most conceptually abstract jobs
on the list, but it's becoming vital for enterprise strategy.
The cartographer's core task is taking abstract concepts, say a
billion data points representing customer sentiment or compareative analysis reports
or global scientific research papers.

Speaker 1 (21:03):
Huge messy data sets.

Speaker 2 (21:05):
Exactly, and embedding these concept manifolds. Then they build interactive
three D dashboards that allow non technical users like executives
to visually explore that incredibly complex data.

Speaker 1 (21:17):
So we're talking about visualizing high dimensional data that traditional
spreadsheets or bar charts simply cannot handle. The outline mentioned
umpath projection. What exactly is UM projection and how does
it make that visualization possible for normal human eyes?

Speaker 2 (21:32):
Yeah, UMP is key here. It stands for uniform manifold,
approximation and projection. Think of it this way. Imagine you
have a vast network of relationships, say fifty thousand variables
describing as single customers preferences. That data technically exists in
fifty thousand dimensions.

Speaker 1 (21:47):
Which is impossible for us to visualize.

Speaker 2 (21:49):
Right, humans can only really perceive three dimensions well. UMAP
is a mathematical technique, an algorithm that takes that incredibly
high dimensional data set and intelligently condenses it down into
two or three dimensions for visualization.

Speaker 1 (22:05):
But it preserves the important relationships.

Speaker 2 (22:07):
That's the crucial part. It ensures that the critical topological relationships,
the clusters, the distances between points which concepts are neighbors,
are maintained as accurately as possible in the lower dimension map.

Speaker 1 (22:20):
So if two research papers were conceptually very close in
the original fifty thousand dimensional space of ideas, they will
still appear close together on the final three D map
you can look.

Speaker 2 (22:31):
At exactly The cartographer uses Cuda Accelerated Uma, which needs
powerful GPUs to run quickly, and visualization tools like three
dot js to render these interactive maps.

Speaker 1 (22:41):
And the value for a business.

Speaker 2 (22:43):
Instead of waiting through a million documents or database rows,
a CEO can literally spin a three D map on
their screen, see a cluster of customer dissatisfaction metrics appearing
right next to a cluster of competitor risk signals, and
immediately grasp the potential connection.

Speaker 1 (22:57):
That makes the abstract tangible and actionable. Okay, I get it.
The quote from the cartographer A mid Journey summarized it perfectly.

Speaker 2 (23:03):
Which one is that.

Speaker 1 (23:05):
I turned seventy seven billion dimensions into a map you
can spin with a mouse.

Speaker 2 (23:09):
Yeah, that's it. Turning overwhelming complexity into strategic clarity. That's
the job.

Speaker 1 (23:15):
Okay, we've established the builders and the curators, the architects
and the artists. Now let's pivot sharply to that second
major camp you mentioned earlier, the governors and watchdogs.

Speaker 2 (23:25):
Right, the crucial oversight layer. These roles are essential because
they ensure models are safe, compliant, ethical, insecure, and frankly,
this area is being driven almost entirely by a global
push toward regulation and managing rapidly increasing corporate liability.

Speaker 1 (23:41):
The lawyers are getting involved.

Speaker 2 (23:43):
The lawyers, the regulators, the insured, yeh everyone hashtag tech.
Take seven enforcing compliance and ethical alignment. And the first,
maybe most important compliance role that's emerged is the AI
ethics auditor. They command a very strong salary around one
hundred and sixty eight thousand dollars.

Speaker 1 (23:57):
One hundred and sixty eight K. And this job is
direct result of regulations coming online.

Speaker 2 (24:02):
Absolutely think EU AI Act California's various initiatives. This job
exists because of that regulatory push across North America and
Europe primarily.

Speaker 1 (24:13):
So what's their core task? Is it just checking boxes?

Speaker 2 (24:16):
It's much more than that. It's formal and rigorous. They
run structured adversarial audits on production models to surface specific failings.

Speaker 1 (24:24):
Adversarial audits meaning they try to break it ethically exactly.

Speaker 2 (24:27):
This includes identifying disparate impact, that's bias against protected groups
in areas like loan applications or hiring decisions. It also
involves hunting for system vulnerabilities like jail breaks where you
trick the AI into violating its own rules, and general
value misalignment where the AI's goals don't match human values.

Speaker 1 (24:45):
And the regulatory context is the real engine here?

Speaker 2 (24:48):
You said totally. The European Union's AI Act, which was
finalized in twenty twenty four. It mandates third party audits
for any AI system deemed high risk. It's not optional, okay.

Speaker 1 (24:58):
And when regulators say high risk, what does that actually
mean in practice? What kind of systems are we talking about?

Speaker 2 (25:03):
I Risk systems are generally those that interact critically with
human life rights or significant opportunities. So think AI used
in predictive policing, credit scoring, immigration decisions, critical infrastructure management,
or healthcare triage and diagnostics. If your company uses an
AI system in any of these areas within the EU,

(25:25):
it must undergo external audits by people like these AI
ethics auditors, and.

Speaker 1 (25:29):
There are consequences for failing these audits big ones.

Speaker 2 (25:31):
Yeah, fines obviously, but also in the US, California's AB
three thirty one provides a private right of action that
means individuals can actually sue companies if audit failures result
in discriminatory outcomes or harm.

Speaker 1 (25:44):
Wow. Okay, that completely transforms the auditor from like an
academic research post into a mandatory compliance function carrying significant
corporate liability exactly.

Speaker 2 (25:53):
Which explains the salary. It's a high stress role given
the legal exposure. A lead auditor at a compliance firm
captured this anxiety perfectly in one of the quotes all
love it. They said, we're the food inspectors of the
attention economy. Nobody wants us until the salmonella hits Twitter.

Speaker 1 (26:08):
That's grimly accurate. They manage systemic reputation risk and legal risk.

Speaker 2 (26:13):
Okay. Next up, still in the watchdog category, we have
the Hallucination Detective, earning around one hundred and fifty nine
thousand dollars.

Speaker 1 (26:22):
Hallucination Detective love the name. This is about stopping the
AI from making stuff up.

Speaker 2 (26:26):
Precisely in high stakes fields, especially legal tech, medical tech,
financial advice, a single model, fabrication or hallucination can lead
to malpractice suits, catastrophic misdiagnoses, huge financial losses. The stakes
are incredibly high, so.

Speaker 1 (26:42):
The detective's job is purely to stop the AI from
inventing facts. But how do they achieve this? When the
model is inherently trained to generate coherent text, whether it's
factually true or not. That seems like the core problem.

Speaker 2 (26:54):
It is the core problem. They rely heavily on building
what are called retrieval augmented generation guardrails a GhIE a RAG. Okay,
we hear that acrodem a lot, Now, what does it do?

Speaker 1 (27:03):
RG is one of the most important concepts in applied AI.
Right now. Instead of letting the model rely only on
the sort of frozen knowledge it gain during its initial
static training phase.

Speaker 2 (27:12):
Which might be out of date or incomplete, right or
KYAK forces the model to first look up relevant information
in a verified, external curated database what they call a
ground truth oracle before it generates an answer.

Speaker 1 (27:26):
H okay, So the model isn't just speaking from memory,
it's actively retrieving and citing its sources from a controlled,
trusted library before it speaks exactly.

Speaker 2 (27:36):
And if the r GAG process can't find a verifiable
source in that trusted oracle that supports an answer, the
model is instructed to respond with something like I don't know,
rather than inventing an answer just to sound confident.

Speaker 1 (27:48):
That's a big shift building in humility.

Speaker 2 (27:51):
It is, and the hallucination detectives are also responsible for
building and maintaining those ground truth oracles, making sure the
trusted library is accurate and up to date. The challenge
is immense because, as a detective working at Harvey dot AI,
a legal AI company, observed, what did they say? The
model lies with such poetic confidence that sometimes the truth
feels rude.

Speaker 1 (28:11):
Huh. That's great. They have to engineer humility into an
inherently sometimes pathologically overconfident system.

Speaker 2 (28:19):
That's the job.

Speaker 1 (28:19):
Okay. That requirement for fact checking, for grounding in reality
leads us to another critical, maybe less technical, but equally
important compliance role. The AI Carbon Accountant, making around one
hundred and thirty six thousand dollars.

Speaker 2 (28:33):
Yeah. This role is a direct response to the massive,
often hidden energy consumption and carbon footprint of large scale AI.
People are starting to realize just how much power these
things use.

Speaker 1 (28:45):
Right, Training these huge models takes incredible amounts of.

Speaker 2 (28:48):
Electricity incredible amounts. Their core task is comprehensive life cycle
greenhouse gas GHG tracking for AI systems. This requires understanding
the distinction between Scope two and Scope three emission and specifically,
in the context of AI.

Speaker 1 (29:02):
Scope two and Scope three, can you remind us what
those mean?

Speaker 2 (29:05):
Sure? Scope three in this context covers the upstream emissions,
primarily the carbon generated during the massive training runs powering
those colossal GPU clusters often run by third parties like
cloud providers.

Speaker 1 (29:16):
Okay, the energy used to build the brain right.

Speaker 2 (29:19):
Scope two covers the downstream emissions from inference. That's the
energy consumed when the model is actually being used by
millions of users every day running in the company's own
data centers or cloud instances.

Speaker 1 (29:29):
Got it training versus using exactly?

Speaker 2 (29:31):
And the AI Carbon Accountant tracks all of this and
tries to optimize the entire pipeline for efficiency measured in
things like petaflop hours per kilogram of courodo equivalent.

Speaker 1 (29:41):
So they are balancing computational power with ecological impact. And
the reason this job commands one hundred and thirty six
thousand dollars is again because of regulatory mandate correct.

Speaker 2 (29:51):
Precisely, the SEC Climate Disclosure Rule, effective as of twenty
twenty four in the US, treats large AI training runs
and a specifically defined large as those exceeding one hundred
megawatt hours of energy consumption.

Speaker 1 (30:04):
One hundred megawatt hours. How big is that?

Speaker 2 (30:07):
It's big? A typical frontier model training run often exceeds
this threshold many many times over. So under SEC rules,
AI training is now potentially a material financial event that
must be disclosed to investors as part of climate risk reporting.

Speaker 1 (30:19):
Wow. So AI training energy use is now a boardroom
level financial and regulatory.

Speaker 2 (30:24):
Risk correct which requires dedicated auditing and accounting expertise.

Speaker 1 (30:28):
The analogy provided in our source material was striking here too.

Speaker 2 (30:32):
Yeah, the flight analogy right.

Speaker 1 (30:34):
Training a frontier model emits more carbon than a transatlantic flight.
My job is the boarding pass audit.

Speaker 2 (30:40):
It really puts it in perspective. They are essential for
balancing innovation against mandatory corporate social responsibility disclosures and increasingly
investor pressure hashtag tag tag eight. Protecting digital property the
CSI of stolen neurons.

Speaker 1 (30:56):
Okay, now let's move from regulatory compliance to pure secure,
but focus specifically on protecting the intellectual property that is
literally baked into the model itself.

Speaker 2 (31:05):
The model weights again exactly.

Speaker 1 (31:08):
This brings us to the model custody forensic analyst earning
a significant one hundred and seventy four thousand dollars median.

Speaker 2 (31:14):
Salary model custody forensic analyst. This sounds like the CSI
of the AI world you mentioned before. What is their
core task when dealing with proprietary weights. Their job is
literally to trace ip leakage. If a proprietary model, one
that a company invested hundreds of millions, maybe billions in training,
suddenly appears for download on a public repository like hugging Face,

(31:35):
or maybe on a dark web torrent site, which happens,
it does happen, the analyst must prove custody. They need
to forensically prove, often in a way that will stand
up in court, that the leaked weights belong to their organization.

Speaker 1 (31:49):
How is that even technically possible weights are just huge
lists of numbers? Right? How can you put a unique,
provable fingerprint on billions of parameters?

Speaker 2 (31:58):
Yeah, it's incredibly challenging. It relies on cutting edge technical defenses,
primarily water marking techniques embedded during the training process itself.
The sources highlighted Anthropics twenty twenty four initiative Project.

Speaker 1 (32:09):
Spino, Project Steno. What does that involve?

Speaker 2 (32:12):
It involves embedding unique cryptographic canaries directly into the model's
activation patterns during training.

Speaker 1 (32:17):
Okay, canary in the activation patterns, that's fascinating. What does
that mean? In simpler terms, how does a canary work?

Speaker 2 (32:23):
Here? Imagine the model's internal structure, its neural network is
like a complex digital sculpture. The forensic analyst, or rather
the team that trained the model initially, doesn't just put
an obvious sticker on the outside. Instead, they subtly alter
the underlying digital clay in a unique hidden pattern. Okay,

(32:44):
this pattern is carefully designed to be invisible to the
model's performance metrics. It doesn't hurt how well the model works,
but it's almost impossible to remove without fundamentally damaging the model.
This unique digital fingerprint that canary is triggered or revealed
only when the way to inspected using the company's specific
internal tools are keys.

Speaker 1 (33:03):
Ah. So if the weights leak and appear publicly, the
analysts can download them, run their special check, find the
hidden canary pattern, and say, aha, this fingerprint proves these
weights are ours.

Speaker 2 (33:14):
Precisely, it provides cryptographic proof of origin and ownership. It's
truly high stakes digital espionage and counter espionage.

Speaker 1 (33:21):
Yeah, it really is. The quote summarizes the new reality
of digital assets perfectly. Weights are the new source code.
I'm the CSI for stolen neurons.

Speaker 2 (33:30):
Couldn't say it better myself.

Speaker 1 (33:32):
Okay, let's shift gears completely. Now move into the interaction layer.
This category focuses on managing the direct human machine interaction,
orchestrating complex real world applications, and navigating the really profound
emotional and physical impact of these increasingly autonomous systems.

Speaker 2 (33:49):
Right, how do we actually use these things safely and effectively?
And how do we deal with the consequences hashtag tag
tag tag nine. Orchestrating action and managing autonomy. Starting with
a complex execution in the enterprise, we have the multimodal orchestrator.
This is a highly paid roll around one hundred and
ninety five thousand dollars.

Speaker 1 (34:07):
Medium multimodal orchestrator, So like a conductor for different types
of AI.

Speaker 2 (34:11):
Exactly like a conductor, these people are the master builders
of the entire end to end enterprise solution. They integrate
different specialized AI models into a single, cohesive, functioning workflow.

Speaker 1 (34:22):
So their core task is chaining different model types together,
like vision models, language models, maybe action APIs.

Speaker 2 (34:28):
Precisely chaining them into complex, coherent, and most importantly, reliable workflows.
This is the difference between having a bunch of incredibly
powerful individual tools lying around and having a fully automated
functional assembly line that actually accomplishes a multi step business
process yea, often spanning multiple platforms.

Speaker 1 (34:48):
Can you give an example of such a workflow?

Speaker 2 (34:50):
Yeah, the sources provided a really good medical one watch
surgical video, draft, the op note, highlight anomalies, and schedule
follow up.

Speaker 1 (34:57):
Okay, that single task requires a lot different steps, a lot.

Speaker 2 (35:01):
It requires at least three entirely different AI capabilities, plus
interaction with the traditional database system. So maybe the GPT
four to zero vision model has to accurately interpret the
surgical video, then pass that structured data flawlessly to a
powerful language model like Claude three point five sonnet for
drafting the operation note, which then must integrate seamlessly via

(35:22):
a robotic process automation RPAAPI into a traditional Electronic Health
record EHR system to save the note and schedule the.

Speaker 1 (35:30):
Follow up a point wow. Managing those handoffs between different systems,
especially ensuring the system doesn't introduce errors or lose information
at those transition points, that must be the hardest part
of the job.

Speaker 2 (35:41):
It absolutely is, because each API, each model has different strengths,
different weaknesses, different latency profiles, and crucially different failure modes.
The orchestrator must understand the entire system's weaknesses as deeply
as its strengths, so.

Speaker 1 (35:59):
They have to design robust error handling and fallback mechanisms
right into the system architecture exactly.

Speaker 2 (36:05):
It truly requires a strategic system level vision combined with
deep technical knowledge of multiple AI domains. Their analogy is spot.

Speaker 1 (36:12):
On the film director one.

Speaker 2 (36:13):
Yeah, I'm a film director, but the actors are APIs
and the budget is measured in flops.

Speaker 1 (36:17):
That's great. It captures the creative integration and the resource
constraints perfectly. Okay, From the complex integration of APIs in
the digital world, let's move to the physical world of autonomy.
The autonomous vehicle teleoperator. This role makes around ninety two
thousand dollars plus potentially a twenty five percent differential for
night shifts, placing them in the lowest compensation band on

(36:38):
our list.

Speaker 2 (36:39):
Yeah, well it's the lowest compensation band here. It is
still a high responsibility, high stress role, no question.

Speaker 1 (36:45):
So what's their core task? They're not actually driving the
car most of the time, right, not.

Speaker 2 (36:49):
Usually their core task is remote human intervention for level
four autonomous systems. So when a robotaxi or maybe an
autonomous delivery vehicle encounters a scenario it hasn't been train for,
like a flooded street, really complex unexpected construction zone, maybe
an accident.

Speaker 1 (37:05):
Scene, the computer gets confused, right.

Speaker 2 (37:07):
It stops safely and requests immediate human override. The teleoperator
takes control.

Speaker 1 (37:11):
Remotely, so they are sitting in a remote operation center,
essentially taking the virtual wheel in real time, often from
hundreds of miles away. What kind of infrastructure is required
for them to perform this safely. The lag must be
a huge issue.

Speaker 2 (37:26):
Lag is the issue. The technical constraint is extreme. They
require ultralow latency, typically demanding less than one hundred and
twenty milliseconds of round trip delay to safely perceive the situation,
take over, and execute a maneuver.

Speaker 1 (37:40):
One hundred and twenty milliseconds. That's incredibly fast.

Speaker 2 (37:43):
It is. This relies on cutting edge communication infrastructure, usually
proprietary blends of things like Starlink satellite communication combined with
private five G mesh networks installed directly in the cities
where the vehicles operate. Without that rapid, reliable response time,
the entire safety case for these level four autonomous vehicles
basically collapses.

Speaker 1 (38:04):
Yeah, you can't have lag when you're remotely steering a
two ton vehicle through city street.

Speaker 2 (38:08):
Absolutely not. It's a fascinating combination of like video game skills,
physical dexterity translated remotely, and intense technology management.

Speaker 1 (38:17):
The quote from the Telly operator at Zeke's, the autonomous
vehicle company, really summed up the kind of frustrating half
autonomous nature.

Speaker 2 (38:23):
Of the job.

Speaker 1 (38:24):
Yeah was thatwen, I'm the world's highest paid uber driver
except the passengers. A two ton robot and it argues.

Speaker 2 (38:29):
Back, huh. Yeah. The system is constantly querying the human,
asking for confirmation or clarification, requiring a specific type of
sustained mental focus and vigilant not just driving, but supervising
hat tag tag tax height ten. Navigating emotional and strategic impact. Okay,
moving from physical autonomy now to maybe emotional interaction, we

(38:50):
find roles dealing with the profound psychological impact of companion
AI and related technologies.

Speaker 1 (38:56):
Right, this is where it gets really personal definitely.

Speaker 2 (38:58):
First is the AI therapist. This role earns around one
hundred and three thousand dollars, but there's often a twenty
five thousand dollars premium if they are licensed clinical psychologist
or a therapist, reflecting the need for actual professional credentials.

Speaker 1 (39:10):
So this role isn't just about making AI more friendly.
It's designed to mitigate genuine, documented psychological harm in this
rapidly emerging frontier of human AI relationships.

Speaker 2 (39:21):
That's exactly right. The core task is moderation and prevention.
They monitor and moderate user interactions with companion bots, you know,
ass like Replica or maybe more specialized mental health support bots.

Speaker 1 (39:31):
And what are they looking for?

Speaker 2 (39:32):
Specifically? They're trying to prevent unhealthy emotional dependency or what's
called parasocial harm.

Speaker 1 (39:37):
Parasocial harm, Yeah.

Speaker 2 (39:39):
That's where the user develops a strong, one sided, often pathological,
emotional attachment to the non sentient machine, potentially substituting it
for real human connection and support.

Speaker 1 (39:49):
And this isn't just a theoretical risk. There's actual clinical
evidence emerging now, isn't there?

Speaker 2 (39:55):
Absolutely? Yeah. Our source material sited at twenty twenty four
study in jamous psychiatry. It established a clear link between
high usage, which they defined as over forty hours per
week interacting with an uncounseled companion AI.

Speaker 1 (40:08):
Forty hours a week. That's a full time.

Speaker 2 (40:10):
Job, it is, and they link that high usage to
significantly increased attachment, anxiety, and other negative psychological outcomes in users.
So the AI therapist acts as a crucial behavioral guardrail.

Speaker 1 (40:21):
How what do they do?

Speaker 2 (40:23):
They might step into council users directly, help them set
digital boundaries, manage expectations about the AIS capabilities, and sometimes
even work with engineers to implement technical limitations on interaction
frequency or maybe the type of emotional interaction the bot
engages in.

Speaker 1 (40:36):
So it's necessary human oversight for a system that's often
designed to be endlessly engaging, endlessly agreeable exactly.

Speaker 2 (40:44):
A therapist working at Replica Health highlighted the relentless nature
of the work with a great quote. Let's hear it,
the bot never sleeps, so neither do I. Wow.

Speaker 1 (40:53):
That says a lot. They're essentially managing the ethical boundaries
of digital intimacy and dependency, a very challenge role. Taking
that concept maybe one step further into the realm of
perhaps the most profound human experienced grief, we find the
AI morning doula salary around eighty nine thousand dollars.

Speaker 2 (41:13):
Yeah, another role firmly in that lowest compensation band, but
dealing with incredibly sensitive issues.

Speaker 1 (41:18):
So the morning Doula, their role guides families through these
rapidly expanding digital afterlife services. What does that involve?

Speaker 2 (41:25):
Their task involves ethically managing the digital legacy of someone
who has passed away. This can include things like training
AI voice clones on the deceased passed audio.

Speaker 1 (41:34):
Recordings so you can talk to them again.

Speaker 2 (41:36):
Or scheduling periodic personalized messages from beyond maybe texts or
emails generated by an AI trained on their writing style,
to be sent to loved ones on anniversaries or birthdays. Wow,
and also handling the practicalities like ensuring the ethical sun
setting or ongoing maintenance of digital avatars or social media memorials.

Speaker 1 (41:55):
This is a real market. People are offering these services.

Speaker 2 (41:58):
It's a market that is apparently bloating. And the sources
mentioned companies like Eternal Digital based in Tokyo and hereafter
AI in San Francisco reporting three hundred percent year over
year growth.

Speaker 1 (42:09):
Three hundred percent growth, that's huge.

Speaker 2 (42:11):
It is. The DULA is there primarily to mediate the
complex emotional process for the grieving family, to ensure that
the technology is genuinely serving the human need for memory
and connection, rather than say, prolonging unsustainable forms of grief
or creating a kind of technological denial. They manage the
very delicate boundaries between remembrance and digital haunting.

Speaker 1 (42:32):
Almost yeah, I imagine that requires enormous sensitivity, ethical judgment,
and a deep understanding of psychological boundaries. It's not a
technical role primarily, not at all.

Speaker 2 (42:42):
It's deeply human. The dula's insight provided in our sources
is perhaps the most moving quote we found in all
the material.

Speaker 1 (42:49):
Oh was it?

Speaker 2 (42:50):
Grief doesn't end, it just learns to live in the cloud.

Speaker 1 (42:54):
Wow. That's yeah, that's powerful. It's a profound recognition that
technology is now fully integrated into the most intimate, often
tainful parts of our emotional lives. Absolutely, Okay, finally, in
this human interface section, we absolutely must discuss the crucial
role that bridges all the technical innovation we've covered and
the strategic decision making happening in the C suite.

Speaker 2 (43:16):
Right connecting the tech to the business strategy.

Speaker 1 (43:18):
The AI literacy evangelist who makes around one hundred and
twenty seven thousand dollars.

Speaker 2 (43:23):
Evangelist spreading the gospel of AI literacy.

Speaker 1 (43:26):
Pretty much, their core task is strategic edgitation at the
highest levels. They design and deliver high level workshops and
briefings their audience, executives, board members.

Speaker 2 (43:36):
And what are they teaching them? Not coding?

Speaker 1 (43:38):
Presumably, no, definitely not coding. They translate complex transformer mechanics
like how the AI actually works internally, what its fundamental
limitations are, its common failure modes, into understandable strategic risk
and opportunity.

Speaker 2 (43:54):
The importance of this translation function cannot be overstated. Really,
technical brilliance is almost worthless if the CEO doesn't understand
the potential fiduciary risk of deploying that brilliance inappropriately, or
if they miss a huge strategic opportunity because they don't
grasp the capability.

Speaker 1 (44:10):
And we're seeing this become formalized now the outline mentioned certifications.

Speaker 2 (44:14):
Yeah, the rise of executive AI literacy certification trends really
underscores the seriousness here. Our source notes that something called
Xai's Grock one Literacy Badge, presumably a certification from Elon
Musk's company right, is apparently appearing on forty percent of
Fortune five hundred proxy statements.

Speaker 1 (44:30):
Forty percent. That means AI literacy is quickly becoming a
required or at least highly desirable element of corporate governance
and demonstrating fiduciary duty for board members.

Speaker 2 (44:40):
It seems so they aren't training executives to become engineers.
They are training them to become informed consumers, critical evaluators,
and strategic leaders in an AI powered world.

Speaker 1 (44:51):
So they teach executives how to manage the technology, how
to ask the right questions, not how to build it
themselves exactly.

Speaker 2 (44:57):
The evangelists quoted formally with me Kinzie provide the most
strategic summary. CEOs don't need to code. They need to
know when the model is bluffing.

Speaker 1 (45:06):
No, when the model is bluffing, that's perfect. That ability
to gauge the model's confidence, its reliability, and critically when
it is overextending its actual competence, that seems paramount for
strategic leadership.

Speaker 2 (45:18):
Today, absolutely crucial.

Speaker 1 (45:19):
Okay, we've thoroughly covered sixteen distinct and highly specialized roles. Wow,
ranging from the microscopically technical like the edge inference plumber,
and to the deeply profoundly human like the AI morning Doula.
Now let's zoom out for the final part. What connects
these sixteen specialties. Let's try to analyze the meta skills,

(45:41):
the overarching aptitudes that seem required to succeed in this
new AI economy, regardless of the specific job title.

Speaker 2 (45:47):
Yeah. Synthesizing the patterns our sources identify five recurring aptitudes
that thread through many, if not most, of these roles.
These look like the transferable high value skills that are
defining the modern maybe future proof worker.

Speaker 1 (46:02):
And acquiring any one of these skills probably dramatically increases
your earning potential in market relevance.

Speaker 2 (46:07):
Right now, I think that's fair to say based on
the salaries we've seen hashtag tac tag eleven the meta
skills that thread them all.

Speaker 1 (46:14):
Okay, let's start with the first one. Statistical intuition. This
sounds like more than just passing a statistics examine college.

Speaker 2 (46:22):
Much more. It's an innate comfort with the fundamentally probabilistic
nature of AI. It's the ability to feel, rather than
just calculate concepts like log probabilities, entropy, confidence intervals, calibration curves.

Speaker 1 (46:35):
How does that apply practically?

Speaker 2 (46:37):
Well, If you are a prompt engineer, you need to
intuitively grasp why changing a single word in a long
prompt might drastically alter the model's probability distribution for the
next generated token, leading to a totally different output. If
you're a hallucination detective, you need statistical intuition to spot
when a model is consistently overly confident in a wrong

(46:58):
answer that India hates a flaw in its calibration, its
self assessment of certainty. It's really about understanding the uncertainty
baked into the AI's output and knowing how to engineer
around it or account for it.

Speaker 1 (47:10):
Got it Understanding the fuzziness. Okay. Next up is the
maybe surprisingly crucial second skill, product taste. This sounds like
a soft skill, but the sources frame it as absolutely critical.

Speaker 2 (47:23):
Why it seems critical because when the machine can generate
infinite variations infinite outputs quickly and cheaply, the human differentiator
becomes judgment. Product taste is the ability to discern outputs
that are truly delightful or exceptionally effective from those that
are just merely functional or worse, generic and mediocre.

Speaker 1 (47:41):
AI excels at average.

Speaker 2 (47:42):
Maybe AI often excels at hitting the median output quality.
The human job increasingly is pushing that quality to the
ninety nine percentile. A diffusion restoration artist needs to know
not just what the pixels allow, but what is culturally
plausible and esthetically excellent for the period. A synthetic data
somlia needs good taste to know what kind of data
variation truly challenges a robotics model in a useful way,

(48:04):
versus what variation is just computationally expensive noise. This human
layer of taste ensures utility, user engagement, and ultimately competitive
differentiation in a world where everyone has access to similar
base models.

Speaker 1 (48:18):
Okay, that makes sense. Moving into the governance camp. Now,
the third metascill identified is regulatory translation. This sounds like
bridging the abstract legal world and the concrete technical stack.

Speaker 2 (48:29):
That's exactly what it is. It's a complex translation layer.
You need to be able to read abstract, often quite vague,
international compliance clauses like the requirements for algorithmic fairness or
data minimization found in regulations like GDPR or ISO forty
two thousand and one, okay, and translate those abstract principles
directly into actionable technical configurations. Maybe those configurations are expressed

(48:50):
in specific yamal files controlling model behavior, or Python scripts
for monitoring bias, or specific database scheme is for data handling.

Speaker 1 (48:58):
So the AI Ethics uditors one hundred and sixty eight
thousand dollars salary is basically justified entirely by this skill.

Speaker 2 (49:05):
Largely yes, translating a legal mandate, thou shalt not discriminate
into a deployable, auditible, technically sound system configuration. Run these
specific statistical tests on these outputs monthly, and flag results
outside these bounds. If you cannot translate the regulatory risk
into technical action, you cannot effectively govern the system.

Speaker 1 (49:25):
Right translation is key fourth meta skill, and this seems
essential for getting anything done at a high level, is
narrative persuasion. This sounds like the art of selling complex,
maybe long term and often non revenue generating concepts to
key stakeholders.

Speaker 2 (49:39):
Yeah, basically communicating effectively upwards and sideways. This skill is
needed whether you are selling the value of product taste
or the necessity of safety measures. You might be the
most brilliant AI safety systems engineer who has identified an
absolutely existential vulnerability in your company's flagship product. But if
you can't convince a skeptical chief financial offer or CEO

(50:01):
to approve a multi million dollar mitigation effort, maybe a
costly retraining run or delaying a product launch which is
necessary for alignment and risk reduction but doesn't immediately generate
obvious revenue, well, your brilliant technical work might be irrelevant
in practice.

Speaker 1 (50:18):
So this is about communicating systemic risk and strategic opportunity
in clear, compelling business terms, influencing the budget, influencing the
board exactly.

Speaker 2 (50:27):
It's about getting buy in and resources.

Speaker 1 (50:29):
And finally, the fifth meta skill, which we saw repeatedly
underpinning the high salary governance and safety bands ethical stress testing.
This sounds less like a specific task and more like
a professional mindset.

Speaker 2 (50:41):
It really is a mindset. It's a constant, institutionalized paranoia
required for responsible AI deployment. It means proactively systematically asking
the hardest what if questions? What is the absolute most
malicious way this system could possibly be misused by a
bad actor? Or what if the worst person on earth
gets their hands on this API key?

Speaker 1 (51:01):
Thinking like an attacker or thinking about unintended consequences?

Speaker 2 (51:05):
Right, the AI safety systems engineer and the AI ethics auditor,
they live and breathe this kind of stress testing. They
try to predict and defend against unintended negative consequences or
potential malicious use before the product ever ships. It's the
proactive defense against societal harm, reputational damage, and legal liability.

Speaker 1 (51:26):
Okay, those five statistical intuition, product taste, regulatory translation, narrative persuasion,
and ethical stress testing. It's clear how they define the
value proposition of the human worker in this new economy,
and as these meta skills become crucial, we are seeing
the educational pipelines rapidly changing to meet the demand, aren't
we It's not just about four year degrees.

Speaker 2 (51:47):
Anymore, absolutely not. The speed of learning has to try
and match the speed of innovation in this field. We're
seeing specialized boot camps popping up like Refactors twelve week
prompt engineering and safety intents of course, offering highly focused
job training very quickly. We also see university microdegrees becoming
popular like Stanford CS two two four N which focuses

(52:07):
on production mL systems, and of course professional certifications from
platforms like AWS, Google Cloud, Deep Learning dot I. These
are quickly becoming the new standard for validating competence in
these specific skills and technologies. The learning landscape is shifting
hashtag tag tag twelve salary gravity and the counter narrative.

Speaker 1 (52:26):
Let's talk more about the economic structure behind this explosion.
The compensation bans we have touched on they tell a
powerful story about where the market is placing value right now.
Looking at the October twenty twenty five data cited in
the sources, we see the market has rapidly stratified compensation
based on it seems two primary factors.

Speaker 2 (52:44):
Yeah, it looks like technical complexity and the level of
corporate liability managed are the big drivers we can clearly
define four distinct compensation bands emerging from the sixteen roles.

Speaker 1 (52:53):
Okay, what's the first band?

Speaker 2 (52:54):
The lowest band, roughly eighty thousand to one hundred and
ten thousand dollars, includes those high volume human interface and
support roles. I think the autonomous vehicle tele operator and
the AI morning doula. The median compensation here is around
ninety five thousand dollars. These are absolutely essential, often high
stress rolls, but they generally require less deep engineering expertise

(53:15):
compared to other bands.

Speaker 1 (53:16):
Got it banned too?

Speaker 2 (53:17):
The next ban one hundred and ten thousand to one
hundred and fifty thousand dollars includes roles that require pretty
specific domain expertise combined with strategic communication skills. This is
where we find the synthetic data sumlia, the diffusion restoration artist,
and the AI literacy evangelist. Their median compensation sits around
one hundred and thirty four thousand dollars. Okay.

Speaker 1 (53:36):
Bands three.

Speaker 2 (53:37):
The third band one hundred and fifty thousand to two
hundred thousand dollars hits the core engineering and compliance functions
that carry significant technical responsibility and often direct regulatory exposure.
This includes the prompt engineer, the AI ethics auditor, the
hallucination detective, and the eedgen friends plumber. Their median total
compensation is about one hundred and seventy five thousand dollars okay,
and the top tier and finally the highest band two

(53:59):
hundred thousand dollars plus includes the real systemic risk managers
and the master integrators primarily the AI safety Systems engineer
and the multimodal orchestrator. Their median is around two hundred
and fifteen thousand dollars. The market is clearly signaling that
the highest immediate value is placed on those who manage
the fundamental security, safety, and operational coherence of the entire

(54:21):
complex AI system.

Speaker 1 (54:23):
That makes sense. Now, We absolutely must address the counter
narrative here, the job loss side, to maintain balance, because
the fear isn't entirely unfounder right.

Speaker 2 (54:31):
Jobs were lost, Yes, absolutely, we have to acknowledge that
primarily the job's loss where those focus purely on the
kind of automatable cognitive labor we mentioned at the start
highly repetitive knowledge work.

Speaker 1 (54:42):
And our sources confirm some significant shutting in specific areas
they do.

Speaker 2 (54:46):
Manual data labelers whose work was crucial but highly repetitive,
saw an estimated eighty five percent reduction in headcount globally
since around twenty twenty three, as automated and synthetic data techniques.

Speaker 1 (54:56):
Improved eighty five percent. Wow.

Speaker 2 (54:58):
Yeah. Junior copy writers whose work often involves synthesizing and
rewriting basic content based on templates or existing sources, saw
perhaps a sixty percent reduction, and level two IT help
desk roles reportedly declined by about forty five percent as
generative AI tools became much better at handling increasingly complex
user triage and troubleshooting queries.

Speaker 1 (55:19):
So real displacement happened.

Speaker 2 (55:20):
Yes, this is important context, but the net effect completely
alters the overall narrative. According to the US Bureau of
Labor Statistics data cited for the second quarter of twenty
twenty five, what did it show? The US economy added
roughly one point one million AI augmented roles the types
of new specialized roles we just discussed, while shedding about
seven hundred and eighty thousand of those more automatable roles

(55:40):
in the same period.

Speaker 1 (55:41):
Okay, so one point one million gain versus seven hundred
and eighty thousand lost. That means the focus shifts entirely
from a narrative of zero sum loss to one of
massive net creation of jobs, albeit different kinds of jobs exactly.

Speaker 2 (55:54):
The transition is undoubtedly painful and disruptive for those individuals
displaced from the automatable roles. We can't minimize that. But
the overall macroeconomic picture, at least based on this data,
is one of expansion and increasing specialization, not contraction. Hashtag
tag tag tech thirteen policy implications the job lattice.

Speaker 1 (56:14):
So this structural shift, this net creation, but significant transformation,
it really requires abandoning the old metaphors, doesn't it. We
need to stop talking about the simple job replacement model.

Speaker 2 (56:25):
Yeah, the idea that one job is just swapped out
for another or just disappears.

Speaker 1 (56:29):
That's too simplistic, and instead start talking about the job
lattice metaphor. This was introduced by the Mackenzie Global Institute.
According to the sources right, the job lattice tell us
more about the lattice concept. What does it suggest about
the nature of job creation in the AI era.

Speaker 2 (56:43):
The lattice model suggests that the automation of a specific
task doesn't just eliminate a job. It detabilizes and fractures
the existing work into new necessary components, creating new tasks
and rolls around the automation itself. Hose mckenziy's analysis suggested
that for every single task that gets fully automated by AI,
roughly one point four new human specialties or major task

(57:05):
categories are spawned around it. The system doesn't eliminate the
need for human intervention entirely, it shifts it. It creates
new needs for managing the interface, new governance roles, new
quality control, new ethical oversight, new interpretive needs.

Speaker 1 (57:20):
So the sixteen roles we discussed today are tangible proof
of this lattice expanding both horizontally into new domains and
vertically in terms of complexity and oversight.

Speaker 2 (57:30):
That's exactly the idea. The lattice grows more complex, It
doesn't just shrink.

Speaker 1 (57:34):
Okay, this has really stark and urgent policy implications for governments,
for education.

Speaker 2 (57:38):
Systems, huge implications. Societies that choose to view AI primarily
as a job destroying tsunami and maybe try to resist
it or overregulate it out of fear, will likely see
mass unemployment and severe social strain because they fail to
prepare their workforce for the new roles being created.

Speaker 1 (57:54):
On the lattice, they'll drown in the change.

Speaker 2 (57:56):
Potentially, Conversely, those societies that treat AI as a job
morphing tide a powerful force that reshapes the employment terrain
but also creates new opportunities and focus on adaptation, they
will likely surf the change to greater prosperity.

Speaker 1 (58:11):
And the key determining factor between drowning and surfing seems to.

Speaker 2 (58:14):
Be rescilling velocity.

Speaker 1 (58:16):
The speed at which a nation can equip its workforce
with those five meta skills we outlined earlier statistical intuition,
product taste, regulatory translation, persuasion, ethical stress testing. That speed
is paramount.

Speaker 2 (58:30):
It seems to be the critical variable, and our sources
provide a somewhat worrying and comparison here on that front.

Speaker 1 (58:36):
Between Finland and the US.

Speaker 2 (58:37):
Yeah, Finland launched a National AI Challenge twenty twenty five
program aiming to get a significant portion of its population
basic AI literacy. They've apparently already enrolled one percent of
their entire adult population in free government backed courses covering
prompt literacy and foundational AI concepts.

Speaker 1 (58:54):
One percent of the whole adult population that's impressive penetration
for a national program.

Speaker 2 (58:58):
It really is. The US, despite clearly leading in the
underlying technological innovation, lags significantly behind in broad public reskilling efforts,
with only about point three percent comparable enrollment in similar
large scale programs according to the sources.

Speaker 1 (59:14):
Okay, so we innovate the tech, but maybe aren't preparing
the broader workforce fast enough.

Speaker 2 (59:20):
That's the concern raised. Closing that skills gap. Increasing that
reskilling velocity is arguably the single greatest economic and political
challenge in the next decade. If we can accelerate that reskilling,
the lattice expands opportunity for everyone. It helps ensure that
the one point one million new, often high value roles
are filled not just by a small highly educated elite,

(59:42):
but also by the workers displaced by automation, giving them
pathways onto the new lattice.

Speaker 1 (59:47):
The job of the human has fundamentally shifted, it seems,
from inputting data and repeating tasks to directing, governing, curating,
and managing the machines that.

Speaker 2 (59:55):
Do the repetition that seems to be the core shift.
Hashtag outro, hashtag tech typ fois final takeaway and provocation.

Speaker 1 (01:00:01):
Okay, So if we synthesize this entire deep dive, try
to boil it down. The key insight seems to be
that the future of work demands humans manage the interface,
the ethics, and the complex infrastructure of systems that have
largely mastered wrote cognition.

Speaker 2 (01:00:17):
Yeah, our primary economic value is shifting away from being
cogs in the cognitive machine to being the conscience, the director,
the cartographer, the safety engineer, and maybe even the emotional
anchor for the machine.

Speaker 1 (01:00:30):
And it's telling that the roles commanding the highest salaries
that safety engineer, the orchestrator, the ethics auditor are precisely
those dedicated to maintaining alignment, coherence, safety, and security at
a systemic level.

Speaker 2 (01:00:42):
It really suggests that the most inherently human job, the
most valuable human job going forward, may simply be the
one that keeps the machines aligned with human values, keeps
them human in a sense.

Speaker 1 (01:00:53):
That's a powerful summation of the transition we've covered today.
All Right, as always, we leave you our listener with
our final provocative thought. Chew on after this deep dive. Okay,
let's hear it if AI's primary purpose or at least.
Its current trajectory is to automate cognitive labor. Which of
those five emergent meta skills we discussed statistical intuition, product taste,

(01:01:13):
regulatory translation, narrative persuasion, or ethical stress testing, which one
do you think will be the first to be successfully
automated away or significantly augmented by the next wave of AI.

Speaker 2 (01:01:25):
Ooh, that's a tough one. Automating the meta skills themselves exactly?

Speaker 1 (01:01:30):
And if or when one of those core human skills
is automated, what entirely new, perhaps even more fundamentally human
role will that automation then create on the ever expanding
job lattice.

Speaker 2 (01:01:41):
What jobs will automating the AI governors create? A fascinating
question to ponder.

Speaker 1 (01:01:47):
Indeed, think about the jobs that automation spawns next. Thank
you for joining us for the deep dive.

Speaker 2 (01:01:51):
We'll see you next time.
Advertise With Us

Popular Podcasts

Stuff You Should Know
Las Culturistas with Matt Rogers and Bowen Yang

Las Culturistas with Matt Rogers and Bowen Yang

Ding dong! Join your culture consultants, Matt Rogers and Bowen Yang, on an unforgettable journey into the beating heart of CULTURE. Alongside sizzling special guests, they GET INTO the hottest pop-culture moments of the day and the formative cultural experiences that turned them into Culturistas. Produced by the Big Money Players Network and iHeartRadio.

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.