
📢 Want to thrive in 2026?
 Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/ 

Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Are you preparing for AI disruption or are you already behind the curve without realizing it?

While hiring slows and layoffs remain low, AI is quietly carving out $1.2 trillion worth of human labor and you may not see it until it's too late.

In this Weekend News episode of Leveraging AI, Isar Meitis unpacks eye-opening data from MIT, PwC, McKinsey, and more, all pointing to a looming shift in the global labor market. The twist? Most of the risk is still hidden below the surface.

Whether you're a CEO, department leader, or simply trying to future-proof your team, this episode gives you the data, context, and strategies to stay competitive as AI transforms every corner of the white-collar world.

In this session, you’ll discover:

  • The Iceberg Index: MIT's simulation of the U.S. workforce and why visible AI disruption is just the tip
  • The surprisingly low adoption of AI among global workers — and why that’s both a risk and an opportunity
  • Why entry-level jobs are vanishing — and what that means for workforce development
  • How businesses are navigating 10–20% overcapacity while still starving for AI talent
  • Why reimagining work, not just automating it, is the only viable strategy
  • What the Claude Opus 4.5 and Flux 2 releases tell us about the next wave of AI capabilities
  • A sober look at the flawed optimism in McKinsey's and PwC's economic projections
  • Why AI fluency is now the #1 most in-demand skill — and how to get ahead of the curve
  • Plus: How OpenAI and Perplexity just turned holiday shopping into an AI-powered game

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker (00:00):
Hello and welcome to a Weekend News episode of the

(00:03):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this week we're going to start by focusing on the impact of AI on the global and US job markets. Several different studies from leading

(00:23):
organizations such as MIT, the Department of Labor, and PwC have been released this past week related to the impact of AI on the job market. And we're going to dive into those. We're going to talk about new releases. The two most interesting ones are Claude Opus 4.5 and Flux 2. We're also going to talk about the impact of the release of

(00:45):
Gemini 3 on OpenAI, and we're going to end with AI assistants for holiday shopping, which is already in full swing, and so we have a lot to cover. So let's get started. We are going to start with a new tool that was developed by MIT and Oak Ridge National Laboratory.

(01:08):
And what it helps to do is to get a reality check on the impact of AI on the US labor market. The way they've done the research, which is by itself really fascinating, is they built basically a labor simulation that looks at 151 million US workers, each as an individual agent, mapping over 32,000 skills across 923

(01:34):
occupations in 3,000 counties in the US, with the goal of comparing each and every one of these data points to the current and future capabilities of AI. Or as the ORNL director and co-leader of the research said, basically, we are creating a digital twin of the US labor market.

(01:54):
So they are simulating everything, down to the most simple task, across almost everything in the US labor market. Again, 32,000 skills. So what did they find? They found that current AI systems, the tools we have access to right now, can replace 11.7% of the tasks performed by

(02:17):
the US workforce. That is approximately $1.2 trillion in annual wages that, again, AI can do right now.
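Putting those two figures together as simple arithmetic (the 11.7% and $1.2 trillion are from the MIT study as cited in the episode; the implied total wage base is my own back-of-the-envelope calculation, not a figure from the report):

```python
# Figures cited from the MIT / Oak Ridge study
exposed_share = 0.117          # 11.7% of tasks replaceable with current AI
exposed_wages_trillions = 1.2  # ~ $1.2T in annual wages those tasks represent

# Implied total annual wage base covered by the simulation
implied_total = exposed_wages_trillions / exposed_share
print(f"Implied total wage base: ~${implied_total:.1f}T")  # ~ $10.3T
```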
Now, in this study, they have coined a phrase, the Iceberg Index, which basically highlights the big difference between what we're seeing clearly right now as far as AI impact versus what's actually happening below the

(02:39):
surface. That is the real truth, basically, like the tip of the iceberg. Based on their findings, the visible disruption right now, what they call the primary disruption, which is concentrated in technology and computing, accounts for just 2.2% of the total wage exposure. But the overwhelming majority of risk is below the waterline,

(03:01):
basically like an iceberg. And it is embedded into more or less every white-collar routine function across major sectors like finance, healthcare, professional services, et cetera, and in roles such as HR, logistics, finance, office admin, et cetera. Now, they've built this tool for state and government use cases,

(03:23):
and several states are already paying attention and starting to take action based on this research. So Tennessee cited the Iceberg Index in an official AI workforce action plan. Utah and North Carolina are working on similar reports, and so this effort is going to have real impact on at least state-level decision-making processes. But the finding, again, is staggering.

(03:44):
It is showing that right now, with current tools, without any further development in AI, more than 11% of the tasks that are performed across multiple industries and roles in the US economy can be replaced by AI. This is a very, very significant number which, as I mentioned many times before, if companies learn how to really benefit from

(04:08):
that, means they can either grow by more than 11%, or they will eventually have to let go of 11% of their workforce. Otherwise they will not be competitive and they will risk the entire business. And so this number, while not extremely high, makes very clear how much risk is developing right now with the current AI capabilities.

(04:29):
Another piece of research, released earlier this month, came from PwC. It wasn't specifically about AI. It is called PwC's Global Workforce Hopes and Fears Survey 2025. The survey itself was actually conducted earlier this year, with nearly 50,000 respondents spanning 28 sectors in 48 major economies.

(04:50):
So this is a global survey, not just about the US. What they found when it comes to AI is actually really surprising on two ends of the spectrum. One is how low global adoption of AI is so far: only 14% of global workers use GenAI daily at work. The numbers grow dramatically if we look at the weekly numbers,

(05:12):
but only 14% use AI every single day, up from 20% in 2024. I must admit, I'm completely shocked and blown away by these numbers, and they sound crazy low to me, because every company that I interact with, and in the last two and a half months I've been interacting with a different company every single week, and in some cases when I did keynotes, sometimes

(05:34):
hundreds or thousands of different CEOs and people in leadership positions, everybody seems to be using AI at a relatively high frequency. So 14% sounds really low, but that's what the research actually found. The flip side is that those 14% of people who say they use AI daily report a 92% productivity boost, and 70% of them report excitement about AI's impact on

(05:58):
their job. Now, the other interesting parameter is that optimism when it comes to AI is currently higher than the anxiety that comes from AI's impact on future jobs. So 47% of workers who responded to the survey are curious about AI, and 38% think it will have a positive impact on their job.

(06:18):
However, there's a very big difference between managers and non-managers. Only 43% of non-managers have a positive feeling about their job future related to AI, versus 72% of executives. Same thing when it comes to the resources provided for learning: 51% of non-managers feel that they get the right resources for

(06:42):
learning and training about AI, versus 72% of senior leaders. There are also very big differences between different sectors. So tech leads with 35% daily GenAI usage among the different AI adopters, with 71% of tech people saying they're getting career-boosting skills out of using AI, and that is provided

(07:04):
by their companies. This is significantly higher than in other sectors. And as we've seen in research in previous weeks, and we're going to talk about this more again this week, entry-level workers face the highest level of uncertainty, with managers of people in these kinds of positions split in their predictions: 38% of these managers predict job cuts and 30% predict job

(07:28):
gains when it comes to entry-level positions. But either way, it is very clear that entry-level positions are at risk with the introduction of AI. Again, more about this shortly. McKinsey also just released a new report related to this topic, called Agents, Robots, and Us: Skilled Partnership in the Age of AI. And the narrative behind this report is that AI is not

(07:52):
necessarily going to displace jobs, but instead is going to define a new future in which humans and AI, both agents for white-collar jobs and robots for blue-collar jobs, are going to partner together to achieve whatever the goal is of that specific type of work. Now, in their research, they're not necessarily looking at the

(08:12):
current level of exposure, but at the potential future level of exposure. And they are estimating that AI together with robots has the opportunity to automate 57% of current US work hours. That's more than half. Now, they're doing that without too much hard research; they're extrapolating from current capabilities and trying to figure out what those capabilities will be in the future.

(08:34):
This means that by 2030, the US economy can unlock an estimated $2.9 trillion in annual economic value. What they're saying is that in order to capture this value, the economy as a whole, companies, and individuals will have to completely redesign processes around collaboration between humans and machines, versus trying to automate individual tasks, which
(08:57):
is what most companies and organizations are doing right now. I'm talking about this a lot when I work with senior executives: the fact that you have to reimagine how your company works. You have to, in many cases, reimagine the goods and services that you're delivering, based on how AI will impact their value in the future and the abundance it's going to provide to your customers.

(09:17):
And so it is definitely a big challenge for organizations to try to reimagine how they've done business in the past 10, 20, 30, 50 years, and to think about it in a completely different way. But if they do, the gains can be very significant. Now, the report also finds, not surprisingly, skyrocketing demand for AI skills.

(09:37):
Or as they defined it, demand for AI fluency, which they define as the ability to use and manage AI tools, has grown sevenfold in just two years, outpacing every other skill in US job postings right now. This is basically signaling to anybody out there that if you don't know how to use AI effectively, you should start

(09:58):
doing that, because it's the main thing that companies are looking for right now across the board in all different industries. The report also introduces a new index they call the Skill Change Index, or SCI, which provides a tool to track or measure the exposure of different roles to automation. Not surprisingly, the highest exposure right now is in digital

(10:19):
and information processing skills, while interpersonal skills like coaching, negotiating, and caring are at much lower risk. Again, not a big surprise there. Now, as I mentioned, the report distinguishes between agents, which are machines that automate non-physical work, and robots, which are machines that automate physical work. And they're stating that right now, two thirds of all US work

(10:41):
hours are non-physical work, meaning AI agents are the primary immediate driver, and to be fair, they are way more ready for this kind of work. Humanoid robots are developing, and they're developing fast, but they're currently too expensive and not available at large enough scale to take over physical work. They're saying that right now the average cost of these robots

(11:03):
is $150,000 to $500,000 per robot, where in order to have a more dramatic impact on the economy, they need to cost between $20,000 and $50,000. But that's definitely within reach. Now, I must admit that I don't agree with their conclusion that the future will be collaboration rather than replacement of jobs, and I say that for several different reasons.

(11:24):
Reason number one is that there is limited demand, right? So let's say that you can do things way more efficiently by working collaboratively with AI, and you can grow your business. Well, not all businesses can grow by 20, 30, 50, or a hundred percent, because the demand is limited. And once you reach that demand level, it doesn't matter how much more efficient you become; you will not be able to sell any more.

(11:45):
And so at that point, you go into competition with other people in your industry who are also becoming better and better because they're using AI, which means some people will be let go because of the gained efficiencies of this collaboration between AI and humans. Now, let's analyze the example this study gives, where it claims that the replacement theory doesn't work.

(12:06):
They cite that between 2017 and 2024, the employment of radiologists grew by 3% annually despite rapid AI advancements in image analysis. They're using this as proof that AI is just augmenting roles rather than replacing them. And I find some very serious flaws in this argument. Flaw number one is that they're looking at data from 2017 to

(12:27):
2024. When we look at 2024 to 2025, we see a huge spike in the ability of AI to analyze information, and definitely to analyze images. So what happened in the previous seven years is not the same as what has happened in the last two years as far as AI's capability to make these changes. The other thing they're not taking into consideration is the

(12:48):
timing: the time it takes systems to adjust, especially really large systems like the healthcare system in the US. And yes, it takes time for these systems to adjust, but they will adjust eventually. And so while I agree that the future depends on humans' ability to collaborate with physical machines or thinking machines, I

(13:09):
do not think that means there will be no impact on job displacement. I think it will have a very significant job displacement impact, because of what I said earlier. But I do agree that the current approach of most companies, at least in the initial stages, of trying to automate specific tasks instead of looking at the bigger picture, is missing the bigger point. Or as the authors state, and I quote, integrating AI will not

(13:32):
be a simple technology rollout but a reimagining of work itself, redesigning processes, roles, skills, culture, and metrics so people, agents, and robots create more value together. Another survey, from a company called BearingPoint, surveyed over a thousand global executives and found that 50% of C-suite leaders report that their

(13:54):
organizations currently have 10 to 20% workforce overcapacity, which is directly attributed to early-stage automation and the failure to redesign roles and processes effectively around these new changes. Now, to make this more extreme, the findings show that nearly half, 45%, of the people surveyed are expecting

(14:17):
a staggering 30 to 50% excess capacity in the next few years. So I assume you're asking yourself, why aren't these people let go? If an organization has 10 to 20% overcapacity, why are they keeping the people? And it connects back exactly to the point from the previous article. Most of these organizations are trying to figure out the new way to do business.

(14:38):
Instead of letting go of good people who have experience, who know the industry, who know the organization and so on, they're trying to figure out the new way of running a business with these new capabilities, initially without letting people go, which is good news, at least in the short run. Or as one of the partners at BearingPoint who was part of this research said, and I'm quoting, rather than layering AI

(15:01):
onto outdated functions, they are beginning to deconstruct traditional role definitions and rebuild them around human-agent collaboration. Cool. Now, the report also mentioned another interesting balancing act that these companies have to do. On one hand, they're quote-unquote stuck with overcapacity in some aspects of the company. And on the other hand, all of these companies are suffering

(15:23):
from a big shortage in AI talent and AI-critical domains. And so on one hand, they wanna hire more people to help them figure out AI. On the other hand, they have too many people in some areas, without exactly knowing how to rebalance them. And this learning period, if you will, represents serious challenges for senior leaders across different companies and different industries around the world.

(15:44):
Now, I can tell you that in the past two and a half months, almost every single week, I've either done a keynote at some kind of professional conference or done specific workshops for different companies around the world. And in all of them, it is very clear that senior leaders are not exactly certain how the future is going to evolve, and hence how they need to plan for that across multiple different

(16:06):
aspects of the organization, from the operational side, and definitely in customer service, call centers, data analysis, and areas like that, which are very clearly more prone to AI automation. And I think this uncertainty will be a part of the internal communication and strategy in senior leadership across multiple industries around the world for the next few years,

(16:29):
until the dust settles, if it ever settles, and we have a clear view of how the future of work is going to be done: what this partnership is going to look like, how it will impact supply and demand, how it will impact competition between different companies. But if there's one thing that is very clear, it is that companies that figure this out first are going to have huge benefits and

(16:50):
incredible competitive advantages. And individuals who can develop these skills are in very high demand right now, more or less in every industry around the world, which by the way might be the biggest recommendation for people getting out of college and looking for their first jobs. As more and more reports are showing that finding entry-level jobs is getting harder and harder, a new CNBC report

(17:13):
reveals that the traditional path of how we learn on the job, where basically you get hired for an entry-level job, and this is how you learn the industry, the company, and the role, and you progress from there, is disappearing, or at least shrinking very fast. The data comes from research done by venture capital firm SignalFire, and what they found is that the share of entry-level

(17:35):
hires at top tech companies has crashed by 50% since 2019. If you look at more recent data, recent graduates account for just 7% of new hires in 2024, compared to 11% in 2022. So in just two years, the percentage of new-graduate hires has shrunk by about a third, again from 11% to 7% of global

(18:00):
hiring.
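As a quick sanity check of that relative drop (the 11% and 7% shares are from the SignalFire data cited above; the arithmetic is mine, and it comes out slightly above one third):

```python
share_2022 = 0.11  # recent graduates as a share of new hires, 2022
share_2024 = 0.07  # the same share in 2024

# Relative decline over the two years
relative_drop = (share_2022 - share_2024) / share_2022
print(f"Relative decline: {relative_drop:.0%}")  # 36%
```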
Another data point comes from research company Revelio Labs, which found that job postings for entry-level positions in the US have fallen by approximately 35% since January of 2023. So finding a job straight out of college is becoming harder, but it also raises a very big concern for the future of these companies and how people will grow within these businesses.

(18:23):
And the report details what they call the breaking of the unwritten covenant between employers and grads, where previously companies provided training, basically showing people how to get into the workforce and work within companies, in exchange for affordable labor. That was basically the deal. Now, with AI being able to do weeks' worth of entry-level work

(18:44):
in just seconds or minutes, businesses are less incentivized to keep their side of the deal. Why would I hire a person and teach them when I can have AI just complete the job? The question becomes how these companies will develop the next level of mid-level people and more advanced capabilities, and eventually senior leaders, if they don't have people learning

(19:07):
how the company works straight from the beginning, starting at the bottom and working their way up to the top. But while all of this is happening, while the reports are very, very clear on the future impact and the current potential impact of AI, with its current capabilities, on the current workforce, the discussion at the top of the economic forums in the US is looking at current numbers versus future impact,

(19:31):
which is really, really scary. ABC News just released an article that shows that the number of Americans applying for unemployment benefits fell by 6,000 to only 216,000 for the week ending November 22. This figure is significantly lower than the expected 225,000. Again, the number was actually 216,000, but it is also the lowest

(19:52):
level of claims since April of 2025, so in the last six months. The four-week moving average has also declined, which is showing a trend of fewer people seeking unemployment benefits. So fewer people are losing their jobs than earlier this year, and even a few weeks ago. But at the same time, the same report finds that it's harder to

(20:15):
find a job. The number of people receiving benefits after the initial week rose by 7,000, which means people who are unemployed are finding it harder to find a new job right now than they did previously. So this basically suggests that companies maybe are not firing people, but they're also not hiring people. So there's basically a hiring freeze in the economy right now.

(20:37):
And this is obviously a broad average, painting the picture with a very broad brush, but this is very clear from all the recent surveys and reports that we just reviewed. Companies are not letting people go because they're trying to figure out how to move forward, but they're also not hiring new people, and definitely not hiring for entry-level positions. But my biggest problem with this report is that the

(20:59):
conversation is about what should happen in the economy based on the current unemployment levels. And they're even discussing whether the Fed should or should not cut rates in the next cycle. And this really scares me, because they're looking at the current numbers of unemployment, trying to understand what is coming in the future, and this is a very bad way to measure

(21:21):
that and to plan for what's coming in the future. Every piece of research out there, and I just gave you three or four different highly respected sources, is showing that AI will have a significant impact on future tasks and future roles across more or less every aspect and every role in the economy, whether in the US or around the world. And yet the people in charge of macroeconomics, including the

(21:44):
Fed, are looking at the current unemployment numbers in order to make their decisions. This connects very clearly in my mind to the opening, the prerecorded opening of this podcast, that talks about the earthquake that has already happened in the middle of the ocean and is starting a tsunami, and yet the leaders of the economic forums are looking at what people are doing at the

(22:05):
beach right now, and whether they're having fun or not, to decide what's gonna happen in the future. The tsunami hasn't hit the coast yet, but it is definitely coming, and not being prepared for it because the people on the beach right now are having fun is not a very good way to prepare for what's coming, at least in my eyes. So what is the bottom line of all of this? And I'm gonna give you another point afterwards, when we start

(22:25):
talking about Anthropic and their research. So before I give you my summary, let me give you one more data point from Anthropic. Anthropic just released the latest version of its analysis of the potential impact of AI on jobs, which it has released several times in the past, done by reviewing how people actually engage with Claude. So they have looked at a hundred thousand sample conversations from Claude, and they are using that to extrapolate how that will

(22:49):
impact different jobs and different tasks. And what they found is that the AI assistant reduces individual task completion time by an average of 80%. They found that people typically use Claude for complex tasks that would take approximately 90 minutes and that the AI assistant can complete in seconds or just a minute or two.

(23:09):
As expected, these numbers are not the same across different domains. So healthcare-assistant tasks saw a 90% time savings, while tasks related to hardware issues saw only 56%. But either way, these are very significant. By the way, the longest tasks that people used this for were management tasks, basically making strategic decisions in

(23:30):
legal, which they estimate would have taken over two hours for humans to complete without the assistance. Now, they're mentioning, by the way, that there are limits to the way they analyze this, because all they know is what's happening inside the chat itself. They don't know how much time people are actually investing in checking the output of the AI. So the time savings are probably not the real numbers.

(23:51):
They're probably smaller, because people have to verify the outputs. But still, let's say it's not 90% or 50%; let's say it's 20%. That is still very significant.
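That verification-overhead point can be made concrete with some arithmetic (the 90-minute baseline is from Anthropic's analysis as cited above; the AI completion time and the verification time are assumptions I'm adding purely for illustration):

```python
baseline_min = 90  # typical human task length, per Anthropic's sample
ai_min = 2         # AI completion time (seconds to a couple of minutes)
verify_min = 30    # hypothetical time spent checking the AI's output

# Savings ignoring verification vs. savings once verification is counted
raw_savings = 1 - ai_min / baseline_min
effective_savings = 1 - (ai_min + verify_min) / baseline_min
print(f"Raw savings: {raw_savings:.0%}")               # 98%
print(f"With verification: {effective_savings:.0%}")   # 64%
```

Even a generous verification allowance still leaves a large net saving, which is the host's point.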
So what's the bottom line of all of this research? The bottom line is that we have a new technology that can replace or dramatically augment a growing percentage of tasks in the economy. It will require companies and individuals to completely

(24:13):
reimagine how work is currently being done. So instead of trying to automate simple steps of what we're doing right now, the idea is to think about what it is we're trying to do, from a jobs-to-be-done perspective, and how AI can assist with, replace, or augment most aspects of that process, and then rebuild it from the ground up around this new capability. Now, this need to reimagine how work is done

(24:36):
today is more or less the only thing that is slowing down the AI revolution that is happening, because big organizations cannot change fast. It is very hard for large systems to change, and hence this will take a while, and it is slowing down the translation of the technology's capabilities into real impact on the

(24:56):
actual world and workforce. But it is coming and it is inevitable, because eventually everybody will figure this out. If you want the biggest proof of that, all you have to do is take the parameter that I shared with you a few weeks ago: the concept of revenue per employee, a great way to measure the efficiency of companies. The average revenue per employee of traditional SaaS companies is

(25:19):
around $250,000 per employee on average. If you compare that to the top AI-native companies out of Silicon Valley right now, they have about $2.4 million in revenue per employee. That's roughly 10x. That means that companies that are structured around AI collaboration, and in this case they don't have to restructure,

(25:39):
they simply structure that way from the beginning, can be 10x more productive with the current level of AI. Which means, again, if you do this right now and you move quickly, you can gain amazing market share, as these companies are growing to hundreds of millions of dollars in revenue with a really small number of employees.
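The 10x claim roughly checks out (both revenue-per-employee figures are the ones cited in the episode; the division is just a sanity check):

```python
saas_rev_per_employee = 250_000        # traditional SaaS average, per the episode
ai_native_rev_per_employee = 2_400_000  # top AI-native companies, per the episode

# Productivity multiple implied by the two figures
multiple = ai_native_rev_per_employee / saas_rev_per_employee
print(f"Productivity multiple: {multiple:.1f}x")  # 9.6x
```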
But as everybody figures this out, it will level the

(26:01):
playing field, and as I mentioned previously, the demand is finite, meaning not everybody can grow to 10x the size they are right now. Which means, just from the competition perspective, this will force companies to let people go: instead of doing 10x with the current number of people, they will do 2x with

(26:24):
half the people, or something like that, which means there are going to be a lot of people unemployed. And that puts a very big question mark on the potential economic impact of AI. Because if nobody can buy the goods and services, well, you can't really sell them, which can then collapse the entire global economy, because there's not going to be demand, because people will not have money.

(26:45):
While most of the research that I shared with you today is optimistic about the potential impact of this on the world economy, I don't really see how that's possible, again because I don't see that there's infinite demand for everything. This just cannot happen. So while the MIT research mentions $1.2 trillion in wages

(27:06):
that can be done by AI, and the McKinsey research talks about unlocking trillions in economic value, to gain those trillions we actually need people buying those goods and services. If people don't have money, then who's gonna buy these goods and services? And if they cannot, then there is no growth in trillions. So I don't see how this can play out the way this research is

(27:27):
showing. Again, I'm not taking anything away from McKinsey or MIT or PwC. They all have way more people, and way smarter people than I am. I just think they're looking at this from a pure mathematical perspective and not from an economic viability perspective. And I haven't yet seen one report that talks about that aspect of it.

(27:47):
So either I'm delusional, or these reports are very optimistic about what the impact of figuring out how to apply AI in a collaborative way, or however else they want to describe it, can be, at least in the short term. But there's one thing that is clear from all of these reports, which is the big lack of understanding of how to apply AI

(28:09):
effectively, both inside these companies, to define strategy, as well as the huge need for these skills when these companies are hiring. And it all comes down to training, education, and AI fluency. Inside the companies, it has to do with training people and leadership in order to figure this out. And outside of the companies, it comes down to individuals: you need to

(28:30):
train yourself on how to use these tools effectively in order to get hired and get paid more money by these companies that are struggling to find people with better AI skills. So if you want some specific quotes: in the McKinsey Global Institute research, they said, and I'm quoting, the outcomes for firms, workers, and communities will ultimately depend on how organizations and institutions

(28:51):
work together to prepare people for the jobs of the future. Another quote from the same article: integrating AI will not be a simple technology rollout but a reimagining of work itself, redesigning processes, roles, skills, culture, and metrics, so people, agents, and robots create more value together. Again, it's all about redesigning roles, skills,
(29:13):
culture, which all comes back to training. Another quote says, demand for AI fluency, the ability to use and manage AI tools, has grown sevenfold in two years, faster than any other skill in US job postings. In the BearingPoint study, the quote is, workforce planning, talent development, and organizational design will need

(29:33):
to be rethought from the groundup.
You get the point. It's all focused on understanding AI more deeply and finding talent that already knows AI, in order to reimagine the work and gain the benefits of AI. This is what we have been focused on for the last two and a half years, with company-tailored workshops and courses open to the public.

(29:54):
So if you are in a leadership position in an organization and you still do not have a plan for how to figure out AI for your company, your industry, your department, your team, whatever the case may be, reach out to me. There's a link in the show notes to set up a meeting with me on my calendar, so we can discuss your current needs and I can see if I can help you, or at least guide you in the right

(30:15):
direction.
And if you are an individual and you understand the need that is coming in the market, and you want to build your personal future, come and join our courses. We have the AI Business Transformation Course, which I have been teaching for over two and a half years now, with thousands of business people who have transformed their personal wellbeing as well as the success

(30:36):
of their teams and the people that they manage. The next cohort opens on January 20th, 2026. It is literally the best way for you to take the first steps and accelerate your understanding of AI as it applies to multiple aspects of business. So this is not a theoretical course.

(30:58):
We're not gonna talk about concepts of AI; all of it is around practical use cases across multiple aspects of the business. And if you are looking for ways to improve your career success, this is an amazing way to do that. And there is no better time than the beginning of 2026 to make that next step. So if you want to see more details and join us in January,

(31:19):
there's a link for that in the show notes as well.
But since we mentioned Anthropic and the research, let's continue with Anthropic. Anthropic just released Claude Opus 4.5. For those of you who need some background: every model family that Anthropic releases comes in three variations, Haiku, Sonnet, and Opus. A few weeks ago, Anthropic released Sonnet 4.5, but they

(31:40):
did not release Opus 4.5. Sonnet 4.5 took over the entire leaderboard, more or less, for every single aspect of AI usage. But since then, several models have come out, including Gemini 3 and Grok 4.1, and both of them took leading positions over Claude.
So now Opus 4.5, which is the largest model, is trying to

(32:02):
reclaim the throne, and, to be fair, not completely successfully yet.
That being said, it is definitely a better model than Sonnet 4.5. As an example, they had Opus 4.5 take a take-home exam designed for prospective performance-engineering candidates, and Claude Opus 4.5 scored higher than any human candidate has ever scored on that test within the

(32:26):
two-hour time limit. This model outperforms Sonnet 4.5 in seven out of eight programming languages on different benchmarks, and it comes with a new pricing scheme that makes it cheaper than the previous Opus model: $5 per million input tokens and $25 per million output tokens. It is also built to run a lot more efficiently.

(32:47):
Anthropic also added a new feature to the API calls for this model, called the effort parameter. This effort parameter lets you control how much Claude actually thinks in order to deliver different tasks. And what they're saying is that at the medium effort setting, the new Claude Opus 4.5 does better than Sonnet 4.5 while

(33:09):
using 76% fewer output tokens. So not only did they make the tokens cheaper, the model can achieve the same results while using significantly fewer tokens, which overall makes a given task significantly cheaper than previously possible.
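To put rough numbers on those claims, here is a quick back-of-the-envelope calculation. Only the $5/$25-per-million prices and the 76% reduction come from the announcement; the token counts are invented purely for illustration:

```python
# Illustrative cost comparison for a single task at two effort settings.
# Prices from the episode: $5 per million input tokens, $25 per million output tokens.
INPUT_PRICE = 5 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 25 / 1_000_000  # dollars per output token

input_tokens = 10_000           # assumed prompt/context size (made up)
baseline_output = 50_000        # assumed output tokens at full effort (made up)
medium_output = baseline_output * (1 - 0.76)  # 76% fewer output tokens

baseline_cost = input_tokens * INPUT_PRICE + baseline_output * OUTPUT_PRICE
medium_cost = input_tokens * INPUT_PRICE + medium_output * OUTPUT_PRICE

print(f"full effort:   ${baseline_cost:.2f}")
print(f"medium effort: ${medium_cost:.2f}")
```

With those hypothetical token counts, the task drops from about $1.30 to about $0.35, which is the sense in which the same result becomes much cheaper even before the per-token price cut is considered.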
Now, while coding is a big deal because it drives a huge percentage of Anthropic's revenue, this model is also supposed to be better than the previous model

(33:30):
in vision, reasoning, and mathematical skills, and to outperform Sonnet 4.5 across multiple benchmarks. And Anthropic being Anthropic and focusing on safety, they're saying it is their most robustly aligned model yet, with lower chances that prompt injection attacks can be applied to it.
So how is this model doing on the LM Arena board? Well, it is currently ranked

(33:53):
number three on text, after Gemini 3 Pro and Grok 4.1 Thinking. It is ranked number one for web development. If you look at the overall table, it is ranked third after Gemini 3 Pro and Grok 4.1 Thinking, sharing the first position on several different aspects and ranking lower on several others.

(34:15):
Overall, still reigning supreme at number one across the board is Gemini 3 Pro. Another very interesting thing that Anthropic released today is a new methodology that allows AI to run effectively across multiple chats. Now, what is this coming to solve? The core challenge is obviously the context window.

(34:36):
For those of you who don't know and have not been listening to this podcast for a while: every chat that you run has a limited memory that can be used in a single chat, called the context window. As you try to run bigger and bigger tasks, you hit the end of the context window and then you have to start from the beginning. Well, what Anthropic has developed is a twofold solution with two types of agents.

(34:56):
One is called the initializer agent, and the other is a coding agent. What the initializer agent does is create a summary of every chat, basically in a JSON format, that can be delivered to the next chat with clear instructions on how to continue from the previous chat. In addition, it gives very specific instructions to the coding agent on exactly which feature to work on at any

(35:18):
given time out of the bigger task, acting more like a manager than just a deliverer of information.
And what this new methodology enables is running a process of developing multiple components in a single shot, as the AI on its own continues across multiple different chats but

(35:38):
keeps the development structured and well organized.
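The episode describes this handoff only at a high level, so here is a minimal, purely illustrative sketch of the pattern in Python. Every function name and JSON field below is my own assumption, not Anthropic's actual schema: one function plays the initializer role and compresses a finished session into a JSON summary, and another turns that summary into the opening prompt of a fresh chat.

```python
import json

def summarize_session(completed_steps, remaining_steps, notes):
    """Initializer role: compress a finished chat into a JSON handoff.

    Hypothetical schema; in the real system the model itself would
    write this summary at the end of a session.
    """
    return json.dumps({
        "completed": completed_steps,     # what earlier chats already built
        "next_task": remaining_steps[0],  # the single feature to work on next
        "backlog": remaining_steps[1:],   # everything after that
        "notes": notes,                   # decisions the next chat must respect
    })

def start_next_session(handoff_json):
    """Coding-agent role: turn the handoff into the opening prompt of a new chat."""
    state = json.loads(handoff_json)
    return (
        f"Previously completed: {', '.join(state['completed'])}. "
        f"Your only task now: {state['next_task']}. "
        f"Constraints: {state['notes']}"
    )

handoff = summarize_session(
    completed_steps=["database schema", "auth module"],
    remaining_steps=["REST API", "frontend"],
    notes="use SQLite; keep endpoints versioned under /v1",
)
print(start_next_session(handoff))
```

The design point is that the compact JSON summary, not the full transcript, is what crosses the context-window boundary, so each new chat starts small but fully informed about the larger task.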
Now, this follows a very similar release from OpenAI that I reported on last week: the ability to run across multiple chats while carrying over, if you will, the thought process between the different chats. This is just the beginning, but as this evolves in the coming weeks and months, we might get into a situation where the

(36:01):
concept of a limited context window is a thing of the past, meaning
AI will be able to complete tasks across multiple hours, maybe days, and maybe even weeks, keeping a coherent flow and continuing to work without, basically, the limits of the context window, since it can deliver the data from one context window to the next and know exactly what to work on while keeping

(36:24):
the process going in an effective way.
This is currently built specifically for coding, both at OpenAI and at Anthropic. But as they figure this out for coding, there's zero reason why it will not be effective for more or less everything else. What does that mean? It means a complete game changer for the level of sophistication,

(36:45):
complexity, and length of tasks that AI can perform.
Connecting it back to the previous points about what types of tasks AI can perform today: there is going to be a very, very big gap between that and what it will be able to perform in the very near future. Because this limitation will go away, AI will be able to perform things that are far beyond what it can do

(37:06):
right now, which will have an even bigger impact on how companies can use it and on the overall economy.
Sticking with new releases but switching to a different company in a different field: Black Forest Labs has released Flux 2. For those of you who don't know, Flux is the best open-source image generation model out there. It is extremely capable and very good across multiple

(37:27):
different functions. Flux 2 was just released, and it is almost as good at almost everything as Gemini's Nano Banana Pro.
So what did they improve from Flux 1? It is significantly better at editing images, including editing high-resolution, real images, and being able to make changes to them while preserving detail and coherence with the

(37:48):
original image.
It has multi-reference support, very similar to the new Nano Banana, meaning you can reference up to 10 other images simultaneously and still keep consistency of character, product, style, et cetera, from all the different images in the new image you are generating. It is very good at generating complex typography, including

(38:08):
generating infographics and UI mockups for entire websites, and it is much better at adhering to the structure of prompts and to brand guidelines if these are brought to its attention.
I loved using the original version of Flux. I think it's an incredibly powerful model, and it gives you the ability to train the model for your needs. It's called building a LoRA, which allows you to train the

(38:29):
model on a specific style, a specific product, or a specific person, which enhances the capabilities even further, a functionality that, as far as I know, doesn't exist in Gemini right now.
Now, in tests I've seen different people run online, there's still an edge for the recent Nano Banana Pro over Flux when it comes to the fine details of the outcome,

(38:49):
especially around text, like creating an entire dashboard or an infographic. However, Flux does the work significantly faster and way cheaper. So from a cost-to-quality perspective, Flux is in a much better position than Nano Banana.
And in many use cases it will be good enough, and it will be able

(39:09):
to deliver the work faster and at a much cheaper rate, which for many of us is the better trade-off versus slightly higher capability. It's just going to depend on your particular use case.
So the good news is that within two weeks we got two extremely capable image generation and image editing models, one open source and one from Google, and they are both at a

(39:31):
level that enables actual, real company work and can dramatically accelerate the professional side of image generation.
Most people who use Flux use it through third-party tools and through its API; this is how I use it. However, if you just want to test it out, you get 50 free renderings per day on the Black Forest Labs playground website,

(39:52):
which everybody has access to.
Staying on the topic of new releases, Harvard Medical School researchers just developed and released an AI model called popEVE, or something like that, which stands for Population-Calibrated Evolutionary Variant Effect. It's a complete mouthful, but what it does is nothing short of

(40:13):
magical.
It is designed to replace, or at least address, the extremely time-consuming effort of diagnosing rare single-variant genetic diseases. When they tested the model, they gave it access to data from over 30,000 patients with undiagnosed severe developmental disorders. The model on its own was able to diagnose

(40:33):
one third of the cases, cases that, again, had not been diagnosed by people before.
Even more impressive, the model was able to identify 123 genes linked to developmental disorders that had not previously been known to cause such diseases. Since the paper was published, 25 of those 123 genes have been verified and independently confirmed by

(40:56):
other research labs.
These are the things that excite me the most about AI: its ability to address real problems that are unaddressable by the technologies we've had so far. Being able to help people with really serious diseases, by diagnosing exactly what the problem is at a scale and speed that is unmatched, can really help us solve really big

(41:18):
problems in the world, whether it's genetic illnesses, global warming, or access to clean water and clean energy. These are the biggest problems the world has today, and AI should be able to, and hopefully will, help us solve them.
Before we switch to some interesting insights from Sam Altman and OpenAI related to the release of Gemini 3, and before we talk about shopping, I wanna share with you some

(41:40):
insights from the interesting interview with Ilya Sutskever, who was one of the co-founders of OpenAI and is now the co-founder of SSI, which stands for Safe Superintelligence. He was interviewed by Dwarkesh Patel on the Dwarkesh Podcast about the current state of AI and the work they're doing at SSI.
This interview is way more technical than it probably

(42:01):
should have been, as some of these interviews are, so I don't necessarily recommend listening to the entire thing, because there are a lot of segments that are highly technical. That being said, they touched on a few very interesting points. One was the question of how these models can do so well on some of these evals and then, when you actually come to use them, be either worthless or significantly less capable.
And what Ilya suggested is what I have been claiming on this

(42:24):
podcast all along: the models are trained to be good at the evals because these are the benchmarks they can be tested on. The reinforcement learning rewards them for being good at the evals, so they become very good at the evals. That doesn't necessarily mean they'll be really good at real life.
So think about a high school student who is awesome at

(42:45):
solving the test he was taught all year how to solve, with specific types of questions. That doesn't mean he's going to be really good at the actual application of the topic. And it's the exact same thing here: these models are rewarded for being good at the evals, because this is one of the ways to train them, and so they become very good at that, but not necessarily at the more generalized, less rigid and structured version, which is

(43:08):
real life, of solving problems in a specific category. This makes perfect sense to me; it's what I assumed all along, and it just verifies what I thought about this topic.
The other interesting aspect of the interview was the transition from the age of scaling to the age of research. What Ilya is saying is that in the early days of AI research, or not the very early days, but let's say five to ten years ago,

(43:30):
the main focus was research, because nobody had enough compute and you had to invest a lot in research in order to come up with solutions. Then, in the last few years, we went through the age of scaling: all the companies were able to invest billions or hundreds of billions into more and more compute, and they were able to solve a lot of problems by scaling.
Well, what he's saying now is that this age of scaling is

(43:53):
getting to the edge of its capacity, which will force companies, organizations, and research labs to go back to research and invest in ways to better teach the models, versus the brute force of providing more compute. And he's making a very interesting claim.
He's saying that when you think about how humans work, humans

(44:13):
are currently way more capable than these machines when it comes to generalized intelligence, despite the fact that we have significantly less knowledge. The idea behind scaling was: let's feed more and more knowledge into the pre-training phase, and that will generate better and better models, which was conceptually true and held in practice.
And yet, right now, these models have significantly more

(44:35):
knowledge than humans, and they're still significantly less capable than us across multiple different aspects, especially in generalized intelligence, or, the way Ilya talks about it, the robustness of human understanding: a human understands much more deeply and exhibits the ability to analyze and understand things while having a tiny fraction of the data that AI has access to

(44:58):
in its training process.
So when Dwarkesh was pushing him to explain exactly what they're trying to work on, what superintelligence looks like, and how they're going to release it to the world in a safe way, where they ended up, and I actually really like what he's saying, is that what he's trying to build is not necessarily superintelligence but a super-learner AI, basically developing

(45:18):
an AI that learns much better than the way current AI learns. He's saying that will put us on the path to AGI and ASI. And once they figure out how to make the AI a much better learner than it is today, they will start releasing versions of it to the world, which will allow people to figure out how to use it in a safe way. I personally find this very interesting.

(45:39):
Again, we have been in an era where everything was focused on getting more and more compute, because it was obvious that with more compute and a much larger data set, these models become smarter. But it is also obvious that this is not the final way forward. And I find this exciting, because it means we will potentially need fewer resources, which would be less demanding on

(46:01):
our planet, in order to achieve better results with AI.
And just the fact that he was able to raise billions to focus purely on research says there are other people who believe this might be the right path forward, instead of just investing hundreds of billions in compute. So overall, I think this is an interesting path, and I really hope it will yield successful results.

(46:22):
Now, as I promised, the impact of the Gemini 3 release on OpenAI: in a candid internal memo obtained by The Information, Sam Altman alerted OpenAI employees to brace for a period of turbulence, or as he said, and I'm quoting, temporary economic headwinds, because of the success

(46:43):
of Gemini 3. Altman was very open about Google's success and how well they built the model.
The memo states, and I'm quoting, Google has been doing excellent work recently in every aspect, and it specifically relates to their advancement in pre-training of models, which connects back to our previous point: they have found more effective ways to do pre-training, which

(47:06):
saves them time and money and yields better results.
This drove OpenAI to define a new project, which I am not sure I'm pronouncing correctly; it's called Shallotpeat, or something like that. That new project is supposed to address the deficiencies they currently have in pre-training compared to Google, with the goal of reclaiming their superiority in the

(47:28):
AI domain.
Now, the report highlights something I've been talking about on this podcast for a very long time: it is not a fair fight. Google has full vertical and horizontal integration in their approach to AI. They have a full stack of everything. They control their own chips, running on TPUs that they develop themselves, specifically optimized for what

(47:48):
they're trying to do.
They have their own data centers and their own distribution platforms across the board, from Android to Chrome to Google Workspace, et cetera, while OpenAI relies on a lot of other companies to provide most of these capabilities. In addition, from a financial perspective, there is a huge, huge difference, with OpenAI making $13 to $20 billion this

(48:10):
year. There are different rumors with different numbers, but this is the ballpark, $13 to $20 billion, while losing a huge amount of money and burning through a lot of investor cash.
Google, on the other hand, generated $70 billion of free cash flow over the past year. So Google has everything they need to keep pushing in that direction without raising crazy amounts of money.

(48:32):
And OpenAI will have to rely on other companies to deliver many of these capabilities, whether it's building the data centers, creating and delivering the GPUs, or, obviously, providing the funding for all of this to work, where Google has all of that ready to go, including the ability to distribute across multiple tools and gain knowledge from people using it across all those different tools.

(48:54):
So, do I think OpenAI can beat Google in this race? I doubt it. But can they remain a significant competitor to Google? A hundred percent they can, and I'm sure they will stay in there as long as they can effectively manage their cash flow relative to the amount of money they can raise.
Since we are getting into the most wonderful time of the year, at least from a retail shopping perspective, I want to share

(49:17):
with you that two different companies, OpenAI and Perplexity, have added shopping research assistants to their tools. Right now, inside ChatGPT, in the same selector where you pick options such as deep research, you can now select shopping research. I've actually used it to shop for a TV this Black Friday week, and I found it to be very, very helpful.

(49:38):
It is really good at understanding what your needs are and then curating results based on very complex ideas, concepts, and queries. You don't necessarily have to look for a specific kind of TV; you can just describe your needs and your budget, and it will go and help you find the specific thing you are shopping for. It can even help you with ideas. Let's say you're looking for something cute for a

(50:00):
five-year-old girl.
It can give you ideas of what to shop for, then help you find the information and compare different aspects to help you make an educated decision on what you are trying to purchase. I found this very helpful, and it saved me hours of doing research on my own.
And OpenAI emphasized that the shopping results are organic and

(50:20):
unsponsored, meaning they are ranked purely based on relevance to the user's needs and requirements, not on paid placements, unlike most platforms out there; whether you're shopping on Google or on Amazon, most of what you see is sponsored. Here, that is not the case.
They're trying to show you exactly what you're looking for based on your defined needs, which, again, I tested and

(50:43):
found very helpful.
A very similar thing was released by Perplexity. So if you are a heavy Perplexity user, or not an OpenAI user, you can go to the free Perplexity right now and use it to help with your holiday shopping. Very similar concepts. In both cases, you can check out on the platform if the merchant is connected to a checkout process they have already integrated with.

(51:03):
And this gives us a great glimpse of the future we're walking, or running, or sprinting into, where AI agents are going to help us with more or less everything we're doing and how we engage with the world, definitely the digital world; in this particular case, just shopping. But I think this will evolve over time to more and more aspects of our engagement with our jobs and our personal lives,

(51:26):
and with how we engage with the world through digital interfaces. As I mentioned, my very first personal experience was very positive.
There are many other aspects of this week's news that we're not going to dive into: some really fascinating battles going on behind the scenes, from the chip wars between Google and Nvidia, to a new company and investment by Jeff Bezos, and

(51:47):
many other good things; there's just not time to cover all of it.
And if you wanna see all of it, you can sign up for our newsletter. It has all the stuff we didn't cover today, and that we don't cover every single week. We usually cover about 30 topics, maybe a little less, but there are usually about 50 that we curate every single week. If you wanna know the rest of them, you can just sign up

(52:08):
for our newsletter.
It also includes all the events that we're running, the training that we're providing, the workshops, the free stuff. Everything you want is in that newsletter, so go sign up for it. And while you're looking for the links for that, you can also look at the links for the upcoming workshops and courses, which will allow you to prepare yourself and/or your company for 2026 and beyond.

(52:29):
If you are enjoying this podcast, please hit the subscribe button on your podcast player so you don't miss any episode that we release. And while you're at it and you have your podcast player open, please rate this podcast on Apple Podcasts or Spotify. This helps us reach more people, and I actually read all of these reviews, take the feedback, and try to give you more of the

(52:50):
content that you are looking for. So if you have specific things you're looking for, please include that in your review. That is it for today. Have an amazing rest of your weekend, and we'll be back with another how-to episode on Tuesday.