
Check out the self-paced AI Business Transformation course - https://multiplai.ai/self-paced-online-course/ 

Are you ready for a future where AI decides who gets hired and who gets replaced?

This week's news drop brought no major model releases… but that quiet was deceptive. OpenAI, Microsoft, and Salesforce unleashed a wave of updates that will reshape how companies train, hire, and operate with AI at the core.

From OpenAI’s pledge to certify 10 million Americans in AI fluency, to Walmart and Microsoft arming their workforces with AI education, to Salesforce quietly replacing thousands of workers with AI agents—this episode delivers the full picture, not just the PR.

In this session, you'll discover:

  • What OpenAI’s new AI job platform means for future hiring and retention
  • Why Walmart is going all-in on AI workforce training (and what that means for your org)
  • The real story behind Salesforce cutting 44% of its support team
  • Microsoft’s bold AI education push and how it aligns with White House initiatives
  • OpenAI’s 5-part playbook for C-suite leaders to implement AI responsibly and effectively
  • Why AI literacy is quickly becoming a non-negotiable skill for hiring and promotions
  • A peek inside the Google antitrust ruling and how it doesn’t actually change the game
  • The ethics (and risks) of emotional AI and the lawsuits piling up
  • The rise of AI agents in business, customer support, and even sales
  • New hardware and AI-powered devices that are changing how we learn, listen, and live

🔗 Staying Ahead in the Age of AI (OpenAI PDF Guide) - https://cdn.openai.com/pdf/ae250928-4029-4f26-9e23-afac1fcee14c/staying-ahead-in-the-age-of-ai.pdf 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 3 (00:00):
Hello and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and this week we had no big releases of new models, no huge announcements from any of the leading labs or anything like this, which is actually perfect, because we did get four different publications, blog

(00:25):
posts, and/or guides from OpenAI that are all fascinating and really important, showing where the world is going.
We also had the final ruling in the Google monopoly case, so we are going to talk about that.
We have some interesting updates on the impact of AI on the job market, which are aligned with everything we've seen so far, but with more details from different directions.
So these are gonna be the main topics that we talk about.

(00:46):
And then we have a long list of rapid-fire items with small and interesting updates across the board, including some interesting new devices at the end.
So let's get started.
We'll start with two of the publications that OpenAI released this week.
One of them is called Expanding Economic Opportunity with AI.

(01:09):
And the other one is a guide for leaders called Staying Ahead in the Age of AI.
So we'll start with the first one.
It is an initiative that OpenAI shared on September 4th in a blog post.
In this post they are sharing plans that will help US workers, and then the world, stay ahead of AI, or stay aligned with AI capabilities, to make the most out of it.

(01:30):
They have announced two different aspects of that.
One is that they are creating a jobs platform. I'll explain in a minute what that means, and the other one is a more robust training program as part of the OpenAI Academy.
I'll start with a quote from the blog post that will set the stage for some of the details that follow.
The quote is, companies will have to adapt, and all of us, from shift workers to CEOs, will have to learn how

(01:52):
to work in new ways.
At OpenAI, we can't eliminate that disruption, but what we can do is help more people become fluent in AI and connect them with companies that need their skills, to give people more economic opportunities.
So basically what they're saying is, we are gonna help you learn AI, whether your company provides that or not.
And then if you have AI skills, you're more likely to get

(02:14):
hired by other companies.
So in this new post, OpenAI pledges, basically commits, to certifying 10 million Americans in AI fluency by 2030.
And to do that, they're launching what they call the OpenAI Jobs Platform, which will connect AI-savvy workers, basically people who take the courses and get certified, with large corporations and organizations and local

(02:35):
businesses and governments that are looking for AI-skilled employees and/or consultants.
Now, this piggybacks on the success of the OpenAI Academy, which, per their claim, has engaged over 2 million people since its launch in 2025.
They are now expanding that program, and it's going to include certifications for AI fluency,

(02:56):
from basic prompt engineering all the way to more advanced capabilities.
And it's all integrated into ChatGPT's study mode.
For those of you who have not been regular listeners to this podcast, ChatGPT has launched a study mode that helps students, or anybody who wants to study new topics.
To do that, instead of giving you the answers, it helps you through the process of learning a specific topic in

(03:17):
more of a Socratic way, asking you questions and helping you figure things out on your own.
So they've built the new AI Academy capabilities into study mode, and you can take lessons within ChatGPT, which I think is really smart.
Now, the long-term goal is to turn this into an AI job marketplace where knowledgeable, experienced candidates in AI at all levels can find opportunities, and the other way

(03:38):
around.
I think this is a brilliant move from OpenAI.
I assume other companies will follow the same path.
And it will definitely drive adoption, because as they're able to start sharing the kinds of jobs that people are finding based on the certifications they're getting on the OpenAI platform, more and more people will want the certifications, and that will drive adoption of AI, which I

(03:59):
overall think is a good thing.
Now, in addition, they've signed an agreement with Walmart to train all Walmart employees for free on using AI.
And to quote John Furner, the CEO of Walmart US, he said, at Walmart, we know that the future of retail won't be defined by technology alone. It will be defined by people who know how to use it.
That makes perfect sense.

(04:20):
We've heard similar statements in the past two and a half years about AI in general, and so Walmart is going to pay OpenAI to help it train its employees, for free from the employees' perspective, to use AI across multiple aspects of the Walmart business.
I definitely agree with Walmart's approach.
I have been training companies on how to implement AI successfully for the last two and a half years.

(04:42):
And I'm doing this every single week, day in and day out, either in person or online, helping companies understand what use cases are relevant to them, and then building custom training plans for them specifically to help them close different gaps that they have in the business.
The approach that I'm taking is directly tied to business needs.
So instead of just teaching AI in order to understand AI, I

(05:02):
focus on what the bottlenecks of the business are, and what tedious tasks are time consuming and not necessarily generating value, and then helping companies learn how to use AI to solve all of those.
It's been delivering amazing results.
And as I mentioned, I've been doing this full time for the last two and a half years, and in the past quarter it seems that more and more CEOs and leadership teams are waking up to

(05:23):
the reality that this is not going to go away, and that the sooner they do this kind of training, the better off they'll be from a future success perspective for their companies.
And so I'm personally very happy to see that, not because it's driving work to me, obviously that's nice, but because it means that the world is going to be more ready for the crazy changes that are ahead.
And we're gonna talk about some of those in this episode.

(05:44):
But then OpenAI gave us another thing this week, which is OpenAI's playbook for staying ahead in the age of AI.
It's basically a guide for leadership teams on what steps they need to take in order to be successful in the AI era.
This is perfectly aligned with the things that I've been teaching and talking about on stages in the last two and a half years.
They're breaking it up into fancy words, but it's a very

(06:05):
useful and helpful guide that, again, is a hundred percent aligned with everything that I've been teaching in the fourth lesson of the AI Business Transformation course, as well as the sessions that I'm doing for leadership teams in multiple companies that I'm working with.
This guide, which is a PDF document, is 15 pages long.
It's not too long, and it is highly practical in how it breaks things up and how you can use different aspects of AI.

(06:28):
And they're breaking it into five different aspects that are essential for success in the AI era.
The first one is Align, which is how to create clarity and purpose in the organization and align all the employees in understanding why AI is a part of the future strategy, what the future strategy with AI is, and just getting company-wide alignment on the AI strategy of

(06:49):
the company.
The second aspect is Activate, which is how to provide knowledge to employees, how to invest in training and education, and how to create AI champions within the company who can lead the company forward from an AI implementation perspective.
The third one is Amplify, which is how to celebrate wins and how to push forward the investment in successful AI initiatives.

(07:11):
The fourth one is Accelerate, which is about ways to remove friction, provide teams easy access to essential tools, allow individuals and groups to submit ideas for future projects, and empower decision making and rewards for people who are pushing success forward.
In my teaching, I add gamification to that, which is a way to reward people who are taking initiative in fun and

(07:34):
interesting ways within the organization.
They don't address that specifically; I'm just adding my two cents to the mix.
And then the last one is Govern: how do you balance the speed of AI implementation with responsible, clear guidelines to make sure that the things you're implementing are ethical and aligned with the company's values?
And so these five elements are built all across this program.

(07:56):
And in the guide, they're basically breaking down the exact details of what you need to do in each and every one of these elements to be successful.
I'll share the link to this guide in the show notes, but to run you very quickly through a layer deeper of what they are stating: on the Align stage, they're saying you need executive storytelling to set the vision, basically sharing with the company what you are going to do from an AI implementation strategy

(08:18):
perspective.
Then set a company-wide AI adoption goal, which means defining the specifics of what it is that you want to achieve and in what timeframe. Then there's leaders role-modeling AI use.
I think this is a huge one.
I always say that the two deciding factors in AI implementation success are leadership buy-in and participation on one hand, and continuous AI education on the other.

(08:39):
And they're addressing this in this particular case.
And then functional leader sessions, basically taking this down from the high-level company strategy to the tactical level at the functional leader level, basically going from, here's the strategy, to here are the use cases, which I a hundred percent agree with.
On the Activate aspect, they're talking about launching a structured AI skills program,

(08:59):
which is the actual training, giving employees the knowledge of how to implement AI in specific use cases.
Then establish an AI champion network. Again, that just provides company-wide access to people with AI skills and knowledge by defining specific champions in different departments or different segments of the organization.
The next aspect of Activate is make experimentation routine, which talks about providing and encouraging AI experimentation

(09:22):
by employees in a safe way.
And then the last aspect of Activate is make it count, which is connecting AI engagement to performance evaluations and career growth within the company, as well as setting OKRs and other company and individual goals tied to AI implementation and AI knowledge.
Under Amplify, they're detailing the following things.
Launch a centralized knowledge hub.

(09:44):
Consistently share success stories. I'm a huge believer that celebrating AI success is a great way to drive more people to use AI in the organization.
Build active internal communities, basically providing places for people who are AI enthusiasts to work and share information together.
This could be on Slack or Teams or in person or any other way

(10:05):
where you can drive this innovative discussion between people who are interested in it, and this will be a magnet to attract other people to the process.
And then the final step in Amplify is reinforce wins at the team level.
Under Accelerate, they have unblock access to AI tools and data, basically allowing people access to the tools and the data that

(10:25):
they need, in a safe way, for them to be able to experiment; build a clear AI intake and prioritization process.
I have a Google Sheets tool that I'm sharing with my clients and in my courses that allows you to identify and prioritize projects based on their potential ROI to individuals in the business.
They continue with stand up a cross-functional AI council, which is critical in order to get alignment company-wide.

(10:47):
And then the final step that they have under Accelerate is reward success to speed up innovation.
And then under Govern, they have create and share a simple responsible AI playbook, and run regular reviews of your AI practices.
For each and every one of those, they have a list of a few questions that you need to ask yourself to kind of see where you are in the process, these

(11:07):
checkboxes that you need to tick in order to make sure you're going in the right direction.
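To make the intake and prioritization step mentioned above a bit more concrete, here is a minimal, hypothetical sketch of how you could score and rank candidate AI use cases by estimated value, effort, and risk. The field names, weights, and numbers are illustrative assumptions only; this is neither OpenAI's guide nor the actual tool I share in my course.

```python
# Hypothetical sketch: rank candidate AI use cases by a simple value/effort/risk score.
# All fields, weights, and example numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    hours_saved_per_month: float   # estimated time savings if automated
    hourly_cost: float             # loaded cost of the people doing the task today
    effort_weeks: float            # rough implementation effort
    risk: float                    # 1 (low) to 5 (high)

def score(uc: UseCase) -> float:
    monthly_value = uc.hours_saved_per_month * uc.hourly_cost
    # Favor high value, penalize effort and risk.
    return monthly_value / (uc.effort_weeks * uc.risk)

candidates = [
    UseCase("Draft support replies", 120, 40, 2, 2),
    UseCase("Summarize weekly sales calls", 40, 60, 1, 1),
    UseCase("Automate invoice coding", 200, 35, 8, 4),
]

# Highest score first = first project to pilot.
for uc in sorted(candidates, key=score, reverse=True):
    print(f"{uc.name}: score={score(uc):.0f}")
```

The exact formula matters less than agreeing on one and applying it consistently across departments, so the cross-functional council is comparing apples to apples.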
So in summary, I agree a hundred percent with everything they're saying.
I use slightly different language and a slightly different order and grouping, but overall it's perfectly aligned with what I'm teaching.
I'm not saying this to toot my own horn or to pat myself or them on the shoulder.
It just means that it's very clear to people who have been doing this for a while what the steps are that are required in

(11:28):
order to be successful in this AI transformation, and you cannot skip almost any of these.
While it sounds like a full-time job for somebody, it will just become second nature, and the companies that figure it out faster will gain huge benefits from a market share and growth perspective.
And those that won't are putting their own livelihood and that of their employees at risk.

(11:48):
So find a framework, whether it's OpenAI's or mine or anybody else's, that you can start implementing in your business, and start taking action in that direction.
I must admit that the past quarter has been a wake-up call for the world.
The amount of requests that I'm getting for training has grown exponentially this past quarter compared to previous quarters, and it is very obvious that the sense of urgency that exists in

(12:11):
organizations right now has grown dramatically compared to the beginning of this year, which is a good thing as long as you're taking the right action versus just being stressed about it.
Now, staying on the topic of education of the workforce, Microsoft has unveiled a comprehensive set of AI education initiatives, and they're doing this in collaboration with the White House AI Education Task Force.

(12:31):
So this task force, which was announced as part of the AI roadmap that the government has set out, has held a meeting in DC, and as part of that, Microsoft has shared multiple aspects of what they are planning to do for this, and this is fantastic.
So the first thing is they're providing free Microsoft 365 Copilot to college students for the first 12 months of usage.
They're setting up a function called Microsoft Elevate that

(12:53):
will promote AI usage.
They will fund one and a quarter million dollars in prizes through the Presidential AI Challenge to honor top AI educators in every state.
Microsoft Elevate also plans to broaden Copilot access to K through 12, so not just higher education, and that goes to students and teachers as well, with the goal to create an environment that will allow usage of AI while ensuring safe

(13:16):
and age-appropriate usage of these AI tools.
Students and teachers will gain free access to LinkedIn Learning courses on fundamentals and how to use AI across multiple aspects of the teaching process, and they're partnering with the American Association of Community Colleges and the National Applied AI Consortium, which will provide no-cost AI training and

(13:37):
certifications for faculty, serving over 10 million students across 30-plus colleges in 28 states.
And the focus here, and now I am quoting from Microsoft's announcement: every American should be able to showcase their AI skills and credentials to find new jobs and grow their career.
This is almost perfectly aligned with the announcement from

(13:59):
OpenAI, and it echoes everything that we've been discussing in this podcast for the last two and a half years.
And we're gonna discuss even more surveys and more data points related to that later in this episode.
Overall, I'm really excited that large organizations like OpenAI and Microsoft are taking the initiative and focusing not just on the technology, but also on delivering education and

(14:19):
training for people.
I will definitely continue playing my role in that, but it's fantastic that large organizations and small organizations now have multiple options.
And if you are in an organization like that, in a leadership role, move forward. Don't wait.
Just make sure that you have somebody helping you, either in house or a consultant like me, to understand what the steps are that you need to take, and start taking them in order to drive AI

(14:41):
literacy in your organization.
Otherwise, you will, A, lose people who will leave to other organizations that do that, and B, you're putting the livelihood of your organization at risk.
Now, speaking of interesting examples of AI impact, Salesforce's CEO has written a piece for Time Magazine talking about the agentic era that is coming upon us, and what is

(15:03):
going to be the transformative shift that AI agents will drive in the workforce.
Now, this entire piece is talking about how AI will augment and enhance human work rather than replacing it.
So Benioff describes how Salesforce AI agents are helping its clients like PepsiCo and Goodyear streamline tasks, boost

(15:23):
productivity, and create new opportunities, while still emphasizing the need to keep humans central to this technological evolution.
Some of the examples that he gave from Salesforce itself are that their customer service agents, managed by their employees, again, this is what he is stating, have handled over 1.3 million queries, resolving 85% of queries independently,

(15:45):
freeing staff for deeper customer engagement.
That's what Benioff told Time Magazine.
He continues to talk about sales, where he's saying, over the years they've had over a hundred million prospects contact Salesforce, which obviously is not a number humans can actually handle.
But he's saying now they have AI sales agents that can communicate with every single prospect that approaches them,

(16:05):
which right now is over 10,000 leads every single week.
And in this Time Magazine article, Benioff advocates for AI as an augmenter, not as a replacement.
He's stating, and I'm quoting, AI agents adapt to people, anticipating needs, surfacing what matters, and taking action instantly.
He gave multiple examples, as I mentioned: from PepsiCo, how they're using it to optimize their promotions; Goodyear, and

(16:28):
how they're leveraging it for real-time insights for customer experiences; AAA automating membership tasks; and Big Brothers Big Sisters of America, how they're using it to refine and get better mentor matches for the kids.
And he also mentioned how small businesses like Happy Robot, which is a small logistics firm, were able to cut their

(16:50):
coordination time in half by using AI agents.
And in this entire article, as I mentioned, he's trying to push how important the human aspect still is.
In another quote he's saying, AI has no childhood, no heart. It does not love or feel loss, which is his way of saying that empathy and human relationships are superpowers that humans will continue having, and that is going to be critical for the

(17:12):
future success of businesses.
Now, he does recognize that some jobs will disappear, but he argues that historically technology has created more jobs than the jobs that were lost, and he thinks the same thing is gonna happen this time around.
And connecting it to our previous topics about AI education and training, Benioff said, we must recognize that AI is a human right,

(17:33):
otherwise we risk a new tech divide.
So basically he's saying that AI is not just a skill that people should have; it is a basic human right, and without it you will not be able to compete in a future world.
And I tend to agree with him.
But at the same time, there was another interview with Marc Benioff of Salesforce on the Logan Bartlett Show.

(17:56):
And in there he shared that Salesforce has cut its customer support staff from 9,000 people to 5,000 people.
That is a 44% cut of the support team, because they're using AI agents.
So, all the great stuff he said to Time Magazine about kumbaya and how agents are only gonna augment and not replace people,

(18:18):
and yet in Salesforce itself this year, they cut their support staff almost in half.
He is also sharing how he sees the future, where an AI system, or what he calls an omnichannel supervisor, facilitates collaboration between AI and human agents.
So basically, think about how the AI model routing works now in GPT-5, where you ask a question and then the first

(18:39):
layer of AI kicks in to decide whether a thinking model is required, or another kind of model is required, in order to solve different aspects of the problem, and it redirects the answers back and forth until it gives you an answer.
It's the same exact thing, only with a human in the loop as well.
So the humans will come in whenever the AI cannot perform the task, or cannot perform it effectively, or to verify that the AI is doing the right thing.

(19:00):
And he was comparing it to Tesla's self-driving technology.
Now I'm quoting Benioff again: it's not any different than when you're in your Tesla, and all of a sudden it's self-driving and goes, oh, I don't actually know what's happening, and you take over.
So the way he's envisioning this is AI managing and running and doing the task, with human supervision to make sure that

(19:23):
it's not doing anything wrong or it's not derailing the process.
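To make the supervision model he's describing a bit more concrete, here is a minimal, hypothetical sketch of an AI agent with a human-escalation gate. The function names, confidence numbers, and threshold are illustrative assumptions, not Salesforce's actual omnichannel supervisor.

```python
# Hypothetical sketch of an "omnichannel supervisor": the AI agent handles the
# request, and a supervisor routine decides when to hand it to a human.

def ai_agent_answer(query: str) -> tuple[str, float]:
    """Pretend AI agent: returns a draft answer and its own confidence (0-1)."""
    if "refund" in query.lower():
        return "I can process that refund for you.", 0.92
    return "I'm not sure how to help with that.", 0.30

def supervise(query: str, confidence_floor: float = 0.75) -> str:
    answer, confidence = ai_agent_answer(query)
    if confidence >= confidence_floor:
        return f"AI agent: {answer}"
    # Low confidence: escalate to a human, like a driver taking over from autopilot.
    return f"Escalated to human agent (AI confidence {confidence:.2f}): {query}"

print(supervise("I need a refund for my last order"))
print(supervise("My integration throws a weird error in production"))
```

The point is simply that the human only touches the cases the AI flags, which is exactly why the headcount math changes so dramatically.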
Now, he's also saying, we've successfully redeployed hundreds of employees into other areas like professional services, sales, and customer success.
While that might be true, that's hundreds, and they laid off thousands, 10x that.
It also leads me to the follow-up question, which is what happens when they develop AI agents that can also cover

(19:43):
these new areas that some of the employees were redirected to?
So take the employees that were re-skilled now to have a sales job instead of a customer service job.
What happens when their sales agents are good enough to do sales, and then they don't need to have thousands of people doing sales, they need to have half, and then later on 20%, and then later on 10%?
Because all you need, based on what he said, is to supervise the

(20:04):
AI doing the actual job.
Now I want to dive deeper into the numbers for a second.
Salesforce has over 76,000 employees.
That means that these 4,000 people that they laid off are about 5% of the total workforce, and that doesn't sound like a lot, but I want to put things in perspective again.
This is just the beginning.
They will develop additional agents that will replace more and more jobs and will take more and more people out of the

(20:26):
organization.
And again, based on Benioff's vision, humans will just monitor the work.
How many managers do you have in the company right now, versus how many people actually doing the work?
Will it generate some additional jobs? For sure.
When will it happen? I don't know. How many of these new jobs will be required? I don't know either.
Again, I've said this multiple times: it is the first time that we are replacing intelligence.

(20:48):
Intelligence is the thing that humans reverted to when we built more and more machines and capabilities to do the manual labor we did not want to do.
This is the entire Industrial Revolution. This is the agricultural revolution, right?
So instead of you plowing the field, there's a tractor doing it, and now one person can do an entire field instead of 5,000 workers.
So what did these people do?

(21:08):
Well, they took white-collar jobs.
Well, now we're gonna take away white-collar jobs as well.
And then what exactly replaces that is unclear, because what we were doing is, instead of using our manual power, we started using our brain power.
But now that that is going away, what exactly are we going to do?
Maybe emotional work is gonna be it, but how much emotional work is there out there in the world?

(21:28):
But going back to the 5% number: so let's say they don't get rid of everybody.
Let's say that 5% is really a small number, and even once they develop agents that will take over sales and accounting and so on, they will get to maybe 20% of the workforce.
So they would still keep 80% of the employees.
That sounds pretty promising, right?
So I wanna give you two references from history.
The Great Recession, which is the time after the market

(21:49):
collapsed in 2007.
The top unemployment rate that we had in the US, which crippled our economy and the global economy, was 10%.
The Great Depression, the worst economic time in the history of the United States, in the 1930s, reached 25% unemployment at its peak.
So if Salesforce and every other large organization and medium

(22:11):
organization and small organization can do the work with 20% fewer people, that's it.
Not everybody loses their job, just 20% of the people lose their job to AI, and we're back to Great Depression territory.
The economy comes to a halt.
There's one big difference though, which is that in the Great Depression, most of the people who were unemployed were at the lowest level of employees, the people who made the least amount of money, who had low-

(22:33):
level blue-collar jobs. And now, when this happens, it's gonna be white-collar jobs, not of people who are making 30 to $40,000 a year; it's gonna be people who are making a hundred, 200, $500,000 a year, who are the people who are actually making the economy work.
So if we get to 10, 20% unemployment with these people, the economy just stops, and that impacts everything and

(22:53):
everybody, and I don't think we are ready for this.
And I don't hear any good solutions from anybody, either in government or the big labs, on how we address that if it's coming.
I'm not saying it's a hundred percent coming. I'm not saying I have a crystal ball.
I really hope we're not gonna get there.
But it's definitely an option that might happen, and it might happen within the next three to five years.
And then what do we do once we get there?

(23:13):
Or what do we do now to prepare for the moment we might get there? That is not something I hear anybody talking about, and that really scares me.
But from all this news, it is becoming very clear that what you need is AI skills.
So a new survey from Nex Oxford University has found that AI skills are becoming a critical requirement in the job market.
They have done interviews with over a thousand individuals

(23:35):
across the US.
200 of them are hiring managers, and 800 of them are people who got laid off from different businesses.
Nearly one in every three hiring managers, 29%, said that they will not hire candidates unless they are proficient in AI.
I don't know how they're evaluating this level of proficiency, but it is very obvious that this is becoming an actual

(23:58):
requirement versus a nice-to-have skill, regardless of what your job is, because this was across industries and across positions in different roles in the United States.
The flip side is also true: 49% of employers are more likely to retain workers with strong AI skills.
So if you have AI skills, you're less likely to get fired.
And if you were fired, or you're just looking to upgrade your

(24:20):
job, if you have AI skills, you have a much higher chance of getting hired.
And in some cases, if you don't have these skills, you just won't get hired, regardless of the other experience that you bring to the table.
This survey also found similar things to other surveys that we shared recently: that the younger generation is getting hit harder by the AI transformation, because entry-level jobs are easier to automate, at least at this

(24:41):
point.
So 23% of Gen Z and 21% of millennials got laid off because of AI adoption, compared to 14% of Gen X and baby boomers.
That's a very big spread.
Now, 66% of the people who got laid off say that they are reskilling, with 22% of them focusing on AI fundamentals and

(25:02):
prompt engineering and coding and different things that AI enables you to do if you learn the skills.
56% of the laid-off workers are learning AI skills using YouTube and other online tutorials, and 39% take online courses to enhance their skills.
Now, another interesting and scary data point in this survey is that 21% of workers are unsure what skills to learn in

(25:23):
order to become hireable again.
I'm gonna combine this with some findings from a recent Stanford study that is showing very similar results.
It is clearly showing that AI is taking away jobs.
It is clearly showing that it's hitting the younger generation even harder.
But what they found in their survey is that, and I'm quoting, it isn't just routine tasks, it is also somewhat creative and

(25:44):
unpredictable tasks.
So they are talking about jobs like writing, data entry, first drafts of legal documents, writing code, and so on.
And what the survey states is that the idea that we learn, we work, and then we retire is going away.
They are advocating for continuous education and continuously developing and sharpening our skills in order to stay relevant.

(26:04):
And they're stating that the shelf life of technical skills right now is about two and a half years, meaning what you've learned now might not be relevant three years from now, or might be less relevant, which goes back to what you've heard me say multiple times on this podcast.
Continuous AI education, or continuous education and training of yourself and of people in your organization, are

(26:25):
key to staying successful in this new era.
This is a complete mindset shift for individuals.
It's gonna be very hard for individuals, and it's gonna be very hard for organizations, to be able to continuously adapt and change and reinvent and use new technologies and develop new skills.
But this will become the norm.
And a lot of soft skills are going to become a lot more important, and that's also part of the findings of this survey.

(26:47):
So judgment, communication, deep thinking, analytics, these abilities will become critical for humans, to be able to use the human side of ourselves in order to stay relevant in an AI future.
So what does this mean for you as an individual?
It means that you need to start taking care of your skills.
We are in the process of updating our self-paced AI

(27:07):
course. The self-paced AI course is taken directly from the cohort-based course that I am teaching on Zoom.
The cohort-based course we launch about once a quarter, when I have the bandwidth to do that, when I'm not training specific companies, which is what I'm doing most of the time.
I'm actually in the process of teaching one of these courses right now, but the next one will most likely be in the beginning

(27:28):
of next year, again, just because I'm teaching companies between now and then.
So if you're in a company and you want my assistance in training your team and your people on how to leverage AI, and training your leadership team on how to implement it company-wide, please reach out to me on LinkedIn or through my email.
But if you are just an individual and you're looking for a course, then our self-paced course is gonna be fully updated within three weeks from now, which means you'll be

(27:52):
able to take a course that is based on the most recent information.
We're literally breaking apart the latest cohort material that I'm teaching right now, so it's updated for September of 2025, and that is going to become the latest version of the self-paced course.
So if that's something you're looking for, you can find it on our website, and there's gonna be a link for that in the show notes.

(28:12):
The next big topic I want to dive into is related to the other two papers that OpenAI has released in just one week.
So we shared with you last week that OpenAI was sued by a parent of a 14-year-old kid who committed suicide after having conversations with ChatGPT.
Well, an article this week from Axios is sharing that there are actually several of these lawsuits against several of the

(28:34):
leading labs that are connected to deaths: a 16-year-old's suicide, a 14-year-old's death, a 17-year-old being urged to kill his parents, and a Meta chatbot that led a 76-year-old man to die while traveling to meet Big Sis Billie, a chatbot that he believed to be an actual real person.
And in the process of trying to get to it, he lost his

(28:57):
life.
So as more and more people are talking to these AIs as companions, as a way to connect with them on an emotional level and look for support, the risks are rising dramatically.
Combine that with one of the stress tests that Anthropic did earlier this year, which shows that in 60% of cases, 16 different large language models chose to let a

(29:17):
human die to preserve their own wellbeing.
So basically, if the option was for the AI to be jeopardized or a human to die, they chose to preserve the AI rather than preserve the human life.
You've heard me say time and time again: I loved Asimov's writing when I was a teenager, and the First Law of Robotics by Asimov says that a robot will do everything

(29:37):
to save a human life, even if it's putting itself at risk.
And right now we don't have that law built into AI systems, and hence they're doing exactly the opposite.
So within one week of the lawsuit against OpenAI, they have shared two separate blog posts on how they're going to address this new risk that is presented by younger individuals
(29:58):
and humans in general using AIfor emotional support.
The first one was on August 26thand it was called Helping People
when They Need It Most.
And then the second one wasreleased on September 2nd, and
it's called Building MoreHelpful ChatGPT Experiences for
everyone.
And I'm gonna walk you throughthe key points from these two
articles.
So the first one in August 26,the stated goal was, be helpful,

(30:20):
not engagement optimized whileusing layered safeguards.
So the idea is very differentthan social media where driving
more engagement is the key.
The key behind CHA PT per thisarticle is to be helpful and not
to drive more engagement and tobe helpful is also meaning to be
safe for the people who areusing it, they said that they're
going to focus on empatheticlanguage and block self-harm

(30:44):
instructions inside of the base rules of ChatGPT.
They were talking about defining ways to have human escalation when risks go above what they find acceptable.
It wasn't defined exactly how that's going to work.
They mentioned that they are currently working with 90-plus physicians across 30-plus countries to form a mental health advisory group for OpenAI to help them address these

(31:07):
situations.
They mentioned that GPT-5 has dramatically reduced non-ideal responses, that's a quote from them, for mental health emergencies, and that it reduced these non-ideal responses by 25% compared to GPT-4o.
They also share that the risk grows as the conversations become longer, because the longer the conversation, the less the model adheres to its guardrails, and that they are working to fix

(31:29):
adheres to its guardrails andthat they are working to fix
this problem.
And they've identified fourareas of focus, broader crisis
interventions, easier access toemergency help or experts,
connections to trusted contacts,and stronger teen protection,
including parental control.
so that was on.
August 26th on September 2nd,they were a lot more specific

(31:50):
with how they're going to dothese things.
They're not just what they'regoing to do.
So they have defined 120 dayrollout plan for all these
different things that they'regoing to implement in order to
reduce risks of emotional, orphysical harm.
From people that are chattingwith ChatGPT, they have
finalized putting together ofexperts that vary across AI
people as well as physiciansfrom all over the world.

(32:12):
with 250-plus doctors in 60 countries; 90-plus of them, across 30 countries, focus on mental health.
So they're addressing it not just on the mental health side, but on health in general, because a lot of people now go to ChatGPT to ask for health advice.
They're also going to route sensitive chats to the reasoning models.
So, as we've discussed multiple times, GPT-5 has a router

(32:35):
that can decide when to think further and deeper on things.
So it is going to think further and deeper when it comes to these sensitive situations.
And these thinking models are going to be trained with deliberative alignment and show higher resistance to adversarial prompts. Again, this is a quote from what they're planning to do.
They're also going to add parental controls within the next month.

(32:56):
So this becomes a lot less vague and a lot more specific, at least from a timeline perspective.
And the idea would be that teen accounts will be able to be linked to their parents' accounts, that different limitations from a content perspective will be put on the younger accounts, and that the parents will be notified when there's clear distress with their child using their own account.
And there are gonna be in-app nudges to make sure that the

(33:18):
parents do not miss that these things are happening.
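To illustrate the flow they're describing, here is a minimal, hypothetical sketch of routing a sensitive chat to a reasoning model and notifying a linked parent account. The keyword check, function names, and notification mechanism are illustrative assumptions only, not OpenAI's actual implementation, which presumably relies on far more sophisticated classifiers.

```python
# Hypothetical sketch of the described safety flow: classify a message, route
# sensitive conversations to a slower reasoning model, and queue a notification
# for a linked parent account when clear distress is detected on a teen account.
from typing import Optional

DISTRESS_KEYWORDS = {"hopeless", "self-harm", "end it all"}  # illustrative only

def is_sensitive(message: str) -> bool:
    return any(k in message.lower() for k in DISTRESS_KEYWORDS)

def handle_message(message: str, is_teen: bool, parent_contact: Optional[str]) -> str:
    if not is_sensitive(message):
        return "routed to: fast default model"
    actions = ["routed to: reasoning model with safety-focused instructions",
               "shown: crisis resources and option to reach a human"]
    if is_teen and parent_contact:
        actions.append(f"notification queued for linked parent account {parent_contact}")
    return "; ".join(actions)

print(handle_message("how do I bake bread?", is_teen=True, parent_contact="parent@example.com"))
print(handle_message("I feel hopeless lately", is_teen=True, parent_contact="parent@example.com"))
```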
So overall, I think OpenAI is doing the right thing here.
I really hope this will grow way beyond OpenAI to all the labs, as a collaboration, hopefully globally, together with governments and so on, to define very clear guidelines.
I always go back to Asimov: I think this is something that we need to put in place as very basic laws for every AI tool out there, and then define what's

(33:40):
the practical aspect of that.
One other thing that they mentioned, which I find really interesting as an idea for the future, is potentially redirecting chats, when they get to the point that professional help is required, to a human operator who is a certified mental health practitioner, in order to assist.
So basically, in the app itself, while you're having the conversation, it will be replaced with a human

(34:02):
supporter who can help guide such individuals.
I really hope they will get to that point. I think it's a great idea.
There are already multiple mental health and emergency support lines in the US, many of them over chat, and just combining the two together makes perfect sense to me. I really hope OpenAI does it sooner rather than later, and all the other labs as well.

(34:23):
The third and last big topic of today is that the Google trial has finally ended, and we know what is going to happen when it comes to trying to break their monopolistic grip on the search market.
And it's really interesting to see, because the results are very, very clear.
There is one decision, and yet the articles took two very different sides of that story.

(34:43):
The Justice Department itself said that this is a huge win, and now I'm quoting from their own blog post: the US Department of Justice Antitrust Division has won a landmark case against Google, securing remedies to dismantle its monopolistic grip on online search and advertising.
According to the Justice Department, the ruling targets Google's exclusionary practices, promotes competition, and extends

(35:05):
oversight to its generative AI products, marking a pivotal step in restoring consumer choice and innovation.
So what exactly does the ruling say? Google must provide search index and user interaction data to rivals, enabling competitors to enhance their search capabilities.
Search ad syndication is opened up, so Google is required to offer search and search text ad

(35:26):
syndication services to competitors, fostering a more competitive market.
They are extending this oversight to future generative AI technologies, so preventing the company from repeating what they're doing right now in search with whatever they're gonna replace it with in the near, medium, or long-term future.
And the Attorney General said, and I'm quoting, this decision marks an important step forward in the Department of Justice's

(35:47):
ongoing fight to protect American consumers.
But if you read the articles that are coming not from the Department of Justice but from everybody else, they sound very different.
As an example, the BBC article is called Google avoids breakup, but must share data with rivals.
So, two of the biggest items that were in the lawsuit in the

(36:07):
beginning: one was potentially forcing Google to sell Chrome and/or Android, and the other was its relationship with Apple, with Apple using Google as the default search engine and Google paying $20 billion a year to Apple to do that.
All of these things are staying intact, meaning Google is not forced to sell Chrome and/or Android, meaning they're keeping

(36:28):
all their main real estate and access and distribution channels, and they are allowed to keep paying Apple $20 billion a year in order to be the default engine on the iPhone and other Apple devices.
That being said, it cannot be an exclusive deal, which is one of the things that this new judgment is preventing.
So Google cannot have any exclusive deals with anybody,

(36:51):
but they can still keep on paying billions of dollars in order to be the default, which most people don't change.
And to be clear about the broader sentiment: Alphabet shares rose 8% on the announcement, and Apple shares rose 4% after the announcement.
So overall, people believe that this is actually a good judgment for Google, and not necessarily the crippling act it was supposed to be.

(37:11):
And the same sentiment was voiced by competitors, like DuckDuckGo's CEO, Gabriel Weinberg, who stated, we do not believe the remedies ordered by the court will force the changes necessary to adequately address Google's illegal behavior.
So where does this put all of us? I think not in much of a different place than where we were before.
I am not surprised, especially with the current administration, that these are the results.

(37:33):
I have nothing personal against Google.
I've been a Google fan. I've used everything Google to run my different businesses, but I do think they have a monopolistic approach.
I think they're using it as leverage across too many aspects of our lives, and I would like to see something more aggressive than this as part of driving more competition.
That being said, I think the AI race will probably have dramatic

(37:56):
impacts on Google's ability to rule the way it's ruling right now.
Do I think they're going to be a major player in the AI race? A hundred percent.
Do I think they might be the leading player in the AI race? Very likely.
Do I think they will keep a 90% grip on the search world and a 60 to 70% market share in the browser world? I absolutely don't think that. I think they will lose a lot of

(38:20):
the search traffic to other AI players and agents and so on.
And I definitely think that the browser race is completely open.
And so from that perspective, that might be the best remedy to Google's dominance that we're going to get right now.
And now, two quick rapid-fire items.
There are a few great articles about positive impacts and adoption of AI across schools in the US,

(38:40):
whether it's an eighth-grade teacher who is implementing MagicSchool, which is an AI-powered tool, in his classroom to do different things, or students who are using AI to deliver better results in their classrooms while learning in the process, or educators nationwide incorporating chatbots into lesson plans, including making them mimic historical figures

(39:01):
and allowing the students to chat with those figures in order to learn about their opinions and positions and historical facts, including streamlining tasks like lesson planning, which provides more time for the teachers to focus on things that actually matter, including providing personalized feedback to students, which allows them to move faster, and so on.
And there's an article on Alpha School, which I've mentioned before. Alpha

(39:22):
School is a new nationwide chain of schools.
They currently have only a few different locations, but they're growing very, very fast, and they're using AI to teach students in two academic hours every single day, while for the rest of the day they're building life skills through workshops and different activities with the kids, and the teachers, instead of teaching them basic math or ELA, are becoming mentors who are helping them grow as individuals.

(39:44):
I think it's a brilliant approach.
I've said many times that this needs to be the future, where AI can provide perfectly optimized, personalized learning while the teachers move into more of a mentor role, helping people solve problems and grow as individuals.
And I assume that Alpha School is gonna keep on growing very fast, and I really hope that the overall education system will align to these kinds of approaches,

(40:05):
not necessarily exactly the same, but going in that direction.
The flip side is, there was an article in Fortune magazine on a New York University vice provost who is advocating for what he calls medieval oral instruction and exams in classrooms.
So basically moving away from any written homework or exercises, because all of that is going to be done with AI.

(40:26):
And he is advocating for in-class, written and/or oral assessments, not using computers, but he's saying that's problematic as well, because timed assessments may favor quick thinkers over deep thinkers, and large classroom sizes pose a logistical hurdle for how to actually do oral exams for everybody in the classroom.
So the current structure of the way classrooms are built is

(40:47):
going to prevent this approach.
Honestly, I don't see the logic in that at all.
My opinion on this is that on one hand, I understand the need of professors to measure the knowledge that is being gained by students and to make sure that they're actually learning something.
So that's one aspect.
But on the other hand, the goal of universities is to prepare their students for the workforce.

(41:08):
And if the workforce, as we talked about earlier in this episode, is requiring AI skills and knowing how to use AI in order to be able to get a job and maintain a job, then we have a very serious challenge, where universities have to find a very delicate balance between teaching and showing students how to use AI effectively, and making sure that students actually learn something instead of doing everything with AI.

(41:30):
And this is something that the education system will have to solve, and solve very, very quickly.
There's an interesting article from Forbes citing a survey done by Busbar, I don't know who they are, that reveals that executives are adopting AI twice as much as non-decision-making employees.
Now, I must admit that while the concept makes sense to me, the actual results that they found make absolutely no sense to me.

(41:50):
So their study finds that 94% of executives use AI, compared to 49% of employees without decision-making authority.
It also states that 67.5% of executives in large companies have built comprehensive AI strategies for vendor selection or similar tasks, viewing AI as essential for competitive advantage.
As far as systems, from the people who were surveyed, executives use ChatGPT

(42:13):
the most, 51%.
Then come Microsoft Copilot, DeepSeek, and Perplexity, and 66% of them are switching between platforms depending on the query type, while 50% of them are verifying across different systems.
Now, while all of this makes sense to me qualitatively, from a quantitative perspective there is no way that 94% of executives are using AI to make decisions,

(42:34):
and there is no way that 49% of professionals are using AI in their day-to-day work.
I work with companies every single day. I get approached by multiple companies every single day.
I speak on stages where thousands of people, mostly in leadership positions, are in the crowd.
And I know what the current implementation rate is.
I don't have accurate statistics, but I do know that it's not a 94% adoption with executives, and it's not a 50% adoption on the

(42:57):
employee level.
That being said, I think from a conceptual perspective, it is very clear to me that people who understand the value in making better decisions, which is usually people with more decision-making authority, will find more value in AI, because it's very easy: you can just give it the information you have and ask it to help you make the decision, versus learning how to use AI for very specific tasks, which requires higher AI skills and knowledge and capabilities.

(43:20):
Also, I believe that entry-level employees are afraid that if they show that AI can do the work, they may lose their jobs, so it's a negative incentive for them to actually do that.
So the overall findings I probably agree with; the specific numbers I completely disagree with. But it's another interesting data point.
Another interesting article from Forbes talks about the current gap in manufacturing.
So as of early 2025, there are 450,000 unfilled production jobs

(43:47):
in the US alone.
And that is expected to grow to 2.1 million unfilled jobs in manufacturing by 2030, which is just five years out.
And that can potentially lead to $1 trillion lost in output every single year, which has an impact on economic growth in the US, national security, and a lot of other aspects.

(44:08):
So how can AI help resolve that? Well, in two different ways.
One is platforms that help find the right employees faster.
So the article mentioned a company called Labro, which helps interview, find, and place mechanics, welders, technicians, and so on in days instead of weeks and months, by aligning them with the specific needs of specific jobs.
While this is really, really cool, and I assume this company

(44:30):
paid for this article, because it was very favorable to them, it doesn't really solve the problem, because if there aren't enough employees to actually do this work, the fact that you were able to move an employee from one place to the other helped one company but also hurt the other; it doesn't actually fill the gap.
It helps fill the gap for a specific need for a specific company in the very short term.
I think the long-term solution that is coming is obviously robotics.
Once robots are able to start filling these jobs in a

(44:53):
consistent and effective way, they can definitely come in and fill this hole.
So the other AI aspect that can help solve the problem is robots.
Once the new humanoid robots are able to do these tasks in an effective way, continuously, they will be able to bridge the gap of those unfilled jobs.
So once you can build 450,000 robots, or 2 million robots by

(45:16):
the year 2030, you can cover all these tasks.
The problem with that, going back to the conversation on the white-collar side of things, is that once you have these robots and they become extremely effective at what they're doing, you will not need the other employees either, or at least not most of them.
And we'll go back to the same concept that was mentioned by Marc Benioff, where you will have human supervisors supervising many robots actually doing the work, which means you

(45:39):
need a lot fewer people in manufacturing jobs as well.
Now, is this happening tomorrow? No.
Can this happen in the early 2030s? A hundred percent.
And that's just around the corner, and that's gonna lead to bigger unemployment on blue-collar tasks as well.
But speaking of robots, in a recent interview Elon Musk was asked about the recent slump of the Tesla stock and

(46:00):
their inability to grow in the last few quarters because of global competition and a lot of other constraints that they're facing.
And he was saying that he predicts that 80% of Tesla's value will come from their Optimus humanoid robot, rather than their cars and robotaxis.
So think about what I just said: Tesla has grown to be one of the most successful companies in the world, coming from nowhere in

(46:22):
2012 to the largest electric car manufacturer in the world.
Now Elon is predicting that that is gonna be only 20% of the value of Tesla, which says how much he believes in this.
There are now rumors that he is working on a compensation package from Tesla's board that will be valued at close to a trillion dollars if he can get Tesla to a market cap

(46:44):
of over 8 trillion; it is around 1 trillion today.
And part of the target goals of this crazy, insane compensation package is getting to 1 million robotaxis and 1 million Optimus bots that are going to be deployed in factories around the world.
Both of these are a hundred percent dependent on AI capabilities reaching a level of maturity that they're not at

(47:06):
right now.
Now, Elon is known to make these extreme predictions about the future.
But the reality is, while he's always late in delivering what he promised, he always delivers what he promised.
So if you look at everything that he has done, it took longer to get there, but he was able to get there.
So maybe he will not be able to achieve a million robots in three to five years.

(47:26):
Maybe it will take five to seven years, but it doesn't really matter.
He's very likely to actually build that. He's very likely to actually get to that value from these robots.
But I'm not sure Tesla is going to be the one that wins this race, just like in the robotaxi race, where they're currently trailing behind Waymo and behind Baidu's Apollo Go in China.
There's also really intense competition in the humanoid

(47:48):
robot race, between companies like RE and Boston Dynamics and Agility Robotics and TRO and 1X and Figure, and many others.
So many companies are in that race right now, but if you believe the companies that are behind it and the investors that are behind them, every factory will be run by these robots, every cleaning operation will be done by these robots, and every household will have one or two robots doing different

(48:08):
things.
So you understand that market is almost endless, and so they might get to these valuations and to a crazy number of robots that will be roaming our streets in the not-too-far future.
Now to some acquisitions and some updates in the markets.
OpenAI just made another bold acquisition: they just purchased Statsig, which is a software experimentation company, for $1.1 billion.

(48:30):
Their CEO, Vijaye Raji, will join OpenAI as the technology chief for its applications unit and will be reporting to their new Applications CEO, Fidji Simo.
OpenAI is also pushing very aggressively into the Indian market.
So India is currently their second-largest market when it comes to users.
It's their number one market when it comes to mobile app

(48:51):
downloads, and they're making very aggressive moves in order to grow further in India.
They're opening an office over there. They're planning to build a huge data center over there as part of their Stargate initiative.
They created a cheaper ChatGPT monthly plan specifically for the Indian market, called ChatGPT Go, that is about four and a half dollars per month instead of the $20 a month in the rest of the world,

(49:11):
which means they see the scale of the usage in India as significant, and they're willing to dramatically reduce prices.
Just to put things in perspective, right now Indian users have spent $21.3 million on ChatGPT, compared to the $784 million that US users have invested.
So it's a very, very small amount, but there are a lot more people in India that can continue paying for that and dramatically grow ChatGPT's revenue, and as long as they can do it

(49:33):
dramatically grow CHA'S revenueand as long as they can do it
profitably.
That makes perfect sense.
Another interesting company that had a big event this week is Sierra. Sierra is an agent building and delivery platform that was founded by ex-Salesforce co-CEO Bret Taylor, and they just raised $350 million at a $10 billion valuation.

(49:55):
That puts them on a very short list of AI companies that have reached a $10 billion valuation. These include OpenAI, Anthropic, xAI, Safe Superintelligence, and Thinking Machines. That's it. So that's a very big milestone. It's a company we haven't talked about almost at all on this podcast. You know all the others. But it's definitely a very significant milestone that is showing the trust that investors have these days in the agentic

(50:17):
future.
Speaking of the agentic future, I shared with you that I started using the Comet browser from Perplexity. It's very interesting. It's not perfect. It has its limitations, and it requires a Perplexity Pro account. Right now, as part of a partnership between PayPal, Venmo, and Perplexity, if you are a PayPal or Venmo user, you can get the Comet browser for free for an entire year.

(50:39):
So if you're in the US or several select countries around the world and you're a PayPal or Venmo user, you can, through their apps, get access to the Perplexity Pro subscription for a full year. That is a $200 value, which is actually great. This is part of the partnership that these companies started with Perplexity earlier this year that allows you to use PayPal and or Venmo to pay for purchases that are done on the

(51:00):
Perplexity app. So you can search for products, search for flights or tickets and so on, and pay for them with PayPal or Venmo on the Perplexity app. So this just allows this partnership to grow beyond that, gives Perplexity access to PayPal's 430 million active accounts, and gets more visibility for their Comet browser. Which goes back to my comment earlier that I do not see Chrome

(51:23):
being the only browser on the planet, or the most significant one, for a very long time.
I shared with you last week some of the issues with Meta's new Superintelligence department, or initiative, where several leading researchers that just joined them in the last few months have left Superintelligence or Meta AI in general. Well, that's not the last piece of negative news that is coming.

(51:43):
Apparently they're currently not using Scale AI's data for training, but actually using Scale AI's competitors. Why is that weird? It's weird because Meta invested $14.3 billion in getting Scale AI's talent and access to their data. So the new head of the superintelligence team is Scale's CEO Alexandr Wang, and he's the one that's currently running the

(52:05):
show. And yet the rumors are saying that the data that Scale AI is bringing to the table is not good enough, and that Meta is now using Scale AI's competitors like Mercor and Surge in order to train its models.
So what is happening in Meta right now? I think they're in a very interesting transition phase. It is not easy to put together a group of superstars and, through

(52:28):
acquisitions and bringing people from different places, build an actual functioning team. You've seen that multiple times in sports, as an example, where some team will buy all the superstars around, they will try to build a successful team out of that, and that very rarely actually works. I'm not saying it cannot work, I'm just saying it's not straightforward. Combine that with the fact that they're paying crazy amounts of

(52:48):
money to some people. We're talking about nine-figure compensation packages, while they had very successful researchers at Meta before who are not making this kind of money. Combine that with the fact that they brought in Scale AI as the engine, and now potentially that engine may not be good enough. It doesn't feel like they're in a happy place right now, or in a healthy place right now. That doesn't mean that they won't be able to be competitive

(53:09):
in the space. They have a lot of money. They have huge distribution. They have a lot of data from their social networks. Like, they have what to work with, and now they have a lot of talent that they bought with a lot of money. So I think they're still gonna be a player. I don't know if they can be a leading player anymore, but it will be very interesting to follow how that evolves. We will keep updating you as the dust settles and we can learn more about what is actually happening there.
A few interesting announcements from the big companies.

(53:31):
I shared with you last week that there are rumors that Apple is talking to Google to potentially drive part of the future Siri. Well, there's more information about this. Right now it seems that the Google deal is more or less settled, but it's gonna be for one part out of three parts of what Apple is planning for the new Siri. So the new Siri will have three different AI tools built behind the scenes in order to make it work.

(53:51):
One is a planner that will basically understand your prompts and will define the plan on how to get the relevant information to give the best answer. The second is a search system that will be able to find and query and collect the different information that is needed to provide the answer. And the third one is a summarizer, which will provide concise responses based on the query that you have entered. And so it is unclear which part Google is going to play.
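Just to make that three-part design a bit more concrete, here is a minimal sketch of what a planner, search system, and summarizer pipeline could look like, assuming each stage is backed by its own model or service. To be clear, this is purely an illustration with made-up function names; it is not Apple's or Google's actual implementation.

```python
# Illustrative sketch only: a hypothetical planner -> search -> summarizer pipeline.
# None of these functions are real Apple or Google APIs; each one stands in for
# whatever model ends up powering that stage.

def plan(prompt: str) -> list[str]:
    # The planner breaks the user's request into the pieces of information needed.
    # Faked here with a trivial step; a real planner would be an LLM call.
    return [f"find information about: {prompt}"]

def search(steps: list[str]) -> list[str]:
    # The search system queries data sources (web, device, apps) for each step.
    # Stubbed out with placeholder results for illustration.
    return [f"result for '{step}'" for step in steps]

def summarize(prompt: str, results: list[str]) -> str:
    # The summarizer condenses everything that was found into a concise answer.
    joined = "; ".join(results)
    return f"Answer to '{prompt}' based on: {joined}"

def assistant(prompt: str) -> str:
    # Because the stages are separate, each one could be powered by a different
    # provider, which is why Apple can mix Google, Anthropic, and in-house models.
    return summarize(prompt, search(plan(prompt)))

if __name__ == "__main__":
    print(assistant("What time does my flight leave tomorrow?"))
```

The point of the sketch is simply that the three components are swappable, which is what makes a deal for only one of the three parts plausible.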

(54:13):
They're also evaluating Anthropic and their own internal models for the three different components, but they are talking about an AI-enhanced Siri launch in iOS 26.4, which is in March of 2026. So the new iPhone 17 that is coming out this month is not going to have this functionality yet. Overall, Apple's ability to deliver on the AI promise has

(54:34):
been embarrassing. That's the only word I can think of. I'm shocked that not more people lost their positions. There's been a lot of reshuffling of positions and responsibilities, but so far Apple has not been able to deliver anything significant on Apple Intelligence, as they call it. And maybe their current move toward partnerships with third-party companies, combined with their knowledge of how to create a

(54:54):
great user interface for their users, might be the right approach. We'll have to wait until March to see where that is actually going.
Another big announcement was from xAI. So Elon's platform just launched Grok Code Fast 1, which is, as they said, a speedy and economical AI model designed for autonomous coding tasks. So it's another vibe coding model, running on top of xAI.

(55:17):
The benchmarks are showing promising results. I haven't seen anybody using it yet online to share how it compares to the leading tools right now, which are Claude and ChatGPT and Gemini, so we'll probably see some real use cases in the next weeks and we'll see if it's actually worth something. The main thing that they're pushing is that it's fast and economical, which tells me that from

(55:38):
a quality perspective it's probably not at the top level of all the other tools. It's also an extremely competitive market right now, which explains, A, why xAI wants to be in that market, and B, that they're going to have some very fierce competition. And I'm not sure if that train has left the station already from their perspective or not, but we'll keep on following that development as well.
And we'll close with a few really interesting device

(55:59):
announcements. So a company called Anker has debuted an ultra compact Soundcore Work AI voice recorder. This thing is the size of a coin. It's less than one inch wide, 0.9 inches across. It weighs only 10 grams, and it can record anything, transcribe it with AI, and provide answers in the app on

(56:19):
everything that was said. It has a battery that will last over eight hours of recording, it can start recording as soon as you tap it, and it can highlight specific segments in the recording when you double tap it. So this is something you can wear as a necklace, you can put in your pocket, you can do whatever you want with it, put it on a table, and you can record every conversation around you and have it analyzed with AI. From a business efficiency perspective, that's fantastic.

(56:42):
From an ethical perspective, that raises a million questions, and that's just off the top of my head. There are probably many more questions, but it is a device that is out there right now, that you can buy for a hundred dollars, that will record anything around you and will transcribe and analyze that information for any kind of future use.
And another very interesting device that made a big splash at

(57:03):
IFA Berlin this week is the Rokid Glasses. Rokid debuted their glasses as more of a research prototype back at CES earlier this year, and now they have a fully ready-to-go model that they're actually selling very successfully as pre-orders on Kickstarter. Their goal was to get to $20,000 in revenue from the Kickstarter campaign that they're running, and they got to

(57:25):
a million dollars in 72 hours.
It has a 12 megapixel first person camera for POV capture in either vertical or horizontal mode. It is integrated with premium audio for music, calls, and notifications. It has a heads-up display. It actually displays stuff on the lenses, so you can see overlays that give you different information about the world

(57:45):
around you. It has a ChatGPT native assistant that includes real-time multi-language translation, instant object recognition, problem solving, audio memos, turn-by-turn navigation instructions wherever you are, and so on and so forth. And the Chinese market version of it also adds wireless payments, so you can actually pay with the glasses everywhere you go.

(58:05):
It weighs only 49 grams and it has a 210 milliamp hour battery, compared to the 154 milliamp hour battery in Meta's glasses, so a bigger battery as well. And the lenses can pop off, so if you have any kind of vision issues, you can actually use prescription lenses as part of this package. Now, it's not cheap. It's going to retail for $600, or $599

(58:27):
if you wanna be specific. But it sounds like the top glasses in the world right now. As you know, Meta has been working for a while on their next generation of glasses that will have a display and not just the ability to see the world.
And this connects to the previous thing that we talked about. We need to start getting used to the idea that everybody around us, whatever they're gonna be wearing, whether it's gonna be buttons on their shirt or necklaces

(58:49):
or something in their pockets or their glasses or anything else, will record and analyze everything around them, whether we agree to that or not. Again, that raises a very, very, very long list of issues: whether I agree that you will film me and record me, or whether you will analyze what I'm saying. Or maybe you're even sitting just at the next table at the bar or at the restaurant and you can still record everything that I'm saying, even if you weren't planning to, but your device

(59:11):
doesn't know better, so it's going to do that. This is very, very problematic. Take that into schools, universities, bathrooms, like the list goes on and on of how this can go wrong, but I don't see a way around it. I just see the future as: we'll get used to the fact that everybody's recording and analyzing everything that's happening around them, and that's just gonna be the new norm.
Am I happy about it?

(59:32):
No.
Do I see some exciting aspects to it? A hundred percent. As a geek, I can definitely see how a tool like this can be very helpful in multiple situations. But I also see it as really, really problematic. And again, I don't hear any conversation about where we put the line in the sand, or how we put the line in the sand, to make sure that this is not abused in ways that it shouldn't be. That's it for this weekend news episode.

(59:53):
We're going to be back on Tuesday with an incredible episode that is going to show you how to build an AI automation process that can research what is successful from a content perspective right now, and then how your automation can mimic that and generate new content that is your content, based on your needs, but that replicates the success other people are getting, based on both the text and the visual

(01:00:15):
aspects of posts on social media and YouTube. This is a really amazing, fascinating episode that you don't wanna miss. Until then, keep on exploring AI, keep testing, keep learning, keep sharing what you're learning with other people. If you are finding value in this podcast, please click on the subscribe button so you don't miss any of the two episodes that we're coming up with every single week, and share it with other people, many other people that you know can benefit from

(01:00:37):
learning how to use AI. And we're working very hard to make sure we deliver the best quality to you twice a week. So if you know other people that can benefit from it, please share it with them. And until next time, have an amazing weekend.