
Is the AI disruption overhyped or just getting started?

Yale says the labor market isn’t budging. Walmart is betting $1B that employee training is the missing piece. Meanwhile, Gen Z is pivoting to trades in an AI-fearing talent shift no one saw coming.

This week’s AI headlines tell a much deeper story than flashy product drops.
From ChatGPT turning into a shopping mall to Claude going full autonomous coder, and the rise of “work slop” at the office—every release points to a strategic fork in the road: consumerization vs. enterprise agents.

Your job as a business leader? Know which wave to ride—and when.
This episode delivers the insights to help you navigate the noise, avoid the hype, and see what’s really happening under the surface.

In this session, you'll discover:

  • Why Yale’s new research says there’s no labor disruption yet—and what that doesn’t mean
  • How Walmart’s $1B upskilling initiative reflects a bigger workforce gap than most execs are ready to admit
  • The quiet revolution: Claude 4.5 coding autonomously for 30 hours straight
  • OpenAI’s wild move into consumer land with Sora 2 + an invite-only social video feed
  • Why Instant Checkout turns ChatGPT into an e-commerce front-end (and how it could threaten Amazon)
  • The rise of “work slop”—and the reputational risk it brings to your team
  • Agentic browsers are here: Comet, Opera Neon, and more are changing how we interact with the web
  • AI in Hollywood: The synthetic actress already replacing human stars
  • And a shocking stat: 58% of employees are using AI tools with no training—and leaking sensitive data

Yale Budget Lab: Early Evidence of AI’s Labor Market Impacts:

https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
GMT20251004-112425_Record (00:00):
Hello and welcome to a Weekend News

(00:02):
episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Metis, your host, and we have an explosive week of AI news. I probably could have recorded three or four episodes just this weekend on things that happened. There were a lot of rumors about new releases that are coming in

(00:22):
the next few weeks and in Q4 in general, and we got many of those this week. And I was seriously considering starting with those, but then I decided there's some other stuff to talk about other than new releases. So we got a lot of interesting surveys and research showing that AI may not be as disruptive as we thought, at least not at the speed that we

(00:42):
thought, and I thought it would be very important to share that information. So we're going to start with the impacts of AI on the workforce in general. There's a lot of interesting things to cover. Then we are going to dive into all the new releases, and we have a lot to cover there from OpenAI, Anthropic, Google, et cetera. And then we're gonna do a very quick rapid fire across multiple

(01:03):
new things that are gonna be included. Like every week, I cannot include all the news from this week in a single episode. And so if you want to know the rest, and there is a lot to know, make sure that you sign up for our newsletter; there's a link in the show notes to do that. But as I mentioned, lots to cover. So let's get started.

(01:25):
So, as I mentioned, there are several sources showing that AI is not as disruptive as we thought, at least not at the speed we thought. And one of these sources was a conference hosted by The Information, which is a publication that covers a lot of AI-related news and technology. And at that conference, Sarah Guo, the founder of Conviction, predicted that it will take about five years

(01:48):
for businesses to fully integrate AI. And she was citing the 20-year adoption curve it took cloud computing to really become a part of every business as we know it today. And so she believes that larger organizations cannot move at the speed that AI is moving, and that it is going to be a while, again, five years, which will give us more time to adapt and get ready for how it is actually going to work.

(02:10):
She is talking first and foremost not about the tech itself, but about the extensive worker training that is lacking right now, and that is a necessity for the success of AI implementation and successful usage of AI tools. I've been saying that every single week, if you've been listening to this podcast. Now, at the same conference, the chief AI officer of SAP also

(02:34):
noted that actually the fastest things that are happening are the boring, mundane tasks, and he's saying that 34,000 employees of SAP are already using AI for simple routine tasks like document processing and receipt auditing. So the small, daily, simple stuff is actually relatively easy to implement, where obviously the more complex, more sophisticated

(02:55):
and advanced stuff takes longer. But this is the stuff that has more impact on the economy, on the results, on performance, and on efficiency. The same article also mentioned a lot of experimentation and failure as the first stages of AI implementation, like implementing any new kind of technology or any new kind of process. Which is basically saying that not every deployment is going to

(03:16):
work, which means it's gonna take time to get it right and then adjust from there. That being said, it is clear to everyone, and you hear that from multiple angles, that new startups or small startups that can move fast can gain a huge benefit, because they can use these technologies significantly faster, with less overhead, because they're starting without the baggage of the people, without the training,

(03:39):
and/or an older tech stack that they don't have to take into account. And so it is very clear to everyone that there's gonna be this new era of AI-focused, AI-centered companies that are gonna be able to run significantly faster. But bigger enterprise businesses or traditional businesses will take much longer to adapt than maybe we thought in the beginning.

(04:00):
But the main focus, as I mentioned, is around employee training and the gap that exists right now. Many sources are showing that companies are providing employees licenses without providing them adequate training across multiple aspects of it. I'm gonna discuss more of that in a minute. Now, another interesting angle that I must admit I was very surprised about, and that I really dove into, is a

(04:22):
research paper by a company called Jobber, J-O-B-B-E-R, that is an all-in-one solution for home services pros. So they have an angle here, and I want to put that very clearly out there. But based on their annual survey on blue-collar attitudes, young workers overwhelmingly favor trades like carpentry,

(04:42):
plumbing, electrical work, et cetera, for their potential future jobs, to resist automation. So what they've done in this paper is a combination of two things. They gathered information from other third-party sources that were released recently, like different respectable surveys and research from different companies, and they also

(05:03):
performed their own research on Gen Zers, with over a thousand high-school- and college-age individuals between the ages of 18 and 20. And what they found is actually very interesting. So, as I mentioned, they first of all quoted several external sources. Some of them we've heard before, such as that college-grad unemployment spiked to 4.6%.

(05:23):
They also mentioned information from a different source that a bachelor's degree now costs over half a million dollars if you factor in tuition, lost earnings (so you're not working during that time), and loan interest. And again, they have an agenda here. They wanna show that going down the trades path is the right path to go. But still, this is a huge amount of money. And they showed a lot of benefits of why you should go

(05:44):
down the trades path. But their survey information is actually very interesting. So they asked Gen Zers: what are the biggest concerns about going to college? The number one thing was student loan debt; more than 57% of youngsters said the same thing. But then immediately after that, in numbers two and three, were uncertain job prospects afterwards and a lack of real-world

(06:08):
experience. When you come to compete with AI, that's obviously an issue, because AI already knows a lot of the stuff that you don't know, and it can be trained significantly faster. So 39%, almost four in ten people ages 18 to 20 who were surveyed, said that uncertain job prospects afterwards is one of their biggest fears about going to college. They also asked Gen Zers' parents: have recent trends or

(06:30):
developments caused you to reconsider the type of career path you encourage your kid to pursue? 33%, so the largest group, said it did not change things, but 25%, one in every four parents, said yes, due to the rising cost of university education. And then 15% said yes because of concerns about AI replacing

(06:52):
white-collar jobs. Now, 77% of Gen Zers themselves said that it is important to them that their job would be hard to automate and be replaced by AI. And from a career consideration, the number one factor for Gen Zers in career selection was job security, over passion and salary

(07:12):
considerations. So the number one factor was job security, with AI obviously being one of the bigger risks. Parents reflected very similar concerns: more than 51% of parents said that the risk of AI-driven job loss influences the advice that they're going to give their kids.
So what this means is that even the perception of AI's

(07:33):
impact on the workforce matters. And if you've been listening to this podcast, you know I think it's gonna have a very significant impact on the workforce. But as I mentioned, we're gonna share a few other examples of why this may not be as crazy as I, at least, thought so far. But it is very clear that it's already impacting the decisions of young individuals and their parents on what to study,

(07:55):
which by itself will impact the workforce. Now, staying on the topic of training the workforce, we mentioned this before, but now it's rolling out: Walmart has just deployed a plan to train all of its employees on AI usage in 2026. They're planning to invest $1 billion in skills through 2026, not just in AI, but in other aspects of re-skilling

(08:18):
their workforce in preparation for the future. They're hosting a Skills First Workforce Initiative on September 25th at their Bentonville, Arkansas headquarters. And the summit will include 300-plus experts from different firms, to share different types of skills that will be delivered to all of Walmart's employees in the next year.

(08:40):
The AI training is going to be done in partnership with OpenAI, as we shared in previous weeks, which shows you that the gap in skill capabilities, and specifically AI skills, is a big issue even in the largest organizations in the world. Some organizations, and Walmart is just one example, are starting to take this very seriously. But they're starting to take this seriously

(09:01):
only now, and we're talking about Q4 of 2025. More about that shortly. Now, the research that really caught me off guard was a study done by the Yale Budget Lab, which found that there has been no significant labor disruption from AI 33 months after the ChatGPT debut in late 2022.

(09:24):
So again, on this podcast, all I'm talking about is job disruption, job disruption, job disruption. And so I really invested in going deeper into this research and understanding exactly what they've done, to see that there's no agenda behind it and so on. And they actually took a very interesting approach, and the research seems completely legit and the findings are very hard to

(09:45):
deny. So what they've done is they've actually tested the occupational mix across different sectors and different aspects of the economy. And the four key takeaways are, and I'm quoting: one, while the occupational mix is changing more quickly than it has in the past, it is not a large difference from the period that predated the widespread introduction of AI in the workforce.

(10:07):
Number two, current measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment. Number three, better data is needed to fully understand the impact of AI on the labor market, which is basically saying, okay, we're not sure yet, but this is what we're seeing right now. And number four, we plan on updating this analysis regularly

(10:29):
moving forward. Now, the way they've done the analysis, as I mentioned, is really interesting. They've used a methodology that they're calling a dissimilarity index, which measures the change in occupational mix over time using monthly data from the CPS, which is the Current Population Survey done by the US Census Bureau. So what this is basically checking is, over time, month

(10:51):
over month, what kind of occupations people have and how that changes over time. And they compared several different times in history and several different introductions of new technologies, to see if the current trends are different from historical trends, which would basically mean that AI is having a bigger impact. And the reality is, it's not.
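To make that concrete, here is a minimal sketch of how a dissimilarity index of this kind can be computed. This assumes the standard half-the-sum-of-absolute-share-changes formula; the occupation names and numbers below are made up for illustration and are not taken from the Yale study or from CPS data.

```python
# A minimal sketch of a dissimilarity index over occupational shares.
# Assumption: the standard formula D = 0.5 * sum(|share_now - share_then|);
# the occupation names and shares below are illustrative, not real CPS data.

def dissimilarity_index(shares_then: dict, shares_now: dict) -> float:
    """Half the total absolute change in occupational employment shares.

    Returns 0.0 when the occupational mix is unchanged between the two
    periods, and 1.0 when it has turned over completely.
    """
    occupations = set(shares_then) | set(shares_now)
    return 0.5 * sum(
        abs(shares_now.get(occ, 0.0) - shares_then.get(occ, 0.0))
        for occ in occupations
    )

# Illustrative shares (fractions of total employment) for two periods.
mix_2022 = {"software": 0.10, "support": 0.30, "sales": 0.60}
mix_2025 = {"software": 0.12, "support": 0.25, "sales": 0.63}

print(round(dissimilarity_index(mix_2022, mix_2025), 4))  # 0.05
```

A value near 0.0 means the occupational mix barely moved between the two periods, and 1.0 means it turned over completely, so comparing this number across different eras and technology rollouts is what lets the study ask whether the post-ChatGPT period looks unusual.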
I will share the link to the actual survey and the results,

(11:12):
but if you look at the graphs, they're absolutely astonishing. Now, for each and every one of the graphs, they also have a baseline, and that baseline takes a period in time in which there was no significant introduction of new technology, in order to compare against it. And the graphs almost align across all the different times in history, different sectors and industries, and so on, including

(11:34):
the ones that are the most exposed to AI changes. So there are still big differences between the different sectors. The information sector sees the biggest difference, followed by the financial sector, followed by professional and business services. But overall, if you compare the trends over time to previous years, to previous changes, and even to the historical average,

(11:55):
it's not that much of a difference, which basically tells us that AI is changing the way we work and maybe changing the types of professions, but overall, from an employment perspective, it is not making a significant change. Now, they even tested the younger generation, and were trying to show that even in this market there is no big significant change. So they compared the timeframe of 2015 to 2025 with

(12:20):
2022 to 2025, to show that there's only a slight increase in the occupational mix change for this specific group, which is the younger generation that is coming into the workforce. Now, why did they choose to combine ages 20 to 24 together with ages 25 to 34? Is it to hide something in the data of just one of them? I don't know.

(12:40):
Again, I don't have access to all the data behind the scenes, even though they have shared the data. So you can go and download the actual raw data that they used, and even the calculations that they've done. So this looks like a very legitimate and serious piece of research that has addressed this from an angle that I haven't seen before, which is looking at the occupational mix: basically, what the people who are employed are doing, and what

(13:00):
the unemployment rate is across these different levels. And they're showing that there's very little difference right now from previous trends that we've seen in the past. Now, how does that align with previous research that we've seen? I think they're looking at slightly different things. I think this research is looking more into what kind of occupations we have and the potential creation of new jobs that might replace old ones, versus some of the other studies

(13:22):
that we've seen, like the younger generation's unemployment. That is definitely looking like a fact. And as they mentioned, now that I found this research, I'm gonna look for it every time they issue an update, and I'll keep you updated on their continued findings. Another interesting piece of research that was released this week, with a very significant impact on the workforce, was conducted by the National Cybersecurity Alliance, also known as the NCA, and

(13:44):
they found that 43% of workers have shared sensitive data with AI tools, including company financials and client information. 65% of respondents to the survey use AI daily, which is a 21% increase year over year, and they surveyed over 6,500 people across seven different countries.
Now, here's the kicker.

(14:05):
58% of workers report no employer-provided training on AI-related data security and privacy risks. That is nearly six out of every ten employees surveyed saying they did not get proper AI training, and yet they're using AI every single day. Now, previous surveys have shown that most people bring their own AI to work, meaning they bring to work the same AI tools

(14:27):
they use at home. So about two-thirds of people bring their own AI to work, and about 90% of the people who bring their own AI to work use the free AI tools. So the free ChatGPT, the free Claude, and so on, which by definition train on your data and will use your data for the future training of the AI labs' models. And this obviously generates a very big risk for companies.

(14:48):
96% of IT professionals in a SailPoint survey flagged AI agents and AI tools as a security risk, and yet 84% of them said that their companies are already using these tools. So you see that in addition to the lack of training that would provide higher efficiencies and actual benefits from using AI, it also generates a huge problem from a data security

(15:10):
perspective, because of lack of training. Now, if you think that security is the only problem, and if you think that AI is driving great benefits and it's worth the risk, there's another piece of research that came out this week from Stanford that reveals really scary results on the negative efficiency impact that AI has on the workforce. So the Stanford Social Media Lab and BetterUp Labs did a survey of

(15:32):
full-time US workers and found that 40% of them receive AI-generated content that is, and I'm quoting, masqueraded as good work, but lacks the substance to meaningfully advance a given task, or what's known in common language today as slop. And there are a few other interesting points here. This phenomenon is thriving horizontally.

(15:53):
So people peer-to-peer, at the same level of the company, report that about 40% of what they're getting is work slop, while low-quality AI content is reported as going to direct managers at only 18%. What that tells us is that people can actually identify low-quality AI outputs, and they're still sharing them amongst their

(16:15):
peers, and they're only sending a smaller portion of it to their managers, hopefully the better portion. This is obviously not a good sign, because what it tells us is that instead of going to the person who sent the information and saying, hey, what the hell are you sending me this scrap for? This is just wasting my time. Go and actually do your job. They are continuing to forward this information despite the fact that they know it is bad, because they know it is bad;

(16:35):
otherwise, they would also report it to their managers. Now, not surprisingly, the most AI-exposed industries, like technology and professional services, report the highest work slop volumes compared to other industries. Now, if you're one of those people who generates low-quality AI outputs and shares them as your work output: half of the surveyed workers, so 50%, are viewing work slop submitters,

(16:57):
the people sending them this kind of information, as less creative, less capable, and less reliable; 42% deemed them less trustworthy, and 37% see them as less intelligent. What that tells us is that it is relatively easy to identify AI-generated content at this point, or at least low-quality AI-generated content, and what it tells people about the person is that

(17:18):
they're going to be seen as less trustworthy, less reliable, less capable, and less intelligent, which is obviously not something you want for yourself or for other employees in your company. There was other similar research that I shared with you in the past few months, including the Chicago and Copenhagen study that yielded only minor gains, a few hours a month from AI, broken down into smaller bits and pieces.

(17:38):
That is hard to actually benefit from. But it is very, very obvious that despite the fact that there are no clear productivity gains in many companies, again going back to training and proper implementation, people are still doing it, and they're doing more and more of it despite the fact that it makes them look bad, just because it's easy and people are lazy. And if there's a lazy path, some people, I'm not saying everybody,

(18:01):
will take that path, which doesn't project very well on the future. Now, the bigger problem, though, is how this whole AI thing is impacting the next generation of employees. So there was an interesting article in Time Magazine this week in which a Chicago high school graduate exposed what she describes as her generation's fear of thinking without

(18:23):
AI. In her essay in Time Magazine, she wrote that her peers, from straight-A students to struggling learners, lean heavily on ChatGPT for essays, math, and quizzes, completely dropping critical thinking as a key aspect of their learning and intellectual process. The author is claiming that many students, herself included, are

(18:43):
using AI to create and churn out high-scoring essays and other homework, but they're struggling in the actual class discussions, which is not surprising, because they didn't do the actual work. So what do all of these surveys tell us? They tell us that change is coming and it's coming fast. It may not be coming as fast as we thought, and it may be having

(19:03):
just initial impacts on the workforce and so on. But it is amplifying, and it's amplifying very fast. And the really scary thing is the gap in training: the gap in training in schools, the gap in training in higher education, and the gap in training in the actual workforce. It is very, very clear that the factor of explaining to people

(19:24):
the pros and cons of AI and how to use it effectively and efficiently will be the biggest differentiator between successful people and less successful people, and between successful companies and less successful companies. And I've been saying this for the last two and a half years, or almost three years now, of working with companies on how to implement AI successfully. Now, going back to the specific workforce aspect of this, it is

(19:46):
very, very clear that many companies are leaving this to the employees to figure out on their own, which is a huge mistake, both in terms of the efficiency gains that are not being gained, because giving employees tools without giving them the proper training is just an expense, and in terms of the huge risk from a data security perspective. Now I must say that something is shifting right now. So as I mentioned, I've been doing this,

(20:07):
I've been training companies in workshops and courses and consulting, for the past two and a half years, day in, day out. Something has shifted dramatically between Q2 and Q3 of 2025. The amount of demand I'm getting for my services skyrocketed sometime in the beginning of Q3 of this year and is continuing into Q4.

(20:27):
The amount of requests I'm getting for evaluations of current capabilities and for defining the exact gaps and a roadmap for employee training is just through the roof. I'm literally traveling every single week for the next six weeks, doing workshops for different companies all around the country, and actually in Europe as well. Which is a good sign, and which connects me back to the point that I mentioned earlier

(20:48):
about Walmart finally doing company-wide AI training for all its employees, or for the relevant employees. Every company needs to do this right now, but you need different kinds of training. I've done these kinds of presentations, trainings, and workshops for board members of NASDAQ-traded companies; they need to understand this to make the right decisions. I've done basic company-wide workshops giving a general introduction to generative AI.

(21:10):
I've done company-specific and department-specific training, both online and in person, and each and every one of these has pros and cons, and they're all available, not just from me. The important thing is that you need to analyze, as a manager in your company, what the gaps in skills and in knowledge are when it comes to AI, and you have to find a solution for that. And you need to do this very, very quickly, because as we will

(21:32):
learn in the next segment of this podcast, it is going to be a lot more significant going into 2026. I'll mention two more things about the kind of training we provide. In addition to company-specific workshops and training, we also have training open to the public. So if you're an employee in a company that is not providing you this kind of training right now, and you do not wanna fall behind and you want to be able to secure your employment and

(21:55):
your future, we have two kinds of courses. We have a self-paced course that we fully updated in September of 2025, so the course you're going to take is aligned with the course that I was teaching as an instructor-led course that ended just a couple of weeks ago. And there's a link in the show notes so that you can go and take that course at your own pace. We're also going to open a new cohort of our highly successful
We're also going to open a newcohort of our highly successful

(22:18):
AI business transformationcourse, the final dates have not
been decided yet, but it'sprobably going to start at the
end of November and theregistration will open in the
next couple of days.
So stay tuned.
I will keep you updated in thenext episode.
And now two new releases fromthis week.
And we're gonna start with theone that caught the most amount
of attention, which is open AIjust released SORA two.

(22:39):
So Sora, for those of you who don't remember, caught us all by surprise as an amazing video model that promised a lot and took a very long time to deliver. And then when it delivered, it was, ah, not that great and not as amazing as they initially promised. And very quickly other models caught up, and shortly after, when Veo 3 was released by Google, it left Sora very far behind. But Sora 2 is a whole new different level. It is at the

(23:02):
level of Veo 3 in video generation. It has the capability to generate background sound and effects and voice and conversation, just like Veo 3. It has incredible physics, and some of the videos that they shared are absolutely mind-blowing when it comes to showing real-world physics, with cars drifting and ice skaters and dragons soaring through the air and so on and so forth.

(23:23):
Not that I'm sure about the physics of dragons soaring, but I assume it's similar to birds and airplanes. But everything looks absolutely incredible, including a digital version of Sam Altman sharing all the benefits of this new model. Now, in addition, they created a new app. It is a social media app that, again, was rumored for a very long time. The app allows you to create, share, and remix AI-generated videos

(23:48):
that are generated by Sora. It was downloaded 164,000 times in the first 48 hours of its existence. It's currently only an iOS app, and by the way, it's invitation-only right now. So even if you can download the app, you may not be able to use it. In two days it got to number three in the download charts in the US and Canada, and on day three it became the number one

(24:09):
downloaded app on iOS. Now, the app enables you to generate ten-second clips from text prompts and/or photos, and you can add dialogue and sound effects, as I mentioned before, or as Sam Altman called it, the GPT-3.5 moment of video. Now, a few interesting things about the app and the Sora tool in general. First of all, you can insert verified likenesses into videos.

(24:31):
Basically, you can claim that you are you and prevent other people from remixing or deepfaking you, because now it will know that it's you. I'm not sure how that will actually hold up over time, but it's definitely a move in the right direction. Users can also share these videos publicly, or in specific group chats, or one-to-one, and remix other creators while

(24:53):
tweaking the prompts or previous videos to create slightly different videos from the original ones. Now, the app itself generates a TikTok-style feed that is personalized by your activity, your location, and your ChatGPT chat history, though there's an opt-out option for that. So like everything else in social media, you are the

(25:14):
product, and the way you pay is with your privacy. Only here, in addition to your feed history, it's also using your ChatGPT data, which is not something you have in other social media platforms, and you can decide whether you like that or not. And again, at least there's an opt-out option that you can choose. Now currently, as I mentioned, it is by invitation only, and OpenAI is offering it to heavy Sora version one users and Pro

(25:38):
subscribers. So if you're a Pro subscriber, you get immediate access, but they are, like everything else at OpenAI, gonna release this to Plus and Teams and free users over time. Now, in addition, the Android app is obviously coming soon. I'm an Android user, so over there I signed up for a waitlist. So basically, I'm waiting for the app to be available. It's not available yet; I will let you know once it is. And an API for Sora 2 is coming as well.

(26:01):
Now, one thing that they've done that is interesting, and that is gonna get them in trouble, and it's already starting, is that from a copyright perspective, it is an opt-out approach, meaning right now, as an example, you can create fictional universes that look very much like Star Wars or specific video games, unless the companies who created the original ones choose to opt out from being able to be used in Sora.

(26:22):
On the flip side, it blocks unverified real people. So unless you have approval from somebody to use their face, it is supposed to block you from being able to generate them, which, again, I think is a great idea. Now, also from a security and safety perspective, all videos get visible watermarks and invisible digital credentials so they can be identified as AI. It also has teen safeguards, which means there's no adult

(26:45):
content that you can generate, and it's very limited in the amount of nudity and stuff like that it will allow you to generate. And it is combined with their newly released ChatGPT parental controls, to monitor what you're doing on that app as well, and not just on ChatGPT. From a look-and-feel perspective, or from a concept perspective, it is very similar to what Meta released recently, not as a new app, but as the Vibes mode inside the Meta

(27:08):
universe, which is very similar; it just allows you to create a feed, get a feed, and remix a feed of AI-generated videos and images. What they're claiming is that it's supposed to foster collaboration and creativity, because of the ability to remix versus just passively scrolling through content. Now, when I saw this, I got really excited and really scared at the same time.

(27:28):
And after reading Sam Altman's post on this, it amplified both of these feelings. So let me read to you a short segment out of the blog post that Sam shared. I'm specifically quoting Sam's fear section about what this may lead to. So here we go: Social media has had some good effects on the world, but it also had some bad ones. We are aware of how addictive a service like this could become,

(27:51):
and we can imagine many ways it could be used for bullying. It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed. The team has put great care and thought into trying to figure out how to make a delightful product that doesn't fall into that trap and has come up with a number of promising ideas.

(28:15):
We will experiment in the early days of the product with different approaches." So that is from Sam, and now back to my personal thoughts on this. As always, you need to follow the money. If OpenAI does not monetize its social feed, it has a chance of putting the right limits in place, and hopefully then also gaining approval from society for doing the right

(28:36):
thing by trying to minimize that impact. However, if their goal would be (and it might be) to maximize time in the feed, just like any other social media platform, then this could go terribly wrong. So just think about the current social media concept: it is an algorithm that is looking for whatever it can find in

(28:56):
order to make you stay on the platform for as long as possible. Well, right now, this algorithm can generate the actual content. It is not the way it's implemented right now, but it is definitely possible. The algorithm can see what you're responding to, can understand the cues, and at a later stage may even look at your selfie camera in real time to see your actual real-time responses and reactions, and create content on the fly to

(29:20):
keep you as engaged and as hooked as possible on that content. This feels very much like a Black Mirror episode, and sadly, I feel this future is very close. Even if it doesn't come from OpenAI, it's going to come from Meta or other social media platforms. Now, in addition to the risk of being more addictive than

(29:41):
the current social media feed and sucking people deeper and deeper into a virtual-reality universe, it also will reduce social connections, because your feed will be different from anybody else's feed, since it'll be created in real time for you. That means sharing it will not be relevant, because it's never going to be as exciting as the feed that was created specifically for me. So there are a lot of ways this could go terribly wrong, and all the

(30:03):
components for this really bad meal are already available, right? The social feed is there, the ability to generate content on the fly is there, the ability to see our responses and their impact is already there. And so this might be a very natural next step for Meta and OpenAI, and anybody else who wants to go down that path, which I am really scared by, especially having three teenage kids.

(30:26):
But the other big thing that OpenAI launched this week will probably have an even bigger impact on the world: they have released their e-commerce integration straight into ChatGPT, called Instant Checkout, which is basically a way for you to buy goods straight in the ChatGPT interface. So this feature, which was developed together with Stripe,

(30:47):
also comes with a new open-source protocol called the Agentic Commerce Protocol, which will allow other companies to use the same exact concept. This initial launch from ChatGPT is in partnership with Etsy as well, so you can now buy Etsy items straight on ChatGPT, and shortly after, they're going to include a million Shopify merchants on the

(31:09):
platform as well. It is already rolling out to Plus, Pro, and Free users, which will enable all these users to check out straight in the ChatGPT app. As a reminder, they have over 700 million people using ChatGPT every single week. And as a reminder, two weeks ago we learned from OpenAI themselves that more than 50% are using ChatGPT for personal use cases. So this is a perfect alignment, because one of the things that

(31:31):
we do in our personal day-to-day lives is buy stuff. Now, the new open-source protocol is also very interesting, because they're taking a play out of the MCP playbook from Anthropic. Anthropic released MCP as an open-source protocol just about a year ago, and it took the world by storm, and I think OpenAI is expecting its Agentic Commerce Protocol to do the same thing. I don't see any reason why it wouldn't.

(31:53):
Now, at least as of right now, product recommendations in ChatGPT remain organic and unsponsored, meaning if you ask a question, you're going to get recommendations, but nobody's paying to show you these as ads. Again, this might change in the future. If a product does appear, you get an Instant Checkout buy option, and if you click on that, it takes you through the checkout steps we are used to. The merchants retain full control as

(32:15):
the merchant of record for fulfillment. So if you're buying from an Etsy supplier, the Etsy supplier will be the merchant of record, not ChatGPT or OpenAI or a third party, basically very similar to any other checkout that you know from other platforms. And these merchants will be in charge of returns, payments, support, et cetera, again, just like on other third-party aggregators or platforms

(32:36):
like Etsy and Shopify. And this new protocol ensures that you have complete control, with explicit confirmation, so the system won't just buy stuff on your behalf. And from a data security perspective and so on, it is, as I mentioned, using Stripe on the backend, so everything we know from Stripe, which is now used everywhere across the web, is obviously still there.

(32:56):
So from a data security perspective, it is safe as well. Etsy shares jumped 16% following this announcement, which is not surprising, because it's very obvious that this can drive significant change in the way people shop today. Now, this is not a surprise. One of the things Fidji Simo, the newly appointed CEO of Applications at OpenAI, shared is that part of the revenue that they will

(33:18):
get between now and 2030 is going to be through commissions from the checkout process. So let's do some quick math on what this can look like for ChatGPT. Right now they have 700 million users, and they'll probably cross a billion next year. Let's say a person makes a purchase on the platform once a quarter, and I think over time that will increase, but let's say it's once a quarter. That's a billion

(33:39):
purchases every quarter. Let's say OpenAI makes half a dollar on average as a commission on each one of these purchases: that's half a billion dollars every quarter, or $2 billion a year. That is very significant money.
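The back-of-the-envelope math above can be written out explicitly. Note that the user count, the purchase frequency, and the $0.50 average commission are all assumptions from this episode, not OpenAI figures:

```python
# Back-of-the-envelope estimate using the episode's assumptions
# (illustrative numbers only, not OpenAI's actual figures).
users = 1_000_000_000                  # assumed user base next year
purchases_per_user_per_quarter = 1     # assumed purchase frequency
avg_commission_usd = 0.50              # assumed average commission per purchase

quarterly = users * purchases_per_user_per_quarter * avg_commission_usd
annual = quarterly * 4

print(f"${quarterly / 1e9:.1f}B per quarter")  # $0.5B per quarter
print(f"${annual / 1e9:.1f}B per year")        # $2.0B per year
```

Change any of the three assumptions and the result scales linearly, which is why the episode's "tens of billions in the 2030s" scenario only needs modest growth in purchase frequency or commission.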
And again, I think the trend of shopping together with AI agents is just going to increase, which means this can be tens of billions of dollars in the 2030s, not just for OpenAI, but

(34:01):
for any agentic approach. But because ChatGPT right now has a very big lead when it comes to personal usage of AI, I think they will be the ones that gain the most, and these numbers will start competing with Google Shopping and with Amazon very, very quickly. But the biggest release of this week wasn't from OpenAI. The biggest release of this week actually comes from Anthropic,

(34:24):
which just released Claude Sonnet 4.5, which is an incredible superpower when it comes to writing code, creating autonomous agents, and doing computer and browser use at levels we've never seen before. As part of the release, they shared that Claude Sonnet 4.5 autonomously coded a Slack/Teams-style chat app

(34:48):
in 30 straight hours of running autonomously, generating over 11,000 lines of code. This quadruples their prior Opus 4 model, which had a seven-hour coding limit. Now, it obviously broke all the existing benchmarks on coding: it scores 77.2% on SWE-bench Verified, 82% with

(35:09):
parallel compute, and 61.4% on OSWorld computer tasks, up from 42% for Sonnet 4, so roughly a 50% relative increase, from 42% to 61%, from one model to the next. Anthropic calls it, probably for good reason, the best model in the world for real-world agents, coding, and computer use, with three times better

(35:30):
browser navigation than last October's technology. One of the beta testers was Canva, and they praised it as an incredible tool for, and I'm quoting, "complex long-context tasks like engineering and in-product features." But Anthropic delivered a whole suite of tools, not just the model itself, and we're going to dive into some of them. A very short

(35:50):
list: it includes virtual machines, memory and context management, multi-agent support, Claude Code checkpoints for rollbacks, and more. Now, Scott White, the product lead at Claude AI, said this is a continued evolution of Claude, going from assistant to more of a collaborator to a fully autonomous agent that is capable

(36:10):
of working over extended time horizons. But it is built not just for programming. Dianne Penn, the head of product management at Anthropic, said it operates at a chief-of-staff level for tasks like calendar syncing, dashboard insights, and status updates, and she gave an example: it's been actually really helpful generating spreadsheets

(36:33):
of LinkedIn profiles so I can email them from a hiring perspective. Now, they're also saying that Sonnet 4.5 is their most aligned model yet, which basically means reduced risk and levels of deception, prompt injection, and the other risks that come with using these models. So in addition to being the most capable, they're saying it is by far the safest. They kept the same pricing as the previous

(36:54):
model: you get $3 per million input tokens and $15 per million output tokens.
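At those rates, per-request cost is easy to estimate. Here is a minimal sketch; the token counts in the example call are made up purely for illustration:

```python
# Cost estimate at the stated Sonnet 4.5 pricing:
# $3 per million input tokens, $15 per million output tokens.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at the above rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt that produces a 4,000-token answer.
print(round(request_cost(20_000, 4_000), 2))  # 0.12
```

Small per-call numbers, but a 30-hour autonomous coding run burns through many such calls, which is why the context-compaction features discussed next matter for real budgets.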
And it's available right now on Claude.ai on the web, iOS, and Android, as well as through the API, Amazon Bedrock, and Google Vertex AI. Now, they included a lot of really powerful features, mostly for developers. The new API features include a beta memory tool for basically

(37:14):
unlimited context via file-based storage and context editing. It also knows how to shorten the context from one step to the next on its own, in order to save tokens and optimize how it runs through longer sessions. And it is also very good, as I mentioned, at creating presentations, spreadsheets, animations, and dashboards, which goes way beyond just writing and executing code.

(37:34):
They also released a new browser navigation extension for Chrome that is currently available only to Max users, the people paying $200 a month, and the examples I've seen online are absolutely mind-blowing in terms of what people are doing with it right now. Several good examples, both on X and on YouTube, show it performing significantly better than other

(37:56):
agentic browsers. They also released a new concept called Imagine with Claude, in research preview, which basically creates software on the fly. You create the first screen by describing what you want to do, it creates it, and then when you click on something, it will create that something without you having to tell it what to do. It will essentially assume what the next step needs to be in the

(38:17):
navigation or the development, and the software basically writes itself based on the user's interaction with it. This is a whole new way of looking at software and applications. It will be very, very interesting to see whether it actually does anything productive, or whether it is going to completely change and transform the world of applications and the way they are created. If you think about it, if the application will create itself

(38:37):
on the fly to complete what you needed to complete, why would you ever install an application that somebody else has created? Now, they also released an integration with Slack, built together with Salesforce. You can use it in three different ways: you can add Claude in private messages to get it to work as an assistant; you can use an AI assistant panel to do channel-based

(38:58):
queries and learn about all the information in the channel; and it can do thread participation, where Claude drafts responses for your review. So when somebody mentions you in a channel, you'll be able to see a potential response ready to go, and all you have to do is edit it and approve it, and it will go out right there and then, so you don't have to write everything on your own. It has access to the private and public channels that you have

(39:20):
access to, so it's completely aligned with your permissions. But within what you can see, it can see everything, and it can get information, research it, and pull context from everything that is in there, including attached documents and linked pages and so on. Rob Seaman, the chief product officer of Slack, said, "Every company is on its way to becoming an agentic enterprise.

(39:43):
By bridging the gap between Claude and Slack users, we're creating a seamless AI experience." And it is available as of right now on all paid Slack plans, and as an MCP connector in Claude if you have a Teams or an Enterprise plan, so you can get it on both the Claude side and the Slack side, both with paid plans.

(40:04):
Now, it is very clear that with this release, Anthropic is going all in on the enterprise world, and the biggest focus initially is on the coding world. Or, as their tech lead Sholto Douglas said, the most economically important and immediately trackable area for AI is coding. So if you remember, we talked about the big focus of GPT-5

(40:27):
on taking over from Anthropic as the leader in the AI coding world, and they were able to do that, both on the benchmarks and in day-to-day use: more and more people reported that GPT-5 was actually outperforming Claude 4.1. Well, now Claude 4.5 is completely breaking away, both in terms of the capabilities of the model itself and the amount of time it can work independently, as well as the

(40:49):
tooling that comes with it, especially on the developer backend side, with multiple aspects and capabilities for very granular control on the API side, which is how Anthropic makes most of its money, because most people are using Anthropic for coding not through the Claude app itself, but actually through the API, with IDEs such as Cursor, et cetera.

(41:11):
So, as an example, they completely revamped their SDK. They're rebranding it from the Claude Code SDK to the Claude Agent SDK, which reflects the new focus of where this is all going. And the SDK comes with a huge variety of controls and capabilities already built in, including, as I mentioned, the ability to connect to, work with, and create

(41:33):
artifacts beyond code, such as CSV files, spreadsheets, presentations, dashboards, visualizations, and so on and so forth. And the examples that they gave show a versatile agent application universe, with finance agents, personal assistants, customer support, and deep research capabilities, as well as an agentic feedback loop, where

(41:54):
Claude operates on its own, gathering context, taking action, and then verifying and testing its own work, learning from the process and correcting itself along the way, which, again, allows it to work for 30 hours nonstop writing code, and it will just be a matter of time before it does the same in other areas as well.
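That gather-act-verify-correct loop can be illustrated with a toy, self-contained sketch. The "task" here is just guessing a hidden number, standing in for real work like running a test suite against generated code; none of this is Anthropic's actual SDK API:

```python
# Toy illustration of an agentic feedback loop:
# act -> verify own work -> fold the feedback back in -> repeat.

def verify(candidate, target):
    """Stand-in for a test suite: returns (passed, feedback)."""
    if candidate == target:
        return True, "ok"
    return False, "too low" if candidate < target else "too high"

def agent_loop(target, low=0, high=100, max_iterations=20):
    """Repeatedly act, check the result, and correct course from feedback."""
    for _ in range(max_iterations):
        candidate = (low + high) // 2                 # take an action
        passed, feedback = verify(candidate, target)  # verify the work
        if passed:
            return candidate                          # work passes its checks
        if feedback == "too low":                     # correct from feedback
            low = candidate + 1
        else:
            high = candidate - 1
    return None  # iteration budget exhausted

print(agent_loop(37))  # 37
```

The real version replaces `verify` with compilers, test runners, and LLM judges, but the control flow, keep iterating until your own checks pass, is the same idea that lets a model run for 30 hours unattended.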
It is also very good at context gathering across multiple

(42:14):
channels, combining agentic search with semantic search, with subagents that can work in parallel to find context in different information sources in different places and combine it all together, and with a compaction tool that auto-summarizes messages to reduce the number of tokens it uses in the process. It also comes with a huge range of actions it can take.

(42:34):
Obviously, the first one is using tools, which we already know about; that has been around before. But it is also adding bash and scripts, which allow flexible tasks like extracting information from documents and PDF attachments; code generation, which we talked about; MCP support for everything inside that environment, to connect to anything in the outside world; and several different verification models, including

(42:56):
rule-based visual feedback, such as being able to see the screen and the output it is generating, and an LLM to evaluate what's actually being generated and whether it makes sense in the bigger context. Overall, a very strong, very powerful play by Anthropic to position itself as the dominant power when it comes to agents. And again, combine that with a very capable browser agent that

(43:18):
can run in the browser and connect and report back to the API. So you can now run a development session and ask Claude to go and do the research on its own before it continues to write the code. It will go to the browser, find the right website, see how a specific thing operates, find the API documentation, and come back and then implement it all on its own, all in an

(43:41):
autonomous, agentic way. It is absolutely mind-blowing and scary. And again, browser usage can do a lot more, and I've seen really amazing examples. I think this will become the norm in 2026: we will slowly stop using browsers as we do today, and we'll move more and more into these agentic browsers. I must admit, I started using Comet about two months ago, and

(44:03):
there are more and more things that I do only in Comet, like all my technical work and all my automation development in n8n. I do it only in Comet right now because it just helps me and amplifies what I know how to do; it can research on its own and then do things for me. And this is just the very early stages of that, which is a great segue to our next segment on AI browsers, because we got a few new of

(44:23):
those this week as well.

Speaker (44:26):
So let's talk about agentic browsers. The first one with news this week is actually the one that is maybe the most commonly used right now: Perplexity's Comet browser. Comet started out limited to Pro-level subscribers only, and as of this week, Perplexity's Comet AI-powered browser is completely open and free for the public to use.

(44:47):
The goal is obviously to stay ahead from an adoption perspective and a market-share perspective, now that Google has already released Gemini in Chrome in September, Anthropic debuted their capabilities in August (and, as I mentioned, just introduced a whole new version), and OpenAI launched Operator earlier this year.
But there is still a Plus tier for Perplexity Comet that

(45:10):
provides information from partners such as CNN, Conde Nast, the Washington Post, the Los Angeles Times, Fortune, Le Monde, and Le Figaro.
So all of these publishers are built in, with live data that streams only into the Plus version, which is a paid version, but you can use the browser for everything else, like I'm using it, for free right now. And as part of the announcement, Perplexity shared that future

(45:32):
enhancements coming soon will include a mobile version of the Comet browser and a background assistant that can run tasks asynchronously, multitasking by running several different things at the same time, or browsing one thing while doing research on another, and so on and so forth. In addition, by the way, their phone assistant is probably the

(45:53):
best assistant right now on iPhone. It is definitely much better than Siri. Sadly, it doesn't work as well on Android, and I'm an Android user, but if you are on iPhone and you're looking for a really solid phone assistant, the Perplexity assistant is actually a very good choice. And as I mentioned earlier, I do a lot of my technical work and definitely all my automation building in Comet,

(46:15):
because it helps me a lot in the process. So now that it's free, it's worth checking out. Another contender that has been in the browser universe for a while, but had not participated in the agentic browser universe yet, made a big announcement this week: Opera. Opera just announced Neon, which they have teased since May of this year, and it's now available in several different

(46:36):
modes. They released it initially as a closed beta on September 30th, through their Neon Founders program, which is $20 per month for the standard option, or a discounted $60 for six months, which makes it $10 a month if you're committing for six months. A broader rollout is expected sometime in the near future.

(46:58):
And as I mentioned, it has several different functions. The main, most interesting one is Neon Do, which, as the name suggests, executes tasks autonomously, pulling from multiple browsing windows and tabs, as well as, which is very interesting, from your browsing history. So when it's executing tasks, it can go not just to the existing open tabs but also to your browsing history to find

(47:20):
relevant information, like summarizing things you previously looked for; even if you're looking at work-related websites, it can pull that information as well. The other interesting thing is that it runs locally: the AI is running on board the computer and not sending information to the cloud, which is obviously a benefit from a privacy perspective. And it shows you what the agent is doing at any given

(47:42):
point, basically showing you its thought process and chain of thought, so you can pause it and intervene whenever you want. The other very interesting feature is what's called Cards. Think of something like IFTTT, for those of you who have tried it, the if-this-then-that setup that has been around for a while now: you can set up different use cases, basically

(48:03):
defining what the scenario needs to be, and then you can rerun these scenarios whenever you want. Think about any repetitive workflow that happens in a browser, which could be work-related, personal, or a combination of the two: you can save it as a Card and then run the Card again when you want to repeat the same task, which I think is a

(48:23):
very smart idea. In addition, they have an aggregation of that for more sophisticated, more complex things: instead of being tabs, they're like mini browsers of their own, with multiple tabs, chats, docs, and other contextual information to allow the overall process to run.
There's a similar approach to this in Arc, which is another AI

(48:46):
based browser; they have what they call Spaces, though this is a little more sophisticated and allows you to group different tabs and things like that. So, very interesting functionality from Neon, and I cannot wait for it to be available to the public to test out. In addition, they have a version called Neon Make, which generates editable mini apps, websites, reports, games,

(49:07):
and so on: basically, anything that requires generating code and running it in a browser, similar to what we know from other tools such as Claude's Artifacts or Canvas in Gemini and ChatGPT, which can generate code and run it within the browser itself. Another interesting variation on this concept comes from a company based in New York and Berlin called Deta, which launched

(49:29):
their Surf product for beta release on October 1st. It is a free, local-first application that fuses AI-powered browsing with something like a NotebookLM-style notebook.
So the idea is a research-focused tool that can still take actions in your browser and search autonomously across

(49:50):
multiple aspects of your information. You give it the topic you want to research and the areas of information you want to cover, and where it should find that information, or you let it run on its own, and then it will generate something like a Notion-style summary that is editable, which you can then edit and share in a library with other people.

(50:10):
So, a slightly different approach. They also have the ability to create mini apps, interactive graphs, charts, displays, applets, and small pieces of code, very similar to all the other tools. Slightly different approach, slightly different focus, same kind of concept.
Another contender, and the last one we're going to talk about, is slightly different, because it's not aimed at the general public,

(50:32):
but it is very, very powerful: Cursor, which is now maybe the most hyped and most used IDE when it comes to AI-based solutions, has introduced the Cursor browser agent. What they have done is integrate their existing agent capability with Anthropic's web browsing and control capability to execute sophisticated tasks such as

(50:54):
scraping data, analyzing that data, and then cataloging it in whatever way you want, as a starting point for, or in support of, the code the IDE then actually generates, and it allows the agent to get additional context autonomously from the browser. But in addition to just researching information, it can actually run and operate websites, because it is using the

(51:15):
agentic capability inside the new web agent from Anthropic. It can actually click on links, fill out forms, extract data from web pages, or do anything it needs to do to help the coding side get access to the information or the processes it needs to do its job. Now, it also works the other way around, meaning if you need to

(51:36):
extract information, the coding side can write Python code to scrape information from the website so it has access to it in the following steps, and this wheel goes on and on and on.
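To give a flavor of what "the coding side writes its own scraper" can look like, here is a minimal, standard-library-only sketch that pulls a page's title out of its HTML; the URL and the choice of what to extract are purely illustrative, not part of Cursor's actual implementation:

```python
# Minimal illustrative scraper: fetch a page and extract its <title>,
# the kind of helper an agent might generate for itself mid-task.
from urllib.request import urlopen
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text content of the page's <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

def page_title(url: str) -> str:
    """Download a page and return its title, for later pipeline steps."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()

# Example (requires network access):
# page_title("https://example.com")  # "Example Domain"
```

A real agent would swap the title extraction for whatever the next coding step needs (API docs, table data, form fields), but the shape is the same: fetch, parse, hand the result forward.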
Why is this helpful?
Well, in a really wide range of use cases. The first one is obviously when you need information in order to continue your development, such as getting API documentation from a

(51:56):
website in order to complete the process you're working on, or when you want to write scripts to actually operate systems and you want to understand what those systems are and how they work. All these things are now possible autonomously, using this really powerful combination of the Cursor IDE and the Anthropic web agent capabilities. But as I mentioned, the Cursor version of this is specifically tailored for developers working within Cursor, just as a

(52:19):
plugin or an extension of that universe, so it's not directly competing with some of those other tools. Now, on to some other releases that are not the biggest ones of this week but are still interesting and exciting. The first one we're going to talk about is from Google: Gemini 2.5 Flash Image, also known as Nano Banana, transitions from preview to full production readiness as of the announcement

(52:41):
on October 2nd, which basically means that all the functionality is now available through everything Gemini, including the API and Vertex AI from Google. For those who somehow missed the craze: Nano Banana is an incredible image generator that is now baked into the Gemini universe, and you can run it independently or as part of the Gemini chat. The one interesting thing they fixed as part of this

(53:04):
broader release is maybe the most annoying thing that was blocking me from using Gemini in some of these cases, which is the lack of control over aspect ratio: it only generated square images so far, and now it supports ten different formats, including 21:9, 16:9, 4:3, and 3:2, obviously square, and, the other way around, portrait with 9:16, 3:4, and 2:3.

(53:26):
And what they call flexible, which is 5:4 and 4:5. So basically almost any format you want is now available, built into Nano Banana, without having to switch to a different tool and do an outpaint to get the aspect ratio you want. That was driving me crazy in Nano Banana before, so I'm really excited to see it. But Google also released something really interesting without actually announcing it.

(53:47):
I literally stumbled upon it while going to Google AI Studio to check out the new functionality of Nano Banana, and I found in Google AI Studio a new release, which I learned they just shipped in October, called AI Studio Build mode. And AI Studio Build mode is basically a free vibe-coding tool baked right into Google AI Studio.

(54:08):
Now, I haven't tested it yet; I literally just found it yesterday as I was searching for some other information, but it seems to be something very similar, at least, to the lower-level code-creation tools out there. It's probably not competing with the Replits and Lovables of the world yet, but the direction is very clear. When Google starts testing something, they usually are

(54:29):
later on deploying it, one way or another. So if you want to start playing with vibe coding and you don't want to pay for the main tools out there today, starting with Google's new tool might be a very interesting option. I will test it out and share with you what I find. But I've seen a few people developing basic games and things like that very easily, just by simple prompting, and getting fully functional

(54:51):
applications. So what does that mean? It means that Google, like everybody else, is looking at how to get more people to write code with AI in simple ways, and this is one of their ways to test the functionality and the capabilities they have there.
Again, do I think they're going to try to compete with Cursor on the professional IDE side, or with Replit and Lovable and other vibe

(55:11):
coding tools? I don't think so, but I will keep you posted if I see them changing direction. But it is very clear that they want to include some kind of vibe-coding offering in their overall Gemini package.
Now, speaking of coding tools and platforms and APIs and so on, a very interesting piece of information caught my attention this week on X: somebody posted screenshots from

(55:35):
OpenRouter showing the most-used models across several different categories.
For those of you who don't know OpenRouter: I've been using it for probably a year and a half now. It's an incredible website that allows you to connect to their API once, and behind the scenes they're connected to all the other large language models and other AI-related tools, so within your API call you can call any other model's API.

(55:58):
So basically you do the integration once, and then if you want to compare different models, you can switch back and forth without really changing anything other than calling the new model by name.
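Conceptually, that "integrate once, switch by name" pattern looks like the sketch below, which targets OpenRouter's OpenAI-style chat completions endpoint. The endpoint path and the example model names are assumptions based on its public docs, so verify them against the current documentation before relying on them:

```python
# Sketch of one integration serving many models: only the `model`
# string changes between providers. Standard library only.
import json
import os
from urllib.request import Request, urlopen

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint

def build_request(model: str, prompt: str) -> Request:
    """Build the HTTP request; only `model` differs between providers."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

def ask(model: str, prompt: str) -> str:
    """Send the request and return the model's reply text."""
    with urlopen(build_request(model, prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Same code path, different backends -- just swap the model name:
# ask("x-ai/grok-code-fast-1", "Summarize this page.")
# ask("anthropic/claude-sonnet-4.5", "Summarize this page.")
```

Because every call is identical apart from the model string, side-by-side comparisons (and the token leaderboard discussed next) fall out naturally from routing everything through one place.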
And they have a leaderboard that shows how many tokens are being consumed by every single model you can imagine. And right now, in the general leaderboard, numbers one and

(56:19):
two are Grok 4 Fast (free) and Grok Code Fast 1, ahead of Gemini and ahead of Claude Sonnet 4.5. And even when aggregated by provider rather than by specific model, xAI is number one, Google is number two, and Anthropic is number three, with xAI leading by a very, very big spread,

(56:40):
with 1.24 trillion tokens versus Google's 576 billion. So more than double the tokens being consumed on Google are being consumed on xAI, and it's more than triple the amount on Anthropic, which tells you that while we don't talk about xAI a lot, they have a very capable AI solution out

(57:02):
there right now.
And as we mentioned last week, they have by far the most cost-effective AI right now, which may hint at why, on the API side, they are leading the race by a very big spread: you can get very solid results for significantly less money than from the other models. When it comes to cost effectiveness, they're far ahead right now, and this is great proof of that.

(57:24):
And staying on the same topic of taking models and making them more cost effective: DeepSeek just launched a new variation of their model, called DeepSeek V3.2-Exp. It's a very similar approach to what Grok did with Grok 4 Fast: taking a model and making it significantly smaller and significantly faster, almost

(57:46):
aligned with the big models' capabilities, but for a fraction of the cost.
So right now it is 2.8 cents per million input tokens, compared to 7 cents, which was the price before: a 60% discount for very similar results.
And this is a trend that we are going to continue seeing.
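As a quick sanity check on that pricing (a sketch using cents per million input tokens, with the per-job figures as assumed examples), the cut from 7 to 2.8 cents works out to 60%:

```python
def discount(old_cents: float, new_cents: float) -> float:
    """Fractional price cut between two per-million-token rates."""
    return 1 - new_cents / old_cents

def job_cost_cents(input_tokens: int, cents_per_million: float) -> float:
    """Cost of a job given its input-token count and per-million rate."""
    return input_tokens / 1_000_000 * cents_per_million

# Cents per million input tokens, as quoted in this episode:
deepseek_old, deepseek_new = 7.0, 2.8
gpt5_nano = 0.5  # still the cheapest of the small, fast models

print(round(discount(deepseek_old, deepseek_new), 2))  # 0.6
# A hypothetical 50M-token batch at the new DeepSeek rate vs GPT-5 Nano:
print(job_cost_cents(50_000_000, deepseek_new))        # 140.0 (was 350.0)
print(job_cost_cents(50_000_000, gpt5_nano))           # 25.0
```

The same two-line calculation works for any model pair on a provider's price sheet, which makes these "smaller, cheaper variant" releases easy to evaluate for your own workloads.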

(58:06):
New models are gonna come out, and then they're gonna build these smaller variations of the same models that drive similar results but are significantly cheaper for us to consume, which is great for all of us because it allows us to get better intelligence for significantly less, and faster.
The cool thing about this one is that, like all the other products from DeepSeek, it is open source, and you can use it for whatever you

(58:28):
need: get it from Hugging Face or GitHub and use it for your AI implementation.
It is currently the cheapest of these faster, smaller models other than GPT-5 Nano, which is still leading from a cost perspective at half a cent per million tokens.
Since we talked about DeepSeek, let's stay in China: Alibaba just announced something very interesting.

(58:48):
Their Tongyi Lab has introduced what they're calling agentic continual pre-training, or agentic CPT, which is a new open-source framework that trains large language models much more efficiently than the existing process.
And their main model that uses this new approach is significantly smaller than top models today, and yet it rivals

(59:10):
many of them on research capabilities.
It is the first open-source model to exceed 30 points on Humanity's Last Exam, showing very strong capabilities in research and data analysis while being a significantly smaller model.
And as I mentioned, it's open source, so you can use it for your needs as well.
Now, from China back to the US: Microsoft has made some

(59:31):
interesting announcements this week as well.
They have integrated Copilot Chat into Word, Excel, PowerPoint, Outlook, and OneNote for all Microsoft 365 users without requiring an additional license.
So basically, if you have these tools, then you have Microsoft Copilot baked into them.
They're also making Microsoft 365 Copilot more

(59:51):
interactive with agent mode, which enables users to guide the tool through complex multi-step tasks in these tools, such as Excel and Word and so on.
And guess where this is coming from?
If you've been listening and paying attention, you probably guessed correctly: it comes from their new integration with Anthropic.
So they took Anthropic's really incredible capability to create CSV files, Word documents, and

(01:00:13):
presentations, and they're baking it into the Microsoft tools, based on the announcement they made two weeks ago that they're going to add Anthropic capabilities alongside the OpenAI capabilities inside the Microsoft Copilot universe.
And staying on the topic of new announcements, or rather an old announcement that is finally materializing: Amazon is finally releasing Alexa Plus, which is something

(01:00:36):
that they announced at the beginning of this year.
This is basically an LLM-based, much more interactive version of Alexa that understands context, knows what you're searching for, knows the products you're buying on Amazon, has access to the internet, can do research for you, and can actually take action, like booking reservations, ordering food, and so on, all with the Alexa you have at home, in your car, or

(01:00:57):
wherever it is that you're using Alexa.
This is a big jump in Alexa's contextual understanding, while connecting it to all the things that make Alexa very helpful as it is today, such as access to all your music library, whether on Amazon Music, Spotify, or wherever you listen; your viewing history on Amazon Video; your orders on Amazon

(01:01:18):
itself; or just searching the web, and so on.
All of that is baked into the existing user interface, whether in the mobile app, on the Alexa device, or in cars that run Alexa.
This is supposed to be rolling out right now.
I have multiple Alexas at my house, so it will be interesting to see whether the old models upgrade automatically, whether something needs to be done, and whether there are any

(01:01:38):
limitations, and I'll keep you posted as I learn more.
And we're going to end with three interesting announcements from OpenAI, and then an AI-based actress that is taking the world by storm.
So the three big announcements from OpenAI: one, they just released their H1 results, and their revenue has rocketed 16% over the entire

(01:02:00):
year of 2024.
So the first half of 2025 has seen 16% growth over the full 12 months of the previous year.
But at the same time, they have burned through $2.5 billion in cash, because their expenses were significantly higher than that huge jump in revenue, and that led to a $7.8 billion operating

(01:02:21):
loss and a $13.5 billion net loss in just the first half of the year.
That being said, they have just raised another $40 billion at a $300 billion valuation, and they ended the first half with $17.5 billion in cash and securities, which can keep them running at this crazy burn rate for at least another six months.

(01:02:42):
They're also pushing a lot more aggressively on market share, and they've invested $2 billion in sales and ads, which is, again, more than double their entire investment in that area in all of 2024.
Now they're projecting continued crazy growth, with $9.4 billion in revenue in 2026, but $115 billion in cumulative

(01:03:06):
burn through 2029, which is an insane amount of money and puts a very big question mark over their profitability, at least between now and then.
The other interesting piece of information released this week is about how much power OpenAI is going to consume, compared to benchmarks we know today.
They're planning, in their partnership with Nvidia, 10

(01:03:27):
gigawatts of data centers.
That is in addition to the seven gigawatts of data centers they're planning as part of Stargate.
Now, putting that in perspective: a 10-gigawatt project alone matches the amount of electricity used by the entire city of New York.
The seven-gigawatt number is what San Diego consumes during a

(01:03:47):
really large heat wave, at top peak usage.
So that gives you an idea of how much power this company is going to consume, on a regular basis, within the next few years.
One of the top researchers on this topic, Andrew Chien of the University of Chicago, warns that AI could consume 10 to 12% of global power.
That obviously creates a very serious environmental impact, and while

(01:04:12):
everybody's talking about trying to make this green, the reality is that, as of right now, it's very far from green; it's using energy from traditional carbon-based fuels, which is not a good thing for any of us.
I really hope that sometime in the near future AI will help us solve green energy, either through better solar panel technology or the holy grail of

(01:04:35):
fusion.
And the other thing I want to share with you, which you can go and explore on your own, is that OpenAI started publishing a new series called OpenAI on OpenAI, in which they share, in videos as well as blog posts, how they are using the technology internally.
They talk about go-to-market and give multiple examples, showing you how OpenAI is using OpenAI technology, which can give you great ideas and inspiration about

(01:04:57):
what is possible with AI today.
And then, to end on a really interesting, weird, bizarre, surprising, scary topic: the introduction of Tilly Norwood.
So who the hell is Tilly Norwood?
Tilly Norwood is a fully synthetic, AI-based actress crafted by a Dutch AI studio, which offered her as an

(01:05:18):
actress to leading agencies.
They're basically saying she could be the next Scarlett Johansson or Natalie Portman.
She's this cute, sophisticated young actress; only, she's not real.
Now, obviously, that drove a huge celebrity backlash saying this is horrible, this is the end of the world, how can people do this, and what happened to human connection,

(01:05:40):
and so on and so forth.
This comes from all sides of the A-list of actors and actresses in the world, as you would expect, as well as from SAG-AFTRA, the actors' union, which includes about 160,000 actors who were very loud against it.
You may remember the strike from a year or two ago, when they went on strike and basically shut Hollywood down

(01:06:02):
until the Hollywood studios signed an agreement saying they will not use AI to replace them, and the writers too.
Those of you who were listening to the podcast back then know I said it was the most ridiculous agreement ever signed, because the studios don't stand a chance.
And the reason they don't stand a chance is that it doesn't matter what they sign: there will come a time, probably faster than we think, in which

(01:06:23):
new types of studios will show up that will be able to generate full-fledged movies, not on a $20-to-$200 million budget, but for $20,000 to $200,000, and it's all gonna be AI-based.
There are not gonna be any actors, any cameras, no lighting, no filming, no editing, no microphones; none of that is gonna be there, and yet a lot of people will want to see it.

(01:06:45):
That can lead to a huge variety of different futures.
One is that people will be willing to pay more for human-based movies; I really hope that's the case.
The flip side can also happen: we get flooded with a huge variety of low-quality new movies, because anybody will be able to create them, and maybe we'll be able to go to the movie theater and pay $2 to watch a movie instead of $25 or

(01:07:09):
$28 or $30, which would then allow more people to go to the movies.
So there are benefits to this approach, unless you're an actor and you're afraid for your job.
But that is very similar to many other jobs that AI is going to wipe out or at least impact negatively.
Do I think the near future will have at least a hybrid model?

(01:07:29):
I'm a hundred percent sure of it.
In the very near future, we'll start seeing more and more AI in movies, either AI actors or entire AI movies, and they're going to include blockbusters and a lot of movies that people will wanna see that may not have human actors in them at all.
Do I think that's gonna replace all actors in the immediate future?
No, but I think our kids, and definitely our grandkids,

(01:07:50):
wouldn't care.
All they would care about is that the movie is exciting, that it moves them, and that they can enjoy it; it wouldn't really matter to them that sometime in the past, humans used to do that.
This sounds terrifying and scary to some, and really exciting to others.
I think it's gonna be an explosion of creativity, allowing a lot more people to create full movies or TV series or even short videos and films, which I think is great.

(01:08:12):
I am not happy about the impact this is going to have on the actors and videographers and editors and all the other people who go into creating a movie, but I don't see any way around it.
This is very similar to what happened to cartoons that were drawn by hand: now the vast majority are 3D and computer generated, and I seriously doubt there are many hand-drawn animated movies still being made, and yet this industry is

(01:08:35):
still growing, and a lot of people love watching animated movies, and so on.
This might just be the next variation of that.
That's it for today.
If you have been enjoying this podcast, please subscribe to it so you don't miss a single episode.
And while you're at it and subscribing, if you give us a review on your favorite podcasting platform, click the share button, and send it to a few people who can learn from

(01:08:57):
it as well, I would really appreciate it, and they would really appreciate it too.
We'll be back on Tuesday with another fascinating, incredibly interesting how-to episode that's going to show you how to better manage your time using AI, with several different methods.
Time is the only resource you cannot get more of, so learning how to manage it more effectively, both for your

(01:09:18):
personal and professional life, is extremely valuable.
So come join us on Tuesday, and until then, have an incredible rest of your weekend.