
Is AI really ready to revolutionize the workplace—or are we all just beta testers with fancy job titles?

In this episode of The Leveraging AI Podcast, Isar Meitis dives into the latest reports, product launches, and behind-the-scenes drama shaping the future of artificial intelligence. From jaw-dropping usage stats from OpenAI and Anthropic, to Microsoft’s digital agents with KPIs, to the billion-dollar race to build gym-trained AI employees—this is the stuff the headlines aren’t telling you.

In this session, you’ll discover:
- How 700M people are using ChatGPT every week (and why 73% of it isn't even for work)
- Why Claude is powering a silent automation revolution (77% of its tasks are full-blown process automation)
- Why most employees are *still* barely scratching the surface of AI at work
- The alarming global divide in AI adoption (Spoiler: Israel and Singapore are leading by miles)
- OpenAI’s shocking take on hallucinations—and why your AI might be confidently wrong, often
- Why Salesforce’s AI agent rollout is a cautionary tale in overhype and underdelivery
- Microsoft + Workday’s plan to treat AI agents like actual employees (KPIs and all)
- Meta’s AR glasses and the “Zoom avatar” future that’s either cool or creepy
- Why OpenAI wants ChatGPT to start shopping for you (and why that's a data privacy nightmare)
- The billion-dollar training gyms building AI agents to take over real-world business tasks
- The inevitable tension between faster, cheaper, and... accurate AI
- What business leaders need to *urgently* understand about reinforcement learning and economic displacement

📘 OpenAI: Real-World Usage of ChatGPT
🔗 Economic Research: ChatGPT Usage - https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f142e/economic-research-chatgpt-usage-paper.pdf 

📘 Anthropic: Claude Usage & AI in the Economy
🔗 Anthropic Economic Index – September 2025 - https://www.anthropic.com/research/anthropic-economic-index-september-2025-report 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker (00:00):
Hello and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast.
The podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host. I've got really bad allergies today, so I have to apologize in advance for my stuffy nose, but we have a lot of interesting things to cover.
We're going to deep dive into two papers that were released in parallel from OpenAI and from Anthropic, revealing how people

(00:25):
are actually using AI in the real world, which is fascinating to learn. We are also going to talk about the impact, or in some cases lack of impact, of agents around the world, and where agents are going to take us in the very near future, with updates from multiple companies on this topic. And we're also going to deep dive into the release of GPT-5 Codex,

(00:46):
which is OpenAI's coding agent that was just released this week. And then we have a lot of other rapid-fire news, starting with a lot of cool, interesting new releases from OpenAI and from other players as well. There's a lot of interesting investment news and a lot of other stuff that happened this week, so we have a lot to cover. So let's get started.

(01:08):
As I mentioned, both OpenAI and Anthropic released two papers that share how people are actually using ChatGPT and Claude in the real world. This is really interesting for several different reasons, but the main one from my perspective was perfectly captured by Peter McElroy, Anthropic's economist, who said: what we really hope with this data is to help people

(01:31):
understand and anticipate how AI at the frontier is changing the nature of work. Now, the reality is that's more the Anthropic side; on the OpenAI side, we will see it's not just work, it's actually creeping more and more into our personal lives. So, a few numbers from OpenAI: they have over 700 million weekly active users as of July of 2025, sending over 18 billion messages every week.

(01:55):
To do the math very quickly, that's roughly 29,000 messages every second. This is an insane number. Now, the most interesting thing about the OpenAI paper is that non-work-related ChatGPT messages surged from 53% in June of 2024 to 73% in June of 2025. So right now, about three quarters of the messages that

(02:17):
are sent through ChatGPT are not for work-related stuff. Now, on the Anthropic side it is definitely not the same. Over there, the most interesting aspect is that 77% of API tasks that are using Claude in the backend are built towards automation of processes rather than augmentation,

(02:37):
meaning building work-related things that replace humans rather than help humans do the work more effectively. While the trend is not a surprise, the number is: again, 77%, roughly three quarters of the API usage, is driven by automation. The reason it's not a surprise is that Claude's success has been riding on it becoming the de facto solution

(02:58):
for coding. And so most coding tasks around the world, when they're built into automated processes, are now done autonomously, or at least somewhat autonomously, which drives these numbers. But if you take that beyond what's happening right now and you try to project what it means, it means that as these models get better at other things beyond just coding, the same exact trend is expected over there,

(03:20):
meaning we're expecting models that are ahead of the others in building and understanding specific automations to do a lot more automation than augmentation, which means significant potential impact on future jobs. So back to the OpenAI side of things. What are the top things that people use ChatGPT for? 29% of users on the personal side use it for practical

(03:41):
guidance, 24% use it when looking for information, and 24% use it for different writing tasks. And on the work side of ChatGPT, most of the work that is done is editing and translating user-generated text. Diving into Claude's breakdown, coding is 36% of overall tasks. Education tasks rose from 9.3% to 12.4%, and science-related tasks,

(04:07):
research and so on, rose from 6.3% to 7.2%. But as you can see, the vast majority, 36% of Claude's usage, is for coding. And going a layer deeper, the amount of full delegation on these coding tasks rose from 27% in December of 2024 to 39% in

(04:28):
August of 2025. So the direction is very, very clear: more automation, less augmentation, as these systems get better. Another interesting thing that came out of these two papers is that global AI adoption is uneven, with higher per-capita usage in wealthy nations and more tech-concentrated places like Israel and Singapore, with Israel at seven times the

(04:49):
average and Singapore at 4.6 times the average, with Canada at 2.9 times the average, while emerging economies like India and Nigeria are very far behind, at 0.27 and 0.2 times the average of the entire survey. This means that the inequalities that exist today may actually be

(05:09):
increased rather than decreased by the introduction of AI, which is really sad to hear, because AI does provide the opportunity for regions with less technology and less access to knowledge to close the gap, which is what I really hope will happen eventually. Diving more into demographics on the ChatGPT side, it is very much skewed towards the younger generation: 46% of all messages come from people under 26, who make up

(05:33):
significantly less than 46% of the population. So there is a big concentration in the younger generation, with work-related usage growing with age and education, meaning people who are older and more educated use ChatGPT more for work and less for personal stuff, and the younger generation uses it more for personal stuff. Diving even deeper into the OpenAI classifications, they have

(05:54):
classified 49% of the messages as asking ChatGPT something, 40% as doing something, and 11% as expressing something. So what is the bottom line from both of these papers? And again, there will be links to both of them in the show notes. You can take the links, these are actual research papers, so you can drop them into a tool like NotebookLM and get the

(06:16):
highlights, or read the whole thing if you're interested. But the bottom line is that AI adoption is growing fast. It is growing very unevenly, and it is growing faster in established countries and established regions versus everybody else. The other thing is that OpenAI, or ChatGPT, is the household name for AI. Especially for the younger generation, it is becoming the

(06:38):
place to go to get any kind of information, any kind of consultation that is not work-related. And the thing that I take out of all of this is that most people don't really have a clue how to use AI at work. Because if most people are using AI at work, and now I'm talking specifically about ChatGPT, just by asking it for knowledge and asking it to rewrite and translate things, they're missing about 95% of what AI can do that it's actually really

(07:00):
good at and that provides significantly more business value. So what this means is that even people who say in surveys that they are using AI at work are really not using it to its full potential. And that means there's still a huge opportunity in the workforce to benefit from AI way beyond what we are benefiting from it today. Now, OpenAI released another interesting paper this past

(07:22):
week, which talks about the causes of, and the potential resolutions for, AI hallucinations. The paper, which is literally called Why Language Models Hallucinate, shares the results of research performed by OpenAI that sheds a very interesting light on why these models hallucinate. So, for those of you who don't know, which I assume is very few of you, AI models make stuff up, and they make stuff up in surprising

(07:42):
ways and in surprising places. And they make stuff up in the same convincing way they share real, accurate information with us, which makes it very, very hard to catch. What OpenAI found in the research is that the reason large language models hallucinate is driven by the way they are trained and evaluated post-training. Basically, what they're saying is that the models get incentivized

(08:03):
to guess over admitting that they don't know the answer. Think about when you are taking a multiple-choice test: if you don't provide any answer, you by definition get a zero on that particular question. However, if you guess, you have a 25% chance of being right when there are four potential answers. And since AI gets scored on similar kinds of tests, it prefers to guess and make stuff up rather than not provide an answer at all.
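To make the incentive concrete, here is a quick back-of-the-envelope calculation; the numbers are illustrative and are not taken from the OpenAI paper. Under scoring that gives zero for a blank and full credit for a lucky guess, guessing always has the higher expected score, and only a penalty for wrong answers (or credit for abstaining) flips that.

```python
# Toy expected-score math for a 4-option multiple-choice question.
# Illustrative only; not the scoring used in the OpenAI paper.
p_correct_guess = 0.25  # blind guess among four options

# Classic scoring: 1 point if right, 0 if wrong or blank.
ev_guess = p_correct_guess * 1 + (1 - p_correct_guess) * 0            # 0.25
ev_abstain = 0.0                                                      # "I don't know" scores nothing

# Penalize wrong answers and the incentive flips toward abstaining.
ev_guess_with_penalty = p_correct_guess * 1 + (1 - p_correct_guess) * (-1)  # -0.5

print(ev_guess, ev_abstain, ev_guess_with_penalty)
```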

(08:26):
Since OpenAI figured this out, they have trained GPT-5 differently, and this results in a very interesting output. GPT-5 thinking mini shows a 52% abstention rate, meaning it does not answer a question when it is not sure about the answer, with 22% accuracy and a 26% error rate.

(08:46):
So 22% of the time it answered correctly, 26% of the time it answered incorrectly, and 52% of the time it decided not to answer at all. Compare that to o4-mini, which has a 1% abstention rate, meaning it chose not to provide an answer only 1% of the time, compared to 52% of the time for GPT-5, but it got 24%

(09:06):
accuracy versus 22% accuracy for GPT-5. Why? Because it was guessing, and sometimes those guesses were actually correct. The problem was that the error rate was 75%, so it got it wrong 75% of the time, and it only slightly increased the accuracy, from 22% to 24%, by guessing on all the cases where GPT-5 did not

(09:28):
provide an answer. So what does that mean? It means that if we learn how to train models by encouraging them not to guess, giving them a higher score when they provide an uncertain answer, saying, I don't know this, or I'm not certain, or this is my level of confidence about this particular thing, rather than rewarding a guessed

(09:49):
answer that is sometimes right, which would otherwise give them a higher score in the evaluation, it will lead to significantly fewer hallucinations. In one of the examples they gave, in doing a summary about a specific individual whose birthday was not available, the model chose to guess the birthday. Think about the chances of that: it's one in 365, right? It's a very low chance, and yet the model chose every time to

(10:09):
guess the birthday in order to provide all the information that the user requested. This obviously does not make any sense, and so if the training is better, in these kinds of situations the model would just say, I don't know that information, and we'll move on. Now, part of the problem in fixing this is a business problem: the business incentives that are basically driving consumer AI

(10:29):
development remain misaligned with reducing hallucinations in the way I explained. Yes, everybody wants fewer hallucinations, but fewer hallucinations means a lot more compute spent thinking about the problems and figuring out when not to answer, and that leads to three different results. Result number one: it's gonna be more expensive to get answers. Result number two: it's gonna take longer to get answers. And result

(10:49):
number three: we're not always gonna get answers. All three of these are things consumers do not want. So there will have to be some kind of balance between these forces. We say we want faster, cheaper, and accurate results, where the reality is that faster, cheaper, and better is not something you can get all at once. So the choice is between getting faster, cheaper, less accurate

(11:14):
results, or getting more accurate but slower, more expensive results, and in many cases not getting results at all, because the AI will just tell you that it doesn't know, which from my perspective is actually better. But when people are chasing cheaper solutions all the time, this may not be the only incentive that drives how future models will actually operate. The bottom line is, it's a very interesting path to

(11:35):
dramatically reducing hallucinations and driving a situation where the AI will say it doesn't know, which, as I mentioned, is something I've been doing while prompting for a very long time: incentivizing the AI through prompts to tell me that it doesn't know, or that the information doesn't exist where we were looking for it, rather than making up an answer. It works very, very well. It actually works extremely well on ChatGPT compared to, say, Gemini, where it works not so well.
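A minimal sketch of this kind of "don't guess" instruction, using the OpenAI Python SDK. The wording and the model name here are illustrative assumptions, not the exact prompt described above.

```python
# Illustrative sketch: steer the model toward admitting uncertainty.
# The instruction text and model name are assumptions, not a quoted prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ADMIT_UNCERTAINTY = (
    "If you are not certain of a fact, or the information cannot be found in "
    "the provided material, say 'I don't know' or state your confidence level. "
    "Do not invent names, dates, or numbers."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name; use whichever model you have access to
        messages=[
            {"role": "system", "content": ADMIT_UNCERTAINTY},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What was this company's Q3 2025 revenue?"))
```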

(11:56):
So, from the research papers to the impact of agents on the workforce: where we are right now and where we are going. Where we are right now is actually very interesting. There were two independent, unrelated articles about how slow AI agent adoption actually is compared to the promise and the hype earlier this year.

(12:16):
Now, to put things in perspective, AI agent adoption has quadrupled since last year, so it's still growing very, very fast. However, one article talks about the early predictions of Marc Benioff, the CEO of Salesforce, related to the future of Agentforce, which is what Salesforce is going all in on: agents within the Salesforce environment. And if you remember,

(12:36):
Benioff called 2025 the absolute year of Agentforce when he was making his predictions for the year. And this prediction has basically crumbled under the weight of customer skepticism, complex implementation, cost, and many other things that actually slowed the progress, as well as really fierce competition from multiple different angles. So the reality as of right now is that fewer than 5% of the

(12:59):
150,000 companies that use Salesforce are actually paying for Agentforce nine months after its launch. If you remember, earlier this year Benioff himself was quoted calling their technical architect team crazy and pushing to fire all of them after they warned big clients about Agentforce setup complexities. The reason he was saying that is

(13:21):
that he was claiming it takes minutes to deploy, and yet his actual implementers say that it's a very complex and tedious process. This team was dismantled at the end of 2024, and it was put back together in August of 2025 after many customers said that they need this kind of assistance that suddenly was not there and not available. They were also saying that this team was right: it is a very complex process, and it does require a team like that to help

(13:43):
put Agentforce in place. Now, it's not all bad. Agentforce hit a hundred million dollars in annual order value by May of 2025, with 6,000 paying customers. But even that number is not completely accurate, because it is bundled with new database structures and database deployments, and we don't exactly know how much of that revenue comes from the new databases versus how much of that revenue

(14:04):
actually comes from the agents being deployed and used. The other big problem, as I mentioned, is competition. The initial pricing was $2 per conversation, and that is more or less double what some of the rivals are offering right now. So there's very big pressure on Salesforce to reduce prices, which may or may not be profitable for them. So there are a lot of other issues slowing the adoption of agents, at least in Agentforce, right now.

(14:26):
Now, in general, what we see when it comes to adoption, and the same thing is true for Salesforce, is that smaller tech startups embed AI significantly faster than bigger legacy firms, which take a very long time to deal with legacy data that is broken, and with processes that are not aligned and sometimes not mapped properly, and so on and so forth. So smaller, faster, tech-oriented companies do faster, better

(14:50):
implementation; slower, old-school companies take significantly longer to implement these AI agentic solutions. Now, if you think about the fact that right now SAP, IBM, Oracle, AWS, Microsoft, and startups like Sierra are all offering similar solutions, you understand that the competition is dramatically intensifying. So while the pie is growing, there are gonna be a lot of people

(15:10):
taking pieces of the pie, and the overall success of each and every one of these providers is limited. And we shared with you last week the irony in which Agentforce was supposed to be an enhancer of employees in the company, and yet inside of Salesforce they just announced 4,000 job cuts in their customer service, because Agentforce resolves 83% of their own support queries.

(15:30):
Another company that is facing the same kind of hurdles is Microsoft. Microsoft is pushing its AI office apps very aggressively, and yet there's very serious pushback against the $30 per month per user investment that is required in order to have the Copilot upgrade across the board. Now, combine that with the fact, which we mentioned last week, that they're planning to integrate Anthropic's models as part of

(15:52):
the solution, especially now that Claude can generate PowerPoint decks and Excel analysis very effectively, probably the best in the market right now. But with the Anthropic models being a lot more expensive, you understand that puts even more pressure on Microsoft in how they're going to grow that aspect of their business. Now, the good news for Microsoft is that, other than Copilot, their Azure cloud is booming because of AI server rentals by

(16:15):
multiple companies, mostly OpenAI, which is providing a huge boost of capital into AI at Microsoft, allowing them to potentially sponsor at least the initial investment in getting people to use Copilot. And once they're hooked, you can do whatever you want. But that's about it as far as the headwinds when it comes to implementing agents in the workforce; all the rest of the news in the agent section of

(16:35):
this episode, and in general, is very positive and very aggressive moving forward. A very interesting piece of news from this week is that Microsoft is teaming up with Workday in order to provide a unified solution that will allow you to manage agents just like you manage employees. Microsoft's Entra Agent ID pairs with Workday's Agent System of Record, also known as ASOR, to give AI agents built on

(16:59):
Azure AI Foundry and Copilot Studio verified identities, basically making them employees of companies, with a clear set of permissions, business context, secure levels of access to specific pieces of data, and so on and so forth. And if you take this a little bit into the future, these agents will have career goals and KPIs they need to hit, and so on, just like any other employee.

(17:20):
And all of that will be able to be managed on the Workday platform just like any other employee. So, as an example, agents that are built on top of the ASOR will log usage, users, and impact on productivity reports, while the Entra ID from Microsoft will let admins audit, grant, or revoke access for different people and agents across different tools and data within the organization.
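Nothing public describes the exact schema, so purely as a thought experiment, an "agent as employee" record with scoped, auditable permissions might look something like this; the field names are invented for illustration and this is not Workday's or Microsoft's actual data model.

```python
# Hypothetical sketch of an agent entry in an agent system of record.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                     # verified identity (an Entra-style ID, for example)
    owner: str                        # the human or team accountable for the agent
    business_context: str             # what the agent is for
    permissions: dict = field(default_factory=dict)  # scoped, auditable access
    kpis: dict = field(default_factory=dict)         # targets it is measured against

invoice_agent = AgentRecord(
    agent_id="agent-0042",
    owner="finance-ops@example.com",
    business_context="Match vendor invoices to purchase orders",
    permissions={"erp.invoices": "read", "erp.purchase_orders": "read",
                 "erp.payments": "none"},            # least privilege by default
    kpis={"invoices_matched_per_day": 500, "error_rate_max": 0.01},
)

def authorize(agent: AgentRecord, resource: str, action: str) -> bool:
    """Audit-friendly check: the agent may act only within its granted scope."""
    return agent.permissions.get(resource) == action

print(authorize(invoice_agent, "erp.payments", "write"))  # False: not granted
```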

(17:40):
And Garrett Kasmir provides a great quote that summarizes it: AI is not a single-vendor solution. It is an ecosystem that emerges on the shared data, shared governance, and shared intelligence across a network of systems. Basically, what he means is you can't do this on your own. You have to be integrated into the actual company systems and

(18:02):
processes, and they are now jointly providing that kind of solution. If you think that's the end of the story when it comes to Microsoft and agents, well, Microsoft just announced that they're going to provide a plethora of different Copilot agents that are gonna be running within Teams. This family of agents will join your Teams meetings, taking notes, suggesting time slots for different topics, even

(18:23):
giving overrun alerts for those topics, answering questions based on data in your entire Microsoft ecosystem, creating documents, creating tasks, and you'll even be able to do this with one-tap mobile activation for just hallway conversations with people, from a mobile app. In this family of agents, some of them are built to face the conversation,

(18:45):
basically participating in the conversation, taking notes, and being active in the conversation. Some of them are more backend-oriented, where they will gather data that you need from SharePoint and other sources, summarizing information and providing immediate answers for information that you need, versus you saying, okay, let's go find this information and meet again in a few days. The information will automatically pop up and be shared with the relevant people, in the relevant format, right

(19:06):
there in the Teams conversation. I think this is an amazing promise and a step in the right direction as far as creating enterprise-level efficiencies. Will it actually work, and will people actually use it? That's a whole different question. But combine that with the previous announcement from Microsoft and Workday, and you understand where this is going. The future of the workforce is a blended workforce, with agents and humans working in tandem across everything in the

(19:28):
company, with agents taking more and more of the tedious tasks and humans able to focus on the higher-value tasks. The question is how many humans we actually need in that situation, when most of the tedious work, which is some percentage X of the overall work, is just done effectively and efficiently by agents. Another company that is moving in the same exact direction, and that competes with Teams, is Zoom.

(19:48):
Zoom just announced an upgraded AI Companion that is not just available in Zoom. You'll be able to run it in Zoom, Google Meet, and Microsoft Teams for transcription, note taking, and data finding across, again, all these platforms, and that is obviously to compete with other platforms that have already been out there. I've been using one of them for a very long time. So you have platforms like Read AI, Otter, Fireflies, Granola,

(20:10):
Fathom, Circleback. All of these work across all the different platforms, and what Zoom is doing is basically saying you'll be able to use the Zoom AI Companion agent to do all these things across the different platforms and not be locked into the Zoom solution. The other thing that they're adding is live humanoid avatars that can join meetings on your behalf.

(20:31):
So while the CEO of Zoom, Eric Yuan, is saying this is great because when you're not camera-ready you'll be able to have your avatar join instead of you, I see this as a very problematic new step in the wrong direction. First, because of deepfakes: you can have people use this technology in order to make you believe that you are talking to a specific person

(20:52):
when you're not talking to that specific person. This already happened a year ago in Singapore, driving tens of millions of dollars in funds transferred from a company to a supposed new supplier after a controller had a Zoom conversation with someone he thought was the CFO of the company, whom he knew well. Now, that wasn't done with a Zoom

(21:13):
agent; that was done with a deepfake platform. But it doesn't matter: the fact that it's gonna be built into the platform is just gonna make it easier to actually manipulate. I'm also not very happy, on a very personal level, to have other people's avatars join the call instead of them. Like, if I can't have a call with you, I don't need the call; I definitely don't need your avatar to show up instead of you. So while I really resent that concept, I think this is part of

(21:36):
the future. Now, I think some individuals and companies will love that, and some companies will hate that, and it's gonna be a while for us to get used to what's allowed and not allowed, and what's acceptable or not acceptable. We might be able to block them in specific conversations or specific platforms, but this is where it is going, whether I like it or not. Now, in addition to just joining the calls and taking notes, these Zoom agents will be able to do things like create emails

(21:58):
and documents about different aspects related to the meeting, prep for the meetings, and do deep research in preparation for the meetings or after the meetings, and there's even a custom agent builder that is provided with MCP support. So it's going to be a very powerful AI-based solution for the Zoom suite and, as I mentioned, beyond Zoom as well. But maybe the craziest and scariest article this week, when

(22:19):
it comes to agents, talks about both Anthropic and OpenAI and how they're investing over a billion dollars each in creating reinforcement learning environments based on real-life scenarios. They're calling them learning gyms, and they basically mimic real-life operations in Salesforce, Excel, Zendesk, et cetera, all the tools that we use regularly. So how does this work?

(22:40):
You create a cloned enterprise environment with all the apps that the enterprise environment would have, and then you have AI agents play in that environment and get feedback from humans, just like any other reinforcement learning, which teaches them how to actually work in an enterprise environment across a very wide range of applications.
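To make the "gym" idea concrete, here is a deliberately tiny sketch of that loop: an agent acts inside a cloned business app and is scored on task outcomes. This only illustrates the reinforcement-learning pattern; it does not reflect how OpenAI or Anthropic actually build these environments.

```python
# Minimal, hypothetical sketch of an agent learning loop in a cloned business app.
import random

class ClonedHelpdeskEnv:
    """Stand-in for a cloned enterprise app (think a fake support-ticket queue)."""
    def reset(self):
        self.ticket = random.choice(["refund request", "password reset", "bug report"])
        return self.ticket

    def step(self, action: str):
        # In practice the reward could come from scripted checks or human raters.
        return 1.0 if action == f"resolve:{self.ticket}" else 0.0

def policy(observation: str) -> str:
    # Placeholder policy; a real system would be a model choosing tool calls.
    return f"resolve:{observation}"

env = ClonedHelpdeskEnv()
total = 0.0
for episode in range(100):
    obs = env.reset()
    total += env.step(policy(obs))   # feedback accumulates into a training signal

print(f"average reward: {total / 100:.2f}")
```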

(23:01):
As I mentioned, OpenAI spent about $1 billion on this kind of process this year, and they're projecting to spend $8 billion on this by 2030. Combine that with the ability to do computer use, basically take over your computer and run different applications, and you understand that these tools, after being well trained on how to actually work in an enterprise environment, will be able to do basically anything any employee in the company does, across all the different apps in the tech stack of any company.

(23:24):
One of the OpenAI senior executives said privately the following: the entire economy becomes an RL machine. RL stands for reinforcement learning. Basically, what they're saying is the world becomes their training ground for the next platform, which will then replace everything that happens in the real world. So what does this mean? It means that the risk of job replacement, displacement, and higher

(23:49):
unemployment is growing dramatically, because the labs now have the budgets and the infrastructure to train models beyond just the stuff that they're doing right now, and to actually train them based on the work we do day-to-day in most companies around the world. And once agents know how to do that well, most of these tasks are gonna be replaced by agents, because they'll be able to do it for a fraction of the cost and in a

(24:09):
fraction of the time. Now, staying in the agent universe, but diving deeper into the code world: OpenAI just released GPT-5 Codex, which is a fine-tuned version of GPT-5 designed specifically to be an AI coding assistant. The new model is already available in the Visual Studio Code extension, the Codex CLI, and Codex Cloud,

(24:31):
so there are different ways to access it, and it was built to be a real competitor to Claude and its huge success in the AI coding universe, as we mentioned earlier in this episode. Now, some historical context on what happened. Those of you who watched the release of GPT-5 saw that it was very, very obvious that OpenAI is all in on taking back

(24:52):
the lead in the coding wars that have driven Claude's revenue from $1 billion last year to $5 billion right now, most of it, as we saw earlier in this episode, 77% of it, geared towards automation in coding. So, a little bit of history. In March of 2024, Anthropic released Claude 3, which was the first model that was actually somewhat good at programming and started getting on the map in that particular

(25:15):
field. In June of 2024 they released Claude 3.5 Sonnet, which was the first big boom of Claude into a platform that is really good at writing code. In mid-2024, OpenAI released GPT-4o, which was supposed to be the contender that would take back the lead, but it just wasn't good enough. That was followed in February of 2025 with Claude 3.7 Sonnet,

(25:37):
which was really good at coding, completely took over the field, and became the default tool behind the scenes for most coding platforms out there, and that started the craze of pushing Claude's success very, very aggressively forward. In May of 2025, Claude 4 was introduced, both Opus and Sonnet, and that drove them even further ahead in the coding world.

(25:59):
And that led us to the release of GPT-5 this summer, which was supposed to take the lead back, followed shortly after by Claude Opus 4.1, which is supposed to be an improvement over that. So that kind of brings us to today: OpenAI is still a little bit behind, and they needed to fine-tune GPT-5 specifically for coding, and that's what Codex does. GPT-5 Codex outperforms GPT-5, the regular model, on

(26:21):
SWE-bench Verified, which is the top way to evaluate coding models right now, though there's still no formal score on the actual leaderboard. Right now, Claude 4 Opus is the leader with 67.6% success, versus the regular GPT-5 with 65%, so a very small spread between the two. And now, based on OpenAI themselves, GPT-5 Codex is

(26:43):
better than GPT-5. They did not say whether it's actually better than Claude 4 Opus, but I'm sure we'll know that within a few days or a couple of weeks, once people start using it. It has a few interesting features, and two stand out. One is that the model dynamically adjusts the time it needs to think about different tasks based on the complexity of the task it needs to do. Now, that's very different from what GPT-5 is doing.

(27:04):
GPT-5 decides whether to think, whether to use a thinking model or not, with a router. In this particular case, the model actually decides in real time how much time it needs to invest in specific aspects of the task, so it's a much more granular way to approach the problem. Simple tasks can run very quickly, and they're claiming that they have observed it working for up to seven hours

(27:25):
straight on its own, on a single prompt. The other interesting feature is that it has what they call a Codex cloud extension that can be attached to most IDE platforms, which will then delegate specific tasks to the cloud that can run in parallel to what is happening in the IDE. The IDE, for those of you who don't know, is the platform in which you develop code,

(27:45):
so tools like Visual Studio or Cursor. And what this allows you to do is, in addition to the model that is running within Cursor or within Visual Studio, there's a parallel path where the model in the IDE can delegate some of the tasks to the cloud platform to run in parallel, while keeping the context of what is happening within the IDE. That's a very interesting approach, and I've said multiple

(28:07):
times in the past few months on this podcast that the tooling is gonna make as big of a difference as the quality of the models moving forward. As the gaps between the quality of the underlying models shrink, the way they're going to be used and the way they're gonna enable people to benefit from them is gonna make a very big difference. Now, I wanna dive in for a second to the seven-hour work

(28:27):
capability. We've shared with you several times the research by a company called METR, which has done long-running research in the past few years looking at the growth of AI models' capabilities by evaluating how long it would take a human to do the task that the AI is doing, with the AI doing it at a 50% success rate.

(28:49):
Now, that sounds really poor: who the hell wants an AI that does things at 50%? But the actual result doesn't matter; the improvement in that over time is what matters. What they found is that the length of the human task the AI can replace doubles every seven months. So if the AI can do a task that a human does in 10 minutes, after seven months it can do a task that a human does in 20 minutes, then 40 minutes, then an hour and 20 minutes, and so on, every seven months.
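The compounding is worth spelling out, since it's easy to underestimate; a quick sketch of that doubling, illustrative arithmetic only:

```python
# Doubling arithmetic behind the METR-style trend: a 10-minute task horizon
# compounding every 7 months. Illustrative math only.
horizon_minutes = 10.0
for months in range(0, 29, 7):              # 0, 7, 14, 21, 28 months out
    print(f"month {months:2d}: ~{horizon_minutes:5.0f} minutes")
    horizon_minutes *= 2
# month 0: ~10, month 7: ~20, month 14: ~40, month 21: ~80, month 28: ~160
```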
seven months, it can do a taskthat a human does in 20 minutes,

(29:09):
I shared with you last week that Replit released Agent 3, which I now use every single day, and it blows my mind. Agent 3 can think for about two and a half hours. That completely breaks the doubling-every-seven-months curve, because it can think 10x longer than what the previous model did, and that was earlier this year. So it's not doubling every

(29:32):
seven months anymore; now it's more like 10x in that kind of timeframe, which is a very different scale. GPT-5 Codex claims it can work for seven hours. That potentially also breaks the METR scale, maybe. And the reason I'm saying maybe is that what the METR scale evaluates is not how long the model works, but actually how long the human works. So for this task that took GPT-5 Codex seven hours to do, how

(29:54):
long would it take a human to do? I can tell you from personal experience that using Replit Agent 3 is not necessarily the fastest way to do tasks. There are many cases in which I could stop the agent, give it comments myself, and dramatically accelerate the process. So you might say, what's the point? Why would you wait for Replit Agent 3 to do the work? Well, the reality is I don't need to do anything.

(30:14):
Last weekend I used Replit to develop a whole new feature in a new application that I'm working on, and it worked for 59 minutes on its own. In those 59 minutes I had dinner with my kids, and when I came back it was still working, and then I sat down and played bass until it was done. And if this was during work hours, I could have done my emails, worked on other things, and so on, while it's running in
the background.

(30:34):
So whether the hour of work orseven hours of work is less and
more efficient, that the humanwork at the same time is less
relevant, as long as it's doingthe work effectively and it's
actually doing it properly,because that means that while
the agent is running, you can dosomething else.
That's it for the deep dives fortoday.
But now we have a lot of stuffto talk about in the Rapid fire

(30:55):
And since we just talked about OpenAI, we're gonna share some new features and cool capabilities from OpenAI. The first thing is that OpenAI released what they call developer mode beta, and it's already available; I already have access to it and have already turned it on. What it enables you to do is connect any MCP server to your chat. For those of you who don't know what MCP is, it stands for Model Context Protocol,

(31:15):
and what it basically means, in simple English, is that you can connect any third-party tool, application, or piece of software to your AI with five minutes of effort. Meaning you can connect Salesforce, Jira, Asana, Microsoft platforms, anything that has an MCP server that somebody has already created, whether the company themselves or a third party, straight into your chat, and ask questions.
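For context on what "an MCP server that somebody has already created" looks like, here is a minimal sketch assuming the FastMCP helper from the official MCP Python SDK; the tool itself is invented for illustration, and a real connector would expose your actual systems, which is exactly where the risk discussed below comes from.

```python
# Minimal sketch of the server side of MCP (assumes: pip install mcp).
# The lookup_order tool is a made-up example, not a real connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-connector")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stub data for the example)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # a client such as ChatGPT's developer mode or Claude can then call lookup_order
```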

(31:35):
That makes it absolutely magical, because it is a 100x multiplier on the capabilities of the chat tool compared to not having those MCP capabilities. Claude has already had this for a few months now, and I've been using it in Claude, and I think it's absolutely amazing; it's magic what it can do. The problem is that it introduces a lot of backdoor and data security risks while you are using it, and that can

(31:59):
come from just the fact that there's another backdoor to your data, and from the fact that these tools can now read and write data on your actual platforms. So let's take Salesforce as an example. Your ChatGPT conversation can now change the data in your Salesforce database, which you may or may not be ready for, and you may or may not like what it's actually doing, because it

(32:19):
may have misunderstood what you asked it to do. Combine that with potential hallucinations of the model, plus prompt injections, or just malicious MCP servers built by third parties to steal your data, and you understand it is the perfect storm of risk versus reward. So companies and individuals will have to define what is acceptable and how to reduce the risk while enjoying the benefit.

(32:40):
The specific note, if you are going to activate this mode in your ChatGPT, says it allows you to add unverified connectors that could modify or erase data permanently; use at your own risk. So if you still wanna do this, it's actually very easy to do. Go and click on the bottom left on your user, go to Settings, then Connectors, and scroll all the way down.
Go to settings, connectors,scroll all the way down.

(33:00):
There's an advanced section,click on advance, and then
there's a tuggle button to turnon developer mode.
Then you're gonna have a redcircle around your prompt line
instead of the regular blackcircle.
It's kinda like highlightingthat you are dealing in
uncharted territory, which Ithink is a good idea from T's
perspective.
Kinda like highlighting the riskon the day-to-day usage versus
just the one-time click of theradio button inside the

(33:21):
settings. But the capability is extremely powerful, and I definitely see a future where we'll figure out a way to reduce the risk while enjoying these amazing benefits that MCP provides. Another half-new feature in ChatGPT is a branching feature. You can now start a conversation in a regular chat in ChatGPT and then branch out new conversations in several different places while remembering the history of what

(33:42):
happened before. So if you want to explore different aspects of a business plan, you wanna dive into the finance, in another conversation you wanna dive into the operations, and in a third one into marketing, after you've done all the initial setup and explained what the plan is, that's an option. If you wanna do this as a study guide and you wanna deep dive into different aspects of learning, you can do that as

(34:03):
well, and so on and so forth. To access this feature, you just have to scroll down to the end of an answer from ChatGPT, click on the ellipsis, which is the three little dots, and then there's an option called branch in a new chat. That's gonna open a new tab in your browser, and it's gonna have the same conversation, but you can continue it in a different direction. The reason I'm saying this is just half a feature is because that capability kind of existed before.

(34:25):
Right now, if you go to any prompt in the chat, there's a little pencil button that you can click on. If you click on that, it actually creates a new prompt that is a new branch in the conversation, and you can continue moving back and forth between the branches, because there's a button in the prompt that shows whether you are in version one, version two, version three, however many versions you want, and in each and every one of them you can continue a separate conversation. But the user

(34:46):
interface of that is not very user friendly. The new option literally creates a new chat in your left panel, which shows you that this is a new conversation, but it is based on the old conversation. So again, going back to tooling, it's a better way to access the same feature, which makes it more accessible to people. Go play with it if you find this valuable. I think it's great; I've been using the old feature for a very long time, and it has provided a lot of value to me.

(35:07):
Another thing that OpenAI has released this week is something that we've been talking about for a long time. In the desktop application, there is now a new Build for orders section in the settings, basically paving the way for native checkout in the ChatGPT desktop app. What does that mean? It means you'll be able to give it your credit card and ask it to go buy stuff for you, and it will be able to do the research, find the right product,

(35:28):
and actually check out right there in the ChatGPT app. This is another step into an agentic future in which AI can do stuff for us. Going back to OpenAI pushing very hard on the non-business side of ChatGPT users, you see why this is a very aggressive step in that direction. Combine that with another thing that OpenAI just released, or is about to release, which is parental controls.

(35:49):
We talked a lot about this in the previous episodes, last week and two weeks ago, as far as the lawsuits and the risk to younger individuals. There are two things that OpenAI is rolling out. One is parental controls for kids under the age of 13, which will block young users from accessing different aspects and different types of chats, and will allow parents to set

(36:09):
different limits, tweak response behavior, and get distress alerts if specific things happen in the conversations. This is supposed to start rolling out later in September, and they're also building a solution that will automatically detect users that are under 18 based on how they converse, and will change the model's behavior to fit an age-sensitive group of

(36:30):
users. So again, all these things are directly connected to what we started with at the beginning: more and more ChatGPT users are using it for personal tasks, and the younger generation is a very big focus of that. So OpenAI is both capitalizing on that field, where they're gonna make millions and potentially billions of dollars from commissions through those checkouts, which is part of the plan for the future that they've shared

(36:50):
several times as their vision moving forward, and also addressing the risks to younger individuals and how the use of AI may impact their lives. So overall, I'm happy with the direction that OpenAI is taking on both these aspects. Other interesting launches this past week: Gamma, the AI presentation platform, has unveiled version three of their

(37:12):
platform, which is a major jump ahead from version two. Like everything else, it's a lot more agentic and a lot more capable of doing multiple things. Gamma has been a very successful platform when it comes to generating presentations, landing pages, and webpages based on single prompts, and now they're taking this to a completely different level. The agentic side of it is supposed to really understand

(37:33):
your intent and act accordingly, so even very short prompts can yield very significant results. One of the examples they gave is "make it more visual", and that will prompt an entire set of changes to your presentation just from this really small piece of feedback. Or it can understand rough notes and a screenshot taken of a whiteboard and then turn them into a complete presentation based on that, and so on and so forth.

(37:53):
So again, a very big jump forward for an already very capable system. Another huge announcement this week comes from HeyGen. HeyGen is a platform that allows you to generate AI avatars, either of yourself or synthetic avatars, of which they have many in their database. I've been using HeyGen for over two years now; it's a great platform. They just announced that they are merging with, or acquired, Elisa, which is a company that

(38:17):
specializes in AI video editing and manipulation. The combination of these two companies is going to be extremely powerful. Think about video agents that act as directors, paired with hyperrealistic avatars. So what does that mean? It means you have a bunch of agents that can create scripts, edits, translations, captions, and feedback loops, all with a

(38:38):
single prompt, which will then drive the output of the avatar. This is going to be a complete change from what we know of avatars right now, because it will make the whole generation solution significantly more efficient. Or, as the announcement says, they're focusing on same-day workflow. So it starts with a prompt, and then it uses an on-brand avatar, your specific company assets, pacing and captions,

(39:02):
channel-ready formats, or whatever formats you need, brand kits, compliance checks, multi-language translations, and one-click push to LinkedIn, YouTube, your CMS, et cetera. Something that used to take entire teams weeks will now take a single individual hours. Another really interesting release this week comes from Genspark. Genspark is a company out of California that has built a

(39:23):
generalized agent that can do more or less anything you want, from writing code, to browsing, to creating presentations, to video generation, and so on and so forth. They just released their own AI browser for Windows and for Mac. Now, what makes it unique compared to all other AI browsers right now is that it includes 169 free, open-weight, open-source models that run on the device itself,

(39:46):
which means these models run on your computer and none of your data gets sent to a third-party cloud provider, which does two things: A, it makes it lightning fast, and B, it makes it private, because no data is being sent to the cloud. It also means it'll probably be a lot cheaper to use, because you don't need any API subscriptions for any of the third-party tools, since the models are running locally on your

(40:08):
computer. Other than that, it works very similarly to other AI browsers, and this is definitely going to be the way of the future. We shared about some of them in previous episodes: they can browse the web on their own, they can keep context across multiple tabs, and they already include built-in super agents that can scan pages for contextual data, auto-run price comparisons across multiple websites, find data across multiple YouTube videos

(40:31):
and create summaries of it, and so on and so forth. It's an extremely powerful capability, and it has an autopilot mode, similar to other AI browsers, where it actually takes control over the web browser and can navigate, click, and change things, and so on. So while this is not the first AI browser, it is the first that I know of that actually runs on device, and that is a very interesting angle; we'll see if it catches on with other

(40:52):
browsers as well. Now, in addition to the technical news from OpenAI, there's a bunch of interesting, more strategic news from OpenAI. The first one relates to their new agreement with Microsoft, which we shared last week: they finally signed an LOI, or an MOU, for their new partnership. OpenAI is projected to take down the 20% revenue share that they have with Microsoft right now to 8%, which is supposed

(41:13):
to give them a net of over $50 billion between now and 2030, which they desperately need in order to pay for compute. As you remember, they just signed a deal with Oracle for $300 billion worth of compute, so they need to find ways to pay for that, and these $50 billion are one way to pay for that stupid amount of money that they're supposed to come up with. Another interesting piece of news that is related to that is that

(41:35):
OpenAI just recruited Mike Liberatore, who is the former CFO of xAI. This is a very interesting piece of information for two different reasons. One, xAI has done some things that are absolutely incredible when it comes to startup setup. They were able to raise about $10 billion in just a few months, which is almost impossible for any company in history unless

(41:58):
Elon Musk is behind it. But you also need somebody on the finance side who is very, very good at putting these kinds of deals together. They also built the most capable AI data center in the world, still today, in about 90 days, which again is unheard of; it usually takes nine to 18 months to do something like this. And the person who was behind that acceleration process on the

(42:18):
financial side is now working for OpenAI. The other obvious reason is that this adds more gasoline to the fire of the very interesting relationship between Sam Altman and Elon Musk. So, a very quick recap: Elon Musk was one of the co-founders of OpenAI. He was one of the most senior backers from a financial perspective, and he left after fighting with Sam Altman, because

(42:40):
he wanted to run the company, he wanted it to become a part of Tesla, that didn't work out, and he left. He then started suing them, and since then this relationship has been escalating in very negative directions. And this is just another way for Sam Altman to stick it to the man, if you want, by hiring his former CFO, who actually resigned from xAI in July of this year

(43:01):
but has now been hired by OpenAI, and he will probably drive the speed at which OpenAI can do its financial operations. The Information had a very interesting article analyzing the new structure of OpenAI and trying to understand which shares will belong to whom in the new restructuring. This is not formal data, but The Information usually gets the

(43:23):
relevant information, pun intended. Based on this article, in the new formation the nonprofit arm of OpenAI will own 27% of the new company, which currently values the nonprofit arm at $135 billion, making it potentially the highest-valued nonprofit in the world. I don't know that for sure, but it's definitely up there. Microsoft is going to own 28%, and it's gonna be the largest

(43:44):
shareholder; that values its investment at $140 billion right now, which is more than 10x what Microsoft actually invested in OpenAI. 25% is gonna be owned by OpenAI's current and past employees, which values that at $125 billion and is gonna make a lot of new multimillionaires in the Silicon Valley area. 13% will go to investors who invested in the company in 2025,

(44:08):
which values that at $65 billion. The 2024 investors get 4%, which values that at $20 billion. 2% goes to the original shareholders, which is $10 billion, and 1% goes to OpenAI's very first investors, which is $5 billion.
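A quick consistency check on those numbers: each stake divided by its percentage points to the same implied overall valuation of roughly $500 billion. This is simple arithmetic on the figures quoted above, nothing more.

```python
# Sanity check: stake (in $B) divided by ownership share implies the same total valuation.
stakes = {
    "nonprofit":             (0.27, 135),
    "Microsoft":             (0.28, 140),
    "employees":             (0.25, 125),
    "2025 investors":        (0.13, 65),
    "2024 investors":        (0.04, 20),
    "original shareholders": (0.02, 10),
    "first investors":       (0.01, 5),
}
for who, (share, value_b) in stakes.items():
    print(f"{who:22s} -> implied total ${value_b / share:,.0f}B")
print("shares sum to", sum(s for s, _ in stakes.values()))  # 1.0
```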

(44:28):
Now, that last one sounds like a very small amount of money compared to the other amounts, but these people invested less than $200 million to get a return of about $5 billion roughly seven years later. That's not a bad investment. From OpenAI to Anthropic: Anthropic's CEO, Dario Amodei, at the Axios AI+ DC Summit, shared that currently Claude is writing most of the code for the next version of Claude. The exact quote is: the vast majority of future Claude code is being written by the large language model itself.
(44:50):
is being written by the largelanguage model itself.
Now, Jack Clark, one of the co-founders, clarified that Claude cannot yet manage all of its own development, but that the portion it can manage is growing rapidly. Combine that with what we shared at the beginning of this episode, that most Claude API usage is geared toward replacing workers rather than enhancing workers, even right now, and you understand

(45:13):
that the next version will enable a lot more of that, and it will come faster because the AI is writing the code. And so we're getting into a very dangerous segment of the impact of AI, as it's going to accelerate even faster and enable even more automation, which will allow it to accelerate even faster, and so on and so forth. As part of this process, Anthropic is dramatically growing its offices in Washington, DC, in an attempt to

(45:36):
influence the government and the decision-making process and to push for higher security and safety. Their head of policy, Jack Clark, and their CEO, Dario Amodei, just held a marathon event starting September 16, pitching House and Senate leaders and committee chairs on AI's "exponentially crazier" surge, and that's a quote, basically warning that the deployment of

(45:58):
AI as it is today is putting at risk a lot of the things we know, and that they need to be more involved in order to reduce these risks. It is very obvious to me, and to more or less anybody in this field, that the 2026 midterm elections and the 2028 presidential race will be highly affected by the impact of AI on jobs, and it will become a major topic and a major concern.

(46:20):
And so the parties will definitely be deeply involved in trying to understand what actions they will take, or can take. That's the bigger question, because I don't think any of the parties have the tools or the vision to actually fight what's happening right now. I don't think they grasp how fast it is coming. I don't think they grasp the impact that it can have. And I think by the time they figure it out, it will be too

(46:40):
late to respond. So my personal feeling is that we're going into some very murky waters in the next three to five years before we figure this out, and what happens in those three to five years might not be very happy times. In parallel, Anthropic is deepening its relationship with the US and UK AI safety institutes, granting them more and more access to their models as they're being developed, not as

(47:02):
a one-time event, but as an ongoing partnership that helps them actually find different risks in the platforms and reduce and address them before the models are released. I salute Anthropic for doing that, and I really, really hope that all the different labs will do the same. And I hope even more that governments will make that mandatory, and that maybe an international body of experts that will evaluate models before they're released can be

(47:24):
established to address not just US models, but models from all over the world. With all of that happening, there was another big event this week: Meta's Connect 2025 event, where they did a lot of unveilings. The focus was obviously their smart glasses, and there are three new sets of glasses. The first one is their consumer-ready smart glasses with a built-in high-resolution, see-through digital display.

(47:46):
It's basically a heads-up display that can project high-resolution video onto the glasses themselves while not blocking what's behind it. The price tag of these glasses is $799. They're a little bulkier than regular glasses, but the capabilities they wrap inside the glasses are absolutely incredible. The glasses are controlled via hand gestures, using what

(48:07):
they call the Meta Neural Band, which is a wristband powered by neural technology. So very small hand gestures can control the glasses and what shows on the display, and so on. And it can be used for things like watching videos, reading and responding to messages, receiving video calls, following map directions, and so on and so forth, all projected straight onto your glasses.

(48:27):
It also includes a 12-megapixel camera that allows you to take videos and photos and obviously upload them or connect them directly to WhatsApp, Facebook, et cetera. Now, at a price point of $800, that's not a cheap toy. But if you think about the vision that this will replace cell phones, and we are very much used to paying over a thousand dollars, or $1,200, for top cell phones, then if I can do everything I can do on my cell phone with glasses,

(48:49):
with my hands staying free, and I can still see the world around me, that makes it very, very attractive, and that's definitely the direction that Meta is pushing. The second set of glasses that they released is the highly anticipated Oakley Meta Vanguard smart glasses, which are built specifically for athletes. These are more of a wraparound, sporty look, for cyclists, runners, skiers, and so on.

(49:10):
The price tag is $499.
It's supposed to be released in about a month, on October 21st.
It can capture 3K-resolution videos with a 12-megapixel wide-angle camera, and it is really built for sports, so it has a much longer battery life, meaning you can be outdoors for about nine hours without having to charge it.
The buttons are on the bottom of the frame so you can access them

(49:32):
easily while wearing a helmet.
It connects seamlessly to Garmin smartwatches to track fitness stats, and it integrates with Strava as well for those of you who cycle.
And it has an IP67 dust and water resistance rating, which means you can wear it outdoors with no problem.
And then there's the second version of the Ray-Ban Meta, which is going to be priced at $379.

(49:52):
That is an increase from the $299 of the original model, and they're already available for sale.
They've doubled the battery life to eight hours, and it also has the 3K Ultra HD video capability.
Now, the demonstrations didn't actually go very well at the event, as Mark Zuckerberg was trying to connect a video call to their CTO, Andrew Bosworth, and that didn't work.

(50:15):
After several different attempts, they also tried to demo a live AI feature on the Ray-Ban Meta glasses, and that failed as well.
So while this is a consumer product that is supposed to be ready for prime time, it is probably not fully baked yet.
That being said, and I've said this multiple times on the show, I think this is the future.
It's a very scary and weird future where everybody will have wearables that can record and analyze everything around them.

(50:37):
And from a privacy perspective, that breaks almost every law and social agreement that we have right now.
But 20 years ago we also didn't think that everybody would share everything they do on social media, and now everybody shares everything that they do on social media.
And so I think it's just a process that we will adapt to.
Definitely the younger generation, which is already used to sharing everything they're doing on social, will have no

(50:58):
problem with that whatsoever.
It does raise a lot of concerns when it comes to places with security information or private spaces and so on, and I do see that becoming a thing, where there are going to be private spaces in which this technology will not be allowed.
They also shared new things like the Horizon TV entertainment hub for their Quest headsets, and Hyperscape, which enables Quest to scan the actual physical environment and create

(51:21):
a digital twin of it.
So you can play games or do things in a virtual environment that mimics a live environment.
And they announced substantial improvements to the Horizon Engine and to the Horizon Studio platform, which allows people to develop AI solutions integrated into their 3D worlds, combining everything that Meta has worked on in the last few years.

(51:41):
So the metaverse and AI and AR and VR capabilities all in one environment.
And as of right now, Meta is definitely the leader in that field.
And going from Meta to Gemini: Gemini just took the number one spot on the US App Store for the first time.
It kicked ChatGPT to number two.
It also climbed from number 26 on the Google Play Store to

(52:05):
number two on the Google Play Store in the US.
And all of that is driven by Nano Banana.
So the new image generation model that everybody wants to use has driven a huge spike in demand for the Gemini app.
And it has also driven interesting revenue to Google.
Gemini's iOS earnings have hit $6.3 million year to date, with

(52:25):
August alone delivering $1.6 million, which is over 1,200% growth over their revenue from that channel in January of this year.
So what does this tell us?
It tells us not that people like Gemini more, but that people really like creating images, and that's their platform to do this.
But it's a very smart move from Google because it drives people to the Gemini ecosystem.

(52:46):
I must admit, I use Gemini a lot.
I use both the Gemini chat as well as Gemini in the different platforms, so within the G Suite tools that I'm using for work, and I love the results.
I just did an incredible analysis yesterday with multiple PDFs that I had inside attachments in Gmail, which Gemini helped me identify and which I dropped into Google Drive, where I then created an initial analysis of

(53:06):
data, grabbing information from tables in these actual PDFs, turning it into an Excel file, and then continuing the conversation with Gemini in the Excel file to do multiple analyses.
Something that would've taken me hours to do took me about 25 minutes.
So those of you who are not using Gemini within the G Suite environment are missing out, if you're in the Google universe.

(53:26):
That's it for today.
There are many more news items that did not make it into this episode, including a very interesting new release from Alibaba, with their new deep research model that actually beats ChatGPT's deep research with a model that is 25 times smaller and more efficient.
There are also many new updates and interesting news about humanoid robots, and even the $5 billion investment of Nvidia in Intel stock and

(53:49):
their future collaboration on chip development, which will bring NVIDIA's capabilities into a chip alongside the CPUs that are developed by Intel.
So there's lots of interesting news in the newsletter.
If you want that, just go and sign up for our newsletter.
There's a link in the show notes, and then you can get all those other news items as well.
We'll be back on Tuesday with a very interesting and different

(54:09):
how-to episode that is built on a presentation I did for teachers this past week, showing how to build really advanced applications inside of ChatGPT, Claude, and Gemini.
You can use it for business as well.
So look out for that on Tuesday morning.
If you are enjoying this podcast and finding it valuable, please give us a five-star rating on your favorite podcasting app,

(54:31):
and please share the podcast with other people.
Pull the phone out of your pocket right now, unless you're driving.
Click on the share button and send it to a few people you know who can benefit from it.
I know you know people who can benefit from it.
I know that you understand how important this is, and I would really appreciate it if you do that.
And until next time, keep on exploring AI, keep sharing what you're learning with the world, and have an amazing rest of your

(54:52):
weekend.