All Episodes

Check the self-paced AI Business Transformation course - https://multiplai.ai/self-paced-online-course/ 

What happens when AI not only matches but beats the best human minds?

OpenAI and Google DeepMind just entered and won the "Olympics of coding", outperforming every top university team in the world… using off-the-shelf models. Now, combine that with agents, robotics, and a trillion-dollar infrastructure arms race, and business as we know it is about to change — fast.

In this Weekend News episode of Leveraging AI, Isar Meitis breaks down the real-world implications of AI’s explosive progress on your workforce, your bottom line, and your industry’s future.

Whether you’re leading digital transformation or trying to stay ahead of disruption, this episode delivers the insights you need — minus the fluff.

In this session, you'll discover:
01:12 – AI beats elite humans at coding using public models
05:15 – OpenAI’s GDPval study: AI matches or beats human experts in 40–49% of real-world tasks
12:56 – KPMG report: 42% of enterprises already deploy AI agents
18:02 – Allianz warns: 15–20% of companies could vanish without AI adaptation
29:22 – OpenAI + Nvidia announce $100B+ infrastructure build
33:30 – Deutsche Bank: AI spending may be masking a U.S. recession
43:15 – Sam Altman introduces “Pulse”: ChatGPT gets proactive
and more!

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have so much to talk about today. First of all, we're going to talk about a really big and interesting milestone that AI has achieved this week when it comes to competing with humans on performing tasks, and what

(00:24):
might be the implications of that, and of related research from OpenAI, on the global job market. Then we're going to dive into the topic of agents and how they are impacting the world: new agent releases, new research about agents, and so on. So, a lot of relevant, interesting agentic news.
The third big topic of today is gonna be the massive investments

(00:45):
that are happening right now in AI infrastructure, what is planned for the next few years, and what might be the implications of that. And then we have a very long list of rapid-fire items, including really interesting new releases of features, capabilities, and models, including a feature of ChatGPT that Sam Altman calls his favorite AI feature. So stick around. There's a lot to cover.

(01:06):
Let's get started.
Last week, the International Collegiate Programming Contest, also known as the ICPC, was held in Baku, Azerbaijan. That contest is considered the Olympics of programming for colleges from all around the world.

(01:26):
The top and brightest programming students from 139 universities, from over a hundred countries, compete to try to solve 12 really complex programming problems within five hours. There were two AI companies that participated in this contest this year. One was Google DeepMind with Gemini 2.5 Deep Think, and the

(01:47):
other was OpenAI with GPT-5. OpenAI's models were able to solve 12 out of the 12 problems, which not a single human team was able to do. GPT-5, the regular GPT-5 that you and I have access to, nailed 11 of the problems on the first try, and then an experimental model worked on the toughest problem and was able to

(02:09):
solve it after nine attempts. Google DeepMind's Gemini 2.5 Deep Think, again a model you and I have access to, solved 10 out of the 12 problems, including the most complex problem that no student team was able to solve. Both these achievements earned the gold-medal equivalent, if it were a human team actually competing in the contest.

(02:29):
Now, in addition to the fact that the models were able to solve all these problems, Gemini solved eight of the problems in just 45 minutes, and then the two additional problems in three additional hours, still way ahead of the five-hour limit that most participants have to use in full in order to compete effectively. Now, if you remember, not too long ago, we reported that a

(02:50):
model from OpenAI and a model from DeepMind successfully competed and earned gold medals in the Mathematical Olympiad that was held earlier this summer. However, both of the models that participated in that Olympiad were experimental models, which are not the commercially available models that you and I use. In this particular one, the coding one, they were using, as I mentioned, for most

(03:12):
of the stuff, the same exact models that we use, meaning they were not specifically trained in order to solve these kinds of problems, and yet they were able to beat all the human participants in this contest.
Now, who else won gold in this Olympics? Well, the top human teams are from St. Petersburg State University, the University of Tokyo, and two

(03:33):
Chinese universities, Jiao Tong University and Tsinghua University, as well. I'm potentially mispronouncing them. So all these universities won gold, and Harvard and MIT earned silver, but none of the teams was able to solve 12 out of the 12 problems. And the OpenAI model was the only one that was able to do this.
Now, what does this mean?

(03:54):
It means that in closed-loop, well-defined problems, AI currently beats the best humans on the planet, humans who train for months to compete in this exact field. Now, as I mentioned, the recent one, the coding one, compared to the math one, is even more interesting, because it was using available models, the same models that are released and

(04:15):
available on OpenAI's and Gemini's platforms for you and me to use every single day. So this is not a custom effort to try to beat this particular use case, but a generalized model that can do a gazillion other things and that can beat the top humans in the world at complex coding problems. Now, to be fair, we gotta remember that both math and coding are very well-defined universes, right?

(04:38):
So they have a lot less of the complexities of real life and real business, because they're a very structured and rigid kind of environment, and AI definitely excels in these environments. However, the trajectory, from not having any tool that could do this two years ago to beating the top teams on the planet two and a half years later, is a very fast trajectory.

(05:01):
And all the other fields will just follow. It's just gonna take longer. But it is very, very clear that it's going to be better than all humans at anything, sometime within the next few years. But the next question this raises is, what is the implication of that on real-world problems? So, not a contest with a five-hour limit and a very well-defined set of

(05:22):
requirements, but real life with actual jobs? Well, we got the answer to that as well. This week, OpenAI introduced what they call GDPval, which is a process they have created in order to perform research to understand what the implications of AI are going to be on actual real-world utility across multiple industries.

(05:42):
So the benchmark is designed in order to assess AI performance on real-world tasks from 44 different occupations across nine key US GDP sectors. So this was very much US-centric. However, it definitely translates to the Western hemisphere, because the same sectors and the same industries

(06:02):
are aligned with what you'll see in the rest of the world, or at least the developed world. So how did this benchmark work? Well, GDPval encompasses 1,320 tasks total, with 220 in what they call the gold subset, which covers the key, more important aspects. It covers the majority of the work activities for over 44

(06:26):
high-wage knowledge occupations in nine different sectors. And the selection process for the specific tasks in the benchmark involved choosing sectors that each contribute over 5% of the US GDP. These sectors include real estate and rental leasing, government, manufacturing, professional, scientific and technical services, healthcare and social assistance, finance

(06:49):
and insurance, retail trade, wholesale trade, and information. And collectively, these nine sectors represent 75.7% of US GDP. So basically three quarters of US GDP is represented by the tasks and sectors covered by this new evaluation criteria. Now, the tasks themselves are derived from real work products,

(07:12):
so legal briefings, engineering blueprints, and so on, actual use cases that are used by real people in real industries. And they were presented by professionals averaging 14 years of experience, obviously including multimodality across files, images, charts, and so on. And it took weeks to set it up, and it took multiple experts to

(07:33):
review the outputs of the models. So how did the evaluation process work? Well, very similar to the LMArena: the experts received two answers, one from humans and the other from different models, and they had to grade them across a detailed rubric, across several different aspects of how good the answer was compared to the other answer.

(07:54):
And the models they evaluated were GPT-5 with thinking capabilities and Claude Opus 4.1. GPT-5 with the enhanced thinking version matched or surpassed human experts in 40.6% of tasks across these 44 different occupations. Claude Opus 4.1 was preferred over humans in 49% of tasks.
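The blinded pairwise grading described above can be sketched with a tiny toy scorer. This is purely illustrative (the function name and the judgment data are made up, not OpenAI's actual GDPval code): each task gets two deliverables, one human and one model, an expert picks a winner or calls a tie, and the reported number is the share of tasks where the model wins or ties.

```python
# Toy illustration of blinded pairwise grading: for each task an expert
# records which deliverable they preferred ('model', 'human') or 'tie'.
# The headline metric is the fraction of tasks where the model's
# deliverable won or tied against the human expert's.

def win_or_tie_rate(judgments: list) -> float:
    """judgments: one of 'model', 'human', or 'tie' per task."""
    favorable = sum(j in ("model", "tie") for j in judgments)
    return favorable / len(judgments)

# 10 hypothetical expert judgments on 10 tasks.
judgments = ["model", "human", "tie", "model", "human",
             "human", "model", "tie", "human", "human"]
print(win_or_tie_rate(judgments))  # 0.5
```

Counting ties as favorable is why headline numbers like "matched or surpassed in 40.6% of tasks" are phrased the way they are.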

(08:16):
Now, the other interesting thing: they also tested older models. So they also tested GPT-4o as an example, and what they've learned is that the performance has doubled in success rate from GPT-4o, which came out in the spring of 2024, to GPT-5, which we got in the summer of 2025. So within one year, the results on real-life, work-related tasks

(08:37):
have doubled in success rate. Now, in addition, models completed the tasks approximately a hundred times faster and a hundred times cheaper than the human experts. Now, that speed does not include the time it requires humans to oversee and review the information, but the AI part of the task was done a hundred times faster

(08:57):
and a hundred times cheaper than the humans. Now, they also categorized the success of the models, saying that Opus 4.1 excels in aesthetics and formatting of the results, while GPT-5 shines in accuracy and domain knowledge. But the trajectory is very clear. So going back to how we started: we talked about a closed-loop, well-defined contest, and in this case, this is the real work

(09:19):
done to generate 75% of the GDP of the United States of America. And right now, with the current leading models, the models are performing about half the tasks better than human experts with 14 years of experience, and they're doubling their capability every year, again, based on real research with real

(09:42):
people doing a blind test. So in theory, if this trajectory continues, and I dunno if it'll continue on the same exact trend, but I think the direction is clear, by a year from now, so the summer or the fall of 2026, these models will be able to do 70 to 80% of the tasks better than human experts in

(10:03):
these fields, on the stuff that drives the US economy. So what does that mean? It means that the future in which there is very serious competition between employing people and just using AI to solve problems is potentially on a significantly shorter timeline than everybody thought before. It also means that right now you can do about half

(10:24):
the tasks in most companies in the US with AI, better than humans can do them right now. Now, yes. Does that require human oversight? A hundred percent. Will it require less human oversight in the future? A hundred percent. What does that mean for our economy? What does that mean for the unemployment rate? What does that mean for the livelihood of people whose jobs will go away, and whose life earnings potentially

(10:45):
will go away?
Uh, very big questions that I don't think anybody has answers to, but I think this very important research will hopefully sound the alarm much louder than anything that we've seen before. I really hope that people in the government will start paying more and more attention to this kind of research.
This is the best research that I have seen so far that looks at

(11:06):
it from a broad, professionally done research perspective, to show what might be the implications of AI on the economy and the job market. And it scares the hell out of me, to be completely frank. I wanna update you on something that is very important, that could dramatically help you learn AI faster, and that is the fact that we just relaunched our self-paced AI business

(11:28):
transformation course.
This course is based on the course that I'm teaching live on Zoom, with the benefit that you can take it in your spare time and break it down into specific sessions. In addition, we are not launching another live course probably until the beginning of 2026, and there's gonna be a waitlist open in the immediate future.
But if you want to learn everything that we have been

(11:48):
teaching companies and individuals and leadership teams across multiple industries, with thousands of people around the world who are successfully transforming their businesses based on what they've learned, you now have the opportunity to do this at your own pace. In addition, I just finished teaching a cohort of the course, and we just finished updating the course to the latest and greatest.

(12:08):
So the self-paced course that you can take right now, and there's gonna be a link in the show notes, is updated as of September of 2025, which is right now. And so you're not gonna be using a recording that is six months old or a year old; it's actually from the last few weeks, with all the relevant updates on the models and capabilities and tools and so on. So if you want to learn how to use AI effectively across

(12:32):
multiple aspects of the business, and this is not a fluff course, this is not talking about the history of AI or the technology of AI, it's actual practical use cases and how to apply different AI tools in order to achieve business goals, go check it out. There's a link in the show notes, and you can sign up and start taking the lessons right now.
And from that, let's continue to the agentic news of the

(12:52):
week.
There's a lot of interesting things, uh, in the agentic world that have happened. So first of all, KPMG, which is a large consulting company, has released their Q3 AI Quarterly Pulse Survey, which they've been running for about two years now. And the survey is based on KPMG surveying 130 C-suite executives at companies that are at least a billion dollars in revenue.

(13:14):
So these are large enterprises, and it gives you a great idea of what's happening in there when it comes to AI and agent implementation. And what they found is absolutely stunning: 42% of organizations now deploy at least one kind of AI agent. That's up from 11% two quarters ago. So that's 4x the amount of agents actually being deployed.

(13:37):
So these are not tests; these are things in deployment, running, doing actual work. A 4x in six months, from the survey in Q1 of this year. Now, the organizations surveyed plan to pour a joint $130 million into AI in the next year, significantly up from Q1 levels, and 57% of these

(13:58):
C-suite leaders anticipate measurable ROI within 12 months of deployment of the AI solutions. And they're basing that on quantifiable gains, with 97% of surveyed people saying they've seen improved productivity in their companies, and 94% saying they saw enhanced profitability because of using AI.

(14:19):
And these are two very different things, right? So on one hand, profitability may mean you're getting efficiencies, meaning you're cutting costs on the bottom line. But a lot of them mentioned that they're actually able to grow sales and grow the top line while using AI strategically, which I talk about a lot when I do workshops for C-suite and leadership teams, talking about the two potential benefits of

(14:39):
AI: not just cost cutting and efficiencies, but also being able to address new kinds of markets with new products that you couldn't offer profitably before AI existed and now you can, which opens many, many opportunities for more or less every company on the planet. Now, when it comes to what's slowing companies down, 82% of companies flag poor data as the biggest AI obstacle.

(15:00):
This is a jump from 56% just last quarter. I think this is a realization for companies: as they start deploying actual solutions, versus testing them at small scale, the data gap becomes more and more obvious. And the second most mentioned issue is cybersecurity worries, which hit 78% of the people surveyed. And that is very, very clear, because now you're exposing your

(15:21):
data to a lot more connections and so on, which leads to a lot more loopholes in your data security. Now, two very interesting pieces of information came out of the survey. 78% of leaders admit that old-school KPIs miss the full potential of the AI transformation, meaning the way we're measuring return right now may not fully

(15:42):
cover everything that people are seeing. A lot of it is because the small day-to-day stuff that people are gaining as benefits, by using models to answer emails, summarize meetings, and so on, is very hard to capture. And it is very, very real in the value that it provides the organization and the economy as a whole. And the other really interesting piece of information is that the resistance of employees to AI implementation has plummeted

(16:06):
from 47% to 21%.
And Steve Chase, the US Vice Chair and Global Head of AI and Digital Innovation at KPMG, said, and I'm quoting: agents are taking on repeatable, measurable work where time and cost savings show up directly in the metrics organizations track today. That clarity is why leaders feel so confident about achieving ROI

(16:30):
in the next 12 months. The results are visible, tangible, and compounding quickly. Now, what is that leading to? Well, it's leading to very predictable results: 56% of the people surveyed are planning entry-level hiring

(16:52):
tweaks within this coming year.
We're already seeing that, right? We've talked about this on this podcast multiple times. Why entry-level jobs? Well, because these are the areas that are the easiest for AI to do consistently. Think about what we talked about before: the multiple tasks that OpenAI reviewed in the research, they compared them to people with 14 years of experience, not to entry-level people.

(17:13):
If the research had been against entry-level people, the answer, instead of AI being able to do it about 50% of the time, would probably be that AI can do it about 85% of the time, which, by the way, might be the case next year for everyone, or maybe in two years for everyone, but this is where it's going. So it is definitely already impacting entry-level hiring, and it will probably impact all hiring as AI evolves and gets

(17:35):
better.
The other thing the survey shows is that 82% of leaders foresee their industry's landscape radically altered within the next 24 months. And 93% of people are crediting gen AI for making these differences and providing a competitive edge to their companies. So think about what that means. These are people from more or less every industry, and 82% of

(17:55):
them are thinking that the industry landscape, what we know as the big companies, small companies, successes and failures and so on, is going to change within the next two years. Now we got another confirmation, or data point, for the disruptive nature and the transformative nature of AI on the future global economy. So Virginie Maisonneuve, whose name I'm probably butchering and

(18:19):
I apologize for that.
the Chief Investment Officer for Global Equities at Allianz Global Investors, forecast that 15 to 20% of currently listed companies in the world could vanish within the next five years due to failure to adapt to AI. They're calling it digital Darwinism. So basically they're saying that of the companies that will not effectively adapt to AI, 15 to 20%, one in every five or six

(18:43):
publicly traded, successful companies, will disappear within the next five years. She mentioned this as part of Bloomberg's Investment Management Summit in London. Now, she said that the AI-driven equity surge is not a bubble but, and I'm quoting, a "fragile boom" underpinned by fundamentals.

(19:04):
Basically, what she's saying is that the underlying fundamentals are real. Companies are actually growing, they're selling more stuff, they're generating crazy revenue, they're growing really, really fast, and so that's not what you see in a regular bubble. I would say that the fact that she added the word "fragile" at the beginning tells you that she still has significant worries about the volume and the speed at which the investments are going.

(19:27):
More on that shortly, in the next segment of this episode. But for now, let's continue with agents. So Notion just released Notion 3.0, which is unleashing AI agents across everything Notion. And the AI agents, and I'm quoting, Notion is saying, can do everything a human can do in Notion, which is autonomously building pages and databases, searching across workspaces,

(19:48):
Slack, and the internet, and executing plans for up to 20 minutes across hundreds of pages all at once. So these Notion agents are basically connected to everything Notion. They remember user preferences, context, and content preferences; they know where everything is stored; and they can apply everything they know about the user, as well as everything they know about the data, in order to do tasks that humans can do.

(20:12):
Inside of Notion, what does that mean? It means that you can run your Notion operations significantly faster with these agents, because the stuff that humans used to take multiple minutes or sometimes hours to do, the agents can do in seconds or minutes, basically cutting the effort to achieve specific goals by probably 10x. Or, the way Notion defines it:

(20:33):
This is an instant teammate that is available, that is highly effective, and that can do things for you within the Notion environment, but also beyond the Notion environment, because it integrates with Slack and the internet and can collect information beyond the standard default Notion dataset. Another company that made a big splash when it comes to agents this week is Citigroup. So Citigroup is starting a test with 5,000 Citigroup employees

(20:55):
that will test what they call the Citi Stylus Workspaces agentic features, which is their internal system, a homegrown agent environment that can do a lot of things across the Citi universe. So what can these agents do? Well, they can autonomously handle multi-stage tasks like client research across datasets, profile building, documentation, translation, all from a single prompt.

(21:16):
So the goal is to basically give a sidekick, or a partner, to every employee out of those 5,000, or multiple of these agentic sub-employees under every employee, to be able to perform tasks that they otherwise would have done manually and that the AI is now going to do for them. Behind the scenes, Stylus integrates both Google's Gemini and Anthropic's Claude for a flexible solution that will

(21:37):
allow it to maximize the benefits of each and every one of these models, and deliver nuanced results across all the different tasks that it needs to deliver. Now, Citigroup has put very strict caps on how much compute these agents can actually use. But that being said, their Chief Technology Officer, David Griffiths, says that as model pricing declines rapidly, traditional return-on-investment

(21:58):
models may become outdated quickly, making long-term planning difficult. So there are two lines here. One is the amount of compute you need to run more and more agents, which is a line that's going up and up and up. But on the other hand, the cost of compute is going down, down, down. And what the combined trajectory of these lines looks like moving forward, when you try to put them together, is very, very hard to predict.

(22:19):
Making it very hard for companies to figure out where this is going. But doing a large-scale test like this one, with 5,000 employees, with multiple agents for each one that can do multiple jobs, is a great way to figure it out.
Now, what they're saying is that they're putting this experiment under a microscope, and that they're going to track the behavior, the output boosts, and the cost-value ratios to decide

(22:40):
on how and if they're going to scale this, or what aspects of it they're going to scale, and so on. But they also said that the level of reliability that they're seeing from these agents has gotten a significant boost in the past six months, which has now enabled them to drive more and more automation and autonomous behavior from these agents.
Now, per Griffiths, this level of automation promises, and I'm

(23:00):
quoting, "a massive boost of capacity." But he also said it's too early to predict the effect on jobs. And I would say it is very easy to predict the effect on jobs, because they have two options and two options only, or a combination of both. Option number one is to grow faster than the efficiencies that you're getting, right?

(23:21):
So if you're getting a 30% efficiency gain, and you can use that extra employee capacity to grow by 40%, then it makes sense. You provide value back to your shareholders, your stock price goes up, everybody's happy. However, if you cannot grow by 40% and you have a 30% savings, then some people will have to go. Now, this is just looking at efficiencies. If you look at the broader picture, there needs to be a

(23:43):
return on the cost of setting these systems in place and of running these systems. So you need to look at the ROI beyond just the time saved by specific individuals. And being a publicly traded company that is scrutinized for its profitability, they will have to show positive ROI, which means that if they have to cut staff in order to justify the investments in capacity for the AI, they will do that.
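The grow-or-cut arithmetic above can be sketched with a tiny back-of-the-envelope model. The function and numbers here are entirely hypothetical, not anything Citigroup or KPMG published; it just makes the 30%-efficiency, 40%-growth example concrete:

```python
# Hypothetical back-of-the-envelope model: if AI handles a fraction of
# current work ("efficiency"), a company either grows its workload
# enough to absorb the freed capacity, or cuts headcount to realize
# the savings.

def headcount_change(employees: int, efficiency: float, growth: float) -> int:
    """Change in headcount needed to match workload to capacity.

    efficiency: fraction of current work now handled by AI (e.g. 0.30)
    growth:     fractional growth in total workload (e.g. 0.40)
    Positive result means hiring; negative means cuts.
    """
    # Workload after growth, in "pre-AI employee" units, times the
    # fraction of each unit still requiring a human (1 - efficiency).
    needed = employees * (1 + growth) * (1 - efficiency)
    return round(needed) - employees

# Grow 40% with a 30% efficiency gain: roughly break-even on headcount.
print(headcount_change(1000, 0.30, 0.40))  # -20

# Same 30% efficiency gain with no growth: cuts of roughly 30%.
print(headcount_change(1000, 0.30, 0.00))  # -300
```

In this toy model, a 30% efficiency gain is absorbed almost exactly by 40% growth, which is the episode's point: without that growth, the same gain shows up as headcount reduction.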

(24:05):
Now, put that in the perspective of what we shared before, the digital Darwinism from Allianz, and you understand that each and every one of these large companies is potentially facing extinction. If you follow the Darwinism concept, then they will invest everything they can into AI and will cut everything they need in order to be able to afford that,

(24:27):
just to reduce the risk of being eliminated, or of becoming less relevant and less competitive, in the next few years. Anybody that says otherwise, like in this very gentle "it's too early to predict the effects on jobs": no, it is not too early. The effect is very obvious. It is already happening, and it's going to be amplified dramatically, even just based on the results that we have today and that we learned about from the research that we shared with

(24:48):
you earlier in this episode. Staying on agentic news, Perplexity just released an email assistant that can handle many different tasks in your inbox, such as reviewing and checking emails, drafting responses, setting up meetings, and so on. It is currently only available on their Max tier, so to people who are paying them $200 a month. But just like many other previous releases from

(25:09):
Perplexity and from other companies, this might be just a first step before a broader deployment to everybody that uses Perplexity. As part of the launch of this product, they are currently integrated with Gmail and Outlook, which obviously covers the vast majority of email users in the world, but they're planning to connect it to more email platforms and other systems as well, in order to provide a more comprehensive

(25:30):
solution for our day-to-day workspace. So again, more agents in more places, connecting to more and more of the day-to-day stuff that we're doing, coming from every single direction, from multiple companies, integrating with the systems that we use the most today, such as our Outlook and Gmail accounts, Notion, et cetera. But the transformation does not stop there.

(25:52):
So these are white-collar jobs, but blue-collar jobs are gonna get hit as well. And we've talked about this in the past: robotics is coming, and it's coming fast, just a few years behind. So, in a very good example of where this is going, there's a new collaboration between the American Bureau of Shipping, known as ABS, and Persona AI, which is a robotics company.

(26:13):
They just signed a long-term partnership in order to provide robots for shipyard operations. Now, over there, it's actually probably not going to replace employees, or not exactly: shipyards currently face attrition of 22 to 25% among average workers, and 30%, sometimes 50 to 60%, attrition among first-year employees.

(26:34):
So they are shorthanded all the time. A lot of people are leaving, because it's not an easy job to do and people are looking for other things. And so with this new robotic approach, the idea is to replace the employees that they're lacking, instead of the employees that they have right now. And their initial focus for these robots is data collection for ship construction and classification.

(26:56):
This means that ABS standards on how to do data collection, data analysis, and so on are going to change in order to make them a better fit for these robots. So the way the shipyard work is being handled is going to change dramatically. So it's not just, oh, let's look for efficiencies; it's let's find different, more innovative ways to do this, because now we're gonna have scale that was not available before.

(27:18):
I shared with you, a few months back, that a big infrastructure project was happening in my community, and there were dozens of people walking around the streets with metal detectors and spray cans, spraying where different lines are running. I'm like, there is no way robots are not gonna take this over, and they probably can take it over right now: just collecting the data of where the different lines are passing, so when you're digging, you're not running into them. This is just one very simple example that I saw with my own

(27:40):
eyes, but these kinds of things are gonna get replaced by robots probably faster than we can even think. Now, if you think about a shipyard environment, and I obviously never worked at a shipyard, but I can definitely imagine what it looks like: it's a hectic and dynamic environment that requires adaptability, and humanoid robots with quote-unquote thinking brains are the perfect solution in order to be

(28:02):
able to fit into the existing operation without making significant infrastructure changes. Now, speaking of robots and changes that are coming, Chinese startup AheadForm has unveiled a humanoid robot head that mimics human emotions, and it's really weird to look at. So they released two different videos of their robots. One looks like a human, the other one looks like an elf. They call it the Elf series.

(28:22):
So it makes sense. It just has incredibly realistic facial expressions, and it's blinking and moving its head and lips and everything in a way that is nothing but weird and scary to me. But what they're saying is even weirder and scarier. They are claiming that, yes, they can't anticipate exactly the full trajectory of the development of this and the

(28:44):
deployment of this, but they anticipate that within no more than 20 years, robots will be able to look exactly like people, and we won't be able to tell the difference. This is the mother of all science fiction, right? If you think about a lot of the movies that we've seen where we can't differentiate between the robots and the humans, they're claiming this is coming within less than 20 years.
Now, if we think about any othertechnological prediction that

(29:06):
happened in the past and always fell short, that might be 15 years, or 10 years, until we're gonna have robots that are indistinguishable from actual humans. I don't like this thought at all, but the technologies are being developed right now, and with the right business model, this is where it is going to go. So now, after talking about agents and robots and how they're gonna impact the workforce, let's talk about why

(29:27):
I think this is not stopping, and about the crazy investments that are happening right now in the infrastructure that is supposed to drive all of this. So OpenAI just announced that as part of their Stargate initiative, they are going to be building seven gigawatts of new data center capacity in the near future. So they're building five new sites, plus an expansion of

(29:50):
their Abilene, Texas, and CoreWeave projects. And they're doing it way ahead of their initial schedule. So we shared with you their crazy partnership and deal with Oracle for $300 billion worth of extra capacity, 4.5 gigawatts of extra capacity to be specific, across several different locations. And as I mentioned, part of it is just the expansion of their

(30:12):
existing cloud setup in Abilene, Texas, which could be housing 400,000 GPUs by mid-2026, so less than a year from today. Now, a quick recap on what Stargate is and how it was announced. It was launched in January at the White House with President Trump, and it's a partnership between Oracle, SoftBank and

(30:32):
OpenAI, in order to create significantly more compute for OpenAI to keep them competitive, presumably against China. So if you remember, Ted Cruz said we should have a very light-touch regulatory approach because we are in a race with China. It basically tells you this current administration will do everything required in order to keep the US companies ahead, which means allowing them to build, and to consume a huge amount of

(30:54):
power and generate significantly more environmental impact than probably any project in human history. Now we have a new deal from this week, where Nvidia is going to invest a hundred billion dollars in OpenAI via a phased approach that will generate 10 gigawatts of GPU-powered data centers owned by OpenAI, right?

(31:16):
So the goal here is that Nvidia is going to provide the GPUs to data centers that OpenAI is going to own, versus the compute that they rent right now from Microsoft and Oracle and so on. So how is this going to work? Nvidia is going to fund 4.5 million new GPUs for 10 gigawatts of data centers, starting in the second half of 2026.

(31:38):
Just to put things in perspective: the deal with Oracle is for 4.5 gigawatts, and here they're talking about 10, which more than doubles the amount of compute just from this crazy large Oracle infrastructure that they are planning. Now, there are a lot of discussions about the bubble and how crazy these investments are and so on.
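A quick back-of-the-envelope check on those headline numbers. The figures are the ones quoted in this episode; the per-GPU wattage that falls out of them is an implied estimate that includes cooling and networking overhead, not an official spec:

```python
# Implied power budget per GPU from the figures quoted in the episode:
# ~4.5 million GPUs across ~10 gigawatts of planned data centers.
total_power_watts = 10e9  # 10 GW
gpu_count = 4.5e6         # 4.5 million GPUs

watts_per_gpu = total_power_watts / gpu_count
print(round(watts_per_gpu))  # 2222 W per GPU, facility overhead included
```

A bit over two kilowatts per accelerator is at least in the right ballpark for modern rack-scale GPU systems once facility overhead is counted, so the two numbers are broadly consistent with each other.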
There are a lot of people that are saying: oh, so the way this is going to work is that Nvidia is going to invest a hundred

(32:00):
billion in OpenAI, so they have a hundred billion to actually pay Oracle, out of the $300 billion that they need to pay Oracle and don't have. And this way Oracle will be able to make a lot more money, which it will be able to invest in Nvidia, creating a circular generation of money out of thin air with billions and billions of dollars. The reality is, I don't think this is completely accurate, and I think that for several different reasons.

(32:22):
One, this is presumably on top of the previous investment, so this hundred billion dollars in new Nvidia chips is supposed to be new processing capability, not part of the previous project. And the other is, it was made very, very clear by Nvidia that these are gonna be owned by OpenAI, and not by Oracle or somebody else. So while I understand the concerns, and while I understand

(32:45):
the fears of the crazy bubble that is potentially shaping up in front of us, I don't think this is exactly the right place to say: oh, here's a smoking gun of what's actually happening. As another data point on how crazy the need for AI compute is right now, take AI chipmaker Cerebras, which makes some of the world's fastest inference chips.

(33:05):
So, putting things in perspective: the GPUs that Nvidia is making, which have grown them into the most valuable company in the world, are used right now both for training AI models and for inference. Inference is when the AI is generating outputs. And one of the fastest inference chips right now is made by

(33:27):
another company called Groq, Groq with a Q, not Grok with a K, to be separated from xAI's Grok large language model. So Cerebras is currently raising $1 billion at an $8 billion valuation, in a move that shows you very, very clearly what the demand is for what they're doing. There's definitely a future in which GPUs are gonna be used

(33:48):
more for training new models and less for inference, because these inference chips are significantly cheaper and run significantly faster, providing higher efficiency at inference time. And if you think about it, inference is gonna be the core of compute in the future, as we'll need to train fewer models, because the existing models will just be good enough, not necessarily the ones existing today,

(34:09):
but the ones that will exist in the future are gonna be good enough for most things. And then just using them is gonna be the need. And for that, there are better chips than the Nvidia GPUs, at least right now. Now, in addition to this crazy investment by OpenAI in new compute to deliver everything they need to deliver, whether it's training models, running inference, or building devices and so on,

(34:30):
OpenAI has also committed to investing a hundred billion dollars in backup servers. So this is stacked on top of the $350 billion of compute rental that they have committed to between now and the end of 2030, combined with the additional hundred billion from Nvidia. You kind of get the point: these are numbers we've never, ever, ever seen in any industry, in any infrastructure of anything ever before, but this

(34:52):
is what OpenAI is committed to as of right now. But they are not alone. Microsoft just announced that they are converting the Foxconn "eighth wonder of the world" building in Mount Pleasant, Wisconsin into a powerhouse AI data center campus. Putting things in perspective: they're taking an existing building that was built to do

(35:13):
something else and failed, and turning it into a high-capacity data center with hundreds of thousands of Nvidia GB200 GPUs and fiber connections across everything to make it extremely fast. They're claiming, by the way, that the amount of fiber optics there is 180,000 kilometers,

(35:33):
which means it can go around Earth four and a half times.
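That circumference claim is easy to verify against Earth's equatorial circumference of roughly 40,075 km:

```python
# Checking the "around Earth four and a half times" claim
# for 180,000 km of fiber optics.
fiber_km = 180_000
earth_circumference_km = 40_075  # equatorial circumference

laps = fiber_km / earth_circumference_km
print(round(laps, 1))  # 4.5
```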
Now, the good news is that the cooling for this facility is an all-closed-system liquid cooling, which is presumably cooling 90% of the space without wasting any new water, which is good news from an environmental perspective. However, the energy this will consume comes from a 250-megawatt

(35:55):
solar farm, matched by fossil fuels and other renewables. To put things in perspective, a few green organizations from Wisconsin are saying this equals the power usage of over 4.3 million homes. And to put that into perspective, Wisconsin as a whole has 2.8 million homes.

(36:16):
So this one facility will require more electrical power than one and a half times all the homes in the state combined. Now, here's another data point for you on how much investment is going into this AI boom. A new report that was published on CNBC talks about the seven elite private tech companies that have completely skyrocketed

(36:40):
in the past two years to a combined value of $1.3 trillion. These are not publicly traded companies, these are private companies. So obviously the biggest one is OpenAI, with a valuation somewhere between $350 billion and $500 billion; then Elon Musk's xAI, with a $200 billion valuation, which by the way just raised another $10 billion shortly after the

(37:00):
previous $10 billion raise earlier this year, and the valuation went up from $150 billion to $200 billion in just a few months; then Anthropic, with a $178 billion valuation, which is kind of weird when you think about them being behind xAI when it comes to valuation. But Elon is Elon, and he can do stuff that a lot of other people cannot do when it comes to valuation of companies.

(37:21):
Databricks after that, with a hundred billion dollars in valuation, and three additional non-AI companies: SpaceX, Stripe, and Anduril.
Now, to explain how much investment these companies have racked up: AI startups have vacuumed up $65 billion across 19 companies just this year. That is 77% of all private capital.

(37:44):
I have to say this again: 19 companies have gotten $65 billion, which is 77% of all private capital this year. This tells you the crazy amount of concentration of investment that's going into the more promising AI companies out there.
That being said, the CEO of Forge Global, Kelly
(38:04):
Rodrigues, says: we've not seen this in any private market ever, companies that are growing a hundred, 200, 300% on numbers that are already pretty big. And that doesn't only mean the valuation, it means the revenue as well. Let's just think about Anthropic. Anthropic went from $1 billion in revenue at the end of last year to $5 billion right now, not even a year later.

(38:26):
But with all that money, most of it going to infrastructure, Deutsche Bank has sounded the alarm about the really scary underlying economic problems this is hiding. Their head of research, George Saravelos, has mentioned that AI capex has outpaced all consumer spending to drive US GDP growth this year, but in order for the tech cycle to

(38:50):
continue contributing to GDP growth, capital investment needs to remain parabolic, and this is highly unlikely. Basically, what he's saying is that if you take the crazy investments that happened in AI in the US in 2025 and remove them from the current equation, the US economy would've been in a recession.

(39:12):
So while we're seeing the economy growing and we see GDP growing, what he is basically saying is that it is fueled by AI and AI infrastructure, and the rest of the economy is actually shrinking right now, which is definitely not good news, unless the output of the AI generates significant actual growth in other industries that will then compensate for that

(39:33):
really big jump in investment, specifically in AI infrastructure. In a Bain and Company report that was released this past week, they project that AI needs to generate $2 trillion in annual revenue by 2030 just to fund the 200 gigawatts of global compute that the industry is committing to right now. And even with all the savings that they're seeing right now,

(39:55):
they see an $800 billion shortfall in the equation as of right now. Do I agree or disagree? It's very hard for me to say. These people are way more knowledgeable than me when it comes to doing these kinds of calculations. I would just say that I think AI will drive such an incredible shift in everything we know about most industries and most markets that

(40:15):
we can't project how it will actually impact the economy, because we're walking into uncharted territory. And it's so far from charted territory that it is very, very hard to project, even for people who are highly experienced in doing these kinds of projections. There are also optimistic groups, like Goldman Sachs, that think AI will eventually create significant productivity gains across multiple markets.

(40:37):
But there are also more negative views, saying that the S&P 500 right now, as an example, has a huge exposure to AI, and if the AI boom collapses, anybody who's invested in the S&P 500 is gonna take a very serious hit. So you hear voices on both sides of that equation. I think both of them are guessing and speculating based on a lot of unknowns, and we'll just have to play this out and see how it works.

(40:58):
So that's it for the deep dives. What we've learned from them is that there is the greatest investment in history in AI infrastructure happening right now, more than any investment in anything else in the past, at a really, really high speed and really high scale, fueled by a global race to get first to AGI, ASI and beyond, and that the companies that are behind it

(41:21):
are investing amounts of money that sound completely legendary. However, each and every one of these things is leading to results that potentially put at risk every single job, or at least many, many jobs, across the known economy, without anybody knowing what other jobs, if any, will emerge to replace them. So overall, this doesn't look too promising for human jobs in

(41:43):
at least the near future, until we figure out what other jobs we can do in the AI era. Now let's switch to rapid-fire items; there's a lot of stuff to cover. The first segment in the rapid fire is gonna be about new releases and new capabilities in existing models. The most interesting one this week is that xAI announced Grok 4 Fast. Grok 4 Fast is one of those mini models, like we've

(42:04):
seen before with o3-mini and so on, that are almost as good as the full model. So Grok 4 Fast performs almost as well as Grok 4 across multiple benchmarks, but it comes at a 98% price cut. So you're gonna pay 2% to get almost the exact same outcome. And in addition, this new model comes with a massive 2 million

(42:26):
token context window, which is at the top of what is being offered right now, at the high end of the scale, similar to Gemini 2.5 Pro and way above ChatGPT, as an example. It is also very good at multimodal capabilities, and it has improved dramatically at code generation and basically everything else you want to use AI for. And in addition, based on third-party analysis,

(42:47):
it is currently by far the front-runner when it comes to cost-efficient intelligence. If you want solid intelligence at a very low price, Grok 4 Fast is your number one option, and I'm sure a lot of people that are developing against APIs are gonna switch to it, at least until the next thing comes along. So right now you can get top-of-the-line capability for 20

(43:09):
cents per 1 million input tokens and 50 cents per 1 million output tokens, which is way more competitive than any other platform out there right now.
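To put those per-token prices in concrete terms, here is a minimal cost sketch using the rates quoted in the episode. Treat the prices as a snapshot, since API pricing changes often, and the workload numbers are made up for illustration:

```python
# Rough monthly cost at the per-million-token prices quoted in the episode.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.50  # USD per 1M output tokens

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for a given number of input/output tokens."""
    return ((input_tokens / 1e6) * INPUT_PRICE_PER_M
            + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M)

# Hypothetical workload: 500M input tokens and 100M output tokens per month.
print(f"${monthly_cost(500_000_000, 100_000_000):.2f}")  # $150.00
```

At these rates, even a fairly heavy API workload stays in the low hundreds of dollars a month, which is what makes the "98% price cut" framing meaningful for developers.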
Now, speaking of compute and investments, and how much more of it will become available to enable running models more effectively, that leads us to the next announcement.

(43:29):
Sam Altman, like he likes to do, dropped a cryptic X post this week that talks about a new compute-intensive offering. I will read the exact quote from there: "Over the next few weeks, we are launching some new compute-intensive offerings. Because of the associated costs, some features will initially only be available to Pro subscribers, and some new products will have additional

(43:50):
fees. Our intention remains to drive the cost of intelligence down as aggressively as we can and make our services widely available, and we are confident we will get there over time. But we also want to learn what's possible when we throw a lot of compute, at today's model costs, at interesting new ideas."

(44:11):
What does that mean? Well, nobody knows. There are a lot of speculations, which leads us to the next topic: several different people have been reporting that they're seeing a new selection of models in the OpenAI dropdown menu and in the API, called alpha models. Under these alpha models, two separate agent models appeared. One is called "agent with truncation" and the other one is

(44:33):
called "agent with prompt expansion". These are not very good names, but (a) there's a very good chance this was not supposed to be exposed to the public, and (b) OpenAI has not been very good at making up names in general, so it would surprise me if these end up being the final names of these new features. When using these features, what was activated is agent mode, but in a very specific setup that will still allow it to run a

(44:55):
browser and use tools and so on, but with more capabilities than it had before. Now, shortly after these reports came out, these models were rolled back and were not available anymore. But is this what Sam Altman was referring to? New agent capabilities running within the ChatGPT agent mode? Nobody knows, but it feels like this is the direction

(45:16):
they're going, which is deploying more agent capabilities that, by definition, will require a lot more compute. But as Sam mentioned, we will know in the next few weeks, and I promise to keep you posted as things happen. But there is a feature that OpenAI did roll out this week, a feature that Sam Altman introduced with, and I'm quoting: "Today we are launching my favorite feature of ChatGPT

(45:36):
so far, called Pulse." This also points to what I believe is the future of ChatGPT: a shift from being reactive to being significantly proactive and extremely personalized. So what is this feature, and how can you get it? First of all, it has been deployed, first and foremost, to the Pro subscribers, the $200-a-month people, and only on the

(45:56):
mobile application. It is generating between five and ten personalized cards as you open ChatGPT every single day, and these include an image and a little bit of text. When you click on one, it takes you to a more detailed view of the same topic. And these are gonna be personalized topics that you care about. This could be soccer updates, or travel plans for a specific

(46:16):
destination, or a diet-friendly menu, and so on and so forth. And you click on those and it will take you to that specific information. The other thing is, it is capped by the number of these cards that you're going to see. At the end of it, there's a statement saying: great, that's it for today. And the idea is intentionally to move away from the endless-scroll time waste of social media.

(46:37):
So basically, it is meant to give you high-value, organized and highly personalized information that you need to start your day. This could be personal, this could be business related. Now, this feature integrates with their existing connectors, such as Gmail and calendar parsing, which means it can surface emails and agendas and so on, and it is connected to ChatGPT's long-term memory for context and understanding of

(47:01):
what you're working on, what you like, and so on, which will become more and more personalized over time. In a demo provided by Christina Wadsworth Kaplan, she showed how Pulse auto-added London running routes for her based on her jogging history, and also offered pescatarian tweaks to her dinner reservations in order to help her find the right

(47:21):
restaurants. OpenAI's new CEO of Applications, Fiji Simo, blogged: "We're building AI that lets us take the level of support that only the wealthiest have been able to afford and make it available to everyone over time. ChatGPT Pulse is the first step in that direction, starting with Pro users today," with a goal of rolling out this

(47:41):
intelligence to all. But I wanna go back to Sam's tweet. I'm gonna read the second half of it again, and then I'm gonna refer to it: "This also points to what I believe is the future of ChatGPT, a shift from being all reactive to being significantly proactive and extremely personalized." What does that mean? It means we get a glimpse into the future:

(48:02):
an AI that is not a chat, but that is actually proactive and participates in our day-to-day activities across the entire day, from personal to business things, fully connected to the entire universe of our knowledge, again, both personal and work. Now, if you combine this with wearable devices, such as glasses

(48:25):
or whatever it is that OpenAI is developing, and that means we're gonna have devices that are always aware, always collecting information about what we do. And you understand that these tools will become extremely powerful. They will know everything about us. They can become the most incredible support sidekick that we could ever dream about.

(48:46):
They can help promote healthy habits: don't eat this, or eat that, or go work out more frequently, or get up and walk a little bit, or connect with friends, and so on. But they can also do better time management than we're doing right now: allow us to focus when we need to focus, allow us to plan things that we currently find hard or don't have time to plan as deeply as we want to, and

(49:07):
assist with personalization and prioritization of things, and so on and so forth, across basically everything that we're doing. This is really exciting and really, really scary at the same time. A, because of the complete invasion of privacy, of technology into everything in our lives, which is just gonna be an extension of what we know today, from, as an example, Google knowing everything about me

(49:28):
because I have all my data in the Google universe and I use an Android phone. So it's just gonna be a very significant expansion of that. But in addition, it is scary because I feel that we will become completely dependent on this technology. If this technology will be assisting us with making every decision along the day, how will we know how to make our own decisions without it?

(49:48):
And I think we will very quickly become addicted to this. While OpenAI is saying this is gonna be different from social media because you won't be able to scroll forever, I think the negative impact might be much more significant than social media, because it will be integrated into literally everything that we're doing. Now, from OpenAI to Anthropic, in a topic that's actually related to OpenAI: Anthropic has gotten a lot of bad press in the last

(50:11):
month or so for things not working well with Anthropic's code generation capabilities, which have been the most advanced and most capable code generation tools so far. They have been the default for most developers around the world, and they have fueled the crazy growth of Anthropic from $1 billion in revenue to $5 billion in revenue in less than a year.

(50:31):
Well, there's been a lot of speculation around what's actually happening with the Claude models. A lot of people were speculating that Anthropic is throttling down the models in order to save money, and so on and so forth. Well, Anthropic just came out with a report sharing that they found three significant bugs that together led to Claude's responses being

(50:53):
suboptimal. I'm not gonna dive into the exact technicalities of these bugs, but the reality is that the three bugs combined could have affected up to 30% of Claude Code users from the end of August through the beginning of September, when they started deploying fixes, which took them about two weeks. So through mid-September, they were able to roll out the entire

(51:15):
solution for all these different issues, which presumably is supposed to solve the problem. They swore they're not throttling the models and that they're providing the best capabilities they can to everybody, all the time. The reality is that the big winner out of this is OpenAI, and the reason is that as people were getting very unhappy with the degraded results from Claude, GPT-5 has been shining

(51:37):
as the most capable code generation tool right now. And I assume, and again, there are no statistics that I found, but I assume that a lot of people have jumped ship from running Anthropic behind the scenes in their development processes to running GPT-5. Which leads me to one of the most interesting new phenomena of the AI era, which is how easy it is to switch from one

(51:58):
platform to the other. So think historically: if there was a critical piece of infrastructure in your company that you were not happy with and you wanted to switch, let's say from SAP to Oracle or the other way around, it would have required months of planning, more months of execution, and millions of dollars in investment to switch from one core infrastructure to the other. Right now, switching from Claude to ChatGPT to Grok and vice

(52:22):
versa as the underlying model in your tech stack takes seconds and has practically no cost. In your IDE, as an example, your code generation environment, you can go into the backend, click the dropdown menu, select a different model, and be up and running. That's it. No cost, no investment, no infrastructure change.
And this reality has to be very scary for the companies
(52:45):
that create this infrastructure, because until they have very significant integrations into companies' systems, they can be replaced via a simple dropdown menu, at no cost to the company. So there's significantly less stickiness than we are used to in the tech industry.
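To make that low switching cost concrete: many vendors expose OpenAI-compatible chat endpoints, so the "infrastructure change" often reduces to a base URL and a model string. This is a sketch only; the URLs and model names below are illustrative assumptions, so check each provider's documentation before relying on them:

```python
# Sketch: the "switch" between providers is just a base URL plus a model
# string, because many vendors expose OpenAI-compatible chat endpoints.
# URLs and model names are illustrative assumptions, not verified values.
import json
import urllib.request

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-5"},
    "xai":    {"base_url": "https://api.x.ai/v1", "model": "grok-4-fast"},
}

def build_chat_request(provider: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-style chat completion request."""
    cfg = PROVIDERS[provider]
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url=f"{cfg['base_url']}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Swapping vendors is a one-word change in the first argument:
req = build_chat_request("xai", "YOUR_KEY", "Hello")
print(req.full_url)
```

Changing `"xai"` to `"openai"` is the whole migration, which is exactly the dropdown-menu experience the episode describes and why there is so little vendor stickiness at this layer.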
Another big interesting release this week comes from ByteDance,

(53:06):
from China. They just released Seedream 4, their latest image generation model, which is supposed to be as good as Gemini's Nano Banana and to compete with it directly. That's why they released this model. It is very good at generating images, it is very good at generating text, it is very good at editing images, and it runs 10x

(53:26):
faster than their previous model, so a lot to look out for. I haven't tested it myself, but it comes at a very attractive price point of $30 for a thousand generations, at a very high speed and at a higher resolution than Gemini's Nano Banana. It is currently ranking second on the LMArena text-to-image

(53:47):
ranking, but a very, very close second. So Nano Banana has 1,152 points and Seedream 4 has 1,151 points. This is a fraction of a percent difference between the models, and it is cheaper to run Seedream 4. Now, if you're just generating images in your regular chat, it

(54:07):
doesn't really matter, but if you want to use it through the API, then it matters a lot, because it's gonna be significantly cheaper to generate very similar results. It is also ranking second, but with a bigger spread, on image editing capabilities, with Gemini Nano Banana at 1,337 points and Seedream at 1,313 points. Again, still a relatively small spread, but not as close as in the

(54:28):
other one. And they are both far ahead of all the other models behind them. Another interesting feature that was released this week comes from Meta. Meta just launched Vibes on September 25th, a dynamic feed of AI-generated short videos that invites users to browse, create, and remix content, music and styles to

(54:48):
generate new content out of that. So you can either create videos from scratch or remix existing content from all the other platforms and reshare it back to those platforms. So if you see a Meta AI video on Instagram, as an example, you can tap to bring it into the Meta AI app, remix it, add music, change

(55:08):
the style, and repost it back to Instagram, Facebook, et cetera. So this is a whole new interactive environment for generating, editing and remixing existing AI-generated videos. Users generated over 10,000 videos in just the first 24 hours of this feature. Do I think it's gonna take over our feed?

(55:30):
Probably not. Do I think it's gonna play a role in the future feed? A hundred percent. Because you'll be able to create really cool videos that were just not possible before, unless you were Disney and willing to invest millions of dollars in generating something cool; now anybody can generate them. And so I have zero doubt that they will be a part of every feed that we see in the future. But I do think people still wanna know what other people are

(55:52):
doing, what people are achieving, and so on and so forth. So I don't know how big this will turn out to be. Time will tell. And let's switch gears from talking about features and releases to some really big strategic partnerships that were rolled out this week. The first one is Luxshare, a key supplier to Apple, which saw its shares surge 10% after reports of a

(56:15):
partnership with OpenAI on their next AI-driven device. So this Chinese manufacturer, which has been assembling the Apple AirPods and the Vision Pro, is working on a prototype device together with OpenAI, which is leading to a lot of speculation. Whether that's the device they're developing with Jony Ive or not is not completely clear. But what is clear is that OpenAI is going all in on devices.

(56:37):
The Information shared this week that, in addition to the crazy investment in Jony Ive and his team (as a quick reminder, that was $6.5 billion earlier this year), OpenAI has been aggressively recruiting hardware engineers and designers directly from Apple. So while there were 10 former Apple engineers at OpenAI in 2024, they've recruited over two dozen in

(57:00):
2025. And in addition to Luxshare, OpenAI has approached other Apple suppliers for potential components, such as components for a speaker module in one of the future devices. The latest rumor is that the first OpenAI device, targeting a release in late 2026 or early 2027, is going to be a

(57:22):
speaker without a screen. But either way, it puts a lot of pressure on Apple, not just because OpenAI has been stealing some of their top engineers, but because of what it means for Apple. Apple is a device company. They're a hardware company. They've built an incredible brand that millions and millions of people in the world depend on, but they've been

(57:42):
failing in the AI race time and time again in the last few years. And with the soaring popularity of ChatGPT as the tool that everybody knows as AI, and with the growing popularity of wearable devices, definitely the glasses from Meta. But again, once OpenAI comes up with a ChatGPT-based device, some number of millions of people are gonna buy it.

(58:04):
And eventually these will start taking market share from iPhones and other Apple devices. And I'm certain Apple is aware of it. The window of opportunity is closing, because if OpenAI really releases this product within about a year from now, it puts very, very serious pressure on Apple to do something, to do something successful, and to do it fast, which they were not able to do in the

(58:26):
past two years. Another huge partnership this week, one that actually connects to an earlier topic we discussed in this episode: OpenAI and Databricks have forged a multi-year, hundred-million-dollar deal to enable a next-gen development platform based on OpenAI tools inside the Databricks environment. So the goal of the partnership is to enable enterprises to

(58:49):
easily develop AI agents using the OpenAI capabilities while connecting them to their proprietary data hosted on the Databricks platform. This obviously makes perfect sense for both companies, and it will provide a lot of value to the companies that are using the Databricks platform.
Another big announcement this week when it comes to partnerships: the General Services Administration, also

(59:10):
known as the GSA, has signed an 18-month agreement with xAI to deliver Grok's latest models to anybody in the government for 42 cents per government organization. In addition to delivering the models for practically free, xAI is committing engineers to help agencies implement Grok rapidly and successfully.

(59:30):
Plus, they're offering training for the employees, in order to teach them how to best use the models. We reported about similar deals that were signed with Microsoft and OpenAI just in the past few months, but my biggest question really goes back to training, right? So providing the models to the government is great. The fact that the companies are willing to do it for free, I

(59:52):
don't think they're doing it out of generosity. I think they just understand that after the free period is done, they will be able to make billions of dollars from government employees being used to using their models and having them fully integrated into multiple different processes within the government. The government being the largest employer in the US, it all makes perfect sense. It also builds good relationships with the administration, so it

(01:00:14):
all makes great sense. The problem is training. Who is going to provide the training to government employees so they can use and leverage these tools effectively? That still remains unclear.
What I see in many companies that I work with is, oh yeah, we're investing a lot in AI, we got everybody licenses, whether it's ChatGPT or Microsoft Copilot and so on. But they're not seeing any results, and in many cases

(01:00:35):
they're seeing a negative impact, because there's no proper training for the employees. And I have a feeling that a similar thing, on a much larger scale, is gonna happen in the US government. So I really hope that the government will figure out how to provide adequate and continuous training to government employees on how to leverage AI effectively and safely, so we can all benefit from the potential of getting

(01:00:56):
better government services, cheaper and faster.
From government, let's switch to legal for a minute.
xAI just filed a lawsuit on September 25th accusing OpenAI of stealing trade secrets via hired-away employees. Now, this is not the first and probably not the last lawsuit between Elon Musk and Sam Altman, or between xAI and
(01:01:16):
OpenAI.
It is very clear that this battle is not going to stop. In this particular case, as I mentioned, Musk is claiming that Sam is not just stealing talent, but that OpenAI is actually using trade secrets that were developed inside of xAI as it brings those employees on board. The lawsuit names three former xAI employees, two engineers

(01:01:37):
and one senior executive, who allegedly passed along proprietary source code and other business secrets as they were joining OpenAI. Now, I'm not going to dive into the whole history between these two individuals, but this thing doesn't seem to quiet down. It's just getting worse and worse. I don't know if saying it's getting ridiculous is the right way to put it, but it is getting ridiculous.

(01:01:58):
They're acting like five-year-olds, and mostly Elon is acting like a five-year-old. The other thing this brought me to think about is that engineers are jumping ship in this race all the time, from one company to the other. And if this somehow makes it to court, somehow makes it to trial, and somehow gets to a situation where the court says, no, you cannot do this or you have to do

(01:02:19):
that, then this is going to stop something that's been happening for decades, and definitely in the last two years, which is people moving from one company to the competition. It is very, very hard to draw a line in the sand between what they know and what is considered stealing trade secrets, and it's gonna be very interesting to see how this evolves. Again, I don't think this is going to go very far.

(01:02:39):
I definitely think this is just a personal battle between Elon and Sam, and just another way for Elon to put a stick in the wheels of OpenAI.
Another big partnership, which we actually announced a couple of weeks ago but is now available to the public: Microsoft 365 Copilot now includes Claude Sonnet 4 and Claude

(01:03:01):
Opus 4.1.
as part of its selection of models. There's even a "Try Claude" button in the Researcher app inside of Copilot, and it allows users to switch to Claude Opus 4.1 for deep reasoning tasks, replacing the ChatGPT models.

(01:03:21):
Several different reports have shown that Anthropic models significantly outperform OpenAI's when it comes to integration with Excel and PowerPoint, so that's another reason for them to be well integrated into the environment. It makes perfect sense for Anthropic because it provides them access to a huge enterprise audience, and it makes perfect sense for Microsoft because it allows them to diversify and reduce their dependency on OpenAI models.

(01:03:44):
I would not be surprised if other companies are rolled in there. If xAI really continues on the trend they're on right now, they're probably going to be next.
That's it for this weekend. We'll be back on Tuesday with another how-to episode. In this particular case, we're going to show you how to create incredible videos that can promote your brand across social media and other channels in seconds, without teams and people

(01:04:06):
and influencers and so on, all on your own, using AI, with all the pros and cons that come with doing what I just said.
If you have been enjoying this podcast and finding value in it, please hit the subscribe button, and please share it with other people who can benefit from it. Literally, just click on share right now, think of a few people who can benefit from listening to this podcast, and

(01:04:27):
just send it to them.
I'm sure they will thank you, and I will thank you as well.
And for now, keep on exploring AI. Keep sharing what you learn with me and with other people wherever you can. It will help all of us reduce the potential negative outcomes of AI. Have an amazing rest of your weekend.