
Money flood - insane revenue and valuation growth, AI impacting every industry, the OpenAI and Microsoft deal, new test-time compute records changing the game, the first AI government cabinet member, and more important AI news for the week ending on September 12, 2025

Is AI on the verge of world domination… or an economic meltdown?

This week's AI headlines weren't about shiny new model releases, and that's a good thing. It gave us time to zoom out and examine the billion-dollar chess game shaping our future.

From OpenAI’s $115B spend-fest to the first AI government cabinet member, and from Replit’s code-writing agents to copyright lawsuits with a twist — this episode is a crash course in just how *wild* and *wide* AI's reach has become.

Here’s your witty but grounded executive summary of the week’s most impactful AI news — handpicked and broken down by your host, Isar Meitis, with direct implications for how business leaders should think, adapt, and move.

In this session, you’ll discover:
- OpenAI’s capital-intensive moonshot and why it may still not be profitable in 2030
- Microsoft’s unexpected pivot: From exclusive OpenAI integration to paying AWS for Claude
- The first AI cabinet member in Albania: here's why it might be brilliant (or backfire)
- AI-made movies & TV are no longer a fantasy: OpenAI is backing a full-length feature
- Funding frenzy decoded: Databricks, Replit, Perplexity, and others are raising billions
- "Thinking" AI that works for hours: How new models are pushing past past limitations
- 5,000 AI podcasts a week for $1 each?! The scary-fascinating rise of mass-produced audio
- FTC probes AI’s influence on kids and what it means for regulation & trust
- AI-powered AR glasses from Amazon — coming to delivery drivers and consumers near you
- Duke gives GPT-4o to all students: what this means for the future of higher education
- Why Apple is strangely silent on AI this year, and what it could cost them

Google Cloud AI Agent Handbook (PDF) - https://services.google.com/fh/files/misc/ai_agents_handbook.pdf


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello and welcome to a Weekend News episode of the

(00:02):
Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we had another week in which there were no big launches of new models or big announcements from the big labs, but there are a thousand other things to talk about, and I actually like it when we actually have

(00:22):
time to talk about the bigger picture. So today is gonna be a lot about the bigger picture, and we are going to talk a lot about two main aspects of AI. One is multiple examples of how broad the impact of AI is going to be on our lives. We're going to cover multiple startups and companies and developments and show how many areas, fields, industries, and

(00:45):
so on they're involved in. So that's gonna be one big topic that we're going to talk about. The other big topic is the crazy amount of money that is being poured into AI right now, with new funding rounds from several different large companies and new revenue numbers from multiple different companies. That is also going to show us how fast and how enormously this industry is growing.

(01:06):
We're going to talk about China, and we are going to talk about the first AI government cabinet member in the world. That's gonna come all the way at the end, but we have a lot to talk about, so let's get started. Before we jump into all the topics I mentioned earlier, the

(01:26):
one big discussion this week that we have to start with is OpenAI's projections for how much money they're going to spend in the next few years. So OpenAI is projecting they will spend $115 billion between now and 2029. Now, the previous projection that they had just earlier this

(01:47):
year was $80 billion, which is already a crazy amount of money, unparalleled by anything we've seen in history. So it's now almost one and a half times that amount in their new projections, fueled by several different aspects, which we're going to detail now. To be fair, there's also a huge boom in their revenue. So OpenAI is currently projecting $13 billion of revenue

(02:11):
in 2025. That's three and a half times last year's revenue, and $300 million above their recent projections. They're also projecting that the revenue from ChatGPT alone will grow to $90 billion by 2030, which is a 40% jump from the previous outlook that they shared, again, just a couple of months ago.

(02:33):
Going back to the spending side, OpenAI is projecting to burn through $8 billion in cash in 2025. This is $1.5 billion above their Q1 forecast. So just two quarters ago they were projecting this was going to be six and a half billion; their projection right now is $8 billion. They're planning for that number to be $17 billion in 2026, $35

(02:56):
billion in 2027, and $45 billion in 2028. That's four x their previous estimate for the 2028 numbers.
So what's the breakdown of that insane amount of money? Well, training new models will cost $9 billion this year, up from $2 billion last year. They're planning next year, so 2026, to spend $19 billion on training new models, while inference, which is using the

(03:19):
GPUs in order to generate the AI that we use, so the consumption of AI, is going to hit $16 billion in 2025, totaling $150 billion between now and 2030. Combine that with the insane amount of money that OpenAI is planning to invest in data centers, which in their current projections is going to come close to a hundred billion dollars more later in this

(03:40):
decade on their own servers. So compute that they will own is going to reach a hundred billion dollars in value. This is per CFO Sarah Friar. Now, part of the jump in these expenses obviously comes from the talent war. So employee salaries jumped to $700 million last year, now coming close to $1.5 billion in 2025.

(04:01):
So basically more than doubling. Now, yes, they grew the workforce, but a lot of it comes from just much bigger compensation packages. A big part of it was obviously Meta's poaching attempts, but it's not just them. There's fierce competition across all the main labs, which are trying to tempt leading researchers to jump ship from one company to the other. Now, the company has already raised over $38 billion,

(04:22):
including all the rounds they've done so far, including the backing from Microsoft in the beginning and obviously the other infusions that they received through the years, with SoftBank recently providing a lot of cash. With the rest of the cash that SoftBank has already committed, this number will come to $60 billion. The $38 billion they raised so far leaves them with only $7.6 billion of cash. Now, another interesting aspect on the

(04:44):
revenue side is that OpenAI is planning to monetize the free users in exciting ways. So they have a lot more free users than paid users. They are projecting $110 billion between 2026 and 2030 via non-subscription perks like shopping affiliates or ads. Through these new mechanisms, which are not introduced yet,

(05:06):
they are planning to make two to $15 annually per free user at an 80 to 85% margin by 2030. And by then they're expecting to have 2 billion weekly active users. And so combine these two numbers together and you understand that they're sitting on a cash machine, even from the free

(05:26):
users that are not paying them directly.
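A quick back-of-the-envelope check on those free-user numbers, using only the figures quoted above (projections, not confirmed financials):

```python
# Back-of-the-envelope check on OpenAI's projected free-user economics.
# All inputs are the projections quoted above, not confirmed financials.
weekly_active_users = 2_000_000_000                 # projected free users by 2030
rev_per_user_low, rev_per_user_high = 2, 15         # dollars per free user per year
margin_low, margin_high = 0.80, 0.85

low = weekly_active_users * rev_per_user_low * margin_low
high = weekly_active_users * rev_per_user_high * margin_high
print(f"Annual gross profit from free users: ${low/1e9:.1f}B to ${high/1e9:.1f}B")
# -> roughly $3.2B to $25.5B per year from users who never pay a subscription
```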
So what is the bottom line of all of this? Well, Sam Altman, in a statement to his employees, said that their company might be the most capital-intensive startup of all time, and I tend to agree. I don't think we ever had anything coming close to these numbers. But the other very interesting thing is, despite the explosive

(05:47):
growth of OpenAI and the explosive growth they're projecting for the future, which is even bigger than what we've seen so far, their projected free cash flow for 2030 is on a very slim margin. Meaning despite the fact that they are going to be generating tens of billions and potentially hundreds of billions of dollars between now and then, they may not be profitable in 2030,

(06:11):
which means that the crazy amounts of money that they're raising might be at risk unless they can keep on raising these kinds of funds moving forward, which, if they keep growing at this pace, is probably likely. But what happens to the global economy between now and then will have a huge impact on that as well. So this is definitely the largest investment experiment of

(06:32):
potentially all time.
Now, as you remember, some of the funds that were inked for them in the SoftBank deal are pending, and depend on them switching their model from a nonprofit to a for-profit organization. And one of the things that was putting that at risk is their inability to get to an agreement with Microsoft on what their partnership looks like moving forward.

(06:52):
Because a lot of the details were not clear, you know, in the early days, Microsoft was the only backer and there were different deals signed. And now it's a very different situation. And these conversations have been going on for about a year, since OpenAI suggested that they're going to switch to a for-profit model. Well, finally, OpenAI and Microsoft have reached a preliminary agreement to revise their multi-billion dollar

(07:14):
partnership, and they just signed a non-binding memorandum of understanding, also known as an MOU, that defines the new phase of their partnership. Now, both companies did not provide any details on what it's going to look like, but they both said that they really want to finalize all the details and get the final agreement signed. The actual quote is, "Together we remain focused on delivering the

(07:36):
best AI tools for everyone, grounded in our shared commitment to safety." That's basically what they said so far. So it seems to be moving forward, at least on that front. That being said, there are multiple bodies that have had discussions with the attorneys general of California and Delaware trying to prevent that restructuring from happening. So there's still hurdles to go through. I mentioned time and time again before, I think there's way too

(07:59):
much money involved in this not to be successful. And there's also the current government, which is definitely pro-business and less for regulation. And so I assume, I don't know that obviously, but I assume eventually they will be able to make this transformation and keep moving forward to spend this crazy amount of money in the next few years and generate this crazy amount of money in

(08:20):
the next few years. But potentially as part of this agreement, even though it was not officially tied together, Microsoft made another very interesting announcement this week, in which Office 365 Copilot will now also integrate Anthropic's Claude models alongside OpenAI's models, which were the only models backing it so far. So the statement said that Microsoft found Anthropic's

(08:42):
Claude Sonnet 4 outperforms OpenAI's GPT-5 in tasks like automating financial functions in Excel and generating more aesthetically pleasing PowerPoint presentations. This is per the quote from The Information. Now, to make this even more interesting, Microsoft will pay Amazon Web Services, AWS, to access Claude models. So right now, all the OpenAI models running in the

(09:04):
backend of Copilot are hosted on Microsoft Azure, meaning they're running them at cost. They're not paying anybody else to do this. But now, to integrate Claude's models into the mix as well, they will have an additional cost, despite the fact that Copilot pricing, as of right now, stays the same at $30 per month per user. Now, per Microsoft, over a hundred million customers

(09:24):
currently use at least one Copilot product, with Office 365 Copilot estimated to generate over a billion dollars in annual revenue. And to explain how big the future potential is, the users of these models are currently only about 1% of the 430 million paying users that Microsoft has overall. So the opportunity for growth is insane.

(09:45):
Now, this is not the first time Microsoft has worked with Anthropic. Microsoft has integrated Claude's models into GitHub Copilot in the past, and they're still running in the backend. So it's not their first partnership, but it is a very interesting move by Microsoft to basically say, we are going to deliver the best models we can in the backend of Copilot, and it's not necessarily going to be a solo show by OpenAI.

(10:07):
I find this a very interesting move by Microsoft. It makes perfect sense. It makes the same amount of sense as OpenAI using other providers for their GPUs beyond just Azure, which has been happening for a while now. So this is becoming a less exclusive partnership between these two parties. And again, I think for the sake of both of them, it will be good if they finalize the terms of this agreement.

(10:28):
Now, speaking of Anthropic, they just agreed to pay a $1.5 billion settlement in a class action lawsuit over pirating nearly half a million books to train the Claude models. So if you remember, we shared with you in June that US District Judge William Alsup ruled that Anthropic's AI training on copyrighted books is exceedingly transformative and

(10:52):
protected under the fair use doctrine. Basically, he was saying, and I'm quoting, like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to replicate or supplant them, but to turn a hard corner and create something different, which was incredibly important for the AI

(11:13):
fair use claims. However, that was related to the books they actually purchased. He was very critical of the fact that they had downloaded and pirated between 465,000 and 500,000 books from shadow libraries like Library Genesis and other websites that provide access to pirated books. So this new settlement is supposed to offer about $3,000

(11:35):
per work to the writers of these half a million books. This is four times the potential statutory damages and 15 times the innocent infringement award. So it is a very significant penalty to Anthropic that, as I mentioned, they agreed to pay.
That being said, the judge himself said that this deal is, and I'm quoting, full of pitfalls, and he set a

(11:57):
deadline of September 15th to get a final, drop-dead list of all the books that were pirated, and he's saying he's going to finalize his review on September 22nd, following that by saying, we'll see if I can hold my nose and approve it. Basically, he's not happy with the settlement, but he's saying that he will most likely try to get past the stench, in other

(12:19):
words, and let it move forward. I think both aspects of this ruling are very important. On one hand, he is clearly stating for the first time that training AI models on content that is out there is considered fair use, which is a huge win for the AI labs. On the other hand, he's saying you have to have legal access to that content, otherwise it's piracy and you will pay hefty

(12:41):
fines. And I must admit, I'm happy with both sides of that equation. How exactly that is going to evolve as far as getting access to internet content that is already out there and open to the public is still too early to know. We know that there have been many licensing deals signed between the leading labs and many large news and content providers, so that might be a path forward for the bigger players.

(13:02):
But what happens to people like me who generate content every single day? I don't think there's a compensation mechanism for that yet. I don't know if there will be, but at least it's a step in the right direction. This ruling has gotten mixed reviews and feedback from people in the Association of American Publishers and the Authors Guild. So on one hand, they are happy with the fact that they're

(13:22):
getting compensation. On the other hand, they're not happy with all the details. But I guess that's true in every settlement. Not everybody's gonna be a hundred percent happy, but everybody can live with the solution and move forward, which I think is what's going to happen in this particular case. Now, speaking of government involvement with AI, the Federal Trade Commission, the FTC, has launched a probe into the leading AI platforms and how they are providing access or

(13:45):
protecting kids and young individuals from the usage of AI. So, according to the FTC's press release, the agency is issuing orders to seven major companies, Alphabet Inc., Character Technologies, Instagram LLC, Meta Platforms Inc., OpenAI, Snap, and xAI Corp., to provide detailed information on their AI chatbot development, monitoring, and

(14:07):
risk mitigation for young users. Now, their goal is basically to understand how these chatbots simulate human-like communication and build relationships, how that is going to be monetized through user engagement, and specifically younger user engagement, teenagers and children, and how that is going to

(14:28):
potentially impact their development and what risks it generates for these users. The FTC chairman, Andrew Ferguson, emphasized, and I'm quoting, protecting kids online is a top priority of the Trump-Vance administration, and so is fostering innovation in critical sectors of our economy. Basically, they're saying that while this administration is go, go, go on AI, they want to also make sure they keep kids and children

(14:50):
safe while we're developing this technology. And I'm very excited to hear both. I am really happy that they're doing this right now. We shared with you a lot about the recent steps that OpenAI is taking in that direction, because of the lawsuits that they're facing and the loss of life that they have potentially or allegedly led to. And so, as I mentioned, as somebody who has three kids, I

(15:11):
am really excited to see that bigger bodies and groups, including government agencies, are looking for ways to mitigate the risks that this generates towards children. I think as a society we've done a horrific job when it comes to social media's impact on young individuals, and I really hope that we're going to learn from that and prevent, or at least reduce, the risk of AI amplifying the damages that

(15:31):
social media has already done. So now let's talk about what areas of our lives, what kinds of new innovations, and what aspects of modern society are impacted by AI. But we're actually going to start with the one place where it's not happening yet, which is Apple's September 2025 iPhone 17 event, which was a huge event, like Apple does every single

(15:52):
time. And there was one thing that was very clearly absent, or it wasn't completely absent, but it definitely was very far from center stage, which is Apple Intelligence. So if you remember last year's event, iPhone 16 was all about Apple Intelligence. The phones and the AirPods and everything else took second stage, and the main stage was all about Apple Intelligence. And this year it was definitely not the case, after the huge

(16:14):
disappointments time and time again of not being able to release any, or almost any, of the things they promised. The only things they were able to release are minor features that are more toys than actually beneficial, you know, being able to create emojis and stuff like that, but nothing significant when it comes to real AI value on the phones. So the event itself was much shorter than usual. It was only an hour and 15 minutes.

(16:34):
And Apple Intelligence took a very small slice of the overall event. Yes, they released a new iPhone, new AirPods, a new Apple Watch, and they shared their new advancements in Apple silicon and hardware and software and all of that. But there were definitely a lot of crickets when it comes to Apple Intelligence. They have shared some updates to Apple Intelligence, but they're

(16:55):
minor, like translations in iMessages and FaceTime. But these are not the big things that they were promising, and they are far behind what Google has already released with the Pixel 10 and what Samsung is expected to release in its next offering. While we have been sharing all those developments on this podcast, it is very clear that Apple is not where it needs to be when it comes to delivering AI to its users.

(17:16):
Now, surprisingly, despite all of that, their stock is not taking a hit, at least yet. I must admit, I'm personally surprised. I think Apple will have to make some significant moves, and as we shared last week, those moves might come from partnerships with Google and potentially other vendors to actually drive Apple Intelligence, at least in the near future. But now let's start talking about where AI is impacting. So first of all, there was a very interesting blog post by

(17:38):
Andreessen Horowitz this past week. a16z, one of the largest VC funds in the world, knows a thing or two about what's happening, and they released a very interesting paper that they're calling the Great Expansion, a new era in consumer software. They share that the number of companies reaching a hundred million plus in ARR in two years is nothing like we've ever seen before. And the interesting thing is that they're saying that the

(17:59):
great expansion comes through usage-based billing and consumer-to-enterprise transition, which are two things that were not common before the AI era. So pre-AI, most consumer software relied on one of two mechanisms: either ad-based revenue, such as Instagram, TikTok, Google, et cetera, or a flat fee subscription, like most SaaS that we know today, with the top

(18:22):
companies retaining 30 to 40% of revenue after the first year. So many SaaS companies had significantly high churn rates, and that churn had to be replaced consistently in order to keep growing the company. And what they're saying is that there is more than a hundred percent retention, and I'll explain in a minute how that works, when it comes to using AI tools. And the reason there's more than a hundred percent retention is

(18:44):
that many users who have used AI on the personal level, and continue to use it on a personal level, bring it to their work as well, which now means you have another license of the same person using AI for work. So the adoption between personal use and enterprise use is actually giving companies higher than a hundred percent retention, which again, compared to traditional SaaS, was

(19:05):
less than 50% on average.
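To make the more-than-100% retention point concrete, here is a simplified cohort calculation. The numbers are made up for illustration; they are not from the a16z post:

```python
# Illustrative net revenue retention (NRR) for a consumer-to-enterprise AI product.
# All numbers are hypothetical, just to show how NRR can exceed 100% despite churn.
start_users = 1_000
personal_price = 20              # $/month personal plan
work_price = 30                  # $/month work seat

start_revenue = start_users * personal_price         # $20,000/month at the start

churned = 200                                         # 20% of users cancel
retained = start_users - churned
expansion_seats = 300                                 # retained users who also add a work seat

end_revenue = retained * personal_price + expansion_seats * work_price
print(f"Net revenue retention: {end_revenue / start_revenue:.0%}")   # -> 125%
```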
They're also mentioning the tiered approach of between, you know, $20 and $250 a month, depending on which platform you're on and what kind of licensing you're on.

(19:27):
That tiered approach enables companies to charge people the right amount of money for the amount of tokens that they're consuming, and the people who are consuming more than that have to continue paying beyond the subscription, based on API usage. And so usage-based billing allows these companies to continue to grow as the usage grows, and not be defined by a flat
fee per user.
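As a sketch of how usage-based pricing differs from a flat fee, here is a hypothetical tier calculation; the tier price, included allowance, and overage rate are invented for illustration:

```python
# Hypothetical usage-based billing: a flat tier with an included token allowance,
# plus metered overage billed per million tokens (all numbers are illustrative).
def monthly_bill(tokens_used: int,
                 tier_price: float = 20.0,
                 included_tokens: int = 5_000_000,
                 overage_per_million: float = 3.0) -> float:
    overage_tokens = max(0, tokens_used - included_tokens)
    return tier_price + (overage_tokens / 1_000_000) * overage_per_million

print(monthly_bill(2_000_000))     # light user: stays at the flat $20.00
print(monthly_bill(40_000_000))    # heavy user: $20 + 35M extra tokens * $3/M = $125.00
```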
The other interesting aspect, as I mentioned, is the fact that most of these companies have driven their initial growth through consumer-based products that then led to B2B revenue, which always takes longer to develop compared to traditional

(19:49):
SaaS, which had to start with B2B. There are very few companies that developed a B2C product and then evolved into a B2B company, and this is happening with AI. So a lot of the things that we took for granted through the SaaS era are now being challenged and developed in completely new ways that were just not in existence, definitely not at this scale, before. So the way we do business with software is changing and

(20:13):
evolving right in front of our eyes. So the first aspect we covered is how we actually do business with software today. The next one is education, or higher education. So Duke University just announced that all their undergraduates, plus staff, faculty, and professional school students, gained free unlimited access to GPT-4o starting on June 2nd, 2025. And they're running a pilot together with OpenAI, together

(20:35):
with a tool they call DukeGPT, which is a secure, university-managed AI interface that prioritizes privacy and integrated resources. And all of those are combined into a university-wide attempt to understand how to integrate AI into the higher education process. Now, they announced this back in May, and the goal is by the

(20:58):
end of the fall semester to do a summary and understand how this has worked: what are the pros, what are the cons, what are the issues they tackled, and how are they planning to move this forward. And there is mixed feedback from different professors. Some of them are embracing it, like David Carlson, the associate professor of civil and environmental engineering, who allows AI in his machine learning course if students

(21:18):
disclose usage. And he stated, you take credit for all of ChatGPT's mistakes, and you can use it to support whatever you do. Basically, you're free to use AI, but if you get this wrong, you're gonna fail the course, and it's your fault even if the AI is the one that did it. And I think that is the right approach, because I think that is the way the business world is going to operate, and the whole point of

(21:39):
your higher education, unless you're staying in the research universe, is to prepare you for the real world and the business world. And I think this is a very healthy approach. There were other professors who are still banning it, especially in the areas of the humanities, arguing that it completely erodes the value of developing independent thinking. And one of the arguments that one of the professors used is,

(22:01):
if you want to be a good athlete, you would surely not try to have someone else do the workout for you, which makes perfect sense. So she's saying that using ChatGPT or any other tool, if you are trying to learn how to write, is exactly like allowing somebody else to run for you if you're trying to develop as an athlete. And I must admit, what she's saying makes sense. That being said, I think teaching people how to

(22:23):
collaborate with AI to develop ideas and improve their text is probably the right approach, but there's definitely still a mixed approach in academia. I do think that academia is going to get challenged as AI gets more and more adopted, both by individuals as they're coming into academia, as well as in the output that they're expected to deliver. Again, people who are getting ready to fill job positions in

(22:46):
society will be required to know how to use AI across the board in everything that they're doing. But the fact that big, well-known universities are now running these initiatives shows very clearly that AI will have a significant impact on education and higher education. The next topic is our day-to-day life, with rumors that Amazon is also developing augmented reality glasses. Actually, two separate models of augmented reality glasses.

(23:09):
One is called Jayhawk, for consumers, basically something that will compete with Meta's Ray-Bans and Oakleys. And the other is actually called Amelia, and it's going to be for their delivery drivers, helping them with package sorting, delivery, turn-by-turn instructions, and so on. So this is going to be more of a job-oriented set of glasses. No clear details have been released yet.

(23:30):
This was all disclosed by The Information, but they're planning the release of their consumer glasses either late 2026 or early 2027, and the workers may start using the glasses in Q2 of 2026. Now, combine that with the explosive growth of these kinds of glasses, and more and more companies coming in, including Chinese manufacturers, and you understand that in the very near future, most of the people who are going to be

(23:51):
wearing glasses outside are actually going to be recording and analyzing everything in the world around them. Which means everything you do is going to be recorded if you are in public, and analyzed if you are in public. And this raises a lot of questions, especially once you start diving into it: people next to you at the ATM can record everything that you're doing, people in public restrooms will record everything as long as they're wearing

(24:13):
some kind of a device, courtrooms, et cetera, et cetera. You understand how this can go very, very wrong, and you understand how there are going to be new rules and regulations and social agreements about how and what is acceptable, and it is going to be very different from what we know right now.
Another area where AI will have a significant impact is on

(24:35):
filmmaking and TV series making. I must admit, I projected back in 2024 that in 2025 somebody would start creating AI-based series and potentially full films. And this week was the first time I actually found the first AI-based series. It was really weird and yet really interesting. The show is called Unanswered Oddities, and it is weird and

(24:58):
exciting at the same time. So if you wanna look for something that will show you what the future might look like, how one individual can create an entire TV series and actually capture your attention, at least for a while, then go and check it out. And this is just the very first step. But this week OpenAI announced that they will be fueling a groundbreaking new movie called Critterz, which is going to

(25:19):
showcase the potential of AI to create movies faster and cheaper than traditional Hollywood methods. So as you might remember, there was a Critterz short film in 2023 that used DALL-E, which is OpenAI's early-days image generator, to generate the visual worlds of a movie that was then animated by Emmy Award-winning creators.

(25:41):
Now, that was a short film that was using very basic capabilities of AI. Well, this is going to piggyback on top of that. So the new Critterz movie is supposed to be a full-length movie, and the plan is to produce it for less than $30 million. Now, $30 million sounds like a lot of money, but to put things in perspective, producing Toy Story 4 cost $200 million.

(26:02):
Now, the other thing is this film is supposed to take nine months to complete, beginning to end, using OpenAI's technology to generate the characters and the backgrounds, and using humans in order to animate, provide the voice acting, and refine the work of the AI. The other reason to involve humans in the process is to ensure copyright eligibility, because as we all know, as of right now, AI-generated content alone doesn't have copyright

(26:25):
protection. Now, even the script, which is going to be mostly written by humans, is going to be assisted by AI tools. OpenAI is going to provide the tools, compute, and resources, but is not directly involved in the production or any marketing decisions of the movie itself. So again, this is moving exactly as I expected, and it's going to have a dramatic impact on Hollywood and other production

(26:48):
studios. If you remember, there was the big strike in Hollywood just over a year ago, and eventually they signed a deal. I said that deal was going to be worthless, because once others are able to generate AI movies for a fraction of the cost, these studios, regardless of which agreement they sign, will have two options: either stay with the agreements that they signed or run out of business. Either way, the people who signed the agreement will have no job.

(27:11):
And so I don't think we're there yet, but I think we're definitely moving in that direction. What does that mean for the future of filmmaking? I don't know. I assume there's going to be some kind of a premium for actual humans acting in movies, but I don't know how long that can last from a financial perspective. How much more money will people be willing to pay at the cinema to watch a movie that they cannot differentiate from

(27:31):
actual, real life and real actors? So time will tell how that is going to evolve, but I can tell you it is not going to be what it is right now. Staying in the content production and entertainment business, a new startup has a crazy ambitious new goal in the podcasting universe. Wright, a former Wondery executive, has established a new

(27:53):
company called Inception Point AI, which is gearing up to unleash 5,000 podcasts and 3,000 episodes per week, all produced for under $1 each. Just to put things in perspective, if you are running lean, like I am, every podcast episode takes a few hours to produce. I am not gonna share with you what my hourly

(28:13):
rate is, but it is more than $1 per hour, and it takes a few hours, and then there's editing, and then it gets released for you to enjoy and consume. So producing thousands of episodes for $1 per episode is on a very different scale compared to actual human-generated podcasts. Now, their goal is to go after niche topics like weather reports and quirky sports, and the idea is to create

(28:38):
podcasts in areas where there are not enough of them right now, while providing deep, interesting content on these niche topics where podcasts may or may not already exist. This will obviously go beyond that, because if they do find success, and if they do crack the code on how to generate the content at the right length, with the right amount of humor,

(29:00):
at the right speed, with the right amount of whatever they want, because it's AI generated, they can reproduce that and go after any other topic in the world. Now, as a podcaster, am I happy about that? I must admit that, A, I don't think I can do anything about it, and B, I think there is going to be, in this particular case, room for everyone, right? So yes, will I have to compete with AI-generated shows? I think I already do. There are several different shows out there, especially on AI, that are AI generated.

(29:21):
I don't think they're at that scale, and I don't think they have the kind of money that this company is going to raise in order to do that, but it's already happening. And I think, just like any other field when it comes to content creation, content generation, and content consumption, AI is going to play a big role in that generation. People will have to pick and choose who they wanna listen to or follow or view, and they will have to choose how much of it is

(29:42):
AI generated versus human generated. I must admit that I think the next generation won't even care. Right now it sounds like a big deal for us, like, oh, I really wanna listen to a real person. I think to my kids it won't really matter; as long as they find the content interesting, engaging, and so on, they will consume that content regardless of how it was generated, and they won't really care whether it's human generated or not,

(30:04):
because they're going to grow into a world where a lot of the content is AI generated. Now, staying on the topic of different fields, but diving more into new features from companies that we know and how they are delivering new value from their existing platforms: Claude just announced that you can now create and edit Excel, Word, PowerPoint, and PDF files straight in Claude.

(30:26):
It is currently only available for Max, Team, and Enterprise users. Pro user access, which is the plan I'm on, is coming soon. The demos are absolutely mind-blowing, and they're showing how you can take raw data and create really polished outputs, whether it's detailed statistical analysis, complete sets of Excel outputs, as well as PowerPoint presentations, et

(30:48):
cetera. There was a very interesting example by Ethan Mollick on LinkedIn that shows how, with a simple prompt, he was able to create a whole set of multiple sheets with multiple formulas, all connected to one another, generating a financial model and projections for a startup company. Now, in addition, because it knows how to do all these things, it is a great way to convert across different formats.

(31:09):
So taking inputs from a PDF, converting them into PowerPoint slides, and from that generating invoices in spreadsheets, or whatever other kind of combination you want, because it understands the content in all these formats, and it also knows how to generate the output in all these formats in a much more advanced way than we've seen so far. Now, if you are on one of the eligible subscription levels,

(31:31):
all you have to do is go to your settings, then Features, then Experimental, and activate the relevant functions, and you'll be able to use them. You can also connect it to your Google Drive to save all the outputs straight into a specific folder on your drive.
Another thing that Claude announced this week is a memory feature for Team and Enterprise plans. So as of September 12th, Claude's

(31:53):
memory capability enables it to retain and reference team projects, client details, and work patterns, boosting productivity across the board. That's coming from Claude's announcement itself. Now, the interesting thing, a little different than OpenAI's memory, is that Claude creates distinct memory pools, so different segments and different memory areas for each project, basically keeping the information from being distributed

(32:13):
across the board, which is a very important aspect. So if you think about exposing your company's information to Claude for it to remember different things, you do not want everybody in the company to have access to the information that HR has uploaded with everybody's reviews and salaries, as an example. So the ability to keep the memory confined and confidential is a very positive approach, which I do not believe

(32:34):
OpenAI has right now across their platforms. I believe this is the future, though, where you will be able to have the memory turned on while knowing that it's turned on and accessible just to the relevant
people in the organization.
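Conceptually, project-scoped memory can be pictured as separate stores keyed by project, with access checked before anything is read. This is only an illustrative sketch of the idea, not Anthropic's actual implementation:

```python
# Illustrative sketch of project-scoped memory pools with access control.
# A conceptual toy, not how Claude's memory feature is actually built.
from collections import defaultdict

class ScopedMemory:
    def __init__(self):
        self._pools = defaultdict(list)      # project_id -> list of memory entries
        self._members = defaultdict(set)     # project_id -> users allowed to read it

    def grant(self, project_id: str, user: str) -> None:
        self._members[project_id].add(user)

    def remember(self, project_id: str, entry: str) -> None:
        self._pools[project_id].append(entry)

    def recall(self, project_id: str, user: str) -> list:
        if user not in self._members[project_id]:
            raise PermissionError(f"{user} has no access to {project_id}")
        return list(self._pools[project_id])

mem = ScopedMemory()
mem.grant("hr-reviews", "hr_lead")
mem.remember("hr-reviews", "Q3 compensation bands uploaded")
print(mem.recall("hr-reviews", "hr_lead"))    # allowed
# mem.recall("hr-reviews", "sales_rep")       # would raise PermissionError
```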
Now, just like in OpenAI, you can go and control what Claude remembers. You can view and edit the memory summaries in your settings and instruct Claude to focus on or ignore specific details or

(32:55):
specific areas of interest. Now, another interesting aspect of this that they announced, which I'm still not exactly sure how it's going to work, is that users will be able to transfer memory details from other AI tools, and from Claude to other tools. So basically you can back up and migrate the memory of Anthropic's Claude into other tools and vice versa. I don't think there's a standard right now for how you do that,

(33:17):
but I think that's a very interesting approach for the future. In general, I think we're going to see more and more separation of data and tooling in the AI world, which will allow much broader adoption, because your data will stay yours and you can then switch tools around as you wish. Right now, companies are investing billions of dollars in making that happen, basically separating the two layers, but I

(33:38):
think it will become common and basic practice as we move forward.
Another company that made a big announcement this week is Adobe. Adobe has just launched a suite of AI agents powered by Adobe Experience Platform, also known as AEP. And the main thing that they released is the AEP Agent Orchestrator, which is a platform that enables businesses to manage and customize Adobe and third-party AI agents in and

(34:02):
under the Adobe universe, including tools that will be able to understand all the context that is in your entire Adobe universe and take multi-step actions in order to support different processes. So the two interesting things that I find here are, one, the fact that they are going to allow third-party agents to be a part of that environment, and two, that they understand that

(34:23):
one of the first needs, as you start adding more agents, is the orchestration aspect of it. So if you're adding 3, 4, 5, 20, 30, 300 agents into your universe, somebody needs to manage that, and the people are already preoccupied with doing other things. And so building an orchestrator agent that can control the other agents and define what data each needs to be exposed to, when it needs to be a part of the process, and so on, is basically

(34:45):
hiring your first manager before you're hiring employees. And that makes perfect sense to me.
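To picture what that orchestration layer does, here is a minimal sketch of one "manager" agent routing tasks to registered sub-agents. The agent names and the keyword routing are hypothetical, not Adobe's actual product:

```python
# Minimal orchestration sketch: one manager routes tasks to registered agents.
# Agent names and the keyword-based routing are invented for illustration only.
from typing import Callable, Dict

class Orchestrator:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self._agents[name] = handler

    def dispatch(self, task: str) -> str:
        # Toy routing: hand the task to the first agent whose name it mentions.
        for name, handler in self._agents.items():
            if name in task.lower():
                return handler(task)
        return "No registered agent can handle this task"

orchestrator = Orchestrator()
orchestrator.register("email", lambda t: f"Drafted campaign email for: {t}")
orchestrator.register("audience", lambda t: f"Built audience segment for: {t}")
print(orchestrator.dispatch("Create an email for the fall promotion"))
```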
Another big agent announcement this week is Replit. Replit just announced Replit Agent 3, which is a huge shift in, A, what their tool can do, and B, the goals they have set for their company. Agent 3 offers 10x greater autonomy than Agent 2,

(35:05):
and it allows it to do a lot more stuff, including build, test, and fix in real time. Now, I'm a heavy Replit user. It's my number one go-to vibe coding platform, and I did not know they had released it until it popped up on the screen, and then I started engaging with it. And since I'm working on the platform every single day, it was incredible to see the change in how much more powerful the new model is.

(35:27):
It is actually running things on its own, testing stuff on its own, giving itself feedback, and continually iterating and fixing stuff without me even having to be involved. It was operating the actual application. So it's using a computer use function to actually run the application that it is developing, in order to test it and figure out what's going on. It is now writing its own logs and its own breakpoints and

(35:49):
testing them on its own. Previously, it did some of that: it created log capabilities, but then I had to go and open the console on the right side of the browser, navigate to the right place, copy the output of the logs, and paste it back into Replit. Right now, Replit does it on its own in its own console, and it does the loop without me having to do anything. Now, according to Replit, this can happen because the new model

(36:11):
can run for 200 minutes. That's almost three and a half hours without supervision. In addition, the tool itself can create other agents. So what does that mean? It means that right now you can use Replit to create agents for anything in your company. It becomes an agent factory that can then connect to anything you will allow it access to. But in addition, it can spin up its own

(36:32):
agents in order to complete the task that you gave it, and help Replit create agents that will help Replit create more agents, et cetera, et cetera, going into this loop that makes my head hurt. But it really is an incredible jump forward in vibe coding and in the ability to understand, or to get a glimpse into, what the future looks like when software can generate other pieces of software to help itself solve problems, and so on

(36:55):
and so forth. In addition, they've already built connectors from Agent 3 to platforms such as Notion, Linear, Dropbox, and SharePoint, which basically makes it an advanced version of Make.com or n8n, and they're merging that with their ability to write really advanced code. And I think this is the future of everything, right?

(37:17):
My personal experience this week has had a big impact on these concepts and how I understand them now versus how I understood them just a few days ago. The ability of software to create agents to help in the process is something I never fully understood, and I probably still don't fully understand, but seeing it in front of my own eyes, seeing it generate its own tests and run them and then solve problems, just blew

(37:40):
my mind. And you can take that to any aspect of your work. Anything you are struggling with, any bottleneck you have in your company, you'll be able to open Replit, or any other tool that will compete with it, and say, hey, here's the problem that I have. I want you to look at it, I want you to come up with solutions, I want you to develop the solutions, and I want you to test the solutions and then just give it to me when it's working. And it will just do it. So this is from Replit, but staying on the same topic of

(38:02):
software development and how AI is pushing the boundaries of that forward: a company called Blitzy, which is an autonomous software engineering platform, looks way beyond just code generation to the entire software lifecycle. They have just taken the top spot on the SWE-bench benchmark, which is the top benchmark for software creation, and they scored

(38:24):
86.8% on that benchmark, which is 13% higher than the previous top score. It is also the largest single advancement on this benchmark since it was established. So to put things in perspective, that's a huge jump in the score on that benchmark. And the way they're achieving it is by enabling hours-long

(38:47):
reasoning instead of seconds or minutes. As their CTO said, the unsolvable weren't actually unsolvable; they just required deeper thinking than System 1 AI could provide. By design, our platform enables AI to think for hours or days rather than seconds or minutes, unlocking solutions to problems that stumped every previous approach. They also gave some practical examples that go beyond the

(39:09):
benchmark, like how Blitzy was able, on its own, to modernize four million lines of Java code in just 72 hours.
So on these last two topics, there are two things I want to say. Obviously, the forefront of what AI is impacting right now is software generation, but it gives us a crystal ball for how it's going to impact everything else in business as it evolves.

(39:30):
It is going to be as good at the other things as it is at code generation right now, and it's going to progress at the same pace. It's just lagging a little bit behind, because code is a much more structured universe where it can excel. I don't see any reason why it's not gonna happen in other worlds. The other aspect that is very interesting is that it shows that the capabilities of AI can be dramatically extended just by

(39:52):
giving it more time to think. Basically, the test-time compute scaling laws that we saw for the first time less than a year ago are proving to be something that can be extended way beyond what we know right now. So when you see ChatGPT or Claude or Grok or any of the other tools think, and it usually thinks for two minutes, three minutes, five minutes, it can technically, apparently, think

(40:12):
for hours and potentially days to solve significantly more complex problems. Which basically means that if you are willing to pay for it, you can get unlimited reasoning on really complex problems and solve them, whether that comes to business or science or education or any other aspect that you want, which I find, A, really exciting, and B, really scary.

(40:34):
And yet this seems to be the solution that both of these companies have taken to go way beyond what they were able to do before with AI. And I have a feeling this translates very well to any
other field as well.
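The mechanic behind "thinking for hours" is conceptually simple: keep spending inference on drafting, critiquing, and revising until a budget runs out. Here is a rough sketch; call_model and looks_done are stubs standing in for whatever model API and acceptance check you would actually use:

```python
# Rough sketch of scaling test-time compute: draft, critique, and revise in a
# loop until a wall-clock budget is spent. The stubs below are placeholders.
import time

def call_model(prompt: str) -> str:
    # Stub: replace with a real model call (OpenAI, Anthropic, a local model, etc.).
    return f"[model output for: {prompt[:40]}...]"

def looks_done(critique: str) -> bool:
    # Stub acceptance check: a real version might run tests or a verifier model.
    return "no flaws" in critique.lower()

def solve_with_budget(problem: str, budget_seconds: float) -> str:
    deadline = time.monotonic() + budget_seconds
    answer = call_model(f"Propose a solution:\n{problem}")
    while time.monotonic() < deadline:
        critique = call_model(f"Find flaws in this solution:\n{answer}")
        if looks_done(critique):
            break
        answer = call_model(f"Revise using this critique:\n{critique}\n\n{answer}")
    return answer

print(solve_with_budget("Modernize a legacy Java module", budget_seconds=1.0))
```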
Now, staying on the topic of agents and agentic behavior, Google Cloud just released a comprehensive handbook detailing 10 practical applications of AI agents that can enhance business

(40:55):
efficiency. Now, this is a very non-techy, user-friendly kind of handbook, so if you are deep into the agent world, it's not going to provide you much value beyond a general overview. But if you don't know much about agents and what they can do for you right now, this is a great handbook, and we're going to link to it in the show notes. The paper is called AI Agent Handbook, just as simple as

(41:15):
that. It's a PDF and you can get it from Google's website. The 10 agents are: number one, effortlessly search for enterprise data like never before.
Number two, transform complex documents into engaging podcasts. This is the feature we know from NotebookLM and Gemini. Number three, generate your best ideas in minutes. Number four, consult an expert on anything.

(41:35):
Number five, personalized customer experiences at scale with multi-agent AI. Number six, boost marketing engagement and conversion rates. Number seven, shorten the sales cycle. Number eight, find a bug in your code and fix it with just a prompt. Number nine, simplify onboarding and other HR workflows. And number 10, build your own AI agent.

(41:58):
So each and every one of those is a tool they've already developed, or a platform they've developed that allows users to build their own agents. And this is just a way to expose people to what agents are and what they can do today. And again, this is nothing too technical. It's a relatively short document, and it's just explaining what the agentic future can look like across multiple aspects of our businesses. Another interesting piece of statistics from it says

(42:19):
that 33% of new enterprise apps will include agentic AI by 2028. This is up from less than 1% in 2024, and they're claiming that because of that, it'll enable 15% of day-to-day work decisions to be made autonomously. So what they're claiming is that within two and a half years, more than 10% of business as usual is going to be done by AI

(42:41):
and not by humans. And like every other projection on technology in history, and definitely projections on AI, it probably undershoots the actual reality. So the number actually might be higher than that.
Staying on new announcements that are also completely groundbreaking and changing the way we can think of the world and engage with the world: a spinout of the MIT Media Lab called AlterEgo is a

(43:02):
wearable device, a non-invasive neural interface, that allows you to communicate without speaking and without listening to actual sounds. So the way this works: it captures peripheral neural signals when you are basically using your inner voice. You just have to think about what you wanna say, and it knows how to capture that, and it knows how to transmit that either into a computer or into another AlterEgo device, which

(43:26):
means you can have a conversation with somebody next to you, or somebody on the other side of the planet as long as the internet connection is fast enough, and you can say things without making any sound. And the way you hear the feedback is through bone conduction audio feedback. So for those of you who haven't tried that, there are many headphones today that you can wear, especially when you're cycling and stuff like that, when you want to hear the road, that you just put on the sides

(43:47):
of your head, and through bone conduction you can hear perfectly fine while still not blocking your hearing. So this is a new means of communication, like I said, with either computers or other devices. So think about voice activation of everything. I think I've shared with you multiple times that I almost don't type anymore. I voice type almost everything on my computer, which makes me communicate with the computer a lot faster. Well, now I won't have to voice type at all.

(44:07):
I won't have to say anything. I will just have to think about what I wanna say, and it will show up on the screen. And the same thing can happen with other people. You can have a conversation with somebody else without saying anything and without anybody hearing any other sound. This is the closest thing to telepathy that I know of, and it already exists. It's not science fiction. And this is another thing that I think will become very, very common. Again, combine that with other wearables that can see the

(44:29):
world. By the way, AlterEgo has cameras that can see the world around you. And you understand that this becomes an extremely powerful tool. Add to that faster internet connections, or the ability to run AI on board the device itself, and you understand that most people around you will become cyborgs with extreme intelligence capabilities that are just not available today, but that the technology enables right now.

(44:50):
Now take that beyond the one-to-one level, to the many-to-many level, and think about replacing Slack channels with these bubbles of thoughts on specific topics, where the AI itself will be able to understand what relates to whom, and who needs to know what and when, and will be able to transmit that information to the right people at the right time, or be queried just by more or less thinking about it, or saying in your inner voice the

(45:11):
information you're looking for, and it will be able to query all the other communications with all the other relevant people that you should have access to and provide you that information. Again, my head hurts just thinking about it, but I think this future is not that far out.
Another company that made a big announcement, going from being just a storage company to being an AI agentic universe, is Box. So Box just launched Box Automate, which is what they

(45:32):
call an operating system for AI agents. They have deployed several different agentic solutions in the past few months, and now they are delivering a much more comprehensive solution that will allow you to query and use your data across all the Box solutions in a much more effective way. Or, as their CEO Aaron Levie shared, AI agents

(45:52):
mean that for the first time ever we can actually tap into all this unstructured data. So basically you can ask any question about any data, whether structured or unstructured, and get an answer in a very effective way. He's also saying that their main focus has been to solve the context window limits of the large language models like Claude or ChatGPT. But he's saying that there are no free lunches, right? There are still limitations of the software right now, in

(46:15):
both the cost and the infrastructure required to make that possible, but it is possible.
Staying on the topic of what is possible, or what is going to be possible, with AI: Mira Murati's Thinking Machines Lab, which is the company she established just earlier this year, raising $2 billion in a seed round, has released its first research paper, and the paper is called Defeating Non-Determinism in LLM

(46:36):
Inference. So what this research paper sets out to prove is that AI can deliver consistent results. The myth, if you want, so far was that by definition, AI, being a statistical model, will generate slightly different results every time you run it. So as an example, if you're asking AI to write a report based on a huge amount of data, and you ask it to write it five times, every time you're gonna get slightly different results.

(46:57):
Now, they're all gonna be a summary of the data, but it's not going to be the same summary, which is a really big problem both in research and in business. Now, the explanation for that was always that it depends on floating point math and GPU concurrency, which is how these things actually run in the backend, and so this was a quote-unquote necessary evil. However, apparently this is not a must.

(47:18):
So by enforcing what they call batch invariance in inference pipelines, they can now generate consistent outputs that are identical regardless of how many times you run it. This solves one of the biggest problems generative AI solutions have had so far when it comes to using them in more data-sensitive environments. And by solving this, they are moving the AI world a very big

(47:40):
step forward into doing stuff that it just couldn't do before, or couldn't do consistently before.
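One of the root causes mentioned above, floating point math, is easy to demonstrate: the same numbers added in a different order give a slightly different result, and on a GPU the order of those reductions can change with batch size and scheduling. A minimal illustration:

```python
# Floating point addition is not associative, so the order of a reduction
# changes the result. On a GPU that order can depend on batch size and kernel
# scheduling, which is one reason "identical" inference runs can differ.
# Batch-invariant kernels pin the reduction order so results are repeatable.

a = (0.1 + 1e16) - 1e16      # 0.1 is absorbed by the huge value first
b = 0.1 + (1e16 - 1e16)      # the huge values cancel first, 0.1 survives
print(a, b, a == b)          # 0.0 0.1 False

import random
random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
print(sum(values) == sum(reversed(values)))   # typically False: order matters
```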
Now, they have committed to sharing all their findings, and they are launching what they call Connectionism, which is a blog series to share and post everything that they are doing, everything that they learn, including their code and insights, for the benefit of the public, basically like OpenAI was in its early days.

(48:02):
And as you probably know, most of the people behind Thinking Machines Lab are executives and top researchers from OpenAI. And since this company was able to raise $2 billion before even explaining what exactly they are going to do, this could be a great segue into the next segment of this episode, which is the crazy amounts of money being raised in the AI world right now.

(48:22):
And it'll actually be a fullcircle from the beginning of
this episode when talking aboutthe amount of expenses that
OpenAI are going to have.
So Databricks has just closed a$1 billion funding ground
evaluating them at a hundredbillion plus.
This is just nine months aftertheir previous raised, and it's
fueled by their explosive growthto$4 billion in annual recurring

(48:43):
revenue.
So Databricks just hit this 4billion a IR in Q2 of 2025,
which is a 50% year over yeargrowth with AI products alone,
heating$1 billion in revenue runrate.
Now, another very interestingaspect, as you know, they're
mostly providing database and AItools on top of these databases.
And their CEO Ali Gaze revealedthat, and I'm quoting, A year

(49:03):
ago, we saw in the data that, for the first time, 30% of the databases were not created by humans.
They were created by AI agents.
And this year, that statistic is 80%, basically meaning that AI is creating its own databases in order to run things more efficiently.
Another company that has grown like crazy because of this capability is Supabase, which is one of their biggest competitors, a database platform that was built to

(49:26):
enable AI to spin up the databases that it needs, on the fly.
And now Databricks is also moving in the right direction.
So 80%, four out of every five databases being created on the Databricks platform, are created by AI and not by humans.
This, again, just makes my brain explode when you combine it with everything that I said before, and you understand where the future is going:
AI generating another AI that is generating agents that can do

(49:47):
things faster, that can build their own databases, and so on and so forth.
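To picture what an agent spinning up its own database looks like, here's a minimal, hypothetical sketch in Python using the standard library's sqlite3 module. It is not Databricks or Supabase code; the planning step that a real agent would delegate to an LLM is hard-coded here, and the function name and schema are invented for illustration.

```python
import sqlite3

def agent_spin_up_database(task_description: str) -> sqlite3.Connection:
    # Hypothetical planning step: a real agent would ask an LLM to propose a
    # schema for the task; here the "proposed" schema is hard-coded.
    proposed_schema = """
        CREATE TABLE IF NOT EXISTS weekly_revenue (
            company TEXT,
            week    TEXT,
            arr_usd REAL
        );
    """
    # The agent creates its own scratch database on the fly, no human DBA involved.
    conn = sqlite3.connect(":memory:")
    conn.executescript(proposed_schema)
    return conn

conn = agent_spin_up_database("track the AI company revenue figures mentioned this week")
conn.execute("INSERT INTO weekly_revenue VALUES (?, ?, ?)", ("Databricks", "2025-W37", 4e9))
print(conn.execute("SELECT company, arr_usd FROM weekly_revenue").fetchall())
```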
Another huge valuation jump this week is Mercor. For those of you who don't know, Mercor is a competitor of Scale AI, which is the company that was acqui-hired by Meta in order to take over their chief executive officer, Alexandr Wang, who is now running superintelligence there.
Well, Scale AI was number one.
Mercor was one of its competitors.

(50:09):
And now Scale AI has been more or less disassembled, because they lost a lot of their talent to the Meta Superintelligence Labs, and a lot of their big clients left them because they don't trust that their data is not gonna be available to Meta.
Well, that has driven incredible growth for Mercor, which is one of their main competitors, and now they've just raised a Series C round, which values them at over $10 billion,

(50:31):
up from $2 billion just seven months ago.
But again, while this is explosive valuation growth, their revenue has also exploded, and they're now at a $450 million revenue run rate, which is a huge jump compared to what they had just seven months ago.
Now, again, to put things in perspective, this company was founded in 2023, so two years ago, and they're now worth $10

(50:54):
billion, and they're generating $450 million a year.
Another company that is exploding in its valuation is Perplexity. Perplexity just raised $200 million at a staggering $20 billion valuation. This is just two months after a $100 million raise, and it is the third raise that they're doing in 2025, with their valuation growing in 2025 from

(51:14):
$14 billion to $20 billion.
Now their revenue, apparently, according to a source that spoke with TechCrunch, is approaching $200 million a year, which again is very significant given that most of their users are free users.
Now, I shared with you earlier about Replit's release of their agent, version three.
Well, they have just closed a $250 million funding round at a

(51:35):
$3 billion valuation, which is nearly tripling its worth since 2023, with their annualized revenue over 50x what it was just under a year ago.
They're claiming to have a community of over 40 million people that is using their platform to vibe code different solutions.
Their revenue is right now around $150 million ARR, which

(51:56):
is a 50% jump from the $100 million ARR they had in June.
So this is 50% growth in less than three months, for a company that's not growing from a hundred thousand dollars, but from a hundred million dollars.
This is just showing you the appetite that the world has today for these kinds of solutions, and it's showing you why the crazy amounts that they're raising are actually in tandem with the revenue that they're generating.

(52:17):
And this connects to the raise we discussed last week of Sierra raising a new funding round at a $10 billion valuation.
Again, that company was established by Bret Taylor, who used to be the co-CEO of Salesforce.
He's on the board of OpenAI, and he left Salesforce in early 2023 to establish this company that builds agents.
By the way, what's interesting about them, going back to connecting a few dots of this episode, is that they are breaking the traditional

(52:39):
way that software is being paid for.
Sierra charges you only when the agents autonomously resolve customer issues, and it's free if it's escalated to humans.
So basically, you are not paying for usage, you're paying for success.
Think about how profound that is: they trust their agents so much that you don't pay for them if they don't actually solve customer service issues.
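As a back-of-the-envelope illustration of what outcome-based pricing means, here's a tiny Python sketch. The per-resolution price and the field names are hypothetical, not Sierra's actual rates or API; the point is simply that escalated tickets generate no charge.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    resolved_by_agent: bool    # the AI agent closed the issue on its own
    escalated_to_human: bool   # a person had to step in

def outcome_based_bill(tickets, price_per_resolution=2.00):
    # Only tickets the agent fully resolved on its own are billable;
    # anything escalated to a human costs the customer nothing.
    billable = [t for t in tickets if t.resolved_by_agent and not t.escalated_to_human]
    return len(billable) * price_per_resolution

tickets = [Ticket(True, False), Ticket(False, True), Ticket(True, False)]
print(outcome_based_bill(tickets))  # 4.0 -> only the two autonomous resolutions are charged
```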

(52:59):
And that explains why they're growing so quickly.
Now, one of the things that Taylor said is that he agrees with Sam Altman, who said that someone is going to lose a phenomenal amount of money, we don't know who, and a lot of people are going to make phenomenal amounts of money.
And he's comparing the AI frenzy to the internet boom back in the early two thousands, where huge flops like Pets.com, which

(53:20):
raised stupid amounts of money and completely disappeared, coexisted with giants like Amazon and Google, which, as we know, came to conquer the world.
And so the reason they're saying that is that there's obviously a bubble, but it's a very different bubble than the one that existed back in 2000.
And I lived through it.
I worked at a startup that, in early 2000, had twenty-something employees, and I was one of them.

(53:42):
And then, by late that year, we had over a hundred.
So we grew by almost 5x, only to come back down to like 35 people a few months later when the whole thing collapsed.
Now, do I think it's the same scenario right now?
The answer is no.
And the reason I think the answer is no is that there is real revenue being generated by many of these companies, very significant revenue, as I shared with you in

(54:02):
the examples in the past few minutes.
That being said, there are many, many, many companies that are raising significant amounts of money that are going to evaporate and disappear, and that will lose a lot of money for the VCs and the platforms that are investing in them.
And it's not always easy to know which ones are going to survive and which ones are not.
Now, there are more and more discussions and articles out

(54:23):
there about a potentially looming AI winter.
There was an article in Fortune magazine called, Is Another AI Winter Coming?
There were several other articles like this.
If you remember, we talked about the MIT study that claimed that 95% of AI pilots don't provide any value, and I discussed with you in past weeks why I completely disagree with the way they performed the research and the conclusion that

(54:44):
they got to.
But it was still out there, and it was published by MIT, so people take it seriously.
And then you have statements like the one we just shared from Taylor, or the statement by Sam Altman that some venture-backed AI startups are grossly overvalued.
But the interesting thing in this article is that it shows previous cycles in which there was significant hype around AI that then completely disappeared, starting from 1958, when

(55:07):
Frank Rosenblatt, who was an AI researcher, claimed that AI would soon recognize people and translate languages instantly.
That was in an article, again, in 1958.
Then obviously AI slowed down, then there was another spike, then another winter in the sixties and seventies.
Then an AI leader in the early seventies called Marvin Minsky projected that in three to eight years we would have a machine with the general intelligence of an average human, and we know

(55:30):
that didn't happen.
Then another winter happened in the eighties, after the US government spent $1 billion at that time, which is like $10 billion right now, on what were called expert systems, which were AI solutions for businesses, and that was completely abandoned afterwards, and so on and so forth.
So there are several different parallels to what happened back then, both in terms of infrastructure investment as

(55:51):
well as in the projections that are being made.
I think it is very, very different this time around.
I think the technology is actually there.
I think a lot of the infrastructure that did not exist back then, like high-speed internet, like data centers that are interconnected, like GPUs, like the transformer architecture, all of these things enable what we have today, which is actual and not conceptual.
And I think that's why this article, while it's an

(56:12):
interesting read, is not really connected to reality, because I think what we have today is real and not just projections.
Now, could the projections of when we're gonna hit AGI and ASI be stretched?
Yes.
Whether it's two years, five years, or ten years is still debatable, but I think that timeframe will yield these kinds of results.
And like I said multiple times, I don't think it matters.

(56:34):
I think every single week we have new breakthroughs that are enabling things that were not possible before.
We just shared several of them on this episode that happened just this past week.
Each and every one of them opens a whole universe of AI use cases that are in their infancy and are gonna keep on growing and developing.
I personally don't think there's an AI winter coming.
And I actually think that Q4 of this year is gonna be complete

(56:55):
madness when it comes to new releases from the big labs, as well as tools built on top of them, that will continue the crazy trend that we have seen in the last two years.
A few additional pieces of information about the infrastructure that is running all of this.
Microsoft just signed a deal with Nebius, which is a company that provides data centers, and the deal is worth $19.4 billion

(57:17):
to support and host AI data centers for Microsoft.
If you remember, early this year there was a whole thing about an AI winter coming and things slowing down because Microsoft had canceled several data center builds that it was planning to do.
And then Satya basically came out and said, no, we're not slowing down, we're just switching the strategy from building to leasing.
And everybody thought, that's an excuse.

(57:37):
Well, that was not an excuse, and here it is happening.
So they just signed a deal with a single company for $19.4 billion worth of data center hosting, and that could grow even beyond that number.
Microsoft also signed a very interesting deal with the US government, basically agreeing with the US General Services Administration, known as the GSA, to provide over $3 billion in AI and cloud services for federal agencies, potentially saving

(57:59):
them another $3 billion by the end of next year, so for a total of $6 billion in savings.
And what they're doing is providing Microsoft 365 Copilot at no cost for 12 months, with a potential extension of another 12 months, with a tailored version that's gonna be tied to the specific needs of the federal government.
In addition, they're providing discounts for Microsoft 365, for

(58:20):
Azure cloud services, for Dynamics 365, and for cybersecurity tools, for 36 months.
Satya Nadella, their CEO, said the following: with this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers

(58:41):
more than $3 billion in the first year alone.
Now, if you remember, OpenAI did almost exactly the same thing just over a month ago.
On August 6th, OpenAI shared that they're going to provide the GSA and other federal agencies with ChatGPT for all employees for $1 per year per agency, which is basically free.
And so, if you remember, at President Trump's inauguration,

(59:03):
all the tech giants were sitting next to him.
And if you thought, what the hell is happening here?
Well, this is what's happening here.
Trump is a businessman.
That's what he has done his entire life.
That's what he knows how to do, and that's how he runs the government as well.
So he is giving them an environment to thrive, with very supportive regulations and very supportive budgeting across different

(59:24):
things.
And in return, he's getting benefits, huge benefits, billions of dollars of benefits, for the US government and, in the long run, for the US economy.
Now, whether you like the way Trump is acting or not, there are definitely big benefits to the government and hopefully, like I said, in the long run to the US economy, from the way he's acting.
Staying on the topic of infrastructure, Oracle's Q1 2026

(59:47):
results were just released, and they are showing a staggering $455 billion in remaining performance obligations, with their CEO Safra Catz stating that they signed four multi-billion-dollar contracts this quarter.
And these are not unknown companies that are participating.
The companies they signed these deals with are OpenAI, xAI, Meta, Nvidia, and AMD.

(01:00:07):
All of them are part of this crazy frenzy for additional compute, and Oracle is one of the bigger winners, which is really surprising.
Oracle is kind of one of those old-school companies that has been doing the same thing for the last 30 years or so, and now their Oracle Cloud Infrastructure is absolutely booming, and they're projecting that their cloud

(01:00:27):
infrastructure revenue is gonna go from $18 billion this year to $144 billion in 2030.
This has sent their stock up 40% in a single day, making Larry Ellison, their founder, the wealthiest person on Earth, surpassing Elon Musk.
But so far in this episode, all the companies we talked about, or most of the companies we talked about, are US-based, but

(01:00:49):
the Chinese are not stopping either.
So both Alibaba and Moonshot AI, which is a company that is backed by Alibaba but is its own company and startup, have released models this week, and both of them made it into the top 10 of the LMArena text arena rankings.
Now, to put things in a bigger perspective, Chinese companies have five or six models in the top 10.

(01:01:09):
Many of them are sharing the eighth spot and the tenth spot with other US-based companies.
So if you're asking how there can be so many models, here's how it breaks down: the highest ranking they have is six, and then there are several at numbers eight, nine, and ten.
Qwen 3 Max Preview is now in sixth place on the text arena ranking, which makes it the top Chinese model right now.
The two crazy aspects of it are that last year there were zero

(01:01:32):
Chinese models, and Alibaba had zero AI models at all, and now, a year later, they are at number six in the global ranking, and they are continuing to develop at a crazy speed.
And that's without having access to the latest GPUs from Nvidia.
The other model, Kimi K2-0905, where the number is the date it was released, is tied in place with other competitors such as

(01:01:54):
DeepSeek R1 and Grok 4, but again, within the top 10 ranking.
So what does that tell us?
It tells us that China is very close behind when it comes to developing advanced models, and that they are very fast at catching up.
Again, Alibaba had no models a year ago, and now they are at number six.
Alibaba has all the benefits that Google has, right?

(01:02:14):
They have the same access to compute, they have the same access to talent, they have the same access to resources, and they have the same access to data, because they're the parallel of Google in the Chinese universe.
So they have a huge benefit across everything that they need in order to build a very successful AI company.
The only thing they're lacking is access to the same quantity of advanced GPUs as the Western platforms, and yet they're able

(01:02:37):
to come very, very close.
So I think the race is definitely on, and these models are gonna keep on getting better, and the fact that they don't have access to these GPUs just makes them evolve and innovate faster across other aspects in order to close the gap.
And now to the final and somewhat fluffy news of the day: Albania just announced that they have appointed Diella, who is the

(01:02:58):
world's first AI-created cabinet member, and Prime Minister Edi Rama shared that at a Socialist Party meeting just this week.
So if you're asking yourself what this cabinet member is supposed to do, it is supposed to oversee all public procurement processes to ensure tenders are 100% free of corruption, which is a big problem that Albania is facing

(01:03:18):
right now.
While Diella does not hold any voting rights in the cabinet, it is going to be a cabinet member, and it's going to focus on overseeing government procurement.
And I think this is a very interesting approach.
If you think about the different types of governments that we have around the world, many countries appoint cabinet members not based on their political agendas, but based on their experience, in order to provide the best public service.

(01:03:40):
And this is a great example of how AI can be used and leveraged to actually do good for society without taking away the decision making from humans.
I really like that approach.
Whether it's actually going to work or not, time will tell.
It will be very interesting to follow, but I expect we will see more and more of that in the coming years.
That is it for today.
We will be back on Tuesday with a fascinating episode comparing

(01:04:03):
from my personal experience, Gemini versus ChatGPT, and seeing where each of them excels and which of the tools you should use for which use cases.
I'm going to be sharing multiple of these in detail, so come and join us on Tuesday.
If you are enjoying this podcast, please open your phone right now, click on the share button, and share it with a few people who can benefit from it.
I know I'm saying this every week, and I know some of you are

(01:04:24):
doing this, because you are connecting with me on LinkedIn, which I highly appreciate.
So any of you who want to connect with me, please connect with me on LinkedIn.
I really love hearing from you, the listeners, what you think, what you consume, what other topics you think I should cover, and so on.
But also share it with other people who can learn about AI, how AI is progressing, and how it's going to impact everybody's lives.
Because I do believe deep inside that the more we understand

(01:04:45):
this, the more we can push the AI future to be a better future versus a very scary future.
So play your role: share this with people, rank and rate the podcast on Apple Podcasts or Spotify, and keep on experimenting with AI and share with the world what you are learning, because that's your way to help in this process.
I appreciate each and every one of you for listening, and have an amazing rest of your weekend.