Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello and welcome to a Weekend News episode of the
(00:03):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have another week with a lot of really important and big announcements, new models that came out, lots of discussion about AI bubble or not AI bubble, lots of activity in M&A and partnerships, lots of politics
(00:26):
as it comes to AI, and so on.
And I actually went and took a look back: on the 24th of May of this year, I released episode 191, which was called The Craziest Week in AI News History: Microsoft, Google, Anthropic, and OpenAI. Major announcements. And it seems to me that in the past few weeks this is becoming the norm. Like, all the big labs come up with new announcements or new
(00:49):
models, or both, all at the same time. And this week you add to the mix AWS, which had a big event, and so on.
And we have so much to talk about. I could have probably recorded two episodes, but I'm gonna try to give you as much information as possible in just one, and the rest is going to be in our newsletter. So everything that is not gonna make it into this episode, and I can tell you it's gonna be a lot, and things that you probably
(01:10):
want to hear about, and I will tell you a little bit about it afterwards, will be in our newsletter, and you can sign up for the newsletter by clicking on the link in the show notes. But let's get started and talk about what's actually happened this week.
I was able to mostly avoid the AI bubble conversation in the
(01:31):
past few weeks, even though it's been brewing and growing every single week. But I cannot ignore it anymore, because it's literally coming from every single direction, including some of the leaders in the industry itself. So let's do a quick review of some of the things that happened this week and some of the statements that we got this week.
The first one is IBM CEO Arvind Krishna, who on The Verge's Decoder podcast basically
(01:54):
sounded the alarm on the current cost of infrastructure. And he broke this down to simple math. He said that filling a single one-gigawatt AI facility with the necessary compute hardware comes to a price tag of approximately $80 billion. That's one gigawatt.
Now, public and private sector announcements currently suggest
(02:16):
plans to deploy roughly 100 gigawatts of capacity dedicated to AGI-class preparation, combining all the different announcements from all the different companies. When you do the simple math and you multiply the cost per gigawatt by the number of gigawatts, you get to $8 trillion of potential investment that these companies are talking
(02:37):
about as the needed infrastructure for what they foresee in the not too far future. So what Krishna is saying is that in order to just finance the capital, just the cost of capital to pay back the debt and the loans and the raising of money that these companies need to pay the $8 trillion, they need to generate $800 billion in annual profit, not
(03:00):
revenue, but profit, in order to just pay for the investment and sustain the payments on the loans that they're taking or the money that they're raising in order to get this kind of thing.
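Krishna's back-of-envelope math can be checked in a few lines. Note that the 10% annual cost of capital here is my assumption, not a figure he stated; it is simply the rate that makes his $800 billion number follow from the $8 trillion total:

```python
# Sketch of Krishna's infrastructure math from the Decoder interview.
cost_per_gw_usd = 80e9         # ~$80B to fill one 1-gigawatt facility
planned_gw = 100               # ~100 GW of announced capacity
total_capex = cost_per_gw_usd * planned_gw

# Assumption: a 10% annual cost of capital on the total investment.
annual_cost_of_capital = 0.10
annual_profit_needed = total_capex * annual_cost_of_capital

print(f"Total capex: ${total_capex / 1e12:.0f} trillion")
print(f"Annual profit needed: ${annual_profit_needed / 1e9:.0f} billion")
```

The striking part is that the $800 billion is profit needed just to service the capital, before any of the hardware pays for itself.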
To make it even worse, he's saying that most of this infrastructure has about a five-year refresh cycle. If you want his exact quote, he said, "You've got to use it all
(03:20):
in five years, because at that point you've got to throw it all away and refill it." And he's not saying that because it's becoming obsolete; the hardware still works. It is because the newer versions of hardware just make the old ones look like a joke, both in terms of cost of operation as well as in terms of capabilities. And so the lifespan is about five years.
(03:40):
We're gonna get to talk more about that in a few minutes. Now, while Krishna acknowledges the benefits of using AI tools and how they're gonna drive significant enterprise productivity and growth, he does not think that the ROI is there, and he also doesn't think that the current path we are on, that of large language models, can lead to AGI or superintelligence.
(04:02):
He specifically said that the chances that the current LLM-based AI will reach AGI are between zero and 1%. So he's a strong believer that what we have right now is helpful, but it is not all that's needed in order to get to AGI. And there are many others who agree with him, and he also thinks that the current investment is outrageous compared to the potential returns.
(04:23):
To put things in perspective, Google announced this week the incredible number that they're going to grow their capacity to support AI a thousand x. This staggering target comes directly from Amin Vahdat, who is the head of Google AI infrastructure,
(04:43):
who is the guy that is going to actually make this happen. So this is from a highly reliable source. And he revealed a plan in one of the recent all-hands to double the serving capacity every six months until they get to a thousand x the current capacity they have right now. Now, if that sounds insane to you, here is his specific quote, and we heard similar things from other people in the past.
(05:04):
He said, and I'm quoting, "The risk of underinvesting is pretty high. The cloud numbers would've been much better if we had more compute." So basically what he's saying is that currently their supply is significantly below the demand they're seeing right now, not to mention the demand that they're expecting to have in the future. And hence, they are planning to double the amount of compute
(05:24):
every six months, which is absolutely insane.
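For a sense of the timeline that plan implies, a doubling every six months compounds fast; this small sketch shows how many doublings it takes to pass 1,000x:

```python
# "Double every six months until a thousand x" - how long is that?
capacity = 1.0   # today's capacity, normalized to 1x
months = 0
while capacity < 1000:
    capacity *= 2    # one doubling
    months += 6      # every six months

# Ten doublings (2**10 = 1024) get past 1,000x in about five years.
print(f"{months // 12} years of doubling reaches {capacity:.0f}x")
```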
Now, it's not just about getting more compute, it's also about getting better compute. They're planning to deploy their seventh-generation tensor processing units, their TPUs, which are their homegrown hardware that they've been training and deploying all their AI on, and which connects very well to the previous point we talked about.
(05:44):
They're going to deploy new hardware, which will probably also replace at least some of the old hardware that they have right now, because it's faster, better, and cheaper. Now, according to this article, the aggregated investment in capital expenditures by Google, Microsoft, Amazon, and Meta this year will exceed $380 billion.
(06:05):
Add some of the smaller players to that and you're getting to more than half a trillion in just a single year.
Another person who made similar statements, that the rate of investment right now does not make any sense, is no other than Dario Amodei, the CEO of Anthropic. He was on stage at the DealBook Summit of the New York Times, and he made some interesting statements.
(06:26):
One of them is that he believes that some of their competitors, which he did not name, but we can assume who that is, are, and I'm quoting, "pulling the risk dial too far," and he also used the term "YOLOing billions of dollars." For those of you who don't know the term YOLO, it's "you only live once": basically, what the heck, let's try this out. You only live once, but they're betting hundreds of billions of dollars
(06:49):
on that. Again, he did not name OpenAI specifically, but I think it was pretty obvious who he's referring to.
He also made some very interesting comments. You know, their own growth has been staggering, right? They made a hundred million dollars in 2023, and they're on a trajectory to making between eight to 10 billion this year, just two years later. This is insane growth, but he said, and I'm quoting, "It will
(07:10):
be really dumb to assume this trajectory is guaranteed for the future," and he's emphasizing that he's finding it very, very hard to project what their future looks like. He's saying that next year they can make anything between 20 billion to 50 billion, which is a pretty broad range if you're trying to plan how much income you're going to have and align your expenditures accordingly.
(07:31):
He also mentioned the same concept of the obsolescence of chips after a relatively short amount of time. He said that while the chips are still working, they're not gonna fail, they're becoming economically obsolete, because newer, faster, cheaper versions arrive, and then it doesn't make any sense to use the old ones anymore, which means the billions that you're investing are depreciated very, very
(07:53):
quickly. And he's saying that Anthropic and all the other scalers have the same real dilemma: if you underinvest, you might miss this huge wave, and if you overinvest, you face a complete catastrophe of destroying both your company as well as all the other people who invested money in you along the way. Now, he's saying that from the way they are planning their future and the amount of money they're raising and the way
(08:14):
they're spending, they can survive in, and I'm quoting, "almost all worlds," basically whatever economic future is around the corner. But he also added, and I'm quoting, that he "can't speak for other companies," again implying that other big scalers may not be in the same safety zone as Anthropic right now.
We're going to continue to talk about the bubble in a second,
(08:36):
but since we're already talking about this interview with Dario Amodei, he also reiterated things that he said in the past as far as his fears for the economy because of the loss of jobs in the very near future. He said that he believes that the free market alone cannot absorb the, and I'm quoting, "massive labor market disruption"
(08:58):
that AI is going to unleash. He repeated something that he said in the past, that he believes that AI could wipe out 50% of entry-level white-collar jobs within just a few years, and he's suggesting that the government will have to step in in order to stabilize the economy in a situation like that.
This is a very clear warning by one of the people who is most
(09:20):
knowledgeable about what's coming. We're gonna talk about it in a minute. They just released Claude Opus 4.5, which is an incredible model already, but that is not what they're testing, right? That's what they're releasing. Meaning he knows what's coming in six months and 12 months, and when he's saying it will be able to replace 50% of entry-level jobs, he knows what he's talking about.
He's not assuming any of that. Going back to the AI bubble
(09:43):
discussion, an article on the Futurism website was talking about the fact that the corporate spending and overall crazy AI growth that we've seen in the corporate world in the past few years is slowing down, and in some cases even reversing. The most critical stats reported in this article come from no other than the US Census Bureau, so this is not a private entity
(10:07):
that has anything to gain from this. This is actual statistics that they collected, which found that the percentage of Americans who are using AI to produce goods and services, so not just general use, but something that is used to generate work at large companies, fell from 12% to 11% between the two recent surveys.
Now, this is not a big decline, but it is a decline compared to the
(10:30):
very significant growth that happened in the previous rounds of the same survey. Maybe the most staggering stat in that report said that 81.4% of employees in mid-size companies, a hundred to 250 employees, reported they did not use AI in the last two weeks.
(10:51):
81.4% of mid-size company employees in the US reported not using AI in the past two weeks. That is an increase from the 74.1% that was reported in March.
Now, even major corporations with more than 250 employees had an increase in the number of employees who reported that they
(11:13):
haven't used AI, compared to the previous survey. The numbers here are lower, but they're still very high: 68.6% reported not using AI in two weeks, compared to the 62% who reported that earlier this year. Another data point in that article came from an economist at Stanford who is tracking generative AI usage, and he found the same results.
(11:34):
He found that usage has fallen from 46% to 37% in just a few months, basically in a single quarter.
Now, as somebody who is meeting and working with companies large and small every single week, I can tell you that I believe these numbers. I don't necessarily believe the decline, but I definitely see
(11:54):
the numbers as being realistic. I meet with solopreneurs and small business owners when I teach my courses, plus most of the work that I do is private workshops for large companies. Most of them are in the hundreds of millions or in the billions, and I can tell you that the level of usage and understanding of how to implement AI effectively as a company
(12:14):
initiative, not specific individuals who are doing cool stuff, is very low across the board in all these different companies, which, going back to the bubble discussion, has two sides to the story. One aspect is there's a lot lower usage than these companies are investing for. On the other hand, very few people are currently using AI,
(12:35):
which means there's huge room for growth as far as AI adoption and implementation.
Before I tell you what I think personally, I will add one more thing. Michael Burry, who is the guy from the movie The Big Short, the guy who predicted the crash of the subprime loans and the whole housing market collapse, has now made a very strong statement. And we talked about it a few weeks ago, that he is making
(12:56):
that bet and putting his money where his mouth is, saying this is gonna be a big crash. But he has now explicitly put out a statement that says that the stock market could face a crash that is worse than the 2000 dot-com bubble. And he is tying it to two separate things. One, he's citing that he believes that valuations are currently dangerously inflated, but also he's adding a structural investment concept that is now very, very
(13:18):
different. He's saying that currently over half of US equity assets sit in ETFs and are not held and managed by individual investors. And because of that concentration, he believes that small changes in the direction of the AI market could trigger a very significant shock, because of the concentration and
(13:41):
individuals not taking advantage of immediate changes in the market.
And so that's one aspect that he's raising. The other thing that he's saying is that NVIDIA's price-to-book ratio now surpasses any stock from the dot-com bubble period, and that the NASDAQ 100's price-to-sales multiples are climbing back towards their 1999 to 2000 peaks, just before the market
(14:05):
collapsed. Now, add to that his specific warning that says, and I'm quoting, "Many tech firms are stretching depreciation on AI hardware. This reduces expenses on paper and inflates reported profits." And he's warning that when the real costs hit, and he's stating, as everybody else has said, that the actual depreciation period is relatively short, then the profits will not
(14:28):
be sustainable anymore and earnings could plummet, because these costs are actually coming way faster than they're showing up in the books right now. So overall, he's making some very strong points.
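The stretched-depreciation point is easy to see with toy numbers. The $10 billion fleet and the 3- versus 6-year schedules below are my illustrative assumptions, not figures from the statement:

```python
# Straight-line depreciation of a hypothetical $10B GPU fleet.
# Stretching the schedule from 3 to 6 years halves the annual
# expense on paper, even if the hardware is economically obsolete
# on the shorter timeline.
fleet_cost = 10e9

for useful_life_years in (3, 6):
    annual_expense = fleet_cost / useful_life_years
    print(f"{useful_life_years}-year schedule: "
          f"${annual_expense / 1e9:.2f}B expensed per year")
```

Same cash out the door either way; the longer schedule just reports more profit today and pushes the hit into later years.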
Now, connect the dots to everything that we talked about and what I personally think about this. As I mentioned, I work with large and small companies, and I see the lack of adoption right
(14:51):
now across the board. Very few people and very few organizations actually know how to implement AI effectively, and the level of usage right now is ridiculously low compared to what it could be if people just knew how to use today's technology. Forget about what's coming in the next few months or years. If we stopped development right now, think about the amount of scaling that
(15:13):
would happen just because companies will learn, from people like me or on their own or in different ways, how to implement AI effectively. They will use AI not twice as much as they do right now, but a hundred x what they're using right now, which means the demand is not even close to what it is going to be, and AI is going to
(15:34):
be embedded into everything, whether we like it or not, which means from a demand perspective, it is definitely there.
The thing that is not clear, and with that I cannot obviously argue with Michael Burry, because he knows about this way, way more than I ever will, is: does the current level of investment and expenditures and commitments align with that growth in
(15:58):
demand? And to be fair, I don't know. The one thing that is clear to me is that not all these investments can be successful. The amount of investment right now is insane, it is across the board, and not all these companies will survive. Meaning there are going to be companies that right now are darlings of the industry that will either take a serious hit or will not exist a few years from now, because there will be a
(16:21):
correction. Is that a doom scenario for everything AI? A hundred percent not. Again, I do not see how we can put this back in the bottle, or in the box, or whatever you wanna call it. AI is moving forward. It is going to be embedded into everything we do, from our personal lives, to businesses, to relationships, to everything you can imagine, and there's no going back. So the demand will keep on growing, but there will be corrections and there will be significant hits for specific
(16:43):
companies. This is my personal opinion.
Now, switching gears to our next topic, which is a big topic: Sam Altman basically hit the panic button inside of OpenAI and announced a code red as a result of the release of Gemini 3, the amount of great press it has been getting, and the interesting growth it has been driving to Google.
(17:04):
So, based on an internal memo that is quoted in The Information, which is a publication that's been very good at getting internal information and sharing it accurately, Altman issued a company-wide code red, freezing new products and slowing down or putting on pause other products to focus on ChatGPT's reputation.
(17:25):
In his statement, Altman said, and I'm quoting, "We must concentrate all capabilities to improve the overall everyday experience of ChatGPT," and then he explained that this needs to be in the areas of speed, personalization, and the range of questions it is going to answer versus refuse to answer.
To put things in perspective, OpenAI is in the 800 million
(17:47):
weekly active users range. They've been in that range for the past few months, but Gemini is reportedly just hitting 650 million monthly active users as of October 2025, up from 450 million just a couple of months ago. So huge growth for Gemini, while ChatGPT's growth has more or less plateaued. So which projects are gonna be put on hold? Pulse, which is an
(18:11):
announcement that they made a few months ago, which provides people a push notification every morning on topics they might be interested in; an ad platform that they were planning to roll out in order to support the revenue that they need; and any initiative around agentic AI, specifically in healthcare and shopping.
Now, as part of this focus, they are already planning to release
(18:32):
a new reasoning model as early as this coming week or weeks. This new model was internally labeled Garlic, and once it gets released, it's probably gonna be GPT-5.2 or 5.5 or whatever number they're planning to call it. They're actually planning to release two different variations. One is an immediate one, which is not the actual Garlic model. It is just an improved thinking model that is
(18:54):
presumably better than what they have right now and potentially aligned with Gemini 3 and Opus 4.5. But they're planning to release the Garlic model, the full Garlic model, in the first quarter of 2026.
Now, the interesting thing about this model that was named Garlic was shared by Mark Chen, who is their chief research officer, who said that it is a milestone in the way they are pre-training
(19:19):
their models. So if you remember, when they were about to launch GPT-5, they delayed it several times, and there were many rumors that many of their pre-training runs failed. Every one of these training runs costs hundreds of millions of dollars, and sometimes can take months, and so when it fails, it is a very big deal. And if they have some kind of a breakthrough, similar to what Google said they achieved when training Gemini 3,
(19:41):
that will allow them to not only deliver the immediate model better, but potentially to deliver future models faster, better, and cheaper. Or as Mark Chen said, it will allow them to, and I'm quoting, "infuse a smaller model with the same amount of knowledge," something that previously required significantly more infrastructure and architecture, and now could be done in much faster, cheaper, and better ways.
(20:03):
Now, if you remember, last week I told you that there are rumors about an internal model named Shallotpeat, where apparently Shallotpeat is this new way to do pre-training, and the Garlic model is using that concept in order to deliver a full new model that, as I mentioned, will be released sometime in Q1 of 2026. But we are presumably getting a brand new thinking model
(20:25):
sometime in the next few days or couple of weeks. And obviously, as with all these labs, this Garlic model that we're getting in either a month or two is not the final thing. Chen stated, and I'm quoting, "OpenAI has already moved on to developing an even bigger and better model thanks to the lessons it learned" with Garlic. So, quick summary: we should expect a new thinking model within a few days or
(20:48):
weeks, we should expect Garlic, whatever it is going to be labeled, to be released in Q1 of 2026, and there's already a bigger, better model in the making, and everybody in OpenAI is focusing on that right now.
As far as getting back on top with ChatGPT, the interesting thing for me in all of this is that it was exactly the other way around when ChatGPT came out.
(21:10):
So if you remember, Google announced a code red after the release of ChatGPT, and everybody was all hands on deck working on trying to get their models to par. If you remember, Google in the beginning failed miserably, and they delivered models that were unacceptable. And I said back then that there is no way Google is not going to win this challenge, because they have everything they need in
(21:32):
order to win it. They have potentially the brightest minds, the most successful lab with the longest history, the most data because of Google indexing the internet and because of YouTube, the biggest distribution, really deep pockets, their own hardware, literally everything they need in order to win this race. And not surprisingly, they're now ahead of everybody else when it comes to both the underlying model as well as the
(21:55):
integration of it into everything Google. And I'm really excited about some of the new and latest updates that they made across the entire Google platform. I will probably record a Tuesday episode about this in the next couple of weeks, talking about all the little tweaks that they added, which are absolutely amazing as far as providing value on the day-to-day.
Now, speaking about OpenAI and the value that it provides on
(22:16):
the day-to-day: OpenAI, a week ago, updated voice mode in something that at face value looks completely unimportant, but is actually a huge improvement. So voice mode inside of ChatGPT, which I use more or less every single day, is now embedded into the regular chat user interface. Previously, every time you clicked on voice mode, it would
(22:37):
stop the chat and it would bring up this bubble that would be blinking on the screen, and you would be talking to the bubble. And then when you were done and you stopped it, it would take a few seconds and then it would write out the chat it had with you. And that wouldn't always work. It actually happened to me on the app yesterday. I had a strategic conversation with it about my software company that does agentic-based invoice reconciliation at scale.
(23:01):
Basically, it automates the entire process of vouching invoices at whatever level you want by using different agents in different steps of the process. So I'm doing a lot of strategy with ChatGPT about how to grow the business and how to deliver more value and things like that. And the last step of that conversation was not saved, which means I will have to do it again.
(23:22):
So this new update, when you're running it on the computer, and again, I'm sure it will come to the app very shortly, transcribes what you're saying and what the AI is saying more or less in real time, and it can do more than that. In one of the examples that they showed, the user was asking it about the weather, and it was showing graphics of how the weather is going to be in the next few days on screen while
(23:42):
having the conversation with ChatGPT. The benefit and the value of using voice mode just increased dramatically, because you can see the results of what it's doing as it is doing it, versus having to wait until the end of the conversation and then hoping that it actually captured all of it.
Huge change, which does not surprise me, because if you think about where they're going, specifically with their hardware
(24:05):
initiative with Jony Ive, this is the direction it's going, right? They're going into, I assume, an interface with no keyboard, potentially with no screen, that will still allow you to engage with AI and the world around you. And this requires this kind of integration, where you can speak with it and it speaks back, and it's not stopping any other process that is happening.
(24:25):
And so I personally, as a heavy user of voice mode, am very happy about this change. And again, it makes sense because of the direction that they are going.
So maybe a bubble, maybe not a bubble. OpenAI is in code red, and there is also similar news from their partner, Microsoft. Microsoft, which in the beginning of 2025 defined the year as the year of AI agents, is hitting a serious
(24:48):
speed bump when it comes to actually getting clients to adopt the technology. And based on another report in The Information, the tech giant is quietly lowering the sales and growth targets, more or less across the board, for all of its products that have to do with AI implementation. Microsoft's fiscal year ends in June, and 80% of their sales
(25:11):
teams did not hit their targets by the end of the year, and hence they have updated their targets for this fiscal year to lower than they were before.
These projections are cut not by a little: the initial sales target was 50% growth year over year, and it is now cut to roughly 25% over the last year.
(25:33):
Now, it's still a very significant increase. Again, 25% year-over-year growth at the scale that Microsoft is at is significant, but it was 50% as far as the projection that they had, and that, again, is what's driving their investments and expenses, when you expect this kind of return. So you cannot completely say, oh, 25 is great. It is not great when you're already spending money a year in
(25:55):
advance to build the infrastructure for 50% growth.
The article in The Information even showed a few examples. One of them is the private equity firm Carlyle, which has dramatically reduced its spending on Microsoft Copilot this fall, because they were struggling to, and I'm quoting, "reliably tap data from other applications." So basically, connecting Copilot to things that are not in the
(26:17):
Microsoft environment did not work well, and hence they were cutting back on their spending. Based on this article, OpenAI has done a similar process, where they have reportedly lowered their five-year revenue projections for AI agents, specifically reduced by $26 billion.
Now, there are several causes, obviously, for this slowdown, but some of the main ones that this article mentions are that AI
(26:40):
is still a statistical model that does not always do exactly what you expect, and there are areas such as finance and cybersecurity in which, and I'm quoting, "small mistakes can be costly." Basically, you cannot afford for it to be wrong 5% of the time or 2% of the time. It's just not acceptable. It has to be accurate every single time.
Since we're already talking about Microsoft, another interesting point that was shared in another article in The
(27:02):
Information is that Microsoft is planning to potentially replace the revenue from headcount-based licenses, lost because of agent takeover, with paid licenses for agents. So Rajesh Jha, who is a Microsoft executive vice president, basically thinks that the reduced human headcount for
(27:22):
Office 365 licenses, which will come because of automation that will replace these humans, will be more than replaced by licenses that will be sold and used for agents doing the same work. The way he's looking at it, these agents, and I'm quoting, "They're going to have an identity. They're going to be in an address book.
(27:42):
They're going to have a mailbox. They're going to need a computer to do its computation in a secure way. To me, all of those embodied agents are seat opportunities."
When it comes to addressing the fears of revenue loss because of job losses of humans, he just stated at a UBS conference, and I'm quoting, "I'm not seeing AI as driving down seats.
(28:03):
If anything, I think AI is going to be an opportunity for us to drive seat growth." My personal opinion on this is that it's total BS, and the reason I'm saying that is I'm seeing what I am implementing with many of my clients. Many of them are in the Microsoft world, and a lot of the stuff that AI can do completely reduces the number of seats, and it does not require, at least right now, buying any
(28:25):
additional licenses, because one user, a human user, with the right AI tools can do the work of three different users. So while his statement makes sense conceptually, let's start charging agents for licenses, I don't exactly see how that is going to happen, especially since they don't live in a vacuum and there are other companies providing similar solutions that may not require buying additional Microsoft licenses.
(28:47):
So if the only option was that, then maybe they could twist their clients' hands to actually do that. But they're not the only game in town. And so I do not agree with this projection, from a personal perspective.
Now, speaking of competition that is offering similar technology: AWS just had their big event, called re:Invent 2025, and it was all about agents and new models, as you would expect
(29:08):
from a company the size of AWS when it's doing a big event. So they just introduced a new class of AI agents they call frontier agents, which are designed to act as autonomous extensions of a team. Now, unlike previous bots, these agents can handle complex, multi-step tasks for hours and sometimes even days. In this initial release, there are three agents
(29:30):
included, but they're planning to develop more. The existing agents are named Kiro, which is a virtual developer that can navigate code repositories, fix bugs, and create code at a very large scale; an AWS security agent that acts as an always-on consultant that looks at new code that is being created and deployed and looks for potential security risks and loopholes in
(29:53):
the code to fix them. There was very big applause at the event when this was announced, because as of right now, humans can create a lot more code because of the usage of agents. So if you're creating a lot more code, you're potentially creating a lot more risks, and having an agent that continuously monitors these risks is obviously very important, definitely at the enterprise level.
(30:14):
And then the third one is a DevOps agent that serves as an on-call operational team member to respond to outages in your DevOps infrastructure. Again, a very important solution, especially when we've seen the recent outages that happened across the board on several different platforms. This is a very important tool that will be available to anybody who decides to use it.
(30:35):
They also announced their third generation of Trainium servers.
This is a completely new generation of chips that can run in what they call UltraServers, at very high capacity.
They deliver almost four and a half times more compute performance and four times greater energy efficiency compared to Trainium2, the previous generation.
They provide 4x faster response time and 3x higher
(30:59):
throughput on a single chip compared to the previous model.
What does that mean?
It means you can do everything you want to do at a lower price, and at roughly a quarter of the time.
They also introduced a new family of their Nova models, their homegrown AI models.
These models are good, but as of right now they are not competing with the top frontier models.
(31:20):
But they came up with a very interesting concept that they call Nova Forge.
Nova Forge is a service that allows companies to blend their own proprietary data with Amazon-curated datasets to train custom models based on the Nova models.
That basically means you can embed your company's know-how, data, and so on into the weights of the actual model itself
(31:44):
inside a training run.
And this, I assume, will be something that many, many companies will be highly interested in using.
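To make the "blend your data with curated datasets" idea concrete, here is a minimal sketch of what mixing two corpora for a training run could look like conceptually. Amazon has not published Nova Forge's interface, so the function name, the dictionary format, and the mixing ratio are all my own illustration, not the actual service.

```python
import random

def blend_datasets(proprietary, curated, proprietary_ratio=0.3, seed=0):
    # Build a training mix where roughly `proprietary_ratio` of the
    # examples come from the company's own data and the rest from a
    # curated corpus, then shuffle so the two sources interleave.
    rng = random.Random(seed)
    n_prop = round(len(curated) * proprietary_ratio / (1 - proprietary_ratio))
    sampled = [rng.choice(proprietary) for _ in range(n_prop)]
    mix = curated + sampled
    rng.shuffle(mix)
    return mix

# Toy corpora standing in for real training examples.
proprietary = [{"text": f"internal support ticket {i}"} for i in range(10)]
curated = [{"text": f"public document {i}"} for i in range(70)]

mix = blend_datasets(proprietary, curated, proprietary_ratio=0.3)
# 70 curated examples plus 30 sampled proprietary ones: 100 total.
```

The point of the sketch is simply that the proprietary examples end up inside the same training stream, so the resulting fine-tuned weights absorb the company's own knowledge rather than sitting in an external retrieval layer.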
And they also made another really interesting announcement, which they call AI Factories.
AI Factories allow enterprises and governments, basically any organization that is interested, to deploy dedicated AWS AI infrastructure running Nvidia
(32:08):
GPUs and/or Trainium chips directly in their own data centers.
This is a major shift from their previous strategy.
Previously, if you wanted to use AWS infrastructure, you had to, quote unquote, rent it from AWS, right?
It would run on AWS premises, but now they're allowing you to take the technology they have developed, which can run either
(32:28):
their own chips or their competitor's chips from Nvidia, and run it in your own company's data center, while leveraging all the technology, infrastructure, and capabilities they have developed.
Why do I think this is happening?
I believe it is because they understand that with the current level of demand, with the amount of flexibility companies will require, and with big fears about
(32:51):
data and security when running AI, they may not be able to twist everybody's arm to run on the AWS ecosystem. Hence, they see this as an opportunity to still be competitive by providing what they have developed for other companies to run in their own facilities as well.
I believe this is something we'll see from all the major
(33:12):
players, and we will see a lot of frenemy relationships, where these competitors on paper will collaborate to deliver more flexible solutions, because this is what the market demands.
Now, as part of all these updates, they also added some new capabilities to Bedrock AgentCore.
This includes policy controls for setting boundaries for the different agents and everything that's happening on
(33:32):
Bedrock, as well as episodic memory, so agents can learn from past experience and get better over time.
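To give a feel for what "episodic memory" means for an agent, here is a minimal conceptual sketch: store the outcome and lesson of each past task, then surface relevant lessons before a similar new task. The class and method names are purely illustrative, not the Bedrock AgentCore API, and a real system would use embedding-based retrieval instead of word overlap.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    outcome: str   # e.g. "success" or "failure"
    lesson: str    # what the agent should remember next time

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def record(self, task, outcome, lesson):
        self.episodes.append(Episode(task, outcome, lesson))

    def recall(self, task):
        # Naive retrieval: return lessons from episodes whose task
        # shares any word with the new task (a production system
        # would use embeddings and semantic similarity).
        words = set(task.lower().split())
        return [e.lesson for e in self.episodes
                if words & set(e.task.lower().split())]

memory = EpisodicMemory()
memory.record("deploy service to staging", "failure",
              "run database migrations before deploying")
memory.record("summarize sales report", "success",
              "group figures by region first")

# Before a new, similar task, the agent injects relevant lessons
# into its context, which is how it gets better over time.
lessons = memory.recall("deploy new build to staging")
```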
And to sum it all up, I will quote Matt Garman, the CEO of AWS, who said: "The next 80 to 90% of enterprise AI value will come from agents.
This shift is going to have as much impact on our business as
(33:53):
the internet or the cloud itself."
That is a very big statement when it comes from the company that more or less invented cloud computing in the form we know it today, and that is still the leader in that space.
Now, before we continue with the big players and their announcements, since we're already talking about Amazon: in an interesting uprising, and I can't use a different word to
(34:15):
describe it, over a thousand Amazon employees, together with over 3,600 external supporters from companies such as Microsoft, Google, Meta, and SpaceX, have signed a letter to the CEO warning that the company's frantic race to dominate AI is causing significant damage to the workforce and the planet.
(34:35):
To sum up the letter, and we'll put a link to it in the show notes: it basically says that the "all costs justified," warp-speed approach to AI development will do incredible damage to democracy, jobs, and the planet.
One of the examples they mention is that while Amazon has committed to getting to net zero by 2040, its annual
(34:56):
emissions have grown roughly 35% since 2019, much of it due to the energy demands of building new data centers.
They also note in the letter that while Amazon has cut 14,000 managerial roles to, quote unquote, get lean, it is at the same time planning to spend $150 billion on AI data centers, which
(35:17):
underlines the job risk that comes with it.
And they're also talking about an internal culture that is basically a sink-or-swim culture, where you are forced to either work with AI or lose your job.
And this has become the norm inside of Amazon.
And they're making three demands in this letter.
One is no AI with dirty energy: basically, to halt using fossil
(35:39):
fuels to power data centers.
The second is no AI without employee voices: creating ethical working groups with non-managerial staff to oversee AI development and deployment inside the company.
And the third is no AI for violence: basically banning the usage of Amazon AI for surveillance or mass deportation efforts.
Now, while this letter is really
(36:01):
interesting, there have been many similar letters in the last two years, and none of them have done anything to slow this process down.
Unless there's going to be a massive strike across multiple of these companies, with employees pushing back, or the government slowing this down by regulation, which is definitely not happening, I don't see any of these letters as meaningful.
It is important that people are raising their voices.
(36:22):
We are in a democracy, and I, by the way, agree with everything they're saying.
I just don't think it is going to make any difference to the actual speed at which these companies are going, despite the fact that many of them are raising the flag saying, we're running too fast, please slow us down.
And to sum up this whole section of new announcements, new tools, and everything happening at the biggest companies in the industry, I
(36:44):
will share with you some of the latest findings from Similarweb's preliminary data for November of 2025.
Google Gemini's platform has seen website visits rise 14% in just one month, to 1.35 billion visits, which is a huge jump, especially at that scale.
(37:04):
ChatGPT, on the other hand, has seen 5.84 billion visits, so about three and a half times more traffic than Gemini.
However, that is a decline from October, which had 6 billion visits.
So ChatGPT's visits have shrunk by about 3 to 4%, while Google Gemini has seen 14.3% growth.
(37:26):
Another big growth has been experienced by Grok, on much smaller numbers, but it has also grown, by 14.7%, to 234 million visits.
And so the trend is clear: as of right now, more people are going to Gemini and using it more and more, while ChatGPT's growth has plateaued or even reversed somewhat, at least in the recent few weeks. Hence the code red
(37:48):
inside of OpenAI and the focus on delivering a better ChatGPT.
Now, overall, the summary of this makes a lot of sense.
The sector as a whole is still growing like crazy, with gen AI app downloads surging 319%, so more than 3x year over year, and that's obviously in addition to the overall growth
(38:09):
of web traffic.
And since we just finished speaking about the very large announcements from the very large players, let's switch to talking about new releases this week.
And there have been some significant releases.
The first one: DeepSeek just released DeepSeek 3.2.
If you remember, at the beginning of the year we had the DeepSeek moment, where a company nobody had heard of, out of China,
(38:29):
came and released an open source model that was as good as the top models in the US. And now they have done it again.
They released two different models: a regular V3.2, and then V3.2-Speciale, or, I don't know exactly how to pronounce it, which has reportedly achieved gold medal performance in both the 2025 International Mathematical
(38:51):
Olympiad and the International Olympiad in Informatics, placing it on par with Google's Gemini 3 Pro and ahead of GPT-5. And they have done it at a cost of $0.03 per million input tokens, which is about 10x cheaper than the Western-based models.
Now, the Speciale model is a reasoning-first specialist that natively integrates
(39:13):
thinking capabilities, including into tool usage, which basically means this model was built specifically to be really good as the underlying infrastructure for agents, which is aligned with where everybody is going.
Now, to make it even more interesting, the model is built on a new concept that DeepSeek is calling DeepSeek Sparse
(39:34):
Attention, with the acronym DSA, a mechanism which, they claim, and I'm quoting, substantially reduces computational complexity while preserving model performance, effectively halving the cost of processing in long context tasks up to 128,000 tokens compared to traditional models.
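The general intuition behind sparse attention can be shown in a few lines: instead of each query token attending to every key in the sequence, it attends only to its top-k highest-scoring keys, so cost scales with k rather than with the full context length. To be clear, this is a generic illustration of the sparse-attention idea, not DeepSeek's actual DSA mechanism, whose details go beyond what's described here.

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    # Each query attends only to its top_k highest-scoring keys.
    # Generic sparse-attention illustration, not DeepSeek's DSA.
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (n_q, n_k)
    # Per row, find the top_k-th largest score and mask the rest.
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]
    scores = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving scores; masked entries get weight 0.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 64, 16                      # 64 tokens, head dimension 16
q, k, v = rng.normal(size=(3, n, d))
out = sparse_attention(q, k, v, top_k=8)
# Each query mixed information from only ~8 of the 64 keys,
# instead of all 64 as in dense attention.
```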
(39:55):
So what does all of this mean?
It means that DeepSeek is positioned for exponential growth in the agentic era: A, because it is an open source model, and B, because they achieved the holy grail.
It is a better model at lower cost that is built specifically to do tool usage, which is what everybody is looking for right
(40:17):
now. From a global competition perspective, that places it as a direct threat to US models, and first and foremost to Meta.
If you remember, Meta was trying to position itself in 2024 and into 2025 as the maker of the world's top open source model, and really they have not delivered anything meaningful
(40:38):
for a very, very long time.
There was huge disappointment in the release of Llama 4, and then the establishment of the superintelligence team sent the whole AI team into complete turmoil.
And they haven't delivered anything for a while now, other than negative news as far as HR and changes in the organization.
(40:58):
And so this is just another nail, I don't know if in the coffin, of Meta, because we can't ignore the fact that they're a huge company with huge resources and with huge data and distribution, but right now they're not in good shape.
Another really interesting model release that happened this week is Runway 4.5, their latest image and video generation tool, and the
(41:19):
demos are absolutely mind-blowing.
They have integrated several capabilities that used to be distributed across tools into one model that can now create and switch characters, keep consistency, and extend videos.
Basically, all the wet dreams of everybody who creates videos with AI are available now in this one model.
Just a quick reminder: Runway has been delivering AI
(41:41):
video capabilities since before Sora, before Veo.
They were basically the first real model out there that could generate something worthwhile looking at.
And 4.5 puts them back ahead of the game.
Go watch the examples.
They're absolutely incredible.
And if you are a video creator, you just got another really amazing goodie that can do some very sophisticated stuff almost
(42:03):
seamlessly.
As if we did not have enough really good models on our plate to choose from, a new company called OpenAGI, a spinoff out of MIT research, just came out of stealth with a brand new model called Lux.
Lux is built as an agentic platform rather than a large language
(42:23):
model, and they built it this way from the ground up.
It has achieved a staggering 83.6% success rate on the Online Mind2Web benchmark.
Now, what is Online Mind2Web?
It forces agents to interact with 136 live, changing websites to perform over 300 diverse tasks, from booking flights to
(42:43):
cross-referencing e-commerce data.
It basically evaluates real-world resilience on actual websites rather than textbook problem solving.
And on that benchmark, Lux achieves an 83.6% success rate, compared to Gemini's computer-use agent at 69%, OpenAI's Operator at 61%, and Anthropic's Claude at 56%.
(43:07):
So it has blown the leading models in the world out of the water in agentic usage of actual tools.
It is also extremely efficient in doing this: they're claiming it is going to cost one-tenth of what these major competitors charge to run this model.
Now, to make it even more interesting: while most of these competitors, like ChatGPT Atlas, run in a browser and can only
(43:30):
operate the browser, the OpenAGI model can also run applications on your desktop, meaning it can operate anything on your computer and engage with your entire tech stack.
It has three different modes: Actor mode, which is optimized for speed; Thinker mode, which is designed to handle vague multi-step goals; and
(43:50):
Tasker mode, which offers maximum control, accepting Python lists of steps and reiterating until it actually successfully completes the task.
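OpenAGI's actual API isn't described in this episode, but the "Python list of steps, reiterate until success" pattern behind Tasker mode is easy to sketch in generic form: walk an ordered list of steps and retry each failed step before giving up. All function names here are hypothetical illustrations, not Lux's real interface.

```python
import time

def run_task(steps, execute, max_attempts=3, delay=0.0):
    # Run an ordered list of steps, retrying each failed step up to
    # max_attempts times. Generic sketch of a "list of steps, retry
    # until success" controller; names are illustrative only.
    results = []
    for step in steps:
        for attempt in range(1, max_attempts + 1):
            ok, output = execute(step)
            if ok:
                results.append(output)
                break
            time.sleep(delay)  # back off before retrying the step
        else:
            raise RuntimeError(
                f"step failed after {max_attempts} attempts: {step}")
    return results

# Toy executor: pretend the second step fails once, then succeeds.
attempts = {}
def flaky_execute(step):
    attempts[step] = attempts.get(step, 0) + 1
    if step == "fill form" and attempts[step] == 1:
        return False, None
    return True, f"done: {step}"

results = run_task(["open site", "fill form", "submit"], flaky_execute)
# results: ["done: open site", "done: fill form", "done: submit"]
```

The design point is that the caller, not the model, controls the plan: the step list pins down exactly what should happen, which is why this mode offers "maximum control."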
Now, they're claiming the reason they are so good at this is that they're not training this the way large language models are trained; instead, they're using a process they call agentic active pre-training.
Here is the explanation that OpenAGI provided, and I'm quoting:
(44:11):
"Most LLMs are trained to passively absorb knowledge, like learning to drive by memorizing thousands of manuals without ever touching the steering wheel.
In contrast, our agentic active pre-training allows the model to learn by doing."
Now, if you remember, earlier in this episode and in previous episodes we shared that some of the leading brains in
(44:32):
this crazy race have said multiple times that LLMs alone will not get us to AGI, and that some new breakthroughs will be required in order to get there.
This might be one of those breakthroughs: training agents in a more flexible, agentic environment rather than just on static data, which allows them to perform, as we've seen
(44:53):
in this benchmark, significantly better than large language models on such tasks.
Another huge release this week was Anthropic's Opus 4.5, which is receiving raving reviews on both Reddit and X, especially from developers, who are saying that the model just writes code on a completely different level than
(45:14):
any of its competitors.
It has taken the top ranking for web development on LMArena from Gemini 3 by a big spread, 1511 points versus 1476, with a huge number of votes, putting it at a clear number one when it comes to writing code, and pushing Claude Sonnet 4.5, which was the darling of code
(45:35):
writers, all the way down to number five.
So right now the ranking is Claude Opus 4.5 Thinking at number one, Gemini 3 Pro at number two, Claude Opus 4.5 non-thinking at number three, GPT-5 Medium at number four, and then Claude Sonnet at number five.
But again, the reviews are not just about the numbers.
People are saying this is just the best code they've ever seen, and that it's the first time it really feels like a partner that
(45:58):
can code with you, and sometimes for you, in an effective way.
So kudos to Anthropic for the release of this new model.
From my own personal experience, I find it really good at writing in general.
I always liked Claude, but this model goes even beyond that.
It's just fantastic at understanding what I want and writing exactly the kind of content I'm expecting.
I find it to be a much better strategy companion than Sonnet
(46:20):
4.5, potentially even better than GPT-5 at this point.
However, I'm still favoring GPT-5 when it comes to brainstorming and strategic thinking, just because of the voice mode inside of ChatGPT, which I believe gives me, from a personal perspective, a much better way to develop ideas, think about new strategy, and evaluate things that I'm
(46:42):
thinking of, because I find it a lot more intuitive to just speak to it versus type, especially with the new capabilities of the voice that integrate it into the chat.
I am going to stick to doing this in GPT-5.1, but when I tried it with Opus 4.5, I was seriously impressed.
And while this model was released, a researcher was able to find the quote unquote soul document that is embedded into
(47:06):
the model's training data and that gives it its, well, soul. Anthropic researcher Amanda Askell has confirmed the authenticity of this text.
The text includes several very interesting concepts.
The first is sharing with the model that Anthropic occupies a particular position in the landscape: a company that genuinely believes it might be building one of the most transformative and
(47:28):
potentially dangerous technologies in human history, yet presses forward anyway.
This isn't cognitive dissonance, but rather a calculated bet: if powerful AI is coming regardless, Anthropic believes it's better to have safety-focused labs at the frontier than to cede that ground to developers less focused on safety.
(47:50):
It also defines the personality of Claude as a brilliant friend who treats the user like an adult, offering frank, high-level advice like a doctor or a lawyer would, rather than watering down answers out of fear of liability.
The guidelines in this document distinguish between hard-coded safety rules, such as never helping somebody build weapons
(48:12):
of mass destruction, and soft-coded traits, such as tone and helpfulness in specific scenarios.
Those soft-coded traits should adapt to the user's context, while the hard-coded ones should never be deviated from.
So, a very interesting approach by Anthropic.
I don't know if other models do the same thing; if they do, maybe it just wasn't found.
The thing I thought about from a safety perspective: if
(48:34):
one researcher can find the core instructions of a large language model, what does that mean for the safety and security of everything else that runs in this world, and for the way the data is used, and so on?
I'm not sure I have the answer to that.
And it really scares me to think what the security level of these systems is, if a single user can find the core instructions
(48:55):
of one of these models, components of the core instructions of the model that defines itself as the safest model out there, from a company that prides itself on being the most safety-focused in the industry.
We're gonna switch to some quicker topics.
A lot of them are really interesting, and I would've loved to have the time to dive into them, but at least I think you need to know about them.
(49:15):
There's been a lot of partnership and/or M&A activity happening this week as well.
OpenAI announced an investment in Thrive Holdings, which is a new investment vehicle launched by one of OpenAI's largest backers, Thrive Capital.
If that sounds like a circular investment to you, it sounds the same to me and to everybody else.
So how does this work?
(49:35):
OpenAI is going to take equity in Thrive Holdings, which is a private equity company owned by Thrive Capital, which invested a lot of money in OpenAI.
The deal does not include any cash transactions.
Instead, OpenAI is going to provide technology, staff, and services to Thrive Holdings' portfolio companies.
It is being presented by OpenAI and Thrive as a way for
(49:57):
OpenAI to embed its technology in real-life environments and benefit the companies in the Thrive Capital portfolio, which means that if they are successful, OpenAI will make more money.
But again, going back to fears of a bubble, this is a great example of a vicious circle of investment, where the money that OpenAI gets is invested into companies that Thrive Capital
(50:19):
owns and that OpenAI can benefit from if they are successful.
I do, by the way, agree that giving OpenAI the opportunity to directly implement its technology in companies to prove it, and to help them be successful as a result, makes sense to me.
Maybe they just shouldn't have picked a company that has invested in them in order to do this thing. In another deal,
OpenAI has acquired a startup called Neptune at a valuation
(50:42):
just under $400 million.
They will get all the technology and the employees and will embed them into OpenAI's capabilities.
Neptune AI specializes in building tools for tracking and debugging machine learning experiments.
OpenAI has already been using them and has been very happy with the tool, and now they're going to own the technology, and
(51:02):
they're going to wind down support for the external services that have been provided to other companies such as Samsung, Roche, HP, and some other large companies.
In a similar move, Anthropic has acquired Bun, a high-performance JavaScript runtime company whose tool can build, test, and package JavaScript, apparently much better than Node.js.
(51:26):
In a very similar pattern, Anthropic has been using Bun for a while; they've been using it to dramatically improve the results of Claude Code, and they are now going to own it and embed these capabilities into Claude Code to make it even better.
This comes, by the way, together with an amazing milestone for Claude Code: reaching $1 billion in annualized run rate in less
(51:46):
than six months since its launch. It's already being used by major companies like Netflix, Spotify, Salesforce, and KPMG, all of which are seeing amazing results with Claude Code, and this is just gonna make it even better.
Now, speaking of money and Anthropic: Anthropic is currently looking at a new round that would value it north of $300 billion,
(52:09):
raising most likely over the $15 billion they already have committed.
So this may grow beyond that, and may grow beyond the $300 billion valuation.
But they're also apparently looking into an IPO opportunity, potentially as early as 2026.
They have hired one of the top law firms in the world for these kinds of things,
(52:29):
Wilson Sonsini Goodrich & Rosati, the firm that took public small companies such as Google, LinkedIn, and Lyft, and they are in conversation with them on how to structure their IPO.
They have downplayed the move.
Dario basically said that at their current scale and size, this is something they have to look at, but it doesn't mean they're actually going to do it, or do it in the near future.
But the rumors are talking about a relatively near-future IPO,
(52:52):
which makes perfect sense.
Now, speaking of Anthropic and partnerships: in a really interesting strategic move, Anthropic has partnered with Snowflake, announcing a multi-year, $200 million partnership designed to bring agentic AI based on Anthropic's capabilities to over 12,000 global enterprises that use Snowflake as the database backend of their
(53:15):
operations.
The idea is to embed Anthropic's Claude models directly into Snowflake's secure platform to solve one of the biggest problems in corporate AI right now, which is allowing models to access and reason over the proprietary data of the entire corporation without compromising security.
By bringing the models into the already-secured environment,
(53:37):
they're going to achieve that.
I think that makes perfect sense for both companies, and I think we're going to be seeing similar approaches from other partnerships in the industry.
A great example of that we talked about earlier in this episode: Amazon AWS allowing you to run agents inside the AWS environment in order to gain similar benefits to what this partnership delivers.
(53:58):
Another company that has dropped a nice chunk of change this week to get agent capabilities into its environment is ServiceNow.
ServiceNow just spent just over a billion dollars to acquire Veza, a company that has developed technology for identity governance, and ServiceNow is planning to integrate it into their enterprise platform,
(54:19):
creating an AI control tower that ensures the next generation of autonomous agents can be deployed securely and managed by human employees.
So, very significant M&A activity this week.
The numbers are absolutely staggering.
The fact that every one of these deals is in the hundreds of millions or billions of dollars, and that they combine
(54:40):
some of the most advanced technologies with other really advanced technologies, shows you how active this market currently is.
And I expect this to continue happening.
And our next quick rapid-fire topic is going to be around politics and how increasingly it is getting tangled with AI.
A new report from the New York Times reveals that a
(55:01):
network of super PACs is mobilizing to back candidates who favor stricter AI guardrails, pushing head-on to counterbalance the deregulation super PAC we reported on a few weeks ago, which is pushed forward by Andreessen Horowitz, people from OpenAI, and others who are trying to reduce regulation.
(55:24):
This new super PAC is planning to initially raise $50 million; the other super PAC has a hundred million, but they're planning to grow from there, and their goal is to push safety first and to increase regulation in order to make AI development safer.
To make this even more interesting, the super PAC is actually working across the aisle, so both Democrats and
(55:44):
Republicans are a part of this new super PAC, pushing it forward.
It is obvious that AI will become a key topic in the 2026 midterm elections, and I think some politicians see it as an important issue.
Some of them just see it as a point they have to cover, whether they agree with it and know anything about
(56:04):
it or not; they have to learn the talking points in order to gain more votes, and we'll see more and more of that in the next few months as these politicians test different approaches to gain more votes and see which way they want to lean.
Right now it is very clear that both Republicans and Democrats have not decided which side they actually want to support, or need
(56:27):
to support, to get more votes, and I think over the next few months we'll start seeing more clarity on which way each of the parties is leaning, which right now is not totally clear.
Still in politics: the battle between central federal laws and a state-specific patchwork is intensifying on both
(56:48):
sides of the equation, and it's pulling the top players in the industry into the debate.
Sundar Pichai, the CEO of Google, spoke to Fox News on Sunday and issued a warning to US policymakers, saying that the rapidly expanding maze of state-level AI regulations is becoming a competitive liability in the tech race against China, which
(57:11):
has been the standard statement from the people who are pushing for deregulation and for federal control.
But as I mentioned, there are many others who believe it should be the other way around.
One of the things Sundar shared is that more than a thousand AI-related bills are currently moving through state legislatures across the US, which will obviously make it
(57:32):
very, very hard for companies to comply with and deal with each and every one of them separately.
Staying on government and politics:
The White House has launched the Genesis Mission, or what they call the Manhattan Project of the AI age.
The Genesis Mission is an executive order with a goal of creating a national initiative that integrates the world's largest collective of federal scientists
(57:55):
and datasets with cutting-edge AI infrastructure to accelerate discoveries in critical fields like clean energy, biotechnology, and advanced manufacturing.
And they're planning to do this by mobilizing the Department of Energy's national laboratories, private sector partners, and top universities, all in combination, to create an American science
(58:17):
and security platform that will invest in developing all these fields using AI.
This executive order has a very aggressive timeline: it requires the secretary to, and I'm quoting, demonstrate an initial operating capability of the platform within 270 days, basically three quarters of a year, which is extremely fast
(58:40):
when you're talking about a national-level infrastructure project.
What do I think about this?
I think it's a great idea.
The government pushing forward not just AI for the sake of AI, but AI for the improvement of clean energy, biotechnology, and advancements in things that actually matter day-to-day, is a great initiative.
(59:00):
It'll be very interesting to see how they combine government agencies with the private sector's knowledge base, and I am hoping we will see very positive results that will have real impact on our day-to-day lives in the future.
So what didn't we talk about that is available in the newsletter?
Well, Microsoft is announcing price hikes and major updates to
(59:21):
their AI platform in 2026.
We have several different shakeups in AI leadership at Apple.
We have a new concept from Harvard Business Review that they call the CDAIO, basically Chief Data, Analytics, and AI Officer, a new position they think every large enterprise should have.
There's a safety report claiming that more or less everyone falls short when it
(59:43):
comes to their safety index.
There are warnings by Demis Hassabis of AGI arriving within the next five to 10 years, with a 50% chance it's coming by 2030, and many other really important and interesting articles.
So if you wanna learn more, go and check out our newsletter; you can just browse through it quickly or dive deeper if you are interested.
We will be back on Tuesday with a fascinating episode that is
(01:00:04):
going to show you and teach you how to use Claude Projects to write amazing content for any purpose, whether internal professional needs like sales content or emails, or creating marketing content across the board, all with an amazing framework. That's coming this Tuesday.
And final notes: if you have not yet done so, I would appreciate it if you click
(01:00:25):
subscribe to this podcast so you do not miss any episode that we drop.
I am doing everything I can to get you the best information possible in the most effective way, and if you subscribe, you'll be able to get all of that. And while you're at it, and you are inside your podcast player, please click the share button and share this podcast with a few other people.
And if you are on Apple or Spotify, I would really
(01:00:46):
appreciate it if you write a short review of this podcast and give us a five-star rating, or whatever you think I deserve. I would really appreciate it.
It helps us get to more people, and it helps you help other people be more aware of what's going on in the AI world.
That's it for today.
Keep on exploring AI, keep sharing what you learn with the world, and have an amazing rest of your weekend.