Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:00):
Hello and welcome to a Weekend News episode of the
(00:02):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week we have a lot to cover. We're going to start with the impact of AI on jobs, including the Fed's first time actually addressing it, or maybe not addressing, but admitting that there is a situation brewing.
(00:24):
We are going to talk about the very interesting week that OpenAI had, the aftermath of those statements. We're going to talk about many different rapid-fire items about OpenAI. We have interesting news from some of the other big players, including maybe we're actually gonna get a real Siri in 2026. There were some very interesting new hardware releases this week,
(00:45):
and we're gonna end up with holiday cheers. So we have a full and interesting episode. So let's get started.
Federal Reserve Chair Jerome Powell stated that after adjusting for statistical overcounting in payroll data, and I'm quoting, job creation is pretty close to zero. Now, he's
(01:08):
linking the hiring slowdown to AI, and he's noting, and again I'm quoting, a significant number of companies are announcing layoffs or hiring pauses, with many explicitly citing AI, saying much of the time they're talking about AI and what it can do. Now, what he's warning about is that large employers are signaling they have no need to add headcount, not just now, but
(01:31):
for years to come. And he said that the Fed is watching it very carefully.
Now, he also stated that AI puts them in a very strange dilemma: on one hand they're seeing a huge upside risk in inflation, mostly driven by huge investments in the tech industry, but on the other side they're seeing downside risk on
(01:52):
employment. He was saying that makes it tough for them and other central banks because, and now I'm quoting again, one of those calls for rates to be lower, one calls for the rates to be higher. So they are in a dilemma, and yet they did cut rates from 4% to 3.75%. But that is the current situation right now, where on one hand there are huge investments which could lead to inflation,
(02:13):
but on the other hand there is no growth in the actual labor force, and it actually might be shrinking.
He also mentioned the recent conversation about whether there is or there isn't an AI bubble, and he's on the side that believes that it is not a bubble. And he said, and I'm quoting, these companies actually have earnings. And I agree with that to an extent. First of all, not all these companies have giant earnings.
(02:35):
So yes, the big players are making crazy amounts of money, and they're the fastest growing companies ever, and they're making billions just a few years after launching a product. However, many other companies are racking up hundreds of millions of dollars in investments and are very, very far from making any significant earnings. And the other big problem is the growing gap between the
(02:56):
valuations and the amount of money they're raising and the CapEx that they're committing to versus the earnings that they're actually having. More about that when we are going to talk about OpenAI. But overall, this year has seen a very significant uptick in layoffs; whether they're related or not related to AI, time will tell. Many of the companies say that they are letting people go
(03:17):
because of AI, because it's an easy excuse, but in many other cases that is not the real underlying reason.
As we talked about last week, Amazon laid off 14,000 employees, which is about 4% of their white collar staff. But a lot of that is potentially just based on overhiring after the pandemic, so it's just scaling back down to reasonable sizes in different
(03:37):
departments. Challenger, Gray & Christmas, which is a research firm, has cited nearly a million layoffs this year, 946,000, which is the highest since 2020, when we had the pandemic. Out of those, 17,000 are tied to AI and 20,000 are tied to other types of automation. So yes, the numbers are growing, but they're still relatively
(04:00):
small compared to the overall layoffs that we've seen this year. Meaning there are other forces at play, but the bigger fear, again going back to what Powell said, is that there's a very low rate of generation of new jobs.
Another big name that related to job creation, or elimination, as it relates to AI was Satya Nadella, the CEO of Microsoft,
(04:20):
and he said on the BG2 podcast that Microsoft will grow headcount again after being flat this year at around 228,000 employees worldwide, despite 12% revenue growth overall and 40% growth on their Azure cloud income. So they didn't grow despite a very significant growth of the company. But he's saying that they will grow headcount again with a lot
(04:42):
more leverage than the headcount they had pre-AI. And he said that there is going to be a transitional period, and I'm quoting: it is the unlearning and learning process that I think will take the next year or so. Then the headcount growth will come with max leverage. He added: right now, any planning, any execution starts with AI. You research with AI, you think with AI, you share with your
(05:05):
colleagues.
And so what does that tell us about the mindset of Satya and probably many other leaders in his position? And I really like the unlearning and learning aspect of what he said. We are going to be working very differently than we are working right now. That is true for most organizations, whether for-profit or nonprofit, and that is true across almost every industry, which means we have to unlearn the way we work right
(05:29):
now, whether from a technological perspective, from a headcount perspective, from an org chart perspective, and from more or less every other perspective you can think of. Because AI will start fueling more and more aspects of the way we work, which means we have to forget about everything we know so far, not everything, but a lot of what we know so far, and come up with new habits, new processes, new tech stacks, new
(05:51):
procedures, new training protocols and so on, in order to really benefit from new AI capabilities. And every company that is not investing in that process right now is gonna be in serious trouble in the very near future, because their competitors will do that, which means they will have a completely new base cost structure, which means they'll be able to be more competitive, do more things with less money.
(06:13):
And if you don't make these changes, if you don't commit to training, you will find it very, very hard to stay competitive in that future.
And speaking of company training, we have just opened registration for two new courses. One of them is the course we've been running for over two and a half years, which is the AI Business Transformation Course, which is the basic course that will take you from your current
(06:33):
level to knowing everything you need to know across multiple aspects and tools of AI, in order to be prepared for 2026 and be able to deploy AI in your business effectively. This course starts on January 20th and then continues on three consecutive Mondays. And the other course, which is a new course for us, is the Advanced Automation Course, which will teach you how to combine AI
(06:54):
together with traditional workflow automation, such as Make and n8n, for maybe the most powerful capability there is right now: it combines the rigidness and the consistent delivery of traditional workflow automation tools together with really advanced AI thinking and data analysis capabilities. This is the thing that right now, when I work with clients, makes me feel the most like Batman, because I can do more or
(07:14):
less anything with this combination. So if you have the basics and you know how to prompt properly and you've built a few custom GPTs and you want to take it to a completely different level, come and join that cohort. But if you need the basics, come and join our AI Business Transformation Course in January. The Advanced Automation Course starts on Monday, December 1st, with another session on Monday, December 8th.
(07:34):
And in these four hours, you will increase your capability to build effective automations in your company by a very, very big spread.
But now back to the news, and connecting the dots to what's happening to some people from a job perspective. The current situation is actually creating a very big irony in some of the new jobs that are created. So everybody's talking about, oh, AI will create more jobs.
(07:57):
Well, right now, the only jobs that it's really creating are jobs that are gonna take more jobs away. And I will explain. The biggest player in this new industry is Mercor, which we talked about many times. And what they are doing is they're paying doctors, lawyers, and other experts $200 and $300 per hour to train AI models in order to basically replace them.
(08:17):
So they currently have tens of thousands of contractors that are earning a lot of money. They're claiming about $1.5 million per day that they're paying these contractors to train AI models on how to do their jobs. Mercor was just valued at $10 billion, and they're supplying their data to companies like OpenAI, Anthropic, Meta, Amazon, Google, Microsoft, Tesla, Nvidia.
(08:38):
Basically, all the big players are consuming the data on how to do the day-to-day jobs of professionals across multiple industries so they can be replaced. So these people are basically in an if-you-can't-beat-them-join-them kind of scenario, and let's make the most out of it while it lasts.
Similar things are happening in other industries as well. OpenAI recently announced that they're working with Juilliard
(08:59):
music students in order to teach composition, and former investment bankers from Wall Street to train on entry-level investing support capabilities. We also reported a few weeks ago on Uber's new digital tasks initiative, which lets drivers, and technically anybody, you don't have to be a driver, perform simple AI labeling and training gigs while making a little bit more money when
(09:20):
you're not driving. So on one hand they're helping the drivers make a little more money when they're not driving; on the other hand, again, they're creating more training data to replace other jobs. We reported in the last couple of weeks about Amazon's new augmented reality glasses for delivery drivers, which on a surface level are supposed to boost the productivity and safety of the drivers. But on the other hand, they're collecting data on exactly what
(09:41):
the drivers are doing, what their routes are, how many times they're getting off the truck, how much they're walking, and so on, which can be used to train autonomous robots that can later replace these drivers.
Now, to tell you how much the demand for this kind of training has grown, the most recent analysis of this entire industry says that it has grown from about $3.7 billion in 2024 to $17
(10:02):
billion in 2025, and it is just accelerating. Now, if you want to take this to the next level and understand where this is going, we are all going to be using agentic browsers. There is no way around it. This is gonna be the way we're going to engage with the internet in 2026. After that, it's probably gonna be no browser at all, it's just gonna be agents. But in the beginning we're gonna work in agentic browsers.
(10:23):
I do a lot of my work right now in Comet, but we now have other very solid options from other developers. I have no doubt that all browsers will become agentic browsers, which means you're letting them do some of the work, which means everything you do in the browser will become training data for the companies behind these agentic browsers, which will learn basically everything that we do on the internet, and they will be able to replace it shortly after,
(10:45):
because it's all going to become training data. So beyond Mercor, that are paying people, or Uber, that are paying people, we are all going to become participants, for free, in training AI tools that will be able to do the work that we do, or even the leisure stuff that we do, to support that as well. Now, to be fair, mimicking what we do in the browser, the way we do it in the browser, is very far from the most
(11:07):
effective way to build autonomy around what we do. Doing backend-to-backend API kinds of calls will be faster and a lot more accurate than doing it the way we do things in the browser today. But it's definitely a good first step in teaching the machines what are the things that need to be done. And later on, I have very little doubt, again, that it will be completely agentic. And the user interface that is built for humans will slowly
(11:30):
disappear and will be replaced with backend agent-to-agent, agent-to-computer, agent-to-server communication that will be significantly more effective. This will just take a little longer. What does this mean? It means that more or less everything we do with computers right now, agents will be able to do within the very near future, because we will provide them the training data as we do the things that we do.
(11:50):
And staying on the topic of impact on jobs, IBM just confirmed that it's cutting what it is calling a low single-digit percentage of its 270,000-person global workforce; however, that's about 8,000 jobs. So while they're trying to play it down and saying, oh, it's only a low single-digit percentage, it's still 8,000 jobs that are gonna get cut. And as expected, they're not
(12:12):
calling it job cuts. The CEO, Arvind Krishna, says it's rebalancing, or the way he defined it: we routinely review our workforce and at times rebalance accordingly. Now, while this is perfectly fine, and it's the role of the CEO and the leadership of the company to do these kinds of things, the bigger worry is that most of the rebalancing happening in the last 12 months has been down versus up.
(12:33):
And to tell you more about where the wind is blowing, Krishna told Bloomberg he envisions that AI will replace about 30% of the 26,000 back office workers that they have right now. So that's another 8,000 people, ish, that are gonna be let go later on just because AI will be able to do their work.
So, a quick summary on where we are right now from an economy and
(12:54):
job impact perspective. AI still cannot replace entire jobs yet, but the focus is on the word yet. As I mentioned, many, many companies are investing a lot of money in training models to be able to do more or less everything. However, they can do more and more tasks effectively, and tasks are what together make a job. So if a person currently does a hundred percent of something, and if
(13:17):
AI can replace 20% of that, the question is what do you do with that 20% of freed-up capacity? And there are only two options. Option number one is you grow faster than that. If you can grow faster than the efficiencies that you're gaining from AI, either by using AI to do more things that you didn't do before, that you couldn't do effectively or profitably, or just because you have more capacity with these people to do
(13:37):
more stuff, then this is awesome. You will retain the workforce, maybe even grow the workforce, and grow. However, not all the companies in every industry can grow. The size of the pie is given. Even if it grows a little bit, it may not grow at the same pace as the efficiencies of the companies.
And in this case, you have really two options. First of all, if you are VC-backed or private-equity-backed, you don't really have an option: you will cut people. But if you're not VC-backed, if you're just running your own business and you're saying, I love my employees, they've been with me for years, I wanna retain all of them, you would still have a serious problem, because you are betting on the future of all your employees. Because if your competitors are going to restructure around AI, unlearn and relearn, like we said earlier, and develop new
(14:18):
systems, new processes, new tech stacks, maybe potentially addressing new markets in different ways, they'll be able to be more competitive than you. And then you may lose your entire company, hurting all of your employees and not just the immediate people that you'll have to let go otherwise. And this is not a good situation to be in, but I fear that this is the clear reality that is in front of us now.
(14:40):
Again, there are the people who are saying, well, every previous revolution has created more jobs than it has destroyed. And that is correct, but there are several different questions. Question number one is: what kind of new jobs will it create? Because previously, every time there was a revolution, we made more white collar jobs replacing blue collar jobs. So instead of being in the fields, we started managing
(15:00):
processes. And then, later on, with computers, we started sitting in front of computers that could do the work and automate things in factories and so on. So we always went for brain work instead of manual labor. And this is exactly what AI is replacing right now. So what are we going to grow into? I don't know. So that's question number one. Question number two is: how many jobs will it create?
(15:20):
Will it be enough to offset the jobs that it will take away? And the third question is: how quickly will it happen? And I have a feeling, and again, it's my personal feeling, it is not necessarily the truth, that A, it's not gonna create enough jobs to offset the ones that it's gonna take away, and B, and that I'm almost certain of, it's not gonna create them fast enough, which means in the near future, in the next few years, we're going into a seriously
(15:43):
turbulent job market with a very serious impact on the economy, both in the US and globally. And we may come out stronger on the other side with new roles and new jobs, and potentially even replacing current capitalism with something better. But until that happens, I think we're looking into some very turbulent years.
Our next topic, as I mentioned, is gonna be OpenAI. Before we
(16:04):
dive into the interesting week that they had, they had a very big positive announcement: they have just rocketed past 1 million business customers in the shortest amount of time of any company ever. So there hasn't been a single company in history that has gotten to 1 million business customers in such a short amount of time. To put things in a bigger perspective, they have 7 million
(16:26):
ChatGPT business seats, meaning actual people in businesses that are using the platform, and that grew 40% in just the last two months. ChatGPT Enterprise seats grew 9x from last year, and it is slowly connecting to everything in companies. So right now, GPT-5 can reason across Slack and SharePoint and GitHub and other tech stack solutions that
(16:49):
companies have, allowing employees to make sense of multiple data sources significantly faster than we could before. Codex, which is their coding platform, has grown 10x since August. So in the blog post where OpenAI shared this, they gave multiple examples of different known companies, such as Indeed and Lowe's and Intercom and Databricks, and
(17:10):
different agentic companies, and how much time they are saving by using different OpenAI solutions. Which again connects to: okay, if it's now 30% more efficient or 50% more efficient or 25% more efficient, what are they doing with the extra time the employees have, and do they have anything impactful to use this time for? If not, the outcome is obvious.
And now, before we dive into the, as I mentioned, interesting
(17:34):
statements that OpenAI had in two different incidents this week, one big piece of news that will lead us into that is that OpenAI just announced another big data center deal, in this case $38 billion with AWS over the next seven years, with some of it rolling out as early as the end of 2026 and growing into 2027. Which brings the total commitment of OpenAI to compute
(17:57):
in the next few years to $1.4 trillion.
So the first situation this week was the interview with Brad Gerstner, who's also an investor in OpenAI, and he was actually pushing Sam Altman, trying to understand exactly that. And he basically asked Sam: how can a company that makes roughly $13 billion in annual revenue commit to $1.4 trillion in spending on AI
(18:19):
infrastructure? And Sam Altman basically lost it. He was obviously really upset about the question, and his response had nothing to do with the question. He basically said: if you want to sell your shares, I'll find you a buyer. Enough. He then said that their revenue is actually a lot higher than $13 billion, and that there are many, many people who are standing in line to buy their shares, and if Brad wants to sell his shares, that could be very
(18:39):
easily arranged. Brad said that that's not his plan and that he will gladly buy more shares, but that obviously did not answer the question. Now, I don't know why Sam exploded. I think it's a very legitimate question to ask, right? If you are making $13 billion or $20 billion or $25 billion or $50 billion, how can you commit to $1.4 trillion in spending
(19:01):
on CapEx in the next five to seven years?
Now, remember my comment from the beginning of this episode, when I said the problem is not whether these companies have revenue or do not have revenue, but the gap between the revenue that they're generating and the amount of CapEx that they're committing to. This is exactly what I mean, right? When Microsoft commits to $30 billion or $300 billion of spending on compute, it makes sense.
(19:21):
They make that kind of money, and their cash flow will support that, maybe with a little bit of financing, but they have the money to pay for the financing. In this particular case, the gap is so big that it just doesn't make any sense, and so we did not get an answer from Sam, at least until the second event happened. So the second event happened when Sarah Friar, the CFO of OpenAI, was talking at a Wall Street Journal event, and she
(19:43):
was talking about the fact that the entire US needs to support this process, including industry and government. And she said that the government needs to guarantee, and I'm quoting, drop the cost of financing, but also increase the loan to value. And she's talking specifically about supporting the financing of chips and data centers. Which basically means putting taxpayer dollars at risk in order to support the growth of the AI industry and, within it,
(20:06):
OpenAI.
But then, in a really poor choice of words, to make it even worse, she suggested that the government should put backstops for the company's massive AI infrastructure debt. In other words, it should be in the interest of the US for OpenAI to be successful and to be able to make this thing work, and hence the government should guarantee, or potentially be ready to bail it out, if this thing doesn't work.
(20:27):
This obviously backfired, and critics came from every part of the spectrum that you can imagine, slamming them for the insanity of this. Basically saying: on one hand they want preferential borrowing rates from the government, and on the other hand they want the government to guarantee that they don't go out of business. All of that based on taxpayers' money, when they just became a
(20:48):
for-profit company, after a very long struggle to get there.
Now, Friar herself tried to clarify that shortly after, in a statement on LinkedIn, and she said it's not exactly what she meant. She just meant that the government should play its part, with the private sector, in the overall AI growth of the US. And she wrote specifically, and I'm quoting: we are not seeking a government backstop for our infrastructure commitments.
(21:10):
But then again, these were two interesting comments, one by Sam Altman, the CEO, one by Sarah Friar, the CFO, both within a few days, both about their level of financial commitment that does not align with their revenue.
So Sam went to Twitter and wrote a long post. So if you're following Sam on Twitter, or X, or whatever you wanna call it, it doesn't matter, you know that he usually writes really short, great tweets, and
(21:32):
every time he writes a very long tweet, you know something went terribly wrong. Well, this was maybe the longest tweet I've ever seen Sam write. The previous time I remember him writing really long tweets was the not very successful release of GPT-5. So I wanna read to you a few short segments from the tweet. And again, you can very easily find the rest, which I suggest you do, because it sheds some light on how they should have communicated this to begin with versus how they communicated it
(21:54):
and are now trying to backpedal from that.
So now I'm quoting from the post. First, the obvious one: we do not have or want government guarantees for OpenAI data centers. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work.
(22:17):
I agree with that, and I'm sure you agree with that a hundred percent. I'm continuing back from the post: what we do think might make sense is governments building and owning their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it.
(22:38):
And it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense, but this should be for the government's benefit, not to benefit the private companies. I agree with that as well. And again, I wish they would've said that as a statement to begin with.
And then he talks about the three big questions in the air right now, and he's giving a detailed answer for each one.
(22:58):
So I'm not gonna go through all the answers, but I'm gonna give you a quick summary of the questions and the short answers of what he said. There are at least three questions behind the question here that are understandably causing concern. First, how is OpenAI going to pay for all the infrastructure it is signing up for? So, let's talk about this for a minute. Sam is saying that they're not gonna make $13 billion this year, but they're on track to making $20
(23:19):
billion. He's also mentioning that they're going to grow the company to $300 billion by 2030, which is very, very impressive. But it doesn't come even close to covering $1.4 trillion. And that is with their current commitment, that is if they do not commit to additional compute any time after this day, which is very unlikely. So even if they do grow at the crazy trajectory that they are
(23:41):
projecting, which they may or may not do, but let's give them the benefit of the doubt and let's say that they are making $300 billion in that timeframe, they will need to pay four times that amount. This doesn't make any sense. Going back to my statement from the beginning of this episode: the gap between the revenue, which is very, very impressive, and the financial commitments that they're actually making.
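To put that gap in perspective, here is a back-of-the-envelope sketch in Python. The $20 billion (2025) and $300 billion (2030) revenue figures come from the episode; the exponential growth path between them, and comparing cumulative revenue to the commitment, are my assumptions, not anything Altman stated.

```python
# Back-of-the-envelope: cumulative projected OpenAI revenue vs. the
# ~$1.4T compute commitment discussed in the episode. All figures in $B.
# Growth path is an assumption: smooth exponential from $20B to $300B.

commitment = 1_400  # total compute commitments cited in the episode, $B
start, end, years = 20, 300, 5  # $20B in 2025 -> $300B in 2030

factor = (end / start) ** (1 / years)  # implied annual growth multiple
revenues = [start * factor**y for y in range(years + 1)]  # 2025..2030
cumulative = sum(revenues)

print(f"implied annual growth factor: {factor:.2f}x")
print(f"cumulative 2025-2030 revenue: ${cumulative:.0f}B")
print(f"share of commitment covered: {cumulative / commitment:.0%}")
```

Even on this most optimistic trajectory, cumulative revenue through 2030 comes out to roughly $690 billion, about half the stated commitment, and that is revenue, not profit, before any operating or training costs.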
(24:02):
Second question, again I'm quoting: is OpenAI trying to become too big to fail, and should the government pick winners or losers? Our answer on this is an unequivocal no. And yet, with the amount of moves that they're making and the ties that they're making to all the significant infrastructure companies in the US, both on the electrical side as well as on the compute infrastructure side, if they fail, they're putting
(24:24):
the entire economy at risk, and whether they are planning to do this or not doesn't make any difference. And the third question that he's referring to is: why do you need to spend so much instead of growing more slowly? His short answer, and again, there's a much longer answer in the actual tweet: we are trying to build infrastructure for a future economy powered by AI.
(24:44):
Now, that does make sense, but it still doesn't mean that you can overspend your most optimistic revenue by 4x within the next few years, with the risk of actually having to spend even more on unexpected things or just by buying more compute.
So the bottom line is, first of all, OpenAI is no longer a small startup, and its CEO or CFO or any other spokesperson cannot
(25:05):
respond in the way they have responded in these two interviews. It just doesn't make any sense. They need to have their thoughts together. They need to have their talking points together. They cannot just go and pull words out of thin air and then have to backpedal from the situation. The second thing is, I still don't get it. From a very personal perspective, I've been running businesses for 20 years. I've raised money for startups.
(25:27):
I have never seen anything similar to this. Nobody has seen anything similar to this, from a scale perspective, but also from a ratio perspective, right? If you are projecting exponential growth, really incredible, like within a few years from starting a company to generating $300 billion is incredible, take a $500 billion commitment, a $600 billion commitment, something that you can pay the financing for, and then spread
(25:48):
it over another 20 years. You cannot commit to $1.4 trillion, which is 4x your most optimistic projections.
This is really, really scary. And again, it is scary not because of OpenAI. I don't think that, other than OpenAI, other people care whether OpenAI fails or succeeds. There are a lot of other companies who failed in the past. I think the problem is that it's going to take with it a lot of
(26:11):
really, really large companies that are now spending a lot of money to build infrastructure that OpenAI is supposed to be paying for and may not be able to. But going back to the impact of this on a global scale, there is definitely a race between the US and China, and the race is very, very close, and the winner of this race may have a very significant
(26:33):
impact on the future of our planet. And to shed some more light on the situation over there:
Nvidia's CEO Jensen Huang, one of the richest and most successful people in the world right now, and the leader of the largest company in the world right now, just delivered an interesting warning at a Financial Times Future of AI Summit, where he said that China will win the AI race. Now,
(26:57):
again, he corrected that a little bit afterwards, in a statement after the conference, where he said: as I have long said, China is nanoseconds behind America in AI. It is vital that America wins by racing ahead and winning developers worldwide. Two points about that. One is that I agree. I think there is a global race. I think China is a very, very close second, if second at all, from a software perspective; they're doing some
(27:19):
incredible things. From a hardware perspective, they're still behind Nvidia. But we need to remember that Jensen Huang has a real, serious interest in making these kinds of claims and in putting pressure on the US government, because President Trump just said last week in an interview that he will not allow Nvidia to sell their top-level Blackwell chips other than inside the US or to allied governments, and that he
(27:41):
definitely won't allow China to get them. He did say that he might allow Nvidia to sell lower-fidelity chips to China. But as of right now, China's not even interested in that, which makes it obviously less relevant.
So I think the US government isin a similar situation to
businesses.
As I mentioned before, you candecide to slow down, but then
you may lose everything.
And so it is a veryuncomfortable situation to be
(28:03):
in.
And we have to balance ourdecisions between what's good
for humanity, to what's good toour internal people, whether
it's the company or thegovernment.
I have a feeling that in manycases these two objectives do
not align very well, which callsfor tough decisions.
And I don't know what the government will do, and I obviously don't know what every company will do, but these times call for very strong leadership and for trying to put things
(28:24):
together in a way that will reduce the negative impact and increase the opportunity that is lying ahead.
Now let's dive into some rapid-fire items. We'll start with OpenAI, which is apparently planning the release of a GPT-5.1 thinking model. It has surfaced in the code of their web app just recently. The rumors talk about a family of GPT-5.1 models, including
(28:47):
potentially a mini, fast, lightweight model and a full-scale model as well. And presumably they are aiming to release it ahead of the release of Gemini 3, which again is rumored to be released on November 18th. So stay tuned: we may get two very powerful models in the next two weeks,
(29:07):
OpenAI 5.1 and then Gemini 3. None of this has been officially announced by anyone, but this is what the rumors are talking about.
As we reported last week, OpenAI is now a for-profit company, but the lawsuit from Elon Musk against OpenAI that may prevent them from going public is still going on. And as part of that, OpenAI co-founder Ilya Sutskever has revealed
(29:30):
some details in a deposition that he had to give. Sutskever, who has since left OpenAI and started Safe Superintelligence, shared a deposition that is shedding some light and some interesting facts on several different aspects of the history of OpenAI. Most of it relates to the ousting of Sam Altman, bringing him back, and what happened in those crazy few days
(29:50):
in between.
So one interesting aspect is that OpenAI was actually talking to Anthropic about a merger that would make Dario Amodei, the CEO of Anthropic, the CEO of the unified company. And apparently there were serious conversations about this, but it was turned down by the investors of Anthropic, who feared that it would dilute their billions in potential
(30:12):
gains, and that's why it did not move forward.
Now, in addition, Sutskever's 52-page memo to OpenAI board members from back then details a consistent pattern of lying, undermining his executives, and putting his executives one against the other as behaviors that Sam engaged in regularly.
Now, if you remember, what happened afterwards is that many
(30:32):
of OpenAI's employees, about 700 of them, said that they would leave OpenAI if Sam was not brought back; Microsoft offered him a job and to basically bring all these employees into Microsoft; and then he was put back in the position that he's holding till today. But this sheds some light on what was happening in those few crazy days as Sam was kicked out and then brought back. Another
(30:54):
big news item from OpenAI this week is that they just rolled out the Sora app for Android users. So after breaking the record of getting to 1 million iOS downloads in just five days in September, they finally released the Android version as well. It is now live and available in the US, Canada, Japan, Korea, Taiwan, Thailand, and Vietnam.
I already downloaded and played with it. It's actually a really cool app. I'm an Android user, so I
(31:16):
finally got a chance to install it and play with it, and it's, as you've seen on the iOS side, very, very impressive in the capabilities it can generate. On the other hand, it's another thing that's going to suck you in and waste a lot of your time. Every time I watch these videos, I'm blown away, not by the fact that they will make me stay on the app longer, just like any other social media platform, but because I keep thinking all the time
(31:37):
that these are AI generated, and it just blows my mind. And it is really, really scary, because a lot of these videos are finding their way onto TikTok and Instagram and so on, where most people do not know that they are AI generated.
Another interesting nuanced change that happened on ChatGPT this week, which is actually really powerful, especially as we're moving forward, is the ability to add content and add
(31:58):
comments to an operation as ChatGPT is running it in the background. So far, you gave it a prompt and then it did its thing, sometimes for a long time if you gave it a very long task, and then you had to wait for it to finish, or stop it, in order to write an additional prompt. Now you can actually add comments, add context, and interact with ChatGPT as it is doing its work.
(32:20):
I think this is a very critical capability, especially as you develop more and more sophisticated processes, and especially combined with these AI models now working for longer amounts of time while thinking through processes, as we move toward the agentic era. And as you know, you can click on the thinking button when ChatGPT is thinking and see what it's actually doing, which means you can actually see the direction it's taking, which then
(32:42):
allows you to interject and steer the conversation it's having in its brain in the right direction, or help it get additional information that it needs without it asking you, and so on. I think this is extremely powerful, and I'll be really surprised if the other models don't come up with something like this as well in the immediate future.
And we also got some new hints about the new device that OpenAI is
(33:03):
developing with Jony Ive's team. In the same conversation at the Wall Street Journal Tech Live conference, Sarah Friar said that she cannot say a lot about the company's upcoming device, but she did say: it's a multimodal world for AI; what is beautiful about these models is that they are as good through text as they are through being able to talk language, to
(33:23):
be able to listen auditorily, to be able to visually see. She also said that we got used to cell phones, to people looking down into their screens and talking with our thumbs. Then she said: I'm looking forward to being able to bring something into the world that I think starts to shift that. So basically, not being heads-down in our phones and not talking with our thumbs. What does that mean?
(33:45):
I'm not a hundred percent sure, but it hints at the fact that, A, it is a multimodal device, meaning we'll be able to see and listen and talk and text in and out; and B, it will alleviate the need to look into our screens all the time, which means it probably does not have a screen, because otherwise there's no point in that statement. It will be very interesting to see what they come up with.
(34:06):
Now, will this eliminate cell phones altogether? Here's what Sam said about it. He said, and I'm quoting: in the same way that the smartphone didn't make the laptop go away, I don't think our first thing is going to make the smartphone go away; it is a totally new kind of thing.
Well, since I don't have a clue what they're developing, it is very hard for me to comment on that. But I must disagree with Sam's statement about the laptop
(34:30):
not being eliminated by the smartphone, because the smartphone did completely eliminate the Palm Pilots, for those of you who remember that thing. It also made old phones obsolete. And it also dramatically reduced the number of small digital cameras being sold around the world. And it completely eliminated the GPS and navigation devices that we had, the MP3 players that we had, and many
(34:52):
other devices that were very, very common before the introduction of the smartphone. So yes, it did not eliminate the laptop, but it did eliminate a lot of other things, which means there's still a very big chance that whatever they come up with might replace cell phones, maybe not in step one, but potentially within a few years.
Now, before I connect the dots to how this impacts global domination, there's one last piece of news, which is that ChatGPT
(35:14):
just added two additional apps to the apps that can run inside of ChatGPT: Peloton, which allows you, inside of ChatGPT, to craft personalized workouts, and TripAdvisor, which allows you to plan vacations. These join a growing list of apps that are already there, and OpenAI has already shared that a few more are coming, such as Uber and DoorDash, which brings me to the following thought.
We are witnessing the entire ecosystem of how we engage with
(35:38):
the digital world through different devices changing in front of our eyes, whether we're paying attention or not; that's a different thing. Right now we cannot imagine a world without smartphones and apps, because we got used to doing things this way. But 20 years ago we had no smartphones and we had no apps. And so, just like we got used to that, I think we need to start getting used to the new wave, or the new way, of interacting with
(36:01):
the digital world, and through that, with the real world. So let's start with apps and then talk about the bigger system.
Right now we're using most of our apps through either Android or Apple phones. In that ecosystem, either Google or Apple takes a huge cut off the top of every transaction that happens through the app store, every upsell and every in-app purchase.
(36:22):
Apple takes about 30%; Google takes a little less than that, but still a very, very big chunk from the apps that you're using on your phone, which means that the companies behind these apps have a very strong incentive to actually move away from the existing ecosystem, which explains why they are racing to develop capabilities on ChatGPT. ChatGPT right now is free for them to run their apps on, but
(36:44):
even if OpenAI starts taking 5% or 10% from them, it is still significantly cheaper than what they're paying Apple or Google right now.
In addition to being a new distribution channel with 800 million weekly users, which is growing all the time, it is also a great way for them to break the shackles that Apple and Google currently have on them.
(37:04):
That means that over time we may use more and more of these apps inside ChatGPT or other AI platforms versus natively on our phones. But in the long run, I don't think this will survive either, because I think what's going to happen is that we're going to engage with the digital world using agents, and these agents will spin up the functionality or the display or the connectivity they
(37:27):
need in real time, based on the specific task. Or they will use something like Skills in Claude right now, meaning little functionalities that allow them to connect or do different things, but not entire applications. And this will shorten the time from an agent trying to spin something up to an agent actually being able to do something. These functionalities will exist as either skills or different
(37:48):
small mini apps that will allow the agents to act in an agent-to-agent or agent-to-server world.
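To illustrate the idea of agents invoking small skills instead of full applications, here is a toy sketch. The skill names and the registry are entirely made up for illustration; this is not Claude's Skills feature or any vendor's actual interface.

```python
# A toy sketch of the idea above: instead of launching a full app,
# an agent picks from a registry of small, single-purpose "skills"
# and invokes the one that matches the task at hand.
from typing import Callable, Dict

SKILLS: Dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a small, single-purpose function as a skill."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("book_ride")
def book_ride(destination: str) -> str:
    return f"ride booked to {destination}"

@skill("order_food")
def order_food(item: str) -> str:
    return f"ordered {item}"

def agent_dispatch(task: str, argument: str) -> str:
    """The 'agent' spins up only the skill it needs, in real time."""
    if task not in SKILLS:
        return "no matching skill"
    return SKILLS[task](argument)

print(agent_dispatch("book_ride", "the airport"))  # ride booked to the airport
```

The point of the pattern is that adding a capability means registering one small function, not shipping and installing an entire application.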
Now let's connect the dots to some bigger things as well. Think about how Google became so successful and impactful. They started with search, which is how we find and interact with information online. But then they went after the ways we engage with that information, the actual access points.
(38:09):
And there were two access points. Access point number one was the browser: how do we engage with the internet, not just the data, but how do we engage with it? And they built a browser, actually bought a browser, and were able to make Chrome the most used browser on the planet. But then cell phones showed up, and this is where they took Android in order to control the devices that we use to
(38:29):
engage with the internet, which gave them even more data, more access, and other data points on how we use the digital world and where we use it from, so they basically know a lot more about us and can make even more money. Then they added the app store on top of that, so any application, whether they developed it or not, they have access to, and they get a cut of it. And they built a set of
(38:50):
tools on top of that for basic users and enterprises, with the Google suite and beyond. So this is how Google achieved the world dominance they have today. They're controlling the entire ecosystem, from the data and how you find it all the way to how you engage with it: the tools, the processes, everything.
This is exactly what OpenAI is doing right now.
(39:10):
They're already controlling the data process, replacing the old search with AI, and they are now building devices. They just launched a browser. They're launching enterprise and day-to-day tools, literally step by step following the blueprint that Google built over the last 20 years. Will they be successful in pushing Google aside? I don't know. Will they become a very strong competitor to Google?
(39:33):
A hundred percent. But they're not the only company that has been successful in the past couple of years, or in this past year.
The Information, which is one of my favorite digital magazines right now, just released its 2024 breakout hits: the top 50 companies that have grown the most in the past year. And many of these companies are AI companies that we've talked about a lot.
A big one that they talked about is Anysphere, which is the company
(39:55):
behind Cursor, which skyrocketed to a $27 billion valuation, with over $500 million in recurring revenue as of June, up from a $400 million valuation and $20 million in ARR just one year earlier. Suno, which is the music generator app that I really, really like, has hit $150 million in annual recurring revenue from app subscriptions and grew to over a $2 billion valuation.
(40:18):
Clay, the AI sales tool, which I also really like, got to a $3.1 billion valuation in August, sixfold from the $500 million valuation just earlier in the year. Perplexity has raised funding at an $18 billion valuation, with huge growth. ElevenLabs, which is the voice generation tool, raised a hundred million dollars at a $3.3 billion valuation in October,
(40:38):
with shareholders already doing a private sale at north of a $6 billion valuation.
So what this is showing you is that, A, these companies' valuations are growing exponentially, like nothing we've ever seen before. But, going back to whether there is or there isn't a bubble, the reality is that the revenues are growing at an insane pace that actually justifies these valuations. So everything seems to be growing at a very high pace
(41:01):
other than one thing.
So the BG2 podcast interview had OpenAI's Sam Altman and Microsoft's Satya Nadella exposing what they fear is slowing everything down, which is electricity. US data center electricity demand has surged over the past five years, outpacing the utilities that support them by a very wide margin.
(41:21):
Nadella even shared that many of Microsoft's GPUs are sitting idle due to the fact that some of their data centers cannot supply enough power to run all the GPUs. And his specific quote was: it is not a supply issue of chips, it is the fact that I don't have warm shells to plug them into; in fact, that is my problem today.
Warm shells meaning the infrastructure, from an
(41:42):
electricity perspective and a data warehouse perspective, to plug them into. But the flip side, which Sam Altman mentioned in the same interview, is the risk being taken by the existing investments in, let's call it, old-school electrical infrastructure. He said, and I'm quoting: if a very cheap form of energy comes online at a mass scale, then a
(42:03):
lot of people are going to be extremely burned with existing contracts that they've signed.
Now, as you may or may not know, Sam Altman is invested in several different startups, at very large scales, in the electricity production field, from solar to nuclear fusion and so on. So when he's saying these kinds of things, he knows what companies around the world are working on from an electricity
(42:25):
generation perspective. And when you are building a new nuclear reactor, or a new carbon-based electrical generation facility, you are not building it for five years; you're building it for 50, and the investment is spread over that. So when he's saying that this might be a big risk, he knows what he's talking about.
But since we mentioned Satya, let's switch gears and talk a little bit about Microsoft, followed by many other
(42:47):
companies.
Copilot just launched Copilot Pages, which allows Microsoft 365 Copilot license holders to create new dynamic webpages without writing any code. Behind the scenes it is running GPT-5, and you can turn it on inside the Copilot chat tab for instant preview, editing, and iteration of ideas, and for creating
(43:08):
web pages that you can immediately deploy and share with people. This has been possible on all the other large language models for a while. My favorite tool for this is actually Anthropic's Claude; I believe it creates the most effective pages right now. The other option is obviously to go to a vibe coding platform to generate something more significant, or to one of the agentic tools, such as Genspark and Manus.
(43:29):
But now you can do this within the Copilot universe, at least to an extent.
Now, I promised you news about Siri, so from Microsoft, let's switch to Apple. Apple has finalized a deal with Google to put Gemini as the engine behind Siri, at least in the short term. In this deal, Apple will pay Google $1 billion every single
(43:50):
year for it to be the engine behind Siri, as early as 2026. So if you've been following this podcast, or just been following the saga on your own, Apple has promised Apple Intelligence with a new Siri for a very long time now. And there's been a lot of drama, a lot of internal reorgs, and new people running this department, or departments, because of all the different reorgs, without any real success
(44:12):
as far as delivering a level of quality that would justify what they want to release as the next Siri. And now, after having conversations with all the different potential partners, including Anthropic and OpenAI and so on, they have selected Gemini to run the model. But they are saying that they're still developing their own internal capabilities, and that some capabilities would still run
(44:34):
based on Apple Intelligence, while most of it is going to be based on Gemini, and they're still developing a tool that will eventually replace Gemini. The other thing that they added is that these Gemini models will run on Apple's infrastructure, to guarantee the security and privacy of the information that you're going to be sharing with Siri. And this presumably puts them back on track to release the next version of Siri in the spring of
(44:56):
2026. There was a very big question mark on that, because they kept pushing it back for the last year and a half. So maybe now it's finally happening.
Staying on Google: Google just added Gemini into Google Maps, which is actually really, really cool. You can actually talk to your navigation platform as you're navigating, and it can do a lot of other interesting things. You can cross-reference images with Google Street View and
(45:18):
basically ask for landmark-based navigation: I see this kind of building, can you tell me, should I turn left or right? Drivers can also converse with Gemini in the middle of the route. The examples they gave were things like: is there a budget-friendly vegan restaurant within a couple of miles, or where is the parking next to there, and stuff like that. Also, the navigation cues are becoming more helpful. So instead of telling you to turn in 500 feet, it's going to tell you
(45:41):
to turn next to the Wawa gas station or a particular restaurant and stuff like that. And there's a new integration between Gemini and Google Lens, where you'll be able to point your camera at something that you're seeing and then have a conversation with Gemini about it, such as: What is this place? Why is it popular? What hours is it open? Or ask about a flower that you're seeing, and so on.
This gives you a glimpse of where the world is going, because,
(46:01):
as you know, many companies are developing glasses or other wearable interfaces. So combine the capabilities I just described, Google Lens combined with Gemini, navigation combined with Gemini, and combine that with translation of any language, either written or spoken, and you understand that the world we know and how we engage with it is going to change dramatically in the next two years. Now, the technology is already here.
(46:23):
I just think it will take a while for it to become mainstream; it'll probably take a year or two. But I do believe that within the next few years, we'll start seeing more and more people use AI wearables across more or less everything we do. And that comes with a whole interesting baggage of what it means from a privacy perspective and so on.
Staying on Google: Google just added a vibe coding capability inside of Google AI Studio. Some of that functionality was available before,
(46:46):
but now it's a full vibe coding platform where you can tell it what you want the application to do, and it will do it very, very quickly. The cool thing about this, which makes it a little different from other vibe coding tools, is that it builds upon all the different infrastructure, tools, capabilities, and APIs that Google already has. So it knows how to use all the different functionalities, such
(47:06):
as Maps and vision and so on, to build apps much quicker than on the other platforms, with a lot of flexibility. It is not built for developers; it is built for people like me, and probably most of you. So it's definitely worth testing out. They're getting into a highly crowded space, but they have the benefit that it is in the Google universe. So I will report as I learn more about what it can and cannot do
(47:27):
compared to some of the other tools.
Switching from Google to Anthropic: Anthropic just shared that it is expecting exponential growth from its enterprise adoption, and they're talking about hitting revenue of $70 billion by 2028, which is less than three years from now. Now, in addition to the $70 billion in revenue, they're expecting $17 billion in cash flow,
(47:49):
which is a very different scenario than OpenAI is expecting; OpenAI is expecting to keep on losing money at least until 2030. So Anthropic is expecting to continue the exponential growth they're seeing from adoption, mostly on coding and API usage, and they're seeing that flipping their gross margin from very negative to very positive. Last year they had a negative 94% gross margin,
(48:12):
and this is going to change, apparently, in the next few years. They're on pace to hit $9 billion ARR by the end of 2025, with their 2026 targets hitting $20 to $26 billion, which is roughly three x this year. And as I mentioned, a very big part of that comes from API sales, and a very big part of the API sales comes from Claude Code alone.
(48:32):
So Claude Code by itself has $1 billion in revenue, up from $400 million in July. So that's two and a half x in just a few months, coming just from Claude Code.
I love Claude Code. I use it for a lot more than just coding; you can use it for many different things. We're actually going to do an episode about how to use Claude Code to help you develop different agents and
(48:54):
tools for business use cases without understanding anything about coding. I don't understand anything about coding. So expect this episode in the next few weeks.
Now, to put things in perspective of how good Claude Code is: Brex, which is a company that creates corporate financing solutions, has been using Anthropic-powered agentic tools
(49:15):
as part of their process, and they're claiming that 80% of their new code base is generated by AI, mostly through Claude Code.
That is very, very significant. Now, I must admit that I talk to a lot of developers and managers in large tech companies, and most of them are actually pretty frustrated with how employees, or people in general, are using AI right now to
(49:37):
develop code. But I guess if you develop the right processes around it, going back to unlearning and learning how to work properly, there are huge benefits, and this is a very significant, successful company saying that 80% of its code is AI generated. It tells you where the wind is blowing, even if not everybody is getting similar results right now.
And from Anthropic to Perplexity. Perplexity is fighting back in its battle with Amazon.
(49:59):
So Amazon has sent a cease-and-desist letter to Perplexity, asking them to halt their Comet browser from shopping on the Amazon platform on behalf of users. Well, in a November 4th blog post from Perplexity that was
(50:19):
titled Bullying Is Not Innovation, the startup calls Amazon's move a bully tactic to scare disruptive companies like Perplexity out of making life better for people. They're basically saying that people have the right to use agents to do stuff on their behalf, including shopping on their favorite platforms.
And obviously Amazon wants to protect their turf. CEO Andy Jassy said on their Q3 earnings call that third-party
(50:43):
agents are lacking, and I'm quoting, personalization and shopping history, with delivery estimates frequently wrong and prices and descriptions often wrong. They're obviously trying to push Rufus, which is their own platform, and it is actually seeing huge success. So Rufus has racked up 250 million users and a 60% higher purchase
(51:05):
completion rate compared to traditional ways of shopping, and they're projecting $10 billion in annualized incremental sales coming from the usage of their own AI platform, which tells you partially why they don't want Perplexity to be a part of the equation.
I must admit I don't get it. And the reason I don't get it is that if Perplexity allows people's agents to buy on Amazon, that's just another distribution
(51:28):
channel for Amazon. And yes, they have their own platform that allows them to then sell ads and other stuff, but this is a distribution channel they didn't have before. So I'm a little confused by the approach from Amazon, and it will be interesting to see how this evolves. I don't think they'll be able to block AI agents for very long, because otherwise they will be left with no external traffic. And so they will have to figure out, unlearn, and relearn
(51:48):
how to work in this new agentic world.
Staying on AI and shopping: Shopify has shared that they see a seven x increase in traffic from AI tools to Shopify stores since January of 2025, and an eleven x surge in purchases powered by AI search. So, a very different approach than Amazon's.
(52:09):
In this particular case, Shopify is actually embracing AI, or as their president, Harley Finkelstein, said: AI is not just a feature at Shopify, it is the center of our engine that powers everything we build. As you remember, they recently signed partnerships with OpenAI's ChatGPT, with Perplexity, and with Microsoft Copilot, all to be able to show products from Shopify in these channels.
(52:31):
Again, exactly the opposite approach from Amazon's, and I don't think Amazon will be able to keep doing what they're doing right now, because companies like Shopify will start chipping away at their market share as more and more people shop through these agents instead of directly on the platforms themselves.
Speaking of interesting partnerships and Perplexity: Perplexity is going to pay $400 million in cash and equity to
(52:52):
Snap in order to become the conversational chatbot behind some of the AI features inside of Snapchat. So why would Perplexity spend $400 million on something like this? Well, Snapchat has 477 million daily active users, and that will give Perplexity access to a very large, young audience,
(53:12):
which can guarantee their future growth.
And from the US to China: we mentioned earlier that the gap between the US and China is narrowing, if it exists at all. Alibaba's Qwen3-Max just nailed several elite math contests in the US. It got a hundred percent accuracy on the American Invitational Mathematics Examination, also known as AIME, and on the Harvard-MIT Mathematics Tournament.
(53:34):
It is the first Chinese AI model to hit a perfect score on these benchmarks, across algebra, number theory, and the other areas of mathematics included in these tests. GPT-5 Pro also hit a hundred percent on the HMMT with tools, and around 93, less than that, without tools, and it hit 94.6 on the AIME without tools.
(53:57):
So right now, Alibaba actually scores higher than OpenAI, and got the perfect score on both of these tests.
Now, there's been a lot of chatter, on this podcast and obviously coming from multiple different directions: are we in a bubble or not, and are the valuations too high, and so on? Well, Michael Burry, the Oracle from The Big Short, is now
(54:18):
doubling down on his AI skepticism, which he has been very loud about for a while, and he is now putting over $1 billion of his Scion Asset Management fund, which is about 80% of the entire portfolio, into bets against Nvidia and Palantir specifically. The specific quote from him on X was: sometimes we see bubbles; sometimes the only winning move is not to play.
(54:42):
Basically, meaning he's betting against the market, and specifically against Palantir and Nvidia. The CEO of Palantir actually finds this really interesting. Alex Karp said the two companies he's shorting are the ones making all the money, which is super weird, and I agree. And yet their valuations are currently absolutely insane, regardless of the fact that they're making all the money.
So going back to my statement from the beginning of this episode: there is a growing gap between being very, very
(55:05):
successful and having valuations, and being able to spend funds, that are way beyond how fast you're growing. So are these companies growing? Yes. Are they making crazy amounts of money? Yes. Does that justify the current spending or current valuations of these companies? Not sure. And again, Michael Burry definitely knows more than I do, and he thinks no. By the way, just his statement sent both these stocks down for
(55:26):
a couple of days. Will it recover? Probably, in the short term. What's going to happen in the long term? I don't know. Don't take any investing advice from me.
This is not my role in this podcast. But staying on the topic of whether we are in a bubble or not: Gartner just shared a very interesting report, and they're claiming that the problem is not even whether these companies are making money or not, or whether there's demand for it or not. They're saying that right now, the mass rollout of agentic
(55:49):
and other AI models, platforms, products, and solutions dwarfs the current demand and the ability of companies to actually use these products. What they're saying is that we're going into a correction period, not necessarily because it's a bubble, but just because using these tools at the rate they're coming out is not possible, which will squeeze the margins of different
(56:11):
companies, which will allow the bigger players to scoop up more talent and more technology, and then become even more powerful on the other side. And if you want the exact quote, it is: while we see signs of market correction or consolidation, product leaders should recognize this is a part of a product lifecycle, not a sign of an inevitable economic crisis.
A few quick items on the hardware side of things.
(56:34):
A company called Zipper from Seattle is now releasing voice-activated glasses for people in the trades, like roofers, HVAC techs, electricians, and so on. These glasses are connected to the existing platform that they've been developing for a while to manage these kinds of employees, and it provides real-time input and feedback, as well as assistance to the employees in the field, based on what they're
(56:57):
seeing and hearing. And they can also talk and communicate back to their companies through these glasses. I think these kinds of solutions that are very specific to different fields are going to be extremely valuable. I've actually used live view mode inside of ChatGPT and Gemini to help me fix different things in the house. It is almost magical, and doing that without having to actually hold the phone will be even more impactful, especially as you are
(57:20):
working in potentially narrowspaces or on roofs where it's
unsafe to pull up your phone andlook at it.
And from the company'sperspective, there are two
aspects to the story.
The companies can track.
The employees know exactly whatthey're doing, how long they've
spent on every single kind ofjob, when they've been in and
out of the job site and so on,which on one hand allows you to
build more efficiencies andbetter training and better
operations.
But on the other hand, has areally serious feel of Big
(57:41):
Brother is watching you all thetime.
So I guess good and bad, but Ithink it's inevitable until we
see more and more of thesesolutions out in the field.
A company called Sandbar just released their AI-activated ring.
It is connected to an app on your iPhone, and later on Android and the web as well.
And it has a little button and scroll on the ring where you can talk to it, dictate to it, get its feedback
(58:03):
via voice, and be able to save information, analyze information, and get information, all through that little ring.
Whether this will be the solution for our AI communication, I don't know.
Again, I think glasses make more sense for many different reasons, but a ring is definitely a lot less invasive, and you can use it at night, you can use it in public places, and you can use it in places where you don't wanna wear
(58:24):
glasses, so it might be a part of the solution.
You can now get the silver ring for $249 or the gold ring for $299, and it comes with three months of the Pro Stream functionality, which costs $10 a month afterwards to use the ring.
Staying on product development, maybe the most impactful display of robotics that I've seen in a while came
(58:45):
from XPeng's Iron, which is their eighth generation of robots, but the third generation of a humanoid robot.
And they made these robots extremely human-like.
And what I mean by extremely human-like is, first of all, they come in different variations.
There's a female and a male version, and you can pick whether you want it to be chubby or you want it to be curvy, and so on and so forth.
But they also have artificial skin, and it looks extremely
(59:09):
human.
How human does it look? When they did their demo, many people in the audience, and skeptics over the internet, basically said that they think it's a human in disguise as a robot.
So what they did is they actually brought the robot back on stage and cut open its pants to show the artificial skin, but then cut open the artificial skin to show the
(59:29):
metal pieces moving underneath, and then had it walk on stage again, now with half of its leg exposed, showing that it's an actual robot.
The direction that they're taking it, by the way, is not to factories or home usage, but actually to service usage, such as in restaurants, coffee shops, and so on.
And that's why they built it to look very human, so it can interact with humans in the most natural way.
(59:52):
This is a very interesting direction, and a very scary development from my perspective.
This is the closest that I've seen to a future where we have robots that are indistinguishable from actual people.
That being said, they're claiming these robots follow Asimov's three laws, which I've talked about many times on the show, but they've added a fourth law of privacy: data never leaves
(01:00:13):
the robot, which is obviously important.
Speaking of robots and training them to do human work: apparently Tesla has a lab where humans train the robots.
How does that work?
Well, multiple people doing mundane, casual, day-to-day tasks for hours every single day, with a bunch of cameras installed on them and a bunch of cameras installed in the room, and all
(01:00:34):
of that information is being recorded in order to train the next version of the Optimus robot that Tesla has been developing.
This connects very well to the topic we talked about in the beginning, of people today doing jobs that will eliminate their own jobs later on in the future.
This is just a hardware version of the same thing.
An interesting legal topic: Getty Images just won, or maybe not won, the lawsuit against Stability AI over using their
(01:00:55):
data for training Stable Diffusion.
So they won part of the lawsuit, basically their claim about trademark infringement, but they were not able to prove breach of copyright, and they dropped that part of the lawsuit.
The interesting part is that this was mostly based on a technicality, where they couldn't provide evidence of where Stability AI trained their models, which allowed Stability AI
(01:01:15):
to get out of that.
But that now sets a precedent, which basically shows how much our laws, at least the UK laws right now, but I think it's the same almost everywhere around the world, are not ready for AI, and they will have to change to adapt to that.
That includes copyright laws, that includes ownership laws, that includes data usage laws, that includes what do you own
(01:01:37):
about yourself or not.
You remember the whole case last year with AI generating rap songs of known rappers, and they couldn't sue because there's nothing in the law that protects your voice as yours.
So going back to our unlearning and relearning theme, I think there's a lot of things that will need to change in legal systems around the world to deal with a new AI future.
By the way, this result sent Getty Images' stock down 8.7%,
(01:02:00):
and it makes perfect sense.
I must admit, I don't know anybody who should still be using these image databases, because you can generate your own images on the fly for less money, in a lot less time, and get something unique that doesn't show up in 167 other blog posts.
If you have been following this podcast for a while, you know that I really like mentioning the LM Arena, which is the platform that allows you to compare different AI tools and
(01:02:21):
rank them.
And based on that, they have their leaderboard, where they just announced a new leaderboard that they call LM Arena Expert, which basically takes 5.5% of the overall prompts that are used in the platform and defines them as expert prompts.
And then they're comparing the results on these, as the most difficult, most advanced prompts in the world.
And they've actually broken this into 23 different occupational
(01:02:43):
fields, and you can see the success and/or failure of different models across all these different fields.
I find this to be a lot more interesting than the existing benchmarks, which are a lot more static than actual real-life use cases.
So if you wanna know which tool works best for very specific advanced capabilities, go and check out the new LM Arena Expert leaderboard.
(01:03:05):
And then I promised you holiday cheer at the end.
Well, Coca-Cola just released another "Holidays Are Coming" campaign and new video.
And just like last year, this is an AI-generated holiday ad.
Last year there was a very big backlash from them doing this, after years and years and years of doing it with actual humans and real shots.
Well, they continued down the same path, and they continued it
(01:03:28):
with the same company that developed last year's ads, but instead of showing people, they switched to animals, which may give them a way out of hearing, "oh, but you took away people's work."
Well, it's not people, it's now animals that are showing up in the ad.
It's actually pretty cute.
I've watched it twice just to see if I'm missing anything interesting.
Some people are saying that it looks not realistic and so on,
(01:03:49):
and I think it doesn't matter.
I think it definitely delivers the message, it has a great vibe, and it's bringing the enjoyment and excitement of the holidays.
And it's not like somebody did it in their garage: generating this video took a hundred people and over 70,000 AI video clips that were put together in order to finalize the final
(01:04:09):
outcome.
So there's still a lot of work to be done, even in the current AI era, if you wanna get to the professional level that Coca-Cola is demanding from themselves.
And I'm now quoting their chief marketing officer: "Before, when we were doing the shooting and all the standard process for the project, we would start a year in advance.
Now we can get it done in around a month."
(01:04:31):
So go check out the new ad.
Let me know what you think.
And if you've been enjoying this podcast, please subscribe to it so you don't miss any episodes.
There are some really, really interesting episodes coming on the Tuesday episodes on how to do things with AI, with some amazing guests that I have already interviewed, and these episodes are coming out in the next few weeks.
Also, while you're out there, you can go and check out the two new courses.
(01:04:51):
There are going to be links to them in the show notes.
And until next time, have an amazing rest of your weekend.