Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Hello, I'm Ed Zitron, and this, of course, is Better Offline. Welcome to the second part of our four-part series, where I give you my most comprehensive,
(00:24):
most up-to-date explanation of why we're in a bubble and what that even means. The reason why I'm taking my time to be descriptive and comprehensive is because I want this to make sense to those who listen to it. Having written hundreds of thousands of words this year about the AI bubble, so many of the arguments I've made and the secrets I've exposed are contained in their own discrete little episodes or newsletters. This is my
(00:45):
series to consolidate all of the information I've put out there in one place, and I want it to make sense to anyone who listens to it. I want anyone, even someone who doesn't know that much about AI, to listen to the show I've been making for the past three years, to understand why things are dire and to feel the same alarm I'm feeling, or at least understand why I'm alarmed, because I don't like to tell
(01:05):
you how you feel. That's an old school bit of feedback I got from a listener once, and I appreciate it to this day. Now, today I'll make the case that generative AI's fundamental growth story is flawed and explain why we're in the midst of an egregious bubble. This industry is sold by keeping things vague and knowing that most people don't dig much deeper than a headline, a problem I simply do not have.
(01:26):
This industry is effectively in service of two companies, OpenAI and Nvidia, who pump headlines out through endless contracts between them, or subsidiaries, or investments, to give the illusion of activity. OpenAI has now promised over four hundred billion dollars in the next four years, though honestly they might owe about a trillion dollars with all the data centers they've signed up for. All of these are egregious
(01:47):
sums for a company that has already forecast billions in losses, with no clear explanation as to how it will afford any of this beyond "we need more money" and a vague hope that there's another SoftBank or Microsoft waiting in the wings to swoop in and save the day. Now, I'm going to walk you through where I see this industry today and why I see no future for it beyond a horrible, fiery car crash. While everybody reasonably harps
(02:09):
on about hallucinations, which, to remind you, is when a model authoritatively states something that isn't true, the truth of why that's bad is far more complex, and actually far worse, than it seems. You cannot rely on a large language model to do what you want; even the most highly tuned models on the most expensive and intricate platforms can't actually be relied upon to do exactly what you want.
(02:30):
And I know some people might say, well, yes, they do, every time, one hundred percent of the time. A hallucination isn't just when these models say something that isn't true. It's when they decide to do something wrong because it seems the most likely thing to do, or when a coding model decides to go on a wild goose chase, failing the user and burning a ton of money in the process. The advent of reasoning models, those engineered to
(02:53):
think through problems in a way reminiscent of a human (but it's not thinking, they don't think, they have no consciousness; you literally ask them something and they break down what the prompt might mean and then choose a likely response, and that's not thinking), and the expansion of what people are trying to use LLMs for, demand that the definition of an AI hallucination be widened, not merely referring to factual errors, but fundamental
(03:13):
errors in understanding the user's request or intent, or what constitutes a task, in part because these models, as I said, cannot think and do not know anything. However successful a model might be in generating something good once, it will also often generate something bad, or it will generate the right thing but in an inefficient and overly verbose fashion. You do not know what you're going to get each time,
(03:34):
and hallucinations multiply with the complexity of the thing you're asking for, or whether a task contains multiple steps, which is a fatal blow to the idea of agents. You can add as many levels of intrigue and reasoning as you want, but large language models cannot be trusted to do something correctly or even consistently, let alone every time.
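To put that multi-step point in rough numbers, here's a toy illustration; the per-step reliability figures are assumptions for the sake of the example, not measurements of any real model. If every step of a task must succeed, the chance of the whole chain succeeding decays exponentially with the number of steps.

```python
# Toy illustration: per-step reliability compounds across a multi-step
# agent task. The 95% figure is an assumption, not a measured rate.

def chain_success(per_step: float, steps: int) -> float:
    """Probability an n-step task succeeds if every step must succeed."""
    return per_step ** steps

# Even a model that gets each individual step right 95% of the time
# completes a 10-step task only ~60% of the time, and a 30-step task
# only ~21% of the time.
print(round(chain_success(0.95, 10), 2))  # 0.6
print(round(chain_success(0.95, 30), 2))  # 0.21
```

That compounding is the point: consistency per step matters far more than occasional brilliance once tasks get chained together.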
Model companies have successfully convinced everybody that the issue is
(03:56):
that users are prompting the models wrong and that people need to be trained to use AI. But what they're doing is training people to explain away the inconsistencies of large language models and to assume individual responsibility for what is an innate flaw in how these fucking things work. Large language models are also uniquely expensive. Many mistakenly try and claim that this is like the dot com boom
(04:18):
or Uber, but the basic unit economics of generative AI are insane. Providers must purchase tens or hundreds of thousands of GPUs, each costing fifty thousand to seventy thousand dollars apiece, plus the hundreds of millions or billions of dollars of infrastructure that goes around them, all of it expensive and hard to install, and that's without mentioning things like staffing or construction,
(04:38):
or power or water or even permitting. Then you turn them on and immediately they start losing money. Despite hundreds of billions of dollars of GPUs sold, nobody seems to actually make any money on them, other than Nvidia, of course, the company that makes them, and resellers like Dell and Supermicro, who buy the GPUs, put them in servers, and sell them to other people. Now, if you're an eager listener,
(04:58):
I would love to hear from you on one question, as this is just something that's been bouncing around my head: is Nvidia a customer of Supermicro? Supermicro is a huge customer of Nvidia; I've read that something like seventy percent of Supermicro's cost of goods sold is buying GPUs. But I've also read that Nvidia is a customer of theirs, and I can't find anything else. Reach out to ez at betteroffline
(05:21):
dot com if you've got any thoughts there. Anyway, back to those resellers. This arrangement works out great for Jensen Huang, the CEO of Nvidia, and terribly for everybody else. Today, I'm gonna explain the insanity of the situation we find ourselves in and why I continue to do this work undeterred. The bubble has entered its most pornographic, aggressive and destructive stage, where the more obvious it becomes
(05:41):
that we're all cooked here in AI land, the more ridiculous the generative AI industry will act, a dark juxtaposition against every new study that says generative AI does not work, or new story about ChatGPT's uncanny ability to activate mental illness in people. And we're going to start by looking at one company, Nvidia, which now dominates the stock market and has taken extraordinary and dangerous measures to sustain growth
(06:03):
That is, to any sane person, completely unsustainable and unrealistic
on every level. But let's start simple. Nvidia is a
hardware company that sells GPUs, including consumer GPUs that you'd
see in a modern gaming PC. But when you read
someone say GPU within the context of AI, they mean
enterprise focused GPUs like the A one hundred, h one hundred,
(06:24):
H200, and more modern GPUs like the Blackwell series B200 and GB200, which combines two GPUs with an Nvidia CPU. This is all complex sounding, but I want you to have the groundwork. These GPUs cost anywhere from fifty to seventy thousand dollars, and require tens of thousands of dollars more of infrastructure: networking to
(06:45):
cluster these server racks of GPUs together to provide compute, massive cooling systems to deal with the massive amounts of heat they produce, as well as the servers themselves that they run on, which typically use top-of-the-line data center CPUs and contain vast quantities of high-speed memory and storage. The GPU itself is likely the most expensive single item within an AI server, but the other costs add up.
(07:05):
And I'm not even factoring in the actual physical building that the server lives in, or the water or electricity that it uses. Well, all this crap adds up. I've mentioned Nvidia because it has a virtual monopoly in this space. Generative AI effectively requires Nvidia GPUs, in part because it's the only company really making the kinds of high-powered cards that generative AI demands, and because Nvidia created
(07:26):
something called CUDA, a collection of software tools that lets programmers write software that runs on GPUs, which were traditionally used primarily for rendering graphics in games. While there are some open source alternatives, as well as alternatives from Intel with its Arc GPUs and AMD, Nvidia's main rival in the consumer space, these aren't nearly as mature or feature-rich. CUDA's been around for ten, fifteen years now;
(07:49):
they really knew what they were doing. They also bought a company called Mellanox, which did the high-speed networking, back in twenty nineteen, I think, for six billion dollars. Anyway, due to the complexities of AI models, one cannot just stand up a few of these GPUs either. You need clusters of thousands, tens of thousands, or hundreds of thousands of them for it to be worthwhile, making any investment in GPUs run into the hundreds of millions or billions of dollars,
(08:11):
especially considering they require completely different data center architecture to make them run. You've probably read a bunch of stuff about crypto miners turning into AI data center providers. Those crypto data centers have to be knocked down and replaced; you can't just put the new GPUs in, it isn't going to work. And with the brand new Blackwell chips, and the Rubin series following them, same deal.
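For a rough sense of the scale being described, here's a back-of-the-envelope sketch. Only the per-GPU price range comes from the text above; the overhead multiplier is an illustrative assumption, not a vendor quote.

```python
# Back-of-the-envelope cluster cost using the $50k-$70k per-GPU figure
# mentioned above. The 50% overhead for networking, cooling, servers,
# and buildings is an illustrative assumption.

def cluster_cost(gpus: int, unit_price: float, overhead: float = 0.5) -> float:
    """Total dollars for the GPUs plus supporting infrastructure."""
    return gpus * unit_price * (1 + overhead)

# A 10,000-GPU cluster at $60,000 per GPU lands around $900 million,
# before staffing, power, water, or permitting.
print(f"${cluster_cost(10_000, 60_000) / 1e9:.1f} billion")  # $0.9 billion
```

Scale that to the hundred-thousand-GPU clusters being discussed and you're comfortably into the billions before a single token is served.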
(08:33):
A common request, like asking a generative AI model to parse through thousands of lines of code and make a change or an addition, may use multiples of these fifty-thousand-dollar GPUs at the same time. And so, if you aspire to serve thousands or millions of concurrent users, you need to spend big. Really, really, really big. It's these factors, the vendor lock-in, the ecosystem, and the
(08:53):
fact that generative AI really only works when you're buying GPUs at scale, that underpin the rise of Nvidia. But beyond the economic and technical factors, there are human ones too. To understand the AI bubble is to understand why CEOs do the things they do. Because the executive job is so vague, they can telegraph the value of their labor by spending money on initiatives and partnerships and stratagems. AI
(09:15):
gave hyperscalers the excuse to spend hundreds of billions of dollars on data centers and buy a bunch of GPUs to go in them, because that, to the markets, looks like they're doing something. By virtue of spending a lot of money in a frighteningly short amount of time, Satya Nadella received multiple glossy profiles, all without having to prove that AI can really do anything, be it a job or make Microsoft money. Nevertheless, AI allowed CEOs to look busy,
(09:39):
and once the markets and journalists had agreed on the consensus opinion that AI would be big, all that these executives had to do was buy GPUs and do AI, or plug AI into their own software products. But really it was just jumping on the big stupid asshole train.
(10:02):
We are in the midst of one of the darkest forms of software in history, described by many as an unwanted guest invading their products, their social media feeds, their bosses' empty minds, and resting in the hands of monsters. Every story of AI's success feels bereft of any real triumph, with every literal description of its abilities involving multiple caveats about the mistakes it makes or the incredible costs of running it.
(10:25):
Generative AI really exists for two reasons: to cost money and to make executives look busy. It was meant to be the new enterprise software and the new iPhone and the new Netflix all at once, a panacea where the software guys pay one hardware guy for GPUs to unlock the incredible value creation of the future. In many ways, generative AI was always set up to fail because it was meant to be everything. It was talked
(10:46):
about like it was everything. It's still sold like it's everything. Yet for all the fucking hype, it comes down to two companies, OpenAI and Nvidia, and Nvidia was for a while living high on the hog. All CEO Jensen Huang had to do every three months was say, check out these numbers, and the markets and business journalists would squeal with glee, even as he said stuff like
(11:06):
"the more you buy, the more you save," in part tipping his hat to the very real and sensible idea of accelerated computing, but framed within the context of the cash inferno that is generative AI, it all seems kind of fucking ludicrous. Huang's showmanship worked really well for Nvidia for a while, because for a while the growth was easy. Everybody was buying GPUs. Meta, Microsoft, Amazon, Google, and to
(11:28):
a lesser extent Apple and Tesla made up forty-two percent of Nvidia's revenue, creating, at least for the first four, a degree of shared mania where everybody justified buying tens of billions of dollars of GPUs by saying the other guy's doing it. This is one of the major reasons the AI bubble is happening: people conflated Nvidia's incredible sales with interest in AI, rather than everybody simply buying GPUs at once. Don't worry, I'll explain the revenue
(11:51):
side a little bit later. We're here for the long haul. Sit down, get comfortable. You're gonna need to be. Anyway, Nvidia is now facing a big problem: the only thing that grows forever is cancer. On September ninth, twenty twenty-five, The Wall Street Journal said that Nvidia's wow factor was fading, going from beating analyst estimates by nearly twenty-one percent in its fiscal year Q2
(12:12):
twenty twenty-four earnings to scraping by with a pathetic, measly one point five two percent beat in its most recent earnings, something that for any other company would be a good thing, because they made so much money. But framed against the delusional expectations that generative AI has inspired, well, the figure looks nothing short of ominous. I quote the Wall Street Journal: Already, Nvidia's fifty-six percent annual
(12:33):
revenue growth rate in its latest quarter was its slowest in more than two years. If analyst projections hold, growth will slow further in the current quarter. In any other scenario, fifty-six percent year-over-year growth would lead to an abundance of Dom Perignon and the signing of hundreds of boobs. But this is Nvidia, and that's just not good enough. Back in February twenty twenty-four, Nvidia was booking two hundred and sixty-five percent year
(12:55):
over year growth, but in its February twenty twenty-five earnings, Nvidia only grew by a measly, pathetic, disgusting seventy-eight percent year over year. I'm being sarcastic, of course. It isn't so much that Nvidia isn't growing, but to grow year over year at the rates that people expect is insane. Life was a lot easier when Nvidia went from six point oh
(13:15):
five billion dollars in revenue in Q4 fiscal year twenty twenty-three to twenty-two billion dollars in revenue in Q4 fiscal year twenty twenty-four. But for it to grow even fifty-five percent year over year from Q2 FY twenty twenty-six, I'm just going to truncate that now, which was forty-six point seven billion dollars, to Q2 FY twenty twenty-seven, would require it to make seventy-two point three eight
(13:36):
five billion dollars in revenue in the space of three months, mostly from selling GPUs, which make up about eighty-eight percent of its revenue. I just want to be clear: they would have to make seventy-two billion dollars, pretty much just from selling GPUs and the associated hardware, in the space of three months. It's insane.
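The arithmetic behind those figures is easy to check. The revenue numbers below are the ones just quoted (with the Q4 FY2024 quarter taken as roughly twenty-two point one billion), and the rest is multiplication:

```python
# Checking the growth figures quoted above (revenue in billions of dollars).
q2_fy26 = 46.7   # Nvidia's Q2 FY2026 quarterly revenue, as stated
growth = 0.55    # the 55% year-over-year expectation

required_q2_fy27 = q2_fy26 * (1 + growth)
print(f"${required_q2_fy27:.3f}B")  # $72.385B in a single quarter

# And the earlier jump, Q4 FY2023 to Q4 FY2024 (~$6.05B to ~$22.1B):
print(f"{(22.1 / 6.05 - 1) * 100:.0f}%")  # 265% year-over-year growth
```

Same multiplication, two very different asks: the early comparisons came off a tiny base, while the growth now being demanded comes off a forty-six-billion-dollar quarter.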
This is really too much. It's
(13:58):
too much to expect. And this, by the way, would put Nvidia in the ballpark of Microsoft, which made seventy-six billion dollars in its last quarterly earnings, and within the neighborhood of Apple, which made ninety-four billion dollars in its last quarter of earnings. And it would do this predominantly by making money in an industry that a year and a half ago barely made the company six billion dollars in a quarter. And the market needs Nvidia to perform. It must, it must, as the
(14:19):
company makes up seven to eight percent of the value of the S and P five hundred. It's not enough for Nvidia to be wildly profitable, or to have a monopoly on selling GPUs, or for it to have effectively ten-xed its stock in a few years. No, no, no. More, more, more, always more. Number must go up. It must continue to grow at the fastest rate of anything ever, making more
(14:39):
and more money, selling more and more of these GPUs to a small group of companies that immediately start losing money the moment they plug them in. It's not brilliant, is it? While a few members of the Magnificent Seven could be depended on to funnel tens of billions of dollars into a furnace each quarter, there were limits, even for companies like Microsoft, which had bought over four hundred and eighty-five thousand GPUs in twenty twenty-four alone. To
(15:02):
take a step back on how people actually make money from buying these GPUs: companies like Microsoft, Google, and Amazon make their money by either selling access to large language models that people incorporate into their products, or by renting out servers full of those GPUs to run inference (the thing that generates the output) or to train AI models, for companies that develop and market their models themselves, namely Anthropic and
(15:23):
OpenAI, with some smaller competitors that don't really matter. That latter revenue stream, renting out GPUs, is where Jensen Huang found a solution to that horrible eternal growth problem: the neocloud, namely companies like CoreWeave, Lambda, and Nebius. Now, these businesses are fairly straightforward. They own or lease data centers that they then fill full of servers that are
(15:45):
full of Nvidia GPUs, which they then rent out on an hourly basis to customers, either on a per-GPU basis or in large batches for large customers who guarantee they'll use a certain amount of compute and sign up for a long-term agreement, a couple of years perhaps, for these larger commitments. A neocloud is a specialist cloud compute company that exists only to provide access
(16:06):
to GPUs for AI, unlike Amazon Web Services, Microsoft Azure, and Google Cloud, all of which have healthy businesses selling other kinds of compute, with AI, as I'll get into later, failing to provide much of a return on investment at all. It's not just the fact that these companies are more specialized than, say, AWS or Azure. As you've gathered from the name, these are new, young, and
(16:27):
in almost all cases incredibly precarious businesses, each with financial circumstances that would make a Greek finance minister blush. That's because setting up a neocloud is expensive, even if the company in question already has data centers, as CoreWeave did with its cryptocurrency mining operation. AI requires, as I said, completely new data center infrastructure to run and cool the GPUs,
(16:47):
and those GPUs also need paying for. And then there's the other stuff I mentioned earlier, like power, water, and the other bits of the computer, the CPUs, blah blah blah. As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they then use to buy more GPUs. That's right: they buy GPUs from Nvidia, they
(17:10):
raise debt on those GPUs, and then they use that debt to buy more GPUs from Nvidia. It's enough to drive a man insane. CoreWeave, for example, has twenty-five billion dollars in debt on an estimated five point three five billion dollars of revenue in twenty twenty-five, losing hundreds of millions of dollars per quarter. Now, you know who also invests in these neoclouds? You'll never guess. It's
(17:31):
Nvidia. Nvidia is also one of CoreWeave's largest customers, accounting for fifteen percent of its revenue in twenty twenty-four, and it just signed a deal to buy six point three billion dollars of any capacity that CoreWeave can't otherwise sell to someone else through twenty thirty-two, an extension of a one point three billion dollar twenty twenty-three deal reported by The Information. It was also the anchor investor in CoreWeave's IPO, at about two hundred and fifty million dollars.
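To make that circularity concrete, here's a deliberately simplified toy model of the loop: seed money comes in, most of it buys GPUs, new debt gets raised against those GPUs, and the cycle repeats. Every parameter here, the seed, the debt ratio, the GPU share, is an illustrative assumption, not a figure from any filing.

```python
# Toy model of the neocloud loop described above. All parameters are
# illustrative assumptions, not numbers from any actual filing.

def total_gpu_spend(seed: float, debt_ratio: float,
                    gpu_share: float, rounds: int) -> float:
    """Dollars that end up at the GPU vendor after `rounds` cycles of
    buying GPUs and raising fresh debt collateralized against them."""
    capital = seed
    spent = 0.0
    for _ in range(rounds):
        purchase = capital * gpu_share   # most of each raise buys GPUs
        spent += purchase
        capital = purchase * debt_ratio  # new debt raised on those GPUs
    return spent

# A $250M seed, borrowing 70 cents per GPU dollar, spending 80% of each
# raise on GPUs: after four cycles the vendor has booked ~$410M, well over
# the original investment, without any end customer appearing anywhere.
print(round(total_gpu_spend(250e6, 0.7, 0.8, 4) / 1e6))  # 410
```

The mechanism is the point, not the exact figures: the same seed dollar gets spent on GPUs more than once, and every pass lands at the vendor.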
(17:55):
Nvidia is currently doing the same thing with Lambda, another neocloud that Nvidia invested in, which also plans to go public next year. Nvidia is also one of Lambda's largest customers, signing a deal with it this summer to rent ten thousand GPUs for one point three billion dollars over four years. In the UK, Nvidia has also just invested seven hundred million dollars in Nscale,
(18:16):
a former crypto miner that has never built an AI data center, but that has, despite having no experience, committed one billion dollars and one hundred thousand GPUs to an OpenAI data center in Norway. On Thursday, September twenty-fifth, Nscale announced that it had closed another funding round with Nvidia listed as the main backer. Although it's unclear how much money Nvidia put in, it would be safe to
(18:37):
assume it's probably at least one hundred million dollars. Nvidia also invested in Nebius, an outgrowth of Russian conglomerate Yandex, and Nebius provides, through its partnership with Nvidia, tens of thousands of dollars of compute credits to companies in Nvidia's Inception startup program. Look, Nvidia's plan is simple: fund these neoclouds, let these neoclouds load themselves up with debt,
(18:59):
at which point they buy bunches of GPUs from Nvidia, which can be used as collateral for loans, along with contracts and customers, allowing the neoclouds to buy even more GPUs from Nvidia. It is just that simple. It's infinite money, right? Just money, money, money. You fund the company, the company buys from you, you fund them again, they use the thing they bought to buy more from you. Unlimited money. Except, that is, for one more problem. These
(19:23):
companies don't really appear to have that many customers, and they don't appear to be making much money. As
(19:43):
I went into in a recent premium newsletter, Nvidia funds and sustains neoclouds as a way of funneling revenue to itself, as well as to partners like Supermicro and Dell, resellers that take Nvidia GPUs, like I mentioned, and put them in servers to sell pre-built to customers. These two companies made up thirty-nine percent of Nvidia's revenues last quarter. Yet when you remove hyperscaler revenue, meaning Microsoft, Amazon, Google,
(20:05):
OpenAI, and Nvidia, from the revenues of these neoclouds, there's barely one billion dollars in revenue combined across CoreWeave, Nebius, and Lambda. CoreWeave's five point three five billion dollars in revenue is predominantly made up of its contracts with Nvidia; Microsoft, who are offering that compute to OpenAI; Google, who have hired CoreWeave to offer compute to OpenAI, and I'm not kidding; and of course
(20:26):
OpenAI itself, which has now promised CoreWeave twenty-two point four billion dollars in business over the next five years. This is all a lot of stuff, so I'll make it really simple. There's no real money in offering AI compute, but that isn't Jensen Huang's problem, so he simply wills Nvidia to hand money to these companies so that they have contracts to point at, so
(20:47):
they can raise debt to buy more of those GPUs, so that Nvidia can give them more contracts they can use to raise more money. It's really bad, all right? It's really bad. When I read this stuff out loud, I feel a little crazy, because it's so obviously unsustainable. Neoclouds are effectively giant private equity vehicles that exist to raise money to buy GPUs from Nvidia,
(21:08):
or for hyperscalers to move money around so they don't have to increase their capital expenditures and can, as Microsoft did earlier in the year, simply walk away from deals they don't like, leaving behind the masses of data center leases they walked from. Nebius recently signed a seventeen point four billion dollar deal with Microsoft, which even included a clause in its 6-K filing, an official filing with the government, that Microsoft can terminate the deal in
(21:30):
the event that the capacity isn't built by the delivery dates. And by the way, ah, Nebius already used the contract that Microsoft gave them to raise three billion dollars to, I'm not shitting you here, build the data center to actually provide the compute for that contract. They don't have it. Yeah, they don't have it. They don't have the fucking compute. They haven't fucking
(21:52):
built it. No one's built it. They haven't got the compute, mate. These fucking companies... right, anyway. Anyway, sorry, sorry, I'll stop spiraling. Let me just break down these numbers. Let's look at CoreWeave first. Microsoft: they're sixty percent of CoreWeave's revenue in twenty twenty-four, and they're providing compute mostly for OpenAI. Fifteen percent of the revenue last year was Nvidia, and then the rest was Meta and
(22:15):
then OpenAI and then Google. Lambda: half of their revenue comes from Amazon and Microsoft, and now one point five billion dollars of their revenue comes from Nvidia, which dwarfs their current revenue, by the way, even though that one point five billion dollars is spread over four years. That would make Nvidia their largest customer. I realize I'm just saying numbers here, but for real: with that contract,
(22:36):
because Lambda only made two hundred and fifty million dollars in the first half of this year, and Nvidia is spreading one point five billion dollars across four years, Nvidia is the largest customer. Now, Nebius has got similar revenue to Lambda, but their largest customer is now, it's, it's fucking Microsoft. They just don't have real customers. They just have hyperscalers, or Nvidia itself.
(23:00):
And from my analysis, it appears that CoreWeave, despite expectations to make that five point three five billion dollars this year, has only around five hundred million dollars of non-Magnificent Seven or OpenAI revenue in twenty twenty-five, with Lambda estimated to have maybe around one hundred million dollars in AI revenue otherwise, and Nebius only around two hundred and fifty million dollars, and that's being generous.
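Adding up those estimates, which are just the rough figures quoted above treated as point estimates:

```python
# Summing the estimated revenue that doesn't come from the Magnificent
# Seven or OpenAI, in millions of dollars, per the estimates above.
outside_revenue_m = {
    "CoreWeave": 500,
    "Lambda": 100,
    "Nebius": 250,
}

total_m = sum(outside_revenue_m.values())
print(f"${total_m}M combined")  # $850M: barely a billion across all three
```

Set that combined figure against the tens of billions in debt and contracts flowing through the same three companies, and the shape of the problem is obvious.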
(23:21):
In much simpler terms: the Magnificent Seven is the AI bubble, and the AI bubble exists to buy more GPUs, because, as I'll talk about, there's no real money or growth coming out of this other than the amount that private credit is investing. And that really is quite worrying, by the way. I had a quote here from an analyst that put it at about fifty billion dollars a quarter, at the low end, for the past three quarters. So why is
(23:43):
this bad?
Speaker 2 (23:44):
All right?
Speaker 1 (23:44):
I don't know. Let's start simple. Fifty billion dollars a quarter of data center funding is going into an industry that has less revenue than free-to-play mobile game Genshin Impact. That feels pretty bad. Who's going to use these data centers? How are they even going to make money on them? Firms don't typically hold on to assets. They sell them or they take them public. That doesn't seem great to me. Anyway, if AI was truly the
(24:07):
next big growth vehicle, neoclouds would be swimming in diverse global revenue streams. Instead, they're heavily centralized around the same few names, one of which, Nvidia, directly benefits from their existence, not as a company doing business but as an entity that can accrue debt and spend money on GPUs. These neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs, who's backed
(24:28):
Nebius, CoreWeave, and Lambda; JP Morgan, who's backed Lambda, Crusoe, which is building OpenAI's Abilene, Texas data center, and of course CoreWeave; and Blackstone, who's backed Lambda and CoreWeave. These firms have, in a very real sense, created an entirely debt-based infrastructure to feed billions of dollars directly to Nvidia, all in the name of an AI revolution that's yet to arrive. The fact
(24:49):
that the rest of the neocloud revenue stream is effectively either a hyperscaler or OpenAI is also concerning. Hyperscalers are at this point the majority of data center capital expenditures and have yet to prove any kind of success from building out this capacity, outside, of course, of Microsoft's investment in OpenAI, which has succeeded in generating revenue while burning billions of dollars. Well, I mean,
(25:10):
there's not really any profit, is there? They're just burning money. It's also insane when you say this stuff. I've got two more goddamn episodes of this, and when I read these scripts, I'm just like, how is nobody else more freaked out? Oh well. Hyperscaler revenue is also capricious. But even if it isn't, why are there no other major customers?
(25:30):
Why, across all of these companies, does there not seem to be one major customer who isn't OpenAI? Well, the answers are quite obvious: nobody that wants it can afford it, and those that can afford it don't need it. It's also unclear what exactly hyperscalers are doing with this compute, because it sure isn't making money. While Microsoft makes ten billion dollars in revenue from renting compute to OpenAI via its Microsoft Azure cloud, it does so at cost,
(25:54):
and was charging OpenAI one dollar and thirty cents per hour for each A100 AI GPU, at a loss on every GPU hour, meaning that it is likely losing money on this compute, especially as SemiAnalysis puts the total cost per hour per GPU at around one dollar and forty-six cents with the cost of capital and debt associated for a hyperscaler,
(26:15):
though it's unclear whether that's for an H100 or an A100 GPU. In any case, how do these neoclouds pay for their debt if the hyperscalers give up, or Nvidia doesn't send them money, or, more likely, private credit begins to notice that there's no real revenue growth outside of circular compute deals with the neoclouds' largest suppliers, investors, and customers? Don't know why I
(26:35):
said plural there, because it's just one: Nvidia. And the answer is, they don't. In fact, I have serious concerns that they can't even build the capacity necessary to fulfill these deals, but nobody seems to worry or think about that. Really, though: it appears to be taking Oracle and Crusoe around two and a half years per gigawatt of compute capacity, so how exactly are any of these neoclouds,
(26:56):
or indeed Oracle itself, able to expand to capture this revenue? Who knows, but I assume somebody is going to say OpenAI. Here's an insane statistic for you, by the way: OpenAI will account for, in both its projected thirteen billion dollars of revenue and its own compute costs, somewhere in the region of forty to fifty percent of all AI revenues in twenty twenty-five. As a reminder,
(27:18):
OpenAI has leaked that it will burn one hundred and fifteen billion dollars in the next four years, and based on my estimates, it actually needs to raise, I mean, upwards of four hundred billion dollars in the next four years, based on its three hundred billion dollar deal with Oracle and some recently announced one hundred billion dollar compute purchases for backup. And that alone is a very bad sign. Very, very bad indeed, especially as we're three
(27:40):
years and five hundred billion dollars or more into this hype cycle, with few signs of life outside of, well, OpenAI promising people money. And that's not healthy or sane or normal, and it's certainly not stable, and it's going to get bad real fast. Catch you tomorrow. Thank you
(28:05):
for listening to Better Offline. The editor and composer of the Better Offline theme song is Mattosowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com or visit betteroffline dot com to find more podcast links and, of course, my newsletter.
(28:27):
I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out our Reddit. Thank you so much for listening.
Speaker 2 (28:38):
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.