Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media. Give me that total next shirt, a tim pan apple, a German shepherd, a wristband stand, and a lurching red bird, and more brave than the Turning Network. This is your weekly Better Offline monologue, and I'm your host, Ed Zitron. Now, before we go any further, I
(00:24):
need your help. Look, Better Offline is up for a Webby, and I really need you to vote for Best Episode in the Business category. It's "The Man Who Killed Google Search." It's Prabhakar Raghavan. Let's get him. I realize it's a huge pain in the ass to sign up for something and vote, but I've never won an award in my life and I'd really appreciate it. The link is going to be in the episode notes, and while you're there, also vote for the wonderful Molly Conger's Weird Little Guys,
(00:46):
which I'll also have a link to. I know signing up to stuff is annoying, I'm asking a lot from you, but there you go, I'm doing it anyway. To the monologue. I feel like we're approaching a choke point in the whole generative AI bubble, the culmination of over a year of different narratives and pressures that I believe will lead to an ultimate collapse. Last week, OpenAI released
(01:07):
an image generator with GPT-4o, which quickly gained massive attention for its ability to create images in the style of famed Japanese animation company Studio Ghibli. And to be clear, I think these images are an abomination and everyone involved in launching this tool has committed a mortal sin. Nevertheless, creating these disgusting, disgraceful images comes at an incredibly high cost, and for the last week,
(01:30):
OpenAI CEO Sam Altman has been complaining about their GPUs melting, leading to OpenAI having to limit free users to only three image generations a day, along with longer wait times and capacity issues with video generator Sora. To make matters worse, Altman also announced, and I quote, that users should expect new releases from OpenAI to be delayed, stuff to break, and
(01:52):
for services to sometimes be slow as we deal with capacity challenges. This led me to ask a very simple question, one I think everybody in the tech media really should be asking: why can't Sam Altman ask Microsoft for more GPUs? The answer, as you may have guessed from my last monologue, is that there may not actually be capacity for
(02:13):
them to do so. OpenAI's relationship with Redmond has grown kind of chilly over the past year. I'd speculate that Microsoft has refused to provide additional immediate capacity, or has refused to provide capacity on the chummy terms that OpenAI previously enjoyed, having received a significant discount on the usual ticket prices in the past. We know that Microsoft has both walked away from two gigawatts of future compute
(02:34):
capacity and declined the option to spend another twelve billion dollars on CoreWeave's compute. (CoreWeave, if you don't remember, is the publicly traded data center AI company, a whole dog's dinner unto itself.) Analyst house TD Cowen suggested that this is a sign that Microsoft is no longer willing to shoulder the immense financial
(02:54):
burden of supporting OpenAI, even though OpenAI picked that option up, by which I mean they took the twelve billion dollars of compute. It isn't clear if CoreWeave can actually build the capacity they need, and I definitely don't think they're going to be able to do it in the time they need it. Microsoft allegedly walked away from CoreWeave due to its failure to deliver the services they asked for, and indeed probably the
(03:17):
compute as well. If that's true, it's unclear what has changed to make CoreWeave magically able to support OpenAI, or even how a company that's drowning in high-interest debt can finance the creation of several billion dollars' worth of new data centers. Also, it's not quite as simple as OpenAI calling up a data center company with a bunch of GPUs and running ChatGPT.exe.
(03:37):
OpenAI likely has reams of different requirements, and the amount of GPUs they will need will likely vary based on demand, putting them in a problematic situation where they could be committing to a bunch of compute that they don't need if demand slows down. I've heard that companies generally want a six-to-twelve-month commitment for GPUs, too; the cost is fixed no matter how much they get used, or at least there's a minimum commitment. But
(04:00):
let's assume for a second that demand for ChatGPT continues to rise. How does OpenAI actually get that compute if Microsoft isn't handing it over? The Information reports that OpenAI still projects to spend about thirteen billion dollars on Azure cloud in twenty twenty five, but there aren't really a ton of other options, especially for a company with such gigantic requirements, meaning that whatever infrastructure OpenAI
(04:23):
is building is a patchwork between smaller players, and using so many smaller providers likely creates unavoidable inefficiencies and overhead.
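To make that commitment problem concrete, here's a minimal sketch. Every number in it is hypothetical, since real GPU contract rates and terms aren't public; the point is only the shape of the math.

```python
# Hypothetical sketch of why fixed GPU commitments are risky when demand varies.
# All numbers are invented for illustration; real contract terms aren't public.

HOURS_PER_MONTH = 730  # average hours in a month

def committed_cost_per_used_hour(gpus_reserved: int,
                                 rate_per_gpu_hour: float,
                                 months: int,
                                 utilization: float) -> float:
    """Cost per *actually used* GPU-hour under a fixed reservation.

    The provider bills for every reserved GPU-hour regardless of use,
    so the effective unit cost scales with 1 / utilization.
    """
    total_bill = gpus_reserved * rate_per_gpu_hour * months * HOURS_PER_MONTH
    used_hours = gpus_reserved * months * HOURS_PER_MONTH * utilization
    return total_bill / used_hours

# A six-month commitment at a (hypothetical) $2/GPU-hour rate:
print(committed_cost_per_used_hour(10_000, 2.0, 6, utilization=1.0))  # 2.0
print(committed_cost_per_used_hour(10_000, 2.0, 6, utilization=0.5))  # 4.0
```

At full utilization you pay the sticker rate; if demand halves, every GPU-hour you actually consume costs double, which is exactly the bind a company with volatile demand signs up for.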
I'm naming another pale horse of the AI apocalypse, by the way: limits to service and service degradation across ChatGPT. OpenAI is running out of compute capacity. They've talked about it since October of last year, and ChatGPT's
(04:45):
new image generation is a significant drain on their resources, meaning that to continue providing their services, they're going to need to expand capacity or otherwise reduce access to services. The problem is that expanding is extremely difficult. Data centers take three to six years to build, and OpenAI's planned Stargate data center won't have anything ready before
(05:06):
twenty twenty six at the earliest, which means we're approaching a point where there simply might not be enough data centers or GPUs to burn. While OpenAI could theoretically go to Google or Amazon, both of those companies are invested in Anthropic and have little incentive to align with OpenAI. Meta is building their own ChatGPT competitor, and Elon Musk despises Sam Altman. Real shithead versus fuckwad
(05:29):
situation there. While I can't say for certain, I can't work out where OpenAI will get the capacity to continue, and I just don't know how they're going to expand their services if Microsoft isn't providing capacity. Yes, there's Oracle, which OpenAI has a partnership with, but they're relatively small in this space. ChatGPT's image generation has become this massive burden on the company right at the
(05:51):
point where it's introducing some of its most expensive models ever, and the products themselves are extremely expensive to run. Deep Research is perhaps the best example, using OpenAI's extremely expensive o3 model, which can cost in some cases as much as one thousand dollars per query. Deep Research is probably cheaper than that, but not that much cheaper. I've heard rumors, and this is a rumor.
(06:13):
It's a rumor. I've heard like a dollar or two per query. If that's the case, that's fucking insane. Anyway. While OpenAI could absorb the remaining capacity at, say, Crusoe, Lambda, and CoreWeave, this creates a systemic risk where every GPU provider is reliant on OpenAI's money, and this assumes that they'll actually have enough to begin with. OpenAI also just closed the largest private funding round
(06:35):
in history: forty billion theoretical dollars, valuing the company at a ridiculous three hundred billion dollars, raised from SoftBank and other investors. That's good news, right? Not really. In truth, OpenAI really only raised ten billion dollars, with seven and a half billion of those dollars coming from SoftBank and another two point five billion dollars coming from other investors, including Thrive Capital and Microsoft.
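The numbers come fast here, so here's a quick sketch of the round's arithmetic using only the figures reported in this episode; the split of the remaining tranche is as reported, not an official term sheet.

```python
# The reported OpenAI round, in billions of dollars (figures from this episode).
softbank_now = 7.5     # SoftBank's initial contribution
others_now = 2.5       # Thrive Capital, Microsoft, and other investors
headline_round = 40.0  # the number in the headlines

raised_so_far = softbank_now + others_now
remaining = headline_round - raised_so_far
print(raised_so_far)  # 10.0 — what OpenAI has actually raised
print(remaining)      # 30.0 — promised for the end of the year

# SoftBank is reportedly on the hook for 20 of that remaining 30, but its
# share drops to 10 if OpenAI fails to convert to a for-profit by the end
# of 2025, shrinking the whole round accordingly:
failure_case_total = raised_so_far + 10.0 + (remaining - 20.0)
print(failure_case_total)  # 30.0 — the round total in the failure case
```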
(06:58):
The remaining thirty billion dollars, of which SoftBank is on the hook for twenty billion dollars, will arrive at the end of the year. That's if all goes well, because OpenAI will only get ten billion dollars from SoftBank, bringing it down to a thirty-billion-dollar round total, if OpenAI fails to convert from a nonprofit to a for-profit company by the end of twenty twenty five, a massive acceleration there. As a reminder,
(07:21):
OpenAI is a weirdly structured nonprofit with a for-profit arm, and their last round of funding from October twenty twenty four had another caveat: if OpenAI failed to become a for-profit company by October twenty twenty six, all investment dollars would convert into debt. I've also read that they would have to hand the money back.
I'm not sure whether that's the case. Debt is the
(07:41):
one that's been reported the most. Furthermore, OpenAI loses
money on every single prompt on ChatGPT, even from their two-hundred-dollar-a-month ChatGPT Pro subscribers.
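To see why a flat subscription can still lose money, here's a toy model. The one-dollar per-query cost is the rumored figure mentioned earlier for Deep Research-style requests, and the usage numbers are invented purely for illustration.

```python
# Toy unit economics for a $200/month subscription.
# Per-query inference costs are not public; $1/query is the rumored figure
# mentioned above, applied here only to show the shape of the problem.

def monthly_margin(subscription: float, queries: int, cost_per_query: float) -> float:
    """Subscription revenue minus inference cost for one subscriber-month."""
    return subscription - queries * cost_per_query

print(monthly_margin(200.0, 150, 1.0))  # 50.0 — a light user is profitable
print(monthly_margin(200.0, 500, 1.0))  # -300.0 — a heavy user is a loss
```

The catch with a flat fee is that the heaviest users are exactly the ones a Pro tier attracts, which is how the blended number reportedly ends up negative.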
The burdensome interest payments would make it even harder for OpenAI to reach break-even, which right now it doesn't even seem like they can do anyway. As another reminder, SoftBank is a company that has
(08:02):
now invested in two different fraudulent schemes, Wirecard and Greensill Capital, the latter of which helped put the nail in the coffin of Credit Suisse back in twenty twenty three, and put sixteen billion dollars into WeWork. It will be incredibly, some might say impossibly, difficult, and I'll cover this in a future episode, to convert OpenAI into a for-profit company, and the fact that SoftBank
(08:23):
is putting this caveat on their investment heavily suggests that they have doubts it will happen. And I must be clear: when the Monopoly man is getting nervous, you should get nervous too. The fact OpenAI accepted these terms also suggests they're desperate, and I don't blame them. They've committed eighteen billion dollars to the Stargate data center project, will spend thirteen billion dollars on Microsoft compute alone in
(08:44):
twenty twenty five, according to The Information, and they've now created an incredibly popular product that will guarantee people come and use it like twice and then never use it again.
Now, keep a keen eye on any restrictions that OpenAI makes on ChatGPT in the coming month. I do not see how this company survives, nor do I see how they expand their capacity much further. Price increases,
(09:05):
rate limits, and other ways of slowing down the pressure on their servers will likely suggest that OpenAI is up against the wall, both in their ability to support the services they provide and the costs they must bear to provide them. We are entering the hysterical era of the bubble, a time when the craziest stuff will happen as the money does everything it can to keep the dream alive. I look forward to telling you what happens next.