
September 11, 2025 · 32 mins

In part two of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why the AI bubble is nothing like the dot-com bubble, how the cost of inference is actually going up, and why OpenAI’s massive burn rate is nothing like Uber’s.

Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Zone Media.

Speaker 2 (00:04):
Hello everyone, welcome to Better Offline. I'm your host, Ed Zitron. This is part two of our three-part series on how to argue with an AI booster. When

(00:25):
we last left off, I'd started talking about some of
the most common and vacuous talking points used by those
who defend the generative AI industry and why a lot
of them are wholly without merit. These are the booster quips,
assertions that if you don't know much, sound convincing but
are easily disproven with the right information. And in that
last episode we addressed the quips that say we're in the early days of AI and that people doubted smartphones

(00:47):
and the internet — things they didn't doubt, at least not like they doubt generative AI. If we're doing the cycle of grief, that's the denial stage. Now we're going to
move on to bargaining. This one is: it's just like the dot-com boom, and even if all of this collapses, the overcapacity will be practical for the market, like the fiber boom was. All right, folks, time for a little history. You know me,

(01:07):
I love me some history. The fiber boom began after the Telecommunications Act of nineteen ninety six deregulated large parts of America's communications infrastructure, creating a massive boom — a five-hundred-billion-dollar one, to be precise — primarily funded with debt. Obviously,
we're still using the infrastructure bought during that boom, and

(01:28):
this fact is used as a defense of the insane
capex spending surrounding generative AI. High-speed internet is useful, right? Sure. But the fiber optic boom period was also defined by a glut of overinvestment, ridiculous valuations, and genuine, outright fraud.
In any case, this is not remotely the same thing,
and anyone making this point needs to learn the very
fucking basics of technology. Let's get going now. The fiber

(01:51):
optic cable of this era — meaning GPUs — is mostly owned by a few companies. Forty-two percent of Nvidia's revenue is from the Magnificent Seven, and the companies buying these GPUs are, for the most part, not going to go bust once the AI bubble bursts. You can also already get the cheap fiber of this era too: cheap AI GPUs, already here. GPUs are depreciating assets, meaning that the good deals are

(02:13):
already happening. I found an Nvidia A100 for two or three thousand dollars multiple times on eBay, and you can get the H100s, which are more powerful, for, well, I think thirty grand, and those things go for forty-five thousand retail. So, not brilliant. AI GPUs also do not have a variety of use cases and are limited by CUDA, Nvidia's programming libraries and APIs.

(02:33):
AI GPUs are integrated into applications using this language, CUDA, which is specifically Nvidia's programming language. While there are other use cases — scientific simulations, image and video processing, data science and analytics, medical imaging, and so on — CUDA is not a one-size-fits-all digital panacea, while fiber optic cable was. And it was also put everywhere; it

(02:57):
truly did set up the future. What are these GPUs setting up, exactly? Also, widespread access to cheaper GPUs
has already happened, and what new use cases are there?
What are the new innovative things we can do? As
a result of the AI bubble, there are now many, many, many, many,
many different vendors to get access to GPUs. You can

(03:17):
pay at an hourly rate. Who knows if it's profitable, but you can do it, and sometimes you can get them for as little as one dollar an hour, which is really not good — it definitely isn't making them money. But putting the financial collapse aside: while they might be cheaper when the AI bubble bursts, does cheaper actually enable people to do new stuff? Is cost the problem? Because I think the costs are going to go up. But

(03:38):
even if they weren't going up, what are the things that you could do that are new? What is the prohibitive cost? No one can actually answer this question, because the answer isn't fun. GPUs are built to shove massive amounts of compute into one specific function, again and again and again — like generating the output of a model, which, remember, mostly boils down to complex maths. Unlike CPUs, a GPU

(03:59):
can't easily changed tasks or handle many little distinct operations,
meaning that these things aren't going to be adopted for
another mass market use case because there probably isn't one.
In simpler terms, this was not an infrastructure built out.
The GPU boom is a heavily centralized, capital expenditure funded
asset bubble where a bunch of chips will sit in
warehouses or kind of fallow data centers waiting for somebody

(04:22):
to make up a use case for them. And if an enduring one existed, we'd already have it, because we already have all the fucking GPUs. Now here's a really big booster quip, one I have been looking forward to. I get a lot of people asking me about this. But Ed, you're so stupid. Why am I stupid, exactly? Well,
five really smart guys got together and wrote AI twenty

(04:44):
twenty seven, which is a very real-sounding extrapolation that — shut the fuck up. Shut up. Shut up. AI twenty twenty seven is fan fiction. If you were scared by this, and you're not a booster, you shouldn't feel bad — this was written to scare you. By the way, if you don't know what it is I'm talking about, you should consider yourself lucky. It's essentially a piece of

(05:04):
speculative fiction that describes a world where GenAI companies get fatter models that get exponentially better, and the US and China are embroiled in an AI arms race. It's really silly. It's so very silly, and I call it fan fiction because it is. If we're thinking about this in purely intellectual terms, it's up there with My Immortal — and no, I'm not explaining that one, you can google it for yourselves.

(05:25):
It doesn't matter if all the people writing the fan
fiction are scientists or that they have the right credentials.
They themselves said that AI twenty twenty seven is a guess — an extrapolation, which means a guess, with expert feedback, which means someone editing your fan fiction — and involves experience at OpenAI. Well, there are people that worked on the shows they write fan fiction about, too. We're not even insulting fan fiction.

(05:45):
By the way, go nuts — you are one hundred times more ethically positive than these people. At least you admit it's fan fiction. Could Knuckles get pregnant? I'm sure somebody's found out. I'm not going to go line by line and cut this up any more than I'm going to do a lengthy takedown of someone's erotic Banjo-Kazooie story, because both are fictional. The entire premise of

(06:07):
this nonsense is that at one point someone invents a self-learning agent that teaches itself stuff, and it does a bunch of other stuff requiring a bazillion compute points, with different agents with different numbers after them. There is no proof that this is possible. Nobody has done it, and nobody will do it. AI twenty twenty seven was written specifically to fool people that want to be fooled, with big charts and the right technical terms used to
(06:29):
lull the credulous into a wet dream and a New York Times column where one of the writers folds their hands and looks worried. It was also written to scare people that are already scared. It makes big, scary proclamations with tons of links to stuff that looks really legitimate, but when you piece it all together, it's literally just fan fiction, except really not that endearing. My personal favorite

(06:50):
part is Mid 2026: China Wakes Up, which involves China's intelligence agencies trying to steal OpenBrain's agent — no idea who this company could possibly be referring to; please email me if you can work it out, to idontcare@business.org — before the headline of AI Takes Some Jobs after OpenBrain releases a model. Oh God, I'm so bored even fucking talking about this now. Sarah

(07:12):
Lyons puts this well, arguing that AI twenty twenty seven and AI in general is no different from the spurious spectral evidence used to accuse someone of being a witch during the Salem witch trials. And I quote: the evidence is spectral. What is the real evidence in AI twenty twenty seven beyond trust us and vibes? The people who wrote it cite themselves in the piece. Do not demand

(07:32):
I take this seriously. This is so clearly a marketing
device to scare people into buying your product before this
imaginary window closes. Don't call me stupid for not falling
for your spectral evidence. My whole life, people have been
saying artificial intelligence is around the corner, and it never arrives.
I simply do not believe a chatbot will ever be more than a chatbot, and until you show me

(07:52):
it doing that, I will not believe it. Anyway, AI twenty twenty seven is fan fiction, nothing more. Just because it's full of fancy words and has five different grifters on its byline doesn't mean a goddamn thing. Now, now, now, folks,

(08:20):
we've all been waiting for this moment, and here's the ultimate booster quip: the cost of inference is coming down, and this proves that things are getting cheaper. And here's a bonus trick for you before I get to my point. Here we go: ask them to explain whether things have actually got cheaper, and if they say they have, ask them why there are no profitable AI companies. If they

(08:42):
say they're in the growth stage, ask them why there
are no profitable AI companies again. I'd say it's been several years and we've not got one. At this point they should try and kill you. But really, I'm about to be petty — I'm about to be petty for a fucking reason, though. In an interview on a podcast from earlier this year, one that I will not even name because the journalist in question did not back me up and it

(09:04):
pisses me off, journalist Casey Newton said the following about
my work.

Speaker 1 (09:09):
You don't think that that kind of flies in the face of Sam Altman saying that we need billions of dollars for years? No, not at all. And I think
that's why it's so important when you're reading about AI
to read people who actually interview people who work at
these companies and understand how the technology works. Because the
entire industry has been on this curve where they are
trying to find micro innovations that reduce the cost of

(09:32):
training the models and to reduce the cost of what they call inference, which is when you actually enter a query into ChatGPT. And if you plotted the curve of how the cost has been falling over time, DeepSeek is on that curve, right? So everything that DeepSeek did, it was expected by the AI labs that someone would be able to do. The novelty was just that

(09:52):
a Chinese company did it. So to say that it, like, upends expectations of how AI would be built is just purely false and the opinion of somebody who does not know what he's talking about.

Speaker 2 (10:03):
Newton then says — several octaves higher, which shows you exactly how mad he isn't — that he thought what he said was very civil, and that there are things that are true and there are things that are false, like you can choose which ones you want to believe. I'm not
going to be so civil. Other than the fact that Casey refers to micro innovations — the fuck are you talking about? —

(10:24):
and DeepSeek being on a curve that was expected, he makes, as many do, two very big mistakes. And personally, if I was doing this, I would not have said these things in a sentence that began with me suggesting that I — being Casey Newton in this example — knew how the technology works. Now, here's the Casey Newton quip: inference, which is when you actually enter

(10:47):
a query into ChatGPT. This statement is false; it's not what inference means. Inference — and I've gotten this wrong in the past too, I'm being accountable — is everything that happens when you put in a prompt to generate an output. It's when an AI infers meaning based on your input. To be more specific, and quoting Google: machine learning inference is
the process of running data points into a machine learning

(11:07):
model to calculate an output, such as a single numerical score. Except that's what these things are bad at. But nevertheless, Casey will try and weasel out of this one and say this is what he meant. It wasn't. He also said that if you plotted the curve of how the cost of inference has been falling over time — well, that's wrong, Casey. That's wrong, man. The cost of inference has gone up over time. Now, Casey, like many people who talk

(11:28):
about stuff without learning about it first, is likely referring to the fact that the price of tokens for some models has gone down in some cases. But you know what, folks? Let's establish some facts about inference. I'm doing the train. I'm pulling the big horn on the invisible train. I'm cooking now. Inference is a thing that costs money, and that cost is entirely different to the price of tokens; conflating the two is journalistic malpractice. The cost of inference would be

(11:51):
the price of running the GPU and the associated architecture. Of course, we do not at this point have any real insight into that. Token prices are set by the people who sell access to the tokens, such as OpenAI and Anthropic. For example, OpenAI dropped the price of its o3 model's tokens almost immediately after the launch of Claude Opus 4. Do you think it did that because the price of serving the model got cheaper?

(12:13):
If you do, I don't know how you possibly put
your trousers on every morning without cutting yourself in half. Now,
the cost of inference conversation comes from articles that say that we now have cheaper models that can hit higher benchmark scores. Though the article I'm referring to, which will be in the show notes, is from November twenty twenty four, and the comparison it makes is between

(12:33):
GPT-3, which is from November twenty twenty one, and Llama 3.2 3B, from September twenty twenty four. Now,
the suggestion, in any case, is that the cost of inference is going down ten x year over year. The problem, however, is that these are raw token costs, not actual evaluations of token burn in a practical setting. And — I realize that was a bit technical.

(12:54):
These are just what it costs to do something. They don't actually tell you how many tokens will be burned, or at what volume, because that would change things. And, well, wouldn't you know it, the cost of inference actually went up as a result. In
an excellent blog from Kilo Code, and I did not get the chance to find out the pronunciation of the

(13:15):
writer's second name, so I'm just going to call her Ewa — it's Ewa Szyszka, and I am so sorry, I would rather spell it than actually mispronounce it. I hate when people say Zitron wrong. Great blog. Anyway, let me quote: application inference costs increased for two reasons. The frontier models' cost per token stayed constant, and the

(13:36):
token consumption per application grew a lot. Token consumption per
application grew a lot because models allowed for longer context
windows and bigger suggestions from the models. The combination of
a steady price per token and more token consumption caused
that inference cost to grow about ten times over the
past two years. To explain that in really simple terms,
while the costs of old models may have decreased, new models,

(13:59):
which you need to do most things, cost about the same, and the reasoning that these new models use does actually burn way, way more tokens. When these new models reason, they break the user's input down into component parts, then run inference on each of those parts.
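The arithmetic behind that Kilo Code argument is worth making concrete. Here's a minimal Python sketch — the prices and token counts are made up purely for illustration, not real vendor pricing — showing how a steady per-token price combined with ten times the token burn means ten times the application's inference bill:

```python
# Illustrative arithmetic only: these numbers are invented to show the shape
# of the argument (steady token price, growing token burn), not real pricing.

def app_inference_cost(price_per_million_tokens: float,
                       tokens_per_task: int,
                       tasks: int) -> float:
    """What serving an application costs: per-token price times tokens burned."""
    return price_per_million_tokens * (tokens_per_task * tasks) / 1_000_000

# Earlier era: short prompts, no reasoning step.
early = app_inference_cost(price_per_million_tokens=10.0,
                           tokens_per_task=2_000, tasks=1_000)

# Now: the per-token price is unchanged, but longer context windows and
# reasoning chains mean each task burns roughly ten times the tokens.
late = app_inference_cost(price_per_million_tokens=10.0,
                          tokens_per_task=20_000, tasks=1_000)

print(early, late, late / early)  # 20.0 200.0 10.0
```

The per-token price never moves in this sketch, and the bill still grows ten x — which is the whole point.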
When you plug an LLM into an AI coding environment, it will naturally burn an absolute shit ton of tokens, in part because of the large amount of

(14:21):
information you have to load into the prompt and the context window — the amount of information you can load in at once — and in part because generating code is inference-intensive, and also because it breaks down all those coding tasks, with each of those tasks requiring a coding tool and taking a bunch of inference themselves. It's really bad. In fact, the inference costs are so severe that Kilo Code says that a combination of a steady price per token and

(14:43):
more token consumption caused app inference costs to grow about
ten x over the last two years. I'm repeating myself, I realize, but I really need you to get one
which is that the cost of inference went up. But
I'm not done. I refuse to let this point go
because people love to say the cost of inference is
going down when the cost of inference has increased, and
they do so to a national audience, all while suggesting

(15:04):
I'm wrong somehow and acting superior. I don't like being
made to feel this way. I don't think it's nice
to do this to people. And if you're gonna do it,
if you have the temerity to call someone out directly,
at least be fucking right. I'm not wrong; you're wrong.
In fact, software developer and influencer Theo Browne recently put out a video called I Was Wrong About AI Costs, They

(15:27):
Keep Going Up, which he breaks down as follows: reasoning
models are significantly increasing the amount of output tokens being generated.
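That point can be sketched with hypothetical numbers — the per-token prices and token counts below are invented for illustration, not real model pricing — showing how a reasoning model with cheaper tokens can still cost far more per task once its reasoning tokens are billed:

```python
# Hypothetical numbers for illustration only -- not real vendor pricing.

def cost_per_task(price_per_million_tokens: float, tokens_billed: int) -> float:
    """What one task actually costs: per-token price times tokens billed."""
    return price_per_million_tokens * tokens_billed / 1_000_000

# Non-reasoning model: pricier per token, but a short answer bills a few tokens.
plain = cost_per_task(15.0, tokens_billed=5)

# Reasoning model: a third of the per-token price, but it bills hundreds of
# reasoning tokens to produce the same short answer.
reasoning = cost_per_task(5.0, tokens_billed=603)

print(reasoning > plain)  # True: the "cheaper" model costs ~40x more per task
```

On the per-token sticker price alone, the reasoning model looks three times cheaper; per task, it's the opposite.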
These tokens are also more expensive. In one example, Browne finds that Grok 4's reasoning mode uses six hundred and three tokens to generate two words. This was a problem across
every single reasoning model, as even cheap reasoning models would

(15:48):
do the same thing. As a result, tasks are taking longer and burning more tokens. Another writer, Ethan Ding, noted a few months ago that reasoning models burn so many tokens that there is no flat subscription price that works in this new world, as the number of tokens they consume goes absolutely nuclear. The price drops have also, for the most part, stopped. You cannot at this

(16:10):
point fairly evaluate whether a model is cheaper just based
on its cost per tokens, because reasoning models inherently burn
and are built to inherently burn more tokens to create
an output. Reasoning models are also the only way that
model developers have been able to improve the efficacy of
new models, using something called test time compute to burn
extra tokens to complete a task, and in basically anything

(16:30):
you're using today, there's going to be some sort of
reasoning model, especially if you're coding, the cost of inference
has gone up. Statements otherwise are purely false and are
the opinion of somebody who does not know what he's
talking about. But you ask, could the costs of inference
go down? Maybe it sure isn't trending that way, nor
has it gone down yet. I also predict that there's

(16:51):
going to be some sort of sudden realization in the media that inference costs are going up, which has kind of already started. The Information had a piece on it in late August where they note that Intuit paid twenty million dollars to Azure last year, primarily to access OpenAI's models, and it's on track to spend thirty million this year, which outpaces the company's revenue growth in the same period, raising questions about how sustainable the spending

(17:11):
is and how much of the cost it can pass along to customers. Christopher Mims at The Wall Street Journal also had a piece about the costs going up. Do not be mad at Chris — Chris and I chatted before he submitted that piece; he literally called me out on Bluesky. It fucking rocks, by the way. Big up to Chris Mims, because it's nice to see the mainstream media actually engaging with these things, even though it's dangerous to the bubble. But you know what? The truth

(17:34):
must win out. And the problem here is that the architecture underlying large language models is inherently unreliable. I imagine OpenAI's introduction of the router to ChatGPT-5 was an attempt to moderate both the costs of the model chosen and to reduce the amount of exposure to reasoning models for simple queries, though Sam Altman was boasting on August tenth about the significant increase in both free and paid

(17:54):
users' exposure to reasoning models. They don't teach you this in business school. Still, a study written up by VentureBeat
found that open-weight models burn between one point five and four times more tokens, in part due to a lack of token efficiency, and in part thanks to — you guessed it — reasoning models. I quote: the findings challenge a prevailing assumption in the AI industry that open source models

(18:16):
offer clear economic advantages over proprietary alternatives. While open source models typically cost less per token to run, the study suggests that this advantage could be — and I quote the study — easily offset if they require more tokens to reason about a given problem. And models keep getting bigger and more expensive too. So why did this happen? Well,
it's because model developers hit a wall of diminishing returns

(18:39):
and the only way to make models do more was to make them burn more tokens to generate a more accurate response, which is a very simple way of describing reasoning — a thing that OpenAI launched in September twenty twenty four, and others followed. As a result, all the gains from powerful new models come from burning more and more tokens. The cost-per-million-tokens number is no longer an accurate measure of the actual cost of generative

(18:59):
AI, because it's much, much, much, much harder to tell how many tokens a reasoning model may burn, and it varies — as the... boint, the O Boying, I'm keeping that, all right, you get the real cuts — as Theo Browne noted, from model to model. In any case, there really is no changing this path. These companies are out of ideas. Now, another one of my favorite ultimate

(19:22):
booster quips. This is a classic, and I still get this on social media. I have people yapping in my ear saying OpenAI and Anthropic are just like Uber, because Uber burned twenty five billion dollars over the course of fifteen or so years and — look, look, Edward, they're now profitable. Why are you calling me Edward? Shut up. This proves that OpenAI, a totally different company with

(19:43):
different economics, will be totally fine. So I've heard this
argument maybe fifty times in the last year, to the
point that I had to talk about it in my piece, How Does OpenAI Survive?, which I also turned into a podcast, around July twenty twenty four. Go back and look — there's a link to it in the piece. Yadda yadda yadda. Nevertheless, people make a few points about Uber and AI that I think are fundamentally incorrect, and I'm going to break them down for you now. They claim that AI is

(20:05):
making itself too big to fail, embedding itself everywhere, and becoming essential — and none of these things is the case. I've heard this argument a lot, by the way, and it's one that's both ahistorical and alarmingly ignorant of the very basics of society. But Ed, the government — no, no, no, no, no, you've heard, you've heard: OpenAI got a two hundred million dollar Defense contract with an estimated completion date of July

(20:26):
twenty twenty six. And just to be clear, that's up to two hundred million dollars. And they're selling ChatGPT Enterprise to the US government for a dollar a year, along with Anthropic doing the same thing, and even Google's doing it, except they're doing forty cents for a year. Now, you're probably hearing this and thinking: ah shit, this means the government's paying them, they're never going away. And I

(20:47):
very intention of these deals. They are built specifically to
make you feel like these things are never going away.
This is also an attempt to get in with the government at a rate that makes trying these models a no-brainer. At which point I ask: and? The government having cheap access to AI software does not mean that the government relies on it. Every member

(21:08):
of the government having access to ChatGPT — something that is not even necessarily the case — does not make this software useful, let alone essential. And if OpenAI burns a bunch of money making it work for them, it still won't be essential, because large language models are not actually that useful for doing stuff. Now let's talk Uber.
Uber was and is useful, which eventually made it essential.

(21:30):
Uber used lobbyist Bradley Tusk to steamroll local governments into allowing Uber to operate in their cities, but Tusk did not have to convince local governments that Uber was useful, or have to train people how to use Uber. Uber's too-big-to-fail moment was that local cabs kind of fucking sucked just about everywhere. You ever try and take a yellow cab from downtown Manhattan to Hoboken,

(21:50):
New Jersey, or Brooklyn, or Queens? Did you ever try
and pay with a credit card? How about trying to
get a cab outside a major metropolitan area. Do you
remember how bad it was? It was really awful. I
don't think people realize or remember how bad it was.
And I'm not saying that Uber is good. I'm not
glorifying Uber in any way. But the experience that Uber

(22:11):
replaced was very, very bad. As a result, Uber did
become too big to fail because people now rely on
it because the old system sucked. Uber used its masses
of venture capital to keep prices low to get people
used to it too, but the fundamental experience was better
than calling a cab company and hoping they showed up.
I also want to be clear that this is not me condoning Uber — take public transport if you can, just to

(22:32):
be clear. Uber has created a new kind of horrifying, extractive labor practice which deprives people of benefits and dignity, paying off academics to help the media gloss over the horrors of its platform, and it also had to increase prices so that it reached profitability. That isn't something that's going to happen with generative AI — the costs are just too high. They're way too high. But anyway,

(23:09):
what is essential about generative AI? What exactly — and be specific — is the essential experience of generative AI? If ChatGPT disappeared tomorrow, what actually disappears? And on an enterprise or governmental level, what exactly are these tools doing for governments that would make removing them so painful?

(23:31):
What use cases, what outcomes? If your answer here is to say, well, they're putting it in and they're choosing — they're choosing which people to cut off of benefits — oh please, goddamn. This is what they want you to do. They want you to be scared so they can feel powerful. They're not doing that. You notice that we get all these horrible stories, by the way, of internal government things shoving

(23:51):
stuff into LLMs. You know what we don't get? Another thing we don't get: oh, and then it happened. It's just: they're doing this scary, bad thing that they shouldn't be doing — they shouldn't be putting people's private information into these things. Anyway, I'm rambling.
Uber's essential nature is that millions of people use it in place of regular taxis, and it effectively replaced decrepit and exploitative systems like the yellow cab medallions in

(24:13):
New York with its own tech enabled exploitation system that
nevertheless worked far better for the user. Okay, I also
want to do a side note just to acknowledge that
the disruption from Uber brought something to the medallion system
that was genuinely horrendous. The consequences were horrifying for the
owners of the medallions, some of whom had paid more than a million dollars for the privilege of driving a

(24:34):
New York cab and were burdened under mountains of debt.
That whole system is so fucking evil. I think it's horrifying, and I think the payday loan people involved should all be in fucking prison — the worst scum of the world, the people who were taking advantage of people coming to this country to drive a fucking cab that they had to take out massive loans to buy. That is evil. Uber

(24:55):
is also, just to be clear — but that's also the point I'm trying to make: you should feel sorry for the victims of that system. That system was a kind of corruption unto itself. Anyway, getting back to the
because I don't know, I feel I actually feel a
lot for the people who are the victims of the
medallion system. It's fucking rough, and every time I think

(25:17):
of it, I feel very sad inside. But let's get
back to the episode. I don't want to think about
it any longer. There really are no essential use cases for ChatGPT, or really any GenAI system. You cannot point to one use case that is anywhere near as necessary as cabs in cities. And indeed, the biggest use cases, things like brainstorming and search, are either easily replaced by any other commoditized LLM, or already exist in the

(25:39):
case of Google Search. Now let's do another booster quip: data centers are important economic growth vehicles, now helping drive innovation and jobs throughout America, and having data centers promotes innovation, making OpenAI and AI data centers essential. And the answer to that is no. No, sorry, this is a
really simple one. These data centers are not in and

(26:00):
of themselves driving much economic growth other than the costs
of building them, which I went into last episode. As
I've discussed again and again, there's maybe forty billion dollars
in revenue and no profit coming out of AI companies.
There isn't any economic growth. They're not holding up anything
other than the massive, massive infrastructure built to make them
make no money and lose billions. There's no great loss

(26:23):
associated with the death of large language models or the
death of this era. Taking away Uber would be genuinely catastrophic for some people's ability to get places and for people's jobs, even if they are horrifyingly underpaid. But here's another booster quip: Uber burned a lot of money, twenty five billion dollars or more, to get where it is today. Ooh, Mister Zitron,

(26:43):
Mister Zitron, you're dead. And my response is that OpenAI and Anthropic have both separately burned more than four times as much money since the beginning of twenty twenty four as Uber did in its entire existence. So the classic and wrong argument about OpenAI and companies like OpenAI is that Uber burned a bunch of money and is now cash-flow positive, or profitable. I want to be clear that Uber's costs are nothing like large language models',

(27:06):
and making this comparison is ridiculous and desperate. But let's
talk about raw losses, shall we, and where people are
making this assumption. So Uber lost twenty four point nine
billion dollars in the space of four years from twenty
nineteen to twenty twenty two, in part because of the
billions it was spending on sales and marketing in R
and D four point six billion dollars and four point
eight billion dollars respectively in twenty nineteen alone. It also

(27:27):
massively subsidized the cost of rides, which is why prices
had to increase, and spent heavily on driver recruitment, burning
cash to get scale, you know, the classic Silicon Valley way.
This is absolutely nothing like how large language models are growing.
And I'm tired of defending this point, but defend it I
shall. OpenAI and Anthropic burn money primarily through compute
costs and specialized talent. These costs are increasing, especially with

(27:50):
the rush to hire every single AI scientist at the
most expensive price possible. There are also essential, immovable costs
that neither OpenAI nor Anthropic has to shoulder: the
construction of the data centers necessary to train and run
inference for their models, and of course the GPUs
inside them, which I will get to in a little bit. Yes,
Uber raised thirty three point five billion dollars through multiple

(28:11):
rounds, including post-IPO debt, though it raised about twenty
five billion dollars in actual funding. Yes, Uber burned an
absolute ton of money. Yes, Uber scaled, but
Uber has not burned money as a means of making
its product functional or useful. Uber worked immediately. I mean,
it was twenty twelve, I think, when I used it for the
first time. Maybe earlier. No, no, it would have been
twenty ten. It worked immediately. You used it and you're like, wow,

(28:34):
I can just put in my address. I don't have
to say my address three times because I have a
British accent and nobody can fucking understand me. Sometimes you can,
though, you're special. Yeah, it was really obvious that it worked.
And also, the capital expenditures associated with Uber from twenty
nineteen through twenty twenty four were around two point two
billion dollars, which is, by the way, minuscule

(28:54):
compared to the actual real costs of OpenAI and Anthropic.
Both OpenAI and Anthropic burned around five billion dollars each
in twenty twenty four, but their infrastructure was entirely paid
for by either Microsoft, Google, or Amazon, by which
I mean the building of it and the expansion thereof.
While we don't know how much of this infrastructure
is specifically for OpenAI or Anthropic, as the largest

(29:16):
model developers, it's fair to assume that a large chunk,
at least thirty percent, of Amazon's and Microsoft's capital expenditures
has been to support these loads. Great sentence to cut
and listen to again. I also leave out Google, as
it's unclear whether it's expanded its infrastructure for Anthropic, but
we know Amazon has done so. As a result, the
true cost of OpenAI and Anthropic is at least
ten times what Uber burned. Amazon spent eighty three billion dollars

(29:39):
in capital expenditures in twenty twenty four and expects to
spend one hundred and five billion dollars, the fuckers, in twenty
twenty five. Microsoft spent fifty five point six billion dollars
in twenty twenty four and expects to spend eighty billion
dollars this year. I'm actually confident most of that is
OpenAI, but based on my conservative calculations, the true
cost of OpenAI is at least eighty two billion dollars,
and that only includes capex from twenty twenty four onwards, based

(30:01):
on thirty percent of Microsoft's capex (not everything has
been invested yet in twenty twenty five, and OpenAI
might not be all of the capex), and also the
forty one point four billion dollars of funding that OpenAI
has received so far. The true cost of Anthropic
is around seventy seven point one billion dollars, and that's
not including the thirteen billion they just raised, but it
does include all their previous funding and thirty percent of

(30:23):
Amazon's capex from the beginning of twenty twenty four. Now,
these are inexact comparisons, but the classic argument is
that Uber burned lots of money and worked out okay,
when in fact the combined capital expenditures from twenty twenty
four onwards that are necessary to make OpenAI and
Anthropic work are, each on their own, four times what Uber
burned in over a decade. I also believe these numbers

(30:45):
are conservative. There's a good chance that OpenAI and
Anthropic dominate the capex of Amazon, Google, and Microsoft, in
part because, well, what the fuck else are they buying
all these GPUs for? Their own AI services don't
appear to be making much money at all. Anyway, to
put it real simple, AI has burned way more in
the last two years than Uber burned in ten. Uber
didn't burn money in the same way, didn't burn much

(31:07):
in the way of capital expenditures, didn't require massive amounts
of infrastructure, and isn't remotely the same in any way,
shape or form other than that it burned a lot
of money. And that burning wasn't because it was trying
to build the core product. It was trying to scale.
It's all so stupid, And you know what, I'm not
even done. Our next and final AI booster episode will
breeze through the dumbest of the dumb arguments, and I'll

(31:31):
say why I'm finally drawing a line under these arguments
for real, because it needs to be said. We need
to say something. I hope you've enjoyed this, see you tomorrow, godspeed.
Thank you for listening to Better Offline. The editor and

(31:52):
composer of the Better Offline theme song is Mattosowski. You
can check out more of his music and audio projects
at Mattosowski dot com, M A T T O S
O W S K I dot com. You can email me
at ez at betteroffline dot com or visit betteroffline
dot com to find more podcast links and of course,
my newsletter. I also really recommend you go to chat

(32:13):
dot wheresyoured dot at to visit the Discord, and
go to r slash BetterOffline to check out our subreddit. Thank you so much for listening. Better Offline is a
Thank you so much for listening. Better Offline is a
production of cool Zone Media. For more from cool Zone Media,
visit our website cool Zonemedia dot com, or check us
out on the iHeartRadio app, Apple Podcasts, or wherever you
get your podcasts.