All Episodes

September 30, 2025 24 mins

In part one of this week’s four-part case against generative AI, Ed Zitron walks you through how generative AI is sold through a complete misunderstanding of the concept of labor - and myth-building by companies like NVIDIA and OpenAI.

Latest Premium Newsletter: OpenAI Needs A Trillion Dollars In The Next Four Years: https://www.wheresyoured.at/openai-onetrillion/ 

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

Newsletter: wheresyoured.at

Reddit: http://www.reddit.com/r/betteroffline

Discord: chat.wheresyoured.at

Ed's Socials -

http://www.twitter.com/edzitron

instagram.com/edzitron

https://bsky.app/profile/edzitron.com

https://www.threads.net/@edzitron

email me ez@betteroffline.com

SOURCE LINKS: http://www.tinyurl.com/betterofflinelinks 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Zone Media. Hello, and welcome to Better Offline. I am,
of course, your host, Ed Zitron, and after a
few three-part episodes, I had an idea: what if

(00:22):
I did a four-parter? In all seriousness, I know
that this is a little bit long, but the topic
we're about to explore demands quite a bit of depth,
and it isn't something I could really do justice to
in a one-parter or two-parter, or I guess
even a three-parter, but let's get into it. Over the
last few months, we've felt the vibes shift downward in
an aggressive way, with both Mark Zuckerberg and Clammy Sam

(00:43):
Altman saying that we're in a bubble. In the latter
case, warnings of a bubble are always couched in
rank hypocrisy, as it's always implied that whoever it is
and the companies they represent aren't part of that bubble,
but rather it's other people and other companies making unfortunate decisions.
The thing is, there's really no escape for either of
these guys, not for Zuck, definitely not for Sam Altman.

(01:05):
And over the next four episodes, I'm going to make
a comprehensive case for the fact that we're in a
bubble and condense everything I've been talking about into one series.
And I know I've been all over the place, and
I get a lot of people saying, oh, well, where
did you talk about this? And where'd you talk about that?
And that's kind of fair when you put out as
much as I do. But I'm going to break this
down in four episodes. I'm going to give you a
comprehensive argument against the bubble. Well, I mean the argument for

(01:29):
a bubble, I guess, but against generative AI in general.
But in this episode, I think it's good to start
from the beginning and work our way forward, to track
the thread from the origins of ChatGPT to the
billions spent building data centers all over the world and
the weak business justifications for burning nearly a trillion
dollars to keep this hollow industry alive. Now, in twenty
twenty two, a company called OpenAI surprised

(01:51):
the world with a website called ChatGPT. It could
generate text that sort of sounded like a person, using
a technology called large language models, or LLMs, which can also
be used to generate images, video, and computer code, or
at least would eventually. Large language models require entire clusters
of servers connected with high speed networking, each containing this
thing called a GPU, a graphics processing unit. These are different

(02:13):
to the GPUs in your Xbox or laptop or gaming PC.
They cost much, much more, and they're good at doing
the processes of inference, the creation of an output of
any LLM, and training, feeding masses of training data to
the models, or feeding them information about what a good
output might look like so they can later identify a
thing or replicate it. These models showed some immediate promise
in their ability to articulate concepts or generate video, visuals,

(02:36):
audio, text, and code. They also immediately had one glaring,
obvious problem: because they're probabilistic, meaning that they're just guessing
at whatever the right output might be, these models can't actually
be relied upon to do exactly the same thing every
single time. So if you generated a picture of a
person that you wanted to use, for example, in a storybook,

(02:56):
every time you created a new page using the same
prompt to describe the protagonist, the person would look different,
and that difference could be minor, something that a
reader could shrug off, or it could make the character
look like a completely different person. Now, none of this,
by the way, is me validating or saying that any
of this stuff is good. I'm just describing it. Moreover,
the probabilistic nature of generative AI meant that whenever you

(03:17):
asked it a question, it would guess as to the answer,
not because it knew the answer, but rather because it
was guessing at the right word to add to a
sentence based on previous training data. As a result, these
models would frequently make mistakes, something which we later referred
to as hallucinations. And that's not even mentioning the cost
of training these models, the cost of running them, the
vast amounts of computational power they required, the fact that

(03:38):
the legality of using materials straight from books and the
web without the owner's permission was and remains legally dubious,
or the fact that nobody seemed to know how to
use these models to actually create profitable businesses. These problems
were overshadowed by something flashy and new, and something that
investors and the tech media believed would eventually automate the
jobs that have proven most resistant to automation: knowledge work

(03:59):
and the creative economy. The newness and hype and these
expectations sent the market into a frenzy, with every hyperscaler
immediately creating the most aggressive market for one supplier
I've ever seen. Nvidia has sold over two hundred billion
dollars of GPUs since the beginning of twenty twenty three,
becoming the largest company on the American stock market and
trading at over one hundred and seventy dollars as of writing

(04:20):
this sentence, only a few years after being worth
nineteen dollars and fifty two cents a share. Now, there's
a stock split that happened there, but it works out
that way. Now, while I've talked about some of the
propelling factors behind the AI wave, automation and novelty, that's
not really the complete picture. A huge reason why everybody
decided to do AI was because the software industry's growth

(04:41):
was slowing, and SaaS, software as a service, company valuations
were stalling or dropping, resulting in the terrifying prospect of
companies having to underpromise and overdeliver and be efficient,
you know, gross things like running sustainable businesses. Things that
normal companies, those whose valuations aren't contingent on ever-increasing,
ever-constant growth, don't have to worry about because they're

(05:02):
normal companies. Suddenly there was a new promise of new technology,
large language models that were getting exponentially more powerful, which
was mostly a lie but hard to disprove because powerful
can mean basically anything, and the definition of powerful depended
entirely on whoever you asked at any given time and
what that person's motivations were. The media also immediately started

(05:23):
tripping over its own feet, mistakenly claiming OpenAI's GPT-4
model tricked a TaskRabbit worker into solving a CAPTCHA.
It didn't. This never happened. Or saying that, and I
quote, people who don't know how to code already used
bots to produce full-fledged games. And if you were
wondering what the New York Times was referring to when
they said full-fledged there, it meant Pong and a
cobbled-together rolling demo of the game SkyRoads from nineteen ninety three,

(05:46):
likely because a bunch of that training data was fed
into the models. Now, the media and investors helped peddle
the narrative that AI was always getting better, could basically
do anything, and that any problems you saw today would
inevitably be solved in a few short months or years.
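To make the "guessing the next word" idea concrete, here is a toy sketch of probabilistic, temperature-based sampling. Everything in it, the tiny vocabulary, the probabilities, and the function name, is invented for illustration; it is nothing like a production model's actual code, but it shows why the same prompt need not produce the same output twice:

```python
import random

# Toy next-token distribution for the prompt "The cat sat on the".
# (Illustrative numbers only; a real LLM scores ~100,000 tokens.)
NEXT_TOKEN_PROBS = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next_token(probs, temperature=1.0, rng=random):
    """Pick a token by weighted chance. Lower temperature concentrates
    probability on the likeliest token; temperature 0 is pure argmax."""
    if temperature == 0:
        return max(probs, key=probs.get)
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Ten samples from the same "prompt": each one is a fresh guess,
# which is why identical requests can yield different outputs.
rng = random.Random(0)
runs = [sample_next_token(NEXT_TOKEN_PROBS, temperature=1.0, rng=rng)
        for _ in range(10)]
print(runs)
```

Only at temperature zero does the sampler become deterministic, always returning the most likely token; at any higher temperature, every call is a weighted dice roll.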
Or at some point. I'm not really sure when
that point is, but damn do they think it's coming.

(06:06):
And LLMs were touted as a kind of digital panacea,
and the companies building them offered traditional software companies
the chance to plug these models into their software using
an API, thus allowing them to ride the same generative
AI wave that every other company was riding. The model
companies similarly started going after individual and business customers, offering
software and subscriptions that promised the world, though this mostly

(06:28):
boiled down to chatbots that could generate stuff, and then
doubled down with the promise of agents, a marketing term
that's meant to make you think autonomous digital worker, but
really means broken digital chatbot of some sort or just
broken digital product. It really depends how you're feeling that day.
Throughout this era, investors and the media spoke with a
sense of inevitability that they never really backed up with data.

(06:50):
It was an era based on confidently asserted vibes. Everything
was always getting better and more powerful, even though there
was never much proof that this was truly disruptive technology
other than in its ability to disrupt apps you were
using with AI, making them worse. For example, suggesting questions
on every Facebook post that you could ask Meta AI,
but which Meta AI couldn't answer. And I mean on memes,

(07:13):
on just random posts. It's really not useful in any way,
shape or form. AI became omnipresent, and it eventually grew
to mean everything and nothing. OpenAI would see its
every move lauded over like a gifted child, its CEO
Sam Altman called the Oppenheimer of our age, even if
it wasn't really obvious why everybody was impressed. GPT-4
felt like something a bit different, but was it actually meaningful?

(07:36):
The thing is, artificial intelligence is built and sold
on not just faith, but a series of myths that
AI boosters expect us to believe with the same certainty
that we treat things like gravity or the boiling point
of water. Can large language models actually replace coders? Not really, no,
and I'll get into why later in this series. Can Sora,

(08:05):
OpenAI's video creation tool, replace actors or animators? No,
not at all. But it still fills the air full
of tension, because you can immediately see who is eager
to replace everyone that works for them. AI is apparently
replacing workers, but nobody seems able to prove it at scale.
But every few weeks a story runs where everybody tries
to pretend that AI is replacing workers with some sort

(08:26):
of poorly sourced and incomprehensible study, never actually saying somebody's
job got replaced by AI, because it isn't happening at scale,
and because if you provide real world examples, people can
actually check if they're true. Now, I want to be clear,
some people have lost jobs to AI, just not really
white collar workers, software engineers, or really any of the
career paths that the mainstream media and AI investors would

(08:48):
have you believe. Brian Merchant has done excellent work covering
how LLMs have devoured the work of translators, using cheap,
almost-good automation to lower already stagnant wages in a
field that had already been hurting before the advent of
generative AI, with some having to abandon the field and
others pushed into bankruptcy. I've heard the same for art directors,
SEO experts, and copy editors, and Christopher Mims of The

(09:09):
Wall Street Journal covered these last year. These fields all
have something in common: shitty bosses with little regard for
their customers, who have been eagerly waiting for the opportunity
to slash labor. To quote Merchant, the drumbeat in marketing
and pop culture of powerful AI encourages and permits management
to replace or degrade jobs they might not otherwise have.
Across the board, the people being replaced by AI are

(09:30):
the victims of lazy, incompetent cost cutters who don't care
if they ship poorly translated text. To quote Merchant again,
AI hype has created the cover necessary to justify slashing rates
and accepting just-good-enough automation output for video games
and media products. Yet the jobs crisis facing translators speaks
to the larger flaws of the large language model era
and why other careers aren't seeing this kind of disruption.

(09:53):
Generative AI creates outputs, and by extension defines all labor
as some kind of output created from a request. In
the case of translation, it's possible for a company to get
by with a shitty version, because many customers see translation
as what do these words say, even though, as one
worker told Brian Merchant, translation is about conveying meaning. Nevertheless,
translation workers had already started to shift to a world where

(10:14):
humans would at times clean up machine generated text, and
the same worker warned that the same might come for
other industries. Yet the problem is that translation is a
heavily output driven industry, one where idiot bosses can say, oh, yeah,
that's fine because they ran an output back through Google
Translate and it seemed fine in their native tongue. The
problems of a poor translation are obvious, but customers of

(10:35):
translation are, it seems, often capable of getting by with
a shitty product. The problem is that most jobs are
not output driven at all, and what we're buying from
a human being is a person's ability to think and do.
Every CEO talking about replacing workers with AI is an
example of the real problem that most companies are run
by people who don't understand or experience the problems they're solving,

(10:56):
don't do any real work, don't face any real problems,
and thus can never be trusted to solve them. In
the Era of the Business Idiot, which is a piece
I wrote a while ago, I talked about how this
was the result of letting management consultants and neoliberal free
market sociopaths take over everything, leaving us with companies run
by people who don't know how the companies make money,
just that they must always make more without fail and

(11:17):
when you're a big stupid asshole, every job that you
see is condensed to its outputs, and not the stuff
that leads up to the output, or the small nuances
and conscious decisions that make an output good as opposed
to simply acceptable or even bad. What does the software
engineer do? They write code, right? What does a writer do?
They write words, right? What does a hairdresser do? They
cut hair, yeah. That's of course not actually the case.

(11:40):
As I'll get into later in the series, the software
engineer does far more than just code. And when they
write code, they're not just saying what would solve this
problem with a big smile on their face. They're taking
into account their years of experience, what code does, what
code could do, and all the things that might break
as a result, and all of the things that you
can't really tell from just looking at the code, like
whether there's a reason things are made in a particular way,

(12:01):
And a good coder doesn't just hammer at the keyboard
with the aim of doing a particular task. They factor
in questions like how does this functionality fit into the
code that's already there? Or if someone has to update
this code in the future, how do I make it
easy for them to understand what I've written and make
changes without breaking a bunch of other stuff. A writer
doesn't just write words. They distill ideas and
emotions and thoughts and facts and feelings into a condensed

(12:23):
piece of text. They sit up late at night typing
thousands and thousands of words, and it drives them insane.
It's often quite emotive, or at the very least
driven by or inspired by a given emotion, which is
something that an AI simply can't replicate in a way
that's authentic or believable. And a hairdresser doesn't just cut hair.
They cut your hair, which may be wiry, dry, oily, long,

(12:43):
short, healthy, unhealthy, on a scalp with particular issues, at
a time of year when perhaps you want to change length,
at a time that fits you, in the way you
like it, which may be impossible to actually write down.
But they get it just right, and they make conversation,
making you feel at ease while they snip and clip away
at tresses, with you never having to think for a second: fuck,
does this person know what they're doing? Are they going
to listen to me? This is the true nature of labor.

(13:05):
The thing executives fail to comprehend at scale is that the things
we do are not units of work, but extrapolations of experience, emotion,
and context that cannot be condensed into written meaning or
bunches of training material. Business idiots see our labor as
the results of a smart manager saying do this, rather
than human ingenuity interpreting both the requests and the shit
the manager didn't say. Now, what does the CEO do? Well,

(13:32):
I did look, and a Harvard study said that they
spend twenty five percent of their time on people and relationships,
twenty five percent on functional and business unit reviews, sixteen
percent on organization and culture, and twenty one percent on
just strategy, with a few percent here and there for
things like professional development. Hmm. That's who runs the vast

(13:52):
majority of companies: people that describe their work predominantly as
looking at staff, talking to people, thinking about what we do next,
and going to lunch. The most highly paid jobs in
the world are impossible to describe, their labor described in
a mishmash of LinkedIn inspiration. Yet everybody else's labor is
an output that can be automated. As a result, large
language models must seem like magic to these dickheads. When

(14:14):
you see everything as an outcome, an outcome you may
or may not understand and definitely don't understand the process behind,
let alone care about, you kind of already see your
workers as LLMs. You create a stratification of the workforce
that goes beyond the normal organizational chart, with senior executives,
those closest to the level of CEO, acting as
those who have risen above the doldrums of doing things,

(14:35):
to the level of decision making, a fuzzy term that
can mean everything from making nuanced decisions with input from
multiple different subject matter experts, to, as ServiceNow's Bill
McDermott did in twenty twenty two, and I quote, make
it clear to everybody in a boardroom of other executives
that everything they do must be AI AI AI AI AI. And that's five
of those. The same extends to some members of the

(14:57):
business and tech media that have for the most part
gotten by without having to think too hard about the
actual things the companies are saying. Look, I realize this
sounds a little mean, and it's not a unilateral statement,
and I must be clear, it doesn't mean that
these people know nothing, just that it's been possible to
scoot through the world without thinking too hard about whether
or not something is true just because an executive said it.

(15:19):
When Salesforce said back in twenty twenty four that its
Einstein Trust Layer and AI would be transformational for jobs,
the media dutifully wrote it down and published it without
a second thought. It fully trusted Marc Benioff when he
said that Agentforce agents would replace human workers, and
then again when he said that AI agents were doing
thirty to fifty percent of all the work in Salesforce itself,
even though that's an unproven and nakedly ridiculous statement. Salesforce

(15:43):
CFO, by the way, said earlier this year
that AI wouldn't boost sales growth in twenty twenty five.
One would think this would change how Salesforce was covered,
or how seriously one takes Marc Benioff, but it hasn't,
because nobody's really paying attention. In fact, nobody seems to
want to do their job in this case. And this

(16:13):
is how the core myths of generative AI were
built: by executives saying stuff and the media publishing it
without thinking about it. AI is replacing workers. AI is
writing entire computer programs. AI is getting exponentially more powerful.
What does powerful mean? Well, it means that the models
are getting better on benchmarks that are rigged in their favor.
But because nobody fucking explains what the benchmarks are, regular
people are regularly told that AI is powerful and getting

(16:35):
more powerful every single day. The only thing powerful about
generative AI is its pathology. The world's executives, entirely
disconnected from labor and actual production, are doing the only thing they
know how: spend a bunch of money and say
vague stuff about AI being the future. There are people, journalists, investors,
and analysts, that have built entire careers on filling in
the gaps for the powerful as they splurge billions of

(16:56):
dollars and repeat with increasing desperation that the future is here,
and then, well, absolutely nothing else happens. You've likely seen
a few ridiculous headlines recently, though. One of the most
recent and most absurd is that OpenAI will pay
Oracle three hundred billion dollars over the next four years,
closely followed by the claim that Nvidia will invest, and

(17:17):
I put that in air quotes, one hundred billion dollars
in OpenAI to build ten gigawatts of AI data centers,
though the deal is structured in a way that means
OpenAI is paid progressively as each gigawatt is deployed,
and also, apparently, OpenAI will be leasing the chips
rather than buying them outright. I must be clear
that these deals are intentionally made to continue the myth
of generative AI, to pump Nvidia, and to make

(17:38):
sure OpenAI insiders can sell ten point three billion
dollars worth of shares, which they're currently trying to do
at a valuation of five hundred billion goddamn dollars. I want
to be clear about something: OpenAI cannot afford the
three hundred billion dollars. OpenAI has not received a
dollar from Nvidia, and won't do so for at least
a month, when I think they're going to receive ten
billion dollars. But the rest of that ninety billion is only coming

(18:00):
when they build those data centers, which OpenAI can't
afford to do. Nvidia needs this myth to continue,
because in truth, all of these data centers are being
built for demand that doesn't exist, or that, if it
did exist, doesn't necessarily translate into business customers paying huge
amounts for access to OpenAI's generative AI services.
Nvidia, OpenAI, CoreWeave, and other AI related

(18:21):
companies hope that by announcing theoretical billions of dollars or
hundreds of billions of dollars of these strange, vague, and
impossible seeming deals, they can keep pretending that the demand
is there, because why else would they build all these
data centers? Right? Well, there's that, and the fact that the entire
stock market rests on Nvidia's back. It accounts for
seven to eight percent of the value of the S

(18:41):
and P five hundred, and Jensen Huang needs to keep
selling those fucking GPUs. I intend to explain later how
all of this works and how brittle all of this
really is. But the intention of these deals is simple:
to make you think this much money can't be wrong,
and I assure you it can. These people need you
to believe this is inevitable. But they are being proven

(19:01):
wrong again and again and again, and today I'm going
to continue to do so. Underpinning these stories about huge
amounts of money and endless opportunity lies a dark secret:
none of this is working, and all of this
money has been invested in a technology that doesn't make
much revenue and loves to burn millions or billions or
hundreds of billions of dollars. Over half a trillion dollars
in fact, has gone into an entire industry without a

(19:24):
single profitable company developing models or products built on top
of these AI models. By my estimates, there's about forty
four billion dollars of revenue in generative AI this
year when you add in Anthropic and OpenAI's revenue
to the pot, along with other stragglers, and most of
that number has been gathered from reporting from outlets like
The Information, because none of these companies share their revenues.
All of them lose shit tons of money, and their

(19:45):
actual revenues are really, really, really small. Only one member
of the Magnificent Seven outside of Nvidia has ever disclosed
its AI revenue: Microsoft, which stopped reporting in January
twenty twenty five, when it reported it would have thirteen
billion in annualized revenue. Well, I guess that will
be for the month, because it's that month times twelve, about
one point zero eight billion a month. Now, I know

(20:08):
that sounds like a lot. But Microsoft is a sales machine.
It's built specifically to create or exploit software markets, suffocating
competitors by using its scale to drive down prices and
to leverage the ecosystem it's created over the past few decades.
One billion a month in revenue is chump change for
an organization that makes over twenty seven billion dollars a
quarter in profits. But hey, it's the early days. Did

(20:30):
you get it? Get out! God, thank you, Scott. Did
you not listen to my three-part series on how
to argue with an AI booster? I went over it
over there. Get out! Okay. This is also nothing like
any other tech era. There's never been this kind of
cash rush, even in the fiber boom. Over a decade,
Amazon spent about a tenth of the capex that the

(20:50):
Magnificent Seven spent in the last two years on generative AI,
building Amazon Web Services, something that now powers a vast
chunk of the web and has long been Amazon's most
profitable business. Generative AI is also nothing like Uber,
with OpenAI and Anthropic's true costs coming in at
around one hundred and fifty nine billion dollars in the
past two years, approaching five times Uber's thirty billion dollar

(21:10):
all time burn, and that's before the bullshit with Nvidia
and Oracle. Microsoft last reported its AI revenue in January.
By the way, it's now October. Why did it stop
reporting the number, do you think? Is it because the
numbers are so good they couldn't possibly let you know? Hmm.
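As a quick sanity check on those figures, using only the numbers already cited, thirteen billion dollars of annualized AI revenue against roughly twenty seven billion dollars of quarterly profit, the arithmetic works out like this:

```python
# Microsoft's January 2025 disclosure: $13B in *annualized* AI revenue,
# i.e. one month's revenue multiplied by twelve.
annualized_ai_revenue = 13e9
monthly_ai_revenue = annualized_ai_revenue / 12
print(f"${monthly_ai_revenue / 1e9:.2f}B of AI revenue per month")  # $1.08B

# Against roughly $27B of profit per quarter (about $9B a month),
# that run rate is a small slice of the overall business.
monthly_profit = 27e9 / 3
print(f"{monthly_ai_revenue / monthly_profit:.0%} of monthly profit")  # 12%
```

So the thirteen-billion headline number shakes out to roughly a billion a month, about an eighth of what the company banks in profit alone.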
As a general rule, publicly traded companies, especially those where
the leadership are compensated primarily in equity, so stock,

(21:33):
tend to brag about their successes, in part because bragging
boosts the value of the thing that the leadership gets
paid in. There's no benefit to being shy. Look, Oracle
announced they literally filed something saying they had that huge
three hundred billion dollar contract. They did that, and it spiked the stock.
Why is Microsoft not doing that with their incredible AI revenues?
Do you think it's because they're shy? Come on, Satya,

(21:53):
come on out, come on, Satya. You can show me
the numbers. But in all seriousness, if Microsoft can't
sell this, nobody can. All right, so I'm explaining this
whole thing as if you're brand new and walking up
to this relatively unprepared, so I need to introduce another
company. In twenty twenty one, a splinter group jumped off of
OpenAI, funded by Amazon and Google to do much
the same thing as OpenAI, but pretend to be

(22:14):
nicer about it until they had to raise money from
the Middle East. I am, of course talking about Anthropic,
and they've always been a bit better at coding for
some reason, and people really like their Claude models. But
like does not mean profit or even much revenue. Both
open ai and Anthropic have become the only two companies
in generative AI to make any real progress, either in
terms of recognition or in sheer commercial terms, accounting for

(22:34):
the vast majority of revenue in the AI industry. In
a very real sense, the AI industry's revenue is open
ai and Anthropic. In the year where Microsoft recorded thirteen
billion dollars in AI revenues, ten billion dollars of that
came from OpenAI's spending on Microsoft Azure. Anthropic burned
five point three billion dollars last year, with the vast
majority of that going towards compute. Outside of these two companies,

(22:56):
there's barely enough revenue to justify a single data center.
What we see today is a time of immense tension.
Mark Zuckerberg says we're in a bubble. Sam Altman says
we're in a bubble. Alibaba chairman and billionaire
Joe Tsai says we're in a bubble. Apollo says we're
in a bubble. Nobody's making money and nobody knows why
they're actually doing this anymore, just that they must do
it and must do so immediately. And they've yet to

(23:16):
make the case that generative AI warranted any of the expenditures.
Now, we're a quarter of the way through this four-parter,
but this one was necessary. I needed to get you
up to speed and kind of give you the lay
of the land, because we're going to go a little
deeper in the next episode, and I can't wait for
you to hear it. See you tomorrow. Thank you for

(23:41):
listening to Better Offline. The editor and composer of the
Better Offline theme song is Matt Osowski. You can check out
more of his music and audio projects at mattosowski dot com,
M A T T O S O W S K I dot com.
You can email me at ez at betteroffline dot com,
or visit betteroffline dot com to find more podcast links,
and of course my newsletter. I also really recommend you

(24:04):
go to chat dot wheresyoured dot at to visit
the Discord, and go to r slash betteroffline to
check out the Reddit. Thank you so much for listening.
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
Host

Ed Zitron