
September 10, 2025 · 35 mins

In part one of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why AI is nothing like the early days of the internet, why it isn’t the early days of the AI industry, and why every booster argues in the future tense.

Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Zone Media. Welcome to Better Offline. I'm your host, Ed Zitron. As ever, check out the episode notes for merchandise,
my newsletter, various links, things that I just write for myself.

(00:25):
I don't actually leave little notes, but maybe I should.
But as promised in my last monologue, we've got a
three-part episode this week, and we're talking about AI boosters,
the people who insist, despite very little proof and all
evidence to the contrary, that generative AI is the future,
OpenAI and Anthropic are perfectly healthy companies, and what
we're witnessing isn't the inflation of a massive, dangerous bubble
that might eat our entire economy, but rather the emergence

(00:48):
of a brand new paradigm in tech that's just as
meaningful as the smartphone and cloud computing revolutions. Even
saying that out loud makes me feel a little crazy,
but yeah, that's what they believe. These people exist. They're
often wrong. In fact, they're mostly wrong, and they're also
really, really annoying. And so as a public service, I'm
going to spend the next three episodes explaining how to

(01:09):
talk with them without losing your sanity, will to live,
or your ability to operate in society. Now, what
makes me qualified to do this? Well, in the last
two years, I've written no less than half a million words,
with many of them dedicated to breaking both current and
previous myths about the state of technology and the tech
industry itself. Now let's start with something every critic has experienced:

(01:29):
the massive double standard between those perceived as skeptics and
those who are optimistic, or optimists. To be skeptical of
AI is to commit yourself to a near constant barrage of
demands to prove yourself and endless nags of "but what
about," with each one, no matter how small, presented as
a fact that defeats any points you may have had,
have, or will have in the future. Conversely, being an

(01:50):
optimist allows you to take things like AI 2027,
which I will fucking get to, seriously, to the point
that you can write an entire feature-length fan fiction
piece in The New York Times and nobody will bat
an eyelid. In any case, things are beginning to
fall apart. Two of the actual reporters, the real ones
at The New York Times, rather than the columnists, who
kind of act like adult babies, reported in late August

(02:13):
that Meta is yet again restructuring its AI department for
the fourth time, and that it's considering downsizing its overall
AI division, which sure doesn't seem like something you'd do
if you thought AI was the future. Meanwhile, the markets
are thoroughly spooked by an MIT study, covered by Fortune,
that found that ninety five percent of generative AI pilots
at companies are failing, providing no ROI. And though

(02:34):
MIT NANDA has now replaced the link to the study
with some sort of Google Form to request access, you
can find the full PDF linked in the show notes.
By the way, this is the kind of thing that
screams a PR firm wanting to try and set up interviews.
Not for me, thank you, I'd just like to read the
thing that you put out. Very rude. In any case,
the report is actually grimmer than Fortune made it sound,
saying that ninety five percent of organizations are getting zero

(02:56):
return on generative AI. The report says that adoption is high,
transformation is low, adding that few industries show the deep
structural shifts associated with past general purpose technologies, such as
new market leaders, disrupted business models, or measurable changes in
customer behavior. And saying that out loud, that is right.
What is being disrupted by AI? Search? Kind of. That

(03:17):
just means that search sucks anyway. Yet the most damning
part was a section called "Five Myths About AI
in the Enterprise," which is probably the most
withering takedown of this movement I've ever seen. And I'm
going to quote it verbatim, okay? They present these
as statements they're questioning, things that aren't
going to happen. Number one: AI will replace most

(03:38):
jobs in the next few years. What it actually says
is research found limited layoffs from GenAI, and only in
industries that are already affected significantly by AI. There is
no consensus among executives as to hiring levels over the
next three to five years. Another statement they challenge: generative
AI is transforming business. No, no, no. Adoption is high, but
transformation is rare. Only five percent of enterprises have AI

(04:01):
tools integrated into workflows at scale, and seven of nine
sectors show no real structural change. And I want to
thank them for putting this there. I made this exact
point in February in my newsletter, "There's No AI Revolution."
Links in the show notes as ever. "Enterprises are slow
in adopting new tech" is another commonly held thing, and
they say enterprises are extremely eager to adopt AI, and

(04:22):
ninety percent have seriously explored buying an AI solution. Another
statement they challenge: the biggest things holding back AI are
model quality, legal, data, and risk. And then they say,
and this is my favorite quote, what's really holding it back
is that most AI tools don't learn and don't integrate
well into workflows. I really do love that. I love

(04:43):
that so much, because it's like, the thing holding AI
back is that it sucks. And the final one, the
final statement they challenge, is that the best enterprises are
building their own tools. Internal builds fail twice as often,
and that's twice as often in a thing where ninety
five percent fail anyway. These are brutal, dispassionate points that directly
deal with the most common boosterisms. Generative AI isn't transforming anything.

(05:07):
AI isn't replacing anyone. Enterprises are trying to adopt generative AI,
but it doesn't fucking work. And the thing holding back
AI is the fact it doesn't fucking work. This isn't
a case where the enterprise is suddenly going to save
these companies, because the enterprise has already tried and it
is not working. And what's hilarious as well is so
many people say, well, there's just this enterprise opportunity they
haven't got into yet. Actually, they have. They've got into

(05:31):
the enterprise. They're sticking their fingers in all the bits
of the SaaS, and it isn't working. And the enterprise
loves money. The enterprise will sell anything, even if it's a
genuinely evil product. They will push it. Like, look at Salesforce,
an entire company based on that. Anyway, an incorrect read
of the study that's been going around is that the
learning gap that makes these things less useful is because

(05:53):
of human issues like human error, when the study actually
says that the fundamental gap that defines the GenAI divide
is that users get tools that don't adapt, model quality fails
without context, and UX suffers when systems can't remember. This
isn't something you as a person learn your way out of.
These products don't do what they're meant to do, and
people are realizing it when they try and use them. Nevertheless,

(06:13):
boosters will still find a way to twist this study
to mean something else. They'll claim that AI is still early,
that the opportunity is still there, that we didn't confirm
that the Internet and smartphones were productivity-boosting, or that
we're in the early days of AI, somehow three years
and hundreds of billions of dollars and thousands of articles
in. Yeah, we're in the early days. We're in the
early days when all the money in all the land

(06:35):
has gone into this. And I'm tired. I'm tired of
having the same arguments with these people, and frankly, I'm
sure you are too. No matter how blindingly obvious the evidence
to the contrary is, they will find ways to ignore it.
They continually make these smug comments about people wishing things
would be bad, or suggest that you are stupid, and yes,
that is their belief, by the way, for not believing
that generative AI is disruptive today. And in this three

(06:59):
part series, I'm going to give you the tools to
fight back against the AI boosters in your life. I'm
going to go into the generalities of the booster movement,
the way they argue, the tropes they cling to, and
the ways in which they use your own self-doubt
against you. They're your buddy, your boss, a man in
a gingham shirt at a steakhouse who won't leave you the fuck alone,
an editor, a writer, a founder, or just a common

(07:19):
or garden con artist. Whoever the booster in your life is,
I want you to have the words to fight them with.
So, an AI booster is not, in many cases, an
actual fan of artificial intelligence. People like Simon Willison or

(07:42):
Max Wolf who actually work with LMS on a daily
basis don't see the need to repeatedly harass everybody or
talk down to them about their unwillingness to pledge allegiance
to the graveyard smash of generative AI. In fact, the
closer I've found someone to actually building things with LLMS,
the less likely they are to emphatically argue that I'm
missing out on something by not doing so myself. No, no, no.

(08:03):
The AI booster is symbolically aligned with generative AI. They're
fans in the same way that somebody is a fan
of a sports team, their house emblazoned with every
possible piece of tat and crap they can find, their Sundays
living and dying by the successes of the team. Except
even fans of the Dallas Cowboys have a tighter grasp
on reality. But not Micah Parsons. Anyway, Kevin Roose and

(08:24):
Casey Newton are two of the most notable boosters, and
as I'll get to later in this series, neither of
them have a consistent or comprehensive knowledge of AI, despite
being at The New York Times. Though Casey Newton is
a contractor, he is a contractor just for a podcast,
which I can't insult due to my own contractual relationships. Nevertheless,
they will insist that everybody is using AI for everything,

(08:46):
which is the title of an article they put out,
a statement that even a booster should realize is incorrect
based on the actual abilities of the models. But that's
because it isn't about what's happening. It's not about what's
actually happening. It's about allegiance. AI symbolizes something to the
AI booster: a way that they're better than other people, something
that makes them superior because, unlike cynics and skeptics, they're

(09:08):
able to see the incredible potential in the future of AI,
but also how great it is today, though they never
seem to be able to explain why, other
than "it replaced search for me" and "I use it
to draw connections between articles I write," which is something
I do for free, without AI, with my fucking brain.
Boosterism is a kind of religion interested in finding symbolic
proof that things are getting better in some indeterminate way,

(09:31):
and that anyone who chooses to believe otherwise is ignorant
or stupid or... I actually don't know what it is
that they're meant to be missing. Let me give you
an example: Thomas Ptacek. He wrote a piece called "My
AI Skeptic Friends Are All Nuts," and it was catnip
for boosters. A software engineer using technical terms like interact
with Git and MCP, vague chants, and of course an

(09:52):
extremely vague statement that says hallucinations aren't a problem, and
I quote: now, I'm sure there are still environments where
hallucination matters, but hallucination is the first thing developers bring
up when somebody suggests using LLMs, despite it being more
or less a solved problem. Is it? Anyway, my favorite,
favorite part though, let me quote this. A lot of

(10:13):
LLM skepticism probably isn't really about LLMs. It's projection. People
say LLMs can't code, when what they really mean is
LLMs can't write Rust, which, by the way, is a
programming language. Fair enough, but people select languages in part
based on how well LLMs work with them, so Rust
people should get on that. But nobody projects more than
an AI booster. They thrive on the sense that they're

(10:36):
oppressed and villainized after years of seemingly every
goddamn outlet on Earth claiming they're right, regardless of whether
there's any proof. They sneer and jeer and cry constantly
about people not showing adequate amounts of awe when an
AI lab says "we did something in private, we can't
share it with you, but it's so cool," and constantly
act as if they're victims as they spread outright misinformation,

(10:57):
either through getting things wrong or never really caring enough
to check if they're right. Also, none of the booster
arguments actually survive a thorough response, as Nik Suresh proved
with his hilarious and brutal takedown of Ptacek's piece. Suresh
is a great guy. He's been on the show before,
and I've linked to his piece in the show notes,
and I'm going to bring him back on. He's written
for my newsletter as well. Absolute legend. Now, there are,

(11:18):
I believe, some people who truly do love using LLMs,
yet they are not the ones defending them. Ptacek's
piece drips with condescension to the point that I feel
like he's trying to convince himself how good LLMs are.
Because boosters are eternal victims, he wrote them a piece
that they could send around to skeptics saying "ahh, see?"
without being able to explain why it was such a
brutal takedown, mostly because they can't express why, other than "well,

(11:42):
this guy gets it. One cannot be a big, smart
genius that understands the glory and power of AI while
also acting like a scared little puppy every time somebody
tells them it sucks. You know what, This is a
great place to start. This is a great place to
get into how to deal with AI boosters because AI
boosters love being victims, and you should not play into it.

(12:05):
When you speak to an AI booster, you may get
the instinct to shake them vigorously, or respond to their
posts by telling them to do something with their something, or
that they're stupid. I understand the temptation, but you want
to keep a level head here. Keep your head on
a swivel. They thrive on this victimization. I'm sorry, if
you're an AI booster and this makes you feel bad,
please reflect on your work and how many times you've

(12:26):
referred to somebody who didn't understand AI in a manner
that suggested that they were ignorant, or tried to gaslight
them by saying AI was powerful while providing no actionable
ways of proving this, or just being able to point
at it being powerful. You cannot and should
not allow these people to act as if they're being
victimized or othered. Throughout this series, I'm going to address
a very specific thing. I'm going to use a term:

(12:49):
"booster quip." This refers to the things that they
say and how often you hear them. These are lines
that you'll hear them say again and again and again
and again. They're common arguments, common cliches that demand response.
And let's start with our first booster quip: you are
just being a hater for attention; contrarians just do it
for clicks and headlines. First and foremost, there are boosters

(13:10):
at pretty much every major think tank, government agency, and
media outlet out there. It's extremely lucrative being an AI booster.
You're showered with panel invites and access to executives, and you're
able to get headlines by saying how scared you are
of the computer. And it's really easy to do. So
being a booster is easy. And I must be clear,
when I say booster, it doesn't always have to mean

(13:31):
overt. It could just mean the things you choose not
to do. It could mean the things you choose not
to criticize them for, or the things you just write
down that they say. But really we're talking about the
worst of them. And if you hear that sentence and
you don't think you're a booster, you can be a booster,
by the way, if you just choose not to criticize them.
But here we're talking about the real assholes. Being a critic

(13:52):
requires you to constantly have to explain yourself in a
way that boosters never have to. Now, if a booster
says this to you, if they say you're just being
a hater for attention, you're just doing this for clicks,
ask them to explain, first of all, what they mean
by clicks or attention, and how they think you are
monetizing it, and how this differs in its success from, say,
anybody who interviews and quotes Sam Altman, or Dario

(14:14):
Amodei, or whomever from Anthropic on Hard Fork. Ask
them what the difference is, and ask them why they
believe your intentions as a critic are somehow malevolent,
as opposed to those literally reporting what the rich and
powerful want them to. There's no answer here, because this
is not a coherent point of view. Boosters are more successful,
get more perks, and are in general treated better than

(14:35):
any critic at pretty much every major outlet out there. Fundamentally,
these people exist in the land of the vague, and
they don't like it when you force them to get specific.
They will drag you toward what's just on the horizon,
but never quite define what the thing that dazzles you
so much will be, or when it will arrive. Really,
their argument comes down to one thought: you must get
on board now, because at some point it will be

(14:56):
so good you'll feel so stupid for not believing that
something that sucks would one day be really good. And if this
line sounds familiar, it's because you've heard it a million
times before, most notably with cryptocurrency, NFTs, the metaverse, Clubhouse, tons
of movements, actually. They will try to make you define what would
impress you, which is not your job, in the same way
that finding the use case for them isn't your job. In fact,

(15:19):
you're the customer. You're the consumer. You are the person
AI needs to prove itself to, not the other way around.
But let's go to another booster quip. When they go
"you just don't get it," here's a great place to start. Say:
that's a really weird thing to say. It is peculiar
to suggest that somebody who doesn't get how to use

(15:40):
a product is weird, and that we as the consumer,
as the customer, must justify our own purchases. No, no,
no. If I don't get it, it's the booster's
job to tell me why. Make them justify their attitude.
Just like any product, we buy software to serve a need.
This is meant to be artificial intelligence. Why is it

(16:02):
so fucking stupid that I have to work out why
it's useful? The answer is, of course, that it has
no intellect. It is not intelligent, and large language models
are being propped up by a cadre of people
who are either easily impressed, or invested, either emotionally or
financially, in its success due to the company they keep
or their intentions for the world. And if a booster
suggests you just don't get it, ask them to explain

(16:24):
the following: What am I missing? What specifically is it
that is so life-changing about this product, based on
your own experience, not on any anecdotes from other people?
Because they will say, well, I heard of a guy
who wrote ten billion lines of code, and then the
baby looked at me and I cried. They don't have
real things themselves. So cut off the exits, board up

(16:44):
the doors. And then also ask them what use cases
are truly transformative about AI. Don't let them say "well,
I heard in an industry..." Actually make them prove themselves.
The use cases will likely be that AI has replaced
search for them, that they use it for brainstorming or journaling,
proofreading an article, or looking through a big pile
of their notes or some other corpus of information and

(17:05):
summarizing it or pulling out insights. Who gives a shit?
Sorry, not to be too acerbic, but really, who
fucking cares? That shit's so boring. Hundreds of billions of
dollars of wasted investment on this, and this is what
we've got three years in. Fucking Humpty Dumpty could never
have it this good. Anyway, our next booster quip is

(17:26):
one of my faves. Ahem: AI is powerful and getting
exponentially more powerful. Now, if a booster ever refers to
AI being powerful and getting more powerful, ask them the following:
what does powerful mean? In the event that they mention benchmarks,
ask them how those benchmarks apply to real-world scenarios.
If they bring up SWE-bench, the standard benchmark for coding,
(17:47):
ask them if they can code, and if they cannot,
ask them for another example. I mean, they will tell
you that they've spoken with coders. I've talked with a
lot of coders. I have a great one with Colt
Vogie coming up, another software engineer, talking about LLMs. It's
so funny. It's so funny when you actually lay this
stuff out, how weak their arguments are. But in the

(18:08):
event they mention reasoning, ask them to define it. Once
they've defined reasoning, ask them to explain, in plain English,
what reasoning allows you to do on a use-case level,
not just how it works. They will likely bring up
the gold medal performance that OpenAI's model got on
the Math Olympiad. Ask them why OpenAI hasn't released
that model. Then ask them what the actual practical use

(18:28):
case is that this success has opened up. They will
say it's an innovation, you've got to be patient, and
then pepper spray them. No, don't, don't pepper spray anyone. Anyway,
you should also then ask them what use cases have
arrived as a result of models becoming more powerful. If
they say vague things like "oh, in coding" and "oh,
in medicine," ask them to get specific, and then ask

(18:48):
them what new products have arrived as a result. If
they say coding LLMs, they will likely add that this
is replacing coders. Ask them where that has happened and
ask them to show you proof. Links. And it will
not be sufficient to say that a CEO mentioned that
they did something with AI and efficiency. Get numbers. They

(19:09):
just won't. They won't do this. They will turn into
a pillar of salt. And we've got two more fucking
parts of this. I mean, you're gonna have a ball
with this. Look, the core of the AI booster's argument
is that they need to make you feel bad. They
like to gaslight, and you need to refuse
to let them. You need to push back heavily. If
there's ever a point where you feel like they are
trying to make you feel stupid, ask them why they're

(19:29):
doing so. And to be clear, anyone with a compelling
argument doesn't have to make you feel bad to convince you.
The iPhone didn't need a spurious hype cycle of
people saying "you must look at this, it is
so important" when it didn't really work. It worked immediately.
Now, I said in my newsletter that it didn't need
a fucking marketing campaign. Yes, there were marketing dollars behind

(19:52):
the iPhone. Eric Newcomer, come at me with better work,
maybe respond to the rest of this. But we all kind
of got it with the iPhone. In fact, the moment
Steve Jobs announced it, the piece of shit that he was,
he said, here's a phone, here's an iPod,
here's email, you can do all of these on one device,
and everyone went, oh yeah, that is good. That is really

(20:12):
obvious that that was good. It was impressive because it
was impressive. And boosters will suggest your not liking AI
is intentional, that you're a hater or a
cynic or a Luddite. They'll suggest that you're ignorant for
not being amazed by ChatGPT. Let me tell you something:
you don't have to be impressed by anything by default,
and any product, especially software, designed to make you feel

(20:34):
stupid for not getting it is poorly designed. ChatGPT
is the ultimate form of Silicon Valley sociopathy: you must
do the work to find the use cases, and thank
them for giving you the chance to do so. AI
is not even good, reliable software. It represents the death
of the art of technology: inconsistent and unreliable by definition,
inefficient by design, financially ruinous, and it adds to the

(20:57):
cognitive load of the user by requiring them to be
ever vigilant of the shit-ass outputs that can come
out of them. So here's a really easy way to
deal with this. If a booster ever suggests you are
stupid or ignorant, ask them why it's necessary to demean
you to get their point across. Even if you are
unable to argue on a technical level, make them explain
why the software itself can't convince you, and be vigilant.

(21:19):
Boosters seldom live in reality and will do everything they
can to pull you off course. And I should add
that there is a fair criticism here. I do insult people,
I do demean them, I call them babies, and I
do funny voices, and I do that because I don't
respect them. I'm not... I am here to tell
you how I feel. I will convince you through the
large amounts of work I do and the research I have.

(21:41):
I don't. If you disagree with me, you disagree with me. Eh,
it's fine. And yeah, these people do sound kind of silly.
I'll get to a Casey Newton thing in a couple
of episodes. I think that, really, it's impossible to call
them otherwise. Look, if you say that none of these

(22:09):
companies make money, they'll say it's the early days.
If you say AI companies burn billions, they'll say the
cost of inference is coming down. If you say the
industry is massively overbuilding, they'll say that this is actually
just like the dot-com boom and that the infrastructure
will be picked up and used in the future. If
you say there are no real use cases, they'll say
ChatGPT has seven hundred million weekly users. Every single time,

(22:30):
there's the same goddamn arguments, which is why, from here
on out, I'm going to give you the responses to
all of them. And there's an article version you could
print up and stuff it in the... no, no,
no, no violence. Everyone be nice. But you're going to
be able to use these arguments going forward. And let's
start with my favorite booster quip: "AI will." Any time

(22:55):
an AI booster says "AI will," tell them to stop.
Tell them to stop and explain what AI can do now.
And if they insist, ask them both when they expect
things to happen in the way they're talking about, and
if they say very soon, ask them to be more specific.
Get them to agree to a date and then call
them on that day, or show up at their house. Hell,
you could be waiting inside when they get there, assuming

(23:16):
you have a key, legally. Now here's another one, another
booster quip. They will say "agents will automate large
parts of our economy," and I will say, fucking stop.
There's that "will" bullshit again. There's that will, will, will, will.
Agents don't work. They don't work at all. The term
agent means, to quote Max Woolf, a workflow where the LLM

(23:36):
can make its own decisions, such as in the case
of web search, where the LLM is told "you can
search the web if you need to," then can output
"I should search the web" and search the web. Yet
agent has now become a mythical creature that means totally
autonomous AI that can do an entire job. If anyone
tells you agents are dot dot dot, you should ask
them to point to one. If they say coding, please

(23:57):
demand that they explain how autonomous these things are. If
they say they can refactor entire codebases, ask
them what that means, and also laugh at them, because
they will not know. Ask them to explain how Salesforce's
own research shows that agents only have a fifty
eight percent success rate on single-step tasks, where you
ask it to do one thing, and thirty five percent
on multi-step tasks. And what's crazy as well, and

(24:19):
I won't have this in the episode notes, you're
just gonna have to look it up: there is a story
that just came out about OpenAI's projections. They reduced
the amount of money they project to make from agents over
the next few years by twenty six billion dollars. They
just removed that part. Do you think agents exist now?
Agents are not autonomous. They do not replace jobs. They

(24:39):
cannot replace coders, and they are not going to do so,
because probabilistic models are a horrible means of taking precise actions.
And almost anyone who brings up agents as a booster
is either misinformed or in the business of misinformation. Now
here's another booster quip for you, which is that AI
is like the early days of the Internet.

(25:00):
In many cases, I think they're referring to AI as being
early as a reference to those early days, and they
never really say what that means, because the early
days of the Internet can refer to just about anything.
Are we talking about dial-up? DSL? Are we talking
about the pre-platform days when people accessed the
Internet via CompuServe or AOL? Well, yes, yes, I remember
the article from Newsweek. I already explained it in my

(25:21):
newsletter, "Reality Check." I'm going to quote myself about this
fucking article where the guy said the Internet wouldn't take off.
In any case, one guy saying that the Internet
won't be big doesn't mean a fucking thing about generative
AI, and you're a simpleton if you think it does.
One guy being wrong in some way is not a
response to my work, and I will crush you like
a bug. Again, I'm using ad hominem attacks. Who cares?

(25:42):
These fucking people are rude as hell. Now, if your
argument is that the early Internet required expensive Sun Microsystems
servers to run, Jim Covello of Goldman Sachs addressed that
in June twenty twenty four by saying that those costs
pale in comparison, adding that we also didn't need
to expand our fucking power grid to build the early
web. Well, sir, there's another booster quip, sir: actually,

(26:04):
people said smartphones wouldn't be big. This is a straight-up
lie, by the way. I've heard this a good amount.
I've heard at least five people say, "yeah, well, people
didn't think the iPhone would be big, the iPhone
wasn't going to be big, actually." Huh. This is a lie.
It's a lie. It's a lie. You're lying. Also, as
Jim Covello from Goldman Sachs noted, there were hundreds of

(26:25):
presentations in the early two thousands that included roadmaps
that accurately fit how smartphones rolled out, and that no
such roadmap exists for generative AI. The iPhone was also
an immediate success as a thing that people paid for,
with Apple selling four million units in the space of
six months, and this was on an exclusive contract for
several years, I think, with Cingular Wireless, now called AT

(26:47):
and T. Hell, in two thousand and six, the
year before the iPhone launched, there were smartphones, and there
were an estimated seven point seven million worldwide smartphone shipments,
mostly from BlackBerry, Windows Mobile, and Palm. Though, to be
generous to the generative AI boosters, I'm going to disregard those,
though they actually help prove my point more, and I
just want to be clear that the early days of

(27:07):
the Internet are not a sensible comparison to generative AI. The
original "Attention Is All You Need" paper, the one that
kicked off the transformer-based large language model era, was
published in June twenty seventeen. ChatGPT launched in November
twenty twenty two. It's not early. We're not early anymore.
We haven't been for a while. But nevertheless, if we're
saying early days here, we should actually define what that means.

(27:29):
As I mentioned previously, people paid for the iPhone immediately,
despite it being a device that was completely and utterly
new, on one specific carrier. While there was a small
group of consumers that might have used similar devices like
the Compaq iPAQ, it's kind of a cool device, the
iPhone was a completely new kind of computing, sold at
a premium, requiring you to have a contract with that

(27:49):
specific carrier. Conversely, ChatGPT's annualized revenue in December twenty
twenty three was one point six billion dollars, so about
one hundred and thirty three million in that month, for
a product that had by that time raised over ten
billion dollars. And while we don't know what OpenAI lost
in twenty twenty three, reports suggested it burned over five
billion dollars in twenty twenty four. Big Tech has spent
over five hundred billion dollars in capital expenditures in the

(28:12):
last eighteen months, and all told, between investments of cloud
credits and infrastructure, will likely sink over six hundred billion
dollars by the year's end. The early days of the
Internet were defined not by a lack of investment or attention,
but by obscurity. Even in two thousand, around the
time of the dot com bubble, only fifty two percent
of US adults used the Internet, and it would take
another nineteen years for ninety percent of US adults to

(28:34):
do so. These early days were also defined by their
early functionality. The Internet would become so much more because
of the things that hyperconnectivity allowed us to do, and
both faster Internet connections and the ability to host
software in the cloud would change everything. We could define
what better would mean and make reasonable predictions about what
people could do on a better Internet. Yet, even in

(28:56):
those early days, it was obvious why you were using
the Internet and how it might grow from there. One
did not have to struggle to explain why buying a
book online might be useful or quicker than a shop,
or why a website might make a quicker reference than
having to go to a library, or why downloading a
game or a song might be a good idea. While
habits might have needed adjusting, it was blatantly obvious what
the value of the early Internet was. It's also unclear

(29:19):
when the early days of the Internet ended. Only forty
four percent of US adults had access to broadband internet
in two thousand and six. Were those the early days
of the Internet? The answer is no, and this
point is brought up by people with a poor grasp
of history and a flimsy attachment to reality. The early
days of the Internet were very, very, very different to
any associated tech boom since, and we need to stop

(29:41):
making the comparison. The Internet also grew in a vastly
different information ecosystem. Generative AI has had the benefit of
mass media driven by the Internet, along with social media
and social pressure to adopt AI, for multiple years. And
now our last booster quip for the episode. Actually,
I meant something else. What I mean is that

(30:01):
we're in the early days of AI. All of the
other things you said were very misleading. You misread my statements. Somehow,
we are not in the early days of generative AI,
and anyone using this argument is either ignorant or intentionally deceptive.
According to Pew, as of mid twenty twenty five, thirty
four percent of US adults have used ChatGPT, with
seventy nine percent saying they'd at least heard of it

(30:23):
and a little about it, even. Furthermore, ChatGPT has
always had a free version. On top of that, a
study from May twenty twenty three found that over ten
thousand nine hundred news headlines mentioned ChatGPT between November twenty
twenty two and March twenty twenty three, and a Brandwatch
report found that in the first five months of
its release, ChatGPT received over nine point two million

(30:43):
mentions on social media. Nearly eighty percent of people have
heard of ChatGPT, and over a quarter of Americans
have used it. If we're defining the early days based
on consumer exposure, that ship has sailed. If we're defining
the early days by the passage of time, it's been
eight years since Attention Is All You Need and three
since ChatGPT came out. While three years might not seem

(31:03):
like a lot of time, the whole foundation of an
early days argument is that in the early days, things
do not receive the venture funding, research, attention, infrastructural support,
or business interest necessary to make them big. In twenty
twenty four, nearly thirty three percent of all global venture
funding went to artificial intelligence, and according to The Information,

(31:24):
AI startups have raised over forty billion dollars in twenty
twenty five alone, with Statista adding that AI absorbed seventy
one percent of VC funding in the first quarter of
twenty twenty five. These numbers also fail to account for
the massive infrastructure costs that companies like OpenAI and Anthropic
don't have to pay for. The limitations of the early

(31:44):
Internet were twofold: the fiber optic cable boom that led
to the fiber optic bubble bursting when telecommunications companies massively
overinvested in infrastructure, which I will get to shortly, and
the lack of scalable cloud infrastructure to allow distinct
apps to be run online, a problem solved by Amazon
Web Services, among others. In generative AI's case, Microsoft, Google

(32:04):
and Amazon have built the fiber optic cables for large
language models. OpenAI and Anthropic have everything they need.
They have, even if they say otherwise, plenty of compute,
access to the literal greatest minds in the field, the
constant attention of the media and global governments, and effectively
no regulations or restrictions stopping them from training their models
on the works of millions of people or destroying our environment.

(32:26):
They've already had this support, too. OpenAI was allowed
to burn half a billion dollars on a single training
run for GPT four point five and five, and they
did multiple runs. If anything, the massive amounts of capital
have allowed us to massively condense the time in which
a bubble goes from possible to bursting and washing out
a bunch of people, because the tech industry has such

(32:47):
a powerful follow-the-leader work culture that only one or
two unique ideas can exist at one time, and I
think those ideas are currently OpenAI and Anthropic. The
early days argument hinges on obscurity and limited resources, something
that generative AI does not get to whine about. Companies that
make effectively no revenue can raise five hundred million dollars
to do the same AI coding bullshit that everybody else does.

(33:09):
In simpler terms, these companies are flush with cash, have
all the attention and investment they could possibly need. After all,
attention is all you need, and are still unable to
create a product with a defined, meaningful mass market use case,
let alone one that doesn't burn money. In fact, I
believe that, thanks to effectively infinite resources, we've speedrun
the entire large language model era and we're nearing the end.

(33:32):
These companies got what they wanted, and I think I
want to die in Minecraft, obviously, but I must press on.
I must. Just saying all these things out loud, you
really get a sense for how illogical these people are
and how much bullshit is going around to try and
push back against skeptics. But it's not gonna work. It's
not gonna work. I hope you're not tired of me

(33:52):
talking about boost equips and doing silly voices. Is You've
got two more of these fucking episodes coming, and I
love recording them. I really do say recording. Jesus Christ,
my friends were in hell, but we're together in hell
and that's fun. And there's a lot more bullshit to
break down. Speak to you tomorrow. Thank you for listening

(34:18):
to Better Offline. The editor and composer of the Better
Offline theme song is Matasowski. You can check out more
of his music and audio projects at Matasowski dot com,
M A T t O. S O w Ski dot com.
You can email me at easy at Better offline dot com,
or visit Better offline dot com to find more podcast
links and of course, my newsletter. I also really recommend

(34:41):
you go to chat dot wheresyoured dot at to
visit the Discord, and go to r slash BetterOffline
to check out our Reddit. Thank you so much for listening.
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.