
September 12, 2025 37 mins

In the final part of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why generative AI is nothing like Amazon Web Services, how the media misled the public about ChatGPT, and why ChatGPT’s popularity does not mean it’s a mass-market product.

Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media. Hello, I'm Ed Zitron, and this is Better Offline.
We finally reached the end of our three-part How
to Argue with an AI Booster series. Big strong men

(00:24):
are standing outside of the Better Offline studio, suspended two
hundred feet above the Las Vegas Strip, with tears in
their eyes, and they're saying, sir, sir, it's the most
beautiful podcast I've ever heard. Please stop recording it. Everyone
else will feel insufficient when they hear it. No, No,
I have to continue. I'm afraid my listeners need me.
But okay, seriously, folks, if there's anything I want you

(00:47):
to take home so far, it's that the arguments that
these people, these AI sycophants, make crumble under
the slightest bit of scrutiny. And yet these arguments work
because they either exploit the lack of knowledge of those
who don't understand the rotten economics of AI,
or because they force the opposite party to surrender their
reason and exit the plane of reality, to which

(01:08):
I say, fuck that, absolutely not. This is a hill
I'm prepared to die on, and I hope that you'll
all be standing with me, side by side. Now, in
the first episode, we knocked down the claims that it's
the early days for AI and we just need to
give it more time. Then I took on the idea that
generative AI is like Uber or fiber optic networking, two
industries that both burned a lot of money at the
start but are otherwise nothing like generative AI. And now

(01:29):
it's time to deal with the dregs of the arguments.
These are the worst of the worst booster quips.
Here we go: ultra booster quip. I thought about recording
that with a bunch of reverb, but it didn't really
work out for me, but I wanted to do it once.
Anyway, their argument is: AI is just like
Amazon Web Services, a massive investment that took a while
to turn profitable, and everybody hated Amazon for it. Now

(01:52):
I actually covered this in depth in the Hater's Guide
to the AI Bubble, but the long and short of
it is that Amazon Web Services was a platform and a
necessity, an obvious choice, and it burned about ten
percent of what Amazon and the rest of them have burned
chasing generative AI, and it had also proven demand before building. Also,
Amazon Web Services was break-even within three years. And

(02:12):
OpenAI was founded in fucking twenty fifteen, and even
if you start from November twenty twenty two, by Amazon
Web Services' standards it should be break-even by now. But
now I'll quote myself: Amazon Web... no, wait, sorry, that's
the boosters. Amazon Web Services was created out of necessity.
Amazon's infrastructure needs were so great that it effectively
had to build both the software and hardware necessary to

(02:33):
deliver a store that sold theoretically everything, theoretically everywhere,
handling both the traffic from customers, delivering the software that
runs Amazon.com quickly and reliably, and, well, making
sure things were stable. It didn't need to come up
with a reason for people to run web applications. They
were already doing so themselves, but in ways that cost
a lot more, were inflexible, and required specialist server skills.

(02:57):
Amazon Web Services took something that people already did, and
that there was already proven demand for, and made
it better and scaled it. Eventually, Google and Microsoft would
join the fray. I editorialized a bit there, but I
can do that with my own work. Now, a common
booster quip, by the way, is for them to say, well,
this AI company, they've got high annualized revenues. And as
I've discussed in the past, this metric is basically monthly revenue

(03:19):
times twelve. And while it's a fine measure for normal,
high-gross-margin businesses like software-as-a-service companies,
it isn't for AI. It doesn't account for churn, which
is when people leave. It's also a number intentionally
used to make a company sound more successful, so you
can say two hundred million dollars annualized revenue instead of
sixteen point six million dollars a month. And you're also

(03:39):
meant to be duped. They said two hundred million annualized,
but you heard two hundred million. Your mind did that.
That was a bad Plinkett reference, but I'll continue. They want
you to think two hundred million. They want you to
think that's what they'll make. More often than not, if
they mention an annualized number, that number will not be
how much they make that year. Also, if they're using

(04:00):
this number, it's likely not consistent. Now, if they bring
this up, you should just say to them, hey, how
much profit is the company making, and also how much
are they burning? At this point they will, I think,
mace you, I mean with the spray or an actual
mace. AI boosters are strange. Now, they'll also say, well,
this AI company is in growth mode, and they'll pull the
profit lever when it's time. And the answer to that

(04:23):
is always going to be: why have none of them
done this? Not one. Not one of them. Now, a
booster will burst through your door going, AGI! AGI will...
and there's that will bullshit again. It's always about
the will with these fuckers. We do not know how
thinking works in humans and thus cannot extrapolate it to
a machine. And at the very least, human beings have

(04:45):
the ability to reevaluate things and learn, a thing that
LLMs cannot do and will never do. We do not
know how to get to AGI. Sam Altman said in
June that OpenAI was now confident they knew how
to build AGI as they have traditionally understood it. Then
in August, Altman said that AGI was not a
super useful term, and that the point of all this
is it doesn't really matter, and it's just this continuing

(05:07):
exponential of model capability that we'll rely on for more
and more things. I'm really tired of people quoting this guy.
He doesn't make any fucking sense. Read anything
he says out loud, and it's really just total bullshit.
Even Meta's chief AI scientist Yann LeCun says it isn't

(05:28):
possible with transformer-based models to make AGI. We don't
know if AGI is possible, and anyone claiming they do
is lying. Anyone who's talking about AGI is talking about
fan fiction. Again, ask them how they feel about Banjo
and Kazooie. Do you think they made love? Actually, that's
a bit: next time someone brings up AGI to you,
seriously bring up Banjo and Kazooie and their romantic involvement.

(05:51):
Actually, I think that's the only response
you should give. Now, stop humoring them. It is fan fiction.
But putting Banjo and Kazooie aside, there's also a really
stupid booster thing they do, which is: I'm hearing from
people deep within the AI industry that there's some sort
of ultra-powerful model they're not talking about. And this,
by the way, is hogwash, no different than your buddy's

(06:13):
friend's uncle who works at Nintendo who says Mario
is coming to the PlayStation. Ilya Sutskever and Mira
Murati raised billions of dollars for companies with no product,
let alone a product roadmap, and they did so because
they saw an opportunity for a grift and to throw
a bunch of money at compute for no reason. Anyone
who has secret shit is not talking about it, because
it doesn't exist. Also, if someone from deep within the

(06:35):
industry has told somebody big things are coming, they're doing
so to con them or make them think that they
have privileged information. Ask for specifics, and if they say,
I couldn't possibly tell you, then they're full of shit.
They're full of crap. They are full of doo-doo. And
if they get vague, get specific. Oh, it's going to
be able to automate other things? What things? How? How's

(06:55):
it going to automate them? Oh, I don't know. Then you don't
know shit about fuck. Now, talking about not knowing shit about
fuck, here's another booster quip: ChatGPT is so popular, seven
hundred million people use it weekly. It's one of the
most popular websites on the Internet. Its popularity proves its utility.
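To see why the definition matters so much here, it helps to spell out how loose a "weekly active user" count can be. A minimal sketch with made-up visit data (nothing below comes from OpenAI, whose definition remains unpublished) shows how the metric flattens wildly different engagement levels:

```python
from datetime import date

# Hypothetical visit logs: one heavy user, one drive-by user.
visits = {
    "heavy_user": [date(2025, 9, d) for d in range(1, 8)],  # every day
    "drive_by":   [date(2025, 9, 3)],                       # once
}

def weekly_active(visit_log, week_start, week_end):
    """Count users with at least one visit in the window,
    the loosest plausible reading of 'weekly active user'."""
    return sum(
        1 for days in visit_log.values()
        if any(week_start <= d <= week_end for d in days)
    )

wau = weekly_active(visits, date(2025, 9, 1), date(2025, 9, 7))
print(wau)  # both users count identically: 2
```

Under this reading, seven visits and one visit are indistinguishable, which is exactly the flimsiness being described.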
Look at all the paying customers. Now, that paying customers
part we'll get to in a second. But this argument

(07:16):
is posed as a comeback to my suggestion that AI
is not particularly useful, a proof point that this movement
is not inherently wasteful, that there are in fact use
cases for ChatGPT that are lasting, meaningful, or important.
I fundamentally disagree. In fact, I believe ChatGPT and
LLMs in general have been marketed based on lies of inference,
which I realize is ironic. I know, it's pretty clever.

(07:37):
I had a whole blog written called The Lie of
Inference that kind of became this. It wasn't very good,
though. This is. This is good. Don't say it's bad.
I also have grander concerns and suspicions about what OpenAI
considers a user and how it counts revenue. Let
me give you an example: they claim to have five
million business customers, yet five hundred thousand of those are

(07:59):
from a fifteen-million-dollar, year-long deal
with Cal State University, which works out to around two
dollars and fifty cents a user a month. OpenAI
has also started doing one-dollar-a-month trials of
its thirty-dollar-a-month Team subscription, and one has
to wonder how many of those subscribers are counted in
the total, and indeed for how long. I do not
know the scale of these offers, nor how long Open

(08:20):
AI has been offering them. A Redditor posted about this
one-dollar-for-a-month deal a few months ago,
saying that OpenAI was offering five seats at once,
so one buck a month, for a month, per seat.
How many people cancel after that? Who knows. Maybe they
just hope they don't. In fact, I found a few
people talking about these deals, and even one adding that
they were offered an annual ten-dollar-a-month Chat

(08:41):
GPT Plus subscription. That's not ten dollars a
month for just one month; that's for twelve months, with
one person saying a few weeks ago that they'd seen
people offered that same deal for canceling their subscription. And
actually, I got the same thing when I tried to cancel. Yes,
I pay for ChatGPT; I need to actually use
the fucking thing to criticize it. When I tried to
cancel, it was like, hey, do you want three months at
ten bucks apiece? And I was like, sure, just to

(09:03):
prove my point. Suspicious. But there is a greater problem
at play, by the way, and it goes beyond pricing:
it's that ChatGPT and OpenAI have been
marketed based on lies. So, ChatGPT has seven hundred
million weekly active users. OpenAI is yet to provide
a definition (yes, I've asked them), which means that an
active user could be defined as somebody who has gone
to ChatGPT once in the space of a week.

(09:25):
This term is extremely flimsy and doesn't really tell us much. Yes,
it's a lot of people, but how active are they?
Similarweb says that in July twenty twenty five, chatgpt
dot com had one point two eight seven billion
total visits, making it very popular. What do these facts
actually mean, though? As I said previously, ChatGPT's had
probably the most sustained PR campaign for anything outside of

(09:47):
a presidency or a pop star. Every single article about
AI mentions OpenAI or ChatGPT. Every single feature launch,
no matter how small, gets a slew of coverage. Every
single time you hear AI, you're made to think of
ChatGPT by a tech media that's never stopped to
think about their role in the hype or their responsibility
to their readers. And as the hype has grown, the

(10:07):
publicity compounds, because the natural thing for a journalist to
do when everybody is talking about something is to talk
about it more. ChatGPT's immediate popularity may have been viral,
but the media took the ball and ran with it,
and then proceeded to tell people it did stuff it
did not. People were then pressured to try this service
under false pretenses, something that continues to happen to this day.

(10:28):
And I'm going to give you a really fucking grisly
example. When I discovered this, when I went and looked
at it, it filled me full of rage. It's disgraceful,
what happened. On March fifteenth, twenty twenty three, Kevin Roose
of The New York Times would say that OpenAI's
GPT-4 was exciting and scary, and that it was exacerbating,
in his words, the dizzy and vertiginous feeling I've been

(10:48):
getting whenever I think about AI lately, wondering if he
was experiencing future shock. He then described how it was an
indeterminate level of better, and then said something that immediately
sounded ridiculous: in one test conducted by an AI safety
research group that hooked GPT-4 up to a number
of other systems, GPT-4 was able to hire a

(11:08):
human TaskRabbit worker to do a simple online task,
solving a CAPTCHA test, without alerting the person to
the fact it was a robot. The AI even lied
to the worker about why it needed the CAPTCHA done,
concocting a story about a vision impairment. Now, this doesn't
sound even remotely real now, but this was two years ago.

(11:29):
So I went and looked up the paper, and pretty
much everything that Roose described was illustrative. I didn't really
see evidence that it happened. Now, he's referring to the system
card, which every model has, that lists all the measures
used to train it and such, and this system card
led to the perpetration of one of the earliest falsehoods
and most eagerly parroted lies about this fucking industry. And

(11:50):
that was that ChatGPT, and generative AI
writ large, is capable of agentic actions. Outlet after outlet, and some
people who should definitely have known better, led by Kevin Roose,
eagerly asserted that an entire series of events took place
that doesn't remotely make sense, starting with the fact that
I don't think you can hire a TaskRabbit to
solve a CAPTCHA, at the very least not without a contrived

(12:12):
situation where you create an empty task and ask them
to complete it. Why not use Mechanical Turk or Fiverr?
There are people right now offering that service. Those were
actual, real things. But you know me, I'm a curious
little critter, so I went further and followed the
citation from the system card to the study, which is
by METR, and to the research page. It turns out

(12:32):
that what actually happened was METR had a researcher copy
and paste the generated responses from the model and otherwise
handle the entire interaction with the TaskRabbit, and,
based on the plurality of TaskRabbit contractors, it appears
to have taken multiple tries. On top of that, it
appears that OpenAI and METR were
prompting the model on what to say, which kind of

(12:54):
defeats the point, like, we don't actually know what they
prompted it to do. And when you look, it even says
it does, like, chain-of-thought reasoning, which didn't really
exist back then, and if it did, it was extremely
early. Chain of thought is reasoning, and that came out at the
end of twenty twenty four. This whole thing is absolutely insane.
It's absurd that anyone wrote about it as real. What happened,

(13:15):
just to be really blunt, is that, if they
even opened a TaskRabbit, and it's really not obvious whether
they actually did this, they had to go to the
model and say, okay, I'm opening a TaskRabbit window,
and now the person has said this, and now this
is it. It just doesn't sound real at all. But
even if it did, it's very obvious that they were
telling the model what to say and then copy-pasting

(13:36):
the response, and it took them multiple tries. It took
me five whole minutes to find this article, partly because
it was cited on the GPT-4 system card. I
then read it within that time, then wrote this
part of the script. It didn't require any technical knowledge
other than the ability to read. It is transparently, blatantly
obvious that GPT-4 did not hire a TaskRabbit

(13:58):
or indeed take any of the actions it was prompted to,
and they did not show the prompts they used,
likely because they had to use so many of them,
if they even did it. Anyone falling for this is
a mark, and OpenAI should have gone out of
their way to correct people. Instead, they sat back and
let people publish outright misinformation. Roose, along with his co-

(14:26):
host Casey Newton, would go on to describe this example
at length on a podcast that week, describing an entire
narrative where the human actually gets suspicious and GPT-4
reasons out loud (it's not a reasoning model) that it should
not reveal that it is a robot, at which
point the TaskRabbit solves the CAPTCHA. During this conversation,
Newton gasps and says, oh my god, twice. And when

(14:47):
he asks Roose, how does this model understand that in
order to succeed at its task it has to deceive the human,
Roose responds, we don't know, that is the unsatisfying answer,
and Newton laughs and states, we need to pull the
plug. And again: what? Disgraceful, embarrassing, reprehensible. All that and
more on the Hard Fork podcast, published weekly. You can

(15:11):
cut that ad, that's for free, fellas. Credulousness aside, the
GPT-4 marketing campaign was incredibly effective, creating an aura
that allowed OpenAI to take advantage of the vagueness
of its offering, as people, including members of the media,
willfully filled in the blanks for them. Altman has really
never had to work to sell his product. Think about it.
Have you ever heard OpenAI tell you what Chat

(15:31):
GPT can do, or go to great lengths to describe
its actual abilities? Even on OpenAI's own page
for ChatGPT, the text is extremely vague. You scroll down,
you're told that ChatGPT can write, brainstorm, edit,
and explore ideas with you. It can generate and debug code,
automate repetitive tasks (not clear what the tasks are), and
help you learn new APIs, question mark. With ChatGPT,

(15:53):
you can learn something new, dive into a hobby, answer
complex questions, and analyze data and create charts. Repetitive tasks?
Who knows. How am I learning? Unclear. It's got thinking
built in. What that means is also unclear, unexplained, and thus
allows a user to incorrectly believe that ChatGPT has a
brain and thinks. To be clear, I know what reasoning means,
but this website does not attempt to explain what thinking means.

(16:17):
You can also offload complex tasks from start to finish
with an agent, which can, according to OpenAI, think
and act proactively, choosing from a toolbox of agentic
skills to complete tasks for you using its own computer.
This is an egregious lie, employing the kind of weasel
wording that should torture its author for an eternity,
precise in its vagueness. OpenAI's copy

(16:37):
is honed to make reporters willing to simply write down
whatever they see and interpret it in the most positive light,
and thus the lie of inference began. What ChatGPT
meant was muddied from the beginning, and thus ChatGPT's
actual outcomes have never been fully defined. What ChatGPT
could do became a kind of folklore, a non-specific
form of automation that could write code and generate copy

(16:58):
and images, that can analyze data, all things that are true,
but one can infer much greater meaning and use from them.
One can infer that automation means the automation of anything
related to text, or that write code means write the
entirety of a computer program. OpenAI's ChatGPT Agent
is not, by any extension of the word, and I
quote, already a powerful tool for handling complex tasks, but

(17:18):
it has not, in any meaningful sense, committed to any
actual outcomes. As a result, potential users, subject to a
twenty-four-seven marketing campaign, have been pushed towards
a website that can theoretically do anything or nothing, and
have otherwise been left to their own devices. The endless
gaslighting, societal pressure, media pressure, and pressure from
their bosses has pushed hundreds of millions of people to

(17:39):
try a product that even its creators can't really describe,
or don't feel compelled to. And if I was wrong,
we'd have real use cases by now, and better metrics
than weekly active users. As I said in the past,
OpenAI is deliberately using these weekly active users so
that it doesn't have to publish its monthly active users,
which I believe would be much higher. Now, why would
it do this? Well, OpenAI has mentioned its twenty million

(18:01):
paying ChatGPT subscribers and five million business customers, with
no explanation of what the difference might be, really, other
than that it involves Team and Edu, but not Pro. Anyway,
this is already a mediocre three point five percent conversion rate, and
using monthly active users, which are likely eight hundred to
nine hundred million, but these are guesses, would make that
rate lower than three percent, which is pretty terrible considering

(18:22):
everyone says this shit is the future. I'm also
tired of having people claim that search or brainstorming or
companions are a lasting, meaningful business model. I'm really tired
of it. I'm tired of being told this again and
again and again. That's not what ChatGPT is going
to actually survive on. Bah. Okay, let's move on. Here's
another booster quip, though: OpenAI is making tons of money.

(18:44):
That's proof they're a successful company and you are wrong somehow.
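The subscriber arithmetic in this stretch of the episode is easy to check. Here's a back-of-envelope sketch using the figures as cited, where the eight-hundred-to-nine-hundred-million monthly active user range is Ed's guess, not a disclosed number:

```python
# Figures as cited in the episode; the MAU range is an estimate, not disclosed.
WAU = 700_000_000                  # claimed weekly active users
PAYING = 20_000_000 + 5_000_000    # subscribers plus business customers

conversion_wau = PAYING / WAU
print(f"{conversion_wau:.1%}")     # roughly 3.6 percent of weekly actives

# Against estimated monthly actives, the rate drifts toward and below 3 percent.
for mau in (800_000_000, 900_000_000):
    print(f"{PAYING / mau:.1%}")

# The Cal State deal: fifteen million dollars for five hundred thousand seats.
per_seat_month = 15_000_000 / 500_000 / 12
print(f"${per_seat_month:.2f}")    # $2.50 a user a month
```

At the top of that estimated monthly range, the conversion rate falls under three percent, which is the "pretty terrible" figure the episode lands on.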
So OpenAI announced that it had hit its first
one-billion-dollar month on August twentieth, twenty twenty
five, on CNBC. In fact, weirdly enough, by the way,
that quote was not in the TV interview. But anyway,
this also brings it exactly in line with my
estimate of five point two six billion dollars in revenue that I
believe it had made by the end of July. Did

(19:05):
that in a premium newsletter. Please pay me. However, remember
what the MIT study that I mentioned said: enterprise adoption
is high, but transformation is low. There are tons of
companies throwing money at AI, but they are not seeing
actual returns. That OpenAI's growth, as the single most prominent
company in AI, and, if we're honest, one of the

(19:26):
most prominent in software writ large, makes sense, but at
some point it will slow, because the actual returns for businesses
are not there. If they were, we'd have one article
where we could point to a ChatGPT integration that helped
scale a company, or save a bunch of money, or make
a bunch of money, written in plain English and not
in the gobbledygook of profit improvement. Also, OpenAI is

(19:47):
projected to make twelve point seven billion dollars in twenty
twenty five. How exactly will it do that? Is it
really making one point five billion dollars a month by
the end of the year? Even if it does, is
the idea that it keeps adding ten billion dollars or
more a year, every year, into eternity? What actual
revenue potential does OpenAI have long term? Its products

(20:08):
are about as good as everybody else's, cost about
the same, and do the same things. ChatGPT is
basically the same product as Claude or Grok, maybe
a little less Mecha-Hitler, or any number of different LLMs.
The only real advantages that OpenAI has are infrastructure
and brand recognition. These models have clearly hit a wall
in training, hitting diminishing returns, meaning that the infrastructural advantage

(20:29):
is that it can continue providing its service at scale,
nothing more. It isn't making its business cheaper, other than
the fact it mostly hasn't had to pay for it,
other than the site in Abilene, Texas, where it's promised
Oracle thirty billion dollars a year, in twenty twenty five.
I'm sorry, I don't buy it. I don't buy that
this company will continue growing forever, and its stinky conversion
rate isn't going to change anytime soon. When OpenAI

(20:53):
opens Stargate Abilene, it will turn profitable. How? I hear
this one a good amount. How? How? How? How's it
gonna happen? Nobody ever answers. No one ever
answers, actually, how this company will become profitable. It's fucking insane
to me. Nobody ever answers the question. Efficiencies. Efficiencies. They're
going to be efficient. Mmm, they're going to be efficient.
If you're gonna say GPT-5: I wrote a

(21:15):
huge scoop and then did an episode about why it's
not more efficient. In fact, it's less efficient. And I'm
sure one of you is going to argue, well, you know,
they could do their own custom silicon, there's a
ten-billion-dollar deal with Broadcom. How are they fucking paying for that?
Also, you realize that, well, actually, no, just, you know, no,
you're right, they're going to get the chip from Broadcom,

(21:35):
because you know what they always say about the first
generation of tech, right? It always works, and it's always
great, and it has no problems. That's what happens
with pretty much every first thing anyone makes in tech.
Anyway, let's move on. You'll hear
boosters also be like, well, my brother's friend's dog uses
ChatGPT and loves it, or, well, I heard this happen,
or my mate does this, or I heard

(21:57):
this person, or I use it in this one before
we go any further. Just to be clear, though, is
when you hear a booster bring up AI and they'll
say something, make sure they're talking about generative AI. Are
they actually talking about generative AI? Is this a large
language model thing? It's very, very very common for people
to conflate AI with generative AI. There are many different

(22:17):
kinds of AI. Make sure that the AI booster whatever
they're claiming whatever they're telling you is actually about large
language models. So there are all sorts of other kinds
of machine learning that people love to bring up. LM's
have nothing to do with folding at home, autonomous cars,
or most disease research. Well, okay, let's do a speed
run using AI led researchers to discover forty four percent

(22:38):
more materials. No, it didn't. MIT has now withdrawn this
paper, citing concerns about its integrity. I've linked it
in the show notes; there's a huge rundown. Here's another quote:
AI is so profoundly powerful that it's causing young people
to have trouble finding a job. While young people have
been having trouble finding jobs, there's no proof that AI
is the reason. Every piece of coverage I'm reading is

(22:59):
citing an Oxford Economics report that, amidst a bunch
of numbers, says, and I quote, there are signs that
entry-level positions are being displaced by artificial intelligence at
higher rates, a statement that it does not back up
other than by claiming that the high adoption rate by information
companies, along with the sheer employment declines in some roles since
twenty twenty two, suggests some displacement effect from AI, and

(23:20):
digging deeper, the largest displacement seems to be in entry-level
jobs normally filled by recent graduates. There's otherwise no data.
Anyone making this point is grasping at straws. I go
into this in more detail in the newsletter called Sincerity
Wins the War, which I've linked to, but it's one
of the worst-reported stories in tech. And now I'm
actually going to ad-lib for a second, because

(23:40):
I forgot this while writing the script. There was also
this thing that came out of Stanford that said there's
been a thirteen percent drop in jobs affected by AI,
and this was used as proof that AI was taking them. Now,
curious little critter that I am, I went and read it.
What it actually did was find a bunch of
jobs that they think are related to AI and being affected
by it, saw they were going down, and went, oh,
(24:00):
it must be AI that did that. They fart
around with various statistics, but that's the long and short of it.
I'll give you an example. One of the jobs: accountancy. Now,
any accountants listening, big up to my accountant listeners, there's
been a hiring crisis in accountancy for years. People are
not becoming CPAs. The reason there are fewer of them
is that fewer people are becoming them. It's nothing to
do with AI. Like, imagine if anyone had

(24:25):
put half as much effort into writing up these stories
as I did writing up one of these booster quips.
But here's another one: that AI is replacing young coders.
It is not. In fact, Amazon's cloud chief just
said that replacing junior employees with AI is one of
the dumbest things he's ever heard. There is no actual,
real evidence that this is the case. Every single story
you have read is anecdotal. Anyone peddling this has an

(24:46):
agenda or is not reading. Every CEO mentioning this specifically
avoids saying the words that AI is replacing people, because
AI can't replace people. I will add an aside: there
are people whose jobs have been replaced by AI. Translators, transcribers.
Brian over at Blood Is in the Machine. Blood in the
Machine, even. Sorry, Brian. He's doing a great job

(25:07):
on covering this. There are people that have lost jobs.
These people are losing them because their bosses are fucking stupid,
because their bosses are just taking the shittiest possible version
of their work and slopping it up. That's not happening
at the knowledge worker scale, nor is it happening at
the coder scale. Everyone telling you that has an agenda.
But boosters will also claim that AI is doing science

(25:28):
research somehow, or will do it, and it won't.
I've included a write-up about why foundation models can't
do this. Someone's going to read it and say, but
there's this bit where it says it isn't a defeat
of LLMs. And the reason he says this is because,
I shit you not, he says that LLMs aren't incapable of doing
scientific research, he says they're insufficient, which is

(25:50):
the same thing, I think. They're insufficient anyway. He claims they're
also not dead weight for science, then spends hundreds of
words meandering around that point to kiss up to
AI boosters, for some reason. I assume because they've terrified
him by being really annoying. These people need to
go outside and touch grass. Now, a lot of people
think they're going to tell me that they use AI

(26:12):
all the time, and that will change my mind. I
cannot express how irrelevant it is that you have a
use cases. Every use case I hear is one of
the following. I use it for brainstorming, to which I say,
who cares. Not a business model, it's commoditized. I use
it like search. Who cares. It's not even good at search.
It's fine, It's not even better than the low bar
set by Google Search. The results it gives on great,

(26:33):
and the links are deliberately made smaller, which gets in
the way of me clicking them so I can actually
look at the content. If you're using ChatGPT for search,
you may not actually care about the content of the
things you're looking at. If I'm wrong, great, you now
have a functional search engine, congratulations. Well,
I use it for research. And if you use
it for research, you do not respect actual research.
You want a quick answer. It's that simple. These reports

(26:54):
are slop. I've read many, many, many AI reports and they're not good. Sorry. Well, I use it for coding. I know someone who used it for coding, and I'll get to that in a minute. But all of this would be fine and dandy if people weren't talking about this stuff as if it was changing society. None of these use cases come close to explaining why I should be impressed by generative AI. It also doesn't matter if

(27:15):
you yourself have a kind of useful thing that AI
did for you once. We are so past the point
when any of that matters. AI is being sold as
a transformational technology, and I am yet to see it
transform anything. I am yet to hear one use case
that truly impresses me, or even one thing that feels
possible now that wasn't possible before. This isn't even me
being a cynic. I'm ready to be impressed. I just

(27:36):
haven't been impressed in three fucking years and it's getting boring. Also, tell me with a straight face that any of this shit is worth the infrastructure. Remember, AI boosters are arguing that this stuff is powerful. None of these use cases are powerful sounding. But sir, I... I agree, sir. Vibe coding is changing the world, allowing people who can't code

(27:56):
to make software. Now, this is one of the most brain-dead takes about AI and coding, and it's that vibe coding is allowing anyone to build software. And you'll never guess what, Kevin Roose did this article covering it. While writing the script, I hadn't even known about it. Anyway. Well, it's technically true in that one can just type build me a website into one of many coding environments.

(28:18):
This does not mean said website is functional, secure, or useful. Let's make this really clear. AI cannot just handle coding. Go into the show notes and read this piece I've linked from Colton Voege. I have actually interviewed him now; the episode's going to be coming out in the next few weeks, and the interview is fucking brilliant. And then read the other one I've linked, by Nik Suresh. If

(28:39):
you contact me about AI and coding without reading these, I will send them to you and nothing else, or crush you like a car in a garbage dump into a cube. One or the other; I will choose at the time. Also, show me a vibe-coded company. Please. Not a company where somebody who can code has quickly spun up some features. A fully functional, secure, and useful app that has made money, made by somebody who

(29:01):
cannot read or write code. You won't be able to, because it is impossible. Vibe coding is a marketing term based on lies, peddled by people who lack either knowledge or morals. And are AI coding environments making people faster? I don't think so. In fact, a recent study suggested that they actually make software engineers nineteen percent slower. The reason that nobody is vibe coding entire companies is

(29:22):
because software development is not just put a bunch of code in a pile and hit go, and oftentimes when you add something, it breaks something else. This is all well and good if you actually understand code. It's another thing entirely when you're using Cursor or Claude Code like a kid at an arcade machine, turning the wheel repeatedly without having put a coin in and pretending that you're playing while the demo's going on. Vibe coders are also awful for the already negative margins of most AI

(29:45):
coding environments, as every single thing they ask the model to do is imprecise, burning tokens in pursuit of a goal they themselves don't really understand. Vibe coding doesn't work, it will not work, and pretending otherwise is at best ignorance and at worst supporting a campaign built on lies.

(30:15):
And this has all built up to one final point. I'm no longer accepting half-baked arguments. If you're an AI booster, please come up with better arguments, and if you truly believe in this stuff, you should have a firmer grasp on why you do so. It's been three years, and the best some of you have is it's really popular or Uber also burned money. Your arguments are based on what you wish were true rather than what's

(30:36):
actually true, and it's deeply embarrassing. Then again, there are many well-intentioned people who aren't necessarily AI boosters who repeat these arguments, regardless of how thinly framed they are, in part because we live in a high-information, low-processing society where people tend to put great faith in people who are confident in what they say and sound smart. I also think the media is failing on a very

(30:57):
basic level to realize that their fear of missing out
or seeming stupid is being used against them. If you
don't understand something, it's likely because the people you're reading
or hearing it from don't either. If a company makes a promise and you don't understand how they'll deliver on it, it's their job to explain how, and your job to suggest it is implausible, in clear and defined language. This has gone beyond simple objectivity into the realm of an

(31:19):
outright failure of journalism. I have never seen more misinformation about the capabilities of a product in my entire career,
and it's largely peddled by reporters who either don't know
or have no interest in knowing what's actually possible, in
part because all their peers are doing the same thing
and saying the same nonsense. As things begin to collapse,
and they sure look like they're collapsing, but I'm not

(31:40):
making any wild claims about the bubble bursting quite yet,
it will look increasingly deranged to bluntly publish everything
that these companies say. Never have I seen an act of outright contempt more egregious than Sam Altman saying that GPT five was actually bad and that GPT six will be even better. Members of the media, Sam Altman does not respect you. He is not your friend. Clammy Sam

(32:01):
Altman is not secretly confiding in you. Clammy Sam thinks you are stupid and easily manipulated, and that you will print anything he says, largely in part because many members of the media will print exactly what he says whenever he says it.
And to be clear, if you wrote about GPT six
and made fun of it, that's great. But let's close
by discussing the very nature of AI skepticism and the
so called void between those who hate AI and those

(32:23):
who love AI, from the perspective of one of
the most prominent people in the skeptic camp. Critics and
skeptics are not given the benefit of grace, patience, or
in many cases, hospitality when it comes to their position.
While they may receive interviews and opportunities to give their side,
it's always framed as the work of a firebrand and outlier,
or somebody with dangerous ideas that they must eternally justify.

(32:46):
Skeptics are demonized, their points under constant scrutiny, their allegiances
and intentions constantly interrogated for some sort of moral or
intellectual weakness. Skeptic and critic are words said with a sneer, a note of trepidation that the listener should be suspicious that
this person isn't agreeing that AI is the most powerful,
special thing ever. To not immediately fall in love with

(33:07):
something that everybody is talking about is to be framed
as a hater. To have oneself introduced with the words not everyone agrees on forty percent of your appearances.
By comparison, AI boosters are the first to get TV
appearances and offers to be on panels, their coverage featured prominently on Techmeme, their sloplike books called shit like The Future of Intelligence, Masters of the Brain, featuring

(33:28):
eighteen interviews with different CEOs that all say the same thing.
They don't have to justify their love. They simply have to remember all the right terms, chirping out test-time compute and the cost of inference is going down enough times for Sam Altman or Dario Amodei to give them an hour-long interview where he says the models are, in a few years, going to be the most powerful schoolteacher ever built.
And by the way, yeah, I did sell a book

(33:49):
because my shit fucking bangs. My shit rocks. I'm not going to be too smug, but like, I put a lot of effort into this and I research it very well. Others should try harder. I have persistent, deeply sourced arguments that I've built over the course of years. I didn't become a hater because I'm a contrarian. I became a hater because the shit that these fucking oafs
have done to the computer pisses me off. I did

(34:11):
The Man Who Killed Google Search because I wanted to know why Google Search sucked. I wrote Sam Altman, Freed because at the time I didn't understand why everybody was so fucking enamored with this clammy sociopath. Everything I do
comes from a genuine curiosity and an overwhelming frustration with
the state of technology. I started writing the newsletter that
led to this podcast with three hundred subscribers and sixty views,

(34:33):
and have written it as an exploration of subjects that
grows as I write. I do not have it in
me to pretend to be anything other than what I am,
And if that's strange to you, well I'm a strange man,
but at least I'm an honest one. I do have a chip on my shoulder, in that I really do not like it when people try to make other people feel stupid, especially when they do so as a means
of making money for themselves or making someone else look good.

(34:53):
I write this stuff out because I have an intellectual interest.
I like writing, and by writing, I'm able to learn
and process my complex feelings around technology. And talking it
out actually feels good. It's an intellectual exercise that I
really enjoy. I happen to do so in a manner
that hundreds of thousands of people enjoy every month. And
I'm not specifying where those people go. And if you
think that I've grown this by being a hater, you

(35:13):
are doing yourself the disservice of underestimating me, which I
will use to my advantage by writing deeper, more meaningful,
and more insightful things than you, and then I'll say
them with lots of curse words on this podcast. I've
watched these pigs ruin the computer again and again and
make billions doing so. And all of this is happening
while the media celebrates the destruction of things like Google,
Facebook and the fucking environment in pursuit of eternal growth.

(35:36):
I can't manufacture my disgust, nor do I have to, nor can I manufacture whatever it is inside that makes it impossible to keep quiet about these things. I don't know if I take this too seriously, or whether I don't take it seriously enough, because they keep saying fucking shit. But I'm honored that I'm able to do so, and I really appreciate everyone who listens, reads, or engages with me in any way. I really do love you

(35:58):
all for listening. I know that this was a long three-parter. I've enjoyed recording it. I've done lots of retakes. Matt Osowski, love you man. Sorry for all of this. I'll catch you next episode. Thank you for listening to Better Offline.

(36:19):
The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at Mattosowski dot com, M A T T O S O W S K I dot com. You
can email me at ez at Better Offline dot com or visit Better Offline dot com to find more podcast links and of course, my newsletter. I also really recommend

(36:41):
you go to chat dot Where's Your Ed dot at to visit the Discord, and go to r slash Better Offline to check out our Reddit. Thank you so much for listening.
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Host

Ed Zitron
