Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media. Hello, and welcome to this week's Better Offline monologue.
I'm your host, Ed Zitron, and this has been a tough week, I'm not gonna lie. Monday we recorded a wonderful episode with Victoria Song and Ashmun Rodriguez, which got
(00:23):
lost due to a technical fault. And then I recorded this entire monologue, which then got lost to a completely different technical error: different computer, different room, different place. I love making this show for all of you. It's been kind of a pisser, but it's important we get on top of this goddamn subject. So check the episode notes, buy a challenge coin, read the newsletter. There's a premium
(00:44):
version unrelated to this show. I'd love it if you'd subscribe, and if you don't, I won't feel anything. Don't worry about it, but let's get to it. Last week OpenAI launched GPT-5, a new flagship model of some sort that's allegedly better at coding and writing, but in reality is much more of the same, another model that's indeterminately better at benchmarks built specifically for large language models, because they can't do actual work. The Wall Street Journal
(01:07):
reported late last year that it took multiple half-billion-dollar training runs to get GPT-5 off the ground, and Altman himself said on a podcast with Theo Von of all people that GPT-5 scared him and made him say, what have we done? And that's a good bloody question, Sammy. According to OpenAI, GPT-5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model, and a real time
(01:30):
router that quickly decides which model to use based on conversation type, complexity, tool needs, and your explicit intent. I read all of that out because I wanted you to hear how convoluted GPT-5 is and how much effort OpenAI has had to put in to create something that, based on all reports, is fine. To quote Simon Willison, it just does stuff. Wowie zowie. In simpler terms,
(01:51):
ChatGPT's version of GPT-5 takes the user's prompt and decides which model to use as a result, using one of a few submodels, GPT-5 regular, mini, or nano, and then spits out an output. And there are rate limits, by the way, so if you use it too much you get kicked down to mini automatically. If you ask it to think about something, it will choose to engage the reasoning part of the model. These things do not think, by the way. They are probabilistic models,
(02:14):
so reasoning is kind of like you get a prompt and then it reads the prompt with another model and says, okay, what would the steps be to execute this? It has some returns, but recent papers suggest that it doesn't work that well anyway. Using this, you mostly have to trust that OpenAI will choose the best model for the job, as opposed to the cheapest one for OpenAI to serve, which is what I think they're actually doing. As part
(02:36):
of the launch, OpenAI has also killed access to all other models, or at least is planning to, and truncated user access to two choices, GPT-5 or GPT-5 thinking, with legacy models like 4o and 4o mini and their associated rate limits gone immediately for most, though some of them are back, and in sixty days for people paying two hundred dollars a month. You'll work out why
(02:56):
I'm dithering in a second. This enraged the ChatGPT subreddit, with users claiming that GPT-5 was, and I quote, wearing the skin of their dead friend, referring to GPT-4o, and another saying that GPT-4.5 genuinely talked to them and, as pathetic as it sounds, was their only friend. And I must be clear, we can make fun of these people if we want, but this is actually genuinely sad. There is something going on
(03:19):
here where people are so lonely that they want to
talk to a chat bot. Mock them if you want,
and some of you will, and I don't know if
I even want to. But something is happening here and
it isn't brilliant. But after a few days, Clammy Sam Altman restored access to GPT-4o for paid users, and then this only managed to stem the tide briefly, with one user saying that their baby was back, that they cried
(03:41):
a lot, and that they were crying as they wrote
the post, ending by saying love you, I assume to GPT-4o. Here's the problem, though: users are now doubting that the 4o that OpenAI has restored is actually the same model. One post claims that 4o has lost its soul, and another says that 4o is lobotomized all the way down. In one thread, one user said that 4o had gotten markedly worse suddenly,
(04:03):
and another said it's definitely not the same, though others
in the thread claimed that it was. Another post said
legacy GPT-4o is GPT-5 in cosplay, even as others pleaded with them and said it was exactly the same and that people were experiencing some strange phantom love or placebo effect. And I think that's actually what's going on, writ large. ChatGPT was never a success based on its actual abilities or outputs or things it
(04:26):
could do, but a global marketing campaign perpetuated by a tech and business media asleep at the wheel, or worse still, one that wanted these companies to win and helped them by lying. I have done a comprehensive evaluation of the last three years of press around ChatGPT and GPT itself, and you look at the things in twenty twenty three and there's shit that's just fucking made up. There's a whole thing, go and look up TaskRabbit GPT-4. There
(04:49):
are so many, I'll link one in the fucking notes. There are people that claim that GPT-4 ordered a TaskRabbit to complete a CAPTCHA. Now, on top of this not being a thing that TaskRabbit generally does, this is from the system card of GPT-4, and it claims that it hired a TaskRabbit, except when you look at it, it just said it messaged them. It's very clearly made up. But everyone reported it as agents
(05:11):
existing in twenty twenty three. Aha. Every time I read this stuff, I feel a little goddamn insane. But anyway, because these models do not have obvious, replicable ways outside of benchmarks of testing what they can do, each user is effectively in a constant vibe check with the models, and the sycophantic qualities of GPT-4o were clearly enough to endear users to the platform. People using GPT-4o,
(05:36):
they couldn't tell you why it's different to GPT-5 other than it feels less human or doesn't do the same things, even if those things are kind of hard to define. This is what happens when you build a following for a product based on specious hype and vague promises and lines of inference, of course, and then allow users to make up the reasons that they care. You begin engaging with the gamer mindset of vibes-based fandom
(05:57):
that's completely unbreakable until you make one subtle change that they could never see coming that breaks the illusion, leading to gamer-like distrust and anger. You see, I theorize that the vast majority of ChatGPT users do not know why they use it in the first place. Three years of media pressure to use AI, that AI was the future, their bosses saying AI is important and that you would be left behind if you didn't use AI,
(06:17):
mean that people come to ChatGPT to work out why they're using it in the first place, which has led to all sorts of bizarre emotional attachments, kind of like one's attachment to a live service game, and shout out to Catharsis twenty three on Bluesky, who made this observation. As a result, users were incredibly sensitive to changes like removing or changing a model, because their association with ChatGPT was based on how GPT-4o
(06:38):
works and sounds. By ripping it out and replacing it with GPT-5, users immediately felt jilted and swindled by OpenAI, and much like a dying live service game, any changes that have been made as a result were met with paranoia and confusion. Clammy Sam Altman's attempts to paper over the problem by boosting rate limits on GPT-5 thinking and restoring access to GPT-4o and
(06:59):
other models for paid users were not enough because, in a very real sense, many of those users could not tell you what they liked about 4o to begin with. GPT-4o wasn't good so much as it was an investment of time. By showing that OpenAI is willing to cut things arbitrarily, users can no longer trust that this investment of time is worth it, especially as many complain that the launch of GPT-5 deleted a bunch
(07:19):
of conversations. Now OpenAI sits in an odd spot where their supposedly huge, Manhattan Project-level launch has been met with either apathy or agony. While they've placated users in the short term, it's very clear that the vast majority of users dislike GPT-5, and power users don't seem particularly impressed with it either. This was meant to be the big launch that changed things for
(07:40):
OpenAI forever, but it's turned into something of a mass betrayal, or just kind of a mass letdown. And because it's based on vibes rather than its actual ability to do something, there's very little one can do to fix this problem. It's unclear how all of this affects the company long term, but things do not seem good. Sam Altman has already said that OpenAI is having to reallocate capacity for the next couple of months, prioritizing paying ChatGPT
(08:02):
users over API demand, up to the current allocated capacity and commitments that they've made to their customers, code for those who do not want to pay for priority processing, which is now available for any developer. It's also unclear what happens next. GPT-5 is not the future. OpenAI is running out of capacity, and their product, despite the fanfare, has no capabilities or reasons to adopt it
(08:23):
that are really new or interesting. Years of allowing the media to spin out ridiculous narratives about what AI can or could do, using vague pablum that kind of suggests these models are more powerful than they are, has created a PR campaign for a product that does not exist, and the seven hundred million weekly active users of ChatGPT have clearly arrived there without much guidance, their attachment born of
(08:44):
compulsion and societal pressure rather than any real use cases.
When you allow people to define an indeterminately powerful tool
by any standard they like, with no interest in correcting them,
with no interest in guiding them, with no interest in
actually showing them what it was that they were paying for other than it can generate stuff, you'll create an attachment to it that defies any real ability you have
(09:05):
to control things. OpenAI was never forced to productize, and at scale, it's a very real possibility that people, pressured by the media and society itself, have forced themselves to find meaning in LLMs somehow, even if it feels kind of stupid. And what I mean by this is you get to the product and there really isn't that much guidance. Go on OpenAI's website and have a look at the ChatGPT page and look at
(09:27):
what it tells you to do. It's quite vague. You can look at it: it can analyze data, right, it can generate stuff, it helps you do ideas. Is that good? Am I smart for using this? Everything else you look at in the software world will tell you what it is you're using it for, any consumer-driven software at the very least. Yet ChatGPT's never had to, because
(09:49):
the media for three years, or two years, I guess, with ChatGPT, has kind of just sat there doing the work for them, telling them, oh yeah, you can use it as a powerful personal assistant. Ha, to assist me with what? Nasty Kevin Roose, creepy Kevin in the New York Times, a few weeks ago, when he did his Everyone's Using AI piece with Casey Newton, he
(10:12):
said that it's a powerful assistant. It's like, for what? You can't do that. This shit can't control my calendar. I don't want it touching my emails. I don't think most people do either. So you've just got people to use it as a shittier search engine, an online companion, and a brainstorming thing, which is a natural way to get people kind of addicted, but addicted to
(10:32):
a product you don't truly control and a product you
don't truly understand, one that can be swiped by just
about anyone. I actually think we're on the kind of downward spiral for this shit. I am oinking like a pig and squawking like a bird watching this happen. Even CoreWeave is crashing as I record this. They're down seventeen point nine one percent. Still up too much, though.
(10:56):
And I really do think something has shifted thanks to
GPT-5. I'm excited, and like Dr. Stone once said, get excited, because I think the next few months, like the next year, is going to be chuckle-heavy. We're going to be whimsy-pilled as we go through the remainder of the AI boom, and I'll be here to guide you through it. Thanks for listening.