Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Soul of an angel, body of a devil,
chosen by God and perfected by science. This is Better
Offline, and I'm your host, Ed Zitron. Now, while working
(00:23):
on my newsletter last week, I was chatting with my
friend, friend of the show Casey Kagawa, about generative AI,
and we kept coming back to one thought: where's the money?
Where is it? No, really, where is the money? Where
is the money that this supposedly revolutionary, world-changing industry
is making, and of course will make, in the future?
And the answer is simple. After hours and hours of
(00:45):
grinding through earnings, of grinding through media articles, of grinding
through all sorts of things, I just don't believe it
really exists. It's real, but it's small. Generative AI lacks
the basic unit economics, product-market fit, or market penetration
associated with any meaningful software boom, and outside of OpenAI,
the industry may be pathetically, hopelessly small, all while providing
(01:06):
few meaningful business returns and constantly losing money. I'm going
to be pretty straightforward with everything I say in this
two-parter because the numbers, the facts, and my
hypotheses are pretty fucking damning of both the generative AI
industry and its associated boosters. You're going to get this episode.
Then there's going to be a monologue about something else
or something related. I really haven't got to it yet,
and then a second part which I'm recording immediately after
(01:28):
this one, a little behind the curtain there for you. Anyway,
in reporting this analysis, I've done everything I can to
try and push back against my own work, and I've
sought evidence to counter the things that I've seen, like
the revenue and the business models of these companies. Yet
in doing so, I've only become more convinced of the
flimsiness of generative AI and the associated industry, and the
(01:49):
likelihood of this bubble bursting in a way that kneecaps
tech valuations for a prolonged period, or worse, hits the
major stock market. Now, I really had originally written a
far more jocular script, but while I was
writing it, I realized I really had to be blunt
because what I'm describing is a systemic failure. Venture capital
has propped up OpenAI and Anthropic, two companies that
(02:11):
burned a combined ten point five billion dollars in twenty
twenty four, and that number is set to double or
more in twenty twenty five. The tech media has allowed
Sam Altman to twist them to validate completely fictional ideas
as a means of propping up this unprofitable, environmentally destructive
software company, and big tech has become so disconnected from
reality that it is incapable of seeing how little actual
(02:33):
returns there are in generative AI, and they're failing. By
the way. As I'll walk you through in these episodes,
the generative AI industry is very small, with the consumer market
of the entire American generative AI industry outside of ChatGPT
barely cracking one hundred million monthly active users, which puts
them below a lot of free to play games that
(02:53):
you get on your iPhone. Hyperscalers have already spent hundreds
of billions of dollars in capital expenditures for an AI
industry that has the combined monthly active users of a
free to play mobile game. I really must repeat myself,
it's insane. But unlike most mobile games, generative AI doesn't really
make any money. And for those of you wondering if
(03:15):
selling access to AI models is the solution, it's important
to know that OpenAI, the market leader in generative AI,
made less than a billion dollars on API calls in
twenty twenty four, and that's when people plug their models
into their own products. For those of you who don't understand, it's
the difference between you loading up the ChatGPT
app, or someone having an app with generative AI like ChatGPT
plugged into it. Now, Microsoft pays OpenAI a
(03:37):
revenue share of twenty percent on them selling OpenAI's models,
so around two hundred million dollars. So this means that Microsoft
likely only makes a billion dollars in revenue from
API calls themselves. This is a pathetic amount of money
and suggests there really isn't significant demand at all, or
they're not charging enough. Neither of these is great. And honestly,
I'm sick and tired of hearing people prop up this
(03:59):
fucking industry. In these episodes, I will explain as calmly
as possible how the generative AI industry barely exists outside
of OpenAI. And honestly, in writing this, I've become
completely disgusted at Silicon Valley, at the waste. Why is
nobody talking about the revenues? Why is nobody sharing real
user numbers other than OpenAI? Well, I believe it's
(04:20):
because there isn't that much money, and there certainly aren't
that many users. Nobody is making a profit from this
other than consultants, and that's because this is a hype
driven movement. What you see on TV and in the
newspaper is not the advent of a revolutionary piece of technology.
It's a cynical marketing campaign for one company: OpenAI.
(04:42):
I need you to understand how precarious this all is.
So much money has been wasted propping up an industry
that only burns money, that does not have mass market appeal.
ChatGPT is not significant enough, or useful enough, or
meaningful enough to justify spending nine billion dollars to lose
five billion dollars. And yes, those are the raw economics
(05:02):
of OpenAI. Now you may say, well, Uber lost
a lot of money, didn't it, Edward? Guess what? Uber
lost about six point two billion dollars in one year,
and that was in twenty twenty when they couldn't run
their bloody service. Uber is a very different company. I
will gladly if you email me, I will explain this
to you. But Uber is not a comparison. There is
no comparison to what OpenAI is doing, what Anthropic
(05:23):
is doing. I sound crazy as ever, but you're going
to understand why when I'm done. I am deeply worried
about this industry, and I need you to know why.
But in this first episode, I'm going to focus on
one specific thing, and that's the capitalist delusion known as
OpenAI, a company that encompasses almost all of the traffic, funding,
and attention in generative AI, and I believe they'd die
(05:46):
without a constant flow of venture capital and hyperscaler welfare.
And I actually don't know why I said I believe
that. They will. They cannot survive without that money. But okay,
let's take a second. Let's talk about OpenAI's unit economics.
Putting aside the hype and the bluster, OpenAI, as with
all generative AI model developers, loses money on every single prompt
(06:08):
and output. Its products do not scale like traditional software,
in that the more users it gets, the more expensive
its services are to run because its models are so
compute-intensive. For example, ChatGPT having four hundred million
weekly active users is not the same thing as a
traditional app like Instagram or Facebook having that many users.
Or indeed Uber. The cost of serving a regular user
(06:29):
of an app like Instagram is significantly smaller because these
are effectively websites with connecting APIs, images, videos, and user interactions.
These platforms aren't innately compute heavy, and so you don't
need to have the same level of infrastructure to support
the same number of people. Conversely, generative AI requires expensive-to-buy,
expensive-to-run, and expensive-to-maintain graphics
(06:51):
processing units GPUs, both for inference and training the models themselves.
These GPUs must be run at full tilt for both
inference and training, which shortens their lifespan while also consuming
ungodly amounts of energy. And by the way, inference is
just the thing that happens when you tell ChatGPT something.
I think it infers the meaning of the prompt. And
the training is what they do when they throw all
(07:11):
the training data to make the model huh smart. Not
really though. And by the way, surrounding the GPU in
there, it isn't that the GPUs just kind of hang out.
There's the rest of a computer, which is usually highly
specced and incredibly difficult to cool, and thus very, very expensive.
These generative AI models also require endless amounts of training
data and supplies of that training data have been running
(07:32):
out for a long time. While synthetic data might bridge
some of the gap, there are likely diminishing returns due
to the sheer amount of data necessary to make a large,
loud and quich model even larger, as much as more
than four times the size of the Internet. This is insane.
There is not enough data, and it already kind of sucks,
and it's not getting better. These companies also must spend
(07:55):
hundreds of millions of dollars on salaries to attract and
retain AI talent, as much as one point five billion
dollars a year in OpenAI's case, and that's before
stock-based compensation. In twenty sixteen, Microsoft claimed that top
AI talent could cost as much as an NFL quarterback
to hire, and that sum has likely only increased since
then, given the generative AI frenzy and the fact they're overpaying
(08:15):
quarterbacks, as an aside. One analyst told The Wall
Street Journal that companies running generative AI models could and
I quote be utilizing half their capital expenditures because all
of these things could break down. That's that's really bad.
These costs are not a burden on OpenAI or Anthropic,
but they absolutely are on Microsoft, Google and Amazon. This
(08:35):
shit's crazy anyway. As a result of the costs of
running these services, a free user of ChatGPT is
a cost burden on OpenAI, as is every single
free customer of Google's Gemini, Anthropic's Claude, Perplexity, or any
other generative AI company. Said costs are also so severe
that even paying customers lose these companies money. Even the
(08:56):
most successful company in the business appears to have no
way to stop burning cash. And as I'll explain, there's
only really one real company in the industry, OpenAI,
and OpenAI is not a real business. But let's
start with a really, really important fact. If you forget
everything I say, I want you to remember this: OpenAI
spent nine billion dollars to make just under four
(09:19):
billion dollars in twenty twenty four, and the entirety of
their revenue that's about four billion dollars, is spent on compute,
two billion dollars to run models and three billion dollars
to train them. That is completely and utterly fucking insane.
That is bonkers. That is crazy. That is completely nuts.
This is not a real company. It is insane. We're
allowing this. Everyone should be screaming this at everyone. We
(09:40):
live in an alternate reality where this is acceptable. There
has been no precedent for this. Not Amazon Web Services,
not Uber, not anyone. No one has done this, and
it's sickening and wasteful that we continue to. And in
the past I've repeatedly said that OpenAI lost five
billion dollars after revenue. Now that is true. By the way,
that is completely true. They made money and they lost money,
but ended up losing five billion, either way. However,
(10:04):
I really just can't in good conscience suggest that
OpenAI only spent five billion dollars. It's time to
be honest about these numbers. While it's fair to say
that their net losses are five billion, they spent nine
billion dollars to lose five billion dollars. Let's really get
(10:30):
down into the nitty gritty of these numbers. So, as
discussed previously, according to reporting by The Information, OpenAI's
revenue was likely somewhere in the region of four
billion dollars in twenty twenty four. Their burn rate, according
to The Information, was five billion dollars after revenue in
twenty twenty four, excluding stock-based compensation, which OpenAI,
like other startups, uses as a means of compensation on
(10:51):
top of cash. Nevertheless, the more it gives away, the
less it has for capital raises. And these are technically costs,
though they're not real money unless there's a liquidity event,
but that's something you don't need to worry about.
To put this in blunt terms, based on reporting by
The Information, and I'm repeating myself here, but I really
need you to remember this: running OpenAI cost nine
(11:12):
billion dollars in twenty twenty four. The cost of the
compute to train models alone, three billion dollars,
obliterates the entirety of their subscription revenue, which was about
three billion dollars by the way, and the compute from
running models, two billion dollars, takes the rest and then some.
They actually end up losing an extra billion on top
of that. Sam Altman's net worth is a billion dollars.
(11:34):
By the way, Casey Kagawa has now used this as
the Altman Index, so it's like you've lost one
Sam Altman. That's a billion dollars. But just to be clear,
it doesn't just cost more to run OpenAI than
they make. It costs them a billion dollars more than
the entirety of their revenue to run the software they
sell before any other costs. Why are we not more
(11:54):
concerned about this company now? Something else to note is
that OpenAI also spends another large amount of money
on salaries, over seven hundred million dollars in twenty twenty
four before you consider that compensation from stock, a number
that will also have to increase because OpenAI is growing,
which means hiring as many people as possible, and they're
paying through the nose for them. But let's talk about
(12:15):
how OpenAI makes money. OpenAI sells access to
its models via its API and sells premium subscriptions to
ChatGPT. The majority of its revenue, over seventy percent,
comes from subscriptions to premium versions of ChatGPT. The
Information also reported that OpenAI now has fifteen point
five million paying customers, though it's unclear what level of
the service they're paying for or how sticky these customers
(12:37):
are, as in how likely they are to stick around,
or the cost of acquiring these customers, or really any
other metric to tell us how valuable these
customers are to the bottom line. Nevertheless, OpenAI loses
money on every single paying customer, just like its free users.
Increasing paid subscribers to OpenAI services somehow increases
OpenAI's burn rate. This is not a real company. Now,
(13:00):
The New York Times reports that OpenAI projects it
will make eleven point six billion dollars in twenty twenty five,
and assuming that OpenAI burns at the same rate
it did in twenty twenty four, spending two point two
five dollars to make one dollar, OpenAI is on
course to burn over twenty six billion dollars in twenty
twenty five for a loss of fourteen point four billion dollars.
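To show where that projection comes from, here's a minimal Python sketch of the arithmetic, assuming the rough twenty twenty four figures cited above and the New York Times' revenue projection; the twenty twenty five numbers are an extrapolation, not reported results.

```python
# A minimal sketch of the arithmetic above, using the figures cited in this
# episode. The 2024 numbers are the reported/estimated ones; the 2025 numbers
# are an extrapolation assuming the same spend-to-revenue ratio, not results.

revenue_2024 = 4.0    # billions of dollars, approximate 2024 revenue
spend_2024 = 9.0      # billions of dollars, approximate 2024 operating spend

burn_ratio = spend_2024 / revenue_2024   # ~2.25 dollars spent per dollar made
loss_2024 = spend_2024 - revenue_2024    # ~5 billion dollars lost

revenue_2025 = 11.6   # billions of dollars, projected 2025 revenue (NYT)
spend_2025 = revenue_2025 * burn_ratio   # ~26.1 billion if the ratio holds
loss_2025 = spend_2025 - revenue_2025    # ~14.5 billion projected loss
                                         # (the episode rounds this to 14.4)

print(f"2024: spent ${spend_2024:.1f}B to make ${revenue_2024:.1f}B, "
      f"${burn_ratio:.2f} spent per $1 earned, ${loss_2024:.1f}B lost")
print(f"2025 projection: ~${spend_2025:.1f}B spend, ~${loss_2025:.1f}B loss")
```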
Who knows what their actual cost will be. Now you've
(13:21):
probably heard about SoftBank coming in. SoftBank's going
to feed them money, and SoftBank said they're going
to spend money on this, that, and the other. That
round has not closed yet. Masayoshi Son, a complete fucking
idiot who's lost thirty-odd billion dollars for SoftBank,
the Japanese mega conglomerate. He's dedicating billions of dollars
of revenue to buying OpenAI services, unless this is
(13:43):
a straight up trade where he's just sending money before
the services come in. I don't know if it happens,
and I'm going to get into things like agents later,
but The Information reported that OpenAI expects to make
three billion dollars in revenue from agents. By the end
of this episode, you're going to realize how fucking stupid
that sounds. We'll get there later. It's also important to
note that OpenAI's costs are partially subsidized by its
(14:05):
relationship with Microsoft, which provides cloud compute credits for its
Azure cloud service. Not super technical, it's just where they
host people's software and files and such, and the compute
to run these models, and they also offer this at a steep,
steep discount to OpenAI. Or put another way, it's
like OpenAI got paid with air miles, but the
airline lowered the redemption cost of booking a flight with
(14:26):
those air miles, allowing it to take more flights than
any other person with the equivalent amount of points. Until recently,
OpenAI exclusively used Microsoft's Azure services to train,
host, and run its models, but recent changes to its
deal mean that OpenAI is now working with Oracle
to build out further data centers to train, host, and
run its models. It's unclear whether this partnership will work
in the same way as the Microsoft deal, with
(14:47):
OpenAI being provided credits and discounts like before. If not,
OpenAI's operating costs will only go up. Per previous reporting
from The Information, OpenAI pays just over twenty-five
percent of the cost of Azure's GPU compute as part
of their deal with Microsoft, and that's about a dollar
thirty per GPU per hour versus the regular Azure
cost of three dollars and forty cents to four dollars
(15:07):
an hour. I know that this sounds really technical, but
in very short they're getting a sweet deal from Microsoft,
and if anything happens to that deal, they're completely fucked. They're
fucked anyway. They don't have... they're burning billions of dollars. It's insane.
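Before moving on to user numbers, here's a small Python sketch of the Azure pricing gap described above; the per-GPU-hour rates are the ones just cited, while the number of GPU-hours is a hypothetical figure for illustration, not a reported one.

```python
# A small sketch of the reported Azure pricing gap. The hourly rates are the
# ones cited above; the GPU-hours figure is a hypothetical illustration only.

discounted_rate = 1.30        # dollars per GPU per hour, OpenAI's reported rate
list_rates = (3.40, 4.00)     # dollars per GPU per hour, regular Azure range cited

gpu_hours = 1_000_000         # hypothetical usage, purely for illustration

discounted_bill = discounted_rate * gpu_hours
print(f"At the discounted rate: ${discounted_bill:,.0f}")
for rate in list_rates:
    bill = rate * gpu_hours
    print(f"At ${rate:.2f} per hour:   ${bill:,.0f} "
          f"(${bill - discounted_bill:,.0f} more than the discounted bill)")
```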
But let's talk about user numbers, because
OpenAI has quite a few. They recently announced that they
have four hundred million weekly active users. Now, weekly active
(15:30):
users is a wanky number and a very strange one
for a company like this. OpenAI may pretend to
be a consumer company, but the majority of their revenue
comes from monthly subscriptions, making them kind of a cloud
software company. Classically cloud software companies report monthly active users.
That way, you can, I don't know, compare one number
which is the amount of active users you have with
(15:51):
the paid users you have, and then say, oh, that's
a good business. That's a good business, right there. Man,
guess what, OpenAI isn't giving out their monthly active users.
Don't worry, I estimated it. When I asked
OpenAI to define what a weekly active user was, it
responded by pointing me to a tweet by chief operating
officer Brad Lightcap that said ChatGPT recently crossed
four hundred million weekly active users. We feel very
(16:12):
fortunate to serve five percent of the world every week. What
a fucking liar. It's extremely questionable that OpenAI refuses
to define this core metric. By the way, and without
a definition, in my opinion, there is no way to
assume anything other than OpenAI is actively gaming its numbers. Now.
There's likely two reasons they focus on weekly active users.
One, as described, these numbers are easy to game, you
(16:33):
can choose any seven day period. And also the majority
of OpenAI's revenue comes from paid subscriptions to ChatGPT.
And that latter point is crucial because it suggests
OpenAI is not doing anywhere near as well as it
seems based on the very basic metrics used to measure
the success of a software product. The Information reported on
January thirty-first that OpenAI, like I mentioned, had fifteen
point five million monthly paying subscribers, and they added in
(16:56):
this piece that this was less than a five percent
conversion rate of OpenAI's weekly active users, a statement
that's kind of like dividing the number fifty-two by
the letter A. This is not an honest or reasonable
way to evaluate the success of ChatGPT's still unprofitable
software business, because the actual metric, like I mentioned, would
have been to divide paying subscribers by monthly active users
(17:19):
or the other way around. I guess a number that
would be considerably higher than four hundred million. And the
reason they don't need to do that, by the way,
is because you would divide them and see that they
have a piss-poor conversion rate. A good conversion rate is
way higher than five percent, by the way, and theirs
is definitely lower. But don't worry. I'm a sneaky little shit.
So I went and looked some stuff up, and I
talked to some people. Based on data from the market
(17:39):
intelligence firm Sensor Tower, OpenAI's ChatGPT app on
Android and iOS is estimated to have more than three
hundred and thirty-nine million monthly active users, and based
on traffic data from market intelligence company Similarweb, chatgpt
dot com had two hundred and forty six million unique
monthly visitors, and these were in January twenty twenty five.
There's likely some crossover with people using both the mobile
(18:00):
and web interfaces, though how big that group is is
kind of hard to tell and remains uncertain. Though every
single person that visits chatgpt dot com might not become
a user, it's safe to assume that ChatGPT's monthly
active users are somewhere in the region of five hundred
to six hundred million. That's good, right? Its actual users
are higher than officially claimed, right? That's good. No, it's bad.
(18:22):
First of all, each user that uses ChatGPT for
free is a drain on the company, whether they're free
or not, honestly, but either way, the free ones definitely are.
It would also suggest that the real conversion rate is
somewhere in the neighborhood of two point five eight three
percent from free to paid users on ChatGPT, which is
astonishingly bad, and it's a fact that's made worse by
the fact that every single user, regardless of whether they
(18:44):
pay or not, loses them money either way. And while
it's quite common for Silicon Valley companies to play fast
and loose with metrics, this particular one is, well,
deeply concerning, and I hypothesize that OpenAI is choosing
to go with the weekly versus monthly active users in
an intentional attempt to avoid people calculating the conversion rate
of its subscription products. As I will continue to repeat,
(19:05):
these subscription products lose the company money every single time.
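To make the conversion-rate arithmetic above concrete, here's a minimal Python sketch; the paying-subscriber and weekly-active-user figures are the reported ones, while the monthly active user range is my rough estimate from the Sensor Tower and Similarweb data, not an official OpenAI number.

```python
# The conversion-rate arithmetic from the figures above. The paying-subscriber
# and weekly-active-user numbers are the reported ones; the monthly-active-user
# range is an estimate inferred from app and web traffic data, not official.

paying_subscribers = 15.5e6           # reported monthly paying subscribers
weekly_active_users = 400e6           # OpenAI's claimed weekly active users
estimated_mau_range = (500e6, 600e6)  # rough monthly active user estimate

# The flattering framing: paying subscribers divided by weekly active users.
print(f"vs WAU: {paying_subscribers / weekly_active_users:.2%}")

# The more honest framing: paying subscribers divided by estimated monthly users.
for mau in estimated_mau_range:
    print(f"vs {mau / 1e6:.0f}M MAU: {paying_subscribers / mau:.2%}")
```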
Now let's talk product strategy, shall we, because I don't
think OpenAI really has one. OpenAI makes most
of its money from subscriptions approximately three billion dollars in
twenty twenty four, and the rest on API access to
its models, approximately a billion. As a result, OpenAI
has chosen to monetize ChatGPT and its associated products
(19:28):
in an all-you-can-eat software subscription model, or
otherwise make money by other people productizing it. And just
to be clear, in both of these scenarios, OpenAI
loses money on every transaction. OpenAI's products are not
fundamentally differentiated or interesting enough to be sold separately. It
has failed, as with the rest of the generative AI industry,
to meaningfully productize its models due to the massive training
(19:48):
and operational costs and a lack of any meaningful killer
app use cases for large language models. The only product
that OpenAI has succeeded in scaling to the mass market
is the free version of ChatGPT, which loses the
company money with every single prompt and output. This scale
isn't a result of any kind of product market fit,
by the way, It's entirely media driven, with reporters making
(20:09):
ChatGPT synonymous with artificial intelligence, a thing they regularly
write about without thinking. As a result, I do not
believe that the generative AI industry is real. It's not
a real industry, which I will define as one with
multiple competitive companies with sustainable or otherwise growing revenue streams
and meaningful products with actual market penetration. And I feel
(20:29):
this way because this market is entirely subsidized by a
combination of venture capital and hyperscaler cloud credits and, well,
real money, I guess. ChatGPT is popular because it's
the only well-known product, one that's mentioned in basically
every article on AI. If this were a real industry,
other competitors would also be mentioned all the time. They
would have similar scale, especially those run by hyperscalers. But
(20:52):
as I'll get to later, the numbers suggest that OpenAI
is the only company with any significant user base in
the entire generative AI industry, and it's still wildly unprofitable
and unsustainable. OpenAI's models have also been entirely commoditized.
Even its reasoning model o1 has been commoditized by
both DeepSeek's R1 model and Perplexity's agonizingly named
(21:13):
R1 1776 model, both of which have
similar outcomes at a much discounted price to OpenAI's o1.
Though it's unclear and unlikely in my opinion, that these
models are profitable anyway. OpenAI as a company, well,
they're just piss-poor at product. It's been two years
and ChatGPT mostly does the same thing, still costs
more to run than it makes, and ultimately does the
(21:35):
same thing as every other LLM chatbot from every other company.
The fact that nobody has managed to make a mass
market product by connecting OpenAI's models also suggests that
the use cases just aren't there. Furthermore, the fact that
API access is such a small part of its revenue
suggests that the market for actually implementing large language models
is relatively small. If the biggest player in the space
(21:56):
only made a billion dollars in selling access to its
models unprofitably, and that amount is the minority of its revenue,
there might not actually be a real industry here. And
I must be clear. If there was user demand, this
would be where it was, in the APIs. It would
be doing gangbusters because people wouldn't be able to help themselves.
They'd just be all over this generative AI shit. But
(22:19):
they're not. Now. I want to address one counterpoint. Some
might argue that open ai has a new series of
products that could open up new revenue streams, such as
(22:40):
Operator, its agent product, and Deep Research, their research product.
And I'm so fucking tired of hearing about agents. Whenever
you hear someone say agent, really look at what they're saying,
because they want you to think autonomous bit of software.
What they're actually talking about is either a chatbot or
well, the dogshit that OpenAI and Anthropic have warmed up.
We'll get to it. But first let's talk costs. Both
(23:02):
of these products are very compute-intensive. Operator uses
OpenAI's Computer-Using Agent, CUA, which combines
OpenAI's models with virtual machines that take distinct actions
on web pages in this extremely unreliable and costly way
where they take screenshots as they scroll down, and it
just doesn't fucking work. I had a whole thing about
Casey Newton writing about this. It's just it was just
(23:23):
so bad. Like, the Casey Newton, please go outside challenge.
Just, just go outside, Casey. Stop, stop with the computer.
You don't know what you're talking about. But failures with
these and remember these models, pretty much all of them
are inconsistent. And the more in depth the thing you
ask them to do, the more likely there's going to
be a problem with it. So think about it like this.
Failures from something you've asked them to do will either
(23:44):
increase the amount of attempts you make to get the
thing you want or make users not use it at all.
Not a really great idea. Now, let's talk Deep Research.
They use a version of OpenAI's o3 reasoning model,
which is a model so expensive because it spends more
time to generate a response, with the model reconsidering
and evaluating steps as it goes, that OpenAI will
no longer launch o3 as a standalone model, and
(24:06):
that's rarely a good thing, when you see a company
be like, yeah, you can't touch it. It's too expensive.
In short, these products are extremely expensive to run, and
this means that any time their outputs aren't perfect, which
is to say a lot of the time, there's a
high likelihood that they'll be triggered again, which will in
turn spend more compute. But let's talk about the product
market fit, because this is really important. To use Operator
(24:28):
or Deep Research currently requires you to pay two hundred
dollars a month for OpenAI's ChatGPT Pro, a
two hundred dollars a month subscription which Sam Altman recently
revealed still loses them money because people are using it
more than expected, and that is a quote. Furthermore, even
on ChatGPT Pro, Deep Research is currently limited to
one hundred queries per month, adding that it is very
(24:49):
compute-intensive and slow, though Altman has promised that
ChatGPT Plus and free users will eventually get access to
a few Deep Research queries a month. Well, that's not
good for their cash burn. That's actually bad for the
cash burn. I'm not sure it's going to make them money.
Not really sure how that turns into money anywhere. But
let's talk about Operator. Operator is this agent product where
(25:11):
you're meant to be able to be like, hey, look,
go and look something up for me, and it only
works like thirty percent of the time, and it takes.
It's just very bad. And as I covered in my
newsletter a few weeks ago, this product, it claims
to control your computer and does not appear to be
able to do so consistently. It's not even ready for
the prime time, and I don't think it has a market.
The way they're selling this is that you'll be able
(25:32):
to make it do distinct tasks on the computer. But
even Casey Newton in his article was like, yeah, it
only works sometimes, and the things it works on are
like searching Tripadvisor. Imagine this if you will: what
if for the cost of boiling a lake and throwing
an entire zoo into the lake and boiling the animals
inside it, you could sometimes be able to search Tripadvisor
in two minutes versus ten or, like, five seconds.
(25:54):
The future is so cool. I love living in it.
But let's talk about Deep Research for a sec. It's
already being commoditized. Perplexity AI and xAI have launched
their own versions immediately, and Deep Research itself is not
a good product. As I covered in my newsletter last week,
the quality of the writing that you receive from Deep
Research is really piss poor, and it's rivaled only by
(26:15):
the appalling quality of its citations, which include forum posts
and search engine optimized content instead of actual news sources.
These reports are neither deep nor well researched, and cost
open Ai a great deal of money to deliver, and
just to give you a primary. Deep Research is meant
to be you meant to be able to type something in,
and it does like a three thousand word report. It's gobbledygook,
it's nonsense, it's bullshit. I really if you should go
(26:35):
and look up go to my newsletter. Where's your head?
Not at the it's the piece before the ones that's
going to come out when these episodes come out. I
forget the name exactly. You need to go and look
at how shit Deep Research is. It's incredible that this
money losing juggernaut piece of shit thinks that this is
a real product. And it's insulting to the intelligence of
readers that people like Casey Newton claimed it was good.
(26:57):
But now that we've established that both of these products are expensive, commoditized,
and don't work very well, let's talk about how they
make money, or don't. Both Operator and Deep Research, like
I told you, currently require you to pay two hundred
dollars a month to a company that loses money all
the time, that also loses money on the two hundred
dollars a month. Neither product is sold on its own,
and while they may drive revenue to the ChatGPT
(27:19):
Pro product, as said before, said product loses OpenAI money.
These products are also compute intensive and have questionable outputs,
making each prompt very likely to create another follow up prompt.
And the problem is you're asking something that doesn't know
anything that probabilistically generates answers to research something. So as
a result, the research isn't going to be any good.
(27:39):
It's not like it's going to research it and go, hey,
what would be a good source. It's going to say
what matches the patterns? What matches all the patterns that
it's been trained on? Eh, that's fine, who gives a shit?
It's like having the world's worst intern, except the intern
gets a concussion every ten minutes. But in summary, both
Operator and Deep Research are expensive products to maintain, are
sold through an expensive two hundred dollars a month subscription that,
(28:02):
like every other service provided by OpenAI, loses the
company money, and due to the low quality of their
outputs and actions, are likely to increase user engagement to
try and get the desired output, incurring further costs for
OpenAI. Well, you know, like, Ed, Ed, you say, Ed,
you're just being, you're just being a hater, right, just
being a hater. Things don't look great today, but it's
(28:24):
early days. It isn't early days, but still, Ed, it's early days.
Things don't look great today. What about the future prospects
for OpenAI? Things can't be that bad, can they? Yeah,
they can. A week or two ago, Sam Altman announced
the updated roadmap for GPT four point five and GPT five. Now.
These are their next generation models that they've been hyping
up for the best part of a year, except GPT
(28:47):
four point five didn't exist before. It was always GPT five.
Now GPT four point five will be OpenAI's last
non-chain-of-thought model, with chain of thought referring to the core functionality of
its reasoning models, where it checks the work as it goes,
and it really it uses a model to ask another
model whether the model's doing the right thing? Can they
both hallucinate? Yes? GPT five will be and I quote
Sam Altman, a system that integrates a lot of
(29:08):
OpenAI's technology, including o3. What the fuck are you talking about?
Altman also vaguely suggests that paid subscribers will be able
to run GPT five at a higher level of intelligence,
which likely refers to being able to ask the models
to spend more time computing an answer. He also suggests
that GPT five, and I quote, will incorporate voice,
canvas, search, deep research, and more. Fucking Bed Bath and Beyond, motherfucker.
(29:32):
Come on, my man, your company spent nine billion dollars
to lose five billion dollars. Why is anyone taking this seriously?
This is ridiculous. But both of these statements, all of
these statements honestly vary from vague to meaningless. But I
hypothesize the following: GPT four point five will be an
upgraded version of GPT-4o, OpenAI's foundation model
(29:53):
you're probably using right now, and it's code named Orion.
GPT five, which used to be code named Orion, could
literally be anything. But one thing that Altman mentioned in
the tweet is that OpenAI's model offerings have got
too complicated. They'd be doing away with the ability to
pick what model you used, gussying this up by claiming
it's unified intelligence. This fucking guy. If I said this
(30:14):
shit to a doctor, they'd institutionalize me. They'd say, you
sound like a lunatic. But anyway, as a result of
doing away with the model picker, which is literally the
thing you click and you choose GPT four O or
GPT four O Mini or, like, the o1 reasoning things,
I think they're going to attempt to moderate costs by
picking what model will work best for a prompt, a
process it will automate. And if there's one thing I've
(30:37):
noticed with OpenAI, they're not very good at automating anything.
So I expect this to be bad, and I believe
that Altman announcing these things is a very bad omen
for OpenAI, because Orion has been in the works
for more than twenty months and was meant to be
released at the end of twenty twenty four, but it
was delayed due to multiple training runs that resulted in,
to quote the Wall Street Journal, software that fell short
(30:58):
of the results researchers were hoping for. As an
aside, the Wall Street Journal refers to Orion as
GPT five. This was from several months back, but based
on the copy in Altman's comments, I believe Orion refers
to a foundation model for OpenAI, one which is meant to
replace the core GPT model that powers ChatGPT.
OpenAI now appears to be calling a hodgepodge of different
(31:18):
mediocre models something called GPT five. It's almost as if
Altman's making this up as he goes along. Now. The
Journal further adds that as of December, Orion performed better
than OpenAI's current offerings, but hadn't advanced enough to
justify the enormous costs of keeping the new model running
with each six month long training run no matter how
well it works, costing over five hundred million dollars.
(31:40):
OpenAI also, like every generative AI company, is running out
of high quality training data, the data necessary to make
its models smarter, based on the benchmarks specifically made up
to make LLMs seem smart. And I should note
that being smarter means completing tests, not new functionality or
new things that it can do. Sam Altman demoting Orion
from GPT five to GPT four point five suggests
(32:02):
that OpenAI has hit a wall with making its
new model, requiring him to lower expectations for a model
OpenAI Japan president Tadao Nagasaki had suggested would, and
I quote, aim for one hundred times more computational volume
than GPT four, which some took to mean one
hundred times more powerful when it actually means it will
take way more computation to train or run inference on it.
I guess he was right. Now, if Sam Altman, who
(32:25):
is a man who loves to lie, is trying to
reduce expectations for a product, I think we should all
be really, really worried. Now. Large language models, which are
trained by feeding them massive amounts of training data and
then reinforcing their understanding through further training runs, are hitting
the point of diminishing returns in simple terms. To quote
friend of the show Maxwell Zeff of TechCrunch, everyone
now seems to be admitting you can't just use more
(32:47):
compute and more training data with pre training large language
models and expect them to turn into some all knowing
digital god. Max is a fucking legend. OpenAI's real advantage,
other than the fact it's captured the entire tech media,
has been its relationship with Microsoft, because access to large
amounts of compute and capital allowed it to corner the
market for making the biggest, most hugest large language model.
(33:08):
Now that it's pretty obvious this isn't going to keep working,
OpenAI is scrambling, especially now DeepSeek has commoditized reasoning
models and proven that you can build LLMs without the
latest GPUs. It's unclear what the functionality of GPT
four point five or GPT five will be. Does the
market care about an even more powerful large language model
if said power doesn't do anything new or lead to
(33:30):
a new product? Does the market care if unified intelligence
just means stapling together various models to produce more outputs
that kind of look and sound the same. As it stands,
OpenAI has effectively no moat beyond its industrial capacity
to train large language models and its presence in the media.
Open ai can have as many users as it wants,
(33:50):
but it doesn't matter because it loses billions of dollars
and appears to be continuing to follow the money-losing
large language model paradigm, guaranteeing it will lose billions of
dollars more if they're allowed to. This is the biggest
player in the generative AI industry, both the market leader
and the recipient of almost every single dollar of revenue
that this industry generates. They have received more funding and
(34:11):
more attention than any startup in the last few years,
and as a result, their abject failure to become a
sustainable company with products that truly matter is a terrible
sign for Silicon Valley and an embarrassment to the tech media.
In the next episode, I'm gonna be honest, I have
far darker news. Based on my reporting, I believe that
the generative AI industry outside of OpenAI is incredibly small,
(34:32):
with little to no consumer adoption and pathetic amounts of
revenue compared to the hundreds of billions of dollars sunk
into supporting it. This is an entire hype cycle fueled
by venture capital and big tech hubris, with little real
adoption and little hope for a turnaround. Enjoy tomorrow's monologue
and then the final part on Friday. Thank you for
(34:57):
listening to Better Offline. The editor and composer of the
Better Offline theme song is Matt Osowski. You can check out
more of his music and audio projects at mattosowski dot com,
M A T T O S O W S K I
dot com. You can email me at ez at better
offline dot com or visit Better Offline dot com to
find more podcast links and of course, my newsletter. I
(35:19):
also really recommend you go to chat dot wheresyoured dot
at to visit the Discord, and go to r
slash betteroffline to check out our Reddit.
so much for listening. Better Offline is a production of
Cool Zone Media. For more from Cool Zone Media, visit
our website coolzonemedia dot com, or check us out
on the iHeartRadio app, Apple Podcasts, or wherever you get
(35:40):
your podcasts.