
April 12, 2024 47 mins

Every major tech firm is betting billions of dollars that the generative AI revolution will change society - yet when you look under the hood, the reality of generative AI might be far grimmer. Ed Zitron walks you through the many signs that we're on the verge of the AI bubble popping - and what the consequences might be if it does.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Hi, I'm Ed Zitron and welcome back to
Better Offline. As I discussed in my last episode, there
are four intractable problems that are going to stop generative

(00:22):
AI from going much further: its massive energy demands, its
massive computation demands, its hallucinations, when it authoritatively tells you
something that isn't true or makes horrible mistakes in images,
and the fact that these large language models have this
insatiable need for more training data. Yet I think what
might pop this bubble is a far simpler problem. Generative

(00:44):
AI simply does not deliver the magical automation that everybody
has been fantasizing about, and I don't think consumers or
enterprises are actually impressed. A year and a half after launch,
it seems like the kind of immediate and unquestioning
infatuation with ChatGPT, and for that matter other
generative AIs, has softened. Instead, there's this rising undercurrent

(01:08):
of apathy and mistrust, and of course failure, that's kind
of hard to ignore. In June twenty twenty three, traffic
to ChatGPT's website, where people access the ChatGPT
bot in a web browser, fell for the first time
since launch, starting a trend that's continued for five of
the following eight months, according to data from Similarweb.
People are becoming more aware of the technology's limitations, like

(01:31):
as I mentioned, hallucinations, which, as I note, is when
ChatGPT confidently asserts things that aren't true, which can
be in writing, when it gives you an incorrect fact,
or in an image, when it gives a dog eighteen
legs. To make matters worse, according to data from Data.ai,
which used to be known as App Annie, ChatGPT's downloads

(01:52):
on iOS have begun to drop from a high of
just over seven hundred thousand a week to a plateau
of around four hundred and fifty thousand to five hundred
thousand a week since early twenty twenty three, which sounds
impressive until you hear that only seven point three five
percent of people who downloaded ChatGPT in January twenty twenty
four actually used the app again thirty days after they

(02:14):
downloaded it, cratering from a high of twenty eight percent
a month after the app launched in June twenty twenty three.
In fact, things immediately appear to have fallen apart. In
July twenty twenty three, only two months after launch, just
four point five nine percent of users opened the app
for a second time. Numbers like these tell the story

(02:34):
of a buzzy new application that isn't actually providing users
with much utility. I think the generative AI engine has
started to sputter for customers, for businesses, and indeed for
the startups that create them. That's bad news for any
industry that's yet to reach profitability, or indeed sustainability, and
especially for generative AI, which remains reliant on a kind

(02:57):
of indefinite supply of cash to operate. Back in
April twenty twenty three, Dylan Patel, chief analyst at SemiAnalysis,
calculated that GPT three, the previous generation behind ChatGPT
(the current one is known as GPT four), cost around seven hundred
thousand dollars a day to run. That's about twenty one
million dollars a month, or two hundred and fifty million

(03:17):
dollars a year. In October twenty twenty three, Richard Windsor,
the research director at large of Counterpoint Research, which is
one of the more reliable analyst houses, hypothesized that
OpenAI's cash burn was in the region of one
point one billion dollars a month, based on them having
to raise thirteen billion dollars from Microsoft, most of it,
as I noted, in credits for its Azure cloud computing

(03:40):
service to run their models. It could be more, it
could be less. As a private company, only investors and
other insiders can possibly know what's going on at OpenAI. However,
four months later, Reuters would report that OpenAI made
about two billion dollars in revenue in twenty twenty three,
a remarkable sum that, much like every other story about
OpenAI, never mentions profit. In fact, I can't find

(04:05):
a single reporter that appears to have asked Sam Altman
about how much profit OpenAI makes, only breathless hype
with no consideration of its sustainability. Even if OpenAI
burns a tenth of Windsor's estimate, about one hundred million
dollars a month, that's still far more money than they're making.
There's not a single story out there talking about them
making a profit, and I don't think they make one.
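As a quick sanity check on the run-rate arithmetic above, here's a back-of-the-envelope sketch. These are the third-party estimates quoted above, not audited figures, and the day counts are approximate:

```python
# Back-of-the-envelope check of the cost estimates quoted above.
# These are third-party estimates, not audited figures.

daily_cost = 700_000                      # Patel's ~$700k a day to run the model

monthly_cost = daily_cost * 30            # assuming a ~30-day month
yearly_cost = daily_cost * 365

windsor_monthly_burn = 1_100_000_000      # Windsor's hypothesized $1.1B-a-month burn
one_tenth_of_burn = windsor_monthly_burn // 10   # the "even a tenth" scenario

print(f"Compute per month: ${monthly_cost:,}")   # $21,000,000
print(f"Compute per year:  ${yearly_cost:,}")    # $255,500,000, roughly the $250M quoted
print(f"A tenth of the burn estimate: ${one_tenth_of_burn:,} a month")  # $110,000,000
```

Even in that most charitable "tenth of the estimate" case, the burn still dwarfs any profit anyone has reported.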

(04:28):
Here's one thing we can be certain of, though: things
are getting more expensive. Progress in generative AI means increasingly
complex models, and as I previously mentioned, OpenAI's
attempted, God damn it, Arrakis model, one built specifically to
wow Microsoft by making ChatGPT more efficient, failed to actually
make it more efficient. Their attempts to make this a

(04:51):
better company are not working. Windsor, the aforementioned analyst, in
a separate blog also pointed out that there's nothing really
sticky about these companies. There's nothing stopping someone from switching from, say,
ChatGPT to Anthropic's Claude two model. They're all trained
on similar data sets, and they all produce very similar answers.

(05:13):
And while one model might be better at one thing than another,
they're fundamentally very, very similar. There's also nothing stopping someone
from simply giving up on generative AI altogether. It doesn't
seem to be the plug-and-play automation god that
everybody's made it out to be, and judging by the plateauing

(05:34):
ChatGPT user numbers, I think that might already be happening.
It's also important to remember that while generative AI is
shiny and new, artificial intelligence is absolutely not, and over
the past decade it's found a number of homes, from
expensive security apps that detect when a hacker is trying
to break into a corporate network, to spam filters, to

(05:54):
proofreading tools like Grammarly, plenty of things, even Siri
on your iPhone. In these contexts, AI is either a
small component of a larger product or something that directly
builds on human efforts. This stuff is actually valuable. AI-based
spam filters are typically better than those reliant on
hand-coded rules, for example. But it's also, from a
marketing perspective, kind of boring. Generative AI's allure is

(06:19):
that it can supplant humans, either partially or entirely, producing
entire creative works that would otherwise have taken hours and
carried a real financial cost. But behind the glitzy technology
and media hype, the unspoken truth is that generative AI
holds sway over the financial markets because it's regarded as
a tool to eliminate an entire swath of jobs in

(06:41):
the creative and knowledge economies. It's a ghastly promise, and
it underpins the vast market value of otherwise commercially unviable
generative AI companies like OpenAI and Anthropic, and it's
what is driving, I believe, the multi-billion dollar investments
we've seen from Microsoft, Amazon, and Google. Yet I see
no evidence of mass adoption of generative AI, and my

(07:03):
research suggests that enterprise adoption, which is the meat of
what would actually make these companies money, just isn't there.
Deep within the earnings reports and the quotes of every
major cloud provider claiming the AI revolution is here
is a deeply worrying trend: the AI revenue really isn't
contributing much to the bottom line outside of vacuous media coverage,

(07:26):
and I think the internal story is going to be
much bleaker. In early March, The Information published a story
about Amazon and Google tamping down generative AI expectations, with these
companies dousing their salespeople's excitement about the capabilities of the

(07:47):
tech they're selling. A tech executive is quoted in the
article saying that customers are beginning to struggle with
simple questions, like: is AI actually providing value? And how
do I evaluate how AI is doing? And a Gartner
analyst told Amazon Web Services sales staff that the AI
industry was at the peak of the hype cycle around

(08:08):
large language models and other generative AI, which is
somewhat specific code for: it's not going to get much
better anytime soon. This article confirms many of my suspicions
that, and I quote The Information here, other software companies
that have touted generative AI as a boon to enterprises

(08:29):
are still waiting for the revenue to emerge, citing the example
of professional services firm KPMG buying forty seven thousand subscriptions
to Microsoft's Copilot AI at a significant discount on the
thirty-dollars-a-seat sticker price. Except KPMG bought the
subscriptions without really having gauged whether their employees actually got

(08:50):
anything out of it. They bought it, and I'm not
kidding you, entirely so that if any KPMG customers ask
questions about AI, they'll be able to answer them. It
was so clearly for show. Oh my god. Anyway, as
I've hinted, it's also not obvious how much AI actually
contributes to the bottom line. In Microsoft's Q four twenty

(09:13):
twenty three earnings report, chief financial officer Amy Hood reported
that six points of revenue growth in its Azure and cloud
services division was attributed to AI services. I went around
the web and I read every bloody article about their earnings.
I looked, and I looked, and everyone was saying, oh,
this is really good. I found someone who said it
was six percent of their revenue, and I went, that

(09:33):
sounds like complete bollocks to me. So I went and
spoke with Jordan Novet, who's covered Microsoft for many years.
He's a great cloud reporter over at CNBC, and
he actually covered Microsoft's earnings for CNBC itself, and
he confirmed that what this means is that AI contributed
six percent of the thirty percent year-over-year
growth in Microsoft's Azure cloud services. That is a

(09:56):
percentage of a percentage. So, by the way, that means
thirty percent growth year over year, and six percentage
points of that year-over-year growth came from AI. Could be good,
but also all of the rest of it came not
from new products, just from the natural growth of the company.
It's unclear how much money that really is, but six
percent of the year-over-year growth isn't really that

(10:18):
exciting anyway. Elsewhere, Amazon CEO Andy Jassy, who took over
from Bezos a few years ago and was the chief
of Amazon Web Services, said that generative AI revenue was
still relatively small, but don't worry, he said it would
drive tens of billions of dollars of revenue over the
next several years, adding that virtually every consumer business Amazon
operated in already had or would have generative AI offerings.
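To make that percentage-of-a-percentage arithmetic concrete, here's a quick sketch. The baseline revenue figure is purely illustrative, not Microsoft's actual Azure number:

```python
# Back-of-the-envelope sketch of the "percentage of a percentage" point above.
# The baseline figure is illustrative, not Microsoft's actual Azure revenue.

last_year_revenue = 100.0        # illustrative baseline, arbitrary units
total_growth_rate = 0.30         # Azure grew roughly 30% year over year
ai_growth_points = 0.06          # 6 percentage points of that growth from AI

total_growth = last_year_revenue * total_growth_rate   # 30.0 units of new revenue
ai_growth = last_year_revenue * ai_growth_points       # 6.0 units from AI services
ai_share_of_growth = ai_growth / total_growth          # AI's slice of the new revenue

print(f"New revenue overall: {total_growth:.1f}")
print(f"New revenue from AI: {ai_growth:.1f}")
print(f"AI share of the growth: {ai_share_of_growth:.0%}")  # 20%
```

In other words, even on Microsoft's own framing, AI accounts for about a fifth of Azure's new revenue, and the rest came from the company's ordinary momentum.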

(10:42):
Now, they can just say that stuff. I really want
you to know: you can say what you want on
earnings calls, as long as you're not just outright lying.
Like, saying we have one hundred billion dollars in cash
when you have fifty dollars, that is a lie. You
can't do that. But you can be like, ah,
we've got all sorts of AI in everything now, it's
bloody magical, mate. You can't take a look, but it's there,
I promise you. They can just say what they want.

(11:06):
But don't worry, they're not the only ones. Salesforce chief
financial officer Amy Weaver said on their most recent earnings
call that Salesforce was not factoring in material contribution from
its numerous AI products in its financial year twenty twenty five
guidance. Software company Adobe's shares slid after its last earnings,
as the company failed to generate meaningful revenue from its

(11:28):
masses of AI products, with analysts now worried about its
ability to actually monetize any of these generative products. ServiceNow
claimed in its earnings that generative AI was meaningfully
contributing to its bottom line, yet a story from The
Information quotes their chief financial officer as saying that, from
a revenue contribution perspective, it's not going to be huge.

(11:49):
I'm going to be a bit honest: I'm feeling a little
insane with this stuff. I feel crazy every time I
think about these stories, because elsewhere in the media, so
many people are saying how big and successful the generative
AI revolution is and is going to be. Yet every
time I look at the actual places where they write

(12:09):
down how much money it makes, the actual
signs of growth and significance and utility and adoption, it's
just not there. It's just breathless hype, with this kind
of whisper of stagnation and non-existent adoption. And while
there are startups beginning to mine usefulness out of generative AI,

(12:31):
and they do, by automating internal queries and customer
support questions, these are integrations rather than revolutions, and they're
far from the substance of a true movement. Maybe the
darker truth of the generative AI boom is that it's
a feature, not a product, and that these features might
be built entirely off the back of large language models

(12:54):
which are unsustainable to run, grow, or even make better.
What if AI only drives a couple of percentage points of
real revenue growth at these companies? What if what we're
seeing today is the upper limit, not the beginning? Honestly,
I'm beginning to believe that a large part of the
AI boom is just hot air, being pumped

(13:15):
up through a combination of executive bullshittery and a very compliant
media that's so happy to write stories imagining what AI
might do, yet seems unable to check what it can
do, or what it's doing. It's so weird. Now, there's
a bloke over at the Wall Street Journal called Chip Cutter
who you should really look into if you want to know

(13:37):
why your boss keeps asking you to go back to
the office. The Wall Street Journal's Chip Cutter, he loves to
write things about how bosses are good and how returning
to the office is good. He wrote a piece in
March about how AI is being integrated into the office,
and most of it was just hundreds of words of
him guessing about what people might do. But when he
gets to the bottom and he starts talking about companies

(13:57):
using it, it's almost entirely examples of people saying, yeah,
it makes too many mistakes for us to rely on it,
and we're just experimenting with it. Elsewhere in the media, The
New York Times talked with Salesforce's head of AI,
Clara Shih, and in this, I think, six or
seven hundred word article, she didn't really get to say much

(14:18):
of anything about AI or what their products do. All
she said was that the Einstein Trust Layer handles data.
And you may think I'm being facetious here: that's all
she said about that. And then she added that it
would be transformational for jobs the way the internet was.
What? What does that mean? Why am I reading this
in the newspaper? Why is this what I read in

(14:40):
the newspaper? How is this helping? I know I rant
a lot on this podcast, and I'm going to keep
doing it. You're stuck with me. All right, it's free, okay?
You don't pay for this, unless you do Cooler Zone Media,
which you should pay for anyway. I know I'm ranting,
but the reason this stuff really infuriates me is
that it's misinformation on some level. I know it's kind of

(15:01):
dramatic to say, oh, they're misinforming people by suggesting that
AI can do stuff, but it is. It is misinformation.
When you're letting corporate executives go in the newspaper and
talk about how amazing their products will be without asking
them what they can do today, you're just giving them
free press. You're not giving them credit for stuff they've done.
You're giving them credit for things they're making up on

(15:23):
the spot. And when you do that, you make the
rich richer and the poor poorer. You centralize power in
the hands of assholes, people who are excited, people who
are borderline masturbatory, jumping around saying, oh God, I can't
wait to replace humans with fucking computers. The good news

(15:43):
is they're not going to be able to, but that's
what they're excited about, and that's what they're getting media
coverage around. The media has been fooled, just like they
were with the metaverse, by the specious promises of
the generative AI generation, with these worthless executives championing these
half-truths, and this magical thinking has spread far faster

(16:05):
due to the fact that AI actually exists and is
doing something, and it's actually much easier to imagine how
it might change our lives, unlike the metaverse, even if
the way it might do so is somewhere between improbable
and impossible. It's easy to think about how it might change your work.
You know, you could use an AI for
data entry or boring busy work. Surely all of this

(16:29):
you can automate, right? And when you use ChatGPT,
you can almost, kind of, sort of, somewhat see how
it might happen, even if, when you open up ChatGPT
and try and make it do something, it's always
a bit off, never seems to quite do it. In
my day job, at my PR firm, I'm in a very
spreadsheet- and document-heavy business. Of all
the people this could help, you'd think it would

(16:50):
be me. A lot of my work is, hey, all
of these things, I need them in a spreadsheet. It
can't bloody do it. And I'm sure you listeners will
probably email me and say, oh, I've used ChatGPT for this.
Don't care. I really mean that: this thing is not
changing the world. And actually, I think far more of
you have already shared that. Thank you, by the way.
EZ at betteroffline dot com, you can email me

(17:12):
your ideas and your angry comments, or you can go
on the Reddit to complain. But the thing I'm hearing
from most people is, yeah, I've tried it and it
didn't do enough. I tried it and there were too
many mistakes. There was a Wall Street Journal article back
in February about how Amazon and Google were having trouble
selling AI services, because, well, when they went to sell

(17:36):
them, these companies, these financial services companies in particular,
were saying, yeah, but these hallucinations could actually get the
SEC mad at us. And the answer that they had was, yeah,
what if we just made it so that the models
would sometimes say they don't know stuff? Every time you
get to a reckoning with AI, where you want it

(17:56):
to be better, where you're like, hey, AI executive, how
will you actually fix these hallucination problems, for example,
they come up with the most mealy-mouthed shit. And I
truly believe it's because there is no answer to these problems,
as I said in the previous episode, and I think
that's why I can't find any companies that have integrated
generative AI in a way that's truly improved their bottom

(18:18):
line, other than Klarna, which lets you take out zero
percent, interest-free loans on almost anything. It's a very
worrying company. Anyway, they claimed that their AI-powered support
bot was estimated to drive a forty million dollar
profit improvement in twenty twenty four, which does not,
by the way, despite it being trumpeted by members of

(18:40):
the media otherwise, mean that they made forty million dollars
in profit. I actually can't find what profit improvement refers to.
And this is the classic AI boom story, by the way.
There's always this weird verbal judo going on, where they're like, yes sir,
forty million dollars in profit improvement, upwards, downwards, and side
to side, mate, it's really good. And I think it's just

(19:03):
headline grabbing. I think it's just buzz. And despite fears
to the contrary, AI doesn't appear to be replacing a
large number of workers, and when it has, the results
have been pretty terrible, like when Microsoft replaced MSN dot
com's editorial team with a series of AI bots that
have spread misinformation and conspiracy theories, things like Joe Biden

(19:25):
falling asleep. It's so weird. Interestingly, there was also a
study involving Boston Consulting Group, and just as a note,
if anyone would love the opportunity to just replace workers
with robots, it's BCG, McKinsey, Accenture. All these companies would
absolutely be giving OpenAI however much it wanted
to do that, and then they would charge fifty million

(19:46):
dollars for an integration that didn't work, which I guess
makes AI perfect for them. Putting that aside, in a study
from BCG, they found that consultants who solved business problems
with OpenAI's GPT four model performed twenty three percent
worse than those who didn't use it, even when the
consultant was warned about the limitations of generative AI and
the risk of hallucinations. Yeah, really great stuff. To be clear,

(20:11):
I am not advocating for the replacement of workers with AI. However,
I'm saying that if it was actually capable of replacing
human outputs, if it was even anywhere near doing so,
any number of these massive, horrifying firms would be doing
so at scale, and planning to do so more as
the models improve. They'd be funneling cash right up

(20:31):
OpenAI's ass. It would be incredible. But the reality of
the AI boom is a little more boring.
It recently came out that with Amazon's cashier-less Just Walk Out
technology in some of their stores, you could walk in,
scan a QR code, and then just grab
your Rao's tomato sauce and your condoms or whatever, your

(20:51):
weird magazines. I don't know what they sell in there,
I'm not giving any more money to Amazon than I
need to. Anyway, everyone thought, oh, it's just AI: you
could just walk in, and the cameras would tell you
through computer vision what you had bought. It would be great.
Now it turns out that there were one thousand workers in
India monitoring these cameras and approving transactions. Worse still,

(21:12):
OpenAI used Kenyan workers who were paid less than
two dollars an hour to train ChatGPT's outputs, and
they currently pay fifteen dollars an hour, I think, for
American contractors, no benefits of course. You know, fuck workers, right?
That's the thing underneath all this. It's just this
undercurrent of disrespect for human beings, and it pisses me off.

(21:34):
And I realize I'm pissed off about a lot of things.
You've been listening for, like, half an hour now
in this episode. Anyway, I'll keep going. But yeah,
like I said, if AI was changing things, if AI
was actually capable of replacing a person, it would have happened.
It would be happening right now. It'd be happening at scale.
It would be so much worse than things feel now. Unless,

(21:57):
of course, it just wasn't possible. What if what we're
seeing today is not a glimpse of the future, but
actually the new terms of the present? What if generative
AI isn't actually capable of doing much more than
what we're seeing today? What if there's not really a
clear timeline for when it will actually be able to do more?
What if this entire hype cycle has been built, goosed,

(22:21):
and propped up by this compliant media, ready and willing
to take whatever these career-embellishing bullshitters have to say?
What if this is just another metaverse, but with a
little bit more product? Every single time I read about
the amazing things that artificial intelligence can do, I just
see somebody attempting to add fuel to a fire that's

(22:42):
close to going out. When the Wall Street Journal's Joanna
Stern wrote about Sora, OpenAI's yet-to-be-released
video-generation model, she talked about how its photorealistic clips
were good enough to freak her out. And I get
it: at first glance, these do look like people, these
images do. They look like something approaching a video. They

(23:04):
look almost real, kind of like how text from ChatGPT
is almost right, or is right but doesn't
feel right. But much like the rest of these outputs,
you look a little closer and they have these weird
errors, like cars disappearing in and out of the shot,
or a different car coming out from behind something, or

(23:27):
completely different images between frames, or these strange, unrealistic moments
of lighting, and they're never much longer than thirty seconds. Stern,
who, by the way, I deeply respect, isn't really afraid
of what Sora can do, but of what would happen if
OpenAI was able to fix the hallucination problems that
make these videos kind of unwatchable. Well, it's easy to

(23:51):
imagine tools like Sora eventually playing a role in
online disinformation campaigns, churning out lifelike videos of politicians
saying or doing appalling things. We can all breathe a
sigh of relief knowing that the videos themselves are
often so flawed you can pretty much instantly see they're
AI generated. Also, Sora is not available to the public yet,
and I don't even know if it ever will be.

(24:14):
You just need to look at the hands or the backgrounds.
Look at the people in the background of any AI-generated
photo or video. They often have too many fingers,
or you can't see their faces, or, in Sora's videos,
their legs don't look right. It's so weird, and I
don't know how to put it perfectly, but

(24:38):
they don't feel human. Just to be clear, though, Sora
is dead on arrival. No one actually has access to it,
and it's unclear when it will come out. Every journalist that
has quote unquote used Sora has just given a prompt
to OpenAI to run. But there's also a very
obvious problem that kind of relates to something I mentioned

(24:59):
in the previous episode. OpenAI and every generative AI company
are dependent on high-quality data to train their models,
and video data is so much larger, more complex, and
harder to find. There's less of it because it's visual media,
and it's just a much bigger, more complex model and

(25:20):
a much harder computational task to create video. Moving image
is... putting aside my anger about generative AI, it's actually
kind of amazing they've done even this. But to
be clear, as amazing as it might look, it isn't
enough to do anything. It's just kind of a
doohickey. And this data is so much more complex
than the text-based data that OpenAI is running

(25:43):
out of to make ChatGPT spit out words. And even
if there were enough data, there's a pretty good reason why
OpenAI is coy about when they'll release the model. Like
I said, it's expensive and complex to run, and at
no point has anyone explained how the fuck this actually
makes them any money, or how they'd sell it. It's so weird.

(26:06):
To be clear, when you use Sora, it turns text
prompts into a video. You can't edit the video, you
can't change the video; the video is what the video
looks like. There's no way to make Sora make the
same thing multiple times, which makes the very basics of
making film, which is multiple angles of the same thing,
completely goddamn impossible. In fact, consistency between the same two

(26:30):
prompts is impossible with these models, because they're all probabilistic.
We've recently seen some of the first quote unquote movies
made with Sora, and the first one was called Air Head,
which is about a minute long. It's this man with
a balloon head walking around, and it's got this... it's
very twee. It sucks. And putting aside the AI part,

(26:52):
it's just crappy, and it's got a guy being like, yeah,
having a balloon head is difficult. Yeah, it's weird. I
hate having a balloon head. I hope I don't get popped.
It sucks. It's really bad filmmaking. But also, each shot,
and there's multiple shots of this guy with a
yellow balloon head, looks completely different. It's a different balloon
every goddamn time. And it's so funny, because you have

(27:13):
these guys on Twitter going, oh my god, oh my god,
I am crying and pissing myself, this is the best
thing I've ever seen. But it isn't. It's so close
yet so far away, and the only reason it's impressive
is that people are willing to sit there and say, but
what if it wasn't shit? But it is. It really

(27:36):
is. Like every other generative output, it's superficially impressive, kind
of, sort of lifelike, but once you look at it
for more than a moment, it's just flawed. Terribly, irrevocably flawed.
It's time to wake up. We are not in the
early days of AI. We're decades in, and we're approaching

(27:58):
the top of the S-curve of innovation. There are
products being built, don't worry, but it's all things like
Claude Author, which creator Matt Shumer calls a chain of
AI systems that will write an entire book for you
in minutes, and I call a new kind of asshole
that can shit more than you'd ever believe. Generative AI

(28:19):
is the ugliest creation of the rot economy, and its
main selling point is that it can generate a great deal
of passable material. Images generated by models like
OpenAI's DALL-E all have the same kind of eerie
feel to them, as they're mostly trained on the same data,
some of it licensed from Shutterstock, some of it outright
plagiarized from hundreds of artists. Without sounding too wanky and philosophical,

(28:44):
everything created by generative AI feels soulless, and that's because
it is. No matter how detailed the prompt, no matter
how well trained the model, no matter how well intentioned
the person writing that prompt, these are still mathematical solutions
to the emotional problem of creation. One cannot recreate the
subtle fuck-ups and delightful little neurological errors that make

(29:05):
writing a book, or a newsletter, or a podcast special.
While this podcast is admittedly me trying to guess at
what I believe AI might do in the future, it's not generative,
and it's not generated as a result of me mathematically
considering how likely an outcome is. My fury is not
generated by an algorithm telling me that this is the
right thing to be angry at. I'm pissed off because

(29:27):
I feel like we're all being lied to and treated
like idiots. What makes things created by humans special isn't
doing the right thing or the best thing, but the
outputs that result from us fighting past our own imperfections
and maladies, like the strep infection I've been fighting for
the last few days. And look, to my knowledge,

(29:47):
you can't give a generative AI strep throat. But if
I ever find out it's possible, I will make it
my damn mission to give it to ChatGPT. All

(30:08):
of this hype is predicated on solving problems with artificial
intelligence models that are only getting worse, and OpenAI's
only answers to these problems are a combination of we'll
work it out eventually, trust me, and we need a
technological breakthrough in both chips and energy. That's why Sam
Altman has been trying to raise seven trillion dollars, and

(30:29):
that's not a mistake, by the way, to make a
new kind of AI chip, because there's no sign that
this or even future generations of chips will actually fix anything.
Generative AI's core problems, its hallucinations, its massive energy demands, and
its massive, unprofitable compute demands, are not close to being solved.

(30:50):
I've now watched a frankly alarming number of interviews with
both OpenAI CEO Sam Altman and their CTO Mira Murati,
and every time they're saying the same specious, empty talking points,
promising that in the future ChatGPT will do this
and that, as all evidence points to their models getting
worse. And for the last year, by the way, they've
just said the same thing in every interview. They always

(31:12):
talk about chat GPT being like something or help creatives,
they never really say how, which just kind of weird.
But yeah, generative AI models they're expensive, they're compute intensive,
and they don't seem to provide obvious, tangible, mass market
use cases. Murati and Altman's futures depend heavily on keeping
the world believing that development and improvement of these models

(31:34):
capabilities will continue at this rapacious pace of progress, even
though it's unquestionably slowed, with open AI even admitting themselves
that their latest model, GPT four, may actually be worse
at some tasks. A study from UC Berkeley last year
found that GPT four was actually worse at coding than
before and that ChatGPT was at times refusing to

(31:57):
do certain tasks. Nobody wants to work anymore. Well, I
feel like I'm walking down the street telling people their
houses are on fire, only to be told to stop
insulting their new heating system. These models aren't intelligent. They're
mathematical behemoths generating the best guess on training, data and labeling,

(32:19):
and thus they don't really know what they're being asked
to do. You can't fix that. You can't fix hallucinations.
You can't just make these problems go away with more compute,
you can mitigate them. The current philosophy, by the way,
is that you can use another model to look at
another model's outputs, which, as I mentioned in the previous episodes,

(32:40):
is very silly. But seriously, when everyone tells you hallucinations are
going away, look a little deeper and notice how
they consistently fail to tell you how. It's
just very silly. Look. Every bit of excitement for this
technology right now is based on this idea of what
it might do, as I've said, and that quickly gets
conflated with what it could do, which allows Sam Altman,

(33:02):
who by the way, is far more of a marketing
person than an engineer. His one startup, Loopt, was a failure.
He's failed upwards. He was in Y Combinator, he did this,
it's actually ridiculous how famous he is. All of this bullshit
allows him to sell the dream of open AI,
and he's selling it based on the least specific promises
I've seen since Mark Zuckerberg said we'd live in our

(33:23):
bloody oculus headsets. And it's frustrating because this money and
this attention could go to important things. We have real
problems in society. I believe that Sam Altman and pretty
much anyone in a position of power and influence in
the AI space has been tap dancing this entire time,
hoping that he could amass enough power and revenue that

(33:45):
his success would be inevitable. Yet I think his hype
campaign has been a little bit too successful, and it's
deeply specious, and he, along with the rest of the
AI industry, has found himself suddenly having to deliver a
future he's not even close to developing. I am always
scared of automation taking our jobs. I think it's always

(34:06):
worth being scared of. But I don't think that's the
thing the tech industry is working on right now. I
don't think they're close, and I think there's something more
imminent to fear, and that thing is the bottom falling
out of generative AI as companies realize that the best
they're going to see is maybe a few digits of
profit growth. Companies like Nvidia, Google, Amazon, Snowflake, and Microsoft,

(34:29):
they have hundreds of billions of dollars of market capitalization
as well as expected revenue growth tied into the
idea that everyone's going to integrate AI into everything, and
that they'll be doing more than they are today. You
can already see the desperation coming from these companies, like Microsoft,
for example, which in March effectively absorbed a company called
Inflection AI into itself, kind of an acquisition by Stealth.

(34:53):
Inflection AI is a public benefit company that portrays itself
as a nicer, gentler version of open AI. Its core product,
a chat GPT style chatbot, touts its empathetic tone, its humor,
and its emotional awareness. Inflection was created in twenty twenty
two with an all star founding team that included Reid Hoffman,
the co founder of LinkedIn, and Mustafa Suleyman, the
British born co founder of DeepMind, which Google acquired

(35:15):
in twenty fourteen. In mid twenty twenty three, Microsoft took
part in a one point three billion dollar funding round
which saw the company acquire a significant stake in Inflection,
alongside other AI players like Nvidia. Inflection's core product has
the same inherent underlying issues as every other generative AI product, Hallucinations,
for example, but it has an accomplished team that has

(35:36):
taken a different approach to its competitors. Whereas ChatGPT
and Claude two tend to be, or at least aspire to be,
functional tools that provide information or complete tasks,
Inflection sought to make its product feel a bit more organic.
For Microsoft, the appeal was obvious. It has so much
riding on its AI ambitions, both in terms of money
spent as well as its share price, that it can't

(35:57):
really afford to appear stagnant or worse as though it
made a bad bet. Acquiring Inflection would help it maintain
its image, especially with idiot Wall Street analysts. But here's
the problem. Microsoft already holds a massive stake in Open AI,
and regulators both in America and Europe are wary of
market consolidation. Acquiring Inflection would attract a little too

(36:20):
much scrutiny. So Microsoft took a third, nastier path. Instead
of buying the company, it bought the employees, with Suleyman
and the majority of his coworkers jumping ship to found
Microsoft's new AI division. It secured the talent, then struck
a six hundred and fifty million dollar licensing deal, yet another
example of Microsoft basically paying itself, with the

(36:44):
shell of Inflection. You know, the one
without any of the staff left, giving it access to
the company's tech and its IP, and there's nothing regulators could
do to stop them. To be clear, Microsoft is in
a position where it could easily absorb the shock wave
of a potential AI bubble burst. It still prints money
from its other business units like office and cloud computing,
Microsoft Windows, and the Xbox gaming system, and the same

(37:08):
is true for the other big names like Google and Nvidia.
They're well insulated from any slowdown in AI investment or
from a growing apathy towards AI among enterprise customers. I will note, however,
these massive investments in data centers, if they're all for nought,
you will see a form of crash. I can't say
the same for startups, though other companies aren't going to

(37:30):
be so lucky. Stability AI, the developer of Stable Diffusion,
a generative AI that can produce images from written prompts,
innovative for the time, is perhaps the canary in the
coal mine of the AI bubble. Stability AI rode the same waves
as Open AI, especially in twenty twenty three, but now
that money is tighter and skepticism is higher, it's struggling
to stay afloat. Although the company raised one hundred million dollars

(37:53):
in early twenty twenty three, it burned through nearly eight
million dollars a month, and a recent attempt to
raise further cash failed. The company routinely missed payroll and,
according to Forbes, amassed a sizeable debt with the US
tax authorities that culminated in threats to seize the company's assets.
It owed debts to Amazon, Google, and CoreWeave, a
compute provider that specializes in AI applications. With negligible

(38:16):
revenues and rapid cash burn combined with no obvious way
to monetize the product, Stability AI is now in turmoil,
with its key talent leaving the company in March, followed
by the company's CEO and co founder, Emad Mostaque. Its
ongoing existence is in question, with The Financial Times writing
in March that the company's future, despite once being seen
as among the world's most promising startups, is in doubt.

(38:40):
While it would be fair to say that Stability AI
was unique in its internal turmoil, its external pressures, the
ability, or lack thereof, to monetize an expensive product,
and its reliance on external funding to survive are much more
common across the industry. Its survival depended on investors believing
in a lofty future for AI, where it's integrated into
every facet of our lives and it plays a role

(39:01):
in almost every industry, which, of course, we now know
it doesn't. While that belief hasn't been shattered, or at
least not yet, it's fair to say that expectations and
aspirations are increasingly tempered after reaching the apex of the
AI pissfest. The tech industry is getting a hangover, and
companies like Stability can't survive the headache. But to be clear,

(39:23):
I am not excited for the AI bubble to pop,
and on some level, as weird as it sounds, I
kind of hope it doesn't. Once it bursts, the AI
bubble will hit far more than the venture capitalists that
propped it up. This hype cycle has driven the global
stock markets to their best first quarter in five years,
and once the markets fully turn on the companies that

(39:45):
falsely promised an AI revolution, it's going to lead to
a massive market downturn and another merciless round of layoffs
throughout the tech sector, led by Microsoft, Google and Amazon.
This will in turn suppress tech labor and flood the
market with tech talent. It's going to suck for everyone involved
in software. A market crash led by the tech industry

(40:05):
will only hurt innovation, further draining the already low amounts
going into the hands of venture capitalists that control the
dollars going into new startups, and once again, the entire
industry will suffer because people don't want to build new
things or try new ideas. No, they want to fund
the same people doing the same things or similar things
again and again because it feels good to be part

(40:27):
of a consensus. Even if you're wrong. Silicon Valley will
continually fail to innovate at scale until it learns to
build real things again, things that people actually use, and
things that actually do something. I don't know if I
want to be right or wrong here. If I'm wrong,
generative AI could replace millions of people's jobs, something that
far too many people in the media are excited about,

(40:49):
despite the fact that the media is the first industry
that open AI kind of wants to automate. If I'm right,
we're going to face a dot com bubble style downturn
in tech, one that's far worse than what we
saw in the last few years. In any case, I
do wish the tech industry would get their heads out
of their asses. I'm tired. I'm tired of watching tech

(41:11):
firms lie through their teeth about the future, that we'll live
in the metaverse, that our future will be decentralized and
paid for in cryptocurrency, and that our world will be
automated with chatbots. I truly think that these companies think
regular people are stupid, which is why Microsoft put out
a minute long Super Bowl commercial for their Copilot AI
that featured several prompts like write the code for my

(41:32):
three D open world game that don't actually do anything.
That prompt I just mentioned, go type it into Copilot.
It will give you a guide to coding a game,
no code created. Also in the commercial, he types in
classic truck shop called Pauls, but none of these image
generators can actually do words, so it just looks wrong.
Like, go and do it. Trust me. It's funny. But every

(41:54):
time that these big tech booms happen, every time they say, oh,
we're going to live in the metaverse and oh we're
going to be able to automate everything, every time they lie,
the world turns against the tech industry, and this particular
boom is so craven in its falsehoods that I think
it'll have a dramatic chilling effect on tech valuations if

(42:17):
the bubble pops quite as severely as I expect. And
Sam Altman desperately needs you to believe the bubble won't pop.
He needs you to believe that generative AI will be essential, inevitable,
and intractable, because if you don't, you'll suddenly realize that
trillions of dollars in market capitalization and revenue are being
blown on something that's kind of mediocre. If you focus

(42:39):
on the present, what open AI's technology can do today
and will likely do for some time, you see in
terrifying clarity that generative AI isn't really a society altering technology.
It's just another form of efficiency driving cloud compute software
that benefits kind of a small amount of people. If
you stop saying things like AI could do or AI

(43:02):
will do, you have to start asking what AI can do,
and the answer is not that much and probably not
that much more in the future. Sora is not going
to generate entire movies. It's going to continue making horrifying
human adjacent creatures that walk like the AT-ATs from Star
Wars and cartoons that look remarkably like SpongeBob SquarePants.

(43:25):
ChatGPT isn't going to run your business because it can
barely output a spreadsheet without fucking up the basic numbers,
if it even understands what you're asking it to do
in the first place. I think that AI has maybe
three quarters to prove itself worthwhile before the apocalypse really arrives.
When it does, you're going to see it first in
the real infrastructure companies, starting with Nvidia, which has grown to

(43:47):
about two trillion dollars in market capitalization because of the
chips they make, which are pretty much the only ones
that can power the AI revolution. There are other companies,
like AMD and Micron, but Nvidia is the one
that's really grown. If you watch any of their notes,
they're insane. They're just full of fan fiction. Once Nvidia
starts to see growth slow, and Oracle in particular, Oracle, a

(44:08):
massive data center company, a massive database company as well,
with Microsoft as one of its largest customers, building data centers for them.
Once that starts slowing down, that's when you should start worrying.
But the real pain's going to come for Amazon, Microsoft
and Google when it's clear that there's not really that
much revenue going into their clouds. Once that happens, once

(44:31):
you start seeing Jim Kramer on CNBC saying I don't
think the AI boom is here, despite having said it
just was, that's when things get nasty and the knock
on effects will be horrible. It's going to be genuinely painful,
worse than we've seen the last few years. And it's
all a result of the same problem. It's all a
result of the growth at all costs tech economy. When

(44:52):
things are made to expand, when things are made to
build more rather than build better, When you're building solutions
to use compute power to sell cloud computing services rather
than helping real people make their lives better. Tech is
not building for real people anymore. And the AI revolution,
despite its specious hype, is not really for us. It's

(45:15):
not for you and me. It's for people like Satya
Nadella of Microsoft to claim that they've increased growth
by twenty percent. It's for people like Sam Altman to
buy another fucking Porsche. It's so that these people can
feel important and be rich, rather than improving society at all.
Maybe I'm wrong, Maybe all of this is the future,

(45:35):
maybe everything will be automated, but I don't see the signs.
This doesn't feel much different to the metaverse. There's a product,
but in the end, what's it really do? Just like
the metaverse, I don't think many people are really using it.
All signs point to this being an empty bubble. And
I'm sure you're sick of this too. I'm sure that

(45:57):
you're sick of the tech industry telling you the future's
here when it's the present and it fucking sucks. And
I'm swearing a lot, and I'm angry, but I'm justified
in this anger, I feel, and I'm not telling you
how to think. And I've heard from some of you saying, oh,
don't tell me how to think, and I agree. I agree.
I'm not here to tell you to be angry about anything.
But I want to give you at least my truth,
and I want to give you what I see is happening,

(46:18):
because I don't feel like enough people are doing that
in the tech industry. And that's what better Offline is
going to continue to be. I really appreciate you listening.
It's been about a month, month and a half since
we started. It's only going to get better from here.
Thank you, thank you for listening to Better Offline. The

(46:43):
editor and composer of the Better Offline theme song
is Matt Osowski. You can check out more of his music
and audio projects at Mattosowski dot com, M A T
T O S O W S K I dot com. You can
email me at ez at Better Offline dot com, or
check out Better Offline to find my newsletter and more
links to this podcast. Thank you so much for listening.

(47:05):
Better Offline is a production of cool Zone Media. For
more from cool Zone Media, visit our website cool Zonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.