Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Zone Media. Well, I know that assholes grow on trees,
but I'm here to trim the leaves. This is Better
Offline. I'm your host, Ed Zitron. It's bullshit
(00:22):
Week here at Better Offline headquarters. The thousands of elves
that work for me have been finding me things that
say bullshit for no reason since I wrote the episode
weeks ago, and I don't know what to do with them.
But anyway, we're starting with one of the largest financial
institutions in the world calling bullshit on generative AI. As usual.
Don't take my word for it. Check the episode notes
(00:42):
for links that map to everything I'm talking about. I've
tried my best to map them to exactly what I'm
saying too, and you can feel free to yell about
it like yell out the words like the Beastie Boys,
or just follow along, because I think it's important for
you to know where I'm getting all of this from,
versus just assuming that I made it up somehow. That
is an accusation I've had made. I don't really know
how I would do that. Anyway, the episode. I'm very sorry.
(01:06):
At the tail end of June, Goldman Sachs, one of
the largest global investment banks, put out the thirty-one-page
report titled Gen AI: Too Much Spend, Too Little Benefit,
that's with the question mark at the end, that includes
some of the most damning literature on generative AI that
I've ever seen. And yeah, that weeping sound in the background
you hear is the slow, painful deflation of the bubble
(01:28):
I've been warning you about since March. The report covers
AI's productivity benefits, which Goldman remarks are likely limited, AI's returns,
which are likely to be significantly more limited than anticipated,
and AI's power demands, which are likely so significant that
utility companies will have to spend nearly forty percent more
in the next three years to keep up with the
demand from hyperscalers and Rot Economy companies like Google and Microsoft.
(01:53):
The report is so significant because Goldman Sachs, like any
investment bank, doesn't care about your feelings or your happiness
or emotions or anything unless doing so is profitable. It'll
gladly hype anything it thinks will make it a buck.
Back in May, it published a report which claimed that AI,
not just generative AI, was showing very positive signs of
(02:13):
eventually boosting GDP and productivity, even though said report buried
within it constant reminders that AI had yet to impact
productivity growth, and stated that only five percent of companies
report using generative AI in regular production. For Goldman to
suddenly turn on the AI movement suggests that it's extremely
anxious about the future of generative AI, with almost everybody
(02:36):
agreeing on one core point in the report: that the
longer generative AI takes to make people money, the
more money it's going to need to make. The
report also includes an interview with economist Daron Acemoglu of MIT,
which can be found, by the way, on page four
of the document, which you can find in this episode's
spreadsheet of links, an Institute Professor who published a paper
(02:57):
back in May called The Simple Macroeconomics of AI, which argued,
and I quote, the upside to US productivity and
consequently GDP growth from generative AI will likely prove much
more limited than many forecasts expect. A month has
only made Acemoglu more pessimistic, declaring that truly transformative changes
won't happen quickly and few, if any, will likely occur
(03:19):
within the next ten years, and that generative AI's ability
to affect global productivity is low because, and I quote again,
many of the tasks that humans currently perform are multifaceted
and require real world interaction, which AI won't be able
to materially improve anytime soon. What makes this interview, and
(03:39):
really the whole report so remarkable is how thoroughly and
aggressively it attacks every bit of marketing collateral that the
artificial intelligence movement has. Acemoglu specifically questions the belief that
AI models will simply get more powerful as we throw
more data and GPU (graphics processing unit, the thing that
(04:01):
crunches the numbers) capacity at them, and specifically asks the question:
what does it mean to double AI's capabilities? How does
that make something like, say, a customer service rep better? Seriously, though,
what does better really get them? Fewer errors? How do
you quantify fewer errors without factoring in the current rate
(04:23):
of introducing them? It's not great. And this really is a
specific problem with the whole gen AI fantasists' bullshit spiel. They
heavily rely on the idea that not only will these
large language models, LLMs, like ChatGPT get more powerful,
but that getting more powerful will somehow grant them the
power to do something. As Acemoglu says, what does it
(04:47):
mean to double it? What does that do? No, really,
what does that more actually mean? We've heard people talk
about this for eighteen months, saying the more powerful ChatGPT
gets, the better it will get. But what does
better even look like? GPT-4o, the latest version
of ChatGPT, other than accepting more inputs, is kind
(05:10):
of the same thing. The capabilities have not really changed,
and while one might argue that more powerful will mean
faster generative processes, there really is no barometer for what
better looks like, and perhaps that's why ChatGPT, Claude,
and other LLMs have yet to take a leap beyond
being able to generate pictures of Garfield in lingerie or
(05:33):
Scooby-Doo with a gun. Anthropic's Claude LLM might be,
and I quote, best in class according to TechCrunch,
but that only means that it's faster and more accurate,
which is cool, but not really the future or revolutionary
or necessarily good in all cases. I should add that
these are questions that I and other people writing about
(05:54):
AI kind of should have been asking the whole time.
Generative AI generates outputs based on text-based inputs and requests,
and eventually multimodal will mean being able to look at
something and generate an answer too, and these requests can
be equally specific and intricate. Yet the answer is always,
as obvious as it sounds, generated fresh, meaning that there's
(06:16):
no actual knowledge or indeed intelligence operating in any part
of the process. As a result, it's easy to see
how this gets better-ish faster, but much, much harder,
if not impossible, to see how generative AI leads any
further than where we're already at. And by that, I
(06:37):
mean what does it do other than what it's doing today?
What does ChatGPT do today that's radically different to
eighteen months ago. The jump between, say, two generations of iPhones,
from the first iPhone to, what would it be, the
iPhone 3G, I think it would be. Someone's going
to email and say I'm wrong, and I'll say to them,
(06:57):
you're very rude. Anyway, but that jump was huge, in which
faster internet meant you could do more with the phone.
There was the App Store. These were new functionalities added
two years after the first iPhone. Two years after the
first ChatGPT, and, uh, I guess, I guess you
can talk to it and it responds? That's not really
great. How does GPT, a transformer-based model that generates
(07:20):
answers probabilistically, as in, what the next part of the
generation is most likely to be based on what's been inputted,
based entirely on training data, how does it do anything
more than generate paragraphs of occasionally accurate text or images,
maybe a picture of Scooby-Doo in lingerie? I don't know.
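To make that "generates answers probabilistically" point concrete, here's a toy sketch, nothing like GPT's actual architecture: a hypothetical bigram sampler whose words and counts are entirely made up, showing only the core idea of weighted dice rolls over "what usually came next in the training data."

```python
import random

# A toy bigram "model": counts of which word followed which in some
# made-up training data. Real LLMs learn billions of weights, but
# the generation loop has the same basic shape.
BIGRAM_COUNTS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 4},
    "dog": {"sat": 2},
    "sat": {"down": 2},
}

def next_token(prev: str, rng: random.Random) -> str:
    """Sample the next token in proportion to how often it
    followed `prev` in the toy training data."""
    options = BIGRAM_COUNTS[prev]
    return rng.choices(list(options), weights=list(options.values()), k=1)[0]

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Generate `length` tokens after `start`, one at a time.
    No knowledge, no intent: just weighted random picks."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1], rng))
    return out

print(generate("the", 3))  # something like ['the', 'cat', 'sat', 'down']
```

Every answer is assembled fresh this way, which is why "more powerful" is such a slippery claim: a bigger table and a faster sampler are still just a sampler.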
But how do any of these models even differentiate from
(07:42):
each other when most of them are trained on the
same training data that they're already running out of? And
what's crazy is, I mentioned that I should have asked this.
I really should have, every single episode I've mentioned AI.
What does better even look like? What does more powerful
even look like? Because when we say better, does that
mean faster? Does that mean quicker to generate things? I
(08:04):
mean it can generate more things? And the answer is
usually not. It's not that it can actually function in
a different way. It's just that you can do more,
it can grow more. Hey, remember the idea of growth
at all costs? Jesus, right. But seriously, though, this feels like
the question to ask Sam Altman or Mira Murati, the
CTO of OpenAI. Like, what's next? What's next? Oh,
(08:27):
you can generate videos? When can I use that? Cool?
What does better look like there, other than not horrible-looking? Okay,
but what, what more can GPT do? Because this thing,
this thing can't think. It's generating stuff. It doesn't know anything.
It has training data. It eats up and then craps
out an answer. How is that going to lead to
(08:48):
even automation? I just and then how do you even
deal with the fact that it really is running out
of training data? And I've argued before that the training data
crisis is one that does not get enough attention, but
it's sufficiently dire now that it has the potential to
halt or at least dramatically slow any AI development in
the future, by which I mean generative AI. As one
(09:10):
paper published in the Journal of Computer Vision and Pattern
Recognition found, in order to achieve a linear improvement in
model performance, you need an exponentially large amount of data.
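To put a rough shape on that relationship, here's a purely hypothetical, illustrative scaling curve: suppose a model's quality score grows only with the logarithm of its training-set size. Inverting that shows each equal step in quality demanding ten times the data of the last one.

```python
# Hypothetical curve for illustration only: score = log10(tokens).
# Inverting it shows the data cost of each additional score point.
def tokens_needed(score: float) -> float:
    """Training data required to reach `score` under score = log10(tokens)."""
    return 10 ** score

# Linear gains in the score column, exponential growth in the data
# column: that's the shape of the problem the paper describes.
for score in (6, 7, 8):
    print(f"score {score}: {tokens_needed(score):,.0f} tokens")
```

The specific curve is invented, but the pattern is the point: linear on one axis, exponential on the other, and the data bill compounds accordingly.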
So it's not like they read something and then work
something out from there. They need so much more to
learn one thing kind of right, it's not very good
(09:31):
and maybe there's another way to put it. Each additional
step of training becomes increasingly and exponentially more expensive to
take, too. It requires you to get a bunch of
training data and process more of it, but it also
requires extremely expensive technology GPUs and a ton of energy
to actually do so, and this incurs a steep financial cost,
(09:53):
not merely in just obtaining the data, but also, as
I mentioned, the compute required to process it. And Anthropic's
CEO Dario Amodei said that the AI models currently in
development will cost as much as a billion dollars to train,
and within the next three years we may see models
that cost ten or one hundred billion dollars to train,
which is just insane, as that's roughly three times the
(10:15):
GDP of Estonia, a beautiful, quite cold country. Back to
Mister Daron Acemoglu, who doubts that LLMs can even become
superintelligent, and that even his most conservative estimates of
productivity gains and I quote may turn out to be
too large if AI models prove less successful in improving
upon more complex tasks, and I think that's really the
(10:39):
root of the problem. All of this excitement, every second
of breathless beat off hype has been built on this
idea that the artificial intelligence industry, led by generative AI,
will somehow revolutionize and automate everything from robotics to the
supply chain, despite the fact that generative AI is not
actually going to solve these problems because it is not
(11:00):
built to do so. I wouldn't have a calculator
drive my fucking car, Jesus Christ. And while Acemoglu may
have some positive things to say, for example, that AI
models could be trained to help scientists conceive of and
test new materials, which actually already happened thanks to Google
DeepMind researchers, his general verdict is kind of harsh that
(11:20):
using generative AI and, quote, too much automation too soon
could create bottlenecks and other problems for firms that no
longer have the flexibility and troubleshooting capabilities that human capital provides.
In essence, replacing humans with AI might break everything. If
you're one of those bosses that doesn't actually know what
the fuck it is they're talking about. But you know what,
(11:41):
I'm sure the following advertisements are from people who know exactly
what they're talking about. And we're back. The report also
includes a palate cleanser for the quirked up AI hype fiends,
and you'll find it in the report, where Goldman Sachs's
(12:01):
Joseph Briggs argues that generative AI will, and I quote,
likely lead to significant economic upside, based, and I shit
you not entirely on the idea that AI will replace
workers in some jobs and then allow them to get
jobs in other fields. Briggs also argues that, and I
quote again, the full AI automation of AI exposed tasks
that are likely to occur over a longer horizon could
(12:23):
generate significant cost savings, which assumes that generative AI or
AI itself will actually replace these tasks. But also, that's
such a funny thing to say. Hey, you know, automating
tasks over a long time could save money. Fucking hell,
I should get a job at Goldman Sachs. Anyway, anyway. Sorry.
I should also add that, unlike every single other interview
(12:45):
in the report, Briggs continually mixes up AI and generative AI,
and at one point suggests that recent generative AI advances
are foreshadowing the emergence of a superintelligence. This is like
suggesting that because I squeezed my washing-up liquid and
it sprayed across the wall, I'm one step closer
to becoming Picasso. I included this part of the report
(13:08):
because sometimes, very rarely I get somebody suggesting that I'm
not considering both sides. The reason I don't generally include
both sides of this argument is that the AI hype
side generally makes arguments based on the assumption that things
will happen, such as a transformer model that probabilistically generates
the next part of a sentence or a picture somehow
gaining sentience. I wish my calculator had gained sentience during high school.
(13:33):
I was a very lonely child. I did not have
many friends. But no, all it would do is just
let me kind of make it look like it said boobs.
Then I'd get in a lot of trouble. But anyway.
Francois Chollet, an AI researcher at Google, recently argued that
large language models like ChatGPT can't lead to average
general intelligence, the sentient AI that everyone's excited about, explaining
(13:54):
in detail in an interview with a podcast that I've
linked in there, that models like GPT are simply not
capable of the kind of reasoning and theorizing that makes
a human brain work. Chollet also argues that even models
specifically built to complete the tasks of his Abstraction and
Reasoning Corpus, a benchmark test for AI skills and true
intelligence that he invented, are only doing so because they've
(14:16):
been fed millions of data points of people solving the test,
which is kind of like measuring somebody's IQ based on
them studying really hard to complete an IQ test except
even dumber. But the reason that I'm suddenly bringing up
these superintelligences or AGI artificial general intelligence average general intelligence,
lots of people say different things. The reason I'm saying
(14:38):
it is because throughout every single defense of generative AI
is a really nasty, deliberate attempt to get around the
problem that generative AI doesn't actually automate many tasks. While it's
good at generating answers, sometimes even correct ones, or at
creating things based on a request, sometimes with the right
number of fingers, there's no real interaction with the task
(14:59):
or the person giving the task, or considering what
the task needs at all. Just the abstraction of things
said to it and output generated, albeit in quite a complex way.
Tasks like taking someone's order and relaying it to the
kitchen at a fast food restaurant might seem elementary to
most people, and I won't say easy, because working in
fast food is a hard and horrible job. It might
(15:20):
seem elementary, though, but it isn't for an AI model
that generates answers without really understanding the meaning of any
of the words. And really, I shouldn't have said the
word really, because it doesn't understand anything. Last year, Wendy's,
a burger chain here in America, announced that it would
integrate its generative FreshAI ordering system into some restaurants.
In late June, it revealed that the system requires human
(15:41):
intervention on fourteen percent of the orders. In one Redditor's post,
they noted that Wendy's AI regularly required three attempts to
get it to understand them, and would sometimes cut you
off if you weren't speaking fast enough. Hey, it's two
a.m., and connecting the words chicken sandwich and Diet Coke is
difficult for me. Another burger place here in America, White Castle,
(16:03):
which implemented a similar system in partnership with Samsung and SoundHound,
fared a little better, with a remarkable ten percent of
orders requiring human intervention. Last month, McDonald's discontinued its own
AI ordering system, which it built with IBM and deployed
to more than one hundred restaurants, likely because it wasn't
very good, with one customer rung up for literally hundreds
(16:25):
of chicken nuggets, like the I Think You Should Leave sketch. However,
to be clear, McDonald's system wasn't actually based on generative AI,
and if nothing else, all of these examples illustrate this
disconnect between those building AI systems and how much, or
really how little they understand the jobs they wish to eliminate.
A little humility, or doing a real job, goes a long way.
(16:48):
Another thing to note is that, on top of generative
AI generally cocking up these orders, Wendy's still requires human
beings to make the fucking food. Despite all of this hype,
all of this media attention, all of this incredible investment,
the supposed innovations don't even seem capable of replacing the
jobs that all of these horny capitalists have been planning
for them to do. Not that they think they should,
(17:09):
just, I'm being told that this future is inevitable, or indeed,
here. It's starting to really set in what the problem is
with generative AI, though: it isn't good at replacing the
kind of jobs that actually affect the economy but commoditizing
distinct acts of labor, and in the process the early
creative jobs that help people build portfolios to advance in
(17:30):
their industries. The freelancers having their livelihoods replaced by bosses
using generative AI aren't being replaced so much as they're
being shown how little respect many bosses have for their
craft or the customer they allegedly serve. Copy editors and
concept artists provide far more valuable work than any generative
AI can. Yet an economy dominated by managers who don't
(17:52):
appreciate or participate in labor means that these jobs are
constantly under assault from large language models pumping out stuff
that all looks and sounds the same, to the
point that the BBC reports that copywriters are now being
paid to help make the AIs sound more human.
One of the most fundamental misunderstandings of the bosses replacing
these workers with generative AI is that you are not
(18:14):
just asking for a thing, but outsourcing the risk and
responsibility for delivering it. When I hire an artist to
make a logo, my expectation is that they'll listen to
me, then add their own flair, then we'll go back
and forth with drafts until we have something I like
and that they're proud of too. I'm paying them not
just for their time, but for their years learning their
(18:35):
craft and the output itself, and so the ultimate burden
of production is not just my own, and that their
experience means that they can adapt to circumstances that I
might not have thought of. These are not things that
you can train in a data set, because they're derived
from experiences inside and outside of the creative process. While
one can teach a generative AI what a billion images
(18:57):
look like, AI doesn't get hand cramps or a call at
eight pm saying that something needs to pop more. It
doesn't have moods, nor can it infer them from written
or visual media, because human emotions are extremely weird, as
are our moods, our bodies, and our general existences. We're disgusting
and weird and beautiful. And I realize all of this
(19:18):
is a little flowery, but even the most mediocre copy
ever written is on some level a collection of experiences,
and fully replacing any creative is so very unlikely if
you're doing so based on copying a million pieces of
someone else's homework. Now, if I was looking for something creative,
I'd get it from one of the following advertisements, which
(19:39):
I am sure aligned perfectly with the things that I'm
talking about. Won't make me look weird, won't have people
email me. No, you're going to love the advertisements, and
then you're not going to email me about them. All Right,
(20:01):
we're back in the room. The most fascinating part of
the Goldman Sachs report, and you'll find it on page ten,
is an interview with Jim Covello, Goldman Sachs's head
of Global Equity Research. Covello isn't a name you'll have
heard unless you are, for whatever reason, a big semiconductor head,
but he's consistently been on the right side of history,
named as one of the top semiconductor analysts by II
(20:22):
Research for years, successfully catching the downturn in fundamentals in
multiple major chip firms far before others did, at times
mocked for quote being wrong and then turning out to
be so very, very right. And Jim, Jim Covello, in
no uncertain terms, thinks that generative AI, the whole bubble, this
(20:42):
whole generative monstrosity, is kind of for shit. Covello believes that
the combined expenditure of all parts of the generative AI boom,
data centers, utilities, applications, will cost a trillion dollars in
the next several years alone, and he asks one very
simple question: what trillion-dollar problem will AI solve?
(21:04):
He notes that replacing low wage jobs with tremendously costly
technology is basically the polar opposite of the prior technology
transitions that he's witnessed in the last thirty years. Just
to be consistent with my AI to national GDP rubric,
one trillion dollars is roughly half the GDP of Italy
or the entire Danish economy multiplied by two and a
half times. Yes, Denmark an EU and NATO member state
(21:27):
that gave the world, among other things, Ozempic, Lego,
and Aqua, the artists behind the saccharine nineteen ninety seven Song
of the Year Barbie Girl. One amazing country, and apparently
their GDP is just a footnote in comparison to how
much we have to spend on generating pictures of well
Garfield in lingerie shooting at Scooby Doo. At this point,
(21:49):
I'm just going to keep escalating. One particular myth that
Covello dispels, and this was my favorite part, is that
many people compare generative AI to the early days of
the Internet, and he notes that, even in
its infancy, the Internet was a low-cost
technology solution that enabled things like e-commerce to replace
costly incumbent solutions, and that AI technology is exceptionally expensive,
(22:11):
and to justify those costs, the technology must be able
to solve complex problems, which it isn't designed to do.
And that is a quote, by the way. Covello's a beast.
He also dismisses the suggestion that tech starts off expensive
and gets cheaper over time as revisionist history, and that's
a quote, and I quote again, the tech world is
too complacent in the assumption that AI costs will decline
(22:32):
substantially over time. He specifically notes that the only reason
that Moore's law was capable of enabling smaller, faster, cheaper
chips was because competitors like AMD forced Intel and other
companies to compete, a thing that doesn't really seem to
be happening with Nvidia, which has a near stranglehold on
GPUs required to handle generative AI. And indeed, all of
(22:53):
these other companies aren't really making competitive products. They're just
kind of making the same thing. Google has their own
video and text and image generator, so does Llama, the Meta one,
so does ChatGPT. It's all the same. There's no competition here.
Kind of looks like a cartel. Almost a good idea for
an episode. And while there are companies making these graphics
(23:16):
process units aimed at the AI market, especially in China,
where US trade restrictions prevent local companies from buying high-powered
cards like the A100 Nvidia makes,
for fears that they'll be diverted to the Chinese military,
they're not doing so at the same scale as Nvidia,
and Covello notes that, and I quote, the market is
too complacent about the certainty of cost declines. He also
(23:38):
notes that costs are so high that even if they
were to come down, they'd have to do so dramatically,
and that the comparison to the early days of the Internet,
where businesses often relied on sixty-four-thousand-dollar servers
from Sun Microsystems and there were no cloud services
like Amazon Web Services or Linode or Azure, those days
paled in comparison to the current costs of generative AI,
(24:00):
and that's even before you include the replacement of the
power grid, which he says is a necessity to keep
the boom going. I could probably just read you the
entirety of Covello's interview because it's just so nasty. I
want to roll around on the floor on it. It's amazing
and he's so good, and he even attacks one of
my least favorite things, where he believes that when people say, oh,
(24:25):
people didn't really think the iPhone was a big deal,
the Internet was a big deal, but specifically the iPhone
and smartphones, they didn't think it was going to be big,
and thus generative AI will be big. And he just says
it's complete nonsense. He says that he sat through hundreds
of presentations in the early two thousands, many of them
including road maps that accurately foresaw how smartphones eventually rolled out,
specifically noting things like GPS. So when GPS technology came down,
(24:49):
of course that would be in a smartphone. That makes
perfect sense. Just to be clear, this guy is there
and he's a software and hardware analyst. He sits through
presentations about what the future might look like all the time,
and he said there's no such roadmap for generative AI,
and there's no killer app either. He also notes the
big tech companies now have no choice but to engage
(25:09):
in an artificial intelligence arms race given the hype, which
will continue the trend of massive spending. And he believes
that there are and I quote low odds of AI
related revenue expansion, in part because he doesn't believe that
generative AI will make workers smarter, just more capable of
finding better information faster, which is not great, by the way,
(25:30):
and that any advantages that generative AI gives can be
arbitraged away because the tech can be used everywhere,
and thus you can't as a company really raise prices.
In fact, and bridging off from what he said there,
what might happen is a race to the bottom, or
maybe when things get too expensive, everyone will get more
expensive too. None of this is good, by the way
(25:53):
for them, but in plain English, just saying it, I'm
just going to put it out there, you'll be upset
by what I'm about to say: generative AI isn't making
any money for anybody because it doesn't actually make companies
that use it any extra money. Efficiency is useful, but
it's not company-defining. Covello also adds that hyperscalers like
(26:13):
Google and Microsoft will also garner incremental revenue from AI,
not the huge returns that they're perhaps counting on given
their vast AI related expenditures over the last two years
and the ridiculous things they've been doing, like putting generative
AI in search. Jesus Christ, Sundar. And this is damning
for many reasons, chief of which is that the biggest
(26:34):
thing that artificial intelligence is meant to do is be
smart and make you smarter. Being able to access information
faster might make you better at your job, but that's
efficiency rather than allowing you to do something new. You're
not actually really even being enhanced. This is one step
above Google. But maybe it's an internal Google search. I
(26:54):
guess you can generate really crappy-looking art. I'm just
not sure where it is that leads. I don't think Jim
is either. He ends with one important and brutal note
that the more time that passes without significant AI applications,
the more challenging and I quote the AI story will become,
with corporate profitability likely floating the bubble as long as
(27:15):
it takes for the tech industry to hit a more
difficult economic period, kind of like we saw in the
middle of twenty twenty two. He also adds his own prediction:
investor enthusiasm may begin to fade if important use cases
don't start to become more apparent in the next twelve
to eighteen months. And I think he's being a little optimistic.
(27:37):
While I won't recount the rest of the report, one
theme brought up repeatedly is the idea that America's power
grid is literally not ready for generative AI. In
an interview with former Microsoft VP of Energy Brian Janous
on page fifteen, the report details numerous nightmarish problems that
the growth of generative AI is causing to the power grid,
(27:57):
such as hyperscalers like Microsoft and Google increasing their power
demands from a few hundred megawatts in the early twenty-tens
to a few gigawatts by twenty thirty, enough
to power multiple American cities, or the centralization of data
center operations from multiple big tech companies in northern Virginia,
potentially requiring a doubling of grid capacity over the next decade.
(28:19):
Utilities have not experienced a period of load growth, as
in a significant increase in power draw in nearly twenty years,
which is a problem because power infrastructure is slow to
build and involves numerous onerous permitting and bureaucratic measures to
make sure it's done properly or at all, and the
total capacity of power projects waiting to connect to the
grid grew thirty percent in the last year, with wait
(28:40):
times of forty to seventy months. Expanding the grid is
no easy or quick task, and Mark Zuckerberg said that
these power constraints are the biggest thing in the way
of AI, which is sort of true and remarkable for
a guy who's often full of shit. In essence, on
top of generative AI not having any killer apps, not
(29:01):
meaningfully or helpfully increasing productivity or GDP, not generating any revenue,
not creating any new jobs, or massively changing existing industries,
it also requires America to rebuild its power grid, which
is a good thing which Brian Janous regrettably added
that the US has kind of forgotten how to do. The
US doesn't really do big infrastructure anymore. It's not
(29:24):
a great scene. I don't know. Perhaps Sam Altman's energy
breakthrough could be these fucking AI companies being made to
pay for new power infrastructure. And I don't mean them
owning it. I mean tax them, tax their asses. They're
burning the world, make them pay for it. The reason
I so agonizingly picked apart this report is that if
Goldman Sachs is saying this, things are very, very bad.
(29:48):
And it also directly attacks the specific hype tactics of
AI freaks. The sense that generative AI will create new
jobs? It really hasn't in the last eighteen months. The sense
that costs will come down? They haven't, and there doesn't
seem to be a path for them to do so
in a way that matters. And the sense that there's this incredible
demand for these products they claim exists? There really isn't,
(30:09):
and I don't see a path to it even Goldman Sachs,
when describing the efficiency benefits of AI, added that it
was able to create an AI that updated historical data
in its company models more quickly than doing so manually
with one problem: it costs six times as much to do.
I love the AI future. I love artificial intelligence. I
(30:30):
think it's so good that it's saving so many. I
feel crazy. I feel crazy. I'm sorry. Moving on. Now,
there is a remaining defense, and it's also one of
the most annoying. People sometimes argue that perhaps open ai
has something we don't know about, some sort of big, sexy,
(30:51):
secret technology that will break the bones of every hater
that will bring me to my knees and beg the
machine God for mercy. And I have a counterpoint: no, they don't. They don't have shit. Seriously. Mira Murati, CTO of OpenAI, said in early June that the models that OpenAI has in its labs are not much
(31:12):
more advanced than those that are publicly available. And that's my answer to all of this. There is no magic trick. There is no secret thing that Sam Altman has that he's going to reveal to us in the next few months that makes me eat crow, or some magical tool that Microsoft or Google pops out that makes all of this worth it. There isn't. I'm telling you, there isn't. We would have seen a sign. There's not even an inkling.
(31:33):
There's not a rumor, there's not a leak. There's nothing.
Generative AI, as I said a few months ago, is peaking.
If it hasn't already peaked, it can't do more than
it's currently doing. At least not much more other than
maybe doing it faster with some new inputs. It isn't
getting more efficient. Sequoia hype man David Cahn gleefully mentioned in a recent blog that Nvidia's B100 chips,
(31:56):
the kinds used for training and running these models, will
have two point five x better performance for only twenty
five percent more cost, which doesn't mean a goddamn thing,
because generative AI isn't going to gain sentience or intelligence
or consciousness because it's able to run faster. I don't
even think it's going to be able to do more things.
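To put Cahn's chip numbers in perspective: 2.5x the performance for 25% more cost works out to roughly double the performance per dollar, which is a supply-side efficiency gain, not a capability change. Here's the quick arithmetic (the 2.5x and 25% figures are Cahn's claims, not mine):

```python
# David Cahn's claimed B100 figures: 2.5x the performance
# for 25% more cost than the prior generation of chips.
perf_multiplier = 2.5
cost_multiplier = 1.25  # i.e. 25% more cost

# Relative performance per dollar versus the prior generation.
perf_per_dollar = perf_multiplier / cost_multiplier
print(perf_per_dollar)  # 2.0 -- double the performance per dollar, nothing more
```

In other words, even taking the hype at face value, the chips run the same kind of model faster and cheaper; they don't make it a different kind of thing.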
Generative AI is not going to become AGI, nor will
(32:19):
it become the kind of artificial intelligence you've seen in
science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a kind of consciousness that no technology can currently, and may never be able to, produce, which is the ability to both process and understand information flawlessly and make decisions based on experience, which, if I haven't been clear enough,
(32:39):
are all entirely distinct things. And generative AI and AI
in general doesn't have experiences. They don't have shades of gray,
They only have shades of brown. Poops. Poop joke. Generative AI at best processes information when it trains on data, but at no point does it learn or understand, because everything it's doing is based on ingesting training data and
(33:01):
developing answers based on a mathematical sense of probability rather
than any appreciation or comprehension of the material itself. LLMs are entirely different pieces of technology to the artificial intelligence that the AI bubble is hyping, and it's disgraceful that the AI industry has taken so much money and attention with such a flagrant, offensive lie.
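That "mathematical sense of probability" point is worth making concrete. Here's a toy sketch of next-token sampling; the context, vocabulary, and probabilities are all invented for illustration, and a real model computes these distributions from billions of learned parameters over tens of thousands of tokens, but the mechanism is the same: weighted chance, not comprehension.

```python
import random

# Toy next-token sampling: given the text so far, a model assigns a
# probability to each candidate token, then samples one. The "model"
# here is a hard-coded table, purely for illustration.
next_token_probs = {
    "the cat sat on the": {"mat": 0.6, "chair": 0.3, "moon": 0.1},
}

def sample_next_token(context: str) -> str:
    dist = next_token_probs[context]
    tokens = list(dist.keys())
    weights = list(dist.values())
    # Weighted random choice -- no understanding of cats or mats involved.
    return random.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token("the cat sat on the")
print(token)  # one of "mat", "chair", "moon", chosen by probability
```

At no point does anything in that loop know what a cat is; it only knows which token tends to follow which, which is the whole argument in miniature.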
(33:24):
The jobs market isn't going to change because of generative AI, because generative AI can't actually do many jobs, and it's mediocre at the things that it's capable of doing, which is why it's so shocking that there are even people who'd replace real people with it. While it's a useful efficiency
replace real people with it. While it's a useful efficiency
tool in specific contexts, said efficiency is based off of
technology that's extremely expensive, and I believe at some point
(33:47):
AI companies like Anthropic and OpenAI are going to have
to increase prices or begin to collapse under the weight
of a technology that has no path to profitability. If there was some secret way that this would all get fixed, wouldn't Microsoft or Meta or Google, or maybe Amazon, whose CEO of their cloud platform compared generative AI to the dot-com bubble in February, have taken advantage of this big,
(34:09):
sexy secret? Why am I hearing that OpenAI is
already trying to raise another multi billion dollar round after
raising an indeterminate amount at an eighty billion dollar valuation
in February? Isn't OpenAI's annualized revenue three point four
billion dollars? Why do they need more money? How much
are they burning? Because I'm guessing that if it's more
(34:30):
than three point four billion dollars, it's got to be
a lot more if they're still trying to raise capital,
and if they're still going out there lying about what
ChatGPT could do in the future. But I'll give you an educated guess, because whatever OpenAI and the other generative AI hucksters have today is obviously, painfully not the future.
Generative AI is not the future but a regurgitation of
(34:54):
the past, a useful, yet not groundbreaking way to quickly
generate new data from old that costs too much to
make the compute and the energy demands worth it. Google
grew its emissions by forty eight percent in the last
five years, chasing a technology that made its search engine
even worse than it already was, and they've got nothing
to show for it other than a bunch of very
funny headlines. It's genuinely remarkable how many people have been
(35:18):
won over by this insane con, this unscrupulous manipulation of the capital markets, the media, and brainless executives disconnected from production.
They're all buying it, and it's all thanks to a
tech industry that's disconnected itself from building useful things. I've
been asked a few times when I think the bubble
will burst, by the way, and I maintain that part
(35:40):
of the collapse will be investor dissent, punishing one of the major providers, Microsoft or Google, for a massive investment in an industry that produces little actual revenue. However, I think what'll really get the bubble popping will be a
succession of bad events, like, say, Figma pausing its new AI feature after it immediately plagiarized Apple's Weather app, likely because
(36:00):
it was trained on Apple's app as part of its training data, crested by one large, nasty one, such as a major AI company, like chatbot company Character.ai, which raised one hundred and fifty million dollars in funding a few years ago, and which The Information, great publication, claims might sell to one of the bigger tech companies. I think someone like Character.ai could collapse under the weight of an unsustainable business model and unprofitable technology. And just, also, do you
(36:25):
really need a chatbot? This is what this company does: you can speak to, like, Satoru Gojo from Jujutsu Kaisen, which is an anime and a manga. By the way, if you're reading the manga, it's taking too long and it's weird.
It's one of these weird companies where you can talk to historical figures, and you're kind of like, okay, I assume
you have a vague enterprise product. Oh no, it's all
just nonsense. It's just, when I see these companies and
(36:47):
what they raise, I'm just like, what the fuck are you even doing all day? But it could be a
slightly more useful-sounding one like Cognition AI, which raised one hundred and seventy five million dollars at a two billion dollar valuation in April to make an AI software engineer, with one problem. Well, I mean, it's just a
small one. It was that they had to fake a
(37:09):
demo of it working. They had to fake the demo,
and I've included a link in the notes to a YouTube video of a really great software engineer just sitting there
saying I'm actually a fan of AI, but this is crap,
and he just goes through it line by line. Basically,
there's going to be a moment that spooks the venture
capital firms, and it's going to spook them into pushing
one of their startups to sell, which will lead to
(37:30):
a sudden and unexpected yet very obvious collapse of a major player as valuations drop and people get desperate. For OpenAI and Anthropic, there really is no path to profitability,
only one that includes burning further billions of dollars in
the hope that they discover something, anything, that might be
truly innovative or indicative of the future, rather than further
(37:50):
iterations of generative AI, which is at best an extremely
expensive new way to process data. I just see no
situation where OpenAI and Anthropic continue to iterate on large language models in perpetuity, because at some point, Microsoft, Amazon,
and Google will decide that cloud compute welfare isn't a
business model. And by the way, all of those companies
(38:11):
I listed, Microsoft, Amazon, and Google, have either invested in OpenAI or Anthropic. In the case of Amazon and Google, they've only invested in Anthropic, and a lot of that is in cloud credits, meaning that it's just welfare. It's hilarious; there are very right-wing people in the tech industry, and you'd think that this kind of welfare would bother them. I don't know what the difference is. Anyway, without a real, tangible breakthrough,
(38:32):
one that would require them to leave the world of
large language models entirely, it's unclear how generative AI companies
can even survive. Generative AI is locked in the Red Queen's race, burning money to make money in an attempt to prove that they one day will make more money, despite there being no clear path to making more money than they spend.
It's all a little wild, a little bit upsetting, and
(38:54):
a little bit crazy making. And I feel a little
bit crazy every time I put together one of these
episodes, because it's also patently ridiculous. Generative AI is unprofitable, unsustainable,
and fundamentally limited in what it can do thanks to
the fact that it's probabilistically generating an answer. It's not
learning anything. It doesn't know anything. It isn't intelligent, it's
(39:16):
only artificial. It's been eighteen months since this bubble started inflating,
and since then very little has actually happened involving technology
doing new stuff, just iterative explorations of the very clear
limits of what an AI model that generates answers based
on training data can produce, with the answer being something
(39:36):
that at times is sort of good. It's obvious, it's well documented. Generative AI costs far too much, it isn't getting cheaper,
it uses too much power, and doesn't do enough to
justify its existence. There are no killer apps, and no
killer apps on the horizon, and there are no answers
to these problems. I don't know why more people aren't
(39:57):
saying this as loudly as I am. I don't know why everyone's not freaking out. I understand that big tech desperately needs this to be the next hyper-growth market, as they haven't got any others. But pursuing this barely useful, environmental-disaster-causing cloud efficiency boondoggle will send shock waves through the industry when it all collapses. And what's frustrating to me about this is that it's becoming obvious,
(40:20):
and you're seeing people lately kind of come out of their shells and say, I'm not sure this is going to be good. I'm not sure. Are we hitting peak AI? There's a great piece in The New York Times by someone recently saying that we've hit peak AI, and it used a bunch of my links. Thank you, Julia. Anyway, putting aside
my bitterness, the problem here is all of the money
and power going into this, but also the big lie
(40:41):
being perpetrated by people. In my next episode, I'm going
to go into some more of that. I'm going to
talk about the fact that a lot of these companies
are not really doing anything, that they're kind of doing marketing exercises as a means of manipulating the media and the markets. It's all just a disgraceful waste. The people
(41:02):
getting hurt by Generative AI are people that are working
contract jobs because it's harder than ever to get a
full time job. They're being replaced by shitty software that
costs too much money and will invariably go away when
this all collapses. And I feel that the media has
some responsibility here. The markets too, but definitely the media.
(41:22):
I understand that it's difficult to look at the tech
industry and think, are they all kind of leading us on?
But they've done it so many times and this is
the worst one I've seen. This is ridiculous. Billions of
dollars wasted, so much time, so much energy, And yes,
we can cover this, Yes we can talk about this.
We don't owe them a both-sides. Why do we
(41:44):
owe them that? They wouldn't give us one. Have you
ever seen a tech entrepreneur go, oh, that was a nice,
fair interview? Or do they usually piss and moan the
moment anyone says anything negative? No, the right way to look at this now is with suspicion, and frankly toward most innovations, but in particular this one. Because despite an alarming number of
(42:05):
people in the media saying things like, oh yeah, well, generative AI will of course lead to superintelligence, it's never going to do that. And we all... I don't know if you consider me a reporter or a journalist; that's a conversation to have with me via my email, I guess. But whatever I am, I should not be the only one that's really going out this hard, and
(42:26):
I know I'm not. There are other people. Some people to read: Brian Merchant, who wrote Blood in the Machine, fantastic writer, another great guy; Paris Marx, another great guy doing great work; and Ellen Huet of Bloomberg, who was on a few episodes ago. Great coverage of Sam Altman. But nevertheless, it is time
to start saying when we cover generative AI, specifically open
AI and Anthropic, hey, where are we with superintelligence? And
(42:50):
I don't mean what level we're at, I mean what
is the technology that you have that is going to
lead to that? And when they refuse to give an answer,
what should be covered is that they don't have one. Make them dance. Why do we have to do the
work to make AI important? Why doesn't generative AI impress
us right now? It's taking money out of people's hands,
(43:14):
It's killing freelancers. It's really horrible. And the thing is, you can cover fun apps, I don't have a problem with that, but when it comes to Google, Microsoft, Amazon, OpenAI, Anthropic, all of these companies, we need to treat them as kind of interlopers at this point, because they need to fucking show us what they're working on.
(43:35):
I don't care about the safety side. They
don't care about it either. We don't need to worry
about the safety issue right now, because the only safety
issue is the problem we are facing today, which is that millions of people's work is being stolen and millions of
other people are having their jobs taken in the most
mediocre way by shitty bosses who don't do any work.
When Sam Altman gets on stage and claims that AI
(43:58):
will solve all of physics, the appropriate answer is, what the fuck are you talking about? You sound like an idiot. When Brad Lightcap, COO of OpenAI, says, I don't know if it's trained on YouTube, you should say, why don't you know? And then when he bumbles away going, oh, I'm not sure, say it's really weird that a company worth eighty billion dollars has a COO who does not
(44:20):
know what's happening. Same goes for Mira Murati when she can't answer whether Sora was trained on YouTube. These people
must be treated, if not as criminals, as suspicious types,
as con artists, as people that have absorbed so much attention,
so much money, so much time, so much adulation. They
(44:43):
should be held accountable and holding them accountable starts with
a real evaluation of generative AI. And as I've shown
you today, this bubble is full of shit. I'm gonna
end on a happier note, though I looked it up
yesterday and as of speaking this out, it's only been
four and a half months of this show. I
thought it had been six months or a year. I
have time dilation issues in my brain and I will
(45:05):
be seeing a doctor. But I genuinely want to thank
all of you. I want to thank everyone who's been
there from last episode or the first episode, everyone who's
come through through different ways. This show's evolving and it
will continue to evolve. I know I'm angry, I'm pissy,
but I think at this time in history, there's a
reason to be. I'm not filling myself full of it.
(45:25):
It's naturally there. It's naturally there. When I look at an industry, a tech industry, that I genuinely love, that genuinely made me the person I am today, and I see what's being done to it, it makes me so pissed off. Because technology is everything we do. To discard tech as something that happens just on the computer, and then there's real life, is kind of silly. They're
(45:47):
both the same thing now. We are all influenced by offline and online culture, at times far more by online culture.
As this show progresses, I'm going to get more into that,
and the upcoming episodes I have are going to be
a lot of fun. The second one this week is
just a real laugh, and it kind of showed me
how this show can grow into a more conversational, fun,
(46:07):
interesting thing where I talk to all sorts of folks
about all sorts of things in tech and how it
affects them. I'm really grateful for you all listening. You'll
hear my email after that. And by the way, it's EZ, as in letter E, letter Z, or zed for my British listeners. Email me ez at betteroffline dot com,
message me on Twitter or Facebook or Instagram. I'm always
(46:27):
happy to hear from people. I want to know how
to make this better, but I know one way is
to just keep doing it. And I'm really excited for
the future and I'm very very thankful to have all
of you there, even the people who are very mad
at me, if you're, like, into generative AI. But seriously, though,
one final thing. People will see this stuff and they
(46:48):
do contact me fairly regularly, and they say, what can
I do? And I said this at the end of
the Shareholder Supremacy episode as well, but I'll say it
here as well. You can't do a ton. What you
can do is say people's names: Mira Murati, Sam Altman, Prabhakar Raghavan, Sundar Pichai. I know it sounds silly, but a lot of these ultra-rich people, they don't care about consequences, they don't really have predators. What they have is their reputations.
(47:11):
But also what they have is their lies. I know
it sounds a bit dramatic to say they're liars, but
look at what Sam Altman's saying. The only reason Sam Altman has been able to get where he is today
is a series of lies and cons. The way to
break generative AI, which should be broken until it
can prove itself profitable and not environmentally destructive, is to
(47:32):
push back, to constantly talk about how bad it is.
To share your bad experiences with it, share every hallucination,
share them publicly on social media channels. Tag Sam Altman, at sama on Twitter. Sama. Do it. I know this
sounds silly, but guess what these people have grown so
rich and powerful from nobody calling out their shit. And
(47:53):
there are people other than me. There are journalists who've done it.
And I'm not saying nobody does it. Don't get mad
at me. What I'm saying is thousands, tens of thousands, hundreds of thousands of people saying why this stuff sucks, why it's bad, will get to the tech people, and
eventually one of those messages will get to some of
these financial people, and when they really turn against this,
(48:15):
it will collapse. And if it doesn't, if I'm wrong,
well I'll do an episode about how wrong I am,
because, you know what, this is meant to be informative, it's meant to be entertainment. But also, I don't mind being wrong; I've just kind of yet to be, so someone's gonna be mad at that. Anyway, thank you. Thank you
(48:43):
for listening to Better Offline. The editor and composer of
the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com,
or visit Better Offline dot com to find more podcast links,
(49:03):
and of course my newsletter. I also really recommend you
go to chat dot wheresyoured dot at to visit the Discord, and go to r slash betteroffline to
check out our reddit. Thank you so much for listening.
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app,
(49:24):
Apple Podcasts, or wherever you get your podcasts.