Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Media. I'm mad as hell and I'm not gonna take it anymore. Welcome to Better Offline. I'm your host, Ed Zitron. Now.
Speaker 2 (00:23):
This is the second part of a two-part series about how
Speaker 1 (00:25):
OpenAI's stupid, bloody growth myth has conned the entire media ecosystem, forcing people who really should know better to repeat absolutely insane things as though they're actually perfectly reasonable. Now, if you haven't already listened to the last episode, go listen and come back. Just go, just go. I'll need you to have listened to it. Otherwise, let's talk about why this bubble actually inflated. And I realized I did a
(00:47):
monologue about this, but it's so obvious that I needed
to do a longer episode. But let's start simple. The
term artificial intelligence is bastardized to the point it effectively
means nothing and everything at the same time. When people
hear AI, they think of an autonomous intelligence that can
do things for them, and generative AI can do things
for you, like generate an image or a text from
(01:08):
a simple prompt. As a result, it's easy to manipulate people who don't know much about tech, or who never even try it in the first place, into believing that this will naturally progress from "it can create a bunch of the text that I have to write for my job just by me typing in a prompt" to "it can do my job for me just by typing in a prompt." Basically,
everything you've read about the future of AI extrapolates generative
(01:29):
AI's ability to sort of generate something a human would make and turns it into "it can do whatever a human can do," or leans on the fact that some tech in the past has sometimes been bad at the beginning and linearly improved as time dragged on. This illogical thinking underpins the
entire generative AI boom because we found out exactly how
many people do not know what the fuck they're talking
(01:50):
about, and are willing to believe the last semi-intelligent person that they talked to. Generative AI is kind of a remarkable con, a just-good-enough version of human expression to get it past the gatekeepers in finance and the media, knowing that neither will apply a second gear of critical thinking beyond "ooh, we'll do an AI now." The expectation that generative AI will transform into this much more
(02:10):
powerful version requires you to first ignore its existing limitations,
believing it to be more capable than it is, and
also ignore the fact that these models have yet to
show meaningful improvement over the past few years, unless you're looking, of course, at the benchmarks that they are built specifically to pass. They still hallucinate, they're still ungodly expensive to run,
they're still unreliable, and they still don't really do that much.
(02:31):
Worse still, ChatGPT's growth has galvanized these people into believing that this is a legitimate, meaningful movement rather than the most successful public relations campaign of all time. Think of it like this, and some of you really don't like this argument: "something couldn't be this big just because of the media."
Speaker 2 (02:48):
It really could.
Speaker 1 (02:49):
If almost every single media outlet talked about one thing, this thing being generative AI, and that one thing was available from one company, OpenAI, wouldn't it look exactly how things look today? You've got OpenAI with hundreds of millions of monthly active users, and then a bunch of other companies, including big tech firms with multi-trillion-dollar market caps, with somewhere between ten and sixty-nine million monthly active users. And I don't count Gemini, because
(03:11):
they're cheating. You can't slap it on Google Assistant. That's unfair.
What we're seeing here is one company taking most of
the users and money available and doing so because the
media fucking helped them. People aren't amazed by ChatGPT.
Speaker 2 (03:23):
They're curious.
Speaker 1 (03:24):
They're curious about why the media won't shut up about it,
and we have to realize and recognize and accept that
part of the reason this bubble was inflated was the
failure of Google Search. Everyone I talk to that uses
ChatGPT regularly uses it as either a way to generate shitty limericks or as a replacement for Google Search, a product that Google has deliberately made worse as a
(03:44):
means of increasing profits. Listen to our award-winning episode The Man Who Destroyed Google Search. I'm going to say that forever, and by the way, that's absolutely what you should listen to. It's why we're award-winning: the best business episode.
Speaker 2 (03:56):
We won it, folks.
Speaker 1 (03:57):
Anyway, I hate to say this, I really don't like
saying this. I don't like to give it to them,
but ChatGPT, if I'm honest, is better at processing search strings than Google Search, which is not so much a sign that ChatGPT is good at something as it is that Google has stopped innovating in any meaningful way. Over time, Google Search should have become something that was able to interpret searches into kind of like a perfect result,
(04:18):
which would require the company to improve how it processes
your requests. It should be how Google used to feel: magical. Instead, Google Search has become dramatically worse, mostly because the company's incentives changed from "help people find something on the web" to "funnel as much traffic and show as many ad impressions as possible on Google dot com." It does little
annoying things like ignoring random words in the search string,
(04:40):
even though those words were there for a reason. And Google doesn't know better than you what you want, though ideally they should when it comes to a search product. That's ideally what you want: you get someone to search something for you, and you want them to bring back the best thing. And by this point, Google Search should have been far more magical, more capable of taking a dimwitted question and turning
(05:01):
it into a great answer, with said answer being a result from the internet, right? Note that nothing I'm saying here is actually about generating results. It's about processing
a user's query and presenting an answer. The very foundation
of computing and the thing that Google at one point
was the best in the world at doing. Thanks to Prabhakar Raghavan, the former head of Ads who led a coup to become the head of Search, Google has pulled away from being the meaningful source of information on the Internet.
(05:24):
It really is actually quite sad when you think about it.
And I'd argue that ChatGPT filled the void left by Google Search by doing the thing that people wanted Google Search to do: answer a question even if the
user isn't really sure how to ask it. Google Search
has become clunky and messy, putting the burden of using
the service on the user rather than helping fill the
(05:44):
gap between a query and an answer in any meaningful way.
Google's AI summaries don't even try to do what ChatGPT does. They just generate summaries based on search results and say, "uh, mate, is this what you want? I don't fucking know, mate, I only make lots of billions of dollars a quarter, give me a fucking break." But one thought on Google's AI summaries: they're designed to answer a question
(06:04):
rather than provide the right answer. There's a distinction that needs to be made here, because it speaks to the underlying
utility of search itself. One good illustration came earlier this
week when someone noticed that you could ask Google to
explain the meaning of a completely made up phrase and
it would dutifully obey you. "Two dry frogs in a situation," Google said, refers to a group of people in an awkward or difficult social situation. "Not every insect has a mortgage,"
(06:26):
Google claimed, is a humorous way of explaining that not everything is as it seems. But my favorite is, sorry, I haven't read this one before: "big winky on the skillet bowl" is apparently a slang term that refers to a piece of bread with an egg in the middle.
Speaker 2 (06:40):
All very funny, sure, but is it useful?
Speaker 1 (06:44):
No. And it's also really cool that this company just
makes billions of dollars off of this. With all its
data and all its talent, Google has put out the
laziest version of an LLM on top of its questionably
functional search product as a means of impressing shareholders. And
I'd like to say the results speak for themselves. Now, it must be clear: none of this is to say that ChatGPT is good. It really isn't. It's
(07:08):
just better at understanding a user's request than Google Search. Yes, I fundamentally believe, by the way, that five hundred million people a week could be using ChatGPT as some sort of search replacement. And no, no, I do not believe that's a functional business model, in part because if it was, OpenAI would have a functional business. And to be clear, that's a load-bearing "could," because
(07:30):
I know that five hundred million people aren't using ChatGPT as a search replacement.
Speaker 2 (07:34):
It's hypothetical.
Speaker 1 (07:35):
We actually don't know, and we cannot trust what comes out of them, not that they share much. It's not just that OpenAI is a shambles of a company.
Speaker 2 (07:43):
It appears that.
Speaker 1 (07:44):
Google's ability to turn search into such a big business
was because it held a monopoly on search, search advertising,
and pretty much the entire online ads industry. And if
it was a truly competitive market and Google wasn't allowed
to be vertically integrated with the entire digital advertising apparatus
of the Web, it would likely be making much less
revenue per user. And that's bad if your Google replacement
costs many, many, many, many times more money
(08:06):
than Google to run. As an aside, by the way, if you're wondering: no, OpenAI cannot just create a
Google Search competitor. SearchGPT would be a significantly more expensive product to run at Google scale than ChatGPT, both infrastructurally and in the cost of revenue itself, with OpenAI being forced to create a massive advertising arm that does not exist. And by the way, another great
(08:27):
critique I get is, like, "OpenAI can just hire a bunch of ads people." Why do you think
there hasn't been another competitor to Search? You realize that
there are lots of other companies with lots of other
money that could just do this, and indeed they've tried.
Why do you think that hasn't worked? Is it because of the monopoly? It's because of the monopoly. It's because of the monopoly.
Speaker 2 (08:48):
Look, people love the ChatGPT interface.
Speaker 1 (08:51):
It's the box where you can type in one thing and get another thing out, because it resembles how everybody has
always wanted Google Search to work.
Speaker 2 (08:56):
Does it work well?
Speaker 1 (08:57):
Who knows, but people feel like they're getting more out of it. But before I go any further, I'd like to talk to you about AGI: artificial general intelligence. Artificial bloody
(09:18):
general intelligence, the conscious computer that does not exist and
is entirely fictional. And this two-parter, by the way, has been a break from my usual onerous, tortured, punishing analysis. And it's because I needed to have a little fun. And I think you've kind of heard that in the episode: I'm a little flexible, more limber and excited. And it also comes from
a place of frustration. None of this AI stuff has
(09:42):
ever felt substantive or real because the actual things that
you can do with generative AI never seemed to come close to the things that people like Sam Altman and Dario Amodei seem to be promising, nor do they come close to the bullshit that people like Casey Newton and Kevin Roose are peddling. None of this ever resembled artificial general intelligence,
and if I'm honest, very little of it seems to even suggest it's a functional industry. And when cynical plants like
(10:04):
Kevin Roose of the Times bumble around asking theoretical questions such as "do you think that there's a fifty percent chance or higher that AGI, defined as an AI system that outperforms human experts at virtually all cognitive tasks, will be built before twenty thirty?", think about the fact that we're reading people saying that. And that was Kevin Roose at some little AI panel, surrounded by nothing but venture
(10:25):
capitalists and the senior officials of Anthropic. When you've got the lead columnist of the Times saying this, we should all be terrified. We should be terrified not of AGI, but that the lead tech columnist at The New York Times appears to have an undiagnosed concussion. Roose's logic,
as with Casey Newton's, is based on the idea
that he's talked to a bunch of people that say, yeah, yeah,
(10:46):
AGI is right around the corner, rather than any kind of tangible proof or evidence. Just the line going up.
Speaker 2 (10:53):
Do you see it?
Speaker 1 (10:53):
The line's going up? Which line is it? Get out
of my house. Look, I will say I've been a little mean to Kevin Roose. I've been saying that he's written the worst shit about artificial intelligence, that he is a credulous buffoon or, at worst, a cynical plant, that Kevin Roose at The New York Times is regularly pushing things like NFTs and crypto and getting basic things wrong,
(11:17):
like the Helium piece in which Helium lied.
Speaker 2 (11:20):
About their customers.
Speaker 1 (11:23):
I've said that Kevin Roose regularly says things that don't make any sense, and he has indeed written the worst tech criticism out there, criticism being the wrong word: the worst takes, the worst analysis out there. And I want to just say that
I've been wrong. I've been wrong the entire time. What
I should have said is that he's written the worst
stuff yet. And I should have said that because he's
recently published one of the dumbest fucking things I've read
(11:44):
in my life, and I can't wait to tell you
about it. His most egregious example of credulousness came in
late April when he published this thinly veiled puff piece
about what to do if AI models become conscious in
the future. Now, this piece, this piece is possibly the
dumbest tech piece I've read in my life. I really
mean this. I've read some dog shit. I've gone on Telegram groups for the smallest ICOs you've seen in your life.
(12:06):
I'm like the Rutger Hauer speech from Blade Runner right now: I've seen things you people wouldn't believe. Anyway, this piece from The New York Times is called "If AI Systems Become Conscious, Should They Have Rights?"
And the subhead is as artificial intelligence systems become smarter,
one AI company is trying to figure out what to
do if they become conscious. I'm going to give you
(12:26):
exactly one guess what company he's talking about. And here's
a clue: it's the company Casey Newton's boyfriend works at. That's right, it's Anthropic. Kevin Roose interviewed two people, both employed by Anthropic,
with one holding the genuinely hilarious job description of AI
welfare researcher, who said nakedly insane things like there's only
(12:47):
a small chance, maybe fifteen percent or so, that Claude
or another AI system is conscious. And it seems that
if you find yourself in a situation of bringing some
new class of being into existence, dot dot dot, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences. Well, I'm experiencing brain damage reading this article out loud. And what
(13:09):
makes this article so appalling is that Roose acknowledges that
this shit is seen by most level headed people as
utterly fantastical. He describes the concept of AI consciousness as
a taboo subject and that many critics will see this
as crazy talk, but doesn't bother to speak to any
actual critics. He does, however, speculate on the motives of said critics, saying that they might object to an AI
(13:29):
company's studying consciousness in the first place because it might
create incentives to train their systems to act more sentient
than they actually are. Hey, Kevin, wouldn't it be really
bad if a generative AI company deliberately did something to
convince someone that their systems were sentient. You know, someone
really credulous would accept whatever narrative said AI company was pushing,
such as that they created a special job just in
(13:49):
case AI was sentient. Genuine question, Kevin, when you walk
past the mirror, do you start barking at it because you see another guy? Anyway, nothing about anything that OpenAI or Anthropic is building or shipping suggests that we are anywhere near any kind of autonomous computing. They've used the concept of AI safety, and now AI welfare, as a marketing term to convince people that their expensive, wasteful
(14:12):
software will somehow become conscious because they're having discussions about what to do if it does. And anyone, literally any reporter, accepting this at face value is doing their readers a disservice and embarrassing themselves in the process. If AI safety advocates cared about, I don't know, safety, or AI, they'd have cared about the environmental impact, or
the fact that these models train using stolen material, or
(14:32):
the fact that these models don't really deliver on their promises. Or the fact that if they did, it would shock the labor market and hurt millions, if not billions, of people, and we don't have anywhere near the social safety net to support the people we have right now, let alone in this completely fictional future. These companies do not care
about your safety, and they don't have any way to
get to AGI. They're full of shit, and it's time
(14:54):
to be honest: there's no proof that they'll do anything that they say they will. Also,
by the way, I think the bubble might actually be bursting. Yeah,
remember in August of last year I was talking about
the pale horses of the AI apocalypse. Well, one of
the major warning signs that the bubble was bursting would be big tech firms reducing their capital expenditures.
(15:14):
This is a call I've made quite a few times,
and there was one of alarming clarity from April fourth, twenty twenty four. And I'm going to quote myself here. Well, I hope... no, that's the wrong voice. Well, I hope I'm wrong. The calamity I fear is one where the
massive overinvestment in data centers is met with a lack
of meaningful growth or profit, leading to the markets turning
on the major cloud players that staked their future on unproven generative AI. If businesses don't adopt AI at scale, not experimentally,
(15:38):
but at the core of their operations, the revenue is
simply not there to sustain the hype, and once the
market turns, it will turn hard, demanding efficiency and cutbacks
that will lead to tens of thousands of job cuts. Well, we're about to find out if I'm right. On April twenty-first, Yahoo Finance reported that analyst Josh Beck
said that Amazon's generative AI revenue for Amazon Web Services would be five billion dollars, five billion dollars, this year,
(16:02):
a remarkably small sum that is not profit, by the way, and also a drop in the bucket compared to Amazon's projected one hundred and five billion dollars in capital expenditures in twenty twenty five, its seventy-eight point two billion dollars in expenditures in twenty twenty four, or its forty-eight point four billion dollars in twenty twenty three.
Speaker 2 (16:17):
Is that real? Are you fucking kidding me?
Speaker 1 (16:21):
Are you fucking kidding me? I have had jackals for years telling me this was the next big money driver.
Five billion dollars, five billion.
Speaker 2 (16:28):
Goddamn dog?
Speaker 1 (16:29):
Are you fucking kidding? You'd make more money auctioning dogs. This is a disgrace. And if you're wondering, yes, all of this is for AI, all of that capex from the Yahoo Finance article, the one that Laura Bratton wrote. Well, this is a quote, by the way, from the article:
CEO Andy Jassy said in February that the vast majority of this year's one hundred billion dollars in capital
investments from the tech giant will go towards building out
(16:50):
artificial intelligence capacity for its cloud segment, Amazon Web Services (AWS). Well, shit,
I bet investors are going to love this. Better save some money, Andy. What's that?
Speaker 2 (17:00):
What's that, Andy?
Speaker 1 (17:01):
What? Oh, you're saving money already? How? Oh, shit.
Speaker 2 (17:06):
Oh uh oh.
Speaker 1 (17:07):
A report from Wells Fargo analysts published at the end
of April, called "Data Centers: AWS Goes on Pause," says that Amazon has paused a portion of its leasing discussions on the colocation side, and while it's not clear the magnitude of the pause, the positioning is similar to what analysts have heard recently from Microsoft: that they are digesting aggressive recent lease-up deals, pulling back from a pipeline of LOIs or SOQs. Now, some asshole is going to
(17:30):
say LOIs and SOQs aren't a big deal. But they are. They are. Go listen to the Power Cut episodes. Go listen. Go back and listen to them right now. Get off my... how'd you get in my house?
Speaker 2 (17:39):
Anyway?
Speaker 1 (17:40):
Digesting in this case refers to when hyperscalers sit with
their current capacity for a minute, and Wells Fargo adds
that these periods typically last six to twelve months, though they can be much shorter. It's not obvious how much
capacity Amazon is walking away from, but they're walking away
from capacity. It's happening. But what if it wasn't just Amazon?
(18:11):
Another report came from a friend of the podcast, read: people I email occasionally asking for a PDF, of course. Analyst TD Cowen put out a report last week that, while titled in a way that suggested there wasn't a pullback at all, actually said that there was. It was called something like "Hyperscaler Demand Is Up."
Speaker 2 (18:25):
Let's take a.
Speaker 1 (18:25):
Look at one damning quote a relative to the hyperscalar
demand backdrop at PCC, which is a conference hyperscal of
demand has moderated a bit, driven by the Microsoft pullback
and to a lesser extent Amazon discussed later, particularly in Europe,
there has been a broader moderation in the urgency and
speed with which the hyperscalers are looking to take down capacity,
and the number of large deals a four hundred megwat
(18:47):
deals in the market appears to have moderated. In plain English,
this means demand has come down, there's less urgency to
build stuff, and the market is also slowing down. Cowen also added that it observed a moderation in the exuberance around the outlook for hyperscaler demand, which characterized the
market this time last year. Brother, my brother in Christ,
isn't this meant to be the next big thing?
Speaker 2 (19:09):
We need?
Speaker 1 (19:09):
More exuberance, not less. Worse still, Microsoft appears to have pulled back even further, with TD Cowen noting that there has been a slowdown in demand, and that it saw very little third-party leasing from Microsoft this quarter. And most damningly, and I really want you to take this in, let it ring through you: these deals in totality suggest that Microsoft's run-rate demand has decelerated materially. For those
(19:33):
of you wondering, it means that they're not getting any fucking revenue or demand for generative AI. Well, at least Meta and Oracle aren't slowing down, right? Well, TD Cowen also reported that it received reverse inquiries from industry participants around a potential slowdown in demand from Oracle, leading the analysts to ask around and find that there had been a near-term slowdown in decision-making amid
(19:55):
organizational changes at Oracle, though it adds that this might not mean Oracle is changing its needs, just the speed at which it secures capacity. And if you're wondering what
else this could mean, you were correct in doing so,
because slowing down traditionally refers to a change in speed. Yeah, even the analysts are kind of trying to dance around this. It's kind of sad. TD Cowen also adds
(20:15):
that Meta has continued demand, which means that they're buying more stuff, albeit with a lower volume of megawatt signings quarter over quarter, then adding that Meta's data center activity has historically been characterized by short periods of strong activity followed by digestion. In essence, Meta is signing fewer megawatts of compute and has in the past followed these kinds of aggressive buildouts with fewer buildouts. It doesn't sound good, does it?
(20:39):
It doesn't sound like we're cooking. And now we're going to get to the end, and the whole thing's been kind of defensive.
Speaker 2 (20:44):
I don't care.
Speaker 1 (20:45):
I mean, I've yet to receive an email that meaningfully
changes my opinion on any of this stuff. But the little bit of defensiveness in this isn't because I feel self-conscious. It's because I'm tired of hearing the same fucking questions. But I'll ask you, if you're a critic or a hater, or one of the many people who would like to see me punished: put me on the cross, set me on fire. It's what you would love, wouldn't it? You'd love it if I
(21:07):
was dead. Putting that joke aside, I will ask my haters a question: if I'm wrong, how exactly am I wrong?
I don't know, man. All this stuff sure seems like the hyperscalers are reducing their
capital expenditures at a time when tariffs and economic uncertainty
are making investors far more critical of revenues. It sure
seems that nobody outside of OpenAI is making any
(21:29):
real revenue on generative AI, and they're certainly not making
a profit. It also seems at this point it's pretty
obvious that generative AI isn't going to do much more
than it does today. If Amazon is only making five
billion dollars in revenue from the literal only shiny new
thing it has sold on the world's premiere cloud platform
at a time when businesses are hungry and desperate to
integrate AI, then there's little chance this suddenly turns into
(21:50):
a remarkable revenue driver. Amazon made one hundred and eighty-seven point seven nine billion dollars in revenue in its last quarterly earnings,
and if only five billion dollars of that is coming
from AI.
Speaker 2 (22:00):
At the height of the bubble.
Speaker 1 (22:01):
It heavily suggests that there may not actually be that much money to make either, because it's too expensive to
run these services, because, well, these services don't have the kind of addressable market that AWS generally has. Microsoft has reported that it was making a paltry thirteen billion dollars a year in annualized revenue, the equivalent of three point two five billion dollars a quarter, selling generative AI services
(22:23):
and model access. The Information reported that Salesforce's Agentforce bullshit isn't even going to boost sales growth in twenty twenty five, in part because Salesforce is pitching it as digital labor that can essentially replace humans at tasks, and it turns out that it doesn't do that very well at all, costs two dollars a conversation, and requires paying Salesforce to use its Data Cloud product. What, if anything, suggests that I'm wrong here? Is it that things have worked out
(22:46):
in the past with things like the Internet and smartphones, and surely that's going to happen with generative AI and, by extension, OpenAI? That companies like Uber lost money and eventually worked out? (I've actually written a thing I'll link to about that; you can get it in the episode notes.) Is it that OpenAI is growing fast
and that somehow discounts the fact that it burns billions
of dollars and does not appear to have any path
(23:07):
to profitability? Do you think that agents will suddenly
start working and everything will be fine? It's a fucking joke. I'm tired of it. I'm really... I'm just... I've tried to be intellectual, I've tried to explain it nicely,
and I'll probably still continue, but I'm just kind of
fucking tired of it. I'm not going to stop doing this,
by the way. If anything, I'm more galvanized by my rage.
(23:30):
My Oura Ring is going to claim I did a workout during these episodes. It's great, I'm burning calories in real time.
But large language models and their associated businesses, in my opinion,
are a fifty billion dollar industry masquerading as a trillion
dollar solution for a tech industry.
Speaker 2 (23:44):
That's lost the fucking plot.
Speaker 1 (23:46):
Silicon Valley is dominated by management consultants that no longer know what innovation looks like, and they've been tricked by Sam Altman, who's kind of a savvy con artist that took advantage of their desperation for growth. Generative AI is the perfect nihilistic form of tech, a way for people to spend a lot of money on power and cloud compute because they don't have anything else to do. Large language models are boring, unprofitable cloud software stretched to the limits,
(24:09):
both ethically and technologically speaking, as a means of kind of keeping tech's collapsing growth era going, and OpenAI's nonprofit mission has been fattened up to make foie gras for SaaS companies to upsell their clients and cloud compute companies to sell GPUs at an hourly rate. The rot
economy has consumed the tech industry. Every American tech firm
(24:30):
has been corrupted by the growth-at-all-costs mindset, and thus they no longer know how to make sustainable businesses that solve real problems, largely because the people that run them haven't experienced real problems for decades. As a result, none of them were ready for when Sam Altman tricked them into believing he was their savior. Generative AI isn't about helping you or me to do things, or even about replacing workers. It's about making up new monthly subscription
(24:53):
costs for consumers and enterprises, new ways to convince people to pay more for the things they already have, used in slightly different ways, that oftentimes end up being worse.
Only an industry out of options would choose this bubble,
and the punishment for them doing so will be grim.
I don't know if you think I'm right or not.
I don't know if you think I'm insane for the
way I communicate about this industry. Even if you think
(25:15):
I am, think long and hard about why it is
you disagree with me, and the consequences of me being
wrong or right. There's nothing else left after generative AI.
There are no hypergrowth markets left in tech. SaaS companies are out of things to upsell. Google, Microsoft, Amazon, and Meta do not have any other ways to continue their own growth. And when the market works that out, there will be
(25:36):
hell to pay. Hell that will reverberate through the valuations of, at the very least, every public software company, and many of the hardware ones too. I have no idea
how this ends, and I can't see how any of
it works out well for anyone. In any case, I'll
be here to tell you whether it does or not.
And I must say I love doing the show so much,
(25:57):
and thank you for humoring me when I experience brain
madness every week.
Speaker 2 (26:02):
I'm not gonna stop.
Speaker 1 (26:03):
I really enjoy doing this, and the emails I got from around the web were genuinely wonderful: the people on the Reddit, the people over email, the messages. You're all so wonderful.
Speaker 2 (26:13):
I'm gonna keep doing this. I mean, I'm.
Speaker 1 (26:15):
Contractually agreed to as well, but I love doing this
and I genuinely feel very.
Speaker 2 (26:20):
Lucky to be able to. Thank you for listening to Better Offline.
Speaker 1 (26:31):
The editor and composer of the Better Offline theme song
is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You
can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast
links and of course, my newsletter. I also really recommend
(26:53):
you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash betteroffline to check out our Reddit. Thank you so much for listening.
Speaker 2 (27:02):
Better Offline is a production of Cool Zone Media.
Speaker 1 (27:04):
For more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.