Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, Podcasts, Radio News.
Speaker 2 (00:18):
Hello and welcome to another episode of the Odd Lots podcast.
Speaker 3 (00:22):
I'm Joe Weisenthal and I'm Tracy Alloway.
Speaker 2 (00:25):
Tracy, covering the AI boom is actually reminding me a little bit of the tariff boom in April, simply because every day there are new headlines. Like just today, we're recording this November twelfth, Anthropic commits fifty billion dollars to build AI data centers in the US. So the advanced model companies are vertically integrating more to build their own data centers. Every day some new development.
Speaker 3 (00:47):
Yeah, it's becoming pretty hard to keep up. So I
think we're probably just going to talk in terms of
billions and trillions. We're just going to say lots and
lots of money is going into the space. But the
way I've been thinking about it is, okay, at this point everyone agrees that the AI buildout is super expensive, and all these companies are spending massive amounts of capex to
all these companies are spending massive amounts of capex to
(01:08):
do this, and I'm starting to think that AI capex
is kind of like the Schrodinger's Cat of markets in
the sense that it could either be a massive strength
for these companies because the capex is so expensive and
it takes so much money to build out, and so
anyone who manages to do it kind of builds a
moat around their business. Or it could be a massive weakness,
(01:32):
right if you're spending all this money and then that
doesn't end up generating the revenues that you actually need
to justify it. And going back to the Schrodinger's analogy,
it seems like we just don't know what's going to
come out of the box, right, Like it's simultaneously a
strength and a weakness, and until we build out AGI
or whatever, like, we're just not going to know.
Speaker 2 (01:53):
Totally, right, there's so much at stake here,
and obviously we know the numbers are absolutely enormous. They're staggering,
and we could talk about them too. The financing structures
are also very interesting.
Speaker 4 (02:04):
You know.
Speaker 2 (02:04):
It's one thing if you just have Meta or Alphabet
and they make a ton of money already and they're
spending money on data centers whatever. That's one thing. It's
another thing when you start seeing these SPVs where the
hyperscaler puts in this amount of money and then the
private credit puts in this equity and then they borrow
a bunch and then there's all these questions about the payback.
(02:25):
And we've thought of tech for years and years as basically being this equity story, and now it becomes a credit story. Yeah, and when, you know, people are
talking about quoting Oracle CDS, I always forget these companies even have CDS because I'm so unused to thinking of big tech companies as credits. So when I see people starting to tweet Oracle CDS charts or CoreWeave CDS charts,
(02:47):
It's like, Okay, we are in a different level of
capital intensity.
Speaker 3 (02:51):
Right, and some of those swaps have been going up lately.
I'm going to say one more thing, thinking back to
the two thousand and eight financial crisis. I remember the
economist at Raymond James, I think it was Jeff Saut, who
went on to become a very big name. Yeah, we
should have him on the podcast. But he made the
point that historically when you had real estate crashes property crashes,
(03:13):
it was usually because of a problem in the economy.
But then what happened in the run up to two
thousand and seven two thousand and eight is the housing
market crash became the proximate cause of the troubles in
the economy. And if you think about how much money
is being spent on AI right now again billions, trillions
possibly of dollars, it's very easy to see how AI
(03:37):
could morph into a problem for the wider real economy.
Speaker 2 (03:40):
Totally just on this note, and then we'll get into
our conversation. The Center for Public Enterprise is out with a great report today called Bubble or Nothing by Advait Arun, pointing out one of the things that makes data centers interesting is how they sit at this intersection of essentially industrial spending and real estate. It's an interesting asset class in its own right. So much to talk about. We
(04:02):
could never do it justice in one episode, but that
means we got to do more. Anyway. I'm very excited
for today's episode. We really do have the perfect guest.
Someone who's been writing about this for a long time,
someone who's just been writing about the Internet and all
things for longer than any of us, someone who's been
blogging and investing for far longer than either of us, or anything like that. Way more knowledgeable about how these
(04:22):
businesses work, and now very focused on the data center buildout. We're going to be speaking with Paul Kedrosky. He is a fellow at the MIT Initiative on the Digital Economy, also a partner at SK Ventures, and a longtime internet blogger, writer, newsletter yapper, etc. Someone we've never had on the podcast before. So Paul,
thank you so much for joining us.
Speaker 5 (04:44):
Hey guys, thanks, good to be here. Other than the blogging part, but...
Speaker 2 (04:47):
No, it's all true. You're a true pioneer in that, and it's impressive that you still write with the output that you do. At some point in the last year, I feel like you really got laser focused, maybe in the last two years, really got laser focused on the data center story as this is where the action is.
Speaker 6 (05:05):
Yeah, I did, and in part just because I caught
myself by surprise with it.
Speaker 5 (05:09):
It was weird.
Speaker 6 (05:10):
I was looking at first half GDP data, actually first quarter GDP data, earlier in the year, and you know, this has become commonplace, people know this now, but I hadn't realized what a large fraction of GDP growth in the first quarter data centers were. It was on the order of fifty percent, much larger if you included all sorts of externalities, all the other things that data center spending in turn kind of accelerates. And then obviously the
(05:31):
same thing was true in the second quarter, and I got to thinking about my dog, and my analogy is that...
Speaker 3 (05:36):
As one does, as one does.
Speaker 6 (05:38):
So, my dog barks when the mailman comes to the house and keeps barking, and then the mailman goes away. And I'm convinced he thinks he makes the mailman go away, right? He has this really screwed-up causality, and it's like, dude, even if you don't bark, he goes away anyway. It's part of the job. They just go away. And I think about macro policy in the same way: if you don't understand
(06:00):
the drivers of GDP growth, you're likely to think whatever it is you would most like to be causing GDP growth is doing that. So in the case of the US in the first half of the year, the puzzle was, well, maybe it's tariffs, maybe tariffs are actually contributing to it, maybe consumers are much
Speaker 5 (06:14):
More resilient than we expected.
Speaker 6 (06:15):
And as it turns out, a huge factor, probably the
largest factor, was this sort of unintentional private sector stimulus
program otherwise known as data centers. And for me, that started this puzzle of understanding this sort of disproportionate size, the consequences of that size, and the acceleration's consequences in terms of where the
(06:36):
money is coming from, and all
Speaker 5 (06:37):
Sorts of other things.
Speaker 6 (06:38):
But just to reframe in terms of something you guys
were already talking about, and this I think is super important in understanding why this particular episode is likely to turn
out to be historically really important.
Speaker 2 (06:50):
Wait, when you say you're referred to this podcast episode,
you're not referring to the broader episode of AI data centers.
Speaker 5 (06:57):
Entirely, just the podcast.
Speaker 6 (07:01):
Who cares about data centers at the ten-year anniversary of Odd Lots? So the reason why it's going to be historically important is because, for the first time, we've combined all the major ingredients of every historical bubble in a single bubble. We have a metabubble, no pun intended for Meta. We have real estate. You guys just
talked about this, right, Some of the largest bubbles in
US history had some relationship to real estate. We have
(07:22):
a great technology story. Almost all the large modern bubbles have something to do with technology.
Speaker 5 (07:27):
We have loose credit.
Speaker 6 (07:28):
Most of the major bubbles in some sense have a
loose credit aspect. And one of the other exacerbating pieces of some of the largest bubbles, thinking even about the financial crisis, is some kind of notional government backstop. You know, think about the role in terms of broadening home ownership in the context of the real estate bubble, and the role that Fannie and Freddie played in loosening credit standards
and all of those things. This is the first bubble
(07:50):
that has all of that. It's like we said, you know what would be great? Let's create a bubble that takes everything that ever worked and put it
Speaker 5 (07:58):
All in one. And this is what we've done.
Speaker 6 (08:00):
We've got a speculative real estate component. It's probably one of the strongest technology stories we've ever had, back to rural electrification, in terms of a technology story. We have loose credit. You guys talked about what's happening with respect to not just the role of private credit, but how private credit has largely supplanted commercial banks with respect to being the lenders here.
So we have all of these pieces that have all
come together at once, and I think in terms of
(08:21):
framing what's going on right now. It's really important to
understand that it brings together all of these components in ways we've never seen before, which is one of the
reasons why the notion that we can land this thing
on the runway gently is nonsense.
Speaker 3 (08:34):
I love that framing. The metabubble is perfect. Also, I had an epiphany earlier. I already told Joe, so he can attest to this, but I realized private credit kind of supplanted shadow banking as the term. Right, like after two thousand and eight, we called it shadow banking, and then at some point it flipped to, I guess, the cuddlier private credit.
Speaker 2 (08:54):
Shadow banking always sounded sinister in a way that private credit doesn't.
Speaker 3 (08:57):
Well, someone figured that out and they're like, well, now
it's private credit.
Speaker 5 (09:00):
I like to think of it as a kind of
financial witness protection program. It was like, oh, you're those guys.
That's great. Now, who are you?
Speaker 6 (09:07):
Yeah, it's kind of like that. And it's now like one point whatever, one point seven trillion dollars, the size of the private credit industry itself, which is larger than many components of the orthodox lending market combined. So that's a huge new piece of this that sometimes escapes notice, how big it is and why it emerged. So, all of those pieces.
Speaker 3 (09:26):
Yeah, it's stunning the growth that we've seen. Let me
ask a very basic question before we go further. But
one thing I've been wondering is Joe mentioned that anthropic
headline that we heard before. We've seen Meta raising financing
for data center builds, all that stuff. Why do these
massively profitable and cash rich companies have to raise financing
(09:48):
at all?
Speaker 6 (09:49):
Well, they don't, but there's these irritating shareholders out there who get all pissy whenever you start diluting earnings per share too much and diverting it towards a single source. Now, that's not the case with private companies obviously, but by the same token, OpenAI doesn't have the luxury of having cash flows via which they can do any of the things we're describing. So Anthropic, OpenAI and everyone
else they have no option other than to do exactly
(10:11):
what we're describing. It's a different story with respect to what percentage of Google's free cash flow or Amazon's free cash flow they want to continue to divert towards data centers. So in terms of the privates, this is the only option that they have. The publics, the hyperscalers, have increasingly gotten up to the point where around five hundred billion dollars, or fifty percent of their free cash flow, is going directly towards spending on
(10:33):
data centers, and that's obviously a point at which, you know, we have other things we have to do with free cash flow, including having some of it be earnings per share. And so increasingly it's become the option:
Speaker 5 (10:44):
You see what Meta is doing recently with respect to SPVs.
Speaker 6 (10:47):
We bring in other participants, create new financing vehicles, and
then we play this entertaining game of it's not really
our debt.
Speaker 5 (10:53):
It's in an SPV. I don't have to roll it
back onto my own.
Speaker 6 (10:56):
Balance sheet and then bring in new lenders, new private
credit firms and others.
Speaker 5 (11:00):
So that's the reason. Obviously it's partly because of the scale.
Speaker 6 (11:02):
It's partly because of the privates, who have no other option, and it's partly that we've kind of tapped out the public
companies in terms of the fraction of free cash flow
that they.
Speaker 5 (11:10):
Feel as if they can spend with impunity on these projects.
Speaker 2 (11:14):
Explain to us, for those who don't know. You know, again, SPV is one of these terms that we really haven't heard in a while. And there's nothing inherently bad about an SPV, except that you only hear about them typically after there's something, you know, some sort of crazy...
Speaker 5 (11:27):
Ride, which is weird obviously. But yes, tell...
Speaker 2 (11:29):
How would you, say in broad strokes, characterize what these financing vehicles are?
Speaker 6 (11:35):
So mechanically, it's just a way of making sure that I don't have to roll debt onto my balance sheet. But legally, it's a structure into which I and my partners contribute capital, in exchange for which it retains legal title to the project that we've created, which allows us all to contribute capital but not have to put it back on my balance sheet, and therefore not to have that debt rated.
Speaker 5 (11:55):
Which is really the key.
Speaker 6 (11:57):
Now, if you look at the actual intrinsics of, say, for example, the project that they did in conjunction with Blue Owl, it's wild and byzantine. It looks like something you might have seen in, what was that in Harry Potter, the forest with all the spiderwebs?
Speaker 5 (12:08):
It looks a little like that, right where.
Speaker 6 (12:09):
everything's connected to everything, and all you know is something in there is going to get you. So there's incredible complexity, but at the core, it's a mechanism via which I can raise more capital and keep it off my balance sheet by creating a legal entity that controls the actual data center, and I don't therefore have to put it back, roll it all back onto my balance sheet, and have it rated.
Now there's weird intricacies, obviously. So for example, what happens
(12:31):
if at some period in the future this thing isn't performing the way we expect? Who owns it at that point?
Speaker 5 (12:37):
Is there a payment exchange, does.
Speaker 6 (12:38):
It become metas, does it become blue ouls, does it
become someone else? And these things will turn out to matter.
Right now, no one cares. If you go through some
of the documents on these things, it's not entirely clear
what the recourse payment will be, if and when it ever has to revert back to another owner,
and it's not going to be held on to by
the SPV. And I think this will turn out to
be really important four or five years down the road,
(13:00):
but right now nobody cares.
Speaker 3 (13:16):
So, number one, the lifespan of data centers is actually
not that long. I can't remember the exact estimate, but
maybe like three or four years something like that. And
then also you have this risk that tenants are sort
of rolling through and no one knows what that actually
means for the structure of the debt, and you kind
of get this asset liability mismatch.
Speaker 6 (13:37):
Yeah, so I'll start with the first one first. So
this gets into something Michael Burry was tweeting about the other day, which was sort of entertaining, that back about four years ago, tech companies changed the depreciation schedule for the assets inside of data centers.
Speaker 5 (13:53):
They extended them somewhat. Now, that wasn't an error.
Speaker 6 (13:57):
The reality is that in data centers used for purposes like at AWS, where you've got a big S3 bucket and I'm storing data inside of it, generally speaking, the assets are long-lived. I'm not running them flat out. These are not street racers that I'm running around inside of a data center. These are relatively inexpensive chips that I'm using for really mundane purposes, like storing large amounts, terabytes, exabytes of data inside
(14:20):
of S3 buckets. So it's not unreasonable to say their lifespans are fairly long. They're not being taxed that heavily,
so pushing out the depreciation schedule makes a lot of sense.
But that was coincident with the emergence of GPU driven
data centers using products like the chips from Nvidia, and
those have much shorter lifespans, depending on the usage.
Speaker 5 (14:39):
So there's two.
Speaker 6 (14:39):
Different reasons why the lifespan and therefore the depreciation schedule
of a GPU inside of a data center is very different.
So the reason most people think about is, oh, well,
technology changes really quickly and I want to have the
latest and greatest, and therefore I'm going to have to
upgrade all the time. That's important, but it's probably about equal to, if not maybe slightly less important than, the nature of how
(15:01):
the chip is used inside the data center. So when
you're using, like, the latest, say, Nvidia chip for training a model, those things are being run flat out twenty-four hours a day, seven days a week, which is why they're liquid cooled. They're inside of these
giant centers where one of your primary problems is keeping
them all cool.
Speaker 5 (15:18):
It's like saying I bought a used car and.
Speaker 6 (15:20):
I don't care what it was used for. Well, if
it turns out it was used by someone who was
doing, like, the Le Mans twenty-four hours of endurance with it,
that's very different. Even if the mileage is the same
as someone who only drove to.
Speaker 5 (15:31):
Church on Sundays.
Speaker 6 (15:32):
Right, these are very different consequences with respect to what's
called the thermal degradation of the chip. The chip's been
run hot and flat out, so its useful lifespan might be on the order of two years, maybe
even eighteen months. So there's a huge difference in terms
of how the chip was used, leaving aside whether or
not there's a new generation of what's come along. So
(15:53):
that takes us back to these depreciation schedules. So these
depreciation schedules changed just as the lifespan of the chips changed dramatically, because I can use something for, you know, storing things in S3 buckets for a long time, six to eight years isn't unreasonable. But if I'm doing the Le Mans endurance equivalent with a GPU, it might be eighteen months. That's a huge
(16:16):
difference in terms of the likely lifespan of a product that I'm depreciating over a very different period. And so that's a huge part of the problem here with respect to understanding the intrinsics in terms of how data centers can and can't make money, how you have to think about the likely capex requirements because of this much shorter lifespan of the underlying technology, and then...
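As a rough sketch of the depreciation arithmetic being described here: a straight-line schedule diverges sharply between storage-style hardware and hot-run training GPUs. The fleet cost and the exact midpoint lifespans below are assumptions for illustration, not figures from the conversation.

```python
# Straight-line depreciation: expense the asset evenly over its lifespan.
def annual_depreciation(capex: float, lifespan_years: float) -> float:
    return capex / lifespan_years

fleet_capex = 10e9  # hypothetical $10B of data center hardware

# ~6-8 year lifespan for storage-style hardware vs ~18-24 months for
# GPUs run flat out for training (midpoints assumed here).
storage_style = annual_depreciation(fleet_capex, 7)
training_style = annual_depreciation(fleet_capex, 1.75)

print(f"storage-style:  ${storage_style / 1e9:.2f}B per year")
print(f"training-style: ${training_style / 1e9:.2f}B per year")
print(f"expense multiple: {training_style / storage_style:.1f}x")
```

Same hardware bill, roughly four times the annual expense, which is why the choice of schedule moves reported earnings so much.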
Speaker 3 (16:36):
Talk about the tenancy rollover risk. I guess we might
call it.
Speaker 6 (16:41):
Yeah, it's really interesting. So one way to think about
data centers is as giant apartment buildings. Right? They're essentially gigantic pieces of commercial real estate with a bunch
of tenants. Sometimes there's a lot of tenants, sometimes there's
only one. Sometimes Google bought the whole apartment building and
just moved in, Or it's a giant office building they
just moved in. It's all theirs, right, So think about
it in those sorts of terms. And the reason why
(17:02):
as a sponsor of a data center I might take
a different view on how many tenants I want is, again, you think about it in terms of, what can I get Google to pay versus what can I get someone who's a much flightier tenant to pay? Well, I can get the flightier tenants, more of them and diversified, all leasing inside the data center, paying higher lease rates for GPUs over the period of tenancy than
(17:24):
I can get a Google to pay. Why? Because Google's got great credit, they don't have to pay very much, and they know they don't.
Speaker 5 (17:29):
So if you look at the commercial real estate.
Speaker 6 (17:31):
data, the cap rate, the blended cap rate for the largest data centers that are tenanted by hyperscalers is horrible. It's like four point eight, five point three percent. It's like, why don't you just buy a Treasury? What are you doing?
So what happens then is people start blending in more different kinds of tenants, to Tracy's point, as an effort to try and improve the yield, the cap rate, on
(17:53):
the underlying instrument, which is the data center. And all of this should start to sound familiar, because it's this idea of, if I blend together all of these different tenancies, I can increase the yield of the securitized instrument, but that also changes the risk profile of what comes out at the other end. Which just takes us to things like the increasing usage of these things in asset-backed securities, which are these tranched securities that
(18:15):
have all the different pieces, different layers associated with it. And that's a reflection of, well, there's different tenants inside these data centers, and people want different exposures to risks. So I may only want to buy the senior tranche, you may want to buy the mezzanine, and Tracy may want to buy the equity tranche.
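The tenant-blending math described here can be sketched as a weighted average. The roughly five percent hyperscaler cap rate comes from the conversation; the flightier-tenant rate and the fifty-fifty mix are assumptions for illustration.

```python
# Weighted-average ("blended") cap rate across a tenant mix.
# tenants: list of (share_of_the_asset, cap_rate) pairs; shares sum to 1.
def blended_cap_rate(tenants: list[tuple[float, float]]) -> float:
    return sum(share * rate for share, rate in tenants)

# Hyperscaler-only tenancy, at roughly the ~5% rate cited above.
hyperscaler_only = blended_cap_rate([(1.0, 0.050)])

# Mix in flightier tenants paying up for GPU capacity (the 9% rate
# is an assumed illustrative figure, not from the conversation).
mixed = blended_cap_rate([(0.5, 0.050), (0.5, 0.090)])

print(f"hyperscaler only: {hyperscaler_only:.1%}")
print(f"blended mix:      {mixed:.1%}")
```

The blended yield improves, but, as with mortgage pools, the single blended number now hides a very different risk profile underneath.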
Speaker 3 (18:31):
Can I just say, I know we already said this,
but Paul is truly, truly the perfect guest. I remember
reading his coverage of subprime and securitization in like two
thousand and eight, and so having someone who's able to
synthesize that experience with what's going on now is just fantastic.
Speaker 2 (18:50):
I kind of can't believe we're doing this again. I know,
I mean, look, I mean again, there's nothing inherently wrong with SPVs. There's nothing inherently wrong with tranching, right? Like a lot of these things are very intuitive, etc. But it is still a little weird how central this is and how it's the same old thing. I mean, on some financial level, it feels very familiar.
Speaker 5 (19:10):
No, there's nothing new under the sun.
Speaker 6 (19:13):
But I think that point is really important. It's not that tranches are evil. It's not that securitization is evil, or that asset-backed securities or project finance are evil.
Speaker 5 (19:21):
No, all of these things are terrific pieces of the arsenal.
Speaker 6 (19:26):
Whenever you're actually raising money for projects, the issues start to arise at this scale, which is what you guys have already alluded to. But the secondary piece, which again will sound painfully familiar from the financial crisis, is there's a flywheel that gets created at the back end of this. So once you start securitizing the yield-producing assets in the form of these tranched securities, the people who are
(19:47):
purchasing those things don't give a rat's ass what's going on inside this AI thing. I joke all the time that a lot of these people can't spell AI. They don't care what's going on inside the
Speaker 5 (19:57):
Data center, right.
Speaker 6 (19:59):
It could be, you know, the World Hide and Go Seek Championships going on in there. I don't care as long as it generates yields and I
Speaker 5 (20:05):
Can securitize it.
Speaker 3 (20:06):
Well.
Speaker 6 (20:06):
It's very much analogous to what's happened in prior periods like this, where again you get this secondary flywheel effect of, let's just create more of these things because our customers want more, and they're really easy to securitize, and look, it gets backstopped by Meta and Google or
whoever else.
Speaker 2 (20:21):
Well, so this actually brings up an important point. I mentioned this great report out from the Center for Public Enterprise. One of the things that they pointed out is, in this market environment where everyone is just, you know, there's this sort of AI pixie dust, but also just the reality that if your revenues are surging, the market probably loves you,
(20:41):
like, talk to us about the unit economics here. Is the incentive for all the players essentially to just grow the top line as much as possible, even if, whether we're talking about inference on a per-token basis, even if these aren't particularly profitable? How do you think about the unit economics of some of these businesses and
(21:01):
how that could eventually perhaps, sort of, you know, come home to roost, so to speak?
Speaker 5 (21:06):
Yeah, So.
Speaker 6 (21:09):
The term of art obviously is these things have negative
unit economics, which is a fancy way of saying that
we lose money on every sale and try to make
it up on volume. Right, I mean, that's the problem here. But that's okay. I mean, we've had lots of these. Amazon
Speaker 5 (21:21):
In its early days had negative unit economics. You can
get past that.
Speaker 6 (21:25):
And as an aside, I'll say right here: none of the things that I'm saying is to say that, you know, AI is some kind of, you know, Tamagotchi thing that's just a fad. It's an incredibly important technology.
What we're talking about is how it's funded and the
consequences of doing that in terms of what's going to
happen with respect to the businesses and the return on
those businesses. Right, So, the unit economics are dire for
(21:46):
a bunch of reasons, mostly having to do with the fact that the more tokens you have to produce, the more costs rise, more or less linearly with the demand on the system. As opposed to an orthodox software business, where the more people use my service, the more people across which I can spread my relatively fixed costs. That's not the way that, for the most part, current generation large language models
(22:09):
work, where costs rise more or less linearly with the number of users,
which makes for really crappy unit.
Speaker 5 (22:16):
Economics, and that's a big part of the problem.
Speaker 6 (22:18):
So from there you get to the question of, okay, so what does it have to look like in terms of making it look profitable? There's lots of ways to back into this. You can do bottoms-up models, which would suggest that, like, if every iPhone user on earth paid fifty bucks, we could have around a four hundred billion dollar, five hundred billion dollar annual stream of revenue flowing. And well, that's not going to happen, but it's worth pointing out, like, that would do it.
(22:39):
But it gives you a sense of the kind of
scale of what at a consumer level, for example, it
might have to look like.
Speaker 5 (22:45):
People come at it from the other end.
Speaker 6 (22:46):
One of my favorite ways that people come at it is to say, well, we could create a viable model here. I think this was in the JPM call last week. I don't know if you guys saw the summary of it, but it was huge fun for the whole family. And so one of the ways they backed into it was a top-down model where they said, well, the global TAM for human labor...
Speaker 5 (23:05):
I love the five trillion dollars. I love the global TAM, I said.
Speaker 6 (23:08):
That was right up there with saying like if I
reduce humans to their chemical components, here's what.
Speaker 5 (23:13):
I can get for you.
Speaker 3 (23:14):
Well, this was Steve Eisman's line, which was like, beware of anyone that mentions TAM, right?
Speaker 6 (23:21):
Right, right, no exactly, and so then and then they play.
The next step is of course to say, well, imagine
we can get ten percent of that, right, which is
which is obviously one of the oldest cliches. It's like saying,
you know, I'm going to get five percent of the
Chinese market. No one ever gets five percent of the
Chinese market.
Speaker 5 (23:35):
This doesn't happen.
Speaker 6 (23:35):
So the same thing won't happen with global labor. But if you were to do that, if you do the math on that call, those kinds of numbers get you, on a weighted average cost of capital basis, to a reasonable return on current and planned expenditures with respect to AI data centers, if you assume we're heading to about a three or four trillion dollar number, which is kind of, I think it's around the number that
(23:57):
most people put out there, which I think is a completely wrong number, but nevertheless, that's the kind of number and what you'd have to do to get there.
Speaker 5 (24:01):
So you can get there from.
Speaker 6 (24:02):
A bottoms up model by making some really unreasonable assumptions
about the total numbers of subscribers and what they pay.
You can get there from a top down model. You
can also get there by thinking about it purely in
terms of industrial users. Think about purely API users, as if end retail users of AI don't exist. And say, you know, Anthropic is projecting seventy billion dollars in revenue in
(24:22):
twenty twenty eight. Most of their revenues today are from their API, and something like thirty five percent of that is from software developers, split between two large users, Copilot and Cursor. And so
you know, we can model that out. Everybody has to
become a software developer.
Speaker 5 (24:40):
And we can make the math work.
Speaker 6 (24:41):
The problem is it's got huge fragility, right, in customer concentration risk. So Cursor disappears as a user of Anthropic's API, and you just blew out fifteen percent of your revenues, because they're gone and they've done something else. And as it turns out, Cursor two weeks ago announced that they were training their own internal model that you could use for software development, and you wouldn't have to call the Anthropic API. So you can think about
(25:03):
all these different ways to get there, but they all have a lot of built-in fragility with respect to, well, we all become software developers and we all subscribe to Cursor.
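The top-down "TAM math" described here reduces to a couple of multiplications. The five trillion dollar labor TAM and the ten-percent-capture step come from the JPM framing in the conversation; the ten percent cost of capital is an assumed placeholder, not a figure from the discussion.

```python
# Top-down model: global TAM for human labor, times the classic
# "we'll capture 10%" assumption.
labor_tam = 5e12
capture = 0.10
implied_revenue = labor_tam * capture  # revenue if the cliché held

# Return the buildout must earn: midpoint of the ~$3-4T figure
# discussed, times an assumed 10% weighted average cost of capital.
buildout = 3.5e12
wacc = 0.10
required_annual_return = buildout * wacc

print(f"implied revenue:        ${implied_revenue / 1e9:.0f}B per year")
print(f"required annual return: ${required_annual_return / 1e9:.0f}B per year")
```

Only under the ten-percent-capture assumption does the implied revenue clear the capital-cost hurdle, which is exactly the fragility being flagged.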
Speaker 3 (25:12):
Just going back to the used car analogy that you
mentioned before, when we're thinking about all this financing of
the AI capex spend, is it useful to think of GPUs essentially as the collateral here?
Speaker 5 (25:26):
Yes.
Speaker 3 (25:28):
Or what would you call the collateral in this case?
Speaker 5 (25:29):
So what ends up happening.
Speaker 6 (25:31):
The collateral in this case is the GPU. There's no question it is the GPU. The issue is this disconnect,
this temporal mismatch that you alluded to earlier with respect
to the duration of the underlying debt and the assets
that are producing.
Speaker 5 (25:42):
The income that allows me to pay for the debt.
Speaker 6 (25:44):
Right, so we've got this probably unprecedented temporal mismatch with
thirty year loans and two year depreciation on the underlying collateral,
which is essentially the GPUs that are the income producing assets.
And so that creates this constant refinancing risk, because you continually have to turn over the base. And we've seen this many, many times. Right now it's easy to turn it over, but in two years
(26:05):
it may not be possible. There's a wave of refinancings coming in twenty twenty eight in many of the more
Speaker 5 (26:09):
Speculative data centers.
Speaker 6 (26:11):
Will they be able to turn over their debt and
refinance all the GPUs?
Speaker 5 (26:14):
Today they could. But this isn't twenty twenty eight yet.
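The refinancing treadmill being described can be made concrete with a toy schedule: a thirty-year loan amortizing slowly against GPU collateral that depreciates over roughly three years. All the numbers below are illustrative assumptions, not terms from any actual financing.

```python
# Toy sketch of the temporal mismatch: 30-year debt vs. short-lived GPU
# collateral. All figures are illustrative assumptions, not real deal terms.

loan_principal = 100.0
loan_term_years = 30
gpu_life_years = 3        # somewhere in the 2-4 year range discussed

for year in range(5):
    # straight-line amortization of the loan over 30 years
    loan_balance = loan_principal * (1 - year / loan_term_years)
    # straight-line depreciation of the GPU collateral over ~3 years
    collateral_value = max(0.0, loan_principal * (1 - year / gpu_life_years))
    shortfall = loan_balance - collateral_value
    print(f"year {year}: loan {loan_balance:5.1f}, "
          f"collateral {collateral_value:5.1f}, shortfall {shortfall:5.1f}")
```

By year three the collateral has fully depreciated while ninety percent of the loan is still outstanding, so the borrower has to refinance against new GPUs, and has to hope the market then looks like the market today.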
Speaker 6 (26:16):
So that's the inherent problem: this structural temporal mismatch
between the income producing assets and the duration of the loans.
And it gets worse if you think about it in
more realistic terms. Think about it in terms of one
of the other gating factors here that's driving all
Speaker 5 (26:31):
Of this is the scarcity of energy supply. It's really difficult.
Speaker 6 (26:35):
You can hook them up to the grid. Well, it's actually
kind of turned into a bit of a joke: I
can hook you up to the grid, but I can't
give you power. I don't know if you saw the
recent episode with the Oregon Public Utilities Commission. Amazon had
three data centers that they connected to the grid, and
it was kind of like the Oregon PUC said, oh,
you want power too? Oh, I can't help you with that.
Speaker 5 (26:52):
We can't help you with that.
Speaker 6 (26:53):
So now there's a complaint at the Oregon PUC
from ADS, Amazon Data Services, the group that runs AWS's data centers,
complaining we now have data centers, but.
Speaker 5 (27:01):
We have no power.
Speaker 6 (27:02):
Right, it sounds a little bit like a winter
storm hazard or something, but it's the structural problem with
respect to this inability: we can connect people, but we
can't provide them with power. So the next stage,
and this takes us back to the collateral problem and
the temporal mismatch, is that people are doing behind the
meter power. They're building natural gas, or if you're Fermi,
you're saying wild things about nuclear power and you're saying, Okay,
(27:26):
I'm coming with my own power. You don't need to
connect me to the grid. I'm going to power this myself.
That creates two or three different issues, but among the
more important is think about how long lived an asset
a natural gas plant is. This is not something that's
got a five year lifespan and then we just wave goodbye.
This is going to be running probably twenty five to
thirty years. And then think about your ability to forecast.
(27:48):
We know the cost of the natural gas plant, but
in terms of the cost of the data center, and its
ability to generate enough income to pay off the loan
associated with the natural gas plant, God help you if
you think you can sort that out. Because what you've
really got is a huge likelihood of a stranded asset:
natural gas plants that are no longer useful for
powering the things that they were built for.
Speaker 2 (28:23):
The good news is that Daniel Yergin said this on
the show. You know the back orders for natural gas turbines,
like, if you ordered one today, you would
probably get it in twenty thirty. So the good news
of that, I suppose, is that at least you
don't have to have the turbines sitting there for years.
Like, I don't know, maybe I don't know if that's
good news at all. But there's a scenario where you may
(28:44):
never get it. You may never get the gas
plant built. Anyway, someone will be stuck with the bill.
Speaker 6 (28:48):
This goes back to Tracy's question earlier. It
raises a really interesting thing. So like, honestly, what
the f are all these people doing who are announcing
these giant funding transactions? I think of it like people
all showing up at the OK Corral at once, and
it's like, dude over there has one gun, I got two.
Speaker 3 (29:07):
Yeah, I got... oh, that's not a nice analogy, is it?
Speaker 5 (29:10):
Yeah.
Speaker 6 (29:10):
But it's this deterrence. It's this deterrence program that's going on.
Don't even imagine spending fifty because I'm spending one hundred.
Speaker 5 (29:17):
No point in you doing any of those. That's very
game theoretic.
Speaker 3 (29:20):
Well, this also worries me because you hear so many
people framing this as like an existential competition. Right, and
once you start calling something existential, the limit on spend,
well it becomes unlimited.
Speaker 5 (29:33):
Right.
Speaker 3 (29:33):
It's about survival, so you'll spend anything.
Speaker 2 (29:35):
That's why the conversation has turned in recent weeks to
the one entity that actually, at least in theory, can
print as much money as possible.
Speaker 6 (29:43):
Right, that's the, you know, Sarah Friar accidental foot-in-mouth
thing earlier in the week.
Speaker 5 (29:49):
But that's right.
Speaker 6 (29:49):
But that again goes back to my original point about
what makes this bubble unusual. It's this element that not
only is there a kind of backstop, but there's actually
a notion of wrapping it in the flag. We have to
win this competition, we have to do what it takes.
This is existential. It's US versus China, and it's not
just the US doing this. I was talking to some
(30:10):
Canadian policymakers just earlier this morning, exact same thing going
on there: we have to build a domestic capability. The
same thing in the UK, same thing in Germany. And
so there's this idea around the world that sovereign AI
is something that's incredibly important. So this government backstop
isn't just domestic, it's global. It's this idea that
we all have to win, we all have to win,
which obviously can't happen, but that the government's playing a
(30:32):
role, and that becomes this kind of
limitless source of capital.
Speaker 2 (30:35):
You know. So one of the things that's going on,
and maybe it's part of the same sort of
maximalist strategy: I mentioned Anthropic wants to get into data centers,
so everyone's sort of looking at how they can expand vertically.
Can I own the data centers? I think? You know,
Sam Altman has talked about owning chips or owning a
semiconductor fab at some point, like maybe that'll be part
(30:57):
of the story. Who knows. There's one thing that I'm
sort of curious about, I'd love to have your take
on. At the end of September, Meta announced
a deal to buy compute from CoreWeave, one of
these neo clouds. I don't totally get that because Meta
has its own data centers, et cetera. Do you have
some intuitive sense about what an established hyperscaler needs a
(31:19):
neo cloud for in this arrangement, what core Weave can
supply that Meta can't build on its own or buy
on its own.
Speaker 5 (31:26):
Nothing.
Speaker 6 (31:29):
So here's what's going on: there's this form of hoarding
going on. What's happening is people saying, you have capacity, I
can lock that up.
Speaker 5 (31:40):
I'll lock that up.
Speaker 6 (31:42):
And because I can't lock it up yet by building
a data center quickly enough, I'll lock it up in
the marketplace. So you start thinking of compute as
a hoardable commodity, and what people are doing is trying
to hoard it, control it before someone else can, until
they can bring on their own capacity. That's
really what's going on in a lot of these transactions.
This is a way of making sure that I may
(32:03):
not need this, but you sure can't have it. And
so there's an element of compute hoarding going on
across the map, because of, you know, this backlog in building
Speaker 5 (32:11):
Data centers that may or may not ever get built.
So that's the answer.
Speaker 6 (32:14):
The answer isn't that they care at all about whether
or not they can run giant workloads on any particular
neo cloud provider. It's the idea of hoarding capacity and
making sure that no one else can have it, like
the Hunt Brothers trying to corner the silver market.
Speaker 3 (32:29):
You know, I want to go back to China because
it is true that the US and China seem locked
in this existential race for AI supremacy, but they seem
to be taking very different approaches to it. And in
the US, it's all about spending as much money as
you can developing these you know, state of the art,
mostly closed source models, whereas in China it seems to
(32:50):
be much more about rapid adoption and creating open source
models that just get out into the market much faster
and much more cheaply. And so I'm curious, like, which
of those approaches do you think is going to win?
Speaker 5 (33:04):
Here.
Speaker 6 (33:05):
Yeah, so that's a really good question. So I think
it's going to be something closer to.
Speaker 5 (33:11):
The Chinese approach, but not for the reasons they expect.
Speaker 6 (33:14):
So the reason is because, well, let me reframe
what the Chinese are doing slightly. I'll say that
instead of it just being sort of an example
of open source, I don't think that's right. The right
way to think about it is they're using this kind
of distillation approach increasingly, where, think about it like,
okay, I'm a sales manager. I
don't want to train all my salespeople. I'm going to
train this dude.
Speaker 5 (33:33):
And they're going to train all the salespeople. But that's distillation, right.
Speaker 6 (33:35):
You train the trainer: I train somebody who trains something else,
and the something else in this case is these smaller models.
So that approach of kind of training the trainer really
speeds up the process of creating new models, because I
distill them, I train them out of other
models that are really compute intensive, like Anthropic's or
OpenAI's or whoever else's, right. So the notion is,
(33:57):
there are huge efficiency gains to be had
in training, and the Chinese are showing the huge efficiency
gains to be had. And one way to think
about it is that the transformer models that underlie large
language models, which are so computationally intensive, went from the
lab to the market faster than any product in technology history.
So they're absolutely bloated and full of crap, right? So
(34:19):
these things are wildly inefficient. There's all kinds of other
ways to do the same sorts of things, one of
which is distillation. So what you're really seeing is a
kind of an accident of history, the path we came down.
The US came down this path that led directly out
of the original transformer paper in twenty seventeen, and the
Chinese have said, yeah, we're not going to be able
to do that for a bunch.
Speaker 5 (34:39):
Of different reasons. But we don't have to do.
Speaker 6 (34:41):
That, because I can take this approach of distillation, which
lets us get there.
Speaker 5 (34:44):
If you look at Kimi, this sort.
Speaker 6 (34:46):
Of relatively recent open source models. These things are actually really
effective and benchmark very well, and it's not surprising, because
they've been trained by really good trainers, which.
Speaker 5 (34:54):
Is to say some of the other models that are
out there.
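The "train the trainer" mechanic Paul is describing is knowledge distillation: a small student model is trained to match the softened output distribution of a large teacher, which is far cheaper than training from scratch. A minimal sketch of the core loss, in plain NumPy with toy logits; real pipelines add temperature schedules, data curation, and much more.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's output.
    Minimizing this during training pulls the student toward the teacher."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, -2.0])   # toy next-token logits from a big model
mimic = np.array([3.9, 1.1, -1.8])     # a student that has learned to imitate
naive = np.array([-1.0, 3.0, 0.5])     # an untrained student

print(distillation_loss(teacher, mimic))   # small: distributions nearly match
print(distillation_loss(teacher, naive))   # large: student disagrees
```

The economic point follows directly: each student gets most of the teacher's behavior for a fraction of the teacher's training compute, which is why distilled open models can benchmark well cheaply.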
Speaker 6 (34:56):
But these are about efficiency gains, which should then raise
the question: whoa, wait a minute, if there's all
these efficiency gains ahead from training, and training is seventy
percent of the workload on data centers, hang on a second,
aren't we completely misforecasting the likely future arc of
demand for compute? And the answer is yes. And this
is rather than looking at it as an example of
(35:17):
why China is doing something better or worse, another way
of looking at it is saying it just refuted the approach
that we're taking to training altogether, because it shows how
bloated and inefficient the approach we're taking is, and yet
we're projecting on that basis what future data center needs are.
Speaker 2 (35:33):
Part of the question, it seems to me, and this
is where it gets a little bit philosophical, is what
do these AI companies think they're building? Because one theory
is like, well, maybe they're building business tools, right, maybe
they're building business tools of various sorts. And if they're
building business tools of various sorts, that implies the possibility
that eventually they get good enough. This does the job right,
(35:55):
This makes it easier for this website. You can use
an agent to book your travel, and the technology works,
and we don't have to keep building it because we
got to the point where it works. And then there
is this other question of like, well, maybe they want
to build something called AGI or ASI that's like so
sci fi et cetera, in which case you could never
(36:16):
get enough, or simply having built the thing that allows
you to book your travel or book a dinner reservation
or translate text or whatever, that's not nearly enough. You
hear different things. But what do you think the
builders at the cutting edge of these labs are going for?
Is it really the sort of sci fi building god
cliche or do they want to build profitable business tools?
Speaker 5 (36:38):
So it's the first.
Speaker 6 (36:39):
Thing until you challenge them, and then it's the second.
So what happens is if you have the conversation internally,
they'll say, yeah, no, no, no, we're building these really effective
productivity-enhancing tools that'll be used across a host of businesses,
and this all sounds really good.
Speaker 5 (36:52):
But then when you walk through some of the math.
Speaker 6 (36:54):
In terms of justifying the ROI on the spend, all
of a sudden, then it turns into what I call faith.
Speaker 5 (37:00):
Argumentation about AGI, and they.
Speaker 6 (37:02):
Say it's like the greatest call option ever, Like what
would you pay for a call option that could get
you anything, and it's like, well, wait a minute, this
isn't a way of justifying any particular expenditure.
Speaker 5 (37:13):
This is just faith based argumentation.
Speaker 6 (37:14):
We're saying, you know, with the uber call option for anything,
you should be willing to pay anything for it. And
obviously that that kind of justification doesn't get you anywhere.
So in house they'll arm wave a lot about these
different models that will emerge.
Speaker 5 (37:27):
Who knows.
Speaker 6 (37:28):
I had someone at Nvidia tell me the other
day that we really are just waiting for the Uber
of AI to come along and show
Speaker 5 (37:33):
Us the future. And I'm like, okay, so that's
not an answer, right.
Speaker 2 (37:38):
So because in theory, if you're building a business productivity tool,
then eventually you could solve your unit economics problem, right?
If you're just trying to build a really great business opportunity,
then at some point it's simply, you know what, we don't have to
build anymore. It works, and then the cash flow just
starts pouring in and the cost per token goes down.
Speaker 6 (37:57):
And there's a bunch of that already happening. It's really interesting.
But the interesting thing happening is the problems they're solving
are really mundane. And so it's things like, I'm trying
to onboard a bunch of new suppliers right now, and
people have weird zip codes and they sometimes don't match up.
I have a dude in the back who fixes that.
I'd rather have someone who could do it faster so
they could onboard a lot more suppliers. Oh, it turns
(38:17):
out these small language models are really good at that.
These micro models, like IBM's Granite and whatever else. But
those things require a fraction of the training, are very cheap,
and are not going to justify anywhere near the economics needed
to pay for the current spend. And yet those things
are very likely the future, because it'll be
profitable token use from micro models, often hosted internally,
(38:41):
to do really mundane background tasks, not very glamorous onboarding
new suppliers, matching records, great stuff, just not really very exciting.
But large language models are amazing at it, and small
language models are amazing at it, and almost.
Speaker 3 (38:54):
Free. And writing songs, right, Joe? I'm actually, I'm still
annoyed that AI is like getting into art and music
writing and all the fun stuff, versus the stuff that
I don't want to do, like folding laundry, to use your classic
Speaker 5 (39:08):
Example. Or matching customer records, right?
Speaker 3 (39:11):
So, going back to the beginning of this conversation when
we were just talking about the scale of AI investment
and its impact on the US economy, I'm pretty sure
you are one of the ones who's described AI capex
as like a private sector stimulus program for the US economy.
What are the actual consequences, either positive or negative, of
(39:31):
having this massive private sector spend in the economy versus
something I guess more typical, which would be a government
stimulus or maybe growth driven by consumer spending or something
like that.
Speaker 6 (39:44):
Yeah, So to an orthodox economist, the old line is like,
it really doesn't matter what we pay people to do as.
Speaker 5 (39:49):
Long as we pay them, right. It's the idea of I.
Speaker 6 (39:51):
You should be willing to
pay people to dig holes in the ground and people
Speaker 5 (39:55):
Over there to fill the holes back in.
Speaker 6 (39:57):
Again, it really doesn't matter as long as the money
is out there circulating, right. It's all just stimulus.
Speaker 5 (40:03):
Right. So, to that way of thinking, it doesn't matter
because the money's all finding its way back into the economy.
Speaker 6 (40:09):
But I think that's obviously hugely misleading, because in this context,
these are investments created with an expectation of a return.
If they can't earn one, then that flows backwards into all the
entities that are built on that basis, whether it's private
credit firms and their returns, or the S and P five hundred.
What is it like, thirty five percent now is AI
related? Mag seven, Mag ten, whatever? Fifty percent now of the
(40:30):
last two years return. So this is a massive negative
wealth effect when you unwind it, not just in terms
of the direct spending, but in terms of the wealth
effect with respect to what people's holdings are. So this
is not as simple as saying this has just been
a wonderful stimulus program.
Speaker 5 (40:42):
We're paying people to dig holes and fill them back in again.
This is
Speaker 6 (40:45):
A wasting asset on something that's likely to be produced
in quantities that we can never earn an economic return from,
in part because of wildly flawed assumptions and projections about
the future of demand for those units. And so that's
the deep structural problem. And you can get into this
whole question of, like, well, it's just private equity
guys who get hurt, you know, who cares, screw those guys, right?
(41:07):
And it's not, of course, because as we just talked
about, it's in equity funds.
Speaker 2 (41:10):
It's firefighters and teachers money.
Speaker 6 (41:12):
Yeah, and it's in REITs now. Look at the larger
holdings in REITs now: increasingly, it's data centers.
Speaker 5 (41:16):
Yeah. And it's even in.
Speaker 6 (41:17):
Sort of sneaky backdoor ways. Like, we're seeing it increasingly. I
don't know if you guys are familiar with these new interval funds.
Speaker 5 (41:21):
They're appearing all over.
Speaker 2 (41:23):
Now, Paul Kedrosky, I have a million more
questions we could ask you. But much like the race
towards AGI itself, that would imply that we'll ever
actually get to the end of this conversation. So how
about we wrap here and then just plan on, you know,
revisiting the conversation in six months, maybe three years. We just
keep revisiting down the line where we are in the cycle.
Speaker 5 (41:44):
As long as we haven't been turned into paper clips.
Speaker 1 (41:45):
I'm good.
Speaker 2 (41:46):
Yeah, that's the nightmare no one talks about anymore. I
feel like no one talks about the
old school paper clip maximizer stuff. Everyone's onto more esoteric fears.
Speaker 5 (41:56):
I know people have moved on. We need to worry.
Speaker 3 (41:58):
Does anyone wait, did anyone ever try to securitize Clippy?
Speaker 5 (42:01):
They didn't, right, I don't think so.
Speaker 2 (42:03):
No, thanks Paul.
Speaker 6 (42:06):
Hey, thanks guys.
Speaker 2 (42:19):
Paul's so good. That was a lot of fun. He's so good.
Speaker 3 (42:21):
Here's my highest form of praise for an Odd Lots guest:
I am going to go back and read that transcript
from beginning to end.
Speaker 2 (42:27):
That is a very good
practice to do. You're not going to listen to it?
Speaker 5 (42:33):
I'm going to read it.
Speaker 2 (42:34):
Yeah, I can read it. I can't listen to it.
Speaker 3 (42:36):
I just listened to it.
Speaker 2 (42:37):
I need to read it. I can't listen to
our episodes. No, I just, you know, I think there's
a lot more to do on all of
this topic, but the financing in particular and some of
these arrangements. It's just incredible the speed with which,
I guess I would say, the financing has gotten interesting.
Do you know what I'm saying? I think like
a data center project ten years ago, a Microsoft or AWS thing,
(43:01):
just seemed like a fairly straightforward thing. It's probably more complicated
than I appreciated at the time, but basically straightforward. We
make this money and part of it is going to
go to building more data centers to you know, serve
you know, Amazon Prime Streaming or whatever it is, or
some client thing or whatever. And then the degree of
complexity with these SPVs and rollover risk and depreciation schedules
(43:24):
and changing of who holds what, it's gotten very interesting, very fast.
Speaker 3 (43:27):
Life, uh, finds a way. Life finds a way. Yeah, that was
my terrible, terrible impression. I think that's absolutely right. One
thing I would say is the fact that a lot
of these big, supposedly cash rich companies are doing this
through SPVs that effectively preserve their balance sheet and their
cash flow so they can do something else with it.
I mean a lot of companies use SPVs. Sure, yeah,
(43:50):
But I do think it says something about the scale, yes, right,
Like, there's a scale problem here, where if all your
spending was appearing on balance sheet, investors might think very,
very differently about your company. And then the other thing
I would say is, I still think about the compare and contrast
between the US and China and their approaches to AI.
(44:11):
You know, both of them, I think would agree that
this is an existential problem of some sort or an
existential competition. But they're following very different paths, and it
does seem to me like the arc of history kind
of leans towards stuff becoming cheaper.
Speaker 2 (44:28):
The arc of history bends towards China.
Speaker 3 (44:31):
Well that's that too, but it bends towards you know,
people generally want the cheaper thing, and they want the
thing that's like available now, and China seems to be
going for that.
Speaker 2 (44:41):
The counter argument is that if you're going to use
an open source model for some purposes, you have to
supply your own electricity, right, you have to supply your
own inference. You've got to host it on your own servers, like,
you still run into some constraints. And so rather than
having it be on whoever else's data center, you
gotta find a way to run it yourself.
Speaker 3 (45:01):
Yeah, okay, but China has a leg up on electricity.
Speaker 2 (45:05):
Which was the point that Jensen Huang made. I mean,
part of the reason there's so much talk about
this these days right now is that the industry insiders
are saying a bunch of weird things. Paul mentioned the
Sarah Friar comment, yeah, which she sort of had
to walk back. But then there was the
Sam Altman thing, where he was asked how are you
going to pay for all this? And he said, look,
you want to sell your shares or not, which is
(45:26):
like the interviewer probably thought he.
Speaker 3 (45:27):
He was a little defensive.
Speaker 2 (45:29):
Obviously, Jensen Huang talking recently about how China
was going to win. Maybe he was saying that because
he wanted to catalyze more action on solving some of
the electricity problems in the US. But, you know, the
very people at the center of this are saying things
right now that, you know. What's interesting too is, you
know, this bullwhip phenomenon, as Paul described it. He
(45:52):
didn't use the word bullwhip, but when everyone is trying
to get their hands on the same gear, you gotta
wonder how sustainable that is, and what the other side of a bullwhip
could look like. We just got to do more episodes
on this.
Speaker 3 (46:01):
Yeah, we have to. Shall we leave it there for now?
Speaker 2 (46:03):
Let's leave it there all right?
Speaker 3 (46:04):
This has been another episode of the Odd Lots podcast. I'm
Tracy Alloway. You can follow me at Tracy Alloway and.
Speaker 2 (46:10):
I'm Joe Wisenthal. You can follow me at The Stalwart.
Check out Paul Kedrosky's writing at Paul Kedrosky dot com,
follow our producers Carmen Rodriguez at Carmen Armann, Dashiell
Bennett at Dashbot and Kale Brooks at Kale Brooks. And for more
odd Lots content, go to Bloomberg dot com slash odd
Lots, where we have the daily newsletter and all of our episodes,
and you can chat about all of these topics twenty
four to seven in our discord Discord dot gg slash
(46:33):
od Lots.
Speaker 3 (46:33):
And if you enjoy odd Lots, if you like it
when we talk about the AI private credit leverage, subprime
economy nexus, then please leave us a positive review on
your favorite podcast platform. And remember, if you are a
Bloomberg subscriber, you can listen to all of our episodes
absolutely ad free.
Speaker 4 (46:51):
All you have to do is.
Speaker 3 (46:52):
Find the Bloomberg channel on Apple Podcasts and follow the
instructions there.
Speaker 4 (46:56):
Thanks for listening in