
December 9, 2024 49 mins

AI software and the hardware that enables it have been hugely popular investments this year. But there have still been limiting factors on the sector, including a shortage of compute to power so many new start-ups. Investors don't want to finance companies that lack a signed contract for compute, and compute providers don't want to sign contracts with startups that haven't already secured funding. Now Magnetar, a hedge fund that started its first-ever venture capital fund earlier this year, is trying to solve this "chicken and egg" problem by offering compute in exchange for equity. Magnetar was an early investor in the AI space, partnering with CoreWeave and recently helping the hyperscaler to raise $7.5 billion. On this episode, we speak with Jim Prusko, partner and senior portfolio manager on Magnetar's alternative credit and fixed income team, about why the hedge fund is getting into venture capital and some of the new ways they're deploying money in the space.

Read More: Magnetar Starts First-Ever Venture Fund, Targets Generative AI

Become a Bloomberg.com subscriber using our special intro offer at bloomberg.com/podcastoffer. You’ll get episodes of this podcast ad-free and exclusive access to our daily Odd Lots newsletter. Already a subscriber? Connect your account on the Bloomberg channel page in Apple Podcasts to listen ad-free.



    Episode Transcript

    Available transcripts are automatically generated. Complete accuracy is not guaranteed.
    Speaker 1 (00:03):
    Bloomberg Audio Studios, Podcasts, Radio News.

    Speaker 2 (00:20):
    Hello and welcome to another episode of the Odd Lots podcast.
    I'm Tracy Alloway.

    Speaker 3 (00:24):
    And I'm Joe Weisenthal.

    Speaker 2 (00:26):
    Joe, AI is so hot right now, in the immortal
    words of Mugatu, AI is so hot.

    Speaker 4 (00:33):
    It is.

    Speaker 3 (00:33):
    Yes, it is really hot. You know, you hear some things.
    There's a little bit of slowing down in some of
    the progress on the models, but the recent Nvidia
    results speak for themselves. There is nothing that I've seen
    yet that would suggest that this macro trend, at least
    as an investment trend, and I'm not talking about stocks
    per se, is anywhere close to, quote, slowing down.

    Speaker 2 (00:57):
    Yeah. And the interesting thing is there seem to be
    more and more players, some new types of players, that
    are getting into the space. So, you know, we have
    AI funds kind of launching left and right. And one
    of the newest players is a hedge fund called Magnetar
    and I know them like primarily for credit stuff. I
    think they were big in redcap trades for a while. Yeah,

    (01:19):
    and now they're launching an AI fund, a VC fund,
    which is kind of unusual for this type of hedge
    fund to do.

    Speaker 3 (01:25):
    Totally. I mean, I've heard of Magnetar for a
    long time, obviously, going back to the early twenty tens
    at least. And look, I'm not surprised that various investors
    are looking for their distinct way into this space.
    And of course, look, we've done interviews with vcs of
    various nature and positions in the past, and so I

    (01:48):
    guess you know, there's sort of two questions to my
    mind anytime we're gonna be talking to someone investing in
    early stage or any stage of AI, which is obviously:
    what is the thesis, what's going to win out, where
    will value accrue. But then from an investor perspective,
    given so many entrants into this space, specifically whether on

    (02:09):
    the public equity side, whether on the private side, whether
    on the VC side or early stage, late stage, what
    do they, as a fund or an investor, bring to
    the table, or what will they be able to see that the
    other billions of dollars competing for AI profits do not see?

    Speaker 2 (02:24):
    I have a slightly different question, which is for these
    types of investors, like how much is it about how
    good the technology is that they're investing in versus how
    much is it about getting in the right position in
    the capital stack?

    Speaker 3 (02:38):
    So that's a great question.

    Speaker 2 (02:39):
    I think it's going to be really interesting to talk
    to someone who's coming from this perspective. And without further ado,
    we have the perfect guest we're going to be speaking with,
    Jim Prusko. He is a partner and senior portfolio manager
    on Magnetar's Alternative credit and fixed income team. Jim, Welcome
    to the show.

    Speaker 5 (02:58):
    Thank you, great to be here.

    Speaker 2 (02:59):
    So how does someone on a hedge fund's fixed income
    team get into AI?

    Speaker 4 (03:05):
    Well, we have a long history of investments in private companies,
    really dating back to an increased focus after the Financial
    Crisis, when spreads and yields got tighter and the private
    markets seemed more interesting. And we've often partnered with platforms
    where we thought we could grow the platform and generate
    an interesting asset, either a pool of cash flowing assets,

    (03:28):
    or help grow the company and participate in that growth
    and support them through financing and other things like we
    can support them through helping them with hiring or accounting
    or other systems they need, and just to help them
    grow generally. And so you know, I've been doing that
    a long time, and we've been in a number of areas,
    like auto lending in Ireland, and then we've moved into

    (03:51):
    various fintech companies. We were one of the first institutional
    investors in Opendoor before they went public. We're supporting
    and investing in a very interesting fintech that is financing
    restaurants right now, and so we felt we had experience
    in that space, and then that sort of overlapped with

    (04:12):
    our relationship and our investment in CoreWeave, where we
    were the first institutional investor in CoreWeave in twenty
    twenty one. So we're very early in the trend of
    putting capital into the AI infrastructure space and that's just
    sort of grown as this whole market has grown to
    encompass literally everything. Now, you know, we continue to look

    (04:34):
    for smart ways to invest, and you know, one of
    those ways, we felt, was: what can we provide that's
    of value? And one of the things we can provide
    besides the general help we can give a growth stage
    company is compute because that is the scarce resource right now,
    and that's where all the capital is going to the

    (04:54):
    various parts of the value chain to deliver compute, and
    so there's a competition to get compute, and if you're
    a smaller company with limited capital or limited access to capital,
    it can be difficult to get that, and so that
    was sort of the value proposition we thought we could
    bring to bear.

    Speaker 2 (05:12):
    Joe, I have this vision in my head of VCs,
    like, going into startups bearing baskets full of chips.

    Speaker 3 (05:18):
    Ah yeah, instead of just saying,

    Speaker 2 (05:19):
    that, like, our pitch is the relationship and the coaching.

    Speaker 3 (05:22):
    We have access to the chips or the energy plus chips.
    Just for a point of clarification, listeners should know we've talked
    to CoreWeave at least twice on the show, and
    it feels like in the AI space specifically, this is
    one of those names that's a very big deal, but
    many people don't know it the way they know,

    (05:43):
    say, an Nvidia at the very back end or ChatGPT
    at the very front end. But they build a lot
    of the data centers that are filled with Nvidia chips.
    I want to get more into the business model there
    because I have a lot of questions in the business
    of selling compute, etc. But talk a little bit more
    about you said your experience in the private side is

    (06:06):
    like this expertise with platforms per se. And when I
    think of platforms, I think of companies that can acquire
    lots of other companies or a lot can be built
    onto them. Talk to us about how the platform-specific
    expertise informs your thinking with a CoreWeave or any
    other AI investment that you're making now.

    Speaker 4 (06:27):
    So we've tried to put capital into companies that are
    trying to build their business in a particular space, and
    oftentimes that could be a space where they generate a
    cash flowing asset, like in the auto loan example. In
    the Opendoor example, they were acquiring real estate, which

    (06:48):
    was a hard asset. In that restaurant fintech example, they're
    acquiring restaurant credit. And so we've tried to support businesses
    that had some kind of asset or flow and work
    with them on a number of ways that we can
    add value. I think first and foremost is all these
    growth stage companies need financing, and I think we have

    (07:09):
    great expertise from debt to equity, private to public, and
    we can be innovative in trying to bring you the best,
    most appropriate, lowest cost capital to these growth stage companies.
    And like I said, as well as.

    Speaker 3 (07:22):
    So just to be clear, just to understand in this context,
    what makes AI distinct, say from other waves of tech
    or what makes it distinct for, say, a Magnetar, is
    in part this distinct capital demand that was not perhaps
    as big of a deal during the SaaS wave of
    the twenty tens.

    Speaker 4 (07:43):
    Yes, so not only a general capital demand, but
    in many cases, for many of these companies, a very
    specific demand to have capital to deploy with compute, and
    because they need this very specific scarce resource, helping to
    deliver that resource, and in particular helping to deliver that

    (08:03):
    resource in a high quality way, where you have a
    partner like CoreWeave, where I think there's a
    lot of evidence that they have the highest performing AI
    training cluster. And so that is really valuable to these
    companies that might otherwise struggle to get enough compute to
    further their business model.

    Speaker 2 (08:21):
    Speaking of Core We've I'm really curious how that conversation
    actually started because this was a new and novel thing.
    I don't think we had chip based loans before to
    my knowledge, and I keep hearing that asset based financing
    is going to be like this next big thing in
    private credit or it's the last real frontier in private credit.

    (08:42):
    How did you come up with this idea this deal?

    Speaker 4 (08:46):
    Well, asset-based financing is really a classic private credit
    tool and there's a number of examples. Just if you
    think about my example with the Irish auto lender. The
    Irish auto lender is generating car loans, and you
    buy those loans in a vehicle, so you have primarily the security

    (09:10):
    of the people paying on those loans, and so you
    get paid back by the cash flow of the borrowers
    paying their car loans back. But there's credit risk, in
    that they could potentially stop paying, and in the case
    where they stop paying, then you have the car as collateral.
    And really that metaphor applies almost directly to GPUs, where

    (09:33):
    if you're a company delivering high performance compute like
    CoreWeave has, you're contractually selling that compute to some counterparty
    that's going to use it. In their case, you know,
    that's often a very large, very creditworthy hyperscaler, but
    not always. There could be smaller startups that have riskier
    business models, and in that case, primarily by funding the GPU,

    (09:58):
    you're getting paid back with those contractual cash flows
    on the use of the GPU. But in the case
    that company fails, then as backup you have the GPU itself. Now,
    the GPU isn't really like the car where you'll probably
    go out and sell it, but you get the time
    back on the GPU, which you can then resell to
    somebody else, and being a scarce asset, you can think

    (10:21):
    about what value that would have in a future time.
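The auto-loan parallel Jim draws can be sketched in a few lines of Python. This is purely an illustrative model: the `loan_outcome` function and every figure in it are invented for the example, not Magnetar's or CoreWeave's actual terms.

```python
# Illustrative sketch of a GPU-backed loan, mirroring the auto-loan metaphor.
# All figures and the structure itself are hypothetical, not actual deal terms.

def loan_outcome(principal, annual_rate, years, payments_made_years,
                 residual_hourly_rate, fleet_hours_per_year, remaining_years):
    """Total recovered on a simple level-payment exposure: cash flows
    received before a default, plus the value of re-renting the GPUs'
    remaining time if the borrower stops paying."""
    annual_payment = principal * (annual_rate + 1.0 / years)  # crude level payment
    received = annual_payment * payments_made_years
    # Backup collateral: the fleet's remaining rentable hours, re-rented
    # at a (likely lower) market rate.
    recovery = residual_hourly_rate * fleet_hours_per_year * remaining_years
    return received + recovery

# Borrower defaults halfway through a 4-year loan against $10m of GPUs.
total = loan_outcome(principal=10_000_000, annual_rate=0.10, years=4,
                     payments_made_years=2,
                     residual_hourly_rate=1.50,         # $/GPU-hour, assumed
                     fleet_hours_per_year=500 * 8_000,  # 500 GPUs, ~8,000 hrs/yr each
                     remaining_years=2)
print(f"Total recovered: ${total:,.0f}")
```

The point of the sketch is the two-layer protection Jim describes: contracted cash flows first, re-rentable GPU time as the backstop.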

    Speaker 3 (10:24):
    One difference that I could imagine with the GPU versus
    other forms of assets, say whether it's a car or
    say whether it's a house, is a certain here in
    twenty twenty four, still unpredictability about many things in the future.
    Will Nvidia always be the gold standard, so to speak,

    (10:45):
    in AI chips? Maybe it looks like it, yes, but
    it doesn't seem guaranteed. How fast will the current generation
    of chips that are deployed degrade in value? I imagine
    there are fairly predictable sort of depreciation curves for cars
    that perhaps are more uncertain for chips. And then also
    the uncertainty of actual deployment given permitting and challenges with

    (11:10):
    energy and the other operational things that have to do
    with a new company building a data center. Talk to
    us about modeling or at least thinking through some of
    the uncertainties with chips specifically.

    Speaker 4 (11:22):
    Well, depending on what stage you get involved, you have the breadth of
    all those different risks potentially. So if you're investing in
    high performance compute but it's a greenfield data center, then
    you have to think about all those things. You have
    to think about the delivering of the power. You have
    to think about the timing on all the components to
    get to the data center. If you're making what we've

    (11:45):
    been talking about, which is sort of a GPU based loan,
    then usually that loan is based upon a running GPU
    and an existing high performance compute data center, so you
    don't really have to think about some of the earlier
    stage issues. You more have to think about how long
    is my contract, how good is my contract, What do

    (12:07):
    I think the value of renting that chip out will
    be at the end of that contract. How much rent
    on that chip could I get if I had to
    re rent that in the middle of the contract. So
    it's more near term things on actually having a functioning
    GPU in the data center, But all those other things
    have to be financed too, and there's going to be

    (12:27):
    innovative structures and large amounts of capital dedicated to financing those things.
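The questions Jim lists (how long is the contract, what re-rent rate could you get, what is the chip's time worth afterwards) amount to a discounted cash flow with a decaying rental rate. A minimal sketch, with every parameter hypothetical:

```python
# A minimal sketch of valuing a GPU-backed loan per the questions above:
# contracted rent for the remaining term, plus post-contract re-rental at
# a market rate assumed to decay as newer chip generations arrive.
# All numbers are hypothetical.

def gpu_loan_value(contract_rate, contract_years, residual_years,
                   decay_per_year, discount_rate):
    """Discounted rent per GPU: locked in during the contract, decayed after it."""
    value = 0.0
    for year in range(contract_years + residual_years):
        if year < contract_years:
            rent = contract_rate  # locked in by the contract
        else:
            # Re-rent at a market rate that decays from the contract rate.
            rent = contract_rate * (1 - decay_per_year) ** year
        value += rent / (1 + discount_rate) ** (year + 1)
    return value

# 3-year contract at $20k/GPU/year, 2 more rentable years afterwards,
# rental rates assumed to fall 30% a year, discounted at 12%.
print(f"${gpu_loan_value(20_000, 3, 2, 0.30, 0.12):,.0f} per GPU")
```

Most of the value sits in the contracted years, which is why contract length and counterparty quality dominate the analysis.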

    Speaker 2 (12:32):
    Setting aside the financing for a second, how hard has
    it been just to find physical space in data centers?

    Speaker 4 (12:41):
    Been extremely scarce, and a lot of that is driven
    by the search for power. The data centers required for
    the new AI chips are much different from the old
    data centers. So it isn't really cost-efficient in most
    cases to go and take an old data center and
    try to retrofit it because the amount of power just

    (13:03):
    alone that has to go there is, you know,
    an order of magnitude more per rack of GPUs now,
    and so you just can't really retrofit that efficiently.
    It's better to build your own building. And so it's
    really come down to things like permitting availability of power
    and time to get all your components, and you know,

    (13:26):
    all these things have their own lead time. We
    had an interesting back and forth with Brian on procuring transformers.
    You know, all these little you know, nuances come into
    play when you have to build a data center. And
    so because power is really the limiting factor most of all,
    you're seeing a lot of moves towards where the power is.

    (13:47):
    And there was recently an article on Bloomberg, I think,
    about a company in Texas that owns a bunch of
    land that's now worth forty billion dollars, right? And that's
    because they're near all this renewable power. But that isn't
    the only thing. It's incredibly complex to operate this high
    performance compute. So then you have to think about if
    I try to build my data center out there where

    (14:08):
    the power is. Can I get everything out there, including
    operational expertise?

    Speaker 5 (14:15):
    Right?

    Speaker 4 (14:15):
    Can I staff my data center with the kind of
    experts I need to run this kind of highly technical,
    high performance compute. And each generation is just getting more complicated.
    We're going to have liquid cooling on the next generation
    of Nvidia chips, probably immersion cooling right after that. It's
    very complicated, very expensive, and very difficult to scale. Much

    (14:36):
    harder to do in a large size than it is
    to do in.

    Speaker 5 (14:39):
    A small size.
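The retrofit point can be put in rough numbers. The figures below are commonly cited ballparks for rack power density, not numbers from the episode:

```python
# Rough arithmetic behind "an order of magnitude more per rack".
# These are commonly cited ballpark figures, not numbers from the episode.
legacy_rack_kw = 10   # typical enterprise data-center rack
gpu_rack_kw = 100     # dense AI training rack; some designs draw even more

print(f"{gpu_rack_kw / legacy_rack_kw:.0f}x the power per rack")

# Scaled to a 1,000-rack hall, the gap is what forces new construction:
print(f"{gpu_rack_kw * 1_000 / 1_000:.0f} MW vs {legacy_rack_kw * 1_000 / 1_000:.0f} MW")
```

A hall built for tens of megawatts of IT load can't simply be rewired and re-cooled for hundreds, which is why purpose-built sites near power win out.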

    Speaker 2 (14:40):
    Maybe Magnetar can finance a small modular nuclear reactor. No, seriously,
    because if you're financing the compute and securing that on
    behalf of companies that you want to invest in, you
    could go one layer down finance the energy.

    Speaker 4 (14:55):
    And we're certainly interested in that, and we have a
    history in investing in energy. We have an investment right now
    in a developer of utility-scale solar power in the
    US who has leased some of that solar power to
    various hyperscalers. So that is certainly a space we're interested in.
    I was just in Miami meeting with a company that

    (15:16):
    has a novel heat-sink battery technology that they want
    to deploy to data centers; they're talking to a
    bunch of data center type companies about launching that product there.
    So there's a ton of interesting things, and just like
    every other part of this ecosystem, it's going to require
    an immense amount of capital.

    Speaker 3 (15:33):
    I guess, just since we're sidetracked on the energy component
    for now, while we're here: novel battery technologies. There's a
    lot of them out there. There's a lot of startups
    that have something novel and energy, and often one of
    the things that they talk about is this chicken and
    egg problem where they need capital, They need sort of

    (15:55):
    financing of some sort or another to build this stuff,
    but the lenders don't really want to give it until
    there's demand, and no one's just going to promise to
    buy it until it's shown that it can work. Can
    you talk a little bit, I mean again, I know
    it's a little bit off track from GPUs themselves. But
    since you were talking about something similar, yeah, talk about the batteries.
    Can you talk a little bit about that dynamic as

    (16:17):
    it affects solving the energy side of the equation?

    Speaker 5 (16:19):
    Yeah, for sure.

    Speaker 4 (16:20):
    And it has some overlap with the way you look
    at an AI company too. You know, if you think
    about the core things that we really want to look at,
    it's technology, team, and traction. So does their technology really work?
    That's first and foremost. You know, what is this product?
    Does it have some kind of advantage? And then traction

    (16:42):
    like time to market.

    Speaker 5 (16:44):
    That's super important.

    Speaker 4 (16:45):
    I was just talking to Eiso Kant at Poolside, and
    to him, those are the two most important things.
    Speed to product, speed to market, because it's a race,
    and even if you have the greatest technology, if you
    take too long, someone's going to be using something else.
    And that's certainly true in the energy space where energy
    is of critical importance. So I think that for these

    (17:09):
    startups on the traction side, they really need some strategic
    partnerships because their cost of capital is very high.

    Speaker 3 (17:17):
    Strategic partnership is kind of like an existing company that
    has a demand. It also has a lot of cash
    and could theoretically be a buyer of their solution.

    Speaker 4 (17:25):
    Yes, and really on the other side too. So
    for example, because their cost of capital is so high,
    there's certain things that it's hard for them to do.
    And one of the things that it's really hard for
    all these startups to do, and this was true in
    the recycling industry and other industries, is build a plant.
    It's very expensive and time-consuming to build a plant. You

    (17:48):
    don't really want to raise VC capital to build a plant,
    and so it's important to have a partnership on the
    manufacturing side too. And that was really like the first
    thing this battery startup that I just visited talked about
    is like getting that because you've got to be able
    to deliver your product and you have to deliver it
    on scale, and ideally you don't want to be wasting

    (18:09):
    time building your own plant on that and then like
    you said, on the other end, you want to have
    a partnership with the users of the energy, which is
    all the people that either have data centers or use
    data centers or customers of data centers, and you want
    them to ideally put together an attract a financing relationship

    (18:29):
    where, you know, in some form or fashion they're
    front-loading their payments to you so that you can use
    that capital to actually build the product that they need.

    Speaker 2 (18:56):
    So Joe and I went to San Francisco a little
    while ago and we saw some cool things. I had
    my first ride in a Waymo, and we saw
    some cool battery related technology. We also saw a lot
    of vcs. Everyone very excited about AI. Obviously, they were
    also talking about the difficulty of chasing deals right now,
    how do you compete with those traditional vcs or are

    (19:20):
    you just not competing with them directly because you're taking
    the slightly different GPU backed approach.

    Speaker 5 (19:26):
    You know, I think it's both.

    Speaker 4 (19:27):
    I think you're competing with them and to an extent,
    partnering with them. And that's the thing we had to
    ask ourselves before launching the fund: what are we
    bringing to bear that's value added? And in this case,
    we're bringing to bear the compute. And so often these startups,
    even if they're backed by a strong VC, can have
    a bit of a chicken and egg problem, which is

    (19:50):
    they need compute to develop their product, and they need
    capital to buy that compute. But if they don't have
    the compute lined up and the price locked in, then
    the capital might be hesitant to go in because they'd
    be like, we could put our capital into you, and
    then it could take you an extra six months to
    get your compute, and by that time some competitors passed
    you by or the technology has changed. And on the

    (20:14):
    other hand, because they're a startup, they don't really have
    the credit worthiness to just contract the compute. They most
    likely have to pay up front, and so we bridge
    that gap. And so if we go into a fundraising
    round where there's a bunch of vcs putting cash in,
    if they know that we're putting compute in alongside them

    (20:34):
    and that the second the round closes that compute will
    be available to the company, that makes it easier to
    raise the cash part of it. So we are competing
    and we need that value added to be part of
    the equation. But also I think it helps them to
    raise from traditional vcs because we take that one risk
    off the table.

    Speaker 3 (20:52):
    How big is the market of companies that need compute,
    because there are plenty of AI companies that just build
    on top of an existing model like GPT or Anthropic's model,
    et cetera. How many companies are actually out there and
    who like not who are they specifically, but what are

    (21:12):
    the types of companies for whom actual access to compute
    is an important part of their business.

    Speaker 4 (21:20):
    Yes, well, you know, it starts, of course, with the
    LLM companies. They're using massive, huge, huge amounts of compute.

    Speaker 5 (21:28):
    But then if you look.

    Speaker 4 (21:30):
    At the rest of sort of the AI stack, there's
    a couple areas where you're going to need compute, and
    one is all the small model, custom model companies, and
    small can mean a lot of different things. So you can
    have some very small companies that are using a very
    targeted model, like say in a vertical stack, you might

    (21:52):
    have a robotics company that is specifically training a model
    to run a robot in a particular situation, and that
    could be anything from a warehouse to doing surgery, right,
    and they need compute to train that model. Or another
    one, which is huge and dominated by existing big

    (22:12):
    players: autonomous driving. But there are other autonomous driving companies
    that are trying to be deployed at other automakers that
    need compute to train those models.

    Speaker 5 (22:23):
    Or weather models.

    Speaker 4 (22:25):
    There's some really good companies that we've talked to doing
    weather models. They need compute to train their model, and
    so that whole model layer. And then even on the
    app layer, there might be custom elements, small models
    that they have that sit on top of the big
    LLMs, that they need some amount of compute for.

    Speaker 5 (22:45):
    So there's quite a range.

    Speaker 4 (22:46):
    You know, it's not everyone, you know, it's more in
    that model and application layer, and, you know, less in the
    infrastructure layer, that needs compute.

    Speaker 2 (22:55):
    So this is one thing I always wonder about AI investment,
    which is you have a lot of companies that are
    building on top of existing models, as Joe mentioned, And
    to some extent that makes sense because they can save
    a lot of money by doing it, and realistically, are
    you going to compete with Google or Microsoft? Probably not.
    But on the other hand, I always wonder if you're

    (23:16):
    building on top of an existing model, how do you
    ring fence that business? Because my assumption is if AI
    gets better, maybe at some point the AI can replicate
    any AI model basically.

    Speaker 4 (23:30):
    So the first thing we always worry about
    is: does some giant company already have this product in
    a closet with, like, twenty PhDs working on it?
    I was just at this conference and somebody coined
    the phrase incumbent maximalist, and that's the mindset: you think
    the incumbents are going to do everything and no one
    else will ever succeed. And I think there's a few

    (23:53):
    use cases. There's things where it's a very specific task
    that is hard to do well with a giant general
    model and probably isn't worth doing well. Like if you're
    focused on growing tens to hundreds of billions of dollars
    of revenue, you can't be distracted by trying to do

    (24:13):
    every little thing. And we've seen this in previous tech
    revolutions as well, and so it can be something that's
    very focused on a space. We've seen legal accounting, sales.
    There's some great companies that have virtual employees that they're
    doing things that are very task specific. There's some companies

    (24:35):
    doing text to speech and speech to text and other
    things for very specific applications. So you know that's one way.
    The other way is data. The greatest ring fence
    for any AI company or business is data. Because you've
    seen, as the performance of some of the LLMs has

    (24:56):
    supposedly flattened out, a lot of that is because they've
    just used all the data, like they've trained on the
    whole Internet, there's nothing left and so now you have
    to have other ways to train or novel sources of data.
    So proprietary data is super valuable. And then there just
    could be areas where they're conflicted. They don't want to
    compete with their customers right now, although you know, competing

    (25:17):
    with your customers is a great tradition in the tech space,
    but there could be situations where it's not worth it
    to them yet to compete with their customers. And so
    I think there's those different use cases where you know
    you're going to see a small number of companies succeed.

    Speaker 3 (25:32):
    I have a very stupid question, and actually I shouldn't
    even be asking you. I should have asked it the
    last time we talked to core Weave, but since you're here,
    I'm gonna take a mulligan on the question
    I didn't ask them. I know that Nvidia is an
    investor in CoreWeave, but even setting aside that specific relationship,
    the actual purchasing of chips, how does the pricing work

    (25:55):
    and how much is it a de facto auction, where,
    as demand for chips booms, Nvidia can expand its
    margin, versus Nvidia aims for a stable margin over time?
    And I imagine this enters into your calculation somewhat in
    thinking about CoreWeave's future capital requirements. How does
    that market for chips work?

    Speaker 4 (26:16):
    Well, I can't comment on the internal workings of Nvidia
    setting their prices.

    Speaker 3 (26:21):
    But as an investor in a buyer, whatever. Say I'm
    a buyer of chips. How do I go about buying
    some chips?

    Speaker 2 (26:29):
    Now I imagine it's like the container industry, where you
    have to have a specific relationship and there's a shipping
    manager called Lars somewhere in northern Europe who holds the
    keys to the chips.

    Speaker 4 (26:40):
    Well, for any company using a resource, and it's certainly
    true of companies using compute, right, it's always a
    cost-benefit analysis. So there's great benefits to running your AI
    training on an Nvidia ecosystem on a network like
    CoreWeave's that's very fast and very reliable, because, you know,

    (27:02):
    when you train a model, you stop every fifteen or
    thirty minutes to save your work, and if there's a
    failure in there, you have to go back to the
    last time you saved your work, and there's a huge
    loss on that. So there's benefits to using the best technology,
    but those are quantifiable, and if a particular kind

    (27:23):
    of technology becomes too expensive, you'll see people diversify out, right?
    I mean, there was just news the last two days
    about Anthropic and AWS and AWS's new chips. So there's
    always some form of competition. I mean, Nvidia is
    sitting in a unique place where they've really had a
    de facto monopoly on this, and I think their pricing

    (27:46):
    is being set in a way to grow the market,
    right Like, they want to grow the market. I can't
    speak for them, but you wouldn't want to set the
    price of your product so high that you stifle the
    market's growth, right Like, growth is more than making an
    extra dollar on every widget, And so I think that's
    got to be a calculation, and certainly to date it's

    (28:08):
    been fruitful in that this market has taken off like
    almost no market ever.

    Speaker 2 (28:28):
    I want to go back to the capital question. Most
    venture capital comes in the form of equity. You're
    doing something slightly different in my understanding. You're primarily going
    down the debt and sort of fixed income route. That
    seems so different because in my mind, when I think
    about bond investing, and we've said this a number of

    (28:50):
    times on the show, it's all about avoiding losers, right? Like,
    there's limited upside, but you don't want a bankruptcy that
    wipes out your investment, whereas with equity the upside is basically uncapped.
    So it's about finding that one stellar outperformer or
    that one lottery ticket. How do you square, I guess,
    the risk averseness of some of this debt financing with

    (29:12):
    getting the huge upside that is potentially there from AI.

    Speaker 4 (29:17):
    Well, the amount of financing required for this whole AI buildout,
    which is on some immense scale, you know, people
    have talked about the Manhattan Project, the building of the Interstates.
    It's going to require capital in many forms for many things,
    and I think there's a lot of thinking going on,
    and you know, certainly we're part of that in deploying

    (29:41):
    the most efficient capital to the different layers of this buildout.
    And so we've talked about a couple different things here.
    We've talked about financing GPUs. So if you're financing GPUs
    with debt, then you can really think through your downside protection,
    just like in the auto metaphor.

    Speaker 2 (30:00):
    Right, you have the collateral.

    Speaker 4 (30:02):
    You have the collateral, you have the contract. You can
    analyze the credit worthiness of the contract. You can look
    at how the leasing curves of prior chip generations have decayed.
    You have some real information there, you have a real asset,
    you have real contracted cash flows. Now in the VC fund,

    (30:22):
    that's a lot different. In this case, this is true
    venture equity, and it's just that it's being deployed in
    a unique way where instead of cash, the compute has
    been contractually secured and it's just being exchanged for the
    equity directly, as I talked about before, saving that step
    and de risking the process of acquiring compute for these

    (30:45):
    growth stage companies.
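The downside math on GPU-backed debt sketched above, collateral value plus decaying lease curves, can be illustrated with a toy model. The $30,000 price, 30 percent annual decay, and 70 percent advance rate below are all hypothetical placeholders, not anything Magnetar disclosed:

```python
def collateral_value(price: float, annual_decay: float, years: int) -> float:
    """Residual value of the chips under a simple exponential decay
    assumption, standing in for the leasing-curve data a lender would use."""
    return price * (1 - annual_decay) ** years

loan = 0.70 * 30_000  # hypothetical 70% advance against a $30k GPU
for year in (1, 2, 3):
    value = collateral_value(30_000, 0.30, year)
    print(f"year {year}: collateral ${value:,.0f}, coverage {value / loan:.2f}x")
```

The point being that the loan has to amortize at least as fast as the collateral decays, which is why the contracted cash flows matter so much in sizing these deals.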

    Speaker 2 (30:47):
    So you are doing equity through the VC fund.

    Speaker 5 (30:49):
    The VC fund is equity.

    Speaker 4 (30:51):
    Yes, it would be part of typically but not always
    a part of a round that a growth stage company
    might be doing.

    Speaker 2 (31:00):
    Doing convertibles.

    Speaker 4 (31:01):
    So we can do virtually anything across the debt equity
    private public spectrum, and have in many cases in the
    AI fund itself. Most of the companies, being growth stage,
    are not really in a position to do debt, so
    I think for the most part, I would expect that
    those would all be venture equity investments.

    Speaker 3 (31:25):
    I gotta chuckle when you're like, oh, we've been in
    this space since way back, and then you said twenty
    twenty one. But it does really sort of

    Speaker 5 (31:32):
    Speak to how...

    Speaker 2 (31:32):
    It feels like a long time.

    Speaker 3 (31:33):
    Yeah, well, you know, I mean ChatGPT, I think,
    came out at the very end of twenty twenty two
    or maybe early twenty twenty three, and that was the
    big light bulb moment for a lot of people. So
    even being that active in a lot of this stuff
    a year earlier truly is early. That being said, things
    like CoreWeave, things like data centers. The need for
    compute is very well understood right now in a way

    (31:55):
    that perhaps three years ago many people in the
    credit and financing space weren't thinking of. Is that a
    margin compressor for you? The fact that other entities, probably
    many with much more capital than Magnetar has, everyone has
    now woken up to this opportunity of, yes, there's going
    to be a lot of financing needs in AI. And

    (32:18):
    do you see change in competition or spreads or anything
    like that?

    Speaker 4 (32:23):
    Well, I think it really depends on what you're financing.
    So there's a lot of capital that's gone into all
    these spaces, and certainly all across the stack.

    Speaker 5 (32:35):
    Of financing compute.

    Speaker 4 (32:36):
    You've seen a huge amount of capital come in, and
    you've seen all the giant investment companies, the providers of capital,

    Speaker 5 (32:44):
    Get involved and so.

    Speaker 4 (32:47):
    There's a lot of capital in there, but there's also
    like a huge need for capital, and it's very complex
    thinking about the structuring and getting the right capital and
    the right space. And so I think there's room to
    be innovative. And I've spent the last twenty years at
    Magnetar thinking about unique ways to source investments and deploy capital,

    (33:11):
    and I think that really comes to bear on this.
    And because this whole market, like you said, is so new,
    and we've only had ChatGPT for a couple of years,
    you know, you're seeing companies with all different ways of working.
    You know, I talked to a company in the
    text-to-voice space at a conference last week, and
    they actually were buying their own DGX servers themselves and

    (33:34):
    just running them themselves at their own on-prem site.
    And we're like, sure, that's something we can finance.
    That's like a hard asset.

    Speaker 5 (33:43):
    But no one's really looking at that yet.

    Speaker 4 (33:44):
    Because most of the capital is so big, it has
    to go to the biggest thing. So if you're a
    trillion dollar investment firm, of which there are a couple,
    you're not going to want to deploy twenty to fifty
    million dollars in a one off thing. You're going to
    want to deploy tens of billions of dollars in the biggest thing,

    (34:05):
    whether that's power, physical data centers, or GPUs.

    Speaker 2 (34:10):
    What's the pitch to your investors, to Magnetar's investors? Because again,
    this is something I know you said you've been in
    the tech space for a while, but it's still something
    that feels fairly new. And when I think about AI,
    there's been so much excitement over it. Some people have
    been talking about whether or not it's in a bubble,
    and I think about a hedge fund, and that's all

    (34:31):
    about uncorrelated returns and investing profitably through the cycle. I
    get that you might be promising very large upside to investors,
    but what is the hedge aspect of this.

    Speaker 5 (34:47):
    Well.

    Speaker 4 (34:48):
    As a firm, we've done many different products and many
    different strategies for many different investors over the years, and
    we've really been flexible in trying to deploy capital in
    the most interesting areas that are going to have
    the best risk adjusted returns. And many of our investors
    have been with us through the whole life of the
    firm since two thousand and five and appreciate that. And

    (35:10):
    so we've done both diversified investment strategies where we just
    thought the general pipeline of deploying structured capital has been great,
    and then we've also done things targeted at a particular
    asset when we thought that opportunity was great. And so
    in the case of the VC fund, the value proposition

    (35:32):
    really is for the investor what it is for the company,
    which is, we're bringing something unique to these growth stage
    AI companies, which will get us access to making investments
    in what we hope will be the best of
    those companies, with the best business models and the best teams.

    (35:53):
    And so we're going to use the unique compute that
    we have and the way that we're going to exchange
    that for equity and deliver that to these companies as
    a way of getting access to investments in what's, as
    you mentioned, a very competitive environment where there's a
    lot of capital going into the space. And so I
    think for investors that want to participate in that kind

    (36:17):
    of investment, in getting capital deployed into growth stage AI companies,
    you know, this is a very unique opportunity, and so
    we saw a lot of traction with that.

    Speaker 3 (36:27):
    When you come in as a VC investor in some
    of these startups, do you have to supply dollars or
    in some cases or all cases, is your ability to
    promise compute from day one enough for equity?

    Speaker 4 (36:41):
    It really varies, and there's investments we've made both inside
    and outside the fund, and it just depends on the situation.
    So there can be companies that we find super interesting
    but don't need compute, and in that case we could
    invest in those companies directly outside of the fund. For
    the fund itself, the proposition is equity for compute, and

    (37:04):
    so the fund itself is focused on companies that really
    do need equity and are interested in equity, and
    really do need compute and are interested in compute on
    CoreWeave's network, and so that's the kind of companies that
    we'll invest in from the fund. But for Magnetar as
    a whole, we've been focused, like we talked about, on
    everything from energy, through infrastructure, through other AI companies that

    (37:29):
    just don't happen to need compute right now.

    Speaker 3 (37:32):
    Then, just to this point, your ability to promise or
    give AI startups compute, this access to compute emerged via
    that initial relationship as a financier.

    Speaker 2 (37:47):
    This is what I was going to ask, which is:
    how worried are you about competitors doing the same thing
    and providing GPU-backed debt? Or is it the case
    that, because of your first mover advantage with CoreWeave,
    you can hold onto that advantage for a while?

    Speaker 4 (38:02):
    So for the fund itself, it was the unique relationship
    we had with CoreWeave, where we felt they were the
    best provider of AI training compute and we were able
    to work with them to contract some of the very
    scarce resource of that and then have that available to

    (38:23):
    deliver to these AI growth companies. And so that was
    really where we were able to put together something unique because.

    Speaker 3 (38:31):
    Day one that was understood to be part of the
    payoff of being a financing partner to CoreWeave.

    Speaker 5 (38:38):
    I wouldn't say from day one.

    Speaker 4 (38:40):
    I would just say it's part of the natural growth
    in their business and our growth in investing in the
    AI market and in being a partner with them. Everyone
    is both a partner and a competitor in this space,
    and you know, Nvidia has multiple ways that they invest
    in their customers, as do all the hyper scalers for example,

    (39:02):
    And so it's really about, are you providing something unique,
    something that's different? And you know, right now, at this moment
    in time, we feel like the size of the compute
    we're providing and the network we're providing it on and
    the way that we can provide it in real time
    is unique and is valuable to many companies. Now, look,

    (39:25):
    there could be some companies that are getting their compute
    from somewhere else and it's just not a fit that's
    certainly going to happen. But I think there's many, many
    AI growth companies where this is very valuable to them
    to get the compute on CoreWeave's network, and that's going
    to lead to a relationship with them.

    Speaker 3 (39:42):
    When Amazon makes a VC investment, it's in large part
    understood that it's the same sort of premise that they're
    going to invest in some software company and the money
    comes right back in because that company has AWS needs
    and so it comes back. Obviously, we know that, for
    the large legacy hyperscalers, not only

    (40:03):
    are they building their own models, many of them are building
    their own silicon. Facebook has its own chips, we
    talked about Amazon, and Google has, I forget what their
    whole thing is called. How do you think about them
    as competitors to CoreWeave on this sort of pure
    chips and data center side. I know they're partners, I
    know they're customers, et cetera. But they are also pure

    (40:24):
    competitors, both to, say, a CoreWeave and to, say,
    an Nvidia.

    Speaker 5 (40:28):
    Yeah.

    Speaker 4 (40:28):
    Again, everyone's a partner and a competitor, you know. I
    think the difference.

    Speaker 3 (40:33):
    Google's is TPUs, that's their thing. Anyway, sorry, keep going,
    I just couldn't.

    Speaker 4 (40:36):
    Yeah, I mean, the difference, as Brian talked about, is
    the CoreWeave network was built from the ground up
    to be hyper efficient at running AI solutions, and so
    I think it's unique in that way, and I think
    that's why it's grown so fast. But certainly everyone else
    is trying to build their own out and there will

    (40:58):
    be other people that will have Nvidia GPU chips
    and that will include the hyperscalers. But you know, one
    of the things we've.

    Speaker 5 (41:06):
    Seen is that.

    Speaker 4 (41:09):
    This is very hard technology. So it's particularly hard to
    deploy at scale because you run into like real physics issues,
    you know, surface area to volume type issues of getting
    this much power to a rack, with, like, how much cable
    does that take? How much cooling does that take? How

    (41:31):
    do you run the software layer, like the software layer
    to control you know, a node of eight GPUs is
    going to be a lot different than if you're trying
    to run one hundred and twenty eight thousand GPUs. And
    so this problem gets more and more difficult, and you
    need better technology and you need highly skilled people, and

    (41:51):
    so the bar is always moving. You know, there's always
    a next generation chip that's going to be super complicated. Certainly,
    the Blackwell deployments and the incremental new Blackwell generations are
    going to be ever more complicated and trickier to deploy.
    And you've seen issues already, right, You've seen hyperscalers and

    (42:13):
    other competitors in the space have reliability problems or be
    behind schedule. Like it's not easy. It's a very complicated technology.
    You're not plugging your GPU into the wall and it's
    ready to run an AI model, and so, like, I
    think there's going to be value accruing to skill and
    efficiency and execution in the space, and you know, that's

    (42:36):
    going to last for a while.
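The scale problem described here, where a node of eight GPUs behaves nothing like a 128,000-GPU cluster, shows up starkly in simple failure math. A sketch assuming independent failures and a hypothetical 50,000-hour mean time between failures per GPU (an assumption for illustration, not a quoted figure):

```python
import math

def p_any_failure(n_gpus: int, run_hours: float, mtbf_hours: float = 50_000) -> float:
    """P(at least one GPU fails during the run), assuming independent,
    exponentially distributed failures with the given per-GPU MTBF."""
    p_one = 1 - math.exp(-run_hours / mtbf_hours)
    return 1 - (1 - p_one) ** n_gpus

print(f"8 GPUs, 24h run:       {p_any_failure(8, 24):.4f}")        # well under 1%
print(f"128,000 GPUs, 24h run: {p_any_failure(128_000, 24):.4f}")  # near certainty
```

Which is why checkpointing, scheduling, and the software layer get harder, not just bigger, as the cluster grows.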

    Speaker 2 (42:38):
    So some people draw an analogy between the current enthusiastic
    cycle for AI and the early two thousands period where
    we had a lot of enthusiasm for internet companies and
    telecoms and things like that. Do you see evidence of
    froth out there? Or is it the case that, because
    of the huge amount of initial capital investment that's needed,

    (43:01):
    it's difficult to get, I guess, enough new entrants that
    this would become a bubble?

    Speaker 4 (43:07):
    Yeah, everything can become a bubble eventually in almost any
    industry that's highly capital intensive. Usually if there's excess returns,
    you'll see capital go into it until those returns aren't
    good anymore, and a lot of capital will go in
    before you figure out.

    Speaker 5 (43:24):
    That last part.

    Speaker 4 (43:25):
    But this is extremely early. Like if you look at
    the capital that went into the Internet and then how
    that value accrued to both the big tech companies and
    the startups. People have looked at numbers like three trillion
    dollars of equity value created with the large incumbents, but

    (43:46):
    there was another five hundred billion created for the new startups.
    And we're just getting going here, right. We're just building
    out the kind of data centers, the kind of energy infrastructure.

    Speaker 5 (43:58):
    We're just starting to deploy products. If you talk to

    Speaker 4 (44:01):
    enterprises, they're just starting to implement the most obvious use
    cases for AI. So I think we're much too early
    to worry about a bubble. I talked to somebody at
    a hyperscaler and they were like, the last thing we're
    worried about right now is having too much compute.

    Speaker 3 (44:20):
    Last question for me, you say, we're early. There's still
    no signs of too much compute. Earlier in the conversation,
    you're like, this is a Manhattan Project scale project. Give
    us some flashy number. How much has been deployed in
    this area, and you know over the next ten years
    how much capital is going to be demanded for this

    (44:41):
    space and how much will be needed.

    Speaker 4 (44:44):
    So one number I saw was that in twenty
    twenty three, thirty seven billion dollars was deployed into AI infrastructure,
    and in twenty thirty three, that number is
    going to be like four hundred and thirty billion in
    that year. So this is trillion dollar scale investment.
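Those two figures imply a growth rate worth making explicit. Taking the quoted $37 billion in 2023 and $430 billion in 2033 at face value:

```python
# Implied compound annual growth rate over the ten-year span Jim quotes.
cagr = (430 / 37) ** (1 / 10) - 1
print(f"implied growth: {cagr:.1%} per year")  # roughly 28% a year for a decade
```

Sustained growth near 28 percent a year for a decade is what "trillion dollar scale" cashes out to here.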

    Speaker 2 (45:05):
    Cool, you're cool. All right, Jim Prusko, thank you so
    much for coming on Odd Lots. That was great.

    Speaker 5 (45:11):
    Thank you for having me.

    Speaker 3 (45:13):
    Thank you so much.

    Speaker 4 (45:26):
    Joe.

    Speaker 2 (45:26):
    There's two things that I hear consistently about AI, and
    one is it's going to need a lot of capital, yeah,
    which Jim spoke to. And then the other thing I
    always hear is well, at some point AI companies have
    to actually produce revenue, and I guess the question is, like,
    are they going to start producing revenue in time to
    pay back that massive capital need.

    Speaker 3 (45:49):
    Yes, it's very interesting because, look, I believe that there
    are companies that are getting productive value out of AI models.

    Speaker 5 (45:58):
    Like I believe that exists.

    Speaker 3 (46:00):
    But you know, you talk about hundreds of billions over
    the coming years in financing, and in the end, that is
    going to have to come from profitable deployment to customers,
    and so like this to me is like, you know,
    still a bit uncertain. I do think the financing that
    we talked about is extremely interesting just in the context
    of this conversation.

    Speaker 5 (46:20):
    Yeah.

    Speaker 2 (46:20):
    Absolutely, the GPU-backed loans, yeah.

    Speaker 3 (46:23):
    Well, both the GPU-backed loans and the opportunity that
    affords a company like Magnetar, to offer GPU capacity in
    lieu of cash for equity investments, is extremely interesting.
    And then you get this second order effect. So A,
    you're providing something that other VCs can't, because you are
    giving them access to compute on day one. And then

    (46:46):
    B, other VCs want to enter that deal because they
    know that they're going to be investing in a company
    that is not going to have to be scrambling for
    compute once they get that VC cash.

    Speaker 2 (46:57):
    It's a very sort of middle way approach because I
    think so far the way we've seen AI investment unfold
    is either it's the sort of picks and shovels approach
    where you invest in the chip companies themselves and the
    data centers, or it's you invest in the AI companies
    that are doing cool things. But this is kind of both.

    Speaker 3 (47:16):
    It is exactly both, and it sort of sounds like
    some combination of foresightful planning and also stumbling into a
    very good situation, by which the firm's relationship with CoreWeave,
    dating all the way back to twenty twenty one, does
    now give them a certain edge in the VC race.

    (47:36):
    It's just, this is a fascinating sort
    of open frontier in many respects.

    Speaker 2 (47:41):
    I still want to know who came up with the
    idea for chip based financing. Jim kind of evaded that
    part of the question, but I want to know what
    those initial conversations were like.

    Speaker 3 (47:50):
    Yeah, it's also just interesting to think about that, on
    some level, the analogies are like a car lender, right?
    So it's like, on some level, this is a very
    known lending model, and with technology that is highly uncertain. And then,
    on the other hand, if you're invested in a car loan
    company, you could sort of get it.

    Speaker 2 (48:09):
    Yeah, all right, shall we leave it there.

    Speaker 5 (48:10):
    Let's leave it there.

    Speaker 2 (48:11):
    This has been another episode of the Odd Lots podcast. I'm
    Tracy Alloway. You can follow me at Tracy Alloway. And

    Speaker 3 (48:17):
    I'm Joe Weisenthal. You can follow me at The Stalwart.
    Follow our producers: Carmen Rodriguez at Carmen Armann, Dashiell
    Bennett at Dashbot, and Kale Brooks at Kale Brooks. Thank you
    to our producer Moses Onam. For more Odd Lots content, go
    to Bloomberg dot com slash oddlots, where we have transcripts,
    a blog, and a daily newsletter, and you can chat
    about all of these topics, including AI, twenty four seven

    (48:38):
    in our Discord. Go there and check it out: discord
    dot gg

    Speaker 2 (48:42):
    slash oddlots. And if you enjoy Odd Lots, if you
    like it when we dig into the capital structure of
    AI investments, then please leave us a positive review on
    your favorite podcast platform. And remember, if you are a
    Bloomberg subscriber, you can listen to all of our episodes
    absolutely ad free. All you need to do is find
    the Bloomberg channel on Apple Podcasts and follow the instructions there.

    (49:06):
    Thanks for listening.