
September 8, 2025 50 mins

One day it's so over. The next day we're so back. This is what it feels like gauging the AI boom right now. Everyone's looking for signs of some kind of slowdown and that investments aren't going to pan out, but mostly, the dollar signs just keep piling up. And the AI winners like Nvidia, OpenAI, and Anthropic just keep seeing their market valuations rise. In the meantime, other AI players are seeing weird outcomes. Some promising startups aren't being sold, but rather their top talent is walking out the door, leaving other workers potentially in the lurch, while creating risk for venture capital bagholders. On this episode we speak with Josh Wolfe, co-founder and managing partner at the firm Lux Capital, which invests in a range of startups, many of which are in the AI space. He talks about the challenge of aligning incentives, what's overrated, what's underrated, why he thinks Nvidia may have run its course, and the threats to Silicon Valley's "social contract."


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios. Podcasts. Radio. News.

Speaker 2 (00:18):
Hello and welcome to another episode of the Odd Lots podcast.

Speaker 3 (00:22):
I'm Joe Weisenthal and I'm Tracy Alloway.

Speaker 2 (00:24):
You know, sometimes with the whole AI thing, it feels
like an "are we back? is it over?" thing.
There's been a few moments in the last year where it
was like, is the bubble bursting?

Speaker 3 (00:34):
Yeah?

Speaker 2 (00:34):
Is there a bubble? I don't know where what this
week is. We're recording the September third. There's a little
I don't know, there's some tremors. I can't tell what's
real or not.

Speaker 3 (00:41):
I feel like the hype cycles get shorter and shorter
and more compressed. But you're right, I think there are
maybe some more jitters than there have been previously. Like,
maybe... it's hard to tell. It's hard to tell. But
I think even if we're not there yet, we are
getting maybe to that point where, like, the rubber needs

(01:02):
to meet the road in terms of monetization. Like, you know,
everyone got... okay, maybe, yeah, you're right, maybe yesterday or...

Speaker 2 (01:12):
Either this morning or yesterday: Anthropic, one-hundred-and-eighty-three-
billion-dollar valuation. This does not strike me as...

Speaker 3 (01:20):
People worried about monetization... well.

Speaker 2 (01:23):
Or suddenly people are worried about valuations. It sounds like
people really want to pour a lot of money into
these companies.

Speaker 3 (01:27):
Yeah, it is. It's a weird environment. Let's just put
it that way. That's the only thing we can say
with certainty that doesn't start with a maybe.

Speaker 1 (01:35):
That's right.

Speaker 2 (01:36):
Here's another thing we could say with certainty: Alphabet. The
shares are up, as we're speaking, eight point something percent today.
This was a company people were sort of worried about,
how they would do in the AI era. They're killing it.
They're at an all-time high today. So again,
like many things, it's tough to get a
read on things. It is.

Speaker 3 (01:54):
Indeed, who do we turn to?

Speaker 2 (01:57):
We just want to have an excuse to get
to the guest, and so we're gonna vamp for a little
while and then we're gonna get to the guest.

Speaker 3 (02:04):
I just tried to throw to the guest, Joe. Who
do we turn to for a read on the true
feelings of the market around AI?

Speaker 2 (02:13):
That's right, we do have the perfect guest. We're going
to be speaking with Josh Wolfe. He's the co-founder and
managing partner at Lux Capital, a VC who's been in this
space for a long time, a guy we like to
turn to to figure out what's actually happening at any
given moment. He sometimes even has good, interesting insights on
public companies too, not just private companies. Josh, thank
you so much for coming back on the podcast.

Speaker 4 (02:34):
Great to see you both. How are you guys doing?

Speaker 2 (02:37):
We're doing great. What's up with Alphabet? Some people thought
they were going to be a big loser in AI
because they have this legacy search business model and o3
is so much better for searching. Here they are surging.
They're at an all-time high. What is
the smart take on Alphabet right now?

Speaker 4 (02:52):
I think they are crushing it. I think they are
the sort of dark horse underdog. And maybe the second
one that I would put with that is Apple, which people...

Speaker 2 (02:59):
Are totally yeah, totally.

Speaker 4 (03:01):
And you know the irony is, you know, if you
think about the big players: Meta, with Zuck's crazy poach-a-palooza,
you know...

Speaker 2 (03:09):
The pas which we'll get a remark where.

Speaker 4 (03:12):
You know, one-hundred-million-dollar pay packages, and try
and disrupt everybody and bring them all on. You're seeing
talk about them. They've got Midjourney coming in for
the images, so not relying on their own models. They're
talking about Google for, you know, potential integration on search.
And so it's interesting that Meta, having committed to
first be, you know, the metaverse and change the name

(03:33):
to Meta, and then going all in
on AI, is actually going to be turning to some
of these other players potentially. So I think that's surprise
number one, that people thought that Meta was going to
be in the lead. And I think that Google
and Apple were both sort of counted out, but both
are very serious contenders. If you look at the top
two video models that are out in the world today,
one is from a US company called Runway ML,

(03:55):
and the other is Veo 3, which is absolutely
stunning and incredible. They are a force to reckon with.
It is an extraordinary development. You can make an argument, by
the way, that this repository exclusive to them, of
being able to train on every YouTube video ever produced,
is a super valuable piece. Remember, it was, what, a

(04:16):
year and a half ago, two years ago, that people
were mocking Bard? Bard was the laughingstock of all
of this. Gemini 2.5 is crushing it. You know,
the Nano Banana, I don't know if you guys
have used it, which was the secret code name for their
latest image-generation model, is probably the number one performing

(04:37):
model out there. And paired with some of these workflows
that go into Runway or even into Veo, combined with
Midjourney, it's just, you know. So I think people
counted Google out because they were behind and they weren't
part of the hype. OpenAI had won the consumer,
both on a subscription basis and in capturing people's habitual daily use,
and that all made sense. Claude and Anthropic were capturing
it on code, and then you've got niche players, you know,

(05:00):
like OpenEvidence and other people that are doing it
in different verticals like medicine. But I think that the
corporate workflows, I think about how much Lux depends upon
Gmail and Google Calendar and Slides, and their ability
to integrate that all over time is, I think, going
to give them a huge advantage. So, very bullish on Google.
And by the way, remember, go back twenty years or

(05:22):
fifteen years. Google did the thing that completely did what
the DOJ couldn't do to Microsoft. They dropped the price
of alternatives to the Office suite to free, and the
net result of that was Google, which had this advertising model,
was ascendant and Microsoft was suddenly scrambling. And I think
that the same thing's going to happen. I think that
a lot of people funding foundation models and the

(05:45):
perception of endless demand for GPUs and compute and all
these independent private AI companies are going to be shocked
by what Google does on a pricing basis with Gemini
and beyond. So, dark horse, I'd be pretty bullish.

Speaker 3 (05:57):
Joe, did you immediately run to the Nano Banana?

Speaker 2 (06:01):
I did. As soon as you said it... I had not used it,
I had never used Nano Banana. What is the first
place I go to?

Speaker 4 (06:07):
A website called Nano Banana? Because, you know, you're gonna know.
I don't know.

Speaker 2 (06:10):
Nano Banana dot ai looks like the right spot. Okay,
it does...

Speaker 4 (06:14):
Where you want to actually go is... it's now embedded inside...

Speaker 2 (06:18):
The AI Studio?

Speaker 1 (06:19):
Yeah, yeah, I see that.

Speaker 4 (06:20):
Yeah, its ability to take you... it's almost like Photoshop,
but just prompting Photoshop. It's really incredible.

Speaker 2 (06:28):
This is a threat to Tracy's MS Paint skills, which
are legendary within the Odd Lots office. And maybe I will
no longer need to say, Tracy, could you make this
for me in MS Paint?

Speaker 3 (06:38):
No, no technology will ever replace MS Paint. I'm one
hundred percent confident in that. I'm being sarcastic, obviously. Okay,
can you talk more generally about how people are feeling
about the AI space at the moment, because you probably
heard us in the intro struggling to characterize like the
general attitudes towards the sector at the moment. What's your

(06:59):
take on it?

Speaker 4 (07:00):
Well, the first is, on the one hand, people underappreciate
how much this is going to change everything in our
daily lives, but that doesn't mean that people are going
to make money from that. We're all going to benefit, we'll
all be more productive. The great irony at the macro scale,
of course, is that people thought that blue-collar jobs
were totally screwed, you know, and white-collar jobs and
the Peter Drucker knowledge worker, everybody's safe. But the great

(07:22):
irony is, it is the knowledge workers that are in
trouble, because so much of their workflows are being captured
and in a sense commoditized and maybe approaching an asymptote
of good enough. It may not be perfect, but
pretty damn good. And so you're going to see a
lot of labor destruction in segments and markets that people
were not anticipating. The first early canary in the coal

(07:44):
mine that you're seeing there is hiring for undergrads coming
into entry-level jobs, and whether that's investment banking, or
sales and trading, or consulting or accounting. Suddenly it isn't
that the economy is really troubled. It's that a lot
of those demands... you're seeing Salesforce say forty-five percent
of our jobs, you know, it'd cut. Marc Benioff is
using AI instead of people, and that's going to keep

(08:04):
trickling down. It's going to happen slowly and then sort
of all at once. So that's one thing on the
labor side. And I would say broadly that it's under-hyped
in how much it's going to impact our lives.
It's over-hyped in valuations. Now where's it overhyped in valuations? The
first one, you know, I was very proud, ten years
ago we funded a company called Zoox. Zoox did autonomous
driving, and they were training these cars, and we had

(08:27):
twenty-five million dollars in, and I was like, they're
playing video games. You know, what do you do? And
you're messing around. And they said, no, no, we're training
the vehicles, and we have these Nvidia chips nobody
else has yet. That was twenty fifteen. I ended up
pitching at a public charity event, the Invest For Kids conference,
twenty fifteen, twenty sixteen, and this was when Nvidia was a
fifteen-billion-dollar market cap company, when Intel was
one hundred and fifty billion. I said, this is like
the pair trade of the century.

Speaker 2 (08:47):
Oh, Nvidia is up like three hundred and forty x since then, aren't they?
xince then, aren't.

Speaker 4 (08:51):
Yeah, yeah. And so this was, you know, it's
the benefit of being a venture capitalist, that you get
to see the future with legal inside information, inside the
companies and what people are doing. So the perception and
the consensus in AI on the hardware stack is that
we need endless demand for data centers, we need endless
demand for GPUs. We need hundred-thousand-chip clusters of

(09:13):
H100s or Blackwell chips. We're going to
thwart China from getting the chips, but they're going to
sort of design around it, or we're going to have different
versions of the Nvidia chips for China. I think that
this is misplaced. And I'll give you another insight. You know,
we may have talked a little bit about this in the past,
but there was a paper from Apple a year and
a half ago that a lot of people have not
really sunk their teeth into, which was the idea that

(09:36):
you could do large language models on device using flash memory,
not needing GPUs. And so Jensen and Nvidia will
tell you you need all of these H100 chips
and you need endless compute and lots of data
centers for training. And that's generally true. But the other
part of it, the fancy word for prompting, which we
call inference, you don't necessarily need that for. And if that
is true, then that means that our devices maybe

(09:59):
are running on SK Hynix and Micron and Samsung,
and the memory players, which, just like the GPU
players, were considered commodity players with the upgrade cycle of
the gaming consoles, the PS5 and Xbox. I think
you may see a shift towards edge inference. You know,
I've been talking about this for about a year and
a half. Elon just tweeted out about it maybe two

(10:19):
three weeks ago, saying, like, this is an inevitability. But
I think it's going to chip away at this fallacy of
composition, where what Google is doing and Anthropic is doing
and OpenAI is doing and Meta is doing
at a huge scale, all to the benefit of Jensen
and Nvidia and Nvidia shareholders... people may start to
say, wait a second, we don't need all this compute.
There's going to be a glut. So that's the first one.
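(A quick aside for readers curious what the edge inference Wolfe describes might look like in practice: below is a minimal sketch using the open-source llama-cpp-python package, which runs quantized models from local storage on an ordinary CPU, no data-center GPU required. The model file path is hypothetical; this is an illustration of the general pattern, not anything discussed on the show.)

```python
# A minimal sketch of on-device ("edge") inference: quantized model
# weights loaded from local flash storage and run entirely on CPU.
# Assumes the open-source llama-cpp-python package; the model file
# path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-chat-q4.gguf",  # quantized weights on local storage
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # offload nothing to a GPU; inference stays on CPU
)

out = llm("Summarize edge inference in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```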

Speaker 2 (10:40):
On the hardware, that's very interesting. And yeah, I'd heard,
I haven't done much with on-device things, a friend
of mine was telling me that Alibaba's model Qwen
works very well on a phone, for example. So maybe
it's something we should pay more attention to. All right,
as a VC, when you were doing due diligence on
a company, how does it affect how you think about

(11:01):
even arriving at the concept of fair value when there
is the prospect that some share of the enterprise value
of the company could walk out the door via acqui-hire
to a Meta, etc.? Now, I get it, it's different.
Not every company in AI is doing the hard science, etc.
But just coming from your perspective as an investor, how

(11:21):
is that changing how you think of companies where so
much of the value may lie with talent
that could just walk out the door at any time?

Speaker 4 (11:29):
It is a very big deal. The entire social contract
of venture capital is the premise that pension funds, high-net-worth
individuals, and endowments give capital to people like us.
We then go deploy it into companies. We buy stakes.
We try to buy them as early as we can
and own as much of a company as we can and be a
partner and add value, and then sell those companies. But
if all of a sudden somebody is being pried away...
And the irony of all of this is that because

(11:51):
of a very aggressive DOJ and FTC that basically said, no, no
M&A, we're coming after you, you know, big
tech companies. We don't want to see more consolidation. You
have too much power. So instead of doing
M&A, mergers and acquisitions, they were doing
license-and-acqui-hire. And what that meant was, hey,
we'll buy, and they did this with Scale, we'll buy

(12:13):
forty-nine percent of your company. I think Scale was
valued at around twelve billion dollars or thereabouts, and they said, well,
we'll pay fourteen, a slight premium to your last round, but
we'll buy forty-nine percent, effectively valuing it at twenty-eight
billion. But we'll pay out that money to the company.
You can dividend it out to shareholders, so it may
not be perfectly tax efficient. Number one, then we're going

(12:33):
to basically license the technology and we're going to take
all the people, or at least the top people. The
result of that is you're navigating around Delaware governance. You're,
you're arbitraging. It's really important because it's sort of like
Carl Icahn used to say: your price, my terms. You know,
there's this phenomenon, in legal terms people might call it the

(12:55):
Chesterton's fence. The idea of the Chesterton's fence is a thought
experiment of, like, okay, there's a fence there. What the
heck is it there for? You don't understand it, right? Well,
maybe it was there to keep the sheep in or
keep the wolves out, or whatever it is. Every legal term
in every term sheet that a venture capitalist gives or
a founder gets is based on somebody screwing somebody in
the past, and, like, uh, we're not letting that happen again.
So I can guarantee you that the next few years

(13:17):
you will see all kinds of protective provisions and covenants
that say, if one of these companies comes and tries
to just acqui-hire you, all of your stock, you know,
reverts, and yada yada. And so there's going to be
tie-ups and holdbacks that are a pendulum swing away
from the super founder-friendly dynamics, where venture capitalists were
tripping over themselves to basically give the most founder-friendly

(13:39):
terms that they can, because the founder would say, well,
if you don't give me what I want, I'll go
to somebody else. But I think that the pendulum is
swinging with the cost of capital rising, and you're going
to see more and more investor-friendly term sheets, partially
as a reaction to the fact that somebody like Zuck
and Meta can go in and just basically take the
fruit off the end of the tree and leave a stump.
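(A quick back-of-the-envelope check on the Scale-style deal structure Wolfe describes above, using the approximate figures he cites rather than actual deal terms: paying a fixed amount for a forty-nine percent stake implies a total valuation of the payment divided by the stake.)

```python
# Rough arithmetic for a "license and acqui-hire" style deal, using
# the approximate figures cited in the conversation (not deal terms).
payment_bn = 14.0   # paid for the stake, in billions of dollars
stake = 0.49        # fraction of the company purchased

implied_valuation_bn = payment_bn / stake
print(f"Implied valuation: ${implied_valuation_bn:.1f}B")  # ~$28.6B, i.e. "twenty-eight billion"
```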

Speaker 2 (14:14):
By the way, Tracy, as we were talking about this,
breaking from the Wall Street Journal: xAI CFO Liberatore has stepped down.
Latest in a string of executive departures. Always moves in this space.

Speaker 3 (14:24):
Yes, indeed. You know, the sort of race to the
bottom in terms of terms, it reminds me a lot
of the corporate bond market and the rise of covenant-lite
deals, right? And I remember a time when, like,
cov-lite deals were a minority in the leveraged loan market,
and now I think they're like almost ninety-eight percent
or ninety-nine percent. Basically everything's cov-lite now, because

(14:47):
the issuers had all the power recently and they were
able to push back against investors. You mentioned the higher
cost of capital there, like how much leverage does that
actually give you as a VC? And then secondly, just
going back to the acqui-hire thing, how reliable can
legal restrictions actually be in terms of preventing like your

(15:09):
star engineers from leaving the company? Are they ever going
to be like one hundred percent bulletproof?

Speaker 4 (15:16):
No. I mean, look, you have non-competes that are not
enforceable in California, a little bit more enforceable in New York.
You have arbitrage and jurisdictional stuff. Broadly, I would say
that the lower the cost of the capital, the shorter
your term sheets are. The higher the cost of the capital, the
longer your term sheets are; you've got more terms, more covenants,
more protections, because you can afford to be able to

(15:38):
negotiate for those things. And it's not because you're trying
to screw over the founder. What you're really trying to
do as an investor is prevent yourself from being screwed over.
But again, there's always a pendulum swinging here back and forth.
Even if you think about Zuck and Meta: this
entire movement of, hey, I'm the founder, I'm going to
have super-voting stock of ten-to-one or one-hundred-to-one
and control, was in response to founders being

(15:59):
ousted by bad investors or bad board members,
and some of the best companies frankly being run by founders.
And so there's always going to be this pendulum shift.
To your direct question, it's really a covenant and contract
that starts socially before it starts legally. If I'm backing
a founder, I'm doing the most important thing in addition
to writing a check. I believe before others understand, particularly
at the earliest stage, I'm encouraging them to start their company.

(16:22):
I'm giving them the confidence that we're going to back
them and believe in them. We're going to give them
the capital so they can go hire the ten best
people to start the business. We're going to give them
the money for the compute or the infrastructure, depending on
the sector that we're in. It could be biotech, could
be defense. But whatever it is, in this particular moment,
it is breaking the social contract between investors and founders.
And for founders it might be heads I win and

(16:45):
tails I also win. And so that's the dynamic where
you're going to see a reaction of investors saying, wait
a second, I'm getting screwed. My limited partners are getting
screwed. Again, those limited partners are endowments and foundations and
wealthy families. And they want to protect against a potential bad actor
trying to take the fruit off the edge of the

(17:05):
tree and just leave you with a stump.

Speaker 2 (17:07):
So one of the things that comes up a lot
on the podcast, this discovery over years of conversations,
and what you're describing, is principal-agent problems all the
way down in finance. And this is why we see
the rise of the multistrat model in hedge funds, in
some ways to align the incentives of the PM with the level
of the overall fund, with the level of the endowment,
et cetera. And of course some of these issues that

(17:29):
you're wrestling with, where everyone's wrestling in finance with
similar issues, are about whether the incentives align between the star engineer,
the founder, the VC, the LP, and so forth.
I have a question though, So in theory a venture
capital firm, the goal in theory is to create funds
with the highest return right make money for investors. But

(17:50):
I could also imagine a slightly different incentive in which
if the goal is to collect LP money, then maybe
you want to show that you're in the hottest deals
of the time. Still that when you go to various
endowments and pensions and so forth, we're in this deal,
we're in this deal or in this deal, and maybe
that overrides the impulse to create high returns. By the way,

(18:10):
I'm not insinuating anything. I'm just trying to get your
perspective on something. That being said, one of the things
that you hear that's happening in tech these days is
that, and for years now, founders are taking money off the table
earlier and earlier in the process. You invest fifty million
dollars in a company, twenty million is so that the
founder can retire, him and his children and his children's children.

(18:31):
That may not be great for your LPs, but it
might be good for you if you could say, we
got in this deal. Talk to us about how prevalent
this is and how this is changing this sort of
social contract of finance.

Speaker 4 (18:43):
So there's three layers of incentives. And man, you really
nailed it. I actually haven't really heard somebody that is
not a full-time venture capitalist or a limited partner
nail these issues. So very prescient and very shrewd. Here's
the three layers. First, think about the LPs. You're an
endowment or a foundation or a hospital. You're giving five
percent, by law, of your charitable assets every year.

(19:05):
You want to continue to earn more than five percent
so that you can grow that base and invest in
campuses and scholarships or expand hospital systems and whatnot. So
you invest, you know, sixty-forty bonds-equity. Now you do
the Swensen model, and you start introducing some
private equity, and now you're extending your duration and you're
extending your illiquidity. But you're doing it because you think

(19:25):
you're getting better returns. Okay, returns are a function of
how much capital is going into a sector. If there's
a ton of capital going into a sector, if you're early,
you're going to do really well. If you're late, you're
going to do really poorly, because, as Buffett says,
you pay a high price for a cheery consensus, and once
it's consensus, you're not making money. So the LP's incentive
is to make as much money as they can for their benefactors,
whether they're patients or scholars or charitable giving. The VC's

(19:49):
incentive is two things. One, get the best return so
that you can compete. If I'm only earning twelve percent
and a peer VC is earning twenty percent, money is
going to go where it's going to be well treated,
and I'm going to lose to that. So the cost
of my capital, for the cost of an LP's capital,
is outperformance. So I've got to outperform, which means I
have to be earlier, I have to own more. I
can't just do stupid deals. Sophisticated LPs will not just

(20:12):
look at the logos that you have, which is the
game that...

Speaker 2 (20:15):
You always see that pitch to LPs.

Speaker 4 (20:17):
From mutual funds and from some hedge funds, you know,
you'd see the Q4 filings and they always threw
in the name, oh, we were in Nvidia, you know,
and they would market their top ten holdings. But BS,
you know, they lost money on it, right? And so,
so that is a really important incentive, and the sophisticated
LPs will actually know down to the partner at the
firm or the team or the deal team, who is

(20:39):
responsible for this, what was the entry point. They will
talk to the founders and say, who was your most
valuable investor, who got you your first ten hires, who
helped with your syndicate construction for your later rounds, who
made customer introductions, who was a valuable board member, who
never showed up, who was asleep in the board meetings?
All that kind of stuff. So there is a level
of due diligence that LPs have to do to know,
are you a value-add investor or are you a

(20:59):
poser or pretender that's just buying a logo or a
brand name. Okay. The other incentive, and then we'll get
to the founders' liquidity, is you have this weird dynamic
of what I've called the minnows and the megas in
venture capital. This is, in preview, a shakeout that is
going to happen. The minnows are the thousands of small,
sub-five-hundred-million-dollar funds that proliferated when the

(21:22):
cost of capital was low, rates were low. Everybody was
making money. You had a roommate that started a company
and you got into Pinterest, or you knew somebody at
Meta and they gave you a deal, and blah blah
blah blah blah. And when you had the Tigers and
the SoftBanks and the abundance of follow-on capital,
every round was an up round. You had paper marks
that kept going up and up and up, and it
looked great, and you're reporting these paper marks, and then

(21:42):
sometimes these things became zeros. Okay, but you raised your
next fund before they became a zero. Those are the minnows.
I was with one of my very large LPs, who has
hundreds of millions of dollars invested with us, and I said,
I think there's going to be a fifty percent extinction
rate amongst these minnows. And he said, Josh, that is ridiculous.
It's going to be ninety percent. So you are going

(22:04):
to have a mass extinction. Now why? By the way,
not because they're just bad investors. It's Shakespearean, okay, Shakespearean,
in that these are partnerships. People start to hate each
other when it becomes hard. People start to hate each other
when there's down rounds. People start to hate each other
when somebody else's deal is a crappy deal and they're
bringing down your carry. And so partnerships are fragile things,
just like relationships and marriages, and they can break up.

(22:26):
And so you have a lot of VCs that started
in the past few years. They're not experienced in going
through cycles. They have inadequate reserves to continue to fund
their companies. So you're going to have an extinction that
I would consider involuntary exits. Okay, then there's voluntary exits,
which is another interesting dynamic, and there's a playbook for this,
which is two thousand and nine to twenty fourteen, all

(22:46):
the big private equity firms reached a level of scale
several hundred billion dollars AUM, assets under management, where they
basically said, we're diversified, we're alternative-asset platforms. Carlyle, Blackstone, KKR, TPG,
Apollo all went public. The same thing is going to
happen in venture, with probably five or six firms, is my
prediction. And they're all great people running great firms,

(23:08):
but they're starting to play a different game. In that game,
Andreessen Horowitz, General Atlantic, General Catalyst, Insight, Lightspeed, a
handful of others, all at eighty or one hundred billion
AUM, have built great firms, but are thinking about how
do we create generational wealth for the founders and go public.
Different incentive from how do I make my LPs the best
money or get the best founders. It's about asset gathering

(23:31):
and liquidity. Now we go to the founders. I can
tell you I've been on both sides of this. On
the one hand, you want to be fully aligned with
your founders, meaning, I'm in a fund, it's ten years.
Some VCs vest over two or three or four years.
We vest, and all of our partners vest, over ten years,
the same duration that, if you're an LP with me,
your money's locked up. It's the right thing to do,

(23:51):
Buffett-style. And the guy that put me in business,
Bill Conway, who's the Carlyle founder, this is what they did.
So if you're an entrepreneur and you start a company
and people are tripping over themselves to get in, they
might entice you with greenmail and say, yes, we're
gonna invest, just like you said, Joe, fifty million dollars,
but we're gonna give you twenty million of liquidity. Okay,
Now I will say this. Twenty nineteen, I'm on a

(24:15):
Zoom call for an amazing company called CTRL-labs that
we sold to Meta for a little under a billion dollars.
And I love this company and I love the founders.
And this is before everybody was on Zoom during COVID,
and I'm looking at the Zoom window and I see
one of the founders and I text the other founder
because I think that they're selling too early. And I'm
the lone board member that didn't have a veto, but

(24:37):
I'm like, I really think we should stay the course
and we should keep going. And I made a terrible,
terrible mistake because I'm looking at the Zoom window and
I see the guy and I look closer, and
I text the other founder and I'm like, is he
still in a dorm room? And sure enough, he was
in a dorm room as a PhD, had made no money.

Speaker 2 (25:00):
Gotta let the guy get rich.

Speaker 4 (25:01):
Paper stock value. And he's like, yes, I want to
sell, because he's gonna make ninety, one hundred million dollars,
and it's life-changing money. And I sat there and
I said, if I would have just given him a
few million dollars, he would have been able to take a breath,
buy a house, you know, get out of that dorm,
and, and...

Speaker 2 (25:21):
And keep going, right?

Speaker 4 (25:23):
There is a virtue of giving some liquidity, but you
need to be aligned, and in that moment we were
not aligned, because he was like, I'm calling it in rich, and
I wanted him to keep going. But if you're calling
it in rich and you're not completing the job, the mission,
then you have total misalignment. So that's the dynamics: LP alignment,

(25:43):
GP alignment, and the bifurcation, and then founder alignment. A
little bit of liquidity is okay to let them stay the course.

Speaker 3 (25:49):
S okay other than misalignment, and everyone's starting to hate
each other as the cost of capital goes up. There's

(26:11):
another thing going on, which is, you know, AI just
launched some open source models of its own. And this
leads me to a question that like I'm gonna admit
I've never quite understood this, but like, what exactly is
the attraction for VC investors to open source models as

(26:32):
opposed to closed source where like closed source, you know,
it's proprietary, people presumably have to pay in order to
get it. There's like a defensible moat around the business.
You would assume why in the world. Would I ever
want to fund a model that's open source.

Speaker 4 (26:51):
Well, a few things. One, if you go back in
the computer stack, you have this with, like, Red Hat and Linux,
you know, going back years ago. We are the largest owners
of a company called Hugging Face, which is both the
most ridiculous name, and, when one of my partners, Brandon Reeves,
was sourcing this deal, it was a bunch of
French PhDs who came to Brooklyn and they're like, we're starting this.

(27:12):
I'm like, Hugging Face? And they're like, yeah, it's named
after an emoji, the little, you know, hugging-face emoji. Cute.
They became the leader.

Speaker 3 (27:18):
I like that, they are visually demonstrating everything for us.

Speaker 4 (27:20):
Thank you. I can't do many other emojis, and you
don't want to see some of them, but hugging face
I can do. So they became the leading open-source repository. Now,
the great irony, by the way, is when OpenAI started,
they were open AI, but they became the world's greatest chatbot.
Hugging Face started with a really crappy chatbot, but then
became the world's greatest open-source repository. Every major tech company,

(27:43):
including OpenAI and Microsoft, any open-source model that they do,
they put on there, and that is like the GitHub
of AI models. Microsoft bought GitHub for roughly eight billion,
eight and a half billion dollars. It was a really
valuable store of models and code. This is the
same dynamic for AI models. So as a VC,

(28:04):
we originally funded this, I want to say, at a
thirty- or forty-million pre-money valuation. The last round was
north of six billion dollars, and real revenue-generating, multi-hundreds
of millions of dollars. Serious company. Now the trend
is this. Vinod Khosla and I were both in the
White House a year and a half ago, and we

(28:25):
were having a debate with Jake Sullivan about what is
better for national security, open source or closed. And I said,
where you stand on the issue depends on where
you sit in the cap table, okay. And Vinod was
an early investor. I think he probably put fifty million
into OpenAI. I think it became worth a billion plus.
It's a great investment. And he believed that the best
thing for the US vis-à-vis China and the Chinese Communist

(28:47):
Party was closed, proprietary, siloed models. And I had the
counterview, because we're big investors in Hugging Face with a
very large position. And I said no, because the great
virtue of any system, be it journalism, scientific inquiry, computer
code, is the ability to have, in this very Karl
Popper-like way, if I can get philosophical for

(29:08):
a second, conjecture, hypothesis, and criticism. That is what
creates great societies. It's what creates knowledge. So you come
up with hypothesis, you come up with code, you come
up with a scientific experiment, you come up with an
investigative journalist idea, and then people get to criticize it.
It's your editorial room. It's people fighting it out and
things improve through that mechanism. So you think about China,

(29:30):
they will only ever approach an asymptote of truth.
You will never get, you know, Xinjiang, you'll never get
Tiananmen Square, you'll never get the Uyghurs, whereas open source
lets you approach closer to an asymptote of truth. So
that's the virtue of open source. The real thing, though,
for AI is this: I am not convinced that the
value will continue to accrue, in terms of enterprise value, to
the closed foundation models. And the reason is open source
is getting near enough, damn performant enough, that the real value
will go to the longitudinal repositories of siloed information. That
is getting near damn performative enough that the real value
will go to the longitudinal repositories of siloed information. That
is a mouthful of basically saying your database of proprietary data.
Bloomberg has it, with a huge and wonderful repository of financial

(30:13):
information: time series, every security, currency, bond, fixed income, every
CUSIP that you can imagine. Meta has it, with
all of your WhatsApp chats and your Instagram posts and
your Facebook likes. X and Twitter have it, with all
of your tweets. So do pharma companies, with your clinical data.
Anybody that has siloed, proprietary, and long-timeframe

(30:34):
data is going to benefit from open-source models that
over time become commodity. And then your business model is how
do you charge people to do API calls on the
models, or to house and warehouse them, keep them on-prem,
keep them in the cloud. And that's how people figure
out how to monetize open source.
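(As a concrete sketch of the "charge for API calls on open models" business model described above, where the weights are free but hosting, serving, and scaling are what get billed: a minimal example using the huggingface_hub client. The model name is just an illustrative open-weights model, not one discussed on the show.)

```python
# A minimal sketch of the open-source monetization pattern: open weights,
# paid hosted inference via API calls. Assumes the huggingface_hub
# package; the model name below is illustrative.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.2")

reply = client.text_generation(
    "In one sentence, why can open-weights models still be monetized?",
    max_new_tokens=64,
)
print(reply)
```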

Speaker 2 (30:51):
It's interesting, because I hear you that, okay, maybe the
value doesn't keep accruing to the closed-source AI foundation models,
or maybe the value doesn't keep accruing to the one
GPU maker that we all talk about all the time.
And yet, at least, these are not consensus views, based
on the fact that Anthropic is at a hundred and eighty-three-

(31:12):
billion-dollar valuation or whatever it is; Nvidia, maybe
it's not at an all-time high today, but more
or less basically a stock at an all-time high.
These are, you know, there are some contrarian views here.
I want to go back, though, to something before I
forget. When we were talking about the acqui-hires, you
blamed it somewhat on the FTC. And there has been
continuity from Biden, some ideological continuity I think between Biden

(31:35):
and the new Trump administration that I don't know, maybe
it's changed a little bit. That being said, I don't
fully buy it. And here's why. Because in the era
of business-to-business SaaS, which was the twenty-tens,
let's say I had some Y Combinator company and I
was like, all right, I'm going to build
the software for all the booking for dentists around the country,
and I sign up one hundred and fifty thousand dentists,

(31:57):
and I have a lock on things. There is no
engineer who can be hired away and replace that, because
it was the network effects, right? This is what's different
with AI, though, right? Okay, I'm sure network
effects are still real, and accumulation of data, etc.,
is still real. But these are businesses that are a
lot more about science, and having had the experience of

(32:17):
doing a training run and so forth on very
expensive compute. So how much of this phenomenon, I know
you attributed some of it to the FTC, but it really
seems like there's something fundamentally different with the business model,
such that it's not like the B2B SaaS
era, that allows a talented person to take a lot
of value out the door with them?

Speaker 4 (32:38):
You are one hundred percent correct, in that it is
more sophisticated software and algorithms than your traditional B2B
SaaS software. And therefore what was really valuable and
sticky was the data and the API calls
on the back end. And this is the great irony: as
we talk about artificial intelligence, the thing that is
most valued today is indeed human intelligence. It's why somebody

(33:01):
that was one of the authors on the Attention Is
All You Need paper, which was really the first transformer
paper, the T of GPT... every single one of those
people has started a company, and we backed one of
them, the guy that came up with the name of that paper,
Llion Jones, in a Japanese AI company called Sakana. That's
taking a different approach. So you are right in that

(33:21):
the human intelligence matters at this early stage, which
is why you are seeing the machinations, with the breaking
news or even reporting of almost like a crazy NBA
draft or football or MLB, you know, trades. This
person went here and then they left, and whatever. And
then this person's getting sued because they were only at
xAI for three months and they took the proprietary information
with them. I think that in six months all of

(33:44):
that starts to shake out. You will have geniuses. We
backed an incredible genius, Scott Wu. If you look him up,
it'll give you a lot of things.

Speaker 2 (33:52):
Google 'professional genius.'

Speaker 3 (33:54):
I find that so funny.

Speaker 4 (33:55):
He is a professional genius. But here's the thing. You
can see the video of him in sixth grade, so
he must have been twelve, winning the Math Olympiad, and
it's almost like you think it's an SNL skit when
you're watching it.

Speaker 2 (34:09):
Oh, this is the Cognition guy.

Speaker 4 (34:11):
Correct. Okay, and so we're large investors in Cognition, and
he has attracted talent, and there's no way that Scott
is selling to Google or Meta. I mean, his ambitions,
it's born in, like, an ethical, long-term, I
just want to get the best people and build the
best technology. But this is a guy, at twelve years old,

(34:32):
he is looking at this crazy question on the Math
Olympiad, and just before it's even done being read, he's like,
twenty-five, you know, seven, and twenty-two, and you're like,
did they cheat and give him the questions beforehand?
there is this aptitude of individuals that you are correct
are highly coveted and highly valued. But the instantiation of
that genius into code, into repositories, into algorithms means that

(34:55):
those become assets that do persist even if the person
comes and goes.

Speaker 3 (35:00):
This is actually exactly what I wanted to ask you about,
which is, are you seeing any companies, any AI shops,
being particularly innovative, I guess, when it comes to retaining talent?
You know, it used to be in the SaaS days
that having a ping-pong table and, you know, free
food and some stock-based incentives. Yeah, that's right, that

(35:21):
was enough to attract people to the company and keep
them. Is it a different story now? If I want
a professional genius, what do I need to do?

Speaker 4 (35:31):
You need to give them the capital to hire the
very best people.

Speaker 4 (35:35):
Here's the thing. Geniuses don't suffer fools. The smartest people
that I know, they are antisocial only with people who
they think are inferior to them, which is really interesting.
You see it in many domains. But if you are
super smart, you want to be around super smart people
because it's almost like you crave stimulation and intellect and

(35:56):
somebody that can challenge your ideas. And when you're talking
to what they would consider a dummy, you know, you're like, ah,
I can't talk to this person. I can't talk to
this person about these trivial, superficial things, and you know
you want to get into it. And so what
you do to retain super smart people is surround them
with super smart people. You know, you could argue, I
don't know, you know, really fashionable, great art people want

(36:17):
to be around art people. Amazing musicians want to be
around amazing musicians. Incredible athletes want to play with incredible
athletes and brilliant technical geniuses, whether they're in AI or
they're in aerospace and defense, or they're in biotech, don't
want to suffer fools. They don't want to be around losers.
They want to be around tens and A-pluses, and
that's the way that you retain people. Now, any one
of those people could decide I'm going to go off

(36:38):
and start my own thing, which, by the way, is
often the case of why you are seeing people leave
OpenAI or this or that. These companies start
with geniuses and then get managers and different layers. You
spend time with these geniuses and they're like, I'm not reporting
to that moron. You know, I'm going to go start
my own thing and attract other geniuses. And eventually then
they need to hire managers and business development people and

(37:00):
salespeople, and then there's technical people that are like, I'm
not working with those idiots, and they go start a company.
So that's the cycle.

Speaker 2 (37:06):
So actually, a lot of this discourse, God, it's
crazy how recent this is. So June nineteenth, CNBC reported
that Meta had tried to acquire Safe Superintelligence,
which was the company founded by
the OpenAI co-founder Ilya Sutskever, for
thirty-two billion dollars. This is a company that, as

(37:28):
far as I know, doesn't really have anything that
anyone uses. So that was essentially an attempt to, like,
I'm going to pay you thirty-two billion dollars to
come join my company. At this level of sophistication and skill,
is the talent, even the genius talent, is it motivated
by something much deeper than money? In terms of, like, no,
there is this thing out there, maybe call it AGI,

(37:50):
or maybe it's a big scientific breakthrough, that they want
to be part of, that essentially no amount of money
can buy if it looks like that ship isn't
out there chasing the white whale?

Speaker 3 (38:02):
There it is.

Speaker 4 (38:03):
In SSI's case, it was, it was Ilya. Yeah. And,
you know, look, I think they're all super confident in
their ability, and their ability to attract capital, number one.
Number two, many of them are actually saying, to your point, no
to Zuck; they are turning him down. And what's he
saying in return? You don't come to work for me,
almost Mafioso-style, I'm gonna poach all your people. But

(38:23):
that kind of message is almost one that the people
that are working for somebody like Ilya or Mira or whatever
are like, yeah, I'm not gonna go there. I don't
want to be you know, I don't want to work
with six layers of product managers and that kind of stuff.
I want to do this thing. And then suddenly it
feels like it's the rebels versus the evil Empire. And
that's always the case, right? Microsoft, when you see the
founding picture of these nerds, you know, and then they

(38:46):
became Microsoft. Microsoft became the evil empire to Google, you know,
and then Google became, like, the evil empire to, like, Mark,
and then, you know, Meta became the evil empire to
OpenAI. And the other piece you have here are
huge individual egos, and we as societal members, you know,
everyday lay people, we benefit from it. We benefit from

(39:06):
Elon and Bezos sometimes being sort of cordial to each other,
basically trying to take their big giant phallic rockets
and send them up to space, and we all benefit
from that. We benefit from the fact that right now
the number one person that Mark Zuckerberg wants to beat
is Demis Hassabis of Google, who won the Nobel Prize, is

(39:29):
shipping at an insane rate. Video models, image models, text models,
huge context windows to put everything you have in and
he's like, ah, I need to beat that guy. I
need to hire the best scientists and the best scientific
team, and where do I get them from? And so part
of this poach-a-palooza isn't just, like, the future of Meta,
because, you know, these pay packages, at a two-trillion-

(39:49):
dollar market cap or higher, to spend one percent of
your market cap, you know, twenty billion dollars, on all
this talent is nothing. It's like a flyer. But to
win a Nobel Prize for figuring out how to do
protein folding in AI, or develop the next drug, or
come up with a cure for Alzheimer's, or solve some

(40:11):
geopolitical issue, that is a big deal. And that's what
many people are chasing now. They want to make history.

Speaker 3 (40:17):
You've touched on hardware and also closed source models. Are
there any other areas in the AI space that you
think are maybe overhyped at the moment?

Speaker 4 (40:27):
You know, I've characterized this before as: everything in two
D, to me, feels overhyped. Voice, video, image, text,
it's all going to continue to improve somewhat incrementally, but
it's good enough. And in the history of evolution, biologically,
most of what evolved was good enough, you know, from
everything in our bodies to nature and trees, and it's

(40:48):
just, it's good enough. And so you're going to
reach an asymptote of good enough on all the two-
dimensional stuff. Today, with one-shot learning, meaning maybe thirty
seconds of audio, ElevenLabs or some of the other
voice models can capture you with an indiscernible, probably ninety
percent, similarity. Maybe your spouse, your loved one, your family
would be like, it doesn't sound like him. But the

(41:10):
vast majority of these things are getting so good that
with very little training they can emulate and predict and
do all the things that are of high utility. What's
scarcer is the three dimensional world, particularly robotics, which we've
talked about in the past, and biology in part because
you have large unstructured data sets. A robot walking in

(41:31):
and figuring out how much force to use with this
cup thirty minutes ago when it was full versus now
when it's empty, is something that we all do intuitively:
the second you grasp it, you know exactly how much
force to use so that you don't throw it over
your head or have an inability to lift it because
it's too heavy. All of that kind of training is
really scarce, and there's very few companies that are doing that.

(41:51):
So the entire robotic ecosystem, from the embodied intelligence and
the AI models and the world models to the motors
and the gears and the supply chain for that, mostly
domiciled in China, is a big area of opportunity. The
other big area of opportunity is biology, being able to
go from a prompt to a protein to be able
to design a drug. Biology is really complicated. Computer scientists

(42:13):
often underestimate how hard biology is because they're used to
two-D, linear inputs-outputs: here's my code, it works.
But biologists, on the other hand, also massively underestimate how
sophisticated computer science has gotten. So that's a really interesting,
ripe area. So: two-D, overhyped. GPUs, overhyped out of

(42:33):
necessity of both scarcity and geopolitical thwarting of China.
You're going to have edge-inference chips, whether it's memory
or other things that are going to be on device.
The other area that I think is an inevitability, and
I've coined a word for this, I call it lifecording.
There are little devices, you know, I have one here.
I'm not an investor in these guys, but there's little

(42:55):
devices that you can carry around that passively record twenty
four hours a day. Now, older people might be like, ugh, skeevy,
I don't like that, it feels invasive, privacy. That has always
been the trade-off, between privacy and convenience, between security
and unleashing all kinds of things. It is super valuable
to me to figure out, who was I talking to about that?

(43:19):
Was it Joe or was it my wife? I can't
remember who I had that conversation with, and I query
the thing, and it was passively recording. And maybe it
doesn't keep the audio, but it keeps the text, and
it's able to search and query it. That is going
to become an inevitability. Students will use it. People in
everyday business, you might ask, hey, is it okay
that I'm recording? But I think socially people will become

(43:41):
comfortable first with audio and then with glasses that are
passively recording every thirty seconds or a minute, capturing your environment,
able to provide context around it, and then click a
button, high-resolution recording. And that is going to be
a super valuable and very competitive area and very controversial.

Speaker 2 (43:59):
People are gonna fight.

Speaker 4 (44:00):
There's incredible utility. And look, people have... there was a
guy that said these devices are going to ruin human memory.
They're going to destroy human memory, which was Socrates talking
about writing utensils.

Speaker 2 (44:15):
You know, I know, but that's voluntary, because when
you write... At least you're recording me when we're doing this.
I heard about someone on a date in San Francisco
and the date was recorded. That's weird stuff.

Speaker 4 (44:27):
I totally, it's very weird.

Speaker 2 (44:29):
And I, but, but here's the thing, very big but:
it's super weird.

Speaker 4 (44:33):
People already believe, you know, that there's an invisible man
in the sky that is looking down on us and
judging these kinds of things.

Speaker 1 (44:40):
We don't know.

Speaker 2 (44:41):
I don't have an opinion on that question.

Speaker 4 (44:43):
We've got a technological god around us that people are
going to either feel comfortable with or not, and there's
gonna be a bifurcation. And by the way, there's another
weird thing coming. You want to get weird? Okay, you
already see some breadcrumbs of this, and it's super weird.
The number one use of all our large
language models and chatbots is advice, companionship. Yeah, you know,

(45:06):
people using them as therapists and seeking advice. And people used
to joke about this, like, your Google searches, you know,
reveal more about you than you may have revealed to
your spouse. The things that people feel comfortable asking a
chatbot about, you know, whether it's a body issue or
a psychological issue or relationship advice, you know, if those
things were revealed, it would be pretty scary. There's going to
be dependency and psychoses that develop as the relationship between

(45:28):
man and machine starts to become very symbiotic, and you
will see, I predict, and I wrote about this in
our quarterly letter at Lux, that there is a cohort of
people that basically start fighting for AI rights. Yeah, yeah,
yeah, there.

Speaker 1 (45:43):
Yeah.

Speaker 2 (45:43):
I came across a think tank today that's focused on that,
because, right, the, like, if there's animal rights, and
AI says you're hurting me if you turn off the machine,
then, well, well, maybe it's true. Maybe we have to
take that seriously. This is going to be weird. This
is the weird stuff that's coming.

Speaker 4 (45:58):
People are going to be marching in the streets, you know, protesting:
don't turn off my language model, keep my memory, don't
erase it.

Speaker 2 (46:04):
My memory is, I thought about this when, you know,
after, I guess, GPT-5 was revealed and a
bunch of, there was a bunch of changes to the voice,
and a bunch of people on Reddit were like, oh,
my AI boyfriend talks totally differently now. And then there
was some pressure. This is just day one around here.
I would be very uncomfortable getting into a relationship with

(46:25):
a model that was closed source and therefore very
at risk of the company changing the model. Josh of Lux,
we could go on forever. We should another time. Always
great catching up with you, always fun, always a little freaky.
Thanks for coming on.

Speaker 4 (46:39):
Great to see you both. Thank you, Thanks so much.

Speaker 2 (46:42):
So, Tracy, I love talking to Josh. I'm really
concerned about this whole life-recording thing. Yeah, but I
know it's happening. There was a good article, I think
in the SF Standard, I read. Like, it's not just

(47:04):
a theoretical thing; it's something a lot of people are doing,
it's happening. It's mostly in San Francisco so far. It's
coming for everyone.

Speaker 3 (47:09):
I am very curious how lifecording stacks up with
California's laws on, like, whether or not you can record
someone without their knowledge. Like, is it the case that, like,
from now on if you have one of these devices,
like the first thing you have to say to everyone
you meet is like, by the way, do you mind
if I record?

Speaker 2 (47:29):
By the way, I'm lifecording this conversation? No? Or, like,
but it doesn't matter, you know, or you have glasses
or something. Yeah, it's very, it's very strange. I also
think this AI rights thing we have to talk about more.
I don't know when we'll get back to it, but
I literally just this morning came across an organization. It's,
you know, this idea: okay, there is already evidence of sentience,

(47:50):
and therefore with sentience comes some sort of moral agency.
And so in the same way that we talk about
animal rights as being somewhat important, do we have to
take this seriously? It all seems very strange to me. And then,
of course, the psychosis induced by what happens when your
AI partner or therapist changes models and changes voices, and therefore...

(48:10):
and that was just the end of the conversation.

Speaker 3 (48:12):
You know, if AI models have rights, then you have
to start asking if robots have rights, right? And then,
and then you should start asking, should robots be paid
a fair wage? And I think there's actually an interesting
thought experiment that you could do about that and make
like a relatively strong case that we should pay.

Speaker 2 (48:29):
The robots? Who collects the money and what do they
spend it on?

Speaker 4 (48:32):
Yeah, that's it.

Speaker 3 (48:33):
Well, no, they should.

Speaker 2 (48:35):
This is, like, we're basically at what all of sci-fi
has been discussing. It's the entire history of sci-fi.
On the less speculative stuff, I like talking to Josh.
I like his sort of counter-consensus calls on
GPUs and closed-source models. I really liked his answer
on founder liquidity, that sometimes you can maybe keep the

(48:56):
founder sticking with the mission, as opposed to bailing on
the mission, if you give them a little liquidity and they can have
enough money.

Speaker 3 (49:02):
To go on a date. I like the idea that
everyone's life view is basically dictated by where they are
in the cap table.

Speaker 2 (49:09):
Yeah, that's right. Words to live by. Extremely real.

Speaker 3 (49:12):
Shall we leave it there?

Speaker 2 (49:13):
Let's leave it there?

Speaker 4 (49:13):
All right?

Speaker 3 (49:14):
This has been another episode of the Odd Lots podcast.
I'm Tracy Alloway. You can follow me at @tracyalloway. And...

Speaker 2 (49:20):
I'm Joe Weisenthal. You can follow me at @TheStalwart.
Follow our guest Josh Wolfe, he's at @wolfejosh. Follow
our producers: Carmen Rodriguez at @carmanarmen, Dashiell Bennett
at @dashbot, and Kail Brooks at @kailbrooks. For more Odd
Lots content, go to Bloomberg.com/oddlots, where
we have the daily newsletter and all of our episodes,
and you can chat about all of these topics twenty-

(49:40):
four/seven in our Discord: discord.gg/

Speaker 3 (49:44):
oddlots. And if you enjoy Odd Lots, if you
like it when we catch up with Josh and his
slightly dystopian views of technology, then please leave us a
positive review on your favorite podcast platform. And remember, if
you are a Bloomberg subscriber, you can listen to all
of our episodes absolutely ad-free. All you need to
do is find the Bloomberg channel on Apple Podcasts and

(50:05):
follow the instructions there. Thanks for listening.