
April 22, 2024 52 mins

Over the last year or so, probably every venture capitalist has become interested in artificial intelligence. So people are still figuring out what types of business models actually work, and who will end up making money in the space. Josh Wolfe has been at it for a long time. As a co-founder and managing partner at Lux Capital, he's been involved in a number of deals in the space, and is already looking at what's next after the wave of excitement for chatbots since ChatGPT was released. On this episode, we talk to Josh about what he's excited about right now, including robotics, biotech, and maintenance. He tells us that just as ChatGPT opened everyone's eyes to the power of chatbots, a similar moment is coming in the robotics space.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios, Podcasts, Radio News.

Speaker 2 (00:20):
Hello and welcome to another episode of the Odd Lots Podcast.

Speaker 3 (00:23):
I'm Joe Weisenthal and I'm Tracy Alloway.

Speaker 2 (00:26):
Tracy, let's talk about AI some more.

Speaker 1 (00:29):
Okay, well, we could just have AI write the script
for us. We could save ourselves some time.

Speaker 2 (00:34):
No, I don't think the technology is there yet. You
know what, I can't say who, but I was talking
to a professor recently and she said something really
interesting to me. I'm not sure I'm supposed to say, but it's fine,
I think. And she said, like, you know, there's all
this anxiety about, you know, kids cheating on their essays
or having ChatGPT write their essays for them,

(00:55):
and like, you know, supposedly professors are like tearing their
hair out trying to figure out what to do about this,
and the AI detectors don't really work all that well particularly,
but apparently, like, it seems like the solution is to
just grade them as regular essays no matter what. And
it sounds like at this point all the ChatGPT
essays are basically solid C essays, and so if you

(01:16):
just sort of like take them at face value, even
if you think they might be AI generated, at least
at this point, it doesn't seem like a way, at
least for college students, to write good essays.

Speaker 1 (01:25):
Yet, so our baseline, our average, is ChatGPT.

Speaker 3 (01:29):
Now, yeah, that's basically can you beat the bot?

Speaker 1 (01:32):
Did you see the thing someone was tweeting about one
of the tells for AI generated words is if you
use the word delve. Yes, I saw that, which, as
someone who I'm sure has used delve numerous times on
this podcast and in my writing, I thought was a
little unfair. It is a little, it's a cliché, but
that doesn't mean it's from AI.

Speaker 2 (01:52):
Although, that being said, there's more to AI than just ChatGPT
obviously, and chatbots, and this sort of has come
up on a couple episodes recently, but only like very tangentially.
People talk about the use of like AI and industrial applications,
and I've seen a lot of stuff. There have been
a couple Bloomberg articles about some startups that sort of say, like, okay, well,

(02:12):
like what if we trained robots the same way we
train large language models, where you feed them just like
tons and tons and tons of real world data, and
so that yeah, like you sure you still have to
solve the mechanical engineering part. But then what if that
allows them, like all this training data to do more
advanced industrial things like I don't know, like make a

(02:34):
pizza or you know, be a more powerful humanless assembly
line or.

Speaker 3 (02:39):
Something like that.

Speaker 2 (02:40):
Where it's like, we see all these impressive robots in
videos, like Boston Dynamics, but I never know, like, if
any of this is, like, quite there yet in terms
of actually having value.

Speaker 1 (02:49):
Yeah. So the robotics aspect of AI is something that's
incredibly interesting to me. It kind of makes me think
about the world that we want to see. So it
would be great if we had physical robots that
are able to do stuff like clean up a house
or take care of an elderly family member or something
like that. It's not so great if all of our

(03:11):
technological prowess basically goes into writing satirical lyrics, yes, via
ChatGPT. Like, that's fun, I can do that myself,
But what I really need is someone to vacuum or
dust the house.

Speaker 3 (03:23):
Do the laundry.

Speaker 2 (03:24):
Yeah, exactly, really nice. All right, Well, I just want
to jump right into it because we really do have
the perfect guest. We're going to be speaking to someone
who has been investing in AI for a long time.
A lot of vcs like started investing in AI last
year obviously, but this is someone who has been investing
in AI for quite some time before it became the

(03:45):
hot new thing. We spoke to him last July and
had a great conversation about what he was seeing in
the space. So I'm really pleased to welcome back on
the show. We're going to be speaking with Josh Wolfe,
co-founder and managing partner of Lux Capital. Josh, thank
you so much for coming back on Odd Lots.

Speaker 4 (04:01):
Great to be on. I feel like I should say
hello in, like, a robot voice.

Speaker 3 (04:06):
So what's interesting to you these days? What are you
seeing out there that gets you excited?

Speaker 4 (04:10):
Well, you know, you guys started this off with AI
and with an AI voice. Look, we've had what I would
call, maybe a little bit pejoratively, a little bit lewd and crude:
we've had chips. Everybody knows that, we talked last time
about Nvidia and AMD. We've got chatbots. You already have some
of these guys that are starting to fail. They've raised
billions of dollars and in some cases just you know,

(04:30):
relatively undifferentiated. Big debate between open source, which is approaching
the asymptote of achievement that the big private models
have. And then, being a little bit lewd, you've got chicks.
What do I mean by that? Most of the applications
in AI are the mundane and passe on the one side,
which would be like customer service and basic call center

(04:52):
supplementation or substitution. And then at the other end you
have people that are spending tens of thousands of dollars
a month in some case on AI girlfriends and people
that are doing what they often do with technology which
is used for prurient interests. So those, to me, are the
two barbell extremes in AI, where people are actually making
money and profits serving demand for basic human instincts, I

(05:12):
guess and needs. That's interesting. Overall, you're seeing a big
shift from the compute piece to the energy piece, meaning
people now recognize this bottleneck in AI is not going
to be so much about the chips. We also talked
about this months ago when we said, look, you don't
necessarily need these Nvidia chips for inference, the part
that most people do when they're querying all these models.

(05:33):
You do need them for training, but the power levels
on these things are just enormous. There was a Dell
earnings call where they, I think, sort of accidentally leaked that
this Nvidia B100 Blackwell chip is going
to be a thousand-watt draw of power, which is
like forty or fifty percent more than the H100 chips.
Why does that matter because now you got to figure
out how do you supply the energy for that. And

(05:56):
a tidbit which I thought was interesting was Amazon acquired
a nuclear-powered data center in Pennsylvania. They spent six
hundred and fifty million dollars, they got about a gigawatt
of power. And I think that that is going
to be a trend. I think it's actually going to
usher in a wave of what I like to call
elemental energy, a demand for nuclear power to power
these big AI data centers.

Speaker 1 (06:16):
Yeah, it's kind of funny when you think about it.
I don't think anyone expected uranium to end up being
an AI play, but here we are. I want to
go back to something you said, and you actually brought
it up the last time we spoke to you last year,
and it was the idea of I guess the novelty
of some of these more public facing AI projects, And
I think you pointed out last year it's really fun

(06:39):
to use some of these things, like generate a bunch
of cartoon versions of yourself or whatever, but it might
not be a sustainable business model, and it might end
up being a functionality that is eventually incorporated into another
platform or a different project. Have you seen any example,
specific examples of more public facing novelty AI start to

(07:01):
like go away? I think you mentioned a few failures recently.

Speaker 4 (07:04):
Yes. You've had a whole bunch of companies that basically
took ChatGPT, GPT-3 or 4, and put a
wrapper around it, basically meaning, give the average user who
doesn't know how to use these or even do prompts,
some means to interact with it. And those things raised
a bunch of money. They made these things accessible, and
they've sort of gone away. The foundation models themselves that
are undergirding all of this themselves are also starting to

(07:24):
be relatively commoditized away. And some of these things have
prominent people and have raised a lot of money, but
they are what I would consider failing. Take Inflection. You've
got Mustafa Suleyman, super smart guy, co-founder of DeepMind,
who raised I think a billion and a half for
that company. You know, I'm gonna be careful here because
Microsoft has been probably the savviest actor in this entire game,
figuring out that they can acquire things by doing things

(07:45):
in a clever way, skirting FTC and DOJ oversight. They
effectively control OpenAI. As I think we also talked
about, particularly going into the end of last year
when there was all the drama around OpenAI, Satya said, look,
if OpenAI went out of business, we own it, we
control it. We've got all the data, up, left, right, center,
you know, all around them. Same thing with this company Inflection.
They did, I think, a six hundred and fifty to six
hundred and seventy five million dollar license for the technology

(08:08):
that basically was a payment above and beyond what the
venture investors had made. Venture investors made a little bit
of money, not a lot. Key management went over to Microsoft.
But Microsoft has been very clever. So back to your question,
I think the big are going to get bigger and
are going to be most of the beneficiaries here, Microsoft, Adobe, Amazon,
Amazon themselves. Coming up on the one year anniversary of Bedrock,
they're going to announce that they've got the best performing

(08:30):
model with Anthropic, which they've made billions of dollars of
investments in, now sort of a competitor to OpenAI's ChatGPT.
They're also going to announce something with one of our
companies that hasn't yet been publicly disclosed, in biology. That's
going to be one of the two biggest waves next
biology and what you started the conversation with robotics. If
you just take robotics as an example, I think in

(08:51):
one of our companies, Hugging Face, which is one of
the main repositories for all these open source models, there's
something like sixty thousand text generation models. You know, it's
like fifty-nine thousand, seven hundred something now, but just
an enormous number of text generation models. It's like
everybody's doing this, everybody's trying to do it, all
basically trying to predict the next word based on the
prior one. This transformer technology,

(09:14):
which was invented at Google, ended up parlaying into all this. Guess
how many robotic models there are? Fifty-nine thousand in
text generation, so guess how many robotic models there are?
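The next-word objective Josh describes can be illustrated with a toy Python sketch. This is an assumption-laden simplification, not how transformers actually work internally (they use learned attention over billions of examples), but the training objective, predict the next word from the prior ones, is the same; the corpus here is made up:

```python
# Toy sketch of next-word prediction: count which word follows which
# in a corpus, then predict the most frequent follower. This is a
# bigram model, a deliberately crude stand-in for a transformer.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count the words that follow it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`."""
    return counts[word].most_common(1)[0][0]

# Hypothetical toy corpus
model = train_bigrams("the robot folds the laundry the robot folds the shirt")
print(predict_next(model, "robot"))  # folds
print(predict_next(model, "the"))    # robot
```

Scaling that same objective from counts to a neural network trained on the open Internet is, roughly, the jump the text-generation models on Hugging Face made.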

Speaker 1 (09:22):
A fraction.

Speaker 4 (09:24):
I mean, I'm obviously leading with my answer here, but
nineteen robotic models. Okay, so you got that. To me,
as a venture investor, we're just always looking at where's
there abundance, where's there scarcity? There's scarcity of robotic models. Now, why? Well,
it's relatively easy to train on the open Internet. You've
got Wikipedia, you've got YouTube videos. You know, whether you're

(09:45):
supposed to be doing that or not. Like
the woman from Sora who was asked, hey, how did
you train these things? Was it on YouTube? And she
gave that look, and you probably saw that. So there's gonna
be all kinds of copyright stuff on that. Robots are hard.
Why Most of the robotic stuff that has been out
in the world, like you talked about in your intro,
is constrained in manufacturing facilities, in work cells on an

(10:06):
assembly line. Very specific, parametrically constrained, so very few degrees
of freedom in what they're actually doing. The robots themselves
might have multi-axis wrists and controllers, but they're not
moving around very freely. You have exceptions, like Amazon,
which acquired Kiva, moving the warehouse inventory stuff around, but
again relatively XYZ-axis, not unstructured environments. You and I

(10:30):
and our listeners. We all thrive every day in unstructured
environments and that is where you need enormous training data.
You can't search the internet for that, So how do
you do it. There's a few things that have emerged,
and you mentioned some articles. We funded a company that
recently came out of stealth called Physical Intelligence. Instead of
artificial intelligence, physical intelligence. And it is the crème de la

(10:51):
crème team from Stanford and Berkeley. You've got some open
ai folks, You've got Google Deep Mind folks. They took
investment from open ai US and a bunch of other
vcs, and they are just twenty-four-seven training
robots doing all kinds of crazy things like folding laundry and pouring,
determined to let these robots encounter unstructured environments and

(11:12):
then be able to thrive in them. The next thing
that you're going to see are visual models where you're
effectively giving, like, an Ikea sketch or you're drawing something,
and you're able to instruct the robot to have
a sense of intuitive physics of how the world works
and how things might connect to each other and then
learn from that. And then we're also training these robots

(11:33):
with simple verbal cues. So there's a video you can
see online from some of the researchers where they are
picking nuts and M&M's and separating them, you know,
just as a task of being able to sort and
filter with precision and dexterity, and if they picked a
wrong one, you can actually, instead of physically grabbing the things,
say, stop, grab the M&M's, not the nuts. And now

(11:53):
it knows that. So I think that we're about to
unleash in robotics what will become a ChatGPT-like
moment, where people are so used to seeing robots,
and they see the arms, and they've seen Westworld and
this kind of stuff, and suddenly something happens that just
blows your mind. And I think that's coming soon.

Speaker 2 (12:27):
That's pretty exciting because, like I said, you know, for
like at least a decade, I've been watching those Boston
Dynamics videos online, those YouTubes, and at this point I'm sort
of convinced that it's, like, basically a content generator, because
it never seems like the crazy robot dogs or
anything like that ever become commercial. But maybe this is the
missing link. But you've brought up like three different avenues
we could go on and I want to sort of

(12:48):
eventually hit on all of them. Here's a specific question,
and then we can maybe get back to robotics. This
element where there is such a shortage of advanced, cutting
edge talent, people who really know how to do this.
And you mentioned that guy that got hired by Microsoft
from his other company. As an investor in AI or
robotics startups, is this a dynamic that's different than in

(13:11):
other software or other tech investing. Basically, this sort of
like, highly skilled tech key-man risk?

Speaker 4 (13:18):
Basically, yes, in that you always are looking for what's
scarce and you want scarce talent. If anybody could do this,
it's just not that valuable. Companies would get funded, vcs
would fund forty of them at the same time, or
maybe four hundred of them. Contrast that to things where it's
very web based, or the old Groupons of yesterday. This
is highly technical, often PhD scientists. The vast majority of

(13:40):
the founding teams that we've backed at companies like Covariant
or format or this new company Physical Intelligence, they're all
PhDs that are coming out of Stanford, Carnegie Mellon, and MIT.
Some of the best robotics programs in the world, and
there's a lineage of these great professors. Many of them have passed,
but for example, there's this one guy, Hans Moravec, who used
to be at Carnegie Mellon, and I got to meet him

(14:00):
when he was still alive, but he was one of
the very early pioneers in robotics. And he's got this
paradox that insiders in the robotics world call the Moravec paradox,
which is this weird counterintuitive phenomenon which is basically like
all the stuff that we think is really hard is
actually pretty easy for AI, and all the stuff that
we find totally intuitive and easy, like riding a bike,

(14:21):
that's really hard for robots. So there's this great paradox
that some of the most brilliant researchers are working on,
which is how do we do the kind of stuff
that a four year old can do very intuitively with
these very complex, expensive machines. And there's all kinds of
considerations we could talk about about where are these arms
coming from? The acquisitions that China has been making from
what historically was a lot of German companies. I mean

(14:41):
when I say arms, I mean the robotic arms that can
move things. And then there's this great philosophical debate that
hasn't yet come to the fore, but I believe it will,
and investors are sort of lining up on this. I'm
on the opposite side of some people who are funding humanoid robots.
And the reason that I say I'm on the opposite
side of it is I don't really believe in them. Yes,
you would want somebody to help take care of your
grandma and maybe provide some companionship. But this idea of

(15:04):
the movies of these ex machina kind of robots that
embody a human form. We know that engineering is better
than evolution. If we were inventing a car tomorrow, it
would be a terrible idea to take Fred Flintstone and
use his feet, you know, to power these stone wheels.
We know that an actuator and an axle and an
engine are just better, and evolution didn't create that. Why

(15:27):
would we create these humanoid hands where, if I'm twisting
the cap off of a bottle, you know, I have
to turn my hand like seven times to do that?
Whereas if I just designed the
perfect robot, I'd have a little suction cup that would
go on top. It'd have like a drill bit mechanism.
It would quickly twist it off and then it would
Swiss army knife, you know, swap out for the next

(15:48):
technical gripper capability. So I think that people are misguided
and they're basically going to end up doing things for
like prosthetics or you know, something that's sort of Westworld like.
But I think the practical robots that we're going to
all be using in our homes are going to look
nothing like these humanoid robots.

Speaker 1 (16:05):
This is funny. This is very reminiscent of a weird
conversation I used to have with my dad. He had
like some sort of bugbear around the shape of aliens hands,
and he was like, why are they always shown or
depicted in these illustrations as having like human like hands
or sometimes even three fingers, Like why wouldn't they have

(16:26):
just evolved to the next level of very very efficient physiology. Anyway,
One thing I wanted to ask, and I'm trying to
think how to frame this question or what the right
word is, but how open source is robotics in the
sense of, like how much of the technology is shareable

(16:47):
or replicable? Because I feel like one of the reasons,
and you touched on it earlier, but one of the
reasons we have seen this boom in AI is because
you can go to places like hugging Face and download
a bunch of code, open source code, and build off
of it and it sort of multiplies around itself. But
is there any aspect of that all in robotics or

(17:09):
is it just much more proprietary.

Speaker 4 (17:11):
The hardware piece has historically been very proprietary, although there's
lots of knockoff things. There is a Chinese company which
is increasingly dominating the field. A lot of people don't
know the name. It's called Unitree, and it is
sort of copying the Boston Dynamics robots that Joe was
talking about and that you see in Black Mirror episodes
and those kinds of things. On the software side, it

(17:33):
really is open, because it has the same kernel, the same origins
as many of the AI software things that led to
large language models and came from transformers' academic roots. And
academics like to share and publish, and of course you
can patent certain things. But by and large, the early
systems: something called ROS, as you might guess, Robot Operating System.

(17:55):
People that were doing something called Arduino, which is sort
of for hobby programmers, with hardware and software at
the intersection, and Poppy. There's a handful of these things. But again,
now you're in this mode where you need to find
training data and you need to do the work and
spend the time, and that costs money. So you will
have a mix of open and closed models. And if

(18:17):
you take a company like Physical Intelligence, their MO, their
motive, is: we want to build the operating system
that any robot can basically use to navigate the world.
They want to build the brains for the robots as
opposed to the robots themselves. And there's an interesting philosophical
and scientific tangent. Barbara Tversky, who is a friend,
and she was the partner of the now late Danny Kahneman,

(18:39):
who was also a friend. Her work, and she was
far less famous, but it's I think actually more important work,
is all about motor function. And her hypothesis is that
the entire reason for the existence of the brain, like
the sole purpose of it is to actually produce movement,
movement towards food or a mate, or to run from

(19:00):
a prey or predator, which in turn is doing the
same kind of thing, and I think that some of
the most interesting philosophical questions about consciousness and memory, spatial perception,
embodied cognition, gesturing. I mean, I'm wildly gesticulating while I'm
talking now with my hands. It's just an innate thing
being able to do mental simulations, like when we think

(19:21):
about human brains and machine brains merging. I actually think
it's really going to be very revelatory as these robotic
systems advance, in us understanding that a lot of the
purpose of thinking and intelligence is actually just about moving.
And so that's a pretty cool side effect that I
think is going to come from all the commercial and
speculative ventures stuff that we do and funding these things.

Speaker 2 (19:42):
You keep saying things that like jog me onto something
else that I was meaning to ask you, but you
mentioned this impulse.

Speaker 3 (19:49):
That academics like to share their work.

Speaker 2 (19:52):
That reminded me of something that someone was telling me.
So you know, maybe I'm sure you are on this
site a lot. But for listeners, there's this site, arxiv
dot org, where people, like, publish research in sort of
an open source, sort of ungated manner, about all
kinds of scientific and computer things. And just today, on
the artificial intelligence page, there's like fifteen new papers, and

(20:12):
they have headlines like autonomous evaluation and Refinement of digital
agents or a modular benchmark framework to measure progress and
improve LLM agents. And this guy who I was
talking to made the contention that, like, all this stuff
is being published, and investors, a lot of vcs, don't
really have these sort of like technical chops to judge
a lot of this research.

Speaker 3 (20:34):
It's like the here,

Speaker 2 (20:35):
take my money meme. I'm curious, like, what you see
in this space where it's like there must be a
lot of investors like yourself who are like wowed by
PhDs like doing all kinds of stuff. It's like, we
got a one hundred x breakthrough in the energy efficiency
of this Nvidia chip by training the model differently,
like how do you evaluate the science and how risky

(20:56):
is that for investors, with a sort of, like, plethora
of ostensible breakthroughs happening left and right?

Speaker 4 (21:02):
Well, you're right, there's endless... I mean, it is why
there's this endless frontier, in sort of the Vannevar
Bush sense, and it's always very exciting, and you need
to have a very high filter for these things. Not
everything is commercializable. Sometimes there's just a breakthrough, but maybe
that breakthrough can be licensed to a company. And so
the people who are actually commercializing this and thinking about
capital allocation and recruiting teams and then deciding these are

(21:22):
our top three priorities, and even though there's forty other
really exciting things that we could and maybe should be doing,
we're just not going to do that now. That's really
what company building is about. And so oftentimes we might
have a brilliant scientist, but maybe that person just isn't
a good salesperson. They can't tell a narrative and convince
people to move across the country and join them, they
can't raise capital, and therefore they're not going to be
a great entrepreneur and they're probably better off as a scientist.

(21:43):
But when we're making evaluations, it's how much money is
going to accomplish what in what period of time? And
who's going to care. It's like if you were playing poker.
You're looking at your hand, You're figuring out how much
money do you have to ante up for the next round,
And then what's the exogenous what's the outside view, what's
the market going to say, and will they care. That's why,
you know, we talked a little bit about this, but I'm
very skeptical about other fields that people are funding right now,

(22:04):
in fusion or in quantum computing. I'm so skeptical about
them, in part from hard-earned cynicism of twenty years
of people pitching us things that are always about unbreakable
cryptography and femtosecond annealing of this quantity, and I'm like, so
what? Most of the things that people promised, you know,
about unbreakable cryptography or molecular modeling, people are doing.

(22:26):
They're just not using quantum computers. They're using GPUs, they're
using Nvidia chips, they're using new algorithms. So I've
been very skeptical about that, also been very skeptical about
fusion for the same sort of reason that to your point,
there's like this ignorance arbitrage that people take advantage of.
They take advantage of the fact that investors don't fully understand something.
It's hot, it's on the front page of a newspaper
or magazine or you know whatever. It's a buzz and

(22:48):
they want to play in it, and so they invest
in it. And that's how you get frauds. So we're
always looking and basically trying to say, is this academic
practitioner commercially minded? They're probably not going to leave their jobs,
so we're only getting them twenty percent of their time.
Is the intellectual property real? And then oftentimes, to your
point about the papers, there is a high referenceability
in papers. A paper that is cited enormously or immensely

(23:09):
has a lot more credibility because you have the vainglorious
error detection and correction of other scientists who are trying
to shoot that person down who has all the fame
for doing it, trying to seize that mantle. And so
scientists are not a benevolent bunch. They're just as competitive
as an investigative journalist trying to break the story, or an investor,
or an A&R rep for a music band

(23:31):
trying to get there before everybody else. And we're no different.
They're no different, But that's how all this stuff progresses.

Speaker 1 (23:37):
Talk to us about the robotic arm. Yeah, I'm going
to take the bait.

Speaker 3 (23:41):
Yeah.

Speaker 4 (23:41):
The first point I would make is I don't believe
in these humanoid robot arms with, like, fingers and, you know,
high dexterity. I think it's a cool parlor trick, but I
just think we should have robotic arms that look
more like Swiss army knives that are swapping and moving stuff.
And you can see a bunch of these things online,
you know, different tools for different tasks, and be able
to do them instantaneously. When you get to the industry structure,
you've got Fanuc, which is a major Japanese player. They

(24:04):
started with industrial robots. They do factory automation, but they've
been a key player, probably a twenty five thirty billion
dollar enterprise-value company. You've got ABB, which is a Swiss-
Swedish multinational making industrial robot arms. Those are most of the
things that you would see in even a Tesla gigafactory
or something where Elon's talking about all their automated things
and they've got some of these ABB arms or the

(24:26):
other one is Kuka, which is a German company,
and they were one of the great leaders. They got
bought by a Chinese company, I want to say twenty
sixteen or twenty seventeen, and China's made a bunch of
I think very smart investments acquiring technologies that were a
little bit before their time, And I think it presents
some geopolitical things that probably in five or ten years
we'll be looking at and saying, my gosh, how is

(24:48):
the dominant robot arm supplier or robot body supplier
a Chinese company? Akin to, like, being dependent on TSMC.
So I think there's going to be national robot companies
that form in the same way you're starting to see
national AI companies form.

Speaker 2 (25:07):
How does a company like Physical Intelligence, how are they
solving the data problem? Because like you said, there isn't
just the equivalent of a robot Internet where they can
watch millions of hours of a robot arm trying
to do something, or humans doing something or whatever. What
is the approach by which the sort of the token
problem is being solved?

Speaker 4 (25:26):
I would call it the easy way to do it,
which is actually quite trivial but hard, is doing what
people originally did with robotic surgery. So we had a
company called Auris Surgical Robotics. We sold that to J&J for
six billion dollars. It started with surgeons operating these things
in like a telerobot, so they had little pinchers on
their fingers and you know, from five feet away or
in a totally clean operating room. They were operating, but

(25:49):
it was their hands being teletransmitted to the device. And
so that is the first way. You come up
with one hundred different tasks, maybe the highest frequency things,
like washing dishes, folding clothes, again unstructured environments, being able
to do them in multiple different houses, multiple different heights,
multiple different you know, wet clothes, dry clothes. Being able

(26:09):
to pour coffee, being able to have the dexterity to
open up a K-Cup. I actually don't drink those,
I think they're disgusting, but you know, put it into
a coffee machine.

Speaker 3 (26:18):
They don't drink any of that.

Speaker 4 (26:19):
It's slow, and there it's an engineer that is operating them,
and the movement, compensating for gravity, how much force,
how much tension, how much pressure. That's all information. It's
information that historically has not been captured and some of
that is then extensible. And this is a really cool thing.
You can see some of these robots. You might have

(26:39):
five different robots, but they have this amazing thing called
transfer learning. You teach one robot a thing, and suddenly
the other robot, which is disconnected, or rather
it's connected through the Internet to it, can
actually learn what that robot just learned and perform the task.
So that's actually pretty eerie and pretty cool. It's the
same sort of thing like if I had one roll

(27:00):
that saw where I tossed a ball in a room,
but three other robots didn't know, they would instantly know
because they have the eyes of the first robot. So
there's all kinds of training like that. Then there's schematics
and drawings. So I was alluding to this before of
like Ikea drawings, but once you can take diagrams and
schematics and actually use visual language models, it's something that

(27:21):
OpenAI is a partner here with Physical Intelligence, and they have pioneered some of that. That's going to be wild too, where you can literally just show an Ikea drawing and the robot goes, with a constrained set of limited pieces that they have to put together, a set of screws and wrenches and whatnot, and they can completely assemble whatever it is, a nursery thing or furniture or a desk, which

(27:41):
I think is going to be pretty wild for people
to see too.
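The fleet-wide transfer learning described here can be sketched in a few lines of Python. This is a toy illustration only; the `Robot`, `learn`, and `broadcast` names are invented for this sketch and are not from any real robotics stack:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    policy: dict = field(default_factory=dict)  # skill name -> learned weights

    def learn(self, task: str, weights: dict) -> None:
        """Acquire a skill locally, e.g. from teleoperation data."""
        self.policy[task] = weights

    def can_do(self, task: str) -> bool:
        return task in self.policy

def broadcast(teacher: "Robot", fleet: list) -> None:
    """Share every skill the teacher has learned with the rest of the fleet."""
    for robot in fleet:
        if robot is not teacher:
            for task, weights in teacher.policy.items():
                robot.policy[task] = dict(weights)  # copy, don't alias

fleet = [Robot("r1"), Robot("r2"), Robot("r3")]
fleet[0].learn("fold_towel", {"grip": 0.7, "lift": 0.3})
broadcast(fleet[0], fleet)
print(all(r.can_do("fold_towel") for r in fleet))  # True
```

In real systems the "weights" would be a trained policy network and the sharing would go through a model server, but the eerie part is exactly this shape: one robot's experience becomes every robot's skill.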

Speaker 1 (27:43):
That would actually be a major quality-of-life improvement, not having to put together Ikea furniture. That'd be amazing.
Could you talk a little bit more about how robots
are learning and I guess, like what the different types
of learning are or different patterns of learning, and what
you've seen that's been most promising so far.

Speaker 4 (28:02):
You have two main categories, or maybe three. You've got supervised learning: there you've got input data, the robot is learning, it's being corrected, it's being told in some cases, like I described before, whether it's voice or a gesture or a nudge. Then you have unsupervised learning, where the robots are basically training on unstructured data. They get to

(28:26):
discover patterns, they encounter the world, they encounter boundaries, gravity, those kinds of things, and it might be slower in that case, but they're reducing the dimensions for error. There's a term that I coined. I call it MBTFU, which is mean time between f-ups. You want that to be as long as possible. If you go back to like the early Roombas, you know, a Roomba would

(28:48):
not know if it was cleaning up a spill of chocolate milk, or if your dog made a mess, and it smears it all over your floor, right? You want to increase, as long as possible, the mean time between, you know, basically error reduction. Then you've got reinforcement learning, imitation learning, where you might be controlling the
robot or having it mimic you. There's this idea of

(29:09):
transfer learning, where a single robot learned something, but it
can transfer it to different robots or from a different domain.
So people are trying lots of different approaches, and with more different mechanisms, people are then going to figure out, okay, which is the least data intensive, or
which has the lowest latency and is the quickest, or
which is the best system for training a robot that

(29:32):
you put in a totally unstructured environment and, without any training, sort of what they call zero-shot learning, it's able to figure out from prior knowledge: I know that I can't go through that chair, I know that it swivels, I have to turn it this way, I know how much force I need to pick up a regular Coke can, those kinds of things. And I think, again,

(29:54):
it's going to be enlightening how much we, as we navigate any given minute in our life, take for granted all this intuitive, tacit knowledge that we have about the physical world. There really is this intuitive physics of how we move around that robots are going to learn.
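The MBTFU metric is just the mean of the gaps between consecutive failures, the same shape as the classic mean-time-between-failures calculation. A minimal sketch (function name and numbers are invented for illustration):

```python
def mean_time_between_failures(failure_times):
    """Average gap between consecutive failure timestamps (in hours, say)."""
    times = sorted(failure_times)
    if len(times) < 2:
        raise ValueError("need at least two failures to measure a gap")
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

# An early Roomba-style robot fails often; a better one goes longer between mistakes.
print(mean_time_between_failures([0, 10, 30]))    # 15.0
print(mean_time_between_failures([0, 100, 260]))  # 130.0
```

The goal described above is simply to push this number up: a robot with a longer mean time between errors can be trusted with longer unsupervised tasks.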

Speaker 2 (30:11):
Here's a really simple question. In the next five years, is it plausible that I'll have a robot in my house where I could take all the clothes out of the dryer, drop them into something, and have them turned into folded clothes? Or, if not that, what could be the ChatGPT of robots that's right around the corner?

Speaker 4 (30:31):
Well, one thing which I posited to the team, because I was thinking about exactly that, you know, what would be a really cool thing: I lose stuff in our apartment all the time. We have a bunch of different rooms. I would love to basically say, "Has anybody seen," which is often what I do to my wife and three kids, "has anybody seen my wallet? Has anybody seen my glasses?" Okay, just announcing that. You can see that a robot with

(30:53):
a series of things in your home that would have visual identifiers and machine learning to be able to spot an object in a video frame could say, yes, Josh, I know exactly where they are, and go and retrieve them. So fetching and retrieving objects in the home to me would be a pretty cool thing. Where'd I put that remote or whatever? And the robot basically knows, because they

(31:14):
can go through the DVR of the home and they basically know where it is and they can fetch it and retrieve it with the right physics. Folding laundry, I don't know how to handicap that. We can do that now pretty crudely, though.

Speaker 2 (31:24):
That's the one I want, because I have a small New York City apartment. I don't lose things too much. Basically, like, I need that folding robot.

Speaker 4 (31:32):
What are you gonna pay for that, though? I don't know. You know, would you pay five grand, ten grand, for a robot that folded your clothes? Probably not. So it's gonna have a high price point. That's why most of these things have found their way into industrial uses first, and over time they'll get cheaper and cheaper, they'll get better and better. But, you know, look, I'm one of the few people that have an Amazon Astro. It's a robot

(31:54):
that, you know, rolls around the house and you can tell to go into a room. You can put something in the back of it and have it take it somewhere. It can do facial identification for my family, so I can say, where's so-and-so, and it'll find my younger kids and I can send a message, which annoys the hell out of my wife. But I think it's sort of cool, and yeah, every robot that comes out, I'll be an early adopter, and yeah, we like funny things.

Speaker 1 (32:33):
So one of the interesting things about the current tech
cycle and all this enthusiasm for AI is that so
far it's been the big incumbents who seem to be
winning here. And part of that is because the capital investment needed is so large and the amount of data needed is so large. When it comes to robotics, would you expect to see a similar thing? And then, added on

(32:56):
to that, could you have a situation where, you know,
if you are a large manufacturer, or perhaps you are
a company that has a lot of proprietary data, like
an insurance company or a financial company or something like that,
could you develop your own robotics Like would that be
your edge here? And could you potentially just do it yourself?

Speaker 4 (33:18):
On the first case, if I look at the current landscape, the one company in sort of the big Magnificent Seven would be Amazon, just because they have already been investing in this for a long time. Jeff Bezos is very passionate about robots. So I think there's a DNA there of doing things that enable them to do the

(33:38):
three things that Jeff loves to do historically, which is increase choice and availability, increase convenience for customers, and lower prices, and then factory automation, warehouses, delivery. Even when they bought our company called Zoox, for a little over a billion dollars, that was doing autonomous driving, they have a long-term intention to be able to do twenty-four-seven, right-hand-turn lanes, navigating around the city with a

(34:00):
human that's basically delivering last-mile kind of stuff. And so I could see Amazon doing that. Microsoft, I don't really see them getting into robotics in a significant way; they have some R&D efforts. Google, and we talked a little bit about this phenomenon, but they are the third or fourth group where we have taken a team out of a big tech company. We did it with Google with a company called Osmo to create basically Shazam

(34:23):
for smell. We did it with a bio-AI company called EvolutionaryScale out of Meta that's going to be announced more publicly soon. And then we did it with this team out of Google that became Physical Intelligence, between Google DeepMind and OpenAI and Stanford and Berkeley.
So I think that this is going to be more
the startups. The beneficiaries of the money will still be

(34:44):
Nvidia and some of the chip players, some of the
hardware providers, because we need the hardware to be able
to train the robots. But I really think that this
is an open field. And again, just going back to that stat of how many large language models there are for chatbots and how few there are for robots, I think it's a big opportunity. Now, five years from now we might have a bubble in robots, but today I think it's a really exciting field.

Speaker 2 (35:05):
Just on existing AI and competitive advantage. So when I first encountered Google, in the year two thousand I think, I used it, and I was like, this is way better than Yahoo or anything else, and I never stopped after that. Again with chatbots, just going back, even though we're talking about robotics here, it's like, I got a ChatGPT Pro account or whatever early on, and

(35:28):
I thought it was pretty cool and I like use
it for some stuff. And then like the new version
of Claude came out from Anthropic and I was like, oh,
this is actually kind of cool and I kind of
like it more. I don't really know why I like
it more, but for some reason, I like it more
and I had no problem switching over. Could it be possible that some of these, like, core models do not prove to be as sticky or have as deep

(35:49):
a moat as people expect?

Speaker 4 (35:50):
Totally. Inflection, which launched theirs, called Pi, just wasn't that good. They've now gone to Microsoft. Anthropic, yeah, at first was a little bit behind. They now have the most performant model. They're the best-performing one. They're the one that I also, like you, use the most. Why? It's fastest, it has a little bit more.

Speaker 3 (36:08):
It seems to talk a little better.

Speaker 4 (36:10):
Yeah, but here's an interesting thing to your point about, you know, sort of moats. Google, early on, most searches for Google (and remember, Google generated, you know, an enormous two hundred and eighty billion dollars of ad revenue), most of those are like five-to-ten-word searches, right? So you put something in like, I don't know, restaurant in the West Village or, you know, whatever. Okay, Perplexity,

(36:34):
which I don't know if you've used, and we met the founder early on and we ended up not investing, and it was probably a miss for us, but the founder, they are focused: you know what, I'm going to do the point-two percent of searches, or the one percent of searches, that are longer form, where people really want to ask a full question. And I see a lot of people that are using it, and it's not coming up with some woke answer or some pithy, you know,

(36:55):
Grok-like response. It's actually a well-researched, footnoted, sources-cited response. So to me, the two things that I use most on the chatbot side right now are Claude from Anthropic and Perplexity. But in six months there might be something totally different. And the sheer number of models that are available open source. Who knows what Apple

(37:18):
ends up doing here and that being integrated. You know, Siri sucked, but so did Alexa. Amazon's making moves; Apple will too. So yeah. But, you know, to zoom out for a second on all of this stuff: venture is going through this downturn, and the one area where there are very high valuations and a lot of money flowing and a lot of talent has been AI,

(37:41):
and therefore the future returns on these things are going to be lower. And that's why we at Lux decided, you know, we're going to focus on AI in the physical world, having done all this stuff over the past five years. I would say there's this five-year psychological bias: everybody wants to be invested today in what you should have been in five years ago. So Hugging Face and Mosaic, which we sold to Databricks, we were in those five years ago. Today we're really interested in biology and

(38:02):
robotics and AI's use in those. And then I have a really weird theme, which is so sexy because it's unsexy, to me and to others. You understand accounting; the vast majority of venture investors and most startup people don't. But you take capex: capex is made up of two pieces, growth and maintenance. And everybody's been funding growth, growth, growth, growth, growth, invest in growth. So I got interested in maintenance. Why? Because

(38:26):
you have trillions of dollars of assets: infrastructure, hospital systems, energy systems, buildings that need to be maintained. And every generation and every new startup and every new investor always
wants to do the new new thing. It's why we
get new music, and we get new food, and we
get new fashion. But there's all these neglected assets, and
I think you can apply new technology to maintaining these systems.

(38:48):
And so I've become obsessed with this unsexy theme of maintenance,
which I think is going to become a hot area
over the next few years.

Speaker 1 (38:55):
Well, you mean maintenance of physical infrastructure. So the idea that you could have, I don't know, a little robot that goes around your factory or a bunch of highways and sort of surveys them for cracks or things that it thinks need to be fixed.

Speaker 4 (39:08):
Totally. It could be infrastructure for transportation, it could be inside of hospitals where there are routine things. And it's also, oddly, and I know you guys have covered this, the idea that AI is really coming for the white-collar workers. You know, you joked that you could talk about AI and generate a script, you know, based on AI.

Speaker 1 (39:27):
Oh no, it wasn't a joke. Very serious.

Speaker 4 (39:31):
They always thought that they were relatively insulated and that it was the blue-collar workers. But let me tell you, the guy that put me in business, Bill Conway, the co-founder of the Carlyle Group, he's spending all his philanthropic money, or a significant portion of it, funding nursing schools. Why? Because he identified a very high-magnitude impact, because we have such a shortage of nurses in this country. That's an opportunity for maintenance, where robots and technology can play

(39:54):
a role. How do you augment and help nurses? Plumbers: we have a massive shortage of plumbers in this country. And so I actually think that blue-collar workers, empowered by technology and maintaining all of these systems around us, are actually going to be a winning combination.

Speaker 2 (40:09):
I want to talk about another aspect of, I guess, AI investing, which is that in the SaaS wave, the twenty-tens decade, compute was very cheap, right? And so basically, for that part, you'd plug into AWS and it's sort of, yeah, I know it probably costs some money, but it's not a big line item.

Speaker 3 (40:29):
Ultimately, for a lot of these companies, how does that change

Speaker 2 (40:32):
in twenty twenty-four, when you're dealing with an AI company and electricity bills exist, or hardware accumulation, depending on where they are in the stack? How, as an investor, do you think about, I guess people will talk about, you know, having to spend more on capex versus opex relative to the prior generation of tech startups? How does that play

(40:53):
out in the investments you choose?

Speaker 4 (40:56):
It's a great question. I'll take the AI world first, and then I'll give you the biology world. On the AI side, you know, take OpenAI. These are all rumored numbers, you know, nothing's fully confirmed, but two billion, maybe three billion of revenue. I think about ten million people paying twenty bucks a month or thereabouts, and probably one hundred million users. You know, I don't know how many of those are unique. But they're not making money on that. They're losing several billion dollars today because you have these

(41:18):
upfront costs, big capex, a lot of training, you know, and then you try to maybe do some big enterprise deals. A company like Hugging Face is profitable because they're not doing that; they're just hosting it and, you know, letting people run inference and then charging and making margin on that kind of stuff. So that to me is interesting: the people that spend a ton of money, they've got to earn it back. And can you get

(41:39):
pricing power by going from twenty bucks a month to thirty bucks a month? And maybe you get that because now you have OpenAI premium, where you have access to, say, Sora for video generation or something like that. So that's going to be a big question: are these profitable investments? Not, are they cool? Not, are they world-changing? Absolutely. But are they profitable investments? And look, the market may not care if they're profitable. The market funds

(41:59):
all kinds of unprofitable things if it believes in the narrative, in the story. But thinking about fundamental businesses and the economic changes between capex and opex, I think in AI it's very hard: if you are building out your data centers, trying to do your own training, your own inference, hosting these models, it's very hard. In biology, we will see an AWS moment, where instead of you having to be a biotech firm that opens your own wet

(42:22):
lab or moves into Alexandria real estate, which specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Strateos. There's a ton that are

(42:43):
going to come in waves. And this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, run an experiment. The robots are performing ninety percent of the activity of pouring something from one beaker into another, running a centrifuge. And then the data that comes off of that, and this is the really cool part: the robots and the machines will actually say to you, hey, do you want to run

(43:04):
this experiment, but change these four parameters or these variables,
and you just click a button yes, as though it's
reverse prompting you, and then you run another experiment. So
the implication here is that the boost in productivity for science,
for generation of truth, of new information, of new knowledge,
that to me is the most exciting thing. And the
companies that capture that, forget about the societal dividend, I think,

(43:27):
are going to make a lot of money.
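The rumored subscription numbers at the top of this answer are easy to sanity-check (all figures are the episode's rumored, rounded ones, not confirmed financials):

```python
subscribers = 10_000_000   # "about ten million people"
price_per_month = 20       # dollars, "twenty bucks a month"
annual_subscription_revenue = subscribers * price_per_month * 12
print(f"${annual_subscription_revenue:,}")  # $2,400,000,000
```

That lands squarely in the "two billion, maybe three billion of revenue" range quoted, before any enterprise deals, and before the multibillion-dollar training and inference costs on the other side of the ledger.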

Speaker 1 (43:29):
Yeah, this actually reminds me of the conversation that we had regarding snack-food innovation, and this idea that you can use a sort of Factorio-like simulation just to run new processes through your factory and see how they would actually work out and what the supply chain might look like. But not to give in to my five-year bias too much and overly focus on ChatGPT.

(43:53):
But where are we in terms of context-window expansion? Because this is something we spoke about with you last year, and I think for a lot of people it's probably one of the overriding annoyances with something like ChatGPT: the fact that you can't actually copy and paste that much text into it, and that you are limited in terms of the output that it actually gives you. Have there been major advancements since we last spoke to you?

Speaker 4 (44:15):
Well, Claude 3 is one of the largest, and then you've got all kinds of interesting collaborations. You've got Nvidia, and Microsoft did one with a huge number of tokens. You've got AI21 Labs that has this thing called Jurassic. Again, a lot of people are making headway here, but we are, I think, a year away from you being able to upload hundreds of PDFs, thousands

(44:40):
of books if they're not already immediately referenceable and be
able to detect pattern change amongst documents, summarize and unearth,
you know, the entirety of key concepts, and then I
think the most valuable thing will be it prompting you
to say, here's a question you didn't ask about all
these documents that you just uploaded. So yeah, I think

(45:03):
we'll just keep increasing the context window. But that said, most of the history of innovation is just like, keep increasing this factor, and then somebody else comes along and invents something and it's like, that factor doesn't matter anymore. You know, my favorite iconic example of this is sailboats. You had these sailing ships back in the day; they just

(45:23):
kept adding more and more sails, like these things started to look ridiculous, you know, and then somebody invents the electric motor and you have a motorboat. So I think we'll have the same sort of thing, and then people figure out, hey, there's a better architecture here than just constantly increasing the context window. And some of that might be with memory retrieval and being able to reference

(45:43):
other models and just go into the archive of what
they have. So yeah, that's going to keep expanding.
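One common workaround for a fixed context window, which gets at why people keep wanting it bigger, is to split long documents into overlapping chunks that each fit the budget. A minimal sketch, with word-level "tokens" and made-up sizes, purely illustrative:

```python
def chunk(tokens, window=8, overlap=2):
    """Split a token list into windows of `window` tokens,
    each sharing `overlap` tokens with the previous window."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
    return chunks

doc = [f"tok{i}" for i in range(20)]
pieces = chunk(doc)
print(len(pieces))                      # 3
print(pieces[1][:2] == pieces[0][-2:])  # True: adjacent windows overlap
```

Retrieval approaches, the "memory retrieval" mentioned here, then pick only the most relevant chunks to put into the window, rather than growing the window itself.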

Speaker 2 (45:48):
Josh Wolfe of Lux Capital, thank you so much for coming back on Odd Lots.

Speaker 3 (45:53):
Always great to get an update on what you're interested in.

Speaker 4 (45:56):
It was great to be with you guys.

Speaker 2 (45:57):
Yeah, and you're prepared already to ingratiate yourself and to blend in with the humanoid robots.

Speaker 3 (46:06):
Thank you so much. That was fantastic.

Speaker 2 (46:22):
First of all, Tracy, I really like talking to Josh and always like getting an update. I really do want the clothes-folding robot, though. Like, I actually think that's a really big deal and would make almost everyone's life better if they didn't have to worry about folding clothes.

Speaker 1 (46:37):
I agree. It would be far more useful to have something doing physical tasks like folding laundry, versus telling you where the thing you lost, yeah, is in your desk.

Speaker 3 (46:47):
I want I need that folding robot.

Speaker 1 (46:48):
I mean, I will say, I know everyone likes to make fun of Alexa as well, but in our house we've kitted out all the lights; they're all those smart bulbs, because we don't have any overhead wiring, so, like, everything has to be lamps. So if you didn't have a robot that was able to turn on all of your appliances at once in a room, it would be
(47:10):
incredibly annoying, because you would be going from lamp to lamp to lamp. So it does make a difference in my daily life, at least. I mean, there's so much to pull out from that. So one thing that I
thought was interesting from an industrial policy perspective was Josh's
discussion of some of the robotic capabilities being developed in
places like China and the idea that we might have

(47:33):
another chips-and-semiconductors-like situation on our hands, where we wake up in ten years and realize that a primary component of robotics is being built much more efficiently and cheaply elsewhere, outside of the US or the West. And
then the other thing I thought was interesting was the
idea of leapfrogging, right? So, I think a lot
of people, myself included, when we think of technological advances,

(47:56):
it's like, can this do this thing slightly faster? Can
it do it on a slightly larger scale, to the
point about the context window and expansion there. But you can leapfrog in technology, as Josh was saying, and you can go from the sailboat to the motorboat, or you could bypass human evolution, for instance, and

(48:19):
instead of having humanoid robots, you could have an Edward Scissorhands-like thing with a Swiss Army knife on the end of its arm.

Speaker 4 (48:26):
Yeah.

Speaker 2 (48:27):
That made a ton of sense to me, which is like,
if you're like starting from scratch, like it's not obvious
that the human form that was developed over millions of
years through evolution is necessarily the thing you want to
create or recreate to do various tasks that you need.
There was a lot in there that I liked. The thing that he was talking about at the end, it

(48:48):
sort of sounded like cloud kitchens, but for biology labs. Yeah,
so if you just have all the robots do it
and then they can prompt you for other ideas, that's interesting.

Speaker 3 (48:56):
It does seem exciting, the idea of ways to

Speaker 2 (49:00):
accumulate training data for these sorts of things. Like, you know, you could maybe solve the mechanical engineering, but there's no equivalent of all of the text on Reddit or Wikipedia or whatever, or, you know, Google Books or YouTube. So, like, having to recreate that as a bottleneck for building robots was really interesting.

(49:21):
I love the term he used, I think it was "ignorance arbitrage," which is a really great term. So it's like, yeah, in a lot of pure-science spaces, you're going to get investors who are willing to throw money at someone who just has a really good idea on paper, because that person is smart.

Speaker 1 (49:35):
Well, I think this is also the really unusual thing
about this particular cycle, which is the dominance of the
incumbents and the fact that on the one hand, you
do have a bunch of open source software and to
some extent you can take something off of a repository
and you can pitch it to investors and say this
is the next big thing, and they might not have

(49:56):
the technological expertise to actually evaluate that. But when it
comes to making you know, actual advancements in something like robotics,
it does feel like you have to have an edge
in one respect or another. You either have to have
the capital to deploy or you have to have access
to that data.

Speaker 4 (50:14):
So I don't know.

Speaker 1 (50:14):
I guess we'll see how it shakes out.

Speaker 2 (50:16):
I guess we'll have Josh back next year, yeah, next spring or summer, to see what the next big thing is.

Speaker 1 (50:21):
Then hopefully he can bring a robot with him of some.

Speaker 3 (50:24):
Sort. Or a folding robot.

Speaker 1 (50:25):
Yeah, all right, shall we leave it there.

Speaker 3 (50:27):
Let's leave it there.

Speaker 1 (50:28):
This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway. And

Speaker 2 (50:34):
I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our guest Josh Wolfe, he's at Wolfe Josh. Follow our producers Carmen Rodriguez at Carmen Armann, Dashiell Bennett at Dashbot, and Kel Brooks at Kel Brooks. Thank you to our producer Moses Onam. For more Odd Lots content, go to Bloomberg dot com slash odd lots, where we have transcripts, a blog, and a newsletter. And if you

(50:55):
want to chat about all of these topics, including AI and robotics, there's a room for that in the Odd Lots Discord chat room: discord dot gg slash odd lots.

Speaker 3 (51:03):
Go check it out.

Speaker 1 (51:04):
And if you enjoy Odd Lots, and if you want us to crowdsource buying a Unitree humanoid robot or

Speaker 3 (51:11):
Something similar, on AliExpress.

Speaker 1 (51:13):
That's right, then please leave us a positive review on
your favorite podcast platform. And remember, if you are a
Bloomberg subscriber, you can listen to all of our episodes
absolutely ad free. All you need to do is connect
your Bloomberg subscription with Apple Podcasts. Thanks for listening!
