Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
And so many faculty are involved with startups and students from Stanford are involved in startups, some very famous ones.
And so, I wanted to be in
that ecosystem and I was.
I was involved with being on scientific boards, um, being board directors, and companies came out of my lab and so on.
And that was exciting.
The part that I was gravitating the most to over time was venture capital.
(00:24):
And so very much finding the fit was a key part of my mindset.
And it took some time for that, but that first meeting with Marc and Ben, I think their mindset of really thinking about the future, being very intellectual about that, you know, how to put it.
I think, too, they reminded me when I joined the firm in 2015 the way, you know, Stanford was in 1999.
(00:45):
That the real desire to really make an impact in that landscape, but still building and still growing.
And so, it was something where tech was a key part of the mindset for driving a16z.
And Marc and Ben, this was something I think that was a key part of, could see the value in tech and life sciences and tech and health care.
So, I came in 2015 and launched our first life science and health care fund, first bio fund.
(01:05):
And so, we're now on our fourth fund, and now it's over 50 people on the team.
Welcome to another episode of NEJM AI Grand Rounds.
I'm Raj Manrai and I'm here with my co-host, Andy Beam.
And today we are thrilled to bring you our conversation with Dr. Vijay Pande.
Andy, this was really a lot of fun.
(01:26):
You know, Vijay has this amazing journey from being a Stanford professor of chemistry, structural biology, and computer science, to now, uh, a general partner at Andreessen Horowitz, where he leads investments in health care and life sciences.
It was really fun to sort of pick his brain about that, uh, transition from academia to industry.
And I have to say, I thought he was really balanced and thoughtful in articulating
(01:50):
why someone might want to stay in academia versus, uh, trying to impact health care from outside of academia.
All in all, this was a lot of fun and really, really great to get a chance to talk to him.
I really enjoyed this conversation.
Vijay has these two sides of his personality.
He's a general partner at Andreessen Horowitz.
He's done amazing work leading their bio fund, so he opened up this
(02:12):
entire new investment area for a16z.
He has done a lot of amazing work there, but it's still really easy for him to tap into this, like, scholarly academic side, too.
So, he can give you very thoughtful, very grounded answers on scientific questions and how they think about investment in these really hard areas.
Like, investing in health care is a very hard thing to do.
(02:32):
And he's able to articulate this very first-principles, very well-motivated investment thesis.
So, I really loved talking to him because you get to learn a lot.
You sort of get some inside-baseball knowledge about how these deals get made.
But he's also able to pull it back and give you these very thoughtful answers to very hard questions.
(02:52):
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health.
We thank them for their support.
And with that, we bring you our conversation with Vijay Pande. Vijay, thanks for joining us on AI Grand Rounds today.
We're super excited to have you here.
Absolutely.
Happy to be here.
Thank you.
(03:13):
Welcome Vijay.
So, this is a question that we ask all of our guests to get started.
Could you tell us about the training procedure for your own neural network, how you got interested in AI, and what data and experiences led you to where you are today?
Yeah, so here I ended up going kind of far back in that I started programming at a pretty young age, like when I was 11, and I was just doing things that were fun.
(03:35):
I ended up doing things a little more seriously later on, when I was 15.
I was in Naughty Dog Software, a computer game company, with just me, Andy, and Jason, the founders, and so on.
And so that was real, but in between I poked around a lot of things.
And I don't know if you guys remember SHRDLU, I forget how it's pronounced.
The Terry Winograd AI thing where you could have the computer pick up
(03:55):
this and that, and it was very old-school AI, but that got my excitement.
And I coded some probably very simple-minded version of that early on.
And then later on in college, neural nets started to get very hot and neural computing was getting hot.
So, this is like 1990 or so.
And so, I was coding up neural nets and doing the math for Hebbian neural nets.
And the thing that most of us saw at that time, there was a lot of
(04:18):
excitement, but neural nets couldn't really do that much because, as we know in hindsight, they were just single-layer neural nets and they hit a limit.
And so, I think most of us put things down to do real things.
It seemed like a toy, and then things changed a little later.
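The single-layer limit being described here, famously formalized by Minsky and Papert, can be demonstrated in a few lines. This is a hedged illustration of ours, not code from the episode: a single linear threshold unit can learn AND but provably cannot compute XOR, no matter what weights you pick.

```python
# A single-layer perceptron can represent AND (linearly separable)
# but not XOR -- the classic limit of single-layer neural nets.
import itertools

def perceptron(w1, w2, b, x1, x2):
    """Single-layer unit: fires iff w1*x1 + w2*x2 + b > 0."""
    return int(w1 * x1 + w2 * x2 + b > 0)

def solvable(target, weights):
    """True if some (w1, w2, b) from `weights` reproduces `target` on all inputs."""
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    return any(
        all(perceptron(w1, w2, b, x1, x2) == target(x1, x2) for x1, x2 in inputs)
        for w1, w2, b in itertools.product(weights, repeat=3)
    )

grid = [i / 2 for i in range(-10, 11)]  # coarse search over weights and bias
AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

print(solvable(AND, grid))  # True: e.g. w1=1, w2=1, b=-1.5 works
print(solvable(XOR, grid))  # False: no setting of weights works
```

The XOR failure holds for any weights, not just this grid: the four classification inequalities are mutually contradictory, which is why multi-layer networks were needed.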
Yeah.
I remember when I was in grad school.
The saying was that neural nets were the second-best way to do almost anything.
And sort of the implication there was that they weren't the best way to do anything.
(04:41):
So, what was the serious thing that you picked up after you decided that neural nets were still in the toy phase?
Yeah.
So, a lot of what I did from grad school, postdoc, and early days at Stanford were physical simulations.
So, you know, go down to the physics of atoms to understand how molecules work and so on.
The real benefit of physics is that you're hopefully not overfitting
(05:01):
and you have real generalizability.
The fantasy from the days of Newton is that you can watch an apple falling down from a tree and then you can predict planetary motion, something that seems very far from overfitting, right?
That you really understand the fundamentals.
I think a big concern with ML was always overfitting, especially in an age where we didn't really have much data.
And so that was a natural thing to do, but I think, probably 2012, 2013,
(05:24):
2014, things started to change both in terms of data, but also many of us, and this was a key thing on my mind at that time, were pushing for more compute.
And so that combination really changed things.
And correct me if I'm wrong, you were faculty at Stanford for about 10 to 12 years in this lead-up to the sort of deep learning renaissance that we're in now.
Yes.
I started in '99 and I started at a16z in 2015.
(05:48):
So, 15, 16 years.
And I was primarily in chemistry, but had appointments in computer science, structural biology, and I was chair of biophysics.
So, a lot of the work I did was at the intersection of those different fields.
So, I think we're going to circle back to this at some point, but could you tell us a little bit about the decision making for leaving your prestigious, presumably cushy academic
(06:09):
job to transition into the exciting world of investment and venture capital?
Yeah.
So, even from the beginning, very much the rationale for me to go to Stanford, one of the things I really got excited about Stanford, was to be in the middle of the startup ecosystem.
And for those of you that aren't familiar with just the geography of the area, Stanford and Sand Hill Road, which is the home of venture capital in the
(06:30):
area, and to some extent the world, you could in principle walk, but it's like a five-minute drive between the two. And the ecosystems merge very much.
And so many faculty are involved with startups and students from Stanford are involved in startups, some very famous ones.
And so, I wanted to be in that ecosystem.
And I was involved with being on scientific boards, um, being board
(06:51):
directors, and companies came out of my lab and so on, and that was exciting.
And being a part of that ecosystem, the part that I was gravitating the most to over time was venture capital.
And it was something that I'd been thinking about, but you know, venture capital firms are about as different as people are different.
And so very much finding the fit was a key part of my mindset.
And it took some time for that, but it'd been on my
(07:12):
mind ever since the beginning.
And so what was it about a16z specifically that made that sort of Vijay-market fit feel so good?
Yeah. I think first meeting with Marc and Ben, I think their mindset of really thinking about the future, being very intellectual about that, how to put it, I think, too, they reminded me when I joined the firm in 2015 the way Stanford
(07:35):
was in 1999. That the real desire to really make an impact in that landscape, but still building and still growing.
And so, it was something where tech was a key part of the mindset for driving a16z, and Marc and Ben, this was something I think that was a key part of, could see the value in tech and life sciences and tech and health care.
And that combination was very, very exciting to me.
(07:55):
And I think a lot of other venture capital firms took a more traditional mindset, that tech and life sciences or tech and health care were not going to mix.
And again, correct me if I'm wrong, but I think you came there to start their biotech fund, that essentially you originated that sort of whole arm of a16z.
Yeah, yeah.
So, I came in 2015 and launched our first life science and health care
(08:15):
fund, the first bio fund.
And so, we're now on our fourth fund, and now it's over 50 people on the team.
And that was part of the other excitement, was to actually really build something.
And that's something I very much enjoy, but especially to build it in this new mindset, that it would not be looking backwards the way traditional funds are built, but really looking forwards.
Okay.
So, I think I'm going to take this in chronological order here.
And can you just talk to us about what your biotech investment thesis is?
(08:38):
So, I think that a16z does have this unique perspective, like how do you understand value in this area?
How do you understand good companies?
And I think also, like, how do you identify good founders?
Because I believe that's a huge component of investing in successful companies.
Yeah, so this is something where I think this was probably more unusual 10 years ago when I started the fund, and it's fun to see that I think the rest
(09:01):
of the mainstream investing has come a long way. But, especially when we launched the first fund, the concept of technology having an impact on life sciences or health care was pretty radical and even debatably heretical.
I remember people telling me that machine learning or AI would never have an impact in drug design.
And that, okay, we've seen this before and people have talked about this before.
(09:22):
And that was a key hallmark, but I think I don't want to over-rotate on the AI side.
I think the true nucleus of what tech brings, AI is an example of, but not the only example of. I think the true nucleus is a concept of engineering.
That we're going from bespoke, artisanal discovery to something that's designed, engineered.
(09:43):
And when we talk about engineering, what we really like about engineering is the fact you can make it 20% better year over year.
And that sounds actually relatively modest, right?
But if we compound that over decades, that's what Moore's law is.
That's what the cost of genomics going down exponentially is. Once you can get something improving with that regularity, and that regularity
(10:03):
comes from an engineering mindset.
Now things change.
And so the idea of bringing engineering to life sciences and engineering to health care, uh, that was a really foundational aspect.
I wonder, though, if there's an important difference here that you had to grapple with, especially in the early days. I think this is appreciated now.
So, you mentioned, like, Moore's law for the cost of sequencing.
But in drug development, we often talk about Eroom's law, which is
(10:24):
just Moore's law spelled backwards.
And it points to the fact that actually drug development is getting longer and more expensive.
And so, you're often working on radically different time horizons, especially for return on investment, when you're thinking about a biotech than you are for a traditional tech company.
So, was there some type of expectation adjustment, something that you had to do in those early days?
Again, now I think this is appreciated. There are some similar theses that you
(10:46):
could identify between tech and biotech, but the timescales are just so different.
Yeah, the funny thing is, like, the time scale to IPO for biotech is faster.
The time scale to revenue is slower, but the revenue ramp can be really fast.
To put it in tech words, like, the product-market fit is perfect, right?
If you have cancer and this is the only first-in-class,
(11:07):
it's going to do well, right?
And so, it's just, it's a very different mindset, I think.
The key thing that, um, we very much have tried to stay away from, though, is something that has a lot of single-asset risk, something that's really hard to predict.
And the benefit of a platform company is that if the first asset doesn't work and the platform is really productive, you can have many beyond that.
(11:28):
In practice, I think one of the big challenges, even for modern platform companies, is that once you have that first asset, that's where most of the value is.
And so, there's a very strong temptation to pile behind that asset.
And so, building a true platform company, a modern-day Genentech or Amgen, or even more recently, Alnylam or something like that, those are hard to do.
(11:48):
They're truly hard to do because of the temptation to pile into the asset.
Yeah.
There's always a tradeoff between exploration and exploitation and all of these things.
I wonder if you could tell us about a couple of companies that you've been involved with that you think sort of embody a successful platform approach.
A natural one, and this is a fun one, an investment I'm on the board of, is a company called insitro.
We've had Daphne on the podcast. So, there you go.
(12:10):
So Daphne is a true OG in AI.
And so, um, insitro is her vision for bringing AI to drug design. I think she's primarily, especially, tackling the challenge of using AI to unravel biology, and especially human biology. That the fact that drugs work so well in mice and so poorly in humans is obviously because we can run experiments on mice.
(12:31):
We have lots of data on mice, and it would be naturally unethical to do the analogous experiments on people.
So how can you do that?
And it's a perfect role for AI to gather all that data and to build models that are predictive.
They're not perfectly predictive, but they're way more predictive of human beings than a mouse would be, which is the bar.
And so, for better or worse, the bar is actually not crazy high.
(12:51):
And she very much wants to build a true platform company where there's not going to be a single asset, but multiple assets.
And what's intriguing about what she's built is that the ability to understand biology has implications for the whole drug design process. For not just finding targets, but you can imagine what you could do with that for trials and beyond.
Got it.
Got it. So, I'd like to know, so we've gotten you through biotech investor, now
(13:14):
health tech investor, I think, is your most recent arm, something that I think is a little closer to our wheelhouse here on AIGR. And I think something that, even though I'm deeply involved with this, I have less clarity on than on biotech.
As you pointed out, the markets are exceptionally clear and well defined.
It's very easy to understand value pools in biotech.
(13:35):
In health care, it's not even always clear who the consumer is.
You know, you've been on record as saying that the biggest company in the world, in the future, will be a health care company.
Could you walk us through sort of the investment thesis for health care companies, especially as they interface or interact with AI platforms?
Yeah, so the health care part of the fund is a key part of it.
(13:55):
I think one of the unique aspects of the fund as first constructed is that we would do both.
That typically life sciences and health care investors and tech investors, those are typically three different firms, if not three different funds.
And I think one of the rationales for especially including health care is that I was expecting then, and I think we're very much seeing now, a blending of these areas.
(14:16):
That modern drug design companies, modern biotechs, have to really think about care delivery in their choice of therapeutic areas and so on.
And then if you're a care delivery company, the things that are coming down the pipe in terms of new pharmaceuticals radically change things.
I mean, GLP-1s are just one of many examples.
So, in terms of health care leading to a huge company, I think the opportunity
(14:36):
here is that we've yet to see a health care company that is built the way a consumer tech company is built.
And if you think about it, the impact of these companies could be way bigger than any FAANG company, way bigger than Facebook.
In terms of what you care about, it's hard to put something more important than your health, or your parents' health, or your spouse's health, or children's health, and so on.
(14:57):
The challenge, I think, and that's what you've alluded to, is that the current health care system is complex, and actually who pays for what and how things work is fairly complicated.
Health care companies with a consumer mindset, and that could be a couple different things.
It could be, in principle, who pays, and that's one thing we could talk about, but I think the key thing about a consumer mindset from a tech perspective is changing consumer behavior.
(15:18):
And one thing that tech companies are particularly good at, and tech is particularly good at, is changing behavior.
And for better or worse, you think about all the things that I do, my colleagues do, the world does to try to impact life sciences, in the end, there's many things on care delivery that you could do that have as big or bigger impacts.
The quote that's usually brought up is that even curing cancer would lead
(15:40):
to three years of added lifespan.
And cancer is something that we're very excited to have a huge impact in, and it's a horrible disease, but I think this really points to the fact that there's many other things that we should also be paying attention to.
And that's, I think, part of the opportunity for health care: to think about how we can motivate people in consumer-oriented experiences that we have in tech to be able to actually
(16:02):
really take control of their lives.
And we've made several investments in this space as well.
And it's still early, but I think even the early days have been fairly exciting.
Could you talk about some of those investments in this space?
I think Hippocratic AI is one that comes to mind, and some of the others.
So maybe you could tell us about that and how you see the space evolving.
(16:22):
There are a couple different categories we could talk about.
One is AI plus health care, providing to the health care system.
And Hippocratic is a great example of that.
So, what Hippocratic does is that it provides essentially AI nurses, and they've been very clever in terms of going after areas that could still have a big impact on the health care system, but for now avoid challenges
(16:42):
such as diagnosing or prescribing drugs that a doctor would do.
And also, actually, one of the big surprises is that I think we'll see in hindsight, when we look back at COVID, that as much of a mess and as horrible as COVID was, there were actually some interesting health care tailwinds that were created during COVID, especially in terms of virtual-based care.
And so, anything that you do with a nurse
(17:04):
on the phone or on a Zoom, in principle, you could do with AI.
And right now, Hippocratic has AI nurses that one can talk with.
They can be used for prep and other procedures.
And if you think about it, one of the crises that we have right now is that there's a huge nursing staffing crisis.
This is a very natural role for AI to play, where it can do something
(17:24):
that could have a huge impact.
Maybe we'll get into clinical in time, maybe not immediately, but still today drive a lot of value.
And I think about primary care as well, right?
Even situated, as Andy and I are, here in Boston, where we're attached to a major medical school and university, many hospitals, I think it's still hard to have time with your physician, right?
(17:45):
Just to actually discuss your care, go through the shared decision making.
What are your personal values?
What are your goals?
And I think this is why there's a lot of excitement amongst a lot of us that AI might help actually bridge some of this gap and fill this gap in counseling and talking to patients and thinking about their care.
And ideally in a preventative and forward-looking manner as well.
(18:07):
With Hippocratic, I think LLMs, obviously now these are, you hear about them all the time.
The concerns are hallucinations, confabulations, safety, bias, integration with the existing health care records, and context and all that.
We're very familiar with this sort of set of challenges.
What do you think is the sort of primary thing that Hippocratic is solving?
(18:27):
I mean, there's so many ways to kind of attack this problem, right?
And you don't want to do everything at once, but what is their bet on where they will stand apart in the next few years?
And what are they really trying to solve amongst these challenges around LLMs?
Yeah.
So, the challenges you're talking about with LLMs are not unique to Hippocratic.
They're true for anyone using LLMs.
(18:48):
And the thing that's different about health... They're true for humans.
Many of them are true for humans as well.
Well, so that's a very interesting point that's worth getting to.
I'll get to that in a second.
If you think about hallucinations, LLMs, if you're doing this for poetry, that's not a problem.
That might even be seen as creative.
If you're doing this for art and a cat has six fingers, or I have six fingers or three fingers.
(19:09):
That's just artistic license, not a big deal.
If I have a surgery and I come back with three fingers, I'm going to be really, really pissed, right?
That's going to be a problem.
And so, in health care, we have to get it right.
And we don't have that room for creativity.
And what we're seeing today broadly is the use of LLMs as user interface and as a means for answering questions.
But on top of that, you'll have a mixture of experts of many types of models.
(19:31):
Some of which may be LLMs, some of which may be more traditional machine learning and so on.
That would make sure that if the LLM is hallucinating, that's something that we can just be on top of.
And in a sense, this may be a poetic analogy, but I think it's deeper than that.
It's essentially a care team where you have one member that's good at one thing, another member that's checking on the results going out.
(19:52):
And that team approach is something that can be done now with low latency, which is kind of amazing.
And that you're getting this team of experts speaking to you when you're talking to an AI.
And that part's really unique to today.
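The "care team" pattern described above can be sketched roughly as a generator gated by independent checkers. This is a hypothetical illustration of ours; the function names and the dosage check are invented, and this is not Hippocratic's actual architecture:

```python
# One "team member" drafts an answer; independent checkers (rule-based or
# model-based) must all sign off before the draft is released. Anything
# unverified is escalated rather than guessed at.
def checked_answer(question, generator, checkers):
    """Return the generator's draft only if every checker approves it."""
    draft = generator(question)
    if all(check(question, draft) for check in checkers):
        return draft
    return "ESCALATE_TO_HUMAN"  # fail safe instead of risking a hallucination

# Toy stand-ins for real models:
toy_llm = lambda q: "Take 200 mg every 8 hours."
dose_check = lambda q, a: "200 mg" in a  # e.g. a rule-based dosage-range check

print(checked_answer("ibuprofen dose?", toy_llm, [dose_check]))
# -> Take 200 mg every 8 hours.
print(checked_answer("ibuprofen dose?", toy_llm, [lambda q, a: False]))
# -> ESCALATE_TO_HUMAN
```

The design choice mirrors the team analogy: no single model's output reaches the patient without a second, independent member checking the result going out.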
Raj, I don't know if you have any more questions in that line.
I wanted to ask, like, one more sort of big-picture health care question before we go to the next section.
So again, the biggest company in the world of the future
(20:12):
could be a health care company.
However, we're currently spending about a quarter of GDP on health care costs in the U.S., and your friend and colleague Marc Andreessen has this blog post about how the cost of the TV has gone way down.
The cost of health care has gone up faster, at multiples of the rate of inflation.
So, I guess a question that we often ask guests on this podcast is, like,
(20:32):
will technology make health care more affordable and better, or is it just going to increase the spend?
Is this like a new sort of spending mechanism on health care?
How do you see that playing out?
And if you see it driving down costs, how do you reconcile that with sort of Marc's take on this?
Yeah, I think the fundamental resolution of this is that health care as sick
(20:52):
care, basically dealing with you once you're sick, that is inelastic from an economic point of view.
What would you pay to save your spouse?
Everything, right?
I don't want another spouse.
So, I would give everything for that.
And if that's the case, something that's inelastic like that is going to be hard, and you'll just pour whatever money you can into that, and technology will create more options, but it doesn't change the fundamental elasticity of that.
(21:15):
Where I think this gets interesting is something that is easier said than done, but where I think the future of health care really lies is not in sick care, but in keeping us from getting sick.
And that's something that's very natural for tech.
That's something that obviously will bend the cost curve on the sick care side and will change things.
care side and will change things.
That's where I think tech has to go.
And we're going to use tech to develop new therapeutics for rare
(21:37):
diseases, and cancer, and so on.
And that's going to be part of that sick care cycle.
But I think where it gets really interesting and where the curve gets bent is, ideally, you never go there.
And so from a health care delivery point of view, that's getting on top of things, that's running diagnostics.
Companies like Function are a great example of that.
If you're a care delivery company, that's avoiding admissions or readmissions,
(21:59):
and using that as your North Star.
Trying to avoid the sick care system as much as you can.
That's the intellectual goal.
How to get that done is the real challenge, and something that I think it's easy to poke at, and I think there's a lot of work to do, but that's, I think, the real goal.
Is there an opportunity, from what you've seen, just to deliver care more efficiently?
So even if you look at GDP per capita spent on health care versus
(22:22):
mortality, we're awful by that metric.
So even outside of, like, health care versus sick care, can we just deliver health care more efficiently?
And you mentioned some tailwinds of COVID for telehealth and things like that.
Is there just a way we can, like, even if we're not keeping people out of the hospital, once they're in the hospital, not spend $500 on an aspirin or something like that?
Yeah.
So this is where I looked at companies like Devoted, Devoted Health, which
(22:44):
has a Medicare Advantage plan and dual plan, but more broadly is a health care company, both an insurance company and a provider.
What they do is really handle all the logistics and thinking about the right care at the right place at the right time. If you think about, like, maybe a non-medical analogy would be something like Amazon, and the cost of things before Amazon. Like, Amazon is an amazing logistics
(23:06):
company and is able to drive the cost down through a marketplace as well.
So, in that sense, there's no silver-bullet algorithm that makes Amazon, Amazon.
It's about that mindset everywhere.
And Devoted has that very much as well.
I think their ability to bend the cost curve is a great example of that.
And naturally, avoiding readmissions is on their mind, and handling things ahead
(23:26):
of time is something they're incented to do, being a payer and a provider.
Great.
So, Vijay, I want to transition us a little bit, and we want to spend a few minutes just digging into academia versus industry.
So, Andy mentioned this, you know, you were a tenured professor at Stanford.
You'd made it, and you decided to leave that position to join a16z.
(23:47):
And I think you've clearly done very well there, but maybe you can take us into your mindset.
I think this was back in 2015, your mindset behind that decision.
And maybe I can even ask more provocatively, thinking about today: where should a smart person who's interested in impacting AI for biomedicine be?
Academia or a startup, and why?
(24:09):
Yeah, so this is a question very dear to my heart, and I think there's a couple different aspects to this.
So, first off, I think just purely intellectually, forgetting about any sort of macro arguments or anything like that, I think the first question is, what is the timescale for the impact that you want to have?
So, if what you want to do is going to require a decade or two decades, that's not something you do in a startup.
(24:29):
That timescale is really only available in academia.
And so, when I started in '99, I wanted to really make computational drug design a reality, uh, that was something that wasn't going to get done in three years, and it wasn't ready to roll out to design drugs.
That's something that companies are doing today.
And even Genesis, spun out of my lab, is a really beautiful example of that, that needed some time.
(24:49):
And so that was the reason to be in academia.
And the other reason to be in academia is, let's say you don't even know what you want to do.
You want to explore.
And I think there's very few places where you can just go and explore.
And you think about some of the most fortuitous discoveries, like CRISPR or something like that.
That's something that I think could probably only be found through a love
(25:10):
of basic research and exploring and finding things and seeing what happens.
And it's impossible to even put a timetable on those types of things.
And then finally, it's your mindset, right?
Some people, I think, have much more of an academic mindset.
The academic mindset is, what do you get excited about doing?
Do you prefer to read Nature or The Wall Street Journal?
That type of thing.
And actually, the funny thing is... We generate PDFs.
(25:33):
Yes.
We generate PDFs, yeah.
Yeah, there's also that.
It's also, like, what do you hate less?
Yeah.
Proofreading papers and so on. I think you'll see yourself gravitating to one or the other.
But now, macro-wise, I think the one thing about academia today is that it's a lot more complicated than when I started in 1999.
(25:55):
And I think, for someone to raise $5 million in a pre-seed round is relatively straightforward, especially someone who's well known, at the caliber of a strong academic professor. And to get a $5 million grant takes a lot more work and has a lot more strings, a lot more complexity.
And so, if you're in the space that we're talking about, let's say machine learning
(26:16):
and AI for drug design and health care, the opportunities from a macro perspective are really quite juicy right now.
Very exciting on the startup side.
Now obviously you have to build a company, and so if you don't want to build a company, that's not the place for you.
But if this is something that now can be taken from the world of ideas to the world of implementation, something where I think this is useful, to, like, actually,
(26:37):
I can really positively impact patients' lives. That, I think, is the reason for the transition. And finally, one last point is that I don't know if this is the best way to characterize it, but I think it's not uncommon for people to start in academia and then move over.
And you learn a ton being a junior academic.
The one challenge is, like, if you're there too long, sometimes you get comfortable
(26:57):
with one type of approach over another.
And I think if you're there for long enough, you may just be content enough and stay.
So, there's a balance there too.
I see the physicist and, kind of, the chemist just come out there.
And a little Michaelis-Menten separation of timescales as a way to think about this.
I was trying to guess what you would say as sort of the first axis, the first dimension, to think about this.
(27:18):
And that wasn't what I was expecting, but it makes total sense.
And your other point about exploration, kind of ambling, wandering through idea space without something very tangible, immediate in mind, but really trying to have the room both in time and in what you even ask, right? Like what you even sort of evolved to ask over the next few years.
(27:39):
I think academia is still very wonderful that way, even with the grants and with everything else and all the pressures that we face as junior faculty.
That is very, it's honestly very encouraging as a junior faculty member to hear that, and to hear a vote of confidence for certain ideas, certain paths, as being uniquely academic.
I was wondering, as a follow-on, maybe I can get you to talk more
(28:01):
about medicine versus biotech.
And so,
medicine is set largely by clinical practice guidelines, clinical societies, uh, major clinical bodies that I think still largely exert influence via kind of academic channels, or conventional academic channels.
And I think, this is probably, there are analogs for sure in biotechnology
(28:24):
broadly, but I think biotech seems to me, at first brush and, you know, probably oversimplifying here, to be moving more as driven by industry and driven by currents that are happening, you know, outside of academia.
And so, if you are someone who's interested in influencing medicine via what is considered best clinical practice, do you see that as something that is still uniquely academic, or is
(28:46):
that even something that is changing, and that you should contemplate industry a little bit more?
I'm asking clearly for myself right now, but I think there are many people who are also in my bucket.
No, it is a good question, and an important one.
And I think you make a useful distinction between life sciences and care delivery, because there's not a life science, biotech equivalent of academic medical centers.
(29:07):
There are not, like, academic biotech centers where— That's the much more eloquent, that's a much more eloquent way of saying what I was just trying to say.
And the role of academic medical centers is really critical, right? And, and that's where a lot of innovation still comes from and so on. And that, um, uh, that may change, but that's not, that's very much the way today.
(29:28):
I think a lot would have to change for that to be there for the way providers work.
So, I think from that perspective, there is a huge opportunity on the health care
side for driving innovation in AMCs.
Uh, the real question is, like, to what degree will the reality of running a
medical system also complicate things?
Because it's, it's a rough time to be running a hospital right now.
(29:50):
And, uh, I was at Stanford during the UCSF–Stanford merger and unmerger. I was chair of biophysics then, which is in the med school. So, I got to have the delight of watching that, you know, from the chair's perspective. You know, that's a huge business proposition to have to deal with. And that's different than, like, what a lot of academics want to be dealing with.
And so,
the future of AMCs from a business perspective is also, I think, going
(30:14):
to be an important part of this, and how we can keep the best and keep it
sustainable is the question I always have.
Awesome.
Thanks Vijay.
So, I think we're going to run you through the lightning round
next, if you're up for it.
Sounds good.
Awesome.
The first lightning round question
requires a little bit of setup for our listeners.
(30:34):
Um, but you sort of mentioned in passing that you're one of the
first employees at Naughty Dog.
I think one of the sort of, like, blue-chip game developers now; they develop things
like, uh, The Last of Us and Uncharted.
And I think what Naughty Dog does uniquely well, that other game designers try to emulate, is storytelling.
So, they have these amazingly cinematic games that have these
(30:55):
well-fleshed-out characters.
And so, I guess my first lightning roundquestion is, is there something in
the DNA there that you learned about storytelling that has served you well as
both a professor and venture capitalist?
Yeah, I think so.
I mean, in the early days, uh, it was me, Andy and Jason, and I think a lot of what you're describing came after, even after, somewhat, uh, significantly after, but
(31:15):
I think the storytelling that was perfected later is true of all video games, and video games as a media, a medium, for, uh, and so I think that was very much on my mind.
I think, though, part of it too is that, uh, I think part of being a physicist is, uh, physicist culture, and being trained as a physicist, is that there is storytelling there too, you know, uh, and even one thing that doesn't happen as
(31:38):
much in biology is, like, even family trees and academic family trees and the stories
associated with your academic family tree.
So my advisor's advisor's advisor was Lev Landau, the storied Russian
theoretical physicist and so on.
And so, like, hearing about Landau's culture and all these things, it's, it's a different type of storytelling and narrative to live up to.
(31:59):
So I think there are many different traditions there.
Um, in, in terms of venture capital, the fun thing about venture capital storytelling is that it's both a bit about predicting the future and a bit about making the future.
And that if we can lay out a future that's plausible, but who knows what the future is really going to bring, right?
So, a future that's plausible, and we can rally brilliant people to come
(32:23):
help us and help our founders, and we can put billions of dollars behind it,
that future could become a reality.
And that part's particularly intriguing, especially in health care, where the stakes are so high and the potential is so great for change.
Got it.
Thanks.
Vijay, if you weren't in science or engineering, or investing, what job would you be doing?
(32:44):
Yes, I've thought about this and, like, um, one of the delights about my family is that I have a lot of cousins. Uh, and you know, so obviously my cousins have the same, uh, grandparents. Uh, and so, like, one of my cousins is a chef.
I could totally imagine being a chef.
That would be like fantastic.
And I like cooking.
Another one of my cousins is a psychologist, and is quite famous in that.
(33:04):
And so I could imagine doing that.
And so I look at my cousins and imagine all the different lives that I could have had, and these are sort of things that I enjoy. But probably the deep, deep fantasy, which I've not made a ton of progress on, but maybe made a little bit of progress on, would be something like some sort of musician, like a jazz musician or something like that.
And so, music is a key part of my life.
(33:25):
It's too late to be a jazz musician.
But, uh, actually, briefly, when I was at Berkeley, Vijay Iyer, the famous jazz pianist, and piano's, uh, my instrument too, had the email vijay at Berkeley, uh, physics at Berkeley, right before I did. So I got all this, uh, Vijay Iyer fan mail, which was, uh, obviously not intended for me, but was, uh, was, uh, inspirational, uh, and it was fun to get.
(33:47):
Yeah.
Our sound engineer, Mike, who is a professional jazz musician, uh, is nodding along in approval, I believe.
Excellent.
Excellent.
Absolutely.
Yeah.
I'm a saxophonist, so, you know.
Well, and so my daughter, my middle daughter, is a saxophonist, and so we try to do stuff.
How's she doing?
Like, she's doing, she's doing well.
I think, uh, she has a lot of things going. We have, um, Doug Ellington has a band here.
(34:08):
He's the grandnephew of Duke Ellington.
Wow.
And he, we often have him at our house, and I'll, we'll, we'll step in, and, and they are very kind to play around us and make us sound way better than we do.
That sounds great.
Yeah, yeah, absolutely.
All right, so next question.
Maybe I have some sense, butmaybe I'm also way off base.
So obviously, again, your colleague Marc Andreessen is a prolific generator
(34:30):
of opinions on certain things.
And so, I'm wondering if there's something that, what's the thing that
you most disagree with Marc about?
That's a good question.
What do I most disagree with Marc about?
Uh, part of the draw to the firm was actually, we actually agree about most things. And, uh, he and I are fairly similar in terms of all sort of pre-a16z
(34:53):
experience, in that I was never a CEO of a company, but I've been a chair of companies, and so has he, and so, and I like that role, and he likes that role.
That's a great question.
There must be something, but maybe diet.
Like there's, there's a bunch of things on the diet.
He's given up, he's given up booze, I think, if I remember correctly.
Yeah.
So, so, okay.
So, so if you asked me this question six months ago, it would totally be booze.
(35:13):
But recently, and he was actually somewhat of a catalyst of this, because I was at his house and he showed me this hop water stuff, which was surprisingly not as bad as I was expecting.
So, my wife and I have greatly diminished alcohol, but I think the
difference is that he's gone zero.
And I've gone, like, a little bit, and so the fun thing there, and so maybe
(35:34):
that's where we disagree. And I know he likes scotch and so on, but now, like, the scotch is so much better when you have it much less frequently. And I was making that pitch to him.
I think, I don't think he'd want to hear about any more of that. Yeah, so, so, he's dry, but you're still a little damp, it sounds. A little damp. A little bit, a little bit. Yeah.
Got it.
Amazing.
Vijay.
Will AI in medicine be driven more bycomputer scientists or by clinicians?
(35:58):
Yeah, so this is a great question, and you're gonna feel like this is a cop-out answer, but this is really the right answer. The right answer is that there are clinicians that are gonna be AI
specialists, and that's what it was gonna have to be, and I think this probably
for you two, this is not a shock, right?
That it can't be somebody with just one mindset or the other.
And actually the other thing,
it would be hard to be.
It's not impossible.
It's hard to have co-founders where one's the AI guy and one's the doctor.
(36:20):
Even that's hard, because they don't have telepathy. It's really different when it's all in one mind.
And so one of the unique things about doctors today is that freshly minted M.D.s have been around computers their whole lives, which is different than people like 20 years ago, even 10 years ago.
And so, I think we're going to have that.
And there's plenty of exemplars of this.
The Med-PaLM team is a great example.
(36:42):
There's plenty of exemplars of people who are brilliant in both domains. And I think that really has to be the future.
I totally agree.
And this has been a big theme of
the conversations we've had on the podcast too, where we're asking people about their background, physicians who are having a lot of impact. And it really has been, I think, a common story that the latency is too high when the skills
(37:02):
exist in two separate minds. And you just sort of get together and you don't really have anything to talk about, as opposed to rapidly eliminating all these ideas and then coming up with the right path, and it's kind of eliminating those thousand ideas along the way to quickly get to where you need to go.
So that's, that's great.
So. Well, and so maybe actually the question I'd ask you guys, if I may: how's that going to work?
(37:23):
Is that going to be someone with an M.D. learning AI, or is that a
computer scientist learning medicine?
Andy, do you want to go first, or you want me to go?
Yeah.
Yeah, I don't, I, I think it's not either-or. So I think, like, I came into medicine through the side door. Yeah. But I think I, I think you have to have a deep appreciation of both.
(37:44):
I think that it's very easy to be a computer science person and tell me, like, what AUC should my model be calculating? And that's going to be superficial and only get you so far.
So, I think that, like, as long as you're willing to be a nerd about both, um, you're going to have a lot of success, but you can't have a superficial interest in either side of the equation.
Yeah, I think that's right.
And you can imagine someone who's, like, a CS pre-med undergrad.
(38:05):
Yeah.
Yeah.
It's easier said than done, but not impossible. And then they go to med school and they're like the master of two worlds. Or some M.D.-Ph.D.'s who just take time to study both subjects.
I have to, this is my obligatory, Andy's heard this like seven times. So he's gonna, he can simulate me fully at this point. But I have to give my obligatory nod to this Ph.D. program I
went through, here in Boston.
(38:27):
So this is the HST program, the Harvard–MIT Health Sciences and Technology program.
And basically the Ph.D.'s take the first, you know, a good chunk of the first two years of medical school, and then spend a couple of summers in the clinic, where you're taking histories and physicals, rounding with the teams, presenting, really
understanding how doctors think.
And then the M.D.'s are very technical, and they do an extra year or two of research.
(38:48):
One of my friends from the program actually is at a16z now, Vineeta. I'm sure you, you know, you know her, Vineeta Agarwala.
And so, uh, it's just a, it's a, it's a great program.
And it's like a, the core thesis there is that you can't really have one skill set or the other, but you really have to invest in, in kind of education in both. And so, I think we need more of those training programs of real domain
(39:09):
expertise alongside the technical skills.
I agree.
And I think there's real potential for that.
I think at Stanford, the MSTP group was a little bit like
that, but HST is really special.
Awesome.
All right.
Thanks.
So this is a question that now has almost become cliché, because we ask it to everyone, but it's elicited such interesting responses that we're going to keep it going. So, if you could have dinner with one person, dead or alive, who would it be?
(39:31):
I should have, like, a stock answer for this, but, uh, I believe we're not, I've probably done like a hundred podcasts and no one's asked me this yet.
Uh, so, uh, yeah, no, no, but, like, so I feel like I should do better, uh, with having an answer. It's such a hard question, but, like, uh, who, the people who come to mind,
(39:51):
are the real old-school people like Claude Shannon, John von Neumann, uh, Alan Turing, that generation, where I wish, just for them, like, I could tell them, like, oh, actually all this cool stuff came from, like, the seeds that you planted, the ideas that you had. And it would be so much fun to just
(40:12):
sort of hear their view of what building the foundations was like, and just to get their take on where we are now.
There are few people that were such polymaths; like, von Neumann's maybe the canonical example. And I think, probably, if I had to pick one, maybe he would be the one that would be super interesting.
That was a fun time.
I mean, I think today is an amazing time too, for sure.
(40:33):
But, like, um, someone like that, I think, would be, would be just, uh, fun, to see the direction, the discussion going in both directions.
Yeah.
I, it's your choice.
I, I always worry about
meeting someone like that in real life, though, because whenever you humanize them, the legend gets instantiated as an actual person and you no longer have the legend in your head.
So yeah, I've seen this where, like, what was it, was it a comedian or a sports figure,
(40:58):
oh, it was when Willie Mays died.
What, died. I was watching PTI,
and I don't know if you've seen Pardon the Interruption. Pardon the Interruption.
Yeah, yeah, yeah.
And they're talking about, Kornheiser was talking about meeting Willie Mays, and he's like, just for your sake, never meet your legends.
It's not going to go the way you think.
So, it's kind of the way it is, but, like, in my mind, it could be amazing. And, like, probably with AI, we can make a fake version that would keep me quite happy.
(41:19):
All right.
This is our last lightning round question.
Given recent progress by Anthropic and Google, do you think OpenAI will have the most capable foundation model in one year?
Yeah, so it's a good, good question.
And a year timescale is well chosen.
Because if you asked me about one day, I think I would say yes; if you asked me about one decade, I would say no. It would be OpenAI for a day.
(41:42):
A decade is much more open.
And a year.
Probably in a year, I think there's going to be, there's all this talk of Claude doing well.
And so, it's possible things have switched.
I think the one big challenge is going to be, like, what is the right metaphor for AI?
And I think my favorite metaphor for AI is probably the microprocessor, in terms of business.
(42:05):
And if you remember the early days of microprocessors, there were a few, like the 4004, the 8008, and then the 8080.
And then there's a little bit of explosion with others, and then, uh, basically Intel again, uh, and so on.
And so, I think short term, there's only a few people that have the technology. Midterm, a lot of people have the technology, and then long, long term, I
(42:27):
wouldn't be surprised if maybe there's reasons that it would converge back again.
I think we're going to see a bit of an explosion, and we already see that to some degree with all the open source stuff, with Llama and Mistral and so on.
I think the other thing is that we're probably going to get less hung up on one LLM to rule it all.
And that there's going to be lots of different things, much like you don't just work with one person or talk to one person or so on.
(42:50):
So, I think we're going to have lots of different uses, and that will be part of the explosion as well.
This is back to your mixture of experts and agents all operating together.
Yeah, exactly.
So well done, Vijay, you've passed the lightning round with flying colors.
Thank you.
It was so much fun.
Thank you guys.
So, we want to just wrap up with some big picture questions. And I think the last lightning round question was a natural segue.
(43:13):
The purpose of this line of questions is to just get your sense of how the next five-to-seven years are going to go.
But we're going to anchor it on this essay by Leopold Aschenbrenner called Situational Awareness. I don't know if you've had a chance to read this, but— No.
So, he's been making lots of rounds on the Twitterverse for writing this 183-page essay on what the next five-to-seven years are going to be like.
(43:34):
And what's helpful is he makes some specific claims. So, I'd like to throw some of these specific claims at you and sort of get your reaction to them.
So, first he says that we're going to have AGI by 2029. And by AGI, he essentially means superhuman intelligence.
I think we already have AGI now.
I think that, by any reasonable definition, LLMs are a general kind of intelligence. They're not superhuman in lots of different categories.
(43:55):
So, he thinks 2029 essentially is the singularity, the event horizon.
We're going to cross over it.
We're not ever going to be able to come back.
Getting there, though, he says that the rate-limiting factor is going to be energy.
The GPUs that we need to train these models are going to be so power-hungry that we're going to need a gigawatt power center. And that essentially, we're going to have a trillion-dollar data center.
(44:18):
That's a combination of these huge compute clusters, along with a nuclear power plant that's hooked up to it, to power it.
So, when, I guess, maybe I'll just throw all that at you and sort of get your reaction to it, and then we can, like, sort of pull on some individual threads.
Yeah.
Yeah.
So, let's take the power thing first, because I think that's, uh, so, and I think you can think, like, what fraction of the world's energy should go to AI.
(44:42):
And I think the answer is a relatively large fraction, considering the implications of this.
And so, back-of-the-envelope calculations, 10%, 20%, 30%, that's a lot of energy to add on top of things.
And it actually has, obviously, real geopolitical implications, because you just can't make energy just anywhere.
And so places with large-scale energy production, whether that be oil and
(45:02):
natural gas, or solar or nuclear, become appealing, with, obviously, nuclear becoming appealing because of low CO2 generation and so on.
But also, you want to do this someplace where there's cooling. And so, places with oil may not be the coolest place to have this.
So, I think all of that becomes logistical problems, infrastructure problems, but I'm excited about spending more energy for AI.
I think that could be one of the best uses of energy that we could use as a society, or as a, as a world.
In terms of AGI, I think this is where it gets very complicated, because, well, first off, like, even just intelligence, like, I don't know how deep you want to go, but you could talk about IQ, you could talk about G factors, but even G factors are very human-centric.
Like AI has intelligence that's just really different.
And that even, like, an LLM right now, I can ask it physics questions,
(45:47):
which I know the answer to.
I can ask it music questions.
I can have it do things like,
I think it can do things that probably no individual could do,
but like a group of people could do.
And to some extent it's trained on a group of people, so it's
reproducing a group of people.
And so that's a kind of super intelligence.
And then, we've already seen with things like radiology, another type of super intelligence is where AI is better than any individual radiologist.
(46:11):
But it's comparable to the, the group of experts, and that's considered to be super intelligence.
And then I think what we really are looking for is something where AI comes up with some new physics or some new medical breakthrough, or something that is just as different as Einstein was, or as, something that is that level of creativity.
And even Einstein is such a hackneyed example, but it's just the obvious
(46:34):
one, because, like, his, uh, general relativity is something, or even special relativity is something that was such a major leap of thinking, albeit built on the work of others as well.
So, that part, I don't know if we're going to see that or not.
And I think what could easily happen is that LLMs are very much a next-token predictor, and next-token prediction does really well at these current games, but is that the thing to really take us to the next level?
(46:58):
I do think that learning is building latent spaces, and this could be jazz. The reason why you do scales in jazz, and all different types of scales, is that's a latent space.
Um, I do martial arts; all the martial arts stuff is learning latent spaces and a language for, for combat and so on.
Uh, learning latent spaces and then doing direct products of these latent spaces and all these things. That's a very natural thing to do.
(47:19):
LLMs can do that to some extent, but I wouldn't be surprised if there is an algorithmic shortcoming.
And if we were to study the history of AI, it's a series of S-curves, where eventually it's like, it's going to solve everything, like single-layer neural nets are going to solve everything. Uh, except for XOR, except for this, except for that.
And then, we get to a plateau.
(47:39):
It would not surprise me if we get to a plateau before AGI.
And this 2029 number would be a very natural thing if the S-curve were an exponential, but what if it were a sigmoid? And if it's a sigmoid, or logistic, it's going to bend over.
Maybe we just don't get there.
And that extrapolation doesn't work.
I would not be surprised ifthere's something really missing.
(48:01):
And that's the cool thing, because I think we'll get it,
but maybe just not this soon.
So, I think one common rebuttal that I hear about AGI by 2029, which I think you were kind of alluding to there, is that, like, they are just trained on Internet data, and Internet data will only get you so far.
And we've already extracted all the signal from Internet data.
Do you have a sense of how true that is?
(48:22):
So, like, the models that we're using, have they actually extracted all of the information that humanity has produced so far? Or are we still at some, like, small fraction of that?
'Cause I think that would suggest how much room we're going to have
for improvement on the current paradigm.
The one thing, though, is that, like, the way computers learn is
not the way we learn new things.
It's the way we learn existing things.
(48:44):
So, like, um, I don't know if you ever had friends that, like, grew up in foreign countries, that moved here. I've had a few that learned English from watching TV. It was shocking for me to imagine that's even possible, but it's a common thing.
And so that's kind of very LLM-ish, right?
You're just watching and absorbing, and maybe there's a little RLHF in there when your parents or your teacher says that's wrong and that's wrong, but it actually
(49:05):
reminds me a lot of learning English as a foreign language. But that's different than learning, like in a Ph.D., where you're moving into a new discipline where nobody knows the answer.
And it's a whole different process.
And I think a lot of things that we're excited about are things
that nobody knows the answer to.
And that's a different paradigm.
That's, maybe even learning is not the right word; that's exploring,
or creating, or discovering.
(49:26):
And that's not what LLMs have really been geared up for.
And even, building tasks and Q-star and so on, that's not going
to, I think, teach it how to do that.
And there may be something to do on top.
In a sense, LLMs are like a really good undergrad that's read all the books, but is not ready to be a grad student
yet, maybe, I don't know.
(49:46):
Yeah, I agree with that.
I think that there's something to being embodied and interacting with the world
that's missing with the current paradigm.
So, one thing that I always think about is, LLMs are great at selecting from a set of hypotheses that are compatible with what we currently know, but often the only way to know which one of these hypotheses is correct
is either to assume your way there.
(50:07):
So, I assume that I know how the world works, and therefore
I can rule some of these out.
Or to do an experiment, or interact with the world, or push the cup off the table, and falsify or confirm your hypothesis.
So that does seem like, I don't have a sense of how big the fraction of that missing piece is, but it definitely seems to be a missing component.
Yeah.
So, towards that end, it could be that the human feedback that LLMs
(50:29):
get through massive chatbots.
Could be, but, like, that's different than still, like, the type of advanced
learning that we're talking about.
So, um, I don't know.
I mean, I'm, I'm very curious to see, and it could be, you can imagine creating different types of bots that are the professors, and they create students, and then you cycle over.
I don't know, reproducing academia in AI may not be the ultimate end goal, but that
(50:51):
is an interesting paradigm to consider.
I guess Raj doesn't have any questions.
Just one sort of, like, forward-looking thing that I'd like to get your take on.
My experience has been that when you're talking to people about AI, especially people who are hearing about it for the first time, there's either elation or fear.
And that, like, it seems to be pretty polarizing along those two axes
(51:11):
that, oh my God, this is amazing.
Or, oh my God, this is the most terrifying thing I've ever
heard of in my entire life.
How could you possibly be working on this?
Um, at least externally, a16z seems to be firmly, very firmly in the optimist camp.
So maybe you could give us the bull case for optimism, and about why AI is going to give us the sort of future that we all want.
Yeah.
And I can also give you a, what my understanding of the psychology
(51:32):
of the fear case is. I think this is something that has
happened over and over again.
Part of predicting the future
isn't, like, you're a genius.
Like, I can predict what time the sun's going to rise tomorrow.
It has a daily cycle.
Some things have monthly cycles.
Some things have yearly cycles. Technology and industrial revolutions, people have studied, historians have studied, and they have, like,
(51:53):
25-year cycles. And we're in the middle of an industrial revolution.
This is not a first.
This is our fifth or sixth, depending on how you count them.
And in these cycles, there are all these people that are scared of the new technology, and there's a lot of reasons to be scared.
It changes things. Like, when, before cars, if you were a master horse breeder, like, you worked really hard to be good at this really important thing.
And then that goes away.
(52:14):
That is scary.
And there's a lot of reasons for the fear of that.
And if cars, the first time cars are on the market, like, cars kill people, they run them over. They're this new thing that you're not used to thinking about, and not in the way that horses run people over; it's just different.
And sure, cars are ambulances, too.
But if you're just looking at the negative side, being run over by a car is more scary than having cars to help you and all the things that you can do.
(52:37):
And so, every fear that we could attribute to AI, we could
also attribute to electricity.
You know, you put this thing in everyone's house and that could kill you if you touch it, you know, or the positives of electricity or the
negatives of cars, the positives of cars.
And so if you harp on the negatives, it can, anything can look really bad.
Cars and electricity can look really bad.
If you consider the net positives, the net positives are dramatic.
(52:59):
Nobody wants to go back to a world without cars or without electricity.
If you go that direction, you're in a hunter-gatherer society where you
know, the bully beats up everybody.
And that's the person who runs the little village.
I mean, technology is the great equalizer that makes, uh, democratizes everything. The technology we have today gives us things that, like, a royal king
(53:20):
a hundred years ago would never have.
We all have the same Spotify and the same iPhone.
I think on the medical side, I think the exciting thing is we
will all have the same specialist.
We will have the best doctor; just like we have the best Spotify or the best iPhone, we can all have that.
And the democratization is something that just doesn't exist in something like health care today, with such disparities in quality, access, and cost of care.
(53:43):
I think that's the real hope, and I think that's what gets me
excited about AI in health care.
But I think this is AI broadly and it's technology broadly.
And as a firm, we're very excited about technology, because we've seen
all the positive things that it's done.
Thanks.
I think that's a great note to end on.
Oh, fantastic.
This was so much fun.
Thank you.
Thanks for coming on, Vijay.
Thanks so much, Vijay, this was great.