Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
And we've seen that there's more and more genetic insights that can come out. Now, when you think about the broader spectrum of data that can be collected about even just a single human organism, you have single-cell RNA-seq from different cell types, in different states, at different ages, under different environmental conditions.
(00:22):
That's just RNA seq.
You can also think about protein levels, and we haven't gotten to protein levels at the single cell. We're starting to scrape the boundaries on that, but it's certainly not at scale. And then going down into the level of individual proteins, going up to the level of, now everyone's excited as they should be around spatial biology and the interplay between different cells.
(00:43):
The number of data modalities, the number of ways that we have to measure biology, and the number of distinct biological contexts that we as humans live in, and that exist within even the body of a single human, is ginormous. It is way larger, I think, than the complexity on which we've trained the large language models that we're currently leveraging in
(01:06):
the more traditional LLM sense. What we lack are data collection
approaches that achieve that scale.
And I'm just really excited to be living in this time because the number of ways that we have to measure biology quantitatively and at scale is increasing, maybe not quite as fast as the capabilities of AI; we're kind of maybe
(01:27):
a few years back on that curve, but you can see that exponential curve. And I think the synergy between those two is going to just unlock an incredible tidal wave of insights as we start to bring those two tidal waves together.
Welcome to another episode of NEJM AI Grand Rounds.
(01:49):
This is Raj Manrai, and I'm here with my co-host, Andy Beam. We're really excited today to bring you our conversation with Daphne Koller. Daphne is the CEO of insitro, where she's working on artificial intelligence for drug discovery. Daphne has really done so many things, Andy. She's had this illustrious career as an academic, as a professor at Stanford.
(02:11):
She started Coursera.
This was an amazing and really, really wide-ranging conversation.
Let me admit something to you, Raj. I was nervous going into this interview. One, I think, you know, as you know, I was on paternity leave, so maybe not at my most mentally sharp, but Daphne is such a force of nature that it's very intimidating to talk to her. She is so accomplished and so bright that it's hard not to be a little intimidated
(02:34):
going into a conversation with her.
Having said that, I think that we covered a lot of ground, and I think that the listeners are really going to get a sense of all of the different areas of computer science and biomedicine that she's had a significant impact on. It was, nerves notwithstanding, a real treat to get to have her on the podcast. And again, I learned a lot from this conversation, both about
(02:55):
what she has done, but also how she thinks about the world. So, for a lot of reasons, it was a real treat to get to talk to her.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health.
We thank them for their support.
(03:18):
And with that, we bring you our conversation with Daphne Koller.
All right, Daphne, thank you so much for joining us on AI Grand Rounds. We're super excited to have you here today.
Thank you, Andy. It's a pleasure to be here.
Daphne, welcome to AI Grand Rounds and thank you for joining us. So this is a question that we always like to get started with: Could you tell us about the training procedure for your own neural network?
(03:40):
How did you get interested in AI?
What data and experiences led you to where you are today?
So, I will say that when I got into AI, what I got into wasn't actually AI. I got interested in the question of how we can get computers to make better decisions, initially using decision-theoretic principles.
(04:02):
And AI didn't encompass that at the time.
I got into the field, I'm old at this point, um, at a time when AI was all about logical reasoning and, you know, uh, symbolic computations. And there were people who went around saying that, oh, you're using numbers. People don't use numbers. What you're doing is not AI.
(04:22):
And so what really happened is that what I did, which was initially decision making under uncertainty and then learning models for decision making under uncertainty, so, learning models that enable the computer to make sound decisions, that got swallowed by AI and eventually basically took over the field over time.
(04:44):
Could we get you to go back a little bit further? So, you were doing this presumably in a computer science department. A lot of that sounds like classical statistics to me. Could you take us back before grad school? Like, what sparked your interest in these areas and how'd you get there in the first place?
So I have a somewhat unusual academic history because I started going to college at a relatively young age.
(05:05):
So I got into computer science, honestly, as it was becoming a field in the 80s, as a very young high school student. So when I was 12 or 13, and I completed my undergrad degree at 17, and computer science was, at the time, really almost a branch of mathematics. It wasn't really, in most universities, a field in its own right.
(05:28):
It was really very much a mathematical study. And I loved actually both the mathematical component of the field, I was a double major in math, but also the fact that you could actually take these very abstract conceptual frameworks and then use them to get the computer to do stuff. And the computer was able, you know, at the time these were very rudimentary
(05:49):
capabilities, but you could build a game that got the computer to play Pong.
And that was like, so cool that you could actually tell something what to do, and it did it. No one else that I tried to tell what to do did what I said. So it was nice to have the computer do what I said. And so, it was really empowering and fun to do that. Um, and then the question was, well, okay, if we can get the computer to do stuff,
(06:09):
how do we get it to do the right stuff?
And initially, my interests were actually in multi-agent systems. So, if you look back at the work that I did even, um, in my master's degree and then subsequently in my Ph.D., a lot of that was about multi-agent systems and game-theoretic models for multi-agent systems.
And then I realized that before you get the, uh, a community of
(06:34):
agents to do something intelligent, you first have to get individual agents to do something intelligent. We were very far away from that,
which led me to the study of these decision-theoretic systems and decision making under uncertainty. And then realizing that the biggest obstacle to that was that the computer just didn't have a very good model of the world, and that people just weren't particularly successful at building usable models
(06:58):
that really captured the complexity of the real world around us, and then recognizing that the only way to get to that was via machine learning. So in some sense, I got to machine learning via the back door, at a time that it really wasn't part of AI. I was the first machine learning hire into Stanford's computer science department, and frankly the strongest proponents for hiring me weren't the
(07:20):
traditional AI people who were the department at the time; it was actually others in the department who saw the value of this more modern style of getting computers to act intelligently. They were the biggest proponents for hiring me.
Yeah.
I remember I took an AI class before machine learning really took over, and now we call it good old-fashioned AI, you know, Russell
(07:42):
and Norvig, uh, that textbook. Uh, and there's like a tiny sliver of machine learning in there. So, it is interesting that you were kind of at the vanguard of the probabilistic or data-driven or machine learning approach to AI. So, what were those early days like when you were kind of this insurgent in a very historically renowned AI department at Stanford?
So, if you look back, I mean, 1985 was one of the AI hype
(08:06):
cycles, one of the earlier ones.
And there was a big AAAI or IJCAI conference. These were the big AI conferences at the time. And there was a panel on AI. What is the future of AI? And most of the panel, with the exception of Judea Pearl, who I consider to be one of the foremost leaders in this new vanguard, all said
(08:28):
that you cannot use numbers in AI.
People do not use numbers.
Numbers are anathema to AI.
Probabilities for sure, and they hada whole bunch of arguments about why
probability theory would never bethe basis for intelligent reasoning.
And Judea, who's a close friendand mentor, uh, basically was the
one holdout for this is the future.
(08:49):
This is what we need to do.
And I would actually say that Russell and Norvig were among the ones that actually were helpful in migrating the field more towards this decision making under uncertainty and machine learning. The earlier textbooks didn't have any mention of either probability theory or machine learning.
So in some sense,
they were the transition point.
(09:11):
And I did my postdoc with Stuart Russell, and I know that he's a very big proponent of, he was even then a big proponent of that, of that transition, but it took the field quite a long time to adopt it. And there was this schism where those of us who did the more probabilistic stuff mostly published in a completely different set of venues than the traditional AI conferences for quite a number of years.
(09:34):
I don't want to go on too much of a tangent here, but your comment about Pearl is very interesting, because back in those days he was saying, we need probabilities, we need data. And now, I think he's stayed mostly the same, but the field has kind of polarized around him. His argument now is like, we're too reliant on data. We need actual causal models of the world and, you know, the probabilities
(09:54):
are going to lead you astray.
I don't know that I've heard him say that the probabilities are going to lead you astray. I think what happened was that the field really did swing as a pendulum. It swung from, we're entirely logic-based, you don't need probability, you don't need data, you don't need numbers, migrated temporarily, transiently
(10:17):
through something that had elements of both symbolic and machine learning, and then swung, with the advent of deep learning, much more towards, we don't need any kind of world model. It's all going to be just the numbers. We're just going to learn everything from data.
(10:39):
of that revolution, people like YoshuaBengio or Yann LeCun, they are, are now
coming back to we need to have some levelof causality, some level of tie in to
sort of more symbolic concepts becausethat is going to be important for common
sense reasoning and it's going to bereally important for taking action in
(11:04):
the real world and understanding what that action is, what the consequences of that action are going to be. You cannot rely on plain old pattern recognition, which is where, um, you know, deep learning has really had its biggest impact. And so, I think the field is now almost starting to gravitate back towards the middle a little bit.
So, Daphne, if I can take us in a little bit of a different direction.
(11:26):
So, I think you framed the very exciting early days, let's say, of AI at the CS department, or machine learning at the CS department, at Stanford. And you, I think, are of this very rare group of people who've had real contributions both in computer science, so general computer science, probabilistic graphical models, support vector machines, your work there, but also, of course, in
(11:48):
applications in biology and medicine.
I don't think I actually know the story of this, uh, but could you tell us about how you got interested in biology or medicine, um, and where that, let's say not transition, but where the work that your group started doing at Stanford in applications of machine learning, computer science, where it started and when it took off.
(12:10):
So I'm going to actually go back a little bit further, if that's okay, and talk a bit about my own personal journey as I think about what to work on. When I was a Ph.D. student at Stanford, my work was highly conceptual. Very, um, a lot of theorems, a lot of abstract concepts,
some very beautiful frameworks.
(12:30):
And then when I went to do my postdoc at Berkeley, I had what turned out to be, I think, a very pivotal conversation with my postdoc advisor, Stuart Russell. He took me out to lunch one of the first weeks that I was there and said, so you did this beautiful Ph.D. thesis. You got a Ph.D. thesis award for it. If I gave you a group of really talented undergraduate computer
(12:51):
science majors to work with you and you could code together something from your thesis, what would that be?
And I literally sat there, I think, with my jaw hanging open, because no one had ever asked me that question. And if I had to answer it, honestly, the answer is nothing. There was nothing from the thesis that I would have wanted to implement and turn into
(13:16):
a useful product, a useful artifact.
And that bothered me.
And it kind of pushed me on a path of an increasing commitment to building things that actually make a difference. And that moved me from the more conceptual work that I'd been doing to probabilistic graphical models, which I thought were useful, from
(13:37):
probabilistic graphical models (I'd actually started doing that even prior to my postdoc), from probabilistic graphical models to machine learning, to applied machine learning, to machine learning in the service of actual disciplines, which at that time were broader than just biology.
I worked in machine learning applied to robotics and to computer vision.
(13:58):
And at some point, I got interested in biology, not because I had any particular affinity to biology myself, because the truth of the matter is, I went to a high school that was highly tracked, and I was tracked math-physics, um, and, you know, the biochem people were a different breed, and we didn't talk to them, and they didn't talk to us. And so, I really didn't know anything about biology, but this was a time
(14:21):
when the datasets that were available to machine learning researchers at the time were actually kind of boring. So, there's only so far that one can get excited about spam filtering or, even worse, classifying articles into 20 newsgroups, which were the datasets that we all had to work on. And I wanted to do things that were more technically interesting.
(14:43):
And this was the time when the first large (for the day; today, they're tiny, of course) datasets were coming out in biology and medicine. And these are things like, for example, the first microarray data, where you could actually start to think about gene-gene interactions and relationships between genes and phenotypic consequences.
(15:04):
So the first, um, project that I did was actually with a wonderful colleague who was a tuberculosis expert. And we worked on the project having to do with machine learning for tuberculosis epidemiology, on a dataset that, when you think about the recent COVID datasets, is minuscule, but it was one of the very first, um, networks, if you will, of transmission.
(15:27):
And then moved from there to working on, again, gene expression and the ability to infer regulatory networks from data. And then from there to some of the genotype-phenotype correlations and, uh, some of the earlier relationships between genetics and gene expression, genetics and phenotypes.
(15:47):
And so, the nice thing about biology was that every few months there would be another really cool dataset, oftentimes in a different modality that was unfamiliar to me. So it kept creating new challenges and opportunities for novel machine learning to be developed. And so initially, my personal interest was mostly, this is a great place to find good
(16:10):
challenge problems for machine learning.
But then over time I became interested in the field in its own right, and ended up having this really weird, bifurcated existence where half my lab continued to do core machine learning work published in computer science venues.
That was going to be my next question.
That was going to be my next question because I feel like we have, you
(16:31):
know, we sort of, we split right in the sort of methodological focus and then the applications. So, yeah, so those, the main machine learning conferences, and then half the lab is in the general scientific journals.
Yeah.
And, you know, there was some interplay between them in my lab, in the sense that the methods people were sometimes inspired by the biological problems, and certainly the people working in biology were very much informed by the methods development that was being done in the group.
But when you look at the outside world,
I have, even today, computer science colleagues that ask me, so why did you get into, uh, biology so late, after you did Coursera? And it's like, no, no, there was this whole thing that
(17:14):
you didn't even know about.
And then you had, on the other side, biology colleagues who I think didn't even realize that I was in a computer science department, because since when do you have people publishing in Nature and Science and Cell in a computer science department?
And that bifurcation was kind of odd.
Yeah, you're really straddling a lot of cultures, right? So, like, as you mentioned, it's what the main journals are that those communities
(17:39):
read or what they look to. Even, I think, the publication culture is very different in computer science versus in medicine or in biology, and, for example, in statistics, right? It can be wildly different too. And so if you're being evaluated, or you're being, uh, you know, working in close collaboration with researchers in those
(18:00):
communities, I think there can be a lot of challenges and opportunities to overcome as you're sort of navigating.
But you know, Andy and I are both at a med school and at a school of public health, and we both work in AI and medicine, and there are very different venues and very different criteria that are applied by those communities in judging research output and, and what is a paper even, right?
(18:22):
The sort of foundational question.
Oh, no, completely.
So, first of all, the computer science field, by and large, um, thrives on conference papers and relatively small units of work at a very rapid publication pace, whereas the field of scientific inquiry focuses on much longer-form pieces that can sometimes take
pieces that can sometimes take
(18:42):
years to complete.
And that's a very different sort of cadence for how the work gets done and how people get evaluated. And so that's one piece. And then I would say there's a very deep, I would say, sort of, um, mindset shift between how one thinks about science and how one thinks about engineering.
(19:03):
And I think about machine learning often very much in the engineering side of things. When you're an engineer, you're looking for patterns. You're looking for the model that will explain the maximum amount of the data that you're observing with reasonable amounts of accuracy. And when you've found that, that is the victory. That is the winning state, is a model that generalizes reasonably well for
(19:27):
a pattern you've been able to discern.
When you're a scientist, oftentimes what you're looking for is the outliers, the exceptions, because those outliers are oftentimes the beginning of a thread that will lead you to a completely different and novel scientific discovery. So you have one group that's looking for patterns, and the other group that's
(19:47):
looking for exceptions, and that makes that interdisciplinary communication quite challenging sometimes. And it's definitely something that even today, as I'm building a cross-functional company with individuals from both of these groups, getting people to communicate is more than just about jargon and making sure that you're familiar with each other's terminology, but also
(20:08):
about how you think about science.
So I was going to say, I've never sort of thought about it that way, that engineers sort of care about the mean or the first moment of the distribution, whereas scientists might care more about the second moment or the variance or the tails. I know that we speak different languages, but I've actually never thought of it like we care about different parts of the distribution, kind of fundamentally. That's a very interesting, like, perspective on that.
(20:28):
I wouldn't call it just the first moment, but I would call it the sort of the, the pattern that explains enough of the data. It doesn't have to be the first moment, but the pattern that explains enough of the data so that you feel you have the ability to generalize to new data points, whereas the outliers, the exceptions, are where you kind of like, well,
(20:48):
my model doesn't explain this point.
Why?
Assuming it's not an error.
Why is it that this point is different? And what novel insights does that unlock from a scientific discovery perspective?
And so, it's a very different mindset.
Yeah. So, I'm going to fast forward a couple of decades from where I think we
were when you were starting your lab.
And Andy is going to dive into your current work at
(21:11):
insitro in just a couple moments.
But before we do that, I wanted to ask you your perspective and your thoughts about a very interesting paper that you recently wrote, which Andy and I were happy to co-author with you, which of course was published in NEJM AI. And so, this is a paper called "Why we support and encourage the use of large
language models in NEJM AI submissions."
(21:34):
I think you really led this.
And so, can you give us your perspective on what we're trying to say with this editorial, with this article, and how you anticipate LLMs being used by scientists in improving analysis of data, communication of
results, and their dissemination?
Yeah, no, thank you for asking that question.
(21:55):
I think I was struck by the fact that the advent of LLMs caused so much distress among some of our scientific colleagues, in terms of, wait, so now machines are going to take over our jobs and they're going to write scientific papers. And so, we should prevent that from happening.
And we can discuss at length why trying to prevent scientific progress is a bad idea
(22:20):
because it's never been successful before.
So there is that.
But I think maybe even more to the point is, tools like that elevate all of us and they allow all of us to do better work. They allow us to do better research in terms of understanding what's
(22:40):
out there by summarizing papers for us before we dig into the ones that we think are the most relevant. They can help us do better data analysis, make better figures. They allow us to write better prose, especially for those of us that might not have, say, English as a first language, or have some other kind of language disability. Doesn't mean they're worse scientists. It just means that maybe writing isn't
(23:03):
what comes most naturally to them.
So you're elevating everybody.
You're actually equalizing the playing field for people who come in from maybe less advantaged backgrounds. So, I think both from an egalitarian perspective and also from the perspective of our goal in NEJM AI, and I think in science in general, is to elevate the
(23:23):
quality of the science that is done and the insights that that provides us. It is not to try and judge whether someone writes better than somebody else. For that, there is college and exams and so on and so forth. But when you get to the point where you're actually a practicing scientist, what we should care about is the quality of the science that you're able to produce.
(23:44):
So, I think it's absolutely the right decision that we took. I think that, frankly, in not a very long amount of time, I would say no more than a couple years and probably less, the idea of banning the use of these tools will be as laughable as the idea that we shouldn't let people use computers to do data analysis or, or even calculators.
(24:07):
It's just going to be a, or that we shouldn't let people use spell checkers, which of course is laughable today. But if you go back not that long ago, there were people who were advocating against the use of spell checkers and calculators and computers to do data analysis. And I think the banning of LLMs from scientific study is going to look equally laughable in a couple years.
Yeah, I completely agree.
(24:28):
And just as a disclaimer, I think we are all aware that now there's a startling number of papers in Google Scholar where, if you search for "as a large language model, I can't do blah, blah, blah," it will actually show up in published papers. So, we are certainly not advocating for people to turn off their brains.
Or, or anything like that.
This is responsible use of LLMs.
And like you said, writing is a very specific skill that strangely
(24:50):
science is highly selective for.
And it's kind of an orthogonal skill, like the ability to think, the ability to reason, the ability to be rigorous, are somewhat orthogonal to your ability to communicate those ideas in written prose. And I love the idea that this is a great leveling tool, uh, for that. And the ability to, um, summarize the ever-growing literature so that
we are able to, um, potentially,
(25:12):
better contextualize our work in terms of what's already been done. I think it's going to make for better science overall, but you're absolutely right, Andy. This does not mean that it absolves the scientists from the ultimate responsibility for what their paper says. Ultimately it is your responsibility as a scientist to make sure that you have conviction behind the correctness and the novelty of what you produced.
(25:34):
And that is absolutely, and we stated that very clearly in the paper as well.
Agreed.
Daphne, what is your sort of favorite use of language models yourself? In either, I'm going to guess just from some of your comments, maybe consuming or summarizing some of the scientific literature, or some other aspect of analyzing data or preparing, editing. How have you found them useful in writing papers?
(25:57):
I think that right now, for me, the killer app really is the summarization of papers and the scientific literature, because the amount that is out there is just overwhelming and growing so fast.
The ability to sort of very quickly get a read on whether a paper is
(26:18):
likely to be relevant to the question that I'm studying, which oftentimes you're not going to get from the abstract, because the thing that you're looking for is somewhere on page five.
That to me is, I think,a real killer app,
personally.
I will say that when I think more broadly about the impact of this technology, I think one of the biggest impacts is going to be the democratization of programming.
(26:40):
Right now, programming is one of the more challenging disciplines for people to manage, to learn, and yet it's a huge empowerer of people in terms of making sense of the world, getting computers to do things that they personally find interesting, even if it's organizing their, you know, their, their photos or their recipes, or, or something that is
(27:03):
more important, um, from a scientific perspective, like analyzing data. Right now, there's a gap. Even, you know, when I think about scientifically very talented colleagues that never learned to program, and they're really dependent on having a data scientist kind of glued to them at the hip, helping them do their data analysis, which really limits the number of hypotheses that they can interrogate.
(27:26):
And so if we create something where you can program by natural language and ask questions of the world, uh, without needing to learn how to program, I think that will be hugely democratizing. I do think that it will create an obvious gap in the next skill set up the hierarchy, which is structured thinking, which is
(27:48):
something that, unfortunately, we as a community do not take the effort to explicitly teach to our students. We kind of figure that they'll learn it on their own or they're
born with it intrinsically.
I don't think either of those is true.
Um, and I think teaching structured thinking is going to be an imperative for educators going forward, because that's going to be the thing
(28:11):
that unblocks your ability to leverage LLMs for problem solving.
I agree. I think the thing that I always take away from my degrees in stats and computer science was not my ability to write Python, but my ability to think algorithmically and probabilistically and reason under uncertainty. It's more of a way of thinking than it is a skill set.
A hundred percent.
I have to say, unfortunately, I haven't programmed in a large number
(28:33):
of years at this point, but those skills of really taking a very mushy, abstract problem, one that's ill-formed, and breaking it down into manageable pieces that together create a solution to the thing you were originally looking to solve. I think that is a skill set that's not going to go away anytime soon.
Agreed.
(28:54):
I'd love to hop forward now to insitro. Um, so, for those keeping score at home, you've been a child prodigy, a Stanford professor, Coursera co-founder, and now we're going to transition to founder and CEO of insitro. So, could you tell us a little bit about the founding story around insitro? Like what made you want to take this on? Cause being a CEO, again, is a very different skill set than being a
(29:17):
researcher and being an academic.
Uh, so yeah, could you, could you walk us through that?
Um, so I think I'm going to go back a little bit earlier than your question, to the time that I departed Stanford to go to Coursera. And that really emerged from what had been an increasing sense of urgency to make a difference in the world, and trying to think about what could I do that will really have much more of a direct impact than just writing
(29:42):
papers and hoping someone reads them, or training students who hopefully go on to do something meaningful.
And so at that point, work that I'd been doing at Stanford, um, for technology-assisted education basically led to the launch of the first three Stanford, so-called MOOCs, Massive Open Online Courses. And when I looked at that impact, where we had 100,000 learners in each of those
(30:03):
courses in a matter of weeks, I had a choice of, well, I could just go back and write some more papers, or I could actually leave Stanford and do something to really bring that vision to life. And I decided to do the latter, left Stanford on what was very much supposed to be a two-year leave of absence rather than a permanent departure, to really
try and bring that vision out.
And it was an absolutely terrifying experience, because not only
(30:25):
had I never built a company, I'd never been at a company.
I'd been an academic my entire life.
I had no idea what an org structure looked like.
I had no idea what a one-on-one was like.
It was just like completely jumping off a cliff and hoping for the best. And, um, and so I ended up doing that, and it was definitely a huge and terrifying learning experience to build a company from the ground up, especially
(30:48):
a company that was a rocket ship like Coursera, where we were on an exponential curve for quite a large amount of time during those first couple years. And at the end of those two years, Stanford basically came and said, well, we have a cap of two years on your leave of absence.
So, are you coming back now?
And I said, I can't come back right now.
We're still building.
(31:09):
And they said, well, you have to pick.
And so, I picked, and I ended up resigning my endowed chair at Stanford. And my mother thought I was nuts, because who on earth leaves an endowed chair at the world's top computer science department. But anyway, there we are. And I stayed at Coursera, I think, for a total of about five years. And at the end of those five years, it was a good moment to
(31:30):
sort of step back and reflect.
And notably, if you think about the timeline, I left Stanford at the end of 2011, early 2012, which was just when the machine learning revolution was starting in 2012, with ImageNet and the deep neural networks and so on. I'd missed all of that, and I'd been far too busy at Coursera to even pay
much attention to what was going on.
(31:52):
I said, yeah, there's a lot going on in machine learning, but I didn't have time to even track. And then in 2016, when I started to look around, it was like, wow, machine learning is changing the world across pretty much every sector, but where it's not having
much of an impact is in the life sciences.
And one of the reasons for that, I felt, as I still do today, is that
(32:13):
there's just not very many people who speak both languages, who both truly understand the problems that really make a difference in biology and medicine.
And at the same time, also understand what the tools can actually deliver, and are able to bring the two together.
There are certainly more now than there were when I started, but it's still a diminishingly small fraction of, say, machine learning researchers
(32:35):
who really are able to take those insights and apply them to life science, or the other way around.
And so I decided that this was an incredible opportunity for me to make an even bigger impact, and Coursera was in good hands, and there's not really a lot of AI in Coursera, certainly not at the time, and I felt like if I was going to make an impact, this was the place where I could bring the biggest value.
(32:58):
And so I ended up at that time going to Calico, which is a drug discovery company within the Alphabet umbrella. I didn't really know a lot about Calico, but it was an incredible opportunity to work with unbelievably talented leaders like Art Levinson
and Hal Barron and others.
And I figured it's certainly at least a place where I could learn
(33:19):
and work with wonderful people.
And so I did that, and that was my first exposure to drug discovery.
And when I looked at how drug discovery was done, even at a cutting-edge
place like Calico, it was like, wait, this is how we make medicines?
No wonder so few of them are successful.
And so, I realized relatively early in my journey there that what I really
(33:43):
wanted was to build, I mean, I'm, I'm an engineer, so I build products, I build things, and I wanted to build a, a system that would help us make better medicines faster. And it didn't make sense to build a platform like that within the environment of a company that focuses on the biology of aging,
which is what Calico's mission is.
(34:03):
And so rather than trying to create a xenograft of, you know, these two companies that don't really make sense together, um, I ended up leaving Calico in February of 2018 and launching a company, insitro. If you think about the name, it's the synthesis of in silico, which means in the computer, and in vitro, which means inside the lab.
(34:25):
And really bring those two groups of individuals, these two ways of thinking, these two types of technology, together into a single integrated whole that is going to allow us to discover and develop better medicines. And that's the vision behind insitro. Uh, it was founded as an n of one with a very substantial amount of
(34:46):
capital from a group of investors who had actually been looking to make an investment in the machine learning-enabled drug discovery space.
They had done diligence on a number of companies, found all of them lacking in credibility or something else. And so when they heard that I was kind of looking to build something, they said, yeah, here's a hundred million dollars, do something.
(35:06):
And um, so here I was, with a hundred million dollars, as an n of one, uh, without any network of peers or connectivity in the biotech ecosystem to build a team around me.
So it was,
let's just say challenging and terrifying in a very different way from the Coursera journey, which was my first industry foray.
(35:27):
This wasn't my first, but it was a deep dive into a completely different
ecosystem that I knew very little about.
It took a while to get there, but, um, I'm privileged to now have an incredible team of people with very complementary types of expertise, because coming back to the name and the vision behind the company, building this new kind of drug
(35:51):
discovery company that requires truly an equal partnership between life scientists and computational scientists and drug discovery experts, you really need to have a group of people who come in with a genuine intent to understand each other and work together, and they all have a seat at the table, which is a
very rare thing to find in this industry.
(36:13):
And so is it fair to say, I always try and distill things down. Do you think of insitro as a new kind of drug company? Kind of putting aside like pharma baggage and things like that. Is that fundamentally at its core? Like you're making new medicines and you hope to carry them from inception all the way through phase three clinical trials. Is that kind of the vision for insitro, and AI can sort
(36:35):
of help at all, all stages?
So, I think the answer is absolutely, AI can and will help at all stages, as we go from the concept of, this is a disease we're trying to deal with, a group of patients we're trying to help, all the way through the creation of a novel therapeutic hypothesis, turning that hypothesis into chemical matter,
(36:57):
taking that chemical matter and going through the clinical development
and even ultimately beyond that.
I mean, we, I think over time we're going to have to understand
what our drugs do to patients
in the wild, in the real world, and use that to inform the next
stage of our drug discovery effort.
So, all of those are places where AI can help.
As a small company, I think it's unrealistic for us, at least in the
(37:20):
early stages, to imagine that we would take every single one of our insights and turn every single one of them into a phase three clinical trial, simply because it's a very expensive process. And so I expect that we will partner some of those projects to other companies, whether they're big ones like pharma or other biotechs that have
other capabilities that we don't have.
(37:41):
That is certainly in our future, but our ultimate goal is to make at least some, and if we're successful over time, more and more of those projects all the way through, because I think that AI truly can enable us to become more effective and efficient throughout the life cycle of a drug.
Can I get your thoughts on how AI at insitro, and kind of like biotech more
(38:04):
generally, like where the right sweet spot for what we can currently do is? And I'm going to give you two axes here. So, one axis is kind of like biological risk or uncertainty. So, like low down on this axis are biological things we understand. We have a target, we have a pathway. Things high on this axis are, we don't even know sort of what the mechanism is. The other axis is kind of technological risk or uncertainty. So, like if you're low on the biological risk, but high on the technical
(38:28):
risk, we know what the target is, but we don't know how to hit it. So is AI good for helping us sort of reduce biological uncertainty, or is it really good at prosecuting targets that we currently know exist,
but we don't know how to hit them?
I think AI is good for both, and different companies have taken different trajectories. So, if you look at the field of
(38:48):
AI-enabled drug discovery companies, broadly construed, you will find that the preponderance of those companies are actually, I don't know if I would call it technical risk, but taking targets that have been reasonably well validated and turning them into chemical matter, and they each have their modality of expertise. A lot of the earlier ones were in the small molecule space.
(39:10):
Now there's a growing number in the large molecule, protein, antibody space, I think driven by the successes of AlphaFold and its follow-on successors that allow us to design proteins very effectively. There are even more companies now looking at novel modalities or more cutting-edge modalities like, for example, gene therapy, where you can design the
(39:32):
capsid, or, um, people now, with all of the excitement around RNA, looking at RNA therapeutics, and I think there's a lot of companies in that bucket. The number of companies that actually focus, as we do, on the discovery of novel therapeutic hypotheses is actually quite limited, and I think there's a number of reasons for that.
(39:52):
One is, biology risk scares people. It's also something that takes you a lot longer to know if you've been successful. When you're in the context of designing a drug, there's usually a fairly well-established set of assays that you can perform on a drug to know, at the end of whatever your two-year design period, if you know, if you're lucky,
(40:13):
that you've succeeded.
It has certain binding affinity, selectivity, solubility, whatever,
and you know that you've succeeded.
And everything downstream is, you know, you don't need to worry about that. For biology risk, ultimately success is when you've put the drug in a patient, and
it actually helps the patient be better.
And so the timeline is much longer.
The risk feels to a lot of people to be much larger.
(40:36):
So, I think that is one element.
And the other is that there's not a lot of training data in therapeutic hypotheses, and the very naïve approach of, we're going to rely on successful drugs as training instances for machine learning or AI models, doesn't work. There's just not enough successful drugs, and we don't understand what
(40:58):
it is that makes them successful.
So what we've elected to do is really design the problem in a very different way, where we have different ways of generating training data. We also have a much greater reliance on unsupervised and self-supervised machine learning algorithms, where the need for supervised examples is much, much lower.
But it, let's just say it's a much harder AI problem and a
(41:21):
much harder scientific problem.
And so that's why I think we're relatively, I don't want to say unique. There are a couple of other companies that try and do that, but
it's certainly not the majority.
Yeah.
I want to ask kind of like a follow-up question about that, especially given your experience straddling both of these worlds. I know in medicine, when we kind of look over at what's happening in mainland AI and try and make analogies to what's
(41:49):
happening in our world. So we'll say, like, we're going to train a big language model of the electronic health care record. And we reason by analogy quite a bit, or we'll say, you know, a patient's clinical record is actually just an image if you squint and look at it the right way. And I think that there's a lot of this in biology too, where reasoning by analogy, we're going to build an LLM for the language of life or for biology.
(42:09):
In what ways do you think those analogies are helpful? And in which ways do you think that they can be hindering?
So that's a great question.
I do think that those analogies, and I would say it's beyond analogies, it's actual reliance on technical artifacts of actual products, that is helpful, because I think if we are going to train a, whatever, language model, or a
(42:36):
machine learning model, if you will, on, on medical images, the number of medical images that are available to us is usually quite limited, and if we don't see the connection to the models that were trained on cats and dogs and airplanes, and leverage those in our work, we're going to end up with performance that is very much suboptimal.
So I think those connections are very helpful.
(42:58):
At the same time, I think that there are definitely places where that overly simplistic view can lead to unintended consequences, um, where people, for example, don't really appreciate the challenges that you have with, say, batch effects that are very, very subtle sometimes.
(43:20):
And you get misleadingly high performance on your supervised task because the machine learning model is latching on to something that is, you know, some weird, uh, artifact of the x-ray machine that took the image, and this x-ray machine was used in this hospital and a different x-ray machine was in a different
(43:42):
hospital, and the patient population is just different, and one of them has more of a certain kind of disease than others, and so the machine is making wonderful predictions based on something that has absolutely no biological relevance. So I think that is something that certainly happens in other contexts as well, but is much more prevalent in, um, in the biology and medicine space.
And then even more so, is the sort of recognition that the ability to extract
(44:08):
insight from these models, because ultimately when you're building a predictor and all you care about is, can it find images of cats and dogs for me on the web, you don't really care why, you don't care how, there's not a need to sort of ask what the model is latching onto as long as it's doing a good job. In the context of the scientific discovery world, the insight is often
(44:30):
the thing that you care about rather than the quality of the predictions. That's one of the things, by the way, that makes, for example, the difference between
discovery systems and diagnostic systems.
In diagnostic systems, you can make the argument that all I really fundamentally care about is, is it making good predictions, albeit out of distribution? If you're doing discovery, the ability to trace back and understand
(44:52):
something that is going to be the, whatever, the therapeutic hypothesis, is much, much more important.
So one last question before we get to the lightning round. Um, so again, a big analogy that we often make is an appeal to the scale hypothesis. And so for large language models, the scale hypothesis has
continued to prove to be true.
(45:13):
And just as a reminder, that's the idea that if you have an algorithm that scales with data and compute, you can just keep throwing more of both of those
at it and keep getting better results.
How do we think about that in the context of biology specifically? Because a lot of biological data is highly redundant. And I'm thinking of, like, genome sequencing data is highly redundant. And even if you have, even if you've sequenced every person on the face of the planet, the sort of information per bit there is actually
(45:37):
pretty sparse, just because we're all very, very similar to each other. So how do we think about the scale hypothesis for biology, and to sort of
what extent is that a useful analogy?
I actually believe in the scale hypothesis, even in
the context of biology.
I think if you look at imagesof cats and dogs and airplanes,
(45:59):
you've seen 50 images of airplanes.
I don't want to say you've seen them all, but you see them in slightly different perspectives, from slightly different angles, with slightly
different tail markings and so on.
And you continue to learn.
And yes, the incremental benefit of each new sample diminishes, but it's still valuable, and that's how we've gotten to the performance that we've gotten.
(46:20):
I think we're nowhere close to saturating the ability that we have to learn
new insights from biological data.
Now, you gave DNA as an example, so I'm going to argue about
DNA, and then I'm going to talkabout the broader phenomenon.
Even in DNA, I think the more peoplewe sequence, the more rare variants we
(46:41):
discover that potentially are highlydeleterious or, or highly, um, or
sometimes highly protective, as well asinteractions between different variants.
Things that are protective only in acertain, um, environmental context,
only in a certain genetic context.
We're nowhere close to saturating theinsights that we have, especially given
(47:02):
the fact that the vast majority of the individuals that we've sequenced so far have come from European backgrounds. So, yes, we are very similar to each other. But there's a lot of different other genetic backgrounds where we're nowhere close to having found the relevant genetic variants. And we've seen that even in the, you know, recent publication from All of Us, as well as many of the others, that there's more and more
genetic insights that can come out.
(47:24):
Now, when you think about the broader spectrum of data that can be collected about even just a single human organism, you have single-cell RNA-seq from different cell types, in different states, at different ages, under different environmental conditions. That's just RNA-seq. You can also think about protein levels, and we haven't gotten to
(47:46):
protein levels at the single cell.
We're starting to scrape, you know, the boundaries on that,
but it's certainly not at scale.
And then going down into the level of individual proteins, going up to the level of, you know, now everyone's excited as they should be around spatial biology and the interplay between different cells, the number of data modalities, the number of ways that we have to measure biology, and the number
(48:08):
of distinct biological contexts that we as humans live in, and that exist within even the body of a single human, is ginormous. It is way larger, I think, than the complexity on which we've trained the large language models that we're currently leveraging, in the more traditional LLM sense.
(48:28):
What we lack are data collection approaches that achieve that scale. I'm just really excited to be living in this time, because the number of ways that we have to measure biology quantitatively and at scale is increasing, maybe not quite as fast as the capabilities of AI, but you know, we're kind of maybe
(48:49):
a few years back on that curve, but you can see that exponential curve. And I think the synergy between those two is going to just unlock an incredible tidal wave of insights as we start to bring those tidal waves together.
Well, that I think is one of the most forceful endorsements of the scale hypothesis for biology that I've ever heard.
(49:11):
So I, you know, consider me convinced.
I'm glad.
So, let's go to the lightning round.
So the lightning round is, we'll ask you a bunch of different questions. The goal is to keep the answers relatively brief.
Some of them are silly.
Some of them are serious.
And we'll let you decide which one is a silly one and which one's a serious one.
(49:33):
And can I not answer ones that I don't want? So, abstention is not an option, unfortunately.
Well, I don't know.
I'm an independent human being.
I can decide later if I believe that.
That is, that is true.
That is true.
That's fair.
Um, so the first question is, what's an example of a frivolous thing or
something that you do just for fun?
(49:54):
I do the New York Times crossword puzzle and I love to hike.
Nice.
Nice.
What is your all-time favorite book or movie?
I'm often asked that and I usually refuse to answer, because I have too many favorites in different contexts and different moods.
So I'm going to abstain.
How about a softer question?
(50:18):
What is a recent good book or movie that you read or watched?
I really liked Oppenheimer.
It's a recent movie that I watched.
It is a wonderful synthesis of the challenges and opportunities and risks in science, as well as a very human story about a scientist
(50:42):
and challenges that he faced.
And so, I thought that was a really good movie.
I need to see that.
It's on my list, and it's just won Best Picture, right?
Best Picture and Best Actor.
It's one of a bunch of them.
Yeah.
It's cleaned up.
Yeah.
Yeah.
Cleaned up.
All right.
Excellent.
Um, what is one of the core guiding principles of your life?
(51:03):
I'm gonna name two that are intertwined.
One is that I believe that it is the responsibility of each of us to try
and leave the world a better placeby virtue of us having been here.
And the more you were born to privilege,the greater your responsibility is.
(51:23):
And I also believe strongly in leverage, which is the fact that the benefit that
you bring should focus on places where you can be disproportionately impactful.
There are a lot of things that I could do, even I as a human
being, I as an AI researcher.
The reason I'm doing what I'm doing today, versus the many other things that
(51:45):
an AI researcher could do, is because there's a lot of very talented AI researchers who can work on computer vision and robotics and natural language and other things, and probably do it as well or better than I can.
AI for biology is a rarefied group, and I think I can bring
disproportionate impact by doing that.
I don't want to interrupt the lightning round, uh, too much, but, uh, I
(52:09):
think this has been a real recurrent theme amongst our guests, which is
speaking both languages and not
being an AI researcher who sort of dabbles in biology or medicine, or a biologist who dabbles in programming, computer science, but really committing to both skill sets and really having both skills in the same mind, um, as, as leading to success.
(52:29):
Uh, so it's great.
You know, Ziad Obermeyer spoke very, very cogently about this, and I think
several other guests have as well.
So great to, great to hear that.
All right.
The next lightning round question, what is harder for AI, biology or medicine?
Biology.
Um, I think because there's, the complexity there is very, very
(52:54):
large and it's incredibly intricate.
I think medicine obviously is complex as well, but I would say the,
the bar in terms of really impacting
clinical care by things that are relatively simple for AI to do is, is
(53:17):
lower, so I think over time we'll start to hit the point where those relatively simple things are not enough and we need to go beyond, and then the equation might switch, but right now I would say biology.
Do you think some of that is tied to explainability or having to understand
(53:38):
the mechanistic or the causal diagram that is required in biology, versus in medicine, we have many examples of things that work well, where we don't exactly understand why they work well. So, is part of that wrapped up in explainability?
I think part of that is wrapped up in explainability, in complexity, in
(53:59):
multimodality, and again, coming back to, there's a lot of things that we currently do with patients that are so incredibly suboptimal. The fact that we do not tailor our treatments to the specifics of an individual patient, even though we know that one size fits all doesn't work.
(54:19):
And I think there's just a tremendous opportunity to take these rich data that we're able to collect around patients (although oftentimes we don't, and don't record them, but we could) and really impact the care that we provide to those patients. There's a bunch of, I would say, relatively simple things that we could do, even if it's only helping clinicians
(54:44):
take better notes so that they have more time to think about their patients. There's just a lot of, I don't want to call it low-hanging fruit. Nothing in this space is low-hanging, but, um, a lot of, you know, things that we could build that would be very, very helpful to how we care for our patients, um, and I think biology is just a longer journey.
(55:04):
All right, um, so I think this is going to be our last, last
lightning round question, Daphne.
So it's one that we like to ask lots of folks. So, if you could have dinner with one person alive or dead, who would it be?
It's going to be really lame.
Um, but Albert Einstein.
Oh, not, that's not lame.
How could Albert Einstein be lame?
(55:25):
Well, because it's so cliche.
It's so cliche, right?
It's all relative.
Sorry.
Yeah.
I'll just say it.
I'll show myself.
I'll show my way out.
Yeah.
All right.
Very good.
It didn't land well.
It didn't land well.
There's some smiles for the, for the recording.
There's some smiles here and some smirks.
Some golf claps happening now.
Um, okay.
(55:45):
So, I think we're going to wrap up with one big-picture question before we let you go. We've heard you make comments recently about how it's hard to know where you are on the exponential curve of AI progress, and how understanding where you are on this curve, or the difficulty with understanding where you are, leads to bad intuitions about what the next five to 10 years is going to look like.
(56:06):
So, could you sort of expand upon that and help us, like, um, maybe give us a more sensible intuition about what it might look like in the next five to 10 years?
I'm going to answer that question by explaining why it's so hard to know where on the exponential curve you lie, and the fact that you are in fact lying on an exponential curve.
If you go back a decade, a lot of people,certainly outside of the field, but I
(56:28):
would say even within the field, wouldlook at how much progress had been made
in the last, whatever, two, three years,and they would extrapolate effectively
a linear, um, a linear interpolationof those points and say, this is
what we're going to be in 10 years.
And inevitably that was wrong.
But even then, it didn't sink in topeople that this wrongness, that you
(56:49):
keep doing this year after year and yearafter year, you're wrong, um, didn't
really sink in because the wrongness wasrelatively limited in the early days.
As you start to get to the point whereyou're even your linear extrapolation, uh,
is, gets you to these ridiculous places.
And that's when I think people realizethat, oh my God, AI is suddenly here.
(57:12):
But it's not suddenly.
It was here, you know,um, 10, 15, 20 years ago.
It's just that, it's just that we didn'trealize we were on that exponential curve.
I think we're in the same place.
I mentioned that earlier in terms of the ability to interrogate biology
at scale in a quantitative way.
And people often don't realize that we are on a similar exponential curve
(57:32):
because we are at the earlier stage.
But when you are on an exponential curve, the base of the exponent, just how
quickly that exponential curve goes up, is very hard to sort of extract, and
small differences make a very significant delta in where you will be in five to
10 years, to the point that, you know, even I, for example,
(57:57):
I realized we were on an exponential curve, um, quite a while ago, and I did
not predict the large language models.
I did not predict where we would be in 2023 in terms of the ability to perform
tasks at this level that involve language.
So given that I wasn't able to predict back in, say, 2020 that this is where we
(58:20):
would be in 2023, even recognizing that we were on an exponential curve, I'm not
going to shame myself or embarrass myself by trying to make predictions about 2026.
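As a rough illustration of this point about extrapolation (the numbers below are purely illustrative and not from the conversation), a short Python sketch shows how extrapolating the last year-over-year increment linearly badly underestimates an exponential, and how a small change in the growth rate compounds over a decade:

```python
# Illustrative only: why linear extrapolation of an exponential fails,
# and why small differences in the growth rate matter after 10 years.

def linear_forecast(y0, y1, years_ahead):
    # Extrapolate the last observed year-over-year increment linearly.
    return y1 + (y1 - y0) * years_ahead

def exponential_forecast(y1, rate, years_ahead):
    # Compound the current value at a fixed yearly growth rate.
    return y1 * (1 + rate) ** years_ahead

y0, y1 = 100, 130  # two consecutive yearly measurements (30% growth)
print(linear_forecast(y0, y1, 10))         # 430.0   -- the "linear intuition"
print(exponential_forecast(y1, 0.30, 10))  # ~1792   -- the actual exponential
print(exponential_forecast(y1, 0.35, 10))  # ~2614   -- a slightly larger base
```

With 30% yearly growth, the linear guess lands at 430 while the exponential reaches roughly 1,790, and nudging the rate to 35% pushes it past 2,600 over the same 10 years.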
So maybe I'll just sneak one last question in here, a big picture question.
And I would ordinarily like to end on a positive note, so you could spin this
(58:42):
into something positive if you so choose.
Is superintelligence, or existential risk from advanced
artificial general intelligence,
something that you take seriously?
You know, we've had some very prominent scientists and folks in the industry and
developers of some of these AI models too, including folks like Geoff Hinton,
(59:03):
who've really been very involved from the early days, voice strong concerns about
the path that we're on and about what the capabilities are of the next wave, or
the next wave after that, of these models.
Is that something that you take seriously?
Um, or something that you're not so concerned about?
(59:25):
So, I will say that this is definitely not ending on a positive note.
I will answer the question, but then maybe we can find a positive note to end on.
I'm not
worried most about the form of existential risk whereby computers
develop a level of autonomy, and whatever sentience, that makes them want to
wipe out humanity, and the Terminator scenarios that emerge from that.
(59:48):
That is not my biggest concern.
My biggest concern is that we are unleashing and democratizing a very large
collection of very powerful tools that humans who do actually do evil things
to other humans are able to leverage, in ways that we're already seeing today.
And what I think is enabled by these tools is evil at scale.
(01:00:11):
And so one of the things that I actually think the existential risk conversations
around the Terminator scenarios are doing is diminishing people's ability
to focus on the much more immediate, in fact, I would say present-today,
risks of humans using those tools to facilitate human trafficking, child
(01:00:32):
pornography, the erosion of truth.
I mean, I can go on and on and on to talk about all those risks that, because
we're not focusing as much on them, I think are being allowed to thrive.
And it's going to be the same way that cybersecurity is.
It's an arms race.
The bad guys develop a technology,
(01:00:53):
the good guys have to develop the defense, probably using the same set
of tools. In this case, AI is going to be the defense as well as the offense.
How do we use AI to detect deep fakes, to prevent the erosion of
truth, to create watermarks that can't
be forged? I mean, there's a whole bunch of things that we could and should be
focused on as a community, but we're not.
(01:01:15):
So, I think that to me is one of the biggest challenges of this narrative.
Yeah, I agree.
And I really like your focus on the present, and AI is obviously dual use.
And I think you did a good
job at enumerating what
some bad uses of it may be.
In an effort to fulfill your request to have this end on an
optimistic note, what gives you hope?
(01:01:36):
What gives you optimism about AI?
How will it make our lives better?
And what keeps you working on these problems?
So, I'm going to divide the answer into two.
I think that AI is going to make our lives better and easier in ways that I
think are pretty much visible to most people today, you know, agents that do
our bidding, so that I don't have to deal with a lot of the minutiae of day to
(01:02:00):
day because an AI will be able to take my verbal commands and go and deal with
those minutiae for me, and I will be able to have more time to do other things.
It will empower people who might have really great ideas, but aren't
able to write, or aren't able to draw, or aren't able to make
movies, to create in that way.
Or to program, which we've talked about earlier. There
(01:02:23):
is a huge empowerment that is going to come from the availability of these AI tools.
That's one element.
The other element, which I think is often not as visible to the broader
community, is the AI for what you might think of as deep tech or deep science
or solving really, really hard problems that humanity has been grappling
(01:02:45):
with for decades, centuries, and that we're not able to solve on our own.
And that can be things like how do we bring better therapies to patients,
or maybe how do we address climate change, um, and carbon capture and
things like that, where, um, we need all the help we can get, and those
tools are going to be super powerful.
(01:03:05):
So, I think, yes, we do need people who work on the more, if you will,
consumer-facing aspects of this.
The AI agents, the creative tools, the programming bots and all that.
I think that's really great.
But we also need people who are willing to kind of take the challenge of grappling
with things that are going to take
a longer time to solve.
You don't get the immediate gratification of seeing people use your
(01:03:28):
agent bot, but you are doing something that potentially can be transformative
to the benefit of the world.
And I think both of those are important opportunities of this technology.
All right.
I think that's a great note to end on, and, uh, Daphne, thank you so much.
This was a wonderful conversation and we really appreciate you
coming on AI Grand Rounds.
(01:03:49):
Thank you so much, Daphne.
We know how busy you are, and this was super great.
Thank you, Andy and Raj, you asked some really insightful questions, including
ones that I'd never been asked before, which is unusual, and it was a wonderful,
far-ranging conversation, so thank you for taking the time to speak with me.