Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin.
Speaker 2 (00:20):
So you know Moore's law, right, the idea that computers,
specifically chips, get better and cheaper at an exponential rate.
People who work on researching and developing new drugs, they
talk about Eroom's law. Eroom is Moore spelled backwards.
And this is sort of a half joke, half not
a joke, way of pointing out that developing new drugs
(00:41):
has gotten slower and more expensive over the past several decades.
It has gone in the opposite direction of developing new microchips. Today,
after all the money and technology and hard work and
intelligence poured into drug development, something like ninety percent of
drugs that go into clinical trials fail, only ten percent succeed.
Speaker 1 (01:05):
That means that if we got to.
Speaker 2 (01:06):
A place where even just twenty percent succeeded, still an
eighty percent failure rate, we would double the rate of progress.
I'm Jacob Goldstein, and this is What's Your Problem. My
guest today is Patrick Hsu. Patrick got a PhD from
(01:27):
Harvard in biochemistry when he was twenty one years old,
and then four years ago, before he turned thirty, he
founded the Arc Institute. It's a nonprofit research center in
the Bay Area that hosts scientists from Stanford, UC
San Francisco, and UC Berkeley, where.
Speaker 1 (01:45):
Patrick is on the faculty.
Speaker 2 (01:47):
Patrick's problem is this, how can you use AI to
make biological research more efficient, to guide scientists more quickly
to discoveries that'll lead.
Speaker 1 (01:57):
To new and better treatments.
Speaker 2 (01:59):
As you'll hear later in the conversation, Patrick is particularly
focused on Alzheimer's disease. To start, he told me about
why basic biological research is still so slow.
Speaker 3 (02:11):
So if you look at a biology research lab in
the eighties or the nineties, the early two thousands,
the early twenty tens, or today in the twenty twenties,
they look basically the same.
Speaker 1 (02:23):
Right.
Speaker 3 (02:24):
You have these long benches, you have these two or
three rows of shelves, these micropipettes, and various machines
that look like home kitchen equipment. Right, and so graduate
students and postdocs, or bench scientists generally, are like the
line chefs, right, inside of, you know, a Michelin star kitchen.
(02:45):
You're prepping the vegetables. You know, you're making the sort
of initial sauces. Right, So you might be taking tissues
down or cells, processing them, staining them with antibodies. I
think the point is that doing these experiments is extremely slow,
(03:05):
very manual, and requires a huge amount of soft knowledge
and know-how, and is really variable lab to lab.
So we have this multi-stack problem where we have
multi-step search. Right, the experiments take months to years
to run, and we basically search across the entire
(03:26):
space in an extremely manual way. And so in the
modern era of AI, where we believe we can predict
things across many different fields, the question is, can we
speed up biology by having models that
actually have predictive value and power. And our goal is
to be able to create biological intelligence that starts to
(03:49):
crack that threshold where a cell biologist will use the
model to rank the top twelve things that they'll do
in the lab, rather than just going into the lab and
trying a bunch of things based on hypothesis-driven science.
Instead, you actually do model-first prediction and then a
lab-in-the-loop experiment.
Speaker 2 (04:10):
And I mean we'll get to the sort of things
that have to happen for that to happen. But if
we could get it to work, right, if you could
get it to work, what's the payoff beyond the lab level?
Speaker 3 (04:20):
I think the first thing is the practice of how
you design and do experiments will completely change. Right. And
so just like, you know, graduate students today, whenever they
want to research a new area, they just use ChatGPT,
right, or Claude or Gemini or whatever your
favorite model is. Right, that has just become a fundamental
part of the workflow. In fact, I don't think most
(04:41):
people read papers the old fashioned way anymore, right. And
it used to be, oh, I read them online in
my browser instead of in a magazine like Nature and Science.
And now it's just, I throw the PDF into ChatGPT
and I read the summary. Right. That's just a
completely different workflow than just even a few years ago.
And I think instead of designing experiments by hand, we're
(05:02):
gonna have AI models design the experiments for us and
help us troubleshoot and decide what to do next.
Speaker 2 (05:08):
I've heard you describe the way biology works now as
guess and check.
Speaker 1 (05:14):
Which I liked.
Speaker 2 (05:15):
Which I guess on one level is kind of the
scientific method, right, It's kind of like hypothesis and experiment.
But the guess really leans into the not quite blind,
but the uncertainty, all the blind alleys that I guess
you go down exactly.
Speaker 3 (05:31):
You know, it's a lot like that childhood
game Battleship, where you're trying to sink someone's battleship. You
don't quite know where it is, and so you're searching
these different quadrants. Or like Minesweeper, right, it's just
very hard to know that you're searching in the right place.
And so the first thing is, can these AI models
help us search in the right place so that when
(05:52):
we're peppering things with these individual manual wet lab experiments,
our hit rate can significantly go up, whether that's
for designing a drug or, you know, predicting how cells
respond to, you know, some perturbation.
Speaker 2 (06:08):
Yes, and then that's kind of the first order thing.
So it's basically making basic research more efficient. Speed. Yeah,
but what are the second and third order things?
Like, how does that translate to clinical outcomes?
Speaker 3 (06:23):
I think they're directly linked, right.
Speaker 1 (06:25):
Yeah.
Speaker 3 (06:26):
In drug discovery, we do a thing called target identification.
Speaker 1 (06:30):
Right.
Speaker 3 (06:30):
You need to be able to find the right drug target,
and you need to be able to perturb it in
the right way. You need to find the thing that's
going wrong, the toxic protein that's causing a disease, and
try to turn it off. Right. So the goal's really
been finding the right drug target and
then drugging it in the right way. But the problem is
we don't seem to find the right drug target, or
(06:53):
even know what it is for most complex diseases. Alzheimer's disease,
many different types of cancer, aging, autoimmunity, a lot
of the major killers, it's still the same. Right, we
don't know what to actually target. We think these models
could help us do that.
Speaker 2 (07:09):
Let's talk about the Arc Institute, right, which is where
a lot of this work is happening. A thing you've
started and you run. Like, tell me about the Arc
Institute, and, like, specifically why you started it.
Speaker 3 (07:20):
Arc is about four years old today. We're about three
hundred and fifty people, and we're a full stack AI
and biology research organization. And so by that, I mean
we try to natively combine experimental science and computational science
under a single physical roof so that we can do
(07:41):
native iterative lab in the loop between AI models, both
training and running them and then actually experimentally validating and
verifying things in the lab in order to create new
mechanistic insights, new drug compositions, and identify new therapeutic targets.
We really have two major goals. The first is to
(08:05):
create these virtual cell models that can simulate human biology,
via foundation models. The second is to cure Alzheimer's disease.
We're interested in the view of Arc as a sort
of Edison shop, right, where you're inventing a whole bunch
of different things that we can actually productize. And so,
in a way, our ambition for Arc has been increasing
(08:28):
beyond how do we do breakthrough academic and basic science
to how do we actually productize this science in a
way that we can impact human health not just with
research papers, but with things that people can actually take,
see and feel and use.
Speaker 2 (08:43):
When you talk about the Edison shop, I mean, obviously
the Edison shop was a business, right, not a philanthropy.
Like, how are you thinking about it commercially?
Speaker 3 (08:53):
So Arc is a nonprofit, right. And so, you know,
the Arc Institute will always remain this
nonprofit, discovery-oriented mothership, but we also will certainly have
the capacity and the interest to be able to spin
out entities that can take on more focused commercial capital
in order to productize things.
Speaker 2 (09:14):
Let's go back to the machine learning slash AI work
you're doing.
Speaker 1 (09:20):
I think we should talk about Evo. Tell me about Evo.
Speaker 3 (09:23):
Evo is essentially a ChatGPT, but only with DNA, right? Okay.
And so it's DNA in and DNA out, right. So
you can talk to it by typing in nucleotides, and
you'll receive from the model some corresponding set of nucleotides.
So just like I could say to be or not
(09:43):
to, the model would tell me, you know,
to be or not to be, that is the question,
and then it will keep going with
the rest of Hamlet's soliloquy. Right. If I gave it a
fragment of, let's say, a mitochondrial genome or, you know,
an E. coli genome, it will then try to autocomplete the
(10:05):
rest of that fragment. It's not just memorizing and
regurgitating information from the training data, which is what's really important, right.
It's doing a kind of semantic diversification of what
it thinks is the meaning of what it needs
to make.
Speaker 2 (10:21):
Right.
Speaker 1 (10:21):
Yeah, that's the wild part.
Speaker 3 (10:23):
Right.
Speaker 2 (10:23):
If it was just regurgitating training data, it wouldn't be AI.
Speaker 1 (10:26):
It would just be sort of a library, right exactly.
Speaker 3 (10:29):
And this is why you can ask ChatGPT, you
know, here's what I want to say in my email,
write me the email, and if you gave it that
prompt ten different times, it would write you similar but
different emails.
Speaker 1 (10:40):
Right yeah.
Speaker 3 (10:41):
And similarly, you can make different CRISPR systems that
are similar but not the same, different mitochondrial genomes that
are similar but not the same. And you might say, okay,
well that's fun. Now you have a bunch of sequences
in the computer. What's interesting about that? The point is
that we can then chemically synthesize those DNA strands in
(11:01):
the lab and test them and see what their function is. Now,
we could then take the numbers that we've measured
in the lab and feed them back into the model,
and then tell the model things like, make it
better at this task that I've experimentally verified. And then,
this is the closing of the lab-in-the-
(11:22):
loop that we've built Arc around, in order to hill
climb on really any experimental readout that you care about.
Speaker 2 (11:29):
I know there was a project that one of your
colleagues did last year using Evo to basically
design a new version of an existing phage. A phage
is a virus that infects bacteria. So, like, tell me
about the project with the phiX phage and why it's meaningful.
Speaker 3 (11:48):
So we have these databases where, whenever people publish papers,
they deposit the genetic sequences into this database. So of
all phiX and phiX-related phages, we have a sense
of the evolutionary distribution that is out there, that we've
been able to detect so far as scientists, right. Yeah,
(12:09):
And it turns out Evo is able to make new
versions of phiX that are as evolutionarily distinct as
everything else that we've seen. So we're not just copying
and making things that look like a Corolla or another
Accord. That would be very boring. But we can make,
you know, a fundamentally new type of car. Right, yeah,
(12:33):
it's still a car, but it's a completely new type
of design. Let's say, something as unique as a
PT Cruiser, but much much cooler.
Speaker 2 (12:42):
Okay, And so so let's talk about the meaning of
this as a research tool and then the clinical implications, right, Like,
why is it meaningful as a research tool? Why is
this a meaningful proof of concept.
Speaker 3 (12:56):
The first thing is that they work in the lab,
right. That when you actually create these AI-designed sequences,
you can actually package real phage particles, and they can
actually infect living bacteria in the lab the way
that a normal one would. Now that's sort of step one.
Speaker 2 (13:16):
Just to be clear, this is a thing, a virus,
that has never existed before, exactly, that the AI invented, exactly,
and it just worked. The machine made it up and it worked,
and it was quasi alive, because it's a virus.
Speaker 3 (13:30):
Yeah, exactly, And so that that alone is pretty cool.
The second thing is that we could steer the generation
and so you can use it to infect specific strains
of E. coli that you cared about and not others
that you didn't. And so you know, in phage therapy
for example, this is the therapeutic implication of this. Right,
(13:52):
if you were to you know, target the gut, you
would want to target specific microbes for example, that are
like bad gut bacteria and not others that are good.
And so you don't want to just do a broad
spectrum wiping out of you know, the entire complex gut community,
which is what broad spectrum antibiotics do. But you could do,
(14:12):
in principle, much more selective deletion of something that you want.
Speaker 2 (14:17):
Right.
Speaker 3 (14:18):
So, from a basic science point of view, the controllability
of generation from the model can lead to like important
precise functional outcomes in the lab. And then this has
I think, you know, as I just explained really interesting
therapeutic implications for phage therapy, or a targeted therapy,
right. And in a way, in medicine, well,
(14:44):
we care a lot about precision and selectivity, right, yeah,
and that's really the entire game of on target therapeutic
efficacy and side effect profiles. Right.
Speaker 2 (14:55):
Right. You want it to do one thing really well, and
you want it to do nothing else.
Speaker 3 (14:59):
You want it to be safe. You want it to
be safe. Yeah, yeah, and effective, exactly safe and effective.
Speaker 2 (15:05):
Tell me about the work you're doing with Evo and
the BRCA1 gene, this gene that's implicated in some
cases of breast cancer.
Speaker 3 (15:13):
You know, BRCA1 is an important gene that can
cause breast or ovarian cancer.
Speaker 1 (15:18):
Yeah. The thing that we.
Speaker 3 (15:22):
Wanted to see with the model is, can it understand
whether or not a genetic mutation will cause disease. And
so, you know, if someone
comes in and gets their genome sequenced,
then they want to know, you know, as a woman,
do I have a mutation in the BRCA1 gene?
Am I at higher risk of
(15:43):
getting breast cancer? Do I need to get a double mastectomy
if I have a causal variant? Many, you know, patients
elect to do that. Or do I just do an
annual mammogram and monitor? Right. And most of
the time your mutation is different from the ones that
are recorded in existing clinical databases, so it becomes classified
as a variant of unknown significance, right. And the model
(16:06):
has a very good sense of whether or not that variant,
which has never been seen before and you don't know what
it does, will cause disease or not. Right.
And it does that not just for BRCA1, but
for any gene.
Speaker 2 (16:19):
How do you know that? How do you test that
sort of out of sample?
Speaker 3 (16:23):
Well, essentially you benchmark it, right? There are these,
like, ground truth databases created by the National Institutes of
Health that we've tested against. And then you can also,
again you can run experiments, so you can for example,
generate those variants of unknown significance in the lab and
(16:44):
actually test them in the lab assay and compare them
against the causal mutations or the benign mutations and essentially
see where they rank in terms of you know, this
metric of pathogenicity. So folks have done those experiments and
you can use those as benchmark data sets.
Speaker 2 (17:04):
So I mean that one seems to have kind of
immediate clinical relevance, does it? Or am I missing something?
Speaker 3 (17:10):
We certainly don't think that folks should be using Evo
Designer on the Arc website, putting in their genetic sequence
and then trying to make clinical decisions, right? Sure. But
we are making newer versions of the model, which we're
training internally that we think can be state of the
art at doing this type of genetic diagnostic and so
(17:33):
we're excited about improving these models over time. I think
in order for them to be used by a doctor
in the context of a clinical diagnosis, right, there's
lot of testing and validation and regulatory oversight that will
need to happen to make sure that this can be
used and certified in the right way. But from a
(17:55):
research point of view, right, I think this is extremely
exciting given the performance of the model already, let alone
the new things that we're doing to it under the hood.
Speaker 2 (18:07):
We'll be back in just a minute. Earlier in the conversation,
Patrick was talking about the BRCA1 gene, with that
single gene linked to breast cancer. But for most diseases
(18:27):
with genetic components, there is not one single gene. There
are lots of genes, and in lots of cases we
still don't really understand what's going on. And so I
asked him how that universe of diseases fits into his work.
Speaker 3 (18:39):
Yeah, so this is some of the fun part, right, obviously.
You know, so far we've been talking about individual
mutations in one gene, right? Yeah. And, you know, in
this very kind of well controlled setting, in a very
well understood gene for a reasonably well understood disease.
Speaker 1 (18:57):
Right.
Speaker 3 (18:58):
Most of biology is not that simple, right. And so
the reason why we have GWA studies or polygenic risk
scores is because we have complex traits, lots of different
mutations that create lots of different phenotypes.
Speaker 2 (19:11):
GWAS, genome wide association. Let's look at the whole genome
and see what correlates exactly.
Speaker 3 (19:17):
It's basically the ultimate human genetics fishing experiment. What the
hell is the genetic reason why someone has a given trait? Right,
And that can be literally any trait. And you get
a bunch of people that have that trait and
a bunch of people that seem normal, sequence them, and
look at the statistical association for what genes seem to
(19:37):
be enriched in the people who have the desired or
bad trait, right. And that could be height, that could
be hair thickness, that could be likelihood to get Alzheimer's disease.
Speaker 2 (19:51):
Yeah, and it seems like that was one that people
were very hopeful about, I don't know, ten years ago
or something in the kind of post human genome project era.
That turned out to be a lot harder than anybody thought.
Speaker 1 (20:03):
Is that right?
Speaker 3 (20:04):
Yeah? I think our initial goal with sequencing the human
genome was that we would find a whole bunch of
drug targets. It turned out we needed the ability to
sequence many many more genomes, potentially everybody's genomes, and compute
over all of this inscrutable complex data with AI right
in order to actually understand how these mutations actually interact
(20:27):
and work, right. And so that's the thing that
we're very excited about with next generation versions of Evo:
to try to understand genetic interactions and polygenic traits,
rather than monogenic traits that are caused by a single gene
and a single mutation.
Speaker 2 (20:46):
Because most diseases, or the diseases that affect most people,
certainly have lots of genetic inputs, and it's not just one
messed up gene.
Speaker 3 (20:56):
So in high school you learned that genotype and the
environment collaborate to create phenotype, which is a fancy way
of saying that nature and nurture collectively create behavior
and all of biology, right. And so this is really
where our Evo work and our virtual cell work start
(21:16):
to collide: a model of genotype and a model of
cellular response and behavior. And how do we actually connect
these two modules in order to make more accurate models
of cell biology?
Speaker 2 (21:31):
So you brought up the virtual cell, which we haven't
really talked about, which seems like kind of the other
at least another big project at ARC.
Speaker 1 (21:41):
Right, tell me about the virtual cell.
Speaker 3 (21:44):
So we talked earlier about GWAS, right, and GWAS is
fundamentally, exactly, genome wide association. It's an association, right, there's
no causality, right. And so, in many ways, the breakthrough
of CRISPR and genome editing was to basically take mutations
that are associated, causally create them, and test them in
(22:07):
the lab in living cells, and see how the cells
respond and behave when you make this disease-associated mutation.
Do you get cancer from a normal cell?
Speaker 2 (22:17):
So it's allowing us to say, well, we know there's
correlation here, but we don't know it's causative. It allows
us to answer that question, at least in some contexts, exactly.
Speaker 3 (22:26):
And these are experiments that you do in the lab.
Now the question is, how much of this can we
push to an AI model, so that instead of doing the
CRISPR experiment, the model can just tell you.
Speaker 2 (22:37):
And is this going back to the original problem of,
like, it's wildly slow to have to do these one
by one in the lab with CRISPR?
Speaker 3 (22:45):
That's exactly right. With CRISPR, with a small molecule library,
or anything else. Right. The point is we want a
model that can actually predict, for example, the result of
CRISPR experiments or drug perturbations, right, and guide the
things that people are going to do in the lab.
Speaker 2 (23:04):
So it's basically saying, if we took a cell and
we changed the output of this gene. Or if we
took a cell and we used this drug on it,
for lack of a better phrase, what would happen?
Speaker 1 (23:18):
Is that the kind of question?
Speaker 3 (23:19):
Yeah, maybe the mental model for me is, imagine
you're like a DJ, right, and you have the most
complex mixer on the planet, right, and you're in the
middle of this Las Vegas club. You have all these
knobs and dials, and every time you move them around,
the music changes, and you're controlling the music of the cell, right,
(23:40):
And so when you turn this up all the way,
you know, for some given, you know, sound, you can
get this kind of chaotic, super loud, disease-like noise.
It doesn't sound mellifluous and interesting, right. And then you
want to basically figure out how to turn it off.
But if it's actually a bunch of different sounds that
are contributing to the noise, right, you need to figure
(24:03):
out the fastest way to find out which of these,
you know, tens of thousands of knobs out there
should actually be changed in the right way.
Speaker 1 (24:13):
Now.
Speaker 3 (24:13):
Today we try to find this out by trying to
guess which knobs we need to be dialing, and then
we just mess around and do it. And this
is extremely slow. It's this combinatorial search that's incredibly slow.
Speaker 2 (24:28):
It's not one thing. It's not even that you have
to guess the one thing. It's like it's many different
things have to happen, and so the math of that,
if you're doing trial and error, is just crazy and
it takes forever.
Speaker 3 (24:39):
Yeah, it's explosively combinatorial, and the goal of the model
is to be a copilot, where it would be the
equivalent of, like, your Meta AR glasses that tells you
it's this knob and this knob and this knob, or
you should try these five knobs and these five knobs,
and then you can, you know, in a much more
targeted way, you know which things to dial right. And
(25:00):
that's how we could accelerate the experimental verification of the
model's predictions in order to try to find the perturbations
that could treat disease cells for example. But this of
course is a platform capability that can be useful across
basic science but also very much for therapeutic target ID,
(25:22):
which is something that we deeply care about accelerating.
Speaker 2 (25:25):
Yeah, I mean, therapeutic target ID is basically, like, what
should we send a drug to go fix?
Speaker 3 (25:32):
Exactly?
Speaker 1 (25:32):
Change?
Speaker 3 (25:33):
Right?
Speaker 1 (25:33):
Exactly?
Speaker 2 (25:34):
I mean, you've talked about, I think in this context
I've heard you talk about the failure rate of
clinical trials, right? Like, something like ninety percent of drugs
that go into clinical trials, which already is far along
in the development process, right? Ninety percent of those drugs fail.
Speaker 1 (25:52):
Like how does how could this help?
Speaker 3 (25:55):
So, we're not very good at making drugs, right? The
statistics are clear, and that really means we're bad at
two different things: finding the right drug target, and then
actually having a drug that actually targets it in the
right way.
Speaker 1 (26:11):
Right.
Speaker 3 (26:11):
So, for example, if you found the right drug target,
let's say it's like a runaway car and you have
an inhibitor, but it doesn't stop it, it just slows
it down, right? Yeah, it wouldn't actually be curative and
fix the problem.
Speaker 1 (26:26):
Right.
Speaker 3 (26:26):
So you need the right drug composition and you need
the right drug target, and because we seem to be
bad at both of these things, we get a ninety
percent failure rate.
Speaker 1 (26:38):
Why did you decide to focus on Alzheimer's.
Speaker 3 (26:41):
So Alzheimer's is one of the major killers. No one
has agreed on, or we certainly don't know, what the
right drug target is for Alzheimer's disease. And we think
of it as a textbook example of a complex human
disease that has a bunch of genetic mutations that we
don't understand. We don't know what genes they control, what
pathways they control, so we just don't know what the
(27:04):
hell is happening, right. And, you know, there
are also inputs from the environment: infection, age obviously,
you know, potentially diabetes, and others that increase your risk.
So it's this really interesting example of having a risk-conferring
genetic state that is then pushed over the edge
(27:26):
by environmental perturbations in order to create dementia. Right, And
we think, first of all, it would be very impactful
to human health if we can solve it, and second,
we can use it as an example as a blueprint
for how we can cure other complex diseases if we
can make progress. And so our goal has been to
(27:47):
try to figure out ways that we can model this
both experimentally at scale and computationally with model like AI
models of the major cell types in the brain in
order to try to you know, model these different combinatorial
changes over time much more effectively, basically much faster. Yeah,
(28:11):
and that's sort of one of the ways that we're
going to actually internally test the output of our virtual
cell models. But of course, any other lab will be
able to take these models and test them for some
heart disease pathway, let's say, or some inflammation pathway that
might be involved in glaucoma and eye disease, or you know,
whatever basic science mechanism that they might care about.
Speaker 1 (28:34):
So that's the.
Speaker 3 (28:34):
Scientific reason why we care about Alzheimer's. On a personal note,
it's also why I became a scientist. My grandfather,
like many other people, you know, got Alzheimer's when I
was eleven. He was living with us at the time,
and I watched him go through, you know, from mild
(28:55):
cognitive impairment and confusion to, you know, flipping shopping carts
in Costco, to, you know, being in a nursing home
and dying. And, you know, I think that gave me
a very clear early mission, and I think really in many
ways redirected my life from probably becoming some, you know,
software founder of some kind to, you know, joining a
(29:17):
lab in high school and picking up a pipette and
doing that guess-and-check search that we're now trying to automate.
Speaker 1 (29:27):
We'll be back in a minute with the lightning round.
Speaker 2 (29:31):
Mm hmm, okay, let's finish with the lightning round. Who
is the most underrated medieval king?
Speaker 3 (29:45):
The most underrated medieval king? See, I feel like, well,
there's so many that you could pick from.
I'm going to pass on
this question. This is very unfair of
me. But I mean, I'll have to ponder.
(30:06):
I'll have to ponder.
Speaker 1 (30:07):
Yeah, why do you love medieval history?
Speaker 3 (30:09):
I studied it for four years as a
young homeschooled student.
Speaker 1 (30:14):
Yeah, why did you study it for four years?
Speaker 3 (30:17):
There's a reason why we love, like, swords and sorcerers,
you know, and Game of Thrones and the Battle
of Aquitaine and whatever. You know, I think it's
one of the periods of time where we felt, like,
true optimism.
Speaker 1 (30:34):
Do you think so? I don't think... I mean, you
know more about it than I do. But I'm shocked.
Is that not just a projection?
Speaker 3 (30:40):
It's evolutionary selection playing out in real time. It's very,
like, visible and well recorded history of evolutionary struggle. I
think maybe that's what I find deeply fascinating about it.
Speaker 1 (30:54):
Uh huh.
Speaker 2 (30:55):
That's the biologist view I guess of the medieval period.
Speaker 3 (30:59):
Maybe one way to frame it.
Speaker 1 (31:01):
If you weren't working on biology, what would you be
working on?
Speaker 3 (31:04):
You know, I'd probably be a music talent scout.
Speaker 2 (31:07):
Yeah, what should I listen to? Who should I sign to?
Speaker 1 (31:10):
Our label?
Speaker 3 (31:12):
My favorite band is The xx. I discovered them
in two thousand and nine or two thousand and eight, yeah, early, yeah.
It's like, you know, if I were a seed investor,
this would be one of my kind
of key seed investments.
Speaker 2 (31:28):
I'm going to read you a thing you wrote and
then ask you a question about it. You wrote: In
my view, researchers also need to stoke two warring urges:
one, an opinionated sense of taste and a relentless search
for beauty, and two, a yeoman's grindset for the unglamorous,
dirty work required to get things done. Which is lovely.
(31:48):
A lovely sentence, And I'm curious for your thoughts about
that in the context of AI. Basically, like you think
AI can do two?
Speaker 1 (32:00):
Number two? Do you think it can do one?
Speaker 3 (32:02):
Like?
Speaker 1 (32:02):
Where where does AI fit in the context of that sentence?
Speaker 3 (32:06):
Early on, you know, maybe a year year and a
half ago, you know, I think there was lots of discussion.
You know, I want AI to you know, do my
laundry and taxes so I can make art and music
and not the other way around, right, because you know,
early on, we we had all these models that were
you know, generating new images and making AI art that
(32:27):
was really controversial, and making new types of songs. And
you're like, wellhy can't I do the basic things so
I can live and create and dream? And I think
today it's very clear that it is helping us with both, right,
that these bicycles for the mind are high quality enough
to allow us to create not just intellectual insights, but
(32:47):
new like you know, visual design styles and paradigms, new
types of sound. And we're just very much in the
early innings of using these in a way that feels
augmentative to human creative capacity rather than, you know,
somehow, like, aggressive or replacing. You know, and I'm
a big AI bull. But above all, I'm a big
(33:10):
believer in how humans can use them to benefit.
Speaker 2 (33:19):
Patrick Hsu is the co-founder of the Arc Institute
and assistant professor of bioengineering at UC Berkeley.
Speaker 1 (33:27):
Please email us at problem at pushkin dot fm. We
are always looking for new guests for the show.
Speaker 2 (33:33):
Today's show was produced by Trina Manino and Gabriel Hunter Chang.
It was edited by Alexander Garretson and engineered by Sarah Bruguer.
I'm Jacob Goldstein, and we'll be back next week with
another episode of What's Your Problem.