Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:14):
Hello and welcome to
the Spark of Ages podcast.
I'm your host, Rajiv Parikh.
I'm the CEO and founder of Position Squared, an AI-focused growth marketing company based in Silicon Valley.
So, yes, I'm a Silicon Valley entrepreneur, but I'm also a business news junkie and a history nerd.
I'm fascinated by how big, world-changing movements go from
(00:34):
the spark of an idea to an innovation that reshapes our lives.
In every episode, we're going to do a deep dive with our guests about what led them to their own eureka moments, like, in this one, mega biotech, and how they are going about executing it, and, perhaps most importantly, how they get other people to believe in them so that their idea could also someday be a spark for the ages.
(00:55):
This is the Spark of Ages podcast.
In addition to myself, we have our producer, Sundeep, who's occasionally going to chime in to make sure we don't get too in the weeds with science and tech jargon.
Today, I'm going to say mitochondria a lot, because I remember that from ninth grade biology class. Hope that helps.
Mitochondria, nuclei, Golgi apparatus, proteomics, cellular wall, nanomaterials.
(01:16):
All kinds of cool stuff today.
We're really happy today to have Ashwin Gopinath here.
He is leading in this intersection between biology and
(01:38):
computer science.
It's really cool how it's coming together.
We're going to talk about how you enable data collection that can save your life.
So Dr. Ashwin Gopinath is an assistant professor of mechanical engineering at MIT, and Dr. Gopinath is also the co-founder and CTO of a company called Biostate AI, an innovative startup pioneering generative AI in forecasting drug safety
(02:02):
and toxicity, with a focus on salvaging failed drugs.
He's going to make failures succeed.
Biostate AI targets a $100 billion annual market and has the potential to revolutionize everyday health monitoring and intervention.
Prior to Biostate AI, Dr. Gopinath co-founded Palamedrix, a biotech company combining semiconductor fabrication with
(02:22):
AI, which was acquired by SomaLogic in 2022.
Ashwin is a professor at MIT, where his academic pursuits intersect, and here's a whole bunch of great buzzwords: AI, applied physics, biology.
His research sits at the intersection of DNA nanotechnology, microfabrication, synthetic biology, optical
(02:44):
physics and materials science.
And yes, we will make this approachable for all of you.
Sounds like a bunch of gut courses.
Speaker 2 (02:50):
That sounds easy.
Speaker 1 (02:52):
He's like taking Foods and Dudes at UNH.
He earned his degree in electrical engineering in India before pursuing graduate studies in applied physics with a minor in neuroscience (really boring neuroscience, you know, really easy) at Boston University.
At MIT, his groundbreaking research on light transport within disordered media earned him the prestigious Best
(03:13):
PhD Thesis Award.
So that's really impressive because, as I've learned, only three or four people read your thesis, including your mom, and so obviously more people read that.
In late 2019, Dr. Gopinath returned to MIT to establish his independent lab dedicated to pioneering advancements in real-time biological monitoring and AI technologies capable of
(03:34):
emulating human cognitive processes, that is, brain processes such as self-reflection.
Oh gosh.
So there you go, Ashwin.
Welcome to the Spark of Ages.
Speaker 2 (03:44):
Pleasure to be here.
Speaker 1 (03:46):
Well, great to have you.
So we're going to have so many interesting things to talk about today, especially since you've traversed science: computer science, biology, nanomaterials. So this is going to be a lot of fun to talk about.
Let's start with some basics for the audience.
So what's a day in your life like? You're a professor, a company founder.
What's it like?
Speaker 2 (04:07):
The way in which I actually phrase it is, I've been lucky enough to basically work on things that purely interest me. So, like, on a day-to-day basis, you know, I don't have a very large lab.
I have a very small lab.
That is by choice, because I want to work on things that interest me, and, more recently, I have been working on things that are
(04:29):
more translational.
My day starts quite early because I travel around; the West Coast is my base, but, you know, many of my things are happening in Boston as well, so my day starts typically very early.
The first few hours, like three to four hours, are typically for working on grant stuff, papers, talking with my students, and
(04:50):
the rest of the four to six hours is basically on my startup.
So that's my typical day.
And then at the end of the day it's basically either the gym or a run, you know, and you're done.
It's a boring life, you know.
Speaker 1 (05:04):
I would like to claim that I run, and it's very full. Yeah. So I don't hear kids in there, huh? It sounds like you're able to pursue all these things because you don't... Not yet, not yet. He's married, though. I am married. Not yet. Okay, all right. He's married to someone they worked with at Caltech, right?
Speaker 2 (05:23):
Yeah, yeah, yeah. But me and my wife have known each other for, like, I think, more than half my life now.
So yeah. Amazing.
Speaker 1 (05:30):
Now let's talk about your really interesting startup, Biostate AI.
Maybe you could just tell us a little about the challenges you're tackling right now in the healthcare industry.
Speaker 2 (05:39):
Yeah. The long-term picture of Biostate AI is, you know, for both me and my co-founder, Dave, the reason why we started this was we wanted to develop a technology that allows us to be able to understand biology, to be able to predict biology, in the spirit of, basically, you know,
(06:00):
helping human health, diagnostics and all of that.
But as you, you know, pan back, it became important for us to look at a problem that has economic value now, because otherwise you're just building something where, you know, you make money
(06:20):
only when you get to the mountaintop.
Until then you need to basically get it funded by somebody else.
So we were trying to focus on problems that matter now. So the unmet area that we looked toward: very few people are working on trying to improve drug safety.
Everybody thinks about trying to make drugs more efficacious
(06:42):
and make them better.
That is to say, you know, if you have cancer or if you have a particular disease, you want to figure out a molecule or a drug or a therapeutic that helps cure it very nicely.
But whenever you actually build such a drug or a therapeutic, there will be some portion of people, just because of the
(07:04):
diversity in the population and diversity in biology, some of those individuals will have an adverse effect.
The drug will have an adverse effect, which is what the side effects and things like that are.
It could be very large, it could be very small.
If you take aspirin, typically the side effects are not that bad, but the harsher the conditions become, the worse the side
(07:27):
effects are going to be.
So the vision of where we wanted to go was, how do you make sure that you can tell very quickly if a drug molecule is going to be safe, and who is it going to be safe for?
And if you look at clinical trials and the way in which a
(07:50):
drug goes through regulation, this was the area where we saw very few companies working, especially using AI and big data to truly impact that area.
So we decided, okay, fine, since nobody is there and it has real value, not only to make the lives of individuals better
(08:11):
but also to unlock economic value, we decided we will go there first.
But the AI and the tools that we are developing are much more general.
They can be applicable to everything else, but we are applying them to make better, safer drugs.
Speaker 1 (08:30):
We've always heard this, or I've heard this notion that it takes $800 million to make a successful drug.
To do that, you go through many drug candidates, or many different molecule types, and in doing so many of them fail. Like, they may work in an animal, but then when they get to humans
(08:50):
they're judged as unsafe.
So a lot of those things may be very promising, but because of safety you have to apply them to a pretty broad population.
If they're unsafe for a decent enough size of the population, they get excluded.
What you're doing is, by understanding the safety profile of the drug and understanding the individual's genetic makeup, the omics, and you're going to explain what that means later, the
(09:10):
different proteomics, genetic makeup, who you are as a person.
By understanding that as an individual, you'll be able to take this drug candidate that may be unsafe for a bunch of people but safe for you, and then apply it to solve your specific problem.
Speaker 2 (09:31):
Precisely, precisely.
I mean, to understand that, you know, you need to kind of understand how a drug molecule gets FDA approval.
At first, you start with several hundreds of lead candidates.
That's what a pharmaceutical company does. That's based upon, you know, looking at the biology and what targets you're looking at, and you have a lot of molecules.
(09:52):
They whittle it down to a few, maybe a dozen of them, and then they do preclinical trials, where you do these experiments on animals, where, I mean, you do the models. You know, you put a cancer on a rat or a mouse, you give them the drug, you see if there is an effect, and things like that.
You go to the FDA with all the data and results, and they say, okay,
(10:12):
fine, now you can start human clinical trials.
And then when you start the human clinical trials, the first thing that they do is test it out on random healthy individuals to see how it affects them.
The idea is, if you give the drug molecule to a bunch of healthy individuals, it should not have an adverse effect.
(10:33):
The adverse effect can go all the way from "I have a slight headache" to, you know, "I have rashes" to, in the worst case, death as well.
So you give it to a certain cohort, a certain set of healthy individuals, and that's a phase one trial.
And usually about 40% of drugs that start human clinical trials
(10:53):
fail there.
So it doesn't matter, you don't even get to test it on the individuals with the disease, because it fails the safety.
I mean, sometimes what happens is you have an adverse effect, but if there are no other drug molecules or if there is no other option, you kind of get to push it through
(11:15):
to phase two and phase three as well.
But that's the politics of figuring out exactly how you choose, what the conditions are and things like that.
But the first thing that the FDA cares about is the safety of a drug.
It doesn't matter how good the drug is if it's not safe.
Speaker 1 (11:31):
So now, what you hope to do with Biostate AI is, through your methodology of testing and bringing down the cost of those tests, you're then able to take what may have been hundreds that had been thrown away and then reuse them.
So potentially the cost of drug development can be much
(11:53):
lower, and maybe what you're also saying is it can be more efficacious, or more effective on a person.
So you're, like, recycling stuff that was thrown away for perhaps the wrong reasons.
Speaker 2 (12:06):
Yeah.
So there are different business models one can build around it, and we are thinking about a few different ways to do this.
One way in which one could use this is to look at failed drugs, or drugs that were approved for a particular purpose, to be able to reuse them for something else. Or,
(12:27):
if it has failed for a safety reason, to be able to figure out a test to go with it, so that you can say, if this individual's RNA profile looks a certain way, then this drug is going to work for you, or if it looks another way, this is going to be toxic for you, we won't give it to you.
So there is precedence like this.
There are a lot of different tests that you end up getting.
(12:48):
So that allows us to not only take drugs that would have otherwise failed and, you know, make them useful, but also speed up the process, because, you know, there are different ways in which pharma can use the models that we are building.
Because one of the things that we are doing is, and the reason
(13:10):
why, you know, Rajiv mentioned that we are reducing the cost, and the reason why we are reducing the cost of these omics tests, is not necessarily to make them quicker, or to give a very cheap test to people.
We are making the cost lower because, in order to build the models, we need to get massive amounts of omics data.
(13:31):
I need to be able to collect the transcriptome, or the RNA profile, for a lot of different samples, and if it is very expensive, then I can't collect the data.
I can't build the AI.
So if I reduce the cost of data collection, it allows me to...
Speaker 1 (13:49):
So hold on, I'm going to pause you just for a second, for my non-PhD mind.
Can you explain what omics are?
Speaker 2 (13:58):
I might have an answer for that. Ashwin is going to expand on it.
Speaker 1 (14:01):
I'm going to start. I looked it up: it's genomics, transcriptomics, proteomics and metabolomics, and there are others.
Okay, now you have to explain it, not using the suffix omics.
Speaker 2 (14:16):
Okay, so that's just one of the definitions.
No, no, no, I can explain it to you.
Speaker 1 (14:22):
So it's very... Can't use omics in the definition of omics.
Speaker 2 (14:25):
How do I put this?
The best way to think about it is, and I won't even try to use omics, think of omics just as a word that represents a certain kind of data.
So everybody has heard about the genome, like DNA. Okay, like, everybody understands in some sense that DNA defines characteristics in your body. But DNA is not the full story.
(14:49):
In biology, every cell in your body has a certain DNA in it, which basically is your genome.
You call them your genes, and collectively you call it a gene-ome. Okay, it's a collection of genes.
Okay, and the DNA typically ends up making RNA.
RNA makes proteins, proteins with small molecules, with what
(15:12):
you eat, all the other things that you put in your body.
So these are the four classes of molecules that basically build
(15:34):
you up.
There are other things as well. There are fatty acids and things like that, but these are the four classes of molecules that you typically think of as, you know, the basis of life, like, very important for life.
So, anything that's studying DNA, you end up calling the genome. Okay, a collection of genes, you'd call it a genome.
(15:55):
Anything that you do with RNA, you call it the transcriptome, because the process by which RNA is made is known as transcribing it.
Okay, and then you have proteins, and you have a whole bunch of them.
That is known as the proteome.
And if you have small molecules, you call it the metabolome, and
(16:16):
if you are working with all of them together, you call it multiomics, or omics.
It's just a name to represent these classes of molecules, a collection of molecules that, like, gives you a significant idea of your biology, or, like, your identity.
(16:38):
So that's what omics is.
It's basically, it's like, yeah, like my comics are a collection of my stories.
Speaker 1 (16:42):
That I like, you know. Like the forest and the trees, right? Yeah, yeah. The ground, the dirt, the molecules, and then you keep moving up until you understand how these systems work, and I think that's what he's describing, the way this system works. It's become so complex.
I've heard of companies that have tried to describe the whole
(17:02):
body in a set of mathematical equations, and I think it ends up being really hard to do in building these types of models, because there's so much individuality.
It's not like a computer where it's just ones and zeros.
You start with a one and a zero and you can build off of it.
This is, like, so many permutations and combinations, and I think that's why you went into LLMs, right? That's why you
(17:23):
went into AI.
Speaker 2 (17:32):
Correct, correct.
So, if you trace back as to why I am working on Biostate, it's sort of, you know, I was a happy-go-lucky, like, semiconductor and, like, funny, like, DNA nanotechnology person back in the 2016 timeframe, and I wasn't really that bothered about healthcare much, till my wife got leukemia. And then we both started basically looking into different ways in which you
(17:55):
could track your healthcare and keep track of it.
That took me down the path to the question of, hey, we have all of these technologies, why can't we actually understand biology?
I mean, to a certain extent, biology is a very complex system, but when you get right down to it, you don't even know all the components and all the different ways in which these
(18:18):
molecules interact with each other.
So that increasingly got to a point where, even in my previous company, the reason why we formed the previous company was we wanted to measure all the proteins in your body, because if you measure all the proteins in your body, then you can track them over time, and then you can feed that into an AI model, build a model, so that you can start predicting what is
(18:40):
going to happen.
And even Biostate is, to a certain extent, answering that same problem, that general class of problem itself, but we are taking a slightly different route.
So it all comes back to how much more and more data you can collect, annotating that, and then building models to actually understand how these molecules interact with each other.
(19:00):
So, to Rajiv's point, the reason why LLMs are really, really good is because the architecture is good enough at this point that we don't need to know in detail what all the interactions are.
We can just feed it different examples of a system and it will
(19:21):
learn how these molecules are connected to each other, purely from data.
Then you can actually play with the model itself and understand what is happening, and things like that.
That is explainability.
But that's not, in some ways, necessarily needed.
Like, if I were to feed something into the system and it says that this person is unhealthy, do you really want to know why?
It's good to know, because that's needed for approval and
(19:44):
regulations and all of that.
But in some sense there is power in just being able to know if something is actually going to happen.
if something is actually goingto happen.
Speaker 1 (19:51):
Maybe you could talk
about an example of that, of how
that this I think you call it alongitudinal LLM, how that's
comparable to chat GBT, andmaybe an example of what someone
would see from that.
Like you talked about healthyversus unhealthy by building a.
Speaker 2 (20:08):
So what we are doing is, at every time point, like, in humans or any biological system, we are connected over time. At every second, what your body is going to go through is dependent on what the situation was a second before that, or a minute before that.
(20:28):
A colloquial way to think about it, a simple way to think about it, is if you keep eating McDonald's every day, for every single meal, eventually you're going to get fat, or, like, you're going to get unhealthy.
That's just a result of time. It just accumulates over time.
Or if you have, you know, good habits, you will get progressively better.
So that is, you are temporally connected.
(20:48):
At every single time point, you can take attributes of a biological system, and that is what we call a biostate.
A simple way to think about it is, as a kid is growing, at every single point you are measuring your height, okay, you're measuring your weight, you know how far you can go, all
(21:09):
of these things that you measure with your, you know, Apple Watch.
That is a version of a biostate.
But what we are looking at is, we are measuring all the proteins and all the molecules in your body, and we are calling that the biostate.
So at any given time point you have certain numbers, you know.
You have protein one, so much. You have RNA one, so much.
(21:29):
Like that, there's a large list.
That's one biostate, and at the next time point you have another biostate, and at the third time you have another biostate, and they're all connected to each other.
Okay, they're all connected to each other.
We don't know how they are connected to each other, and we can't predict what is going to happen next.
At a very high level, that is very similar to language.
You can take one sentence, and then you have another sentence
(21:52):
that follows it. You have a third sentence that follows it.
What ChatGPT can do is, if you give it three sentences or four sentences, it learns what you are trying to tell it, and it predicts what is going to happen next.
So, just like how in ChatGPT, you know, you're giving it more and more words and it
(22:12):
predicts what is going to happen next, here you're giving it biostates over time, and it's going to predict what is going to come next.
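The sequence analogy Ashwin describes can be sketched with a deliberately toy model. This is not Biostate AI's actual system: the two-molecule "biostates", the linear extrapolation, and the function name are all invented for illustration of the idea that each state is predicted from the ones before it.

```python
# Toy illustration of "biostates as a sequence": predict the next
# biostate (a vector of molecule measurements) from the previous ones,
# the way a language model predicts the next word from prior words.

def predict_next_biostate(history):
    """Extrapolate the next biostate from the last two observed states.

    A real model would learn the transition dynamics from large amounts
    of longitudinal omics data; here each molecule's trajectory is just
    continued linearly one step forward.
    """
    if len(history) < 2:
        return list(history[-1])
    prev, last = history[-2], history[-1]
    # Continue each molecule's trend: last value plus its last change.
    return [l + (l - p) for p, l in zip(prev, last)]

# Three time points for a two-molecule "biostate" (say, one RNA, one protein):
states = [[1.0, 5.0], [2.0, 4.0], [3.0, 3.0]]
print(predict_next_biostate(states))  # -> [4.0, 2.0]
```

The point of the sketch is only the framing: state at time t+1 is a function of states up to time t, which is exactly the autoregressive setup language models use.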
Speaker 1 (22:22):
For each individual, because their state is constantly changing, you're going to be able to sense that change and predict where it goes next, based on the volume of data that you have about many individuals, or the current individual.
Speaker 2 (22:36):
So, just to be clear, we are not doing these experiments, we are not collecting the data, on humans yet. I mean, because collecting human data and doing it in a controlled way is tough, and, you know, it costs a lot more, and there are more regulations involved.
So what we are doing is, we are doing these experiments in
(22:59):
animals.
We are collecting over time on rats and mice.
We are basically giving them certain drug molecules, which are, again, approved, and observing how those drug molecules change their biostates over time.
And can we predict, you know, can we design models wherein,
(23:29):
you know, we can predict how their whole body, or how their states, are going to change when I perturb it in a very controllable manner?
Because that's one of the things that we are trying to do, and this also segues into another thing that Biostate is trying to do: we are trying to quantitatively map experiments done in one species into another, because one of the other things that we started realizing is, we want to build these models.
Of course, we can't collect the large amount of data as needed.
(23:51):
We can't do these experiments on humans, but we can, in a humane way, do experiments in animals, and there are approvals, like, the FDA gives them, and there are ethical guidelines on how to do it.
We follow all of that, and we collect data on animals.
And we look at all animals as, fundamentally, all of us are
(24:12):
mammals.
You can think of each animal as a different language, okay, and it's just like how you can actually take a French book and translate it into Hindi or Chinese or English.
Our understanding is, if we do everything right, we can do experiments, and, like, do these things on animals in an approved way, and from that be able to tell how it's going to
(24:41):
behave, in a very predictable fashion, in humans.
Speaker 1 (24:42):
That's another part of what Biostate AI is. That's actually a key aspect, because most people would say, oh great, great job on animals, but not everything from animals translates to humans.
It's a discussion I have with my friends who do everything based on animal studies and read all these wonderful things on the internet about ways to fast, or do all these interesting things. Like, it's not that what works there works everywhere. But what you're saying is, by doing the experiments on an animal, you're
(25:04):
then going to create a translational bridge between that animal and humans, and do a much better job of predicting, without having to do all the same exact studies on humans.
Are you comparing it to data?
Because if you're not doing the actual tests on humans, is it just because you have the volume of data, as it translates to, say, other animals? Because, you see, you know how,
(25:28):
like, I'm wondering, how are you able to make that assumption off of rats better than, say, you know, what folks already do?
Okay?
Speaker 2 (25:36):
So it's a great question.
So the insight that we ended up having is, so the reason why you can't compare what happens in one species to another species is because, like, even though we are all mammals, we all have, you know, like, when I say we, I mean, like, all mammals, whether it is rats, or, like,
(25:57):
mice, or chimpanzees, or dogs, or whatever it is, and humans, they all have, you know, the same organ systems, and they have, you know, very comparable biological systems.
The fine details are very different.
For instance, mice and rats don't have what you would think
(26:19):
of as skin, in the sense that you can't take what you see as their skin as comparable to human skin.
So if you were to give a particular drug to a mouse or a rat, and one of the side effects that it would have on humans is, like, skin rashes, you would never see that in a mouse, okay.
However, if you were to look at the epithelial cells, like,
(26:44):
if you look at their digestive tracts, the cells of their inner digestive tracts are very similar to human skin.
You would see rashes there.
You won't see the rashes on what you would think of as, like, the rat's fur and things like that, but you can capture all of these things in the,
(27:05):
you know, when you do your omics analysis.
If you were to actually measure all the molecules in the rat, you'll be able to capture these things.
So now the question is, how do you translate it?
Okay, how do you translate which molecule changes translate to human skin changes?
Which set of molecule changes in a rat would map to, like,
(27:26):
something in an eye, or respiratory things?
So that mapping function is what we are figuring out, and once you start figuring that out, then when you do an experiment on a rat, you will immediately be able to say, if you were to do this in a human, or in another species, this is how it is going to look.
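The "mapping function" idea can be sketched with a toy example. Again, this is not Biostate AI's method, which learns the map with AI over full omics profiles; here every number is made up, and each molecule simply gets a least-squares scale factor fitted from hypothetical paired rat and human measurements.

```python
# Toy "cross-species mapping": from paired observations, learn how each
# molecule's level in a rat maps to the corresponding level in another
# species, then translate a new rat profile.

def fit_species_map(rat_profiles, human_profiles):
    """For each molecule j, fit the factor k_j minimizing
    sum over samples of (k_j * rat_j - human_j)^2."""
    n_molecules = len(rat_profiles[0])
    factors = []
    for j in range(n_molecules):
        num = sum(r[j] * h[j] for r, h in zip(rat_profiles, human_profiles))
        den = sum(r[j] ** 2 for r in rat_profiles)
        factors.append(num / den)
    return factors

def translate(rat_profile, factors):
    """Predict the other species' profile from a rat profile."""
    return [k * x for k, x in zip(factors, rat_profile)]

# Hypothetical paired measurements of two molecules in rats and humans:
rats   = [[1.0, 2.0], [2.0, 4.0]]
humans = [[2.0, 1.0], [4.0, 2.0]]
factors = fit_species_map(rats, humans)
print(translate([3.0, 6.0], factors))  # -> [6.0, 3.0]
```

A per-molecule scale factor is far too simple for real biology, where one rat change can map to a different set of human molecules; the sketch only shows what "learning a translation between species" means as a fitting problem.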
Speaker 1 (27:51):
That's really, that's really wild. But you are still looking at human trials, I mean, ultimately, right?
You still have to look at how the drug, or whatever, affects humans.
That's where we are going. It's just that you're becoming better predictors of it because you have so much more data.
Speaker 2 (28:04):
Precisely, precisely.
I mean, that's the direction, that's the world that we're building. And underneath all of this is being able to collect the omics data.
So, typically, if you were to take a drop of blood, or a drop of urine, or a drop of any kind of sample, what can be thought of is, you know, we look at RNA.
(28:26):
The reason why we look at RNA is just because RNA is what changes over time. Like, DNA doesn't change over time, your genome.
You're born with your genome.
Unless you went to Chernobyl or you got, like, irradiated very badly, your genome is not going to change, but your RNA and proteins change over time.
And RNA is the most economical one, and the more mature
(28:48):
technology that we can look at.
Even though it is mature, it costs about, depending upon who you actually work with, it costs about $400 to $800 to do one analysis for one drop of sample.
So if you want to collect, let's say, hundreds of thousands or millions of samples, that's a very large number.
So we need to progressively reduce the cost of that analysis
(29:12):
itself, which is the first thing that we did, because if you want to build a massive, like, an all-encompassing AI to understand biology, you need to first figure out the data collection engine.
So we are collecting the data.
We have already reduced the cost down by about an order of magnitude.
What that means is, what costs about $400 to do right now,
(29:35):
we can do it in our hands and get it down to about $50.
We have already done that, and it's basically going to progressively reduce from here on.
We know the way to do it, and that allows us... already, over just the last year (wow, that's huge), over the last year, we have already collected 2% of all rat
(29:56):
data that people have ever collected.
So in just one year of our existence, we collected 2% of all the rat data that is there, using these tricks.
And, you know, as we proceed, everything that we are building can eventually be done on humans as well.
But we just don't want to go there, because if we go there, it opens up a can of worms as far as regulations are concerned.
(30:16):
We need to kind of follow all these things, and getting access to samples is harder, all of that.
So before we get there, we want to build up as much as possible, like, build up all the AI, build up all the frameworks, so that, when the human data comes in, we can immediately make it valuable.
Speaker 1 (30:33):
So now, when you translate this, okay. So you have the sample collecting that you've already reduced by an order of magnitude, and you're looking at probably another order of magnitude that you want to get it down to, and then you have the translational capabilities.
So that helps us understand, you know, where some of what your company is doing is going. What do you see as the big goal for where you really want to go with this?
(30:53):
Like, as you put this together and you start to hit greater scale, where do you want to go? And what stage, you know, you've done a startup before. Yeah. What stage are you at?
Speaker 2 (31:04):
Where we are at right now is, we have already shown, in rats and mice, we have demonstrated, that if I were to give a drug molecule like, let's say, tetracycline, it's basically a particular approved molecule, it's an antibiotic. If you give it at a certain dosage, a moderate to high dosage,
(31:27):
we end up seeing that if you give it to a hundred so-called comparable rats, you know, 40 of them have an adverse effect. Okay, they are uncomfortable, they are visually uncomfortable, they are in a little bit of pain, and, you know, they eventually become healthy, or some of them have basically
(31:47):
passed away as well.
But 60% of them have no effect; they are totally fine.
So that's very similar to a phase one trial, in the sense that if you were to give a particular drug molecule to a set of individuals who are basically supposed to be all identical and healthy, some of them will have an adverse effect.
So we did that experiment, and then we trained our model, and it
(32:11):
turns out that if I looked at their RNA profile before I gave the drug molecule, I can predict, with eight times more probability, which one of them is going to have an adverse effect and which one is not going to have an adverse effect.
So that's basically saying that in an animal, we can do what we want to do, this whole, you know, individualization of toxicity.
(32:33):
Okay, that can be done. That's already done.
We have a paper that is done on that.
The second one we have done iswe have already shown that I can
take an experiment, do it on arat, do the same experiment on a
mouse, do the same experimenton a rabbit, and I can predict.
I can look at the data justfrom a rat and predict with
greater than 98% accuracy whatthe results is going to look
(32:58):
like if I did the experiment ona mouse or what the experiment
is going to look like if I do iton a rabbit.
So at a high level, bothindividualization of tox
prediction as well as what wecall as cross-species transfer
learning, we have done both ofthat From a proof of principle
point of view.
We can extend this to humans aswell and we can extend it to
(33:20):
other animals and humans andthings like that.
Of course, more data is needed.
You know more work needs to bedone and all of that.
So proof of principle has beendone and obviously we have
reduced the cost of the datacollection and things like that.
And where we want to go is ourfirst big milestone would be we
want to take a drug that hasfailed you know, clinical trials
(33:41):
because of safety, be able tobasically demonstrate that using
we can do the experiments on aset of animals and be able to
say, if I were to do thisexperiment on a human, or if
this would be the profile of ahuman for which this is going to
be toxic, based purely on theanimal results.
(34:02):
So does that make sense?
And researcher or some pharmabasically wants to come and say
hey, we want to collect massiveamounts of data, we want to use
your low cost data collectiontechnique, we are willing to
(34:24):
basically work with them on that, and we have built a bunch of
AI tools to kind of automate theanalysis and everything.
All of that we are justbasically starting to launch and
give it out to the communityno-transcript.
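(For readers who want a concrete picture of the "individualization of toxicity" result Ashwin describes, predicting from a pre-dose RNA profile which animals will go on to show an adverse effect, here is a deliberately simplified sketch. Everything below is invented for illustration: the synthetic 50-gene profiles, the 40/60 cohort split, and the nearest-centroid classifier bear no relation to Biostateai's actual models or data.)

```python
import math
import random

random.seed(0)

N_GENES = 50  # toy feature count; real RNA profiles span tens of thousands of transcripts

def make_profile(at_risk: bool) -> list:
    """Simulate a pre-dose RNA profile; 'at risk' animals get a small shift in 10 genes."""
    profile = [random.gauss(0.0, 1.0) for _ in range(N_GENES)]
    if at_risk:
        for i in range(10):
            profile[i] += 1.5
    return profile

# A cohort of 100 "comparable" animals: 40 will show an adverse effect, 60 will not,
# mirroring the 40/60 split in the rat example from the conversation.
cohort = [(make_profile(risk), risk) for risk in [True] * 40 + [False] * 60]
random.shuffle(cohort)
train, held_out = cohort[:70], cohort[70:]

def centroid(profiles):
    """Per-gene mean across a set of profiles."""
    return [sum(col) / len(col) for col in zip(*profiles)]

c_risk = centroid([p for p, y in train if y])
c_safe = centroid([p for p, y in train if not y])

def predict(profile) -> bool:
    """Flag an animal as at-risk if its pre-dose profile is closer to the at-risk centroid."""
    return math.dist(profile, c_risk) < math.dist(profile, c_safe)

accuracy = sum(predict(p) == y for p, y in held_out) / len(held_out)
print(f"held-out accuracy on {len(held_out)} animals: {accuracy:.2f}")
```

The point of the sketch is only the shape of the workflow: measure before dosing, train on observed outcomes, then rank new subjects by predicted risk. The cross-species result mentioned next would swap the synthetic cohort for data from one species and evaluate on another.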
Speaker 1 (35:00):
I'm building these models, I have a sampling technique, you have this data, this amazing set of algorithms that you've built. Who are you selling it to, and how do you get paid? Do they find you, or do you find them?
Speaker 2 (35:15):
So we are going through that process, in the sense that the first set of products that we are releasing is essentially because we have reduced the data collection cost and massively increased the amount of data that we have collected. We needed to build these kinds of AI tools to analyze it, make internal tools to analyze it, and all of
(35:36):
that is being productized and given out to the world. So, in the sense that, I think in the next couple of weeks we are doing a more public launch for this, wherein if, let's say, a grad student wants to actually do an RNA profile of a particular sample, we can measure all the RNA in it. There are technical
(35:58):
reasons why current technologies can't do all of that, can't measure all the RNAs. They can measure only a small portion of the RNAs. So we have developed these tools that allow us to actually measure all the RNA. They can just contact us and basically we'll do that experiment for them, and that is cheaper than anybody else, whether it is being done in the US, whether it
(36:19):
is being done in China, wherever it is. We have the cheapest technology, or the most economical technology, to basically do it. Not only that, we are also giving people free access to what we call Omics Copilot. Essentially, you can conversationally analyze all the data. If they have RNA data or DNA data or
(36:40):
whatever other omics data is there, they can just load it and basically talk through it and contextualize it, to figure out what a molecule is, how it relates to other molecules in the literature, all of that. Think of it as ChatGPT, but tailored for the very specific scientists that work in this field. So both of those are being offered, and that starts the
(37:03):
revenue generation for the company. And the reason why we decided to do this was because, internally, we wanted to build this model that allows us to rescue drugs, to understand biology, all of that, but we didn't want to constantly be taking money. We realized that we needed to generate revenue. Ultimately, we are still a company.
(37:25):
Okay, since we reduced the data collection cost, we owed it to the community to basically give it out. If we hold it to ourselves, we'll collect the data and we'll build these things, but if we offer it, the community can basically get more data as well.
Speaker 1 (37:39):
So, in a way, what you're offering. When we say go to market, how do you market? So marketing, sales, customer success, that's what go to market means, right? But what you're doing is actually a form of what they call PLG, product-led growth. You're taking portions of your product, portions of your capability, and you're offering it for free to the scientific
(38:02):
community. They then get value out of it, they get to start using it and getting into it, and then from there, eventually, when they get to a higher level, they'll subscribe with you to perform analyses or data collection at an even higher level. So that's how you whet their appetite. You don't necessarily need salespeople to go call and pound
(38:22):
and chase and drive them through. You put this out to the research community, who you know so well anyway.
Speaker 2 (38:28):
Correct. I mean, we're not giving it all away. Not everything is being done for free, but it is still cheaper than everybody else's, so I just wanted to add that caveat. But the spirit is to take what we have developed and give it to the larger community, so they can also generate data, and we all benefit in some sense.
Speaker 1 (38:46):
Will you end up developing your own drugs, or will they be buying tools and subscriptions from you?
Speaker 2 (38:52):
It depends. It depends upon how it goes, right? That is something that we will decide over time. Because we look at it as: there are so many drugs that are failing for safety reasons anyway, so we can start there. If we have this kind of turnkey technique wherein you put a failed drug in, you follow a system, and out pops
(39:17):
a method to basically figure out who it is going to be safe for, then I don't need to have developed my own drugs. I mean, I can basically take the ones that are failing and put them through the, you know, ringer in some sense, and basically out pops something. And this is something that every drug development company has to deal with.
(39:38):
So eventually, if we are wildly successful over the next decade, decade and a half, maybe we will get there, but we don't have to go there if we don't have to go there, right?
Speaker 1 (39:48):
So, I mean, that's cool. You have a whole body of work to go focus on, and, as you know, there's a difference between something that's in research, something that you get to toy with, versus at a company, where you have to drill in on one place, focus in on it. So does that mean you've already raised capital? Are you in the process of raising a lot more capital? Is this the stage you're at? Are you already starting to get revenue and subscriptions from
(40:10):
companies that have wanted to license some of that or use some of your models?
Speaker 2 (40:15):
So we have raised a relatively small round of about 4 million, late last year. So we have folks like Dario Amodei from Anthropic, who was an angel in us, and Matter Venture Partners and Catapult have put in some amount of money, and Caltech also has put in some money. So we've not raised a very large amount.
(40:37):
We have raised only our 4 million, to basically get all the processes going and to demonstrate these kinds of key capabilities. So we have done that, and we are in the process of basically giving our services to academia and a few other folks. So we are in conversations to get a few different customers
(40:58):
to come on board, and we will raise larger rounds in the coming months.
Speaker 1 (41:02):
That's awesome. So you've already raised a modest amount of capital, it's a good-sized seed, and now you're building it up and you're already offering the products. You're already off and running. So that's really awesome. But if you could hit the big dream with what you're doing, if there's a disease you could solve, what disease would you solve with your technology?
Speaker 2 (41:20):
I mean, see, for me, here is the thing, right? I have, you know, a vested interest to basically solve leukemia. I'm interested because I know that biology a lot more. But the place that I want to
(41:51):
get to is, I want to basically ensure that we make the concept of side effects itself obsolete, in the sense that if you want to take a drug, you want to be able to know very quickly: is this going to be safe? What are the exact side effects that you're going to end up having? How bad is it going to be? What are the dosages
(42:14):
at which the side effects are going to kick in? All of those things are right now not talked about, or you don't try to solve them, because you don't have the mechanisms and understanding to solve them. But it need not be the case, because for any drug, there are individuals for whom there are no side effects and it does the
(42:34):
job perfectly. So now, why is it that some people have side effects and some people don't? And if you could understand that and, a priori, be able to say, you're going to have the side effects, so probably don't take this medication, take another medication that is for the same condition, because there is more than one drug for every single condition. So now the point is, how do you select it?
(42:56):
And we are not even going to start talking about the dosage problem, because that's another thing altogether. When you decide what dosage somebody should have, it is done in a ridiculous fashion right now. So again, those are other areas where everything that we are building can be applied.
Speaker 1 (43:13):
That's super cool. I love that notion. It's an incredible vision to have, saying, wow, imagine if I have a situation and then you can actually predict whether I'll get a side effect or not, and be able to dose me more effectively, and just start from a much better place in terms of that level of experimentation that happens when you engage
(43:34):
with your physician, right, and engage with your body. So that's a super amazing way to go with this. Overall, in terms of just AI and what's happening with AI, there's so much going on, right? There's some fear in the market, there's super excitement in the market. I'm more on the excitement side. I understand some of the fears. Where do you see this AI capability going?
(43:56):
Is this something that you saw a few years ago that you should jump into, or are you really accelerating it today because of just the explosion of investment or the explosion of technology?
Speaker 2 (44:06):
So, like I said, my interest in AI sort of began quite early on. I think of myself as sort of a scientific hipster, in some sense: everybody was going in a certain direction, and even though it was so exciting, I wanted to work in a different area. So, in a very big way, in 2014,
(44:27):
2015, like I said, when my wife was diagnosed with leukemia, from that point onwards it became very clear to me that the only way in which biology can make sense of all of this data is by having some kind of machine learning, some kind of an AI black box, to make sense of the data. So my interest really got piqued from that point
(44:51):
onwards, and I was taking a very different approach to it, wherein, from a pragmatic point of view, I wanted to collect large amounts of data, focus on developing technologies to collect that data, and then use existing AI algorithms to make sense of it. So it was sort of a smooth progression in some sense, and
(45:12):
here we are. So I wouldn't try to claim that all the AI and the algorithms that we are developing are something fundamentally new. We are taking concepts that are there in other areas and applying them to the data that we are collecting here. There are some interesting, creative ideas that we are doing in AI as well, but it is very specific to our problem.
(45:33):
As far as, more generally, all the fears and things around AI: I'm not that old, but neither am I that young, and I've kept hearing, every time some new technology happens, that it's sort of the end of the world for some people, and for the others it's utopia. I think it eventually will be somewhere in the middle. So I think it's neither going to be a Terminator
(45:56):
jumping out from the corner, nor is it going to be utopia for everybody. Some concept of AGI might happen, some concept of these things will happen, but it'll be an interesting one.
Speaker 1 (46:06):
So sorry, Sundeep. The writer in you won't be able to... According to Ashwin, it's not going to be this end-all, be-all Terminator sequence. Even though your imagination may run wild, well, that's not going to stop us from writing the movie where that happens.
Speaker 2 (46:20):
That came over from one of Ashwin's experiments. Okay.
Speaker 1 (46:25):
Ashwin, tell us, you seem to be involved in so many different scientific areas and engineering areas. Were you always into technology and science? What sparked that passion in you?
Speaker 2 (46:36):
So, I was always... The closest thing I compare myself to is that little dog in Up that keeps getting distracted: "Squirrel!" I am that equivalent in science, in the sense that even though I have traversed different areas, I'm not working on all of them
(46:58):
simultaneously. In some sense there is a smooth progression from one to the other: you work on a particular problem, you hit a wall, and in order to solve it, you might have to find the solution in another discipline. So you go and work in the other discipline, and even though you change disciplines, it's not as if you forget the previous ones.
(47:19):
So, through my career progression: I started in neurobiology, I shifted to applied physics, then I shifted to DNA, then I shifted to using DNA to build and work with semiconductors. Then I went to Google for a while, working on some AI projects, and then I came back to MIT to work on certain other things. So you work on a problem, you hit a wall, and then the solution
(47:41):
and the inspiration might be in a completely different discipline, and you just go there and do it. I mean, I've been fortunate enough to have folks who have funded my curiosity, so it has allowed me to jump around. So that's the way I put it: I have shifted fields not necessarily because I've been interested in that field, it was out of necessity.
Speaker 1 (48:01):
I'm wondering, I guess, kind of going deeper a little bit: what is the truth? What is the deeper truth that you seek through experimentation? You know, you're a researcher, an experimentalist at heart, but what are the deeper questions you're trying to answer for yourself?
Speaker 2 (48:15):
So for me, here's the thing, right: I want to have predictive capabilities. For instance, if I'm setting up an experiment and I can predict what the outcome is going to be, and it perfectly lines up, then in some sense I have understood the system. I think that is the essence of science in some sense, and I want to get that in biology. I started out in neurobiology, to a certain extent,
(48:39):
way back in the day, and it started out with this desire to basically have predictive capabilities. I wanted to be able to do X, Y, Z, and be able to take neuronal progenitor cells and convert them. It was a very specific problem that I wanted to actually solve. And if I can predictively do that, then I
(49:00):
have understood the system. That's the heuristic. And in biology there are very few areas, in biology and medicine, where that can be done. So in some sense, I won't say biology and medicine are... definitely, healthcare and medicine are not a science yet, because you don't have predictive capabilities. But if I, and many of the people in the
(49:24):
community, not just me, do their things right, probably we can get there in the next decade, decade and a half. If I can get there, I've done some things right.
Speaker 1 (49:30):
Wow, that's awesome. Is there someone in your life, or a historical event, that got you into this game, either science or even as an entrepreneur?
Speaker 2 (49:46):
This would not be exactly an inspiration, but one of the people that really drives me, in some sense, is someone like Jonas Salk, the guy who invented the polio vaccine. In some sense, there is something very beautiful in the work that he ended up doing, and something even more beautiful in the fact that he
(50:08):
never patented it and gave it out to the world. And even though I can't do that, the entrepreneur side of me can't do that, I can't take it that far...
Speaker 1 (50:23):
Your investors may not be happy.
Speaker 2 (50:26):
But it's an awesome thing, that he gave that much away.
Speaker 1 (50:28):
Your investors may not be happy, but it's an awesome thing to say, right? Like what he did, I mean.
Speaker 2 (50:30):
My desire is that some of the work that I do, at least one thing or two things that I do in my life, has that level of impact on a bunch of people.
Speaker 1 (50:39):
Well, look, hopefully your name ends up in the pantheon with Jonas Salk and Marie Curie, but right now we're going to have to put you through the paces and make you the lab rat in my game, the Spark Tank. So welcome to this ruthless portion of the podcast, where
(51:02):
we're going to pitch you into a battle of wits against my brother. So tonight's battle of wits, ladies and gentlemen, features our guest, Dr Ashwin Gopinath, the brilliant mad scientist behind Biostateai, going head to head with my brother, Rajiv Parikh, who is just mad that he lost so many of these games.
Speaker 2 (51:15):
I won one.
Speaker 1 (51:19):
He's on a hot streak, so he's won one in a row.
Speaker 2 (51:22):
I'm taking on the other scientists.
Speaker 1 (51:27):
That's his biggest streak, so let's see if we can extend that to two. All right, we're about to play three rounds of two truths and a lie, focusing on the fascinating world of failed drugs, something you may know a little bit about. I'm going to share three tales of pharmaceutical mishaps, and your challenge is to pinpoint the one that's too outlandish to be true. So let's find out who's got the scientific savvy and the business intuition to spot the pharma fiction.
(51:49):
Sure, all righty. So I'm going to list three statements. On the count of three, I'm going to count off: three, two, one. You're both going to raise your fingers up, one, two or three, at the same time, so that you can't cheat off of each other. All right, here we go.
(52:12):
Statement number one: a drug intended to treat baldness inadvertently causes users' eyelashes to grow excessively long. Statement number two: a weight loss drug was pulled from the market after it was discovered to cause vivid, recurring nightmares. Statement number three: an anti-aging cream containing snail slime was discontinued after it gave some users an allergic reaction, resulting in a, quote, weeping rash which resembled snail trails.
(52:32):
Which statement is the lie? One, two or three? Wait a minute. You have a baldness drug that caused eyelashes to grow. Okay, a weight loss drug that caused vivid, recurring nightmares.
Speaker 2 (52:49):
Number three, an anti-aging cream. I was trying to figure out what it was supposed to do.
Speaker 1 (52:52):
Created a rash. Are you guys ready?
Speaker 1 (52:54):
Lock in your answers mentally. Here we go: three, two, one. One! I love this, because we can see what kind of internet delay we have. All right, you both have said that you think the lie is the treatment for baldness causing your eyelashes to grow. That does sound crazy. Now here's the deal. Certain medications like bimatoprost, Latisse I guess is the brand name, which is used to treat glaucoma, have been
(53:19):
known to cause excessive eyelash growth as a side effect, and this was discovered during clinical trials in the early 2000s. So we're both wrong. We're both wrong. It was later approved by the FDA for cosmetic use in 2008. So I'm so sorry, you're both wrong.
Round two: an appetite suppressant made from a seaweed
(53:43):
extract was withdrawn after it caused users' sweat to smell faintly of the ocean. Okay. Number two: a drug intended to treat allergies accidentally triggered temporary synesthesia in some users, which means it caused them to associate sounds with colors.
(54:04):
Number three: an experimental cold remedy containing bee venom was abandoned due to the risk of severe allergic reactions in some patients. Which statement is the lie?
Speaker 2 (54:19):
Are you ready to answer?
Speaker 1 (54:20):
You're going to say it.
Speaker 2 (54:21):
Are you ready? You guys locked it in? I think so. Here we go.
Speaker 1 (54:23):
Three, two, one. One! Alrighty.
Speaker 2 (54:29):
You guys...
Speaker 1 (54:29):
You both again are going with one, which is the seaweed. I'm going to read the thing here: seaweed is rich in iodine, and consuming large amounts can lead to increased iodine excretion through sweat and urine, potentially causing a slight change in body odor.
Speaker 2 (54:46):
Oh really.
Speaker 1 (54:47):
And this was observed in the 20th century, when seaweed was used as a treatment for thyroid disorders. So I'm so sorry, you guys got that one wrong again. Do you want
Speaker 2 (54:56):
to go for it? What the hell. Bring it on, bring it on, bring it on. To 50-50?
Speaker 1 (54:59):
Now we've got to do a 50-50. All right, two or three, here we go: three, two, one. Two. Synesthesia or bee venom?
Speaker 2 (55:09):
Okay, you both said two. Are we both wrong again?
Speaker 1 (55:11):
If this was worth anything, you would both get a point. Yes, correct: synesthesia. It is indeed a neurological condition where senses are blended, seeing sounds as colors, for example, but it is not a known side effect of any allergy medication. And yes, the bee venom one was also true. It's used for conditions like arthritis and
(55:34):
multiple sclerosis, but carries this risk of allergic reactions.
Amazing. All right, very cool. Round three, still tied at nothing apiece. Here we go. I put a little twist on this round. This is about how drugs are repurposed. You always get those interesting ones, just like we talked about the eyelash drug that ended up in cosmetic use. So which of these repurposings was not true?
(55:54):
Two of these are true. Which of them was not? Number one: a drug originally developed to treat tapeworms in animals was later approved for use in humans as a treatment for alcoholism. Statement two: a chemical compound initially investigated as a rocket fuel component was later found to be effective in treating erectile dysfunction.
(56:16):
Statement number three: a drug designed to suppress lactation in new mothers was later repurposed as a treatment for Parkinson's disease. All right, which statement is the lie? Three, two, one. Let me see your answers. All right, you're both saying one.
Speaker 2 (56:37):
I was going to say two. You guys picked one every single time, so interesting. At some point, one had to be the right answer.
Speaker 1 (56:44):
If only I was that... Don't metagame me. Don't metagame me. All right, here we go.
Speaker 2 (56:52):
Because I swore that I have heard the last one, and you are right about that.
Speaker 1 (56:56):
I'm going to go ahead and say that disulfiram, initially developed as an anti-parasitic for animals, was discovered to cause severe nausea and vomiting when combined with alcohol, making it a deterrent for alcoholism, and this was approved for use in the 1950s. So I'm so sorry, you were wrong once again.
(57:16):
You know what? It is not number one. I thought that it's number two. Two is the lie. Yeah, and I thought nitric oxide. Nitric oxide was the one for heart disease. Yes, that was supposed to be the one for blood pressure and heart disease, but I thought, who knows, maybe they took the nitric oxide from rocket fuel or something. You're not off on your logic here.
(57:36):
Nitric oxide, a component of some rocket fuels, plays a role in vasodilation, which is important for erection. But there's no direct link, and maybe Biostateai can solve this. There's no direct link between rocket fuel components and ED treatments. I had the first part right. I was going to go that way, and I was like, you know what? Exactly.
Speaker 2 (57:55):
All right, we've got to go with the tiebreaker. One of us has to win.
Speaker 1 (57:59):
Get it, get it. So you must choose something different. Okay, these are kind of historical. I reached back for these. Okay. Statement number one: in the early 20th century, a chemist working for a dye company accidentally discovered the first synthetic anti-malarial drug while trying to develop new colors for fabrics. Statement two: in ancient Greece, around 400 BC, Hippocrates
(58:21):
recommended the use of moldy bread to treat infected wounds, unknowingly utilizing penicillin, which wasn't technically discovered until 1928 in Alexander Fleming's laboratory. Statement three: in the 18th century, British sailors discovered that limes could prevent scurvy, a debilitating disease caused by vitamin C deficiency, after noticing that
(58:43):
citrus fruits were part of the diet of sailors from other countries who did not suffer from the condition, and this is the reason that the Brits are often called limeys, or were once called limeys. So which statement is the lie? On the count of three: one, two, three. Okay, you both chose something different.
(59:05):
All right, Ashwin thinks it wasn't the limeys, and Rajiv thinks it wasn't Hippocrates accidentally using moldy bread. Well, in the mid-18th century, British naval surgeon James Lind conducted a clinical trial that demonstrated the effectiveness of citrus fruits in preventing scurvy. This led to the adoption of limes and lemons as standard provisions on British ships, earning British sailors the
(59:27):
nickname the Limeys. So, yes, that was the truth. So, Rajiv, I can't believe you did it. You've won a second one in a row.
Speaker 2 (59:34):
Oh my gosh, and I beat a real scientist.
Speaker 1 (59:37):
Ashwin, I'm never going to hear the end of this, dude.
Speaker 2 (59:42):
I swear to God, I could have sworn that I have read that you can actually put moldy bread... They used to put moldy bread on wounds to basically inoculate them. Maybe I was wrong.
Speaker 1 (59:52):
Yes, Ashwin, of course you heard that.
Speaker 2 (59:53):
You probably heard that from our parents, who say all sorts of crazy stuff about what you can do to help your cuts.
Speaker 1 (59:59):
Yeah, but all right. Well, you did indeed survive the Spark Tank. It's challenging to even walk out of here alive, so well done, Ashwin. But unfortunately, now I'm going to hear the insufferable Rajiv bragging no end about his two in a row.
Speaker 2 (01:00:14):
Who's the elder one? Can't you tell? Gosh.
Speaker 1 (01:00:18):
That's an insult in and of itself, I see. He's 12 years older than me, Ashwin. Yeah, maybe it's just because he uses all that Botox. I take a mixture of moldy bread every morning. Yeah, and snail slime.
Speaker 2 (01:00:33):
You use the snail slime. Snail slime is actually a beauty product, you know. So I think they do use snail slime, for... yes, I think you are correct.
Speaker 1 (01:00:47):
It just doesn't cause that weird thing that we're talking about. But yes, I think that's the beauty of these: you try to put a little bit of truth in.
Speaker 2 (01:00:55):
Yeah, yeah, to kind of throw you. That's awesome, that's awesome.
Speaker 1 (01:00:58):
Thank you. That was awesome, that you played the game with us. And, of course, we had a lot of fun chatting with you today and learning so much from you today. We also, and this is what you'll have to verify, we also heard that you're a failed artist, somehow self-proclaimed.
Speaker 2 (01:01:18):
We're not saying that... I've got more failures...
Speaker 1 (01:01:22):
Who, for one of his top papers, had single-molecule DNA create Starry Night.
Speaker 2 (01:01:31):
Yeah, it's not single cell. It's basically done by placing single molecules. You paint the Starry Night with single molecules. You put individual molecules down; every pixel in the Starry Night was put where it was put, one
(01:01:51):
at a time. So essentially you can make...
Speaker 1 (01:01:54):
I gotcha. Okay, it's like people who make those photo collages: tiny, tiny photos of everything, but then you pull back and it turns out it's a picture of my mom, or something like that.
Speaker 2 (01:02:01):
Exactly. But it was basically to show that we could control... we could put a molecule where you want, in a controllable fashion. If I want to put a molecule in a place, I can do it, and I can do this on a massive scale.
Speaker 1 (01:02:17):
How do you show it?
Speaker 2 (01:02:19):
So, visually, trying to show it, it's a great effect.
Speaker 1 (01:02:22):
Yeah, amazing, super cool. Right, failed artist no more, because he can do it with DNA. Now, this is the lightning round, so you get to answer in 10 seconds or less. Okay, you ready? We're going to give you a lightning round. What's the biggest surprise you've had being in Boston over the course of your career? What's one big surprise?
Speaker 2 (01:02:45):
It's.
It's a very unhappy place.
People are unhappy.
Speaker 1 (01:02:51):
Oh wait, hold on.
Speaker 2 (01:02:53):
Boston just won the
championship.
How can you be unhappy?
That's why he's in Palo Alto. Yeah, exactly.
Speaker 1 (01:03:03):
But you know what? MIT,
I heard, is the most... all my
friends who went to
MIT
don't talk about it with joy.
They love it, but they don't talk about it with joy.
Speaker 2 (01:03:08):
It's awful. I'm glad
I didn't get in.
Okay, next question.
Speaker 1 (01:03:13):
If money was no
object, what would you do for a
job?
Two words.
Basically, paint? 10 seconds, right? Paint, yeah, paint. All
right, with...
Speaker 2 (01:03:23):
With DNA molecules or just
without DNA?
Without DNA, like the traditional way.
Speaker 1 (01:03:29):
Okay, here's another one.
You have all these
students in class and you have
this chance to impart so much wisdom.
Is there a life motto you leave them with after every class
that you like to share with everybody?
Speaker 2 (01:03:41):
Try to fail.
Speaker 1 (01:03:43):
Nice, love that. Try to
fail.
Love that.
It's like what we talk about on our podcast: Be Ever Curious.
Well, Ashwin, thank you so much for playing with
us today, teaching us today, taking a really complex subject
and making it accessible, and I think that's really hard to do.
We came in with the notion of enabling you to humanize
(01:04:05):
technical innovation for our audience in terms of how data
collection can save lives, and you helped us accomplish our
mission.
So thank you so much.
Speaker 2 (01:04:13):
Thank you for having
me.
Speaker 1 (01:04:27):
Who knew that
biological and computer and
nanoscience could be so interesting?
Yeah, well, I think all those fun buzzwords, as you called
them, do just equate to life.
What we're constantly trying to
solve is not just the meaning of life, but how to extend life
(01:04:49):
and how to make life more enjoyable to live.
And it's exciting to hear from the front lines.
You know, when he said, like, was it Jonas Salk?
You're like, yeah, you know, potentially some of
these guys that we end up interviewing here could
absolutely be that person.
And when you talk about his dream and his mission, what he
wants to do with this thing, I mean,
it would completely change the world.
(01:05:10):
I'm blown away by how he's motivated by his wife, his
wife's journey, his wife's disease, how he wants to get rid
of side effects for everyone.
He's driven up the wall by the fact that all these interactions
are not predictable.
Where I had the frustration, in starting my own medical device
company, about how complicated medical science was, he's
(01:05:32):
actually digging in, finding a way.
It's like the scientist with ADHD who just jumps around and,
every time he hits a wall, finds another way of solving it with
a different science.
It's the most fun thing about talking to innovators.
I love that.
It's like using ADHD as the superpower that basically avoids
disappointment and wanting to quit when you fail.
No, try to fail, he says.
(01:05:53):
I'm really inspired by that conversation.
Really cool, and I truly hope he sees it. He will.
He'll get there one way or the other.
So thanks for listening.
If you enjoyed this pod, please take a moment to rate it and
comment.
You can find us on Apple, Spotify, YouTube and everywhere
podcasts can be found.
This show is produced by myself, Samir Parikh and Anand Shah,
production assistance by Taryn Talley, and edited by Sean Maher
(01:06:16):
and Aiden McGarvey.
I'm your host, Rajiv Parikh, from Position Squared, an
AI-enabled growth marketing company based in Silicon Valley.
Come visit us at position2.com.
This has been an effing funny program.
We'll catch you next time and remember, folks: be ever curious,
try to fail.
Fail a lot.