
December 10, 2024 36 mins


Much like open source software, open science is a path to distributed collaboration. By making the data from experiments and investigations open and available, scientists can multiply impact and discovery for teams they've never even met.

Our guest, Saskia de Vries, talks to us about her work at the Allen Institute, including accelerating the pace of discovery by making scientific data available to everyone who wants it.

Credits
Saskia de Vries, guest
Ashley Juavinett, host + producer
Cat Hicks, host + producer
Danilo Campos, producer + editor

You can learn more about the Allen Institute on their website: https://alleninstitute.org/

Read some of Saskia's recent thoughts on sharing data in neuroscience here: https://elifesciences.org/articles/85550

The CRCNS open data repository that Saskia mentions: https://crcns.org/

Read about the FAIR principles for scientific data management and stewardship: https://www.nature.com/articles/sdata201618 

Learn more about Ashley:


Learn more about Cat:


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Saskia (00:00):
I like to think that open science really

(00:02):
is a form of collaboration. Sometimes it's a collaboration where we are actively collaborating with each other. Sometimes it's a collaboration where we're building off of one another's work without having a conversation about it. I think we're kind of at a point where it's really unlikely that the person who's driving the forefront of the data

(00:22):
acquisition and the experimental techniques is going to be the same person who's going to be able to drive the forefront of the analytical and mathematical techniques.

Ashley (00:38):
This is Saskia de Vries. Saskia is the associate director of data and outreach at the Allen Institute and is overall just an amazing scientist who thinks about open science, and there's a lot of parallels here for open source software, I think, too.

Cat (00:51):
I've loved listening to this because it made me think about the future of human knowledge, the stereotypes and beliefs that are holding us back, like the idea that science just comes from one person locked in a lab. Really, this is like a beacon of light and hope at this moment right now about thinking of our collective human effort that can

(01:12):
compound on each other. So, so excited to talk to Saskia.

Saskia (01:15):
My academic journey started off as kind of a typical academic trajectory. My undergraduate work was in molecular biophysics and biochemistry. And I spent a couple years as a tech in a lab. I did a PhD in neuroscience. And, I don't know, it felt at the

(01:38):
time like just a normal PhD, and in retrospect it also feels like a normal PhD. I was in a lab, I had a project. Everybody's project was pretty distinct. We were friendly. We communicated with each other, but it wasn't necessarily a collaborative process. And then I went and did a postdoc. All of the work that I did kind of post-college was all in the

(01:59):
visual system. So I'm really interested and excited about how the outside world gets into our brain. I kind of quickly realized I didn't want my PI's job. He would contest this, but I didn't feel like he got to do science, right? I felt like he was running a small business, and if I wanted to run a small business, I should have gone to business school. And I actually kind of feel like this is one of the downsides of

(02:20):
academia, that, like, the skills you need to run a lab are not the skills that you are training in graduate school and as a postdoc. Yeah, so the Allen Institute where I work, it's a nonprofit research institute that was founded in 2003 by Paul Allen. And when it first started, it was the Allen Institute for Brain Science, and it was focused on creating a brain

(02:42):
atlas. First of the mouse, and then there was also a human atlas and a macaque atlas, I believe. And then the part I was involved with was what's called the Allen Brain Observatory, which was doing in vivo physiology. So, recording activity of cells in the brain while the mouse is watching visual stimuli or performing behavioral tasks. Since then, there's now an Allen Institute for

(03:03):
Cell Science, an Allen Institute for Immunology, and an Allen Institute for Neural Dynamics, which is where I currently work. But one of the key things about the Institute is that its core principles are big science, team science, and open science. It made sense to me how that worked for things like an atlas, where you spend a lot of time creating this resource.

(03:24):
And now this is a resource that anybody can access and use. At this stage, all these things are digital, but you could imagine an atlas being a book that you just, you know, share with somebody, and they can peruse it and figure things out and find coordinates for things.

Ashley (03:39):
I remember this from grad school. Like, yeah, yeah, exactly. It's like this big, like, foot-by-two-foot book, basically. And you, like, open it up when you're, like, cutting up your mouse brain and you want to know where you are. And you find the page that looks like the thing you're looking at under the microscope. Yeah. So it's actually very physical, yeah.

Saskia (03:59):
When I came to the Institute, I was doing the physiology, that was my background. So, recording from cells in the mouse brain. And then, yeah, how do we share this with the world? How do we make this a resource that other people can use? Can interact with and engage with and use to do science, right? I think at the time we had, like, an institute motto of

(04:19):
accelerating discoveries. And, like, yeah, what does that look like? It kind of quickly became clear to me that this looks really different in the physiology space than it does when you're creating an atlas or kind of a compendium of tools or things like that.

(04:40):
And so, based off of my early work in the Brain Observatory, for a while I was leading a research group here, but then a few years ago I moved into a role where I'm really just focused on how do we share this data, and how do we make it most impactful and most accessible for people to reuse it, and

(05:01):
so what tools do we build around it, how do we organize it, how do we document it, kind of all of those questions.

Ashley (05:08):
I love that. And I'm one of those scientists who has benefited tremendously from this, because, you know, before you actually go do an experiment, you might look at the Allen Institute data and see, okay, like, did they do it? What did they find? What can we build off of? It's such an amazing resource. One thing I'm curious about is, you said, you know, the goal is accelerating discovery, and I feel like, you know, obviously

(05:30):
for humankind, this is a very important thing. Like, we want this. So, like, what are the things that you feel like the Allen Institute is trying to do that really, like, get us closer to, you know, whatever that discovery is?

Saskia (05:41):
I think there's a couple of aspects to this, and I think one place that I'll maybe emphasize is, when we talk about open science just in general, a lot of times those conversations end up talking about data sharing, and how do we put data out into the world, which obviously is

(06:02):
a major component of open science. But if there's no, like, data reuse happening, it kind of doesn't matter. And so the thing that I put a lot of my thought and attention into is how do we facilitate the reuse? Because the ultimate goal here, right, is, like, as you're

(06:24):
just, you know, posing that question, right? What does it look like to accelerate discovery? It's only going to do something if people are able to take the things that we're doing and use that to drive their own questions and use that to drive their own answers, right? And sometimes, like, actually, the example you gave, which I really like, is the one of: I have an experiment I want

(06:47):
to do, I have a question I want to ask. Can I get some of that information? Can I get some of that work out of stuff that already exists? Right. Instead of having to do a hundred percent of my experiment, if I can constrain the variables of my experiment based off of previous work, I only need funding, I only need time, I only need resources to do the remaining.

(07:08):
I forget what percentage I put out there, but if you can get 20 percent from existing data, you only have to do 80 percent. Right. And so for me, that's, like, one of the ideal examples of accelerating work, and I think this is what the brain atlas has done for so many people. It tells them: these are the genes that are expressed in

(07:29):
your region, or these are the regions where the gene that you care about is expressed. Go look in those places. It really focuses people's attention and allows their experiments to be better designed, better constrained, and then to fit better into existing data, existing knowledge. So I think

Ashley (07:48):
That 20 percent might have taken someone individually, like, two years or something. Like, that's all the validation experiments. Like, you know, maybe, well, we talked about dopamine in a previous episode, so I'm gonna, like, come back to dopamine. But, like, you know, let's say you just want to know, like, where dopamine neurons in the brain go, like, who do they talk to, you know? Your first step might be, like, look at the Allen Atlas,

(08:10):
look at where the dopamine neurons are and where they go. Then you could start there and you could say, okay, we've got X, Y, and Z brain regions. Let's dig in there and see. Yeah. But that would take probably years of someone's PhD.

Cat (08:22):
Something that comes to my mind as I listen to this, like, great chat between scientists, right, from, like, outside the field, is I have spent a lot of time thinking about things like, you know, who gets credit for work, and why is it hard to collaborate? Why is it hard to share? Right? And the beautiful team science, open science stuff feels, you know, deeply vital, deeply exciting, but I

(08:46):
think that I'm really curious, like, do you see barriers to people's adoption of this way of working? In particular, I'm wondering about things like, you know, does it feel more real to people if they did the experiments in their own labs? And, you know, is there kind of a conflict about ownership? I don't even know what my question is here, but, like, I know that there are struggles getting people to adopt these

(09:09):
ways of working, right?

Saskia (09:10):
There's, like, three different ideas I want to touch on here. But I think the first one is, like, how do we get people to adopt this? And there's, again, the data sharing side. And again, this is where so much of the energy is, and the Brain Initiative and the NIH are putting requirements on people to share their data. And it's interesting, because, like, the conversations

(09:30):
that I'll have with people about whether to share data, how to best share data, when to share data, why to share data: it used to be that people had strong reasons against it. Often it was, like, there are more things I want to do with it. Now the rebuttal I hear most often from

(09:51):
people is: nobody's really interested in my data, right? Like, it's a lot of work for me to share my data, and

Cat (09:59):
Impostor syndrome.

Saskia (10:00):
to use it. Well, I don't know that it's imposter syndrome. Like, I think often it's: I've done this really narrowly constrained experiment that is really unique to a really specific thing. I don't see what somebody's going to do with it. The reality is, if you look at the reuse of data, so I spent some time looking at the reuse in CRCNS, that's Collaborative Research in Computational

(10:22):
Neuroscience, a repository that the NSF funded. And it's like only 11 percent of the data sets that are in that repository have ever been reused, to our knowledge, right? Based off of, like, publications. That's not a perfect record, but you do get this long-tail distribution.

(10:45):
And so if you really are just like, I'm going to be out on that long tail, like, is it worth my energy to put it out there? I think that's a valid concern. And so that's why I'm pushing a lot on making it easier for people to reuse data, and on sharing and documenting data in a way that

(11:09):
my ability to reuse it isn't constrained to the questions that you asked when you first collected it. And so that's one of the other kind of really important things: data needs rich metadata. And I'm going to try and avoid spending the next, like, hour just talking about metadata, but, like...

Cat (11:30):
We have a pretty

Saskia (11:31):
technical

Cat (11:31):
audience, so you probably could.

Ashley (11:34):
we love getting

Saskia (11:35):
meta on

Ashley (11:35):
this

Cat (11:35):
Rich metadata could be a tag for this show.

Saskia (11:40):
Well, I don't know if you know the FAIR standards, right? These are kind of the standards for open science: findable, accessible, interoperable, and reusable. And if you read the documentation about FAIR, everything is about rich metadata. Like, all of these principles actually come down to having rich

(12:00):
metadata. And then when you look at the metadata that exists for most data, it's very anemic for really understanding it. But yeah, being able to reuse data in ways that are different from how it was originally thought of, I think, is kind of that crucial piece, and so part of it

(12:22):
is sharing data in ways that facilitate that. And part of it is, I think, training scientists; it's a new muscle, right? When I was a graduate student, I was taught to think of, like, what's the next experiment that you do to answer this question? And I think there's a training piece of: how do I look for data that might let me answer this question, the 20, 30, 50, 40,

(12:47):
100%.
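
As an illustration of what "rich metadata" can look like for a single recording session, here is a minimal, hypothetical sketch in Python. Every field name and value is invented for the example and is not any institute's actual schema; the point is simply that someone who never ran the experiment should be able to understand and reuse the data from the metadata alone.

```python
# A hypothetical example of "rich" session metadata: enough context that
# someone who never ran the experiment can still reuse the data.
# Field names and values are invented for illustration only.
import json
from datetime import datetime, timezone

session_metadata = {
    "session_id": "example-session-0001",  # stable identifier for the session
    "acquisition_datetime": datetime(2024, 6, 1, 14, 30, tzinfo=timezone.utc).isoformat(),
    "subject": {
        "species": "Mus musculus",
        "genotype": "hypothetical-Cre;reporter",  # placeholder, not a real mouse line
        "age_days": 90,
        "sex": "F",
    },
    "recording": {
        "modality": "two-photon calcium imaging",
        "brain_area": "primary visual cortex",
        "frame_rate_hz": 30.0,
        "n_cells": 250,
    },
    "stimulus": {
        "type": "drifting gratings",
        "directions_deg": [0, 45, 90, 135, 180, 225, 270, 315],
        "contrast": 0.8,
    },
    "provenance": {
        "experimenters": ["A. Researcher"],
        "funding": "example grant id",
        "license": "CC-BY-4.0",
        "related_publication_doi": None,  # fill in if one exists
    },
}

# Serialize alongside the data files so the context travels with them.
with open("example-session-0001.metadata.json", "w") as f:
    json.dump(session_metadata, f, indent=2)
```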

Ashley (12:47):
Like, how do I, you know, see, okay, I collected X,
Y, and Z, and maybe I have likea set of questions, but, you
know, just see the field kind ofmore broadly and see the
possibility in your own data toaddress other questions in
science.

Saskia (12:59):
Exactly. Yeah.

Cat (13:00):
I think it's beautiful. It's, like, really exciting to imagine that your one niche project could also be, like, the foundation for someone else, if we could get into that, like, coalitional mindset about it. When I think about my own research lab and the experiments that we run, we actually do try to think a lot about, you know, could we be

(13:22):
continuously, you know, gathering data on a measure that remains important no matter what other things we're studying, and then we get, like, longitudinal insight, you know, and we start to repeat it, and we, you know, we have a lot of that work that's kind of going on in the background of every specific project.

Saskia (13:38):
Mm hmm.

Cat (13:39):
know, That kind of relies on us just being diligent about
that.
But I, I always tell my teamlike reduce, reuse, recycle, you
know, like I'm a big fan of likeall this intellectual labor that
we're spending in the world isso precious and like, try to
make it go as much as you canmake it go.

Saskia (13:56):
Yeah.
Absolutely.

Ashley (13:59):
Yeah.
So that's like, so big thing,number one, then it's just like
getting people to think aboutreuse at all, either other own
data or to address questionsthat they have using other
people's data.

Saskia (14:09):
I like to think that open science really is a form of collaboration. And sometimes it's a collaboration where we are actively collaborating with each other. And sometimes it's a collaboration where we're building off of one another's work without necessarily ever, you know, having a conversation about it.

(14:32):
And, you know, I think that in neuroscience particularly right now, you know, we really have in the last decade-plus become a big data field. And for a number of reasons, a big one being

(14:53):
the Brain Initiative, right? Like, the NIH put a lot of money into neuroscience research. A lot of that money then went into developing and improving our technologies for collecting data, for recording data. And so we now have kind of this explosion of data, and, you know, one of the things that I often tell scientists and

(15:14):
students is that, you know, the experiments I was doing as a graduate student, I was recording small populations of neurons, probably like 30 to 50 neurons at a time, which was kind of cutting edge back, you know, 20 years ago, and now is not cutting edge, right?

(15:34):
And so, great, we can record thousands of neurons at a time. But the other side to that equation is that the analysis that I did was, like, pairwise correlations between these different sets of 30 neurons, and that doesn't scale to thousands of neurons, right? And so not only have the data collection techniques changed drastically in the last

(15:58):
20 years, but our analysis techniques have changed drastically in the last 20 years, and I think we're kind of at a point where it's really unlikely that the person who's driving the forefront of the data acquisition and the experimental techniques is going to be the same person who's going to be able to drive the forefront of the analytical and mathematical techniques. And so, like, we have to collaborate

(16:20):
in order for us to really capitalize on all of the amazing data that we are able to collect, because maybe I can collect this huge data set and do some really light, surface analysis of it. But, like, there's so much more there that my personal mathematical skills can't scratch the surface of.

(16:41):
And so I have to be talking to a mathematician, a theoretical scientist, a statistician, in order to really capitalize on these data. And I think this is just true writ large of our field right now: our molecular techniques, our recording techniques, our analysis techniques, they are all in such different domains of expertise that if

(17:03):
we're not collaborating, both directly and indirectly, we're just wasting our time, really. And so our field has to be set up to enable that. Sometimes I talk about how the tools that we build for open science are ultimately tools for collaboration, and we're just making that collaboration with everybody. I could shut the doors and just make it for my best

(17:25):
friends. But I still need to be able to, like, move my data from my computer to your computer. You still need to be able to open this file and understand what's in it. You have to know what the experiment was on. And so if I'm doing that for my four best friends and I just open the door, that's open science.
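
To make the scaling point above concrete: with N neurons there are N(N-1)/2 pairwise correlations, so an analysis that is easy at 30 cells grows quadratically and becomes unwieldy at thousands. A small sketch with synthetic data (nothing here is real recorded activity):

```python
# Why pairwise analyses stop scaling: N neurons give N*(N-1)/2 unique pairs.
# Synthetic data only; real recordings would come from an actual experiment.
import numpy as np

rng = np.random.default_rng(0)

for n_neurons in (30, 1000):
    n_pairs = n_neurons * (n_neurons - 1) // 2
    # Fake "activity": n_neurons traces, 5000 time points each.
    activity = rng.standard_normal((n_neurons, 5000))
    corr = np.corrcoef(activity)  # full N x N correlation matrix
    print(f"{n_neurons:>5} neurons -> {n_pairs:>7} unique pairs, "
          f"correlation matrix shape {corr.shape}")

# 30 neurons -> 435 pairs; 1,000 neurons -> 499,500 pairs. At thousands of
# neurons the question shifts from "which pairs correlate" toward
# population-level, lower-dimensional descriptions, which is where the
# different analytical expertise Saskia describes comes in.
```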

Ashley (17:43):
Yeah, I love that. I love thinking about it as collaboration, and, as you said, it's just, like, tools for collaboration. And I'm super curious about, like, you know, you mentioned we need different skill sets, right? When we're thinking about how to do that collaboration. So, like, you know, in your time doing this and in working with scientists that are using the data, like, you know, what

(18:04):
are those skill sets? What do they look like? Like, what do people need to be able to do in order to really tap into the possibility of all of this open data?

Saskia (18:14):
I mean, it runs the gamut. Right. Some of them are really technical. It's funny, because, like, I'm now in a position where I work with a lot of software engineers. Like, I'm on our scientific computing team, and I am not a software engineer, right? Like, you know, my ability to use GitHub is very minimal.

(18:38):
Like, I use it, don't get me wrong, but, like, I'm not good at it. Like, if something gets out of sync, I'm just like, somebody else come fix it.

Cat (18:45):
Every software engineer I interview starts the interview with the same thing: I'm not as good as everybody else. Just saying, you might be a developer in some rooms.

Saskia (18:57):
No, that's a fair point. I work with people who are a lot better at it than I am. Some of it becomes technical, but, I mean, the biggest thing is, like, communicating. And the tools of communicating look different, right? For people with different backgrounds and different

(19:19):
expertise. And, I don't know, we can come back to kind of code as an example. There's some code that I can read and make sense of, and some code that I can't make sense of, and if you don't have documentation that goes with it, I'm never going to figure out what that software does. So, how we document our code, how, like, how we build the

(19:40):
examples around it, and when we build tools that make it so that you don't have to understand that, right? I have someone on my team who's working with language models so that scientists don't have to learn how to do MongoDB queries or SQL queries to find data through all of our metadata, and can ask questions just through a

(20:01):
sentence. And there's challenges to that. It's not, like, a trivial thing to do, but the more we can remove that barrier of having to learn yet another language or tool or package to be able to work with these things, I think the better that is.
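
A toy sketch of the idea Saskia describes here: turning a plain-English question into a structured metadata query so users never write MongoDB or SQL themselves. This is not the Allen Institute's implementation; in a real system a language model, not keyword rules, would produce the filter, and the field names (shared with the hypothetical metadata sketch earlier) are invented.

```python
# Toy illustration: map a plain-English question onto a MongoDB-style filter
# over session metadata. NOT the Allen Institute's system; field names are
# invented, and a real version would use a language model instead of keywords.
def question_to_filter(question: str) -> dict:
    """Very naive keyword mapping from a question to a MongoDB-style filter."""
    q = question.lower()
    flt = {}
    if "visual cortex" in q:
        flt["recording.brain_area"] = "primary visual cortex"
    if "two-photon" in q or "imaging" in q:
        flt["recording.modality"] = "two-photon calcium imaging"
    if "drifting gratings" in q:
        flt["stimulus.type"] = "drifting gratings"
    return flt

query = question_to_filter(
    "Which two-photon imaging sessions in visual cortex used drifting gratings?"
)
print(query)
# {'recording.brain_area': 'primary visual cortex',
#  'recording.modality': 'two-photon calcium imaging',
#  'stimulus.type': 'drifting gratings'}

# With a metadata collection in MongoDB, you would then run something like:
#   matching_sessions = metadata_collection.find(query)
# where the structured `query` is produced for the user automatically.
```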

Ashley (20:19):
Yeah, I love that. I love that at the end of the day, it's communication. And that's just what we need to get better at. And, like, whether that's in the form of writing metadata that's detailed enough so someone knows what you did in the experiment, or documenting your code, like, these are all just forms of communication. That is the bedrock of collaboration.

(20:39):
Right. It doesn't happen without that. I love that. Yeah.

Cat (20:43):
That not everybody can do everything is such a powerful one. And also maybe, like, seeing your place in science as part of the market forces, you know. At the beginning of this, you were talking about not wanting your PI's job of running a small business, but I hear a lot of macroeconomic commentary, and, um, I was going to say,

(21:04):
it was funny, because I founded a startup and founded a research lab, you know, in sequence, and I actually felt like there were really interesting skills that I learned from, you know, getting money in both places for very different things. I think that's a really interesting call, because sometimes as a scientist you get trained, maybe, to kind of think you're sort of isolated from society and you're just in

(21:26):
your lab alone and you do everything from beginning to end, you know, and that's, like, what makes you a great scientist. And you're just painting a picture for me of, like, a very different reality.

Saskia (21:37):
I kind of think I did think that way. And, you know, my father was a history professor, and very much sat in his office and wrote papers and wrote books, and, you know, did a lot of really interesting stuff, actually. But, like, it really was just like that, and I've kind of, you know, teased him a little bit, where I'm just

(21:58):
like, well, you know, why these questions? Right? Like, you know, what's the benefit of asking these questions? It's like, he never gets asked that as a historian, right? You don't have to justify why you're asking about the price of bread in the modern Dutch economy. You know, like, I'm sure somebody at some point asked him that, but he never had to justify it, right?

(22:19):
He never had to go to the NIH and be like, give me money to do this, in the way that we have to justify every experiment that we do. Which is not necessarily a bad thing, right? Like, we should think carefully about why we're asking the questions we're asking, why we're doing the experiments we're doing. But in the sciences, you're constantly justifying the things that you're doing. And I kind of just want to, like, I just want to understand

(22:40):
how we see the world, right? Like, I just want to, like, sit in a lab and do experiments and get excited and, like, have these little aha moments. And it was one of those things where it's like, you just don't get to do it that way. It's really hard to do science in a way that's just about that pure exploration.

(23:01):
You always need the funding and you need the justification. And so I think that was a little bit part of it: it is always part of a bigger effort, and figuring out how to better fit it into that bigger effort, I think, was, like, one of the things that pushed me in this way.

Ashley (23:19):
I think that's a really interesting point. I feel like, you know, writing and thinking through that justification as, like, a single person is probably easier than doing it on a team. But the benefits of doing it on a team are that then you get all the input in and you're all bought in, you know, hopefully, on, like, the end result and the end thing you're striving for.

(23:40):
So, like, you know, I know this is something that the Allen does, right? Because you're, you know, engaging in these, like, massive projects, and you have multiple teams on board. So, like, you know, I guess, like, how does that work, and what do you see changing in people's kind of mindsets as they go through this process, moving from being these, like, single

(24:00):
individual contributors to being a piece of the team? Like, what does that shift look like for people?

Saskia (24:08):
Yeah, it can look like a lot of different things. I mean, I think the thing that kind of comes first is seeing that the possibilities really open up really quickly, because you can collaborate with people with different skill sets. And it becomes a lot easier to add in a rich layer of histology on top of your physiology, even though

(24:31):
you're terrible at histology. There are also challenges, though, where, and I think you had kind of touched on this earlier, about, like, you know, how do we do credit when we're collaborating like this? Right. And that definitely is a challenging piece of, like, how do we actually recognize who's contributing in what ways? And how do we, like, how do we communicate that

(24:54):
to the broader world, because it ends up being important for people's careers, right? And paper authorship isn't a great way of doing it. You can put as many stars next to names as you want, but it really often comes down to just, like, that first author is the person who gets the most credit. And I don't know that we've really solved this.

(25:17):
I think it's something we continue to grapple with and think about. And, you know, internally, how do we recognize people's contributions? How do we reward people's contributions? And then how do we communicate them? It's kind of always an ongoing thing. I do think it's possible that some of the infrastructure

(25:37):
for open science could help in this regard, if papers stop being the only currency that we have. And so, being able to give credit, say, in our metadata, where, you know, now it's not just, like, how many papers do I have my name on, but how many data assets is my name attached to, and, you know, what happens with

(26:01):
those data assets, like, maybe that becomes another piece.

Cat (26:05):
There's so many parallel conversations that I've heard, you know, that I think about in open source software. You know, there's so much conversation about what contributions get recognized, even things like, um, who manages a project, even if they didn't perhaps write the lines of code, but they actually do a ton of work to facilitate, you know, the interactions between people that made something

(26:27):
important happen. And then that becomes kind of, like, part of the cultural history of something, but not always in the credit, the explicit credit. And, um, I think protecting that credit and kind of fighting for it, even for our, you know, friends and colleagues who do that kind of work, really matters a lot. And, um, I also think about things like, uh, you know, a PI

(26:49):
in my grad school told me, like, you know, you don't cite anything like a software package or anything like that. You don't put that in your paper references. Like, that was just what they thought back then. Um, and then I started a lab and we started working with software developers. And I was like, I have to cite all of these developers who wrote the stats package that I used, you know, and I almost felt this shame, this sadness, that I had, like, not seen them as

(27:12):
people, and kind of been trained to not even see it as anything but a tool. Um, and it just happened to be free. Like, I don't know why it was free. It just was, you know.

Saskia (27:21):
Yeah,

Cat (27:21):
so I went on that journey myself,

Saskia (27:23):
Even at the point where you do cite these things, there's also this weird thing where I can cite a data set, I can cite a software package, and then I can cite this random paper that I mentioned in one line of my paper, right? But ultimately my paper is built off of the software and the data that I've got from those other two citations, yet they get just as much credit as this one idea.

(27:45):
Like, you know, like, I cite Hubel and Wiesel in every paper I ever write. Like, they don't need any more credit for anything, right? It's just, like, we have to, right? It's like, we're putting an electrode into a brain in the visual system, I have to cite Hubel and Wiesel, but, like, intellectually it's doing almost nothing.

Cat (28:03):
I'm imagining this, like, you know, you have your, like, acknowledgements, citations, and you're like, okay, but these are my OGs. Like, these are my, like, real ride-or-die citations that really shaped me. And these are the ones I have to mention. It feels like we'd get into a lot of drama if we started weighting citations as well.

Ashley (28:25):
At the Allen Institute, there are teams of software engineers and developers that build a lot of the tools. And I'm sort of curious, like, you know, in this thinking about their involvement in the scientific process, like, what does that look like at the Allen?

Saskia (28:37):
They're often involved in the science when it comes down to: okay, these are the experiments we're going to do, what does that look like, what are the requirements that that puts on our data, what are the systems that exist, what systems need to be built? I work on a team with a lot of software engineers and, you know, we do try and get them to engage with scientists a lot,

(29:00):
because I think it helps them to understand kind of why we're building the tools we're building, when they can see how scientists are using them. And so, bringing them into scientific talks. So both kind of, like, you know, we've got kind of seminar series and things like that, but also we'll sometimes have

(29:22):
scientists just come talk to the software engineers, give a slightly more lay version of an explanation of the work that you're doing. And often, when I have, like, new people on my team, we'll kind of set up a bunch of these one-on-one conversations so that they understand the context, or maybe not understand, but they get to see a picture of that context

(29:44):
quickly. So I think that's often how it goes, but it's rare that software engineers are kind of, hey, here's the next idea that we should try. But they're not too far from those conversations. And then it also is kind of a personality thing, right? Like, some software engineers are more excited about that and maybe sit in on more of those conversations.

Ashley (30:06):
I love that. That's, like, just also another piece of communication that the scientists then also need to learn how to do, right? It's like, how do you break down the science, bring in the people who are going to build out the technical backend to make this happen, and sort of, like, inform them enough so they can appreciate what the goals are, and they know then, you know,

(30:26):
what to build at the end of the day. Yeah.

Saskia (30:29):
I think, having been on kind of both sides of it, like, you know, I think I, and I see this in other scientists, had this idea of just, like, well, they can build anything for me, right? Like, I just have to tell them what I'm doing and they're going to make it all happen. And it's just, like, oh, that's just not true. Or maybe it is true, but the

(30:49):
time horizon of it is longer than what we can afford, kind of thing. So, being able to understand kind of the needs and limitations on both sides.

Cat (30:59):
I get it on the other side, because developers are like, well, you're a scientist, so, like, can you just please tell me how to be happy at work and how to fix my company? And, you know, so we both think the other side is magical, in a way that I think we could lean into and say, well, it comes from love, you know, but, like, let's find a way to connect.

Ashley (31:17):
Everyone's like, just fix it already.

Cat (31:19):
Just give me the answer.
Give me the thing.

Saskia (31:23):
One of the, like, leads on, on the technology side used
to say, and it became so dear tome, she used to, whenever people
would say, you just do this,just do that, she's like, just
is a four letter word.
I want this like embroidered onmy, like on the wall of my
office because, um, it, that waslike an eye opening thing for me
to learn is like, we, yeah, we,we do just think you can just do

(31:44):
this, just do that.

Ashley (31:45):
All this stuff takes, takes effort.
It takes time.
It takes communication.
You know, we have a lot oflisteners who are in tech, um,
maybe have never done histologyor physiology or touched a mouse
or a microscope.
But you are creating all ofthese databases and lots of
tools and probably need, even onthe open science and the open

(32:05):
software development side ofthings.
So I don't know, for the, forthe curious listener, who's
like, how do I do a little bitof neuroscience or help out in
some way, what would you tellthem?

Saskia (32:17):
There is so much open neuroscience that people can tap into. I think the challenge is finding it, finding what it is you can do within that space. Right. There's a lot of open data. There's, like, a variety of different repositories based off

(32:38):
of different data modalities. So if you're interested in physiology and behavior data, there's a repository called DANDI, which is supported by the Brain Initiative, that is where a lot of animal physiology and behavior data lives. There's also MRI for humans on OpenNeuro. There's a lot of places where you can find data.

(32:59):
But, you know, that's one thing: you can dive into data and think about analysis, but you might need some hooks that help you get into it and think about what types of analysis are interesting to do. I think the other place, though, is on kind of the software and tooling side. And, I mean, I think that might take more sleuthing, but I

(33:22):
can tell you there's a lot of open source software that gets used to process data and analyze data, where a lot of it's being developed by neuroscientists who, you know, find themselves having to figure out how to deal with some type of data: you know, I've collected all of these signals out of the brain, and now I want to

(33:45):
sort individual units out of all of these signals. And, you know, as a field, we've developed a number of great tools, but I don't know that we're done. And I think one of the things that a lot of these projects can benefit from is people who are better software engineers, making them just better tools that are

(34:08):
easier to use, in that regard. But then also people with, like, you know, engineers with, like, signal processing skills, who maybe don't care about the context of where the signals come from, but can, like, hone in on algorithms of detection and stuff like that. So there's a lot of open source tools that the field largely

(34:30):
relies on that I think would welcome people to engage with them and be able to work on them. So, like I said, it's a little bit about finding it. And some of that is, you have to kind of tap into some part of the conversation, be it the literature or the social media, or, I don't know, like, talking to some

(34:51):
scientists face to face. But at some point you have to get into that conversation.
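
For listeners who want to try this, the DANDI Archive that Saskia mentions hosts physiology and behavior datasets, largely in the NWB format, which can be read with open Python tools. A minimal sketch, assuming you have installed the `dandi` and `pynwb` packages and picked a dandiset to explore; the dandiset ID and file path below are placeholders, not a specific recommendation.

```python
# Minimal sketch of tapping into open physiology data on DANDI (dandiarchive.org).
# Assumes `pip install dandi pynwb` and a dandiset you have chosen yourself;
# the ID and filename below are placeholders.
#
# Step 1 (shell): download one dandiset, e.g.
#   dandi download DANDI:000000     # replace 000000 with a real dandiset ID
#
# Step 2 (Python): open a downloaded NWB file and look around.
from pynwb import NWBHDF5IO

path = "sub-XXX/sub-XXX_ses-YYY.nwb"   # placeholder path from the download
with NWBHDF5IO(path, mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)  # what the experiment was
    print(nwbfile.subject)              # species, genotype, age, etc.
    print(list(nwbfile.acquisition))    # raw recorded signals
    print(list(nwbfile.processing))     # processed data, e.g. sorted units or dF/F
```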

Cat (34:55):
People are feeling down about science right now. Like, science is a scary thing to do out in the open for some of us. You know, I work on social topics; it's tense in tech. And so I think you're in a really interesting place. I think the Allen, to me, sometimes is, like, this beacon of hope a little bit. Like, hey, we can actually have big, ambitious projects and a big institution that occupies this different

(35:19):
part of society, you know, bridges gaps. And, um, yeah, I'll just throw this general question out to you: like, what advice do you give to us about maintaining hope, believing in science, supporting science right now?

Saskia (35:35):
I don't know that I'm the right person to answer this question. I guess I maybe will come back to, like, some of the stuff we've been saying. Like, for me, hope always comes down to other people. That's the only place I've ever been able to find hope, and maybe this

(35:58):
is why I gravitate to team science in the way that I do. So, my hope in science is only because there are other scientists here that are, you know, in it with me, and who share kind of this desire to make it more collaborative and more open and more inclusive.

(36:22):
Um, yeah. At the same time, I'm also very scared. I don't know what our future looks like. I think there's a lot of things happening that could have a really big impact on everything that we're doing. And so I get very nervous, but I at least know that I'm not facing that alone.

Ashley (36:37):
Oh, I love that.

Cat (36:39):
You're not alone.
We're with you.

Ashley (36:40):
I think that's a beautiful note to end on, actually. Like, hope is other people.