
September 11, 2025 56 mins

Daniel and Kelly shine a light on the inner workings of one of the crucial parts of the scientific process: peer review. How does it work? Who are these peers? Can peer-reviewed papers be wrong, and does that mean that all of science is questionable?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Science is supposed to be our best way of revealing
the truth about the universe. But science is constantly being
updated and corrected, and sometimes we learn after the fact
that a study was flawed or even was shoddy to
begin with.

Speaker 2 (00:21):
So how do scientists decide.

Speaker 1 (00:23):
Whether a new result is robust or not? And how
does the general public know when the science is settled
or about to be upended. No system is perfect, but
it's important that the process is transparent.

Speaker 2 (00:35):
So today we're.

Speaker 1 (00:35):
Going to shine a light on the inner workings of
one of the crucial parts of the scientific process, peer review.
How does it work? Who are these peers? Can peer
reviewed papers be wrong? And does that mean that all
of science is questionable? Welcome to Daniel and Kelly's peer
reviewed universe. Hi.

Speaker 3 (01:07):
I'm Daniel. I'm a particle physicist and I've published over
a thousand papers, but haven't read most of them.

Speaker 4 (01:14):
Hi.

Speaker 2 (01:14):
I'm Kelly Weinersmith and I'm confused. What does that mean, Daniel?
How have you published over a thousand papers at the LHC?

Speaker 1 (01:22):
Does everybody who is... Oh, that's cheating, man, that's cheating.

Speaker 3 (01:27):
Well, the way the collaboration works is everybody who's contributed
to the project is an author. And when you're an author,
you get included on every paper that comes out during
your authorship period. And ATLAS is a huge collaboration of
lots of clever people. We put out about one hundred
papers a year. Every single one has my name on it,
and I haven't read most of them. I couldn't even

(01:48):
explain the titles of some of them to you.

Speaker 1 (01:50):
So in my so fields have their own cultures. In
my field, in my culture, that would ethically not be okay.
You should be responsible for what is in every paper
that your name is on, in my opinion.

Speaker 3 (02:06):
And I was in another collaboration previously where things work differently,
where you weren't an author on a paper unless you
asked to be, and nobody questioned it. If you had
the right to be an author, if you were on
the list, you could be. But it was an opt
in, and I thought that was much better. The length
of the author list was shorter, and authorship meant something.
But here you can ask to be removed, but you

(02:26):
have to go through this process and it would be
like I'd be doing it twice a week every week.

Speaker 1 (02:30):
So wow. So does your CV include all of
those papers, or just the ones that you feel you've
actually contributed to?

Speaker 3 (02:39):
That depends. The CV I make and share with people only
includes papers that I actually did anything on, or had
ideas for, or really contributed to. But sometimes an institution
wants a list of your official papers, and then I
have to like include all, I don't even know the
number, one thousand plus papers at this point.

Speaker 2 (02:57):
Oh poor dude.

Speaker 3 (02:58):
Now, so.

Speaker 2 (03:02):
How do you get them all? Do you just copy
them from Google Scholar?

Speaker 3 (03:05):
Yeah, Particle physics is pretty good at automation and stuff
like this, so we have our own databases of papers
and it's pretty easy to download that kind of stuff.
But yeah, it's pretty tough to have so many papers.

Speaker 2 (03:17):
Oh, so many pages you have to print out. The
file for the PDF of your resume is so big.

Speaker 3 (03:23):
How about you, Kelly, how many papers do you have?
Have you finished that paper from your thesis yet?

Speaker 2 (03:27):
Oh I'm feeling angry now, Dan, I have not.

Speaker 1 (03:32):
I've been busy with other things, including my new ducks
and geese, and goats, which I won't
mention.

Speaker 3 (03:40):
My co-authors... Do your ducks and geese and goats
go on your CV? That's my question. No? Uh, students supervised?
Come on.

Speaker 2 (03:46):
Oh yeah, no, I've got that list.

Speaker 1 (03:48):
But in the Great, the Great Goat, I think I've
got, I haven't counted, I think I've got
something like thirty. My pace has slowed since I
started writing pop sci, but you know I still get a couple
in there every year.

Speaker 3 (04:04):
My CV has something whimsical and unserious on it. Yours
is totally professional.

Speaker 2 (04:09):
Ah, yes, what whimsical thing do you have on yours?

Speaker 4 (04:17):
Oh?

Speaker 3 (04:17):
I'm just gonna leave that as an Easter egg. Oh.
I put it in there to see if anybody actually
reads my CV, because if they don't respond to that,
I'm like, hmm, you didn't really read it. Okay, yeah
check it out.

Speaker 2 (04:26):
All right. Well, I'll be looking up your CV. Have
you updated it? Here's the question.

Speaker 1 (04:30):
Most scholars don't update their CV, but like once every
half decade.

Speaker 2 (04:35):
Is your online CV updated?

Speaker 3 (04:38):
Yeah? I think I updated a couple months ago.

Speaker 2 (04:40):
Oh all right, bravo, good out.

Speaker 1 (04:43):
All right, Well today we're not talking about the thousands
of publications that Daniel has, but we're all very proud
of Daniel for all of his hard.

Speaker 3 (04:50):
Work, or the tens of geese and ducks that Kelly
is currently raising, though we're very proud of her and
her literal extended family.

Speaker 1 (04:58):
Well, I don't have that many, and I wouldn't claim
other people's ducks that I didn't help out with as
my own ducks.

Speaker 3 (05:04):
Oh wow, now no.

Speaker 1 (05:09):
But anyway, today we are answering a question from a
superfan and we're going to go into a bit more detail.
This question inspired us to make a whole episode about
the process of peer review. So let's go ahead and
hear what Stephen from British Columbia has to say.

Speaker 4 (05:25):
Hi, Daniel and Kelly, this is show super fan Stephen
from British Columbia, Canada. My question today has to do
with how scientific studies are peer reviewed?

Speaker 1 (05:38):
Is it me?

Speaker 4 (05:39):
Or is science denial spreading these days? And it came
to my attention recently that not all peer reviewed studies
are replicated, And then it occurred to me, doesn't this
automatically cast doubt on a discovery if a study is
not replicated but is still considered peer reviewed.

Speaker 3 (06:02):
I would love to.

Speaker 5 (06:04):
Hear an explanation about how the peer reviewing process works
and how scientists determine when a fact is a fact,
a finding is a finding, and how the community comes.

Speaker 4 (06:18):
To a consensus on new discoveries.

Speaker 3 (06:21):
I also think your audience.

Speaker 4 (06:23):
Could use some tools for how to determine what studies
are legitimate and how to spot a bad study. How
should one respond to someone who claims a scientific study
is bogus. Thank you Daniel and Kelly for this great
podcast and for explaining science topics to the masses.

Speaker 3 (06:47):
Thank you Stephen for asking this question. I think it's
really the right moment to dig into this. You know,
when we see institutions being attacked, when we see science
being denied, when we see the whole process of science
being questioned, and I think it's important to shine some
light on how does this all work, what does it mean,
especially for those folks who are not scientists, who don't know,

(07:07):
what does it mean for a paper to be peer reviewed?
What is bad science? Thank you Stephen for asking this
question and giving us the chance to dig into all
of this.

Speaker 1 (07:15):
That's right, and so the question in particular is about
peer review for scientific manuscripts. But peer review actually happens
at multiple stages in the process of doing science, and
so we thought we would take a step back and
talk about peer review from the very beginning, which is
when your idea becomes a grant. And this is one
of the less fun parts about science is writing this

(07:38):
grant because peer review for grants is harsh.

Speaker 3 (07:41):
This is like the science version of when a bill
becomes a law. We should have written a song for this.

Speaker 1 (07:45):
No, well we still can. You know, we could write
it and then we can add it to the top. No,
we won't do that to everyone.

Speaker 3 (07:52):
So Kelly, tell me what is your process of grant writing?
First of all, who do you submit your grants to?
And what's it like for you to prepare that?

Speaker 1 (08:00):
So I submit my grants to the National Science Foundation.
And actually it's possible that in the last six months
the process has changed a bit because I know the
new administration is shaking things up.

Speaker 3 (08:11):
The agency formerly known as the National Science Foundation.

Speaker 2 (08:15):
No, what is it called now?

Speaker 3 (08:16):
I'm joking?

Speaker 2 (08:17):
Oh god, anything is believable, Daniel.

Speaker 1 (08:20):
At this point, all right, the institution still known as
the NSF or the National Science Foundation. So for this
you write a grant, and there's different programs that accept
different kinds of grants, but often the grant is like
fifteen pages where you go into extreme detail on what
we know about the question you want to ask based

(08:41):
on the literature that's already out there, extreme detail on
the experimental design, and then a lot of detail about
what you think the implications will be of your results.

Speaker 3 (08:50):
So you talk about why this question is interesting, you
argue that your work will help answer it, and then
you lay out in detail how you are going to
spend every dollar, who's going to do what, and when.
You get like a whole Gantt chart for like when
everything is going to happen, yep. And then you have
to lay out also like the products for the grant, right, yeah.

Speaker 1 (09:07):
Yeah, the budgets. And so you're supposed to... There's
a section called intellectual merit where you talk about how
many papers you think you're going to end up producing,
what big questions you're going to answer, And then there's
also a section called broader impacts, where you talk about
how your work is going to benefit society writ large.
And you know, for my grants in the past, it's
been things like, you know, my information is going to
be helpful to fisheries managers who are managing fish populations,

(09:30):
which is literally a multi-billion-dollar industry
for like people who want to go out fishing and stuff.
So most of my research has been on fish and
their parasites. Also intellectual merit will include things like how
many students you're going to train and what skills you're
going to give them that they can then use when
they go out in the workforce. And depending on the
field you're in, sometimes your broader impacts will be things like, Hey,

(09:52):
this could be what we learn about the brains of
this fish could one day help us produce a new
medication for anxiety or something like. So you talk about
how your stuff would impact the broader society.

Speaker 3 (10:04):
And so for those folks who might think, hey, scientists
are just cashing in on the government gravy train, tell
us like, is this the kind of thing you throw
together in an afternoon and then can confidently think it
will be funded?

Speaker 2 (10:16):
Oh my god? Okay.

Speaker 1 (10:17):
So one of the reasons, to be honest, one of
the reasons I sort of transitioned away from academia in
the purest sense, so I'm like academic adjacent now,
was because I spent too many Christmas Eves working
on my grants because they were due like sometime in January,
and they just, they were like, I would

(10:40):
say, months of work in a lot of cases if
it's a new idea. If you're just cleaning up an
old grant, then it can be less work. And
then so many great grants are submitted, and when they
go to review, you have three peers of yours who
are in closely related fields review it, and then there's
a panel discussion where they talk about their views of

(11:00):
your grant in front of a bunch of other experts
who can also weigh in. And then all those reviews
go to program directors who pick out of like the
twenty best grants, you know, maybe they can only fund
fifteen or something. And then they pick some on the
East coast, some on the West coast, some in the
center of the country, some from major research institutions, some

(11:21):
from institutions that focus a lot on undergraduate research, and
they have research programs as well, And so so many
really great grants don't make it into the pile that
get funded.

Speaker 3 (11:33):
Meaning that people have read them, have said this is excellent,
the science is good, it's interesting, it's important. We would
benefit as a society if we did this. It's all
well thought.

Speaker 2 (11:42):
Through, but no, that is exactly right, yes, and it
is soul crushing. I know so many people who are like,
I gave up Christmas Eve on this grant, and this
is the third time I've submitted it, and it just
never makes it quite high enough, even though everybody thinks
it's amazing. And then quickly just to mention for follow up.

Speaker 1 (11:59):
If you do get a grant, every year you have to
write a progress report saying what you've done, have you
stuck to your timeline, how have you spent the money
that you've been given, are you going to hit your timelines?
And what have you done that impacts society? And so
every year you have to report on that and then
at the end you have to write a bigger report.
So you need to be justifying what you're doing with

(12:19):
the money every step of the way.

Speaker 3 (12:21):
And before we turn it around and ask me questions,
who are the peers here? Who are the folks who are
reading your grants and commenting on them?

Speaker 1 (12:29):
There are folks in the same discipline at other universities.
So you also have to fill out a detailed Excel
sheet where you list who in the field who
does similar work has been your student or your postdoc
or your mentor or a co-author.

Speaker 3 (12:42):
You might have a conflict.

Speaker 1 (12:43):
Yeah, and anyone who might like you and
who might be too nice to you when they review
the grant. So it has to be only people that
you haven't worked with or haven't co-authored a paper
with for something like five to ten years. I guess
they assume that the liking of someone wears off after about
half a decade. But it can be tough because, you know,

(13:04):
these fields are kind of small, but you know, usually
it's other people at research institutions who do similar work,
are familiar with the literature and can tell if what
you're proposing makes sense or not and is good or not.

Speaker 3 (13:14):
And so the thing to take away from this is
getting a grand funded is hard, right. Not only is
it a huge amount of work, but you have to
survive an excruciating process where really only the best are selected.
It's sort of like watching the Olympics and you're wondering, like, oh,
here's somebody from this country and somebody from that country,
and you know that each person has already won some

(13:36):
sort of like really competitive national competition just to even
be there at the Olympics representing their country. And so
like every grant that's funded, even if it seems silly
and it's about like the sex lives of ducks, you
know that it's been like excruciatingly written, reviewed in detail,
questioned by experts without conflicts of interest, and found to

(13:57):
be excellent, having beaten out lots of other grants. These are
not slush funds shoveled to scientists to, you know, just
do whatever they want with. These are hard-won
funds to do science that the agency thinks is a
good idea.

Speaker 1 (14:11):
Yeah, it is absolutely excruciating to do these things. It
feels great when you get it funded. And as I said,
I hate writing grants. Sometimes I enjoy the process of
like figuring out just the right experimental design for a question.
That's like fun for me. But in general, when the
grant doesn't get funded, it sucks. But anyway, all right,
so what's your experience.

Speaker 3 (14:31):
Yeah, my experience is very similar. You know, we come
up with an idea, we spend a lot of time
polishing it, often doing a lot of the work in
advance that we need to demonstrate that the work described
it's reasonable. Right. If it's too far forward thinking, then
you won't get the money because they're like, well, that
might work, but can you really prove it. It's too

(14:52):
much of a risk. If you've already done too much
of the work, then they're like, oh, this is already done,
why would we fund it? So it's really sort of
on the edge there of like you've done the initial
work for free right or on some other grant or whatever.
So you prove that the idea is valid and can fly,
but not so much that they're like, why would we
fund this? You've already done it, which is often a
delicate balance. And my feeling with grants is submit it

(15:16):
and forget it, because such a tiny fraction ever come
back with money that you just got to like let
it go, like, hey, I've submitted it, and I never
expect to hear back, you know, anything positive, And so
I just sort of like give up emotionally on every one
because otherwise it's too hard, you know, it's too hard
to deal with, yea. So my process is very similar

(15:36):
to yours, with a couple of exceptions. Sometimes I submit
to private foundations, like I recently found a private foundation
that likes to fund projects that are too blue sky
that were rejected by the NSF for being like way
too out there, and that submission process was like write
a paragraph, send us your rejected NSF grant with the reviews,

(16:00):
and that's it.

Speaker 4 (16:01):
Whoa.

Speaker 3 (16:02):
And that one actually just came back and they just
gave me some money to build my cosmic ray smartphone
telescope down here at Irvine. So yeah, that was actually
a really positive experience.

Speaker 2 (16:13):
Congratulations. Yeah, that's awesome.

Speaker 3 (16:14):
Yeah exactly, because you know, private foundations can do whatever
they like with their money. What we talked about previously
was mostly the process of applying to government institutions, the
NIH, the NSF. Most of my funding comes from the Department
of Energy, Office of Science, which funds particle physics and
lots of other stuff in the United States. But private
foundations can do anything. Like the Bill Gates Foundation, you

(16:36):
can't even apply to. They have to like reach out
to you.

Speaker 2 (16:39):
Wow.

Speaker 3 (16:39):
And you know MacKenzie Scott just like gives money to
people that don't even apply. They just get a phone
call saying like, by the way, here comes a million
dollars or more.

Speaker 4 (16:47):
Whoa.

Speaker 3 (16:48):
So yeah, private foundations are weird.

Speaker 1 (16:51):
Yeah, so I'm sure that they make solid choices about
who they're going to give their money to. But I
guess one way to sort of check how much you
can trust an article is you can look in the
acknowledgment section at the end and say, oh, this was
funded by the National Science Foundation, and then you know
this went through rigorous peer review. And just because something
doesn't go through rigorous peer review at the grant stage

(17:11):
doesn't mean it's bad science. But at least you know
if it went through a government agency, it has been
sort of gone over with a fine-tooth comb. And the

Speaker 3 (17:19):
Thing that's frustrating to me is this process is so inefficient.
Scientists spend so much of their time preparing grant proposals
and having them rejected. And you know, each of these
is a great idea, each of them, if we did
it, would benefit society in terms of sheer knowledge or
you know, new technology or something. We have all these

(17:39):
smart people constantly pitching great ideas to the government, the
government saying, yeah, that's awesome, we'd love to do it,
but we can't. Like, why don't we just double triple
funding for science. It would be like better for us.
Every dollar we spend, we know, comes back two to threefold
in terms of like economic output. It just seems crazy

(18:01):
to me to reject all of these excellent ideas.

Speaker 2 (18:04):
I mean, I gotta say, not every idea I've reviewed
on a panel has been excellent.

Speaker 3 (18:09):
No, but there are plenty above threshold.

Speaker 2 (18:10):
Yes, absolutely, absolutely.

Speaker 1 (18:12):
There's always a grant every round where my heart breaks
a little that it didn't get funded because I thought, oh,
that is so cool, but there wasn't enough money to
cover it. And so yes, there's a lot of good
work that's not getting done.

Speaker 3 (18:23):
And I agree. I'm a grant reviewer often and I
see grants I'm like, no, this is not well thought out,
or there's a flaw here, or this isn't cutting edge.
You know, somebody did this last year, and it's important
that this stuff gets reviewed and gets reviewed in a
fair way. But there's so many that are above threshold,
and only a fraction of those get funded, and it
just seems to me to be a waste. But whatever.
I'm not a political person. I understand these things. But

(18:46):
I want folks out there to realize that every grant
you've seen that's been funded has been reviewed in excruciating
detail before dollar one was even sent to the institution.

Speaker 2 (18:55):
That's right.

Speaker 1 (18:55):
I don't think we need to get into lots of
detail about this, but every once in a while, my
husband and I will have chats about what would be
a better system for funding grants, Like maybe every four
years you get a chance to submit and that way
you don't have to write grants the other three years
and there's fewer people in the pool and you're more
likely to get it. I think that's not the answer,
but I wonder if there's some way we could save

(19:16):
scientists from spending all of this time on grants that
don't get funded, or maybe we should just throw twenty
times as much money at the scientific community. There's our solution.

Speaker 3 (19:25):
Well, you know, it used to be fifty years ago
that the philosophy was a little bit different, that the
government funded people rather than projects. And they were like, oh,
scientist X at this university, you're a smart lady. You've
done good work. We'll just keep giving you money and
it'll just go for a while. As long as you
keep doing something, we'll keep giving you money. And I
think that used to be the major model, and

(19:47):
it's just not that way anymore. Now. It's projects, and
so if you have a lab at a university, it's
sort of like running a small business. You know, you
have to constantly be pitching grants to get funds. You know,
you're like running a store at the mall. Katrina likes
to say, you're constantly looking for new ways to contribute.
The one holdover that I'm aware of is actually experimental
particle physics is still a little bit of the older model.

(20:10):
They run competitions for junior faculty, and if you win
one of these awards, like I won an Outstanding Junior
Investigator Award when I was a very young professor, then
you sort of get in the club and then they
mostly fund you, and your funding can go up if
you do great, and go down if you're less productive.
But it's much more stable than typically I think, because

(20:31):
these particle physics projects last like twenty years. You know,
we build a collider, we expect to use it for
twenty five years. That's my thinking for why they still
do it in the sort of older model. But it
means I don't have to write as many grants because
I do have one more stable source of funding.

Speaker 2 (20:45):
Wow. So even to this day, you still get a
chunk of money every year.

Speaker 3 (20:49):
I have to submit a grant every three years to
propose new work and to tell them what I did
in the last three years, and so far every time
my grant has been renewed and continued. So yeah, I've
been funded by the Department of Energy for a couple
of decades now, which is very nice. Yeah, and I'm
very grateful to the Department of Energy, thank you very much,
and to all the taxpayers who support them.

Speaker 1 (21:08):
Oh, you government shill. I'm just kidding. I am so
happy for you. That's awesome. I know there's a couple
grants for people in the medical field that work that
way also where they fund your lab and just trust
that you're doing awesome stuff and you will continue to
do awesome stuff with that money.

Speaker 2 (21:25):
And that sounds pretty sweet.

Speaker 3 (21:26):
Yeah, it's pretty nice. Yeah. I think you're talking about
like the Howard Hughes Awards, for example.

Speaker 2 (21:30):
That sounds very yeah. Yeah yeah. All right, Well, my
jealousy aside.

Speaker 1 (21:34):
Let's all take a break, and when we get back,
we're going to talk about the next phase when peer
review is done, and.

Speaker 2 (21:40):
That is when you're about to start your experiments.

Speaker 3 (22:03):
All right, we are back and we are talking about
peer review. Thanks to a question from a listener, and
we dug into the process of peer review for grant
proposals and grant writing. And now let's talk about what
it's like to review an experiment while it's running, before
the paper is even sent to the journal. Kelly, I
mostly do research on particles that don't have rights and

(22:25):
don't have institutional review boards protecting them. What's it like
to do an experiment on living creatures with emotions that
can feel pain and have people looking over your shoulder.

Speaker 2 (22:35):
Oh, it can be pretty stressful.

Speaker 1 (22:37):
So say you get that National Science Foundation grant, they
won't actually release the funds to you until you've shown
that you've acquired certain permits and protocols so that your
institution is giving you permission to do the research. So
my PhD work required collecting some fish out of estuaries
in California. So first I had to talk to the
state government and fill out permits to get permission to

(22:59):
go collect those fish. So I had to convince the
California government that I wasn't going to take too many,
that there were plenty of these fish out there, that
what I was going to do to them was asking
a worthwhile scientific question.

Speaker 3 (23:11):
Kelly, what are you going to do to them?

Speaker 1 (23:13):
Them, I'm going to hear well the questions I'm going
to ask of them that they will be contributing in
a meaningful way to science. And so first you have
to get that permission to take the animals out of
the wild, and then you need to fill out a
protocol through the Institutional Animal Care and Use Committee, which
is IACUC. It includes five people, and they include

(23:35):
folks like veterinarians, an outside member, so like somebody who's
part of the community who is just sort of going
to weigh in on what the general public feels about
the work that's being done. Sometimes you'll also have like
an ethicist or a lawyer in there, and often you'll
have like a faculty member on there also.

Speaker 3 (23:51):
And what do these people want? Do they want to
say yes? Do they want to say no? Are they
just totally disinterested? Like what are their motivations? Why is
like a rando from the public doing this? Anyway?

Speaker 1 (24:01):
So, if I'm being completely honest, my field in the
past did some uncomfortable things to animals, and you know,
they didn't use, for example, anesthetic when they were doing
some surgeries, and they should have things like that. And
so this committee is trying to make sure that you
are treating the animals as nice as possible and using

(24:22):
the fewest animals that you need to get answers to
your questions. So you need to convince them that you
have read up on the most effective anesthetics for whatever
animal it is that you're using. You also often have
to take a bunch of online training courses where you
show I am well trained in the best way to
treat these animals and I know how these anesthetics work.

(24:44):
You often have to prove to them that you have
like teamed up with someone who can make sure you're
doing the process correctly, and you need to convince them
that you've thought really hard about the exact number of
animals that you're going to use for these experiments. You know,
their job isn't to stop science, and they're not necessarily
trying to figure out whether the question that's being
asked is good or not. They assume that if you

(25:04):
have funding from the National Science Foundation, that has already happened.
But their goal is to just make sure that nothing
unethical happens and that the animals who are contributing their
lives to science have the best life possible.

Speaker 3 (25:14):
Oh that sounds nice. It is, and it's nice to
see the process sort of self correcting, like, yeah, okay,
we trust scientists. Actually, maybe they need some more eyeballs
on them because they have conflicts of interest, and so
it's good to have other people who don't have those
conflicts looking over your shoulder and making sure you're doing
things the right way.

Speaker 1 (25:32):
That's nice, it is, And I also appreciate that there's
a veterinarian on there. The veterinarian will like check in
pretty regularly. I have training in parasitology, but I don't
necessarily have training in care of fish or something.

Speaker 2 (25:44):
And I've done a bunch of reading.

Speaker 1 (25:45):
But it's nice to have a veterinarian check in every
once in a while and offer their expertise to make
sure that these animals really are being treated as well
as you can in a lab setting. And I have
never worked on humans, but if you are doing things
with humans, including just like sending out a survey to humans
that might ask questions that would make people feel uncomfortable,
you have to pass those procedures through an institutional review board.

(26:07):
So there's a similar procedure for working with humans. So
I guess you don't have anything like that for particle
physics because we don't care what you do to the particles.

Speaker 3 (26:17):
No, we don't. But I did one time come home
from work to find Katrina collecting samples for an experiment
from our children. Oh, And I was like, hmm, shouldn't
you be asking the other parent before experimenting on your children,
like you have a conflict of interest here? And she
was like, it's just a saliva sample. I'm like, the
children can't consent to this. So we have a lot of

(26:38):
fun joking about that.

Speaker 2 (26:39):
Did you give permission? That's good? That's good.

Speaker 1 (26:43):
So, Daniel, there's this new thing that's getting sort of
big in science that my friends have been talking about.
I haven't had a chance to do this yet, but
pre registering a scientific study, have you done this yet?

Speaker 3 (26:53):
We don't actually do this in particle physics, because this
is another way that particle physics is weird, is that
we publish our studies when we get a negative result.
Like lots of fields of science, you might say I
have an idea for how we might learn something, and
you know, you do this study and then it didn't
work or you learned that there's nothing new there like okay,
bears eat salmon. We already knew that you know like

(27:15):
or whatever. And often those studies don't get published if
the result isn't statistically significant or you didn't learn something new,
and there's a statistical issue there, which is that this
can lead to a bias in our data and our understanding.
The effect is called p-hacking, and it means that
sometimes things that are not real can appear to be
significant just due to random fluctuation. Say, for example, I'm

(27:38):
testing a bunch of coins to see if they're fair.
I flip each one one hundred times. Then like most
of the time, I'm going to get fifty heads, but occasionally,
just due to random fluctuations, I'm going to get one
that gets like seventy heads right. And the more I
do this experiment, the more likely I am to have
one that's a fluctuation. And so if I only publish

(27:58):
the ones that seem to be significant, that
crossed some statistical threshold to be weird, it's going to
look like these coins are not fair, when in reality
they are. That's what p-hacking is. The p refers to
the probability, under the null hypothesis that the coin is
fair, of it fluctuating randomly to look like it's not fair.
And so the way to counteract this is to publish

(28:20):
negative results, to say, look, I did all these experiments
and the coin came out fair. We already knew that, yawn.
But it's important that we include this context so that
if one in a thousand studies says, look I saw
one in a thousand times effect, you know what it means.
And often we do this by pre registering studies by
saying I'm going to go out and do this study
and I don't know the results yet, but I'm going

(28:42):
to publish it either way, right, And so that's a
way to counteract p-hacking.
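
A quick illustration of the coin-flipping example Daniel just described, sketched in Python (an added illustration, not part of the episode; the 1,000 coins and the 60-heads cutoff are arbitrary choices): if you flip many fair coins but only "publish" the ones that cross a significance cutoff, the published subset looks biased.

```python
# A hypothetical sketch (not from the episode) of the coin example above:
# flip many *fair* coins, but only "publish" the ones that cross an arbitrary
# significance cutoff, and the published subset looks biased.
import random

def count_heads(n_flips=100):
    """Flip one fair coin n_flips times and return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

n_coins = 1000
cutoff = 60  # "publish" only coins with 60+ heads out of 100 (a rare fluctuation)

published = [h for h in (count_heads() for _ in range(n_coins)) if h >= cutoff]

print(f"Coins tested: {n_coins}")
print(f"Coins that looked 'unfair' and got published: {len(published)}")
if published:
    print(f"Average heads among published coins: {sum(published) / len(published):.1f} / 100")
# Every coin was fair, yet the published subset averages well above 50 heads,
# which is why publishing negative results (or pre-registering) matters.
```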

Speaker 1 (28:47):
So in my field, what you've just described, I think
we would call the file drawer effect. And so the
idea here is that, yeah, when you get a null
result or a result that's less interesting, you might try
to publish it somewhere, but you're less likely to put
in the effort to try to publish it in a
lower tier journal that you're going to get less credit
from your institution for having published in. And that could

(29:07):
make it look like an effect is there, because when
you randomly get a positive effect, it gets published and
otherwise it doesn't. For us, p-hacking is when you
didn't get the result that you wanted, but you're looking
at your data and you're like, oh, you know, I
wasn't actually asking a question about this other thing, but
if I look at my data, there's actually a statistically
significant result there, and so I'm going to publish on that,

(29:29):
and you know, maybe I'll mention that this wasn't the
initial thing, but I found this, you know, significant relationship
that's positive, So I'll publish on that. And the deal
there is, you know, like you said, randomly, you would
expect to get results that look like something is really
going on. And if you just are searching through your
data set, you're more likely to find something that's significant.
And that wasn't the question you're originally asking. And so

(29:51):
the reason we do pre registering a scientific study is
you'll say, ahead of time, I am specifically looking at
the relationship between, I don't know, a parasite and how often
a fish darts. And so if I go on and
I publish a paper about parasites and how active a
fish is, you could say, hey, did you just find

(30:11):
that result? And that you changed your paper to be
about that because it looked interesting. And so anyway, this
is how we make sure that you're not looking for
significant results and just publishing whatever you find.
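
To put a rough number on the data-dredging problem Kelly describes, here is a standard multiple-comparisons calculation sketched in Python (an added illustration, not a figure from the episode): with twenty unrelated tests at the usual p < 0.05 cutoff, the chance that at least one looks significant purely by luck is about 64 percent.

```python
# A standard multiple-comparisons calculation (an illustration, not a figure
# from the episode): the chance of at least one false positive when running
# many independent tests on data where nothing real is going on.
alpha = 0.05  # the usual significance cutoff

for n_tests in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>3} tests: P(at least one 'significant' result) = {p_any:.2f}")
# With 20 tests, a spurious "finding" shows up about 64% of the time,
# which is exactly what pre-registering the specific question guards against.
```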

Speaker 3 (30:21):
And in particle physics we're lucky enough that we always
publish because a negative result is still interesting. Like if
you look for a particle because you think it's there
and the universe would make more sense for it to exist,
like the Higgs boson, then if you see it, great,
you publish it. If you don't, that's still interesting. You
still want to know, oh, there isn't a Higgs Boson,

(30:42):
because if we smash particles together often enough and we
don't see it, we can say something about the likelihood
of it not existing, because if it existed, we would
have seen it. Now, we still are susceptible to
p-hacking because we will sometimes see fluctuations, like data will
just like look like a new particle sometimes, and to
that we have a very stringent criterion. It's called this

(31:03):
five sigma threshold, which means we only claim the
particle is discovered if it's very very very unlikely to
come from a fluctuation, but we still in principle could
get fooled. And that's why we always have duplicate experiments,
like at the Large Hadron Collider, we have two of
these big detectors and they're totally independent, and that's why
you expect to see the same physics in both. And

(31:23):
if you don't, you know something is squirrelly.
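
For reference on the five sigma threshold Daniel mentions, a short sketch in Python (an added illustration, not a number quoted in the episode): the one-sided probability of a five sigma fluctuation under a normal distribution is roughly one in 3.5 million, compared with about one in 740 at three sigma.

```python
# The one-sided tail probability of a 3-sigma and a 5-sigma fluctuation under
# a normal distribution, using only the standard library (a worked number for
# the threshold mentioned above, not a value quoted in the episode).
import math

def one_sided_p(sigma: float) -> float:
    """P(Z >= sigma) for a standard normal variable Z."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3, 5):
    p = one_sided_p(s)
    print(f"{s} sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")
# 3 sigma is roughly 1 in 740, the kind of fluctuation that shows up routinely
# when many searches are run; 5 sigma is roughly 1 in 3.5 million, which is why
# particle physics reserves the word "discovery" for it.
```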

Speaker 1 (31:25):
Yeah, And in my field, we're getting better where if
you get no result or negative results, as long as
you can show that you did a sound experiment and
the answer is no, then you can still publish it somewhere.
And so Public Library of Science, or PLOS, is a
journal that encourages folks to submit their null results as
long as the science was good and you can defend

(31:46):
what you did to try to make sure that you're
not getting this file drawer effects stuff.

Speaker 2 (31:49):
So you get the no answers just as often as
the yes answers, and.

Speaker 3 (31:52):
That's helpful, right. It saves people time because if Kelly
had a great idea and it turned out the answer
is no, and then she just throws it in the
file drawer, then Sam across the country is going to
try the same thing someday and waste their time. So
it's good to get this stuff out there.

Speaker 1 (32:06):
Right, And you know, in my field, wasting time also
often means that some animals may have died in the process.
And so to me, I feel like there's this additional
ethical requirement that you get anything that you learned from
these animals out so that nobody has to go and
repeat it and, you know, more animals might have
to pass away for science. So anyway, good to get
it out there.

Speaker 3 (32:26):
All right, So then let's talk more about what happens
when you think you have an interesting result and you've
written it up and you want to publish it.

Speaker 1 (32:33):
And we're going to leave everybody on a ledge for
just a second, and when we get back from the break,
we'll talk more about the fascinating world of peer review.

(33:00):
All right, Daniel, you have a significant result. You've got
a result that looks good, or at least even if
it's no, you're sure it was no because the experiment
was done.

Speaker 2 (33:07):
Well, what do you do next?

Speaker 3 (33:09):
So we write it up, and when all of our
co authors are happy with it, the first thing we
do is we post it online. So particle physics is
a very very online science, and we invented this thing
called the arXiv where scientists post their papers, and it's
called a preprint because we post it before we submit it to

(33:29):
the journal. So for example, I finished a paper this
week and yesterday it appeared on the arXiv, and nobody's
reviewed it, nobody's confirmed it. It's just out there for
other scientists to look at. And in a week or so,
if we don't get like screaming fits from somebody saying
you stole my idea or this is deeply wrong, then
we'll submit it to a journal. So the process for

(33:50):
particle physics is first post it online, then submit it to
a journal where it's going to get reviewed, and that's
going to take months and months. But it's so slow that
particle physicists just read the preprint server and
read those papers, and nobody really cares when it comes
out in the journal later because we already have seen
the paper months earlier.

Speaker 1 (34:08):
But what if the peer reviewed version catches an error?
Do you go back and update the arXiv version?

Speaker 3 (34:14):
Okay, you can update it. And usually papers on the
arXiv are like version three, version four, and there's comments
in between. You can compare them. You can see the
mistakes getting fixed, absolutely. And some papers appear on the
arXiv and then are never published, and that's fine. I
actually had one paper like ten years ago put on
the arXiv. It took like more than a year to review,
and then they rejected it because the paper already had

(34:36):
like thirty citations, and so they were like, why publish this,
It's already out there, people are reading it and using it.

Speaker 2 (34:41):
Huh, that's such a stupid reason.

Speaker 3 (34:44):
Yeah, it was really dumb. But then we just gave up,
and now it's like one of my more cited papers
and it was never published. The system is silly. It
is silly, but I know that in other fields it's different.
Like my friends in astronomy, they never put papers on
the arXiv until after they've been reviewed, and they think
that if they send it to a journal and it's already
out on the arXiv, those journal referees are going to

(35:05):
find that disrespectful, like they didn't wait for them
to chime in. So it's definitely a field by field
culture kind of thing. How does it work for you guys?

Speaker 1 (35:14):
Yeah, so y'all were definitely the trailblazers. We subsequently came
up with bioRxiv, and I remember when I was
initially wanting to submit to bioRxiv, I was hesitant
to do it because some of our journals did say
your paper should not be available anywhere else, because we
want to be the only place where you can find it.
And that's because journals make money off of people who

(35:36):
download their articles, and so they didn't want someone else
to be able to get another version for free somewhere else.
And on bioRxiv or arXiv, all of the articles
are free. But it seems like it's now become acceptable
that you can put papers on these preprint servers
like bioRxiv, and so now a lot of people
will do that, and they do it for a variety
of reasons. Some of it is they want to get

(35:57):
feedback early, they want to get the results out
there early. But sometimes it's also like if you've got
a PhD student who wants to go looking for a job.
On their resume, if they finished a study, they'll write
in preparation next to the manuscript. But in preparation could
mean I've got the data and I still need to
do all the stats and I haven't even started writing.

(36:18):
Or it could mean I'm about to submit it to
a journal, and if you've put it on bioRxiv,
you're saying, look, it's done. I just need to start
the peer review process, or even it's in peer review,
because that process can take a long time. So anyway,
it's becoming a lot more common to put your papers
on bioRxiv, just to make them easier to access
for other people, and just to show that you really
have actually finished that study finally, after a decade or

(36:41):
however long it took.

Speaker 3 (36:43):
The clock is still ticking on your thesis there, Kelly. We'll see.

Speaker 2 (36:46):
I'll get it in preparation.

Speaker 3 (36:48):
In preparation, so then you send it to a journal,
and you pick a journal, like if you think it's
a really exciting paper, you send it to Nature or Science.
If you think it's less exciting, you send it to
a journal with a smaller impact factor. So in particle
physics we often publish in like Physical Review or in
the Journal of High Energy Physics or these kinds of journals, and
it goes to an editor, and an editor finds people

(37:10):
to review it. These are the peers, and they reach
out to folks they know who are experts in the field,
who aren't your supervisor or your brother or this kind
of thing, and ask them to review these things. And
Katrina is a senior editor on a journal, so I
see this process all the time, and she has to
read the paper, think about who might know about it,
and then find people to review it. And usually you

(37:30):
have one to three people read this paper and give
comments on it. But finding reviewers can be tough.

Speaker 1 (37:37):
So for our grants that we talked about earlier, you
have to make sure that the person reviewing the grant
doesn't have a conflict of interest. They're not your brother
or your supervisor or something like that. And in my field,
the same holds true for when you're reviewing manuscripts. Is
that true for your field as well?

Speaker 3 (37:51):
Yeah? Absolutely, Yeah. You have to say, oh, I can't
read this one because that was my student or something
like that.

Speaker 2 (37:58):
Yeah. How long does review usually take for you?

Speaker 3 (38:00):
I'd say it's you know, like four to eight weeks
or something like that.

Speaker 1 (38:04):
Wow. How about you guys? I'd be happy with four to
eight weeks. That sounds good, but it can be, you know,
sometimes six months, and at six months you start writing
the editor being.

Speaker 2 (38:12):
Like, come on man, this is crazy. But yeah, it
can be hard to find reviewers.

Speaker 3 (38:18):
Because reviewers are just other scientists busy writing their own
grants and writing their own papers and doing all their
work and picking up their kids from daycare and stuff
like this and trying to get through that mountain of laundry.
You know. I think that people forget that, like science
is just people, right, and people are busy. Yeah, And
if you're out there and you've written some treatise on
the fundamental physics of the universe and nobody has read it,

(38:38):
it's not because we're gatekeeping. It's just because we're busy,
you know. And there's lots of those. And I often
say no to editors who ask me to review stuff
because I'm just too busy to do it in a
timely manner.

Speaker 2 (38:48):
Yeah, no, me as well.

Speaker 1 (38:49):
And I think it's worth noting that while we're doing
this review work, we're doing it for free. Yeah, Like
when you're on a grant review panel, you can lose
weeks of time to reviewing these things and you don't
get reimbursed for that time. And the same thing goes.
So when I review a manuscript, I read it for
one day, takes me a couple hours to read carefully,
and then I sit on it for a few days,

(39:10):
and that's the main thing that is occupying my brain
in the background for those days. And that could be
like I could have been thinking about the introduction to
my next book or something, but instead I'm thinking about
the methods to.

Speaker 2 (39:19):
Make sure it made sense.

Speaker 1 (39:21):
And then I spend another three to four hours reading
it again and writing in detail my comments. And if
I think there's some literature they missed, I go and
I find it and like it's a multi day process.
When I commit to reviewing a paper, and reviewers don't
get paid for that, nor do the editors.

Speaker 2 (39:36):
How long does it take you?

Speaker 3 (39:37):
Well, I wish that all of my papers were reviewed
by somebody as thorough as you. But my process is similar,
like an initial read, some thoughts, and then let it sit.
Sometimes I'll discuss it with people in my group. We'll
read it together, and then I read it again in detail.
And then, especially if the review is negative, I wait
and I sit on it, and I come back later

(39:58):
I'm like, was this fair? And also, most importantly, was
I nice enough in my comments? Constructive and helpful and
not like just negative or harsh in any way? I'd
like try to take a sandpaper to anything that's negative
and smooth it over, because I've been on the other
end of harsh reviews and it doesn't help anybody

(40:19):
to get zingers in there and like especially because a
lot of my papers are written by young people and
it's often their first paper, and then I have to
show them this review that's like, this guy's a jerk.
This isn't necessary. I have to explain to them that
the anonymity here is protecting them. But you know, most
reviewers are fair and are thoughtful and are courteous, and
so often we get like helpful feedback, like what about this,

(40:40):
or have you thought about this question? Or what would
you say to this concern? Do you find your feedback
to be mostly useful?

Speaker 1 (40:46):
Yeah, But most of the time the feedback I get
improves the paper. I did have one reviewer when it
was one of my first papers who told me they
were very disappointed in me.

Speaker 3 (40:54):
It's like, you're a jerk. Disappointed? Thanks, Dad.

Speaker 2 (41:01):
And that was because my supplemental materials were too long.
I was like, God, give me a break.

Speaker 1 (41:06):
But anyway, so in my field, we're moving towards double
blind review, so the reviewers are not supposed to know
the names of the authors, and the authors are not
supposed to know who reviewed their paper. And that's a
really great idea. So that you feel like you can
honestly tell someone if their paper was good or not.
You don't have to worry about someone getting mad at
you in particular, though in practice there's usually like four

(41:28):
people who could review your paper, so you know, like
you know, it was Joe or Francine, and Francine tends
to be meaner and so.

Speaker 2 (41:36):
But anyway, so yeah, what about your field? Is it
blind or double blind?

Speaker 3 (41:40):
It's only blind in one direction. The review is anonymous,
but you see who the authors are. I know, in
some other fields, like in top computer science conferences, for example,
it's double blind, and that's helpful. But also it's not
that hard to figure it out. If you wanted to
know who these people are, you could figure it out.
But you might also be wondering, like, what is the
job of the reviewer exactly? Does the reviewer have to make

(42:01):
sure the experiment was right? Like if I'm checking Kelly's work,
I'm the reviewer, do I have to go out and
do the experiment and make sure she was right? And
that's the thing: peer review is not replication. The
job of the reviewer is to ask, do the claims
of the paper, are they supported by the evidence provided,
is the logic there, is there mathematical tissue between the

(42:22):
work that's been done and the conclusions that are being claimed.
And also, is it well written, and are the citations there? And finally,
is it interesting? Is it important? Does it play a
role in the scientific conversation? Which is a little bit
subjective of course, but science is by the people and
for the people. But if you're working on something nobody's
interested in, then nobody's going to read your paper. Then

(42:43):
a journal might say this is solid work, but nobody cares.
It's a boring question, or you didn't learn anything interesting.
The reviewer's job is not to say, hey, I did
this experiment also, and I know that this is a
feature of the universe. That's not the task of the
reviewer. It's not their job to do that for you. It's
also not the task of the reviewer to say, like, hey,

(43:03):
this is a cool idea. I would have done this differently,
and so you now need to go back and do
all these additional studies that I think are important. And
you see this a lot in peer review that reviews
come back and say this is cool, but also do
xyz and like that's frustrating to me, because that's not
the job of the reviewer.

Speaker 2 (43:19):
Yes, yep, Nope. That's super frustrating.

Speaker 1 (43:22):
And I think it's worth noting that sometimes bad papers
get through this procedure, and I one of the ways
that bad papers get through is that reviewers aren't required,
and this makes sense, they're not responsible for
going through, for example, the code for the statistical models
that you ran. And so if somebody, you know, checked
their code a bunch of times, but they forgot a

(43:43):
minus sign somewhere, the result could be wrong and maybe
nobody knows. And I have known a couple of people
who afterwards have gone to use those models again for
some other question and caught their mistake. And then a
good scientist will contact that journal and say, I have
to retract my paper, or I have to
submit errata and let everybody know I forgot the minus

(44:05):
sign and this is a different result, and that's really painful.
But at least they're being honest, and that's great, and
I think that's really important. Or some people are dishonest
and there's no way for the reviewer to know that.
My field had somebody who was one of the
biggest names in the field, and there's this new requirement

(44:25):
in our field.

Speaker 2 (44:25):
Which is a couple decades old now. Now I'm old.

Speaker 1 (44:27):
But where you have to put the data that you use,
that you collected as part of the experiment online, somewhere
in a public place where people can download your data.
And again, you know, in a field where animals are
being used to collect data. It's great to know that
those data could be used by other people to ask questions.
You can get more information out of them. But it
also means that if someone gets suspect results, you can

(44:50):
pull their data and rerun the models. And when somebody
was looking at the data, it was clear that the
numbers just didn't make sense, like one column was always
the prior column times three. Anyway, after more scrutiny,
it became clear that this person was making up their data.
But you know, we have this new check where your
data have to be available to everyone else, and that

(45:11):
has helped us sort of tease out people who are
being dishonest. So the system is evolving and getting better
over time. But still sometimes you get stuff through.

Speaker 3 (45:19):
You certainly do, and there are folks out there who
are like combing through papers to find this stuff and
there's a scientist, for example, her name is Elisabeth Bik,
and this is her passion. She combs through old papers
and finds evidence of like photoshopping in biology. You know,
you take a picture of your gel and it's supposed
to have this blob and she finds, oh, this blob

(45:40):
is the same as that blob, or it's been reverted
or whatever. And so there are definitely lots of ways
that you could find this stuff now that we couldn't
have done beforehand. But peer review is not a guarantee
that this hasn't been done. Reviewers should look for
this stuff and call it out if they see it.
But stuff definitely can get through. It's not a perfect system,
you know, it's one of these. It's a terrible system.

(46:01):
But it's also the best that we have so far.

Speaker 2 (46:03):
Just an aside here, I am a massive wimp.

Speaker 1 (46:05):
If I like purposefully fabricated data in this era where
everything is like public and people can peek behind the curtain.

Speaker 2 (46:13):
I would never sleep again. I'd be like, someone's gonna
find me out, and it would be like I would
be miserable the rest of my life.

Speaker 1 (46:20):
I'd rather have a bunch of low impact papers than
worry that I was gonna get called out for lying,
but anyway, I am a wimp.

Speaker 3 (46:28):
Yeah, And so the highest standard really is not just
that something has been peer reviewed, but that something has
been independently replicated. Like you might have a scientist who's
doing totally solid work and see some effect in their lab,
but doesn't realize that it's an artifact due to some
condition, the humidity or the local gravity or the trains
or something, causing some influence on their experiments. And

(46:51):
so you want people on the other side of the
world who built it differently, who made different assumptions, who
are sensitive to different stuff, to reproduce it. You might remember this
excitement about high temperature superconductors a couple of years
ago, LK-99. Korean scientists claimed to have created
this room temperature, cheap superconductor which would revolutionize the industry,

(47:11):
and so very quickly people were out there trying to
replicate it, and people were excited. But until another independent
group built it and showed that it was real, nobody
really accepted it and thought we were in a new era.
And personally, for example, when we were discovering the Higgs boson,
I saw the data accumulating around a mass of one
hundred and twenty five. We were looking at it constantly.

(47:32):
It was building up and up and up. But until
I heard that my colleagues around the ring also saw
an effect at the same place, I didn't personally think, Okay, yeah,
we found this thing. And so this sort of independent
replication is really sort of the highest standard. What do
you think, Kelly, I.

Speaker 1 (47:47):
Think so, And to me, this is one of the
current weaknesses in science, at least in my field. So
replicating data is absolutely critical. But if you write to
the National Science Foundation, for example, and you're like, oh,
I just want to do the same experiment that Maria did,
but I'm going to do it on another continent, it's
going to be hard to get money for that because
it's not a new idea, and it's also probably not

(48:10):
going to get published in a top tier journal. And
so if you are training PhD students who are going
to want to get jobs and they're just, quote unquote,
replicating somebody else's work, that's not going to help
them get a job as much as following up on
their own exciting new idea. And so we've got this
incentive structure that's actually not super great for encouraging people

(48:31):
to replicate each other's studies, but I do think it's
absolutely critical, and I would love to see us sort
of work on that incentive structure to make replication just
as important as the initial thing that got done.

Speaker 3 (48:43):
That's interesting. In particle physics, there's maybe a slightly healthier environment.
We have a few signals of new physics that we've
seen in experiments that we're all curious about, but nobody
really accepts because it doesn't quite make sense. And then
there's been another generation of experiments to follow up on those.
For example, there's a very significant signal of dark matter
an experiment in Italy called DOMA, and nobody really believes

(49:06):
it because we've never seen it in another experiment. And
people have set up other experiments very similar to DAMA,
but in another part of the world with different conditions,
in a different cave, for example, and they haven't seen the
same effects. And so those were experiments definitely inspired by
this signal to test this in other conditions. And in
the last few years there was this quote unquote discovery

(49:27):
of a fifth force in this experiment in Hungary, and
folks in Berkeley are trying to replicate it, and there's
an experiment in Italy trying to probe it as well,
And so there's definitely like follow up work. But usually
those follow up experiments are a little bit broader, and
they try to not just check this one new result,
but also learn something else along the way to make it,

(49:47):
so they're also covering new ground. But I agree we
should definitely have replication. But it comes back to funding, right?
Like if you're a reviewer, what would you rather fund:
let's check Kelly's study, or let's do this brand new
thing that could tell us something new about the universe?

Speaker 1 (50:01):
Yeah. And when I write a grant, I try to
work my own replication in there, like as part of
asking a new question, I'm going to repeat the experiment
but maybe put in a little tweak. But then I can
at least make sure that I'm still getting the same
results in, you know, a different space or with slightly
different lighting or something like that. And, you know,
to defend my field a little bit, especially

(50:21):
when animals need to be euthanized as part of the experiment,
you know, you might be a little bit hesitant to
be like, well, this is just for replication. Well, if
you already got an answer, do animals need to get euthanized again? So anyway,
we do also try to ask additional questions while
trying to replicate, but it would be nice if we
had more incentive for that.

Speaker 3 (50:37):
And there are folks out there just doing this. Like, you
might have heard of the replication crisis, which comes out
of people finding papers in the literature and saying like,
all right, let's reproduce this, let's see if it holds up.
And this is one reason that, like, p-hacking is
a thing people talk about, because we discovered that some
of the results in the literature are just statistical artifacts that
were selected in order to get a paper out there.

(51:01):
And so I think what you're seeing in a broader
sense is that science is self-correcting. Just like we saw
that we needed to add external reviewers and members of
the public when we're talking about the experiments you do
on animals. Now we see like, okay, we need some
sort of way to protect against this kind of abuse
as well. And so you know, the process is a
living thing. It's not like science is a crisp philosophy.

(51:23):
What we call science has changed over the last ten years,
fifty years, one hundred years and it will continue to
evolve and I hope keep delivering great truths about the universe.
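
To illustrate the p-hacking point above: the following is a minimal, hypothetical Python sketch, not taken from any study discussed in the episode, showing how running many tests on pure noise and reporting only the "significant" ones can manufacture a result that looks real.

```python
import random

# Minimal sketch of p-hacking: test many noise-only "hypotheses" and
# report only the ones that happen to look significant.
# Everything here is simulated noise; any "discovery" is a statistical artifact.

def fake_study(n=30, seed=None):
    """Simulate two groups drawn from the SAME distribution (no real effect)."""
    rng = random.Random(seed)
    group_a = [rng.gauss(0, 1) for _ in range(n)]
    group_b = [rng.gauss(0, 1) for _ in range(n)]
    return group_a, group_b

def p_value(group_a, group_b, permutations=2000, seed=0):
    """Permutation test: how often does shuffled data show a gap this large?"""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    hits = 0
    for _ in range(permutations):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            hits += 1
    return hits / permutations

# Run 20 independent "studies" of pure noise and keep only the flashy ones.
significant = []
for i in range(20):
    a, b = fake_study(seed=i)
    p = p_value(a, b, seed=i)
    if p < 0.05:
        significant.append((i, round(p, 3)))

print(f"{len(significant)} of 20 noise-only studies look 'significant':", significant)
# With a 0.05 threshold, roughly 1 in 20 noise-only comparisons clears it by
# chance alone, which is why independent replication (and reporting all
# attempts, not just the winners) matters.
```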

Speaker 1 (51:33):
Yeah, me too. And so Steven asked, how can you
know if something is good science or not? I feel
like that was the big push behind the question. And
so what do you think when you read a paper
for the first time?

Speaker 2 (51:46):
What do you look for?

Speaker 3 (51:47):
Well, you know, science has to stand on its own.
So the thing I look for is like, does the
paper make sense? Does it lay out an argument? Do
the conclusions follow from the evidence presented? That's the most
important thing to me. But before I'm personally convinced
that something is a real part of the universe, yeah,
I need to see it reproduced by another group. I've
often reproduced papers myself, especially if they're like statistical, just

(52:08):
to make sure I understand, like, how is this calculation
being done? Exactly? How do you get from step three
to four? Because I really want to understand what are
the assumptions being made and think about whether those are
broad enough. So I would say that, yes, peer reviewed
papers can be wrong, but that's part of science, and
the highest standard I think is independent replication.

Speaker 1 (52:28):
What do you think, Kelly? Yeah, no, I agree. I mean,
when I read a paper, I'll look to see, you know,
what journal is it in. So in this day and age,
there are some new, what we call, predatory journals,
where they will encourage people to submit, but
they publish just about anything they get. And peer review
isn't so much about peers having a chance to turn
down a study, but they'll like give a

Speaker 2 (52:49):
Little bit of input. But the paper gets published one
way or another.

Speaker 1 (52:52):
So you need to look to see what kind of
journal it was published in, and then you know, if
it's a field I know about, I'll look to see
did you cite the paper that I would expect you
to be citing. Was your literature search deep enough? And
were your experiments designed well? What else could those results
have meant? And then who funded the study and stuff
like that. But you know, if you are a member

(53:12):
of the general public and you can't do all of
that and you're just reading a pop sci summary, I
would look again at, like, where is this being published?
You know, like if it's The Atlantic by Ed Yong,
it's probably great. And if the science journalist had other
scientists weigh in on what might have been wrong about
the study, that's a good sign. And so you look
for where you're getting the information, how critical they seem

(53:35):
to be, and stuff like that. What do you look
for in a pop sci article?

Speaker 3 (53:39):
Yeah, in a pop sci article, I definitely look to
see whether they have talked to people who are not authors,
you know, other people in the field who know this stuff,
and is it just a press release from the university,
or is it a journalist who's actually thought about this stuff
and written something up? Those are definitely the things I
look for in pop sci pieces, because there's the temptation,

(54:00):
in the sort of marketplace of ideas, to overinflate the
meaning of an incremental study.

Speaker 1 (54:05):
Yep, yep, all right, So in general I would say
that you know, we have this process in place to
try to make sure that the best ideas are the
ones that move forward, and that we're all checking to
make sure no one missed anything important along the way.
Some stuff gets through either by mistake or because some
people are unscrupulous, but hopefully that doesn't happen that often.
But you know, the system is evolving at all times.

(54:27):
And if you have a study that you're interested in,
but you don't know if you can trust the press
release or whatever, you can send it to us, and
if it happens to be in our wheelhouse, we're happy
to weigh in and tell you what sort of, you know,
set off our alarm bells in our heads, or what
we liked about the study, and we're happy to help
people figure out what was done well and what was

(54:48):
just squeaking through.

Speaker 3 (54:50):
All right, thanks very much, Steven, for asking this question
and for shining a light on the inner workings of science.

Speaker 1 (54:56):
If you'd like to reach out to us, send us
an email at questions at Daniel and Kelly dot org
and let's see what Stephen had to say about our answer.

Speaker 6 (55:04):
Hi, Daniel and Kelly, it's Steve here. Thanks so much
for answering my question. I think you shed a lot
of light on some topics that most of the general
public are not aware of, and I really appreciate that.
It's actually really fascinating to learn that just because something
is peer reviewed doesn't mean it's one hundred percent fact.

(55:25):
And it's definitely a lot to take away here, so
I appreciate you guys diving into the topic and looking
forward to the next episode.

Speaker 1 (55:40):
Daniel and Kelly's Extraordinary Universe is produced by iHeartRadio.

Speaker 2 (55:44):
We would love to hear from you, We really would.

Speaker 3 (55:47):
We want to know what questions you have about this
extraordinary universe.

Speaker 1 (55:51):
We want to know your thoughts on recent shows, suggestions
for future shows.

Speaker 2 (55:56):
If you contact us, we will get back to you.

Speaker 3 (55:58):
We really mean it. We answer every message. Email us at
questions at Daniel and Kelly

Speaker 1 (56:04):
Dot org, or you can find us on social media.
We have accounts on X, Instagram, Blue Sky, and on
all of those platforms. You can find us at D
and K Universe.

Speaker 3 (56:14):
Don't be shy, write to us