Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow your Mind from how Stuff
Works dot com. Hey, welcome to Stuff to Blow your Mind.
My name is Robert Lamb, and I'm Julie Douglas. You know, Julie,
science has done some good things for us. It's done
a lot of good things. I mean, it has really
moved humanity forward, right, and made us, in some ways, the
(00:26):
kind of success story of the species that we are. Yeah,
it's kind of the skeleton of human culture, the
thing upon which we grow and continue to grow. And,
you know, you can look at just about any
area, right: medical science, exploration of inner and outer space, um,
increasing knowledge of the self, the brain, our
connection from the brain to the body. I mean,
(00:48):
pretty much everything we talk about every week is
a testament to what science is doing and has done
for humans. And while the pursuit of science, what we
think about as science, has been around for a very
long time, this pursuit of knowledge and truth, the word
scientist is only about one hundred and eighty years old. Before that,
(01:09):
a person might be called a natural philosopher. And before
that you had economists, you had philosophers, and what we
now call scientists all co-mingling under the same roof,
and this affected how science, and how we think
of it, was defined and pursued. And we sort of
assumed that science and the scientific method were in place
(01:31):
from the get-go, but in fact they hadn't really
been defined and the rules tightened, uh, you know, until
a couple of hundred years ago, because economists were pulling
for deductive reasoning, right, and scientists were saying, no, I
think there's more of this inductive reasoning, which is
(01:53):
this premise that you take an idea and then
you try to take it down to the studs and
prove it wrong, even though you might want it to
be right. And the whole idea there is that you're
trying to get at this kind of truth. And this
is now something called the scientific method. But we sort
of take this for granted, the fact that this
is only a fairly recent development in the long history
(02:16):
of humans. Yeah, I mean, it's basically how science works.
There were people who managed to make it work in
the past, but it wasn't until recently that we actually
said, this is what works, and this is what we
should stick to. Now, a lot of the advances in
recent centuries you can boil down to a simple
idea: trust, but verify. And this plays into our
(02:39):
peer review system. Right, one scientist writes a paper, or
a team of scientists write a paper, maybe there's a
big breakthrough in it, maybe not. But then the idea
is that their peers come along, look at the paper
and try to replicate the results, just you know, tear
it apart, see what's happening in the paper and say, yes,
I agree this is working, or I have problems with
(03:01):
this or that, or this is complete bunk. Yeah. I mean,
it's this idea that science can police itself. And yet
we have some statistics coming out that point to other
factors going on, and that perhaps we're not pursuing knowledge
for knowledge's sake in some cases, or truth. And
(03:21):
we'll discuss more of those factors in a bit. According
to the Economist article How Science Goes Wrong, in two
thousand twelve, biotech firm Amgen reported that they could
reproduce just six of fifty-three landmark studies in cancer research,
and earlier, Bayer, the drug company, managed to repeat just
a quarter of sixty seven similarly important papers. Now we're
(03:44):
not taking on this topic today because we think that
we are experts on it (by no stretch of the
imagination are we), but because we do rely on a
lot of studies, and so we wanted to point this
out today to, just for ourselves, better understand what are
the conditions that lead to a good, solid study or experiment.
What are the conditions that lead to dubious data? Yeah,
(04:06):
certainly worth keeping in mind too when you find yourself
reading science journalism articles, you know, uh, asking yourself, well,
what is the study? You know, what are its problems,
or what could the problems be? Because of what
we'll discuss, there are a number of problems that can
and do occur in modern peer-reviewed science. Now,
one of the things that will come up sometimes when
(04:28):
people write on this topic is careerism as one of
the factors that is problematic. And that's because we've all
heard the maxim publish or perish, right, and the spirit
of it is not so bad. I mean, the spirit
of it is really like less than a threat and
more like, hey, this is a challenge to push science forward.
(04:52):
Put forth your multiple lines of evidence, your hypotheses, your theories,
because we all want to share information. We want to tear
it apart, we want to try to validate it or
invalidate it, and generally create a better understanding of the
topic or the issue. So again it's an attempt at
reaching some sort of truth. And yet the reality of
(05:14):
publish or perish now is more that it's this kind
of pressure to produce. So it's not enough for, say,
a faculty member at a university to write a few really
good papers a year. Now they have this pressure to
write several. And so there's this idea that questionable results
could come out of this, and instead of maybe making
(05:37):
it to a first tier journal, maybe that data goes
to a third-tier journal. And yet maybe it shouldn't necessarily
go anyplace at all. And the problem, as outlined in the
Economist article How Science Goes Wrong, is, quote: In order
to safeguard their exclusivity, the leading journals impose high rejection
(05:57):
rates, in excess of ninety percent of submitted manuscripts. The most striking findings
have the greatest chance of making it onto the page.
Little wonder that one in three researchers knows of a
colleague who has pepped up a paper by say, excluding
inconvenient data from results based on a gut feeling. So
(06:18):
what we're talking about here is cherry-picking information. And then
all of this, this kind of careerism is compounded by
the pressure to generate grant funding. So there's this idea
that more and more scientists are having a bigger percentage
of their salary covered by contingent research funding dollars.
(06:38):
So that means that you now have this pressure to
keep the flow of funding going with positive results. So
you can say, yeah, see, this is exactly what I
thought was going to happen, proving it out. That shouldn't
be the case. There shouldn't be those sorts of strings
tied to it, and in an ideal world that wouldn't
(06:59):
be the case. But that's what we're dealing with. And
then there's this: failures to prove a hypothesis are actually rarely
offered for publication, let alone accepted. Uh, you know, and
you can sort of chalk a lot of this up
to, you know, what scientific journal doesn't want to be
on the forefront of science, you know, full of amazing
new discoveries and wonderful new ideas, right? That's
(07:21):
you know, that's really essential to the overall drive of science.
You don't want to fill your paper with a
bunch of failures, right? But the failures are important, right?
You need to know what hasn't worked so you can
try and figure out what does work. You need to
know what's false so you can figure out what's true. Yet,
as of two thousand thirteen, negative results accounted for only
fourteen percent of published papers, and that was down from
(07:41):
thirty percent in nineteen ninety. And then, in a similar vein,
we see the peer review process often sees
peers missing the errors in a paper. The very thing
they're supposed to do is, you know, figure out
what's potentially wrong with the work. So both of these
tend to handicap the process to a certain extent. You know,
(08:04):
it's interesting because my daughter's school has nine different design
principles of education, and this is something that they
actually present to the students. So kindergartners are being taught
about failure, and actually celebrating failure, for this very reason,
because the idea, again, is that you cannot have successes
without failures. And, uh, it makes me think about Edison and
(08:27):
the light bulb and the hundred plus iterations of the
light bulb, all the failures that preceded those. And yet
that's not the flashy stuff, right? That's not necessarily what
a first-tier journal is going after. Like, hey, tell
me about your spectacular failure. Yeah, Like I keep thinking
about science in terms of slime mold. We did an
episode on the slime mold way back, where you would
(08:47):
put a slime mold in a maze, and it's solving
the maze to get to resources on the outside of
the maze. And so these tendrils of slime mold are
trailing through the maze, and if they reach a dead end,
that tendril dies and fades back and it doesn't
go down that way again. And science kind of works
the same way, in that you need to know where
the dead ends are, otherwise you're just gonna keep sending
(09:08):
your tendrils down there. Well, and it's also
such an elegant analogy, because they're going after that sugar, right,
that resource, and so they're eventually going to find
their way to the resource. But then
it becomes this question of: is that resource, that piece
of sugar that the slime mold is after, is this
(09:29):
truth or is this money? And we'll talk a little
bit more about that later, but I thought at this
point I would go ahead and drop in a little
information about overgeneralization and extrapolation of results, because this
can occur in two ways. The first is applying findings
from one target group to another target group within the
same population. So an example would be you have this
(09:53):
new cholesterol drug and it's been tested on females aged,
uh, thirty-two to fifty. Well, you can't make
the assumption that the drug can also do the same
thing for a different population, say women over sixty-five
or men. The second fallacy is applying survey results
to populations not living in the area covered by the survey.
(10:14):
So this, to me, was very clear cut.
Let's say that you're trying to establish the mortality rate
for a certain neighborhood within a zip code. All right,
you do the research, you do the surveying, and then
you've got your data. Now, it would be beneficial
to find out other neighborhoods' mortality rates. But you
make the assumption that just because the borders of this
(10:35):
other neighborhood are butting up against the one that you've
just surveyed, that they have the same mortality rate. Well,
that is erroneous thinking, because as we know and we
have seen over and over again, you can have really
poor neighborhoods butting up against very prosperous ones, and that
skews the data, because the very prosperous ones are going
to have a far different mortality rate than a poor neighborhood.
(10:57):
And yet these are some of the things that can
lead to problems with studies and experiments. And then of course there
is conflict of interest, which is a big one. Uh,
and we can date a lot of this back to
the Bayh-Dole Act of nineteen eighty, and this came
along to encourage technology transfer from universities to industry. The
(11:17):
idea being that it would facilitate financial relationships between academic
biomedical researchers and the biotechnology industry. And you know, obviously
there's a lot of good that was going to come
out of this and has come out of this. Uh,
these relationships lead to the development
of improved drugs and medical devices. Uh, but on the
(11:39):
other hand, there's this huge financial aspect to the relationship.
Financial relationships emerge that can cause conflicts of
interest between a researcher's scientific and ethical principles and that
gleam of financial gain, coming back around to what you said
about what is the bait on the
outside of the maze? Is it knowledge and understanding?
(11:59):
Is it increasing our scientific understanding of a
particular ailment? Or is it mere financial gain? And of course,
financial gain for a biomedical corporation tends to boil down
to treatment, the drugs that can be thrown at a
particular ailment, the medical devices that can be thrown at
a particular ailment. And a two thousand nine study
from Dr. Reshma Jagsi, assistant professor of radiation oncology at the
(12:24):
University of Michigan Medical School, compared one thousand, five hundred
and thirty-four studies involving cancer research and found that studies with
industry funding focused on treatment, again drugs and medical devices, sixty percent
of the time, compared to thirty-six percent of the
time for other studies not funded by industry, and the
(12:44):
studies funded by industry focused on epidemiology, prevention, risk factors,
screening and other diagnostic methods only twenty percent of the
time, versus forty-seven percent for studies with no declared industry funding.
So the take-home here seems to be: the more
money is involved from the biotech industry, the
(13:05):
more focus there is going to be on the mere
treatment of an ailment, versus, you know, actually
being able to prevent it or figure out how to
screen for it by looking at risk factors. Which may lead
to misleading statistics or interpretations of the data. And what
I'm talking about is absolute versus relative percentages. This is
(13:28):
from the article Bad Science: Common Problems in Research Articles.
This was published on Health Readings. Quote: suppose that there
was a medical problem that caused two people in one
million to have a stroke, and suppose there was a
treatment that would reduce the problem to only one person
in one million. This would be an improvement of point
(13:48):
zero zero zero one percent in an absolute sense, or,
as this author says, no big deal, right. However,
if it had been worded using relative percentages, it could
have been stated, quote, new medical treatment yields a fifty
percent reduction in risk of stroke, and this would be
(14:11):
very misleading. But it's unfortunately a common practice that you
see from time to time, and so again you see
how it's not exactly wrong. It is a fifty
percent reduction, from two in one million people to one in one million,
but it's not really an accurate way of saying it. It's just how do
(14:31):
you end up framing it, how you present the overall statistics
that you're dealing with. Yeah, semantics matter.
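To make the arithmetic behind that framing concrete, here is a minimal sketch in Python using the hypothetical rates from the quoted example (two strokes per million reduced to one per million); the variable names are purely illustrative and are not from the article.

```python
# Minimal sketch of the absolute vs. relative risk arithmetic in the quoted
# stroke example (hypothetical rates, not real clinical data).

baseline_rate = 2 / 1_000_000   # 2 strokes per million without the treatment
treated_rate = 1 / 1_000_000    # 1 stroke per million with the treatment

absolute_reduction = baseline_rate - treated_rate        # 0.000001, i.e. 0.0001 percentage points
relative_reduction = absolute_reduction / baseline_rate  # 0.5, i.e. "a 50 percent reduction in risk"

print(f"Absolute risk reduction: {absolute_reduction:.4%}")  # prints 0.0001%
print(f"Relative risk reduction: {relative_reduction:.0%}")  # prints 50%
```

Both numbers describe the same change; only the framing differs, which is exactly the point being made here.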
Now, another area of concern is that of unpublished clinical trials. A two
thousand twelve study from Yale School of Medicine researchers found
that fewer than half of a sample of trials primarily
or partially funded by the National Institutes of Health were
(14:53):
published within thirty months of completing the clinical trial. So,
in other words, the research findings here are not being
disseminated half the time. So the scientific process is disrupted,
undermining the effort and the available material for peer review. Now,
according to study author Dr. Joseph Ross, there are probably a
number of reasons for lack of publication, such as not
(15:14):
getting accepted by a journal (and we already hit on
the high rejection rates), or not prioritizing the dissemination of
the research findings in the study. Either way, this disrupts the process.
This disrupts the strength of the peer review system.
Another factor is something called selective observation. Now, you've probably
(15:35):
experienced your own selective observation before. My example is every
time I get into the shower, my phone rings, right. Uh,
and it's a perception that is based on the annoyance
of my phone ringing and my inability to get to it.
But then, you know, I tended to disregard all
the times that my phone didn't ring while I was in
(15:56):
the shower, and so I was practicing confirmation bias and
ignoring the other data, skewing my own statistics. So selective
observation in science is essentially trying to land on a
conclusion based on an existing bias or belief. For example,
a researcher who is studying obesity may have a bias that
obese people lack willpower, and as a result, they
(16:18):
may construct an experiment that involves a plate of donuts
in a conference room at work. But if that researcher only
records data about obese subjects and doesn't record non-obese subjects, well,
then they have a biased experiment on their hands. In
other words, uh, if they don't go out of their
way to try to prove themselves wrong, they're not exercising
(16:39):
the principles of the scientific method.
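As a purely illustrative aside, here is a small Python simulation of that donut scenario. It assumes, just for the sake of the sketch, that obese and non-obese subjects reach for a donut at exactly the same rate; the point is that recording only one group still produces a number that appears to confirm the researcher's bias. The rates and group sizes are made up.

```python
import random

random.seed(1)

# Toy assumption for this sketch: everyone takes a donut at the same 40% rate,
# so there is no real difference between the two groups.
def takes_donut() -> bool:
    return random.random() < 0.40

obese_group = [takes_donut() for _ in range(200)]
non_obese_group = [takes_donut() for _ in range(200)]

# Selective observation: the biased researcher only writes down the obese subjects.
recorded_rate = sum(obese_group) / len(obese_group)
print(f"Recorded rate (obese subjects only): {recorded_rate:.0%}")

# The comparison that never gets recorded: the control group behaves the same,
# so the number above says nothing about willpower.
control_rate = sum(non_obese_group) / len(non_obese_group)
print(f"Unrecorded rate (non-obese subjects): {control_rate:.0%}")
```

Without that second number, the first one looks like evidence; with it, the apparent effect disappears.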
All right, you know, let's take a quick break, and when we come back we
will discuss weird science. All right, we're back. Weird science,
weird psychology, and I'm not talking about the eighties classic
(17:03):
as it is. Um, no, WEIRD is a phenomenon that
plagues a lot of psychology and other social science studies.
This is when the participants are overwhelmingly, and this is where
WEIRD comes in, W for Western, E for educated, and they're from
I for industrialized, R for rich, and D for democratic countries.
(17:24):
So WEIRD humans are serving as the basic test subjects
in a lot of these studies. And you can also
add in that WEIRD humans are also often college students
in the United States, participating in studies for class credit. So,
especially in the social sciences, the risk is that so
(17:44):
called WEIRD populations are actually the outliers of the human population,
as opposed to a good standard example of human behavior.
And you know, you see this, You see shades of
this time and time again. Right. You look at a
study and it was clearly a study that was conducted
on campus with students, and in your better studies you
(18:04):
see them branching out from that and saying, uh um, well,
all right, in this first study we looked at students,
but then we went into an impoverished neighborhood or in
some cases, first we looked at some US participants and
then we also went and looked at some participants in
Hong Kong, that sort of thing. Um. And so obviously
there's a lot to consider here
with the software of psychology, right, because there's so much
(18:26):
about human culture and, uh, your relations within
your particular group. But it also bleeds into the hardware
of physiology. In two thousand fourteen, Liverpool University had a
study examining rapid eye movements called saccades among groups of
mainland Chinese, British Chinese, and white British test subjects, and
(18:48):
it found that Chinese ethnicity was more of a factor
than culture in high saccade counts. So the mainland Chinese
groups scored high saccade numbers, as did their British Chinese counterparts,
despite the many cultural differences between the two groups. So
lead author Dr Paul Knox argued, quote, the human brain
is not just amazingly complex in general, but also highly
(19:12):
variable across the human population. Mm hmm. And that variability
takes us to the next entry here, which is animals. Now,
we have talked about how much rodents have um contributed
to science, and they absolutely have, but we do have
problems where animal studies do not reliably predict human outcomes.
(19:38):
And this topic is really a complex one, but there's
a paper on the topic by Michael B. Bracken who's
from Yale University, and he writes in his paper, Why
Animal Studies Are Often Poor Predictors of Human Reactions to
Exposure, that one reason is probably because animal experiments do
not translate into replications in human trials and cancer
chemoprevention because they're poorly designed, conducted, and analyzed. Now,
(20:04):
another possible contribution to the failure to replicate the results of
animal research in humans is that reviews and summaries of
evidence from animal research are inadequate when it comes to methodology.
In one survey, only one in ten thousand Medline records
of animal studies were tagged as being meta-analyses,
compared to one in one thousand human studies. And in
(20:28):
recent reports, the poor quality of research was documented by
a comprehensive search of Medline, which found only twenty five
systematic reviews of animal research. Other studies similarly found only
thirty and fifty seven systematic reviews of any type of
animal research. So, um, the reason that Bracken points this
(20:48):
out is because he says these kinds of deficiencies are
important because animal research often provides the rationale for hypotheses
studied by epidemiologists and clinical researchers. Moreover, if you look
at the genetics of this, it gets even more muddled.
And the reason for that is because with rodents, and
(21:09):
one of the reasons why we use them is because
we can change their genetic background within a couple of generations.
We can tinker with the genes. And that's great because
that can really help us to study certain conditions. However, um,
those rodents would yield really consistent results in disease expression.
(21:30):
But humans, we are far more Wild West when it
comes to genetics and genetic background, and that would
factor into how the human disease is expressed, and this
would yield mismatching results between humans and animals. It's
a layer cake of animal confusion. Indeed, it is. Um, now,
on top of everything we've discussed here, there are plenty
(21:53):
of additional methodological pitfalls, and we're gonna include a
link on the landing page of this episode to a
fabulous page that has a list of about sixty of them,
and we're not going to go into detail
on all of them here, but just to give you
an example, this includes the likes of the placebo effect,
which we've discussed at length before, in which the
individual receiving the sugar pill ends up actually getting
(22:17):
some sort of biological benefit from the medication, or
the fake medication; uh, the carryover effect, where the results
of one study are observed in a secondary study
without realizing it; and then magnitude blindness, the tendency to
become preoccupied with statistically significant results that nevertheless have
(22:38):
a small magnitude of effect. I feel like that comes
into play a lot when I look at, um, some
of this stuff that's new and that's being reported in
the media. It's very exciting, right, you know, oh wow,
look at this insight, and then when you get into
the specifics of the study, it's just not that significant, right,
doesn't quite match up to that snappy headline. All right,
(23:00):
So how does science correct course? What can be done
about these problems we've discussed? Well, um, just to talk
briefly about the use of statistics and managing potential conflicts,
those financial conflicts we mentioned earlier, conflicts of interest. Um,
the general idea that the experts put forth is
that we need to simplify, standardize, and better enforce policies
(23:23):
to manage financial conflicts of interest, and that science needs
to keep a better eye on statistics, by which we mean,
of course, the statistical validity and the statistical errors inherent
in the system. Another thing is to encourage replication. And
again this is from the Economist article quote. Some government
funding agencies, including America's National Institutes of Health, which dish
(23:45):
out thirty billion on research each year, are working out
how best to encourage replication. And growing numbers of scientists,
especially young ones, understand statistics. Another area is allocating space
in journals for uninteresting studies, which is
crazy because if you think about it in terms of, say, um,
you know, a literary fiction publication, you would never in
(24:08):
a million years have anyone suggest, hey, we should make
room in this, uh, this review for bad fiction, you know,
a certain amount of space that we're always just gonna devote to
bad fiction. But the idea here is that scientific journals
should allocate space for the less jazzy, the less sexy stuff,
because that too is essential. Now I'm wishing for a
journal called the Humdrum Studies Journal, or the Uninteresting Studies Journal. Now,
(24:33):
another solution would be to tighten peer review, or perhaps
dispensing with it altogether. And again that's from the Economist article.
And so if you dispense with it altogether, what would
you do? Well, you would have post-publication evaluation in the
form of appended comments. And they say that that system
has worked well in recent years in physics and mathematics.
(24:55):
And lastly, policymakers should ensure the institutions using public money
also respect the rules. So, picking up again on the
pitfalls that we had mentioned, one of them is also
skills neglect, and this is that human disposition to resist
learning new scholarly methods that may be pertinent to a
research problem. And so that would also factor into peer review.
(25:17):
It's just making sure that, while you're reviewing something else,
your own knowledge of the topic is up to snuff.
And finally, when it comes to WEIRD populations, I mean,
the big thing is just to be aware of it
when you're sampling, when you're using samples
from the immediate collegiate environment to be aware of it
and maybe be less cavalier about uh saying that you
(25:40):
have identified something that is, you know, basic in
general human nature. Of course, we should end this episode
with the study of all studies, which is that there
are too many studies. Yes, this was, I believe,
at the time called Attention Decay in Science, um, which
(26:03):
is snazzy. Um, and it basically just comes down to
the fact that there are just so many studies coming
out now in so many journals. They've just exploded since
the earlier days um, in the twentieth century. Yeah, and
it's hard for everyone to keep up with the studies,
and also the older studies are getting lost in the
fray of new studies. So, um, of course, you know
(26:24):
that building upon knowledge is really important in this discovery
of truth. Right. And it's fair to point out that
this paper should also be analyzed, um, because it's
just one single study and the researchers mainly looked at
very broad fields like chemistry and medicine. Indeed, trust but
(26:44):
verify, right, it all comes back to that. So again, this
episode isn't about, you know, doubt everything, doubt
every study that comes out, doubt every bit of
science journalism that comes across your desk.
It's all information that's worth keeping in mind when you
(27:05):
do engage with these studies. Uh, and it's something that
we like to keep in mind, you know,
when we look at these studies in our research. Yeah,
and we thought that this was pertinent information, especially when
you consider how much data we are taking in every
single day and all of the headlines that are connected
to these studies and where they're coming from and how
they're being parsed out. Indeed. Hey, in the meantime, if
(27:28):
you want to check out more episodes of Stuff to
Blow your Mind, most of which involve scientific studies of
one type or another, you can head on over to
stuff to Blow your Mind dot com, where you will
find all those podcast episodes, all those videos, all those
blog posts, you name it. And we know some of
you are out there toiling away in the fields and
the labs, scientific researchers. Do you have thoughts about this?
(27:52):
If so, we would love to hear from you, and
you can email us at blow the mind at how
stuff works dot com. For more on this and thousands
of other topics, visit how stuff works dot com.