
October 24, 2025 94 mins

Build muscle and lose fat with evidence-based fitness. Free custom plan when you join Physique University (code: FREEPLAN): witsandweights.com/physique

How can you tell when science is solid and when it’s just being sold to you?

Dr. Eric Helms returns for his third appearance to unpack how we interpret fitness research, why “evidence-based” doesn’t always mean “accurate,” and what it really takes to think critically about the information you consume.

We break down the philosophy of knowledge and why understanding how we know things leads to better results. Eric and I unpack skepticism vs. cynicism, spotting red flags in “sciencey” claims, and balancing real-world experience with research. You’ll also learn a simple framework to stay curious without getting misled.

Today, you’ll learn all about:

2:25 – Setting the stage for scientific thinking
10:50 – Why critical thinking beats blind belief
15:07 – The meaning of epistemology
25:01 – How empiricism changed modern science
34:52 – What black swans teach us about truth
48:27 – Cynicism vs. healthy skepticism
59:50 – Making sense of the hierarchy of evidence
1:12:56 – Turning data into practical results
1:28:50 – Where to find credible fitness research

Episode resources:


Support the show


🔥 Take a 2-minute Metabolic Quiz for a personalized fat loss report (strength training & nutrition strategies)

🩸 Book a Performance Bloodwork Analysis to find out what's slowing your metabolism and weight loss (20% off - code VITALITY20)

🎓 Lose fat + build muscle in Physique University with evidence-based nutrition coaching (free custom nutrition plan - code FREEPLAN)

👥 Join our Facebook community for fitness & body recomp strategies

👋 Ask a question or find Philip Pape on Instagram

📱 Try MacroFactor 2 weeks free with code WITSANDWEIGHTS (my favorite nutrition and macros app for lifting weights)


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Philip Pape (00:01):
If you've been following evidence-based fitness, you probably feel confident about what works.
You trust the research on protein, progressive overload, and calorie deficits.
But what if that trust is at least partially misplaced?
What if the very foundation of how we evaluate fitness science has important blind spots we often fail to acknowledge?
My guest today, Dr.

(00:22):
Eric Helms, reveals why our relationship with research teeters between strong and fragile, how the studies that guide us have limitations that could mislead us if we're unaware, and why the difference between knowledge and understanding determines whether you actually get results.
You'll also discover why seemingly obvious truths often crumble under scrutiny, how to navigate the gap between what

(00:45):
studies suggest and what works for you personally, and a framework for evaluating fitness and knowledge that will better inform your choices in the future.
Welcome to Wits and Weights, the show that helps you build a strong, healthy physique using evidence, engineering, and efficiency.

(01:05):
I'm your host, Philip Pape, and today I'm joined by the magnanimous Dr.
Eric Helms for his third appearance on the show.
Eric is a lot of things, so I'm going to narrow it down to just five.
A WNBF Pro Natural Bodybuilder, a senior research fellow at AUT, co-founder of 3D Muscle Journey, co-author of the Muscle and Strength Pyramids, and part of the Mass Research Review

(01:27):
team.
And I have to say, what I personally appreciate about this gentleman is two things.
The first is his ability to bridge the research and the practice so that it's understandable, but also respects the nuance and, I'll say, self-efficacy or self-determination to personalize the information.
And second is he's a kind man.

(01:50):
He is generous as a human being.
He gives time to platforms like ours, but he also nudges, educates, and role models the fitness industry into something more positive and welcoming.
And I think we need more of that.
So Eric is the ultimate trust builder.
What better man to help us answer the question: How can we trust what we hear, see, and read about fitness research?

(02:10):
What are the hidden assumptions and blind spots in evidence-based practice?
And why is understanding the philosophy of knowledge itself so important to calling ourselves science-based?
That's right, folks.
Today it's all about epistemology.
Eric, welcome back to the show.

Dr. Eric Helms (02:25):
That was an awesome introduction.
I am uh I just want to sit andlisten to you, but apparently I
have to do something on thisshow.

Philip Pape (02:31):
You have to do something, you have to teach us
something.
And uh hopefully I don't falltoo much over my feet when it
comes to philosophy.
And I mean, I guess it startswith understanding what the heck
we're talking about.
Like what is epistemology?
Is this something that youthink about in your sleep?
Why should the average, albeithighly intelligent listener,
care in the context of hashtagscience?

Dr. Eric Helms (02:54):
And are all of you just things I dream about in
my sleep?
Is that the reality I exist, oris this a simulation?
Um, no, it's a great question.
And, you know, despite the factthat technically anyone with a
PhD has a doctorate inphilosophy, we're so far away
from that reality uh in terms ofmodern science, where we have

(03:17):
philosophers who have specificdegrees in philosophy who are
taught this, and then we have anextreme amount of variability
in how much exposure scientistsin different fields get to
epistemology.
So uh, do I think about this inmy sleep?
Um, only recently, I think isthe answer to that question.

(03:38):
And I would say that sportsscience, exercise science, and
even medical science is arelatively young field, not so
much medical science, they haveprobably focused more on methods
than methodology uh in a lot ofthe spaces that are relevant to
us, those of us who areinterested in you know,

(03:58):
listening to people with PhDs, MDs, RDs, CSCSs, or whatever in the fitness space, health space, nutrition space.
We think about where weoperate.
You've got your kind oftop-tier science communicator,
and by top tier, I'm gonna talkabout purely from an exposure
perspective.
You have like your Hubermans,everything on down to us little
guys, you know, uh trying tocommunicate to very niche

(04:21):
audiences, or even just on aone-on-one basis.
Someone who's been in the gamea while as a fitness
professional, might have goneback to school to get their
bachelor's, uh, opened their owngym, and they're they're at the
Thanksgiving dinner, and UncleTom says something about, hey,
tryptophan makes you sleepy, andyou go, hey, let's talk about
that.
So I think this is somethingthat pervades our life, and it's

(04:44):
a it's a good problem to haveright now because big science is
sexy.
So there's an increasinginterest in science and using
science for fitness.
At the same time, there's anincreasing skepticism.
So we have multiple competingthings.
One is always bad, ananti-intellectual anti-science
movement.
There's only slight slivers ofme being very generous as to the

(05:06):
silver lining there of it.
Maybe it enhances skepticismand makes people want to think
for themselves.
But I think cynicism is a verydifferent thing than skepticism.
Uh, one bad thing, another badthing that's maybe reflective of
a good thing, but purely bad,is that you can use science to
get very, very popular.
Um and unfortunately, thatmeans that when you approach the

(05:30):
pursuit of knowledge from abottom line capitalist kind of
marketing, let me sell yousomething, let me get something
out of this, let me extractsomething from uh everyone else,
um, you really need to havesome safeguards in place so that
it's not um a misuse of scienceto take that wonder and

(05:50):
interest people have, and then just pull the old rug out from underneath them and have them buy your book or follow some silly protocol.
And that interacts with socialmedia and what that rewards,
which is controversy, uh, thedouble take, like the wait, what
doctors hate him kind ofmessaging, which often is
inherently unscientific or leadyou there.

(06:11):
And I say there's somesafeguards, and I say that at a
sense of humility.
I make money off the fitnessindustry, I sell books, I sell a
research review, I sellcoaching.
Uh, I think the guardrails inplace that are important so you
can assess the different peopleyou're interacting with uh are
what exactly are they selling?
Is it Dr.
Carnivore Tom, who has marriedhimself and his image and his

(06:34):
ego and his wallet uh to aspecific protocol?
Or is it someone who is sellingscience communication?
That makes it a little easierfor me to operate in a space
where my inherent biases andneed to pay my mortgage don't
come at the cost of manipulatingmy audience.
I need to get your attention, Iwant you to get your

(06:56):
information from me, butultimately what I'm selling you
is does this work?
Does this help you be a betterpractitioner?
Does this help you achieve yourgoals?
Uh, and does this keep youinformed?
So, you know, we don't balk atpaying universities tuition fees
for going to their classes.
Uh, so we are still operatingin a space where there are many

(07:17):
conflicts of interests, but atthe very least, I think the
interest in science and therebeing a lot of money in it is
potentially a good thing, butalso manipulated.
So, and the final thing, thethird thing I said that's just
good is hey, people areinterested in science, which is
fantastic.
It's a good time to be alive,but it is also a time where no
longer are folks like myselfoperating as the people with a

(07:42):
water bottle in the desertgiving fluids to those
dehydrated because there's solittle relevant science or
access to it.
And that was, say, in 2007, eight, or nine, when I first got into this space. In 2025, we are all drowning, and we're not sure whether we're uh you know being nourished by the hydration of accurate information or drowned in a sea of misinformation and

(08:06):
sometimes disinformation.
So that's my little preamble.
But to understand that, thetraditional methods that science
communicators have used in ourspace of combating
misinformation, disinformation,and pseudoscience are very
short-term whack-a-molestrategies.
They are basically every timesomething pops up, I tell you

(08:27):
no, that's not right.
It is the "um, actually" warrior.
And that's tough because youcan get blocked, uh, you can
just get shouted down, and it'sa constant job.
It's still an important one,and you do have to do some of
it, but to some degree, I thinkthere is value in exactly what
we're gonna do on this podcastof teaching people how to learn.

(08:49):
You've got to learn how tolearn so that they can go out
into that wide world and figureout how do I discern the
difference between thepseudoscientists and the
scientists, how do I fact checkthis, and how do I give myself a
little more of a robust BSdetector so that when they hear
something from Eric Helms, theydon't just come across someone
who's slightly more confident,slightly bigger biceps, and I

(09:12):
say slightly because it's hardto imagine, maybe a second PhD
or a more legitimate PhD or anMD, they're a double doctorate,
and go, well, I'm just gonna I'mgonna take this appeal to
authority more.
And also, he did say somethingabout a PubMed ID.
I'm not gonna go ahead and readthat, so let's just go with it.
And I think sometimes people inour space, and I'm almost done

(09:33):
with this preamble, I swear toGod, Philip, uh, we throw up our
hands.
We go, listen, if you're notgonna read the full text, if
you're just gonna read theabstract, then I can't help you.
So I can tell you, I can giveyou the keys, but if someone
else comes by and and you're notwilling to read the full text,
then you're not evidence-based.
And what I'm gonna tell you isthat I think with an

(09:54):
understanding of just the verybasics of the philosophy of
science, which is about all thatI have anyway, um, like I said,
because I wasn't properlytrained as a philosopher, even
though I have a PhD, you canidentify some red flags and you
can identify what should be donewhen people are constructing
knowledge and logical inferencesand using research in an

(10:16):
appropriate way, to where almostin an interdisciplinary manner
you can gauge who is maybe evennot necessarily a bad actor, but
just a misinformed actor.
Because, like I said, there areplenty of actual scientists who
are not well informed on this,who make errors and become more
victim to their own biases andthen go on to inadvertently pass
on misinformation.

(10:37):
But there's also a lot of snakeoil salesmen and women out
there who are actively givingyou the disinformation because
it will help their bottom lineeither through ad revenue or
directly selling you a protocolor a product.

Philip Pape (10:50):
You know, as a parent of two daughters who know
way more about logicalfallacies than I ever learned
because my wife does a greatjob, uh, they would have
identified probably eight ofthem in the examples that you
gave about the pitfalls that Ithink we're going to learn some
about today of what to look outfor.
Because we're not trying totransform the listener into a

(11:11):
double PhD researcher, right?
That's kind of what you're inmind.
And there's a sense of humilityand self-efficacy here,
self-awareness, just like in thefitness space where we're
trying to, you know, you come tothis thinking there's a
protocol and method thateverybody follows, and then you
realize, no, it depends, itdepends, it depends.
And at the end of the day, it'sdown to the, you know, me and
and my ability to evaluatewhat's coming at me and make

(11:31):
sense of it.
I think of, you know, when yousaid methods versus methodology
and you talked about uh therelative at the dinner table
having the chat about science.
I feel like the more humilitywe have, the more humility we
build in ourselves to where wewe realize we know less and
less.
And that's a good thing.
Because then it opens us up tomore and more skepticism that

(11:52):
I've found to the point where Iliterally don't, I almost want
to get, you know, hide in a holeand not answer anybody's
question for fear of saying thewrong thing.
So the way I structure thisepisode, we can go totally off
track, but I was thinking wecould tackle areas of
epistemology and then kind of gooff of that.
But then you you're welcome totake wherever you want, Eric.

(12:12):
So the first one that is just,I think maybe simpler to start
with is um depth, epistemicdepth, which is knowledge versus
understanding.
And and even I have a hard timemaking sense of this one.
It sounds simple, deceptivelysimple, I guess.
But I think of it as there arefacts, and I guess a priori
facts that we say that are true,and that's a different part of

(12:33):
epistemology.
And then there are how we makesense of those, I guess, in
context, but maybe um I've gotthat wrong.
What do we mean when we sayknowledge versus understanding?

Dr. Eric Helms (12:43):
Uh yeah, I think if I may reframe or redirect, I
think perhaps a better way tostart this is first, I think
some people are like, well, whatthe hell is epistemology,
right?
So I think we probably want toget to a place where the
listener has agreed with us on acertain view of the world.
Um, because it's very difficultto even talk about, and I don't

(13:07):
think it's productive to talkabout um the ways of potentially
viewing the ways the worldcould work or the ways we can
get knowledge, because then wenever really help the listener.
I I think they would just wantto hide in that hole even
deeper.
But if maybe we can give alittle bit of background on what

(13:28):
ontology and epistemology are,just so they know what we're
talking about, and then we getto, hey, in the hard sciences,
which is not necessarily asuperior thing to something like
a sociology, but in the worldwe live in as fitness
professionals who are interestedin physiology, behavior, and
human-based research, wherewe're actually testing

(13:48):
hypotheses and looking atoutcomes to therefore infer
something we do in practice, um,which is rooted in empiricism,
I think we have to understandwhat empiricism is, where it
came from, why we use it, and toidentify when people are making
claims about what you shoulddo, which stems naturally from
empiricism, but they're notactually empiricists, and how

(14:12):
that can essentially operate asan intentional or unintentional
bait and switch.
Is that are you good with that,Philip?

Philip Pape (14:19):
Yeah, I'm good with that.
And empiricism, so we're goingto be talking about, for
example, objectivism versussubjectivism, or what we think
of as truth.

Dr. Eric Helms (14:29):
Yeah, I think we should probably want to get to
like which ontologicalcommitment do we have and what
epistemological commitment comesfrom that?
And then what do we how shouldthat look when we see it in
terms of just, you know what,this is the program you should
follow, or you know what, thisis what you do.
So um, and we can put that intosome specific boxes, we'll

(14:51):
simplify it.
But I think there is there is anatural flow here if we want to
talk about the end user.
Uh, whether that end user is apersonal trainer who wants to
read research and then help Tom,or whether it's Tom.
I think I think both canbenefit from that structure if
you're okay with me kind ofleading in that direction.

Philip Pape (15:07):
Please lead it because then uh I'll pull the
threads from there.

Dr. Eric Helms (15:10):
All right.
So, first off, I think uh epistemology follows naturally from ontology.
And at a very high level, ontology is simply the question of what is there to know.
So, this is our basic understanding of how we think about the nature of reality.
So you could imagine that two different people who have a

(15:33):
different view of the nature of reality could fundamentally disagree about your ability to even know things about it, right?
So I think we have to start with an acceptance that we believe in a formation of reality where we can know facts, like you were talking about.

(15:54):
Because if we don't start thereand we have to talk about that,
we can get to this point where,well, how do you even know it's
you know, like like I wasjoking about, are you all just
something I'm dreaming?

Philip Pape (16:02):
The matrix.
Yep.

Dr. Eric Helms (16:04):
Exactly.
So if we start with that, then we go, okay, well, once we have agreed upon a given nature of reality, that there are things that are knowable within it, and that they can be at least to some degree objective.
And then how do we access that?
That is where we get to epistemology.
So epistemology is what can we understand, and how can we

(16:29):
understand it given the reality that we've agreed upon is what we think is going on here.
So there are several epistemological stances you can adopt, and there are way more than several, but I think it's useful to put them in a little more constrained boxes.
So there is essentially uh objectivism and subjectivism to

(16:54):
some degree.
That's a decent place to start.
And what we're basically saying there is, okay, is there an objective reality or is it not objective?
And by objective reality, what I mean is that we think that if you had a god-brained computer that could quantify

(17:17):
every single atom and molecule in the universe, its current direction, position, and know everything about it, which of course is impossible because it would take more energy than there is in the universe to measure it, that we could therefore predict the next things that are happening.
And if you manipulate one thing, something else happens.
It's essentially a belief in cause and effect, right?
If someone is not on board withthat, carry on and good luck to

(17:41):
you.
You know, so um that that'salso the premise of the
foundation series.
Exactly, right?
So, yeah, great, great, greatbook if you want to if you want
to dive in, and uh not quite asgreat show, but still good.
So, yeah.
Anyway, so if we start there,then we go, okay, well, well,
what kind of traditions can weaccess that?

(18:02):
So, for example, three big boxes: positivism, constructivism, and critical realism.
And uh you might have heard of those, but positivism is kind of your pretty hardline objectivist uh approach to saying the world consists of real things and that

(18:23):
we can gain objective knowledge about it without a whole lot of like caveats to it.
And I think probably a different view is constructivism, where it's saying like everything is constructed in our mind.
There may or may not be an objective reality, and even if there was, we couldn't access it.

(18:45):
So constructivists don't necessarily deny the existence of an external world, but they are highly skeptical as to the degree to which we can access it without influencing it, and that, you know, your perspective of reality is therefore inherently different than mine.
And this is like if when youthink about annoying debates

(19:07):
between philosophers, if you'rea pragmatic person that go
nowhere, it's generally someoneat the table as a
constructivist.
And I'm not saying it's a badthing, I'm just saying it is
what it is.
And if you're someone who is like, so what do I do in the gym?
You know, it's like, well, is the gym even real?
It's like, okay, stop it.
So I and I'm being a littlecritical.
Obviously, you're getting ahint at what is my stance, but I

(19:31):
am not simply a uh a positivist.
I am probably what you would describe, and I think honestly, most people would fall under the umbrella of critical realism, which is essentially saying there is an objective reality out there, and it can be accessed, but unfortunately, it's accessed through these flawed meat sacks that we

(19:51):
operate through, called the human existence.
So it is framed through our senses, uh, the devices we use, and then it's interpreted and it's influenced by bias.
Uh, and it is what it is, and everybody's kind of the blind men all touching an elephant, but maybe with the right approach, we can do some things that are useful.

(20:12):
And I think you can take apragmatic or even utilitarian
approach and say, okay, well, ifwe care about outcomes, are we
still like in a really roughspot where there's only a couple
hundred thousand of us aroundand we're an ice age away from
being on the extinction list ofthe many species that have tried
to make it on earth and are nothere?

(20:34):
Or have we succeeded as aspecies?
And there's debates around thatas well as to what defines
success.
But we have at the very least,like an infectious disease, uh,
expanded and um gone through theage of enlightenment.
We have done a lot of a lot ofuseful things.
We have new problems because ofthat.
We might be damaging theplanet, we might be making

(20:55):
future generations have some newproblems.
But as a survival machine, we're in a much better spot now than we were hundreds of thousands of years ago.
So you could say, hey, we'vedeveloped, you know, combustion,
we've developed the circuit, wehave electricity, all this good
stuff.
We have modern health care.
We're at the point now wherepeople have more complexity in

(21:18):
their phone than NASA had whenit sent us to the moon, more
complexity than the initial ColdWar uh missile system uh that
was controlled by asupercomputer and even a like a
Gen 2 smartphone by ahundredfold, right?
So that's where we're at.
We're at the point now wherepeople are so disconnected from

(21:39):
the level of advancement we havethat they've actually
regressed.
And now people, because theydon't even understand it,
they're like, well, the world'sprobably flat, and I'm gonna go
ahead and type that on onFacebook because I can't
actually tell with my own eyesthat the world is round.
And they don't realize that thetimestamp when they post it on
Facebook is actually usingnon-Newtonian physics to use GPS

(22:02):
tracking data to know where youare and the actual time you're
at, not the wrong time becausewe're still operating with just
pure Newtonian physics.
So there's an irony and justour existence as humans.
So that is what we need to getto first, Philip.
There's an objective reality,and most people, uh, probably
whether they realize it or not,especially in our space, are

(22:24):
critical realists.
And we're saying, yes, we allhave different interpretations
and ability to access it andinterpret it, um, but there are
probably some systems we shoulduse to determine what to do in
this objective reality and toget facts from it, if you will.
Uh, and that is where we get tothe tradition of what we would

(22:46):
describe as empiricism, um, which is the empirical study of testing hypotheses.
So we form a hypothesis, we say, hey, I think this is the way it operates, and we attempt to falsify it, which is a whole
other discussion.
So, how how do you like that asan intro?

Philip Pape (23:03):
That was a great intro.
I had a lot of thoughts, butI'll just hit on a couple.
When you got into criticalrealism, two things came to
mind.
One is a conversation my wifeand I have had a lot over the
dinner table of like, how do weknow what red is red?
Right?
How do how do we know that redis red?
You know, you never mindcolorblindness, right?
But we think of objectivismlike we all understand that that

(23:25):
is red, what we're seeingthere, that apple is red.
Why what do you see and what doI see?
And because of the human meatsacks we are and interpretation
and sensory input and all that,can you ever prove it?
And that's where then my mindgoes, well, what are the
objective things outside of uslike wavelengths of light,
right?
And frequencies and reflectionand things like that.
And it that definitelyintrigues me because when we

(23:47):
talk about nutrition science orfitness, a lot of times I think
people make it about right orwrong in the context of
individuals' responses asopposed to outside the
individual, what doesn't changeversus what changes once
individual meat stack gets intothe feedback loop, so to speak.
Um, the other thing I thoughtabout when you said uh you're

(24:07):
talking about how we regressed,but we've advanced a lot as a
civilization.
Again, I have similarconversations with people of
were we better off as tribalpre-agricultural, you know,
human beings or not?
But um, there's a podcast called How to Change the World that just came out recently by Sam, a guy I know personally.
And Eric, I think you'd lovethat.
He actually is gonna spend thenext decade going through every

(24:30):
major like class nine, class 10invention of man in all of human
history.
I just thought of that just asan aside.
So let's empiricism.
So now it's the scientificmethod.
I was thinking of my physicsteacher in high school who
taught us how in the Middle Agespeople would um think that the
trajectory of something thrownwas uh half a parabola and then

(24:53):
a straight vertical line.
And it's before they had testedthat.
So anyway, go take that whereyou will.

Dr. Eric Helms (25:01):
You know, so so that that that's that's one that
I haven't been exposed to.
But uh what when I think ofempiricism and where we were at
beforehand, uh it it kind ofcomes down to when we were and
it really stems from what can wemeasure, you know.
So like if we go topre-empiricists, which is
actually a long time ago, therewas um, you know, empirical

(25:23):
traditions before the Middle Ages in India.
Uh we're, you know, in the West, a lot more informed as to like the empiricist debates that kind of most recently ended with, at least in the hard sciences, we're like, hey, we're going with Karl Popper.
Like that's kind of where we got to.
It's the closest thing to a consensus, which is by no means a consensus, but sufficient to create an entire field of

(25:46):
quantitative research, which bythe way, quantitative just means
that you can put numbers to something.
Qualitative just means that it is um you can't put numbers to it.
It has to do with the human experience.
What you said, with your point about what is red, um, red is quantifiable if we look at wavelengths, but it is
qualitative.

(26:07):
Like what number red is that?
You know, I I don't know, butI'm I that is red, you know, and
knowing that that's a littlebit different between people.
So you use this informationdifferently.
And uh the qualitativeexperiences can inform
quantitative and vice versa.
So when we deal with research,human research, uh, we are

(26:27):
addressing both qualitative andquantitative frameworks.
And critical realists are notpure uh quantitative
researchers.
Um, it's very common uh in my field as a sports scientist and sports nutritionist uh and in public health to interview people to get their experiences because that can help form a
hypothesis.
And if we can measuresomething, then we can see

(26:48):
what's a way to have a betteroutcome.
It's the whole discussionyou'll hear in popular
discussions of, well, what aboutthe lived experience?
Like, you know, what does thatmean?
And without connecting withpeople's qualitative
experiences, it's very easy toask questions that you can
quantify an answer to andprovide information about that
simply no one cares about uh orthat aren't useful.

(27:10):
So that's kind of the linkthere.
So you can't be committed tocritical realism without
acknowledging the value of thequalitative experiences of
people, as well as needing toquantify what can and should be
quantified to provide objectiveanswers for at least the nature
of a relationship, if notnecessarily what to do on a

(27:32):
day-to-day basis, but often whatactually to do when you have a
very specific question.
So empiricism um was somethingwe could really only start doing
once we had the ability tomeasure things, you know.
So if we think about like kindof the stereotype of the Greek
traditions, and we think aboutsome of the things they come up

(27:52):
with, like the four humors explaining the way the body works.
You know, there's like bile and black bile and these four colors, and it's the mixture of them.
It is a logically consistent, for the most part, set of rules that explains what we see sufficiently that our own biases allow it to operate for decades and even centuries, where we

(28:16):
go, yep, that's the way thingswork.
So if we look at the history ofmedicine, basically until
empiricism became a little moreestablished and then rooted in
medicine, people were doing somereal effed up stuff to humans.
Like, you know, leeches,bloodletting, lead.

(28:36):
There's a long list, if youjust want to see like why going
to a hospital was just as muchrolling the dice as just letting
that disease carry itself outuntil about 1930, 1940, right?
Bad times.
It varies by region, it variesby the specific subfield, and
there's a lot more complexity toit.
But when you are operating in aspace where you can't actually

(28:59):
discern cause and effect, thesubjective nature of reality
takes hold and humans can dosome some things that are that
are less than ideal.
But we also have to acknowledgethat humans aren't so bad
because empiricism didn't comeout of nowhere.
We got to it from observation.
So science is not this separatething from us.
It's just that the way we'rebuilt is not necessarily to

(29:24):
always discern cause and effect,it is to survive.
So we will accept things thatare less efficient if it just
allows us to pass on our genes.
So if we don't want to walkaround with existential dread,
we're going to have certainbiases.
A good example is that ifyou're a hunter gatherer in the

(29:44):
forest and you hear a twig snap,you're going to be attuned to
immediately thinking it'ssomething dangerous first.
And this is where negativitybias stems from and why social
media can be crippling.
You know, you get one negative comment, a hundred positive comments, you focus on the negative, and it drives you down and then you become less effective.
Social media makes you depressed, even though you get

(30:04):
ninety-nine positive comments.
I may or may not resemble thatas an example.
Another example is, you know,we need to, you know, have the
motivation, have the motive todo things and progress our human
existence.
And you could argue thatreligion or a belief in God or

(30:25):
something of that nature, andI'm not making a claim as to
whether it's accurate or not, isimportant.
Or why do anything?
Right.
And of course, there are waysto have values and morals and be
an atheist, and there's plentyof atheists who have lots of
meaning in their lives.
But that can lead to certainbehaviors.
Uh, you know, we believe, basedupon the observations with

(30:46):
this limited, subjective nature of our reality, that the sun comes up because I pray to the sun god.
And we really shouldn't not pray to the sun god because what
if the sun doesn't come up?
That's a big problem.
And each time I pray to the sungod, the sun does come up.
So therefore, praying to thesun god causes the sun to come

(31:07):
up.
So there's a lot of muddying,you know.
So this is before anunderstanding of something like
a control group or a placebo.
Not that you could have aplacebo prayer, placebo sun, or
placebo sun god, but you get mypoint.
So some of the reason why wehad thousands of years of
pre-empiricism and kind offorming these rationalist

(31:31):
logical constructions that theythey're internally consistent
and they sound legit based uponwhat you observe.
So we we often attach ourselvesto them, but they're in fact
false, is why we needempiricism.
And the most uh prescient example, I would say, that is used if we skip ahead to Karl Popper is the claim that was

(31:52):
present in Europe that all swans are white, right?
So this was just an objective truth.
And you know, we have this in society now in very like simple terms, like there's stuff that we just agree upon is true and we never question it.
And some of these are like just fitness myths, you know, like you know, I heard blank.

(32:13):
Like, of course we do that, and you're not supposed to question it.
And a lot of times it's accurate, sometimes it's semi-accurate, but this was one of those things in Europe, you know, they'd never seen a black swan because in Europe there are no black swans, and it wasn't until colonialism continued, ships went out to Australia and they were like, huh, that's a

(32:33):
black swan.
And now all of a sudden, this objective truth, in quotes, is not true anymore.
And you have to modify it and say, all European swans are white, but some swans are black.
And this, you know, comes to the fundamentals of logic, which I don't think we need to necessarily go through, but it does go, all right, we can't just have these baseline

(32:56):
agreed-upon truths.
The best thing we can do instead of trying to prove things is try to falsify things.
And this is basically the idea of, well, I'm gonna set up a hypothesis, I'm gonna say, I think all swans are white, and then I need to form different experiments or different tests of this hypothesis to try to falsify it.

(33:19):
And that's why we use weirdlanguage in in science, why we
say, like, you know, if if youever go through a master's
degree or a property orsomething, yeah.
Exactly.
Like, why can't we just saythis is better than that or
these things are the same?
Well, we didn't test that.
And there's actually adifferent statistical model.
So the statistics we usegenerally uh in most science

(33:42):
that people are trying to applyis based upon the idea that
we're trying to falsify ahypothesis.
And the terminology of ahypothesis and a theory, theory
differs from what we usecolloquially.
When we say theory, we think,you know, I got a theory, and
it's basically like a hot take.
But the theory of gravity is alittle more than the hot take,

(34:04):
and we can test that right now.
I just dropped a little earphone thing I was playing with.
It still falls to the ground, so you know, Newton's doing all right.
The theory of gravity is still holding up, and that's because attempts at falsifying it have all failed.
So a theory is basically a hypothesis that has survived multiple, multiple, multiple attempts at

(34:27):
falsification.
It's moved from being a modelto uh an accepted theory.
So things like the theory ofevolution, the theory of
gravity.
And even in uh qualitative research, there are theories of behavior change, you know, and there are theories that are quite useful, which you can then go and empirically test.
And you start to slightly modify these models.

(34:50):
So that's kind of where we getto.

Philip Pape (34:52):
And can I add in there, the theory thing is
fascinating because there's alsothe general versus specific
theories, right?
Like you mentioned gravity.
Einstein found that gravitydidn't hold under all
circumstances when you go to thelevel of galaxies or planets,
right?
But it didn't falsify that itheld at the level of Earth, kind
of like the white swan and thatwhite swans in that one area

(35:13):
was the truth for that area, butin reality, it wasn't the
bigger truth.
And so then you had to refineit.
I think I think that'sfascinating because that's where
we can do it.
You mentioned at the verybeginning in your opening, I'll
say monologue, was the practicalnature of applying the fitness
research to our lives has tohave a constraint, right?
Yes.
We have to have that specialtheory of fitness relativity for

(35:35):
our lives.
So that came up and a greatexample.

Dr. Eric Helms (35:38):
Yeah.
If you don't, if you don'tmind, just to bolster you and
then I'll shut up.
Like no, sir.
Um Newtonian physics was great until we started trying to leave the gravity well, right?
Uh, and we started noticing like, well, hold on.
Why is this not working out? We're trying to put stuff in orbit and we're missing, and we're not realizing there's a time delay, you know,

(35:59):
and uh so special relativity became a necessity once, like, you're operating at NASA.
The difference between NASA and you and I trying to help Tom, though, is that the least educated person in the room has a master's degree in mechanical engineering, you know?
So, but for us, it's everybody.

(36:21):
Like everyone is interested inhealth and fitness.
So it's very challenging toprovide a level of you know,
baseline kind of knowledge andeducation that doesn't go over
the head or it's notunnecessary.
And I know we've already lost alot of people, but hopefully
there's enough trainers who canpass us on and maybe do a better
job than than I at simplifyingit further.

Philip Pape (36:39):
You know, Tony is listening.
Tony, you're still listening.
Thank you, Tony.

Dr. Eric Helms (36:43):
So so yeah, like I think essentially what I'm
getting at is that somescientific fields are fully
encapsulated within the halls ofscience, and it's experts
talking to each other and tryingto figure out how to do things
better for a specific purpose,like putting a satellite into
geosynchronous orbit.
And other times we deal withthe very frustrating reality

(37:04):
where we've spent 10 yearseducating ourselves on how to
study humans and come toobjective truth.
And someone who is, you know, they have a high school education and has been duped by uh a cult of personality that sounds science-y, and is on the tip of Mount Dunning-Kruger, and they're like, nah.
And you're like, ah, god damn it.

(37:24):
And we have things like, youknow, people being unwilling to
get vaccinated and having achild die despite their best
efforts, and think they're doingthe right thing.
And that is that's the modernworld we live in, and it's
really tough.
So uh just just to put that outthere.

Philip Pape (37:41):
Yeah, that's true.
And you mentioned the amount ofinformation, and then now that
layer on top of that AI, whichitself is built on
misinformation and to someextent of how it's trained.
And so I will get every daysomebody saying, I've been
listening to your podcast foryou know 300 episodes, but then
I asked Gemini XYZ and they saidsomething different.
Like, okay, what do I do withthat?

(38:02):
Good conversation starter.
But you mentioned uh the otherthing you talked about was the
history of this.
Now, I guess my question to youwhen you analyzed the history
of the scientific method andempiricism, is there a long
stretch of time?
And again, I think the Westernworld with the Middle Ages,
which weren't weren't as dark aspeople make it out to be.
There's a lot of innovationhappening.
But was it the power structuresand and the very things we're

(38:25):
talking about that are stillrelevant today, like religion,
you know, the Catholic Church orwhatever, that simply uh
disincentivize or suppressed anyprogression in the use of
empiricism, even though thereare pockets of this always
happening?

Dr. Eric Helms (38:40):
This is happening constantly.
And even in people who and infact, I would say it's in all of
us.
You know, for example, I wouldsay generally, people who are
left-leaning in Western culturesare pro-science, but they are
also tend to be people who uhtrend towards activism and

(39:02):
motivated use of science.
So, and it is starting with aconclusion because you're seeing
a population in need.
And I'm not making a claim ofwhether this is good or bad, but
we go, this this group isstruggling in this way.
Uh, I want to assist, and Ithink we need to do this.
And this can result in themotivated, biased interpretation

(39:26):
of research.
So, for example, if you take some of the more extreme proponents, as a fitness example, of health at every size, you can interpret some of the data in such a way that you can make a seemingly consistent claim, if you cherry pick, that body fat, even at high levels, is not causative of or even predictive of

(39:52):
health issues.
And you can claim that it is a correlation, it's a marker, but it is other things, and the real issue is stigma.
And it's not that stigma is not an issue, and it's not that at certain body fat levels the association is weak, but they will deny that there is a less than 1% chance, if you have a

(40:13):
given person plucked from a population with a BMI of over 35, that they will be metabolically healthy, and that there is actually a contributory role of adipose tissue, especially visceral fat, leading to that.
And that motivated reasoningcomes from a great place of
trying to make the medical spacebetter and more kind to people

(40:37):
who are living in large bodiesdue to no fault of their own,
through a combination ofgenetics, how they're raised,
and the changing society that makes moving unnecessary, uh, which dysregulates our hunger and our metabolism and provides us with a constant supply of highly palatable, easy to

(40:57):
consume foods that are affordable, which is exactly what our human bodies are engineered to get.
And when we look at primates in captivity, they slowly gain weight too.
And we're not sitting here claiming, you know, orangutans are lacking the willpower to be good people, right?
So I think that's an examplewhere even in groups that

(41:18):
self-identify as science forwardand science open, we have
desires.
We are operating in thiscritical realism space where we
want change.
And as soon as you want to dosomething subjective with
objective information, it goesthrough a lens of interpretation
and there's no way around it,even if the intentions are

(41:39):
great.
I come to the table withthinking science is positive.
I want people to use this,empiricism is the best tool we
have, all this stuff.
But no matter how I how much Itry to give people the best
information, I am also startingwith that premise.
So any any good, like if youreally sit down like at a like

(42:00):
at a philosophy conference,everyone starts with like a
positionality statement.
Here's who I am, where I wasborn, what I believe, and here's
what I bring to the table.
So everyone can be uh you knowaware of your biases and and
your motivated in yourmotivations and things like
that.
So the degree to which I thinkyou want to spend time on that

(42:22):
is is is questionable indifferent contexts, but I think
it's important to understandthat because then we don't just
go, oh, well, the dark ages,blah, blah, blah, blah, blah,
blah, blah, uh, or the CatholicChurch or systems of oppression
or what have you.
Like you can even see in inlike, for example, communist
regimes in China, the idea of,hey, we're just going with

(42:44):
science, but oh, thesescientists who have any beliefs
related to religion, we'reactually going to kill them.
And we essentially create asimilar level of fervent
zealotry and lack of questioningaround the state.
So, yeah, we are objective andwe're not dealing with this
silly religion stuff that hasruined, you know, uh society.

(43:07):
Look at the Catholic Church,but the science needs to support
the state, you know, and it'sthe same problem, just through
another, another medium.
And so I think starting with aconclusion or starting with an
understanding of the world thatyou accept without being willing
to question it is the issue,but it's also kind of the
natural state of humanity ifthey don't align themselves

(43:29):
towards empiricism.
So that is what you see in somepockets of the fitness industry
now.
I have a quote unquote model,not a scientific model, but I
have a certain set of truthsthat I've come to accept, which
I may even have formed basedupon cherry-picked data.
And that leads me to aconclusion.
And if anything that conflictswith that conclusion, I question

(43:51):
that data, I question thepeople who did it, and I apply a
higher level of skepticism tothat rather than questioning the
model.
And then that's confounded bythe fact that I am now selling
you something.
I have identified a belief oridentity related to it, Dr.
Carnivore Tom, for example.
I'm part of an in-group, and orI am just so deep down the

(44:13):
rabbit hole because I gotexposed to a certain way of
viewing things.
And sometimes we don't realizethat the person who exposed you
to this information, they're notempiricists, even if you are.
But the people who still kindof align themselves with
empiricism, who just don'trealize that the person they've
learned from is not, can find away out.

(44:36):
A really good common example, I think, for both of our age groups and time we've spent in the industry is people who get exposed to Gary Taubes.
You know, so Gary Taubes was one of the biggest proponents and growers of late-stage, you know, low-fat, sorry, low-carb high-fat diets, an investigative journalist, uh, who started with a

(44:58):
belief and positioned people like Ancel Keys, positioned uh institutions like the World Health Organization, more modern uh, or just you know, the Dietetic Association or all of the Western countries as having motivated uh reasonings for why they would villainize fat, industry connections, and then

(45:20):
he would specifically take theresearch, which logically fit
together if you ignore all theother research, and could form a
mechanism, could form anoutcome, and it looked like
someone who was an empiricistwalking you through how it is
actually carbohydrate andspecifically sugar through the

(45:41):
mechanism of insulin, which hasled to obesity, and how power
structures, like you said, havekept us in a state of unhealth.
And it's logically consistent,but it's also incorrect.
And you only know that not ifyou are necessarily questioning
whether he's an empiricist ornot, but if you actually look at
the research and you go, well,is that all of it?

(46:03):
Because in human research, wehave a tremendous number of
false positives, and we have tolook at the overall body of
data, and we have things likethe hierarchy of evidence, and
there's systems in place toaddress these biases so we don't
just cherry pick our way to afalse conclusion.
So it appears to be empiricism,but it's it's an it's it's not.

(46:23):
And how does the average personaddress that?
How do they deal with it?
And then that comes to well,let's take a look at the
attempts at falsification andhow do they respond to that?
So there's almost these twothings you have to do.
You have to know thisphilosophy that we're getting
at, and then you have to lookat, okay, so a given paradigm,
whether that's low-carbketogenic dieting, or that's a

(46:45):
certain way of training, orthat's a certain supplement, or
a certain approach to menopause,how do they respond to attempts
at falsification?
Is there an immediatedismissal, safeguarding,
siloing, and silencing critics?
Or is there an openness and arelatively fluid shift in what
you're recommended to do?
And that is kind of what youhave to do in the real world.

Philip Pape (47:07):
Yeah.
So there we'll we'll get intothe hierarchies of evidence, but
what came to mind was thehierarchies of skepticism when,
especially you're new to theindustry, you're just you know,
general population, um, youdon't have many premises other
than all the would be called, Iguess, the illusion of
obviousness or the justified butfalse beliefs that we talked
about, where you just kind ofgrew up with them and they're in

(47:27):
your brain rattling arounduntil proven otherwise.
And what you're saying is that,you know, there's a lot of
people doing the call-outs andthe hot takes where the
criticism is not on the model orthe evidence, it's on some
other component or actor orinstitution or something like
that.
And then the average person,how do they know what to be
skeptical of?

(47:48):
Should they be skeptical ofthat person?
Because on one hand, I'm Icould be hearing you say, well,
don't just immediately attackthe funder of the research.
But should you at some point,you know, be skeptical of the
funding of the funder of theresearch, right?
And so you never quite knowwhat to be skeptical of.
So maybe I don't know if thehierarchies of evidence, the

(48:08):
hierarchy of evidence allows usto do a step through or kind of
a where start here, and here'sthe flow chart.
You know, I just woke up today,babe in the woods, and I want
to know what to eat and how muchprotein to get, and I have no
idea where do I even begin,right?
That's just one example.
Where do we start?

Dr. Eric Helms (48:27):
No, I think that's great.
I think one thing I'm gonnatake a lateral step because it's
really important, um, is thatuh, and there's actually
research on this.
In modern times, especiallylike the post-COVID era and
conspiracy theories, there's alot of things that masquerade a
skepticism.
And there's been some attemptsto try to identify how can
people with relatively high IQswho are relatively well educated

(48:49):
in certain ways fall victim toconspiracy theories.
And it's because of thatmotivated reasoning that we are
all going to default to, and afact that we don't necessarily
train people in this type ofthinking, empiricist thinking,
right?
And if there's there's can be aconstellation of factors
related to impulsive logicalthinking and um cynicism, which

(49:14):
can actually be measured on a scale.
And cynicism, which you can again measure on a validated scale in, say, social psychology, correlates pretty strongly with holding false beliefs and gullibility, which
you can also measure on a scale.
Now, cynicism from a scientifickind of definition perspective

(49:37):
is when, instead of questioning the information or questioning whether there's an issue with logic, uh, you jump to making inferences about the source of it, the person, and essentially you poison the well.
You don't go, oh, maybe we canadd some additives to the water
in this well to maybe get somefluids out of it, because

(49:58):
there's no better option.
This is a swamp over here.
You go, no, the well has beentrying to kill us for years.
We need to drink from theswamp, right?
Um, which leads to problems.
And maybe the well is a betteroption than the swamp because
the swamp, oh, I don't know,it's a swamp.
So that that's kind of wherewe're getting at.
So there's a lot of people whouh trend towards cynicism.

(50:19):
And cynicism is not just about,oh, are you kind of like a
negative person, like cynical,like not that, not that
colloquial term, kind of likethe colloquial theory version of
it.
It really does come down to uha distrust of people and it's an
emotional response.
So you can identify when youare being cynical versus when
you are being skeptical, becauseskepticism comes from kind of

(50:42):
you logically assessingsomething like, oh, A doesn't
follow B, or sorry, B doesn'tfollow A, right?
And instead of going, oh, Idon't like that.
So cynicism typically feelslike defensiveness in an
argument.
Uh, you're loading your barrelsbefore you fully consider what
they say.
You don't want what they'resaying to be true, rather than

(51:04):
you're trying to discern truth.
So this is something that takesa little bit of self-awareness
to identify.
And it is something that youcan come to knee-jerk with the
whatabout.
You know, what about isdifferent than going, okay, so
if that was true, what elsewould be true?
And how could this not be true?
You know, so what you aretrained to do as a scientist,

(51:24):
and ideally what you want to doin the real world, is you
question the beliefs you holdand you figure out how could I
figure out if I'm wrong, not howdo I search for the evidence
which proves that I'm right.
Because you will be able tofind the evidence that proves
you're right.
That's what we're built forafter millions of years of
evolution.
You can maintain that belief.

(51:45):
You can have a selectiveinterpretation and even
acknowledgement of the existenceof the evidence available to
you.
So instead, the only way, trulythe only way to get around that
bias is to look for ways inwhich you could be wrong.
And skepticism gets you closerto that, cynicism pushes you
further away.
And it's an opportunity foryour ego, your existential

(52:06):
desire to hold on to youridentity or your understanding
of the world which feels safe,to go, right.
I don't have to change orthreaten that.
I don't have to be open to athreat, which you know we're not
a big fan of as a survivalmachine.
I can shut that down.
If I'm cynical of that sourceof information, like the entire

(52:26):
apparatus of science, cool, thenI can just go with, I think the
world is flat and I've closedmyself off to that avenue.
So I think we have to startthere.
Is that we don't have to divedown that rabbit hole, but it's
important.

Philip Pape (52:38):
No, it does.
Um, a friend of mine and I were having a conversation just recently about the God gene, or The God Delusion, I think it's called, by Richard Dawkins, the theory that tribalism and religion could be a genetic advantage from the past, where, you know, I mean, tribalism, we know that the other uh is kept out and now we protect our group.
And I was trying to go back andforth with him about the

(53:00):
conspiracy theory mind today.
And is that the same thing?
Is it the same kind of thoughtor is it different?
And you're talking aboutcynicism here, and I'm not gonna
ask you to to suggest whetherit's genetic or not, but is it
the same or are theredifferences?
Because I feel like one is isabout a long-held belief as a
group, and another is a mindsetof being open to these crazy

(53:21):
ideas and then holding on to them falsely.
I don't know if that makes sense.

Dr. Eric Helms (53:25):
Yeah, I I think they operate in the same
constellation.
Um, there are going to becertain people, whether it's
genetic, epigenetic, or, youknow, the way they're raised
culturally, who are morepredisposed to this and more
likely to pass it on.
And then you get kind of agroup think socio-cultural thing
that kind of expands it and itcan impact the whole society.

(53:46):
But you do have to ask thequestion is, you know, why are
some people who are raised in acult hook line sinker, they're
down, they're all about it, andthey will never question that
belief, and other people go thecomplete opposite direction.
You know, you can meet just asmany atheists who are raised
fundamentalist insert religionuh to people who are raised
atheists.
And there are atheists who havenever questioned their atheism,

(54:06):
and there are atheists who havethen become religious.
So, you know, that's why we have to kind of just go with empiricism, because they both can't be true.
And if you can start with the truth and move further away from it, then we can't trust our human senses fully without a

(54:26):
system, which kind of links back to okay, so what systems do we have in place?
Well, within empiricism, we have the hierarchy of evidence, which is a useful heuristic and model.
But like all heuristics and models, it is a shortcut.
That's the definition of a heuristic, and all models are wrong, but some are useful.
And in most cases, the hierarchy of evidence is useful.

(54:48):
And that is the simple understanding that empirical designs which allow us to infer causation, X causes Y, are at the top of the pyramid. And then, because there is human variation and all the issues with sampling and the need to get a precise estimate of a population from a smaller group, which is the statistics that we

(55:10):
talked about earlier, getting a whole bunch of research designs that are based upon causation together and, you know, reviewing them in a systematic manner, and maybe even statistically, a systematic review or a meta-analysis, respectively, sits at the top of the hierarchy.
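
As a rough visual aid, here is one way the hierarchy Eric describes could be written down in code; a minimal illustrative sketch, not a formal standard, and the exact rungs and labels are my own assumptions:

```python
# Illustrative only: a common ordering of the evidence hierarchy for causal
# questions, from weakest to strongest. Real hierarchies vary in their rungs.
EVIDENCE_HIERARCHY = [
    "anecdote / expert opinion",
    "case reports and case series",
    "cross-sectional and other observational studies",
    "prospective cohort studies",
    "randomized controlled trials (RCTs)",
    "systematic reviews and meta-analyses of RCTs",
]

def strength(design: str) -> int:
    """Higher number = stronger design for inferring causation (a heuristic, not a law)."""
    return EVIDENCE_HIERARCHY.index(design)

print(strength("randomized controlled trials (RCTs)"))  # 4
```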
So I can find a randomized controlled trial that typically

(55:32):
in our fields is too small, that has done all the things we need that we've alluded to, like having a control group and having a placebo group. So we know that the influence of expectation, that effect, is less than the actual change from the intervention, and also

(55:53):
that just the normal variation in a group doing nothing, that change, is also smaller than the intervention. Uh, if we got an RCT with only 10 people in each of those groups, just by random chance, there's a decent possibility that it's either wholly misrepresenting the directionality of the effect of the intervention, or it's showing no effect, or too strong of an effect, or it's

(56:15):
overestimating the magnitude.
But if I can take 10 of these 10-person-sample RCTs and combine them together appropriately, I could either, kind of vote-counting style, do a systematic review and say, hey, in eight cases there were null effects, in one it was positive, in one it was negative, there's probably nothing going on here,

(56:36):
or, hey, in seven cases there was a null and in three cases there was a positive; this is probably a legit thing that's only showing up sometimes, so we can expect a small to trivial true positive effect from this intervention. Or there's a more robust way of doing it, where we account for things like the variability within studies,

(56:57):
the hierarchical, nested nature of data, and double counting if you're getting two samples from the same thing. We do a properly done meta-analysis, and we can actually get a standardized mean difference. We can have weights that control for the individual strength of each study in the meta-analysis, and you see some of these cool plots where we get this little diamond.
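
To make that pooling concrete, here is a minimal sketch with invented numbers; it uses simple inverse-variance (fixed-effect) weighting rather than the full random-effects machinery a real meta-analysis would use:

```python
import math

# Hypothetical per-study effects from ten small RCTs:
# (standardized mean difference, standard error). Values are made up for illustration.
studies = [(0.10, 0.45), (0.35, 0.40), (-0.05, 0.50), (0.20, 0.42), (0.05, 0.48),
           (0.30, 0.44), (0.00, 0.47), (0.15, 0.41), (0.25, 0.46), (0.10, 0.43)]

# Inverse-variance weighting: more precise studies count more toward the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The "little diamond" on a forest plot is roughly this pooled estimate and its 95% CI.
print(f"pooled SMD = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```

The point is simply that the pooled interval ends up much narrower than any single 10-person trial could give you.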

(57:17):
And this is why, with a well-done meta-analysis... and this is something where I can see the echoes of not understanding these things happen all the time on social media. So just today, I posted, in collaboration with the Sports Nutrition Association, part of me giving a lecture on a meta-regression that we published on protein. And someone said, oh yeah, great, a meta-regression on a

(57:39):
bunch of poorly designed studies, isn't that just amplifying the poorly designed studies, that basically you throw garbage in and it makes it more garbagey? And that's a fundamental misunderstanding of an effectively done meta-regression. Because the reality is that you can look at 90% of studies on

(58:01):
moderate versus high protein intakes, and you will see a null. It doesn't seem to make a difference for the outcome of interest: fat-free mass change, fat mass change, you know. But there are no studies, to my knowledge, which find a significant difference favoring... actually, there is one study I'm aware of, out of quite literally hundreds and

(58:24):
hundreds, maybe a thousand now, that has found lower protein provides some type of statistically significant advantage for body composition change. And maybe about eight to ten percent of studies find a positive significant effect for higher protein.
And if you do a proper analysis of all those studies combined,

(58:45):
and you essentially give yourself the same power as a really high-end RCT, it tells you, hey, there's a legitimate but very small positive effect from a high protein diet, one that you need 1,000, 2,000, 2,500 participants across 60, 70 studies to detect. So should we be spending a lot of time quibbling over 1.4

(59:08):
versus 1.8 grams per kilogram of protein? No... it depends on the context, but generally, no, if we're talking to Tom at Thanksgiving. If we're talking to a young Ronnie Coleman trying to turn pro, maybe a different answer. Or if I'm deciding what I'm gonna do, you know, two weeks out from me getting on stage in Taiwan, potentially a different answer. But that's the context we're talking about.
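
For a sense of why an effect that small takes thousands of participants to surface, here is a back-of-the-envelope power calculation; the effect sizes are assumed for illustration and are not taken from the meta-regression itself:

```python
import math

# Approximate per-group sample size to detect a standardized effect size d
# with 80% power at alpha = 0.05 (normal-approximation formula).
def n_per_group(d: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.8, 0.5, 0.2, 0.1):  # large, medium, small, very small (assumed values)
    print(f"d = {d}: ~{n_per_group(d)} per group, {2 * n_per_group(d)} total")
# A very small effect (d around 0.1) needs on the order of 1,500+ per group,
# which is why only pooled analyses across dozens of studies can detect it.
```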

(59:28):
But the fundamental misunderstanding there is this rejection of the evidence hierarchy because of not understanding how the evidence within the hierarchy operates, not understanding that a meta-regression doesn't inflate error; it reduces it, and it enhances precision and deals with uncertainty if it's done right. But not all meta-regressions are done right.

Philip Pape (59:50):
I always think of it as like a filter or sieve or whatever. So when people think of these studies, you just said the 1.5 versus 1.8 may not make a difference. It may make a difference to someone who's optimizing. That raises the question a lot of people have, and how do you communicate this as a science communicator? But cause and effect in general from these studies: even if you do see an effect, you're calculating over a sample,

(01:00:11):
you're coming up with usually a curve, right? Like a normal curve with a mean. How is that useful to me? It's kind of the question of, are we coming up with a new box of constraint, you know, a constraint box of the effect, and that's the usefulness, knowing that an individual falls somewhere within that? Is that the usefulness of it? You know what I mean?

(01:00:31):
That's what comes up inpeople's mind.
It's like, well, what aboutjust me doing it myself and
having the most direct evidencepossible?
It's anecdotal, but it's me.
And where does all that fall inthe hierarchy?

Dr. Eric Helms (01:00:42):
That's a great question.
So essentially, when you aretrying to provide information
that's useful from some of theseuh big-ticket items at the top
of the hierarchy, either youryour well-done RCTs,
meta-analyses, systematicreviews, or you know, if there's
not a systematic review ormeta-analysis, your kind of own
subjective interpretation of thefive or six RCTs you got, or

(01:01:05):
the one RCT you've got, you have to sort of, as a science communicator or a trainer, anticipate the errors people will make when you then give them a new heuristic based upon that. So the common one that we have in our space is, yeah, higher protein diets are better. And then people trend towards thinking higher protein did all of the effect I see in that study, and more is better.

(01:01:29):
And they assume a univariatelinear relationship because
that's an easy thing toconceptualize.
More protein, if I go from 0.4to 0.8, that's the same thing as
going from 0.8 to 1.2, that'sthe same thing for going 1.2 to
1.6, same thing for going 1.6 to2, all the way up to I'm eating
400 grams of protein, right?
Um, and it's better, and it'sbetter for these things.

(01:01:51):
So if I have an issue, theanswer is increased protein.
So without thinking about yourbaseline or where you're at, you
tell me you're hungry, um,you're not getting the gains you
want, or you're losing too muchmuscle mass in your diet, try
adding 50 grams of protein.
And that's always going to bethe same.
When in reality, there arepoints along that curve where
that line gets less certain,flat lines, and it becomes an

(01:02:15):
increasing barrier and frictionto real world application and
just having really smelly fartsand pissing off the people
around you, right?
And thinking about, okay, well,what's the cost?
Because if I pull one lever, itimpacts other things in the
real world.
And that directly connects usto the benefit and the
shortfalls of anecdotalinformation.

(01:02:36):
So when I do a meta-regression,we have done statistics to
isolate the effect and therelationship of protein on the
other variable.
So we can actually quantify thedegree of heterogeneity and
influence of other variables.
And we might say somethinglike, here's another good
example, like the effect ofvolume, right?

(01:02:56):
Uh, there's a great meta-regression that's out right now by Pelland and colleagues, and we're looking at over a thousand people, and you can say, hey, we see an 8% increase in hypertrophy over a typical study length when you're doing, you know, 15 to 25 sets, right? But that's not saying that volume is causing the entirety of that 8%, right?

(01:03:17):
If you took that same persondoing that amount of volume and
you said, Well, I'm gonna knockout your uh your genes that
allow you to turn on mTOR,they're probably not gonna get
8% growth.
They may not get any growth,they may get smaller, right?
Or we go, hey, guess what?
You're not allowed to sleep,you know?
And when I say that out loud toyou, you go, Well, of course, I

(01:03:37):
know that. But when you look at that graph and it tells you a very simple thing, and it says this growth, that volume, your monkey brain wants to go, well, I'm doing 15 to 25 sets; I'm not gonna do less than that, that's less growth. But we don't think about those interacting variables. And that's in the construct of a study where we isolated cause and effect. But cause and effect is not as simple as it is in

(01:03:59):
physics, where if I tap thatpen, the pen's gonna move.
That pen is in a storm, andI've tapped it, and it's gonna
get like explodes in the numberof different vectors and impacts
that it has and decelerations,accelerations.
And that is true in humanresearch all the time.
So it's a contributing factorto the overall constellation of

(01:04:19):
things that impact us. So when we step away from that, we're still in that harsh reality, but now we don't have the ability to identify independent effects. Now, anecdotally, you're going, well, I don't need science to tell me this. When I ate more protein, I looked better, I gained more strength, I lost more weight. But what else happened when you ate more protein?

(01:04:40):
And how often do we have peoplecome to us who create a false
association?
Because we're associationmachines, or we wouldn't be here
today.
That twig, probably a tiger.
Well, that wasn't a tiger, butif you didn't think it was a
tiger and you didn't associatetwig snapping with danger, the
tiger would eat you and youdon't get to pass on your genes.
So we're getting better andbetter at falsely associating

(01:05:02):
twig snapping with tigersbecause it's good enough to get
the tribe alive today to talk ona podcast, right?
So, what that means is that Iget people come up to I had a
guy come up to me when I was inin London and he goes, Hey man,
like every time I take protein,my elbow starts to hurt.
I've been thinking about goingoff protein powder.

(01:05:23):
And this is a guy who's ahigh-level calisthenics athlete,
looks incredible.
Like, you know, he could hecould take a shirt off in any
locker room and be like, okay,what bodybuilding program are
you on?
And he's like, I I don't Idon't lift weights, you know.
But he starts taking protein atcertain times.
What motivates the decision forhim to start taking protein?

(01:05:43):
I'm getting serious about mytraining.
I need to get my protein up.
I'm cutting right now.
I'm going through a highervolume block, my sleep's a
little disrupted.
It's being used as I come totalk to him, probably in more in
a prophylactic manner at timeswhen he's pushing the envelope.
Now he's associating it withthe thing he's paying attention
to.
I got this new protein.

(01:06:04):
You know, I've heard thingsabout the supplement industry.
I operate in a relativelyskeptical place, maybe a bit of
a naturalist fallacy.
Should I be focusing on realfood?
Man, elbow's hurting everymorning or every afternoon.
I'm chucking this powder inhere.
Is that hurting my elbow?
And we see like the entiretraditions of reality and life

(01:06:24):
built upon that, right?
So I think it's important tounderstand that you will
associate things that you drawattention to.
Like you could have, like I,just as likely, if we think,
like, I can't think of amechanism of why protein powder
would hurt his elbow, right?
But he doesn't know that.
He hasn't thought through thosethings.
And if I presented him with,hey, when's the last time you

(01:06:45):
changed your toothpaste style?
You know, and he's like,actually, same time period as me
taking this protein powder.
I'm like, well, do you knowabout all the additives in your
in your toothpaste?
Like, no.
And you look at the well, lookat the back of it.
What's on there?
Oh my God, what is what isthis?
What's what's fluoride?
I don't know.
Maybe you should Googlefluoride.
Google fluoride and healthrisks.
Like, oh shit, it's mytoothpaste.

(01:07:05):
And it was neither one, right? So that is the issue with anecdotes, even well-done anecdotes, which is essentially just everything on the hierarchy of evidence below randomized controlled trials: prospective cohort studies, epidemiological research. Now, we don't throw those out. That's the cynic sneaking in. We acknowledge that really, really well-controlled,

(01:07:28):
well-observed, uh, you know, anecdotes are essentially most of science. We can't always do an RCT on something, and we definitely can't meta-analyze it. There's a classic meta-analysis on the use of parachutes, which shows you how even the heuristic of evidence-based practice can be taken too far, to absurdity.

(01:07:49):
And it states we could find norandomized controlled trials on
the use of parachutes, andtherefore the recommendation to
use parachutes when jumping outof planes is not evidence-based,
and we should question theirutility in skydiving.
And it was literally publishedjust to prove this point of
saying, hey, like, and we seethis all the time in terms of

(01:08:12):
our field now.
You want cholesterol to not beassociated with negative heart
health, you want saturated fatto not be associated with it.
Give me the RCT where youexperimentally gave people high
cholesterol and high-fat dietsthat they followed in a
metabolic chamber until theydied, and then let's see who
died first.
And you're like, oh, wait,wait, you want the 30-year

(01:08:34):
metabolic ward, thousand-person RCT? That's not a thing. The best thing I can give you is what happened to cholesterol and blood pressure over an 8- to 12-week or maybe six-month study. And then, what happens if I do a large-scale epidemiological prospective cohort study where I sample people and give them food frequency questionnaires and then follow up with them 20,

(01:08:57):
30 years later and look at theodds ratios of dying.
And then I do some statisticalcontrols on smoking, obesity,
socioeconomic status, uh,environmental stuff, and
everything I could think of andsee if both of those kind of
gives an inference of what's inthe middle.
But the thing about thisheuristic is I can say that and
you go, oh, that makes sense.

(01:09:17):
But then also in later in somesome some part of this
conversation, I'm going to say,hey, don't, if someone is
saying, hey, you know, cortisolis a stress hormone and
therefore you shouldn't trainfasted as a menopausal woman
because of X, Y, and Z, and theyhave this long mechanistic
explanation and they talk youout of certain ways of training
or eating.
And I go, well, you should justfocus on outcome studies.

(01:09:39):
Like, if they're telling you that fasted training is going to harm body composition, well, then you want to look at studies with the outcome of body composition. And that's true. But in the context of public health nutrition, it's not true, because we cannot have that evidence.
So even an accurate heuristic,a useful model that we have

(01:10:00):
established, and that's been well established and accepted since the 90s with Sackett, where we took evidence-based medicine and ported it over to evidence-based practice in strength and conditioning and nutrition, it is not always true. So a big part of the evidence-based hierarchy is that you have to use the best available evidence.

(01:10:21):
And then you have to scale yourconfidence, your precision, and
your specific claims to thebest available evidence.
So many times we have tooperate and we have to make
decisions in absence of an RCT,in absence of a systematic
review, in absence of ameta-regression or
meta-analysis, and even inabsence of anything except
observational data or short-termnon-outcome-based RCTs, and

(01:10:46):
then observationalepidemiological research, which
is basically the whole of publichealth.
But if you don't accept that,or if you manipulate that, or if
you're not fully aware of thosenuances, you can be operating
where you felt like, hey, no,Helms told me I need to focus on
outcomes.
Therefore, I'm not buying thiswhole cholesterol-saturated fat

(01:11:06):
link to heart health becauseshow me the RCT, show me the
outcome data.
I want to see the RCT where youyou killed more people by
feeding them, you know, thiscertain diet over 30 years.
So it is, it's it's complex.

Carol (01:11:19):
Before I started working with Philip, I had been trying
to lose weight and was reallystruggling with consistency.
But from the very beginning,Philip took the time to listen
to me and understand my goals.
He taught me the importance offueling my body with the right
foods to optimize my training inthe gym.
And I lost 20 pounds.
More importantly, I gainedself-confidence.

(01:11:42):
What sets Philip apart is the personal connection. He supported and encouraged me every step of the way. So if you're looking for a coach who cares about your journey as much as you do, I highly recommend Philip Pape.

Philip Pape (01:11:59):
Yeah, it makes my head hurt because it it's what I
see every day uh happening onsocial with the cherry picking
and the use of studies asammunition almost, and then the
cynicism that we talked about aswell.
So I did have one follow-up onanecdotes, and that is, well,
kind of two.
One is where does the n equalsone, where does the individual

(01:12:20):
come in in terms of not theexample you said where we're
finding causation where itdoesn't exist so much as
incorporating what does existand is understood to be the best
available evidence, and thenpivoting from there into
refining it for you?
That's where I'm coming from.
And then also totally differentquestion is do we use anecdote

(01:12:42):
predominantly for these hypotheses today? Or are a lot of the hypotheses we come up with in nutrition science built upon well-established science, if that makes sense?

Dr. Eric Helms (01:12:52):
Good question.
Uh, good questions.
And I'm gonna tackle the firstone.

Philip Pape (01:12:55):
Very different.
Yeah.

Dr. Eric Helms (01:12:56):
So yeah, yeah.
No, but they're they'rerelated.
So the first one is basically,you know, well, how do we use
this on an individual basis?
The trainer's going, okay, thishas been great, guys.
I hung with you, but I'm gonnago train my clients on Monday
through Friday, what do I do?
And that is basically thequestion of what is the best
available evidence?
And the best available evidencedepends upon your application.

(01:13:18):
If I'm gonna be writing aposition stand uh in the Journal
of the International Society ofSports Nutrition on broadly
what should bodybuilders do.
And by the way, that's on theway.
So check that out.
Coming to a peer-reviewedpublication near you.
Um, the best available evidenceis gonna be RCTs,
meta-regressions, andmeta-analyses, right?
And a trainer, though, who isgoing to be training Tom, who

(01:13:42):
wants to do his first show.
Apparently, your uncle reallytook your advice seriously at
Thanksgiving last year.
Good job, Tom.
The best available evidence onwhat you should do when you hit
a fat loss stall is not ameta-regression, meta-analysis,
or RCT.
Uh, it might be informed bythose general principles, but it
is looking at the bestanecdotal data you can get from

(01:14:05):
them. And I think that's where a lot of people slip up, because it can easily become, hey, that worked for me. But that's okay, so long as it's done in a responsible, well-controlled manner, you're not changing six things at a time, and it's not coming from a heavily motivated position where, you know, I believe only low-carb diets can get you shredded. So I've tried everything. Well, except, of course, you know, the thing I accept could

(01:14:26):
not possibly be true of tryingto have carbohydrates, you know,
that that's an example.
But if you are operating withthe right philosophical stance
and you don't have a tremendousnumber of uh strongly held
beliefs that are not informed bygood data, then well-reasoned
anecdotes that are informed bythe principles and relationships

(01:14:46):
we see in the data are the background to the decisions we make. So, for example, instead of going, 15 to 25 sets, based upon the Pelland meta-regression, should produce 8% hypertrophy within my client: my client's stalled. I've looked at a lot of their nutritional factors, sleep. Uh, they want to make more progress by next year.

(01:15:09):
What can I do to turn up the rate of progress? Well, they're only doing 10 sets per muscle group per week and they have the time to do more. Um, they're at a moderate level of efficiency. I know from the Pelland meta-regression that there are diminishing returns; you lose efficiency the higher the volume you do, but you get something back. So long as they can recover from it, they like it, they're not getting joint pain, and they can maintain the same proximity

(01:15:31):
to failure and exercise selection, like they do in those studies, because they've got a controlled lab environment, I can increase the dose and maybe get marginally faster gains. Let's try that. Not, I'm gonna expect to go from 6% to 8%, and all of my clients should be on 15 to 25 sets, right? So that is the type of thing: you understand the general

(01:15:54):
relationship of volume to hypertrophy with all of the context, and then you apply that in the situation.
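
As a toy model of the diminishing-returns relationship being applied here, something like the sketch below; the curve shape and numbers are assumptions for illustration, not the actual Pelland et al. estimates:

```python
# Toy dose-response model: hypertrophy (%) as a saturating function of weekly sets.
# Parameters are invented to illustrate diminishing returns, not fitted to any data.
def expected_gain(weekly_sets: float, max_gain: float = 9.0, half_saturation: float = 8.0) -> float:
    return max_gain * weekly_sets / (weekly_sets + half_saturation)

for sets in (5, 10, 15, 20, 25):
    print(f"{sets:>2} sets/week -> ~{expected_gain(sets):.1f}% growth over a training block")
# Going from 10 to 20 sets buys far less than going from 0 to 10 did:
# each extra set returns less, which is the "you lose efficiency" point,
# and none of it guarantees what any single client will actually experience.
```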
But then here's the criticalpiece.
You have to accept that if itdoesn't work or does work,
better or worse than would berepresented by that
meta-regression, then both ofthose things can be true in the
same reality.
And you don't have to figureout why.

(01:16:15):
And you can't.
And that and that's a toughthing for people who like
science because that's not whatwe want.
We want concrete answers, wewant math to work.
But probabilities are a a typeof math that people who actually
like math don't like, you know,you know, which which is, you
know, like you'd start talkingabout quantum states and and

(01:16:38):
some of the fuzzy math we don't fully understand, to someone who just really, really likes things to make sense, and they're like, that's not fucking math, you know. Pardon my French. So I think that's where people struggle with it. They go, well, when I reduced volume, I got better gains, so Pelland's meta-regression is nonsense. And you go, okay, hold on. So you saw in yourself, in an uncontrolled environment,

(01:17:00):
something that did not match up with 35 studies, a thousand individuals in multiple labs, a properly done meta-regression, and you're rejecting that but accepting your own reality. And as much as you think you're being rational, you're assuming that your reality is fundamentally different or more

(01:17:22):
valued.
It's more relevant, and itshould be what you go with, but
you don't reject the metaregression.
You don't assume those thousandpeople aren't real, because
that's actually what you'redoing.
And it's easier for me to dothat because I've ran these
studies.
I've seen the the 15 peoplecome in and I've put an
ultrasound on them and watchwhat happens.
I've made them go to failure.

(01:17:43):
I've prescribed differenttraining protocols to different
people.
So to me, I have my monkeybrain connected to it.
And it's easier to go, well,for whatever reason, I don't
know.
I'm not even gonna venture aguess.
I could if I was your coach andwe would try to figure out what
was the bottleneck to whyvolume didn't go the way we
wanted, but I'm not gonna rejectthe relationship.
I'm just gonna go, okay, well,for whatever reason, kind of

(01:18:04):
your cap to a benefit is 10 sets. 15 sets didn't go well; back to the drawing board. Not, let me throw out drawing boards because science is bullshit, you know, and I'm gonna now follow a Mike Mentzer approach or something like that. Yeah. So I think that's where a lot of people get stuck. So you need to be able to hold two realities: that there

(01:18:26):
is a relationship that's out there, but it is describing an average, a mean. Uh, and it's not that we're all special snowflakes, it's just that, for that isolated effect, like I said, 80% of the relationship between hypertrophy and volume is not related to volume. It's related to all the other things that volume sits in the

(01:18:48):
background.
It's something you have to holdon to to hold those two things
together.
So I guess what I'm saying tosummarize it really simply,
Philip, is that the bestavailable evidence, once you've
understood these principles andrelationships, is the anecdotal
realities and the changes on theground.
But it is fundamentally flawedas well, and you need to stay up
to date with both.
Because this tells you whatlevers you can pull and what it

(01:19:10):
should do.
But it doesn't matter if itshould do this or shouldn't do
that and something else happensin the real world, because
that's the only data thatmatters once it comes down to
coaching people.

Philip Pape (01:19:20):
Yeah, I think it's actually very helpful and it
goes back to objectivism that ifyou do something and get a
result that is vastly differentfrom what you now accept as an
empiricist as the best availableevidence, that's a really good
data point because of thatdeviation to then put you on a
different path with yourdetective skills, or if you have

(01:19:42):
a coach helping you out.
That's what I get out of that.
And like you said, the specialsnowflake thing is important
too, because I think peopleconfound being different with
being uniquely special,completely separate from any
human that ever existed, right?
Which are two differentconcepts.
So I like that.
And I was also kind of laughingin my head because when someone
asks me what I should do orwhat number should this be, more

(01:20:04):
and more my answer is try thisand get back to me because I
just don't know.
Yeah.
Let's experiment.
Let's eat that banana nowbefore you work out and see what
happens.

Dr. Eric Helms (01:20:15):
Can I give you an example of of how I think one
should deal with this whenthey're presented with things
that don't seem to make sense?
I remember being asked by uhIan McCarthy way back in 2012
why I thought diet breaks andrefeed seemed to work in our
clients.
And I was on a podcast withhim, for those who don't know,

(01:20:38):
this is old-school, kind of evidence-based stuff in the YouTube and Facebook days. Ian McCarthy was someone who was promoting evidence-based practice. So was I. Uh, it was early days for me and my journey, but late enough that I kind of understood some of the things we were talking about, maybe less explicitly but more implicitly ingrained. And, you know, the data on refeeds and diet breaks was

(01:21:01):
highly mechanistic, highlyspeculative.
And the narrative around it wasyou get a metabolic boost and
you can eat more but lose weightfaster.
And that wasn't quite what wewere seeing, but generally we
were seeing that when we woulduse refeeds and diet breaks,
we'd get better outcomes, moreconsistent fat loss, uh, and
people looking better on stage.

(01:21:22):
And the cost was it takes alittle longer.
You have to start a littleearlier.
Um, but net benefit prettylarge.
And the experiential effect of,you know, if I could predict
what would happen in theresearch, if we did a whole
bunch of studies on physiquecompetitors using diet breaks
and refeeds, which we stilldon't have, by the way.
We have an athletic population,so there's like four studies,
it's more, but it's really notwhat we would need to confirm

(01:21:43):
our anecdotal observations wasman, this thing has a moderate
to large effect on multipleoutcomes of interest, but
primarily body compositionchange.
And it seems to make theprocess easier as well.
Like, whoa, that's a huge win.
And, you know, and Ian askedme, like, well, uh, but how?
And my response was, I don'tknow.

(01:22:04):
And I think that is I'm proudof myself looking back then
because I could have saidsomething hand-wavy about the
impact of leptin and downstreamregulation and metabolic rate,
or uh the inability of us tomeasure the actual true energy
expenditure change because we'rejust measuring BMR or resting

(01:22:26):
energy quotient and not the effect of NEAT. And maybe you do get this energy-boosting advantage, which I would say now, based upon the data we have, we know is probably not true. I would have been wrong. I would have been hand-waving, I would have been selling you a sexy story, I would have been leaning into my desire to have an understanding of why the thing I'm doing is working, and

(01:22:49):
constructing a logicallyconsistent, potentially true
outcome that people would havelatched onto because I'm Eric
Helms.
And I've got the anecdotalproof.
And, you know, we're goodcoaches, and I've seen it work
on myself.
And now I've created a greaterlevel of bullshit that has to be
disentangled once the datacomes out, and more of a placebo

(01:23:10):
effect and tribalism aroundthis approach and a belief that
that is good, and then peoplewho are harder to teach this
kind of thing that we're talkingabout on this podcast later.
But instead, I I was again,I'm, you know, give myself a
little bit of the pat on theback.
I said, I don't know.
This seems to work, and I'mokay with that.
And I'm not claiming that it'ssome metabolic effect or

(01:23:32):
advantage, because it's probablynot that, but something else is
happening.
But just because we haven'tmeasured it yet doesn't mean it
doesn't exist, which is wherethe the tendency, the logical
fallacy that empiricists canlean towards is a desire to have
answers.
And I think you have to leaninto a willingness to accept
that science creates morequestions.
We do get answers along theway.

(01:23:53):
So don't, you know, like thisthrow you throw your hands up.
But we shouldn't try, like whenyou read discussion sections,
it's a lot of speculation.
And that's fine, it's greatbecause this now answers your
second question.
How often does research stemfrom anecdotal practice or other
research?
It's a good mix of both.
And it depends upon you knowwhere that study came from and

(01:24:17):
the line of research thatpresupposes it.
And ideally, it should beinformed by both.
A lot of the PhD uh studentsthat I work with here at AUT,
the study design we have looksto investigate what's being done
in the field, and this isspecific to uh the sports
science of physique, sport, andbodybuilding, as well as comb

(01:24:38):
the literature.
So, for example, I have astudent, shout out to Takahiro
Itagaki from Japan, who islooking into proxies for
hypertrophy, because it's reallyhard to study every single
muscle, every single combinationof exercises to figure out
well, what's the best exercise Ishould choose for hamstring
hypertrophy in the context of abodybuilder and a full five-day

(01:24:58):
program, yada, yada, yada, yada,right?
Um, we got some head-to-heads of a seated leg curl versus a lying leg curl, an RDL versus this, but just imagine how many variables we'd have to layer to empirically get to the best exercise selection from first principles. Give me a hundred years, we'll be 10% there if we had a million dollars a year. Not going to happen. So, what do we do?

(01:25:20):
Well, we can comb theliterature and see which
principles exist related to um,you know, uh resistance profiles
and muscle lengths, which wecan get some generalizable
principles, but we should alsoprobably sit down with top
bodybuilding coaches who havesome epistemic commitment to
science, like maybe they havebachelor's degree in exercise
science and a proven trackrecord uh and experience in the

(01:25:44):
field, and do something like aDelphi survey, which is where
you take experts on a giventopic and you try to find a
consensus to certain percentage.
And that can give you somehypothesis generation, or you
can do a focus group,qualitative interviews about
people's uh subjective thoughtson something.
Or you can even getquantitative survey data.
Hey, what exercises dobodybuilders use?

(01:26:06):
Are there differences between the pro and amateur level, and say wellness versus bikini? Because, you know, they have different categorical requirements in the judging criteria, to grow different parts of their legs, you know, glutes versus quads and glutes. Or is there natural selection, the kind of raw, uh, if you will, capitalistic nature of performance where the cream

(01:26:28):
rises to the top? Are there some "success leaves clues" types of anecdotes we can get from that? Are all top bikini competitors doing specific glute exercises? Are all top wellness competitors doing specific quadriceps exercises? Let's put that in the mix.
So let's form a hypothesisbased upon a scoping review of
the literature and we identifythe gaps, but also what could

(01:26:51):
infer exercise selection, aswell as top coaches and top
athletes, what are they doing?
And if the Venn diagram crossesin those two places, that's a
strong hypothesis.
But if they don't converge atall, interesting.
This is understudied, or maybeit's based upon traditionalism
and people are ignoring thescience, or there's a lack of
access to the science.

(01:27:11):
So even that process can give you a hint about where you should go next. And that is literally what we do. So Takahiro is doing an e-Delphi study, and he's also done a scoping review, to try to inform which proxies we should look at to infer hypertrophy long term. How are coaches telling, you know, before the hypertrophy has

(01:27:31):
actually occurred, whether ornot an exercise is effective?
It's based on the pump, uh thesubjective soreness, the the
what, you know.
Um are they do they have anyaccess to technologies that have
become cheap enough andminiaturized enough where
they're confident that they aregiving something useful, uh,
like Moxy data or velocity or what have you, versus what can we do in the lab that might be useful in an

(01:27:54):
applied setting, but maybe notfor practitioners.
And then what do we look at?

Philip Pape (01:27:58):
Well, thanks for taking us behind the scenes,
because that that was reallyenlightening as to how this all
closes into a loop.
I think a lot of people thinkthis the studies are done in a
vacuum or the hypotheses comeout of thin air.
They're just, you know, there'sa lot of thought behind this.
And you just touched on all theprinciples we we addressed
today, not only the ontology,um, but but combining the

(01:28:19):
qualitative and the human andthe anecdotal elements into this
in a in a very systematic way,and even using proxies and
things like that to make itpractical and getting a faster
uh path to something, you know,this year as opposed to a
century from now.
All good stuff.
What I want to close with,because we're low on time, is of
all the resources available outthere and podcasts and like

(01:28:40):
MASS, you know, is there one thing that the listener can go to next that would help them start developing or training their epistemic thinking, kind of in a very accessible way? Anything come to mind?

Dr. Eric Helms (01:28:50):
Yeah, selfishly.
And this is where maybe youknow, because I sell
information, you go back to whatI was saying before.
It's like, well, can I trustEric that he is the best person
to teach me to learn how tolearn?
I think I am.
I may not be, um, but Icertainly am motivated to tell
you to check out Mass ResearchReview because we don't only

(01:29:11):
just review the research on amonthly basis.
That's the principal thingyou'll get in the PDF or the
online issue.
What's pretty sick in the lastmonth and how do you apply it?
But we also have a guide tointerpreting research and a
guide to interpretingstatistics, which is
specifically built to built toteach you this stuff.
And I include a healthy dose ofnot just here's what you should

(01:29:32):
do, you know, like bottom lineit for me, uh, or a healthy dose
of whack-a-mole, which I haveto do.
I always include how to thinkbetter.
So, just as an example, a fewissues back, I wrote a whole
piece on whether or not theeccentric is a waste for
hypertrophy.
It's just a fatigue generatingthing that if ideally we could

(01:29:53):
remove it from exercise, you'dbe better.
And I didn't just I could havemade it a freaking four sentence
article like.
Like, hey, here's the mostrecent meta-analysis and 20
other, you know, meta-sciencestudies that show it's either
neutral to positive, includingthe eccentric, this is wrong,
kill this baby in the crib.
Instead, I went down a rabbithole and talked about Karl

(01:30:14):
Popper and empiricism and how do we know what we know. And, you know, like, when should you question a model, what is a model, and what is theory? And I talked about the meta-analysis. So we try to upskill our readers in not only the access to data that they have, but also their ability to interpret it

(01:30:34):
over time.
So they become less and lessreliant on us and just going,
well, I have no idea, but whatdoes Mass think?
They go, you know, I think Masswould probably think this.
Um, and they are able to extendbeyond the walls of what we can
cover, because we're onlycovering eight to nine topics in
each issue, and we are combingthrough hundreds of pieces of

(01:30:56):
research that come out eachmonth, which is only about 10%
of all the research that comesout in our field.
So there's a lot of informationout there.
So that's the first thing Iwould say.
And the second thing I wouldsay is more generally, if you
want something less biased, uh,the most accessible, least
likely to be wrong piece ofinformation that you can
approach are systematic reviewsin high-impact journals, or what

(01:31:18):
we call Q1 or Q2 journals, uhor position stands that are done
in the same manner as asystematic review from large
organizations where they pullthe experts.
So you have experts identifyingother experts and using a
systematic approach to give youa qualitative interpretation
rather than a meta-analysis,which could be done wrong, or
even if it is done right orwrong, you have no idea how to

(01:31:40):
do that unless you have a degree in stats. Um, these are at the top of the evidence hierarchy. And if you kind of use those few precepts I said, you know: so you go into Google Scholar, has this thing been cited once or has it been cited a hundred times? You know, Google the journal and see what its impact factor is in our field. If it's a number over two or three, great.

(01:32:01):
Then that's kind of like the mid-range of an impact factor. Um, is it PubMed indexed? That's a good sign. So if you get something PubMed-indexed and well-cited (obviously, if it was published last week, it won't be cited a lot) in a high-impact journal: a systematic review, scoping review, umbrella review, those types of things, but not a narrative review or a position stand.
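
Pulling those screening precepts together, here is a small illustrative filter in code; the thresholds and field names are assumptions based on what Eric just described, not an official rubric:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    design: str              # e.g. "systematic review", "RCT", "narrative review"
    citations: int
    journal_impact_factor: float
    pubmed_indexed: bool

def quick_screen(p: Paper, recent: bool = False) -> bool:
    """Rough 'least likely to be wrong' filter from the heuristics discussed:
    favor systematic/umbrella/scoping reviews in indexed, decently cited journals."""
    strong_designs = {"systematic review", "meta-analysis", "umbrella review", "scoping review"}
    well_cited = recent or p.citations >= 10  # skip the citation check for brand-new papers
    return (p.design in strong_designs
            and p.pubmed_indexed
            and p.journal_impact_factor >= 2.0
            and well_cited)

print(quick_screen(Paper("systematic review", 120, 3.5, True)))  # True
print(quick_screen(Paper("narrative review", 400, 5.0, True)))   # False
```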

(01:32:22):
It's like, say, for example, the Journal of the International Society of Sports Nutrition, the American College of Sports Medicine, the National Strength and Conditioning Association. These are all evidence-based organizations that actually have peer-reviewed journals attached to them, and researchers and academics and practitioners working in

(01:32:42):
concert, typically people areboth, um, trying to give
practical guidance for peopleworking directly with humans in
sport, fitness, health, uh, whoare trying to get to the best
outcome.
Those are the least likelythings to be wrong, and even
when they are wrong, it is amatter of degree uh or it's just

(01:33:03):
being too conservative.
Like if you were to follow the2009 ACSM guidelines for
resistance training, you woulddefinitely get bigger, faster,
stronger, right?
Is are they slightly offcompared to what we would do in
2025?
Yes, but not to the point whereit doesn't work.
So for utilitarians, great wayto go.
And um, like I said, you don'tneed to have uh statistics

(01:33:24):
degree.
So those are kind of my twoparallel recommendations as far
as where people want to go ifthey want to try to get good
info.

Philip Pape (01:33:30):
Cool. All right, and on the second recommendation: after we edit the podcast, I'm gonna extract that as some instructions so people can see it in the show notes. Perfect. And then for MASS, go subscribe. I've been a subscriber for a while. I actually became a subscriber when I was a student briefly at UPenn, taking a positive psychology certification, and I was able to get the student rate.

(01:33:51):
So I'm locked in.
But uh, you can get the fullrate or the student rate at
Mass.
Uh, it's one of the firstthings I read every time.
And um, yeah, you're gonna geta lot of the flavor of what you
saw here.
Um, this was so much fun, man.
I like nerding out no matterwhat, even if it is one person
left, and I know there's a lotof guys listening and ladies
listening still.
So we're gonna throw in thoselinks.
Is there anything else uh youwant to send people to or are we

(01:34:12):
good there?

Dr. Eric Helms (01:34:13):
I think we're good there. I just want to say, you know, thank you for hosting what could be a very confusing or dry conversation, but hopefully we delivered it in a way that is engaging enough for the deep thinkers, and they can go out and spread the good word, maybe in simpler terms than we did.

Philip Pape (01:34:31):
That's the goal.
That's the goal.
And everybody who listens tothis is a deep thinker, Eric,
right?
Every everybody listening is adeep thinker.
So, all right, man.
Well, thanks so much again forcoming on Wits and Weights uh
for the third time, and um havea great day, man.

Dr. Eric Helms (01:34:43):
My pleasure.
Back at you,