
August 22, 2023 29 mins

There is a ton of research being published these days. Some good, some bad.

In this podcast, I’m joined by Phil Page to discuss how clinicians can find quality research, read an article, and draw clinical implications.

We’ll cover some great tips to ensure you are doing your best to stay current with the literature, but not thrown off in the wrong direction!

Full Show Notes: https://mikereinold.com/how-to-read-a-journal-article-with-phil-page


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
On this episode of the Sports Physical Therapy Podcast, I am joined by Phil Page. Phil's an associate professor and research director at the Franciscan University DPT program in Baton Rouge. He's also one of the editors of the International Journal of Sports Physical Therapy. And in this episode, we're going to talk about how to find quality research and review articles to best determine the clinical

(00:21):
implications that may improve our practice.

Mike (00:35):
Hey, Phil, welcome to today's podcast. Thank you so much for taking some time outta your schedule to join us.

Phil (00:40):
Mike.
Glad to be here.
Thanks for asking.

Mike (00:42):
Awesome.
Um, you know, I guess I think I say this a lot, but I think we're gonna have a great show today. I don't know, you and I have been, um, going back and forth, you know, with emails, trying to, you know, come up with a really great episode here, and I think it's gonna be really good. You know, obviously you and I have known each other for years

(01:03):
now, and I know how much this topic is, uh, a passion of yours, with so much journal reviewer experience for decades now. And, you know, as an editor of IJSPT, I mean, you really understand the concepts of digging into research, but then, more importantly, how we use that information to improve our

(01:23):
clinical practice, because that's the whole point of research, right?

Phil (01:27):
Yeah, that's evidence-based practice. And I mean, my passion, as you know, Mike, is, I love research, but making it fun and making it understandable for everyday clinicians, because it is so kind of nebulous. And I think it has a bad reputation with a lot of folks because of the bad experiences. You know, you mention the word statistics and everyone crawls

(01:48):
into a hole. And my goal as an educator now in academia is not to taint the young kids coming out thinking that this is terrible and they hate it. I really want them to appreciate it and be able to use it correctly.

Mike (02:03):
Right. I like that. And that's a good way of saying it, because I think you can use research in lots of ways. Right. You can use it to prove your point. You can use it to keep an open mind and maybe expand your thoughts. Um, you know, but there's multiple ways to use it. And you know, I would even say too, just like over the years there's so much research coming out that it's not uncommon for me to read an article and say, that was

(02:25):
really interesting, but I don't know what I can take out of this and put back into my clinical practice sometimes. Right. And it just depends on the article and the journal. But, um, you know, it's interesting.

Phil (02:36):
Yeah, that's where there's this kind of, we all go through this. If you, you know, have journals that you look at every month, you get the table of contents or whatever, and you flip through it. You don't have a specific question, but sometimes you're just trying to keep up. And then there are other times when you do have a specific question that you're trying to find an answer to. So you need to kind of have that in mind as you're going

(02:57):
through. What's your purpose when you're looking at these articles, and what are you trying to look for?

Mike (03:03):
Right. Yeah, no, I agree. And unfortunately, I think, you know, sometimes this isn't the most glamorous topic, like how to read research and how to take, you know, implications from it. But it's extremely important, and I think that's why I'm excited for this episode and really been looking forward to talking to you about this, because every time we do this, and you and I have chatted in the past about this, I always learn something

(03:24):
too, and it helps me digest the literature. Um, every time I've seen you give these talks at some of the big meetings too, about how to get better at doing this, I've grown. So I'm super looking forward to this, Phil. Yeah. Um, alright, let's dig in. Okay. Why don't we start with this, and I kind of alluded to this a bit ago, but, um, to me, I don't really

(03:46):
understand it, but there've been a ton of journals popping up recently. And you know, as a published author myself, my email is in the spam system apparently, because, OMG, I get so many emails from so many random things I've never heard of, where you just put a bunch of random words together and call it a journal. Um, but there's a ton of journals popping up.

(04:08):
Ton of formats now: open access, uh, pay to play, you know, uh, you know, the traditional models. Uh, why don't we start with this before we even talk about how to read a research article. Why don't we start at the beginning and say, how do we find quality research nowadays? How do we know what to watch out for, and what's good and what's bad?

Phil (04:27):
Well, I guess, I mean, it does start with the journal. Um, and there are specific guidelines; for example, the International Committee of Medical Journal Editors has specific guidelines that you need to follow. And if you're looking for a good, credible journal, you look for those guidelines, and obviously the biggest part of

(04:49):
that is gonna be the peer review part of it. Um, I'm in the same boat as you. Every day I'm getting, you know, spammed with, you know, "Dear so-and-so," and they get your name wrong. And they want me to do an article about a letter to the editor that I published, you know, years ago. Um, and I actually use this in my class now, because the

(05:13):
grammar's terrible in these as well, you know? Um, anyway, the problem with today: there's the issue of paywalls with traditional journals, you know, and people not having access. So then it became this, well, you know, you have to pay for articles.

(05:34):
Well, then this model started with, well, no, now we want you to pay as a writer to have this article published. And there are some great, credible journals, just like IJSPT, where you do pay a nominal fee just to submit. Right. They're open access, but they're peer reviewed.

(05:55):
We follow the editors' guidelines, you know, that kind of stuff. So you can't just say, because something's open access or has a publishing charge, that it's a bad journal or a good journal. Okay? We need to put all these things into context. Who is the publisher is the first thing. The first thing I look for is, you have your credible publishers, uh, like an Elsevier.

(06:17):
Those that have large publishing houses, that have highly credible research journals. There are some that I call journal farms. Those are the ones that have thousands of journal names that all sound scientifically right, and those tend to be the ones that you have to pay for, and they probably have a little bit less on the peer review.

(06:37):
Anytime I see someone sending me a request saying, we'll publish your article in 24 hours, I'm like, no, that is not happening.

Mike (06:45):
Yeah. I've seen that, and... wow. Wow. I'll just say...

Phil (06:50):
So what I do is, when I see a journal, I'll look at the title. I go to the publisher, see kind of where they're coming from. One of my favorite things to do, Mike, is to actually Google their address. And it's usually like a Delaware Inc. or a UPS store. Um, that's another clue that this is not necessarily a

(07:11):
credible journal. Go look for the peer review and editorial reviewers. Hopefully you'll see somebody you know, or have heard of at least. Um, but I don't want to discredit all the journals, because you may have one of these, we call those the predatory journals.

(07:31):
And there is a list that you can actually access too. It's called Beall's List, B-E-A-L-L-S, if you Google that, um, which generally is updated, but there was a lot of controversy with that list. But it gives you a starting point, uh, to look for these predatory journals. But when you start reading the articles, I can kind of tell

(07:52):
that they're letting a lot of things slip by that they shouldn't. So it really comes down to the article itself more so than the journal.

Mike (08:00):
Right, right. And to take a step back as to why this is important too, you know, peer review, um, assures that the research is unbiased and quality, with good, valid methodology, those types of things. There's so many things that go into that. I have seen some articles published in these ones that say, we'll get it published in 24 to 48

(08:21):
hours, which happens, and not only are there, excuse me, like, grammar errors, but you look at the methodology and you say, what reviewer would allow this article to be published with those methods that are clearly biased, which clearly do not result in a good, objective outcome? How did that get published? Right. It blows my mind, Phil.

Phil (08:43):
Actually, that's one of the reasons, and one of my passions again: we can't just take the conclusion when reading these articles, because even in these credible, high-impact journals there are errors. I have seen these several times, and I'm the first to send a letter to the editor. And there was a situation

(09:05):
where I was looking at the article and reading the tables and going, this doesn't look right. And sure enough, they had flipped the data; the abstract, uh, p-values were all off. And I sent, uh, several emails to the editor, or whoever was in charge of the journal. Um, and it wasn't until I got on social media and said, you

(09:28):
know, this is not right and you need to fix this, because it's totally wrong; in the abstract it was totally wrong. What they decided to do was to, um, publish a correction, like an erratum, you know, this is what... And there were seven points, Mike, that I found wrong with this article that had been published,

Mike (09:51):
right.

Phil (09:51):
and a year later, all they did was publish this little thing that said, we acknowledge this is wrong. If you go back to that article now, it's still in error. The whole thing. They won't change it.

Mike (10:00):
Wow. And they won't change it?

Phil (10:02):
No. And so those are still issues, which is why it's so important for us to be able to critically appraise them, because things do get through peer review even if it's a credible journal, and no one's perfect. Right? And here's the other problem, Mike: reviewers are a dying breed. Because of the time, you don't get paid to do it, and there's a lot of

(10:25):
pushback from these journals that are getting paid, you know, for these articles. And the reviewers are going, like, no.

Mike (10:33):
All right. That's awesome, Phil. I love it. So we've identified how to find a good journal, and I like some of the tips you said here too. Um, I would probably add to that just a touch and say, there's a lot of research out there, right? Stick to the name-brand, reputable journals for now. When you're an advanced-level clinician or you're

(10:53):
digging for an answer, you can start perusing, right, and start getting a little deeper in the literature. But there are some very high-quality, top-tier sports medicine and sports physical therapy journals, so just stick with those. And then probably just follow some good researchers and clinicians that you like on Twitter, because they're sharing articles. And I don't think we would all be sharing

(11:15):
bad articles around the internet. So I would add that, you know,

Phil (11:19):
Yeah, that's exactly right. You know, the other thing I wanted to mention too is, PubMed is a really good way to kind of filter through the bad journals. There's a claim I saw that about 10% still get through on PubMed. Um, but I always tell my students, you have to have a PubMed citation in order for me to think that it's

(11:41):
credible. Um, these journals that you're seeing that aren't in PubMed have a lot less chance of being a highly credible journal. But I always tell them,

Mike (11:51):
Sure.

Phil (11:51):
even if it is in PubMed and it is a good journal, the key is still the quality of the article itself. And it even goes beyond the level of evidence. You know, everyone says, oh, these level one studies; that doesn't mean it's a good article.
Mike (12:08):
Right. Oh yeah, for sure. I like the systematic reviews of level ones, and then systematic reviews of systematic reviews of level ones. Um, it's just...

Phil (12:16):
I'm doing one now.

Mike (12:21):
There's, I mean, I get it. I mean, we could go into why there are so many systematic reviews right now, but, you know, they're quick. They're easy. They're, you know, something residents and fellows can do, you know, pretty well. Um, you know, there's reasons why they're out there. They're often cited very well, so the journals love it. But, you know, I always say, and I put this on Twitter, right? And I think, I dunno, maybe I got a little heat.

(12:42):
I don't remember. Whatever. I try not to get too worried about Twitter, but I said, you know, putting together a bunch of bad articles into one systematic review doesn't make a good article, right? Like, you have to make sure. But, um, alright, so let's keep going with this thought. I like this. So we found some quality research. We like it, we're happy.

(13:02):
We think this could be helpful for what we do every day, right? This is the big question, right? And I'm sure you're gonna probably go bananas on this one here, but how do we approach reading this? Right? And, you can see I'm not a good podcast host, right? Because I just ask you these huge 20-minute questions. But, um, you know, how do we go about reading this? And this is where I think I've really learned a

(13:23):
lot from you over the years. And I know this is where you shine. Like, walk me through what goes through your head, Phil, when you start reading an article, and what we should look for. And not just yourself as, like, an expert clinician, but also, like, maybe from the lens of a younger professional, some early-career professional that doesn't wanna get too overwhelmed.

Both (13:42):
So what I think about as I'm going through an article, um, you know, you obviously have some type of question in mind. Um, I use a PICO approach: obviously, the Population, Intervention, Comparison, and Outcome. That's kind of your guide for what you're looking for. The title's good to start with. Um, but I find that titles are like newspaper headlines.

(14:06):
Sometimes they're a little over-embellished. Yes, yes. More so lately, I feel. Yes, yes, exactly. It's, you know, it's eye candy. Uh, and then you might scan through the abstract. But quite honestly, I'll look at the conclusion to see if it's kind of relevant, but I don't put any merit into it. Okay. All I'm doing is trying to go through this:

(14:28):
is this really gonna fit what I'm looking for? So the biggest thing for me, Mike, is the purpose. Okay? And this is where a lot of people, I think, miss the boat. And what you have to look for is a purpose statement, the hypothesis, the research question, 'cause that's the central core to any article.

(14:51):
What are you trying to do here? What's your purpose statement? One of the things that Barb Hoogenboom has just preached as the editor of IJSPT is that the purpose statement has to be consistent every single time you say it in an article. Which, I love that principle, because the purpose then tells me the design it

(15:13):
should be, which then tells me the statistics that I should see, and also gives me an idea of the sample. Okay, so get your purpose and understand your research question. And if there's a hypothesis involved, that's when statistics come into play, which I can talk about later.

(15:34):
But then I look for the design. Does the design answer the purpose or the question? Right? So if you're looking for a difference, I use keywords, Mike. I use: difference, effect, association, relationship. All right. So I understand the purpose. I get it. I like that. Um, to me, I think that is definitely an approach that I

(15:57):
don't always take myself. So, again, I just learned something here, that was great: make sure that I really, really understand the purpose. Um, what do you do next from here? Is this when you start dipping into the methods? Or what do you do from here? So, yeah, pretty much. Once I've got the design, right? And here's another hint, Mike: don't believe what the authors tell you.

(16:19):
They're gonna tell you it's a randomized controlled trial. It's not, you know. Sometimes that's funny. And why do they do that? Because it gives them more bang for the buck. Remember, this is sometimes a game: most researchers that are trying to publish need to have these randomized controlled trials, the highest level for a clinical trial, and

(16:41):
they also want to have significance. And so there's an inherent problem, and that's a problem with bias. What we're always worried about is publication bias. Not only does the researcher look for statistical significance, but also journals, I'm still hearing that they don't publish non-significant findings, and that blows my mind. Like, I would want to know if something didn't work.

(17:04):
Right. That's exactly, I mean, the answer is yes or no. I mean, why is "no" not valid? Yeah. So the next thing I like to look for, and these are some really easy things you can do as a clinician to help the process. We talk about quality, right? And quality is really about internal validity:

(17:24):
how well is the study done, and how well can it be replicated? That's what the science is all about, right? But quality can be broken down into two components: you have the actual reporting guidelines, and you have risk of bias. Okay. Luckily there are tons of tools on the internet that'll help you with that. If you go to equator-network.org, you can actually

(17:46):
find reporting guidelines for all the different types of research designs, and it should tell you what to look for throughout every article. The author should have done this, this, this, this. You could check it off. Mm-hmm. The other thing you do, and again, you've got the design, so you know the design: go to the risk of bias tool. For randomized controlled trials in physical therapy,

(18:07):
probably the most popular is PEDro, P-E-D-R-O. Um, there's another, uh, group that I just came across when I started in academia called Joanna Briggs, and this is a really good combination of a ton of different research designs that includes risk of bias and, um, reporting quality. So once I've kind of looked at these, I can use these

(18:28):
quality guidelines to help me evaluate the literature. But in order to really then go further, I look at the tests and measures, right? From the design, from the question, what are they measuring? What tests are they using? A lot of times you see what I call these kind of proxy

(18:48):
measures. They're not the real measurement. Right, so you have to be careful. What's the validity and reliability of those measures? What do the experts say? "Test, don't guess," you know. Uh, George Davies is huge on validity, reliability, the psychometrics. Um, and then I get into kind of the statistics behind it, and

(19:10):
again, I refer back to the design and the question, and the question should link me to the proper stats to answer the question correctly. Now, this is where it gets dicey for some people, because unfortunately we're not usually taught statistics by clinicians. You're taught by statisticians. So then we get into kind of the

(19:31):
statistics and making sure that we're answering our question. So people unfortunately fall for the word "significance" way too much. And I think it's a very bad word to use, obviously. And it's one of those words that, um, people think significant means a lot, right? And that's not what it means, right?

(19:52):
And even taking a value of a p-value of 0.05 is still arbitrary. And I use the example: if I review an anesthesia paper and the p of 0.05 is used for significance, you mean to tell me you have a 5% chance you could be wrong and kill someone? That's different, you know, than in our field, right?

(20:13):
5% being wrong is probably acceptable. But, um, what I like to look for are the clinical outcomes, those clinical statistics. What's the mean? Uh, the minimal clinically important difference, MCIDs. Just because it's significant, was it meaningful clinically? And then, what's the confidence interval, their ability to say

(20:35):
that that is where we believe the true value is represented in the population? This is where people lose it, Mike. They just look at significance and they go, oh, well, the treatment works. That doesn't mean it works in your population. Right, right. You gotta go back and look at the actual inclusion and exclusion criteria of the sample. Right, because the way that inferential stats work is to

(20:58):
infer the results of that sample onto a population, and you then have a confidence interval. That's what that means: I'm pretty confident that the true value of whatever this outcome is lies between these two numbers. It's not the range of values; it's the range of possible values, where there's one true value.

(21:19):
And it doesn't mean that it's gonna represent your patient, but you want to make sure that those are the things you look for. And lastly on that: read the tables. Don't just read the narratives. Go back and look at the tables, as I mentioned earlier. They could be wrong. Uh, if it doesn't make sense, doesn't pass the smell test, you know, start going, that doesn't look right. Don't be afraid to do that.

(21:41):
And I would say, sometimes I'll do like you said too: I'll read that, oh, there was a significant finding, that sort of thing. And then I'll look at the table, and I'll be like, well, you know, I don't know how clinically important that may be. It might be statistically significant, but I don't know about the clinical. That's exactly right. So I would agree. So, um, so what you're saying here is that the p-value

(22:03):
isn't the end-all-be-all, right? There's more than that, especially in the clinic, right? Clinical decisions should not be based on p-values. What a p-value is, really, let me go back, it's about null hypothesis testing. And you said it right: it's a yes or no question, right? So is clinical practice a yes or no question?

(22:24):
No way. Right, yeah. In a Petri dish, in a lab where everything's controlled, I'm good with statistical testing of the p-value. But I can only use it so much in the clinic, you know. And again, you know that those samples are not always representative of the patients in front of you, right?

(22:46):
Right. So, less p-value in our head? Well, I mean, obviously yes, there's a value to the p-value, but more confidence intervals, is that right? Confidence intervals and effect sizes. Right? And clinically important differences. Those are the three things you look for.
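To make those three numbers concrete, here is a minimal Python sketch using made-up data; the 10-point MCID threshold mentioned below is hypothetical, purely for illustration. It shows how a result can be statistically significant while the effect size stays small and the entire confidence interval sits below a clinically important difference:

```python
import math
import numpy as np

# Hypothetical outcome scores (0-100 scale) for two groups of patients.
rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=10, size=2000)
treatment = rng.normal(loc=52, scale=10, size=2000)  # true difference: 2 points

n1, n2 = len(control), len(treatment)
diff = treatment.mean() - control.mean()
se = math.sqrt(control.var(ddof=1) / n1 + treatment.var(ddof=1) / n2)

# 1. The p-value (two-sided; the normal approximation is fine at this size).
z = diff / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 2. The effect size (Cohen's d): the difference in standard-deviation units.
pooled_sd = math.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd

# 3. The 95% confidence interval for the mean difference.
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.4f}, d = {cohens_d:.2f}, "
      f"95% CI = ({ci_low:.2f}, {ci_high:.2f}) points")
```

With a sample this large, the p-value comes out far below 0.05, yet Cohen's d is only around 0.2 and the whole confidence interval falls well under a hypothetical 10-point MCID: significant, but arguably not clinically meaningful.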
Yeah. And I think that's a great way of saying it too, and a great way for new clinicians that are just trying to get used to

(23:07):
this sort of thing to figure out what to look for. I think that's a great way of doing it, for me. So, alright, we've gone through the article, we understand the methods, we understand the results, we know the whole purpose. What are some of your tips now on how to apply the information? As a clinician, what do you look for in the results? What do you look for, just in general, to say, how is this

(23:27):
article going to change what I do every day? What can you offer for some assistance to people? So once you've kind of looked at it, one of the things I like to do, Mike, is look at the limitations. Um, you should by this point be able to say there were certain limitations of the article. Every article should have limitations. There's no perfect article. And hopefully the author has synced up with your limitations.

(23:53):
Um, don't take the author's limitations at face value. Okay? They're there; at least it's the elephant in the room, but there's gonna be other things that they don't always bring out, right? Um, but at least they should pick up the big things. Um, those are what I call contextual things, right?

(24:13):
So you take the results, but within the context of the limitations. So you have to know those limitations. What were the areas for bias? For example, in particular the sample, um, you know, some of the outcome measures, those types of things that you've looked at. Um, so you'll know your limitations and the context within that.
(24:38):
with within that.
Now, the biggest thing I'm gonna look for, uh, go back to the statistics and look at that confidence interval. How much confidence do I have that that intervention would affect or have an influence on that specific individual right in

(24:59):
front of me? And that's where the confidence interval comes in. That's where your effect sizes or whatever come in. So you're gonna apply the results to that patient if it's appropriate, but remember that it doesn't mean it's gonna work, right? There's still a 5% chance. Exactly. Point oh five. Right? Right. Um, and remember, that's the 95, you

(25:21):
remember, you're working on the 95% curve of a normal distribution. So there's always a chance that this isn't gonna be as applicable to that patient. But you try it; you make sure that everything else, uh, fits in. And kind of the last thing that is important about doing this is, being able to read the research, as we've talked about, is not just about

(25:43):
applying it to a patient. We need more peer reviewers. Maybe we need more authors, right? We need more people to understand this and not be put off by research and statistics and frightened by it, and be able to actually do this on a regular basis, besides just applying it to patients. And as you've kind of seen today, research is not easy.

(26:05):
If it were easy, everyone would be doing it. Right. But every time you do research, something goes wrong. Right. Every time. I agree. Welcome to research, and this podcast too. It just happens. That's right. Just the way it is. I mean, I wish people understood that a little bit, especially the people that are so overly critical on social media. Um, yeah. It's really challenging, and there's a lot of

(26:25):
work on so many levels to make sure these work. So for the listeners, I want you to take a step back here. Phil just walked us through a really next-level, expert view of how to read a journal article. I don't want you to feel, uh, nervous about that, anxious about that, that Phil's thinking of this from a completely different perspective with his experience as an editor and

(26:46):
reviewer of journal articles for years. You can still apply those basics. Scale back to the things you said. Make sure the article's purpose is there. Make sure that the methods match the purpose. Make sure that when you're reading the results and the limitations, you're thinking: is this applicable to the person that I'm interested in answering this question for? Right? Is this for the person in front of me?

(27:07):
And I think if you approach it from that way, from a quality journal and a quality article, then I think you can get a lot more out of these journal articles. So, um, Phil, that was awesome. I know you gotta get going. I apologize, uh, for keeping you so long. But thank you so much for joining us and sharing these tips and information for people on how they can get the most

(27:29):
outta journal articles. That was awesome. Thank you. Thanks, Mike.