Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Introducer (00:02):
Welcome to the Dr Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine.
We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine.
Continue your learning after the show at
(00:23):
www.drjournalclub.com.
Josh (00:31):
Please bear in mind that this is for educational and entertainment purposes only.
Talk to your doctor before making any medical decisions, changes, etc.
Everything we're talking about is to teach you guys stuff and have fun.
We are not your doctors.
Also, we would love to answer your specific questions.
On www.drjournalclub.com you can post questions and comments for specific videos,
(00:55):
but go ahead and email us directly at josh at drjournalclub.com.
That's josh at drjournalclub.com.
Send us your listener questions and we will discuss them on our pod.
Adam (01:10):
And welcome to the Dr Journal Club, where the questions are made up and the research doesn't matter.
Josh (01:18):
You caught us in the middle of a deep conversation about different ways to make coffee, but I like that intro.
Adam (01:26):
Why don't you make a French press?
Why are you meal prepping your Keurig like a freak?
Josh (01:32):
First of all, it's not a Keurig.
As I explained to you, it's an espresso, which is another level of, like, hoity-toity.
I don't know, I hadn't actually thought about that.
Oh, I know why, because it's going to be cold, so well.
Oh, so, like you say, like, have a hot water maker, make a pot of French press and then just, yeah, that's actually a really good
(01:54):
idea.
I should probably just do that.
That would have been a lot cheaper.
Adam (01:56):
Yeah, like the French press I have is like metal, so like the glass ones suck.
Yeah, they're classic, but the metal ones are nice.
Josh (02:07):
I miss French press.
I go through phases, like I did mokas for a while.
You know those things they do in England.
Adam (02:15):
I haven't done.
Josh (02:16):
Yeah, they're like these little.
There's water on the bottom, you put it on the stove top and it like percolates.
Adam (02:23):
Oh, the moka pot, yeah, yeah, yeah, yeah.
Josh (02:25):
Yeah, yeah, yeah, yeah,
yeah so.
Adam (02:27):
My moka pot broke, though.
I was pissed.
Josh (02:29):
I know, I love those things, but they don't always last forever.
Anyway, what are we talking about today?
Adam (02:36):
You know what you should get?
You should get like a little sand, like, convection oven, so you can make your own Turkish coffee.
Josh (02:42):
Oh, so speaking of Turkish coffee, okay, last thing, I could talk about coffee all day long.
My neighbors are Muslim and they basically live on Turkish coffee.
They're from the Middle East, and it's like, we went over there a couple months ago and they served us Turkish coffee, and I
(03:03):
haven't had Turkish coffee since I lived in Israel when I was younger, and I forgot how much I love it.
They call it botz over there, which means mud, because that's literally what you're drinking at the bottom anyway.
And I just love the, I don't know, I love the culture of it, I love the history of it, I love the ritual of having these little things, and so I ended up
(03:26):
buying some.
So I got them to help me find proper Turkish coffee, because I think it's like cinnamon that's mixed with it or something like that.
There's a spice.
Adam (03:35):
Not cinnamon, it is cardamom.
Cardamom, okay.
Josh (03:42):
And so they approved, and then I bought the whole apparatus and I did it for a little while.
I really liked it.
It is a bit of a pain to prepare it, but yeah, it's cool, because you have to wait for it to boil, just so, and then you have to take off the top and, anyway, now we're really wasting time.
Okay, don't go, listener.
We're going to talk about research here in a second.
So this is my third attempt to bring us back to research.
(04:04):
Let the record show, all right.
So what are we talking about there?
Oh, Ashwagandha, we're talking about Ashwagandha.
Yeah, all right.
Yeah, what do you got for me?
Is it my paper?
Yeah, you recommended it.
You've been like, every time I've been like, let's do this, let's do this, you're like, well, what about the Ashwagandha paper, Josh?
And now we're doing your Ashwagandha paper.
Adam (04:24):
Well, that's because you
want to keep doing these weird
methods that don't matter.
Josh (04:30):
Everyone that listens to this podcast is a methods fan.
I just want to say that, and we should have a poll.
People should call in and be like, yeah, no, I'm a methods guy, I don't know what this guy's talking about.
All right, Ashwagandha, back in focus.
I need to drink my coffee.
All right, you start us off.
Adam (04:49):
Can you spell Ashwagandha
without having to Google it?
Josh (04:52):
No, and why should I?
This is an AI world.
We don't need to spell anymore.
Adam (04:57):
This is fair.
I can't spell it either, but anyway.
So this was a cool paper.
At least I thought it was going to be cool.
It was not.
It was a systematic review.
Josh (05:10):
Walk us through the
context.
Adam (05:11):
Yeah, it was a systematic review and meta-analysis of randomized controlled trials looking at Ashwagandha supplementation for the management of anxiety and stress.
And for those who don't know, Ashwagandha is an herb and it's considered to be part of a class of herbs called adaptogens.
And adaptogens are kind of these, they're pretty cool, and sort of like their
(05:35):
non-scientific mechanism of action is that basically, if you're feeling low they'll help sort of like elevate you, and if you're feeling a bit stimulated they'll help kind of calm you down.
So they adapt to your environment in that sense, and so there is actually quite a bit of evidence for Ashwagandha and that class of herbs.
So another one would be like rhodiola, for a number of things, but particularly what we're looking at today is the management of.
I kind of focus on the anxiety.
I don't really care so much about the stress, but I guess we can talk about both today if we want.
Yeah, but that's what these authors did, was they kind of looked at randomized controlled trials and tried to summarize
(06:17):
the evidence for them.
Josh (06:19):
Yeah, and I think to your point, like, the reason they looked at stress is traditionally it's used as this adaptogen, and so, like, clinically, like, I would recommend adaptogens for people that come in and they're just, like, stressed out, like they may not, like, meet anxiety criteria, but you know, they're just kind of overwhelmed with life and they need something to level them out.
That's how I think about it.
And yeah, I was taught the same way.
(06:40):
I was always kind of curious, like, clinically, like, I've always found them very helpful personally and with patients.
But yeah, like the herbal explanation, I was always like, I wonder if that's a real thing, like the sort of like leveling-you-out thing.
What does that look like biochemically?
And I don't know, but anyway, so yeah, I was, I was kind of
(07:00):
excited about it.
Why don't we go through?
Adam (07:03):
The method.
Josh (07:07):
The methods, dive into the methods.
Yeah, there's not a huge amount to go over in the methods.
Well, we should do like pros and cons of methods in a little bit, but maybe first what I'm thinking we do is first just like talk about the results of what they found, and then we can
(07:28):
sort of critique and go back to methods and see what we, what our take-homes are.
Adam (07:30):
Yeah, sort of, sort of like, long story short, they found that Ashwagandha, when pooled together, was beneficial for both stress and anxiety, and they did find that there was a dose-response relationship where the more Ashwagandha that was taken, the greater the improvement.
(07:51):
However, the majority of the dosing was around like 300 to 600 milligrams, and then there was one trial that looked at 12,000 milligrams, and so when you actually look at figure four, it's just a straight line going down, where the dose
(08:12):
response is essentially linear, in that the more Ashwagandha, the greater the reduction in anxiety.
I don't know if they reported about the safety of that.
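[Editor's note: the figure itself isn't reproduced here. As a rough illustration of the point the hosts make next, that one very high-dose trial can drive a seemingly linear dose-response, here is a minimal Python sketch with made-up numbers, not data from the paper.]

    # Minimal sketch (made-up numbers, not the paper's data): how a single
    # very high-dose trial can dominate an otherwise flat dose-response fit.
    import numpy as np

    doses = np.array([300.0, 400.0, 500.0, 600.0, 12000.0])   # mg/day (hypothetical)
    effects = np.array([0.8, 0.9, 1.0, 1.1, 4.0])             # SMD reductions (hypothetical)

    slope_all, _ = np.polyfit(doses, effects, 1)
    slope_without_extreme, _ = np.polyfit(doses[:-1], effects[:-1], 1)

    print(f"slope with the 12,000 mg trial:    {slope_all:.5f} SMD per mg")
    print(f"slope without the 12,000 mg trial: {slope_without_extreme:.5f} SMD per mg")
    # The lone 12,000 mg point sits far from the 300-600 mg cluster, so it has
    # very high leverage and largely determines the fitted straight line.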
Josh (08:24):
Right, it seemed like a
crazy amount.
Adam (08:27):
However, it's not a dose
that I am familiar with.
Josh (08:31):
No, me neither.
And yeah, to your point, I think that entire dose-response relationship was driven by that single data point, and I don't know that I necessarily would trust it.
But, interestingly, like, you've got a bunch of these studies that use similar doses.
So yeah, so they found, just so people have a sense of the evidence base to support this.
Like again, these are just, these are randomized controlled
(08:53):
trials only.
And they found eight randomized controlled trials on anxiety with about 555 patients total and a standardized mean difference of 1.55.
So, like we talked about, that's usually considered like a large effect, statistically significant.
And for stress, seven randomized controlled trials,
(09:17):
also a large effect, standardized mean difference of 1.75, also statistically significant.
So quite a relatively large number of studies that were identified.
And again, these are randomized trials.
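[Editor's note: for listeners who want the arithmetic behind a standardized mean difference, here is a minimal Python sketch with invented scores; the small/medium/large labels are the usual 0.2/0.5/0.8 rule-of-thumb thresholds, not anything taken from this paper.]

    # Minimal sketch (invented scores): a standardized mean difference for one
    # two-arm trial, i.e. the difference in means divided by the pooled SD.
    import numpy as np

    def smd(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                            / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled_sd

    # Hypothetical anxiety scores (lower = less anxious).
    placebo     = [22, 25, 24, 27, 23, 26]
    ashwagandha = [21, 23, 20, 24, 21, 22]

    d = smd(placebo, ashwagandha)   # positive = lower scores on the herb
    # Rule-of-thumb labels: ~0.2 small, ~0.5 medium, ~0.8 and above large.
    print(f"SMD = {d:.2f}")         # lands around 1.5-1.6, i.e. a "large" effect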
I thought that was kind of cool, the GRADE level.
So they did do a GRADE assessment, which is how we measure the confidence we can have in their estimate.
(09:39):
So they're estimating this large effect on stress and anxiety.
How confident can we be in that?
They rate it as low, low certainty of evidence, and they ranked down for inconsistency.
That's because their heterogeneity marker was.
Adam (09:57):
It was high.
Yeah, it was like super high, I think 92% or something like that for anxiety and 83% for stress.
Josh (10:05):
Yeah, and that's the I-squared.
And so, just for the listener, you want like a 0%, like that would be awesome if you had 0%, but certainly under 50%.
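[Editor's note: a minimal sketch of the I-squared statistic being described, using the standard formula based on Cochran's Q; the Q value below is made up so the output lands near the figures quoted.]

    # Minimal sketch: I-squared from Cochran's Q, i.e.
    # I^2 = max(0, (Q - df) / Q) * 100, where df = number of studies - 1.
    def i_squared(q, n_studies):
        df = n_studies - 1
        return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

    # A made-up Q of about 87 across 8 anxiety trials gives an I^2 in the low
    # 90s, similar to the roughly 92% heterogeneity mentioned in the episode.
    print(f"I^2 = {i_squared(87, 8):.0f}%")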
Adam (10:13):
But that also makes sense when you look at like the different types of trials that they looked at.
They kind of used all.
There wasn't only anxiety and stress.
They looked at kind of a diverse population.
Josh (10:24):
That's right, not just healthy people, all sorts of different clinical conditions, you know, actually.
So, speaking of, they did some really neat things, like they did a heterogeneity explanation, or exploration: they looked at different ways of dividing up the studies to see if it explained the heterogeneity, which is what you're supposed to do, and they were actually able
(10:45):
to explain like a decent amount of the heterogeneity.
I think, to your point, like, when they just bulked it by, was it healthy people versus disease populations, they had like really clean, like, okay, these are the apples, these are the oranges, type of thing.
So I felt like they didn't
(11:08):
necessarily need to rank down for heterogeneity, because a lot of that heterogeneity they could explain.
Adam (11:15):
And it also made sense if you think about it too, like higher doses in older people, of trials that were of larger sample size, and in people who had psychological disorders, seemed to be greater responders than younger, healthier people of smaller trial size with lower dosing.
Like it kind of just makes sense.
Josh (11:35):
Yeah, it did, but I thought it was pretty intuitive.
Look, the thing is, we don't do this for money.
This is pro bono and, quite honestly, the mothership kind of ekes it out every month or so, right.
So we do this because we care about this, we think it's important, we think that integrating evidence-based medicine and integrative medicine is essential and there
(11:59):
just aren't other resources out there.
The moment we find something that does it better, we'll probably drop it.
We're busy folks, but right now this is what's out there.
Unfortunately, that's it, and so we're going to keep on fighting that good fight.
And if you believe in that, if you believe in intellectual honesty and the profession and integrative medicine and being an integrative provider and bringing that into the
(12:20):
integrative space, please help us, and you can help us by becoming a member on Dr Journal Club.
If you're in need of continuing education credits, take our Nanceac-approved courses.
We have ethics courses, pharmacy courses, general courses, interactions.
Follow us on social media, listen to the podcast, rate our podcast, tell your friends.
These are all ways that you can sort of help support the cause.
(12:46):
So, basically, what you have is a systematic review of randomized controlled trials for Ashwagandha, low-level evidence on large effect sizes for that.
And before we, so, on its face, I thought it was a pretty good study.
And then I liked that they did GRADE, I liked that they did
(13:08):
heterogeneity exploration.
So they were speaking to my heart.
And then someone asked me in our evidence synthesis lab about how you evaluate systematic reviews.
So I started talking about AMSTAR 2, which is this checklist for quality in systematic reviews.
It's like, oh, let me just apply the AMSTAR 2 since I'm prepping for this pod, and it was like, terrible.
(13:33):
Yeah, it had like.
Well, maybe I'll give a quick.
Are you familiar with AMSTAR 2?
Adam (13:40):
Well, we talked about it in the last podcast episode, with the blood pressure one.
Josh (13:45):
Oh yeah, that's right.
Adam (13:46):
Yeah.
Josh (13:47):
Yeah.
So they had done their own assessments of the quality of the systematic review.
So there's the quote-unquote quality of the evidence within a systematic review, but then there's the quality of how the systematic review itself was conducted.
That's what AMSTAR 2 is, and AMSTAR is basically all these brilliant Cochrane methodologists who got together
(14:09):
and were like, okay, what are the most important domains to think about for systematic reviews?
There's, I think, like 13 of them, and seven of which they say are critical.
So you could have a few non-critical domains and get a red mark and that's fine, you could still trust the results.
But if you have even one in these critical domains,
(14:30):
basically they view it as a critically flawed study and you need to be extraordinarily cautious.
And of the seven critical domains, they got a red mark, at least when I did it, on one, two, three of them.
So critically flawed, turns out.
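[Editor's note: a simplified sketch of the kind of bookkeeping an AMSTAR 2-style appraisal involves; the domain names and the rating rule below are paraphrased placeholders based on this discussion, not the official item wording, so treat them as an assumption rather than the tool itself.]

    # Simplified sketch of an AMSTAR 2-style appraisal (not the official tool):
    # record an answer per domain, then let critical flaws drive the rating.
    critical_domains = {            # placeholder shorthand, not official wording
        "protocol registered before the review began": "No",
        "adequate, comprehensive literature search": "Partial",
        "risk of bias of included studies assessed": "Yes",
        "funding sources of included studies reported": "No",
        # ... remaining critical domains would be listed here
    }
    non_critical_domains = {
        "duplicate study selection and extraction": "Yes",
        "excluded studies listed with justification": "No",
        # ... and so on
    }

    critical_flaws = sum(answer == "No" for answer in critical_domains.values())
    if critical_flaws > 1:
        overall = "critically low"
    elif critical_flaws == 1:
        overall = "low"
    else:
        overall = "moderate/high, depending on non-critical weaknesses"

    print(f"{critical_flaws} critical flaw(s) -> overall confidence: {overall}")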
(14:56):
And just for the listeners, like, what these essential things are that you should be looking for when you're evaluating a systematic review: probably the most important, as you pointed out in the green room, was having an a priori registered protocol.
And so they claim they had a protocol, but it was not
(15:18):
registered, I don't think, at all, let alone before they started looking at results.
results.
Adam (15:24):
Yeah.
But, I think, I feel like thatalso doesn't really change sort
of the takeaway from this study,because they're kind of already
saying that the certainty ofthe evidence is low to begin
with.
It doesn't necessarily meanit's bad, it's just that, you
know, can we really trust theseresults?
Not really.
We need more evidence that maykind of sway things towards the
(15:48):
null or be consistent with thesefindings Overall, you know we
only have a handful of trialsthat are kind of small.
We have only 300 in theintervention group and 250 in
the comparison or so in theplacebo group.
For anxiety specifically, therisk of bias was low.
(16:09):
However, you know we saw prettyconsistently the in the
imprecision in the results andyou know it's really just a
heterogeneous, heterogeneouspopulation.
There was really only twotrials that actually looked at,
you know, individuals with withan anxiety disorder.
(16:33):
One trial looked at 500milligrams per day, another one
looked at 12,000 milligrams perday.
All the other trials looked atpeople with insomnia, healthy
adults, schizophrenia orschizoaffective, bipolar, and
really what they did was,because of the standardized mean
difference, they just kind oflooked at the, the
(16:56):
questionnaires that they usedand kind of pulled the results
from that.
So I mean, I think you know,even without you looking at the
amstard you can.
You still have to kind of takethis with with a grain of salt,
and I don't.
I don't think that the amstardreally changes much for this, in
this one in particular.
Josh (17:12):
Yeah, it's interesting.
It's like, I was kind of thinking about that a bit when we were going over the last one we did, on blood pressure, you know, because you're looking at both the GRADE results, which is the confidence in the evidence, and then the results of the quality of the study.
That's sort of coming up with that quality of evidence, and so, yeah, so where do you, like, where do you weigh?
(17:33):
And to your point, okay, let me, like, think through what you're saying.
If you're like, well, the evidence level is already low, we're already going to have a lot of skepticism about this.
And so, yeah, I mean, that's an interesting point.
So in this case, let me think this through for a bit.
Yeah, they didn't register their protocol.
What's the major fear with that?
(17:55):
That they change their outcome measures or something like this and they're cherry-picking.
But, to your point, they're doing standardized.
Adam (18:03):
Mean difference, yeah.
Do you also give them a little bit of credit for, like, directly saying, we did not register this?
Josh (18:12):
No, because I bet that was a peer review comment.
That was like, you need to say if this is registered or not, because that's part of, like, that's part of, like, the standard reporting.
You have to say what your registration is, or if it was not registered, and I don't know, that was my read between the lines, is they were forced to do that for publication, because that's to meet criteria.
(18:32):
You know, that's kind of how it goes, like PRISMA for SRs, whatever it is.
I think that's part of the standard reporting.
But so, okay, so maybe.
But yeah, so it's not like they're cherry-picking outcomes, because they're basically pooling all outcomes about stress and anxiety.
Stress and anxiety make sense for an adaptogen to be looking
(18:54):
at, so that doesn't seem too suspicious to me.
Yeah, so I don't know.
And then I didn't love their search.
I thought their search was a little bit weak, but they found like nine randomized controlled trials.
That's a lot of evidence.
Like, is it possible there's a bunch of other studies out there that they didn't find?
(19:14):
I don't know, I kind of doubt it.
Maybe, maybe, so, yeah, so I don't know.
It's interesting to kind of think about, and that's a good point, because we really should not be thinking about these quality tools as checklists, right. Like, we need to be thinking through how they would impact our interpretation, and so, yes, let me look at that.
So the protocol, maybe we give them a pass, I mean we don't give
(19:35):
them a pass, but how that influences our thoughts about the evidence, maybe it doesn't change it.
Literature search, we said maybe that doesn't change it.
What else did I flag them for?
They didn't talk about funding of the studies, or at least I didn't see that.
That's sort of the new thing in AMSTAR 2, to really call that
(19:57):
out.
And we've talked about how that's pretty important.
Yeah, and I can see how funding would be an issue, because they said that they had no conflicts of interest to declare, so perhaps that's.
No, not them, not them.
Of the primary studies, they didn't report on it.
Oh, right, yeah, yeah, yeah, so that's like two levels.
(20:19):
So like, do the authors of the systematic review have conflicts?
So they claim no, but then did they report on, of these nine studies, which had financial funding issues that we need to be aware of?
But in theory that should have been taken into account for the risk of bias assessment?
Right, I mean, maybe, maybe not, so anyway, interesting.
(20:39):
But I think your greater point is super valid, which is the evidence is already low.
We're already not sure about this effect size.
It is a large effect, but it might shift tomorrow if someone actually publishes a large new randomized controlled trial with results that differ.
So yeah, I think those are the main take-homes that I had for
(21:01):
this study.
So a little bit of a short summary, pretty straightforward for me.
Anything else you wanted to touch on on this one?
Adam (21:08):
I think we commented about this earlier when we were talking about this paper.
I thought it was kind of interesting that on their PICO (PICO stands for Population, Intervention, Comparison and Outcome), their comparison was placebo or no intervention, and that really matters, because no intervention is kind of an
(21:28):
intervention in a way.
It's basically like a waitlist control, which is not the same as placebo.
But on their exclusion criteria they excluded trials without any placebo groups.
So I'm just not sure if this was like a language barrier issue, because this was a publication that came out of
(21:49):
Iran, and so I don't know if this was sort of, they just needed some help in translating the English manuscript or what, but I kind of thought that was a little bit contradictory.
It's like, are you, is your comparison placebo?
Is it either/or?
Was this an issue because it was not
(22:10):
registered?
So it was kind of just interesting to see.
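[Editor's note: a minimal sketch of the PICO frame Adam spells out, filled in with the review's question as described in this episode; it is just one convenient way to write the question down, not anything reproduced from the paper itself.]

    # Minimal sketch: the PICO frame described above, written out explicitly.
    pico = {
        "Population":   "adults with anxiety or stress (mixed clinical and healthy samples)",
        "Intervention": "Ashwagandha supplementation (roughly 300-600 mg/day in most trials)",
        "Comparison":   "placebo or no intervention (the ambiguity discussed above)",
        "Outcome":      "anxiety and stress scores on validated questionnaires",
    }
    for part, description in pico.items():
        print(f"{part:>12}: {description}")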
Josh (22:15):
Yeah, I agree, I saw that too, and my comment next to it was sloppy reporting or sloppy language, and it might have been a language barrier thing.
For sure, because you're right, that did contradict itself.
And when you look at the table of the studies, all the controls are placebo.
So I don't know if that's because that was actually the
(22:35):
inclusion criteria or it just happened that they were all placebo-controlled and not active-control.
It's hard to say.
Adam (22:44):
Right.
Josh (22:44):
But while we're looking at that, just to describe some of these studies a little bit further, yeah, so, like, dosing, like you said, mostly between 300 milligrams a day and 600 milligrams a day; 600 seems to be one of the more common dosing regimens, which is interesting.
And yeah, and one thing I would say too, just about AMSTAR,
(23:09):
for a second, is, as I was going over it again to present on it, it's a really great learning tool.
It's designed for clinicians that don't necessarily have, like, methodological experience, and it's designed to be done in like 15 minutes or less.
So if you're a listener and you're primarily a clinician and you come across a systematic review that you're interested
(23:29):
in, and you wonder, you know, how well conducted it is and how well done it is, grab the AMSTAR 2 checklist and the questions guide you through what to look for, and, again, with a little bit of practice, you're done in 15 minutes and you have a good learning tool and assessment.
You know, we get that a lot, right?
(23:50):
People want to know, is this a good study?
And you know, we can look at it.
But I think it's even better if, like, you can learn these instruments that are really not that difficult to do, and they're designed not to be difficult to do.
Adam (24:02):
Yeah, I would agree.
Josh (24:04):
Cool, all right, anything
else?
Adam (24:06):
No sir.
Josh (24:07):
That's it, okay.
So, short one today.
Interesting, straightforward systematic review, some quibbles, but for the most part probably doesn't change our take-home, which is interesting large effects, surprising number of studies, but low-level evidence overall.
All right, dear listener, thanks for checking in and we will see you next time.
(24:27):
If you enjoy this podcast, chances are that one of your colleagues and friends probably would as well.
Please do us a favor and let them know about the podcast and, if you have a little bit of extra time, even just a few seconds, if you could rate us and review us on Apple Podcasts or any other distributor, it would be greatly appreciated.
It would mean a lot to us and help get the word out to other
(24:49):
people that would really enjoy our content.
Thank you.
Hey y'all, this is Josh.
You know, we talked about some really interesting stuff today.
I think one of the things we're going to do that's relevant: there is a course we have on Dr Journal Club called the EBM Bootcamp.
That's really meant for clinicians to sort of help them understand how to critically evaluate the literature, et cetera, et cetera, some of the things that we've been talking
(25:10):
about today.
Go ahead and check out the show notes link.
We're going to link to it directly.
I think it might be of interest.
Don't forget to follow us on social and interact with us on social media, at Dr Journal Club, DR Journal Club, on Twitter; we're on Facebook, we're on LinkedIn, et cetera, et cetera.
So please reach out to us.
We always love to talk to our fans and our listeners.
(25:31):
If you have any specific questions you'd like to ask us about research, evidence, being a clinician, et cetera, don't hesitate to ask.
And then, of course, if you have any topics that you'd like us to cover on the pod, please let us know as well.
Introducer (25:47):
Thank you for listening to the Dr Journal Club podcast, the show that goes under the hood of evidence-based integrative medicine.
We review recent research articles, interview evidence-based medicine thought leaders and discuss the challenges and opportunities of integrating evidence-based and integrative medicine.
Be sure to visit www.drjournalclub.com to learn more.