Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Deborah Borfitz (00:02):
Hello and welcome to the Scope of Things podcast, a no-nonsense look at the promise and problems of clinical research, based on a sweep of the latest news and emerging trends in the field and what I think is worthy of your 30 or so minutes of time.
I'm Deborah Borfitz, senior science writer for Clinical Research News, which means I spend a lot of time with my ear
(00:24):
to the ground on your behalf and a lot of hours every week speaking to top experts from around the world.
Please consider making this your trusted go-to channel for staying current on things that matter, whether they give us hope or cause for pause.
In another five or six minutes or so, I'll be talking to Bethany Kwan and Heather Smyth, who hail from the University of
(00:45):
Colorado Anschutz Medical Campus, a leading contributor to the advancement of pragmatic clinical trials and ensuring that research findings are relevant and impactful in real-world healthcare settings.
But first the latest news, including a trio of pragmatic clinical trials: one specific to lung cancer
(01:06):
treatment, implementation of pharmacogenomic testing on a national scale and an impressively efficient approach to comparing commonly used intravenous fluids.
Plus improving access to gene therapy trials for a progressive heart condition.
The landscape for Alzheimer's disease studies, clinical trials designed to predict the most effective therapy, and the
(01:29):
creation of AI agents for clinical research.
Today's guests have no doubt heard of the so-called Pragmatica-Lung trial that learned in just over two years that a treatment combining a monoclonal antibody, Cyramza, with an immunotherapy drug, Keytruda, did not significantly extend survival in advanced non-small cell lung cancer
(01:51):
patients as had been seen with this regimen in a smaller phase 2 trial.
However, the speed at which Pragmatica-Lung opened, the ease of conducting the study and its rapid enrollment of a highly representative population reportedly make it a paradigm-shifting model for the design and conduct of future large randomized studies, including both the pragmatic
(02:13):
variety and those with FDA registrational intent.
Notably, it was designed with relatively few eligibility restrictions, so the ultimate enrolled population looked much like the US population overall.
In the UK, interim results of a pragmatic clinical trial suggest pharmacogenomic testing has substantial value in improving
(02:35):
prescribing precision.
The so-called PROGRESS trial of the National Health Service aims to implement pharmacogenomic-guided prescribing into routine clinical practice across the country by integrating genomic data into multiple electronic health records used by primary care practices and hospitals.
PROGRESS recruited patients from 20 sites following
(02:57):
prescription of common medicines, specifically statins, opioids, antidepressants and proton pump inhibitors, and pharmacogenomic guidance was returned to their clinicians with a median turnaround time of seven days.
Tellingly, 95% of patients had an actionable variant and just over one in four participants had their prescription adjusted
(03:18):
to achieve safer or more effective treatment.
In yet another pragmatic-style trial, researchers in Canada have demonstrated a powerful, efficient approach for comparing different standard treatments.
The trial compared two intravenous fluids, normal saline and Ringer's lactate, that have been commonly used for decades in hospitalized patients.
(03:38):
Unlike traditional trials that randomly assign patients to receive one fluid or the other, this cluster-randomized trial randomly assigned entire hospitals to use one fluid for three months, then switch to the other fluid.
Since clinical data were downloaded directly from health administrative sources, no individual patient recruitment was required, allowing the team to quickly collect data from
(04:01):
more than 43,000 patients in seven hospitals.
The cost of enrolling a single patient was less than $10, versus more than $1,000 for a traditional trial.
As part of its Get With the Guidelines program, the American Heart Association has launched an initiative that will improve education, outreach and access to clinical trials for gene
(04:22):
editing therapies for transthyretin amyloid cardiomyopathy.
This is a progressive and often underdiagnosed condition that can impair the heart's ability to pump blood, leading to heart failure, and it disproportionately affects older adults and certain racial and ethnic groups.
The initiative will, among other things, activate a
(04:46):
referral network of non-trial sites and develop tools that leverage clinical data to help identify potentially eligible participants.
An annual review of clinical trials for Alzheimer's disease reports on 182 active studies worldwide assessing 138 drugs for patients at all stages of the disease continuum.
Among the encouraging developments are significantly
(05:07):
more Phase I trials, 48 compared to 27 a year ago, and several drugs that look promising enough to warrant further study.
Since early 2024, 56 new trials have begun, including 10 Phase III trials.
A dozen of the late-stage trials conclude this year, a list that notably includes one studying the effectiveness of the diabetes medication semaglutide as a preventive
(05:31):
agent.
With a world-first study, researchers in Switzerland have paved the way for new clinical trials that, instead of testing individual drugs, predict the most effective therapy.
Over four weeks, nine different molecular biological technologies were used to precisely measure the properties of melanoma tumors at the individual cell level to enable
(05:53):
a precise treatment decision for patients with cancer, demonstrating how tumor profiling can be implemented in clinical practice.
Individual treatment recommendations were derived from 43,000 data points per sample and in 75% of cases, the treating specialists found the information helpful for the choice of therapy.
And finally, NVIDIA and IQVIA have teamed up to build AI
(06:17):
agents for clinical research tasks that notably include trial startup, target identification and clinical data review, exclusively for customers of IQVIA, a top-ranked contract research organization.
The overall model, trained on IQVIA life sciences data, has so-called orchestrator AI agents acting as supervisors for
(06:39):
specialized sub-agents that route actions, things like speech-to-text transcription, clinical coding, structured data extraction and data summarization, to other sub-agents.
These pre-approval AI agents are expected to accelerate trial timelines, extract insights and reduce the data review process to possibly two weeks.
(07:02):
It is now time to bring to the mic Bethany Kwan and Heather Smyth, a dynamic duo of PhDs from the University of Colorado Anschutz Medical Campus, to fill us in on the wonderful and ever-widening world of pragmatic clinical trials.
Bethany is Director of the Dissemination and Implementation Research Core with the Colorado Clinical and Translational
(07:24):
Sciences Institute, and Heather is a research associate with the Center for Innovative Design and Analysis in the Colorado School of Public Health.
They are both affiliated with the University of Colorado's Adult and Child Center for Outcomes Research and Delivery Science, aka ACCORDS.
Welcome to the show, you two.
(07:44):
I can't wait to dig in on this very timely topic.
Wonderful, thank you so much, Deborah. Glad to be here.
I do not want to assume that everyone listening to today's episode knows exactly what pragmatic clinical trials are, including how they differ from traditional studies and their relevance in real-world clinical practice.
So let's back up here for just a minute and simply define it.
(08:08):
Bethany, what's your sort of go-to description, as I'm sure you hear this question a lot.
Bethany Kwan (08:16):
Yes, absolutely.
In my mind and application, pragmatic research broadly is principally about studying effectiveness in real-world settings and populations versus efficacy in more controlled clinical trial contexts.
Pragmatic research spans a wide variety of research activities, not just clinical trials, and research can be more or less
(08:37):
pragmatic in different ways, as we often think about it on a continuum.
Pragmatic research includes activities ranging from pragmatic clinical trials, as you were just describing, focusing on testing interventions and procedures at the patient level, to testing more complex interventions and behavioral interventions and models of care implemented at the practice or system level, with consideration of health
(09:00):
system and policy contexts.
Ideally, pragmatic research is designed to support decisions by service and care providers, policymakers, patients and other interested parties on whether and in what context to adopt, deliver or make use of an intervention.
Pragmatic research also ideally includes a large
(09:20):
interdisciplinary team with expertise in biostatistics and study design, like Heather; technology integration, data and informatics tools; dissemination and implementation science, like myself; qualitative and mixed methods; partnership development; and community and stakeholder engagement.
So I also wanted to note some myth busting.
(09:41):
Some people think that a pragmatic trial means lacking in rigor, that it's messy, but that's certainly not the case.
We have very rigorous methods for pragmatic trials, maybe even more so than traditional clinical trials, because you do have to account for so many extraneous variables.
It also doesn't necessarily mean practical in the sense of
(10:04):
doing whatever is easiest.
It means study design and research being conducted in ways that best reflect usual care systems and processes.
Deborah Borfitz (10:14):
Sounds very practical.
I think that's what we're getting at, right? Yes. So why is it that we have been hearing so much more about these pragmatic clinical trials over the past decade?
I would imagine it has something to do with the ready availability of digitized health data and ongoing concerns about
(10:35):
improving care quality while bringing down costs. We've been hearing about that forever, and doing so as quickly as possible.
Does that sort of wrap it up, Heather, or is the rationale a lot deeper than that?
Heather Smyth (10:47):
So, yeah, I think definitely the increase in technologies like electronic health records or personal wearable devices that track, you know, glucose monitoring or physical activity or sleep, all of those things definitely give us opportunity to ask questions and answer questions that we
(11:08):
wouldn't necessarily have been able to do a few decades ago.
But maybe I'm a little bit of an idealist; I think the availability of this data is more serendipity.
I think the reason that we're looking at this explosion of pragmatic trials is really that it's a natural extension of human curiosity and just the scientific endeavor.
(11:29):
We always start out asking how and why things work the way they do, and then we start to realize wait, we have the ability to take this knowledge and shape the world around us.
So, even though I have a very deep philosophical respect for basic research and bench science and I think it's important to ask questions for the sake of asking questions, I also feel as
(11:52):
a pragmatic researcher that there's a very unique satisfaction in watching the application of this rigorous science in real-world settings and that can pave the way for real-world change.
So, if you ask me, with this current focus on pragmatic trials and effectiveness studies, we as a society have
(12:13):
recognized that there are aspects of our social world that can be improved, and pragmatic researchers recognize that we can use scientific principles to address those areas, and so it's really exciting and rewarding to work in this area.
Deborah Borfitz (12:29):
I bet.
Bethany Kwan (12:35):
And you know, 20 years ago we called this translation of research into practice.
There are new terms for it now, but in a lot of ways this has existed for a while, and it has really evolved into dissemination and implementation science and pragmatic research.
There's also increasing recognition of the overlaps between quality improvement and evidence-based medicine as part of the learning health system concept, as you noted earlier.
(12:58):
Yes, pragmatic research is meant to be practical, with results meant to be directly applicable and quickly informing practice and policy.
I also want to acknowledge that inherent to this application is recognition that healthcare is a business.
We need to align with how businesses and the consumers of their services, that is, us, the patients, make decisions, and we
(13:20):
need to collect data on outcomes that truly matter to all those decision makers.
And we need everyone to agree that a new innovation improves outcomes that matter, satisfies unmet needs and can be sustained using available resources.
And how do we do that?
This is where that community and stakeholder engagement aspect of pragmatic research becomes so critical.
Deborah Borfitz (13:41):
Thank you so much for that addition, Bethany.
That was really helpful, and I want to stick with you here for a second.
I believe the National Institutes of Health was a big supporter of pragmatic clinical trials at one time, but of course we have new leadership now, so I have to ask: is that still the case?
Last I looked, the website for the NIH Pragmatic Trials
(14:04):
Collaboratory was still up and multiple pragmatic trials were still enrolling.
So I'm guessing the goals of the study approach are well aligned with the administration's priorities and have perhaps been spared funding cuts, at least to some degree.
Bethany Kwan (14:26):
I can't speak to NIH priorities per se, as they are still evolving, but we do know that the NIH has now, as it always has had, an explicit emphasis on improving the health of Americans and, in my experience as a reviewer of NIH grants myself, there is an emphasis on impact and research significance as a major driver of reviews and, ultimately, funding decisions.
And pragmatic trials give us that opportunity to identify
(14:49):
evidence-based approaches that will work in real-world healthcare settings and truly improve health and healthcare for us all.
Deborah Borfitz (14:57):
Okay, good, thank you for that.
Since pragmatic trials tend to have unique design features and might utilize multiple EHR systems or data sources, it sounds like conducting these sorts of studies requires a fair amount of specialized knowledge and data science skills, as well as a lot of flexibility.
So how does one find or acquire those skills and that mindset,
(15:22):
and what are the biggest design and data obstacles faced by investigators and study teams?
I don't know, Bethany, Heather, one or both of you may have something to say on this front.
I'll let you duke it out here.
Bethany Kwan (15:35):
Yeah, absolutely.
It does require a wide array of specialized knowledge and skills, but no one person needs all of that expertise.
Team science is critical, and team science is in itself a skill.
I mean, on my teams we have data scientists, informatics, health economics, implementation science and various
(15:55):
community and clinical partners that are all part of the team.
And I believe Heather has some comments on design and data obstacles too.
Heather Smyth (16:12):
Okay, yeah, so I should introduce this first, that my training is as a quantitative psychologist, so concerns about the measurement of our variables are always forefront in my mind.
And so, kind of from that context, understanding that when we collect data, we can be collecting it for a lot of different reasons.
Sometimes it's specifically planned for research.
In the case of EHR data, this is like clinical information
(16:33):
that is used for the purposes of making clinical decisions and keeping those records, or maybe administrative data, and all of those types of data collection are collected for a specific purpose.
With pragmatic trials, we look to utilize the data that is available, often from these clinical records, and use it as quickly as possible.
(16:54):
Sometimes from a measurement perspective there can be a little bit of noise, or it can be not quite as ideal as what a statistician would want to be working with.
So I think it's important to think about the quality of the data and what the data is meant to be used for when you then use it for research and try to make interpretations from it.
(17:17):
And I know earlier Bethany had kind of mentioned this difference between effectiveness in the real world versus efficacy in a lab.
And in a similar way, I kind of framed the difference between traditional and pragmatic research as a difference in internal and external validity.
I'm going to steal a phrase from Dave MacKinnon, who was my graduate school mentor, and he would always say that
(17:40):
measurement is the soft underbelly of all of our statistical models, meaning that really the models that we use are made for optimizing that internal validity, having robust and precise measurements that are really obtained through well-controlled studies with high internal validity.
And as I'm sure our listeners would guess, in pragmatic trials
(18:04):
we don't always have all of those controls, and that does have implications for which models we use statistically and how we interpret those models.
And so I think that it's really as a statistician that lack of control can sometimes make me a little bit nervous.
But if we switch our mindset and go, we're not focused on
(18:27):
efficacy, we're focused on effectiveness; it's not internal validity, it's external validity, then those data issues, where it's not perfectly clean data or it's not very controlled data, just become a really
(18:48):
interesting design characteristic, and it brings nuance to the project and to the interpretations rather than being seen as a scientific limitation.
And at the end of the day, for me, this just means that we get to experience the joy that comes with thinking thoughtfully about our research questions, understanding that our questions exist within a multifaceted context with multiple layers of interested parties, and I think
(19:14):
that's just a really fun way of doing good science.
Deborah Borfitz (19:18):
That was super helpful.
Heather, thank you so much for that.
I have used those terms, effectiveness and efficacy, interchangeably myself, so I'm going to be careful moving forward to consider the nuances of that.
Announcement (19:31):
Are you enjoying the conversation?
We'd love to hear from you.
Please subscribe to the podcast and give us a rating.
It helps other people find and join the conversation.
If you've got speaker or topic ideas, we'd love to hear those too.
You can send them in a podcast review.
Deborah Borfitz (19:48):
I wanted to pivot. A few years ago I was at your Colorado Pragmatic Research in Health Conference and one of the keynote speakers said that the yardsticks of success of a pragmatic trial are whether an intervention improves outcomes and the study accounts for the heterogeneity of patients, what he referred to as the other 85% that strict
(20:10):
eligibility criteria of a traditional clinical trial would exclude from participation.
So my question is this: is that still the mantra, and do we know how often we are in fact capturing that other 85%?
Either of you.
Heather Smyth (20:27):
Yes, emphatically yes.
We do still care about that heterogeneity in our study designs and are intentional about building that into not just the designs but our research teams and the study participants that we recruit.
And I know just a few moments ago I was saying how having this variation in our data can be a little bit noisy and make it
(20:51):
difficult for our statistical models, but on the flip side, this also gives us a chance to investigate things like heterogeneity of treatment effects, and so if you have a lot of variety in the participants that you are studying, you're able to say, well, does this work effectively
(21:13):
for one group differently than it does for another?
Often you might hear terms like mediation effects and moderation effects, and that's really what I'm talking about: that heterogeneity of treatment effect, or those interaction terms for my statistical fans out there, or the mechanisms, the mediation questions.
(21:34):
But I really want to give you some, if I can say it, pragmatic advice in dealing with that.
Okay, when we include this heterogeneity in our study designs, you have to remember that mathematically those effects are going to be smaller than whatever your main
(21:54):
treatment effect is going to be.
So if you are interested in looking at those effects, you need to think about your sample size.
Oftentimes I'll have somebody writing a grant and they'll say, how many people do I need to find this main effect?
But in reality they're interested in heterogeneous effects, and I have to tell them, you're going to need a larger
(22:15):
sample than that if you're just powering on your main effect.
And so I think, because of some of the inherent limitations of pragmatic trials, it's important to think about the difference between statistical significance and clinical significance.
If you have a nice heterogeneous sample and you want to look at
(22:36):
heterogeneous treatment effects, and you may or may not be able to get the sample size that you need, you might have to purposively oversample in certain groups.
You might get to a place where you can't get that statistical significance; it's kind of like a magic cutoff, and it's because of your
(22:57):
sample size.
But you can look at your treatment effects, your effect sizes, and have a discussion about the clinical significance.
So the bottom line is, when you're working with your statistician, don't ask them simply for a p-value.
Also ask them about the effect sizes and have a discussion about that, because that is a creative way to get the most out
(23:21):
of the data and information that we're collecting.
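[Editor's note: the sample-size arithmetic behind that advice can be sketched with the standard normal-approximation formula for comparing two group means. The effect sizes below are illustrative assumptions, not figures from the episode.]

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sample comparison of means
    (normal approximation), given a standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided critical value
    z_beta = z(power)           # power term
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical numbers: a moderate main effect (d = 0.5) versus an
# interaction (heterogeneity-of-treatment) effect half that size (d = 0.25).
print(n_per_group(0.50))  # 63 per group
print(n_per_group(0.25))  # 252 per group
```

Because required sample size scales with the inverse square of the effect size, halving the effect size roughly quadruples the per-group sample, which is exactly why powering only on the main effect leaves a heterogeneity analysis underpowered.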
Deborah Borfitz (23:24):
Thank you so
much for that, Heather.
That was super interesting tipand gets to kind of my final
question for each of you, andmaybe we can move beyond that a
little bit Planning a pragmatictrial is one thing and
implementing it is another.
So and obviously that's nodoubt the harder part so I'm
hoping that each of you, Bethanyand Heather, can leave us today
(23:48):
with a few tips on how to dealwith some of the usual expected
challenges, maybe three or fourkey takeaways on how to
successfully to conductpragmatic trials.
What do you have to offer onthis?
Heather Smyth (24:03):
Well, just real quickly.
In my mind it all comes down to project management.
When you've got a large, interdisciplinary team, multiple practice sites and various perspectives and priorities, organization, explicit communication and role clarity are the secret ingredients to a successful
(24:24):
project.
I would say make sure you have a dedicated project manager and give yourself permission to do the administrative tasks.
Deborah Borfitz (24:33):
So that everybody's on the same page. Sounds great. Bethany, how about you?
Bethany Kwan (24:39):
Yes, absolutely. Being well organized is one of the few things we can control in pragmatic research.
Many things come up during the conduct of a pragmatic trial.
EHR systems, on which we depend for our data and, in some cases, for delivery of our interventions, will change.
They will be upgraded, they will be modified.
(25:01):
There will be staff turnover, both on the research team and at the clinical sites.
You might lose your champion at different clinical sites.
It's really vital to anticipate that turnover and have a backup plan for staff trainings.
Budget this into your timeline, into your financial budget, I would say.
Also, investigators need to have a willingness to pivot,
(25:22):
having a flexible mindset and different ways of accomplishing the project goals.
These unplanned adaptations should be tracked systematically, both for explaining the findings as well as potentially identifying novel hypotheses that can be tested in future research.
You asked about usual or expected challenges.
Well, one usual or expected challenge is unusual and
(25:46):
unexpected challenges, like COVID.
The ability to shift in delivery presented an opportunity to study different modalities, different ways of delivering interventions during COVID.
It was really a natural experiment in a lot of ways around telehealth, and this flexible mindset, this ability to adapt to these unexpected challenges, is one of the key
(26:07):
differences from a traditional clinical trial, and one that I find makes really classically trained clinical trialists very uncomfortable at times.
Deborah Borfitz (26:19):
I bet, I bet. Well, this has all been so very enlightening for me and I'm sure for our listeners as well.
There's really no arguing that the need for pragmatic clinical trials is real and increasingly well appreciated, given their enormous potential value to the overall healthcare enterprise.
Thank you, Bethany and Heather, for enlightening us all on that
(26:40):
point, as well as your pragmatic tips on how it can be done right to create lasting and meaningful change in real-world healthcare settings.
And, as always, a big thank you to everyone out there for listening in.
If you're not subscribed to this podcast yet, please consider going to Apple Podcasts and doing so right now so you don't miss your monthly dose of news and perspectives
(27:02):
you'll be hard-pressed to find anywhere else.
And, if you're up for it, I'd also be so very grateful if you'd leave a rating and review while you're there.
One more thing before we go.
If you liked today's conversation, it is only a glimpse of what you can expect from Scope Europe presenters and panelists.
Please plan to join us October 14th and 15th in Barcelona, when clinical operations executives
(27:25):
will be exploring the latest trends in clinical innovation, planning and operations.
Save an additional 10% off any current rate by using the code SOT10.
For more information, visit scopesummiteurope.com.
Bye for now.