
September 30, 2024 42 mins

Disproportionality analyses are a mainstay of pharmacovigilance research, but without clear guidelines, they often lead to confusion and misinterpretation. Enter the READUS-PV statement: the first-ever guide for reporting disproportionality analyses that are replicable, reliable, and reproducible.  

Tune in to find out: 

  • The history of reporting guidelines in pharmacovigilance and why the READUS-PV guidelines were created 
  • Why there has been a spike in the publication of disproportionality analyses in recent years and what this means for their reliability 
  • What it means to publish “good” pharmacovigilance science  


Want to know more? 

Join the conversation on social media
Follow us on X, LinkedIn, or Facebook and share your thoughts about the show with the hashtag #DrugSafetyMatters.

Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!

About UMC
Read more about Uppsala Monitoring Centre and how we work to advance medicines safety.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alexandra Coutinho (00:11):
Disproportionality analyses are the so-called bread and butter of pharmacovigilance research, but there's a lack of specific guidelines on how to report them.
As a result, disproportionality analysis reports are often ambiguous, hard to interpret, and can lead to incorrect conclusions when not put into the correct context.
Thus the READUS-PV statement was created, the first guide to

(00:34):
reporting disproportionality analyses.
My name is Alexandra Coutinho and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety.
Joining me today are Daniele Sartori, a pharmacovigilance scientist at UMC and doctoral researcher at the University of

(00:54):
Oxford, and Michele Fusaroli, a PhD student at the pharmacology unit of the Department of Medical and Surgical Sciences at the University of Bologna.
While our discussion focused on the READUS-PV guidelines, it led to some pretty interesting reflections on the efficacy of reporting guidelines, publishing good science, and the importance of transparency, replicability and reproducibility in

(01:18):
pharmacovigilance.
I hope you enjoy listening.
Hi, Daniele and Michele, and welcome to Drug Safety Matters.
I'm really glad that we were able to get both of you in here to speak about the project that you've been co-authoring.

(01:39):
So how are you both?

Michele Fusaroli (01:40):
Fine, thank you.

Daniele Sartori (01:40):
I think I'm okay.
I could use an extra coffee maybe, so overall I'm fine.

Alexandra Coutinho (01:45):
Great, good to hear.
So you're both here today to talk about a recent paper you have co-authored, along with other researchers, on guidelines for reporting disproportionality analyses.
For people like me who may not be familiar with this term, what is disproportionality analysis and what is its role in pharmacovigilance?

Michele Fusaroli (02:11):
Well, pharmacovigilance databases collect individual case reports of suspected adverse drug reactions from all over the world, and we can use these reports to identify unexpected safety issues.
The gold standard would be to do a case-by-case analysis, but as these databases get bigger and bigger, relying only on case-by-case analysis becomes practically unfeasible and

(02:32):
therefore we have to find other methods.
And data mining enters the stage here, because when we have so much data, what we want to do is perform some statistical analysis, such as, for example, disproportionality analysis, to identify those drug-event combinations that occur

(02:52):
more often than expected.
And that is how disproportionality analysis helps us deal with big, big databases.
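To make the "more often than expected" idea concrete, here is a minimal sketch (not from the episode; all counts are invented for illustration) of one common disproportionality measure, the reporting odds ratio, computed from a 2x2 table of report counts:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio (ROR) with a 95% confidence interval,
    from a 2x2 table of spontaneous-report counts:
        a = reports with the drug AND the event
        b = reports with the drug, without the event
        c = reports without the drug, with the event
        d = reports with neither
    """
    ror = (a * d) / (b * c)
    # Standard error of log(ROR) for a case/non-case 2x2 table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts: 20 reports co-mentioning the drug and the event
ror, lower, upper = reporting_odds_ratio(20, 980, 100, 98900)
print(f"ROR = {ror:.1f} (95% CI {lower:.1f}-{upper:.1f})")
```

A ROR well above 1, with a confidence interval excluding 1, flags a drug-event pair that is reported disproportionately often; it is a screening statistic, not proof of causation.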

Daniele Sartori (03:11):
Yeah, and it might be helpful to learn that disproportionality as a concept is not necessarily new.
Disproportionality has been around since the late 90s at least.
But the concept of disproportionality had already been in place since at least the mid-60s, with the work from

(03:32):
Finney or the work from Patwary, who were trying to apply observed-to-expected analyses in the WHO database at the time.

Alexandra Coutinho (03:47):
So, coming to your project in particular, the READUS-PV project.
So these are guidelines on how to report disproportionality analyses, right?
These guidelines were created to answer a long-standing problem with reporting in pharmacovigilance science and research.
Can you tell us a little bit more about the problem with

(04:09):
reporting in healthcare and its history, Daniele?

Daniele Sartori (04:13):
Yes, around 1978, Freiman and colleagues first surveyed randomized controlled trials with negative results.
So these are randomized controlled trials that suggest that there is no difference between the intervention and the placebo, and they found some improper reporting of important

(04:39):
aspects of these trials.
For example, the allocation concealment strategy was not well reported and there were some inaccuracies in how the sample sizes were calculated and estimated.
And they suggested that had these trials been better

(05:00):
conducted, then the results could have improved as well, which means that perhaps in some cases the fact that there was no difference between intervention and placebo would not have necessarily held had the trial been well reported and

(05:20):
well conducted.
So fast forward to the 90s: the SORT group, the Standards of Reporting Trials, compiled a checklist for properly reporting randomized controlled trials and, in parallel, the Asilomar Working

(05:43):
Group did the same, but also recommended that checklist, well, their checklist, should have been part of the submission process for published research.
So effectively, journals should have asked authors who wish to submit publications of randomized controlled trials.

(06:03):
They should have followed this checklist.
Eventually, the SORT group and the Asilomar Working Group joined forces and came to create the CONSORT statement, which is quite well known today, at least, well, it's well known

(06:25):
for randomized controlled trials.
So in the wake of the CONSORT statement, the UK National Health Service started funding the EQUATOR Network, and the EQUATOR Network is nowadays a very prominent organization when it comes to reporting checklists.
Essentially, they collect all the checklists that are around,

(06:51):
but they also facilitate their spread among journal editors and among peer reviewers.
They also try to compile what could be the difficulties in putting together checklists, and nowadays you can go on the

(07:11):
EQUATOR website, verify that there is a reporting checklist for a specific study design that you wish to do, and then you can use it when you wish to submit a publication.
So now we're in the early 2000s, still, so in the early

(07:32):
2000s, when it came out, the CONSORT statement only had one bullet point for harms, which essentially said you should report, as part of your clinical trial, the adverse drug reactions that took place in placebo and intervention arms.
So this was perceived as insufficient, and so in 2004,

(07:54):
the checklist was expanded to accommodate another 10 points on adverse drug reactions and harms.
And much later, in 2022, the CONSORT Harms statement was further rephrased and updated.
So nowadays you could say that there is a solid basis in

(08:19):
checklists for reporting randomized controlled trials, specific to harms and adverse drug reactions.
This is to say that, you know, randomized controlled trials are part of pharmacovigilance.
Pharmacovigilance starts at very early stages, much earlier than

(08:39):
just post-marketing or case reports.
But nowadays we've got guidelines for many different study designs.
It's not just randomized controlled trials, it's also systematic reviews and scoping reviews, with the PRISMA checklist.
You have guidelines for their protocols, which are the PRISMA-P, for protocols of scoping and

(09:03):
systematic reviews.
You have checklists for case reports, which are the CARE guidelines, or surgical case reports, which are the SCARE guidelines.
And for studies that are a bit closer to pharmacovigilance, in the sense of post-marketing pharmacovigilance, you have guidelines for

(09:23):
pharmacoepidemiological studies, which are the RECORD-PE guidelines.
And I think this whole very slow progression naturally evolved into the READUS checklist for reporting disproportionality analyses, which in its very early stages was effectively based on the RECORD-PE guidelines for

(09:46):
pharmacoepidemiology studies.

Alexandra Coutinho (09:49):
So maybe let's delve a little bit deeper then into the READUS-PV project.
Michele, what is this project and what problem has it been trying to solve with disproportionality reporting?

Michele Fusaroli (10:01):
Yes, the READUS-PV project arose from a pervasive acknowledgement of the problems of reporting in disproportionality analysis, which are, in fact, the same problems of scientific research in general, but are particularly accentuated when dealing with disproportionality analysis.
And these problems can be clustered into three different domains,

(10:26):
which are completeness and transparency, justification of the methodological choices, and correct interpretation of the results.
Concerning the completeness and the transparency of the reporting, this is a necessary step to allow for reproducibility, replicability, assessment and interpretation of

(10:48):
a study, and therefore it is a crucial part of the reporting.
It should involve transparently reporting the preprocessing of the data, the analysis performed and the interpretation of the results, and, like, all the results have to be shown in the report.
There may be many motivations to not provide a complete report

(11:13):
of a study, and some of these are, for example, the perception that the subjectivity in data preprocessing is actually not influencing the results.
The fact that there is this idea, that is actually true, maybe, but it is also problematic, that the reader is

(11:35):
actually not interested in all the details of the study and just wants a kind of take-home message at the end of the article.
Also, the fear of showing the limitations of your study and therefore being exposed to the judgment of other researchers.
And finally, also the jealousy that someone else may copy your

(11:58):
methods and data and benefit from them.
But anyway, these are just motivations that are not justified, and completeness and transparency should be a key

(12:36):
point of the reporting.
And then we have justification of methodological choices.
In fact, a study by Currie et al. showed that actually, you can play with the population studied, with the threshold, with the definition of the object of study, and then you can obtain, actually, any result that you want from disproportionality analysis.
And this implies that every methodological choice that we make should be justified by expected biases, by considerations about why a definition, for example, of an event is better than another, and so on.
We cannot just provide some methods and some results and be

(13:00):
happy with that.
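The point by Currie et al. can be illustrated with a toy calculation (not from the episode; the counts, the comparator populations, and the threshold of 2 are all invented): the same drug-event pair can cross or miss an arbitrary signal threshold depending on the analysis choices.

```python
# Illustrative sketch: how analysis choices can flip a "signal".

def ror(a, b, c, d):
    """Reporting odds ratio from a 2x2 table of report counts:
    a = drug & event, b = drug only, c = event only, d = neither."""
    return (a * d) / (b * c)

# Same hypothetical drug-event pair, two comparator populations:
whole_db = ror(14, 986, 600, 98400)    # whole database as comparator
restricted = ror(14, 486, 600, 28900)  # a restricted sub-population

THRESHOLD = 2.0  # an arbitrary signal threshold
for name, value in [("whole database", whole_db), ("restricted", restricted)]:
    verdict = "signal" if value > THRESHOLD else "no signal"
    print(f"{name}: ROR = {value:.2f} -> {verdict}")
```

With these invented numbers, the whole-database comparison crosses the threshold while the restricted one does not, which is exactly why each choice of population, threshold, and definition needs an explicit justification.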
And finally, a correct interpretation, because in fact, it has been shown by Mouffak et al. that there is widespread spin in pharmacovigilance, and spin means that there is a tendency to overstate results in

(13:21):
disproportionality articles and, in particular, to not take into account the limitations of the data we are dealing with.
There are, in fact, some motivations that are also shared with other studies, in particular the desire to publish stronger results.
That is always a conflict of interest of every researcher.

(13:42):
And also, on the other side, the necessity to navigate an editorial system that often relegates weak results, limited results or negative results to the grey literature.
So, summing up everything, there was a problem in disproportionality analysis, there is still a problem when

(14:02):
publishing disproportionality analysis, in the fact that the reporting is not complete, the choices are not justified, and the interpretation often does not take into account the limitations of the data and the methods, and that's why we started the READUS project.
Underlying the READUS-PV project was the idea to gather

(14:25):
experts from all over the world, experts in pharmacovigilance and experts in disproportionality analysis, to kind of provide a framework for driving the reporting of disproportionality analysis, and also sometimes a regulation, to ensure that what we have out there in the literature is actually reported in an accurate and useful way.

Alexandra Coutinho (14:47):
Right, I feel like a lot of what you said in your answer really applies to scientific publishing in general, having worked in academia myself for a time.
Specifically then to the paper, when I was reading it, I found really, really interesting this particular finding that there was a bit of a spike, was it, in disproportionality

(15:09):
assessments since 2017?
What has led to the spike in reporting?

Daniele Sartori (15:15):
It's challenging to find the root cause of this in a singular element in the pharmacovigilance space, but I can perhaps think of a couple of possibly explanatory things.
For one, around 2004-2005, there was an increase in the

(15:53):
number of publicly available data sets of spontaneous reports, and progressively the number of disproportionality analyses started to increase, from 2004 to 2005 and so on.
But it was in 2010 when Paluzzi and colleagues said, you know, the number of disproportionality analyses is exponentially

(16:14):
increasing and, if you will, we had some signals already that disproportionality analyses were on the rise, you know, almost 15 years ago.
They also, incidentally, called for a minimum set of requirements for reporting these analyses before publication, really.
And it was in 2013, I think, where you started to see a few

(16:57):
disproportionality analyses that are nowadays well cited, that use publicly available data sets to show what disproportionality was capable of and how you could implement it in your data set effectively.
And part of this is also because disproportionality analysis is quite easy to implement.

Michele Fusaroli (17:05):
And also from 2017, indeed, there was a further spike and acceleration in publishing disproportionality analysis, and we could speculate on some factors that may have promoted this spike.
In fact, between 2015 and 2017, many public dashboards were

(17:29):
made available.
These websites allowed anyone, with a few clicks, to access pharmacovigilance data and also to perform simple disproportionality analyses.
This surely had some impact.
Another important factor may have been the publication in

(17:50):
2016 of the good signal detection practices by the IMI project, giving already some rules, not for reporting, but for performing a disproportionality analysis.

Alexandra Coutinho (18:03):
Again, generally, to my mind, it sounds great to make data available to so many people to be able to conduct analyses in general, but it seems to have a bit of a detrimental effect on publishing good science and actually finding true findings.
I find that really interesting.
A bit sad as well, because you want people to look into this

(18:26):
data, to mine this data, so that we can get possibly true signals.
That being said, what effect then has the spike in specifically disproportionality analyses had on pharmacovigilance efforts and healthcare in general?

Michele Fusaroli (18:42):
Particularly if our speculation about the public dashboards influencing it is true.
This possibility, this opportunity to actually, with a few clicks, obtain access and perform a disproportionality analysis, sort of took away the responsibility of, like,

(19:03):
designing the best study and knowing your data and correctly interpreting the results from the researcher.
And this is, in fact, not so dissimilar from what is happening today with generative AI, so you actually, with a few clicks, obtain results and you maybe don't have the tools to

(19:25):
interpret them.
So if this is true, then even if the stronger participation of the pharmacovigilance community is really a good thing, because it brought a lot of signals also to be recognized, we expect that the signal-to-noise ratio was reduced.
So there were many more signals published, but at the same time

(19:47):
they were kind of diluted in a sea of published disproportionality analyses that were actually poorly performed, poorly documented and also poorly interpreted, because actually the researchers were no longer in control of the study design and of the disproportionality analysis, and they actually didn't know the data that they could access

(20:10):
through the public dashboard, just in a superficial way.

Daniele Sartori (20:13):
So even before the public dashboard and even before the READUS, prior to 2013 or during that time, there were examples of well-reported and well-conducted disproportionality analyses.
But I think what the READUS introduces is that if the

(20:35):
responsibility, prior to the READUS, to report a disproportionality analysis well fell almost entirely on the author, the READUS now says that the responsibility is shared among the authors, the editors and the peer reviewers.
So I think we're going to see a positive impact from this.

Alexandra Coutinho (21:02):
So in the papers that I read to prepare myself for this interview, one of the papers was like a scoping review of the reviews looking at guidelines on reporting analyses, looking at their effectiveness, and these reviews had found that, despite their existence, inadequate

(21:25):
reporting does still exist.
What has been preventing more widespread support and integration of previous guidelines for transparent reporting in healthcare and pharmacovigilance?

Michele Fusaroli (21:37):
If these guidelines are not endorsed by journals, then actually the guidelines are not actually adopted by the scientific community.
So this is the main problem.
But then there are also other problems, for example the fact that a guideline may be too difficult, too complex, it may take too much time, or also that there is some kind of inertia,

(22:02):
anyway, in the fact that it is difficult for a researcher to change the way they are doing and reporting research.
And finally also, well, guidelines are a manifestation of the current culture in the scientific community, and therefore if they do not evolve with the scientific community

(22:23):
and with the knowledge and with the perception of what is needed in the reporting of a study, then they are going to become obsolete.

Alexandra Coutinho (22:31):
Some reviews have also found that inadequate reporting was common in specialty journals and journals published in languages other than English, for example.
What other patterns have you seen in your assessment of the disproportionality analyses published so far?

Daniele Sartori (22:47):
I think, beyond my anecdotal experience and beyond the study by Currie that Michele cited earlier, I'm not aware of studies that have specifically looked into

(23:07):
the quality of reporting in journals that use languages other than English.
When I was carrying out my scoping review of signals, which came out a few years back, I did struggle to follow through some disproportionality analyses from journals that were not in

(23:30):
English.
But I also had the opposite experience, so I found some reviews written in Spanish or French that were quite well done.
So it's been hit or miss for me.
This is just my experience, really.

Michele Fusaroli (23:50):
Another thing that I experienced is that it is difficult to convey the importance of completeness in the study, even if the article and the disproportionality analysis is directed to a clinical journal.
We want to be complete, particularly because the reader may not have the tools and knowledge to understand and assess our study, so we want to be as complete as possible.

Alexandra Coutinho (24:09):
So, moving to maybe a slightly different subject then, the READUS-PV project was carried out using something called the Delphi method.
Again, not being a PV scientist myself, what is the Delphi method and what are its strengths and weaknesses with regards to projects such as this particular one?

Daniele Sartori (24:28):
The Delphi is a method that is used to arrive at a consensus in a panel of experts.
Now, the way in which the Delphi gets this consensus is through an iterative process.
It moves through rounds, rounds of questions from the

(24:48):
researcher to the panel of experts, and the Delphi says if all of these experts agree on something, then they have reached consensus.
Well, it's not really all of the experts, it's not unanimity.
The Delphi method requires the researchers to set

(25:09):
a threshold of consensus that typically hovers around 70 to 80 percent of people saying yes, we agree on this specific topic, but other Delphi studies have used lower thresholds.
So there is no gold standard consensus threshold for Delphi studies.
So it proceeds iteratively.

(25:32):
So you could start with a question like, should we define humans as featherless bipeds?
And then if 70% of people say yes, then we all agree that humans are featherless bipeds.
Or rather, this panel of experts says, you know, humans are featherless bipeds.

(25:54):
Now, it proceeds iteratively in the sense that these questions can be amended in subsequent iterations, which are called rounds, and they're modified based on the feedback from the participants in the study.
So, for example, if we ask the same question and the answer is

(26:18):
overwhelmingly no, or a group of people has said, well, you should perhaps rephrase it in a different way, then subsequent rounds of the Delphi will account for these modifications to the question until everybody can agree, or not.
It's a perfectly fine outcome of a Delphi to say we did not

(26:39):
agree on this.
It's also important to note that the feedback is not just one way, from the participants to the research group, but also from the research group to the participants.
So at the end of each iteration of a Delphi, the participants are given the outcome of the previous iteration.

(27:03):
So they are aware that either the majority or a small minority of participants have said yes or no to a given question, or that this question should be amended as follows.
Another important aspect of the Delphi is that the panelists, people who take part in this, they're anonymous.

(27:24):
So if Michele and I were in the same Delphi, speaking purely theoretically, he and I are not allowed to know that we are taking part in this Delphi.

(27:45):
So the communication is from the researchers to one author at a time, and only aggregate feedback is given to the whole panel of participants.
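Purely as an illustration of the mechanics described above (not part of the episode; the vote counts and item are invented stand-ins, and 80% is one possible threshold), one consensus check in a Delphi round might look like:

```python
# Illustrative sketch of a single Delphi-round consensus check.

THRESHOLD = 0.80  # fraction of panelists who must agree (set by the researchers)

def round_outcome(votes):
    """votes: list of booleans, one per anonymous panelist.
    Returns 'consensus' if the agreement fraction meets the threshold,
    otherwise 'revise' (rephrase and re-ask in the next round)."""
    agreement = sum(votes) / len(votes)
    return "consensus" if agreement >= THRESHOLD else "revise"

# Aggregate feedback shown to all panelists after the round:
print(round_outcome([True] * 17 + [False] * 3))  # 17 of 20 agree (85%)
print(round_outcome([True] * 14 + [False] * 6))  # 14 of 20 agree (70%)
```

Items that come back as "revise" are amended based on panelist comments and put to the panel again in the next round, with only the aggregate tally fed back to preserve anonymity.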

Michele Fusaroli (27:50):
There are some strengths of this kind of study and in particular, for example, that it allows you to reach this expert consensus, giving voice to everyone.
So there are no, like, predominant voices that may overcome other voices, and instead it's possible to grab every opinion.
What we did in particular was start from the RECORD-PE

(28:11):
structure, and we asked all the experts that were involved to provide the items that they thought had to be included in the reporting checklist, and then, through these rounds, we selected the ones that were actually considered by the majority of the experts, and in our case it was 80%, to be

(28:37):
important enough to be included in a reporting checklist.
There were some weaknesses also in this process and, in particular, the fact that it took a lot of time.
It took a lot of effort.
Also, even if we tried to contact experts from all over

(28:57):
the world, the majority of them were, in fact, from Europe.
But, yeah, this is a weakness that, plausibly, we will manage to solve in the next revision of the guidelines.

Daniele Sartori (29:17):
Yeah, and perhaps another limitation is, just because you ask a group of experts to answer a specific question, even if they agree, they can still be wrong.
And another important thing to bear in mind when one designs a Delphi is how you define an expert.
I don't know what an expert in pharmacovigilance is.
I think you could get around that, for example, by setting a

(29:38):
number of years of experience, for example, and as Michele said, representativeness of the panel is crucially important.

Michele Fusaroli (29:47):
What we decided to do for gathering experts was actually a bibliometric analysis, so we actually looked at the researchers that published the most disproportionality analyses.
So, actually, like every practical and operative definition, it has its problems, but at the same time we thought it was also a way to involve people that are publishing a lot

(30:09):
of disproportionality analyses and therefore are going to benefit a lot from the reporting checklist.

Alexandra Coutinho (30:15):
The limitations that you both were talking about with regards to the Delphi method and the READUS-PV project, I guess you can kind of apply that perhaps to co-authors on a paper, right?
You talk about, like, oh, the experts could agree on a finding or a specific hypothesis, but they could still be wrong.
Of course, that applies to normal scientific papers as well

(30:37):
.

Michele Fusaroli (30:38):
Exactly.
The only thing is that here you actually want to try to have everyone in the world follow a checklist.
So if the expert opinion and expert board is wrong, then it's more of a problem, because you are trying to implement a new standard.
But yes, that was necessary, as we said before, because

(30:59):
reporting in disproportionality analysis had really a lot of problems, and that was potentially impacting, we cannot really know, but speculatively that could have impacted patient safety, because of the overburden of signals and noise, and you don't really know what to do with so many

(31:20):
disproportionality analyses published that are not well accessible.
So, like, that was important to do.
It's not definitive, we will have to see how these guidelines are adopted by the pharmacovigilance community and why it is sometimes difficult to adopt some of its items, and therefore also change them as we gather new evidence on how they

(31:46):
can be refined.

Alexandra Coutinho (31:48):
And then it also makes even more sense to my mind.
I mean that if it's going to be used by the community, then the community should all have a say in how you're going to do this properly.
You both being pharmacovigilance scientists, what are your thoughts on transparency, replicability and

(32:08):
reproducibility in pharmacovigilance science and signal detection in general, and what kind of impact will these READUS-PV guidelines have on these characteristics in disproportionality analyses?

Michele Fusaroli (32:19):
We hope that READUS-PV will have an important impact on improving the reporting of disproportionality analysis.
The READUS-PV checklist can also just be a tool for authors to write a complete and transparent and high-quality report of a disproportionality analysis.
At the same time, we hope that it will be endorsed by journals,

(32:43):
and that's what we are spending our energy on now.
That is, we need, as Daniele said before, to make the high-quality reporting of disproportionality analysis a responsibility that is shared not only by the authors but also by peer reviewers and editors.
But in fact, we think that everyone could benefit from the

(33:04):
READUS-PV checklist as, for example, peer reviewers could actually see whether a report is complete or not following this checklist, but also could in fact be exposed to higher quality reports.
So actually, what happens to me now is that I spend a lot of time in reviewing articles, because most of the time it's

(33:25):
difficult to understand exactly what was performed in the study.
It is difficult to understand the why of some choices, and instead, if the READUS-PV checklist is adopted, this problem will hopefully be reduced.
This would bring a lot of benefit also to the journals and
(33:47):
the editors because, on one side, they ensure that higher quality reports are published in their journals, and this should be a priority of scientific journals, but also it's going to

(34:08):
be easier to find people available to do a peer review.
Ultimately, also, better reporting will benefit regulatory agencies and even the readers of disproportionality analyses, who will be more able to interpret and assess the results of a study.

Daniele Sartori (34:22):
I think you've hinted at this previously, like how authors and peer reviewers and editors will hopefully use the READUS checklist.
But I think, much like other guidelines that were developed through consensus processes, the READUS-PV will also be

(34:45):
updated as people use it, because it will create an awareness of its essential items and people will have the chance to reflect on them and further build on the checklist itself.
So, hopefully we will see a READUS-PV extension, and I

(35:06):
don't think it's entirely going to be up to you or the READUS group.
The beauty of it, I think, is that it will be, as I said, an awareness among people who regularly publish disproportionality analyses, and so they might come up with additional items for the checklist that they feel will improve the reporting of these analyses.
Like, ultimately, the Delphi method is a starting point.

(35:30):
It's what you do when you have very little, and the starting point is now there and, much like everything else in science, it needs to be built upon.
And also, I think, at UMC, whenever we publish works that use, for example, vigiRank, which is based in part on disproportionality analysis, we

(35:52):
will have to report the disproportionality bit of the method in accordance with the READUS, and if we communicate signals that have been detected in VigiBase using disproportionality, the READUS will make things a lot clearer for whoever reads the signals.

Alexandra Coutinho (36:10):
Yeah, it's clear to see that there are many advantages to integrating and practicing the READUS-PV project guidelines throughout the PV community, but it will only really work if everyone agrees to use these guidelines together.
So, having said this, what are the next steps to ensure their

(36:33):
uptake and overall integration into the PV community?

Michele Fusaroli (36:37):
Yes, well, endorsement by journals and scientific societies will also be an important part of it.
Also, letting authors know about these guidelines and how they can be useful to their practice, also as a structure, as a skeleton, as we said before, is going to be important.
And finally, another important thing is, just as we said before

(37:01):
, to gather feedback and to actively monitor how the guidelines are adopted.
What are the difficulties that people meet when they try to adopt the guidelines, and how can they be refined?
And I absolutely agree with Daniele that, yeah, there was the READUS-PV group that drove the initial collection of this

(37:22):
checklist, and that was necessary because there wasn't anything.
But this should be seen more as a pharmacovigilance community responsibility and opportunity, to create better reporting of disproportionality and better pharmacovigilance in general.

Alexandra Coutinho (37:39):
So before I let you both go, we have a question from one of our listeners, one of our colleagues at the Uppsala Monitoring Centre, Magnus Ekelo.
He asks how this discussion applies when we perform disproportionality analyses on medication errors specifically.

Michele Fusaroli (37:56):
Everything that we said before applies also here.
So completeness and transparency are going to be important, and correct interpretation and justification of the choices made.
And actually, disproportionality analysis compares the observed number of reports in which a drug and an event are co-reported together with the number of reports that

(38:19):
would be expected if the drugand the event were independent.
Okay, so with medication errors, we always know that the drug
is related to the event.
What we can say is that disproportionality analysis can still be used, for example, to prioritize the medication errors of one drug relative to the medication errors of another

(38:39):
drug, and this is why it is important to justify your choice. Why are you doing a disproportionality analysis? It's not that you cannot do it, but explain what your aim is, perform the analysis accordingly, and interpret the results accordingly.
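The observed-versus-expected comparison Michele describes can be sketched numerically. The following is a minimal illustration, not anything prescribed by the READUS-PV statement itself: from a 2×2 contingency table of report counts, the expected count under independence is the product of the marginal totals divided by the grand total, and a commonly used disproportionality measure is the Reporting Odds Ratio (ROR). All counts below are hypothetical.

```python
# Minimal sketch of the observed-versus-expected comparison behind a
# disproportionality analysis. All counts are hypothetical.

def expected_count(a, b, c, d):
    """Reports mentioning both drug and event that would be expected
    if the drug and the event were reported independently."""
    total = a + b + c + d
    return (a + b) * (a + c) / total

def reporting_odds_ratio(a, b, c, d):
    """ROR: odds of the event with the drug vs. with all other drugs."""
    return (a * d) / (b * c)

# a: drug + event, b: drug only, c: event only, d: neither
a, b, c, d = 20, 980, 100, 98900

observed = a                            # 20 co-reports observed
expected = expected_count(a, b, c, d)   # ~1.2 expected under independence
ror = reporting_odds_ratio(a, b, c, d)  # well above 1: disproportionate
print(observed, round(expected, 1), round(ror, 1))
```

With medication errors, the point made above still holds: the drug is related to the event by definition, so a high observed-to-expected ratio cannot be read as a causality signal; it can only serve purposes such as prioritizing one drug's medication errors relative to another's, which is why the aim must be stated.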

Alexandra Coutinho (38:58):
So yes, so I guess, regardless of the fact that medication errors have more of a link between the event and the drug, whereas disproportionality analyses look at these as separate entities, the guidelines still apply, in that, you know, it is maybe even more important that you apply the guidelines in this specific case.

Michele Fusaroli (39:21):
Yeah, exactly.

Alexandra Coutinho (39:23):
So we're nearly finished with the interview. Before we go, do you have anything more that either of you wanted to say on reporting disproportionality analyses and their review in pharmacovigilance?

Daniele Sartori (39:32):
So I think disproportionality analysis is a valuable tool that we have in pharmacovigilance. It's a starting point when you analyze a large data set and all, and in the READUS we do state that there should preferably be a case-by-case assessment as well, so that your

(39:55):
measures of disproportionality don't appear alone. So I think it'd be helpful to have some form of guideline that also tells you how to report the case-by-case assessment. At the moment we don't really have one, so hopefully there is some room for development there.

(40:16):
And the second thing is, with massive conflict of interest, of course, you should use the READUS-PV guideline and read it. Read the explanatory document as well.

Michele Fusaroli (40:29):
Yeah, again, I think another general message that is particularly important, and we spoke about it before, is that the quality of the reporting shouldn't be just a responsibility of the authors. It should be a shared responsibility, and therefore it's important that everyone is involved in adopting the

(40:51):
READUS-PV guidelines as a framework for writing better disproportionality analyses and as a framework to select better disproportionality analyses for publication. And also, it should be a responsibility of everyone to find better ways to report, for an extension of the READUS-PV

(41:13):
guidelines when the time comes.

Alexandra Coutinho (41:15):
Yeah, the community was definitely my key takeaway from this discussion. Thank you both for a very, very interesting discussion on these guidelines and your paper, and thank you for your time. That's all for now, but we'll be back soon with more

(41:35):
conversations on medicine safety. If you'd like to know more about transparent reporting in pharmacovigilance, check out the episode show notes for useful links. If you like our podcast, subscribe to it in your favorite player so you won't miss an episode, and spread the word on social media so other listeners can find us. Apart from these in-depth conversations with experts, we

(41:56):
host a series called Uppsala Reports Long Reads, a selection of audio stories from UMC's pharmacovigilance news site, so do check that out too. Uppsala Monitoring Centre is on Facebook, LinkedIn and X, and we'd love to hear from you. Send us comments or suggestions for the show, or send in questions for our guests next time we open up for that. For Drug Safety Matters, I'm Alexandra Coutinho.

(42:19):
I'd like to thank Daniele and Michele for their time, our listener Magnus Ekelo for submitting questions, Fredrik Brouneus for production and post-production support and, of course, you for tuning in. Till next time.
Till next time.