
October 17, 2025 · 29 mins

Far from a future add-on, artificial intelligence is already embedded in the cycle of drug safety, from case processing to signal detection. Versatile generative AI models have expanded what is possible, but they have also raised the stakes. How do we use them without losing trust, and where do we set the limits?

In this two-part episode, Niklas Norén, head of Research at Uppsala Monitoring Centre, unpacks how artificial intelligence can add value to pharmacovigilance and where it should – or shouldn’t – go next.

Tune in to find out:

  • Why pharmacovigilance needs specific AI guidelines
  • How a risk-based approach to AI regulation works
  • Where in the PV cycle human oversight is most needed

Want to know more?

In May 2025, the CIOMS Working Group XIV drafted guidelines for the use of AI in pharmacovigilance. The draft report received more than a thousand comments during public consultation and is now being finalised.

Earlier this year, the World Health Organization issued guidance on large multi-modal models – a type of generative AI – when used in healthcare.

Niklas has spoken extensively on the potential and risks of AI in pharmacovigilance, including in this presentation at the University of Verona and in this Uppsala Reports article. His favourite definition of AI remains the one proposed by Jeffrey Aronson in Drug Safety.

For more on maintaining trust in AI, revisit this interview with GSK’s Michael Glaser from the Drug Safety Matters archive.

The AI methods developed by UMC and cited in the interview include:

  • vigiMatch, for duplicate detection
  • vigiRank, for data-driven signal detection
  • vigiGrade, for measuring the completeness of case reports
  • Koda, for coding of medicinal product information

Join the conversation on social media
Follow us on Facebook, LinkedIn, X, or Bluesky and share your thoughts about the show with the hashtag #DrugSafetyMatters.

Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!

About UMC
Read more about Uppsala Monitoring Centre and how we promote safer use of medicines and vaccines for everyone everywhere.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Federica Santoro (00:15):
Whether we realise it or not, artificial intelligence has already transformed our lives. Generative AI technologies rose to popularity just a few years ago, but they have already revolutionised the way we write, search for information, or interact online. In fact, it is hard to think of an industry that won't be transformed by AI in the next several years. And that includes pharmacovigilance. My name is Federica Santoro, and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety. Joining me on this special two-part episode is Niklas Norén, statistician, data scientist, and head of Research at Uppsala Monitoring Centre. In this first part, we discuss how artificial intelligence can fit in the cycle of drug safety, why it's important to regulate AI use in pharmacovigilance, and what it means to adopt a risk-based approach. I hope you enjoy listening.
Welcome to the Drug Safety Matters studio, Niklas. It's such a pleasure to have you here today.

Niklas Norén (01:41):
Thank you so much.

Federica Santoro (01:42):
Today we're diving into nothing less than artificial intelligence, or AI, and its use in pharmacovigilance, obviously. Such a complex and relevant topic. There will be lots to cover. Why don't we start with the basics? I'd like to cover some definitions to begin with. When reading up for this interview, I was surprised to learn that AI is nothing new to pharmacovigilance, really. But we didn't call it by that name until recently in publications. And I read somewhere that even disproportionality analysis, which is one of the go-to methods – if not the go-to method for PV analysis – could be considered artificial intelligence. Is that so?

Niklas Norén (02:29):
So it seems counterintuitive, right? I mean, most people, when we hear artificial intelligence, we think of something different, something much more versatile, and something with agency, perhaps. And this is the reason why in the past we didn't use the term, because it wasn't really reflecting the work we were doing at that point, when we were solving quite narrow applications with quite simple methods. But I think when you look at the definitions that are actually in use, they do include much more than the most recent class of deep neural networks or generative AI methods. They include simple machine learning methods. They also include hard-coded systems that have no ability to learn from data, for example. And so, if you look at the first chess engine to beat the world champion, Deep Blue, there was no machine learning even in it. Definitely no deep neural networks. It was basically hard-coded human expertise and very efficient search strategies in a computer that was able to do it. And that clearly is a form of artificial intelligence. And so I think, when we talk about AI, the relevant question maybe isn't, is it AI – yes or no? It's, what kind of artificial intelligence do we have? Is it one with a narrow application, or can it do many different things? Is it one where there's a clear-cut answer that it's trying to get to, or is there ambiguity in the task that it's trying to solve? Does it have adaptiveness or is it fixed, etc.? So I think those are much more relevant questions. And then, if we come back to your question: disproportionality analysis in itself, the statistical method, I would say is not AI. But if you use it in a triage to direct your signal assessors to specific case series, I think it represents a simple and quite basic form of artificial intelligence, but artificial intelligence nonetheless, in the sense of a computer or another machine trying to perform a task that would normally require human cerebral activity, as Aronson has defined it in Drug Safety.
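
To make the idea concrete, here is a minimal sketch of a disproportionality statistic: an observed-to-expected ratio of report counts on a log scale, computed from a 2×2 contingency table. The counts and the simple +0.5 shrinkage are illustrative assumptions and do not reproduce UMC's production analyses.

```python
import math

# Minimal sketch of a disproportionality statistic (an information component, IC)
# computed from counts of spontaneous reports. The numbers are made up for
# illustration; real analyses use further shrinkage and stratification.

n_drug_event = 40        # reports with both the drug and the adverse event
n_drug = 2_000           # reports with the drug (any event)
n_event = 5_000          # reports with the event (any drug)
n_total = 1_000_000      # all reports in the database

# Expected count if drug and event were reported independently
expected = n_drug * n_event / n_total

# Simple +0.5 shrinkage to stabilise the ratio at low counts
ic = math.log2((n_drug_event + 0.5) / (expected + 0.5))

print(f"observed={n_drug_event}, expected={expected:.1f}, IC={ic:.2f}")
# A clearly positive IC suggests the pair is reported more often than expected,
# which can be used to triage case series for human assessment.
```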

Federica Santoro (04:27):
And it will be interesting to see if the definition evolves as the technology itself evolves, right? We might get to the point where we have to review how we define artificial intelligence per se, as it includes perhaps more and more aspects and methods.

Niklas Norén (04:41):
Or we move beyond that and we say, the real question is not the binary yes or no, but more, what type of artificial intelligence do we have in front of us? Is it a narrow one, or is it a broader one with more use cases in mind? Is it one that's solving a clear-cut task where humans know the right answer? Or are we doing something quite ambiguous where it's hard to say, even for human specialists, what the right answer is? So in pharmacovigilance, clearly, something like signal detection is of the latter nature. But even some things we think of as maybe much more basic, like knowing, does this report relate to a pregnant person? That is not always easy to tell because we don't have all the information. Or are these reports duplicates or not? And I think this is often the case we're in. So I think maybe we have to think about AI in that sense. It's more, what flavour of AI? I mean, is it adaptive or is it fixed? And so forth.

Federica Santoro (05:33):
That's a helpful way to think about it: what are we using it for? So, that's a helpful framing. But when we talk about artificial intelligence nowadays, and especially with the public, as you mentioned, we often refer to these newer methods, right, that have been in the news now for years and have taken the world a little bit by storm, I must say: generative AI tools like ChatGPT. What is so special about this technology?

Niklas Norén (06:03):
Well, one obvious special aspect is their capabilities. I mean, they're massive and they've been trained on massive amounts of data. So the capabilities they have, I think, are unprecedented, especially in the way that they can process text and generate text and so forth. But there's also this difference compared to before, when we were typically trying to solve a specific task with a specific artificial intelligence solution, and that's what it was trained and fine-tuned to do. Now, because these methods interact by processing and generating new text, they can also do things for which they've not been explicitly trained. This versatility – we refer to it as zero-shot learning, meaning we don't even give it any training data or any examples of what it should do – means it can still do things we ask it to do. This I think is a very important capability, but it also, of course, opens up the possibility that we ask it to do things without really knowing how well it will be able to do those tasks. So I think that, plus just how it feels to interact with it, when it's not just producing an output like A, B, C, or D, or "I think this is a dog" or "I think this is a wolf", but it's actually producing new text, and we can have what almost feels like a human conversation, is also, of course, something that's different compared to what we've seen previously.

Federica Santoro (07:22):
And that versatility raises a whole set of issues that we will dive into one by one in a little bit. But first, I want to explain why you specifically are in the studio today to talk about artificial intelligence in PV. And that's because – apart from the fact that you are a data scientist and statistician and have been working and thinking about this for a long time – you're also part of a working group within CIOMS, the Council for International Organizations of Medical Sciences. And that working group has been tasked with drafting a set of guidelines to regulate – or advise, perhaps – the use of AI in PV. So, why do we need guidelines? Or rather, what are the risks, perhaps, of not having any guidelines at all?

Niklas Norén (08:12):
So, I think generally, we need guidance because we want to get this right. I think there are great opportunities with artificial intelligence for us as a broader society, and specifically, of course, in our field. And if we want to benefit from them, we need to go about this in a mindful way and know to ask the right questions. This is both to be able to get the full value out of these opportunities, but also not to make mistakes and get things wrong and maybe cause harm, maybe lose trust. And it's not easy. I mean, this is the challenge. There are so many ways you can get things wrong, and sometimes it will not be obvious that you got it wrong until much later. Connected with the ability of zero-shot learning and the ability of generative AI to do many different things, this has lowered the barrier to entry. It's much easier now to develop or test or just deploy artificial intelligence than it would have been in the past, where, to do it, you would have to define your question, create maybe a reference set, fine-tune an advanced algorithm. There was a big barrier to doing it. Now the barriers to development have dropped. But we need as much, if not more, validation or evaluation to know that we're actually getting things right. And I think there's a risk that the relative cost of that validation has gone up a lot with the development cost going down. So for that reason, I do think we need guidance in a general sense. Why do we need specific guidance in the context of pharmacovigilance? Clearly, more general guidance is being developed. But I think it's good to have some more concrete and precise guidance. We have some specific challenges or aspects of our field. I mean, it's a regulated environment, it has potential impact on patient safety and public health, so we need to bear that in mind. I also think we often have this setting of ambiguity where it's really hard to know what the right answer is, and much of what we're looking for is rare. So there's a low prevalence of signals or duplicates, like we talked about, or just specific adverse events that we may be interested in, let's say.

Federica Santoro (10:18):
Yeah, you're right that the technology is so accessible, and you don't have to be a trained data scientist to start playing around with ChatGPT or similar tools. But of course, as you say, this poses problems as well. So it is good to have those guidelines available. And the guidelines now advocate for a risk-based approach to using AI in PV. What do you mean by that?

Niklas Norén (10:46):
So it means that the measures we take should be proportional to the risk involved. We shouldn't have a one-size-fits-all rule that says: as soon as you use AI, you should always do these things in this way. I think it depends on the risk, basically. So you need to think about the probability of some harm happening and the impact of that harm, and then adjust your measures accordingly. That is the basic notion of it.
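
As a concrete illustration of that proportionality, here is a toy sketch that scores hypothetical AI use cases by the likelihood of an error and its impact, and maps the score to a level of oversight. The scales, example use cases, and thresholds are assumptions made for illustration only; they are not recommendations from the CIOMS draft or UMC.

```python
# Toy sketch of the "proportionate measures" idea: score each AI use case by the
# likelihood of an error and the impact if it happens, then let the score drive
# how much validation and oversight is required. All values are illustrative.

def risk_tier(likelihood: int, impact: int) -> str:
    """likelihood and impact on a 1 (low) to 3 (high) scale."""
    score = likelihood * impact
    if score >= 6:
        return "high risk: extensive validation, human in the loop, ongoing monitoring"
    if score >= 3:
        return "medium risk: documented evaluation and periodic performance review"
    return "low risk: lightweight checks proportionate to the task"

use_cases = {
    "AI suggests terms, human always confirms": (2, 1),
    "Fully automated duplicate removal, no human review": (2, 3),
    "Automated triage that can suppress reports from assessment": (3, 3),
}

for name, (likelihood, impact) in use_cases.items():
    print(f"{name}\n  -> {risk_tier(likelihood, impact)}\n")
```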

Federica Santoro (11:10):
And could you give us some examples then of low- and high-risk situations, so we get a better understanding of this approach?

Niklas Norén (11:19):
That, I think, depends not just on the AI, but on the context in which the AI is used. So generally, I would say if you're using the AI to support human decision-making, but you have a human ultimately taking the decision or on the whole deciding, it's maybe lower risk, generally speaking, because then a human will look at it and make their judgment call, and they will still be responsible. If you've automated something and there is no human in the decision, then generally speaking, you have a higher risk, of course. But then you have to think about, okay, what would an error mean here? And errors of different types, even for the same application, have different implications. So let's say you're doing duplicate detection and removal. Missing a pair of duplicates, not highlighting them – the cost of that may be a nuisance: okay, now we're looking at a case series, and actually there are fewer reports than it looks like, and we waste some time on that. If we accidentally take out a very important report, of course, that could lead to delay or failure to detect a signal. The probability of that may be low, but the impact could be severe. So you also have to remember that you don't always have a symmetrical cost of errors. Maybe one type of error is more problematic than the other. But then, to add to that, I think you also have to think about the whole human–AI team and what it means. You can have a simple AI method, say a rule-based method to detect duplicates, that has very high recall. It highlights almost every true duplicate, but it also highlights massive numbers of non-duplicates. And then if a human is reviewing all of that, they may well grow numb to the real duplicates and miss them in their assessments. And we've seen this in the past. So the AI component was actually quite sensitive. The team was not, because the human just got...
Federica Santoro (13:07):
...tired...

Niklas Norén (13:08):
...got tired or bored and failed to highlight them. So you have to really look at it in that context. And when we look at the risk, we should also think about the baseline scenario. I mean, we could be very risk-averse and say we can't do this if there's any risk of getting it wrong, but we need to ask: what is the error rate? What is the risk of having a human do all of this? Can they even do it? What kind of error rate do they have? And not just in a clean and nice experiment, but in a real-world setting where they get hungry, they get bored, they get distracted. That needs to be kept in mind.
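
A toy calculation can make the human–AI team point concrete: a screen with near-perfect recall but very low precision can still leave the team insensitive if reviewers miss true duplicates buried in false alarms. All numbers below are made up for illustration and do not describe vigiMatch or any real system.

```python
# Toy illustration (made-up numbers): a duplicate-detection screen with very high
# recall but low precision can still yield a poor human-AI team if reviewers
# grow numb while wading through false alarms.

true_duplicates = 100          # actual duplicate pairs in the batch
flagged_true = 98              # true duplicates flagged by the screen
flagged_false = 9_900          # non-duplicates also flagged

recall = flagged_true / true_duplicates
precision = flagged_true / (flagged_true + flagged_false)
print(f"AI screen alone: recall={recall:.2f}, precision={precision:.3f}")

# Assume (purely for illustration) that a reviewer confirms only 60% of the true
# duplicates put in front of them, because the real ones are buried in noise.
human_hit_rate = 0.60
team_recall = recall * human_hit_rate
print(f"Human-AI team: effective recall={team_recall:.2f}")
# The screening component is sensitive, but the team is not - which is why the
# risk assessment has to consider the whole workflow, not the model in isolation.
```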

Federica Santoro (13:39):
Yeah, absolutely. So, what I'm hearing is you really need to assess this on a case-by-case basis and take the task at hand and really decide critically, okay, what risk is there if the AI does this or does that, and go from there.

Niklas Norén (13:52):
Yeah.

Federica Santoro (13:53):
So, since we're on the topic of human oversight, I read that human oversight in AI activities can take the shape of "human in the loop" or "human on the loop". I didn't quite understand the difference. So, can you clarify that for me and our listeners?

Niklas Norén (14:10):
So, a human in the loop would be performing the task together with the AI. So, for every single task there's a human there. It could be that there's a suggestion from an AI, and then the human decides whether to accept that suggestion or not. It could also be, I think, that the human is responsible for the task but is getting some recommendation, or can get some inspiration, from the AI. But really, they're doing it together, and the human is involved every time the task is performed. A human on the loop means that it's maybe a fully automated process, but there's a human having oversight of the process and able to step in when relevant. I think it's a little bit harder to see exactly when that would happen in pharmacovigilance. You also have the third level, which is human in command, and that just means that humans have the ultimate responsibility to design it and decide when it's going to be used. So I don't know the precise definition. If, say, you have something like UMC's Koda algorithm for coding medicinal product information, there will be an opportunity, in difficult cases, to delegate to a human. Is that human in the loop or is that human on the loop? But then also, say we deploy methods and we say, you're going to use our vigiMatch method to do deduplication, and regularly we're going to review its performance and decide whether we need to retrain it. Maybe we even have some measures in place to proactively highlight a model drift or a data drift where performance is not what we would expect. I think that is human in command, but maybe that is human on the loop. So, I mean, I think these are a little bit fluid as concepts, but the general idea is that a human in the loop is helping to perform the task. A human on the loop in a self-driving car would be: there's a human there, the car is driving, but the human is overseeing it, and if they feel the need, they will step in. It's just not as obvious to me when we would have a human overseeing an AI's activity in pharmacovigilance.

Federica Santoro (16:13):
Okay. So these definitions can be somewhat fluid, but I think the key point here is that human oversight is very much needed. And on that topic, I'd like us to take a critical look at the entire pharmacovigilance cycle, from collecting data to analysing it, digging for signals, and eventually communicating new safety information. And I'd like us to pinpoint where AI is used today, where it could be used in the future, and perhaps which tasks you would not advise AI for at all, ever. I mean, of course, this is speculative at this point, but let's try to get that overview. If we start from the status quo: which tasks do we use AI for today?

Niklas Norén (17:02):
So in pharmacovigilance – and now in this broader sense, considering also simple forms of AI, anything from basic predictive models to even rule-based algorithms to identify certain types of cases, etc. – I think a lot of focus, if you go back 15–20 years, was more on signal management, and maybe signal detection in particular. So clearly, there's been a lot of work there: the disproportionality methods and basic triages we talked about. There have been developments of basic predictive models. So we have our vigiRank method that looks at other aspects of a case series than just whether there are more reports than we would expect. We also look at the geographic spread, the content and quality of individual reports, etc. And other groups have done similar things. There's also been work to identify cases or reports that are particularly important to look at and maybe should be prioritised in a review. But I would say that the general interest is now shifting very much to case processing, and this is because it is so expensive and resource-intensive, and maybe also boring for the humans, that there's a great opportunity here to free up resources to maybe spend them on more value-adding tasks. So I think as a community – and of course, the primary recipients of the reports are driving this, the pharmaceutical industry and the regulators as well – this I think is an important area, too. One area that we've been working on at UMC for a long time has been duplicate detection. And that's an area where we deployed a more sophisticated method – now I'd call it artificial intelligence; we didn't at the time, but in the broader sense it is – for performing that task. And this is something which clearly we cannot do manually, even if we wanted to. I mean, no human can overview the 42 million reports in VigiBase and get a sense of where we have potential duplication. So I think there's a variety of areas where we are already using it today. But I would say for the most part, the methods are quite straightforward. Where we have seen some more sophisticated methods has been in natural language processing. So, using modern methods (could be deep neural networks and other methods) to process either narratives on the case reports to extract some useful information from there, but also, say, the regulatory documents describing already known adverse events, like the summary of product characteristics documents, and trying to know: is this adverse event actually listed already? So we shouldn't spend as much time on it.

Federica Santoro (19:29):
So, on a related topic, speaking about tasks where we already use artificial intelligence today, one of our listeners, Mohammed, asks: "How do you use AI to monitor adverse effects or to track signals?"

Niklas Norén (19:43):
Yeah, so this is a good question. And we mentioned the disproportionality analysis and some of the predictive models that have been used. Other work in this area, I think, relates to the ability to go beyond just the single drug and adverse event pairs that we often base our analysis on. This could be looking for other types of risks, like those related to drug–drug interactions. It could also be related to other risk groups. Maybe men are at a specific risk for a certain adverse event when exposed to a specific medicine, maybe children have a different risk, etc. But also to look beyond the single adverse event terms in MedDRA that we use to code these adverse events and see: can we identify and bring together reports that actually describe the same clinical entity, even if the presentation in the patient was slightly different, maybe the coding of the term was different? So we've worked with cluster analysis methods to do that. Other groups have looked at network analysis or various ways to do that. We also have another stream of research here at UMC looking at semantic representations of drugs and adverse events, to be able to support the assessors in determining which MedDRA terms to even include in a search looking for relevant cases.
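
As a rough illustration of the semantic-representation idea, the sketch below ranks candidate adverse event terms by embedding similarity to a query term. It uses a generic pre-trained sentence-embedding model and an invented term list as stand-ins; it is not the domain-specific representations UMC is researching, nor actual MedDRA output.

```python
from sentence_transformers import SentenceTransformer, util

# Toy illustration: suggest related adverse event terms for a case search by
# comparing embeddings. Model choice and term list are illustrative assumptions.
terms = [
    "myocardial infarction",
    "heart attack",
    "cardiac arrest",
    "headache",
    "toxic epidermal necrolysis",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(terms, convert_to_tensor=True)

query = "myocardial infarction"
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, embeddings)[0]

# Rank candidate terms by semantic similarity to the query term
for term, score in sorted(zip(terms, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {term}")
```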

Federica Santoro (21:01):
Very interesting. So there are lots of areas where AI is already used today, and obviously lots where we are still in the exploratory phase, perhaps. Now, I'd like to get you to talk a little more speculatively or philosophically, perhaps – we'll see where we land. Which tasks do we not entrust to AI today, but could perhaps in future?

Niklas Norén (21:26):
Yes. So, one type of question that we've stayed clear of is anything related to a detailed assessment of the relationship between a medicine and an adverse event, because this requires so much clinical knowledge and expertise and the ability to adjust the question and the considerations to that specific topic. So, for example, we developed a measure of information completeness on case reports a long time ago. We call it vigiGrade. This was reasonably straightforward. It was meant to just be the first dimension of a broader quality measure. And the next one we were interested in was a sort of relevance measure that would look at the strength of a case report as it relates to the possible causal association between a drug and an adverse event. But this proved to be almost insurmountable, because even just one component of that consideration, the time to onset, is so dependent on what we're looking at. So, what is a suggested time to onset if we think of this as being an adverse effect? Well, it depends on the medicine, it depends on the adverse event, and it may even depend on the relation. So, of course, to try to do that with old-school artificial intelligence or even basic machine learning is very, very difficult. Now, with the reasoning capabilities of the generative AI models, this may no longer be completely out of reach. Maybe we can get these models to help us do this and actually then solve many tasks that were more difficult before. And maybe we can integrate this in both signal detection and signal validation. This I think would be a huge step forward. But it's not something we've done yet. It certainly requires a lot of research, and it's unknown. But I do have hope that we may be able to be more ambitious going forward.

Federica Santoro (23:11):
You see it as an option going forward, basically. And who knows how many possibilities are out there that we haven't even thought about yet. So that'll be interesting to see, how the field moves on. Are there tasks that you feel now, in 2025, that we will never be able to trust AI with?

Niklas Norén (23:31):
So, I've been around long enough to be careful with never saying never. I do think something like a benefit–risk evaluation is very personal. I mean, I don't know that that's something you want to outsource to anyone else, and certainly not to a computer. So I think that would be less appropriate for such use, for sure. I also think, coming back to this question of ambiguity, there are some tasks with very much ambiguity – like, say, in the context of signal detection, where even the human world experts may not always agree: do we have enough evidence to say that there is a causal association between this medicine and this adverse effect, let alone at what point in time that became clear? So if we don't have agreement on what the truth is, if we have that level of ambiguity, then I think it's very hard to see how we can develop AI to support us, because we won't know if it's getting it right. How can we validate that it's doing something sensible? So I do think any area where we have a lot of uncertainty, and where we struggle to define what the truth is, is going to be quite difficult.

Federica Santoro (24:45):
A human endeavour in the end, still. All right. So we've gone into the nitty-gritty of the PV process and looked at specific parts of the cycle. Now, if we take a step back, will AI change the PV process as we know it? Is there a chance that, as we get better, for example, at accessing and analysing other types of data – and I know that is already being done, like real-world data to support spontaneous reporting data – the PV cycle as we know it could become obsolete? Could spontaneous reporting become obsolete? What is in store as the technology evolves?

Niklas Norén (25:26):
So I certainly hope that there will be transformation, that it won't remain the same. I mean, I think it would be a missed opportunity if we just run things like we've done and just tweak something. So I think it's appropriate to think about the bigger picture and ask, should we do some things completely differently? I think sometimes we get very enthusiastic and we think we can just let AI loose, especially with these more capable generative AI methods. That, I think, is probably not the most effective way to go about this. If you take the human analogy and think about how we get humans to perform certain tasks, if we want them to be done with some consistency and some efficiency and reproducibility – well, we have processes to support that too, right? So I do think there's great opportunity, for example, to use generative AI to extract information more effectively from real-world data sources like electronic health records. But when you do that, research today shows, at least at this point, that you don't tend to get very good results if you just generically ask a generative AI method to extract information. You want to give it quite precise instructions, and the better you can break the task down and the better you can prompt it, the better results you will get. So I think we're still going to have to define, and be clear about, what we are looking for, what we are asking for, and what's relevant to the decision-making. Can we use generative AI to support that thinking process? I'm sure we can, but I do think there's going to have to be careful consideration of how best to do it. And I don't think we should wholesale dismiss everything that's done today. I think we just have to carefully think through which parts can be done in a different way while still getting the same value as we're getting today, or even better value. But to point to those specific areas is quite difficult, and we're going to have to explore and experiment and learn, and then course-adjust as we proceed.
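
To illustrate what "precise instructions" can look like in practice, here is a sketch contrasting a generic extraction prompt with one that specifies the exact fields and output format expected from a generative model. The prompt wording, field names, narrative, and example response are invented for illustration and do not represent a validated pharmacovigilance pipeline.

```python
import json

# Sketch: generic vs. precise prompting when asking a generative model to pull
# adverse event information out of a case narrative. All content is made up.

narrative = (
    "A 67-year-old woman started drug X on 3 May and developed a pruritic rash "
    "on 10 May; the drug was withdrawn and the rash resolved within a week."
)

generic_prompt = f"Extract information from this report:\n{narrative}"

precise_prompt = f"""You are assisting with pharmacovigilance case processing.
From the narrative below, return ONLY a JSON object with these fields:
- "suspected_drug": string or null
- "adverse_event": string or null
- "time_to_onset_days": integer or null
- "dechallenge_positive": true/false/null (did the event abate after withdrawal?)
Do not guess: use null when the narrative does not state the information.

Narrative:
{narrative}"""

# A model call would go here (any chat-completion API). For this sketch, a
# made-up response shows how a constrained output can be checked downstream.
example_response = (
    '{"suspected_drug": "drug X", "adverse_event": "pruritic rash", '
    '"time_to_onset_days": 7, "dechallenge_positive": true}'
)

record = json.loads(example_response)
expected_fields = {"suspected_drug", "adverse_event",
                   "time_to_onset_days", "dechallenge_positive"}
assert expected_fields <= record.keys(), "model output missing required fields"
print(record)
```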

Federica Santoro (27:32):
That was all for part one, but we'll be back soon with the second part of the episode, where we discuss the challenges of using AI in pharmacovigilance. If you'd like to know more about this topic, check out the episode's show notes for useful links. And if you'd like more pharmacovigilance stories, visit our new site Uppsala Reports at uppsalareports.org to stay up to date with news, research, and trends in the field. If you enjoy this podcast, subscribe to it in your favourite player so you won't miss an episode, and spread the word so other listeners can find us. Uppsala Monitoring Centre is on Facebook, LinkedIn, X, and Bluesky, and we'd love to hear from you. Send us comments or suggestions for the show, or send in questions for our guests next time we open up for that. And visit our website to learn how we promote safer use of medicines and vaccines for everyone everywhere. For Drug Safety Matters, I'm Federica Santoro. I'd like to thank our listener Mohammed for submitting questions, my colleague Fredrik Brounéus for production support, and of course you for tuning in. Till next time!