
November 21, 2025 · 29 mins

Far from a future add-on, artificial intelligence is already embedded in the drug safety cycle, from case processing to signal detection. Versatile generative AI models have expanded what is possible but also raised the stakes. How do we use them without losing trust, and where do we set the limits?

In this two-part episode, Niklas Norén, head of Research at Uppsala Monitoring Centre, unpacks how artificial intelligence can add value to pharmacovigilance and where it should – or shouldn’t – go next.

Tune in to find out:

  • How to keep up with rapid developments in AI technology
  • Why model and performance transparency both matter
  • How to protect sensitive patient data when using AI

Want to know more?

Listen to the first part of the interview here.

The CIOMS Working Group XIV published its recommendations for the use of AI in pharmacovigilance in December 2025.

Earlier this year, the World Health Organization issued guidance on large multi-modal models – a type of generative AI – when used in healthcare.

Niklas has spoken extensively on the potential and risks of AI in pharmacovigilance, including in this presentation at the University of Verona and in this Uppsala Reports article.

Other recent UMC publications cited in the interview or relevant to the topic include:

For more on the ‘black box’ issue and maintaining trust in AI, revisit this interview with GSK’s Michael Glaser from the Drug Safety Matters archive.


Join the conversation on social media
Follow us on Facebook, LinkedIn, X, or Bluesky and share your thoughts about the show with the hashtag #DrugSafetyMatters.

Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!

About UMC
Read more about Uppsala Monitoring Centre and how we promote safer use of medicines and vaccines for everyone everywhere.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Federica Santoro (00:15):
Whether we realise it or not, artificial intelligence has already transformed our lives. Generative AI technologies rose to popularity just a few years ago, but they have already revolutionised the way we write, search for information, or interact online. In fact, it is hard to think of an industry that won't be

(00:38):
transformed by AI in the next several years. And that includes pharmacovigilance. My name is Federica Santoro, and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety. Joining me on this special two-part episode is Niklas

(01:02):
Norén, statistician, data scientist, and head of Research at Uppsala Monitoring Centre. If you haven't listened to the first part of the interview, I suggest you start there so you can get the most out of the discussions. In this second part, you'll hear about the challenges of using artificial intelligence in pharmacovigilance.

(01:24):
From problems of transparency and data bias, to how to keep up with a technology that moves so fast. I hope you enjoy listening.

(01:50):
Well, I'd like to move on now to the challenges of the use of AI in PV, and there's lots to say here. So, if we start with the transparency issue: one of the most fascinating, I think, but also very problematic aspects of modern AI technology is that it is so complex that it can be difficult, if not impossible, for humans – even people who are

(02:13):
in the field, right? – it can be difficult for them to understand exactly how they work. This is often referred to as the 'black box' issue, which means input goes in, output comes out, but I don't really know what's happening inside the machine and how that transformation occurs.
We covered artificial intelligence and

(02:34):
pharmacovigilance in a recent episode of the podcast, where Michael Glaser from GSK suggested we actually stop trying to look into the black box altogether. His suggestion is that we focus on assessing the inputs and the outputs as best we can. But I couldn't help noticing one of the guiding principles in

(02:57):
the CIOMS guidelines is transparency. So, is this suggestion by Michael at odds with our desire for transparency? If I can't understand it, how can I be honest about how AI works?

Niklas Norén (03:10):
So I agree that we have to think about transparency on at least two levels. There's transparency related to the models, and there's transparency related to the performance. And I think of them as quite independent, but maybe if you have less in one, you may have to compensate by having more in the other. And explainability is not the only form of transparency

(03:33):
regarding the models. It's important, it can be valuable, but it's clear we don't always require it. All of us now use methods where we don't understand how they work. We cannot understand why this generative AI model came up with this particular response. There's no way any of us can know that, right? And we still accept it. But there are other forms of transparency related to a

(03:56):
model like that which can still be relevant. We can understand: okay, what kind of model is this? Whose model is it, where was it trained, what kind of data has it been used for? If I'm using it for a specific task, I need to know how I'm meant to be interacting with it. And those considerations are important. But then you have the performance transparency, which

(04:19):
is the other side of the coin. And that's, I think, what's really important, especially when you don't have full transparency. So here you ask questions like: what kind of errors do we get? How often do we get errors if we do this? And not only that, but of course: on which specific data have you tested this? How big was the data? How diverse is the data?

(04:40):
If I have a specific use case in mind, I have to think about how well aligned that use case is with the data it's been tested on. If it's not very well aligned, maybe I have to make sure I get another test, ideally on my own data, or on something that's much more closely related. So we need to look at errors: how often do they happen, but also qualitatively see examples of

(05:02):
the errors that are made, and both types of errors. What kind of things that I want to be looking at are missed? What things are highlighted and presented to me that I don't want to look at? But I also want to see when I am getting things right: what kind of successes do we see? Because maybe it's only solving the very simple problems, and

(05:24):
that I can get from a performance evaluation. So I do think that performance evaluation is the centrepiece. That's what we always need to do. Because even if you have model transparency – maybe you have a very simple method and you can understand exactly how it works – you don't know how well it works. It may look like it's going to be quite sensible, but then you look at it on a test set where you see these are

(05:46):
the positive controls, the items I want to identify, and it actually doesn't do it. You need to have that there, too.

Federica Santoro (05:54):
So an extremely critical evaluation of
the data that goes in, the data that goes out.

Niklas Norén (06:00):
Yeah.
And I think this skill matters – maybe not doing your own performance evaluations, but asking the right questions about them – because it's very easy for somebody to charm you: to present you with results and say, I've run this, I've had this great accuracy, let's say. Well, accuracy just says what proportion of classifications are correct.

(06:21):
But if I'm doing something where almost everything is a negative, like in duplicate detection, I can do something dumb, like say nothing is a duplicate. I'm going to get massively good accuracy, right? That says nothing. So that's an obvious one. But there are also other, more tricky things when we look for the low-prevalence targets that we are often focusing on. Often what you have to do in your performance evaluation is

(06:44):
you have to change your test set a little bit, and you have to make sure you have enough of what you're interested in in there, but then it's no longer like reality. And that means that the performance results you see on that test set may not transfer – or will not transfer – to the real world, and you can easily be fooled by that as well. So I think ultimately it's always good to run a pilot study

(07:04):
in your own use case and see what kind of performance I can really expect. But I think this is where we'll see a lot of disappointment in the future, because people will be promised something and they will not get it.
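
To make the accuracy pitfall concrete: on a duplicate-detection test set where only a small fraction of report pairs are true duplicates, a classifier that flags nothing still scores high accuracy while missing every positive control. The sketch below is purely illustrative – the figures are invented and do not describe any UMC evaluation.

```python
# Minimal sketch: why raw accuracy is misleading for low-prevalence targets
# such as duplicate detection. All figures are invented for illustration.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1,000 report pairs, of which only 20 are true duplicates (positive controls).
y_true = [1] * 20 + [0] * 980

# A "dumb" classifier that declares nothing to be a duplicate.
y_pred = [0] * 1000

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.98
print("recall   :", recall_score(y_true, y_pred, zero_division=0))     # 0.0, every duplicate missed
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
```

An accuracy of 0.98 sounds impressive, but recall on the positive controls reveals that the method finds nothing of what you actually care about.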

Federica Santoro (07:15):
Get it, right.
On to the next challenge.
Not only are AI models difficult to understand, but they also change extremely fast. We've all noticed it in the last few years, right? Even just trying to follow the developments of generative AI in the news has been hard, and it can be really hard for people who

(07:38):
aren't trained in data or computer science to just keep up with the technological issues and advancements.
And on that topic, another one of our listeners, Harika, sent in a question. She asks: with AI models evolving so quickly, how can regulators ensure ongoing transparency and oversight in

(07:58):
pharmacovigilance?
And she cites the 'black box' issue again. Are there plans within the CIOMS guidelines to help address these challenges? And do you have any practical advice for professionals like her who want to support the safe and ethical use of AI in
pharmacovigilance?

Niklas Norén (08:16):
Yes.
So I think this comes back to the risk-based approach. We have to think about what our use case is and what kind of risk we are able to accept. And I think this is specifically relevant to the use of generative AI in somebody else's model, maybe a public model that is changing. You don't even know exactly when it's being retrained, etc.

(08:36):
In some use cases that may be acceptable. Maybe you say: that's fine, this is a component of my system that has this kind of risk, and I've looked at it and I've assessed it. Maybe you have to put some measures in place to make sure that your performance is not changing as the model is updated, or you have to look for ways to freeze the model and

(08:58):
say: I actually don't want a model that is updated out of my control. I want to have it run in a way where I can control when it's being updated or not, if you have a very mission-critical system.
I also think we should not go overboard – it depends on how you use it. We should remember that for many of these tasks, we rely on humans today. And I guess it's then sensible to look at what our

(09:19):
onboarding process is: do we repeat the whole quality control when we have new staff coming on board, or do we have other ways to ensure that we're getting the same quality? Because in some respects, the way we use some of these large language models is not too far off from how we may have used a human in the past, where we do accept some variability.
So clearly, you want to have the right quality control in

(09:42):
place, and you want to have the right risk-based approach to your deployment of the AI. But the answer, I think, will vary. And then I will say: we now focus a lot on these generative AI methods, but a lot of the AI methods that are still going to be in use are going to be of different flavours. They will not change as...

Federica Santoro (10:01):
...as quickly...

Niklas Norén (10:02):
...as quickly or as opaquely.
We'll have more control.
But I also think we can learn more. We have certainly deployed AI methods in the past that were less complex, but maybe we should have been more mature in how we thought about how they changed over time. Maybe the model is updated, but even if the model isn't updated, the data can change – you can get data drift – so maybe performance isn't the same.
So I think this goes back to the question in the beginning of

(10:25):
why I think it's good to apply some of these principles, or all of these principles, not just to the most recent class of methods, but more broadly to our use of computational methods to support our work.
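
As one concrete way to operationalise the kind of safeguard described here – keeping performance under watch when a model or the underlying data may change – the sketch below re-scores a frozen reference test set and raises an alert if recall slips beyond an agreed tolerance. The threshold values and the `classify_case` function are hypothetical placeholders, not part of any CIOMS or UMC procedure.

```python
# Sketch: periodic re-scoring of a frozen reference test set to catch silent
# performance changes when an external model is retrained or the data drifts.
# Thresholds and the classify_case function are hypothetical placeholders.
from sklearn.metrics import precision_score, recall_score

BASELINE_RECALL = 0.85   # recall agreed at deployment time (assumed value)
TOLERANCE = 0.05         # acceptable drop before escalating (assumed value)

def evaluate(classify_case, reference_texts, reference_labels):
    """Re-score the frozen reference set with the current model version."""
    predictions = [classify_case(text) for text in reference_texts]
    return {
        "recall": recall_score(reference_labels, predictions, zero_division=0),
        "precision": precision_score(reference_labels, predictions, zero_division=0),
    }

def check_for_drift(metrics):
    """Flag when performance has slipped beyond the agreed tolerance."""
    if metrics["recall"] < BASELINE_RECALL - TOLERANCE:
        raise RuntimeError(
            f"Recall {metrics['recall']:.2f} fell below the agreed floor; "
            "pause the pipeline and investigate before continuing."
        )
```

The same check works whether the cause is a retrained third-party model or drift in the incoming reports; the point is simply that the evaluation is repeated on a fixed yardstick rather than assumed to hold forever.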

Federica Santoro (10:37):
And a related question, given how fast and
dynamic the AI field is, how will you ensure that the CIOMS guidelines themselves are kept up to date? How can a document like that remain relevant if the technology evolves so quickly?

Niklas Norén (10:54):
So the simple answer is that the document will
not be updated.
Once the working group completes its mission and publishes the report, it will be published in that form and will not be updated. That is at least how I understand the CIOMS processes. Now, we've been quite mindful of future-proofing as far as we

(11:16):
can, trying not to say things we think may not hold in the future. It also means that we stay clear of very detailed specifications for exactly how to do things. We try not to recommend specific technologies, etc. So I hope the principles will stand the test of time, but of course, many things will change. I think luckily, in a way, the release of the

(11:39):
generative AI methods, starting with ChatGPT, happened during the course of the project. Of course, that was a disruption, but it was very good it happened during the project and not after, because I think it shaped a lot of what we thought, even related to explainability. If you'd asked me before the working group started: do you need explainability? I would have said, well, you should always have

(11:59):
explainability for many of the tasks – not for all of them, but for some tasks you do want explainability. I would have said that for something like de-identification, no, you don't need it. We already had methods based on deep neural networks for that at the time, and I would have said, I don't need to understand how it gets to the right decision, I just need to understand its performance. But I would have said that for something like signal detection,

(12:20):
I do think you need explainability, because you have a very ambiguous task and you have a human who needs to interpret that output. I'm still leaning towards that, but I have a more nuanced sense now: maybe, if you can show that you can use the generative AI method here and do it in a good way, you can demonstrate its performance and you can communicate this in a

(12:42):
good way to the end users, and you've said, well, I don't understand exactly how it does it, but it tends to get things right, and we can see historically that the case series it's been pointing to have had a high proportion of what have become signals once the humans have completed their assessments – then I think maybe we don't need it.

Federica Santoro (13:00):
Another issue that is on everyone's mind in
the pharmacovigilance field is data privacy. Now, healthcare data is, by its own nature, full of highly sensitive information, right? There are patients' names, ages, genders, medical conditions. And so one of the most widespread concerns when it

(13:21):
comes to AI technology is that it could easily infringe patients' privacy. Now, first of all, I'm curious to understand how those breaches may happen, and then, what can we do to protect such sensitive data?

Niklas Norén (13:38):
So I think the most basic consideration here is
that if you start wanting to develop AI methods where you use personal data to develop them, then of course in that process you open up the personal data to additional individuals: the data scientists and the machine-learning people who will be working with it. So now you're exposing some data, and of course you need to follow the proper governance and make

(13:59):
sure you do this in the right way.
But that is, of course, an additional exposure. We need to do a cost–benefit analysis and ask: is this worthwhile? But then I think there are some specific risks when it comes to data privacy connected to the generative AI methods in particular, and they're related to prompts, first of all. So you have to be very mindful, if you're...

(14:22):
including any personal data in a prompt. You have to think about the communication channel: can you be eavesdropped on? Can somebody intercept the transmission and actually get hold of it? And then, if it's somebody else's model and they're processing the data for you, you have to look at the agreements to make sure they're not using that data for their own purposes, because then

(14:45):
you're transmitting the personal data. And I think the biggest risk is that people are naive. This is not just about personal data; it can also be business-critical data. You don't realise that when you send something in a prompt, you're sharing that information. So that, I think, is a new risk we need to be mindful of. Theoretically, as well, we have to think about the risk that

(15:05):
the generative AI models that are trained on personal data could somehow reproduce that personal data. Could somebody prompt a generative AI model trained on personal data so that they get personal data back? I don't know if this is going to be a problem in practice, but in theory, I think it is, now that you have models

(15:27):
that don't just generate predefined categories but actually generate text. So, in theory at least, I think that is a new risk as well.
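
One practical mitigation for the prompt-related risks discussed above is to strip or mask obvious personal identifiers locally before any text leaves your environment. The sketch below is a toy illustration with a few regular-expression patterns; it is not a validated de-identification method, and real pharmacovigilance pipelines would need far more rigorous, tested tooling.

```python
# Sketch of one mitigation for the prompt-related risk discussed above:
# mask obvious personal identifiers locally before any text is sent to an
# externally hosted model. This is a toy illustration, not a validated
# de-identification method, and the patterns below are far from exhaustive.
import re

PATTERNS = {
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def mask_identifiers(text: str) -> str:
    """Replace matched identifiers with placeholder tags before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

narrative = "Patient born 1956-03-14, reachable at jane.doe@example.org, reported dizziness."
print(mask_identifiers(narrative))
# Patient born [DATE], reachable at [EMAIL], reported dizziness.
```

Even with such masking in place, the points raised in the interview still apply: the communication channel and the data-processing agreements with the model provider need to be assessed separately.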

Federica Santoro (15:35):
Lots to think about.
And we'll get into this a little bit later, but it just strikes me that we all have to get a lot more data savvy, perhaps, than we are at the moment. Back to that in a little bit. But first, I had a question about bias, which is sort of intrinsic in pharmacovigilance data. Correct me if I'm wrong, but that's because, I guess, there's

(15:58):
lots of information on adverse drug reactions that never reaches pharmacovigilance professionals because underreporting is so widespread. Then there's the issue of how certain groups of people will report more than others, so certain subpopulations will be more represented than others in our data. And also, in general, reporting trends will vary a lot

(16:19):
from country to country due to a bunch of different factors. So if there are already these big gaps in the data, which can severely bias the conclusions we draw, is there a risk AI will
make it worse?
Could it make it better?

Niklas Norén (16:34):
I hope it will make it better.
I mean, AI is a tool, and I guess it's a question of how we use that tool. Of course, if somebody uses AI to create fake reports, it's going to make things worse. But I hope we'll use AI to improve things, and there are great opportunities there. Take the idea of co-piloting: right now, if I'm reporting an adverse event, I'm maybe filling

(16:56):
in a form, a fixed form. Maybe I could be co-piloted by an AI prompting me for the right information. And depending on what I enter, it could quality-check and say: that sounds like a strange dose, is that correct? That's a weird dose unit. Oh, you mentioned this kind of adverse event – in this context, it's good to know if you had an infection. So I would hope that we could get a lot more high-quality

(17:18):
reports from the people who choose to report. And then, can we also make sure that we have a more representative sample somehow? I think there are opportunities there as well, especially in terms of embedding it in other information systems and extracting information from there. So I think there are plenty of opportunities, and I hope as a community we'll move generally for the better,

(17:39):
so improving things overall.
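
As a small illustration of the co-piloting idea described above – a reporting form that questions implausible entries as they are typed – the sketch below checks a daily dose against a plausibility range. The drugs, ranges, and wording are invented for illustration only and carry no clinical meaning.

```python
# Sketch of the kind of co-pilot quality check described above: a rule that
# questions implausible dose entries as the reporter fills in the form.
# Drug names, ranges, and units are invented for illustration only.
from typing import Optional

PLAUSIBLE_DAILY_DOSE_MG = {
    "paracetamol": (325, 4000),   # hypothetical plausibility range
    "ibuprofen": (200, 3200),     # hypothetical plausibility range
}

def check_dose(drug: str, dose_mg: float) -> Optional[str]:
    """Return a follow-up question if the entered daily dose looks unusual."""
    bounds = PLAUSIBLE_DAILY_DOSE_MG.get(drug.lower())
    if bounds is None:
        return None  # no rule for this drug, so say nothing
    low, high = bounds
    if not low <= dose_mg <= high:
        return (f"{dose_mg} mg/day is outside the usual range for {drug} "
                f"({low}-{high} mg/day). Is the dose or the unit correct?")
    return None

# A dose entered in the wrong unit, or multiplied by mistake, triggers a question:
print(check_dose("Paracetamol", 40000))
```

A generative model could, of course, phrase such follow-up questions more flexibly; the rule-based version simply shows the kind of check being imagined.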

Federica Santoro (17:41):
Absolutely.
On to a somewhat philosophical question. When faced with new technology, especially something as revolutionary as artificial intelligence, humans tend to react in one of two extreme ways. Obviously, I'm taking the extremes and not the middle of the spectrum to be a little more provocative, but let's look at

(18:04):
the two ends of the spectrum.
So, some people can decide to reject it altogether, even demonise the new technology: I don't know what this is, I don't want to learn about it, it's not for me. Others can decide to hype it instead: it's the solution to all their problems. What would you advise fellow pharmacovigilance colleagues who

(18:25):
perhaps have taken one or the other stance about the use of AI in PV?

Niklas Norén (18:30):
Well, I guess the first question is to understand
a little bit the motivation for the stance. So if somebody says, I don't believe in this, I don't want to do it, it's of course good to know why they don't want to engage with it. But if the reason were that they don't believe it can work, that it cannot bring value, then I would always start there: it's up to us to demonstrate that we can get value from it.

(18:53):
And for me, a good use case – a success story where you show something that is hopefully meaningful to that person – I think that would be convincing, I would hope. Now, if their reluctance is on a different level, maybe they say, I'm reluctant because I think this is wrong, I don't think we should have computers do this.

(19:14):
Then it's a different argument, right? But if the reason you're reluctant is that you don't think it will bring value, then demonstrating how it could bring value is one way forward. And similarly, if you have somebody who's very enthusiastic and thinks this will solve everything, then I think you just have to show some ways that it can go wrong. And I think this is the responsibility we have as a

(19:36):
scientific community: to also demonstrate some of these pitfalls. We do something similar with disproportionality analysis, which we know is being terribly overused in a naive way and for the wrong reasons, where we're now going to try to publish a paper calling out some of the pitfalls and demonstrating by example where you can go wrong.

(19:57):
I think we need something similar for AI. I can say to you now that you need to look at the performance results and be wary that somebody can trick you with a precision or an accuracy figure. But it's better to demonstrate that by showing: here's what it can look like when it's being presented to you, here's what that actually means, and why you'd be amiss if you just trust that number, let's say.

Federica Santoro (20:18):
Yeah, and obviously one of the issues we
all have experienced, if we've used the generative AI tools so far, is the issue of hallucination: how these models will never tell you that they don't have an answer, but will basically make up an answer for you, just to provide a response to your query. And lots of examples of that have been published in many

(20:40):
different places and many different fields. So I think that's probably one of the issues we are most aware of, right, as a community. But of course, there are all these other issues we've spoken about – the transparency and explainability and biases and so on – that we have to be aware of.

Niklas Norén (20:57):
And another problem, I think, with the
generative AI is that superficially it can look really good. This has been my experience. Sometimes I ask it something and think, oh, this looks great. And then I start to actually parse the text and see what it is actually saying, and it's like, well, this doesn't really make sense. But I was charmed by the...
...tone?
...the superficial way it presented it.

(21:17):
And I think this is a risk also with explainability. You could ask an LLM, well, explain why this is right, and maybe it's the wrong sort of output to begin with, but it has such a convincing explanation for why it's right that we could get tricked as humans. So I think this is something to be aware of in the future as well: how do we avoid being misled by the more advanced methods?

Federica Santoro (21:39):
Yeah. Isn't this the same with human beings,
right?
You can listen to someone give a really confident talk and be completely sold, even though, if you look at the content of their speech in more detail, you realise, well, actually, no, I don't agree with this. So I think a problem with AI models is also that they present the information in such a confident and often empathic tone, right, that it can convince you.

(22:01):
So perhaps we should also look at it from that psychological point of view and see where we can fall into that trap of over-believing what's being presented to us. But whether you fear it or hype it, one thing is clear, and that's that as more automation and artificial intelligence find their way into pharmacovigilance, our

(22:23):
traditional professional roles will transform. There's no doubt about this. I don't know if you feel the same about your position, but I certainly feel like that about mine.
There are already dozens of artificial intelligence tools out there to create podcasts automatically. Just the other day, one of my colleagues told me about another

(22:44):
pharmacovigilance podcast that is entirely AI-generated. So, we're laughing about this, but really the serious question here is: what skills must we learn to remain competitive on the job market?

Niklas Norén (22:59):
So, in addition to the need that I mentioned
before, which is this ability to, on some level, critically appraise artificial intelligence solutions and appreciate whether they will bring value or not, I think generally we have a responsibility and a need to engage with the technology and really see where it can bring value to us. So, as an example, for me, writing scientific papers:

(23:22):
clearly I need to see to what extent I can benefit from some of the capabilities of generative AI in that context.
And we've had some interesting experiences with this in the past year, some where we've been, I think, underwhelmed by the capabilities. One example of that would be asking a generative AI to shorten a text or write it in a different way.

(23:44):
Oftentimes it will lose some specific nuance, and I think it's because we have a very precise way of writing: it matters whether you say adverse event or adverse effect or adverse drug reaction. So often something tends to be lost in translation. I also have this experience I mentioned before where, when I look at it first, it may look quite good, and then, no, actually, this is not correct. So it didn't work so well for that yet. And that's not to say that it couldn't work.

(24:05):
Maybe I just don't know how to prompt it in a good way, or the models haven't matured there yet. What really baffled me was when we asked it to do something which I thought was much more difficult, and I didn't think it would do a good job at it. We actually fed it a whole manuscript. It was our new vigiMatch paper, which is out as a preprint already. And this was at an earlier stage during development, but it was

(24:27):
quite mature, and we had senior scientists working on it. We fed it the whole manuscript and just asked it to look for inconsistencies or weak points. And it came back and said: well, when you describe the features of the new vigiMatch, you do a good job of defining them, but you don't actually motivate why you selected these

(24:47):
or whether you chose not to include some other ones. And it was spot on – it was something I would have expected from good scientists reading it and giving that extra input, or maybe from peer review. And we got that at that stage and were able to address it before finalising the paper. That is not something I'd expected, but I could see a huge value in an extra pair of eyes on that level.

Federica Santoro (25:08):
That's very interesting, and it really goes
to show once again how we just need to play around with it, figure out what the best use cases will be, and see where it will add value to our workflow. There will be a lot of testing involved, but it's probably going to be valuable in the long run. One final question, and thank you a lot for your time.

(25:30):
This has been such an interesting discussion. I'll wrap up with a question about the guidelines and what's happening next. The CIOMS guidelines have been out for a month for public consultation, and that period ended just a few days ago. We're recording this on the 11th of June. So what's next? When will the guidelines be published, first of all?

(25:52):
And what do you hope...
what do you and the working group hope will change once they're out there in the world?

Niklas Norén (26:00):
So the next step is that the input gathered so
far will be collected, and we have a meeting with the working group at the end of the month where we'll process that input. I don't yet know how many comments have come in. But based on that, we'll have a better sense of how much work is ahead of us, how much feedback we've received, and how much effort is required to address those comments.

(26:23):
And that will determine the length of the next phase. I'm hoping we'll be able to do it in the autumn, but with the caveat that I haven't seen how much feedback has been received. Then we will finalise the report and make it available. My hope for the future is that this will be of value to the community, that we can use it. It's clearly not something you just take and

(26:43):
apply.
Each organisation, ours included, will have to think about how we apply this, how we implement it. But I hope it can be a help and guidance for us going forward, in knowing how to do this in a good and consistent way, and hopefully then avoid some of the possible missteps and enable us to get the best possible value.

(27:03):
It's certainly been a great learning experience for me being part of this working group. So many knowledgeable and wise people coming together, thinking about this and really pushing our own thinking on the topic. And, I will say, we've continued to do it with the podcast today – I had to reflect again and think about things again. So, really, thank you. It's my first time here, and it's been a pleasure to participate.

Federica Santoro (27:24):
It's been wonderful to have this
conversation.
So, thank you so much for taking the time to come to the studio.

Niklas Norén (27:30):
Thank you.

Federica Santoro (27:40):
That's all for now, but we'll be back soon
with more conversations on medicines safety. If you'd like to know more about artificial intelligence and pharmacovigilance, check out the episode show notes for useful links. And if you'd like more pharmacovigilance stories, visit our news site Uppsala Reports at uppsalareports.org to stay up

(28:02):
to date with news, research, and trends in the field. If you enjoy this podcast, subscribe to it in your favourite player so you won't miss an episode, and spread the word so other listeners can find us. Uppsala Monitoring Centre is on Facebook, LinkedIn, X, and Bluesky, and we'd love to hear from you.

(28:23):
Send us comments or suggestions for the show, or send in questions for our guests next time we open up for that. And visit our website to learn how we promote safer use of medicines and vaccines for everyone everywhere. For Drug Safety Matters, I'm Federica Santoro. I'd like to thank our listener Harika for submitting questions,

(28:46):
my colleague Fredrik Brounéus for production support, and of course you for tuning in.
Till next time!