Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
Medical Breakthroughs: The Research Journey.
Hello and welcome.
I'm your host, Caroline Burden, and you are about to join me on a journey into the fascinating world of medical breakthroughs. But not just any breakthroughs.
(00:30):
We are diving into the personal stories, the setbacks, the moments that you will not believe, behind the cutting-edge research happening right here at Leeds Teaching Hospitals NHS Trust.
Coming up in this episode: the main thing is the mammogram has to be diagnostic, so that means we can read it to make a diagnosis. And in cases where things have gone not the way
(00:51):
that was planned and the images aren't really of great quality, the AI algorithm can flag in real time: you may want to consider repeating an image.
Yeah, we're talking about using AI to check your breasts.
Joining me for this conversation is Dr. Nisha Sharma. She's an expert in breast screening, in radiology, and in AI.
(01:12):
She's a consultant radiologist and the director of the breast screening program for Leeds and Wakefield.
When we do a mammogram, when we're looking at the mammogram, we look at what we call breast density, and breast density is really the amount of whiteness on the image that we're looking at, so the picture. And the more whiteness that you have, the more dense the
(01:33):
mammogram is, and that reduces your ability to see cancers on the mammogram, because the cancers will have the same appearance as the breast density.
So it's almost like a tree with full foliage. If you've got a tree covered in leaves, you're not gonna see the bird's nest.
(01:55):
If you have a tree that's bare, you will see the bird's nest easily.
So there's a tiny, tiny proportion of women that have fairly dense breasts, and it's recognized that in that group of women, it can sometimes be difficult to identify a cancer.
But in most cases, mammograms are good at picking up cancer,
(02:18):
and that's the reason that we screen.
Now, how does it work at the moment when somebody has a mammogram? How is it reviewed? How does it go from a photo, if you like, an image, into data that is actually then looked at?
(02:39):
So when a lady attends for a mammogram, we take the pictures, and if they've got both breasts, we would do the standard two pictures of each breast. This then is viewed in our PACS, the picture archiving and communication system, which is the system that we use to display the pictures and be able to look at them in detail.
(02:59):
And we have special monitors in breast imaging, which are five megapixels. So they're high resolution, and that means we can pick up the finer details.
So then what we do is, we'll look at each image and we compare. If the lady's had previous mammograms, we compare with the previous mammograms and we decide: is the mammogram normal?
(03:22):
Or is it abnormal?
And if it's normal, we tick a box, say normal, and the lady will get a letter saying everything's fine. And if we think it's abnormal, then the lady's likely to be recalled. But the good thing about the breast screening program is that every single mammogram is read by at least two people.
So two people will read the mammogram, and if they think it's
(03:44):
normal, the lady will get a letter. If two people read the mammogram and they think it's abnormal, or they disagree with one another, then there'll be a group of readers that will then look at the mammogram again and make the final decision: is it normal or not? And we will decide if the lady needs to come back. So there's many pairs of eyes looking at the images.
(04:06):
And how accurate are the images?
Well, that's a difficult question. So the images, the information is there, and obviously, in some women, when they subsequently develop a cancer, which is what we call an interval cancer, there was nothing to see on the original mammogram. So it's something new that's developed. But, like everything,
(04:30):
it's open to interpretation. So when you're reading a mammogram, with what you see: is it normal? Is it innocent? Or is it an area of concern, or is it definitely a cancer?
So when you're reading a mammogram, you have human interpretation, and
(04:51):
that can vary from person to person. This is why you can have readers that disagree with one another, because it's based on our interpretation of what we see on the image.
So talk me through, then, this new piece of research that is coming out of Leeds.
So in Leeds, we've done two projects related to artificial
(05:14):
intelligence, so using AI to assist us in the way that we work.
So the first one, which I think is really exciting, is looking at the quality of the mammogram when it's produced. Now, the quality is really, really important, because if you have a really good mammogram, that makes our job as a reader a lot easier
(05:35):
to be able to read the mammogram.
But if you've got a mammogram where the quality isn't there, and it's not anybody's fault: it could be that the lady herself is in pain, or she's anxious, or she's got disabilities that make her less mobile when we're trying to position the breast in the mammogram. And it might be that sometimes when we take the image,
(05:58):
it might be blurred, or there might be bits of the breast missing, because technically it's difficult to get all the breast on, or there might be folds or creases. So the quality of the image might not be a hundred percent.
And the process currently, when we try to assess quality, is often done downstream, well after the woman has attended her appointment.
(06:25):
But with this new software, we're able to look at the image straight away and be able to say: is this a good mammogram or not?
And we're currently in the process of training all our radiographers to be able to access their own information regarding all the mammograms
(06:45):
that they've taken and the quality of the mammograms that they've taken. And what this allows us, for the very first time, is to have all this information that we've not had before, to provide education and training particular to each individual to
(07:07):
help them improve the quality of the mammograms that they're taking.
So we're at the stage where we've done the onboarding, and now what we're going to do is give them time to access the information, digest it, and then we can start using that information to create bespoke education programs for each of the individual radiographers.
(07:29):
And how does AI come into that?
Well, it's the AI that's reading the mammogram to tell us about the positioning errors. So it's automatically assessing the image, and it's got a list of positioning errors that it can look at, and it tells us whether they're acceptable or not. And that's saving the radiographer from having to do that job.
(07:51):
And I imagine as well, it must make a huge difference when it comes down to the person who's then reading those mammograms; as you were saying, currently that was happening further down the line. It must make the whole process a lot quicker.
It will do. So we haven't fully embedded this yet, but it will make the process much better.
(08:12):
So what can then happen is that there are several ways that we can work on this. So one is we understand each individual's performance, we understand their strengths and weaknesses, and we can continually support, educate, and train so that we can drive up quality within the department as a whole.
But the other area that we can work on is when a lady attends for her mammogram.
(08:37):
Now, the main thing is the mammogram has to be diagnostic. That means we can read it to make a diagnosis. And in cases where things have gone not the way that was planned,
(09:00):
and the images aren't really of great quality, the AI algorithm can flag in real time that we may want to consider repeating an image. And we could then do that at the same time, which would stop women from having to come back for a second visit to get that image repeated. And what we need to do is a piece of work to identify which positioning errors are important, that do need repeating, and which positioning errors are okay to accept.
(09:22):
And that's work that we will do in Leeds, but also we'll gain that experience over time as we're using the algorithm.
And you said that the use of AI within this is going to be twofold. So that is one side, you know, of the project. What's the second?
So the second project that we're doing is where we had actually
(09:45):
done a prospective trial looking at AI: so, the artificial intelligence's ability to maybe replace one of the human readers when reading the mammogram. At the moment, this is not allowed to happen within the breast screening program and can only be in the context of research.
(10:07):
So we did a project, which was funded by government through the NHSX AI Award, working with a company, and the mammograms were still being read by the two human readers, but we also had an AI algorithm read the mammogram as well.
And really what we wanted to do with this study was, first of all, to see: can we
(10:31):
integrate the AI opinion into our system? So does the technology flow, and make it easy for us to be able to input that data? But also we wanted to be able to see how the algorithm was operating, and the impact that it was having on us as readers in terms of our workload.
(10:55):
And so that trial has been completed. We've yet to publish our findings; we're working on the data collection and analysis. But it's been a really interesting exercise, because you've now got a third reader, so you've got three opinions, and when they disagree,
(11:19):
that means you've got more images that need to be reviewed by a group of readers to see whether it's normal or abnormal. So that in itself is interesting: it potentially could have saved some workload,
(11:39):
but perhaps not as much as we thought. Because of that, there's now a multicenter trial, which is being led by Professor Fiona Gilbert and Professor Sian Taylor-Phillips from Cambridge and the University
(12:02):
of Warwick: a multicenter, multivendor trial in the UK.
And what I mean by that is we're gonna assess several different breast AI algorithms, and we're making sure that we're using different mammography machines, so that it's deployable and generalizable:
(12:25):
so it's not down to one machine and one algorithm.
And that's really exciting, because we're also looking at: should we be replacing the second human reader, or should we use it differently? So if you've got an algorithm that's very good at saying things are normal,
(12:45):
could we just get the algorithm to read all the mammograms, and the ones it says are normal are read by one human reader, and if they agree, that's done? And then the ones that it flags would be read by two human readers. So you've still got human oversight, and we will make that decision.
(13:07):
And so that's one way of looking at it. But the other way is the traditional method, where instead of two human readers, you have one human reader and the AI.
And has the AI been accurate? Or is it too early to tell; there's not been enough time to look through the data?
I think
(13:29):
the different companies have different algorithms, and they work differently. So I think the algorithm that we had used was generating a lot of extra reads, because we were disagreeing with it. The algorithm was picking up lots of things that didn't need to be picked
(13:50):
up. But I have worked with other AI algorithms where their accuracy is much better, and I think the future for breast screening will be working with artificial intelligence.
I think we have to embrace the technology, because there is a national workforce shortage
(14:12):
of radiologists, and in particular breast radiologists. And in recognizing that, we have to look at the innovations that are coming our way. We are already using artificial intelligence in our day-to-day working, you know, in our daily lives. We are using it, whether we're aware of it or not, and it lends itself to healthcare.
(14:40):
So in healthcare, we can use artificial intelligence to create efficiencies, for sure. And certainly you can use it in a low-risk setting, which would be like the quality assessment of the images, or in administrative tasks. But when we're using it to report mammograms, for me, that's more of a high-risk task, and you want to make sure that the algorithm would
(15:01):
perform similarly to a doctor. Ideally, what you'd want is better performance, if possible.
Yeah.
But the algorithm will not be perfect. And it's actually interesting that people think that because we're using a computer, so to speak, it should be perfect.
(15:23):
But what you've got is that the computer has been trained, and it won't be a hundred percent perfect, because it may come across scenarios or situations it hasn't been trained for, and then it will compute what it thinks the right answer should be, and it might get it wrong.
(15:47):
So we have to recognize that artificial intelligence is not infallible, just as we as human beings are not infallible. We've got to understand that we all make mistakes, and the computer can also get it wrong. So we have to be mindful of that.
And that's the reason that we need this trial, because by doing
(16:08):
this trial, we are testing it in the real clinical environment. We've got safety measures in place. We then know what we're dealing with. We'll have the evidence to say it's safe. We will have the experience to know what we're looking for, and we can reassure
(16:31):
our women attending for screening that we're doing the absolute best for them, without dropping quality at all.
And I think that's one of the things that we've got to remember. I think people talk about artificial intelligence, you know, as if it's all
(16:52):
straightforward, and it's not.
No, no.
And I think that's an important point to remember: that we as clinicians, as doctors, recognize the importance of artificial intelligence. We absolutely do. And how it's going to transform the way that we work going forward.
(17:12):
It's not easy. We haven't been trained; this isn't our area of expertise. Our area of expertise is, you know, in healthcare, not in assessing and knowing how algorithms work. So we need to have that infrastructure and support to help and guide us.
(17:37):
When you deploy an algorithm, you've got to make sure it's compatible with the machines that you're working with. Has it been trained? Has it got a certificate to say you can use it on these devices? First of all, then, what happens if there's a glitch? You know, like when your computer has a glitch:
(17:58):
the mouse stops working. We drop the mouse on the table, as we do, and think that might get it to work again. What do you do if the algorithm has a glitch? How do you know it's had a glitch?
So we've got to have these processes in place as well, where, when we're relying on a computer to do some of our work, we also know when
(18:19):
it's having a malfunction or something has gone wrong, and we've got to be able to pick that up. So it's not only just about getting the algorithm in and training us to work with the algorithm; it's also monitoring the algorithm forever, because that algorithm is doing an important piece of work.
It's interesting too, I think, when people talk about, you know, using AI
(18:43):
more and more in everyday life, they talk about robots taking over our jobs. But actually, from what you are talking about, that isn't the case. It's about using people's expertise where it's needed. And if AI can do this one thing, then it frees you up to go and focus on
(19:04):
the other areas of healthcare that need that personal human approach?
Absolutely. I mean, I think what's key is that at the moment, artificial intelligence is new to us. So we're not talking about the artificial intelligence
(19:25):
taking over our jobs entirely. We're not there yet.
Absolutely.
So what we're looking at is working alongside the algorithm. The algorithm is there to assist us, make us more efficient, and support us in the decision-making process. And I think it's important that we understand that, and it's also what the public wants.
They don't want a robot to make an important decision without a human
(19:47):
being in the loop, because one of the things that we have is intuition. We have all the information available to hand; the algorithm won't have that. So that's really important. But maybe 10 years down the line, that might be different, because by then, in our working environment, we're using the algorithms on a day-to-day basis, and we've become comfortable with them.
(20:10):
We know how they work; we know what their strengths and weaknesses are. There may be some jobs that we can automate as a result of the AI algorithm that don't require a human being in the loop, but that has to come with time, experience, and exposure, so that the public can be confident. Because I think what's really important is that the public will have confidence if we have confidence as clinicians.
(20:35):
And at the moment, we don't have that a hundred percent. And I think if we can stand tall and say, this is working really well, we can automate these jobs, but we still need to be in the loop for these jobs, then the public will go: we trust you, we agree with you, that's fine. But at the moment, things are happening at pace.
(20:56):
There are too many factors to take into account, and we're not ready for that step just yet. Because we also have to figure out liability and accountability when things don't go well. And for me, as a trust, as an organization, if we're using an algorithm within our setting, then we're certainly liable and accountable.
(21:20):
But I think the government has to take accountability and liability too, because if the government is saying, at a national level, that we have to start using artificial intelligence within the NHS, it has to provide that support and framework for the different trusts to be able to deploy and deliver artificial intelligence within healthcare successfully,
(21:44):
and ensure that quality is in place.
With what you've seen so far, and with the tweaks, the changes, and the further trials that you are hoping will come, what do you think the future looks like in, say, 20 years' time in terms of breast cancer screening?
Oh, that's a really good question.
(22:05):
I think artificial intelligence will play an important role. I think we'll use it for administrative tasks. I think we'll use it for quality assessment. And I think we'll use it in reading the mammograms. But I think the type of service we deliver in 20 years' time might not be the
(22:26):
service that you're seeing at the moment.
So at the moment, it's population screening: all women aged 50 to 70 are invited every three years. But I think in 20 years' time, we might go to more risk-adapted screening, where actually we look at the risk for each individual woman, and we then decide those that need to be screened and the frequency they need to be screened, based
(22:50):
on their risk of developing breast cancer. And therefore, we might use different imaging modalities. So we're using mammography at the moment, but we might use additional tools, such as contrast-enhanced mammography or breast MRI, to supplement mammography in women with dense breasts.
So I think the future of screening will look
(23:12):
very different in 20 years' time. But I think, for sure, artificial intelligence will play a very, very important role, because they're already developing algorithms that can do risk assessments: so it can read a mammogram and tell you the likelihood of you developing cancer in the short term, within five years.
Wow.
(23:33):
And that's really, really exciting. There have been pieces of work done, and they've shown that this is actually better than some of the well-established risk models that we're using in the family history clinics. So I think that area is also really quite exciting.
So I think our role as healthcare professionals is that we
(23:55):
have to embrace the innovation and work with the companies, and help them to develop the algorithms to the standard that we need them to be, so that we can deliver good quality work. Safe work, where we're identifying the women with cancer and not recalling women for things that aren't cancer. But we can only do that if we work collaboratively.
(24:19):
And that's what excites me about artificial intelligence: I think the future regarding its use and application is really huge. But the thing that scares me the most about innovation is that when we have a new innovation that comes along, we try to force it into existing pathways.
(24:43):
So we make it fit into the way that we've been working for the last four or five decades, and then we go: it doesn't work really well, because it's not doing this, that, and the other.
But actually, we've got a disconnect. We've got a new way of reading images. We've got an algorithm that can look at things differently to the human being.
(25:06):
And then we're making it work the way we've worked for four decades, without any innovation. We need to start thinking outside the box, and we need to start thinking: if we've got this innovation and it works really well, how can we redesign a service that makes use of that skillset, but also the
(25:26):
skillset of the clinicians, and create a more efficient and successful service?
And it's that bit on the shop floor that is sometimes so lacking. And part of the reason is the headspace, because everyone's really, really busy in the NHS. The pressures that we're
(25:46):
under are phenomenal. We're trying constantly to deliver the best care that we can, and then we're being asked to think about how we might use an algorithm on top of that.
But, you know, I think it's important for people to know
(26:07):
that improving quality is our main drive; sometimes we just don't have the tools to hand.
And with the use of artificial intelligence, if we can create more efficiencies, we might be able to give ourselves the headspace and create more innovative pathways that can improve the speed of diagnosis
(26:32):
for breast cancer and treatments. And that's what we are about: we want people to be able to get a diagnosis as soon as possible, and for those that don't have cancer to be reassured as soon as possible, so that they can move on and not live with the stress and anxiety of the fear of having cancer.
(26:52):
But, you know, I think artificial intelligence is definitely gonna play a role in this, and in what our screening program will look like in 20 years. I hope it will be different to what it is now, because with the innovations coming, the information that we have gleaned through research, and the experiences that we have with the new technologies coming along, we should be creating a more modernized pathway.
(27:17):
And you can find out more about the use of AI in mammograms in our show notes.
Coming up on our next episode,
We finished a trial a while ago of an injectable chili pepper. Most people will be aware that if they keep eating chilies, they can take more and more, so their nerves adjust to the chili pepper. And people have been trying to do injectable forms to see if
(27:40):
that helps reduce people's pain.
Nope, you didn't mishear. Coming up in our next episode, we're talking about using injectable chili peppers to reduce pain for arthritis sufferers.
Medical Breakthroughs: The Research Journey.