Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog
(00:24):
wherever you get your podcasts. Thanks to our partners at fly.io. Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel (00:44):
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:01):
Hey, I'm doing very well
today. How's it going, Daniel?
Daniel (01:03):
It's going great. I'm excited to kind of switch it up from all of the talk of language models and agents, although maybe that will feature somewhere in the conversation today, but turn to something that's really exciting and I'm really happy that we can feature on the show, which is all about
(01:23):
neuroimaging and machine learning and AI, which will be an exciting topic to learn about. Today, we have with us Gavin Winston, who is a professor at Queen's University. Welcome, Gavin.
Gavin (01:35):
Good afternoon, thank you
for the invite, Daniel.
Daniel (01:37):
Yeah, yeah. Well, it's great to have you with us. Maybe we can backtrack from even the machine learning and AI side of neuroimaging to just the context of neuroimaging. A lot of our audience might be in the technology space or software developers or business leaders. Could you help us
(02:01):
understand a little bit: when we're talking about neuroimaging and some of the work that you're involved with, what does that mean exactly? How does it impact people's lives and the healthcare system and the treatments, etcetera?
Gavin (02:17):
So neuroimaging is a pretty broad term. It covers a variety of different techniques. And essentially the general concept of these approaches is that we're looking at the structure and function of the brain. You can look at other parts of the nervous system as well, but for my work, I'm particularly interested in the brain. So for things like structure, you can look at, perhaps, is there an abnormality within
(02:40):
the brain that's causing someone seizures or other types of neurological problems?
And on function, you can look at which parts of the brain are responsible for different functions that people have. For example, which parts of the brain are people using for language, which parts of the brain are people using for memory, and other tasks like that. So I guess I'm more concentrating on MRI, but there's a whole variety of
(03:02):
techniques, things like CT scans and MRI scans and PET scans. There's a whole series of different things.
But essentially they're just techniques to look into the brain from the outside to try and give us an idea of the structure and function and how these might be altered. So I guess the most common thing people would see would be MRI scans, which is what I mainly work on.
Daniel (03:23):
And maybe also give a little bit of context, I guess more of a historical context, for when and how machine learning or AI techniques started intersecting with neuroimaging or MRI. How recent
(03:46):
has that been? Is that something that's been going on for quite some time? Maybe just give a little bit of context there as well.
Gavin (03:55):
I think maybe we can step back and think about how neuroimaging has developed over many decades.
Daniel (04:01):
That'd be great. Yeah.
Gavin (04:02):
So of course, back at the beginning, you first developed the concept of doing X-rays, and we could do an X-ray of the skull. But that wasn't particularly helpful because you couldn't see the brain inside the skull; you could just see the skull itself. So then people started developing other pretty barbaric techniques where you would inject air or other things
(04:23):
into the brain so that you could somehow highlight things, but that was pretty risky and certainly not that helpful. I guess it really started coming in the 1970s when CT scans started to become available. You could then get a nice illustration of the brain or other structures from the outside.
And then that developed further to MRI, which gives us much more
(04:47):
detailed pictures looking at the brain. We can get higher and higher resolution and much more detail now. As these techniques have developed, of course, we've got more and more data to analyze, and the more data we have to analyze, that's when we start thinking, how can we use techniques such as machine learning to learn from this vast amount of data
(05:07):
we're now starting to collect? When you think about an MRI scan, you could have a resolution of 256 by 256 by 256 voxels. So it's a three-dimensional picture.
But then you have multiple different types of MRI scans looking at different types of things within the brain. So you
(05:27):
have an absolute ton of data. And when a radiologist, not a neurologist, sorry, a physician specialized in the assessment of scans, looks at that, they have a lot of information to work through. And that's extremely time consuming. And the number of scans we're doing is going up and up.
So now we're starting to think, well, how can we help and sort
(05:50):
of save time, and also make it easier to detect the abnormalities or automate things more effectively? Because with the vast amount of data we have now, it's not possible for us to keep up.
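To make the data scale Gavin describes concrete, here is a minimal back-of-the-envelope sketch in Python. The bytes-per-voxel and number of sequences are illustrative assumptions, not figures from the episode.

```python
# Rough sketch of the data volume per patient for the 256^3 MRI
# volumes Gavin mentions, acquired as several sequence types.
# bytes_per_voxel and sequences are assumed, illustrative values.

dim = 256                      # voxels per axis
voxels = dim ** 3              # 16,777,216 voxels per volume
bytes_per_voxel = 2            # e.g. 16-bit integer intensities (assumed)
sequences = 5                  # assumed number of MRI sequence types

per_scan_mb = voxels * bytes_per_voxel / 1e6
print(f"One volume: {voxels:,} voxels (~{per_scan_mb:.0f} MB)")
print(f"One patient, {sequences} sequences: ~{per_scan_mb * sequences:.0f} MB")
```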
Chris (06:01):
I'm curious, with the different types of scans and with all the data that they're producing, and now that you have machine learning techniques available, has that changed some of the choices that doctors are making in terms of what's possible yet? Is that still more in the future? How has machine learning changed the practice of medicine in these areas?
Gavin (06:24):
As with many things, when you're looking at the integration of machine learning and artificial intelligence into medical practice, the uptake of these techniques can be fairly slow. There's a fairly big chasm between what's possible technically versus what is actually used in practice. There's obviously a lot of concerns that people have
(06:46):
around data quality, ethics around using the data, and the accuracy of any techniques you might be using, because of course it's going to be used for humans that are undergoing different diagnoses and treatments. So there's a big gap between what can be done and what is actually being done in practice. There's a lot of potential there and you always
(07:10):
see things being developed, but the uptake has been a little bit slower than I would like as someone working in this field. But I think given what we can do now, if we think forward, definitely these are gonna become much more important over time.
Daniel (07:26):
And maybe before we get into how a machine might process some of the data that you're talking about, if we just consider the human, the physician or whoever it is who's looking at some of the data coming off of these scans, I know there's probably a whole variety of things that could be discovered in that data, in the imagery, right? But just by way
(07:49):
of example, what might the human observer be looking for in the neuroimaging data that would give them a sense of, I guess, a diagnosis or something to investigate further or a potential abnormality or whatever that is? What are some of those things, and what would they be looking for with their
(08:12):
own human eyes?
Gavin (08:13):
So when you are seeing a patient as a neurologist, you will get a description of the symptoms and examine the patient as well. That will give you an idea of where in the brain might be involved by whatever is going on. And then depending on the symptoms and the types of symptoms, you can get some idea of what types of abnormality could be occurring in that part of the brain. The role of the neuroimaging is really to
(08:36):
confirm that there is an abnormality in that region of the brain and what it is. So where is it, and what is the problem?
Because you may have a list, a so-called differential diagnosis, of the possibilities it could be, based on what you've managed to get from the patient. But until you actually do the scan, you don't know exactly which one it is. So
(08:57):
the radiologist, who's the physician looking at the scan, will be provided with the scan plus the clinical information and some hypothesis from the clinician about where it might be and what it might be. So then they're obviously going to look closely at those areas and try and identify something that correlates with that. And for a
(09:18):
radiologist looking at things, a lot of it is about pattern recognition and recognizing things that they've seen before. Sometimes it's very hard for a person to define what it is in the image that they see that tells them it's a certain thing, but they've seen it before, and that's what it is, because there's a certain pattern that somehow they've
(09:39):
learned that represents that.
Chris (09:41):
Got a quick follow-up. You know, as Dan asked the question and you were answering it, it occurred to me how little I know about the topic as a layperson, obviously. And you started with the notion of two things: there was structure and there was function of the brain. And I'm
(10:02):
kind of curious, could you take a second, backing away from the data side of things, and talk about, if you're a neurologist, what is the relationship between structure and function in the practice, even before you get to the data? How do they relate in terms of diagnosis and your evaluation of a patient? And how might that
(10:24):
inform bringing data into it, as Dan brought up in that last question?
Gavin (10:29):
So most of the scans that are done in day to day life would be scans looking solely at the structure of the brain. For example, if someone has presented with symptoms that you think represent a stroke, you'd want to do a scan to work out, is there a stroke and where is the stroke? You're looking at the structure of the brain. So pretty much all the clinical
(10:51):
scans out there being done are scans specifically looking at structure. Then there's a whole separate side of imaging, when we're talking about MRI, which is looking at the function of the brain.
And this is used in specialist centers and in specialist situations. So for example, if we're contemplating doing a
(11:12):
surgical treatment on the brain to treat some underlying condition, of course, we don't want to know just what the brain looks like. We want to know which parts of the brain are performing different functions. We know in general that certain tasks are localized to particular parts of the brain, but each person is individual, and it may be slightly changed by the underlying abnormality they have. So there are certain
(11:34):
scans you can do to look at the function.
So for example, if you want to do some brain surgery near the visual pathways of the brain, you might want to identify where the visual pathways are by some form of functional imaging. Or if you're doing surgery near where language function may be, you want to know exactly where in the brain language function
(11:54):
is so you can try and avoid that area if possible. These are so-called functional scans, where you're looking at the function of the brain. But that's far, far less common in day to day practice.
Daniel (12:05):
This may be an interesting question, but it's not often we have someone on the show who's both an expert in machine learning and AI type of things and in neuroscience or neuroimaging. Over the years, of course, we've had many people on the show, and there have been many
(12:26):
parallels drawn between neural networks and the structure of the brain and how these things are maybe modeled after one another. I'm wondering, from an expert in the field who's also applying machine learning and AI techniques, just how complicated or different the brain might be
(12:51):
compared to these kinds of neural networks or deep learning systems that, yes, are very powerful, but at least in my understanding at their root contain very simplistic components and certainly aren't as efficient as the brain in many ways. I don't know if you have any thoughts on that, but I
(13:12):
figured I would take the chance because we don't often have this intersection of expertise on the show.
Gavin (13:20):
That's a great question. And I think, if you think about neural networks, as you mentioned, of course they are based on biology and what happens in reality, but there's quite a big difference between what we're simulating and what the reality is. And a lot of it is around the scale. So
(13:41):
when we have neural networks, although now of course we can have much more complicated neural networks with the computational power we have than we used to, you don't realize just how complicated the brain is, just how many billions of neurons it has and how they're all vastly interconnected. That type of complexity has been very, very difficult to emulate, and even when we try and emulate the
(14:06):
nervous system of very simple organisms that only have a few hundred neurons, it's very difficult to replicate that precisely.
So how can we possibly do the same for a structure like the human brain, which has orders of magnitude more neurons and synapses? Yes, it's based on some underlying anatomy and function within the brain. But there's a big gap
(14:29):
between the level of complexity of what we simulate and what we actually are doing in our own brains.
Daniel (14:35):
Gavin, now that we have a bit of context about neuroimaging and the brain in general, I'm wondering if we could now zoom in a level, focusing more on the computational techniques. At
(14:58):
a high level, how would you categorize the ways in which machine learning or AI is being applied to tasks related to neuroimaging?
Gavin (15:04):
There are a number of different ways that we're using machine learning applied to neuroimaging, tackling various different parts of a patient's diagnostic and treatment journey. One example would be trying to classify scans as to whether they contain an abnormality or not. So that's a
(15:25):
simple classification task. A lot of the literature out there, they collect data on some healthy individuals without the underlying condition. Then they also collect some data on some people with a particular condition.
And the aim of the machine learning algorithm is to try and classify whether someone has a particular condition or not on
(15:45):
the basis of the imaging. You can publish quite a lot of papers doing that, but the question is, how useful is that in the real world? Because it's very unlikely you'll be asking, do you have condition X or not, which is essentially what the classifier is doing. You have a patient in front of you and you want to know what condition they have, not whether they have condition X or not. So therefore, you need
(16:09):
to go to a much, much higher level than that, trying to classify amongst different abnormalities.
Another way that classification could be used is not just does a person have condition X or not, but it may be a condition that could be in various different parts of the brain. So what you're trying to classify is which part of the brain is
(16:30):
affected by this condition, and which part of the brain is normal, healthy tissue. And that's particularly something I do in my work in epilepsy. Epilepsy is a condition with recurrent seizures coming from the brain. And in some cases, those seizures may be coming from a particular part of the brain.
We want to know where the abnormality in the brain
(16:51):
that's causing those seizures is located. Apart from what I've just mentioned about diagnosis, another thing that's being explored is how we can use imaging to help guide what type of treatment someone should have, which is the best treatment given this scan we have, and also whether we can
(17:11):
infer some information about the prognosis of a patient. If we treat them, what is the likelihood of a particular outcome? So for example, with a brain tumor, what's the likelihood of someone recovering or passing away from the underlying brain tumor? Can we predict that from our imaging data?
So it's used for a whole variety of things, from diagnostic
(17:33):
purposes through to treatment options and prognosis as well.
Chris (17:37):
So I'm curious, the use cases that you were talking about applying data and machine learning techniques to were really fascinating to me. You know? Could you take as an example one or more of them and talk about which ML techniques, that maybe our listeners are familiar with,
(17:59):
you know, that they've used in their own jobs that are unrelated, and talk about, well, I take this and I apply it to classifying what part of the brain is affected by epilepsy. Kind of tie in a little bit what a practitioner that is listening might have done themselves, where it might never have occurred to them to use it in this way.
(18:23):
I'm trying to tie the technique with the application itself a little bit.
If you could just take us through a couple of examples possibly on that.
Gavin (18:31):
With the first example I gave, of just trying to classify whether someone has a disease or not: typically we extract a number of different parameters from the imaging, and that could be designed in many different ways. And then we have a known output, which is that they either have the disease or not, because we decided to scan people with a predefined disease status. So
(18:55):
this is essentially a supervised learning approach, and you can just use typical techniques such as a support vector machine, which you may have seen in many other approaches.
This is a very common thing that is used in this type of approach. Then some of the more advanced imaging techniques, where you're looking at the whole three-dimensional picture of the image, are very well suited to techniques such as
(19:18):
convolutional neural networks. They're extremely good at doing this type of imaging analysis. So I've just mentioned two techniques, and these are probably the most common two things you will see in the literature, but there are plenty of other options as well.
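For listeners who want to connect this to code, here is a minimal sketch of the supervised pipeline Gavin describes, using scikit-learn's support vector machine on stand-in imaging features. The feature values, labels, and dataset size are synthetic assumptions, not real clinical data.

```python
# Minimal sketch of the supervised approach described above: a feature
# vector extracted from each subject's imaging (e.g. regional volumes)
# and a binary disease/healthy label, classified with an SVM.
# All data here is random and purely illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_features = 100, 20                # ~100 subjects, per Gavin
X = rng.normal(size=(n_subjects, n_features))   # stand-in imaging features
y = rng.integers(0, 2, size=n_subjects)         # 1 = disease, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Scale features, then fit an RBF-kernel support vector machine
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```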
Daniel (19:32):
And with that, I'm imagining there are a number of unique challenges. I've seen a lot of problems in supervised learning where I get a dataset and I train the model, and I've seen computer vision techniques. But with what you're doing, the challenges on the technical side may be related to
(19:56):
the data, maybe related to the scale, maybe related to the correlations, maybe related to the complexity or the number of target classes. Maybe just help us understand, what are the challenging elements in any of those aspects as you apply
(20:18):
those kinds of general-purpose techniques in this specific case?
Gavin (20:23):
One of the biggest challenges is having access to the data in the first place. When you look at techniques such as large language models that have a vast trove of data on the internet that they can tap into, unfortunately, we don't have the same thing when we're approaching neuroimaging. So if we want to train a model for whatever task we're trying to
(20:45):
train it for, the more data we have, in general, the better we can train the model. But the question is, where do we get this data from? And that's one of the biggest challenges.
So, if you are running a project at a university and you're scanning subjects for a research study, you may be able to get a hundred subjects. But a hundred subjects to someone in machine
(21:06):
learning, that is a minuscule number. That is nothing like the hundreds of thousands of data points that you might want. So the first thing is getting the data. And there have been approaches to try and combine data from other centers, and multicenter approaches and so on, but still we're limited by the amount of data.
And of course, if you're using a supervised learning approach,
(21:29):
the data has to be labeled as well. And depending on the nature of the label, that could be extremely laborious and time consuming, and maybe inaccurate depending on who's doing it. If we're just classifying someone as having a condition, a disease or not, that's a pretty easy classification to make, though again, mistakes could be made. But if we're doing some of the
(21:51):
more complex things, such as trying to work out which part of the brain is affected by a particular condition, then often the way this is done is by someone labeling the image.
In other words, someone manually draws in the brain which part of the brain is affected, and that's what's used to train the algorithm. But given, as I mentioned, that there are so many voxels in the brain, the amount of time it takes someone
(22:14):
to manually draw around the abnormality in the brain can be extremely long, several hours per subject. So if you have to do that on a hundred subjects, that's a lot of manual work. It's extremely difficult to get the data, plus also the labels for the data. So that's a pretty big challenge
(22:35):
for any clinically based study using machine learning and AI algorithms, just getting that in the first place.
Chris (22:41):
Given the limitations of the available data in this field, is there any role for synthetic data? In some other unrelated areas that is acceptable, and in others I've heard reasons why it's not. Is there any level of synthetic data that you're producing to support the research? Is that a possibility?
(23:03):
Is that something that you stay away from? Just curious.
Gavin (23:05):
Sometimes people augment their data by synthesizing slightly modified versions of the data they have and use that for training. So that is something that can be done. But I don't think you can synthesize data completely from scratch. You can just modify some existing data you already have. Yes, that's certainly a possibility.
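As a rough illustration of the augmentation Gavin describes, here is a hedged sketch that generates slightly modified copies of an existing 3D volume. The flip, rotation range, and noise level are illustrative choices, not a validated pipeline.

```python
# Sketch of augmenting existing data rather than synthesizing from
# scratch: slightly modified copies of a 3D volume via a random flip,
# a small rotation, and mild intensity noise. Parameters are assumed.
import numpy as np
from scipy.ndimage import rotate

def augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = volume
    if rng.random() < 0.5:                        # random left-right flip
        out = np.flip(out, axis=0)
    angle = rng.uniform(-5, 5)                    # small in-plane rotation
    out = rotate(out, angle, axes=(1, 2), reshape=False, order=1)
    out = out + rng.normal(0, 0.01, out.shape)    # mild intensity noise
    return out

rng = np.random.default_rng(42)
scan = np.zeros((64, 64, 64), dtype=np.float32)   # stand-in MRI volume
augmented = [augment(scan, rng) for _ in range(4)]
print(len(augmented), augmented[0].shape)
```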
Daniel (23:27):
And mostly you've talked about some of these approaches, convolutional neural nets, support vector machines. Again, I think a lot of the questions we're asking are coming from the place of those not working in the field, and questions that people might have from outside of the medical field. A lot of
(23:48):
what I've heard is that there is definitely more of a burden for explanation of certain predictions, and a sort of audit trail, potentially, of how decisions were made. Obviously, that's potentially easier with a simpler machine learning model than with
(24:09):
a deep neural net. You know, what is the reality there in terms of the techniques available to you?
Not so much technically, but from a practicality standpoint of how they might be applied. I guess one thing is proving something in a paper, right? But then actual adoption of that could be challenging. So what are the
(24:32):
realities there?
Gavin (24:33):
Yeah, we already discussed that, unfortunately, implementation is well behind what it would ideally be at this stage. And I think one of the challenges is, if you develop an algorithm and you present it to a physician and say, look, this does such and such, they want to understand how that's working. They want to be able to trust the algorithm. So if you've
(24:54):
developed an algorithm that's meant to detect disease X, how is it making that decision?
Is it doing something that makes sense to me? Because ultimately, of course, a lot of the AI algorithms can just appear to be like a black box. You've got your input, you've got your output, but you don't really know what went on between those two steps. And particularly for physicians less
(25:16):
familiar with these techniques, they want to know what's happening so they can trust the algorithm. Because at the end of the day, you're going to be making a decision that can affect someone's life on the basis of this information.
So you want to be sure how that works. Certainly some of the imaging analysis programs are starting to work more towards
(25:39):
the concept of explainable AI, as you mentioned. Actually, I can mention one study here, a study that I'm contributing data to, but it's led by my colleagues at UCL. It's their MELD study, a multicenter epilepsy lesion detection study.
And this is a study which is meant to help us detect where in
(26:01):
the brain epileptic seizures are coming from. They're collecting data from many different sites across the world. And one of their key aims is to develop something that explains why the decision is being made. So the output of the algorithm is not only just, this is where in the brain we think there may be an abnormality. There is also then an output that says, these are
(26:25):
the features that were different in that region of the brain that have led us to believe that that is where the abnormality is.
And then when we look at that, we could potentially even ask a radiologist to go back and look at that part of the brain again and say, look, this is what we're seeing that's potentially different. Are you able to now see that on the image?
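As a toy illustration only, and not the MELD study's actual method, the kind of feature-level explanation Gavin describes might look like comparing a flagged region's features against healthy-control norms. The feature names and numbers below are invented for the sketch.

```python
# Toy illustration (not the MELD implementation) of explainable output:
# alongside a flagged brain region, report which features deviate from
# healthy-control norms via z-scores. All values are invented.
import numpy as np

feature_names = ["cortical thickness", "grey-white contrast", "FLAIR intensity"]
control_mean = np.array([2.5, 0.8, 1.0])      # assumed healthy-control norms
control_std = np.array([0.2, 0.1, 0.1])
region_features = np.array([3.2, 0.55, 1.3])  # features in the flagged region

z = (region_features - control_mean) / control_std
for name, score in zip(feature_names, z):
    if abs(score) > 2:                        # flag strong deviations only
        print(f"{name}: z = {score:+.1f} (differs from controls)")
```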
Chris (26:45):
I'm curious. At a couple of points in the conversation, you've talked about the speed of adoption of the technologies, and you addressed a little bit what some of those root causes are in terms of the desire to understand algorithms such as that. On behalf of the practitioner, the medical practitioner, culturally within the profession, what
(27:08):
kinds of mind shifts do you think are going to need to take place, maybe to accelerate adoption, or what is that natural progression that you're seeing? Because as we're seeing these technologies flood into completely different industries across the globe and in different capacities, we're
(27:28):
seeing these kinds of cultural struggles within given professions. And I'm really curious, as you look at these doctors that are at some level adopting the technology and moving forward with it, recognizing the benefits, but also some of the challenges based on their traditional thinking, what do you think it'll take to get there for that profession?
Gavin (27:50):
The use of AI in our day to day life has now become so widespread, I think people are becoming much more accepting of the technology as a concept. But when we're working with clinical data, one of the limitations we have is, what are the ethical considerations behind that? And that's one barrier to adoption. So where is the data that we've acquired from a patient going?
(28:14):
If we're submitting it to some algorithm, where is it being processed? Where is it being stored? Is it being kept? Is it being used for other things in the future? Is it covered by privacy law, etcetera?
So there are a lot of considerations there. But if we can get something which addresses all of those concerns, I think now is the time, over the next decade and so on, where you
(28:36):
can actually start to get these things much more widely adopted, because there is that widespread acceptance of AI now.
Daniel (28:45):
So Gavin, I'm wondering, obviously you've done a variety of research in this area and are aware of other things that are going on. We've mostly talked about the context, the background of the problem, the technology, the challenges. I'm curious a little bit about the potential impact or performance.
(29:09):
Let's say that you're doing one of these studies, maybe you could give an example. What is the comparison between a human that's doing this sort of review manually and identifying either certain diagnoses or parts of the brain?
(29:29):
I imagine things get more complicated as the problem gets more complicated. But what's the human performance level, and where have people been able to push the machine learning or AI performance level? Now, granted, as you mentioned, there are still challenges to overcome with adoption, but I'm curious, at least in the studies that you've done or have
(29:53):
seen, how is that stacking up? Maybe also, what does it seem like these models are really good at? And then what are some of the open challenges that are not addressed currently?
Gavin (30:07):
Yeah. So the performance of humans in assessing whether there's an abnormality on the scan is very, very variable. There are a lot of studies out there that look at inter-rater performance between different people looking at the same type of data. And unfortunately, the performance and the agreement
(30:30):
can be quite poor in some cases. And if you look at trying to detect some subtle alteration in the brain, there are studies out there that show that the more expertise and the more highly trained the specialist is, the more likely they are to detect the abnormality.
So if you have someone who's a highly specialized neuroradiologist only looking at scans of people with epilepsy,
(30:50):
they are much more likely to detect an abnormality than someone who's a neuroradiologist looking at all sorts of scans. And they're in turn more likely to detect it than someone who's a general radiologist, not specifically looking at neuroradiology. As with anything in life, the more highly specialized and trained you are for something, the more likely you are to
(31:13):
detect things on a scan. Having said that, the reason we sometimes want to try and use AI is to look for things that aren't easily detectable by the human eye. There are certain things that are very hard to visually perceive, but there are patterns in the imaging data that can be picked up by the algorithms.
(31:33):
I think that's a very strong case for the use of AI: things that really are not visually apparent or easy to detect. But we can use it in a lot of other aspects as well, all through the whole process. If we're putting in requests for scans, of course, there's going to be a waiting list for scans
(31:55):
because many scans are being requested. Perhaps there's some way you can look at the information given on the patient and decide which ones are the more urgent, which ones are more likely to yield something abnormal that affects how I treat the patient. And then once you've done the scans, a radiologist will be given a list of scans to look at.
(32:18):
But if you can pre-assess those scans with some form of algorithm that prioritizes them and says, these five scans appear to have an abnormality, look at these five scans first, that's much more useful. And then you can leave the other 100 scans that are probably normal to later; you prioritize the ones that are potentially going to change someone's treatment. And
(32:41):
then we could go on. There are just many steps in the whole process that you could integrate this into.
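The triage idea Gavin outlines could look something like the following sketch, which simply re-orders a worklist by a model's predicted probability of abnormality. The scan IDs, scores, and threshold are made up, and the upstream scoring model is assumed to exist.

```python
# Sketch of worklist triage: re-order scans so the ones a model scores
# as likely abnormal are read first. All values are illustrative.
worklist = [
    ("scan-001", 0.03), ("scan-002", 0.91), ("scan-003", 0.12),
    ("scan-004", 0.77), ("scan-005", 0.05),
]  # (scan id, assumed model probability of abnormality)

# Sort descending by predicted probability so urgent scans come first
prioritized = sorted(worklist, key=lambda item: item[1], reverse=True)
for scan_id, p in prioritized:
    flag = "READ FIRST" if p > 0.5 else "routine"
    print(f"{scan_id}: p(abnormal) = {p:.2f} -> {flag}")
```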
Chris (32:46):
I'm curious, as you're describing how it's changed the current workflow, where the automation is the model kind of pre-selecting scans for the radiologists. But with models advancing so fast right now in terms of their capabilities, would you expect
(33:06):
that to remain the relationship between the model's capability and the human radiologist? Or do you think that's going to evolve over time? And is that a constant evolution that you anticipate? Or do you think there's some sort of human-AI steady state in terms of the collaboration
(33:29):
between the technology and the human?
Gavin (33:31):
Yeah. I guess, I mean, we're often asked, is AI gonna replace the physician? Essentially that type of question. I personally do not think it will, but I think it's going to be a technique that facilitates and helps.
In other words, it's going to augment the abilities of whichever type of physician you are. It will make your workflow more efficient, more smooth, and so on, but it's never going to
(33:54):
completely replace the human aspect. So for example, now when we use any of these techniques and identify things on imaging which we think may be relevant from the AI, we always then present it back to the team of physicians to look at the scan again and then determine whether we think that's relevant or not.
(34:14):
It's not replacing what we do. It's giving us some information which we then go back and review ourselves.
So for example, if there's something that may have been overlooked or is very hard to see in the first place, and an algorithm says, possibly there's an abnormality here, you can then go back and confirm or refute whether you think that's the case or not. So essentially, I think it's going
(34:37):
to remain this type of augmentation of what you do, but not a replacement.
Chris (34:41):
So if I could just follow up for two seconds on that. And I'm not disagreeing with you, but I am curious, because people will say, I think the human will stay in the loop and things like that. But why? In the way that you're looking at it, why do you think the human will stay in the loop? And I'm not arguing against that or saying that's a bad thing at all, but I've found that when I tell people
(35:04):
that I think a human will stay in various workflows in other industries and such, that's a common question: they go, well, why? Based on what you're telling me, with the models improving?
I'm kind of curious what your foundational belief is there.
Gavin (35:19):
Well, however good algorithms are, they do make mistakes. And one of the big issues in the type of algorithm I'm talking about, which is trying to detect where an abnormality in the brain is the cause of seizures, is a lot of false positives. So the technique looks good on the basis that we are identifying where the abnormality is, but it
(35:42):
doesn't address the fact that maybe three or four other brain regions were also identified that were not involved at all. So you're getting a lot of false positives. The performance of the algorithms is good, but far from perfect.
So I think that's a key reason why you're always going to need some human oversight looking into that. And of course, then
(36:04):
there's a whole separate issue of what the legal responsibility is. If an algorithm says X and you make a decision based on that, and it turns out what it said was wrong, whose responsibility is that? Is it the person who used the algorithm? Is it the person who wrote the algorithm?
(36:24):
Is it the physician? Which physician is it? It's a difficult decision. So I think there's always going to have to be some form of human oversight in the process.
Daniel (36:33):
And on that front, on the legal side, are jurisdictions or governing bodies, whatever the relevant kind of association would be, keeping up with this sort of work, getting ahead and putting the legal frameworks in
(36:55):
place? Are they catching up? A combination of the two? What guidance is coming down, and how does that legal situation look as of today?
Gavin (37:06):
That's not something I've looked into a lot, because each country has its own different rules. But in general, unfortunately, the legal system lags a long way behind the technological innovations that are occurring in the world. So there's definitely a lot of scope for working out all of these legal issues and how they're best dealt with.
Daniel (37:29):
Yeah. Speaking of things changing in the world, obviously we've seen a huge shift, a perception shift and a shift in technology and AI, over the last couple of years with Gen AI and language models, vision models, and all sorts of things. I imagine that there is discussion in the research
(37:53):
community around how these types of generative technologies might play a role, maybe in the decision support around this type of work or other things. Is there any perspective you have there, or have you crossed paths with folks that are considering
(38:13):
those sorts of techniques in addition to the traditional machine learning models or CNNs or deep learning types of models? What's the status there in terms of reception or integration of this latest wave of technology?
Gavin (38:33):
That's not an area I work in much myself, as I mainly concentrate on the image analysis side of things. But I think that goes back to what I mentioned earlier about the potential triage opportunities of these types of approaches. Given the textual information on why we're doing a scan, plus access to someone's medical records, what are the likely
(38:54):
possibilities? What's the probability of these things actually being the case? Which is the most urgent scan that we should be doing next?
I think that's the area where it's going to be the most useful.
Chris (39:07):
As you were talking, I found it really fascinating. I'm of an age where I'm in my mid fifties, having lots of medical procedures and seeing the advancement of technology pretty rapidly. As you look at this area that you've focused on so much, where do you envision the technology taking
(39:28):
the profession and the tools of the profession? How do you think that will go forward as we have ever more AI capabilities and algorithmic capabilities and more data available? What does the future of imaging look like to you, where do you think it might go, and are there any particular things
(39:50):
that you would like to see happen, as you've thought about your work, where it's going, and the rate of adoption? I'm just curious what that future is.
Gavin (39:58):
Yeah.
What I would like to see is, as a radiologist sits down to report the 20 scans they've got, instead of scan one to 20, it's ordered from the scan most likely to be abnormal to the least likely to be abnormal. So you've already got your list of the order you should be looking at them in. And when you open the scan, rather than just seeing the scan itself, there's an algorithm that's been
(40:24):
run on it already to perform the type of analysis you're interested in, and that's generated a report, some recommendations and ideas, that has already been fed back into the radiology system. So when you open it, it's not just the scan, it's the scan plus a sort of computer-generated recommendations report.
So you already have that information ahead of you, so you
(40:46):
can then focus on those areas and potentially detect things that may have been overlooked before, or get to things more quickly. And then once the report has been done, you need some way for that to be fed back to the treating physician in a useful manner. At the moment, what happens is the
(41:07):
radiologist writes a report and that is sent electronically, or sadly, in some cases by paper, to the referring physician to look at when they get around to looking at it. But maybe there's some AI that can be put in at that stage which can generate a recommendation or prioritization of that information, those results, so that the physician that actually requested the scan
(41:29):
in the first place is alerted sooner when there's a significant abnormality that needs to be addressed.
Chris (41:35):
Sounds good to me. Yeah,
definitely.
Daniel (41:37):
I think that's a great picture to paint as we close up here. Gavin, it's been a great experience having you on the show. In the show notes, we'll include a couple of links to where you can find some of Gavin's papers and presentations. I encourage you to check them out.
Really appreciate the work that you're doing, Gavin, and
(41:59):
appreciate you taking time to chat with us. It's been great.
Gavin (42:01):
Okay, great. Thank you
very much.
Jerod (42:10):
All right. That is our show for this week. If you haven't checked out our changelog newsletter, head to changelog.com/news. There you'll find 29 reasons. Yes.
29 reasons why you should subscribe. I'll tell you reason number 17. You might actually start looking forward to
(42:30):
Mondays. Sounds like somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com/news.
Thanks again to our partners at Fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.