Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Wellness Pet Foods (00:01):
Mealtime is more than a bowl of food. It's the foundation for a long and healthy life. At Wellness, dry recipes are crafted with the highest quality natural ingredients for a taste that pets love and scientifically proven through AAFCO feeding trials to ensure that every complete and balanced meal supports the five signs of wellbeing.
With Wellness Pet Foods, pet parents can share a life of
(00:25):
well-being together with their pet.
Sarah Wright (00:31):
This is Veterinary
Vertex, a podcast of the AVMA
Journals.
In this episode, we chat about the American College of Veterinary Radiology and the European College of Veterinary Diagnostic Imaging's position statement on artificial intelligence with our guest, Ryan Appleby.
Lisa Fortier (00:47):
Welcome listeners.
I'm Editor-in-Chief Lisa Fortier, and I'm joined by Associate Editor Sarah Wright. Today we have repeat guest Ryan, who's been a fantastic author for us for JAVMA and AJVR, joining us here today. Thanks, Ryan, for taking time out of your busy schedule to join us.
Ryan Appleby (01:02):
It's a pleasure to
be here.
Thanks for having me again.
Nice to see you both.
Sarah Wright (01:05):
All right, let's
dive right in.
So, Ryan, your JAVMA article discusses the ACVR and the ECVDI's position statement that outlines the guiding principles for the ethical development and integration of AI technologies to ensure patient safety and clinical effectiveness. Please share with our listeners the background on this article.
Ryan Appleby (01:25):
Yeah, absolutely, thank you so much. So the article
came together as part of the work of the Joint Committee on Artificial Intelligence Education and Development, which is a joint committee of both the ACVR and the ECVDI that has collected volunteers with expertise in the field of artificial intelligence, and
(01:46):
we've been working on a number of initiatives over the past few years. We've had some great success putting together our special issue on AI, which was available in Veterinary Radiology & Ultrasound.
We've been working towards numerous educational projects and speaking together in numerous locations, and this
(02:08):
year this kind of culminated in our joint position statement, which, you can imagine, took quite some time to get to the point where we were able to come to a consensus on what should be said about AI and what the colleges wanted to say, and so the way this kind of came together is through numerous hours of discussion with the committee itself.
(02:30):
We put together a small working group, led by myself, who was also chair of the committee at the time, to, I guess, draft the position based on those discussions, and then that position went back out to the membership of the committee to review and discuss the items therein, the position itself
(02:56):
and the recommendations. So that was kind of the process of putting it together, and we're really happy with what we've come up with and the information that is now available to veterinarians everywhere and, we hope, the public as well, to better understand AI.
Sarah Wright (03:14):
Sounds like a very
thorough process.
So what are some of the important take-home messages from this article?
Ryan Appleby (03:19):
I think what we
really wanted folks to come away
with from the article is our point of view of where things are at with artificial intelligence, so what kinds of things veterinarians should be aware of today when it comes to diagnostic imaging AI. This includes what veterinarians should expect
(03:40):
from AI companies when it comes to information that's available on AI products, so what they should be looking for when they're thinking about deploying that in their practice and, at the same time, making some recommendations to strengthen what is available to them.
Because one of the things that we've noticed in the field of imaging AI so far is that there's a relative lack of
(04:04):
information for end users, who are the veterinarians, to make appropriate decisions about what artificial intelligence is being used in their practice and how they can employ that or deploy that effectively, ethically and safely for their patients.
So the real crux of the paper is talking about those key core
(04:26):
issues about how we can best approach AI and some fundamental principles for thinking about it, and we drew on a lot of the information that was put forward primarily by the FDA, along with their, I guess, their corollaries in other countries, Health Canada in Canada and the MHRA in the UK.
(04:48):
They put together some great guiding documents that we've been leaning on as a committee and as a group to start to think about what should be available in imaging AI for veterinarians and what folks should be able to look for in a product such that they can use it effectively.
Lisa Fortier (05:06):
Yeah, all really
great information.
You've been so generous with your time, Ryan, and it's clearly a passion of yours to keep up with AI and educate the rest of us. You've been generous with your time for our journals, as a reviewer and as an author, and obviously on the speaking circuit as well. What sparked this interest in AI?
Ryan Appleby (05:24):
It's a good
question.
I mean, I think I sort of... I became boarded with the ACVR in 2019, which was a really good kind of time, and sort of a lucky time, to start to work on some projects in AI, which I thought, you know, might be a one-off or a two-off kind of thing, and it has
(05:56):
really kind of just turned into my main focus and my main passion.
And as I started to understand more about the field of imaging AI, what is available on the human health side, and especially some of the underlying regulatory principles, even if there aren't true regulations but actually just
(06:17):
regulatory principles, and why we need safety and transparency with respect to these, I became really passionate about thinking about how we can better approach this in our field, because, ultimately, I think everybody that's involved in this really wants the same thing, which is, you know, better health outcomes for our patients and, in some ways, maybe a
(06:40):
better life for veterinarians as well, and there's huge potential in AI for that, which kind of drives my passion forward.
But I think that, unfortunately, in my opinion, the way in which we've gone about it so far has been a little challenging and a little fraught, and there are much better things that we can do, and that's kind of, I guess, what has sparked
(07:01):
this passion.
Lisa Fortier (07:03):
You might know,
I've been an equine orthopedic
surgeon for more than 30 years, so I thought what you were going to say is you got bored being a radiologist.
Ryan Appleby (07:13):
No, it's... It's typically folks that are equine surgeons that come over and join us on the radiology side. So we'll be looking for you to start applying to programs soon.
Ouch, I'm just kidding.
Lisa Fortier (07:25):
Yeah, that's
fascinating.
Like you know, it's a handful of years and you've already clearly established yourself as a leader and a key opinion leader. So well done, thank you.
Ryan Appleby (07:34):
That's very kind
of you.
Lisa Fortier (07:36):
And Ryan, you talked about, like, you know, you're an author, you're here on the podcast, you're a reviewer, you are on the speaking circuit, part of this consensus statement. How else do we continue to educate our veterinary professionals on artificial intelligence?
Ryan Appleby (07:51):
I think that,
honestly, it's such a
challenging situation and a great question. Right now, a lot falls on the veterinarian, unfortunately, to seek out that information themselves and to learn it themselves, and that can be really challenging because there's so much that veterinarians need to know on a day-to-day basis.
(08:13):
So I think that's something veterinarians need to be aware of: right now, a lot of this is something that they need to educate themselves on, especially as they're already out in practice.
As part of the position, we came forward with the recommendation that veterinary colleges start to think about ways in which to integrate artificial intelligence into their
(08:35):
curriculum. This should become a core competency for new graduates leaving school, and that, too, will become a huge body of work: we need to think about how we're educating our new graduates, not only on using the existing tools that are out there, but also on thinking critically about where those
(08:58):
tools best apply or where they don't apply.
And then we really need some support from the companies themselves to assist with transparency, because the education can only go so far. We really need to understand the tools that exist, and then we need to, you know, think about what is out there and what those
(09:21):
companies can provide to veterinarians so that they better understand the tools that are available to them.
Sarah Wright (09:27):
So what are the
next steps for research in AI?
Ryan Appleby (09:30):
I think the world
is our oyster when it comes to
that.
You know, there's almost nothing that AI can't touch within our profession in one way or another. For me, it is incredibly key that we not only think about, you know, what these tools can do, but also prove that what they can do improves health outcomes, and that's kind of the key behind evidence-based
(10:01):
medicine, right? It's that we're not using them just because we have these fancy tools or these AI systems or whatever they are, but rather that when we deploy them, we think about how they're impacting patient health.
So to me, it's not enough if we say, you know, our AI systems can detect these findings with X percentage accuracy and these
(10:25):
positive and negative predictive values, et cetera. We really need to point out and show that that leads to a better health outcome. Otherwise, we have no business charging our clients for deploying that piece of technology.
We have no business trying to integrate it into our practices. We really need to have evidence behind what we're doing, and I
(10:46):
do think that that is possible and we will get there.
The challenge just becomes what are the incentives, and in many ways the economic incentives, for us to actually do that, rather than kind of putting the cart in front of the horse, so to speak?
Sarah Wright (11:05):
Very well said.
Are there any commercially available AI products for diagnostic imaging that meet the required standards for transparency, validation or safety?
Ryan Appleby (11:14):
No.
So, as part of our statement, we looked at what was available from a perspective of transparency and came to the conclusion that none of the available products meet the criteria established by the FDA, Health Canada and the UK MHRA for
(11:34):
transparency for machine learning-enabled medical devices. And so that's, you know, a document on good machine learning practices that those groups came forth with a number of years
(11:56):
ago, which states that the end users of products need to have enough information to be able to make informed decisions about those products, and they need to understand enough about
(12:17):
how those systems are made, and especially the underlying datasets that went into those systems, and that is all lacking for the products that exist. We really need to understand way more about that before we're able to feel, or, in my opinion, before veterinarians should feel, confident using these products.
Sarah Wright (12:33):
Yeah, thank you.
And for those of you just joining us, we're discussing the ACVR and the ECVDI position statement on AI with our guest, Ryan.
Lisa Fortier (12:43):
Ryan, you've
authored a lot of manuscripts,
but that's different than a position statement. How did all your previous training culminate in this? And you said you were the lead on the position statement as well. How did you get to that point?
Ryan Appleby (12:58):
Yeah, you know, it
is a very different kind of
approach.
You know, certainly this isn't primary research by any means, and even putting it through, you know, the review process and thinking about the editorial review that came back, it's all a very different process. It's a really good question, and I'm a bit stumped for the
(13:23):
answer, I guess. In some ways, I think that I drew on the same sort of principles. I wanted to... you know, we as a group wanted to approach this from as much of a scientific basis as we could. We looked through the literature, we looked to what should be expected of AI, and we've had many, many hours of discussion
(13:47):
on where AI should sit, and specifically where imaging AI should sit, within the profession, and what position we as the colleges wanted to take on that. And so, you know, a lot of it, instead of, I guess, coming at it from a research project, it's the culmination of, you know, all of those discussions.
(14:08):
So, rather than putting together data points to come to conclusions, we had discussion points to then come to conclusions and positions. So we tried as best as we could to have it in a very similar fashion. You know, still collaborating with the experts that needed to be involved, from machine learning experts to radiologists, to
(14:31):
folks in the industry of imaging AI, and trying to put together the best possible position and recommendations for the profession as a whole, but I guess just trying to do it as scientifically as we could, if that makes sense.
Lisa Fortier (14:48):
Yeah, during that
process, in the end did you find
one kind of trick, for lack of a better word, or effective tool? So you said, like, oh, you had a discussion point, but we all know when you get a room full of people it starts to deteriorate, maybe that might be the right word, and then it's everybody's opinion. You're like, hang on a minute, let's get back to the discussion
(15:10):
point. And we have this in every aspect of our life. What one thing really worked for you in the end, to be like, hey, we only have 20 minutes left of our time, let's get back to the discussion point? Or what worked for you in this group?
Ryan Appleby (15:23):
I have to admit that I was not always successful in that, and I
think there were many instances where, you know, we would go over our scheduled time. There were many meetings where the conversation perhaps went sideways, away from what we might intend and try to get to, but
(15:48):
in the end we would just kind of come back together, and I think that dividing things up into smaller working groups was probably the best part of it. Once the larger group had a discussion, it was a matter of saying, you know, who has the bandwidth and the space to actually take all of the things that we've been talking
(16:09):
about? Of those authors that are listed on the manuscript, even though it is a position, you know, from the colleges as a whole, that smaller group of people are the ones who kind of wrote the initial drafts before sending it back to the
(16:30):
committee for review.
Sarah Wright (16:32):
What a process. Sounds like a great learning opportunity.
Ryan Appleby (16:35):
Yeah, it
definitely was.
I definitely have appreciated being involved in that, and have appreciated being able to kind of take that work and also provide recommendations to other groups. You know, the AAVSB just put out their white paper on AI. I've been involved with my local regulatory body here in
(16:56):
Ontario talking about emerging technologies and AI, and I think that, you know, that work has been instrumental, and members of our committee are also now on the AVMA task force on emerging technologies, and so, you know, there's so much that we can draw on from that experience and bring forth to the profession as a
(17:19):
whole.
It's been really great.
Sarah Wright (17:21):
Very cool.
Now this next set of questions is going to be very important for our listeners, and the first one is going to be revolving around the veterinarian's perspective. So what is one piece of information the veterinarian should know about the use of AI for veterinary diagnostic imaging?
Ryan Appleby (17:36):
I think that, you know, to me, veterinarians should know that AI is not always accurate. So veterinarians need to separate the romance claims that companies make, which are similar to the romance claims that pet food companies make about how great their products are,
(17:57):
from the science behind them. And veterinarians should treat artificial intelligence like any other diagnostic test. So where they would look for any sort of validation measures and information on how well a SNAP test works or how well any other diagnostic test works, they should use the same
(18:19):
criteria and scrutiny for artificial intelligence.
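As a rough, editorial illustration of the validation scrutiny Ryan describes, the short Python sketch below computes sensitivity, specificity, and positive and negative predictive values from a hypothetical confusion matrix for an imaging AI tool; the counts are invented for the example and are not drawn from the episode or from the position statement.

```python
# Hypothetical counts for an imaging AI tool evaluated against a
# reference standard; all numbers are invented for illustration.
true_positives = 80   # AI flagged the finding and it was present
false_positives = 15  # AI flagged the finding but it was absent
false_negatives = 20  # AI missed a finding that was present
true_negatives = 885  # AI correctly reported no finding

# The same metrics a veterinarian would expect for any diagnostic test.
sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
ppv = true_positives / (true_positives + false_positives)  # positive predictive value
npv = true_negatives / (true_negatives + false_negatives)  # negative predictive value

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80
print(f"Specificity: {specificity:.2f}")  # 0.98
print(f"PPV: {ppv:.2f}")                  # 0.84
print(f"NPV: {npv:.2f}")                  # 0.98
```

Note that predictive values shift with disease prevalence, which is part of why headline accuracy figures alone are not enough to judge a tool.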
Sarah Wright (18:23):
And on the other
side of the relationship, what's
one thing clients should know about the use of AI for
veterinary diagnostic imaging?
Ryan Appleby (18:29):
Clients should
know that AI is starting to be
used within veterinary practices, and they should be aware of that, and they should feel comfortable asking their veterinarian whether or not they're using it and, if they are using it, be prepared to ask some questions about the data privacy associated with that, the efficacy of the AI, how the
(18:53):
veterinarian comes to feel confident about using that tool and why. And, in turn, the veterinarian should start to be more prepared to answer questions like that as our clientele becomes more educated and more understanding of some of these things.
Lisa Fortier (19:10):
Yeah, really great
points.
Thanks again, Ryan, for everything you do for our profession in this crazy time of AI. As we wind down, we'd like to ask a little more personal question, and this is one that my daughter got in an interview recently: if you were a spice, what spice would you be? I have no idea what spice I would be, but I don't think she passed the interview.
Ryan Appleby (19:39):
I have no idea.
I will say that I love to cook, um, and I particularly like the feel of a nice spice grinder. I got them for Christmas this past year from my in-laws, um, but I have no idea what spice I would be.
That's such a good question.
I don't pass the interview.
(20:00):
I got nothing.
I'm drawing a complete blank.
Lisa Fortier (20:05):
She said salt
because it's highly versatile.
Ryan Appleby (20:07):
I like that.
That's good.
Lisa Fortier (20:08):
I did too.
Ryan Appleby (20:11):
I couldn't come up
with one.
I hope she got the job.
Lisa Fortier (20:17):
She did not.
I'm not.
Sarah Wright (20:20):
Next time, when I
was interviewing for rotating
internships, one of the institutions asked me, if I was an appliance, what I would be and why, and that was... that was also like a hard question. I was like, I don't even know how to answer that one. I think I ended up saying like a refrigerator or something, because you're just continuously... cool and calm. I started translating that to working in ER.
(20:41):
It was hard.
Ryan Appleby (20:43):
We used to ask
everyone what they would bring
to a desert island.
Three things they would bringto a desert island.
That was one of our questions at NC State, and I remember when I interviewed I said that I would bring my podcast, so it's great to be here, a camping stove and my cat, and I was then
(21:06):
asked if the cat was for company or for eating, because I brought the camping stove.
Lisa Fortier (21:08):
That's awesome.
Well, it could be both.
Ryan Appleby (21:09):
Eventually, that's what I said, and I knew it was going to be a great fit.
Sarah Wright (21:15):
Oh, very nice.
Well, thank you so much, Ryan,for being here again on our
podcast and for sharing the position statement with our journals as well.
Ryan Appleby (21:22):
It's been my
pleasure, and thank you so much
for your journals' interest and for publishing on AI. I think it's so important for everyone to start to understand more about this, so thank you for taking that on as well.
Sarah Wright (21:35):
And to our listeners, you can read the ACVR and the ECVDI's position statement on AI in JAVMA. I'm Sarah Wright with Lisa Fortier. Be on the lookout for next week's episode, and don't forget to leave us a rating and review on Apple Podcasts or whatever platform you listen to.