Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:25):
Welcome back to Chewing It Over. How are we getting on? I'm getting nearly smooth enough, and I don't know how to make it full screen as soon as it opens there. But I'm trying to get used to my software again, and they keep changing it. I've got a couple of bits of extra features for you today that I'm going to try and use, and you can watch it go wrong in a second.
I've got a brilliant guest on today, which means I'm going to cut
(00:46):
the nonsense in the intro and get him right in there, especially because that's where the technology could not work. But hopefully you're hearing me loud and clear, and if I can click a couple of buttons I'll be joined by Ash James, he says. Ta-da! Ash, can you hear me?
I can hear you loud and clear. Can you hear me?
(01:07):
Good, I've got you, mate. Yes, and a returning guest as far as I'm aware, but it's been a while.
It has been a while. It has been a while.
Yeah, you lucky sod. So we're going to get stuck into this CSP guidance document. I mentioned there in the quick intro about me using some clever features, and I hope it works, because it'd be embarrassing if not. But look at this, Ash. Try not to be bowled over by our stunning technology, but look at that, look, it's worked.
(01:30):
This is what we're on about, so this is worth you having a look at. If you've got your phone in your pocket and you're listening to the audio, get your phone out of your pocket, scan this QR code and it will take you to the statement of principles by the CSP, which is what we're going to be using as the reference for our conversation, which is about AI use in physiotherapy. It visits a few different areas and topics, such as education, clinical practice and other
(01:51):
integrated ways of using the technologies. I'll bring it back up at the end as well. But for those that want to know what we're on about, it might be worth having a glance at it, or using it as a supporting reference, so please do have a little look at it on the screen. It's easy enough to Google as well: if you've not come across it, search 'CSP AI' and you'll find it; that's what I did to get the QR code. But that's what we're going to
(02:11):
be on about today. I want to start off, if I can, Ash, by asking you to tell us a little bit about how it came about, especially because you opted to go for principles rather than a specific hard-and-fast policy. You've already named that you're going to be reviewing it every six months because it's a dynamic topic. So just give me a clue as to how it came about and why that format.
Yeah. So I think, first of all,
(02:33):
originally, I've been aware of AI and physiotherapy for a long time actually, as, you know, something I've worked with before in my previous jobs. And sometimes I think AI isn't a useful title for what it does; I think it can make it sound scarier than it is a lot of the time. However, it was brought to our attention through its emergent use in physiotherapy
(02:56):
services. So where it's starting to get involved in patient care, and members demonstrating some concern about that. I was also seeing, you know, the direction of travel, the noise from the government about AI and healthcare, etcetera. So we felt like it was a good moment in time to put something together. I suppose, why principles over a policy? I suppose it's, what do we mean
(03:17):
by policy? You know, here at the CSP, I know Rob Yeldon, our Director of Policy, would say that it's just what we think about things, and this is what we think at the moment about AI. So we could consider it policy to a certain degree. I suppose what we didn't want it to be was too prescriptive or too restrictive either, where
(03:40):
we know it's such a fast-paced sector, and at the CSP, with our governance structures, sometimes to change a policy formally can take a while because it needs to go through Council, etcetera. And I didn't want to tie us down to having to wait a long time before we could make changes if we needed to make them, because
(04:02):
of the nature of this sector. So I think it was so we could be a bit agile, and so we were able to respond to things that were happening. But essentially this is the formation of our policy on AI, because it's what we think about it.
Yeah, exactly. And these are phrases that have got different interpretations. I think it's well placed in terms of its format, actually. I think it has the ability to be agile and updatable without it radically changing.
(04:25):
But similarly, it does make appropriate statements of the facts of the matter as it stands now, and I think that's sensible. Where it is a bit more debated, and I'm interested in your thoughts on this because I don't know where I stand, is that there are people that feel that you were a bit late to the party, and there are people that feel that actually you're going a bit early whilst this is still very
(04:46):
emergent. And so you can't please everyone, and nor should you try. But one of the reasons for that is, people feel that, well, you've alluded to it influencing services. How much did you know about the fact that physiotherapy was going to be, or it being implied that this is, AI physiotherapy running a service? I think it's in Lanarkshire, isn't it?
Yeah.
And then that makes you guys
(05:06):
look a bit reactive, in a sense. So how much did you know about it? And if it is that you only found out when the rest of us did, is this at risk of being a bit of a reaction to that, and could you have got ahead of it?
Yeah. So we knew about the service that was being implemented, and we'd spoken to the organisation before. I think they were in Frontline, I think, January last year, when they formed.
(05:30):
So we were aware of the organisation and aware of their intentions for delivery of care. And what we found out at the same time as everybody else was the label that they were using, you know, the kind of headline of 'AI physiotherapist'. So I think that was the reactive bit, and that brought the protection of title conversation into the mix.
(05:54):
So is it the right time? I think, like you say, we have in the physiotherapy profession, which is quite reflective of society generally, early adopters who would have been using AI for a long time and think we are too late and there's lots going on. Equally, we'll have people that aren't necessarily the first on board with emerging technology and want to see how
(06:17):
it fully plays out before we consider adoption. And hopefully we're falling somewhere in the middle there, where we've seen something emerge, we've recognised that it could have potential implications for physiotherapy, therefore slightly reactive. But also, when we're talking about AI, people often get hung up, particularly in conversations I have; they get
(06:40):
hung up on the service delivery, or AI in service delivery, when actually it's got such broad use. AI is so broad that I think, aside from that, the principles are a really good way to get physiotherapists to try and adopt AI in lots of other ways in their practice that will be helpful, hopefully. So I think it's
(07:02):
broader than just that issue.
Yeah, we'll definitely come back to that. I think when we talk about risks and benefits, it's smart for us to then sort of ground it in clinical practice as well as some of the other use-case categories we have, in education and stuff. And it's also really useful that you, as Director of Practice and Development, have, and I'm always assuring people of this, so much breadth, from sport through occupational health, through education, as
(07:25):
well as routine MSK clinical practice. And I think that's useful, because this document is sort of useful in that for anyone picking it up, any member, there is something really for them. It's not just, oh, that doesn't apply to my day job. That's why it's sensible for it to be principles. So I definitely want to get into the weeds on some of the categories. But before we do, I wondered if
(07:47):
I can get your executive summary, or where you feel you want to highlight any particular points within it. Especially because there's a risk that, of course, we could just put it into ChatGPT and I could have had that on instead. But whilst you're here as a human involved, give me what you feel your biggest take from it is.
I think some of the things you've already mentioned: that it's for every member. You know, we designed it as principles, and
(08:08):
it's almost like a checklist. So if you want to use ChatGPT in your practice, what are the things you should consider? If you want to use AI that might help you with note taking, what are the things it should cover off? Equally, all the way up to, if you want to implement a service using AI, what are the things I need to consider from an
(08:30):
implementation perspective? What are the things I need to consider from an education perspective? So I think the biggest take-home for me is the breadth of it. It's trying to be useful to all members in all sectors in all environments, which is a really difficult thing to do, which is why, in the creation of this, some people felt that it was too broad and we needed to
(08:53):
be more narrow. It's deliberately broad. As I said at the beginning, if we're too specific on any of this, it will change next week. AI is developing so quickly that if we're too specific on any one product or one position or one bit of technology, it will develop and change very quickly. So the principles are broad, and
(09:14):
you could apply almost the majority of the principles to any technology; it doesn't necessarily mean AI. So when you think about the general principles at the beginning of the document, which make up the majority of them, they're things like accountability, transparency, data governance. You know, they're things that you should be applying to most
(09:34):
other technologies that you implement in your practice anyway. So lots of it isn't necessarily different from an AI perspective. And we did toy with whether we just had principles for technology in general that you could apply, but we felt like members were needing something specifically for AI at this moment in time. So that's why we went down that route.
And it probably needed
(09:55):
wording in that way as well, didn't it, even though some of it naturally applies across technology? Because that was definitely going to be one of my points. It kind of needed that appropriate wording to make it feel modern, and it makes sense to lean into the buzzwords that are attractive. You know, I don't want you guys to succumb to clickbait, but in this instance it's smart for it to be referencing it. Now, to critique it:
(10:16):
there is an argument that it having breadth means that it ends up being so vague that it doesn't necessarily have that much substance. Do you feel that that's a fair point, and where do you think its next iteration will be able to add a little bit more meat to the bone?
Yeah, I think it's a really fair point. And I think if anybody really wants to get into the weeds of
(10:37):
delivering a very specific product in a specific area, it probably doesn't give enough. But it gives you enough to be safe in terms of governance around your data, safety around clinical accountability. If you enact all of the things that are in the principles, you will at least be safe, and you
(10:57):
will at least be implementing it in what you could consider an ethical way. I would say, how you then maximise and optimise the use of that bit of AI that you have in your clinic or in your practice, it's not going to go that far and help you optimise that. I think innovation generally comes from members. In my experience, you know, members are very good at
(11:20):
innovating, and I think the best-case use of the technology will come from people in practice. So I think it's a really fair critique. We set a kind of six-month review on it on purpose, because I do think it will be iterated. I do think there will be changes depending on how members receive the guidance in the first place,
(11:40):
but also how the technology moves and shifts and the sector moves and shifts. So there's no doubt to me that in a year's time this guidance will look different. But I think that's OK, and I'm comfortable with it being iterative. And I think sometimes we can be too much 'this is the guidance,
(12:01):
here it is and here it is forever', when actually we can be more comfortable, particularly in this space, saying: here's the guidance for now, we know it's a fast-moving sector, give us some feedback, we can change it, you know, and we can move with the times. I think having that agility is much more beneficial.
Yeah, look, I like that. I think one of the things that provokes some of that cynicism,
(12:24):
where people feel like they wanted more detail, is in part because you've just described what would be ideal: a format whereby some loose guidance is out there, members then implement it and start to develop their own test cases. A bidirectional relationship, with the CSP being by its members, for its members, feeding through to committees and councils, means that the natural
(12:45):
flow of information and experience and examples would naturally find itself in media and communications from the CSP. And by that I mean that members would naturally find their experiences then represented in it. So in ideal terms, that's what would happen. The problem is that people, including myself and many of our audience, feel that that's absolutely describing a utopia
(13:08):
that hasn't existed until now with the CSP. And this is stuff that I can say more comfortably than you can, but that's certainly our wider view, if I was to try and speak confidently on behalf of a few there. And therefore that's one of the reasons why people think, actually, we don't observe that degree of collaboration usually.
(13:30):
Therefore, yes, that sounds nice, but is this then an example of a change in tide, with a deeper level of member engagement and a deeper attentiveness from the CSP to how things are actually applied, so that we could then see a next iteration be actually by and for members again in a proper way?
Yeah, I would like to think it's
(13:51):
the latter. I've definitely described the utopia, but I think that's what we're aiming for. I think we've started to implement some changes and we're broadening out the conversation now. From a CSP perspective, with our professional committee, for example, we're starting to look at how we can engage more meaningfully, and how professional networks, for example, can
(14:12):
routinely input into professional committee the things that they're noticing, worrying about, seeing on the ground. So we're not necessarily just relying on the members of the committee and their experiences; we're trying to generate a broader sense of how the membership are feeling through the existing structures that we have, via professional networks. I know other committees are also looking at and exploring how
(14:35):
they can get feedback from members more broadly and not just rely on the anecdotal conversations they might have in their own departments, or the members that they speak to on visits, or whatever that might be. So we're definitely looking at how we can routinely draw on the knowledge and experience of members and get that fed into
(14:56):
the governance structure of the CSP more broadly. And this would be a route to do that. You know, we'd definitely be leaning on the professional networks to understand how they're using the guidance, have they seen it, have they used it, how they're feeling about it, and use that as a feedback mechanism. I will definitely be holding, around the six-month mark, some webinars, some kind of, you
(15:18):
know, audience participation events where people can feed in what they've seen and felt. And we'll go back to our experts in the industry as well, who helped develop the guidance in the first place. So it's definitely, I think, a process where we want to look to change the narrative.
I think that's where, I know you're right, it'd be a risk of digressing this conversation too far if we went
(15:39):
into CSP governance. But it's just that I see this through two different lenses. One is that if it is a document of principles that I then think is a continuation of how I've experienced things before, both from when I was within CSP and Council and before that, then I'd be concerned that it is too vague, and that it should say and do more, and needs to, because essentially it is a bit of a diktat
(16:01):
from above. If this is the first of a new line in which there is a deeper, more varied member engagement that recognises this is a fast-moving space, and therefore we need to be more agile than we maybe have been before (and I know I'm smuggling in a broader governance critique there, which you needn't agree or disagree with right now), then I'm just meaning that if I see it through that lens, that's an exciting proposition for me, not just because of the
(16:23):
subject matter, but also because of what it could mean as a new precedent for how the CSP engages with its members, regardless of where they are, and how they can find a way to have their practice represented in iterations of this document. So I'm pleased to hear that it feels, like you said, hopefully the latter. And it doesn't mean that perfect need be the enemy of good on this, because I want it to be better. I think it's
(16:44):
got that opportunity to be. But if that membership engagement opportunity falls away, I think it's probably lacking, and then it ends up falling into a routine that I've obviously been critical of. Because I'm a smart-arse, I have put that document into ChatGPT whilst we've been talking and asked it to summarise it. Yeah. And so I'm going to just bring this up.
(17:06):
And you can see that as well as I can, can't you, Ash?
Right, I can.
I'm not going to read this out; just get your phone out of your pocket if you're listening to the audio. But what's on screen now is what ChatGPT has suggested as the key principles and themes from that document. Do you think that's fair enough at a glance, Ash?
Yeah, I think it's only done the general ones. So there are the general-use ones, and then there's use
(17:27):
in clinical practice, use in education, use at work. I think it's just on the general use of AI there. But yeah, that seems really fair.
And those are the ones that have multiple applications; they're the umbrella, aren't they? These are the ones that really do feel like they apply across the board, don't they? Can you pick out a couple of them for me, just whichever you feel you want to highlight, and give a couple of examples?
Well, I think the biggest one
(17:48):
for me is probably the confidentiality and the adherence to GDPR and data protection. Because I think the real risk here is that we start whacking patient data into different types of software without a knowledge of where the servers are, where that data is going, who has views of it. So if there's anything that members take away from this, I think the key thing around any AI, or technology
(18:11):
in general, but particularly AI and generative AI software such as ChatGPT, is that we really need to be cognisant of data protection, GDPR, where things are going, who's seeing it, where it's been saved, because we will lose patient trust very quickly if we start to get that wrong. So I think that's really, really important.
(18:33):
That's one of the biggest things for me.
I think so. One of the things that I realised as well is, if you're clumsy with what you're dragging and dropping, you're then telling our robot overlords that Joanne Bloggs, with this date of birth and this address, has this back pain and problems at home, right?
(18:55):
Yeah, how dare you. You wouldn't say that to anyone else; you would be really careful where that went. And if indeed that is feeding ever more complex large language models that don't necessarily know, if you've not partitioned the consent on that, then they're not going to know. It's just really clumsy, because that data is there and it's not yours, it's not yours to tell. And so people need to be
(19:17):
more thoughtful than they are being about that. And I think that this was a document that really did confront those factors very well.
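For listeners who want a concrete picture of what being careful with the drag-and-drop actually involves, here is a minimal, illustrative sketch, not from the CSP guidance, with the patterns, names and example note all invented for illustration, of stripping obvious identifiers from a note before it ever reaches a generative AI tool:

```python
import re

# Illustrative only: real de-identification needs clinical governance sign-off,
# not a handful of regexes. All patterns below are assumptions for this sketch.
PATTERNS = {
    "DOB": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),           # e.g. 03/07/1985
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),        # 10-digit NHS number
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),  # UK postcode
}

def redact(note: str, known_names: list[str]) -> str:
    """Mask obvious identifiers before any text leaves the clinic's systems."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    for name in known_names:  # names the record system already holds
        note = re.sub(re.escape(name), "[NAME]", note, flags=re.IGNORECASE)
    return note

note = "Joanne Bloggs, DOB 03/07/1985, SW1A 1AA, reports worsening low back pain."
print(redact(note, ["Joanne Bloggs"]))
# [NAME], DOB [DOB], [POSTCODE], reports worsening low back pain.
```

A handful of regular expressions is nowhere near real de-identification, but it illustrates the habit being described: nothing identifiable leaves your systems by default.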
To some extent though, when it comes to marketed and paid-for clinical service products, clinicians to some extent can't do the due diligence to read every layer of code and small print, and have to trust the
(19:39):
organisations to be working not just legally but ethically with regards to data management, do they not? Or do you think that actually it's that much of a Wild West out there that I as a user need to be making sure that I understand exactly where the server is and exactly what that data is doing? Because that feels a little bit daunting, if I'm honest.
Yeah, I understand it can feel daunting.
(20:01):
I do think, you know, when we talk about the future, inevitably, and I'm sure we will in this conversation, it is going to have to become part of the physiotherapy skill set if we're going to be data-enabled and using data technology. Not necessarily to understand the code and all of that detail, but to at least understand what are the right questions to ask around GDPR,
(20:22):
you know. We've asked some questions recently about software that's on the market that we use for software design and things like that. And when you look into it, different organisations don't have the data agreements in place that they necessarily should; you know, trusted organisations that you think do have or should have data agreements in place just don't.
(20:45):
So I think, in part, if members are going to utilise the technology in their practice, they do have a duty to at least understand where the data is going, what it's doing, and whether they're handling it in an ethical way.
Do you think that, because I know this is a new and emergent thing, the regulatory frameworks might not be in place? Because, you know, if we do
(21:07):
think about them as medical devices, I can trust, if I buy a diagnostic ultrasound machine, that its likelihood of giving an electric shock to a patient is minimised by its regulatory framework, without me then running it by three electricians. And so to some extent, if I as a
(21:30):
consumer purchase a product, I have to make some assumptions on its legality, even though there isn't a robust regulatory framework: it being legal, and therefore me as a clinician being insurable. Especially because I'm not talking about an edge use case, like ChatGPT, which is so broad; I'm thinking about a clinical product. I feel that if indeed, at the CSP or elsewhere, we find out through
(21:52):
investigations at Physio Matters that there's actually some illegal practices, essentially, and that that is not what is being done, or it's at the very least being hidden, that's got to be a legally reportable thing. Because as a consumer, I just don't think that we can do all of that. I think you're quite right that we need to be conscious, but it's just that if that's found, it has to be reportable, does it not?
(22:14):
Yeah. Oh, absolutely, absolutely. In the same way any consumer product would be reportable. And I think the word you've used there, of being aware, is the right one. You know, I don't want to over-egg it, but even just knowing whether a product should be registered as a medical device, for example, even knowing that, even asking that question, you know, is this, or
(22:35):
even looking at that as you buy a product: is this, or should this be, classed as a medical device? And then if you know that, and you know it's classified as a Class 1 or Class 2 medical device, you should have some assurance. But even the knowledge of knowing to ask that question I think is important, and something that maybe not all physios are doing regularly at the moment,
(22:56):
but I think in the future we probably should do.
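As a rough sketch of what 'knowing the right questions to ask' could look like written down, here is an illustration; the fields and the pass/fail bar below are invented, paraphrasing the conversation rather than quoting the CSP's principles:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical due-diligence record; the questions paraphrase this conversation,
# not the CSP guidance verbatim.
@dataclass
class AIProductCheck:
    data_processing_agreement: bool      # is a DPA actually in place?
    server_location_known: bool          # do you know where patient data is stored?
    trains_on_patient_inputs: bool       # does the vendor train models on your inputs?
    medical_device_class: Optional[str]  # e.g. "Class I", "Class IIa", or None
    accountability_named: bool           # is it clear who answers when it gets it wrong?

def basic_governance_met(check: AIProductCheck) -> bool:
    """Fail this sketch's (invented) bar if any basic governance answer is missing."""
    return (check.data_processing_agreement
            and check.server_location_known
            and not check.trains_on_patient_inputs
            and check.accountability_named)

triage_bot = AIProductCheck(
    data_processing_agreement=True,
    server_location_known=True,
    trains_on_patient_inputs=False,
    medical_device_class="Class I",
    accountability_named=True,
)
print(basic_governance_met(triage_bot))  # True
```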
Because of her background as well, Katie Upton has been great on this, because of her Physio Fast Online work and stuff. So she was an early adopter there, and so AI came in for her sooner than it did for most of us. And so she's been a real thorn in my side with it, where I've been sort of cheerleading probably more ahead of the curve than I should have been considering the data safety
(23:16):
stuff. So really helpful. And again, I think this guidance does nail that, and brings to awareness something that I think people have been complacent about, including myself. So one of my next questions, really: you said about the future, so what do you see AI providing the most value in, in MSK, over say the next five years? Which I know is a long time span in this market.
Yeah, that is a long time span.
(23:38):
So in five years, yeah, I don't know. Hoverboards? Yeah, I don't know. But certainly in the near future, I think there are things. Well, the service delivery that we've seen. So does it have a role there, I think, is a really interesting question for us.
(23:59):
And I would argue that, with the waiting lists that we have at the moment for MSK, if I have, let's say, simple non-specific mechanical lower back pain, and what I need is some good advice, some exercise, some exercise advice, some reassurance, I do think there's probably a role there for AI to help, rather
(24:21):
than waiting for a long period of time to see an individual. You know, if you can afford private provision, that option is there; if you can't, which lots of people can't, this might be a good alternative. So I do see it being useful, if used correctly, in service provision. I think clinical decision support is another one, you
(24:41):
know, where it's helping with early diagnosis. We've seen, you know, radiologists are all over it in terms of how it's able to effectively detect things early on images. There's a thing there for me around data in and data out. You know, one of the things that we're trying to do is improve
(25:03):
how physiotherapists can collect, utilise and analyse data. And AI is only as good as the data that's available. So if we don't have good physio data going into it, we're not going to get good physio data coming out of it. So in terms of supporting clinical decisions, that needs to be right. Remote monitoring is another one. Virtual rehab, you know,
(25:23):
wearable tech is growing; there'll be a role for AI there. I think a big one is administrative efficiency. You know, there's lots of things in clinic I think it can help with, in terms of making note-taking more effective and more efficient, writing rehab plans, you know, monitoring of patients throughout that. I think that's another one.
(25:44):
And then population health insights. You know, if we've got large data sets, identifying trends, particular geographical issues, inequalities, unmet need across populations, I think AI can help us analyse data very quickly and more effectively and more efficiently. So I think in that analysis of population health, there's
(26:05):
probably some stuff there. So those are some of the big areas where I think we can see some development over the next five years.
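To make the population health point concrete, here is a minimal sketch, with the dataset, column names and threshold all invented for illustration, of surfacing a regional waiting-time inequality from service data:

```python
import pandas as pd

# Invented example data; real insight needs real, well-governed datasets.
referrals = pd.DataFrame({
    "region":     ["Lanarkshire", "Lanarkshire", "Kent", "Kent", "Gwent", "Gwent"],
    "wait_weeks": [18, 22, 6, 9, 30, 26],
    "condition":  ["low back pain"] * 6,
})

by_region = referrals.groupby("region")["wait_weeks"].median()
national = referrals["wait_weeks"].median()  # 20 weeks in this toy dataset

# Flag regions whose median wait sits well above the national picture,
# the kind of unmet-need signal described above.
flagged = by_region[by_region > 1.25 * national]
print(flagged)  # Gwent    28.0
```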
I think if we've got the humility to recognise where our strengths lie as human therapists in doing thorough examinations, then we would be in a position where the technology in years to come will be that someone, say, has got a genomic sequence, say they've
(26:27):
got a robust evaluation, say an MSK one to keep it in theme with the show, and then alongside that some more recent blood work, just to give three examples of many other things it could be. That's a tough thing for a human to integrate really thoroughly, but it's something that a robot would be able to do well. And so there'd be some compliance from said human, creating a format that would be
(26:49):
as useful for that machine as possible. But joining the dots as to where that might be: someone's disposition to a particular condition, along with some little tells that might not meet a massive threshold on their own in blood work, alongside a clinical presentation and assessment findings, might mean that you're in a situation where the AI might be more sensitive than you are to this being
(27:09):
essentially sickle cell. And it might therefore not be a frozen shoulder; it might be osteonecrosis, it might be avascular necrosis instead, secondary to sickle cell. Are you thinking about that yet, those sorts of moments where it might assist? The point is that the robots themselves would be useless at that without the examination findings that I've just described, which would be best suited to a human at that point in time, right? That end
(27:30):
feel we all know, that behaves like a frozen shoulder: good luck to a robot doing that for a while yet. What I've just described in terms of that wider data point would be harder. And it sounds futuristic, but it's not particularly; those technologies exist, no one's joined them all up perfectly yet. I shouldn't really have said it online, I should just patent it and make
(27:51):
millions. But then, am I right in thinking, if I was to ask a question: we've just talked about upsides and opportunities, but when we talk about biggest risks, does that take us back to that data protection thing? Am I right to guess your answer there?
So data protection is definitely one of them, in that we lose public trust. And that's not just physios,
(28:13):
that's, I think, anyone in the medical field who's going to start using AI. If we're not careful with it, we will lose public trust. So I think there's a responsibility in that. There's a responsibility, therefore, for us to work across different professions, with other AHPs, with medics, with nurses, because if any one of those strands loses the
(28:33):
public's trust in it, then all of us will suffer the consequences of that. So I think there's a need for us to collaborate where possible on that. Another risk, I think, is the perception from physiotherapists that we will lose our relevance. I think you've just described really well where we won't and why we won't. If we're really confident in our ability, you know that there's nothing that
(28:56):
can replace human touch; I don't think AI will ever do that, and humans will always want that. So I think it's about how we integrate it into our practice in a really meaningful way, and you gave a great example of doing that there. The reality is AI is here and it's happening.
(29:17):
So it's going to happen whether we like it or not, and integrating it in a really meaningful way is going to be important. There's definitely some bias and inequity in AI programs, you know, particularly, you know, whoever designs the code; there's inherent bias in lots of those. So you will get biased answers out as well, particularly as
(29:38):
it's relatively new. I know it's not super new, but it doesn't have all the data. So I think there's some bias we need to be mindful of. And there's also then clinical accountability and regulation: who is regulating it? What is the regulation around this technology? If I go and see an AI physiotherapist, can it call itself that? You know, that's a protected title.
(29:59):
And then if it misses my CES, my cauda equina syndrome, who's accountable? Is it the AI physiotherapist that I've seen on the screen? Is it the organisation? Who do I report it to, the HCPC? So I think there's something there.
I think one of the things, and this is only slightly tongue in cheek: it
(30:23):
would be really interesting to see the HCPC then take several years to process that robot physiotherapist. And therefore, eventually, in that time frame, that robot physiotherapist would probably have developed the ability to get itself out of trouble, which would be really interesting. So unless they sharpen the tools on getting those fitness to practise hearings down in time frame, then we're all OK.
(30:43):
Although you could argue that no one's OK, but I know I can say that and you can't, so I'll leave it there. What I do want to admit though, Ash, is that there is a corner of our audience, and I'm not just saying that to get myself out of confronting it head on, because I share this view in the round, that feels that the CSP has been complicit in a reduction in,
(31:07):
in sort of, how do I put it, I nearly said clinical scope, because that's not fair really, because it's not reduced in scope. But it's just that the public perception of physiotherapy has then meant that many people do associate it with a very hands-off, sometimes over-virtualised approach, particularly post-COVID. And the fact that that was partly because we needed to adapt meant that then, actually, literally everything we can do
(31:28):
can be done over a video consultation. And actually, why not make that a chatbot? And so now that the technology exists, it feels like a natural bait and switch: well, what's the point in us anyway? And whilst I do consider it unfair to call the CSP a prime mover in that, there's people that feel that the CSP could have done more, and that the CSP itself doesn't necessarily do as much for care
(31:48):
quality as it does for, say, care quantity. And again, it would be very passive-aggressive for me to imply that that's just others; that's certainly something you hear on this show from me. So within that, how much of that do you think's unfair? How much of that do you think you're putting right, if you do agree with any part of it? And if that is the case, how does the CSP intend to advocate for us human physios, to stop us being supplanted in any part of
(32:11):
our work?
Yeah, yeah. I think some of it's fair, some of it's unfair. So first of all, on this particular conversation, we are in conversation with the HCPC. You know, we're having those conversations about who can say what, who can call themselves what, where the accountability lies. So on behalf of members, we are making sure that that is being
(32:33):
addressed. I think there are lots of instances where we definitely demonstrate we are interested in quality. You know, we have the MSK standards that we brought out, and I think there is more than one instance where we demonstrate the need for quality.
(32:54):
I think it's really difficult when you're dealing with a really broad membership to address everything all the time. Just repeat the last bit of the question for me, Jack.
Well, it was that: how can we trust that the CSP is going to then advocate for us human physios, if indeed there are areas in which we could be at odds with AI
(33:17):
service provision, if that's fair there.
Yeah, yeah. No, I think that's fair. So, you know, the hands-on, hands-off debate, as far as I'm concerned, is not one: we are a hands-on profession. I don't think I've ever seen a patient that I haven't put my hands on, whether that's in assessment, in treatment or whatever that might be. You know, how specific we're being, all of those things can
(33:37):
be talked about in various guises. But the reality is we are a hands-on profession, and the CSP, and certainly myself, would view it as a hands-on profession. We're really privileged to be a profession that is allowed to put our hands on a patient, and I don't think we should ever lose that, because not all professions have that. Not all professions have that
(33:58):
trust from the public, and I think it's really important that we maintain it. So the CSP will absolutely be advocating for that. And that's why I think, as much as AI is developing and will be around, it's never going to take that. You know, I don't really like never-or-always statements, but a robot, I'm going to put it out there, will never replicate human touch in the way that we can.
(34:21):
So there will always be a role for us in that space, and the CSP will always advocate for that. And you know, at the moment, the way in which we see it used for service provision is for things that we ourselves, if we look at it from an evidence-based point of view, would give advice and education for, and probably not much else.
(34:44):
And that's how the AI is being used. Whereas where we're seeing population health travel, which is complex comorbidity, you know, we're seeing patients with more than one thing wrong with them, we're seeing more and more complexity in an ageing population. AI is not going to be able to deal with that in the same way that we can. It will help us in terms of our clinical reasoning, and the example you
(35:09):
gave before was a really good one, where it can help us be more sensitive to changes that might occur in an individual. But it will never replace the complex clinical reasoning that we're able to deliver, in my opinion.
Especially because of the way in which well-attuned clinical empathy is something that by design is very human, because of that kinship that we have with our patients.
(35:32):
And so, yeah, I really appreciate that answer. I think what's interesting as well is for me and you as individuals, regardless of the roles we're currently in. Over the years we've worked together, you and I have been people that have been quite bold in saying that we feel the specificity of our touch can sometimes be overplayed. And therefore people have sometimes suggested that. Then, when me and you advocate for our proudness, our
(35:54):
pride, sorry, in being a profession that can touch and should touch, people think there's a hypocrisy there. But I'll use some examples, and feel free to disagree or add to them if you like. I haven't seen a robot do a good Lachman's for a while, and we know how much more valid a Lachman's test is than, say, a sense of giving way that might be described, and could have been described, to a chatbot. An apprehension test for a
(36:15):
shoulder is one based on feel and perception, and needs to be done well by someone, otherwise you can get false positives if it's just yanked. And even then, I've not seen a robot do a good one of those for a while, because that's not about symptomology as much as it is response. And then the third thing is the reassurance that you can give someone just from the right body language and the right use of therapeutic touch: to have a literal hand on a shoulder when someone is feeling emotional from the social challenges that their
(36:38):
pain has brought them. Again, I'm looking forward to seeing chatbots do that as well as we can when we do it well. And so with those examples, and I'm sure there's many more, it gives me lots of heart. As long as we are not Luddite enough to not think there are ways in which we can use the tools and robots to assist us, we are not likely to be supplanted. I think this guidance just gives us some assurance in that
(36:59):
direction too. I love that it's going to be iterative. I love that yourself is at the helm; I've got a lot of confidence in that. And there's something that you mentioned earlier that I look forward to us holding you to account for, which is this idea that this could be a new frontier of member engagement, for you to keep your finger on the pulse of how it's actually being applied, and to involve those that might be at the cutting edge of it,
(37:20):
as well as those that are continuing to feel anxious or at risk, in part because of some of the history that's gone past, where they feel like they've not been represented and this was an inevitable conclusion. Let's try and get ahead of that; let's offer that reassurance across the membership. And what an exciting opportunity for us to use something that's new and novel to then have a new frontier in governance as well as education. So I've said a lot there, mate.
(37:42):
Feel free to disagree with any bits of it if you want to.
No, no, no. I think you've summarised that really well. I totally agree with your sentiment around the Lachman's, the apprehension test. You know, those are things that a robot is not going to be able to do. I think this statement of principles is there just for that. Members can use it as a guidance document. It will be iterative and we will be engaging with members on it,
(38:04):
on its iterations. I'm really excited about it. You know, there's lots of opportunity in this for us as a profession. And as a profession, historically, what we've been good at is adapting. You know, we have been good at changing what we deliver, where we deliver it, how we deliver it. And for me, this is just another point in our history,
(38:27):
you know. I imagine it's a lot like the first time anyone ever used an Excel sheet to add up a formula. They probably thought, oh, that's cheating, isn't it? But now we just do it, and we use Excel for lots of things. And I think we're going to get better at using it as a profession, and it won't be a big thing or as worrying in years
(38:49):
to come. So I think what I would say to the profession is: it's happening, it's going to happen, and it is an opportunity for us. The CSP will do everything it can for the members to mitigate the risks, but I think it's the members that are really going to realise the opportunities, and we want to see how that develops.
(39:10):
Yeah, like in clinical practice, validating concerns doesn't mean that we then end up saying, yes, of course you're right, we should all worry about it and just get panicked. You know, there are ways in which we can do this together. There are absolutely valid concerns in this direction and we need to address them head on. But simultaneously we need to work out how we can pivot together, rather than thinking that this can be a direct fight, which we'd inevitably lose, because that's where the technology is going.
(39:32):
So, yeah, much like there isn't an anti-calculator movement amongst accountants, I think it's smart for us to recognise that, as the medical validity comes in, for things to be accurate we need to recognise where our strengths lie and work out how we can integrate well with these technologies. And I think this is a great place for us to lead on. So we will keep in touch about it and many other issues.
(39:53):
Mate, thank you so much for your time today. I really appreciate it. Let's awkwardly wait for the outro music to play. See you later, mate. Bye.