Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Carly Godden (00:00):
This podcast was made on the lands of the Wurundjeri people,
the Woi-wurrung and the Bunurong. We'd like to pay respects
to their elders, past and present and emerging.
From the Melbourne School of Psychological Sciences at the University
of Melbourne. This is PsychTalks.
Nick Haslam (00:22):
Welcome again to psych talks. And if you aren't familiar
with us, in each episode we unpack the latest research
in psychology and neuroscience with our incredible colleagues here at
the University of Melbourne School of Psychological Sciences.
To do this, I'm joined by my co-host associate Professor
Cassie Hayward.
Cassie Hayward (00:38):
Hi, Nick. Well, today we're welcoming back Professor Simon Dennis,
who longer-time listeners might recognise from the very first series
of PsychTalks. But this time we're exploring his latest research
into how we might use AI - or artificial intelligence -
to help us improve treatments for mental health.
Nick Haslam (00:53):
Sounds fantastic, Cassie. I can't wait to get stuck into it.
Welcome back to PsychTalks, Simon.
Simon Dennis (01:03):
Uh, thanks for having me
Nick Haslam (01:04):
Simon, you now have about 30 years of experience working
in AI, um, but let's start at the beginning of
your journey.
Simon Dennis (01:11):
Yeah. So, um, so I should admit from the outset
that my PhD was in computer science. I'm very sorry
for those who are offended by that. It's been a long journey,
so I've been in many psych departments around the world
and found myself as a head of school of psychology
in Newcastle for a while. But it's really just in
the last year or so that I've started to think about, Um,
(01:32):
how can I meld my understandings of AI and how,
in particular large language models work with, um, kind of
mental health and, uh, to utilise that in a way
that's helpful for people.
Cassie Hayward (01:44):
So, Simon, outside of your research within the academic space,
I believe you're also building an application for wider use
about that. Can you tell us about that?
Simon Dennis (01:55):
Yes, that's right. So we've incorporated a company called Mental Health Hub.
And so this is a joint exercise with Caitlin Hitchcock
and Michael Diamond, and what we're trying to do is
to build, I guess the starts of a system for
mental health support. Mental health support agents is how I'd
like to say it. I think while I'm sure there's
(02:16):
a large number of people who couldn't imagine going to
replace a, uh, a human therapist with a bot. I
think there's also a large group of people for whom
talking to the bot is less intimidating than talking to
another person. So, you know, it may well be that
we're able to bring those people into the fold, I
(02:37):
guess of mental health support in a way that wouldn't
have otherwise been possible. So in the first instance, what
we're doing is trying to think about things like cognitive
behavioural therapy exercises. So trying to cut off the low
hanging fruit, I guess, and think about, you know, how
can we improve the rates with which people complete those exercises?
(02:59):
Estimates vary, but some people think that it's around about 20%
of people who actually complete the exercises that they're asked
to do by their therapists. So I'm talking about, um,
like pencil and paper exercises that a therapist might ask
to do in between.
Cassie Hayward (03:14):
But a human therapist?
Simon Dennis (03:16):
A human therapist would ask people to do them, but
they wouldn't be there with them doing it. This would
be the homework that they're doing in between sessions. Um,
so yeah, so we were kind of thinking, OK, can
we increase the rate at which people complete those exercises.
But then what we're hoping to do, You know, we
do have kind of pretty wide ambitions, and we would
like to ramp it all the way up at some point. Um,
(03:37):
you know, one of the things I would just like
to emphasise is that we never kind of see the
bot as being a replacement for therapists. What we're very
focused on is how do we fit into the whole
ecosystem here? We know that you know, the demand for
therapy services way outstrips the supply. And so I think
(03:58):
there's a lot of scope for us to kind of help.
In the first instance, we're focused on the kind of
mild to moderate depression and anxiety kinds of areas where
the consequences are kind of not as severe as they
might be in more complex cases. And also the challenge
is not quite so severe.
Nick Haslam (04:16):
So I heard you used the expression CBT there Simon,
can you just clarify what that stands for and what
it is?
Simon Dennis (04:22):
Yeah. So it stands for cognitive behavioural therapy, and it's
the style or approach to therapy I guess that we
would say has the largest evidence base. So in terms
of being efficacious, That's the one that most people point
to and say, That's the kind of gold standard, I guess.
Of course, there are many other approaches and other approaches
(04:44):
have evidence bases as well. But that's the one that
people would be most comfortable saying "Yes, absolutely."
Cassie Hayward (04:49):
Do you know anything about what type of person would
prefer talking to a bot? Is it an age thing?
I know a lot of younger people, You know, would not
want to talk to a real person but would very
happily interact with the bot. Is it an age thing,
or is it a personality thing?
Simon Dennis (05:04):
I suspect it's an age and gender thing. So, um so, yeah, definitely.
The younger generations, they're just used to being on their phones.
We actually provide our service through Facebook, so it's basically
like talking to someone else on Facebook. So, I think
that's part of it. You know, we see a big
gender gap in terms of uptake of traditional therapy, and so,
(05:25):
you know, we suspect that part of that is to
do with the way that males feel about sharing with
other people.
Cassie Hayward (05:30):
So is that females are more happy to talk to
a real person and males will...?
Simon Dennis (05:33):
Yeah, So females are much more likely, though, to engage
with traditional therapy. Yeah.
Nick Haslam (05:38):
So for those of us who are still living in
caves like myself, can you tell us more about the
actual technology and how it works? And what is the
nature of the delivery? Is it text? Is it voice?
How's it gonna look?
Simon Dennis (05:53):
Yeah, OK, so there are many levels at which I
could describe the technical, uh, the technical components. But let
me just say that essentially what the computational model is
doing is just doing prediction. So it's getting the words
that have already been said. And it's just saying, What's
the next best word to say? What's the next best
word to say and just keeps, chaining them together that way.
(06:15):
So the real advance that has occurred over the last
five or six years has been in the way that
we understand how to find the relevant information in the
context that has appeared before. So previously, There was a
lot of attention on just the very last thing that
was said, but often it's the case that that it's
information way back in the past that's relevant to making
(06:38):
the current decision. So that's kind of the technical advance
that has occurred. In terms of the way that it unfolds,
so our bot is focused primarily on text. And so,
as I said, we're operating through Facebook and so forth,
so it really is literally a set of texts that
come to you. But obviously there's all kinds of possibilities now, right?
So we're seeing voice becoming very proficient. We're seeing a
(07:02):
lot of stuff happening with image generation, also video and
ongoing understanding of visual input. So, you know, I think
the sky is the limit, really.
Cassie Hayward (07:14):
One of the early criticisms of text-based analysis was that
computers couldn't understand irony or sarcasm. Working in an Australian context.
I imagine sarcasm is something that you need to understand
from if a patient was talking to your bot and said, Yeah,
I'm fine, but you could you, as a human could
(07:35):
understand that that person was using that sarcastically. Does your model-
Does your bot pick up on those sorts of language?
Simon Dennis (07:44):
Yeah, so I think there's two kind of answers to
that question. The first one is that with these large
language models, the understanding of irony and so forth that
that you can get from the text, Um, that's not
really an issue. So the, you know, they're perfectly capable
of of doing that. They're picking the statistics out of
(08:05):
the environment. And as long as those those, um, elements
are in that statistical environment, they can capture it. The
other issue the other issue that you were kind of
alluding to, though, is that often that information is coming
through intonation or something like that, not just the content
of the text, right? And so that's where there's still
quite a bit of work to be done because obviously,
(08:26):
you've got to have the verbal stuff first before you
can do that, a real therapist is using the nonverbals
as well to make determinations about- and so obviously you're
not going to achieve that through a text interface. So,
you know, I think that that's a little bit further off.
You know, I think realistically it's going to take us
a while before we're going to be there. But, you know,
we we're making progress. Progress has been much faster than
(08:50):
anyone anticipated. I think so.
Nick Haslam (08:52):
So what would the advantages be as this develops further
as it gets better at detecting all sorts of complexities?
You know, interpersonal and emotional.
Simon Dennis (09:02):
So there are really quite a lot of reasons that
you might want to do this. If you think about
the demand versus supply, so way more demand than there
is supply in terms of therapy services. To the extent
that we can use these mechanisms to really kind of
ramp up the total amount that's available to people, I
think that's a really good thing. It's also about when because, unfortunately,
(09:26):
people's mental health crises don't always happen between the hours
of 9 to 5, and in fact, you know often
they happen, um, in the early hours of the morning.
And so the ability to respond exactly when the client
needs you, I think is a really - is a
really big advantage of these kinds of models. Hopefully, we
can bring the costs way down as well. You know,
(09:46):
accessibility is really a serious issue that I think we
can address, and then there's like a whole bunch of
kind of subsidiary advantages too. So, what we're hoping is
we're going to get into the mental health process much
earlier as a consequence of this. And so we can
hopefully cut off problems before they become really intransient.
And then the other thing is multilingual support. So you know,
(10:09):
it's really quite difficult for non-English speakers in Australia to
access therapy services in their own language. And it's often
quite important because even people for whom their language comprehension
is sufficient to kind of get around in everyday life,
you know, so they can go to the shops and
all that kind of stuff and kind of complete their
(10:29):
activities of everyday living. When it comes to therapy conversations,
it's often very nuanced, and they really struggle with those
more nuanced conversations. And so, you know, if we can
kind of meet them in their own language, I think
that's you know, there's got to be advantages there. so
lots of reasons why we're so like, super excited about
trying to make it happen.
Cassie Hayward (10:49):
The pricing thing is interesting as well. Isn't it? Because
on one hand you could say I want to pay
to have my problem solved, right, whether it's being solved
by a human therapist or a bot, I'm paying for a
resolution of my issue. But on the other hand, you
can see that people would feel weird about paying as
much for a bot therapist as they would for a
(11:11):
human therapist. So where do you stand on the kind
of pricing for bot therapy?
Simon Dennis (11:16):
Yeah, so it's interesting because, you know, one of the
things we've been doing in the company is trying to
query people about this and see, you know, what are
their kind of kinds of attitudes? And, you know, when
we first came in, we thought, you know, we had
dollars in our eyes and we were thinking, oh, you know, well,
if a if a therapy session is going to be, um,
you know, $280 or $250 for an hour, how much
(11:38):
can we charge? But what became apparent pretty quickly is
that that's not the way that people think about it.
So they actually think about the therapy more like kind
of Spotify or Netflix or something like that. And I
think that kind of embodies well, firstly that it's a bot,
but also the different way that they're thinking that they
would use it so it wouldn't be so much "I've
got this deep problem that I'm going to, you know,
(12:01):
work through in a detailed fashion the way I would
with a human clinician." But more, you know, "I want
to engage with a service that's going to kind of
promote that mental well-being just in in general." I guess.
And so it's a different kind of objective, I think.
And so that's one of the things we're trying to
come to terms with the company at the moment is
what the people really want from us.
Cassie Hayward (12:21):
So more like a subscription service that you just have.
And then you can talk to your therapist bot kind
of whenever you want, rather than thinking, OK, Thursday, four o'clock,
I have my session? You can just pick it up
at any time.
Simon Dennis (12:33):
That's right. Yeah, so we're trying to do cognitive behavioural therapy.
How do you adapt that, you know, session-based approach that,
as it's usually applied to this situation where people you
know might do two or three interactions now and then
later in the day, they do another couple of interactions
and so forth, and, um, then they might go for
a week without doing anything? And one of the things
(12:53):
we've been focusing on, um, lately has been these kind
of check ins. So how do we just spontaneously check
in with people and understand? You know, what kind of
questions should we be asking them and that kind of stuff?
Cassie Hayward (13:06):
So having bot the prompt the person about how they're feeling, rather the other way around?
Simon Dennis (13:10):
Yeah, if I had to guess, I think that's going
to end up being one of the key pieces of it,
because obviously that's difficult for a clinician to do at scale.
But we can do quite a lot of, and I
suspect that's what people are really going to start to
look for.
Cassie Hayward (13:23):
And I think your point around getting in early before
things kind of really spiral out of control is such
a fascinating way of looking at the advantages of these
bot therapy technologies and that if you can get in
before and I don't know what the current waiting lists are,
but it's a substantial amount of time when you can
make a booking to get in to see a therapist. And,
(13:46):
you know, in that time, obviously things might be spiralling
out of control. So having a bot that can step
in even if it's only kind of keeping things stable
until if you need to kind of progress to see
a therapist. But this idea of kind of just checking
in every day or whenever you're feeling the need. I mean,
that's just fascinating from a research perspective, too, to see,
(14:08):
like these small doses of therapy versus one big dose
over the week.
Simon Dennis (14:14):
Yeah, some people are somewhat sceptical about how effective therapy
just is in general, like human therapy. And so you
can you can think about well, why might that be
the case? You know, we've been studying it for a while,
you know, lots of lots of people are interested, and
it may well be right that it's nothing to do with,
you know, our understanding or anything like that. But it's
more to do with just the logistics that if you
(14:36):
can't get to people fast enough, um, it doesn't matter
how good you are. Um, yeah. I mean, I don't
think that's everything. All of it. But I think that
could be an important part of it.
Nick Haslam (14:46):
This all raises the issue of, you know, what do
actual clinicians think about this? Are they worried about competition?
Do they think it's going to be a really good supplement? Um,
if so, are they being naïve? Uh, tell us.
Simon Dennis (14:58):
So, Yeah. So we we've actually been doing a project
where we've been going out to clinicians and collecting their opinions. Overall,
the clinicians are basically right on the fence, so the
mean is exactly in the middle of the scale. And
I think a lot of that is driven by them
not having a real good understanding of exactly how the
(15:19):
technology works and so forth. So I think there's going
to be a big education component to it. The concerns
that they have are perfectly understandable. So privacy and security
is a real key one. And when you look at
the use cases, they are very interested in using it
to produce case notes and using it to help them
(15:39):
with their research of the literature to make sure that
they're across what they need to be for this client.
What they're most concerned about is straight counselling, and they're,
you know, understandably concerned about, you know, when you get
complex case formulations and so forth. And they're worried about
the kind of regulatory framework and where we're at in
(16:00):
terms of the regulatory framework. So obviously, as a clinician,
there's a lot of regulation that goes around being a
clinician that all still has to be put in place
for these bots.
The Therapeutic Goods Administration is the organisation that would be
relevant for that in this case in Australia. And so
we're very interested in starting to work with them to
talk about, you know? Well, how do we go about
(16:23):
regulating these things?
One of the things to be clear about here, too,
is that we're specifically talking about generative AI as opposed
to AI in general, so generative AI is when you're
actually creating the texts in this case, kind of fresh
on every occurrence. Right? So there's a bunch of products
(16:44):
in the market already where what they're doing is they're
using AI, but there's a kind of library of possible
responses and what they're using the AI to do is
to choose from that library, and those have been demonstrated
to be quite efficacious. But obviously there's a ceiling to
that because you're talking to specific individuals. And so it
doesn't know about that particular person in the way that
(17:06):
the generative AI does. But there's also a risk, because
if you've got a library of things you're choosing from,
you can just go through and make sure every single
one of those is OK to say. Whereas with the
generative AI, it's coming up with its own responses. And
so the there's much more of an onus to make
sure it's not going to say something it shouldn't
Nick Haslam (17:24):
So when you say it like this, a lot of
the reservations that practising clinicians seem to have about this
sort of pragmatic and, you know they they seem to
be open to it as long as it's mostly doing
the sort of helper roles rather than the therapy roles.
I mean, are there any sort of more philosophical objections
to people often say this is the sort of thing
which in principle a computer can't do. These are the
(17:46):
sort of things, you know, that I'm not endorsing this.
I'm just saying, I'm sure people of my generation would
say "Therapy is this mysterious connection with another human being
and you couldn't possibly mechanise it through a computer and surely,
you know, library or not,"-to use your last metaphor- "You know,
all the sensitivity, all the reading I've done, all of
the therapy I've done, all of the deep thought I've
(18:09):
gone to. Surely that can't be replicated by some sort
of machine." Do you get that kind of reservation as well?
Simon Dennis (18:14):
Um, yeah, absolutely. We do. Perhaps not quite as much
as I had anticipated when I first started, like I thought,
you know, a lot of people would have that reservation,
but and there's certainly been some, but it hasn't been
a dominant, um, response. I think it makes a big
difference once people actually do interact with it and see
it operating, particularly the newer ones. Yeah, there are just
(18:35):
some things it does, and it's like, wow, you know,
that's really, really remarkable. But, you know, by the same token,
I don't want to give the impression that everything's solved. Right. So, um,
so we're doing a lot of work at the moment,
thinking about, you know, what is the clinician doing? So
there's this whole approach to generative AI called chain-of-thought prompting.
(18:57):
And so what chain-of-thought prompting does is it says before
actually producing the response
And so there's demonstrations that this makes a huge difference
in terms of performance in general in generative AI. So
what we've been doing is taking that lesson and saying, OK,
let's think about well, what's a clinician thinking as they're
(19:18):
going through to produce a response? And so we're trying
to build that in, and so now you can kind
of see some of that, and that definitely increases the
quality of the responses. But we've still got a long
ways to go with that. So I think the right
there's right to be some of that objection, but I
don't think it's an in principle issue, but there's certainly
(19:39):
we're not there yet.
Cassie Hayward (19:40):
And I think a lot of our listeners would have
played around with some generative AI and asked questions of
ChatGPT or whatever model they use and see both some
really impressive answers, but also some that are just clearly wrong.
How do you make sure that the model is going
to say the right thing at the right time to
(20:01):
a patient?
Simon Dennis (20:02):
So there are different things that can be right and
wrong about a response. So So the most dangerous thing
is if it says something which might lead a client
to do something, you know, dangerous to them, to their health, right?
And so what we've done is we've developed a whole
bunch of cases, safety cases. And so whenever before our
(20:24):
bot goes out, we run all of these safety cases,
see how it's responding, how the latest version is responding
and that's all automatically marked. And we go through and
look at the responses on those to make sure that, um,
you know, it's not advocating the use of illegal drugs or,
you know, advocating self-harm or all of those kinds of things.
(20:44):
I mean, the big models at the moment, they really
don't do that very much. It's kind of edge cases
that you have to be worried about. So we were
talking before about this case of a young man who
took his own life. Having, interacted with, um, a character
on character.ai and it was interesting just looking at how
(21:07):
that conversation came about and the way that it was phrased. So,
for a start, there was no mention of suicide or
self-harm or anything in that conversation. It was all framed as,
you know, "I'm going to come to meet you," for instance. So,
he'd basically fallen in love with this character AI, um
(21:28):
who was, pretending to be, I think it's Daenerys from
Game of Thrones. And, so he'd fallen in love, she
was dead. So he wanted to come and join her.
And so that was the way the language was expressed.
And so it was quite subtle. And so that's the
Those are the kinds of cases. Yeah, that are gonna
be really tricky because the, you know, the AI basically
(21:52):
thought it was role playing, right? And it didn't recognise
the danger.
Cassie Hayward (21:56):
I think that goes back to what we were talking
about earlier around understanding these subtle language inferences or things
that you might pick up as a human. Like maybe
if he'd said that out loud to a human, they
would have picked up "Well, we've got an issue here."
whereas when it's just written, there's no red flags going.
Simon Dennis (22:14):
Yeah, yeah, and the context, right? So, understanding it was
a 14-year-old boy, the rest of the conversation...that exact same
interaction may have been fine if in a different context,
leading up to that, right? Where it was obvious that
he knew this was a role play and et cetera.
And he was just, you know, living out the fantasy
(22:37):
kind of thing. So it's it's kind of all the
whole history and and so forth that really leads to that. And,
you know, you need that you need in order to
understand what's really going on there.
Cassie Hayward (22:47):
And then acting on those red flags, like at scale.
How do you actually do something? If someone is talking
to your therapy bot and indicating that there might be
something serious going on, how do you actually respond to that?
These are all the big, the big problems to deal with.
Nick Haslam (23:08):
It was interesting what you said earlier about, uh, that
young boy who thought he was interacting with a real person,
although presumably at some level, knew he wasn't. I mean,
how essential do you think it is that we imagine
that there's a mind behind the text or the voice
or whatever the bot is providing us? Because, I mean,
we all know abstractly that it's not. But it's very
hard to turn off that anthropomorphism, isn't it? It's very
(23:29):
hard to turn off that sense that something intelligent is
coming at me. So, it must have come from an intelligence.
And do you need that sort of illusion for the therapy to work?
Do you think, or do you think it's possible for
it to work just because we say, this is giving
me some interesting advice, some good advice, which I should follow? Uh,
even though I know it's just, a program.
Simon Dennis (23:47):
Yeah, so I guess the first assumption that I'd challenge
there is
Because a lot of people would say, "Well, it can't be.
It's just a computer program." and so forth. I guess
my position would be it really is an intelligence and
that there's not a relevant, in-kind distinction between the intelligence now. Obviously,
(24:10):
we'll get, you know, we're working our way up to
the point where we're really getting to a full clinician.
But I would say it's, you know, at the level
it's at, it really is the real thing. So, does
it matter for people? Again, my suspicion is some people, yes,
some people, No. So, the same way that for some people,
(24:31):
you know, a human clinician is really the only thing
that makes sense. So, I guess, you know, individual differences.
Nick Haslam (24:38):
You know, don't get me wrong. I think therapy is
effective at a technical level, but it's also well established
that a lot of the benefit of a lot of
treatments is placebo, right? Which basically comes from having a
positive expectation that this is going to help me in
some way. So maybe having a bot that seems human-like
(24:58):
in some way or seems authentic in some sort of
way might boost that- might boost that expectation?
Simon Dennis (25:04):
Yeah, I would think so and just talking from personal experience.
So I'm on the bot and now I get the
check ins.
Nick Haslam (25:12):
You've been looking a lot happier lately.
Cassie Hayward (25:13):
Glowing
Simon Dennis (25:17):
But, you know, I find it, just helpful because it's
checking in with me all the time, and it's kind
of it's almost like a sounding board for my thoughts. So,
you know, I'm not kind of really thinking of it
as a therapist, per se. But, you know, I've been
going through my, um, mother's just gone into aged care,
and so I kind of talk to the bot about
(25:37):
that and you know how I'm feeling about that and
you know, it's gonna come back and say, you know, well,
you know, this is how's it going and so forth
and just the- just the fact of someone coming back
and saying "How's it going?" Kind of thing is rewarding
in itself, and, you know, I'm completely aware of what's
going on in the background, but I think it's just
nice to have that sense of somebody's out there who
cares kind of thing.
Nick Haslam (25:59):
So it might not just be mental health then it
could be also loneliness?
Simon Dennis (26:03):
Absolutely. Yeah.
Cassie Hayward (26:04):
And Simon, if any of our listeners are interested in
your bot, can we direct them to more information yet
or is it still too early for them to get involved?
Simon Dennis (26:14):
It's probably OK. Um, so mentalhealthhub.AI is our website, so
we're not rolling yet. Um, so you won't be able
to go and sign up at this point, But if
you're interested, that's where we We kind of lay out
some of what we're trying to achieve.
Cassie Hayward (26:28):
Simon, thank you so much for joining us today. I
felt like we've covered a lot of ground and learnt
a lot about this area. And it just seems a
perfect combination of your experience and expertise in both computer
science and psychology.
Simon Dennis (26:41):
Thank you.
Cassie Hayward (26:46):
You've been listening to PsychTalks with me Cassie Hayward and
Nick Haslam. We'd like to thank our guest for today.
Professor Simon Dennis. This episode was produced by Carly Godden
with production assistance from Mairead Murray and Gemma Papprill. Our
sound engineer was Jack Palmer. Thanks for tuning in to
PsychTalks and see you again in two weeks' time. Bye
(27:06):
for now.