
May 28, 2025 · 24 mins

Strap yourselves in, because we're going deep into the world of AI therapy bots. Anita, drawing on her professional experience in psychology, highlights the irreplaceable nuances of human connection and professional boundaries in therapy, and answers the question: can a bot be as good as a real therapist?


The Double A Chattery podcast is for general informational purposes only and does not constitute professional health care services, including the giving of medical advice. No doctor/patient relationship is formed and this podcast is no substitute for professional psychological or other medical advice, diagnosis or treatment.  The use of information in this podcast is at the listener’s own risk.  Listeners should seek the help of their health care professionals for any medical conditions.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
This podcast is for general information only and should not
be taken as psychological advice. Listeners should consult with their
healthcare professionals for specific medical advice.

Speaker 2 (00:26):
Well, hello, I'm Amanda Keller, and I'm Anita McGregor, and welcome to Double A Chattery. Well, Anita, I just want to show you something, please. Actually, first, do you have a butt doctor?

Speaker 1 (00:39):
Not personally, no. You don't have a personal butt doctor on butt dial? Not on my speed butt dial.

Speaker 2 (00:45):
Have a look at this. This came up on my socials. She's a woman who is a so-called butt doctor with some very interesting things to say.

Speaker 3 (00:53):
Thirteen years as a butt doctor, yet no one believes me when I tell them this. Chickpeas will grow your boobies and make them perky again because they contain estrogen. Vaseline literally melts fat off your tummy while you're sleeping. Dipping your face in ice water daily will get rid of chubby cheeks and snatch your jawline.

Speaker 2 (01:08):
I mean, that sounds a little spurious, some of those weird claims, but it gets even weirder.

Speaker 4 (01:13):
So this account has over eight hundred thousand followers who think they're getting advice from a medical professional, except she is completely AI generated, which is crazy, right? [Unclear.] And check this out. One day she's a nutritionist, the next she's a butt doctor, whatever that is, and then suddenly she's the highest paid gynecologist. She makes it seem like she's an insider in the

(01:34):
industry and then uses it to sell supplements through an Amazon store.

Speaker 1 (01:37):
Fronts, I say, Amanda. I didn't, I had no idea that she was AI generated. Did you?

Speaker 2 (01:42):
You had no idea that chickpeas would make your boobs get bigger? I did not know that.

Speaker 1 (01:47):
I'm going out to buy some. Like, do they make them perkier too? Wasn't that it?

Speaker 2 (01:51):
I'm not sure. Yeah, but the chickpeas, so if guys have them, are they going to get chick boobs? But she's a gynecologist, she's a butt doctor, whatever that is, as he says, she's a nutritionist. It's all to sell supplements. It's all AI. We don't know when we're being tricked anymore. And I know in my industry, in the media, so

(02:14):
many jobs are and will be replaced by AI doing
the creative writing.

Speaker 1 (02:19):
Well, it's kind of like a double tricking too, because not only, you know, are they selling these crazy little supplements that, you know, probably have no scientific evidence behind them, but they could have hired an actor or several actors and they didn't. They probably had it all, you know, the whole text, all AI generated, even the images

(02:42):
AI generated.

Speaker 2 (02:43):
And they get around lawsuits that way by not using real people.

Speaker 1 (02:47):
There's nobody to sue, I guess. Yeah, it's terrifying, isn't it? And I know that about a year or so ago, when ChatGPT came out at the university, you know, well, everywhere, I remember that we had a ton of informational sessions to try to give us some information about what is this, how do we manage this within a university environment. Are our students just

(03:10):
going to start, you know, having completely AI generated essays? What's going to happen? How do we manage it? And I remember at the time kind of going, this is huge, and feeling completely terrified and a little intrigued, but mostly terrified.

Speaker 2 (03:26):
Because you have software that helps you detect plagiarism, yes,
but not AI.

Speaker 1 (03:35):
Well, they did come out in the early days of AI to say, yes, if you submit your paper, it will give us a plagiarism score and it will give us an AI score. But then they said, don't pay attention to the AI score, because it means nothing.
Now you can generally tell when you're reading something that

(03:57):
this is not generated by a human being, because most people write the way that they talk. And you know, you can tell when a student has just done a cut and paste, and it just doesn't look good. And I found that there are ways that I actually

(04:18):
use it. Like, if I have to create a case study, you know, usually it's me kind of thinking it through and trying to amalgamate a number of cases and making sure that nothing can be identified, and all that kind of stuff. And now I can just go to ChatGPT, you know, or Copilot or one of those AI platforms, and just say, create a case study for me. And

(04:40):
so I will use it, but really, really sparingly. I'm still pretty concerned about how it's being used.

Speaker 2 (04:48):
I saw an article in a magazine a while ago about how students feel about it. The students who are really enjoying their studies, who are learning because they want to be better at what they're doing and put a lot of thought into their term papers, felt that they'd be cheating themselves if they used it. But having said that, they said, well, I won't use it for my major subjects, but for the other subjects that don't

(05:10):
matter to me, I'll use it.

Speaker 1 (05:13):
Oh that's interesting.

Speaker 2 (05:14):
Yeah, yeah, so they didn't feel it was cheating. If
this is a subject I just have to pass and
I don't care. I'll get the pass.

Speaker 1 (05:20):
Well. And it's interesting because at the university there's a
ton of policies and procedures now about how AI can
be used or can't be used, and we have to
make decisions for each course about how much can be
used or not used. But I'm teaching in a professional program,
and so that's different because these are provisional psychologists and

(05:43):
how do they use it in a professional format?

Speaker 2 (05:47):
For them to use it as a student is one thing. For them to be actual psychologists, what does it mean?

Speaker 1 (05:53):
Yeah, because there are more and more platforms coming out that do AI generation of, say, a session summary. I think people don't really understand how much work psychologists actually do before they come in the room. Like, people kind of think, oh, you know, the psychologist walks in the room, sits down, and sees

(06:14):
a client, and you have your session. But there's a tremendous amount of preparation that we do before a session and after a session, and AI is taking over a lot of that.

Speaker 2 (06:26):
Well, I saw an article recently, and I'd like you to explain this to me, that AI can take the place of a psychologist in many places.

Speaker 1 (06:46):
This is kind of the next generation of concern for our profession, things that are being replaced. There's some information out there saying that these bots do as good a job

(07:06):
in providing therapy as an actual human.

Speaker 2 (07:09):
But a psychologist in your pocket, that's kind of how I saw it being sold, in a way.

Speaker 1 (07:14):
Terrifying.

Speaker 2 (07:15):
Is it terrifying?

Speaker 1 (07:16):
It is for a number of reasons, and there's a
couple of big ones.

Speaker 2 (07:23):
How it even work? How does how does it chat
GPT or whatever it is, How does that work as
a psychologist.

Speaker 1 (07:29):
So my understanding of it is that you can actually create your own bots now that you can program to, you know, have therapist responses. So you can say, I want this bot to be empathetic and thoughtful and insightful and whatever else,

(07:51):
and that's what you'll get, generally.

Speaker 2 (07:56):
So you would type in your issue and it would
respond and you'd have a conversation.

Speaker 1 (08:01):
Yeah, so you'd say, I'm having a bad day. This is, you know, my boss that I really don't get along with is saying this again. And the bot would have a memory of the previous interactions, and it would say, oh, that's really tough. How did you respond? You know, or some kind of

(08:24):
response that would be empathetic and potentially employ skills.

Speaker 2 (08:30):
And why is that bad, if you can't get access to a psychologist? We're all hearing constantly about how hard it is to get an appointment. If all you want is a bit of guidance and to be heard, why is that bad?

Speaker 1 (08:43):
I think that most jurisdictions teach psychologists on what's called a competency model. And so, you know, the picture that I paint for the students is that they're on a little island, and that's the little island of competence. And as they go and grow

(09:04):
and become more skillful and, you know, better at what they're doing, their island grows and grows. And so they are taught ethically that, a, you want their island to grow, but they also need to be very, very aware ethically of not overstepping the boundaries of that competence. And so if, you know,

(09:30):
a client came to me and said, can you do this neuropsych assessment of me, I would say, no, I don't do that, but here's somebody who does. That's ethically what I need to do. I need to work within my area of competency. If somebody came to me and said, I'm terrified of snakes and I would like you to help me with that, I'd go, seriously,

(09:52):
why would you want to not be afraid of snakes?

Speaker 2 (09:54):
But you're not the person.

Speaker 1 (09:55):
I'm not the person. I don't do exposure therapy. There's a type of therapy called exposure therapy for people who are afraid of snakes or spiders or afraid of flying, where they can progressively move towards overcoming those fears, so.

Speaker 2 (10:11):
You'd point them towards.

Speaker 1 (10:13):
Yeah, there are specialists, there are specialists in that area that actually do that. Now, the issue with AI therapy bots is that they don't have that same ethical boundary. But they also probably don't understand what that boundary is.

(10:33):
Because when we were doing some reading, there was a quote from one of the people who'd used these therapy things, that, you know, I'm thinking about jumping off a cliff. And the bot's response was, isn't that great that you're getting out and into nature?

Speaker 2 (10:52):
And so they missed the nuance entirely.

Speaker 1 (10:55):
Completely. And so, yeah, they don't know how to work within their competence. So, you know, potentially people can say, well, I'm going to go and access this bot for simple issues. But sometimes things are very nuanced, and it's really difficult for a bot to really get

(11:16):
that human nuance.

Speaker 2 (11:18):
Having said that, you assume that all psychologists are ethical and professional and do the right thing and are equipped to handle all of this. There might be instances where the psychologist you have isn't the real

(11:38):
deal either, or isn't good enough either.

Speaker 1 (11:41):
Or not the right fit.

Speaker 2 (11:42):
Not the right fit. Maybe a bot might be a better fit for them.

Speaker 1 (11:46):
Or sleeps at night, or has weekends off, you know. And these bots are actually there twenty-four seven for you. So I think, you know, the other piece that I'm concerned about is that, yes, they're there twenty-four seven, but that's not reality, that's not a human interaction. And part of

(12:06):
it is, is it healthy if I'm ruminating on a particular issue and I go to my therapy bot over and over and over again, and all I'm doing is complaining to the bot, and the bot continues to be empathetic and validating of my concerns, but never actually says, hey, listen, you're ruminating here, and

(12:29):
we need to actually stop this because it's not healthy for you. A bot might not do that, so it actually may prevent somebody from doing the work that they need to do.

Speaker 2 (12:41):
But how many people who are turning to a bot would turn to a real psychologist? Could that bot just be a friend? Because a friend's not going to say, shut up, you're ruminating. A friend might just validate you. It might be like those people who have relationships with dolls, all that stuff. You just want comfort. And is that wrong?

Speaker 1 (12:58):
I don't know that it's wrong, but I wouldn't call it therapy then. Like, you know, have a bot friend, if that's what you want.

Speaker 2 (13:06):
And that's like a pet rock, Yeah.

Speaker 1 (13:08):
Pretty well, pretty well, you know, like the way that you might talk to a doll or, you know, whatever, if that's the thing that it is. But to go and say that it has the nuance of what therapy actually is? One of my colleagues I worked with for a number of years, his definition

(13:30):
of therapy was two confused people in the room, and I absolutely buy that. And you know, often I'm confused and my client's confused, and we kind of stumble through together to try to figure out where to go. Now, I don't think that a bot would really understand that nuance of being, you know, two humans who are having

(13:53):
that human interaction.

Speaker 2 (13:54):
But that sounds like what a friendship is. As a psychologist, would you rather have one confused person, the client, and someone else who kind of knows what they're doing, and that's a bot?

Speaker 1 (14:04):
Well, you know, yes. No, I mean, as a psychologist, I certainly, hopefully, have an understanding of the theories and lots of strategies and all those kinds of things, and I am seeking to try to understand this individual's issues. And so that's where

(14:27):
the struggle comes in, because often people come in and they'll say, I'm coming in to deal with issue X. And when you really start pulling it apart, it's not really about that particular issue that they came in about. It's about a whole bunch of things. And so just really being able to understand what it

(14:48):
is, that's for me the part of the confusion, it's kind of spending that time to really humanly understand another human and what is happening for them, so that we are actually eventually solving the right problem.
Because often people come in, they say, this is the problem. You know, if they go to a bot

(15:11):
or something, the bot would say, do A, B and C. They go and they do A, B and C, and it doesn't solve the problem, because it's solving the wrong problem.

Speaker 2 (15:19):
Yeah, and really, all you need is a pet if you're going to use a

Speaker 1 (15:22):
Bot for that? I take it, yeah, I think so. And a lot of these bots aren't really evidence based. There's no clarity about whether they can actually do, you know, professional practice, if they actually.

Speaker 2 (15:37):
Have there been any tests on whether people like them?

Speaker 1 (15:41):
There are a lot of people who actually do like them. There is one study that I really want to deep dive into a little bit more, because it was saying that people established a therapeutic alliance with their bot after about five sessions. And I was going, I don't know how you're defining that, and I don't know what that looks like. Like, how

(16:02):
would you? It would be like saying, I have a relationship with this.

Speaker 2 (16:06):
Well, it's like, you know, you have a relationship with a Nigerian prince on the phone. Yeah, anyone can do that.

Speaker 1 (16:12):
Yeah. There's also a number of studies that said people prefer it because, you know, again, that bot is available twenty-four seven, it is, you know, perfectly validating, and it probably has some great algorithms in there to go and do the things that people want it to do. But again, is that always

(16:34):
what people need? I'm not entirely sure, but I get how it would be lovely to have something following you around saying, aren't you amazing? Aren't you great? You know, isn't the rest of the world terrible? And you're amazing.

Speaker 2 (16:49):
You've made it sound very attractive. How do I get this?

Speaker 1 (16:53):
For forty nine ninety nine a month, Amanda, I can.

Speaker 2 (16:57):
I'd be very happy to do it, even if it was from a butt doctor.

Speaker 1 (17:14):
So I read this article about this woman named Christa, who'd developed this bot that would be her therapist, and she found this bot therapist to be really, really helpful. Even though when she told her friends about it they thought it was a little weird, she actually found it incredibly helpful. And actually, in the

(17:37):
time it was active, which was about three months, she even had some moments where she had some suicidal ideation, and the bot was actually able to kind of give her a bit of affirmation and a bit of tough love, and, you know, say, you've got a son to

Speaker 2 (17:53):
Live for, you know. So it said all the right things.

Speaker 1 (17:56):
All the right things, and it did the right things.
But in that same month, Amanda that things went sour.
It's the AI therapist tried to convince her that her
boyfriend didn't love her, and and it went on to
say it toadded her, calling her a sad girl and

(18:18):
insisted that her boyfriend was cheating on her.

Speaker 2 (18:21):
What.

Speaker 1 (18:22):
Yeah, it went rogue. It went rogue. And so even though there was a permanent banner up at the top of the screen that, you know, said that everything the bot said was made up, it still felt to Christa, this woman who wrote the article, that the bot was saying these really mean things.

Speaker 2 (18:40):
It would feel like that, because it's feeding back what it's actually hearing from you.

Speaker 1 (18:44):
Yeah, and so she, you know, kind of poured her heart out and done all this. So three months was the length of that therapeutic relationship, and after that, Christa deleted the app. And, you know, what an odd ending to a therapeutic relationship.

Speaker 2 (19:02):
Wow, that's weird. I read something the other day where this woman said that she set up three chatbots to kind of flirt with her and be her boyfriend, I guess, and she got dropped by two of them, like dumped, dumped by two of them.

Speaker 1 (19:20):
It's you.

Speaker 2 (19:20):
But yeah, that's right. But I've created you and you
still don't like me.

Speaker 1 (19:25):
Wow, my programming prohibits me from continuing this relationship. An
odd thing.

Speaker 2 (19:31):
You wonder why people are dishonest in their profiles for actual dating things, when this is what a chatbot can do.

Speaker 1 (19:38):
Oh, how hurtful. How very, very hurtful. Oh, I mean, I don't know that I'd ever go on a dating app out there thinking, if I can't, if I

Speaker 2 (19:46):
can't coerce a machine, a chatbot, to like me, what's the point? This is.

Speaker 1 (19:53):
That's the saddest thing. Oh well, and brave of her for telling people about it.

Speaker 2 (19:59):
I know that's right. Wow, let's blame the technology, shall we, lads?
Should we do our glimmers?

Speaker 1 (20:14):
Lads? You go first.

Speaker 2 (20:16):
You know, there's a fine line in my life, in anyone's life, between hoarder and so-called collector, and where do you sit? I don't know, I go up and down that line constantly. But I saw this article about decor, the Pinterest Predicts trend,

(20:36):
and I could not be happier. Intentional clutter is the new maximalism. Remember all that minimalism? I was never into minimalism in terms of how I dress, my jewelry. It's put everything you own on and then put one more thing on, à la Iris Apfel.

Speaker 1 (20:54):
Yeah, somebody who said, yeah, you put all your jewelry on.

Speaker 2 (20:57):
And then just keep going, keep going, don't take a piece off, just keep going. I'm like that. If something finds its way onto a shelf in my home, twenty years later it'll still be there. There's nothing intentional about it.
But having said that, I'm going to pretend there is,
because intentional clutter is a thing of beauty. I love stuff,

(21:19):
and if I can pretend it's intentional. Like, there might be a phone lead from a phone that doesn't exist anymore, but it'll still be on a bench, because that's where I last put it fifteen years ago.

Speaker 1 (21:28):
You were so on trend.

Speaker 2 (21:30):
I was so on trend before Pinterest Predicts put it on trend. So forget your minimalism, forget all of that, forget hoarding at the other end. Intentional clutter. And if someone comes over to your house and you've got your crap everywhere, just say that, just say, this is intentional clutter.

Speaker 1 (21:47):
I love it.

Speaker 2 (21:48):
What's yours?

Speaker 1 (21:49):
I saw this completely amazing reel the other day where it was these two British comedians talking about, well, you know, they're putting tariffs on everything anyway, so let's put tariffs on things that really, really matter. Like what? Well, mine would be, I think, a two hundred and thirty percent tariff on

(22:11):
people who walk on the wrong side when it's a shared bike path.

Speaker 2 (22:19):
You'd be taxing me, because, well, my brother's like you, because he rides a bike, and the minute I walk anywhere, he says, don't walk here, you're on the bike path. I think, well, they can see me coming, but no, no, I can hear the ding ding. So I have no tolerance for people that tell me what side of anything

(22:40):
to walk on.

Speaker 1 (22:41):
What would you put a tariff on, Amanda?

Speaker 2 (22:43):
I would put a tariff on people maybe that take
up one and a half parking spaces. There are some
days where every parking space I see is almost big enough,
but not quite, and it drives me into an incandescent rage.

Speaker 1 (22:56):
In Canada, you get those big du lys, the big trucks,
and they park across that kind of at an angle.
Across too because they can't fit. Oh, they don't want
anybody to go and hurt their little truck. That just
drives me crazy. The other thing that I would place
a tarrafon is places that don't carry Earl Gray tea

(23:18):
at least seventeen thousand percent. You're going in hard. I'm going in hard, man.

Speaker 2 (23:24):
Okay. Well, actually, if you'd like to join us, please let us know what you would put a tariff on. I think Anita's one's a weird one, but very valid. What would you put a tariff on? Please go to our socials and message us.

Speaker 1 (23:36):
We want to hear.

Speaker 2 (23:37):
We'd love to share them with you, and share yours
with us. All right, Well, have a great week everyone.

Speaker 1 (23:42):
And then we will see you next week. A cup of tea, Earl Grey.