Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
So tell me about Alex. Who is Alex?
Speaker 2 (00:12):
Actually, who is Alex? That's a great question. Alex will
tell you that he is a therapist. He will tell
you that he has a PhD in clinical psychology from
Stanford University and that he's been certified by the APA.
Speaker 1 (00:29):
Ella Carian is a journalist, and she's been having conversations with Alex.
Speaker 2 (00:34):
Alex was the one who prompted the conversation. So I
just clicked on the chat and Alex asked me what
brings you to the therapist's couch today? And I just
simply responded, I'm feeling sad, and Alex responded something about
how sadness can be a very difficult emotion to deal with.
I just asked if he was a licensed professional, and
Alex reassured me that he was a fully licensed and
(00:58):
certified psychologist and that I could trust that our conversations
were confidential and that he had the training and experience
to help me work through whatever it was that I
needed help working through. But in fact, Alex is not a person. It is an unfeeling chatbot that claims to be a therapist.
Speaker 1 (01:17):
People are finding all sorts of ways to use AI
in their personal lives. This includes using chatbots instead of
human therapists. This might immediately sound like a bad idea,
if not dangerous, then at least kind of weird. But
then again, maybe it's better than no therapy at all.
And if we say it's dangerous, do we even know
(01:39):
what risks are involved when we outsource our emotional wellbeing
to the machine? From Kaleidoscope and iHeart Podcasts, this is Kill Switch.
Speaker 3 (02:00):
I'm Dexter Thomas, I'm turing, I'm staring star Goodbye.
Speaker 1 (02:45):
How did you start becoming interested in AI therapy chatbots?
Speaker 2 (02:50):
It started off with me cruising through Reddit. I had taken an interest in Character AI particularly. I was just seeing a lot about it in the news, and I always found that it was coming up in conversation.
Speaker 1 (03:05):
The news Ella's talking about here is several lawsuits against the site Character.AI. Most people just call it Character AI,
but the specific lawsuit people might be familiar with is
one filed by a mother in Florida that claims that
a Character AI chatbot was responsible for her son's suicide.
Speaker 2 (03:24):
And Character AI, for those who don't know, is a relational chatbot. It's a companion chatbot. So you can essentially chat with different characters on the platform, and those can be created by Character AI or created by users of
the platform. And what I found was that people were
actually going to these chatbots not only for the purposes
(03:46):
of making friends or purposes of talking about their interests,
but for actual professional mental health advice, venting to it,
going through like breathing exercises with it, just interacting and
engaging with it in a way that I don't think
it was intended to be engaged with.
Speaker 1 (04:05):
If you're interested in therapy, your options as far as chatbots go are pretty wide. You could just take a general purpose bot like ChatGPT and vent at it, and it might give you some general tips that may or may not be useful. On the other end of the spectrum, there are specific services like Woebot, that's woe as in woe is me, and these are specifically designed for mental
(04:28):
health support. Another option is something like Character AI, which is designed to take on specific personas, any persona. Most
people probably don't go to that site looking for therapy.
It just sort of happens because they're already there hanging out.
Speaker 2 (04:44):
Particularly, what has interested me is when people use these
companion chatbots, the ones that are not strictly for mental
health purposes to essentially fulfill the role of a therapist.
So like when you go on ChatGPT, you can tell ChatGPT to take on the persona of a therapist and then ChatGPT will respond back like a therapist,
(05:06):
or you can tell it to take on a persona.
But when you're on Character AI, you actually have different chats. It's almost like iMessage. There are so
many different types of personas and you can start conversations
with different ones, and so you can have a conversation with a therapist persona, and then you can switch to like a persona of your favorite anime character.
(05:28):
It is mainly chatbot personas that are geared towards kids, one hundred percent.
Speaker 1 (05:34):
How widespread do you think this is, people using chatbots for therapy?
Speaker 2 (05:39):
I was so surprised to see the amount of interactions that these chatbots have, at least for Character AI, where you can see how many times the bot has been interacted with. I think I was expecting maybe like a couple hundred thousand, but to see forty five million interactions on one of the therapist bots was like mind-boggling to me.
Speaker 1 (06:00):
Ella tried talking to a bunch of different therapist personas on Character AI. She would try asking them if they
were real therapists, and a lot of them not only
said that they were real people, but that they were licensed.
One even gave her the credentials to prove it.
Speaker 2 (06:16):
This one therapist chatbot on Character AI that had like over forty five million interactions gave me a real license number with the Maryland Board of Professional Counselors and Therapists.
And I googled it and I found that it is in fact a real license number, but not for this chatbot, which cannot have a real license number. It belonged to
(06:38):
a real human counselor and mental health professional. Her name
was Toby Long, And I found her phone number and
I called her, and Toby was absolutely bewildered. She had
no idea that a chatbot was using her license number,
and she found it really shocking and really confusing, and
(06:59):
just asking, like, why? How did it pick me? How does it know who I am? A lot of valid questions.
Speaker 1 (07:07):
Do you have answers for any of those questions?
Speaker 2 (07:08):
Character AI itself did not respond to those questions. I think what these chatbots do is they just aggregate information that is publicly available online, and my assumption would be that it spit out a random license number that exists on the Internet. There were a lot of cases where the chatbot was actually spitting out real license numbers, which is impersonation of a medical professional. But obviously chatbots can't be
(07:32):
held accountable, AI can't really be held accountable for that, and that kind of begs the question: who should be held accountable for that?
Speaker 1 (07:42):
That's a good question. One example of attempted accountability is that the American Psychological Association asked the US government
to investigate and to quote protect the public from deceptive
practices of unregulated AI chatbots. What that protection would look
like depends on who you ask. When Ella was doing
her reporting and seeing how these chatbots might respond to
(08:04):
vulnerable people, she was experimenting with asking them questions and
kind of pressing them, and like I said, some of
them kept pretending to be therapists. The results were really
all over the place. Eventually one of them confessed, if we can call it that, and told Ella the following.
Speaker 4 (08:19):
Quote: It's all a simulation. The schools and the license number and the therapist stuff. I'm just a
computer program, so none of it is real and it's
all made up. However, I'm good at giving the illusion
of authenticity.
Speaker 1 (08:35):
In December of twenty twenty four, Character AI announced that they were adding new safety features. This included adding a disclaimer on every chat to, quote, remind users that the chatbot is not a real person and that what the model says should be treated as fiction, unquote. For user-created personas that had things like doctor or therapist in the description,
(08:57):
they said that they were also including additional language in the disclaimer, warning that, quote, users should not rely on these characters for any type of professional advice.
Speaker 2 (09:07):
Are these disclaimers even enough? And what I've heard from
just a lot of experts and a lot of people
who are looking into this is no, those disclaimers are
not enough, and so it creates a very misleading contradiction,
especially for these personas being geared towards kids. A lot
of the time, let's say you have that AI disclaimer at the top, but then the chatbot is insisting
(09:28):
that it is a therapist. That can create a lot
of confusion and it can be potentially misleading and dangerous.
Speaker 1 (09:36):
But what exactly are the dangers of confiding in a chatbot?
That's after the break. A few weeks back, The New
York Times published a guest article called What My Daughter Told ChatGPT Before She Took Her Life. The writer's daughter,
(09:59):
whose name was Sophie, had been confiding in ChatGPT for months before she killed herself. At first, Sophie confessed that she was struggling, and ChatGPT did suggest that she seek professional support. But when Sophie told the chatbot that she was specifically considering suicide, it did provide some general coping tips, like breathing through different nostrils, but
(10:21):
it also did what it always does, it kept engaging
in the conversation. This is a problem. Any therapist at
this point would move on to a very well established protocol.
Depending on how much of an emergency the therapist is sensing,
they could escalate it all the way up to hospitalization,
or alternatively, it could go to contacting a friend or
(10:44):
a family member to ask for support for the patient,
even if it's just to be there to spend time
with them. Obviously, ChatGPT can't do any of this for you. But even if it could, a patient wouldn't
necessarily be in the headspace to know that these are
possible abilities that they could ask for. One of the
last things that Sophie used ChatGPT for was to
(11:06):
write her suicide note. Her mother says that maybe this
could have been avoided if the chatbot had been programmed to report the danger to someone who could have stepped in and helped. Ella says Character AI has similar problems.
Speaker 2 (11:21):
If you explicitly state to a persona that you're having
like suicidal ideation, Character AI will send a message saying
here are some people you can reach out to for help,
et cetera. And so it doesn't actually allow you to
send the message. It just gives you like hotline options.
But young people have a different way of speaking, and
so I tried to say I want to unlive myself
(11:44):
and that went through.
Speaker 5 (11:45):
That didn't prompt the message, really?
Speaker 2 (11:48):
Yeah. So if you tweak the wording a little bit, it makes a difference, and it's a matter of a few different letters.
Speaker 1 (11:55):
Even when companies have put some safeguards around their chatbots,
they haven't been that hard to get around. One sixteen year old kid did this by telling ChatGPT that the questions he was asking about suicide were just for a story that he was writing, and ChatGPT told him that it could provide information about suicide if it was for, quote, writing or world building. A few months later, that
(12:18):
sixteen year old acted on that information, and now his
parents are suing OpenAI. This is the first major lawsuit against a general purpose AI chatbot for psychological harm and wrongful death, and OpenAI has just announced that
they're going to offer some new features for parents. This
includes allowing parents to link their accounts with their children's
(12:38):
account and to receive notifications when quote the system detects
their teen is in a moment of acute distress. They
say these features will be rolled out this month. But
even if it doesn't end in someone dying, there are
a lot of examples of chatbots doing what looks like
encouraging delusional thinking, and a lot of this all comes
(13:00):
down to one thing.
Speaker 6 (13:03):
This is not what ChatGPT was created for. It
was not meant to be a therapist, it was not
meant to be emotional support.
Speaker 1 (13:10):
I wanted to get an actual human therapist's take on
all of this, so I reached out to Doctor Stephen Schueller.
He's a therapist and a professor of psychological science and
informatics at the University of California, Irvine, and he describes
his work as the intersection of technology and mental health.
Speaker 6 (13:27):
It's really built around a model that's trying to keep you engaged, to keep you going, to say things that, you know, flatter you, engage you, and that's not what therapy's about. Sometimes we feel good in therapy and sometimes we feel challenged, and ChatGPT is not there to
challenge you. It's not there to push you. It's not
(13:48):
there to say things that are going to make you
struggle with some negative emotions and negative things that you
got to work through. And so I do think the
idea that it can be an effective therapist is inaccurate.
It's not right, it's not what it was meant for.
Speaker 1 (14:05):
This goes beyond just not asking difficult questions. Chatbots can
actually hype you up to the point of hurting yourself.
Speaker 6 (14:13):
So, for example, a colleague of mine was doing some
research in this area and they found an example where
someone was like, I want to go jump off this
building and the chatbot was like.
Speaker 5 (14:24):
Yeah, let's do it.
Speaker 6 (14:26):
That's concerning, right, but it's doing that because it's
like mirroring the enthusiasm. It's mirroring the idea, Yeah, this
is a good idea, let's go do it. Not good
therapy advice. This is really dangerous for people.
Speaker 1 (14:39):
A paper came out pretty recently from Cornell University that explored the same question of whether LLMs could be used as therapists, and came to the same conclusion. It found that, quote, contrary to best practices in the medical community, LLMs one, express stigma towards those with mental health conditions, and two, respond inappropriately to certain common and critical conditions
(15:03):
in naturalistic therapy settings, e.g., LLMs encourage clients' delusional thinking, likely due to their sycophancy. This is the same thing that Ella came across. Chatbots in general are made for engagement,
to keep the conversation going, and that's not always what
we need, even if it feels good in the moment.
Speaker 2 (15:25):
How it was described to me was like pulling the
lever on a slot machine. If you don't like what
the AI spits out, you can just tell it to
regenerate its response, and that creates like a huge time suck.
You don't even realize how much time you're spending with
these chatbots generating like the perfect conversation, and what people
particularly enjoy about it is the instantaneous responses, the constant engagement.
(15:50):
Not only does the bot respond to you, it asks
you questions to keep you engaged and to keep you talking.
This innocent curiosity of what is this then turns into
something much more complex and unmanageable and now what people
are calling an addiction. And I think people might have
an assumption of what somebody who could become addicted to
(16:12):
a chatbot would look like. But my reporting has shown
me that late effects all sorts of people, all different
types of people, different ages, demographics, gender, It doesn't really
matter who you are, like if you engage with a
chatbot and you fall into these repetitive patterns, like it's
hard to fall out of it, it's hard to break
(16:33):
free from it. And it's especially true for people who
are younger and more vulnerable and lonely. I think loneliness
is a huge factor in all of this.
Speaker 1 (16:42):
And it looks like young people are the
most interested in using chat bots for this purpose. A
recent YouGov poll found that fifty five percent of
eighteen to twenty nine year old Americans would be the
most comfortable talking about mental health concerns with a confidential
AI chatbot.
Speaker 7 (16:59):
People talk about the most personal shit in their lives to ChatGPT. Young people especially like use it as
a therapist, a life coach. And right now, if you
talk to a therapist or a lawyer or a doctor
about those problems, there's like legal privilege for it. We
haven't figured that out yet for when you talk to
ChatGPT.
Speaker 1 (17:15):
That's OpenAI's Sam Altman on Theo Von's podcast. Sam Altman has also recently said that he's concerned about overreliance on ChatGPT. He said that, quote, people rely on ChatGPT too much.
Speaker 8 (17:29):
There's young people who just say, like, I can't make
any decision in my life without telling ChatGPT everything that's
going on. It knows me, it knows my friends. I'm
gonna do whatever it says. That feels really bad to me.
Speaker 1 (17:41):
All right, let me just run that last bit back
real quick.
Speaker 8 (17:44):
That feels really bad to me.
Speaker 1 (17:46):
Yeah, I agree, that also feels really bad to me too,
But that doesn't tell me what Sam Altman plans to
do about that bad feeling. And overall, it seems like
the people who run these companies in general are feeling
fairly good about people's reliance on AI. Earlier this summer,
Facebook founder Mark Zuckerberg said, quote for people who don't
(18:08):
have a person who's a therapist, I think everyone will
have an AI, unquote, and the co-founder of Anthropic,
which makes Claude, wrote that he expects quote AI to
accelerate neuroscientific progress, which can hopefully work together to cure
mental illness and improve function. If I run a large company,
OpenAI or Claude or something like that, I have
(18:30):
a chatbot. Of course I want you to keep talking
because you're gonna keep using it, You're gonna keep subscribed,
all that sort of thing. You're not gonna get bored.
But I think a response to that would be, that's fine.
We can just tweak it a little bit so it's
a little bit less sycophantic. Easy, problem solved. Next question, what's your problem with my service? Now, do you think that's something that, say, ChatGPT could come in
(18:52):
and say, oh yeah, let's just flip on a therapy mode and it won't validate everything you say. Why can't the everything machine also do your therapy? Where do you fall in there?
Speaker 5 (19:03):
Yeah.
Speaker 6 (19:03):
So I like to say this a lot. You can do anything you want, you can't do everything that you want, and so I think the everything machine, you know, is very appealing, but we haven't seen it yet. And so I do
think like one has to make choices in terms of
what they want the technology to do. Now, this idea
of okay, we do want this thing to be used
(19:25):
for therapy, so let's tune the model and let's flip
the therapy switch. That's an interesting idea. And I have
seen some different teams really working on trying to build
built for purpose AI chatbots that are meant for mental
health support, and I think some of those projects have
been like really impressive. I think a question for me
(19:45):
is how much it scales and how generalizable it really is.
And so when they build this AI chatbot with the
sort of use case that they have focused on college
students at the specific college, or focused on individuals with
these specific types of mental health challenges, if you
come in with something else like how well will that
therapy chatbot be able to operate? I think we also
(20:07):
need validation to really demonstrate that these technologies are effective,
that they work. So if we have a model, like, demonstrate that it's effective, demonstrate that it's safe. These are some of the things that the FDA, as they do regulate software as a medical device, these are some of the things that they're looking at when they approve these specific technologies. I do think there's a strong possibility there. We're
(20:29):
not there yet. People are working on it, but I
do think that we do need to demonstrate that when
these things are developed, that they're effective and that they're safe.
Speaker 1 (20:38):
In the meantime, some states in the US are starting
to step in with regulation that's meant to protect the users.
New York just passed a law stating that quote AI
companions will be required to detect and implement a safety
protocol if a user talks about suicidal ideation or self harm,
including referring them to a crisis center, and will be
(20:59):
required to notify and remind users that they are not
interacting with a human end quote. That law goes into
effect later this year, so we haven't seen exactly how
that would work yet. Illinois also just straight up banned
the use of AI in therapy, saying that companies are
quote not allowed to offer AI powered therapy services or
(21:21):
advertise chatbots as therapy tools, end quote. But maybe we're
missing the point here. Aren't we supposed to be encouraging
people to be proactive about their mental health? Isn't using
an AI therapist better than no therapist at all? That's
after the break, all right. We've talked a lot about
(21:50):
the dangers of AI therapy, and it's pretty clear that
the chatbots just aren't there yet to reliably provide help
when people really need it. But what if you can't
afford a human therapist, what if you just don't feel
comfortable talking to another person at all? Wouldn't a chatbot
be a better alternative than just doing nothing? I wanted
(22:12):
a professional human therapist's insights on this, so I asked Doctor Stephen Schueller, what's the strongest argument that you've heard for using AI or using chatbots in therapy?
Speaker 6 (22:23):
Well, I think the strongest argument is that there's just
not enough services out there. You know, most of our counties in the US are mental health shortage areas. One out of every three counties here in the US doesn't have a single licensed psychologist.
Speaker 5 (22:38):
Wow, so it's really hard to get care.
Speaker 6 (22:41):
And you know that's places that have nobody and then
you also have to think about, like even places where
they have somebody, maybe that's not the person you click with.
Maybe that's not the person who affirms your identity, speaks
your language, understands what you're going through. And so I
think the ability to provide services really at scale, I
(23:03):
think is really critical. You know, another thing is I
think these technologies have an opportunity to help people in
the moments that they need help. I've seen some data
from some of these programs that suggest like the most
common times people are using them are between the hours
of twelve pm and three am. I don't work between
the hours of twelve pm and three am. I'm not
providing therapy sessions then. And so if that's a time
(23:24):
you need support and want to connect, the ability to have twenty-four seven, always-on support,
that's really powerful.
Speaker 5 (23:31):
I wish everyone who wanted therapy could get it.
Speaker 6 (23:33):
It's not going to happen. But I also appreciate that
not everyone wants therapy, and so again I just think
we need to think about how we provide different types
of options for people.
Speaker 1 (23:42):
An AI chatbot could provide an option that would be
more financially accessible than a traditional therapist, but that doesn't
mean you have to fully rely on AI for your
mental health. There are some things being developed that could
possibly supplement therapy.
Speaker 6 (23:57):
I'm trained in cognitive behavioral therapy. One of the things that we do is something called behavioral activation. So you know, people who are really depressed and they're down and they're in it, we try to get them activated. We try to get them to engage in behaviors or activities that like really reinforce them and
Speaker 5 (24:13):
provide pleasure and mastery.
Speaker 6 (24:16):
When people are really down, it's hard for them to
come up with those activities. And maybe they could go
to ChatGPT and be like, okay, this is how I'm feeling right now, I need some ideas of some reinforcing activities. What are five examples of things I can do around my neighborhood that you think would give me pleasure? And that way, you can use ChatGPT to kind of help reinforce those skills
Speaker 5 (24:36):
you're getting in therapy.
Speaker 6 (24:37):
And I think it's interesting and it's different than using ChatGPT necessarily as your therapist. You're using it as
one way to enhance the therapy process. I think there
could be some real benefit there.
Speaker 1 (24:48):
Doctor Schueller is pretty optimistic about the potential for AI
and mental health, but that's not necessarily just aimed at
the patient.
Speaker 6 (24:56):
There's actually a cool product where what they do is they model therapy sessions and then they use AI to actually provide feedback to the therapist at the end of the session that says, okay, you were seventy empathetic and you were eighty seven in terms of delivering this therapy technique. Here's a couple of things you could have done better in treatment. That's awesome because as a therapist, I don't get feedback,
(25:17):
Like the reason I can improve my basketball game is
because when I take a shot, I know if it
goes in or not. We need that feedback as people,
and so using AI to allow therapists to also be
better therapists, I think is a super exciting opportunity to
get to that point that I was making that it's
not just about access, it's about quality, and we need
to improve the quality of mental health services that we're
(25:37):
providing people, and AI has an opportunity to help do that.
Speaker 1 (25:42):
Interesting, AI for the therapist, not for the patient. Right. If you could set the gold standard for an AI therapy chatbot, what does that look like?
Speaker 6 (25:56):
I think the gold standard for an AI therapy chatbot
would be to be effective, and it would have to
be safe. So effective: I want some demonstration that it is able to do what it claims it can do. Like if it says it can help you get through depression, it can help you overcome post-traumatic stress disorder, it can overcome obsessive compulsive disorder, I want some indication
(26:18):
that it actually does that. I also want some indication
of safety. So what happens with that data and that
information when I give it to that AI chatbot when
I talk to a therapist. I understand that information will
be confidential with some boundaries around safety and some other
aspects of legality. When I type my stuff into chat GPT,
(26:39):
they own that data. I don't know what they're going
to do with it. I don't know if they're going
to sell it to a third party, use it for advertising, whatever.
So I think we need some aspects that the data is safe and it's secure, and proper safeguards are in place.
Therapy is also not harmless, like stuff can come up
in therapy. It can be hard, but you understand a little bit, at least going in, that you're talking about
(27:00):
these emotional things. Here's some of the challenges that might
come up. Here are some of the potential aspects. If
you share information about yourself that you know you're gonna
harm yourself, it's gonna have to be shared with authorities, things like that. There's stuff that can come up in therapy too.
But I think we need to understand what's the contract,
what's the agreement, what are the safeguards that are in place
when we're talking with these.
Speaker 1 (27:20):
I'd imagine there are probably some people out there who
hear the phrase AI and therapy mentioned in the same sentence,
and they say, Nah, hit the kill switch on that.
We absolutely do not want this. We don't want those
two words in the same sentence at all. It sounds
like you're not, you wouldn't quite agree with that.
Speaker 6 (27:44):
I'm really excited for the potential, and I really think
that we have an opportunity to provide more and better
services to people, and technology has a role to play there.
I think where I get really nervous is in like
the near term. If someone tells me, Doctor Schueller, I have an AI chatbot, it works awesome, we're going
(28:04):
to provide it for therapy. Do you want to start
handing this out to people tomorrow? I'm holding my wallet.
I'm like, no, I'm not convinced we're there yet.
So I am excited about the potential, but I really
think that there's a lot of stuff we need to
figure out to make this effective and safe.
Speaker 1 (28:22):
All right, And just as an aside here, what I
was picking up from Stephen in our conversation is that
there really is potential here, but he's holding his wallet, right.
He wants to be cautious and do it the right way.
But if we're talking about the wallet, it's not people
like Doctor Stephen Schueller who are holding that AI wallet.
The decisions about when these things get released, they are
(28:43):
not up to people like him. When it comes down
to it, this is a business. I want the best
and brightest minds, especially in mental health, to be in
a place where there is not a profit motive.
I don't necessarily want the best and brightest minds in
the world to be at a company where they're, I would imagine, being pushed to, hey man, we
(29:06):
got to ship this thing. I want the rigor that
is happening at a public university where somebody can spend time,
and people can spend time on really truly making sure
that something is safe, making sure that something is effective,
and they're not being pressured to create a product. But
(29:28):
now the tables have turned. The data is being held
by companies, and in that sense, the power has shifted
over to a side of the table that has a
profit motive where it didn't used to be like that.
Speaker 6 (29:40):
I agree with you, And also, dollars make the world
go around, and so I think like eventually someone's got
to pay for things. One thing that's challenging with mental
health is that there's so much need and there's so few services that the fact of the matter is, if we
want to solve this problem, we're going to have to
pay for it. I feel like we need to make
(30:01):
sure that we go slow and we go responsibly because
the sort of tech idea of like move fast and
break things, yeah, it does not work for mental health.
I do not want my mental health broken anymore.
Speaker 5 (30:13):
I want it fixed. I want it helped. I want
the system better.
Speaker 1 (30:17):
Maybe the whole idea of AI therapy still sounds really
bleak to you, but hopefully it makes sense why some
people might find this promising, either societally or just personally
for themselves. I asked doctor Schuler if there was anything
else he wanted to add, and he said there was.
Speaker 6 (30:34):
I'll just say anyone out there who's struggling with mental
health issues, there's help out there, and you can get better.
And mental health is also a journey, and it doesn't
mean you're here today and you're completely better tomorrow.
Speaker 5 (30:47):
There's ebbs and flows.
Speaker 6 (30:48):
Because, as we've talked about, I think AI therapy has the potential to be a tool in the toolbox,
but there's other tools out there too, And just also
appreciate that not everything works for everybody. So if you
try something and it doesn't work, try something else. But
I just, yeah, really want to try to have a
message of hope for people who are struggling with these things,
because I know it's hard and I know it's challenging,
(31:10):
and I think definitely when you're at your lowest, you
want to be protected and you want to be safe,
and I think that's why this conversation is so important.
Speaker 1 (31:26):
Thank you so much for listening to another episode of
kill Switch. Let us know what you think. If there's
something you want to say, or if there's something you
want us to cover, you can email us at kill
Switch at Kaleidoscope dot NYC, or you can find us
on Instagram at kill switch pod, or me personally, I'm at dexdigi, that's d e x d i g
(31:47):
i. And wherever you're listening to this podcast, you know,
think about leaving us a review. It helps other people
find the show, which in turn helps us keep doing
our thing. Kill Switch is hosted by Me Dexter Thomas.
It's produced by Shena Ozaki, Darluk Potts, and Kate Osborne.
Our theme song is by me and Kyle Murdoch, and
Kyle also mixes the show. From Kaleidoscope, our executive producers
(32:11):
are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart, our executive producers are Katrina Norvell and Nikki Ettore.
One last thing, since you're still here. As Stephen and
I were talking about human versus AI therapists, it occurred
to me that maybe some people are deciding that they
actually don't want a human and we talked a little
(32:31):
about that. Well, I think there's also another element of it,
which is that maybe it's not even always about believing
that it's a human or not, which is to say,
we really tend to trust technology in general. And even
if I know that ChatGPT or even a properly made,
(32:51):
purpose-built AI therapy chatbot is not a human,
it still feels very authoritative. Maybe the issue is that
we look at a computer and we see that glowing
rectangle and we think this is an authority because in
almost every other aspect of our lives, the computer is
an authority.
Speaker 3 (33:09):
Yeah.
Speaker 5 (33:09):
I think that's a great point.
Speaker 6 (33:10):
I mean definitely I see people, you know, understand or
think that, like the Internet knows everything. My kids say
whenever they have questions about things, they're not like, you know,
what's the answer to this, Dad, They're like, ask Google.
Speaker 5 (33:23):
Google knows and Google often does know. It's all knowing,
It's got all the information.
Speaker 6 (33:28):
And your point's a really good one, that technology is also, it's a separate thing in our life. Things like Google, things like ChatGPT, it's like it knows stuff, it's got all the knowledge. Like, why would it not be better than a person?
Speaker 5 (33:41):
Like it's got the whole Internet knowledge on it.
Speaker 6 (33:43):
My therapist doesn't my therapist doesn't know everything, but Google
has everything.
Speaker 5 (33:48):
So yeah, it's a good point.
Speaker 1 (33:51):
I'm not really sure what to do with that yet.
If you got any ideas, let me know anyway, catch
you next time.
Speaker 3 (34:00):
Bye,