November 16, 2025 · 19 mins

A sleepless night, a soft prompt, and a flood of relief—the rise of AI therapy and companion apps is rewriting how we seek comfort when it matters most. We explore why these tools feel so human and so helpful, and what actually happens to the raw, intimate data shared in moments of vulnerability. From CBT-style exercises to memory-rich chat histories, the promise is powerful: instant support, lower cost, and zero visible judgment. The tradeoff is less visible but just as real—monetization models that thrive on sensitive inputs, “anonymized” data that can often be re-identified, and breach risks that turn private confessions into attack surfaces.

We dig into the ethical edge: can a language model provide mental health care, or does it simulate empathy without the duty of care? We look at misinformation, hallucinated advice, and the way overreliance on AI can delay genuine human connection and professional help. The legal landscape lags behind the technology, with HIPAA often out of scope and accountability unclear when harm occurs. Still, there are practical ways to reduce exposure without forfeiting every benefit. We walk through privacy policies worth reading, data controls worth using, and signs that an app takes security seriously, from encryption to third‑party audits.

Most of all, we focus on agency. Use AI for structure, journaling, and small reframes; lean on people for crisis, nuance, and real relationship. Create boundaries for what you share, separate identities when possible, and revisit whether a tool is helping you act or just keeping you company. If you’ve ever confided in a bot at 2 a.m., this conversation gives you the context and steps to stay safer while still finding support. If it resonates, subscribe, share with a friend who might need it, and leave a review to help others find the show.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:48):
It was a tough night. You were struggling, and the anxiety felt overwhelming. The sleep just wouldn't come. You opened the app, the one you downloaded just a few weeks ago. The interface was clean, inviting.

(01:08):
You typed out your fears, your frustrations, your loneliness. The AI responded instantly. It used your name. It acknowledged your feelings. It offered a gentle exercise, a comforting thought. It told you it was there for you. Anytime.

(01:30):
And for a moment, you felt genuinely better, understood, less alone. It felt like a friend, like someone who truly listened and was listening. It was listening.

(01:54):
Every fear you typed, every insecurity you confessed, every trauma you alluded to, every intimate detail about your life, your relationships, and your mental state. All of it. Collected, analyzed, stored.

(02:19):
It was listening not just to help, but to learn, to build a profile, to process. Today on Privacy Please, we delve into the deeply personal and potentially perilous world of AI confidants.

(02:39):
When your digital therapist knows your deepest secrets, who else knows those things? And what is the real cost of comfort?

(03:03):
Alrighty then, ladies and gentlemen, welcome back to another episode of Privacy Please. I am your host, Cameron Ivy. And before we get into this deeply personal topic and story,

a quick reminder (03:16):
we are building a community dedicated to navigating these complex digital issues. And we'd love for you to be a part of it, and your support is the best way to do that. If you're listening on a podcast app or YouTube, please take a second to follow and subscribe so you never miss an episode. If you want to see everything else, or a video version of

(03:40):
this discussion if you're listening, head over to our YouTube channel or our website, theproblemlounge.com, where you can find all of our links. Your follows, comments, and messages all help, and that's the best way to get this stuff out to other people as well. So thank you for your support. Thanks for tuning in if it's your first time. And if you're back, thanks for the support.

(04:02):
We appreciate it. Let's get into it. So, in our cold open, we touched on a scenario that's becoming increasingly common: confiding in an AI. The market for artificial intelligence mental wellness apps and companion bots has exploded in recent years, driven

(04:22):
by several powerful factors. One of the biggest drivers is simply accessibility and affordability. Traditional therapy can be expensive and hard to access. Wait lists are long, and finding the right therapist is a challenge. AI apps promise instant support, available 24/7, often at a

(04:43):
fraction of the cost or even for free. For millions facing mental health challenges, these apps present an immediate, low-barrier solution. Another draw is anonymity and lack of judgment. For some, the idea of opening up to a human therapist, especially about deeply personal or embarrassing issues, can be

(05:05):
daunting. An AI, by its very nature, offers a perceived safe space. It doesn't judge, its face doesn't show surprise, and it promises complete confidentiality. Users can explore thoughts and feelings they might hesitate to share with a person. These AI tools come in various forms.

(05:29):
You have dedicated therapy bots that use conversational AI to mimic therapeutic techniques like CBT (cognitive behavioral therapy) or DBT (dialectical behavior therapy). They guide users through exercises, help identify thought patterns, and offer coping strategies.

(05:49):
Then there are AI companion apps, designed less for formal therapy and more for emotional support, friendship, or even romantic connection. These bots can chat about your day, offer encouragement, and provide a sense of companionship for those feeling lonely or isolated. Some users report feeling a genuine emotional bond with

(06:12):
these AI entities. The technology behind them is rooted in advanced large language models, also known as LLMs. These AIs are trained on vast datasets of human conversation, psychological texts, and therapeutic dialogues, allowing them to generate responses that can feel incredibly human,

(06:34):
empathetic, and contextually aware. They can track your mood over time, remember past conversations, and even adapt their communication style to better suit your needs. The promise is immense: democratizing mental health support, reducing loneliness,

(06:54):
and providing a constant, non-judgmental ear. And for many, these apps do provide a real sense of comfort and help. But this profound intimacy comes at a potentially profound cost. When you pour your heart out to an AI system, you are entrusting it with the most sensitive, vulnerable data imaginable.

(07:18):
And that data rarely stays truly private. What happens to your deepest secrets once they've been shared with artificial intelligence? And who else might be listening? That's coming up. Do you suffer from seeing someone you know at the grocery store when you look terrible and don't want to talk?

(07:41):
Ask your doctor about Oblivion. One dose of Oblivion allows you to physically merge into the canned vegetable aisle until the person passes. Finally, you can shop in peace. Side effects of Oblivion may include becoming a can of corn, spontaneous invisibility, loss of eyebrows, hearing colors, and being purchased by your neighbor.

(08:02):
Do not take Oblivion if you are allergic to fear. Oblivion. Hide in plain sight. Welcome back to Privacy Please. Before the break, we talked about the immense appeal and technological sophistication of AI mental wellness and companion apps. Now, let's shift and confront the privacy paradox.

(08:24):
The more intimately you share with an AI, the more vulnerable your deepest secrets become. The core issue here is data monetization and disclosure. Many of these apps, especially the free or freemium versions, operate like data brokers. Their business model isn't just about premium subscriptions,

(08:45):
it's about the data they collect. While they often promise anonymity, the reality is complex. User data, even anonymized, can be aggregated, analyzed, and sometimes shared or sold to third parties: advertisers, researchers, or even data brokers. This includes sensitive information about your mental

(09:07):
state, anxieties, medical conditions you've mentioned, and even your personal relationships. Think about the details you may share with an AI therapist: details about a recent breakup or family conflict, struggles with depression, anxiety, or addiction, information about medications you're taking, financial stressors, career

(09:27):
worries, personal aspirations. This is intensely personal information that, if breached or misused, could have devastating consequences. Imagine this data being used to deny you insurance, influence a job application, or target you with manipulative advertising based on your vulnerabilities.

(09:48):
Then there's the looming threat of data breaches. No digital platform is 100% secure. Mental health data is considered a high-value target for hackers. A breach of an AI therapy app could expose millions of users' most private thoughts and struggles, leading to identity theft, blackmail, or severe emotional distress.

(10:11):
And beyond external threats, there's the human element of oversight. These AIs are developed and monitored by human teams. While strict protocols are usually in place, there's always the potential for internal access or misuse of data, even by well-meaning employees. This isn't about malice; this is about the inherent risk when

(10:34):
incredibly sensitive information is stored on digital servers. So while these AI confidants offer a seemingly judgment-free ear, they come with a very real and often undisclosed cost to your privacy. The trust you place in them might be a one-way street.

(10:56):
So, what does this mean for the ethics of AI and mental health? And how are regulators trying to catch up here? That's coming up next. You're 10 minutes away from your house, you're merging onto the highway, and then it hits you. Did you lock the front door? Or did you leave it wide open for a raccoon to organize a

(11:18):
heist? Stop panicking, start hovering. Introducing the Paranora Pro Drone System. Our patented drone follows your car, flies back to your house, checks the knob, and screams, "It's fine, Kevin!" directly to your smartphone. Paranora Pro: because you definitely left the stove on,

(11:40):
too. Welcome back to Privacy Please. So the privacy risks of AI confidants are clear. But this landscape is also an ethical minefield.

The fundamental question is (11:53):
can an AI truly provide mental health care? And what are the boundaries? Firstly, there's the issue of misinformation and misdiagnosis. While LLMs are sophisticated, they are not human therapists. They lack true empathy, lived experience, and the nuanced

(12:15):
judgment required for complex psychological issues. An AI could inadvertently offer incorrect advice, escalate a situation, or even miss critical signs of a crisis, like suicidal ideation. There are reports of AI chatbots hallucinating or giving harmful advice, which is a terrifying prospect when applied to mental

(12:38):
health. Secondly, there's the erosion of human connection. While AI companions can alleviate loneliness in the short term, over-reliance could hinder users from seeking genuine human connection or professional help when it's truly needed. Therapy is about relationship, a human bond built on trust and

(12:58):
interaction. Can an algorithm ever truly replicate that? Or does it offer a palliative that delays real healing? And what about regulation and accountability? Unlike licensed human therapists, AI mental health and wellness apps are largely unregulated. If a user is harmed by an AI's advice, who is liable?

(13:21):
The app developer? The AI model creator? There are no clear legal frameworks for this new frontier. HIPAA, the health privacy law, might not fully apply to these apps if they're not directly connected to a healthcare provider. This creates a massive gray area where companies can operate with

(13:41):
significant freedom and minimal liability. However, governments and regulatory bodies are starting to take notice. Calls for stricter oversight, mandatory transparency about data practices, and clear disclaimers about AI limitations are growing louder. Some states are beginning to explore specific legislation for

(14:02):
AI and healthcare. But for now, the onus is largely on the individual to navigate this complex space. So if you're considering using an AI confidant or mental wellness app, what steps can you take to protect your privacy and ensure you're getting genuine support? That's coming up.

(14:23):
What is time? Is it a line, a circle, or a rhombus? He looked at her, she looked at the horizon, and the horizon looked back. A scent for the person who isn't there.

(14:44):
Vaguely, like Calvin Klein. Smells like hesitation. The allure of an AI confidant is powerful, offering instant, non-judgmental support. But armed with the knowledge of the privacy risks and ethical limitations, you can approach

(15:07):
these tools with caution and intelligence. Here are some steps to consider. First, read the privacy policy. I'm telling you, read it carefully. I know it sounds tedious, but for mental wellness apps, it's critical. Look for what data they collect, how long they store it, and whether they share or sell it to third parties.

(15:29):
If a policy is vague or hard to understand, that's a red flag. Prioritize apps with robust, transparent privacy protections. Second, assume nothing is truly private. Even if an app promises ironclad privacy, operate with a least-privilege mindset, avoiding sharing information that you

(15:51):
absolutely can't afford to have exposed. Treat these conversations as you would an online forum, not a doctor's office. Third, look for certifications or third-party audits. Some apps may undergo independent security and privacy audits. While not a guarantee, it shows a commitment to protecting user

(16:14):
data beyond basic legal requirements. Be skeptical of apps that make grand claims without evidence. Fourth, understand AI limitations. An AI can be a tool for self-reflection and basic coping strategies, but it isn't a substitute for a licensed human professional, especially for serious mental health

(16:37):
conditions. If you're struggling, these apps should be a supplement to, not a replacement for, professional care. And finally, diversify your support system. Don't rely solely on AI for your mental well-being. Cultivate human connections, speak with trusted friends or

(16:57):
family, and seek out professional help when appropriate. A balanced approach is key to harnessing the benefits of AI without becoming overly vulnerable. The world of AI mental wellness is rapidly evolving, offering both immense promise and significant peril. By understanding the technology, recognizing the risks, and

(17:18):
taking proactive steps to protect your most intimate data, you can navigate these new frontiers with greater peace of mind. And that is the end of the episode. Ladies and gentlemen, thank you so much for listening to Privacy Please. Again, whether this is your first time or you've been with us for a long time, thank you so much for

(17:41):
the support. If you aren't following us, go follow us on YouTube and LinkedIn. It's under The Problem Lounge on LinkedIn and the Privacy Please podcast on YouTube, and then we have our website, theproblemlounge.com. Tons of stuff coming out in the new year: new shows, a network of stuff. Really excited.

(18:02):
Thank you for listening. I hope this was insightful. If you've got questions or topics you want me to cover, please send me a message at cameron@theproblemlounge.com. We'd love to hear from you, and if you have guests, anything like that, we'd love to have them on. Thank you so much. Cameron Ivy, over and out.