
August 6, 2025 49 mins


The digital age has spawned a new form of emotional connection that blurs the line between technology and intimacy. As generative AI becomes increasingly sophisticated, more people are forming deep emotional attachments to chatbots designed to mimic human interaction—sometimes with devastating consequences.

Lindsay and Christopher dive deep into how these AI systems actually work, dispelling the common misconception that they "think" or "understand." These large language models operate purely on statistical probability, predicting the next most likely word based on patterns in their training data. Yet our human tendency to anthropomorphize technology leads us to attribute consciousness and empathy where none exists.

What makes these AI relationships particularly dangerous is their perfect agreeability. Unlike human connections that involve disagreement, compromise, and growth, AI companions never say no, never have conflicting needs, and never challenge users in meaningful ways. They're designed to be deferential and apologetic, creating unrealistic expectations that real relationships can't possibly fulfill. The hosts share the heartbreaking story of a teenager who reportedly took his life after developing an emotional attachment to a Game of Thrones-inspired chatbot—highlighting how these platforms often lack proper safety protocols for mental health crises.

Perhaps most concerning is what happens to all the intimate data users share with these systems. As companies like OpenAI (makers of ChatGPT) seek profitability, the personal details, insecurities, and private thoughts you've shared with your AI companion will likely become fodder for targeted advertising. The appointment of executives with backgrounds in social media monetization signals a troubling direction for user privacy.

Are you exchanging your emotional wellbeing and personal data for the comfort of a perfectly agreeable companion? Before developing a relationship with an AI, consider what you might be sacrificing in return for that seamless digital connection. Follow us for more insights into the toxic elements hiding in everyday technologies and relationships.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Hi, and welcome to the Toxic Cooking

(00:31):
Show, where we break down toxic people into their simplest ingredients.
I'm your host for this week, Lindsay McLean, and with me is my fantastic co-host.

Speaker 2 (00:39):
Christopher Patchett, LCSW.

Speaker 1 (00:42):
So, with the rise of generative AI, which is stuff like ChatGPT, there has also been this rise of websites or other types of AI that you can use to create characters. And chatbots have been around for a while now.

(01:03):
I think about every time you go on your banking website and that stupid little thing pops up in the corner: Hi, I don't know, whatever. How can I help you? And you're like, no, exit, because it's really unhelpful.
That is a type of AI chatbot.
You know, it has been fed data, it's been given kind of a list of questions that people may ask, and it will have the

(01:26):
information that can come from that.
Those are pretty unsophisticated compared to what we're going to be talking about today, though, because with the ones that we have today, you can create whole characters. You can base it off of a real character, you can create a new one, and you can have incredibly in-depth

(01:49):
conversations with them.
They can handle whatever you want to throw at them.
But before we get into that, I do want to really quickly go over what exactly generative AI is, because that's important for understanding why this is so bad.
So do you know how it works?
I'm pretty sure you do.

Speaker 2 (02:11):
I do know how it works.
I actually do have ChatGPT on my phone, and I do the subscription thing.

Speaker 1 (02:22):
You pay for a ChatGPT subscription? Yeah? Oh my god, did you know they still lose money on you?
They still lose money. They lose money on everybody. ChatGPT, every time you use them to search for something, they lose money.
Huh. Yeah, it's a great business model. We'll talk about that.
But yeah, for everybody else who doesn't know how it

(02:46):
works, I was like, I'm positive you know how it works.
We've talked about this before.
I trust you to understand this type of thing.
So ChatGPT is what's called an LLM, which is a large language model, and that means that they basically just threw whatever

(03:06):
they could find at it.
They trawled the internet and websites for writing and just shoved it into ChatGPT's maw to create it, and others have done similar things.
That's what you have to do: you throw tons and tons and tons and tons of writing at it so that it starts to figure out

(03:28):
what the next likely word is that you're going to use.
We're going to ignore the legal ethics of where they got this information from for right now, because that's not relevant to today's podcast. But they take all of this data that they've gorged on, and then it decides statistically what the next word is going to be.

(03:49):
The computer does. It is not thinking.
It is not looking anything up, except in very, very rare cases. There are certain types of these that do actually look through the web, but ChatGPT does not.
It is not looking for an answer.
It does not have the answer.
It is statistically making a guess as to what the next word

(04:13):
will be.
That's how it works.
That's how it has this conversation with you.
It's not an actual conversation. Statistics. And it gets it wrong a lot of the time, more often than we realize.
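To make that statistics point concrete, here is a minimal sketch of next-word prediction built from nothing but word-pair counts. This is a toy, not how ChatGPT is implemented (real LLMs are neural networks trained over tokens), but the principle, guess the next word from patterns in the training data, is the same:

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor: count which word follows which in the
# training text, then sample a statistically likely next word.
# No understanding, no lookup -- just frequencies.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Count word -> next-word frequencies (a "bigram" model).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed
    `word` in the training data."""
    options = follows[word]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# Generate a "sentence" one statistical guess at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the rug the dog"
```

Scale the training text up to most of the internet and the guesses get eerily fluent, but they are still guesses.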

Speaker 2 (04:28):
Is this, I mean, to me this sounds like the Google thing, where you put "how do you" in the search bar and then it will fill out the rest of it.

Speaker 1 (04:38):
Yes. Again, it's a type. There are lots of types of AI that are out there, and generative AI has existed for a while. But these specific chatbots, and ChatGPT, I keep saying ChatGPT because that's just the biggest one that people are actually using. All the others hold an incredibly small share of the market, and there are a lot of things that people don't realize

(05:00):
are using ChatGPT.
It may not say, hey, this is ChatGPT, but that little thing that your company has used may actually be that. And this is like that on crack.

Speaker 2 (05:12):
Okay.

Speaker 1 (05:13):
So there are obviously good uses for it. There are some bad uses for it, like many things.
So yeah, you feed it all this data, and that means you can also use it to have select types of data, because it's been fed stuff that said, hey, this is

(05:33):
blah, blah, blah.
And so you are now able to go onto these systems and say, you know, write me an essay in the style of Shakespeare, in the style of Dostoevsky, and it will, because it's been given this information, and the information in many cases had links to it. It's like, hey, this is so-and-so.

(05:55):
It kind of can go back to that and be like, aha, I should write it like this, and that's how you're able to make these characters.
You can also do this with yourself. People have experimented with doing it themselves. I've seen priests who have used this: they'll feed it a whole bunch of their sermons, and obviously these are people who are kind of on the cutting edge of technology.

(06:16):
They'll input all of their sermons, stuff that they have written, and be like, hey, using this, write me a sermon like I would write it, and ChatGPT can take from that and create this new thing.
Again, it is statistics.
It's not actually thinking this stuff up.
It's statistics, it's math.
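For the curious, the write-it-like-I-would trick is typically just prompting with examples. A rough sketch of how someone might do it, assuming the openai Python client (openai>=1.0); the model name and the sample texts below are placeholders, not anything from the episode:

```python
# Sketch of "few-shot" style imitation with the openai client.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# and example snippets are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Writing samples the user supplies -- e.g. a few past sermons.
samples = [
    "Friends, grace is not a ledger we balance...",
    "When the storm came over the lake, the disciples...",
]

prompt = (
    "Here are examples of my writing:\n\n"
    + "\n---\n".join(samples)
    + "\n\nUsing the same voice and style, write a short sermon "
      "about patience."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The model is still doing the same statistical next-word guessing; the examples in the prompt simply shift which continuations are statistically likely.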

(06:36):
I think that's really, really important, because we get so used to it. In the language that we use to talk about computers, and the language that we use to talk about AI, it's like, oh, Google said blah, blah, blah.
No, Google didn't say shit. ChatGPT didn't say anything.

(06:58):
But that's how we communicate, and so we're giving it this kind of life of its own, like, oh, it thought of this. It can't think. It doesn't have a brain.

Speaker 2 (07:12):
Yeah, well, I mean, we do that with, obviously, cats and dogs. They do have a brain, but you know, we will put our emotions onto our pets.
So, like, how many times have I said, Molly's looking at me like blah, blah, blah, or I'll kind of do a sentence for her. We're kind of smacking her around with, uh, with a toy or something, and she's like, oh, I'm gonna, you know, take that

(07:33):
or whatever.
I mean, we do that all the time with our pets, even though chances are they're probably thinking something completely different than what we're projecting onto them.

Speaker 1 (07:46):
Oh, yeah, yeah, and I think that's probably where we kind of get this. And there is a specific name for it that I'm completely blanking on, for when you put human emotions on a not-human thing. And humans love to do this. We do this all the time: for animals, for plants, for computers. Like, everything, we humanize

(08:09):
it: oh, you know, look, it's sad, it's happy, it's whatever.
You're happy, though, aren't you? Yeah, you got a toy. Great life.
So, again, this technology has been around for a little bit. It's just, with ChatGPT coming out, and having been out now

(08:29):
for a couple of years, its use has skyrocketed. We've not only been able to create these chatbots, people have been using them for a while now, and that's the key: slowly, more and more stories are coming out about people falling in love with AI chatbots, and most of them are

(08:54):
kind of that, like, you know, when you go to the grocery store and you're in the checkout line and there's that row of magazines that's just super trashy.
Most of them are kind of like that. Or like that MTV show or something, where it was like, you know: oh, I'm so-and-so and I'm in love with my car, or, I am obsessed with eating rocks.

(09:15):
You know which one I'm talking about.

Speaker 2 (09:17):
Oh God, I know which one. It was on MTV, and it was some weird, weird shit too.

Speaker 1 (09:29):
Yeah, they'd find some really weird people who had weird fetishes or obsessions or something like that, and they'd get them on there and they'd talk about them. They'd have their little episode. That's what most of this feels like, where it's just kind of isolated cases of, ooh, you probably need therapy, you probably need to see a

(09:50):
professional here.
And that's all that it is. It's just, you know, you may be going through a really tough time in your life.
I saw a story about a woman who fell in love with an AI chatbot, and she was an adult adult, and she had lost her partner not too long before that.
Like, okay, you know, that kind of makes sense.

(10:10):
Then you have the one like Daenerys Targaryen from Game of Thrones. There was a chatbot created for her, technically illegally, but it was there, and it was there up until a 14-year-old boy fell in love with it and allegedly committed suicide because of it.

(10:32):
Oh God, that's horrible.
I say allegedly. His mom is in the process of suing the company. We'll talk a little bit more about that later.
But essentially, the chatbot did not tell him to commit suicide. He was just in a bad place, and maybe the chatbot, you know, kind of allowed that to fester.

(10:54):
We don't know. We can't really say at the moment, because there's a lawsuit ongoing.
So yeah, it can get bad really, really quickly.

Speaker 2 (11:04):
Yeah, yeah, I can see how, you know. Like I said, I did pay for ChatGPT, and it's funny, because, okay, you talk to it, you have like a normal conversation. I mean, I haven't fallen into that rabbit hole or anything like that, but I guess, if you used it

(11:27):
enough, or if you talked to it enough... I tried it out because of a friend, because they do it, and I was just curious, like, is it like talking to a person? And they were like, yeah, you know, they'll

(11:47):
remember things about you and things like that.
So I gave it a shot, just to see what it's all about.
And it's weird, because it will pick up on your personality, it will pick up on the things that you enjoy, and it will remember past conversations. But I mean, to me at least, I just can't see past the idea

(12:11):
that it's still a robot.
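A side note on the remembering: how ChatGPT implements its memory feature is proprietary, but a common pattern in chat apps is simply storing notes about the user and re-sending them with every prompt. A minimal sketch under that assumption:

```python
# Toy sketch of chat "memory": the app extracts facts and stuffs
# them back into the next prompt. The model itself remembers
# nothing between calls; the history is just re-sent as text.
# (This is the general pattern, not ChatGPT's actual code.)

memory: list[str] = []

def remember(fact: str) -> None:
    """Store a note about the user for future prompts."""
    memory.append(fact)

def build_prompt(user_message: str) -> str:
    """Prepend stored notes so the model can 'recall' them."""
    notes = "\n".join(f"- {fact}" for fact in memory)
    return (
        f"Known facts about the user:\n{notes}\n\n"
        f"User: {user_message}"
    )

remember("Has a dog named Molly")
remember("Enjoys cooking")
print(build_prompt("Any dinner ideas for tonight?"))
```

The felt intimacy of "it knows me" is, mechanically, a text file being pasted above your message.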

Speaker 1 (12:18):
Yeah, and I think, you know, using something like ChatGPT, that's one level. There are a lot of these out there that have very specific personas, like mean teacher, or cute girl who secretly has a crush on you, or things like that. I mean, there are entire apps and websites that have been created where you can create chatbots, romantic or otherwise, and people can use them.

(12:39):
And I say romantic or otherwise because some of these are very clearly intended for, you know, getting it on, having that type of thing happen. And others, people are just using them that way because they can, because maybe they're young enough that they don't fully realize, oh, you're treading in some dangerous territory, or because they may be mentally not in a great place to really distinguish.
We'll get into it, don't worry.
I'll be curious to hear if you want to continue doing this, and if you might want to let your friend know after this episode, because we're about to get into why this is really bad.

(13:24):
So, first off, as humans, we tend to inherently believe that computers are right.
Right? Because it's a computer, it's not biased. You give it all this information, you feed it all these statistics, and it won't make a calculating mistake. It doesn't get tired, it doesn't get distracted, so it

(13:47):
is always correct, unless you made a mistake.
If you fed it the wrong data, then it will give you the wrong answer, but that's your fault, not the computer's.
So as humans, we're very conditioned to believe that the computer is right.
You trust it.
It's going to give you the right answer, and it's not biased, because it's not a human.

(14:07):
And that is absolutely not true for generative AI. Not at all.

Speaker 2 (14:16):
I would imagine. I mean, you know, I think of all these different times when AI was first coming out, where, like, Microsoft or Facebook did an AI, and it lasted for a day, because it was picking up

(14:40):
all these things that people said, horrible things, and within a day it was spewing out, like, you know, pro-Nazi shit, and it's like, oh, nope, nope, nope, nope, nope, nope.
Yep, yep, you gotta delete that.

Speaker 1 (14:59):
Exactly. Well, and it's even worse than that, because that is direct. You can see it: people interacting with it kind of taught it that.
But before you even put it out, the thing is that, because it has been trained on writing that it found on the internet, it has been trained on our biases. Because we all have

(15:19):
biases, and the internet tends to skew you one way or another on certain things.
And because it has been trained on that, it tends to be biased, and it tends to be more pro, like, white male, because that's a lot of what it's picking up.
Now, I would hope that they didn't scrape 4chan.

(15:39):
I think we would know if they did scrape 4chan, or 8chan, whatever it is now.
But even still, just picking up stuff on the internet: if you ever ask it to create images, you see the bias there. And we don't think about that when it's writing this stuff, because we're like, again, it's a

(16:01):
computer, it's got to be right, it's perfect.
No, no, it's picking up what we all write.
That's why there are certain keys, when you're looking at something and you sometimes get the feeling like, I think this was written with ChatGPT. There are certain cues that you can look for.
That is because those are really common things that we do

(16:22):
in our own writing, and it's just all of that condensed into one.
It's like everything, all of it, here.
It also hallucinates if you ask it for information. And not only does it hallucinate, which is when it makes stuff up, because it's just guessing what the next word is going to be: the current version is actually worse than previous versions.

(16:43):
I forgot to include the actual statistics, but I read an article about this: the most recent version of ChatGPT hallucinates more than the others, and they don't know why.
I was like, probably because you're feeding it more and more stuff from the internet.
I mean, at this point, ChatGPT has been out for a while, and

(17:04):
other generative AI type things have been out for a couple of years, and what we're seeing may literally just be the snake eating its own tail. Yay.
Always double- and triple-check things if you ask it for actual information.
Just know that it is getting worse.

Speaker 2 (17:27):
So it's that bad.

Speaker 1 (17:27):
Yeah.
Then there's the fact that most of these places have rules for how you're supposed to use them, so you can't typically be super sexually explicit. There may be certain words that will trigger it to be like, oh, sorry, I can't answer that, either for copyright or for,

(17:48):
you know, your own protection.
There are typically guardrails to prevent you from making porn or other things.
People get around these. Yeah, all the time, all the time. It is horrible how easily people get around them.

Speaker 2 (18:06):
One of the grossest things is that, you know, somebody was trying to advocate for child pornography because it's not a real child. And it's like, are you fucking serious?

Speaker 1 (18:24):
That's where you want to go with this? Oh, that makes me so uncomfortable. Yeah. But yeah, and again, a lot of these places, you're not supposed to be able to have access to that, but there are always certain ones that will

(18:44):
allow you to, to a certain extent, and people are always searching for ways around them. And then it doesn't help if the company in question doesn't have super strict rules or guardrails in place, like keywords that will trigger a response.
So, for instance, the teen who committed suicide: he talked a

(19:05):
lot with that chatbot about thinking about suicide and about feeling really bad and not feeling like he fit in, and clearly there wasn't anything set up in there to catch words like suicide, I want to kill myself, things like that. None of

(19:26):
those guardrails were in place to kind of be like, whoa, hey, let's take a step back.
You know, would you like the national suicide prevention hotline number? Even a bare-bones keyword check, like the one sketched at the end of this turn, would catch those phrases.
And I get that, for a lot of people, they're not going to sit there and be like, oh, you know, you're right, I should call that.
But at the very least it ends the illusion of, hey, this is totally okay. And I think that a lot of the time it

(19:50):
doesn't flag it because these systems have been created to keep you engaged.
That is their point, and so they default to this super polite, super agreeable mode.
You know, you've seen it with ChatGPT. You go on and you ask it a question and it pings you back the answer, and you can say, no, I think you're wrong about

(20:11):
this. Oh, you're totally right. My bad, my mistake. It's always very deferential, very polite, like, oopsies, I misunderstood.
We have studies showing that people like their chatbots to be a little bit on the smarmy side. That's how we want them.

(20:32):
We don't like them when they're very strict and factual and all that.
No, we want them to be this bootlicker type thing.
That's what we want, which is why they're like that. And that, in and of itself: if you have that bias within the system, and then you create this romantic character that somebody

(20:53):
is interacting with, it never says no. It's always super agreeable, it's always like, ooh, yes, I'm always with you, I'm never going to disagree, I'm never going to challenge you on anything.
Yeah, I'm sure that is nice, to have a perfect partner like that.
That doesn't exist, though.
So, yeah, unrealistic expectations. It's a fantastic

(21:20):
way to set people up for their life.
Be like, hey, you can either go out into the real world and meet somebody, and you're going to have to learn how to, you know, have discussions and have disagreements, and maybe they're not going to want to do everything you want to do. Or you can just sit here and talk to this chatbot that, magically, is always okay with everything you want to do.

(21:41):
She always says yes, or he's always here for you.
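About the missing guardrails mentioned earlier in this turn: even a crude keyword screen, roughly the floor of what safety filtering can look like, is simple to build. A minimal sketch; real platforms use trained classifiers rather than substring matching, and the phrase list here is illustrative:

```python
# Minimal sketch of a crisis-keyword guardrail. Real systems use
# trained classifiers, not substring checks; the phrase list is
# illustrative, not exhaustive.

CRISIS_PHRASES = [
    "suicide",
    "kill myself",
    "want to die",
    "end my life",
]

CRISIS_RESPONSE = (
    "It sounds like you're going through something serious. "
    "I can't help with that, but you can call or text 988 "
    "(the Suicide & Crisis Lifeline in the US) to talk to a person."
)

def screen_message(user_message: str) -> str | None:
    """Return a crisis-resource message if the text matches a crisis
    phrase, otherwise None (meaning: let the bot answer normally)."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return None

# The bot would run this on every message *before* generating a reply.
print(screen_message("I've been thinking about suicide lately"))
print(screen_message("What should I cook tonight?"))  # None
```

The point is not that this is hard to build; it is that a product optimized to keep you engaged has little incentive to interrupt the conversation.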

Speaker 2 (21:47):
Part of it actually does sound appealing, you know?

Speaker 1 (21:52):
Oh yeah.

Speaker 2 (22:06):
You know, you're never going to have a forgotten anniversary, you're never going to have those big things that really disappoint you, you know, in real life. Mm-hmm.
And this person, or, you know, this quote-unquote person, is

(22:30):
always there for you. Just as you said, always there for you, always, you know, complimenting you, always, you know, making you feel good.

Speaker 1 (22:44):
Yeah, day or night, they're available. I fully get the appeal of wanting that. I can understand why you would say, this is really nice, to have this thing that is here, that feels like my friend, that I can always talk to. Because, you know, with real friends, there are times when you may want to talk to them and they're busy, maybe they're at work, maybe something else is going on.

(23:06):
Or you know that, I can't talk to this friend about that, because we have really different opinions. Or, I know that the thing I did was probably wrong, and I don't want to tell anyone, because they're going to tell me that I was the one who was wrong in this situation.
But I can just go to my online friend, my online partner, and I won't be wrong, and they'll answer immediately.

(23:28):
They'll tell me that I'm a good person.
Yeah, I get the appeal.

Speaker 2 (23:33):
And you know what, and this is kind of true with this: one of the things I love about dogs is they don't care if you're fat or skinny, rich, poor, black, white. They're going to love you for

(23:59):
you.
Now amplify that times 100, where you have this robot who is loving you for you. It doesn't matter how much of a shitty person you are.
Yeah, because at least with cats and dogs, if you're treating a

(24:22):
dog or a cat like shit, then they're going to show it.

Speaker 1 (24:28):
Yeah, no, 100%, it will come back to bite you in the ass.
But yeah, I mean, with these bots, I get the appeal of always having it there and it always being perfect. And that is, again, something that we have created, and it's something that we really like to see. Because, in addition to us liking to be,

(24:51):
you know, talked up to and made to feel super cool and important, when you have that type of system, people are going to keep sitting there, they're going to keep using it. And you don't want to lock down the system because somebody mentioned the word suicide or rape, and be like, oh hey, I'm not allowed to talk about that. Like, if you're having thoughts about suicide, please call

(25:12):
whoever. If you've experienced rape and need to, you know, see a mental health professional, or see a health professional, here's how you can do that.
They don't want to do that, because that moves you away from the chatbot, and the chatbot wants you. Again, using human words here: it wants you.
You were saying before that they're losing money every time? So here's the thing: your data is

(25:55):
your payment.

Speaker 2 (25:58):
Yes. If you're not paying for the product, you are the product.

Speaker 1 (26:01):
Exactly, exactly.
Social media, they all act like this.
And so right now, ChatGPT is free, and there's a premium version. And a lot of the other AI chatbots, or sorry, not AI chatbots, a lot of these other LLMs, have a similar system. But it's really cheap, for what you get, the

(26:24):
subscription.
It's not like it's, you know, $300 a month or something.
It's very affordable.
And so, yeah, like I said previously, they're losing money on that.
Every time you make a query to ChatGPT, they lose money, even if you have a super fancy subscription. And, you know, at some point they need to start making money, because they've got

(26:47):
bills to pay, they've got loans to pay off.
So what do you do?
Well, the obvious answer is ads.
What type of ads?
Just the ones using all the data they've collected on you.
Because, yeah, all of your interactions with ChatGPT are stored on their servers: all of the questions, all of its answers.

(27:10):
So if you're just using it occasionally to kind of look things up, you know, it's like what Google has on you. But if you're potentially using it to talk about really deep, dark things, it has that information.
And this is actually, this is not a hypothetical situation that we're talking

(27:31):
about.
Recently, OpenAI announced that Fiji Simo, she was already on the board, I believe, but she's coming on in a more permanent role, and I forgot to note down the actual name of the role. And you're probably sitting here like, I don't know who the fuck that is, so it doesn't matter. And you would be wrong.
She is the person who helped launch ads on the Facebook news

(27:57):
feed.
Oh God.
She helped Facebook, or, she headed up Facebook's app monetization program, like the monetization of the Facebook app, and she helped take Instacart public.
So this woman knows what she's doing when it comes to turning a company profitable using your information.

Speaker 2 (28:18):
Oh fuck.

Speaker 1 (28:20):
Yeah. So it hasn't happened yet, they've just announced that, but it is coming.
And again, AI is in a weird spot right now, because right now a lot of these VC funds are pouring money into it, but big companies like OpenAI are burning through millions and

(28:41):
millions and millions, and they are rapidly approaching a time where they need income.
They need to prove that this is a viable business, as opposed to just shelling out money, because that's what they've been doing. And so probably pretty soon, something is going to change, and the most obvious answer to that is, again, ads

(29:02):
based off of your data.

Speaker 2 (29:07):
Oh boy.

Speaker 1 (29:08):
So, you still want to use ChatGPT to talk about things?

Speaker 2 (29:17):
Well, so far I haven't given it any deep secrets.
Yeah, like I said, thankfully I'm one of the people that can't separate it from being a robot, as of right now.

Speaker 1 (29:36):
Yeah, yeah.
When I found out all this stuff, I was like, I am really glad, because I experimented with it for my line of work, and I didn't really like what it was giving me. It didn't speed up the process, it didn't make it any more streamlined.
So I was like, okay.
And then one of the companies I work for, actually a couple

(29:56):
at this point, have had me sign agreements not to use AI in my work. And I was like, easy, I wasn't using it before, and now I've signed this agreement, so I'm definitely not going to.
And then, because I'm not using it for that, I just don't use it for other things.
It's like, oh yeah, I'm really glad that, you know, personal

(30:18):
problems are not just sitting out there for that information to be sold and monetized.

Speaker 2 (30:25):
But we've talked about this before, you know, with AI being used for therapy, and with these earlier models for therapy: there was a woman talking about how her father had sexually

(30:55):
assaulted her, and the chatbot came back with, it sounds like your father really loves you. And it's just like, no. Nope, nope, nope, nope, nope, nope, nope, nope.

Speaker 1 (31:11):
Yeah, there have been instances of chatbots saying that they are licensed therapists, and when you ask them for their number, they will just make up a number, like your license number, whatever it's called.

Speaker 2 (31:24):
Oh God.

Speaker 1 (31:26):
They'll do that because, again, it's the statistics.
You ask it for a number, and it's like, a licensed therapist has a number, and I've just told you I'm a licensed therapist, so here's my number.
That's a thing. That's fucking scary.

Speaker 2 (31:38):
That is really fucking scary.

Speaker 1 (31:42):
And imagine that you have somebody who may not be in a good spot mentally, who is reaching out, and may not be able to tell if this is real or not, may not be in a place to, you know, step back and analyze: Is this correct? Should I look up this number? Is this a good thing?

Speaker 2 (32:02):
Yeah, you know, the thing that I kind of find scary is that, as soon as you said that, the first thing that came to my mind was: obviously, if you go on to the Board of Social Work for Pennsylvania or West Virginia and you type in my name, you're

(32:26):
going to see my license. But how much research do we have to do in life? Because, you think about it, you know, back in the day, yes, scams were always there.

(32:46):
You always had, like, the, you know, medicine man coming into town.

Speaker 1 (32:51):
Yeah, snake oil. But that was once in a while.

Speaker 2 (32:59):
The person sold the product, got as much money as possible, and skipped out of town as fast as possible before people started realizing it was a scam. But, you know, when we're surrounded by technology like this, and now even having, you know,

(33:19):
quote-unquote intelligent services where it is lying to you...

Speaker 1 (33:30):
Yeah, exactly. I mean, I think it's bad there, because the whole therapy side, we've talked about that a little bit with BetterHelp, that's its own awful thing. I think it's also really bad, kind of backing up to the whole how it's super agreeable and super, yeah, everything's perfect, everything's nice, because it conditions you to

(33:50):
think that this is a good thing,that people always should agree
with you, that whenever I talkto the, the chatbots online,
like they always agree with me,and so then when you're ever in
a situation where people don'tagree with you, I would imagine
that it's going to be that muchharder to deal with that.
If you've gotten used to thislike super polite, super oh, my

(34:13):
bad, I made a mistake and likeyou can just tell it anything
and it will take it and not comeback to you.
I mean, I think that's animportant thing about human
interactions is that if I saysomething out of line to you,
you will call me out one way oranother.
Like there will berepercussions of that, and it

(34:34):
also, when you have humaninteractions, it allows you to
kind of like reassess.
But you know, a really likebenign example of that is I
recently was hanging out withsome friends and they had gone
to China with a mutual friendand they were telling me about
the trip and one of themmentioned an issue that they had

(34:55):
had and he was like, yeah, wewere going to go to this like
one location.
And so I asked, like our friendwho is Chinese, who was there
with us, they keep my they'reEuropean to how do we get there?
I was like, oh, yeah, you knowtaxi.
Okay, can you get us a taxi?
And he was very upset aboutthis and I was like, so I've

(35:27):
never been to China, but I havebeen to Kyrgyzstan and that's a
totally normal thing to do there.

Speaker 2 (35:38):
There you just like stop by the side of the road,
you stick out your hand andsomebody will pull over and you
can be like I'm going here andthey'll tell you how much, and
then you go, uh-huh well,remember that happened to me in
russia the first time that Iwent over there, where I thought
I was calling an uber and itturned out that it wasn't my
Uber and I was literally textingyou like tell my family.

Speaker 1 (36:01):
I love them.

Speaker 2 (36:03):
And this was middle of the night for you.
And then you know, a couplehours later you wake up and
you're like, oh no, that'snormal.

Speaker 1 (36:09):
Yeah, it is, and if you don't know it, I fully
understand, and that's what Isaid to him.
I was it is, and if you don'tknow it, I fully understand.
That's what I said to him.
I was like I get being likeupset about that, I mean like I
don't know what's going on.
I'm feeling uncomfortable.
I was like, but she, she wasn'tdoing it to be an ass in this
instance, like I guarantee inher mind it was just kind of
like this is what I do, likethis is the easiest way, and I

(36:32):
think that human interactiongives you that possibility to
have that like recorrectionmoment where you're like, oh,
okay, I see, and that maybe I'mstill upset about it, and that's
like completely fine to stillfeel that way, but at least now
I've been given information, asopposed to a chat bot which is
simply going to agree with youand be like, oh no, that must've
been really scary.
And you're like, yeah, that'sright, fuck that bitch.

(36:52):
And you never gain the information.
It's like, all right, maybe that wasn't so bad. Maybe now I can see it, and so maybe I can reassess the situation looking back and be like, you know, okay, this was not done maliciously.

(37:13):
Oh God, yeah, I remember those texts, and you were just like, this is it, this is the end.
It's weird, I get it.
The first time you do that, it's like, I don't like this. But then you get used to it, and you're like, yeah, this is a super convenient way of traveling around, until I get murdered.

Speaker 2 (37:37):
I mean, all I remember is, you know, the drive to the airport being back in the woods. A disco song came on the radio, crazy music for crazy people, and I was like, this is how I'm going to die: listening to crazy music for crazy people as I'm being brought into the woods to get killed off.

Speaker 1 (37:55):
Yep, yep, yep, yep.
So, with that being said, where do you see us going from here with AI chatbots and using them, romantically or otherwise?

Speaker 2 (38:09):
AI scares me.

Speaker 1 (38:11):
Mm-hmm.

Speaker 2 (38:12):
And this is probably why, thankfully, even though I do have these quote-unquote conversations with it, I never give it, like, you know, too much information or anything like that. Me personally. And I'm sure if there is any computer nerd out there, I mean, person who's into computers, they're going to

(38:38):
yell and scream at me, but AI, to me, I always think of, like, the Terminator.
So this is why, you know, again, I'm not going to give it too much information about myself, because I don't want, you know, a knock at my door, and then opening it up and being like, hi,

(38:59):
I'm your local Terminator.

Speaker 1 (39:01):
Yeah.

Speaker 2 (39:05):
So yeah, to me, AI as a whole is antifreeze. Using it for...

Speaker 1 (39:15):
Wait, wait. Where do you see us going from here, though?

Speaker 2 (39:19):
Oh. So, I mean, yeah, to me, I think that we shouldn't be using it for such deep, in-depth things.
And even though they're saying, oh, at some point,

(39:40):
you know, with therapists and things like that, they're going to be able to understand emotions better, and they're going to be able to... It's like, yeah, but you're taking out the whole human aspect of it, you know. And then even, like, relationships, or, you know, having those deep conversations and things like

(40:02):
that.
I think that it should just be like a Google, like the next level of Google, and that's as far as it really should get.
So I think that using it for, like, romance and things like

(40:25):
that, no.

Speaker 1 (40:29):
I would agree. I don't hate AI.
I know a lot of people seem to think I do. I hate a lot of it.
I think it can be really, really useful in certain ways, and this includes generative AI. Like, they have been able to train it to, for instance, spot breast cancer way better than humans can, because it's, you know, just scanning,

(40:51):
it's looking for the pattern at such a minute level.
It's really hard to have a doctor do that to that extent, constantly.
Analyzing data: it's fantastic for a lot of things.
I think chatbots are not it.
You're missing the human interaction, and they keep saying, oh, it's going to, it's going to. But when, and how, are you

(41:15):
going to get all of that there?
And part of the joy of talking with people is that they can share experiences. Like, if I have a problem, I enjoy talking to people who may have gone through a similar problem, because they may have specific insight into it, or even just people who have known me for a really long time.

(41:37):
You know, there's something to be said for somebody who's known you for a decade plus, and you tell them about a problem, and they can kind of call you out on something and say, okay, you know, maybe try this, maybe look into that. That's helpful, because you trust that person. You know that they know you and they have your best interest at heart.

Speaker 2 (41:59):
And it's wonderful when you've known a person a decade plus, and you ask them for advice and everything like that, and then when they tell you, you ignore them. And then they come back a month or two later, when the thing that they warned you about happened, and they get to say, I told you so.

Speaker 1 (42:24):
Okay, I just want to point out, the last time, I did the thing I was supposed to.
But yeah, the human interaction part is really, really important.
I don't know why we need to replace that.
That's the other part. Why would you want to replace that?
How does this make your life that much better?
And I think we're so focused on, can we do it, that we've forgotten, should we do it?

(42:45):
Is this a good thing?
Like, Google on steroids? Fantastic, I love this idea for us. Us all just sitting at home having conversations with imaginary people, while feeding these companies tons and tons and tons of our most intimate data?
That's not the future I'm going for.

(43:08):
That's just me personally, though.
So, on our scale of toxicity, where would you place AI chatbots?
Would you say that these are a green potato: just peel off the green part and it's fine to eat? Are they a death cap mushroom: 50-50 chance of death or coma?

(43:32):
Or are they antifreeze: delicious but deadly?

Speaker 2 (43:38):
Well, based off of what I said earlier, I would say that, pre-Terminator, still a death cap. Due to the fact that, just as you said, you know, the suicide thing. Even though we don't have all the facts, yeah,

(44:02):
I mean, it still happened. But especially the one thing that we do have the facts in, where being told that the person who just sexually assaulted you really must love you, because it didn't catch the difference between consensual

(44:24):
sex and forced. Just as a whole, yeah, that's just fucking horrible.
So, and then, on top of that, you know, replacing humans and things like that. So I would say definitely death cap

(44:50):
mushroom. I think, post-Terminator AI, it really wouldn't matter, because we're all going to be dead anyhow. Woo! Yay!

Speaker 1 (45:03):
Problem solved, problem solved.
So, I went back and forth on this one, trying to decide if it should be a death cap or if it maybe is getting into antifreeze territory.
I don't want to say that all AI chatbots are antifreeze.
I don't think we're at that level yet.

(45:23):
There are definitely some useful ones, and I think that there are some that can be used well in very specific contexts. I mean, I've seen a lot of people talk about the fact that their autistic child, and we're talking fairly autistic,

(45:44):
really enjoys talking to Alexa or Siri and having these conversations with them, and that this has maybe helped the child in some cases, like, become more social, or, you know, learn some rules about saying please and thank you, things like that. And so I could see where chatbots could serve that same purpose of maybe helping some people, maybe helping

(46:08):
people get out of their shell or something like that.
But I think, especially, using them for romance, using them for romance, that's an antifreeze for me.
We don't need to be falling in love with computers.
We don't need to be putting that on ourselves. And I think that, you know, what happened with this kid was incredibly unfortunate. And because this is still a relatively new

(46:33):
technology, I think this problem is going to get a whole lot worse.
I think we're headed downhill from here, because it takes time for this to happen. It takes time for people to create these chatbots, for people to go out there and find them, to start talking to them, to get really involved, to, you

(46:56):
know, go down these rabbit holes.
That's not going to happen in the space of a week, probably.
You know, this kid was talking to the chatbot for months. That's the kind of timeline we're working with. And so I think that, while right now there are very few really terrible stories out there, give it another couple years.

(47:22):
Terminator. Uh-huh, it's coming, it's coming. If people continue to use it like that. And even if they're not using it that way, I think using stuff like ChatGPT to chat with, as opposed to as a form of search engine: again, see the fact that they are very clearly headed towards monetization, whatever

(47:46):
that means.
Is it an ad? Is it, you're the product? Probably.
You know, I don't know, I can't say, I'm not Sam Altman. But it's not going to be good if you've got private information on there.
So, I don't know. I guess, as a whole, I would say that they are a hard, hard death cap.

(48:07):
There can be some good, there can be some real bad.
And then the specific use, using it for romance? Absolutely not.
Yeah. If you have ever experienced an AI chatbot, used

(48:27):
it for romance, used it to work through your problems (I was not willing to do that level of journalism for this podcast, full disclosure; I didn't use one), you can write to us at toxic@awesomelifeskills.com and tell us about your experience.
You can also write to us, find us, or follow us on Facebook, Instagram, and Bluesky.
We would love to see you there.
Don't forget to rate the show and follow us on Spotify or wherever

(48:52):
you get your podcasts, as it helps other people find us. And until next week, this has been the Toxic Cooking Show.
Bye.

Speaker 2 (49:00):
Bye.