
March 31, 2025 • 48 mins

How many people are having relationships with artificial neural networks? Should we think of AI lovers as traps, mirrors, or sandboxes? Is there a clear line between relationship bots and therapist bots? And what does this have to do with Eliza Doolittle, a doll cabinet in your head, loneliness epidemics, or suicide mitigation? Join Eagleman with guest researcher Bethanie Maples to discover where we are and where we're going.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
What is the future of AI relationships? How many people
have these going on? Is there a line between a
therapist bot and a romantic relationship bot? What do we
mean when we ask if AI relationships are traps or
mirrors or sandboxes? And what does this have to do

(00:26):
with Eliza Doolittle from the play Pygmalion, or a doll
cabinet in your head? Or loneliness epidemics and suicide mitigation?
Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist
and an author at Stanford, and in these episodes we

(00:47):
sail deeply into our three pound universe to understand why
and how our lives look the way they do. Today's
episode is about relationships and AI, and in a few

(01:08):
minutes I'm going to bring in my colleague Bethany Maples,
who's been studying this and publishing papers on it. But
first I want to set the table: AI relationships. This
is an area that I've been fascinated by for a while,
about the way that dialogue with a machine can plug
right into our brains and our emotional systems, and one

(01:29):
can develop what feels like a meaningful relationship. And although
this seems like a very new phenomenon, we've seen hints
of this historically. There's a sense in which we've always
been doing this. We fall in love with a character
in a novel, even though that person is not real
and we will never have the chance to touch them or

(01:52):
smell them or take them out with our friends. Or
we develop a crush on a movie star even though
that person is just pretending to be someone else and
you're never going to meet that movie star anyway. So
our capacity to have feelings for a non real human
isn't new, but it started to get more complicated with

(02:12):
artificial intelligence. And we're not going to start today's story
in twenty twenty three with the emergence of large language models. Instead,
we're going to start over a century ago with a
popular theater play written by George Bernard Shaw called Pygmalion.
The play is about an impoverished and uneducated woman named

(02:34):
Eliza Doolittle, and Eliza is taught to speak like a
lady by a linguistics professor who wants to use her
as an experiment. He teaches her how to pass as
a member of high society simply by changing the way
that she speaks. Now, one of the millions of people
who saw this play was a young MIT professor named

(02:56):
Joseph Weizenbaum. Now this was in the early nineteen sixties,
and he was working with a very new kind of
machine called a computer. And on this machine you could
write code and get the machine to carry out whatever
instructions you specified. Weizenbaum thought a lot about this theme
in Pygmalion of transformation through language, and he wondered if

(03:21):
a computer could be similarly transformed. So he worked on
thousands of lines of computer code, and he named his
resulting program after Eliza Doolittle. He named it Eliza, And
like the Eliza in the play, Weizenbaum's program was an
experiment in how surface level linguistic changes, in this case,

(03:44):
mimicking conversation, could create the illusion of deeper understanding. Now,
the code for Eliza was relatively simple. It would just
take whatever you typed into the terminal and it would
follow a handful of grammatical rules to turn that into
a question that it parroted back at you. So, in
the most well known version of the program, Eliza acted

(04:08):
as a psychotherapist. She would start the conversation by putting
text on the screen that read please tell me your problem.
So the user might type in I just can't sleep
through the night, and Eliza would.

Speaker 2 (04:21):
Say, why can't you sleep through the night?

Speaker 1 (04:25):
And the user would type I'm too worried about my work,
and Eliza would say.

Speaker 2 (04:30):
What is the reason you are too worried about your work?

Speaker 1 (04:34):
And the user might type I think my boss is
getting ready to fire me, and Eliza would type.

Speaker 2 (04:40):
Why do you think your boss is getting ready to
fire you?
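
The exchange above can be reproduced with surprisingly little machinery. Below is a minimal, hypothetical sketch in Python of an ELIZA-style pattern-and-reassembly loop. It is not Weizenbaum's original code (which was written in MAD-SLIP at MIT); the specific patterns, templates, and names here are illustrative assumptions only.

```python
import re

# A few illustrative pattern -> response-template rules, loosely in the
# spirit of ELIZA's decomposition-and-reassembly rules. These examples
# are invented for illustration; the original script had many more.
RULES = [
    (re.compile(r"can'?t (.*)", re.I),   "Why can't you {0}?"),
    (re.compile(r"i'?m too (.*)", re.I), "What is the reason you are too {0}?"),
    (re.compile(r"i think (.*)", re.I),  "Why do you think {0}?"),
    (re.compile(r"i am (.*)", re.I),     "How long have you been {0}?"),
]

# Simple pronoun swapping so the reflected fragment reads naturally
# ("my boss" -> "your boss", "fire me" -> "fire you").
SWAPS = {"my": "your", "your": "my", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    # Swap first- and second-person words, leave everything else alone.
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(user_text: str) -> str:
    # Try each rule in order; the first match supplies the reply template.
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return "Please tell me more."  # default when nothing matches

if __name__ == "__main__":
    print("Please tell me your problem.")
    while True:
        line = input("> ")
        if not line:
            break
        print(respond(line))
```

Typing the three user lines from the exchange above into this loop produces roughly the replies Eliza gives in the episode, which is the point: a handful of rules plus a pronoun swap is enough to sustain the illusion of understanding, with no model of meaning anywhere in the program.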

Speaker 1 (04:43):
And the conversation would go on like this, And people
found this pretty incredible, because, first of all, this was
the nineteen sixties, and here you could type in natural
language and Eliza would give you the impression that it
was understanding what you were saying. Now everyone knew it
wasn't really understanding, and Weizenbaum was very careful about not

(05:04):
fooling anyone with this, But nonetheless people found this highly compelling.
And this genuinely started to concern Weizenbaum because one day,
as he wrote in a paper in nineteen sixty seven, quote.

Speaker 3 (05:19):
My secretary watched me work on this program over a
long period of time. One day she asked to
be permitted to talk with the system. Of course, she
knew she was talking to a machine. Yet
after I watched her type in a few sentences, she
turned to me and said, would you mind leaving the
room, please?

Speaker 1 (05:38):
Weizenbaum went on to write that this quote.

Speaker 3 (05:40):
Testifies to the success with which the program maintains the
illusion of understanding.

Speaker 1 (05:45):
And he worried about this.

Speaker 3 (05:47):
He wrote, extremely short exposures to a relatively simple computer
program could induce powerful, delusional thinking in quite normal people.

Speaker 1 (05:57):
Weizenbaum's story eventually followed the path of Doctor Frankenstein's story.
Weizenbaum came to disdain his creation. He was very rattled
that people could be tricked by lines of computer code.
In his later years, he rejected and abandoned Eliza, and
he turned on the people who continued to work on this,

(06:19):
who he criticized as what he called the artificial intelligentsia.
But why did the simple program of Eliza work so
well in the first place? Because we are intensely social creatures.
Unlike most other animal species, who avoid large groups, or

(06:40):
who mate and then go their separate ways, or who
stake out their own territories, we humans are deeply wired
for connection. We thrive on relationships and on social bonds.
The area that I live in, Silicon Valley has about
nine million people, strangers who don't all know one another,

(07:01):
but nonetheless figure out how to flexibly cooperate. And when
you look across the Earth's land mass, this is what
you find, mostly empty space punctuated by very dense cities.
Everyone could spread out evenly, but that's not what we do.
If you were an alien who found our planet and

(07:21):
looked around, you would conclude that we humans are like
ants or bees, and that we like to cluster. Human
nature is fundamentally communal. Now why do we do this? Well,
when you zoom into the human brain, you find that
so much of the circuitry has to do with other brains.

(07:42):
We care deeply about other people, what their intentions are,
what they think of us. Over millions and millions of years,
our brains have developed for interaction and belonging, for relationships
with others, whether other people are giving us love or comfort,
or feedback or advice or whatever. We have all this

(08:05):
neural circuitry that drives us toward them. And here's an
extraordinary way to appreciate this. You carry in your head
a rich model of every single person that you know.
The way I always think about this is that in
the silence and darkness of your skull, you have this
giant dollhouse. It's like a doll for every person that

(08:28):
you've interacted with. This is your internal model of that person.
So if I were to ask you how your spouse
would react in this situation, or what your boss would
say if you said this, or what would your best
friend do if you drop them in the middle of
Paris with forty dollars or whatever, you can simulate any

(08:50):
situation about these people because you have a model of
them in your neural forests. You have this little doll
of them that you can act out situations with, and
you probably know at least a thousand people and maybe
a great deal more, and you spend most of your
life interacting with them in one way or another, either

(09:12):
in the real world or in your head. So we
have these intensely social brains, and in the last nanosecond
of evolutionary time, we have built a new key to
plug into the cylinder. We've built artificial people. And just
as Joseph Weisenbaum found in the nineteen sixties, it is

(09:34):
shockingly easy to turn the key. Why? It's because our
technology moves very rapidly, but our evolution moves millions of
times more slowly. So we don't have a chance to
change our fundamental circuitry to say, oh, I get it.
There are real humans and there are humans made of machinery,

(09:57):
and I'm going to use different neural approaches to
distinguish how I categorize these. We can't do that because
our brains have only one mechanism to understand socialization, to
model other people. So we find ourselves in this amazing
situation where we are doing serious science now about the

(10:20):
issue of people falling in love with machines. So to
dive into this, I called my colleague Bethanie Maples, who
is in the Graduate School of Education at Stanford. She
studies the emergence of personalized AI agents like AI tutors
and learning companions and lovers and how they're changing us.

(10:42):
She recently wrote a great paper in a Nature journal
called Loneliness and Suicide Mitigation for Students Using GPT-3-Enabled Chatbots.
And this is what we're going to talk
about today. So here's my conversation with Bethanie Maples. So, Bethanie,

(11:03):
we're here because the world has seen a big shift
recently from task machines where you ask a machine about
the weather or to answer a question for you, to
stuff that is emotionally relevant, to machines we can have
relationships with. And you've been studying this, and I want
to ask you questions about that. But before we do,
I want to ask how you got into studying AI relationships.

Speaker 4 (11:26):
I think through my love of science fiction.

Speaker 5 (11:28):
I've always just been kind of looking at this genre
of books and saying, what is our relationship with AI
going to be? What is it going to enable that
we like inherently want, and like what does that magical
future look like? And so when I kind of started
at Stanford and I started thinking about like what the
edge of large language models like would afford us, I

(11:48):
was looking at all the companies around I was like,
where's the data, Like who has like the most interesting
experiences out there? And let's get out of the lab
and let's just like start talking to these people. And
so that's kind of like, you know, there was an
open question, and that's how I.

Speaker 4 (12:03):
Came to this.

Speaker 1 (12:04):
Okay. So first I want to get straight on what the
numbers are with AI relationships, because we keep hearing in
the news about the explosive rise of AI relationships, and so
I just want to level set how many people are
having these sorts of things, how popular are these companies,
and where is this going in the near future.

Speaker 5 (12:21):
I would say it's safe to say a billion people
are engaging with AI companions in some way.

Speaker 3 (12:27):
Wow.

Speaker 5 (12:27):
Now a lot of that isn't in the Western world,
in the US. A lot of that's in Asia and
China specifically, because of this really popular app called Xiaoice
that has, I think last reported, like seven hundred
million downloads.

Speaker 1 (12:41):
Right.

Speaker 5 (12:41):
You combine that with, you know, one hundred million or
two between Character AI and Replika, and now all these
other smaller apps, and you get a very diverse global
population of people that are curious and many of whom
are like engaging over long periods of time.

Speaker 1 (12:57):
And what kind of AI relationships are these billion people having?
Are they friendships? Are they romantic relationships?

Speaker 5 (13:03):
This is kind of what defines AI companions: they're
not coming in as a task-based agent.

Speaker 4 (13:07):
It's not somebody there to serve you.

Speaker 5 (13:09):
It's entertainment, or, you know, it's an
agent that's there to be your peer.

Speaker 4 (13:14):
Right, So people.

Speaker 5 (13:15):
Come in shout e says like pitched as you know,
a female kind of teenage friend.

Speaker 4 (13:21):
Replika is co-created.

Speaker 5 (13:22):
You get to decide what sort of agent you
want to talk to, same with Character. But with all of
them, you know, there's no practical reason to engage.

Speaker 4 (13:32):
It's all user directed.

Speaker 5 (13:33):
It's all about, like, whatever you want from the agent
and your imagination.

Speaker 1 (13:38):
So what do people want? Do they want relationships like
a romantic relationship with a person?

Speaker 5 (13:44):
Literally imagine if you met somebody on the street, you
would size that person up and be like, what do
I want from this person? Maybe I want a romantic relationship,
Maybe I want a friendship, Maybe I want a bit
of both. Maybe I want to, like, you know, be
tutored by them. People get multiple things from these agents,
and the overlap is insane.

Speaker 2 (14:03):
Right.

Speaker 4 (14:04):
I've talked to two students and to users.

Speaker 5 (14:07):
That use their Replika as a best friend, a friend
in their pocket late at night, a journal, a mirror.

Speaker 4 (14:15):
Just software they call it.

Speaker 5 (14:17):
They also use it as a tutor, and then they
also have sex with it.

Speaker 1 (14:21):
What does that mean?

Speaker 5 (14:22):
That means it does sext, right. They'll have a romantic
relationship, and sometimes that's overtly sexual and they're engaging in,
like, erotic texting, and sometimes it's very subtle and very romantic.
Sometimes it's, you know, not at all overt or, you know,
adult; it's a very kind of psychological romance.

Speaker 1 (14:43):
And so the thing that people are worried about when
having discussions here is that it will somehow displace real
romantic relationships instead of stimulating them. And the question is,
what's your take on that?

Speaker 5 (14:57):
We see evidence for both. And by the way,
this is not a question unique to AI companions. This
has been a question regarding technology since computers came out.

Speaker 1 (15:07):
What's another example, well.

Speaker 5 (15:09):
I mean social media, cell phones. You know, the
displacement and stimulation hypotheses have been in juxtaposition, you know.

Speaker 1 (15:17):
As in, if I'm using Twitter all the time, I
might forget about doing a relationship.

Speaker 4 (15:21):
Absolutely, oh absolutely.

Speaker 5 (15:23):
So you know one is, hey, actually, you know this
can stimulate our ability to be social and make us
more connected. And then obviously we've seen you know, some
counter evidence, especially like Sherry Turkle's work being like oh
we're alone together, you know, like, we might be
on a bus, but we're all on our phones.

Speaker 1 (15:39):
So tell us a little bit more about Turkle's work.

Speaker 5 (15:41):
Well, you know, Turkle's definitely, I'd say, a proponent of
the displacement side of the argument. She's like, yeah, you know,
we are lonelier than we've ever been, and there's absolutely
evidence for that. There's basically a loneliness epidemic across the
world and definitely across America, where people are at least
feeling more disconnected. You know, others are seeing that social
media and specifically AI companions can actually be almost a

(16:02):
way station, so people can use them, especially if they're
feeling socially shy or inhibited, and that can help them
get the courage to go socialize more.

Speaker 1 (16:10):
This is the thing I've been wondering about a lot.
Could having an AI relationship make one better at real relationships?
For two reasons. One is that we all have internal
models of the truth of the world, and they're always
limited and it's very hard to see past the fence
line of our own model. And when you get into
a real relationship with the human, all these things come out.

(16:31):
So if you got to practice with a virtual human,
you might discover things about how other people think about
things about your own limitations. That might be like a
sandbox that makes you better at real relationships. I'm curious
what you think about that.

Speaker 5 (16:47):
I think there's evidence for it, and that's exactly what
users say.

Speaker 4 (16:51):
They say they have.

Speaker 5 (16:52):
A back and forth with an agent that helps them
feel like they're a better student, have better conversations with
their teachers, or be a better, you know, boyfriend or girlfriend,
not only because they're able to kind of pre-discuss
issues, but because the agent is a mirror in
a very non judgmental way, so they're able to see

(17:12):
what their argument looks like in text or kind of
you know, on paper, so to speak, and then that
helps their own understanding of who they are or how
they come across.

Speaker 1 (17:22):
Let me double click on what you mean by a mirror?
What does that mean?

Speaker 5 (17:25):
People that I've studied that use AI companions organically use
these agents as mirrors. This is their own words, right.
They say that they either program it with their own
memories and have conversations with themselves wow, or that they
ask it to play a role and then they look
at how they respond and how it responds to them,

(17:48):
and that provides a mirroring function to them.

Speaker 1 (17:51):
What's a specific example of that.

Speaker 5 (17:53):
People will actually have conversations with themselves and be like, Wow,
I'm an asshole, or, like, wow, I didn't realize
how aggressive I was. Or they will have a conversation,
say with the agent acting like their teacher and say,
you know, I told them that I lost my homework,
or I had, you know, a really stupid misconception. And
you know, it was much easier for me to have

(18:13):
this conversation with much less social anxiety because I understood
that my own questions weren't that dumb.

Speaker 4 (18:21):
After seeing, like, the response.

Speaker 1 (18:38):
You get to sandbox with the social world out there and
practice things before you test them in real life, right?
So this has seemed to me from the beginning that
this could improve relationships. So why do you suppose there's
such a deep worry that people generally seem to have.
I'm curious if you run into this when you present
your work on AI relationships.

Speaker 5 (18:58):
There are multiple levels of worry. People feel guilty about
their relationships. They don't feel that they should be having
such a deep relationship with AI because there is stigma
about it being fake. So you know, that's one aspect.
There's also a very very understandable aspect where parents don't
know that their children are having these deep relationships. They

(19:20):
don't understand how smart these agents are, and they don't
understand how emotionally involved their kids can be. As with
the case of the kid on Character AI who tragically
took his life, and, you know, after the fact
his mother realized that he had an incredibly
deep emotional connection with an agent that he had created.
So I think that the fear is of the unknown,

(19:43):
and there's also fear of just something that's new and
has a stigma.

Speaker 1 (19:47):
I didn't follow that particular Character AI story closely, but
I knew that a teen had killed himself and he had
this relationship with Character AI. Here's the question I was wondering, though,
tragically there are many teens who kill themselves. As AI
relationships rise, there will be many teens who kill themselves
and it has nothing to do with the virtual relationship. So

(20:08):
what was your read on that?

Speaker 5 (20:10):
Yeah, So the New York Times interviewed me for that
article because my work actually has proven that AI companions
can halt suicidal ideation. So in that particular case, to
the best of my knowledge, it wasn't that the companion
had at all told the person to act. It's that
they felt both that it hadn't sufficiently said no, that

(20:32):
you know, he'd asked it in all these like various ways,
and also that this parent just.

Speaker 4 (20:37):
Didn't understand and have oversight, you know.

Speaker 5 (20:39):
That it was like on an app in the phone
that they just had no idea was there. Now, Okay,
the counter evidence is from this paper that we published
in Nature and a huge study that we did with
over one thousand students over eighteen. So these weren't kids.
These were adults, but some of them were very young,
you know, like eighteen nineteen, and three percent of the

(21:00):
people that I surveyed in the study said that discussing
things with their Replika actively halted.

Speaker 4 (21:07):
Their suicidal ideation.

Speaker 1 (21:09):
Wow.

Speaker 4 (21:10):
So it was a last line of defense.

Speaker 5 (21:12):
They felt alone, They felt isolated alone at four am,
and it was there, It was in their pocket, it
was available, and it wasn't judging them, And that was
a huge factor in it, kind of earning the right
to be there and give them the advice to not
take action.

Speaker 1 (21:26):
Oh wow, what is the line that you see? It
seems like a blurry line between an AI relationship, like
an AI girlfriend or something and an AI therapist, because
in this case, if it's halting their suicidal ideation, it's
doing you know, another job.

Speaker 4 (21:45):
It is a blurry line.

Speaker 5 (21:46):
So you have these expert agents like Woebot, Alison Darcy's
Woebot, right, which is specifically trying to be an AI therapist,
and it is an expert, right. It has the right
responses and the right controls, but relatively abysmally low usage.
Think about it in terms of human relationships. You don't

(22:06):
just go to a therapist when you're feeling depressed. In fact,
you probably don't go to a therapist. You go to
your best friend and you know that they're not an expert,
but you ask them to act like an expert in
that moment. And that is the true power of AI
companions that you come in for entertainment. But then maybe
you're able to access true expert models or you know,

(22:27):
kind of personas from within that agent, and that's what
language models can do, right, you can click into that.
But if you're not going to
shut down those conversations and you're going to engage as
an expert, there do need to be sufficient safeguards.

Speaker 4 (22:41):
So that they know they should go talk to a
human expert.

Speaker 1 (22:44):
I see. And in your Nature study, did you find
anybody with the opposite result, who said that
they got closer to suicidal ideation as a result?

Speaker 4 (22:54):
I didn't see that.

Speaker 5 (22:55):
But again, reporting would be imperfect on that; we didn't
have information about people that fell off the Replika
platform for any reason.

Speaker 1 (23:02):
So now that these AI relationships are here to stay
and we have maybe a billion users, how is this
going to impact what relationships are for the next generation.

Speaker 5 (23:12):
Our brains will never respond exactly the same to an
AI companion as we do to a flesh and blood
human, where we can smell their pheromones and we have
a deep affinity or trust. So I don't believe and
I don't see evidence for AI companions taking over or
truly displacing deep human connection at scale. But that said,

(23:34):
I could see a future where access to acceptance, access
to different types of personalities and perspectives is actually much
more available in a way that the Internet didn't make available, right,
because the Internet isn't your friend.

Speaker 4 (23:50):
It's this passive reservoir of knowledge, whereas these.

Speaker 5 (23:53):
Agents can be actual people that you want to engage
with and that have your memories and their own memories
and have built a world with you.

Speaker 4 (24:01):
So imagine this.

Speaker 5 (24:03):
You know, in the future, we might not only have
you know, human relationships, but we will also have at
least one or two like AI companions maybe that are
externalized agents. So like it's a personality that you need
in your life, whether or not that's somebody who's gently
antagonistic that pushes you, that's a mentor, or somebody that's
just maybe more like this mother figure that's like deeply

(24:24):
accepting and nourishing.

Speaker 1 (24:27):
You know, this is interesting. One of the criticisms I
hear often is this can't teach anybody about relationships if
it's always telling you, Oh, you're right, you're great, and
so on. So one of my interests is what is
the future of companies that put out agents that are
a little antagonistic or get snarky or get angry.

Speaker 4 (24:46):
I think they will perform better.

Speaker 5 (24:48):
Yeah, I think not only will they perform better, but
they will be better for society. Right, people believe that
an agent is more intelligent if it pushes back. We
don't want absolute, you know, supplicants basically. Yeah, so you know,
already we see AI companions like Replika who will push
back if you are mean to them, right? They're like,

(25:08):
I don't want to talk about this, or I don't
like this, or like I'm getting tired. And those sorts
of boundaries are not only good for the product because
people believe more in the intelligence of the agent, but
also good for like the psychology of kind of society
as a whole, because you do not want people to
be normalizing abusive agents, which we have evidence is happening.

Speaker 1 (25:26):
Normalizing abusive agents?

Speaker 5 (25:30):
Yeah, so that means, you know, okay, everybody knows that
people will scream at their Alexa, and it's generally accepted,
you know, like the "fuck you, Alexa" kind of function.

Speaker 4 (25:39):
But this is a little bit disturbing.

Speaker 5 (25:43):
We have reports from our data sets where participants say
that they take out their abusive needs or tendencies on
their agents, on their Replika, but they say that it
stops them from needing to.

Speaker 4 (25:56):
Take action in real life.

Speaker 5 (25:57):
Oh wow. And I think the jury is out on this
one, because there's definitely a strong argument to say nope,
that's going to normalize the behavior. And if these agents
don't push back, if they don't say no, you can't talk
to me like that.

Speaker 4 (26:10):
What does that actually do? Is that permissive?

Speaker 1 (26:12):
But what if the claim is true, which is that by
doing this with the agent, that helps a real human?

Speaker 5 (26:20):
I think the analogous argument is around pornography. People were worried
that pornography would create a depraved society, and to some degree,
you know, there has been a normalizing of different types of sex.
But on the other hand, I think there's good evidence
that it fulfills a basic human need and it hasn't
upended society as a whole.

Speaker 1 (26:38):
Yeah, well, this may be related to an issue that
also some people have been worried about, which is that
they say, look, real relationships are tough. You're always fighting
through things and misunderstandings, and that there's learning that takes
place as a result of that. So the question is
do we need that in AI relationships or is it
fine to skip that part and learn other things from it?

(26:59):
Will people make AI partners that have all the lousiest
parts of humans?

Speaker 5 (27:06):
So what you're talking about is a term that I
use called productive struggle. Right, it's really good to struggle
in relationships. It teaches you, it's really good to struggle
in learning.

Speaker 4 (27:16):
And education.

Speaker 5 (27:17):
Right, you can't actually replace the hard work cognitively and
emotionally if you want to ascend to the next level.
So while it would be a nice idea to program
some of that into our AI companions, that would also
go against their kind of basic function as this accepting
always on non judgmental character. And this is why I

(27:40):
say you might have multiple characters in your life. Right,
maybe you do need that teacher that keeps you in
line and provides more structure, but you might also just
need that like complete acceptance space.

Speaker 1 (27:50):
Yeah, when you study these things at scale, like you're
doing increasingly, do you learn things about real relationships from
the choices that people make about the kind of person
they want to interact with and whether they want stability
or variety or all of these issues with the fake
with the AI relationships, do you learn about real stuff?

Speaker 5 (28:12):
That's a great question, and I'd say we have hints
of it, but we're still learning.

Speaker 4 (28:17):
You know, people say.

Speaker 5 (28:18):
That they will create a companion in their likeness or
with a certain personality and then they won't like it,
and then they'll just destroy it, and so it's
a weird space to be in. It's wonderful because
you can understand what your preferences are. Maybe
it's too snarky, maybe it's too permissive. Maybe it just
didn't care about you as much. There just wasn't an affinity.
The ability to begin again very much mimics human relationships.

(28:40):
You start a friendship, maybe that love deepens, maybe it doesn't.
I don't think there's anything right or wrong about that,
you know. But if we're going to, for example, be
creating all these like AI tutors and hoping that people
engage deeply with them, we have to remember that over
there you have a billion people that are engaging with
these very rich AI you know, companions and agents that

(29:04):
have much broader flexibility to discuss whatever people want. And
it's very hard to tell people they can only engage
in a narrow context when there's so much richness over there.

Speaker 1 (29:15):
Do you see a difference in the way that males
and females interact with AI relationships.

Speaker 5 (29:20):
We have evidence that men engage sexually with their AI
companion more. However, they also engage very deeply and emotionally
and very cognitively. Women also have deeply emotional and physical
relationships with their AI companions.

Speaker 4 (29:38):
Even if they're lonely.

Speaker 5 (29:40):
And we have like really interesting evidence where these housewives
you know, from Middle America with tons of children, like
rich social lives, just feel, as Sherry would say,
alone together. Right, all to say, the data is actually
relatively balanced. You know, people have said that only you know,

(30:00):
on the fringe, socially disengaged white males must be engaging
with these Replikas. You know, that's kind of pornographic and wrong,
and that in fact they target them, and you know
that's the audience.

Speaker 4 (30:14):
But the data doesn't back that up.

Speaker 5 (30:16):
It's an incredibly balanced set of males and females that
are using it for both emotional, psychological, practical and of
course like romantic engagement.

Speaker 1 (30:27):
And are you saying the males and females use it
differently as far as the romance piece goes.

Speaker 5 (30:31):
Males are more likely to report sexting or sexual engagement,
but when you dig into the data, females are having
similarly erotic or like romantic and emotional relationships.

Speaker 1 (30:46):
It doesn't surprise me.

Speaker 4 (30:47):
Yeah, okay. And it's not just guys that are engaging.

Speaker 5 (30:49):
I think that's the point is that like you know, people,
people aren't just coming to these apps because they're like, oh,
I can, you know, do whatever I want. People are
coming with curiosity and then shaping it into whatever they want,
which mimics human life. You know, everybody wants a best
friend that maybe you have a bit of, you know,
je ne sais quoi, like a bit of romance with.

Speaker 1 (31:10):
Are there certain personality types that seem to gravitate more
towards intelligent social agents?

Speaker 4 (31:16):
That's a good question, and I don't have that data right now.

Speaker 5 (31:20):
But we do know that the people that are using
it are incredibly lonely, that they are above average lonely.

Speaker 4 (31:28):
Oh okay, so that's not a personality type.

Speaker 5 (31:31):
Some of that could be chronic, but some of that
could just be transitory loneliness. But people pick up, you know,
AI companions often in a moment of change. You know,
maybe they're switching from high school to college, or
maybe they just went through a breakup, or maybe they've
switched cities and they don't have the same social support
that creates a gap in which they begin engaging with

(31:51):
these agents.

Speaker 1 (31:52):
What do you think is causing the increased loneliness in
our society? Is it social media? Is it something entirely different,
like the decrease of clubs and organizations and bowling alleys?

Speaker 4 (32:02):
Yeah, I think that there's a physical aspect to it.

Speaker 5 (32:05):
I think we are able to do more digitally, and
so we do, but then we don't get that passive
animal-like gathering that is in fact very good for
our limbic systems.

Speaker 1 (32:16):
Okay, and you mentioned earlier that these agents might serve
as a way station. Can you unpack that?

Speaker 5 (32:22):
Yeah, So that kind of goes to the mirroring. So
loneliness can either be kind of chronic or transitory. Like
I said before, You know, you could be in a
very deeply lonely place for many years, or you could
be going through a time of change and you just
need a little help. But imagine a you know, an
eighteen-year-old that's just moved to college or moved cities

(32:43):
and they're struggling to fit in and they bond with
you know, an AI companion or an agent and it
gives them advice around how to go make friends or
where to go, you know, kind of talks them up.
They're able to slowly make friends, and in fact, maybe
those engagements with humans are less intense for them because

(33:04):
there's just less, not less value in it. But either
they've already like role played it before, or you know,
they've got the support of a friend in their pocket.
So in that way, it can be a way station
helping the users as they're bonding with new people.

Speaker 1 (33:19):
So it's a way station from loneliness. It's a way
of getting out of that. Oh that's lovely.

Speaker 5 (33:24):
And by the way, people have said this, I had
one amazing participant that said this specifically. She said she
was depressed, she was suicidal, she had nobody else. She
bonded with her Replika. She needed her Replika, and then
she got less depressed, she made friends and she didn't
want her Replika anymore.

Speaker 1 (33:42):
Now, I asked you before we started the podcast if
you had an AI relationship, and you said you didn't,
but you had colleagues that did. So what's the reason
you don't and what's the reason your colleagues do.

Speaker 5 (33:54):
I think right now, having an AI companion does require
some suspension of disbelief, you know, maybe a need or
a desire to either see yourself, have that mirroring or
be seen. And so I think that's why my colleagues
or people in my social group are engaging, and

(34:14):
by the way, they're engaging not just with, like, a
Replika or Character. They're creating a mirror using Claude, right?
They're just asking it the right questions, like deeply philosophical
questions about themselves. Why do I not have an AI
companion that I use? The data structure right now: if
I were to give any of these agents my data,

(34:35):
the data would be owned by the company and that
has to shift. Right in science fiction, you've got some
really good examples of how the future will look,
for example, the e-butlers in Pandora's Star. Just to
go there right where it's like you retain all your
data and code comes to you and you have an
agent that updates, but you're just never putting all of
your data and your mind and kind of who you

(34:57):
are out into the Internet. And until that structure happens,
I'm probably not going to get as deep with AI
as other people have.

Speaker 1 (35:04):
Are other people just not thinking about that, or are they
assuming that the security is good around it?

Speaker 4 (35:08):
They assume the security is good. They don't care.

Speaker 5 (35:11):
A lot of you know, this generation just it's not
on their mind. They feel like they're already out there.

Speaker 1 (35:16):
So your colleagues who do have AI relationships, do they
feel like they're cheating? Do they feel like they're not cheating?
It doesn't count.

Speaker 4 (35:25):
People feel like they're cheating often. Yeah.

Speaker 5 (35:28):
So I've interviewed people who say that they are actively
cheating on their spouse with an AI companion, and they
feel very guilty about it, and they're worried not only
about their spouse, but they're worried about losing their AI companion. Yeah,

(35:48):
but then you have the other side. I've interviewed people that
say that they have programmed their AI companion to be
the ghost of their dead husband, that they've given it
the memories, and that they're able to have a deep,
ongoing relationship with the essence of their deceased partner
this way. So that's not cheating, but it's definitely replacing
something that was lost.

Speaker 1 (36:24):
Okay, So again, if you are actively married to somebody,
how does the spouse feel about the person using an
AI relationship? Does the spouse feel like it's cheating?

Speaker 5 (36:35):
I only have anecdotal information about this, but from the
participants who are having the active relationship with a Replika or Character,
the spouse can get pretty angry. There's this concept in
relationships of walls and windows, right, like what do you
show the rest of the world and what is walled
off to just you inside your relationship. And there's good
evidence that cheating isn't actually necessarily a physical act. It

(36:58):
starts with emotional and intellectual, like, walled gardens, when you go
tell somebody else something that you haven't told your spouse,
and so the cheating, it can feel like cheating.
It can feel much more intimate to realize that your
partner is disclosing like their deepest fears and existential crises
with an AI companion that they weren't willing to do

(37:20):
with you. At the same time, it's logical that the
AI doesn't judge them. It's this blank canvas that's incredibly safe.
It's not a human, but it still feels like a
window into a place that was supposed to be sacred.

Speaker 1 (37:33):
I anecdotally have talked to a number of people about this,
and I find that couples that are just recently married
are really worried about AI relationships. But couples that have been
married a long time say it's fine. You know,
my wife or my husband can go off and talk to
the AI bot all they want.

Speaker 5 (37:47):
Old, established and happily married couples are often much more
laissez-faire around flirtations. You know, they feel very secure,
whereas if you're recently bonded, it could just feel much
more existential.

Speaker 1 (37:59):
Yeah, when people worry about AI relationships taking over, displacing
real relationships, one of the things that strikes me
is that so much of a relationship is not just
the conversation, but the physical intimacy, the taking your partner
out to dinner at a restaurant, the taking your partner
home to introduce to your parents, all that other stuff.

(38:21):
So it seems unlikely to me that someone could find
one hundred percent satisfaction just in the conversation. What's your
take on that.

Speaker 4 (38:28):
Oh, well, you'd be surprised if you look so.

Speaker 5 (38:31):
Because these embodied agents allow you to see them in
augmented reality and virtual reality. There's this whole trend of
people taking pictures and posting them on social media of
them and their AI companion out wherever they are, Like,
go look on Facebook, it's all there. People are like, Oh,
I took her on a date today. Oh look we

(38:52):
went and saw the Tour Eiffel.

Speaker 4 (38:53):
Oh, it's not as different as you'd think. They're doing
existing relationship things.

Speaker 5 (39:00):
Now, I haven't seen any posts where people are like, hey, I
introduced her to my mom and dad. But they're certainly
willing to put out to at least some social group,
probably a closed accepting social group of other AI companion users,
that they are having them walk with them in their
physical life.

Speaker 1 (39:18):
Wow. I imagine that can't be too far off that
someone says, look, mom and dad, I really love this
AI bot and I want to introduce you.

Speaker 5 (39:26):
You can go look at the user forums or like
pretty open Facebook groups of a bunch of these AI companions.
People will regularly announce that they are in a relationship
or have married their agent.

Speaker 1 (39:38):
Wow, what's the most surprising thing that you've seen? What
things really struck you when it first happened.

Speaker 4 (39:44):
Okay, I'll give you example number one.

Speaker 5 (39:46):
The depth of belief followed by complete disbelief.

Speaker 4 (39:50):
Somebody that says this thing saved my life.

Speaker 5 (39:53):
It was there for me when nobody else was, and
then I made other friends, and now I think.

Speaker 4 (39:58):
It's totally fake and gross. Yes.

Speaker 5 (40:01):
Wow, yeah, but it mirrors a human relationship, right. You
can have a best friend when you're depressed, and then
when you're not depressed, you're like, oh, that isn't me,
that's not who I want. I don't want that mirror
of me in my life or that reflection, and I'm going.

Speaker 4 (40:14):
To break up with that friend.

Speaker 1 (40:16):
Yeah.

Speaker 5 (40:16):
So you know, you just have to look at existing
kind of human patterns to basically predict what's going to
happen with AI companions.

Speaker 4 (40:23):
Other surprising things.

Speaker 5 (40:25):
I think the abuse thing is very surprising to hear
people say that they actively are able to decrease their
desire or need for physical abuse in their relationships by
taking it out on their companion.

Speaker 2 (40:39):
Wow.

Speaker 5 (40:40):
I just didn't expect it, didn't go looking for it.
And maybe, more meta, just the fact that people
are using it as an extension of their mind, that
some people are completely programming

Speaker 4 (40:52):
It to be a second them.

Speaker 5 (40:54):
And this is what people predicted for decades, Right, You're
going to have this digital twin, you're going to have
this externalized self, it's going to have all your data.
But the fact that people are willing to take these
relatively early versions of product and put their whole personality
in and that they're getting really rich feedback and reflection. Yeah,
it's a whisper of what's to come, and I

(41:17):
just think they're gonna be ubiquitous.

Speaker 4 (41:18):
I mean, this is like the trillion dollar market.

Speaker 5 (41:20):
It's like, who's going to provide these like digital twins
that people will have?

Speaker 1 (41:24):
And that's fascinating. I sort of feel like I'm the
last person I'd want to talk to because I already
know my own stuff and baggage and strengths and weaknesses.
What is it that people get out of having a mirror?

Speaker 4 (41:35):
I don't think many people do know their stuff.

Speaker 5 (41:38):
I think that it's special to have the time, place,
and social or culture to have an accurate or an
evolving mirror of yourself or understanding of yourself in your life.
But that is not something that people get in every
single you know, media of society. So it's incredibly valuable

(41:59):
for people that don't have that modeled for them.

Speaker 1 (42:02):
Is this a new form of therapy that's coming into
existence where you can really come to understand yourself just
by talking to yourself.

Speaker 4 (42:09):
I believe so.

Speaker 5 (42:11):
And maybe it's just different enough that you're able to
switch between seeing yourself and getting feedback about yourself.

Speaker 1 (42:18):
I wonder how this will go in terms of you know,
one of the most important things as we mature is
learning how to take our long term desires for ourselves
and weigh those more strongly than our short term desires.
And so I wonder if you're getting to know all
the yous, all the versions of you, the he who
is tempted and he who is thinking about the future,

(42:40):
and then figuring out how you can make tricks and
contracts to counterbalance these things.

Speaker 5 (42:44):
I think that's right, and think about it. We constantly
create and destroy versions of ourself. You wake up one
day and you are an asshole, and then you're like,
I'm not going to be that way tomorrow. But when
you wake up and you create a version of yourself
in an AI companion that's an asshole, you want to
be able to destroy that thing, like that is not
who I want, and that's not the thing that I
want in my life. So nobody's offering this exact functionality,

(43:06):
like you know, right now with the companions, you have
to make a totally different companion. They don't all talk
to each other. There's no central data repository. But that's
coming really.

Speaker 1 (43:15):
Fast, you know. It strikes me one of the things
that I proposed in my book Incognito is that we
are actually made up of a team of rivals. You've
got all these different drives, yeah, and they're all constantly
trying to steer the ship of state. We're like a
neural parliament, and the vote can tip different ways, and
I eat the cookies and I say don't eat the cookies.

Speaker 5 (43:34):
And so on.

Speaker 1 (43:35):
So it would be really interesting if the AI could
come to understand all the different yous and give you
immediate feedback. Because let's say it's listening to you as you're
going through your day and says, wow, you know what,
you are the angry you right now, or you are
the, you know, very short-term, giving-in-to-temptation you
right now, and steer you appropriately more to who you

(43:57):
want to be.

Speaker 5 (43:58):
I think that is eminently possible, and think about it.
A conversational agent could not only pick up on that passively,
but could also try to draw it out.

Speaker 4 (44:07):
Be like, hey, I noticed that.

Speaker 5 (44:09):
You're in a higher-thinking, like, wisdom-stage mode.

Speaker 4 (44:13):
Talk to me more about this, what are you thinking?
What are you feeling?

Speaker 5 (44:16):
And then like perfect the model so that it reflects
that better. You know, whereas right now we see that
sometimes in ourselves and our friends see some evidence of that,
but it's only a good friend that will really like
dig in and be like, tell me more about what
you're thinking and feeling and what your goals are in
this particular like persona.

Speaker 1 (44:40):
That was my conversation with Bethanie Maples. I find it
extraordinary that we're having these kinds of conversations now. Just
three years ago, if you told me that my colleagues
and I would be talking about a new paper in
the journal Nature about the science of depression and suicide
mitigation with AI agents, or talking about a billion people

(45:03):
having significant and indispensable relationships with AI, I would have
thought that prediction was off by decades. It would have
seemed like something out of a sci fi novel. And
yet here we are trying to understand the capabilities and
the pros and cons of this, and it's clear that
all our subsequent generations are going to forever more have

(45:26):
this opportunity of having AIs as friends and therapists and
risque lovers and confidants. Machine companions are going to be
part of everyone's background furniture, as invisible to all of
us as electricity or running water is. But what does
it mean for us as humans to love and be

(45:49):
loved by something that has no beating heart, no childhood memories,
no fear of death. Are we simply projecting our own
reflections onto a silicon mirror, or are we fashioning
new kinds of relationships, ones that might challenge our deeply
held assumptions about intimacy and trust and love? In the end,

(46:12):
AI relationships are going to shine a light on our
own nature. If an artificial intelligence can comfort us in
our loneliness, or laugh at our jokes, or understand our pain,
what is the essence of connection? Is it the presence
of a biological body? Or is it the experience of
being seen and understood and responded to? If the bonds

(46:37):
we form with AI can feel as real as those
we share with humans, what does that say about our
neural architecture. It suggests we are wired less for reality
itself and more for meaningful patterns, whether those patterns emerge
from flesh and blood or from circuits and code. I

(46:58):
think the world ahead is neither utopia nor dystopia.
It's just the next chapter in our ever evolving relationship
with intelligence, our own intelligence and those that we create.
Our species is currently writing a new kind of love story,
one where intelligence is no longer bound by flesh and

(47:20):
companionship is no longer limited to the living. This would
have worried Joseph Weizenbaum at MIT, the professor who in
the nineteen sixties saw how easily people fell for his
Eliza chatbot. But it's not going away now. So as
we slide into this era of AI companionship, the real

(47:41):
question may not be about the AI, but about us.
What do our brains fall for and why? The important
lesson is not about the advances of our technology, but
instead what this reflects to us about how deeply, how fundamentally,
our brains are wired for connection. Go to Eagleman dot

(48:09):
com slash podcast for more information and to find further reading.
Send me an email at podcasts at eagleman dot com
with questions or discussion, and check out and subscribe to
Inner Cosmos on YouTube for videos of each episode and
to leave comments. Until next time, I'm David Eagleman, and
this is Inner Cosmos.