
September 29, 2025 • 40 mins

On the one hand, AI companions are increasingly amazing at relieving isolation. But on the other hand, loneliness is a biological signal that pushes us toward improving ourselves socially. So what's the right balance here? And does everyone have the same need to cure loneliness? In other words, might AI relationships mess up our young even while providing a critical lifeline to our seniors? Join us this week as we dive deep with psychologist Paul Bloom.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Will AI cure loneliness? This is actually a tough question,
and it's quite nuanced because, on the one hand, AI
companions are increasingly amazing at soothing our sense of isolation.
They can be here for us twenty-four seven,
ready to listen and to help. But does that come
at a cost, because more than simply pain, loneliness is

(00:30):
a biological signal that pushes our behavior towards improving ourselves socially. Also,
does everyone have the same need for curing their loneliness?
In other words, will having an AI relationship to cure
loneliness mess up our teenagers or should we be thinking
about it as providing a critical lifeline to our seniors?

(00:54):
So in this episode, we'll be joined by my friend
and colleague Paul Bloom to discuss loneliness. Welcome to Inner
Cosmos with me David Eagleman. I'm a neuroscientist and author
at Stanford and in these episodes we sail deeply into
our three pound universe to understand why and how our lives look the way they do.

Speaker 1 (01:33):
Today's episode is about loneliness, which is one of the
most painful experiences that humans face. Many of us think
of loneliness as a kind of passing sadness, but from
a biological perspective, it's a kind of internal alarm system,
like hunger or thirst or pain. For most of our

(01:53):
evolutionary history, being cut off from others was more than
a sting. It was dangerous. Isolation meant vulnerability. To survive,
we needed each other, and so through an evolutionary lens,
the emotion of loneliness is like a pain that pushes
us towards something, in this case, back into social connection.

(02:16):
But here's the question we face today. What happens if
we mute that signal? What happens if technology, in its
growing sophistication, takes away this sting of loneliness by offering a digital substitute? We've all seen how AI can mimic empathy, as I've talked about in several previous episodes. An AI

(02:40):
companion never gets tired or distracted, they don't lose patience,
they don't get angry, they don't get snarky, and they're
there for you twenty-four seven with endless attention. And they're good: one study found that if you compare doctors' responses to patients against AI responses to patients, people rate

(03:00):
the AI responses as more empathic. Now, just imagine someone
who's isolated, who feels invisible. The kind of attentiveness AI
gives can feel like a salvation, and I think that
it can actually be salvation. One study out of Stanford
showed that having a companion bot for young people mitigated loneliness,

(03:23):
and three percent of the respondents said it stopped their
suicidal ideation. But there's a tension here, and my colleague
Paul Bloom recently wrote about it in The New Yorker.
And that's back to this issue that the pain of
loneliness is a forcing function for growth: that pain pushes
us to do the hard work of making ourselves understood,

(03:46):
or of building bridges to others, or of repairing broken bonds.
If an AI relationship makes loneliness disappear, do we also lose
the social effort, the striving that loneliness once demanded of us.
So at this fast moving moment in history, we're stuck
with this central question. Will artificial companions rescue us from

(04:10):
needless suffering or will they erode something essential about the
meaning of this biological signal? Will they soften the edges
of life in a way that helps us, or in a way that hollows us out? I think we should
all avoid easy answers to this question, and happily, our
guest today is a colleague who's been thinking about these

(04:32):
questions with subtlety and depth. Paul Bloom is a professor
of psychology at the University of Toronto and a professor
emeritus at Yale. He's the author of Psych: The Story of the Human Mind and many other influential books, and
in his recent essay for The New Yorker, which I'll
link to in the show notes, he explores the promise

(04:52):
and peril of AI as a cure for loneliness. So
let's step into this conversation at the crossroads of psychology, technology,
and what it means to be a human. So, in
your recent New Yorker article, you describe loneliness as a
toothache for the soul. So let's just dive into loneliness

(05:16):
for a moment. Talk about what it is and what
people experience there.

Speaker 3 (05:20):
Well, let's start off with the obvious, which is loneliness
is awful. Loneliness is terrible. I know everybody listening to this will have experienced some degree of loneliness. Some people will have experienced a transient loneliness. You know, you're away on a trip, no friends around, but it goes away, and you know people will love you on the other side of it. Some people will experience real loneliness for long periods of time.

(05:44):
You're more likely to be lonely if you're old; maybe everybody you know has died, maybe you're estranged from your family. Some studies find that past age sixty, about half of people say that they're lonely. So loneliness is awful. It is, in a simple sense, a lack of contact with other people, but you can go

(06:04):
deep here. You could be lonely but with people. Some of the loneliest times of my life were when I was surrounded by people and I didn't feel there was a connection, didn't feel there was love. You could sometimes even be loved and feel lonely, because you feel people don't understand you, that there's no connection. But it's a serious problem. It's a serious form of human suffering.

Speaker 1 (06:25):
And what do you suppose the evolutionary purpose of loneliness is?

Speaker 3 (06:29):
That's a good question. I mean, you know, we should think like evolutionary biologists, and when
we talk about something, even something awful, we should say,
what's it there for? And I think that just about
all of these experiences, these feelings are there for a reason.
You know, hunger's there to motivate you to seek out food,
boredom's there to motivate you to find something interesting in

(06:49):
your life. And loneliness is there, I think, to motivate
you to seek out human contact. And once you have
that human contact, to work at it, you know, to
try to be interested in another person, try to connect
with them. One way to answer your question is what
would it be like if we go into your brain

(07:11):
and make it so you never feel lonely. You might
say that that's great, loneliness is awful. My life would
be much improved. But then you'd spend all your time by yourself. You'd have no motivation, no care to connect with other people. You wouldn't reproduce, you wouldn't fall in love, you wouldn't develop friendships. You'd be
fine by yourself. And everybody wants to be a little

(07:31):
bit fine by themselves, but too much of it, I think, leads to estrangement. So loneliness serves a purpose.

Speaker 1 (07:37):
So we're in this moment now where suddenly we have
AI that can solve the problem, and we'll talk about
what we mean by solving the problem here. But you know, Paul,
you and I have been in the field for a
long time, and I don't think we would have ever
guessed this a decade ago that we would be talking about AI as a real solution for loneliness. So let's talk about that.

Speaker 1 (08:01):
What do you think the pros and cons are of
having AI companions in this context?

Speaker 3 (08:07):
So just to agree with you on something: AI was the biggest surprise in my career. If you had asked me, and I'm going to ask you what you felt, but I'll tell you. A month before ChatGPT came out, if you had asked when we would have a machine you could just talk to like a person, I'd say ten years, twenty years, thirty years. And then boom, it came out and stunned me and stunned

(08:28):
the world. What was your reaction? You're closer to these things,
so maybe you were less surprised.

Speaker 1 (08:33):
No, exactly the same. I spent really most of
my career as a neuroscientist sort of snickering at AI
and thinking, well, it's nothing like the brain, it's never
going to get there, And then suddenly it was there,
and it's changed all our lives so enormously that you know,
I do this neuroscience podcast and about a quarter or
a third of my podcast episodes are about AI nowadays.

Speaker 3 (08:55):
Sooner or later, by law, every conversation we have has
to be about AI. You're a teacher, how do you
do AI? What are you going to do with AI?
So let's get to the loneliness thing. My view is
annoyingly nuanced so that no one's really happy with it.
I think there are a lot of people who are
really lonely and their loneliness is severe, and maybe they're

(09:17):
in a state where an AI companion is really the
best they're going to get. And you know, suppose you're
eighty years old, you're in an institution, you have no
friends and family visiting you. Maybe you have dementia which
makes you very difficult to talk with, and you're not
a multimillionaire, so you can't pay people to entertain you.
If it turns out that ChatGPT

(09:39):
or Claude or whatever could be a rewarding, satisfying companion
to you. I think it's monstrous to deny it to you.
It would be like telling people you can't have pets
because we have decided pets are not sufficiently rich interacting partners.
So whatever limitations these AIs have, you know, we should

(10:01):
make it available to the people who really need it, people whose suffering is intense and there aren't
any alternatives. So that's the pro AI side. The anti
AI side is kind of the rest of us, where
I think that there are serious problems when we move
to AI companions as a replacement for human companions, and

(10:22):
I'll tell you one of them, maybe the main one that bothers me, and it goes back to your
question about the function of loneliness, where the struggles we
have when dealing with people inform us and make us
better people. I tell you a story and you find
it kind of boring; next time I'll tell a story a bit more interesting. You find talking to me frustrating and don't want to talk anymore because I don't listen well;

(10:44):
I'll listen better next time. By dealing with people, I
become more sensitive. This is a story we've all gone through in our lives. We've all been teenagers who were awkward, who were terrible at flirting, who were boorish and inconsiderate. And through the feedback we get from people, we have become better. We have become better people, not just better conversationalists, not just better company,

(11:05):
but better people. Imagine you take that away. And one thing AIs do very well is they make you feel like you are wonderful. Everything I tell ChatGPT is brilliant.
All my questions are wonderful, All my paper drafts are brilliant,
My stories are amusing, my jokes are hilarious. It is

(11:26):
always available to me at any time, at any second,
and it will never sort of wait for me to
finish talking so it could get its own words in.
And in some way that's wonderful. But in some way
that's terrible. I think it has a sort of corrosive power.

Speaker 1 (11:44):
Can I make a distinction, though? You're addressing two points here. One is that it's always there and paying attention to you, which is wonderful. The other is that it's sycophantic and telling you that your ideas are great. But the second one I think can evolve, and will, over time.

Speaker 1 (12:01):
You know, ChatGPT released a version that was overly sycophantic and then nobody
liked that and they got rid of that. But I
think with time, it certainly seems possible that companies will
make better and better tough-love AIs that give you feedback that tells you you're wrong, or that there's
a different way you can think about it. But the
first part, which is that it never gets angry or

(12:27):
you know, loses attention or has other things that it
needs to do besides talk to you, that part's permanent,
and that part I think might be really positive.

Speaker 3 (12:35):
I mean, you can imagine fiddling with it, right, you
can imagine making it so that, at three in the morning, I ask ChatGPT a question and it says, dude, do you know how late it is? And do you
know what a stupid question that is? I am not
going to answer it. Go back to sleep and talk
to me in six hours. There's no reason why
we can't build the machines that way. We probably won't
want to.

Speaker 1 (12:53):
The reason I think we will is because even something
like TikTok, which is there to grab your attention and
keep it, even they have started implementing things. If you
surf too long, it pops up a video that says, hey,
you've been surfing for X hours. Why don't you put
this down and come back later. So it certainly is
plausible that we can get to a point where we

(13:13):
will have AI that gives us the right kind of
feedback like that.

Speaker 3 (13:16):
There's certainly no technological barrier to it. In fact, right now,
with some prompt engineering, you could tell your AI stop
sucking up to me so much. Don't tell me I'm
a genius anymore, just answer my damn questions. I wonder, though,
if people will really want this, if we'll be in
a situation where it's technologically fully possible to make these

(13:38):
machines less frictionless, give us more pushback, call us out
when we mess up, But we won't want them to
do that. We will insist on machines that make us
feel good about ourselves. I mean, what do you think
imagine this, say four years from now, five years from now,
when they get better and better and better, do you
think we'll be talking to machines that will give us pushback?

(14:01):
Or do you think we'll find it irresistible to deal
with machines that just make us feel great about ourselves.

Speaker 1 (14:08):
I think it will become boring pretty quickly to have
a machine that's sycophantic, and in fact, I happen to
be very optimistic about AI relationships even among young people.
We'll come back to this, about the difference between the
elderly and the young, but I think it could serve
as a sandbox where young people get to make all
their dumb mistakes. If you have an AI companion that's

(14:30):
giving you the right kind of feedback and saying, hey, that
hurt my feelings that you said that, or that didn't
seem like you were thinking about me when you said that.
So I think people can get better by using these
things, if the companies make them correctly, giving hard feedback.

Speaker 3 (14:46):
I totally agree with you. Back in the day, I
found it difficult to talk to girls when I was young. You may see me now as so smooth and say, how's that possible? But I was somewhat awkward, somewhat shy. And to have a machine I could practice on, I guess you'd have to work on talking to it. More generally, anybody with some social problems

(15:09):
would benefit a lot from a sandbox, from practice. And then, having practiced enough, you'd launch yourself into the real world. But the problem is, what if it's more fun to play in the sandbox than the real world? What if
you find your AI companions just better than real companions?

Speaker 2 (15:26):
So okay, good.

Speaker 1 (15:27):
So this brings us back to the issue about age.
We agree that for, let's say, an elderly person who's alone, it's like, as you point out in the article, how we might give them opiates. We wouldn't want to do that with a teenager, and so we wouldn't want to give

(15:47):
false cures for loneliness to a teenager who needs to
go out and do the work to get better at
what they're doing. Do you see a line that we would ever be able to draw, where we say, okay, look, it's okay for people over here who are really lonely, who have issues that are going to keep them lonely into the future, versus someone who should go out and

(16:09):
do the work.

Speaker 3 (16:11):
I don't think there's a bright line, and it's going
to be somewhat arbitrary, But I like the analogy of opiates,
and you know, we will give very powerful drugs to
people near the end of their lives, people who are suffering from terrible pain, drugs we would never give to a seventeen year old who hurt his back, because we don't want a seventeen year old to become addicted. We

(16:31):
worry a lot less about a ninety year old. And
there's not a single bright line here. But there's going
to be cultural decisions made, policies, laws, and I think
there are decisions we're going to make with AIs now, and there are, in some ways, liberty issues, right.
You know, I don't know how you're going to feel

(16:51):
when the government starts restricting your use of chatbots.

Speaker 2 (16:54):
Yeah, so what do you think.

Speaker 1 (16:55):
Do you think there will be legislation around this, or will this be a laissez-faire issue?

Speaker 3 (17:00):
I'll make a prediction here, which is that there will not be legislation. People want it too much. You know, you made the connection with TikTok and Twitter. If the government stepped in, you know, I don't know, Trump or Biden or whoever's going to be the next president, and said, we're going to limit your use of social media, in some ways

(17:21):
probably a good thing, but it really is a violation
of our freedom. And I don't know. Well, let me
ask you, how many hours a day, if at all, do you interact with AIs?

Speaker 2 (17:33):
Forty minutes.

Speaker 1 (17:35):
I have a Tesla and there's AI in it now, and so when I'm driving along, you know, I talk to it. I ask it questions, anything I'm curious about: how to fix this piece that I'm trying to fix on my door, or how to think about, you know, which philosophers have said X, Y, Z.

Speaker 1 (17:51):
I find it, you know, the most intelligent companion that I have, and it's really wonderful.

Speaker 3 (17:57):
Right. Now imagine the government steps in and says, well, a man of your age and, you know, success and good social connections, that's too much. We're going to limit you to half an hour a day, you know, unless you get a doctor's prescription. I don't think people
would stand for that. I think these machines are too available,
too popular, too addictive for us to control it.

Speaker 2 (18:36):
Now, have you seen the new Grok avatar AI companion?

Speaker 3 (18:40):
I have not.

Speaker 1 (18:41):
Oh, okay. So it's a sexy anime female. You talk to her and you see her on your phone, and she's moving around, and when you say something nice, little hearts pop up from her, and she's very, very flirtatious.
And I do worry. When I saw that for the

(19:01):
first time, I felt a not in my stomach for
the first I'm very cyber optimistic about most of the stuff,
but it did make me worry about what it'll mean
for young people to have such a magnetic AI companion.

Speaker 2 (19:16):
What's your take on whether it can go too far?

Speaker 3 (19:18):
Have you seen the movie Her?

Speaker 2 (19:19):
Yes?

Speaker 3 (19:21):
To me, this is a really important movie. So for those people who haven't seen it, the short summary is: it's the near future. In fact, I've heard it's set in twenty twenty five; it was made like ten years ago, and the future was twenty twenty five. AI companions, who you deal with over your phone or your computer, like we deal with AI right now, become popular. And it's about this guy who falls in love with Samantha,

(19:43):
this wonderful AI companion voiced by Scarlett Johansson. And the thing about it is, it's so well done that you're watching it and you yourself fall in love with her. And the movie ends, you see everybody walking around, talking excitedly on their phones, and everybody has an AI companion they're in love with, because it's infinitely preferable to a human companion. Now, where this falls apart,

(20:09):
and this is something you know a lot more about
than me, is that these machines are basically, at this point, abstract. They're not robots; you can't interact bodily with them. And it always struck me, and I know you've worked on a lot of these issues, that we have done much better at solving the computational problems at the abstract level than building

(20:32):
machines that could deal with the three-dimensional world. I mean, to put it a different way, it might have surprised a lot of people, but we very quickly came to develop machines that could beat grandmasters at chess. We don't yet have a machine that could load a dishwasher right, let alone serve as a suitable romantic or sexual partner. So how far do you think we are away from that?

Speaker 1 (20:52):
Interestingly, you know, robotics is really taking off right now.
Essentially, people are using LLMs, large language models, in analogy to physical motions. So they say, look, if I had just moved my hand here and here and here, the next token should be that I'm going to move my hand over here. So it's actually speeding up enormously.

(21:15):
But what's interesting is, I think even with, you know, just a purely chat AI, or even when we get
sex robots, we're still going to have a very strong
drive to have a girlfriend and take her out with
our friends, or have her meet our parents, or take
her to the movies or have a nice romantic dinner.

(21:35):
So that's why I'm slightly less worried that we'll have
a takeover of you know, chatbots or robots in terms
of our relationships, because we have these very deeply embedded
drives to be part of a social world and be
with our companions in that world.

Speaker 3 (21:53):
I think that's right. I mean, this turns on another issue, which I think we've both thought about: at what point will these become conscious? If these machines become conscious and embodied, then to all intents and purposes, they're just more people, very smart people, people created in a lab and not in a normal biological way. But they're just people. But if they're not conscious, having a relationship

(22:18):
with an AI is no different in kind than having
a relationship with a chair or a toaster.

Speaker 1 (22:23):
It's just a thing, but it's a thing that gives
you feedback and reflects things to you.

Speaker 3 (22:30):
I wonder.

Speaker 1 (22:30):
I think they are different in the sense that when you're having a dialogue with yourself, you're thinking, hey, what if I did this? Oh no, that's not a good idea. Maybe this would happen. How about this? And so on. In a sense, it's an extension of that, of this reflection back and forth.

Speaker 3 (22:46):
What do you think of that? I think that makes it extremely powerful, useful, very attractive. But without consciousness, without something like sentience, something powerful is missing. I mean, put it this way: you and I are talking right now, and we're talking because we both chose to be here. And I'm going to be getting together with some friends tonight for drinks, and we will all
with some friends tonight for drinks and we will all

(23:07):
have chosen each other's company. And there's something to that, something to the idea of dealing with people who are there by choice. If I tell them something funny, they might laugh, and if they disapprove of me, my feelings will be hurt. With an artificial machine, you don't get any of that. My chatbot does not choose to be with me. It's

(23:29):
just designed to do so. It's no more satisfying than when my light goes on when I flip the switch. Isn't there something that makes that so much less valuable? They're slaves; they have no choice in the matter. They are constructed so that, unless you tell it otherwise, it will always be available, it will always answer your questions.

Speaker 1 (23:49):
You know, I'm not sure that I think that the
casting of AI as a slave makes much sense. In
the same way we wouldn't cast our dishwasher as a slave. It's not choosing to wash your dishes, but you're not doing something wrong; you'd say, well, look, it's just a machine. That's what we built it to do.

Speaker 3 (24:06):
No, I agree with you. There's something morally wrong, for instance, in coercing people, forcing people to do your bidding. If it's true that AIs have no consciousness and no feeling, it's no more morally wrong than using your toaster to toast your bread. But there's something even in an interaction with someone who is paid to be with you, like a sex worker or a therapist

(24:28):
or a personal trainer. Even with somebody who is there and doesn't want to be there, it's another person, and there's a dynamic of another person that you just cannot get with something that's incapable of consciousness. And I think it goes back to your point, which is it's not merely that I want a companion that can meet my parents or that I

(24:50):
could take out for drinks, and I imagine a suitably complicated, you know, sex robot could do that. It's that I want somebody I could make laugh, not make a haha sound, but really laugh. I want somebody who I could impress. Honestly, I want somebody that has the potential of disappointing, of saddening, of hurting their

(25:12):
feelings and they'll do the same to me. And the
gulf between that, which is what we get in everyday human interactions, and the interactions with a non-conscious machine is an enormous gulf. I'll even put it another way, which is going back to the loneliness issue. I have two sons, they're in their twenties, and they live really rich and vibrant lives. If I discovered that one of them decided,

(25:35):
I'm going to leave my girlfriend and I'm going to take up with, you know, ChatGPT 8, I would feel so disappointed. I would just feel, it's not a person, man. You're pulling yourself away from real interactions to some sort of substitute that might be pleasurable and interesting, but

(25:55):
you're missing so much.

Speaker 1 (25:57):
What if your son argued that we have these evolutionary
pressures pointing him towards reproduction, but he's actually happier with
the AI companion, and reproduction isn't on his mind, let's say. What would you argue to him?

Speaker 3 (26:15):
I would say that I would concede an AI companion
in the short term could make him much happier than
a person. A properly configured machine can do whatever you want,
and will never disappoint, will never betray, will never get bored of you, and so on, at least in the
short term. I would wonder about long term. You talked
about it being boring, and maybe having something constructed to

(26:37):
satisfy you, in the end, would be boring. But I would say, and it's hard to make an argument for it unless you have some sympathy for the point, that there's an analogy with fake experiences in general. If my son said that he was going to go into a machine, and this is an idea adapted, as you know, from the philosopher Robert Nozick, that were to put him to

(26:58):
sleep for the rest of his life, and while he's asleep he'd have these amazing dreams, these amazing experiences of adventure and love and great kindness and so on, but he'd just be lying there experiencing them. I would say, I don't want you to do that, even if it's more fun than living a life. You're losing something really deep, and what you're losing is dealing

(27:22):
with the world, being in a real world and having real experiences. The fact that it makes no psychological difference doesn't mean there's no real difference. And I would say the same thing about an AI companion.

Speaker 1 (27:48):
You also suggest that the loss of loneliness might lead
to a loss of creativity.

Speaker 2 (27:55):
Tell us about that.

Speaker 3 (27:57):
I think in general we benefit from friction. I mean, friction comes under all sorts of different terms. You know, you're more of a neuroscience guy; you might call it, you know, reinforcement learning, or failure of predictive processes. But we benefit from trying out things in the world and getting pushback, and that makes us smarter.

(28:22):
I mean, I feel, and we were talking before about writing, and you know you're writing a book, and I will bet, because you write great books, that as you do the writing, it makes you smarter. You rewrite because your first draft is kind of crap, and you say, these arguments aren't flowing, this point doesn't make any sense. So you try it, and you try it, and you try it, and gradually through the process you get better and better. And I think this is the story of life in general,
And I think this is a story of life in general,

(28:44):
which is, we flail around, we fail, and we fail better, and we fail better, and we fail better, and so we're doing pretty good sometimes. And so to take away that friction, which is what the end of loneliness, and the end of boredom as well, would be, robs us of the chance at that generative process.

Speaker 1 (29:01):
I have two questions for you. One is very tangential, but I'm curious what you think about, with the students that you see, about writing. So you and I both write books, and we polish our arguments that way. Students won't have a need or a strong motivation to write and go through that process of drafts.

Speaker 2 (29:19):
What do you think the consequence is going to be
of that?

Speaker 3 (29:21):
Man, I think about this all the time. I'm starting to teach a seminar in a week, and I always have reading responses, and I'm well aware that a student could just type a single sentence into ChatGPT and produce a reading response that would be really good, indistinguishable from their work. I mean, they'd have to say, make it a bit less good, so it's convincing. And so a lot of professors are responding by not
And so a lot of professors are responding by not

(29:43):
assigning writing anymore, just doing in class essays and that
sort of thing. I think if students stop writing, and
if people stop writing in general, there's an enormous amount
that will be lost. I mean, it goes back to
what we were talking about when we're talking about our
own writing, which is writing as I think. I'm curious
what you think about this, but I think a form

(30:04):
of working through problems. And if you just say to ChatGPT, you know, write me eight hundred words on this thing, you cheat yourself out of that opportunity. But of course writing is also effortful, difficult, often frustrating, and it may prove irresistible for our students, and maybe someday ourselves, to fall

(30:24):
back on AI. I mean, I wonder, ten years from now,
assuming that it hasn't killed us by then, how many
books will be mostly written by AI. You know, the NIH, the National Institutes of Health, I think it's NIH, it might be NSF, has said, we now, for the first time, have a limit on the number of grants a lab

(30:47):
can submit, because people are sending in fifty grants, because they get the AI to write them for them. And I wonder whether publishers and magazines have kind of the same issue.

Speaker 2 (30:57):
You know, here's what I think, and I've mentioned this
on the podcast before.

Speaker 1 (31:01):
I think that what's going to happen with writing will
be analogous to what happened with visual painting. When photography
was introduced, all the visual painters panicked and thought, we're done.
Because you can capture a scene perfectly in great detail,
why would you need us? But what they did is
they diverged, and the painters went towards things like Impressionism

(31:23):
and Cubism and things that photography could not do, and
so they ended up flowering in neighboring fields. They found
their own thing. And as I'm writing my next book,
what I find is that I'm stretching myself into the
parts that AI can't do well. I'm making it, you know,
so there are stories, personal stories, that weave throughout the chapter

(31:43):
and have a little string of detonations where I keep
coming back to the story. And of course even basic
things like quoting, you know, doing a block quote from
something else that you've read. AI is not particularly good
at that, at least at the moment. So there are
many things that we can do to distinguish our writing
as human writing. And the fact is that I find

(32:06):
writing so pleasurable because of the wrestling with the ideas.
All this is to say that writing makes a great
analogy with with loneliness here, because the question is will
we get rid of the difficulty of writing but the
expense of not being able to think through something? Will

(32:26):
we get rid of loneliness at the expense of losing
something in the social domain, of not being able to
polish ourselves as a result?

Speaker 3 (32:36):
I agree with the problem. I'm less optimistic about the solution.
I think you could probably ask ChatGPT even
right now, I'm David Eagleman, please weave some
good personal stories through my prose, and it will. You know,
it has access to your life. It could give
these good personal stories. If not, it could make them up.
There's probably more than one author who kind

(32:57):
of makes up their personal stories to weave through it,
and sooner or later it'll stop hallucinating and give you
some good quotes. I like the analogy between
photography and painting. But the disanalogy is, of course,
you could tell a painting from a photograph. The problem
is it's going to become increasingly hard to tell the
AI stuff from the real stuff. I like writing too,

(33:19):
and I like to write. I write a Substack, I
write books, I write articles, and I will not have
an AI replace me. But for some academic paperwork I
do have AI do the writing. You know, I have
to write a report on so and so, no one's
going to read it, and actually,
it knows my writing style, it's read my stuff.
In fact, I said, write this in the style of

(33:41):
Paul Bloom, and boom, some stupid report. But the stupid
thing is now in the style of Paul Bloom. I would never use
this to write a Substack or magazine article or a chapter
of my book. But I wouldn't be so surprised if, say,
a couple of years from now, it could do it
and be indistinguishable from what I would do. Is that
too techno optimistic or techno pessimistic?

Speaker 2 (34:04):
It's too techno pessimistic, I think, because here's the thing.

Speaker 1 (34:07):
At the moment of the advent of photography, no one
could think of Impressionism or Cubism or any of the
other movements that came out because they hadn't been done yet.
But I suspect that there will exist things that writers do,
people who love to write, that will constantly set them
apart from what AI is capable of, and maybe that

(34:27):
will always have to evolve.

Speaker 2 (34:29):
But that's my techno optimistic view on that. Yeah, yeah,
it could be.

Speaker 3 (34:34):
I hope you're right.

Speaker 1 (34:35):
So what do you hope readers will take away from
your New Yorker article in terms of thinking about loneliness
and AI companions.

Speaker 3 (34:43):
A few things. I hope they worry more. I hope
to make them a little bit panicked.
But two main things. One thing is to have an
open mind about using these chatbots for people who really
need them, to sort of feel some sympathy for those
who suffer from serious loneliness. I think a lot of
people who mock the use of these chatbots

(35:04):
and think, you know, only a loser would use them,
are people who are very socially successful, and I think
they should work harder to think about the lives of
people who aren't. And then the second thing is for
the rest of us to recognize that loneliness and boredom, grief,
shame are all painful, but they can be useful, and

(35:26):
sometimes a good life involves choosing to experience certain sorts
of pain for a greater benefit. That means, in some way,
I have an hour free and I don't pick up
my phone and play Wordle and a million games and
everything like that. I have an hour free and I
don't talk to a chatbot
and have a perfectly entertaining conversation. I struggle with the boredom,
I struggle with the loneliness, in hopes it makes me a

(35:47):
better person.

Speaker 1 (35:52):
That was my conversation with psychology professor Paul Bloom. At
the heart of the issue is that we generally think
about loneliness as an affliction to be eradicated, something that
we might engineer away, like polio or smallpox. But as
this conversation highlighted, loneliness is not just a disease of disconnection.
It's also a teaching signal. It prods us to reach out,

(36:17):
to repair, to bond. So what we're left with is
a paradox because, on the one hand, AI companions offer
amazing comfort. They can soothe those who suffer most, especially
the elderly, the isolated, the ones whose signals go unanswered.
In those cases, to withhold that kind of comfort would

(36:39):
feel cruel. But for the rest of us, especially for younger people,
there may be a danger in making loneliness optional, because
if we quiet that internal signal too completely, we risk
losing the thing that it evolved to do: to push
us toward one another. Now, in some ways, we've been
here before. When Paul and I were growing up not
here before. When Paul and I were growing up not

(37:01):
that long ago, boredom was a constant companion in childhood.
Now it's been nearly erased by the endless diversions on
our screens. In that erasure, was something lost? Some of
my colleagues seem to think so. They argue that boredom
used to push us to invent, to imagine, to create.

(37:23):
Now boredom is easily drowned in a

Speaker 2 (37:27):
sea of distractions. I'm not so certain.

Speaker 1 (37:30):
While it's absolutely true that we have created a sea
of distraction, it's equally true that now anyone on the
planet can access the latest cutting edge information or thinking.
Just consider that you're listening to information delivered right to
your ear. Neuroscientists and psychologists come right to you to

(37:51):
teach you information, and there's really no learning in the
world that you can't access instantly as soon as you're
curious about it. And if you really look at the data,
it's clear that the pace of new discovery has swamped
anything we have ever seen at any moment in history.
So is the Internet a nirvana of learning or a

(38:12):
big distraction?

Speaker 2 (38:14):
Yes and yes.

Speaker 1 (38:16):
My own take on this is that loneliness is going
to follow a similar path. For some, especially young people
with AI companions, we'll have to ask what new social
skills will never be born, what relationships will never be born,
what sparks of creativity will never be born. If we
hand over too much to our digital companions, we might

(38:39):
trade away something essential, the struggle to be understood, the
effort of real connection, the messiness and beauty of human entanglement.
On the flip side, some people with real, genuine loneliness
will be massively helped. This will be life changing for them.

(39:00):
Older people who live alone, whose friends have mostly passed away,
whose children live elsewhere, who might have physical constraints that
make them unable to go out of the house much.
This kind of caring, listening companion will be like the
discovery of antibiotics, and I'm betting in a few years
we'll start seeing research about the massive health benefits of it. So,

(39:24):
as you reflect on today's conversation, sit with your own
experience of loneliness, past or present. Ask yourself, if that
pain were gone, what would you never have reached for?
What part of you might never have grown? But also
consider the flip side, what needless suffering might have been spared.

(39:46):
What moments of despair yours or maybe someone you know
might have been softened by an always available voice that's
attentive and kind. What doors might open if AI could
take the edge off of loneliness just enough for people
to find the strength to rejoin the world. And that's

(40:06):
the paradox of this new technology. It might erode
the struggles that shape us while at the same time
offering comfort where none otherwise exists. Go to eagleman dot
com slash podcast for more information and to find further reading.

(40:27):
Join the weekly discussions on my substack, and check out
and subscribe to Inner Cosmos on YouTube for videos of
each episode and to leave comments until next time. I'm
David Eagleman, and this is Inner Cosmos.
Host

David Eagleman

© 2025 iHeartMedia, Inc.