
October 20, 2025 • 35 mins

How is sci-fi like a cultural research and development lab? Will we someday have AI agents that live in robot bodies, and will we be liable if they commit murder? What happens when reality is no longer verifiable? How can we create AI advocates that guide us toward self-actualization over distraction? What is a Young Lady's Illustrated Primer? This week we talk with researcher Bethany Maples about science fiction and how it might prepare us to wrestle with the deepest questions about AI, identity, and the future of humanity.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
How is science fiction like a cultural research and development lab?
Will we someday have AI agents that live inside robot bodies?
And will we be liable if they commit murder? What
happens when reality itself is no longer verifiable? How do

(00:25):
we create AI agents that guide us towards survival and
self-actualization, not just profit or distraction? And what is
the Young Lady's Illustrated Primer? This week we talk with
researcher Bethany Maples about science fiction and how it might
prepare us to wrestle with the deepest human questions about

(00:47):
AI and identity and the future of humanity. Welcome to
Inner Cosmos with me, David Eagleman. I'm a neuroscientist and author
at Stanford and in these episodes we sail deeply into
our three pound universe to understand the hidden forces that
shape our lives. Today, we're going to talk about science fiction.

(01:29):
So imagine this. It's nineteen sixty six and you're watching
Star Trek on the television, one of those cathode ray
tube televisions with a bubble screen. Anyway, what you're looking
at is Captain Kirk and Spock, and they're interacting with
a cool looking machine. It's a thin slab of glass

(01:49):
and metal that can display anything, including a person at
a distance talking to a camera. It's like a telephone call,
but you can actually see the other person. Now, for
the audience in nineteen sixty six, this is science fiction.
It's pure fantasy. But for a generation of engineers and

(02:10):
inventors who are just growing up and tuning in, it
was a blueprint. Fast forward a few decades and we
all have tablets in our hands every day, and even
smaller tablets called smartphones, and we never stop to think
how fantastical this would have seemed to our great grandparents.
So that's the thing I want to explore today, because

(02:31):
quite commonly science fiction anticipates the future, or more specifically,
it lays down the tracks for it. Now, if you're
a regular listener, you know that I am a lover
of literature more generally, because literature expands how we see ourselves,
but science fiction in particular often prototypes futures. We often

(02:55):
think about sci fi as escapism or entertainment, but when
we look more closely, we can sometimes see it as
a testing ground for society's biggest hopes and deepest fears.
It's the stage where we get to rehearse what might be,
and sometimes we end up waking up inside worlds that

(03:16):
novelists and filmmakers imagined a long time before. For example,
today we'll talk about Neal Stephenson's The Diamond Age, which features
an intelligent self-writing book that can raise a child,
or we'll talk about Spike Jonze's movie Her, where a
lonely man falls in love with his AI operating system.

(03:38):
So science fiction gives us a lens into the future,
and whether or not it's a reliable oracle (it sometimes
is and sometimes isn't), it always lets us ask
what if? What if your AI companion wasn't just a
distraction but an advocate for your very best self? What

(03:58):
if technology didn't just change our tools, but changed what
it means to be human. It allows us to stretch
our brain's internal models. So that brings me to my
guest today. My colleague Bethany Maples runs an education company
called Atypical AI. I first met her when she was
a researcher in education at Stanford, where she studied the

(04:22):
rise of personalized AI agents like AI tutors or learning
companions or lovers and how these things are changing us.
She recently published a paper in Nature on loneliness and
suicide mitigation using GPT chatbots. In other words, can having
an AI friend actually save a life? The answer was yes.

(04:47):
Check that out in episode ninety-eight. Now, when Bethany
first started talking about a world where a billion people
might form indispensable relationships with AI agents, that certainly seemed
like far off science fiction. But it's not. It's actually
the present moment. And that's exactly what science fiction does.

(05:10):
It normalizes the impossible so that when it finally arrives,
we recognize it. So I got Bethany here again today
because she is a giant fan of science fiction. She
and I both feel that science fiction is where societies
write down their deepest dreams and hopes, like our longing
for immortality or companionship, or invisibility or mind reading or flight.

(05:35):
These are all very ancient human fantasies. They show up
in myths and in fairy tales. Then they get retold
in futuristic novels, and sooner or later they inspire the
labs and the startups that try to make this stuff real.
So today we pick up that thread because if you
zoom out, science fiction has always been less about ray guns

(05:59):
and aliens and more about testing the questions of who
we are and who we might become. Here's my conversation
with Bethany Maples. You're a fan of science fiction. Tell
us about how you think about science fiction and how
it can be used in our current world for the

(06:22):
planning we're doing now.

Speaker 2 (06:23):
You know, science fiction catches a lot of flack.

Speaker 3 (06:25):
People don't think it's a deep medium; they think
it's just light entertainment, and that it's trashy. But
I deeply believe that science fiction tells us about the
deepest dreams and hopes of society. So operating with that,
let's think about what science fiction tells us, and let's
look at history and see how much science fiction has

(06:49):
guided and predicted and created the technological magic that we
see today.

Speaker 1 (06:54):
That's right. What's an example of that? For example, the
tablets in Star Trek that became the iPad. Precisely.

Speaker 3 (07:00):
So think about that, and think about you know, the
Young Lady's Illustrated Primer from The Diamond Age.

Speaker 1 (07:05):
Great, so I've read that, But for the audience, tell
us about that.

Speaker 3 (07:07):
For the audience, there's this book about this really strong
AI agent that's embodied in a book that is able
to take anybody, the poorest street urchin, anybody, and make
them not only, you know, maybe.

Speaker 2 (07:21):
Wealthy and rich, but the best possible version of.

Speaker 1 (07:24):
Themselves, because it tutors them, it leads them through life lessons, all.

Speaker 3 (07:28):
Of it, and it does it in this beautifully intelligent way.
It's not just mechanical and rote. It interacts with the
real problems that are happening in their life. It's able to intercede.
It brings in characters, it brings in fairy tales. It
uses all of the things that humans use to create
meaning and teach and just using them beautifully. And of
course there's a human in the loop. It's not just

(07:49):
an AI agent, it's combined with people who actually care.
So that's an example of you know, the precursor book
to all of these AI tutors. Like the number of
people in Silicon Valley that have been thinking about building
the Primer and building the primer versions of it over
the last decade are many.

Speaker 1 (08:06):
I didn't realize anyone was building something like this. I mean,
obviously the technology that we have with AI, we can
squint and see how it could become something like this.
You're saying people are actually working on the Young Lady's Illustrated Primer, well.

Speaker 3 (08:20):
Versions of it, right. I mean, you know, I
started thinking about this actually when I was studying neuroscience
at Stanford back in my master's.

Speaker 2 (08:29):
I was like, what would it actually take, Like, what's.

Speaker 3 (08:31):
The learning science theory, what are the sensing mechanisms, what
actual algorithms and, like, modules would you need?

Speaker 2 (08:38):
And I mapped it out, and then one of.

Speaker 3 (08:41):
The most senior people at Google called and said, Hey, I've
always wanted to build.

Speaker 2 (08:46):
This, Come build this with me.

Speaker 3 (08:47):
And five years later at Google AI things happened.

Speaker 1 (08:52):
Wow. And remind me, the Young Lady's Illustrated Primer was a
book that she carries around, it's physical, and she opens
it up. But remind me, the text always changes,
and the illustrations?

Speaker 3 (09:03):
Yeah, it's a dynamic narrative, but it also has like
you know, an accelerometer and an altimeter, like it had all these
sensing mechanisms so that it could actually like interact with the
world around you. So look, a book might not be
the exact like right embodiment. Up until now, education has been,
in fact, this incredibly thin slice of learning, right you
go to a room and you learn, and you're not

(09:24):
able to connect the variety of inputs that actually makes
a life right, all the informal learning, all the social stuff,
all the peer things, like all the hormones. And imagine
that you were actually able to bring that all together
and have an agent that cared deeply for you and
loved you, that was guiding you through. I mean, that
is like the height that we're trying.

Speaker 1 (09:41):
To get to. That's right, and it remembers everything you said
and doesn't have any other concerns, and wants your personal
best. Exactly.

Speaker 3 (09:49):
And I think that's what is tough is we think
about AI companions, is that a lot of these AI
girlfriends and companions, they're there for entertainment. They're not there
for your personal best. And so there's this nirvana state,
or this thing that we all know that we want,
which is an agent that actually wants us to be
the best possible version of what we can be. But
the disconnect or the question is how do we get

(10:10):
that into everybody's hands. Because people are going through so
much shit, they're just trying to survive, they're poor, they're hungry,
like they don't have power, so how do you help
get them to a place where they can learn? And
at first that book, the Primer, just helped, what's
her name, Nell, just helped the protagonist survive. So as

(10:31):
we think about, you know, AI helping each of us individually,
it's not just like, hey, I'm going to be a tutor,
it's like AI that's going to help me survive.

Speaker 1 (10:39):
So why aren't there more companies doing that right now?
We're saying we want to build an AI that is
your advocate, that just cares about you, that makes everything
about you the best that you can be. Are there
companies doing this?

Speaker 3 (10:51):
The reason that there's not more is it's both an
incredibly challenging build and people don't always want to be
the best version of what they can be. They don't
know it right, as we said before, they're just trying
to get through the day. And this is what makes
AI companions such an interesting entry factor because everybody needs

(11:11):
a friend, and if you can have an agent that
starts as your friend, it ends up being your tutor
and your advocate for your higher self. That's beautiful. But
the issue from a commercial perspective is how do you
monetize that?

Speaker 1 (11:22):
Oh interesting. Oh right, because people won't pay in advance
or something like that.

Speaker 3 (11:27):
Yeah, and frankly, you get into a VC like kind
of rat hole where you know, they're like, oh, like
monetize now, monetize now, and you're like, no, it's a
trillion dollar opportunity. Let's just get people to engage. So
I think that more and more people are thinking about this.

Speaker 1 (11:43):
I wonder if someday in the distant future, along with
having universal basic income, the government would pay for your
advocate to make you the best you could be.

Speaker 2 (11:50):
I think that they should.

Speaker 3 (11:53):
But the only way that works is if the government
has none of that data, none of that data is tied
to your insurance or your healthcare or anything like that,
and you feel completely safe. So I know that companies are
thinking about this. All the big tech companies are thinking
about it, and a lot of the AI labs are
thinking about it. But again, you know, there's a lot
of commercial forces at play, so doing the right thing
in the end kind of gets muddied by needing to

(12:14):
sell in the five year time frame.

Speaker 1 (12:16):
So let's trace the thread back to the beginning about
science fiction. What are other examples the way that the
science fiction that you've read influences the way you think
about what we can do now.

Speaker 3 (12:27):
What we can do or also like what's happening right now.
So you know, an issue that we're having in education
is deep fakes. You know, both porn deep fakes, which
is really disturbing. Students are making deep fakes of each
other and it's really disturbing for the victim, and also
deep fakes of teachers or deep fakes of kind of

(12:47):
any authority figure.

Speaker 2 (12:49):
And you see this issue.

Speaker 3 (12:51):
Was taken up, like, decades ago in The Player of Games, right,
so you know, in this world, deep fakes are you know, prevalent.
Everything can be fake, and so the only way to
get around it is to have like certified humans that
are there in the room able to see what's going on.
And so you know what we're going to see, for example,

(13:12):
in education, as everybody wants to start taking these high
stakes exams online but you have all this like you know,
ability to deep fake, and you know, we're going to
go through an arc where they're going to try to
create these ubiquitous global, worldwide assessments and everybody's just going
to be, like, faking it. Everybody's going to be cheating,
and we're going to have to go back to small
format in the room, like one proctor is looking over

(13:35):
your shoulder in order to verify who's there and what's
going on.

Speaker 1 (13:55):
By the way, as a side note, the thing that's
intrigued me about deep fakes is, I've been very interested
in the legal system, the intersection of science and the
legal system, and everyone was worried that people would be
introducing deep fakes into the system and saying, look, here you
see the guy who's holding the gun, whatever. But in fact,
exactly the opposite has happened, which is that everyone who's
caught on camera now goes into the court and says
it's a fake. It wasn't actually me, even though it

(14:17):
actually was. By the way, it's intriguing to me that
a science fiction writer, let's say whoever wrote The Player
of Games, could foresee a world of deep fakes, because
it seemed so unlikely even ten years ago that you
could make a convincing photograph or video of somebody, and
now it's so trivial.

Speaker 3 (14:35):
But I mean it's been in the collective consciousness for decades.
I mean, think about the holodecks on Star Trek, like
we've always dreamt of having completely immersive magical environments.

Speaker 1 (14:46):
Yeah, that's right. But faking it and being able to
replicate a human and have them do something bad on
the camera, that's really unusual. In fact, I was just
thinking about the movie Total Recall, which I haven't seen
since it was in the theaters a million years ago.
What I'm going to try to do this without giving
anything away if someone hasn't seen it. But the protagonist
thinks he's one person, and eventually he's shown a video

(15:10):
of himself doing these other acts and he realizes that
his memory has been essentially wiped and manipulated. And I thought, wow,
that movie wouldn't work now, because if you saw a
video of yourself doing something, you'd think, well, that's a
deep fake. That's not actually me. But in the movie,
that was the turning point where he realized, oh my gosh,
I used to be someone else. Yeah. So it strikes

(15:32):
me that not all authors have foreseen the possibility of
deep fakes, and a lot of plot points get thrown
out now that that exists.

Speaker 2 (15:40):
It's true.

Speaker 3 (15:41):
I mean, hearing you talk made me just think about,
you know, the issues of AI and self awareness and
kind of sovereignty.

Speaker 2 (15:49):
So gosh, Blade runner.

Speaker 1 (15:52):
Yeah.

Speaker 3 (15:53):
Right, Well, everybody's worried that we're nearing actual AGI.

Speaker 2 (15:57):
But even if we're not, even if we're just coming to.

Speaker 3 (16:00):
A point where we're going to have really strong AI
companions and they're gonna have a ton of data and
they're going to persist apart from a human. And I
mean, next step, like next year, we're going to have,
you know, flesh-covered AI robots.

Speaker 2 (16:17):
I mean we're already seeing stuff come out.

Speaker 3 (16:18):
I mean both from Asia and here where it's you know,
it's not just exoskeletons. Yeah, yeah, yeah, you've got
soft exteriors, right, So it is it's very Westworld so
to speak, right, and they're going to be armed, not
just with these you know, task based systems or goals
of service. Like it's so easy to put in a

(16:40):
companion that you know wants to be your friend or
wants to be a doctor, wants to be a therapist,
or just wants to be you out in the world.
And so you know, it seems far fetched, but I
tell you next year. You know, certainly within five years,
issues of, you know, liability, agent sovereignty, and, like,
whether or not it's murder are going to be.

Speaker 2 (17:03):
In the public eye. But think about this.

Speaker 3 (17:05):
If you were to program your dead loved one's memories
into you know, an agent, you know, maybe it's just software,
and then you put it into an actual body and
you're able to interact with it. Think of the emotional
distress it will cause you for someone to destroy that thing.

Speaker 1 (17:21):
Yes, exactly.

Speaker 3 (17:22):
You know, as soon as there's emotional distress in
the USA, you've got a liability suit.

Speaker 1 (17:27):
Yes, quite right, although although in that case, as long
as you have a backup, you can just reboot it. Right.

Speaker 3 (17:32):
Sure, again, okay, this isn't, no, this is Blade Runner. Well,
what if you don't have a backup? What if you have
to take it with you? What if you can't afford
the backup?

Speaker 1 (17:39):
Oh right right, yeah, what if it cost an ungodly
amount by some vicious company that's trying to profit from it?

Speaker 3 (17:46):
And so this is all science fiction. I mean, it
seemed far fetched and it's here now.

Speaker 1 (17:50):
I know. That's the surprising part. Okay, So what else,
Because you're such a fan of science fiction, what else
have you read that seems to reflect on what's happening
in this in this fast moment in time.

Speaker 3 (18:01):
Well, as I said before, I'm a huge advocate of
some company hopefully soon changing the dynamics around users' privacy
and data and giving everybody basically their own data and
like letting the code come to them. And so my
favorite example of that are the e-butlers from Pandora's
Star, right, where everybody has this incredibly strong agent

(18:24):
that's kind of a defense mechanism where it stores all
your data kind of locally, and then if any system
or service wants to interact with you, the code comes
to you and it is, you know, making sure that
all of that compute and all that service happens locally.
And I think it's so powerful and it would be
so wonderful. And of course then you've got, like, how
strong is your agent, and can it properly protect you,
and all that. But to me, that's the best

(18:47):
example of what we really want that we're not getting
right now.

Speaker 1 (18:53):
Fascinating. Do you see a business path to that?

Speaker 3 (18:56):
I think that one of the biggest companies in the
world will eventually see the writing on the wall and
they will upend the market and upend their investors
and make the leap.

Speaker 1 (19:06):
Yeah, but it will.

Speaker 3 (19:07):
Probably be somebody that thinks that they're about to lose. Yeah,
you know, so a key player that's like, Okay, we
don't have the compute, or we haven't, you know, we
see the other guys, like, you know, leaping ahead
because of whatever functionality or investors.

Speaker 1 (19:22):
Yeah, so I have a question for you, because you're
such a fan of science fiction. I'm a fan of
literature more generally. I haven't read that much science fiction,
as it turns out. But for example, I heard an
interview with Isaac Asimov on Jonah Lerer, which was the
show on PBS a million years ago. And Asimov,
this was probably the eighties, he said, Look, I foresee

(19:42):
a day when every household is connected to a central
mainframe computer. Everyone has a dumb terminal in their house,
and this computer knows all of humankind's knowledge and you
can ask it any question you want, you can get
the answer. And so he essentially foresaw the Internet. But
what's interesting is that he didn't get the technology right.

(20:04):
I mean, it doesn't matter, but you know, you imagine
these cables all going to a mainframe and so I'm
fascinated by the way that thinkers can foresee the direction
that things are going. But of course we're always limited
by the technology that we have in thinking about it. Yeah,
So if you thought about your science fiction bookshelf and
how many of the predictions were spot on versus sort

(20:26):
of kind of off but mostly right versus totally off,
what would you say?

Speaker 3 (20:30):
I would say that the why and the what is
almost always correct, but the how is often wrong.

Speaker 1 (20:37):
Beautifully said. Yeah, and so what does it
mean about the why and the what? Does it mean
that the trajectory of where we're going as a species
is clear enough? If you really sit and squint
into the future, you can sort of see the direction.

Speaker 3 (20:50):
You can. You can just read any fantasy novel or
any fairy tale, like we want to fly, we want
to live forever. We want to be able to read people's thoughts,
we want to be able to be invisible.

Speaker 2 (21:03):
We want to you know, like.

Speaker 3 (21:04):
All of these superpowers we are directly heading towards.

Speaker 1 (21:09):
Yeah.

Speaker 3 (21:09):
So, like think about it as an example, you know,
spells and the Internet of things. You know, now, through bioauthentication,
we're able to get differential access to different places and
different functionality and systems.

Speaker 1 (21:21):
So I walk down this hallway and the doors open.

Speaker 2 (21:25):
You know, but you say abracadabra and it opens. That
is a spell. We're going to just see more of them.

Speaker 1 (21:30):
Yes, what's the most far fetched thing that you've read
in a book that you're seeing evidence for? Now?

Speaker 3 (21:37):
Okay, this is going to be a weird one, but
hold with me. Whether or not you believe in reincarnation,
the desire to live forever.

Speaker 2 (21:46):
Is core to who we are.

Speaker 3 (21:48):
And there's this amazing book by Kim Stanley Robinson called
The Years of Rice and Salt, and again, without,
you know, revealing everything. Guys, you might want to mute
if you haven't read it, but it explores what would
it mean if we were able to sense our reincarnation
and what would happen if we could track our soul

(22:09):
like coming back, And when you think about it, there's
huge incentives, especially for the ultra wealthy, to be able
to do this. So you're already like, I'm hearing whispers
about companies where they're like, okay, you know we're starting
now with really crude hacks, right. We're like,
what was that company that was like, I'm going
to freeze you at the point of death? So yeah, yeah, right,

(22:31):
So people are already pursuing very brutal kind of early
ways of living forever or being able to reconstitute themselves
or come back to life. But there's this fantasy, and
if science fiction and the progression of technology is any indicator,
you know, we will move to a point where we
might be able to track our soul's reincarnation. And there

(22:55):
is so much economic benefit to that. Imagine being able
to leave your billions of dollars to yourself.

Speaker 2 (23:02):
I mean it sounds crazy, but like people are thinking
about it.

Speaker 1 (23:06):
Wait, hold on, this requires the existence.

Speaker 2 (23:08):
Of a soul, wait, which we can argue about.

Speaker 3 (23:10):
We can argue about, but the chance is enough to
make people obsessed with the question.

Speaker 1 (23:17):
Oh fascinating. But so you might get all sorts of
predatory companies coming in here and saying, hey, pay me
a lot of money, I'll track your soul, which may
be a preposterous thing.

Speaker 3 (23:27):
Okay, But then the question is like how much of
you has to exist for it to be your soul?

Speaker 1 (23:31):
Right?

Speaker 3 (23:31):
If we're already saying that we can externalize our intelligence
and pass that along, then does it have to be
actual flesh or could it be a sufficient amount of
our intelligence.

Speaker 1 (23:42):
Like an upload?

Speaker 3 (23:43):
Yeah?

Speaker 1 (23:43):
Yeah, oh totally. Now that's a good idea, and I
find uploading intriguing. There are, of course, all these
questions about it that philosophers have been asking
for a long time, which is, you know, if I
could upload myself into a computer. And so this other
thing wakes up. Is that actually me? It certainly thinks
it was me. It says, wow, I was just sitting in
this chair a moment ago. But as far as I'm concerned,

(24:05):
I'm still going to drop dead.

Speaker 3 (24:07):
And there's so much science fiction that grapples with
this. Ghost in the Shell, right? Like how
much flesh or how much essence? Or, like, you know,
is your entire intelligence equal to even a spark of
your natural matter?

Speaker 1 (24:20):
You know?

Speaker 2 (24:22):
It's the Ship of Theseus.

Speaker 1 (24:23):
Yes?

Speaker 2 (24:24):
The question?

Speaker 1 (24:24):
Yes. Well okay, so just for the listener, the
Ship of Theseus: Theseus's ship pulls into the dock,
a plank rots, it gets replaced. Another plank rots, it gets replaced.
Eventually all the planks of the ship have been replaced,
every piece of wood that was on it. Is it
still the Ship of Theseus? In ancient Athens, and I looked
this up recently, about half the philosophers said yes
and half said no. But it's a deep question

(24:46):
about identity. The modern neuroscience version is if I took
a neuron out of your head and replaced it with
a metal neuron that did exactly the same function, and
then replaced another neuron, and another, and eighty-six billion
neurons later, you're sort of a metal robot, is it still
you? Exactly. Yes, this is the question. I do think

(25:06):
that might be a separate question, though, from the upload question,
which is, now I start the version of Bethany in
a computer over here, do you still feel like you're
having a heart attack and dying, and this other thing
is you? But it doesn't do you any good. It's happy
that it's there, but you're not. You don't feel anything

(25:27):
about it. You don't feel any continuity.

Speaker 3 (25:29):
Yeah. So, the question of what it means to be
human and how much our intelligence defines us versus our
bodies defining us, it's happening right now, right, and
it's going to happen again more and more and quickly,
as we think about liability, as we think about agents
that act on our behalf, and as we form and
grow these deep relationships with agents that are the ghosts

(25:52):
of our loved ones or versions of ourselves.

Speaker 1 (26:12):
You know, in a previous podcast that you and I
did together, you were talking about this idea of a
mirror of oneself, and a lot of young people are
doing this, which seems to be very foresighted and mature
of them to do this so they can learn about themselves.
But yes, as you make a richer and richer digital twin,
in a sense, you don't even have to imagine this

(26:33):
upload thing where you scan your brain to do it,
because in a sense, by the end of your life,
you've got a really rich digital twin.

Speaker 2 (26:39):
Correct.

Speaker 3 (26:40):
Or there's a really great science fiction book about this,
the whole The Last Emperox series, that's Emperox with
an X. So the emperoxs, or the ruling
family in this series, implant kind of like a spinal tap,
so that everybody in the family is not only having
their memories be recorded, but their full like body functionality

(27:00):
be recorded, which has typically been the missing piece, right,
because we know that our cognition isn't just in our brain,
that it's you know, tied with our gut, and there's
you know, chemicals involved, and so the idea was that
you know, because they were able to completely track through lives.
That then every new emperox can have a conversation with
their entire family going back through millennia.

Speaker 1 (27:22):
Ooh oh oh. Interesting because there's a replica of their.

Speaker 3 (27:26):
Great-great-grandfather, and it's able to say exactly how that
person not only thought but felt in every moment of
their life.

Speaker 1 (27:34):
Interesting, you know what I find interesting about that? So
it just so happens that I'm my family's genealogist. I've
studied the family tree for years, and no one else
seems to be that interested. I try to show them
the thing, and to them, it's just some names on
the page and so on. I do wonder, though, if
I said, look, I've got this great replica of your
great great grandfather, would they care or would they rather

(27:55):
hang out with their friends and play Fortnite? That's
an interesting question. If you had all of your ancestry,
you know, thirty-two, sixty-four, a hundred and twenty-eight people,
would you sit and talk to them or do you think,
oh boy, I got more relatives to deal with.

Speaker 3 (28:09):
Now I'm also my family's genealogist, and I think about
this as well, And my conclusion is that people care
less about facts, but they really want to know how
their ancestors felt. If you could find out that your
great great grandfather like also felt this driving need to
you know, look beyond the horizon and was never like
happy in one spot for long, that would give you

(28:30):
great understanding about kind of what it means to live
as a human today. It doesn't really matter what the
fact is, but that like chemical experience, would really change
people's understandings of who they are.

Speaker 2 (28:42):
We're really going off track here.

Speaker 1 (28:44):
I like it. This is great, because the class
that I'm teaching at Stanford this quarter is Literature
and the Brain. And there's a lot involved in this.
The starting point is my fascination with the fact that
the way we study the brain is looking at, okay,
here's how the visual system works, here's how audition works,
here's how touch works. But what is never talked about is
how easy it is for the brain to completely go

(29:06):
off track and be someone else. So I open Game
of Thrones and I am Jon Snow, or the next chapter,
I am Daenerys Targaryen, and so on. We can just
slip into other shoes so readily, and the story of
neuroscience and every textbook that we read is you know,
it's all about mutual information and finding out exactly
what you're hearing and seeing. But somehow we're

(29:27):
so easily able to slip on other identities. So that's
fascinating to me. But one of the jobs of literature
is that it allows us to experience broader worlds or
be in situations that we would never be in, and
we get to learn from those situations. And so what's
interesting about science fiction is really stretching that out and
really trying things out at a very different place, at

(29:51):
a different scale. And you'd get the same thing if
you met your great-great-grandfather and he got to tell
you about something that was really meaningful in his
era, which is totally removed from yours; you'd
get to really experience something.

Speaker 3 (30:05):
And understand how he made meaning. Because, as you know,
your upload/download rate doesn't really matter. We create memories
through meaning and through narrative; that's inherent to
how our memory works. So I have this
debate with my friends at Neuralink, where I'm like, really,
will the upload/download rate really make us superhuman?

(30:25):
Or will a brain-computer interface much more likely be
one in which we have an agent in our head
which we are conversing with? Because, in fact, that's how
we process information better: we create meaning, we're dialogic,
we ask questions. And that will be the memory augmentation,
versus some kind of I-just-learned-Kung-Fu situation.

Speaker 1 (30:46):
Oh, I love that. Are there any other science fiction
books that you've read that you feel really shine
an interesting light on where we are right now
in twenty twenty-five?

Speaker 3 (30:56):
Dune is an epic book and an epic series, and
of course Denis Villeneuve made an epic set of movies around it.
But I think one of the reasons that it is
capturing the public imagination right now is because it is
all about that tension between physical and social
striving and being your higher self, you know, like can

(31:19):
you actually evolve into being better humans?
And there's a lot of oil politics in there, and
I'm going to shove those aside for a second. But
to me, the core theme of Dune is around human
evolution: using meditation and using all of the different
skills that we have from all the different, you know,
veins of knowledge throughout the world to naturally evolve ourselves.

(31:42):
That's what the Bene Gesserit did, right?

Speaker 1 (31:45):
So for anyone who hasn't seen the movie or read
the book, give us the quick summary.

Speaker 3 (31:50):
Okay. So the quick summary is: in this wonderful space
opera, far-flung future, they've outlawed thinking machines because they
had some bad run-ins with killer machines, and so they
focused on natural human evolution and augmentation. And there are
some specific groups that have started to use basically yoga, meditation,

(32:13):
memory techniques, somatic dreaming, like, you know, hyper-awareness, as
well as, you know, psychedelic substances to dramatically expand and
increase the scope of their capabilities.

Speaker 2 (32:27):
And you see people doing it now?

Speaker 3 (32:29):
I mean, this has frankly been something that we've done
in, you know, groups around the world forever, but there
hasn't necessarily been, you know, an effort around
it for intelligence augmentation.

Speaker 1 (32:40):
Oh, you see echoes of that.

Speaker 3 (32:42):
Well, Dune is very important right now, as we have
new technology coming online and an awareness of how different techniques
around what we eat, how we meditate, and how we
use our memory can produce, you know, longer life, can
slow aging, can increase intelligence. And there's this tension right

(33:02):
now around: are we going to be able to do
any of that naturally, or are we going to have
to hand it off to machines?

Speaker 1 (33:12):
That was my conversation with Bethany Maples. So let's come
back to where we began. In nineteen sixty six, Star
Trek put glowing tablets into the hands of its characters,
and decades later, engineers turned that fiction into fact. That's
just one example of what we've been circling today, the
way that science fiction becomes a rehearsal for the future. Now,

(33:35):
it would be too simple to say that science fiction
is prediction or prophecy, because it often gets things wrong. Instead,
I think it makes sense to say that it's the
place where we write down our collective hopes and fears,
our dreams of flying, of reading minds, of living forever,
of building companions who will never leave us. And by

(33:58):
committing these things to the page, these ideas have a
chance to blossom. They plant seeds in the minds of
young people, who go on to build the technology
of tomorrow. And when that technology finally arrives, whether it's
AI tutors or digital twins, or companions that blur the
line between friendship and therapy, whatever, we often feel a

(34:20):
sense of déjà vu. Haven't we been here before? Yes,
but only in the pages of a novel or in
the flicker of a movie screen. So as we wrap up,
I'll leave you with this thought. Sometimes the stories we
tell are more than just entertainment. Every once in a while,
the novels on your shelf and the shows on your
screen are a way to move ideas forward. And so

(34:44):
the next question becomes, what stories do we want to
tell now? Because if science fiction has a way of
becoming real life, then all of us, all the writers,
the readers, the engineers, the visionaries, we all share some responsibility.
We are all collaborators in building the future. Go to

(35:11):
Eagleman dot com slash podcasts for more information and to
find further reading. Send me an email at podcasts at
eagleman dot com with questions or discussion, and check out
and subscribe to Inner Cosmos on YouTube for videos of
each episode and to leave comments until next time. I'm
David Eagleman, and this is Inner Cosmos.
Host

David Eagleman


© 2025 iHeartMedia, Inc.