Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Will writers and artists and musicians become unemployed by AI?
What are the new capabilities that we're seeing all around us,
and what is this.
Speaker 2 (00:16):
Going to mean for human creativity?
Speaker 1 (00:19):
And what does this have to do with diamonds and
Westworld and effort and Frankenstein and Beethoven and the Stark
Family and Game of Thrones. Welcome to Inner Cosmos with
me David Eagleman. I'm a neuroscientist and an author at
Stanford University, and in this episode, I get to dive
(00:41):
into something that's right at the intersection of science and creativity.
Most of my podcasts are about evergreen topics about our
brains and our psychology, but there's something so extraordinary happening
right now.
Speaker 2 (01:01):
We're in the middle of a revolution with AI, and
what's called generative AI in particular. So I'm going to
do a two part episode on this. For today, I'm
going to dig into what generative AI is and what
it means for human creativity, and then in the next episode,
I'm going to tackle the question of sentience. Are these
(01:24):
AIs conscious, and if not now, could they be soon?
And how would we know when we get there?
Speaker 1 (01:35):
So let's start in twenty seventeen when almost no one
in the world paid attention when a team at Google
Brain introduced a new way of building an artificial neural network.
So this was different than the architectures that came before it,
which were called things like convolutional neural networks and recurrent
(01:55):
neural networks. Instead, they presented a new model that was
called a transformer. Now, transformer is not one of those
robots that shapeshift into trucks and helicopters.
Speaker 2 (02:07):
Instead, a transformer model is.
Speaker 1 (02:10):
A way to tackle sequential data like the words that
are in a sentence or the frames in a video.
And a transformer model takes in everything at once, and
it essentially pays attention to different parts of the data.
And this allows training on enormous data sets, bigger than
what was trained on before. Like now it's essentially everything
(02:33):
that has been written by humans that is on the Internet,
which is petabytes of data. So these models digest all of that, and what do they do? They essentially look at a sequence of inputs, like the words in a sentence, and they ask what word is most likely to come next in that sequence.
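To make that "most likely next word" idea concrete, here is a tiny sketch in Python. It is a toy illustration only, not how a real transformer is implemented: it just counts which word tends to follow which in a small corpus and picks the most frequent continuation. A real model learns far richer statistics with attention layers over enormous datasets, but the underlying question is the same.

```python
# A toy, illustrative sketch (not a real transformer): count which word
# follows which in a tiny corpus, then pick the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat sat on the sofa .".split()

# Bigram counts: word -> Counter of the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- it followed 'the' most often
print(predict_next("sat"))  # 'on'
```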
(02:56):
Now, we'll come back to that in a second, but I just want to note that this transformer model is finding uses way beyond text. So,
for example, a recent Nature paper used this kind of
model to look at amino acids, which run in a
sequence to make proteins, and they looked at these chains
of amino acids like text strings, and they set a
(03:18):
major new high-water mark in determining how proteins fold, which
is a very difficult problem. And people are using transformers
for everything from making music to reading giant reams of
medical records and so on. These transformer models are built
into search already, and soon they're going to be in
your phone and in your car, and in your bank
(03:40):
and in your doctor's office. So what everyone in Silicon
Valley is talking about is how this new kind of
AI is going to disrupt the workforce. And a lot
of people are thinking about white collar jobs that have
traditionally required memorization of long textbooks, and these jobs, whether
(04:04):
they're legal or medical, suddenly seem to be kind of outmoded.
And so we're all thinking about what this means for
the economy because so many jobs are going to be
displaced by this new technology. Now, there's nothing totally new
about this kind of worry, because every generation sees new
technologies take over old jobs. That's natural, and we don't
(04:28):
lament the fact that we don't have elevator operators anymore,
or switchboard operators at telephone companies, or factories that make
VCRs or eight track tape players, because new technologies continuously
replace the old, and industries change and people adapt. But
(04:48):
the concern that we're seeing with the AI revolution is
the speed of it. It's probably the case that we've
never before had a move forward in technology that's so
unbelievably rapid. So this is why everyone's talking about this
with a different point of view than we did with
(05:10):
previous innovations. But I want to zoom in on something
a little different for this episode. I want to know
what this all means for human creativity, because the thing
to note is these models have been trained up not
just on the handful of novels and conversations and schoolwork
that you have experienced on your thin trajectory through space
(05:31):
and time, but they have been trained with everything that's
ever been written by humans. Every textbook, every article, every poem,
every blog post, every novel. We're talking seventy-one billion web pages and hundreds of trillions of words. It's something
(05:52):
that's so far beyond any human's capacity to consume even
a fraction of it, or to really imagine a corpus
of text that large. Oh and by the way, it
has a perfect memory for every word that it's read.
So now you're talking about a system that's not the
same as a brain, but is incredibly powerful at generating
(06:17):
text or visual art or music and soon video. And
so while we'll talk about sentience next week, this week,
I want to address a social point that has quickly
risen to the surface, which is what will all this
mean for human art and human creativity? Personally, I'm working
on my next several books right now, and these are
(06:39):
all projects that have spanned years, and so I'm fascinated
and terrified about whether AI is going to replace me
as a writer. What does this kind of new AI
mean for writers, for visual artists, for musicians who studied
their whole lives to be able to compose beautiful pieces of music? Is human creativity destined for the dustbin
of music? Is human creativity destined for the dust bin
of history? So let's start with the downside of these models.
So in my book Live Wired, I talked about how
AI algorithms don't care about relevance; they memorize whatever we ask them to. Now, this is a very useful
(07:23):
feature of AI, but it's also the reason AI is
not particularly human like, because AI models don't have any
sort of internal model of the world. They have no
idea what it is to be a human and have
drives and concerns. They don't care which problems are interesting
(07:44):
or germane. Instead, they memorize whatever we feed them. So
whether that's distinguishing a horse from a zebra in a
billion photographs, or tracking flight data from every airport on
the planet, or composing music in the style of Brian Eno,
they have no sense of importance except in a statistical sense,
(08:06):
which is to say, which signals occur more often.
Speaker 2 (08:10):
So contemporary AI could never.
Speaker 1 (08:13):
By itself decide that it finds irresistible a particular kind
of ice cream, or that it abhors a particular kind
of music, or that it's heartbroken by King Lear's speech
over his dead daughter. So AI can dispatch, you know,
ten thousand hours of intense practice in ten thousand nanoseconds,
(08:34):
but it doesn't care about any zeros and ones over
any others. As a result, AI can accomplish incredibly impressive feats,
but not the feat of being quite like a human.
And so some critics of AI say, look, it's like
you want a sandwich, and what this transformer model does
(08:56):
is it looks at all the billions of sandwiches out
there in the world, and it gives you a slurry
and it pours it out in.
Speaker 2 (09:03):
The shape of a sandwich.
Speaker 1 (09:05):
A fellow writer gave me that analogy the other day,
and that doesn't sound particularly appealing, right, And yet these
ais have massively surprised us.
Speaker 2 (09:17):
The text generation is so good, it's.
Speaker 1 (09:20):
So complete, it's so human like that we find ourselves
not so much in the phase of invention like with
all the machines we've made before. Instead, the whole scientific
community is finding itself in a process of discovery. Everyone
is exploring to find out what these enormous models are
(09:41):
capable of, because nobody quite knows. They keep blowing our
minds with things they're able to do that weren't pre-programmed or even foreseen. I have a friend who works with a big city symphony, and she's trying to plan a program for the symphony several months out, which is
(10:03):
a typical timescale for symphony planning, but she's scheduling to
put on a program with music composed by AI, and
she's at a loss for how to plan this because
she's well aware that things are moving so fast that
the musical world and the skill level of AI composition
is going to be entirely different in a few months;
(10:25):
it's going to be more advanced. So she was telling me
that she doesn't quite know how to nail down plans
for this, because unlike every symphony planner who has come before,
she's now in a world where if she nails down
a choice of music and trains up the musicians, it
is guaranteed to be badly outdated some months from now.
(10:47):
And this is the world we're operating in now. So generative
AI is moving so rapidly that we have entered this
massive revolution without most of us realizing that we were
going there.
Speaker 2 (11:00):
Art and writing and music aren't.
Speaker 1 (11:03):
Going away, but they're going to completely change from how
we know them today.
Speaker 2 (11:09):
Now.
Speaker 1 (11:09):
I told you earlier that AI doesn't have any idea
of what it is to be a.
Speaker 2 (11:14):
Human, but I think it doesn't matter.
Speaker 1 (11:18):
AI doesn't need to feel anything to write great literature
or great art or great music, because while you can
think of it as a sandwich slurry, you can also think of ChatGPT as a remix of every human
writer that has come before. Its training set is humankind,
(11:39):
and so even if it's just statistical, it's generating the
expressions and the passions and the fears and the hopes
of millions of people. So it doesn't matter if it
feels or knows or has theory of mind, or if
it cries at King Lear's speech, because it can convincingly
(12:00):
tell you a story that breaks your heart. And it
does this by drawing on the best of human writing
over the centuries. So as a result, it's incredibly good
and it puts together things in a new way. And
I think part of understanding this requires acknowledging a really
important point, which is that the AI is really good,
(12:23):
but also that humans are so easily hackable. The phrase
humans are hackable is a phrase that I first started
hearing from my friend Lisa Joy Nolan, who with her
husband Jonathan Nolan, created the television show Westworld, and that
was a big theme in that show. The humans could
(12:44):
so easily get seduced by the robots, or convinced to
do bad actions or act violently and the robots were
just running AI. But if they say the right thing,
then they can get humans to do things, whether that's
fighting or fornicating or whatever. It's like turning the key
in the lock. Now, there's a point that I want
to dig into here. If you saw Westworld, you may
(13:06):
remember the scene from the first episode where a man
named William has just arrived at Westworld and he's greeted
in a room by a beautiful woman who guides him
to pick out his cowboy outfit and his gun and
his hat, and she makes it clear that she's available
for him sexually, and he uncomfortably asks her, are you real?
(13:29):
And she says, if you can't tell, does it matter?
Speaker 2 (13:35):
Now?
Speaker 1 (13:35):
This is a major theme throughout Westworld. Humans are hackable,
and if you can't tell the difference between something that
has evolutionary importance to you and a fake version of it,
then it makes no difference. And this is what we
see when we look at the text that is spit
out from ChatGPT. It is statistically sound, meaning it
(13:58):
falls in the orders and rhythms of millions of people
who have written things like it before, and so we
can be just as compelled by the text, and therefore
the fact that AI can write a story that moves
us and impresses us is no surprise. It's easy to
move and impress us. In a sense, it's no more
(14:20):
surprising than drawing a pornographic cartoon that turns someone on.
You're just plugging into deeply carved programs. A human can't
mate with the cartoon. But nonetheless, it's easy enough to
activate the biological programs, so a story can make you
shed tears or laugh even if the transformer is just
(14:43):
pushing around zeros and ones. And therefore we shouldn't be
surprised that AI can write these really great pieces of prose.
It doesn't have to be real and it doesn't matter.
So now that we can write beautiful prose with AI,
what does this mean for the future of books. Well,
(15:04):
I think we can imagine a pretty cool future for
AI generated literature. We can imagine generating infinite, wonderful material.
Speaker 2 (15:15):
And you know what, Back in the day.
Speaker 1 (15:17):
Kings and emperors had poems written that were bespoke. The
poems were written just for them. And now it's going
to be trivial for us to all live as royalty,
having bespoke literature written just for us as much as
we want, as often as we want, in seconds, and
maybe we'll come to enjoy dynamic novels, by which I
(15:41):
mean a piece of literature that's not pre-written, but
instead is written on the fly depending on the decisions
that you make, like a choose your own adventure. So
you say this is a good book so far. Now
I want to see what happens if I go in
the neighbor's door and get a view on his life,
or the life of the mailman who just passed by, or the
(16:01):
traffic cop and the book just keeps writing itself on
the fly, thousands of pages that end up being.
Speaker 2 (16:09):
Unique for me, for you, for everyone as they go
on their own adventure.
Speaker 1 (16:15):
Instead of having some poor author who has to write
every possible branching path, now there's no need to do that.
Speaker 2 (16:23):
You just generate it on the fly.
Speaker 1 (16:26):
So now we'll all get to experience literary worlds that
are infinite in all directions. So in that light, it
certainly seems that AI is going to replace human creatives.
It can do things better and millions of times faster,
and it can be there to write the next pages
(16:46):
according to your wishes. So it looks like writers are
going the way of the mastodon?
Speaker 2 (16:54):
Or are they?
Speaker 1 (16:56):
I think the real story is not so simple. I'm
fairly sure that while AI will augment human-told stories,
there's essentially zero danger that it's going to do a
wholesale replacement of human creatives. And I'm going to argue
this for four reasons. The first is that we care
(17:16):
about the overarching arc of a story, and at least
at the moment, AI can't even come close to constructing this.
And this is because of a fundamental limitation in its architecture.
And this isn't just a question of pouring more money
in and getting more massive computers on the job. It
has to do with the exponentially increasing computational cost of
(17:40):
representing longer pieces of work. So currently with ChatGPT-4, it looks at the past four thousand ninety-six tokens, which is about three thousand words, and it decides what the
most likely next word is. But without getting into the
details of the math, I want to point out that
this requires a matrix. Think about it like a big
(18:03):
spreadsheet that has four thousand ninety-six rows and four thousand ninety-
Speaker 2 (18:07):
six columns and an entry in every cell.
Speaker 1 (18:10):
That represents something about the probability of those words going
with each other.
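To give a rough feel for why that matters, here is a small illustrative Python sketch, with toy numbers and random stand-in vectors rather than the actual model: it builds that kind of all-pairs score matrix for a few context lengths, showing how the number of entries grows with the square of the context.

```python
# A rough, illustrative sketch: self-attention scores every token against
# every other token, so a context of n tokens produces an n-by-n matrix,
# and doubling the context quadruples the number of entries.
import numpy as np

def attention_matrix(n_tokens, d_model=8, seed=0):
    rng = np.random.default_rng(seed)
    queries = rng.standard_normal((n_tokens, d_model))  # stand-in query vectors
    keys = rng.standard_normal((n_tokens, d_model))     # stand-in key vectors
    scores = queries @ keys.T / np.sqrt(d_model)        # shape: (n_tokens, n_tokens)
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)  # each row sums to 1

for n in (64, 128, 256):
    print(n, "tokens ->", attention_matrix(n).shape, f"{n * n:,} pairwise entries")

# At the 4,096-token context mentioned in the episode, that would be
# 4,096 x 4,096 = 16,777,216 pairwise entries, and the cost keeps growing
# with the square of the context length.
```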
Speaker 2 (18:14):
Now, this matrix will grow larger.
Speaker 1 (18:17):
With time, but the size of the output is inherently
constrained by this structure, and as a result, ChatGPT is perfect for poems or blog posts or small articles, but not something the size of a novel. Why? Because
a novel has arcs and plot twists and cleverly planted
(18:38):
clues and cliffhangers, and all of these operate at a
longer timescale. So a human author mentally zooms in and
out such that their stories have this sweeping arc to them. So,
for example, in a mystery novel, we get to the
end and we realize that all the clues and the
red herrings we saw were subservient to the solution to
(19:02):
the mystery, which of course the author knew from the beginning,
and the author was just spooling out clues to you
one at a time. In writing, you often have to
know the end to structure the beginning and the middle.
And this is, by the way, why ChatGPT can't
make up a new joke, even though it can repeat
jokes that are already made.
Speaker 2 (19:22):
But it's because to construct a joke, just like a
mystery novel, you have to know the punchline first, and
then you construct the joke backwards. But these large language
models are simply constructing everything in the forward direction. They do statistical calculations on what the most probable next word is, given all the words before it. So,
(19:45):
coming back to the long arc, if you watched all
eight seasons of Game of Thrones, for example, or you
read those books, you come to care about these characters
because you've been with them through so many trials and
you feel like you know them and understand them, and
you can predict things about their behavior, and you're invested
in their long term trajectories. So all the children of
(20:09):
the Stark family end up scattered in different directions in
the world, and then in the final season, they end
up reconvening after what seems like a lifetime of adventure.
They're all back together for the final big showdown with
the Night King. And when we watch the series and
we get to season eight, we think, wow, I didn't
(20:32):
see that coming, that they're all back together now, and
now this story has a beautiful shape to it.
Speaker 1 (20:39):
I'm really in the hands of a professional here. At
least with our current AI architectures today, it's impossible to
achieve that, except possibly in a few-thousand-word version, because ChatGPT is playing its statistical game, and of
course it's playing it extremely well and successfully.
Speaker 2 (20:57):
But the trick to recognize here is.
Speaker 1 (21:00):
That it is amazing at the level of paragraphs and
possibly a few pages, but not at the level of
thinking about the details of a five-hundred-page novel, or a two-hour movie screenplay, or an eight-season epic.
It's great at this small stuff because it can do
that with statistics, but it's fundamentally limited for the longer
(21:22):
stuff because it has no way to zoom out and
think about the crops that it wants to plant for
the long game. Okay, you might say, fine, maybe we'll
get there at some point, but even for now, couldn't
you build a big story out of smaller chunks. So
one idea is to make this form of storytelling in
(21:44):
which the world is infinitely big.
Speaker 2 (21:47):
Let's come back to this picture.
Speaker 1 (21:48):
I painted a moment ago of a choose your own
adventure in which the AI generates plot points on the
fly for you. So I say, okay, open that door
to my left and the story continues as though it
were all prescripted, as though I have an author, let's say,
in the style of Hemingway or Nabokov or Morrison, who
(22:12):
has pre-written every possibility. In certain ways, this would
be amazingly cool, But I think the problem here is
that a story like that would just equal randomness, and
that's not actually what we want in a story. Instead,
we want to feel like we're putting our trust into an.
Speaker 2 (22:32):
Author who sees the big picture.
Speaker 1 (22:33):
We want the Stark children to reconvene such that we
feel the overarching pattern of the story and we have
a sense of completeness. If you just wanted randomness, you'd
go out into the world and find it there. You
wouldn't sit on your couch and read about meaningless characters
who are just in Brownian motion. And I think this
(22:57):
is the same issue with AI music, at least as
it stands now.
Speaker 2 (23:02):
Recent examples show.
Speaker 1 (23:03):
That it can compose incredible sounding music moment to moment.
Speaker 2 (23:07):
But the reason it doesn't beat out.
Speaker 1 (23:09):
A real human composer, at least today, is because it
doesn't have any long-term vision, and so the whole piece of music just hangs together statistically, moment to moment,
and that's perfectly good for composing things like elevator music,
which is for a short ride, or commercial music which
only needs to be twenty seconds. But it won't for
(23:32):
now replace a human composer who writes with the long
arc in mind. For example, I was just talking with
my friend Tony Brandt, who's a composer, and he was
explaining to me that when Ludwig van Beethoven died,
he left behind sketches for a tenth symphony. So a
few years ago some computer scientists used AI to complete
(23:55):
the symphony, to finish what was unfinished. Now, did they do a good job? In one sense, it was an
incredible feat. They extracted the statistics of Beethoven's choices and
preferences from everything he'd written, and they used that to
statistically guess what moves he would have made next had
(24:16):
he lived, What notes, what chords, what instruments. But even
with this feat, it was clear that the AI didn't
know how to think long term. For example, Beethoven's Ninth
Symphony ends with a chorus, which was such a surprise
to end a symphony this way. It had not ever
been done before, so the team training the AI decided
(24:38):
Beethoven would have found a similar novelty to end his
tenth Symphony, so they instructed the AI to include an organ,
a church instrument that had also never been used in
a symphony before. So at the start of the last movement,
the AI generates an organ.
Speaker 2 (24:55):
But when we zoom in, we see the difference.
Speaker 1 (25:01):
The real Beethoven laid all sorts of clues in the
Ninth Symphony to set the groundwork for the chorus. Like
the orchestra plays a type of music called a recitative
before the choir enters. Why? Because recitatives are found in opera,
and opera has voices, So he was laying clues down.
(25:22):
But in the AI tenth Symphony, there was no build
up to the organ. There was no suspense, no hidden
clues about.
Speaker 2 (25:30):
What was coming.
Speaker 1 (25:31):
The AI didn't know how to prepare the organ's arrival,
how to give it the significance that's there for experts
who listen for arcs that build through time. So, at
least for now, AI is useful at writing brief articles
and composing short ditties, but it doesn't have the architecture
(25:53):
to write long pieces that humans love to create and consume.
Speaker 2 (26:13):
So as I'm.
Speaker 1 (26:14):
Writing my next books, these large language models don't feel
to me like a real threat, at least not yet.
But let's imagine that we cut to ten years from
now and some hardworking programmers have figured out how to
build an AI with the right sort of architecture that
zooms in and out on the scope of a story,
(26:36):
and it can successfully generate a novel with cliffhangers and
overarching themes and so on. It's certainly not impossible that
we're going to get there, and it'll probably happen sooner
than we expect. So let's imagine we get there in
a year or five or ten. An AI can generate
a million good novels in an hour. Then what Well,
(26:58):
there are several directions in which things can go, And
the possibility that I mentioned earlier is that novels might
become bespoke, totally personalized to you. So you prompt your
AI to make an adventure story of exactly the type
that you might like. So you say, tell me a
murder mystery about a basketball player who's killed by someone
(27:20):
who appears to be his girlfriend. But then it turns
out it's actually a CIA plot that opens the door to a cover-up involving a pharmaceutical company. Let's assume
that the AI then spits out a book to your
exact specification, and it does an amazing job, and it
gives you a colorful story just how you wanted it,
and you can enjoy that on the beach seconds later.
(27:42):
Well that's cool, But I assert that this is never
going to replace literature. And this is my second point
why artists don't need to worry, because when you define
your own plot, the surprise is diluted.
Speaker 2 (27:57):
The joy of literature is diluted.
Speaker 1 (28:00):
After all, even if you are a creative prompter, you
are limited to versions of what you have experienced or
read before. And much of what we love in literature
is this surprise that comes from a particular point of
view that you have never considered, like characters or plot
points that would never be generated by your own limited
(28:22):
point of view. In the end, I think we don't
want to be limited by the parochial fence lines of
our own imagination.
Speaker 2 (28:31):
I suspect that.
Speaker 1 (28:33):
No matter how far in the future we look, we
are still going to want stories that surprise us, plot
twists that we don't see coming. Okay, fine, you might say,
so you agree that it's more exciting if we go
on rides that we didn't predefine. But you might point
out there's another thing that AI can do. So let's
(28:53):
address the next issue, the idea that AI could someday
generate millions of highly creative versions of a single story,
so there'd be no need to stick with just one
version of stories anymore. Instead of George R. R. Martin
writing Game of Thrones over decades, future AI could generate
(29:13):
thousands of fascinating versions in a second, and we wouldn't
depend on him for the next slow novel. But I
suggest that's not going to catch on either.
Speaker 2 (29:24):
Why.
Speaker 1 (29:25):
It's because we care about shared adventure. Would Game of
Thrones have been so popular if we each saw our
own version of it? In my version, Jon Snow dies early, and in your version, Daenerys marries Tyrion Lannister, and in your neighbor's version, Arya marries into a royal family on
(29:46):
some subplot island that never even appears in my version.
If this sounds less appealing to you, to have mutually
exclusive worlds, it illustrates the point that I want to make,
which is a big part of story is this social aspect,
the shared experience. We certainly could use AI to generate
(30:06):
a million different versions of Westeros, and in the future we can generate instant video around these plots with terrific special effects. But as a society, I think we
wouldn't want to each consume our own version. You want
your Jon Snow to do the same thing as my Jon Snow. And this is because a huge part of
(30:28):
story is this shared experience. We enjoy sharing fantasy worlds
because we talk about them. This is why we do
book clubs, so we can sit around and discuss something
we all shared together. All the time, I hear people say, hey,
did you see the latest episode of The Peripheral or
Jack Ryan or Severance or Star Trek or whatever. And
(30:51):
our love of communal stories stems partially from our need
for shared references. For example, I'm always making references to how Neo in The Matrix saw in slow motion,
and that's decades after that movie came out, but it
serves as a quick, culturally shared way that we can
talk about concepts. We all have quick cultural references for
(31:15):
time travel, where people say beam me up, Scotty when they're talking about teleportation, or we reference Obi-Wan Kenobi when we say may the Force be with you, or
we reference Ex Machina or Westworld as a shorthand for
AI going bad.
Speaker 2 (31:31):
And take this as an example.
Speaker 1 (31:32):
Imagine that you could generate a fantasy football game with
your favorite players from any decade on one team versus
players on another team, and you can now watch a
full football game from stem to stern. But would you
if no one else ever saw that game? In other words,
would you follow teams all the way through the World
(31:55):
Series if it was purely AI-generated plays and games?
I know that people might have different opinions on this,
but to me, that sounds not the least bit appealing.
Why it's because a giant part about sports is the
culture of talking about the game. Hey did you see
that play last night? Can you believe that shot he took?
Can you believe the call that ref made? And stories are
(32:19):
analogous to sports in this way. We come to our
book clubs to take the world that we read in
solitude and find a community with other people who were
there with us from their own living rooms. So I
suggest that as a culture, we are always going to
desire and need a shared vocabulary, and the only way
(32:40):
to grow that is to watch the same movies and
read the same stories.
Speaker 2 (32:45):
And that's why I predict that.
Speaker 1 (32:47):
While individualized stories might find niche audiences, they won't replace our need for shared stories. This is an interesting dimension of literature that's not typically considered: story gives us social glue. Okay, fine,
so let's assume that at some point AI could write
(33:08):
a story that's so evocative and beautiful that it becomes
a shared story, an adventure which everyone taps into and enjoys.
And now we arrive at my fourth point about why
AI won't totally displace creatives, and that is the question
of whether we get something more out of a piece
(33:28):
of literature or art if we feel there's.
Speaker 2 (33:32):
A heartbeat behind it.
Speaker 1 (33:34):
I read a beautiful quotation in The Atlantic about a
decade ago quote one of the only requirements for literature
is that the reader can feel a heart pulsing back
from them on the other side of the page. The
heartbeat matters because when we read, we consider the intention
of the author. We think, oh, this is Mary Shelley,
(33:57):
whose mother died a couple of weeks after she was born,
and she had a troubled childhood, and her father homeschooled her.
And she married the Romantic poet Percy Bysshe Shelley, and
he was already married and his wife committed suicide, and
they moved to France, and she came back pregnant, and
they were destitute, and their daughter died. And then they
went to spend a summer in Geneva with friends, and
(34:18):
they each set out to write a ghost story, and
she ended up writing Frankenstein.
Speaker 2 (34:23):
So we read her.
Speaker 1 (34:24):
Novel and we think, this is her voice, and this
is her viewpoint on the world, and these were the
things that she knew and the things she didn't know,
and the things she couldn't know.
Speaker 2 (34:35):
It isn't just the piece of art itself.
Speaker 1 (34:38):
It is the artist behind the art that colors our experience.
So imagine we get ChatGPT to adopt Mary Shelley's
style and write a story involving cell phones and electric cars.
It might be interesting and amazing, but I suggest we
wouldn't enjoy it as much because we would recognize there's
(35:00):
no unique human, no unique beating heart who had the
experiences and slaved over the words. Now, you could argue
that with almost all of the authors we enjoy, we live apart from them in space or time, and we'll never
meet them, and we just have the vaguest sense of
their existence.
Speaker 2 (35:20):
And that might be true, but it's still worth.
Speaker 1 (35:23):
Noting that we know fundamentally that they are human and
they are like us in some way. They may be
more successful, or more impoverished, or maybe from a different country,
but we know that fundamentally they are fellow travelers with
us on the human journey. Now, obviously we love a
(35:55):
lot of things that aren't real, like Spider-Man or Batman, but we also love the actors behind them.
If you had a chance to have dinner with or
even to shake the hand of the actor behind some
fantasy character that you love, you'd be thrilled about this.
Speaker 2 (36:11):
Now, I think that leads.
Speaker 1 (36:12):
To an interesting open question about some of these new
avatars that are hitting the scene with hundreds of thousands
of followers on Twitter. Even though they're fake. They're just avatars,
they're not real people. The part that strikes me as really interesting is that the ones who get all the
attention are the creators behind the avatar. In other words,
(36:34):
if I told you there was an avatar on Twitter,
with one hundred thousand followers, and you could get
the chance to meet the young woman behind all this,
you'd be thrilled. What this tells me is that we
are compelled by the heartbeat that is just behind the
actor or the avatar. In many ways, that's more interesting
to us than the actor or the avatar themselves. Now,
(36:58):
I don't think this goes on in every case, so let me
just address the counterpoint. You might say, well, does that
mean that if AI generated a thousand novels in a second,
that I'd be really interested in meeting the team of
young programmers behind that. I don't think so, because meeting
the programmers doesn't expand your understanding of the story. But
meeting an author who poured her heart into the story
(37:21):
for years that does shape and color and expand your understanding.
Speaker 2 (37:27):
And by the way, beyond writing, I think.
Speaker 1 (37:29):
This applies to musical composers and visual artists in the
same way, and in fact, to all human endeavors. I
was just talking with a neighbor of mine. He and
I spend a lot of time on airplanes flying to
some city in the world to give a talk. He
just got a three D scan and a high resolution
(37:51):
avatar of himself made and he can combine that with
ChatGPT to make his avatar give little speeches. And
so he and I were really chewing on this because
the question is, the next time he gets invited to
speak on some stage in some random city around the world,
can he just have the avatar give the speech online instead?
(38:12):
Will conferences still want him to fly across.
Speaker 2 (38:15):
The globe to give a talk.
Speaker 1 (38:17):
Or will the avatar be good enough and save a
lot of expense and plane fuel? Possibly, But the flip
side is do people value going to the talk because
of the beating heart.
Speaker 2 (38:30):
On the stage?
Speaker 1 (38:32):
And my long bet is that conferences will continue to
invite flesh and blood humans because audiences are humans who
care about other humans. So when it comes to legal documents,
if AI can do it better, awesome. When it comes to medical diagnoses, if AI can do it better, awesome.
(38:52):
When it comes to hearing a speaker on the stage
with his or her imperfections and limited knowledge and fundamentally
human nature, I'm going to take the bet that that
is going to last. And beyond just appreciating the reality of another human, this may be for another reason as well,
(39:14):
an interesting psychological effect that I think is going to
be at play here. This is what I'm going to
call the effort phenomenon. I'll give you an example of this.
A well known colleague of mine here in Silicon Valley
recently announced that he had published a book half written
by him and half written by AI. And when I
(39:34):
first heard about this, I thought, I wish I wanted
to read this, but I don't now. I did take
a look at the book, and there are clever insights,
and it's well written. But I'm simply not that inspired
to read something that's even half written by AI, because
it makes me feel, perhaps unfairly, that.
Speaker 2 (39:56):
He didn't put in the normal amount of effort.
Speaker 1 (39:59):
My analogy would be if Picasso said, hey, will
you buy this painting? My students painted most of it,
but then I finished it off and put my signature
on it. It feels like it would be slightly less valuable.
So let's return to that scene in Westworld where William
asks the host are you real? And she says if
you can't tell, does it matter? Because this is the question
(40:21):
that comes up.
Speaker 2 (40:22):
About a novel.
Speaker 1 (40:24):
If I spend seven years writing a novel, and if
ChatGPT or Google Bard spits out a novel that's word-for-word equivalent,
Speaker 2 (40:33):
Does it matter?
Speaker 1 (40:35):
And I think, perhaps surprisingly, the answer is yes, it matters.
We care about the effort that went into it. If
I were to show you two pieces of artwork that
someone had done, and one of them just involves painting
a single dot on the middle of a big white canvas,
and the other one is the person carefully gluing marbles
(40:55):
one on top of each other until they balance eight
feet high. You may have a preference for looking at
one or the other, but just think about how much
money you would, in theory, be willing to pay for
each of these. If you're like most people, you think
the thing that took a lot of effort is worth more.
There have been psychology studies on this since the nineteen fifties.
(41:17):
It's difficult for people to separate out the effort that
went into something from its value. In other words, the
effort is used as a shortcut for understanding quality. For example,
in one paper done by Kruger et al., they had
people rate a poem, or rate a painting, or rate
(41:37):
a suit of armor, and the people generally thought it
was better quality and worth more money, and they liked
it better if they thought it took more time and
effort to produce a friend of mine. Uses the example
of diamonds. People will pay much more money for a
real diamond with flaws than they will for a synthetically
(41:58):
grown diamond from a laboratory that has no flaws at all. Now,
why would you pay extra money for flaws? Part of
this has to do with the notion of effort. The
real diamond was produced by mother nature over millions of
years of compression, so it's a very special thing that
took quote unquote effort on the part of mother nature.
Speaker 2 (42:21):
But the lab grown diamond that can be done in
a day and a half.
Speaker 1 (42:25):
And so even though it's more perfect, it is less
valuable because it just took less time to make it.
Speaker 2 (42:32):
We actually pay for flaws.
Speaker 1 (42:34):
Now, I'm not arguing that we can't be fooled at
some point into loving AI generated literature. It seems quite
possible to me that in the future there will be
novels written by AI, and we might not always know it,
because the AI will also generate a false story about
the author, complete with a biography and a generated photograph.
(42:57):
My assertion is simply that faking it is going
be an important part of what the AI will need
to do, because it's more difficult to become invested in
something that we think is simply doing massive statistical calculations
rather than having a private, limited internal life. We care
(43:18):
about other humans, So what's the big picture. My friend
Kevin Kelly suggested to me the other day that generative
AI may play a role that's analogous to the invention
of the camera. What happened at that moment in history
was that painters lamented that this was the end of
(43:39):
painting because you could now capture anything instantly with the
click of a button, and you could capture it with
zero mistakes. So why would you sit there with a
paint brush and painstakingly try to capture every detail by hand.
At that moment in history, it seemed clear that painters
were done for. But as it turns out, photographs ended
(44:03):
up filling a different niche.
Speaker 2 (44:05):
Absolute realism wasn't the only end goal of art.
Speaker 1 (44:10):
People didn't only want a maximally realistic print of a scene. They also wanted swirls and amazing color, and more importantly,
things that didn't exist in the outside world. So canvas
painting remained an active field, even while photography grew and
ended up flowering on a neighboring field. So one possibility
(44:34):
is that AI-generated literature will not foment a takeover,
but instead it's going to fill a new niche, one
that we don't quite see yet, but it isn't the
same plot of land. And I think there's one more
possibility for where this could go for writers, not now,
but in the coming years. And for that, I want
(44:54):
to tell you what happened with the world champion Go
player Ke Jie. He was the world's number one player
at Go, which is the game in which you use
those small black or white rocks to define your territory
and try to surround your opponent. So in May of
twenty seventeen, he faced off against an AI program called
(45:17):
AlphaGo, which was designed by DeepMind, and AlphaGo had been trained on millions and millions of games
of Go, so it had deeply absorbed the statistics of
possible plays. So they played the first game and Jiu lost.
Alpha Go had pulled moves that none of his human
(45:39):
opponents had ever thought of, and then Ke Jie lost the
second game. The AI had won over a human in
a game that's way more complex than chess, and subsequent
versions of the AI are no doubt going to continue
to win evermore. But that's not the interesting part of
the story. The interesting part is what happened next. So
(46:01):
Ke Jie got over his embarrassment and he became mesmerized by
what had just transpired, and he studied the games.
Speaker 2 (46:11):
That he lost.
Speaker 1 (46:13):
Before he played AlphaGo, Ke Jie had won a majority
of the games against his human opponents, but afterwards he
found he was able to beat his human opponents even
more easily. After his species shaming defeats in twenty seventeen,
he went on to play twelve straight matches against humans and.
Speaker 2 (46:35):
He won them all in a row. So what had happened.
Speaker 3 (46:39):
He had been exposed to new kinds of moves and
strategies that had been pulled by AlphaGo, and these
all lay outside of traditional ways of doing it.
Speaker 2 (46:51):
All these moves that AlphaGo had done.
Speaker 1 (46:53):
Were legal and possible, but they were just different from
what had been played over the last twenty five hundred years.
Speaker 2 (47:01):
If you're a Go aficionado.
Speaker 1 (47:02):
This included things like playing a stone directly diagonal to
your opponent's lone stone, or playing six-space extensions, while humans tend to prefer
Speaker 2 (47:13):
five-space. Anyway,
Speaker 1 (47:15):
Ke Jie reported that playing against the AI was like opening.
Speaker 2 (47:20):
A door to another world.
Speaker 1 (47:22):
Once he was exposed to these alien game plays, he
incorporated them, and this story I suspect typifies the future
as humans and machines interface. Some people are worried that
AI is going to take over, but we will continue
to adapt as well. We will become better writers as
(47:45):
we see examples that are allowed by the language but that no one had ever tried, or visual art techniques that involve moves that are allowable but that culturally we just never thought to do, or musical moves that are
possible to.
Speaker 2 (48:00):
Do with notes, but no one does.
Speaker 1 (48:03):
Them because traditionally we just wouldn't think of going there.
Because fundamentally, as a writer, I think I'm doing all
kinds of original things, but there's a very real sense
in which I'm simply remixing what I've absorbed before. I
interpolate between examples that I've seen. So even if AI
is just interpolating, it's read billions of times more text
(48:28):
than I have, and so it can do very clever interpolations,
and I can learn from that. A lot of people
are worried that AI is going to leave humans far behind,
and in many respects that's true. But as computers improve,
so will we. In the battle of man and machine,
(48:50):
both are going to get better, and as we continue to adapt in parallel, the future definition of AI may well shift from artificial intelligence to augmented intelligence. In
the best case scenario, this isn't going to be a war,
but a collaboration. It's going to be an ongoing, guided
(49:12):
tour into areas that were previously just beyond our view.
Speaker 2 (49:22):
That's all for this week.
Speaker 1 (49:24):
To find out more and to share your thoughts, head
over to eagleman dot com slash podcasts, and you can
also watch full episodes of Inner Cosmos on YouTube.
Speaker 2 (49:34):
Subscribe to my channel so you can.
Speaker 1 (49:36):
Follow along each week for new updates. Until next time,
Speaker 2 (49:40):
I'm David Eagleman, and this is Inner Cosmos.