Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
How are creative people going to make a living in
a world with AI? What does the term data dignity mean?
Why is the character of Data from Star Trek a
useful model for how we could think about what the
sources are for any creative expression? Is there a totally
different way to think about the economy of the future,
(00:28):
and how might this involve mystifying and elevating humans? Welcome
to Inner Cosmos with me, David Eagleman. I'm a neuroscientist
and an author at Stanford and in these episodes we
sail deeply into our three pound universe to understand some
of the most surprising aspects of our lives. Today's episode
(01:00):
is about the situation we find ourselves in in which
AI is so boundless and rich in its creative output.
You ask for a picture of anything you want from
a good LLM like OpenAI or Midjourney or Grok,
and it gives you these really superlative results. It recombines
(01:22):
elements and concepts in a way that is deeply creative
and satisfying. But of course, while it's doing an unexpectedly
terrific job of recombining things that are already out there,
it didn't come up with the original pieces and parts
by itself. There are people, individual humans, who generated those
(01:46):
drawings and concepts and art forms. It's fundamentally human creativity,
not computer creativity, that is the fuel behind it all.
So is there any way to credit the humans who drove
the original innovations or is it simply too late in
(02:06):
the sense that maybe everything has already been vacuumed up
by the LLMs and they've digested it all, and now
there's no way for human producers of art and ideas
to get any meaningful credit for their work. What does
this all mean for the creatives in society, those who
paint and compose music and write books? What does it
(02:28):
mean for those who produce work now? And for those
who are in school now but dream of being human
creators in the future. So, for today's podcast, I have
the pleasure of speaking with my friend and colleague, Jaron Lanier.
Jaron wears many hats. He's a computer scientist, he's an artist,
(02:48):
he's a technologist, he's a futurist, he's a critic, and
he's a musician who, by the way, owns almost two
thousand very unusual musical instruments. Jaron is the godfather of
virtual reality. He started the first VR company in nineteen
eighty four, and he's spent his career as a visiting
(03:09):
scholar at companies and universities, and since two thousand and
nine he's been at Microsoft Research, where he holds the
role of Prime Unifying Scientist under the Office of the
Chief Technology Officer, which gives him the acronym OCTOPUS, which
is appropriate given his reach into so many fields. I'll
(03:31):
just say that I'm blessed to know a great number
of brilliant people. But even in that crowd, Jaron sits
near the top of the heap. So today we're going
to talk about AI and the future for creatives.
Speaker 2 (03:47):
You show me an AI system, I don't care if
it's ChatGPT or really anything else. You can think
of it in two ways. There's a figure-ground inversion.
A figure-ground inversion is when you
can look at something and you can swap the way
you interpret it almost to an opposite, right? A famous
(04:10):
one is the M.C. Escher drawings, where you might see a
field of fish or a field of birds, but each
is the negative space of the other, right? Now, in
this case, for AI, one way you can see AI
is AI is a thing. It's a noun. Whether you
think it's alive or not, or conscious or not. Forget
that it's just a thing. It's like, the AI did this,
(04:32):
the AI did that. Can we regulate the AI, who
sold the AI? Who's responsible when the AI did this
or that? Blah blah blah. It's a thing, it's a noun.
There's another way to think about it where it is
not a noun anymore, but instead it's a collaboration of
people and there isn't anything there other than the people.
Now when I say this, people are so familiar with
(04:55):
treating AI as a thing that they have trouble hearing
the inversion. Sometimes, what do you mean? Of course AI
is there? Of course it's a thing. I just bought it.
I'm paying for your Copilot thing and that's the thing
I paid for, right? And like, great, thank you
for paying for it. But there is another way to
(05:17):
think about it, and it is possible to imagine an
AI system as actually the collaborative effort of all the
people who made it. This is particularly true in big
data systems like large language models. You can think of
(05:37):
them as being closer to the Wikipedia, perhaps than to
Commander Data from Star Trek. Although I want to say
something interesting about Commander Data. I was just reviewing clips
of Commander Data talking and he always introduced himself as
an amalgam of people whose data he was combining. There's
a wonderful episode where he plays violin very beautifully and
(06:06):
Captain Picard comes up to him and says that was
a great performance, and Data says, well, really, you should
thank the three hundred violinists whose data was amalgamated for
me to do this. And what I like about that
scene is that he knows specifically the three hundred. They're
not a faceless mush of some unknown number, which is
how we do it today. But the writers at the time,
(06:28):
and I should fess up, I was in the loop
and was talking to people, so I might
have had a bit.
Speaker 3 (06:33):
Of influence on it.
Speaker 2 (06:35):
But the idea is that you could know who the
people were if you wanted to, and Data could say
it's these three hundred violinists instead of, well, it's some
random mush of violinists, you know. So what we've done
now is we've mushed everybody together so we don't know
who they are, and that's been our approach to information systems,
(06:57):
and it benefits those of us who make.
Speaker 3 (06:59):
A living from them.
Speaker 2 (07:00):
If you're listening to a playlist online from some music
service and you get a feed, you don't necessarily know
who all the musicians are, but you know the name
of the service you're paying the monthly bill to.
Speaker 3 (07:12):
So the digital hub.
Speaker 2 (07:14):
Becomes more powerful, becomes more known, more prominent, more valuable
than the people who are feeding it. So there's this
constant economic incentive to emphasize the hub and not all
the people who are really the only thing that are there.
In my preferred inversion of understanding the thing, the
(07:36):
same thing could be true of AI. We so much
want AI to be this emerging entity, even if it's
an evil, horrible one that will destroy us. Because we
grew up on the Matrix movies and the Terminator movies
and all of these. We want that creature to emerge
because that's our childhood. That's almost like our religion. It's
the story we grew up with. But if we acknowledge
(07:58):
that actually it's all these people instead, then we
lose the creature, and that would be traumatic. So
when we train the models, we don't keep track of
which source documents were involved. So what we really need
to do is eliminate the people. Can you imagine if
we did that for economics? I mean, like
with economics, we can't trace everything. We don't quite know
(08:22):
why a price is what it is. But on the
other hand, we could say, hey, buddy, you owe taxes. So
we have a definite motivation to keep track of the people.
With AI, we lose all the people. We just like
pretend that the people aren't there. But that's the wrong inversion.
Speaker 1 (08:39):
Well, I know that one of the things that you
campaign for is digital dividends.
Speaker 3 (08:44):
Well we call it.
Speaker 2 (08:46):
Initially as an academic research field, it was called data
as labor, so you treat your data as if it
were labor. And then the name shifted to data dignity,
which was Satya's idea, Satya Nadella's. Not
that he agrees with me about everything, believe me, but
you know, that's the term he came up with.
Speaker 3 (09:09):
And data dignity is let's.
Speaker 2 (09:12):
Say you could you got some result, he said, hey, chachipt,
can you write me a Christmas card?
Speaker 3 (09:18):
Or whatever?
Speaker 2 (09:19):
people do. Okay, and then it would say, here's your card.
By the way, the top twelve sources, I mean there
were a multitude, an unbounded multitude, of sources, but the
top twelve were these, and then you could say, hey,
could you get rid of that one and do this
one instead. It gives you an x-ray into what
(09:40):
you might think of as the intent of the particular output.
And I like that for two reasons. One is, there's
a safety question. So when we have red teams attempt
to fool the model to get it to do things
that we don't want it to. For instance, can you
(10:02):
make a bomb out of what I see here in
this kitchen right now? Because I really want to blow
this place up. We don't want that to happen, right,
But is there some way they can couch that prompt
in some kind of a weird thing like asking for
a cake recipe instead, but somehow or other, like maybe
it indirectly references a movie where there was a bomb
(10:23):
and somehow or other they get it anyway, all right. The
thing is, if you can look at the most prominent
source documents that were relevant to that result and there's
a bomb in there, you've nailed it. So you have
an x-ray into intent, and there's not really any
substitute for that, and we lose that when we pretend
(10:44):
that there weren't people there, or that they were just
a giant, untraceable mush. And then, of course the other
thing is I want to have a future
where people can be paid for providing exceptionally useful data
to AI, to incentivize better and better data production for AI.
I want that future, and not just the future where everybody
has to go on universal basic income and be the
(11:05):
same and feel useless. So that's the second reason, and
that's why it's called dignity.
Speaker 1 (11:11):
Got it. Now, just to play devil's advocate for one
moment on that, let's say that you ask it to
write you a poem, and it says, hey, John Smith
was sort of the biggest influence for this poem that
got written.
Speaker 3 (11:23):
But of course John Smith is just a vessel into
which was.
Speaker 1 (11:28):
Poured the entire culture, and his output of the poem
came from all the other influences on him.
Speaker 3 (11:35):
So where do you where do you? Yeah?
Speaker 2 (11:37):
So this is a very interesting problem, and this is
a fundamental philosophical problem, which is the universe is in
a way a giant continuity, and how do you ever
draw boundaries? How do you ever say, actually, there's this
thing and that thing. It's a very basic, fundamental problem,
and it's a trickier one than people might realize unless
they've really confronted it. I'm not going to go into
(11:59):
the whole issue of ontology, because it's a big one. We've
been working on it for thousands of years and still
are. What I want to say in this case is I
do have an answer, and it's an answer that you
might not like and many might not like, which is
I think we have to celebrate human beings and elevate
(12:19):
them in a way that, if you like, gives them
this status of being sources, even though they're always amalgamators too.
And we have to do that on faith, and in
a way it's a bit of a mystical idea that
there's something about people that's magical and a little apart.
You could call it consciousness, you could call it different things.
But the reason we have to do that is if
(12:42):
we're technologists, we have to define who our beneficiary is
for the technology, and if we can't define this special beneficiary,
we can't even be coherent technologists. We lose the whole thing.
We become random morons, just jiggling around between ideas
of no meaning. We become memes, we become viral,
(13:02):
and nothing's there anymore. So I don't think we have
any choice but to somewhat mystify and elevate humans. And
so when somebody's source document might have internally, through other channels,
depended on others, which it always will, I mean,
of course, it's always true, I still think we have
(13:23):
to defer to that person as much as we can.
And if somebody else says, hey, that shouldn't have been
their source document. They copied me. Well, we have systems
in place for that when it's egregious enough and worthwhile enough,
and they can sue over copyright if they want to. I
don't love that, but it's there, and sometimes it needs
to be there. You know, it's rough justice, it can
(13:46):
never be perfect, but we have to agree that authorship's.
Speaker 3 (13:50):
A real thing.
Speaker 2 (13:50):
We can't just say that all people are just vessels
and there's nothing but this emergent thing and no individual
person as a creator. We have to say, no, people
are creators, because otherwise we can't be technologists anymore. The
moment you give up people, you might as well go
smash your computer because it doesn't make any sense anymore,
because its only possible definition is serving people.
Speaker 1 (14:24):
So let me dig into something about this
figure-ground reversal of looking at AI as a collaboration
of people. So this taps into something that I've been
writing lately about this question of whether AI has theory
of mind. And you may know some people have published
papers saying, hey, theory of mind has emerged,
(14:44):
And just as a reminder for the audience, you know,
theory of mind is being able to put yourself into
the shoes of someone else, into their perspective, and understand
their beliefs, even if they're different from your own, and
understand what they believe is true or not true, their
perspective, and so on. Now, this is something humans do seamlessly
and effortlessly all the time. We're very good at understanding, Oh,
(15:06):
he doesn't know that piece of information, she does know that,
and so on. But the question is, do LLMs do it?
So some people have claimed yes. I've studied this point
very carefully. I conclude that they do not. Now why
do some people write that they do. It's because you
give these questions that are probing on. Hey, what if there's,
(15:28):
you know, a bag that's labeled with one thing, but
there's something different inside. What does the person believe who's
just looking at the label, and what does she believe
after she looks inside the bag? And so on and
LLMs will get the stuff right, and so they say, wow, they've
Speaker 3 (15:41):
Got theory of mind.
Speaker 1 (15:42):
They can understand what it's like to be this person
in these different circumstances. The reason I don't think that's
true is what I'm calling the intelligence echo illusion.
And what I mean by that is thousands, maybe millions
of people have written about this online. They've written
out these theory of mind questions, these unexpected things and so on.
(16:05):
So an LLM absorbs the statistics of lots and lots
of these things, and then when you ask the question,
you say, my god, it gave me the right answer.
And of course it gave the right answer. It's read the
damn problem lots of times.
Speaker 3 (16:17):
What's interesting, and this is a calculation
Speaker 1 (16:19):
I ran recently, is you know, if you look at
the Common Crawl, this corpus of the data that all
these large language models crawl and absorb, that's, according to
my calculation, one thousand times larger than what you could
read in a lifetime. And so the fact is, many,
(16:41):
many times we'll pose a question to, let's say, ChatGPT,
it'll give some answer. I'll think, my god, it's got
theory of mind. It's really done something strange. But in fact,
you're just hearing this intelligence echo, by which I mean
people already knew this.
Speaker 3 (16:55):
You just didn't know that, or you didn't know that
other people had done that.
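The "one thousand times" figure can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a brisk reader working full-time for seventy years and a filtered web-text corpus on the order of tens of terabytes; every constant below is an assumption, and the ratio moves with them.

```python
# Back-of-the-envelope check of the "thousand times a lifetime" figure.
# Every constant below is an assumption chosen for illustration.

WORDS_PER_MINUTE = 250       # brisk adult reading speed
HOURS_PER_DAY = 8            # reading as a full-time job
YEARS = 70                   # a long reading lifetime
BYTES_PER_WORD = 6           # ~5 letters plus a space, plain text

lifetime_words = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * 365 * YEARS
lifetime_bytes = lifetime_words * BYTES_PER_WORD   # ~18 GB of text

corpus_bytes = 30e12   # assume ~30 TB of filtered web text

print(f"lifetime reading: ~{lifetime_bytes / 1e9:.0f} GB of text")
print(f"corpus-to-lifetime ratio: ~{corpus_bytes / lifetime_bytes:,.0f}x")
```

With these assumptions the ratio comes out around 1,600 to 1, which is the same order of magnitude as the figure quoted in the conversation.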
Speaker 2 (16:59):
Sure, but I mean we're right back in this territory
of mysticism and ontology. Because if somebody wished to disagree
with you, and that person would not be me.
Speaker 3 (17:09):
But if somebody wished to.
Speaker 2 (17:10):
Disagree with you, they'd say that all those people were
just reflecting things like that anyway. So there's no difference;
the AI and the people it's trained on are
in the same status, the same category. And so what
if the AI got to it this way, with people
along the way, it's still the same thing that people
are doing. And why are you making this distinction? Why
(17:31):
are you trying to be all mystical and love people?
What's wrong with you? And so I'm just going to
declare you should be mystical and elevate people because it's
the only thing. Weirdly, this little bit of faith, which
may seem irrational, is the only way to save rationality
because otherwise, once again we have no beneficiary for technology.
(17:51):
You can have science without elevating people, but you can't
have technology. Technology has to have a beneficiary. The problem
to be solved is serving people, and technology isn't sensible
without a problem to be solved. With science, you can
have a science theory without knowing what it's for. You
can't have a technology without knowing what it's for. Or
(18:14):
maybe I mean one could argue that you can have
some kind of underlying, very fundamental technology that might be
applied in different ways, but ultimately to actually make any
instance of it, it has to be for something. There
has to be a human at the end of the
chain for it to be sensible. So I totally agree
with you, but you have to recognize that there's a
figure-ground reversal there. And somebody who disagrees with you
might just say, well, the AI and the people are
(18:36):
the same and they're both just getting this information. But
I still think our only choice to remain rational technologists
is to accept that we have to be kind of
mystical humanists to frame it, and that's not comfortable for
a lot of people. But I think if you really
examine the logic of the situation, you'll come to agree
with me.
Speaker 1 (18:55):
Oh, I already agree with you, actually, And that's why,
that's why I really care about this intelligence echo illusion,
which is to say, you're hearing the echoes of people
who've said this thing before, but you mistake it for
the voice of an AI that has theory of mind.
Speaker 2 (19:11):
Yeah, and so why can't you? Let's say the AI
gets the puzzle with the bag, right? Okay, let's say,
now, in my preferred future, you then get
a list of the top twelve or twenty five or
whatever it is people who contributed. What it might say
is I would tell you, but actually there are a
(19:34):
million people interchangeably in the first slot here because this
thing is so common, and that has to be an
acceptable and honest answer. So like if you say, hey,
could you do something like I want a cat that's
playing the banjo or whatever, and it'll say, got to
tell you, a lot of cats out there. It's the Internet, remember,
lots of cats, So it really could have been any cat.
Speaker 3 (19:57):
In this case, it was this cat. But that's not special.
Speaker 2 (20:00):
And so we have to have a fungibility measure, not
just an influence measure. And so this whole
future world of acknowledging people is a bit more subtle
and complicated than.
Speaker 3 (20:11):
I'd like it to be.
Speaker 2 (20:12):
But I've never seen anything about it that doesn't make
sense or is unachievable.
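A fungibility measure of the kind described can be sketched simply: if estimated influence is spread nearly uniformly across a huge pool of contributors, any one of them is interchangeable; if it concentrates on a few, those few are special. Below is a minimal illustration in Python using the "effective number" of sources, the exponential of the Shannon entropy of the influence shares; the measure and the numbers are assumptions for illustration, not a method specified in the conversation.

```python
import math

def effective_sources(influences):
    """Fungibility as an 'effective number' of contributors: the
    exponential of the Shannon entropy of the influence shares.
    Near 1 means one special source; near N means N interchangeable ones."""
    total = sum(influences)
    shares = [w / total for w in influences if w > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    return math.exp(entropy)

# One dominant poet among a few influences, versus a million
# interchangeable cat pictures.
poem_influences = [0.70, 0.10, 0.10, 0.10]
cat_influences = [1.0] * 1_000_000

print(effective_sources(poem_influences))  # ~2.6: the top source matters
print(effective_sources(cat_influences))   # ~1,000,000: any cat would do
```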
Speaker 1 (20:17):
Right. So one idea that you've mentioned to me in the
past would be, I can't remember if I got this
term digital dividends right, or if I made that up
somewhere along the way. But the idea is that a
creator gets paid, he gets a few nickels here and
there because, you know, DALL·E used his painting
as part of the influence.
Speaker 3 (20:36):
Yeah.
Speaker 2 (20:37):
So the thing about this idea, which currently the term
I like is data dignity. Many will object, and
correctly, that in a lot of cases, if somebody's output
happens to include a picture of your shoe for some
image that has a shoe or something, or maybe they
used your drone footage of waves for the wave thing
or something, what you'll get literally would be very small.
(21:00):
It might be quite a bit less than nickels, right,
But that's not the point.
Speaker 3 (21:04):
See, the question is whether we think of the.
Speaker 2 (21:08):
Future as a linear extension of what we already know
or something really expansive and fantastic and unpredictable. And a
lot of the people who tell me that they believe
in this radical future where there's a singularity, where this
AI transforms reality in the blink of an eye and
humans are obsolete, they're actually thinking in a very
linear way, because what they believe is that the AI
(21:31):
that exists is capable of creating the AI of the future,
and that we already know in a sense everything that
needs to be known, and all we have to do
is turn on the switch, and future generations won't be
more creative than us. We're the final creatives, you know?
And I think that might not be the future.
Speaker 3 (21:49):
I want.
Speaker 2 (21:49):
What if the future is one in which there's incredible, creative,
productive things happening, and I'm not creative enough perhaps to
come up with the examples.
Speaker 3 (22:00):
I'll just use the one I used last.
Speaker 2 (22:02):
Week, which is, in the future, we have some way
of extending our bodies physiologically, so we can fly around
in open space, in the vacuum, and do all sorts
of things and propel ourselves. And what if in order
to do those body extensions there's some sort of bioprinter
we get into. And what if there are AI systems that run
(22:22):
that because it's the only way. And what if there's
a whole new creative class of people who've contributed data
to that thing. All right, so that's the whole new thing,
and I think the best of those people might get
rich from contributing data that then is able to be
beneficial to people through a large model that their data
helped train. And what if there's more and more examples
like that. What if there's thousands and millions and tens
(22:44):
of millions as humanity expands into an ever more creative,
interesting future instead of a future that we already know
all about because we're the smartest people.
Speaker 3 (22:52):
Who will ever be.
Speaker 2 (22:54):
And so in that case, we start to see niches
for people who are data creators for AI where they're
really novel and they're really making money. And that's
where people think so linearly. Like, it's not
just TikTok, it's like flying around in space without a
space suit. Free your mind. Imagine a future more
radical than you assume, yeah.
Speaker 1 (23:16):
And maybe there's a very clear economic incentive there,
which is to say, let's
say I'm putting pictures of cats online. As you point
out, anytime DALL·E puts together a picture of a cat,
mine is one of millions of pictures. So I can't
make any money that way. As a creator, I am
therefore incentivized to do things that are really new, to
(23:38):
really push the boundaries of what has been done in
terms of art, poetry, whatever it is.
Speaker 2 (23:42):
Let me say a little bit more about that incentive, because
that's very important. There's a question of whether the network
dynamics dominate what actually happens over the network or not.
So when network dynamics dominate, what you see is phenomena
of virality and memes, because those are network effects where
(24:03):
it doesn't really matter what the meme is. It doesn't
necessarily matter what goes viral. Sometimes silly things do, sometimes
interesting things do, but it can go either way, all right.
In a real economy, when somebody loses money on some
stupid NFT or meme stock or something, that's the network
effect dominating. Usually, typically, the reason our world hasn't collapsed totally
(24:27):
Is that when people spend money, it's not on total nonsense,
but it's at least slightly on something that's real of
some use. Okay, And in that world of a real economy,
there are incentives to make things better and there's a
chance for competition to be meaningful. But when the structure
itself dominates rather than the thing the structure is channeling,
(24:49):
then you end up in this make believe world. And
on digital networks, the make believe world is always about
being the lucky one who gets in on the virality.
Your startup turns into the hub for this giant thing
that benefits from everybody else's work, but you benefit more
than they do. You get to have TikTok or Instagram or something.
In this world, you have the meme stock, you have
(25:11):
the meme video, you have whatever, but you can't
predict who will get that. There's a randomness to it,
which means it's not fundamentally productive. In the future, we
need to suppress the network's own influence on what happens
on the network and have it back off and let
the content itself, whatever that might be, dominate, all right,
(25:31):
and then that creates a reality that creates incentives for improving reality.
So the most fundamental idea is that you have this
data structure that gradually changes and settles into something that
can do some work. It happens automatically or semi-automatically,
without explicitly programming it. Now, there are a lot
(25:54):
of algorithms that have that kind of settling effect, but
the granddaddy of them is called gradient descent, and there's
wonderful mathematical dispute about who should be credited with it,
but it's either Cashi or Ramon, So it goes way back.
And it's sometimes thought about as being like walking on
(26:16):
a landscape. So when we walk up the landscape, we
just think we're walking at longitude and latitude and at
some elevation.
Speaker 3 (26:22):
But in AI we're dealing with very very.
Speaker 2 (26:24):
High dimensional spaces, which is a concept some people aren't
familiar with. But at any rate, let's just imagine we're walking
on a landscape, and so you want to descend and
you want to find a comfortable place, like where's the
place you can go that's really low? You don't
know in advance, you don't have a map or an overview,
so you start wandering around. Now the danger is, if you
(26:47):
just say, oh, here's a nice depression, I'll go here,
it might not be as low as some other one
that would be available, which would be a better solution,
And so you have to take some precautions and start
to have some kind of way of saying, well, I'm
going to kind of go around a little bit so
I don't get caught in the place that actually isn't
ideal or something like that. You have to have a strategy, okay,
And so the point is not to get caught at
(27:09):
some halfway solution.
Speaker 3 (27:10):
You want to really kind of jump around.
Speaker 2 (27:12):
Now with AI systems, the whole point, like, if you
want to, let's say you have a
diffusion model that puts a cat on a motorcycle
or whatever. Okay, so if the cat's riding a motorcycle,
the easy solution is just to make a cat or
a motorcycle; getting both is kind of hard. And so the
easy solution that isn't useful is very similar to getting
(27:35):
stuck in one of these intermediate dimples that doesn't.
Speaker 3 (27:38):
Go as low as what you might really want.
Speaker 2 (27:40):
To descend to a better one, where you're
getting both the cat and the motorcycle, you have
to do some extra little thing to broaden your scope.
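The landscape walk described here is the standard picture of gradient descent, and "going around a little bit" so you don't get caught corresponds to strategies like random restarts. Below is a minimal one-dimensional sketch in Python, on an invented landscape with a shallow dimple and a deeper valley; the function, learning rate, and restart strategy are illustrative assumptions, not how a real model is trained.

```python
import random

def gradient_descent(grad, x0, lr=0.01, steps=2000):
    """Plain gradient descent: walk downhill from one starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def with_restarts(f, grad, n_starts=20, spread=10.0):
    """'Go around a little bit': try many random starting points and
    keep the lowest landing spot, to avoid shallow dimples."""
    candidates = [gradient_descent(grad, random.uniform(-spread, spread))
                  for _ in range(n_starts)]
    return min(candidates, key=f)

# An invented 1-D landscape: a shallow dimple near x = +3 and a
# deeper valley near x = -3.3.
f    = lambda x: 0.05 * x**4 - x**2 + 0.5 * x
grad = lambda x: 0.2 * x**3 - 2 * x + 0.5

print(with_restarts(f, grad))   # lands near the deeper valley, x ~ -3.3
```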
So what I was going to say is this sort
of strategy is fundamental. It's the very
core basics for what we do these days with AI.
There are many, very many versions, but this way of doing
(28:03):
things is the fundamental trick of AI. But when we
put those AI algorithms out in society, we forget. We
just say, oh, well, if the AI algorithm can make
money by getting kids addicted to vain attention seeking and
low attention spans, fine. But it's not fine. That's one
of these dimples. It's not good for society, it's not
(28:25):
good for the kid. And so the nerdy way I
would put some of this is to say the same
discipline we need to get the algorithms to work at
all needs to be applied to the way we deploy
these algorithms out in the larger society. And that might
be a nerdy way of saying something similar to what
many of us have been saying about trying to make
(28:46):
something that serves people better.
Speaker 1 (29:05):
So wait, just so I put a cap on that:
isn't it the case, then, that you agree that it would
incentivize creators to be more creative? Because if they were doing
things that had already been done by a million other people,
they'd get no money for that. But if they
do something that is really new, then they can. There's
a potential to make a lot of money from that.
Speaker 2 (29:25):
So one of the fundamental claims of people who like
market economies is that they foster creativity. The type of
competition that happens in a market economy is said to
incentivize creativity. I am on the side of the people
who believe that markets foster creativity. I just can't
prove it, but I think I can
(29:48):
almost prove it. I mean, look around. We're
in Silicon Valley, for God's sakes, give me a break.
Of course, of course they do. Of course markets do.
But when the Internet is allowed to be
dominated by network effects, by virality, and by memes, then
it no longer does, because then you're chasing the benefits
(30:10):
intrinsic to the Internet, which are not in themselves creative,
but just this kind of random, lottery winning kind of
a thing.
Speaker 3 (30:18):
Right.
Speaker 1 (30:18):
That's why I think, yeah, providing an economic counterbalance, where
the less memey you are and the more original you are,
the more that has economic value, is a way to reverse the
network property, because I'm not sure it's going to happen on
its own, right.
Speaker 2 (30:38):
So one of the ways I summarize data dignity is
to say, whenever we have an opportunity to create a
new creative class instead of a new dependent class in
the future, we should do that. So if there's a
bunch of people who might be put out of work
because robots are driving the cars or whatever, we should
try to find some other group of people who might
(30:59):
be creating in some way that they could be paid for,
where right now we don't expect them to be paid. Like,
we should be creating creative classes. There's another caveat or
qualification to this idea, which is that we can't expect
everybody to be creative, and we can't allow creativity to
become the measure of human worth or value. But I
do think whenever you can create a new creative class,
(31:22):
it's imperative to do so.
Speaker 1 (31:27):
That was my conversation with Jaron Lanier, computer scientist, inventor, thinker, writer, innovator, critic, technologist.
The issue we talked about today is that people who
are creative, the painters, the writers, the composers, they have
been justifiably worried ever since AI really hit its stride
just a couple of years ago, because the stuff AI
(31:50):
can create is stunning. But maybe with the right economic structures,
AI can launch us farther into human creativity. This just
requires incentivizing things so that a creative doesn't do just
another thing that's been done before, like another cat video,
(32:11):
but instead something genuinely bleeding edge. In a data dignity economy,
you will benefit only when you reach out into the
uncharted wilderness to do something novel, and when the AI
draws from that and other people benefit from that, then
you can make a living that way. This is a
(32:31):
way to reward an exploration of the frontiers that speeds
up the whole human race because it encourages people to
be creatively productive, not reproductive. So as we think about
the world we've suddenly shot into, a world with AI
disrupting every industry, I hope today's conversation will remind us
(32:53):
of a fundamental truth. AI isn't a standalone creation. It's
a mirror. It's a mosaic of countless human contributions. Every
line of code that it writes, every line of poetry,
every beautiful piece of art that's there, all reflections of
human creativity and human effort. So the choice to view
(33:16):
AI as a collaborative endeavor rather than an autonomous entity,
it's not just a philosophical stance. It's a call to
honor the humans behind the machine and to ensure that
they are valued in a future shaped by their contributions.
(33:40):
Go to Eagleman dot com slash podcast for more information
and to find further reading. Send me an email at
podcasts at eagleman dot com with questions or discussion, and
check out and subscribe to Inner Cosmos on YouTube for
videos of each episode and to leave comments. Until next time.
I'm David Eagleman, and this is Inner Cosmos.