Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
If AI can do everything from writing novels to designing proteins,
what exactly is left that only humans can do? Do
we care about the story behind a piece of art,
who made it, who suffered for it? When AI can
produce the same words or images, which jobs are really
at risk? Is there any such thing as a human
(00:27):
advantage in a world where machines can outperform us at
almost any measurable task? What does any of this have
to do with the plow, or Stephen King's nightmares, or
the first shoeshine caught on camera, or Tom Cruise's stunts,
or the shortage of air-conditioner repairmen? And why might
(00:47):
hyper-capable AI actually increase the demand for unexpected jobs?
Today we'll speak with author and technologist Andrew Mayne. Welcome
to Inner Cosmos with me David Eagleman. I'm a neuroscientist
and author at Stanford, and in these episodes we sail
deeply into our three pound universe to understand how we
(01:10):
see the world and what our world might come to
look like very soon. When you look around a subway
(01:30):
car or a coffee shop, you see people scrolling on
their phones with essentially no movement of their bodies except
for their thumbs, but inside their skulls, eighty six billion
neurons are firing away. Each neuron as complex as a city,
each one alive with electrical storms flickering tens or hundreds
of times every second. These vast inner cosmoses are running
(01:55):
constant simulations of the world around us, of the future,
of each other. And one of the things that makes
our species unusual is that our brains are not simply
built for getting food or escaping predators. They are finely
tuned social prediction engines. A huge amount of the cortex
is devoted to thinking about other people, their motives, their reliability,
(02:19):
their intentions, their reputations. We carry around mental models of
thousands of individuals and organizations, and we constantly simulate possibilities
like, how would she react if I sent this message?
Can I trust this contractor? Is this person over here
aligned with my values? And so on. This social simulation
(02:43):
machinery is so ancient and so deeply wired that it
has shaped everything that we call an economy. I think
this goes underappreciated: markets are so much more than
just spreadsheets and supply chains. They are agreements between nervous systems.
Economies are built from trust, from storytelling, from reputation, from
(03:09):
shared values, and from our search for meaning. Okay, now,
suddenly into this long human drama steps a new cognitive species: AI.
And this is weird because AI doesn't have emotions, it
doesn't have a body, it doesn't have a childhood, but
it can process information at extraordinary scale, and it's busy
(03:31):
automating tasks that once required human thought. So the question
on everyone's mind is what does this mean for the economy?
What happens to our jobs? Will the economy bend or
will it break? Will we be replaced or will our
skills evolve into something new? Now, the first thing to
note is that we have been here before, many times.
(03:54):
Thousands of years ago, when agriculture was nearly everybody's job,
the plow presumably seemed like an existential threat. But as
we know, it didn't eliminate human purpose. It expanded it
because the plow freed brains to invent teaching, governance, philosophy, math, art,
(04:14):
and so on. And this pattern kept repeating: industrial machinery
opened up more human creativity, then electricity, then the microprocessor.
Every time the old jobs vanished and new ones emerged
that no one could have predicted. I think about this
all the time. If you were one of the men
who landed on the moon in nineteen sixty nine, you
(04:37):
couldn't even have imagined that your kids would graduate with
majors in computer science. Or you couldn't even imagine the
concept of the Internet, and that a kid might grow
up to become a web developer or an app developer,
or a mobile UX designer, or a data scientist or
a cloud infrastructure engineer, or go into cyber security, or
(05:01):
become a prompt engineer, or a drone cinematographer, or an
e sports athlete, or a VR designer or a social
media influencer. Today, of course, we are catching the wave
of another transformation faster than anyone imagined. AI is writing code,
it's designing proteins, it's responding to customer inquiries, it's drafting
(05:25):
legal language, it's tutoring students, it's accelerating science. Some jobs
are going to be automated quickly. These are called the
black box jobs by today's guest, where someone receives a
signal and sends a standardized output. But new professions will appear,
most of which we can't yet name or even conceive of.
(05:46):
And the deeper questions reach beyond economics into our psychology,
because we need to assess what humans actually want from
other humans, what are we willing to outsource, and what
are we going to fiercely protect? For example, why does
a concert matter more when the guitarist spent decades building calluses?
(06:08):
Why do we trust a teacher more when they have
lived the very mistakes that we are trying to avoid.
I've argued here before that the answer lies in the
circuitry of the social brain. We value the story behind
a creation. We value the years of effort, the biological limitations,
the courage, the vulnerability. AI can mimic outputs, but it
(06:32):
can't yet mimic stakes. It can't yet mimic what it
means to be a fragile biological creature trying to create
something meaningful in a short life, and so on the surface,
the question is will AI destroy jobs? But the deeper
version is how will humans redefine meaning and connection and
(06:54):
value in an age when machines can do almost everything else?
To explore this, I sat down with my friend and
colleague Andrew Mayne, who is an extremely interesting person. He
is a novelist and inventor, a very accomplished magician with
books and television shows on the subject. And most relevant
(07:14):
for today's conversation, he is the original prompt engineer for
OpenAI and their first science communicator. He lives right
at the intersection of creativity and innovation and AI. So
I invited him to the studio to get his take
on the future relationship of humans and AI. So, Andrew,
(07:37):
a lot of people are worried that AI is going
to take over all the jobs of humans, So what's
your take on this.
Speaker 2 (07:44):
I think that we have to think about what we
mean by jobs and historically what's happened. If we were
in Mesopotamia several thousand years ago and somebody showed you
a plow, and at that time, like ninety nine percent
of everybody was involved in agriculture, a plow would seem
like a very scary thing. Because of the plow, we
were able to invent things like teaching as a profession, governance,
(08:05):
and a lot of the other things that we now
consider essential culture. They didn't exist then because we didn't
have the time to do that. And they weren't just
superfluous things like poetry or art, which have value; these
were things that helped build our economy. And I
think we see that with every major technological change. If
we went back just, you know, two hundred years, to
within a great-great-grandparent's time. Like,
(08:27):
there's actually, I think, John Tyler has a grandson that's
still alive today, which is like crazy. I mean, he
had children late and his son had children late. But
within that span of one person's living memory, when ninety-five
percent of everybody was involved in agriculture, if you talked
about the industrialization of agriculture, that would seem
almost apocalyptic in the change.
But that's when things started to happen and shift. It
was part of the reason we got rid of slavery.
It's part of the reason that we started to think
about how everybody sort of has a vital part in
our economy.
Speaker 1 (08:57):
So if we were just blue-skying about the kinds
of jobs that will exist one hundred years from now,
things analogous to governance and teaching, what could we
come up with?
Speaker 2 (09:09):
Well, you know, part of it is, people often ask,
what's the job of the future? And jobs have changed
so much in our own lifetime, so subtly that we
don't realize it. If we went back in time to,
like, the nineteen eighties and talked to a teacher, and
we told that teacher back then, well, in the future,
you're going to spend part of your time in Google Calendar.
What's that? Well, it's an electronic spreadsheet for managing time,
(09:31):
sort of. You're going to be doing video calls, you're
going to be using electronic documents. It would sound like
an IT job, it would sound extremely technical, but that's normal.
And yeah, you think about that too: we
use these tools all the time. So you and I
are on our phones, we're checking messages, we're checking this stuff.
So I think one hundred years from now, the value
is going to still come in from things where we
(09:52):
want people. We like people to teach us, we like
people to manage things for us. I still trust you,
I trust you to manage a thing. Maybe
you're going to use a bunch of electronic systems,
but I want you to make sure they're working.
Speaker 1 (10:03):
That makes sense. One of the things that I've been
very interested in is what AI is going to mean
for creatives, for example, writers. You and I are both
writers of books, and something that I've been happy to
see is that people really care about the heartbeat behind
the page. So they care that it's a real author
who's slaved over a keyboard for months or years, rather
(10:25):
than oh, I wrote this book with AI in five
hundred milliseconds. No one wants to read that, even if
the book is identical word for word to your book.
And I think that's nice that people care about the heartbeat. Yeah,
I think there's going to be a place for certain kinds.
Speaker 2 (10:38):
I might be happy with AI textbooks and stuff, but,
you know, reading about the experience of somebody like you
who's actually gone into a lab, gone into the world
and tested things, that's way more valuable to me. An
example that I use a lot is, you know, I
love the fact that Stephen King is this kind of crazy
guy that lives in Maine who makes up his stories because,
like, this might be a nightmare. That kind of gives it more
value. I mean, maybe it's
(11:00):
sad that somebody else's terror is my enjoyment, but, you know,
be that as it may. But also, like, you know Brandon Sanderson.
He's a prolific science fiction fantasy author. Brandon is a
guy that's very active on the convention circuit. He's got
a really wonderful engagement with his fan base. He talks about
writing, he shares about writing all the time. He's a very
real person. I don't know if you know this, but
(11:20):
he did a Kickstarter and he had four books he
wrote during the pandemic. It was the most successful Kickstarter
of all time: forty million dollars. Forty million dollars.
Speaker 1 (11:31):
Incredible.
Speaker 2 (11:32):
The market cap of Barnes and Noble is only like
four hundred million, so basically ten percent of the market
cap of America's largest retail bookseller.
Speaker 1 (11:40):
And why? I think it's because people like him. They like
the guy.
Speaker 2 (11:45):
I mean, you know, we might get an AI to
generate an entire mythology like Lord of the Rings, but
the fact that J. R. R. Tolkien was a guy who spent
all this time studying, trying to create an English mythology,
gives it value.
Speaker 1 (12:00):
Yeah, exactly. So if an AI wrote exactly Brandon Sanderson's books,
we wouldn't care about them as much. So that's the good news.
Some people might.
Speaker 2 (12:08):
I think there will be a middle ground, but I
think a lot of us care; more of us flip
to the back jacket to go, who wrote this?
Speaker 1 (12:13):
There's a reason we put that on there. Yeah. What's
interesting is that it won't be too long before AI
starts writing books and faking the author with an author picture.
So we'll have to have other ways of verification, like
in-person tours.
Speaker 2 (12:26):
We've dealt with that. There are publishers that have house authors.
I know of some famous authors that are
no longer writing their books and
have ghostwriters doing that. So I'd say we've
been doing that for a while. But, you know, I
think you're right. There are going to be people
who aren't going to care. I think that's fine.
Speaker 1 (12:43):
One of the things that's a really cool question is
what will authors do to distinguish themselves from AI. So
by analogy, when the camera got invented, visual painters panicked,
but what they ended up doing was moving into areas
that the photograph could not go, like Impressionism and Cubism
(13:04):
and so on, and they ended up, you know, surviving
and making new kinds of art that no visual painter
would have predicted at the time when the camera debuted.
So the question is what kind of books will we write.
Speaker 2 (13:19):
I use this when I give talks; one of the examples
I give is Daguerre's first photograph of people, right?
The first photograph of people is like eighteen thirty-seven, eighteen
thirty-eight, and what happened was accidental. Back then, he
left a camera aimed at the sidewalk; you had to leave
the shutter open long enough, because it took so long
for the photons to register on the film. The idea of
(13:40):
capturing people seemed crazy, because nobody would stand still long enough.
But a guy got a shoeshine, so the shoeshiner
and he got captured in the photograph.
Speaker 1 (13:48):
And you think about what that would mean.
Speaker 2 (13:50):
At the time, you know, you saw
how painters realized they could go in different directions. Their
job wasn't just to try to recreate reality, it was
to interpret it. But imagine going back in time and saying, hey,
listen, not only that, but, you know, in a
few decades we're going to have motion pictures.
Speaker 1 (14:05):
What's a motion picture?
Speaker 2 (14:05):
That's hard to explain. And then explain that, you know, one
hundred years later we were going to have these productions
that dwarf in size anything you had. Today,
the end credits for Avengers: Endgame list more people
than were in the United States Navy in eighteen thirty-seven,
and the budget, not adjusted dollars,
just the actual budget, dollar for dollar, was greater than
(14:28):
what the US Treasury held, and the scale of art got so
much bigger. So for writers, I think part of
what it comes down to is that it's not just the
text between the pages of the book. I think it becomes
bigger, like what we're doing with a podcast, right? This
is part of what is the Eagleman legendarium: part of
it's this, part of it's that. I think
(14:49):
that we start looking at how much time do
you have to spend on the things you don't like
doing as an author, like revisions, little notes, things like this?
What would happen if you had AI to free
you up from that, to spend more time creating and
engaging with people?
Speaker 1 (15:01):
So let me just make sure I got
the analogy. So with bands, the release of the
album that was the big thing, and then they'd make
their money still on the album. Then that changed with
Napster and so on, where going on the road was
the thing and the album was just the calling card
so that people would show up. The economics of it changed.
Speaker 2 (15:18):
Well, sort of, but it changed for some of
the people at the top. For other bands, they'd go
on the road and there never actually was a payoff because
the record labels would still hold on to it. And
now you're in this weird phase where like with with
Spotify and with streaming, where you have some people making
a lot of money on streaming and some people not.
I would say record company economics are always sort of weird, so.
Speaker 1 (15:38):
Right, but it's the going on the road that mattered.
It's the show exactly, and that's the part that for
a while was not a big deal in music and
then became a big deal. I suspect that it's going
to be the same in, let's say, writing, where it's
not just sitting and tapping out the book in solitude,
but it's going and doing the talks and the tours
(15:59):
and in the podcast I do.
Speaker 2 (16:01):
Every time I launch a book, I'll do
live sessions; I'll hop on video and talk
to people about it. I used to do things like go
to bookshops and stuff, but I said, I can reach
more people if I just hop on a video stream
and talk to people and be accessible. And again,
I think there's gonna be different things for different people.
I think that, you know, now some people can be reclusive.
(16:23):
I think sometimes if their work just creates its own
community around it, that can work.
Speaker 1 (16:28):
Yeah. Okay, so let's get back to the big picture then.
So what is going to be the influence of AI
on the economy in terms of jobs? What jobs are
going to disappear first? Which ones are going to be unaffected?
Speaker 2 (16:44):
I think the highly networked jobs, where people are
happy to have a person in there or need a
person to be in it, are always going to be
highly valuable.
Speaker 1 (16:53):
What's an example?
Speaker 2 (16:55):
You know, if you are a person who makes
a purchasing decision for a company and you have to
decide who to work with, you need to look
for contractors with a really good reputation, and
that really comes down to talking to people. Let's say you're a
contractor and you have to figure out who you're
going to get
Speaker 1 (17:09):
Your building supplies from.
Speaker 2 (17:10):
You know, you're going to want to talk to somebody
and find out the reliability because that information is not
just available out there. That's something you have to
figure out through a little deeper sort of
connection.
Speaker 1 (17:19):
Although we can certainly imagine a time when my AI
agent talks to this company's AI agent and all the
data about who's reliable. Well, that's right, but all the
data is out there about the reliability
Speaker 2 (17:29):
Of that company, you assume. But we make that
assumption, though, that we put everything out there.
So you know a lot of stuff about other people
that you won't share, both via social media or whatever.
That's insight you share with your friends, you share with
high-trust people, but you don't actually make it public, because that
has value. And so I think that's part of it:
you look at certain things that continue on. Like, you know,
(17:50):
you and I both have literary agents, right?
And they have a better understanding of, you know...
I could theoretically have ChatGPT negotiate a contract,
but an agent is going to tell me, well, this is
what's really happening at this publishing house, or what's
really happening there. So I think there's a lot of
places where we forget how much information is locked up
in our heads and how much is based upon trust.
Speaker 1 (18:10):
Yeah, yeah, that's right. One of the things that fascinates
me is social neuroscience, which is a new subfield
which really concentrates on trust and integrity and reputation of
other people. It turns out the way we've traditionally studied
the brain is, we say, look, this is how the
visual system works, this is how audition works, this is how
movement works. But what that overlooks is that a massive
(18:32):
amount of the circuitry of the brain is about other people.
And we have thousands of models in our heads of
other people. I mean, you and I know a bunch of
people in common, and for each of them, we've got
a little model. That person's like a little, you know, mannequin,
and, oh, how would that person respond if I say this
or that, or if I call them? And it's weird how
many people we can simulate pretty well. So we're living
(18:54):
in this world where we care so much about other people.
And I agree with you that a lot of that
information is not the type of thing that AI can
pick up on, because it's very subtle how humans interact
with other humans as opposed to just data that can
be gathered.
Speaker 2 (19:11):
Yeah. When I lived in Japan, one of the
things I saw there up close was a lot of,
like, how Asian business culture works. You have two
companies that seem that they're going to do a partnership
that are very much the same on paper, but then
they want to hang out socially. And that happens here too.
I have an investment fund, and some of our
partnerships come from this: they look at the paper,
look at the data room, we go, this is great,
(19:33):
but let's hang out and let's talk to each other
and see if our values align. I think we forget,
too, that our economy is not really the exchange of dollars;
it's the exchange of values, and not every value is
easily converted into a dollar amount. You have
a lot of ways in which you
can spend your time, and there are probably more efficient ways,
(19:54):
ways that could be more financially profitable for you, but
you choose not to, because you say, I get more
value out of this. And we're all like that,
and I think we forget that when we think about
automating the economy. I think that there are going
to be parts of what we do that change. An example I'll
give you is that I did an interview with Jakub Pachocki,
who is the chief scientist at OpenAI, and Jakub is brilliant,
(20:14):
absolutely brilliant, one of the smartest people in the world
in my opinion, and we were talking about a teacher
he had. He went to a magnet school for
computer science, and I said, you know, we're seeing AI
as this wonderful tutor. Do you see this point where
AI replaces teachers? And Jakub, whose job, you understand, is to
get AI to be as useful and as widely deployed
as possible, laughed at that idea, because he pointed out
(20:36):
his favorite teacher inspired him, and it inspired him because
that person had real experience. ChatGPT
can be, as you know, sycophantic or whatever, and at the end of the day,
it starts to feel like, okay, but, you know, how
do you really feel? And having real experience can give
something more value.
Speaker 1 (20:55):
And I think that's something we overlook.
Speaker 2 (20:57):
That's why I think that in the classroom, AI
is going to be super beneficial. We're already seeing
examples where it is extremely helpful. But at the end of the day,
I want to have a human there with real experience
that's also going to encourage me and tell me, like
I love memory techniques. One of my favorite teachers is
Anthony Metivier; he does things on memory on YouTube and whatnot.
And I can ask ChatGPT about memory, which I
(21:18):
do a lot, but listening to somebody who actually tried
the methods is much more useful to me in the
long run.
Speaker 1 (21:25):
Why? Because you get to hear what went wrong and what.
Speaker 2 (21:28):
Yeah, it's like taking coaching advice from somebody
who never played football versus someone who actually played the game.
And with memory methods, it's like somebody who actually has real
experience doing the thing.
Speaker 1 (21:39):
That's got a lot of value.
Speaker 2 (21:40):
And I think about how much of what we want
is, I need proof that it worked. And the ultimate
laboratory is human experience when it comes
to trying to figure out what to do with your
own experience.
Speaker 1 (22:06):
You know, I asked my thirteen-year-old boy, hey,
if there were an AI tutor of Aristotle who knows everything
Aristotle knew, you know, plus everything else, would that be
really interesting for you? And he said no, not at all.
And I said why? My thirteen-year-old's really smart.
But he wasn't interested because he said, you know, I
want to spend time with my friends, and that's much
(22:28):
more interesting than sitting with Aristotle and asking questions and
getting ChatGPT answers back from that model.
Speaker 2 (22:34):
Yeah, I think it's a balance. I think that you
have to figure out where you want to get those
things from, and at the end of the day, he wants to
be human.
Speaker 1 (22:40):
Yeah, yeah, exactly. Okay, so when we think about AI
affecting the economy, what do you see, given what
we talked about? You know, we care about other humans,
all the social networking, that's not going away. Does it
get enhanced, does it get suppressed, does it turn in a different direction?
Speaker 2 (23:00):
I think it's a mixed bag. I think, like anything,
there's good and there's bad. I think that you know,
when you have the opportunity to, you know, use these
tools to sort of amplify yourself, it's great. If you
just try to say, I'm going to outsource everything
Speaker 1 (23:12):
I do to the tool. That's going to be a challenge.
Speaker 2 (23:15):
The jobs that I look at as being at risk I
describe as black-box jobs. You know, anything where
you just get an email in and you send something
out and nobody cares who does it. That's really headed
towards replacement from AI. And we see that with like
call centers and things like that, where it really
doesn't matter who's in that role. And we're going
to see that. You know, you're going to find fewer
(23:35):
people at fast-food restaurants. But those are also jobs where
the average time somebody stays in one of those
jobs is like eighteen months. Those jobs are more
of a rung on a ladder. So I certainly think
that at the bottom of the ladder we might see changes there.
But I also think that we do invent new rungs
at the top, which is what we've done historically. I
think that we're going to have more science. I've talked
to people in AI who talk about AI
scientists and ask, what happens when AI replaces scientists? I think
(23:57):
we're going to have more scientists, because one
human scientist will be able to do so much more than
they could before. I think with teachers too, we need
more teachers. I would like more teachers in my life.
Speaker 1 (24:08):
Yeah, I'm amazed when I look back at scientific projects
like William Bragg, who spent years crystallizing the first protein
and figuring out its structure. It took him, whatever,
five, ten years, and he won the Nobel Prize for that.
But now AlphaFold does, you know, three hundred thousand
proteins in the blink of an eye. Everything speeds up
like that, and that gives the opportunity for people to
(24:30):
work on projects like AlphaFold and go much faster.
Speaker 2 (24:34):
Well, I'll give you an example of that. So last
night I was over at the Chan Zuckerberg Institute, which
is Mark Zuckerberg and Priscilla Chan's institute for research in biology.
And I have some friends with a company called EvolutionaryScale
who were working on a protein model
called ESM3, models that were able to predict protein structures.
They basically got acquired by the Biohub now,
(24:55):
which is the institute that runs that, to basically oversee, you know,
and help deploy and help move faster with AI
in the biology space.
Speaker 1 (25:03):
So Chan Zuckerberg took that company over?
Speaker 2 (25:05):
Yeah, yeah, they brought them on board. Alex Rives
was the head of EvolutionaryScale and is now going to be
running the Biohub, which is exciting. But one of the things
I think about is that, you know, they're looking at
how they deploy AI and whatever. You walk around the facility
and then you see a room, several rooms with really
sophisticated electron microscopes, and you realize like, oh, they need
more of these, they need more people running these things.
Speaker 1 (25:26):
We need way more of that.
Speaker 2 (25:27):
And even if they had upstairs an AI scientist that
was able to do this stuff, you start to see like, well,
how much more data could we get?
Speaker 1 (25:33):
And you just see that like.
Speaker 2 (25:34):
Oh, it's not like, I have an AI scientist, science is solved.
It's like, I have an AI scientist; now science really begins.
Speaker 1 (25:41):
Oh that's interesting because you could in theory automate all
the electron microscopy, and you can imagine ways
that that becomes like a dark factory where it's running itself,
where the AI generates hypotheses, goes tests, all the stuff.
Speaker 2 (25:58):
But humans... like, how many experiments would you be running right
now if you could just pull up ChatGPT and say, hey,
I want to go run this experiment?
Speaker 1 (26:05):
Lots. Yeah, okay. And your point is that lots of
other people would come into science to do that.
Speaker 2 (26:10):
Your bottleneck isn't that you don't have
enough ideas for things you want to experiment on. Your bottleneck
is the resources.
Speaker 1 (26:17):
Yeah, that's right. Yeah, I totally agree with you on that.
What's interesting is, I often wonder if there
is sort of an answer or an end to science,
where we say, okay, look, we've got this in place,
that in place, we know these forces, that structure. We're done,
we're out of here. I have my doubts, because, in fact,
I just did a podcast on this recently with the
(26:38):
particle physicist Daniel Whiteson about this question: the
way that we construct the laws of physics, does it
have to do with our sensation and our cognition, our
umwelt, meaning what we can sense in the world? And
if you came across aliens who saw a totally different world,
would they possibly come up with different laws of physics,
(26:59):
because they, you know, have a
very different way of seeing it? So maybe that's where
science will go, looking at alternative ways that we
could have talked about it.
Speaker 2 (27:08):
Yeah, I mean, there might be a point
at which we say we've solved all the things that
you can explain to our monkey brains, but we haven't
solved everything. Like, there are going to be
things as the universe gets older, conditions will
change and whatnot. We look at this in life sciences.
I have some friends that are involved in AI and
life sciences who are very excited and who are very
(27:29):
very bullish on where this is going to head. Like,
what happens when we cure health and we don't need
doctors anymore? I'm like, we're gonna be worrying about longevity
research on Mars. You know, we're gonna worry about
long-distance travel, like how do you maintain brain function
when you're five hundred years old? I don't see
those questions ending, because we're just gonna expand. You know,
the Greeks thought that, oh, maybe science is pretty solvable,
(27:50):
because they looked at a very simple framework, but they
weren't asking bigger questions about a lot of things. They
just thought these things couldn't even be answered. And now
we're like, oh, why not? What's exciting about watching
this happen is that when you talk to people in
biology, they tell you what proteins can do. When
you start to figure out how you can start structuring
proteins to do things like break down plastics or attack certain
(28:11):
parts of cancer, it's a very very interesting area that
we're just now starting to see. And I think there's
a lot of optimism here that we're going to see
things rapidly develop because we saw what happened to language
model space. There's a lot of challenge in getting life
sciences data inside these models, but we see
really good signs that that's going to happen.
Speaker 1 (28:32):
Okay, so what do you think in terms of what
AI will be able to and won't be able to do.
Speaker 2 (28:37):
I think that any task that you can measure, where
somebody could walk into a room and take a test,
AI will eventually be able to do, and I think rapidly.
So I think that's very likely the case. I think
we're going to see a fast follow for robotics.
Speaker 1 (28:50):
I don't think.
Speaker 2 (28:51):
I think we're going to see, probably in the next eighteen months,
some really interesting things. We've already seen some things with
kind of machine intelligence and problem solving. But there's one thing:
it's one thing to have a highly intelligent system.
It's another thing to have a system that works within
an entire ecosystem or culture or society, and we
often forget how important that is to where
(29:13):
value comes from. So I also think, like
we talked about before with authors, I might have
an AI that's an incredible guitarist.
Speaker 1 (29:20):
Am I going to be excited to see that perform?
Speaker 2 (29:23):
The problem with AI is this: we
often value creativity because of the limitations, but
there's no limitation to the amount of effort or energy
or resources that goes into an AI system. With an AI
guitarist we'd be like, well, of course that's cool. I
can listen to a CD and get the same thing.
But I want to know that this biological system called
(29:44):
a human had to spend fifty years to get there,
to do that, and that creates more value because it's
not just the outcome, it's what went in to make it.
I think a lot of people have anxiety about
AI replacing their job and what they do, and I
think that we have to sort of step back and
remember that a job is about an outcome, and
how you get that outcome is going to change over time.
(30:06):
It's changed historically, and often the thing that you're trying
to do is create some sort of value, and it's
not always economic, or it can be, it can be
put into economic terms. But we have to sort of
think back, like, what is the value
of a book? You know, you put twenty
dollars on the spine of a book and say this
book's worth twenty dollars. I've read books that, if you
told me what I was going to get out of them,
(30:27):
I would have.
Speaker 1 (30:27):
Paid a lot more.
Speaker 2 (30:28):
You know, I've had businesses with friends where, yeah,
we made money, but really the experience of working with
everybody was way more valuable. And I think that people
have anxiety about this future where robots do everything.
I don't think it'll be as fun, and I don't think it's going to create as
much value as people think it will. And that's
something I mentioned before we recorded: there's
a statistic that roughly forty percent of
(30:48):
all Internet traffic after six pm is Netflix, and
that's because of Squid Game, you know, and KPop
Demon Hunters. And when you take those things out, those
things that are really valuable because people like them, because
people created them, what's the value of the Internet after
six pm? Well, we just lost forty percent of the
value right there.
Speaker 1 (31:06):
Okay, but what about AI film? Do you think people will
spend their time watching AI film instead of KPop Demon
Hunters and Squid Game?
Speaker 2 (31:14):
Sometimes yes, sometimes no. I don't think anybody under six
is going to care, you know, how it was created.
I think that we're going to really
value certain things. Whenever Tom Cruise wants to market
a Mission Impossible movie, part of the PR campaign is
about the stunts that he performed. In the last
movie he did, he hung outside of an airplane.
He did some airplane stunts. He's done some other really
(31:35):
cool stuff. When they did Top Gun, you know, Top
Gun: Maverick, they made it a point of putting
the actors into actual real planes so we could get
their reactions, and this was added value. If we watched
a PR video talking about how great these AI
stunt doubles are, which we got back in the nineties,
we don't care. That's not as interesting to
us as the fact that a human really did something special.
(31:56):
I think there are going to be films where
three high school students make a movie in AI, and we're like, this
is amazing, and then we're going to still want, you know,
Guillermo del Toro to hate on AI and go build these wonderful,
lavish productions where he does it his way, like a
craftsperson.
Speaker 1 (32:10):
I think it's both. Yeah, I think a really
smart way of looking at it is the mixture
model of what the future is going to look like.
So if we're looking purely at economic value, where
do you think the biggest changes are going to be
in the next five to ten years?
Speaker 2 (32:24):
One of the things that's happened that people have overlooked
is that every six months or so, you get
better answers from ChatGPT or Gemini or Claude when it
comes to medical questions, right? For the first
time in history, medical information has gotten cheaper year over year,
and I don't think we can overstate how significant
(32:44):
that is. Think about that: getting competent medical
information gets better every year, and it gets cheaper
every year, and that's going to apply to a lot
of things. And so when we start
to think about, you know, what does that mean? Well,
other kinds of information get cheap; really great answers
are going to be less expensive. And some people say,
well, great, who's going to need doctors? I
think we're going to actually want to use doctors for
(33:05):
a lot more areas of life. You might be able
to spend more than two minutes with your doctor. Nobody
says, I spend too much time talking to
my doctor. Everybody's like, I don't get enough time. And
I think that we're going to start to look at, okay,
what do we really want? I think some areas
are easily automated, like I've mentioned before, call centers,
things like this. Yeah, that part may go away, and
we're going to have to start thinking about encouraging more
(33:25):
people to take on the kinds of roles that we
want to spend more time with. We need more nurses,
we need more people who actually build
things. I got into a discussion on
Twitter where somebody said, well, what would you tell
an air conditioning repair person whose job is going
to be, you know, threatened? And I'm like, I don't
know if you know this, but we're running out of
AC repair people right now because not enough people are
coming into the workforce to replace the ones that are retiring.
(33:48):
And that's true of a lot of these other industries
right now. And even when we get into robotics,
that's going to be
exciting, because nobody steps outside and goes, oh, everything's perfect,
what would I fix? There are
so many things we could fix, some things we could repair,
some things we could make better. So I think that
we're going to be thinking a lot more about how
more people could become builders, how more people could create
(34:09):
things that should exist. And from a career point of view,
one of the things I tell people is: think about
making groups. Think about your ability to work with other people,
you know, create groups. Think about what kinds
of problems you can collectively try to solve together.
Speaker 1 (34:38):
It's interesting to see that blue collar jobs are the
ones that everyone knows are going to survive: the plumbers,
the electricians, the air conditioner repairmen. And we certainly wouldn't
have expected that. It does make me wonder, though, if
one hundred years from now, air conditioners get designed
in such a way that they are meant for robots
to fix them instead of having to.
Speaker 2 (35:00):
I absolutely think that. I think that
we're going to see robotics do a lot of those
things in the shorter term. But I also think of
the things that we think are possible to build right
now, in the middle of this data center build out:
building new data centers, with goals like Sam Altman has talked
about, building a gigawatt of compute per week,
and that's basically a city-sized amount
of energy per week, because of what's
(35:21):
possible there. Which means, you know, what is
the big part of the future economy? It's going to
come down to energy production. It's going to come
down to compute. And I think a lot of the
jobs we think about are going to be supporting building
that and also making use of that. So, you know,
AC repair, things like this: people who have those
jobs right now aren't going to have a problem. And
(35:42):
I think that younger people coming into it are going
to have a broader view of it. And I think
the AC repair person fifteen
years from now, you know, they're going to maybe be
in charge of a fleet of robots, and their job
is to be accountable. Their job is to tell
them, yes, we got the work done, as we
build, you know, megaplex energy project number forty nine on
the moon. There are people that I know
(36:03):
and really respect in AI who worry about job displacement,
which I worry about too, but they think, like, what happens when
AI can do just about everything? And I'm like, we grow,
we get bigger. If you imagine AI becoming highly capable,
we have to also imagine the economy becoming highly scaled.
And when that happens, you're actually going to grow so
fast there won't be enough people to do the things
you want to get done. From, you know, working in
(36:26):
a data center, to working in a cafeteria
to make sure that you have, you know, great sushi,
to figuring out what you do with all this compute,
if you really imagine what happens when we're super ambitious.
Speaker 1 (36:38):
That's amazing.
Speaker 2 (36:39):
And I think if the founding fathers
of the United States looked at the scale
at which we do things today, it would seem unfathomable.
People often think of the future as
just shiny clothes and robots doing stuff, when
really the future is bigger.
Speaker 1 (36:54):
Like we look back.
Speaker 2 (36:55):
I tell people, if you look back one hundred years ago,
the first thing you'd realize is, oh, we're poor. Back
then, one hundred years ago, you'd feel like everybody was pretty poor.
That's how, one hundred years from now, people will look back
at us and think, oh, you guys were poor, but
building.
Speaker 1 (37:08):
And you died young, and you had all kinds of
pandemics, yeah. Do you have an opinion about universal basic income?
Speaker 2 (37:14):
Other than that it's a terrible and necessary idea? No, tell
me why. I mean, I am all for it if you
can lower the cost of things. You know, the
problem people talk about with universal health care
is that the cost of it accelerates faster than the
GDP does. And no matter which political
side of the argument you come from, you have to address the fact
that if this thing grows faster than your economy can afford,
(37:34):
then it's hard to make that sustainable. I think there's
a world where the economy grows so fast that you
can have a different conversation about UBI. People
bring it up because they think that people won't be needed
in the economy. I don't think that's going to be the case.
And that's a scary place to be, when you say that, oh,
eight billion humans are, you know, ancillary to what
the world is. I have trouble
(37:55):
fathoming that being true. And also it's uncomfortable to think
about that being true, because once you say that they're
not necessary, bad things happen. I'm all
for ideas and ways in which we make sure everybody
gets their needs met. Nobody worries about housing, nobody worries
about healthcare, nobody worries about food. I'm all for solutions
for that. But when we think that there's just not
going to be any jobs in the future, I think
(38:16):
historically that's not true. It's also, you know, I bring
this up: okay, so we can come up with
a super advanced AI, but you're telling me we
can't ask it to find out what's still useful work
for people? Like it won't be smart enough to figure
out new things? We've continuously invented new jobs. Like
I brought up before, entertainment, which you would think
would be one of the first things to go, just gets
bigger. And in education, we need more teachers. I
(38:38):
think that as we live longer, healthier lives, one
of the ways we're going to want to avoid boredom
is to continue our education. We're
going to go learn from these things, and we're going
to learn from people who actually went there and did it.
You know, do you want to learn about Egyptian
mythology from somebody who never went to Egypt? Do
you want to learn about practicing medicine from somebody who never
treated a patient? And so I'm very, very bullish on
(38:58):
the idea that we're going to need lots more people
in our economy.
Speaker 1 (39:01):
You know, I was just hanging out with a friend
of mine from high school. We graduated together, and I
was thinking about the fact that what I do, and
what he does, and what essentially everyone we know in
Silicon Valley does, these jobs didn't exist when we were
graduating high school. We couldn't have imagined the titles of
these jobs. And so my question is for education currently:
(39:23):
if you're thinking about junior high kids, high school kids,
what do you see as the important things that we
should be teaching them, given that we are not training
them for jobs that we know of?
Speaker 2 (39:36):
So in two thousand, I wanted to get into AI.
I really wanted to find a way into that, right,
because I knew that was the future. Like, how could
I do this? And I decided this late in life,
comparatively to other people. In twenty nineteen, OpenAI
published a tweet about their model GPT-2,
and if you look, one
of the most upvoted responses at the time was mine, going,
this is really amazing, I guess I'm out of work
(39:56):
as a novelist, any suggestions as to what I should do?
I had no idea that a year later I'd
be the world's first person employed as a prompt engineer. Because
what happened was that intersection of language and AI: I had
an interest in AI, but I knew a lot about language.
All of a sudden, those two things collided. And I
think that for people right now looking at what they
want to do in the future: take the things you're
(40:17):
passionate about right now and ask, how does AI make
this better? Or how does AI make it worse? And
don't assume, well, AI is
going to replace this. No, AI is going to
change it. So, you know, I spend time talking to
college kids, and one of the things that concerns me
is that a number of kids in computer science programs
aren't even taught how to use AI code tools. I
think it's important to learn the fundamentals, but they're
(40:39):
not even taught to use those tools. And there was
a headline in a newspaper a few weeks ago,
like, oh, this person got a degree
in computer science, but nobody will hire her. I'm like,
I bet she never learned to use these tools, and
that's why she can't get hired. It's not her fault; it's
the fault of the institution she paid money to. So, for students:
You know, one of the things that was really interesting:
I did a talk at Santa Clara last week.
What's interesting is because I've interacted with those
(41:03):
kids for a couple of years now, and I've noticed
they've become much more entrepreneurial, like the ones in the
AI clubs. They're actually starting their own companies. Some are
actually raising money while they're in their dorm rooms. And
I asked, why is this? And one of the kids said, well,
we're not so sure that there's going to be a
job for us, so we want to
be more self-reliant. We decided we're going to create
our own work.
Speaker 1 (41:22):
We're going to create our own need.
Speaker 2 (41:24):
And I think it's a little scary that they feel that, but
that's very honest, and I was very happy
to see that their reaction to this wasn't hopelessness. It was like, fine,
if you don't have a place for us, you can't
tell us our place in the future. We're going to
create our place in the future.
Speaker 1 (41:38):
I think that's great for those kids with the personality
who are willing to do that. The majority of students, though,
might feel quite nervous about that.
Speaker 2 (41:47):
But how do we get them to that? I think that
a lot of entrepreneurs don't even realize
that until later in life, until necessity. And I think that
not everybody has to be an entrepreneur, but I do think that,
like, my advice is what I did. I
wanted to get into tech, and I saw the people
at OpenAI and said, I like what you kids are doing,
can I help you out? And it worked out great.
Speaker 1 (42:07):
It's nice. Yeah, Goethe had said the most important
bequest that a parent can give a child is two things,
roots and wings. And I interpret that in the educational
context as roots being critical thinking, really teaching students how
to do critical thinking, and wings being creative thinking, how
to be really creative. Because when I think
(42:29):
about my students at Stanford that I'm teaching, the
only thing I'm telling them that's going to stay
true twenty years from now is how they can think
through a problem and how they can be really creative,
meaning take stuff they've learned before and remix it, bend it,
break it, blend it, build new things out of it. Because
I think it doesn't matter whether the computer science students
(42:51):
are learning coding or whether they're learning AI tools, because
in five years it's all going to be something else.
It's going to be you know, super AI tools.
Speaker 2 (43:00):
Yeah, to me,
if you ask me to describe coding, to me
it's to look at a system and figure out
how to give it the minimal set of instructions to
get it to do something, right? That's how I approach prompting.
That's how I approach a lot of things. And
you have to figure out
what those tools are going to be, right?
But if you are a critical thinker, you adapt really
(43:20):
quite well to this. You understand, oh, the AI does
this part, it's good at this, bad at this, now
I'm going to use this. I think a lot of
people kind of fetishize the tool itself and think, well,
can I just use this thing? And that's
the way I do it. And it's like, no, it's
about the outcome. If you know what
your goal is, the outcome, and not just
which Python function to use, you're going to be better
suited for the future.
Speaker 1 (43:41):
Yeah. And one of the things I'm so
excited about, for example, is AI teaching critical thinking by
being a debate partner. So, one-on-one debate partners: you
take one side of a hot button issue, the AI
takes the other side, you debate back and forth. It
grades you on how well you do. Then you switch
sides, and it grades how well you do on the other side.
And that kind of thing is the kind of thing
(44:02):
that no teacher would ever have time to do for
each student, but this will be a perfect tool
for really teaching critical thinking.
Speaker 2 (44:10):
And it's safe, too, because of what's happened: like,
debate is a great thing for students to do, but
debate organizations have been so captured by political correctness
that students can opt out of arguing alternate points of view.
The problem is, the purpose of
this was to understand other points of view, and
they become intellectually weak from that. The beauty of AI
(44:31):
is that part of it is the fear, too. It's like,
I don't want somebody to take me out of context,
even though it's supposed to be a safe space. But
when you're having a private conversation with the chatbot, you're like, okay,
let me defend this point of
view I disagree with. Then it feels safe; you're not
as worried that all of a sudden you'll be taken out
of context.
Speaker 1 (44:49):
Oh I love that. That's great. And when it comes
to creativity, I think there's a real opportunity for AI
to just feed us a broader diet. All creativity is,
is absorbing what we see in the world and then
remixing that. And, you know, look
at music around the world. Beethoven certainly
could have written music as they did in Japan at
(45:11):
the same time, or in Nigeria at the same time,
but he didn't; the music was different in all
these places. Why? Because he absorbed just a small amount
of what was in his culture, and so did the
Japanese or the Nigerians. So what we have nowadays, really
ever since the advent of the Internet, is a much
broader diet. But what AI gives us is a broader
diet still where we can get it to do remixes
(45:35):
and give us things that really stretch the fence lines
of our thinking. And that's great because that's the student's diet.
And then the key, I think, is to have them
present in person, so they can work with the AI
to do whatever they're doing, and then they present to
their class, explaining all these different things about
how the new economy could run on this other planet
(45:56):
and the types of people and whatever. It's such
an opportunity for expanding creativity beyond what we ever got
in school. That was my interview with Andrew Mayne. This
conversation pushed back a bit against the default anxiety that
AI equals unemployment, because the story is more richly textured
(46:20):
when we ground it in history, economics, psychology, and questions
about what the human brain actually seeks, at least historically.
The fact is, when you automate the bottom rungs of
the economic ladder, humans climb upward. The plow didn't eliminate work.
It created governance and mathematics and engineering and poetry. Industrial
(46:43):
agriculture didn't collapse society. It helped dismantle slavery and create
new professions. So the same is likely with AI. What
surfaced from our conversation is that the jobs most at
risk are the black box jobs, where a person receives
inputs and sends standard outputs and the identity of
(47:03):
the worker doesn't matter. Those jobs are probably going to vanish,
but the roles that depend on trust, reputation, lived experience, mentorship,
and inspiration might become more valuable, not less. Because through the
neuroscience lens, I think we can say that human beings
are creatures of story, not efficiency. We don't choose books
(47:29):
only for their sequences of words. We choose them for
the heartbeat behind the pages. We don't choose teachers because
they have access to zeros and ones that we want
to know. We choose them because they've lived actual experiences
and can guide us around the pitfalls that they once
fell into. As Andrew pointed out, even the chief scientist
(47:51):
of Open AI doesn't believe that AI will replace teachers
because inspiration isn't really a commodity. It's relational, and no
matter how knowledgeable an AI tutor becomes, it can't replicate
the emotional texture of a person who has wrestled with ideas,
who has failed, learned, grown, and now offers that experience
(48:14):
to others. Another theme that surfaced is the expansion of creativity.
As I mentioned in the conversation, painting didn't die when
photography arrived. Instead, painters expanded in new directions like Impressionism,
Cubism, Pointillism, and Surrealism. And I suggest writers
(48:35):
are not going to go away now that AI can
write fluent text. Instead, writers will invent new forms of storytelling,
new modes of audience engagement, new performance ecosystems around their work.
And Andrew raises the possibility that accelerating tools in science,
like AlphaFold, won't eliminate scientists but might multiply them.
(49:00):
So when one researcher can do a year of research
work in a day, that massively expands the frontier and
opens the door for more people to join the field
because it increases the number of experiments that we can imagine,
and therefore the number of people we need to build
them and run them and interpret them. In other words,
(49:20):
the surprising possibility is that the future economy may not
need fewer people. It may need more: more builders, more tinkerers,
more crafts people, more teachers, more creative thinkers, more scientists,
more hands to run the experiments, and more minds to
shape the meaning of what we discover. Which brings us
(49:42):
back to the brain. For all its computational power, it's
not built for perfect efficiency. It's built for meaning and
imagination and social connection. It's built to ask not just
what can I do, but what is worth doing? And
this is where humans will continue to shine. The jobs
of the future are not going to be the ones
(50:03):
we can list today, just like no one a generation
ago could have predicted your job title now. The next
wave of professions is going to emerge from the collision
between human creativity and machine capability. Our kids are going
to grow up into jobs that don't yet exist, using
tools we haven't invented to solve problems we haven't yet imagined.
(50:28):
So our job is to cultivate the tasks of the brain,
ours and theirs that will always matter, critical thinking and
creative thinking roots and wings. So as we move into
this AI accelerated era, the best question isn't just will
AI take our jobs, but instead what will we choose
to do with the extraordinary new landscape that AI opens
(50:52):
up for us? Go to eagleman dot com slash podcast
for more information and to find further reading. Join the
weekly discussions on my Substack, and subscribe to
Inner Cosmos on YouTube for videos of each episode and
to leave comments. Until next time, I'm David Eagleman, and
(51:12):
this is Inner Cosmos.