
July 14, 2025 • 42 mins

What is code, and can it be thought of like a magic spell? Are we building a world so complex that we will lose the ability to understand its operations -- and has that already happened? What does any of this have to do with SimCity, or knowledge that already exists but no one has put together, or how coding will evolve in the near future? Join Eagleman with scientist Sam Arbesman, who has just written a book asking the question: what is code, really?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
What is software code and can it be thought of
like a magic spell? Are we in the process of
building a world so complex that we will lose the
ability to understand it? Or has that already happened a
long time ago? And what does any of this have
to do with SimCity, or knowledge that already exists but
no one has thought to put together, or inventions that

(00:28):
evolve beyond the grasp of their creators? And what will coding
look like in the near future? Welcome to Inner Cosmos
with me, David Eagleman. I'm a neuroscientist and author
at Stanford and in these episodes we look at the
world inside us and around us to understand why and

(00:48):
how our lives look the way they do. Today's episode

(01:10):
is about computer code. We live in a world increasingly
built out of symbols. We've got strings of code and
lines of logic and invisible layers of computation stacked very
deeply in our lives. This goes from traffic lights to
financial markets, to weather predictions, to streaming recommendations, to all

(01:32):
of our apps and our AI and on and on.
We are surrounded by systems that hum quietly in the background,
and that orchestrate everything about our modern lives. But it's
very uncommon that we stop to ask: what is code, really?
It's a tool, but it's also a massive force that

(01:54):
has taken over the world. So where does it come from?
Where is it taking us? And what does it mean
to live inside systems that we ourselves have written but
we can't possibly fully understand. So today's guest invites us
to see code as having an aspect of magic. We're

(02:15):
going to talk with Samuel Arbesman. He's a complexity scientist
and a writer who thinks about the evolving relationship between
humans and the tools that we build, and how our
creations can outpace our comprehension. He's just written a new
book called The Magic of Code, and here he takes
a dive into the joy and the deeper nature of programming,

(02:38):
in other words, beyond just an engineering discipline, but instead
as a hopeful and enchanted practice. I think I would
describe this book as a blurring of the boundaries between
science and art, between logic and myth, between our intentions
with writing code and what can actually emerge. Because the

(03:00):
paradox is that code is built from rigid languages with
strict syntax and rules, but it lets us create whole
worlds from scratch. We can simulate galaxies and evolve virtual creatures,
and test new economies and reimagine cities, and it gives

(03:21):
rise to things that are complex and unpredictable and often
quite beautiful. So here's my interview with Sam Arbesman. So, Sam,
you're a scientist with very broad interests. What drew you
to write your latest book on the subject of code?

Speaker 2 (03:42):
Yeah, So, one of the things I think about when
I think about how people are talking about technology and
computing and code is that right now, it feels like
there's almost this kind of broken conversation in
society, where when we talk about code or computing or the
world of tech, there is this adversarial stance towards it,
or people are just worried about it. Sometimes people are just

(04:04):
ignorant about it and unwilling to kind of learn more.
And certainly some of the adversarial stuff is reasonable. But
for me, like when I think about my own experience
with computers growing up, it wasn't adversarial. It was kind
of full of this kind of like wonder and delight.
It also didn't feel like computing was really just
this branch of engineering. It also really connected to lots

(04:26):
of different things. It was almost like humanistic liberal arts
that drew in language and philosophy and biology and art
and how we think in all these different areas. And
so for me, I wanted to try to explain how
to think about code and computing as this almost like
liberal art that attracts all these different topics.

Speaker 1 (04:44):
And in the book, you compare this to philology, right?
What's this? Tell us about philology.

Speaker 2 (04:50):
Yeah, so philology is this branch of humanistic study
within the humanities that was devoted to understanding the
origin of words and the nature and history of language.
But the process of doing philology
required knowing about archaeology and anthropology and history and

(05:11):
all these different topics. And then eventually philology sort of fractured,
and that's kind of where we got a lot of
the different domains within the humanities. And I kind of
think that in some ways computing has at least some
aspect of that kind of philology as unifier of lots
of different topics. And so for me, my goal was

(05:31):
to kind of try to show how computing and code
can actually have that connective tissue between all these different domains.

Speaker 1 (05:40):
And so you described code as being magical.

Speaker 2 (05:43):
Why? So when I say magical, I'm not saying
that it's like magic in the sense of like, oh,
like this piece of software just works, it works like magic,
although there is some of that. For me, it's actually
this idea that when we think about the nature of
code or magic: we've had as a society this
desire for millennia to kind of coerce the world around

(06:04):
us through our language and our texts and our speech
and make the world kind of do our bidding. And
only in the past, I don't know, seventy five odd
years since the advent of the modern digital computer, has
this been a reality where we can actually write text,
we can write code, and it can actually do things
in the world. And so for me, there is this
deep similarity between how we've thought about magic in the

(06:26):
ancient or medieval days, or even in the stories that
we tell ourselves and the reality of code. And so,
of course this analogy and metaphor can only be taken
so far before it kind of breaks down. I certainly
take it to the bending point, if not the breaking point.
But there are a lot of deep similarities between how
to think about this. So, for example, magic often requires

(06:47):
in our stories a certain amount of training and knowledge.
And it's a craft. It's not just a thing
that just works. It actually requires you to learn
certain things. And so we have Hogwarts School of Witchcraft
and Wizardry. You got to go there for seven years
or whatever it is. And so too with code, like
it doesn't necessarily just work. You actually have to understand
how like the nature of syntax and the details of

(07:07):
code. And so that example, as well as other ones,
I kind of use to show the ways in which
this analogy of magic actually can be a productive and
useful one to help us better understand how code works.

Speaker 1 (07:20):
So it's not just a set of instructions. It's like
a spell in the sense that you put in a strange
set of symbols and it does stuff in the world
that moves electrons or launches rockets, or models pandemics or
creates simulations. And that's the sense in which it's got
this magic to it. But also you point to the
fact that there's often unpredictability and emergence of things we

(07:44):
didn't expect from code. So can you give us an
example of this dual nature of code?

Speaker 2 (07:50):
Yeah, I mean, certainly in the world of magic
we have a lot of these stories, like the story of
The Sorcerer's Apprentice, where I think there's
the old version and then the Disney Mickey Mouse version,
where some sort of magic
has unanticipated consequences, and suddenly you have brooms
kind of walking around and flooding a basement.
And the same kind of thing is true with code,

(08:11):
Where in code, I think people who might not be
familiar with programming think of it as, oh, like
I have this idea in my mind and I'm going
to instantiate it into a computer program. And there is that,
but there's also a huge amount of debugging and frustration
because oftentimes when you write a program, there's a gap
between how you think it will actually work and how
it actually does work. And oftentimes the reality of the

(08:32):
program and better understanding of it is only revealed through
these bugs, through these glitches and edge cases and things
like that. And so there are many situations where we
only see these bizarre errors, and only then, based on those
errors and these kind of unanticipated consequences, do we realize
how this thing actually works.
So there's a well known story of someone

(08:53):
who I think was like a systems administrator for some
university department. He was told by the chair of the
department that their email could only be sent
about five hundred miles away, and the sysadmin is like, this
is insane, that's not how email works. And
it turned out, by delving into it, he was
able to find that there was
an older piece of software that hadn't been upgraded, but

(09:15):
the newer system didn't realize this and would time out,
but only after some very small amount of time.
And it turns out, based on the speed of
light and that small amount of time, it ended up
working out to about five hundred miles, and it was
this weird unanticipated consequence. But of course, there's also
other things, like the fact that when you stitch
systems together and pieces of software together, they all interact
in unexpected ways. And that's also the kind of unanticipated

(09:38):
consequences we see. Whether it's some weird little system that
fails to get upgraded and then suddenly all
the airline systems go down for a certain amount
of time or whatever it is. And so there is
that kind of unanticipated consequence in lots of different ways.
And we're seeing this, of course even more so with AI.
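The arithmetic behind that five-hundred-mile limit is easy to check. A minimal sketch in Python, assuming the commonly cited timeout of roughly three milliseconds (a detail not stated in the episode):

```python
# A rough sketch of the "500-mile email" arithmetic (numbers assumed,
# not from the episode): a mail server with a ~3 millisecond connect
# timeout can only reach servers close enough to answer in that window.

SPEED_OF_LIGHT_MILES_PER_SEC = 186_282   # speed of light in vacuum
timeout_seconds = 0.003                  # assumed ~3 ms timeout

# Farthest a signal can travel before the timeout fires.
max_distance_miles = SPEED_OF_LIGHT_MILES_PER_SEC * timeout_seconds
print(f"~{max_distance_miles:.0f} miles")  # lands in the 500-600 mile range
```

Light covers roughly 559 miles in three milliseconds, which is why the bug showed up as a seemingly absurd geographic limit rather than an error message.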

Speaker 1 (09:56):
Okay. So the thing that you and I
both love is the emergence of complexity in the world.
And with code it's a very specific, detailed set of instructions,
and yet it can break, it can decay, it can
be opaque. All kinds of things can happen. And sometimes
when we try to model complexity, we actually unleash complexity.
So tell us your take on how code can do

(10:19):
things that we didn't expect it to do.

Speaker 2 (10:21):
Yeah, so when we think about engineered
systems more broadly, and certainly when it comes to code,
you think, oh, it's designed by people. It sounds very logical,
it's kind of derived from mathematics. It should be very
simple and straightforward, and very small bits of code are that.
But the truth is it adds up, and then through
this combination of the sizes of programs growing, them becoming

(10:44):
connected to various other bits of code that are out there,
as well as also just engaging with kind of the
messiness of the world around us, you end up getting
sort of a certain amount of unexpectedness as well as
kind of just a reduced understanding. And part of this
is because, and you mentioned this, things kind of
break and kind of grow over time.
There is this whole phenomenon of legacy code where code

(11:06):
has been around for a very, very long time, and
we have systems that are still being used
that were developed decades ago. They
might be involved in, say, the IRS, but
they were first developed in the Kennedy administration.
There's all these kind of crazy examples where, for things
that were developed long ago, the people who first made them
might be long retired, they

(11:27):
might be dead. And we also just don't fully understand
these systems. And so it's this weird situation where we
have to recognize that even the systems of our own
construction, when they become big enough, actually
have this kind of qualitative difference where
they almost become biological or organic in their complexity,
and as a result, we have a reduced understanding and

(11:50):
we have to kind of take almost biological modes of
studying these systems, whether it's like the days
of old, with the naturalist going
out and collecting bugs, which in this case could be bugs
as in errors rather than bugs as in insects, as well
as just kind of trying to tinker at the
edges and better understand a system, because the thing overall
you don't fully understand. So there's this weird situation where

(12:12):
these systems are engineered, but they also involve kind of
a certain amount of humility in trying to understand these systems.
And part of that is also because one of the
other features of computing and software is this idea of
abstraction that you can kind of build things on top
of other pieces, and those pieces are then sophisticated kind
of units, and you can kind of use them as

(12:33):
standalone bits and then don't have to worry about the
things underneath it. And so that modularity is very, very powerful,
but as a result, there is a decreased amount of understanding,
and so sometimes not understanding what's going on under the
hood or underneath these pieces, even when you yourself
are programming them or programming the things that
interact with them, means that you can also
have a certain amount of unanticipated consequences.
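The layering described here can be sketched in a few lines. This is a hypothetical Python example, with invented function names rather than any real protocol; each layer treats the one below it as a sealed unit.

```python
# A hypothetical sketch of abstraction: each layer uses the one below
# it as a sealed unit, never looking at its internals. The functions
# are invented for illustration.

def checksum(data: bytes) -> int:
    # Bottom layer: an internal detail the layers above never see.
    return sum(data) % 256

def make_packet(payload: bytes) -> bytes:
    # Middle layer: relies on checksum() without knowing its internals.
    return payload + bytes([checksum(payload)])

def send_message(text: str) -> bytes:
    # Top layer: only knows about packets, not checksums at all.
    return make_packet(text.encode("utf-8"))

packet = send_message("hi")
print(packet)  # the payload plus one trailing checksum byte
```

The power and the risk live in the same place: `send_message` works without understanding `checksum`, which is exactly the reduced visibility being described.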

Speaker 1 (13:12):
And so what are the key limitations that you see
when we try to model very complex systems?

Speaker 2 (13:18):
So one of the things I think about when I
think about modeling complex systems is: what is the
goal of modeling the complex system? There are some situations
where we really want to have perfect fidelity to a
real world system. So for predictions, like with weather,
you don't just want to
understand kind of how

(13:38):
air moves around. You want to really understand whether or
not it's going to rain tomorrow, or in an hour
or two hours or whatever it is. And based
on that you have to have a great deal of
data and a great deal of complexity. And then oftentimes
the resulting models might be very powerful, very sophisticated, but
there might be a reduced amount of understanding of actually
how these things are doing what they're doing. On the

(13:59):
other hand, if you want to just understand the features
of a system, you can sometimes get away with a
much simpler model, which might not necessarily be exactly
the way that the real system works, but could at least capture
some of the complexity and the
emergence that you were talking about in that system.
And so, for example, and this is a kind of
trivial example: the computer game SimCity. It is not

(14:22):
modeling an actual city, but to give you an intuitive
sense of how feedback operates, or unanticipated consequences work, or
just the fact that, like complex systems can bite back
and do weird things that you might not expect. SimCity
is great for that kind of thing. And
it can also kind of give you a sense of, oh,
when I do this kind of thing, according to this

(14:43):
model of how Will Wright or whoever was programming it
thought cities would work, this maybe would be the way
it works. Whether or not that is actually how the
city operates, that's an entirely different thing. So for me,
I often think about, yeah, what is the ultimate
goal with the model? If the goal is to
understand things, then we have to recognize our human minds
are really limited when it comes to understanding complex and

(15:04):
nonlinear systems, and so we need these simplified models. If
it is actually just prediction, then sometimes a really complex
model can work, but at the cost of reduced understanding.
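The feedback behavior being described, where one simple rule either settles calmly or bites back, can be sketched with a toy growth model. All numbers here are invented for illustration; this is not SimCity's actual model.

```python
# A toy feedback loop in the spirit of SimCity (invented numbers, not
# the game's real model). The same growth rule, run at two growth
# rates: a gentle rate settles at capacity, while an aggressive rate
# overshoots and keeps swinging year after year.

def step(population: float, rate: float, capacity: float = 1000.0) -> float:
    # Feedback: growth shrinks, then reverses, as the city nears capacity.
    return population + rate * population * (1 - population / capacity)

def simulate(rate: float, years: int = 40, start: float = 50.0) -> list[float]:
    pop, history = start, []
    for _ in range(years):
        pop = step(pop, rate)
        history.append(pop)
    return history

calm = simulate(rate=0.5)      # settles smoothly near the capacity of 1000
volatile = simulate(rate=2.7)  # never settles; keeps oscillating
print(round(calm[-1]), [round(p) for p in volatile[-5:]])
```

Nothing in the one-line rule announces the wild behavior; it only appears when you run the system, which is the intuition a toy like SimCity is good at building.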

Speaker 1 (15:15):
So as AI generated code becomes more common, what does
that do to our sense of authorship and even understanding?
And could we end up in a situation where we're
surrounded by systems that are running our world that we
don't understand at all.

Speaker 2 (15:30):
I mean, to be honest, I think we're probably there already.
It's just the situation where I think many people are
not aware of that fact. This is also one of
these situations where as we build more and more complex
systems and everyday users are kind of more distant from them,
we just don't realize the sheer complexity. When the Apple
Watch first came out, this is years ago, there was
an article in the Wall Street Journal, I think it

(15:52):
was the Style section, about: are people going
to still use mechanical watches? The answer is
they still are. They interviewed this one guy
about whether you'd want to buy a mechanical watch
or just a smart watch, and this guy said something to
the effect of, of course I want a mechanical watch.
When I think about a mechanical watch, it's so complex,
as opposed to a smart watch, which is just a chip.
And the thing is, a chip is orders of
magnitude more complex than a mechanical watch. But we've been

(16:15):
shielded from it, and I think as we have AI
generated code, we're going to kind of have another level
of shielding. I do think we need better mechanisms for
interrogating these systems. I think this is
one of these situations where, on the one hand, it
is very good that we can now generate code via
AI and build simple tools and actually democratize software development.

(16:36):
I think there's lots of interesting things there. But I
still also think understanding code to a certain degree and
allowing you to kind of like dive into the code
that is being generated and tweak it, not only does
it give you a better understanding of what you're doing,
but it's still actually really good to help make sure
that it is doing at least partly what you hope for.
That being said, our systems have always been imperfect, and

(16:57):
I think right now, this moment is kind
of just heightening that fact, and maybe it'll give people
a better awareness that these systems have always been enormously complex,
enormously imperfect, made by humans at least at some level,
but maybe give us a greater appreciation for building systems
on top of these, maybe also AI generated as well,

(17:18):
that can allow us to make sure that the
unanticipated consequences are as minimal as possible.

Speaker 1 (17:27):
You know, I was just thinking, right after I
asked my question about whether we could end up living inside
systems that are too complex for us to understand: obviously,
we live inside our biology, and we for sure
understand only a fraction of what's going
on inside us biologically. But we try to eat the
right foods and get exercise and just ride on
top of this system. So we're actually quite used to

(17:49):
living inside systems that are beyond our understanding.

Speaker 2 (17:53):
Yeah. And that goes back to kind
of the biological nature of these massive, complex computational systems.
The more we recognize that these systems are really complex
and almost have this organic quality, the more we will have
to realize, yeah, that we need different ways of approaching
them, the same way we approach our bodies. Right? It's not,
oh, I have total ignorance about how the system

(18:13):
works or have complete understanding. There's a lot in between,
and using rules of thumb and things like that are very
powerful. But at the same time, though, we
don't want to necessarily kind of like succumb to like
the like the biohacking trend, which I feel like five
ten years ago was a really big thing where it's like, oh,
if I can just find this one chemical to ingest
or this one cool trick to do, then I'll never

(18:34):
need to sleep again, or I'll be healthy. And we
realize, I mean, yeah, our bodies have
evolved over millions of years and are optimizing a huge
number of different things, and they're going to be imperfect
and weird. So maybe we
can find those things, but the odds that we are
going to are very low. And I think that same
kind of approach needs

(18:56):
to be used when we think about these complex technologies
that we're building around us as well.

Speaker 1 (19:01):
So let me ask you this, if we fast forward
one hundred years, are we still coding by using symbols
and syntax or is it a completely different sort of
thing where we are setting initial conditions and letting complexity evolve.

Speaker 2 (19:15):
I would say it's very different. I don't think we
need to necessarily go even one hundred years into the future.
It could be like five or ten years until we're kind
of managing these AI generated systems. But the
truth is, when I think about what coding is, it's
always been this moving target, like it's always been changing.
So the way I learned how to program, some
of the languages I learned, they're not totally extinct, but

(19:37):
they're not things I would ever consider using nowadays. But
also even when I think about how people before me
learned how to program, that too was something very very different.
It was like plugging in cables or flipping switches or
writing things in binary or assembly code. Like I never
did those kinds of things, nor do I really have
a strong desire to. And that's okay. And I think

(19:57):
it's going to continue changing. And so one of the
ways I think about this: I tell this
story in the book, a story from the Talmud,
where there's a conversation between God
and Moses, and they're discussing, oh, who's going
to be the greatest scholar in the future, and God says, oh,
it's going to be some rabbi, like a thousand or

(20:18):
two thousand years in the future, and Moses says, can
you show him to me? So there's this weird time
travel moment where he's transported to the Hall of Study,
like a thousand years in the future or whatever it is,
and Moses is sitting in the back listening to this
illustrious scholar talking about things, and he realizes he doesn't
understand anything this guy's talking about, and he's kind of overwhelmed.
He's like, oh, I'm the one who received the law
from Heaven and I don't get it. Until, at the

(20:40):
very last moment, the rabbi says, oh, the way
we understand all this is because of the law received
from Moses at Sinai. And at that point he's calmed,
because he realizes that even if he doesn't understand it,
there's this clear, continuous line and kind of continuous tradition.
And I feel like when it comes to code, the
same kind of thing is true. Coding has changed, it
will continue to change. It'll be much more like managing

(21:02):
AI systems or some other thing. But ultimately, it's all
about taking some idea in our heads and finding some
way of instantiating it into a machine and actually getting
the machine to do something, to kind of do our bidding.
And what that looks like is always going to be
changing. But as long as we recognize it's
all part of this long tradition, then I
kind of view it as: it's all coding, whether or not

(21:24):
it's syntax or Python or Perl or whatever it is.
It will definitely not be that, but that's okay.

Speaker 1 (21:29):
First of all, I love that story. I had no
idea that there was time travel in the Talmud. That's amazing.
I want to come back to this point we were
talking about, that we're already living inside a system that
is so complex we don't understand it. Because, you know,
we program simulations, we program other things. But increasingly what
it means is we're really living inside of this opaque simulation.

(21:51):
And this will be even more true for our descendants,
where they'll be living inside this world of creation that
they can't understand explicitly.

Speaker 2 (22:00):
Danny Hillis, the computer scientist, has this great
term where he talks about how we've kind of moved
from the Enlightenment, this kind of age where we
could take our minds and kind of apply
them to the world around us and really understand it,
to the Entanglement. We've kind of moved to this era
where everything is so hopelessly interconnected, we're never going to
fully understand it. And I think we've been in the
Entanglement for quite some time, and it's really just

(22:22):
a matter of becoming a little bit more aware of it.
I think. I mean, going back to the
analogy of biology and things like that with technology, and
biology is a form of technology: someone once told me
that the way he kind of thought about it is
like the most complicated engineered system that humans have ever
made is the domesticated dog, because we made them.

(22:44):
We evolved them basically through our
artificial selection, but they're an enormously complicated system. And I
think those kinds of approaches, whether it's
tinkering with systems, kind of evolving them, recognizing these
things as enormously complicated, those might be the kind of
approaches that we need. That being said, I would

(23:07):
say one of the other things that at least
gives me a certain amount of hope, though it
is kind of weird when you think about it, is
the extent to which humans are also really good
at adapting to the world around us. And so we
think about all the technological changes that have come over
the past couple hundred years, and these things were enormously destabilizing,
but in many ways we kind of now take them

(23:28):
for granted, and even more modern ones.
I'm not just talking about air travel or
certain things around the internet, or the Industrial Revolution. So
my grandfather, he lived to the age of
ninety-nine. He was a retired dentist. But he also
read science fiction since the modern dawn of the genre.
He read it his entire life, and I remember
I think he read Dune when it was serialized

(23:49):
in a magazine, so no story could surprise him.
And I remember when the iPhone first came out,
I went with my grandfather as well as my father
to the Apple store to kind of check out the
iPhone, and we're playing with it, looking at it, and
at one point he goes, this is it.
This is the object I've been reading about for
all these years. And we've moved, though, from, oh,
the iPhone is this object of wonder and science fiction

(24:11):
in the future, to complaining about camera
resolution or battery life and things like that. We've
so quickly adapted, which, on the one hand, is good
and kind of gives me hope that we are going
to figure out ways of adapting to new types of complexity.
On the other hand, though, it means that we sometimes
don't necessarily retain that capacity for wonder, or, when it

(24:32):
comes to complexity and the complex systems around us, maybe
a more critical stance, actually saying, okay,
what are the kind of systems we want to
be embedded within, as opposed to just
allowing them to kind of wash over us. But I
do think that kind of adaptive capacity does give me
a little bit of hope.

Speaker 1 (24:49):
Yes, you probably know this routine from the comedian Louis
C.K., where he's on an airplane and for the first
time they announce, we have Wi-Fi on the
airplane, and he's amazed. Everyone on the plane is amazed,
they've never heard of this. And then ten minutes into
the flight, the Wi-Fi breaks, it stops working, and
the guy next to him starts complaining, and Louis C.K. says,
you know, ten minutes ago you didn't even know this existed,

(25:10):
and now you're complaining about it. So yes, it is
true that we adapt so quickly to that. Okay, so
let me ask you something really random. Given the evolution
of the complexity all around us, what is your opinion
on whether we are already living in a simulation?

Speaker 2 (25:25):
For me, I like to think about it much more
like this: if you don't necessarily take the simulation
hypothesis seriously as this question of great importance, but
as, oh, a question that kind of leads me
to think about more things around physics and computing, then
I think it can actually be very productive. So the
question becomes like, in the same way that people talk
about the simulation hypothesis, there are interesting aspects around, like

(25:49):
breaking out of a computer program when you were inside it,
or the high-resolution fidelity of computer games,
or even just the ways in which physics and reality
and computing intersect. Oftentimes, when we think about computation, we
think of it as this kind of like ephemeral information stuff,
and I think that's a really powerful way of thinking

(26:10):
about it. But the truth is, like computing and computers,
like they are deeply physical. The Internet, for example,
is not just information whizzing around.
It is, to use the term from
that senator a number of years ago who was
widely mocked, a series of tubes.
And actually there's a book called Tubes based on that,
about the physical infrastructure of the Internet. There

(26:31):
is a lot of this physicality, and I think
thinking about the physical nature of our computing can
be really powerful. And sometimes thinking
about the simulation hypothesis can help heighten that, or can just
make you realize, oh, there are some interesting bugs that are
worth thinking about. So for example, there was a story
I read where I think it was like in some
hospital people noticed that iPhones stopped working when they were

(26:53):
near one MRI machine. But it wasn't Android phones;
it was just Apple products. And
it turned out that some sort of
switch or some other component within these Apple devices
had some small enough gap, and it happened to
be this MRI machine had a helium leak, and
the helium atoms were just the right size to
get into this component but didn't affect Android devices

(27:15):
or other things. And so it was this wild thing
that just brought home the deeply physical nature
of computing. And so for me, when I think about
the simulation hypothesis, I don't think about it as like,
oh no, like I'm being controlled by aliens or humans
in the future or whatever it is. It's much more about, okay,
how do I think about breaking open computer games, or
the deeply physical nature of bugs, and all this.

(27:37):
That's the kind of stuff that I find most
interesting about the simulation hypothesis. I also think about it
as, for me, almost this cry for myth
in the tech world, where it's like, oh,
the Silicon Valley world is
deeply rational, deeply logical, but we still kind

(27:59):
of need some sort of myth or organizing story
in our world. And the simulation hypothesis and ideas around
the singularity, certain ideas around longevity or AI or things
like that, many times they're also based
on technology, but when they get big enough,
those ideas veer into kind of myth and storyland.

(28:21):
And so when people take those ideas
a little bit more seriously, I view it as
fitting a certain amount of myth into
that myth-shaped hole for those types of people.

Speaker 1 (28:49):
So let's return to your grandfather and the iPhone. So
how do you recommend, in your book and in your
life, preserving our sense of magic around the technology that
we have?

Speaker 2 (29:00):
When I think about magic and wonder and
delight in computing, it's never really been an either-or
of, oh, there's either
corporate SaaS software or the fun, weird things.
Some people might tell a story of, oh, there used
to be more of that kind of wondrous stuff,
and now we're just

(29:23):
locked into large social media sites, or just
using these large, bland,
beige pieces of software. I think there is an element
of that, but the truth is these two aspects of computing
have always coexisted. Alongside the really big,
refrigerator-sized mainframe computers, there were people

(29:45):
trying to build early computer games. And then once
we had personal computers, there were a lot of fun,
weird things, people experimenting with fractals, but also people using
spreadsheets in businesses. So it's not an either-or.
And the truth is, even on the web now,
alongside the large websites, there's a term
people talk about called the poetic web, where

(30:07):
it's the more human-scale, fun,
funkier, and weirder sort of websites. And for me, it's
really just a matter of trying to actually discover
these kinds of things and realize that it's
always been out there; it's really just a matter
of being able to find it. And so
I view some of the ideas in the

(30:27):
book almost as a proof of existence: oh,
these things do exist out there. You don't
necessarily have to be as excited as I am by
some of the examples I give, but let that be
a guide to, okay, there are other things out there
that are just worth enjoying and experiencing and delighting in.
And I think part of that often is just kind

(30:48):
of at the smaller scale. And I do actually think
that one of the exciting things about AI-generated code
is that it really allows for this kind of democratization
of building software. People have talked about this
kind of thing for a very long time: it
shouldn't just be the domain of big companies
or serious software developers to build things that
are going to be used by millions or hundreds

(31:10):
of millions of people. There should be a way for
each individual user to build the bespoke thing
they want. And so the novelist Robin Sloan has this phrase,
I think it's that an app can be a home-cooked
meal: this idea that you don't necessarily need
to build something for everyone. You can build a
little program for yourself or for your loved ones, and
that's fine, and that's great, in fact. And the truth
is, spreadsheets were actually a simple version of this kind

(31:31):
of thing, because you can actually program in very simple ways.
And then there was HyperCard, on some
of the early Macintoshes, which was
this authoring program for building weird little
pseudo-website programs on your own computer.
But now, with AI-generated code, I really see this
democratization potential blossoming. And so for me,

(31:52):
that is the kind of thing that really can
hopefully induce a sense of wonder in people, where
they can now build all the programs that they want.
It used to be that if you
explored the world and went about your
day and noticed interesting problems
that could maybe be solved by software, if you weren't
a software developer, you would have to shut
down that portion of your mind, because you couldn't do
anything about it. But now you can turn it back

(32:14):
on, because now anyone can build those kinds of things.
And so I think that is actually a really interesting
source of wonder.

Speaker 1 (32:18):
And you know, to my mind, there's the flip side
of that coin, not just for the individual, but for society.
What's going to come out of this? So tell us
about Don Swanson's paper from, what was it, I think
the eighties or something, about undiscovered public knowledge. Tell us
about that.

Speaker 2 (32:34):
Yeah, so Don Swanson, he's this information scientist. In
the nineteen eighties, he wrote this paper called Undiscovered Public Knowledge,
and he begins it with a thought experiment.
He says, okay, imagine
somewhere in the scientific literature there's a paper that says
A implies B, and then somewhere else in the literature,
it could be in the same field, it could be an

(32:54):
entirely different field, there's another paper that says B implies C.
And so if you connected them together, you
would say, oh, maybe in fact A implies C. But
because the scientific literature is so vast, no one has
actually read both of these papers, and so
that knowledge, the connection, was undiscovered, but it was
public, because it was out there. And so it's one
of these things where if we actually had ways of

(33:17):
stitching together all the scientific knowledge that was out there,
we would actually be able to make new discoveries that
were just lying there, ready for the taking.
The interesting thing with Swanson is that he was not
content to leave this as a thought experiment. He actually
tried to test it in the real world, and he
used the then cutting-edge technology, which was, I
think, keyword searches on the MEDLINE database.

(33:37):
He actually found this relationship between, I
think it was, consuming fish oil and helping treat
some sort of circulatory disorder, and I
think he was able to publish it in a medical
journal even though he himself had no medical training, which
was kind of wild. And so I think with
a lot of these AI tools,
we are now going to be able to
stitch together lots of different ideas and navigate

(33:59):
them, the latent space of knowledge or however you
want to describe it, in a way that
has really never before been possible.
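Swanson's A-to-B-to-C setup can be sketched as a tiny toy program: given a set of "X implies Y" claims mined from separate papers, look for transitive links A implies C that no single paper states directly. The claims below are illustrative stand-ins, loosely echoing the fish-oil example, not Swanson's actual data.

```python
# Toy sketch of Swanson's "ABC" literature-based discovery model:
# find implied links A -> C where A -> B and B -> C each appear in
# the literature, but A -> C itself is never stated in any one paper.

def undiscovered_links(claims):
    """claims: set of (cause, effect) pairs mined from papers."""
    stated = set(claims)
    hidden = set()
    for a, b in stated:
        for b2, c in stated:
            # Chain A -> B with B -> C; keep only links no paper states.
            if b == b2 and a != c and (a, c) not in stated:
                hidden.add((a, c))
    return hidden

# Illustrative claims (terms simplified for the sketch):
claims = {
    ("fish oil", "lower blood viscosity"),
    ("lower blood viscosity", "relieve Raynaud's syndrome"),
    ("aspirin", "reduced inflammation"),
}

print(undiscovered_links(claims))
```

Here the two fish-oil claims chain together into a link between fish oil and the circulatory disorder that neither paper states on its own, which is exactly the kind of "public but undiscovered" connection Swanson was describing.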

Speaker 1 (34:07):
This is actually my highest hope for large language models:
tackling the biomedical data, putting facts together that
anyone could know but nobody is going to, because they're
published in totally different journals. I wrote a paper a
couple of years ago now on a meaningful test for

(34:27):
intelligence in AI, and I think what I just
described is going to be enormously helpful for science. But
the next level of intelligence, which I don't think LLMs
are at yet, is actually questioning whether something is true,
and coming up with alternative models and then simulating those
and evaluating them. For example, you know, saying, hey, what

(34:51):
if I were riding on a photon of light, what
would that look like? And then getting to the theory
of relativity and realizing the trajectory of Mercury can be
explained by that, and so on. So that's the sort
of thing that LLMs don't do now. But yes, I
think they're going to be enormously helpful in this discovery
process within the public knowledge that's already sitting out there.

Speaker 2 (35:14):
People are already talking about AI scientists and things
like that, whether it's helping with stitching together
the knowledge, or hypothesis generation, or
maybe eventually even this kind of thought experiment
and then examining the implications of the
thought experiments. But I do think, even if they're
not necessarily able to do everything on their own,
the potential for this kind of human

(35:36):
scientist and machine partnership will hopefully unlock a lot of
information and knowledge that is already out there;
we just don't even realize it.

Speaker 1 (35:43):
What's one thing that you wish more people understood about
the coded systems that surround them?

Speaker 2 (35:49):
One aspect of code is the extent to which
there's a craft and a style and almost an
art to it. When people think about programming
languages, or which language they want to program in,
the truth is there's a lot of personal
choice and a lot of opinion, very
strong opinions, about which kinds of languages work, but also

(36:10):
even the way in which you program. So,
for example, there's actually this book called If Hemingway
Wrote JavaScript, which takes, I think, the
same coding task and programs it in different ways
according to different authorial styles, to show that
it is a deeply human kind of thing. Now, of course,
many people compare it to writing or fiction or

(36:33):
poetry and things like that, and there are aspects of that,
but I don't want to push it
too far, because the code still has to do something;
it still has to operate. But there really are
almost artistic aspects to code, and I think that
interesting combination of extreme logic and practicality and
efficacy, combined with style and art and problem solving, I

(36:58):
think is something that people outside the
world of code maybe just don't realize.
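That stylistic range shows up even in a tiny task. As a sketch, and these snippets are illustrative rather than drawn from the book, here is the same computation written in two different "authorial" styles:

```python
# The same task -- sum the squares of the even numbers in a list --
# written in two different styles.

def sum_even_squares_imperative(numbers):
    # Explicit, step-by-step style: spell out every operation.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_functional(numbers):
    # Compact, expression-oriented style: one declarative line.
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_imperative([1, 2, 3, 4]))  # 20
print(sum_even_squares_functional([1, 2, 3, 4]))  # 20
```

Both produce identical results; the choice between them is purely a matter of taste, which is the kind of authorial fingerprint the book plays with.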

Speaker 1 (37:02):
So here's a random question, the relationship between software code
and let's say, biological code like DNA. Is this just
a metaphorical thing or is there something deeper there?

Speaker 2 (37:12):
We are wet, squishy, messy things, and down at
the cellular or subcellular level it's incredibly stochastic and random,
with things all vibrating around, and it's wildly
different from the way in which we think about coding.
But the one exciting aspect of this is that
Mike Levin and some of his collaborators have

(37:33):
talked about this idea that traditional computation is really
just a subset of information processing as a whole,
and biology is just another mode of doing that kind
of thing. So I think looking at what biology
is doing and comparing it to how code operates and
how computers operate, where they are similar and where they're different,

(37:54):
just shows you the sheer number of different ways that
computing can be done, and when it comes to
more engineered, traditional computing, we are still only beginning
to scratch the surface. So in that way,
comparing and contrasting the ways biology and computation are similar
and different can be enormously valuable.

Speaker 1 (38:11):
So let's end with telling us what your message is
at the heart of The Magic of Code, your new book.
What do you hope that people will take away from
it in seeing the world around them?

Speaker 2 (38:22):
Steve Jobs had this idea that computers are a
bicycle for the mind. The idea behind this was
that he was reading, I think, some old Scientific American
article with a chart of the
energy efficiency of different organisms. Humans were kind of mediocre,
and maybe some birds were much better, but then
everything changed when a human got on a bicycle, because
suddenly they were much more efficient. And his idea was

(38:42):
that computers should be this bicycle for the mind,
helping accelerate how we think, how we interact and engage
with the world. And that's really ultimately what it's all about.
And so for me, whether I'm thinking about trends
in super-powerful AI or certain other things around
the Internet, whatever it is, we all have to
be thinking about it, not just saying, oh, these are interesting trends,

(39:04):
I wonder what things are going to look like in
the future, but more, no, these tools are for me:
what is the future that I want to live in,
and how can I make that human-centered
future with technology that much more possible? And so the
book is a guide to thinking
about all these different human-centered ideas, to
hopefully provide a guide for that sense of wonder and

(39:25):
delight and humane aspects of computing.

Speaker 1 (39:33):
That was my interview with complexity scientist and lover of code,
Sam Arbesman. We are right now at the beginning of
a centuries-long experiment in computation. For the first time
in history, we can build dynamic worlds that evolve
and adapt. We can simulate climate futures, or economic collapses,

(39:55):
or entire societies that rise and fall in silico. But
Sam talks about code not just as a set of
instructions, but more generally as a kind of spell, a system
of symbols that does something real in the outside world.
And part of what I find the most amazing about
our current moment in time is the way that code

(40:18):
can and has evolved past the understanding of its creators.
And that's the paradox we're sitting with, and in some
sense have been sitting with for centuries now:
that we are building systems more powerful than our ability
to fully understand them. There's one more thing I just
want to touch on from Sam's book, the idea that code,

(40:40):
like language or like myth, offers us a kind of mirror.
Code can reflect our values and our metaphors, and our
hopes for control, and our particular curiosities about how the
world works. I suspect that someday there are going to
be code anthropologists who look back on the kind of

(41:01):
programs written by different civilizations at different time points, and
it will tell them as much about those civilizations as
their books and plays and religious practices, because our code
reflects the assumptions that we build into it and the
blind spots that we forget to consider. Every simulation has
some of us in there. So with the passing of decades,

(41:25):
we're going to go beyond how do we code the
world to questions about who's doing the coding, and what
do we make sure is in there, and what do
we choose to leave out? And what are the limits
of what we can simulate? And when does it matter
that those limits shape our conclusions? In any case, as
I think about what Sam and I talked about, I

(41:46):
come to something like this conclusion. As we look into
the deep future, we may find ourselves less in control
than we thought, but also more creative than we ever
thought possible. Our minds will be riding bicycles and eventually
motorcycles and jets. And that is the sense in which

(42:07):
even the most logical systems we've ever built have a
healthy dose of magic. Go to eagleman dot com slash
podcast for more information and find further reading. Join the
weekly discussions on my substack, and check out and subscribe
to Inner Cosmos on YouTube for videos of each episode

(42:30):
and to leave comments until next time. I'm David Eagleman,
and this is Inner Cosmos.

Host

David Eagleman
