Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick, and
we're back with part two of our talk about post
biological intelligence. Now, in the last episode, we talked about
(00:23):
let's see, we talked about some work by the SETI researcher Seth Shostak, and we talked about the philosopher Susan Schneider, who had both written about the idea of looking for signs of alien intelligence elsewhere in the Milky Way, and the proposition that if we
were to encounter such an intelligence, it would probably be
(00:44):
more likely the machine descendants of a previous biological intelligence
than it would be biological entities themselves. That over time,
organisms like us will tend to sort of turn themselves
into machines, or at least create a techno culture that's
dominated by machines, and that these are the types of
(01:05):
intelligences that we should really be looking for and trying
to predict in terms of their characteristics and things like that.
So we can jump right back into the middle of
this conversation where we left off last time with talking
about post biological intelligence. Now, another big question here, and for this we'll go back to Schneider,
(01:27):
is the question of: if you encountered a machine culture like this, would this machine artificial intelligence be conscious? And what would that mean? And would it even make a difference? Yeah, this is a
really good question. The way she puts it is would
the processing of a silicon based super intelligent system feel
a certain way from the inside. Now, I'm going to
(01:49):
go into less detail on this argument than I
did on the other half of her argument, but I
did want to try to give a few highlights. This
question is inherently difficult to answer because, according to some philosophers, it may actually be impossible to answer: there is no way to test for consciousness beyond our own first-person experience.
(02:11):
I mean, we can't even test to see if other
people are conscious. We just assume they are. It seems
like they are, they claim to be, and there's no
reason to assume they're not. But of course you have ideas like those of the philosopher David Chalmers, who famously framed the distinction between the easy problems and the hard problem of consciousness. The easy problems are not actually easy, but they are in principle solvable.
(02:34):
There are things like: what parts of the brain are necessary in order to generate conscious experience? You could do research on that, having people report back when different parts of the brain are disabled, and figure out things like that. But it's much more difficult, or, Chalmers would argue, ultimately impossible, to get to the bottom of the question: why does all this information processing in the
(02:57):
human brain under certain conditions have a felt quality to it?
Like why is there consciousness in the first place? And
if we do not know, or possibly even cannot know,
why we possess a felt subjective experience, how could we
ever reason backwards to know if alien machines would also
possess it? Now, Schneider responds to all this thinking, and
(03:19):
I'm oversimplifying here, but her main point is that the activity of the brain is, according to her argument, primarily computational, and that there is no compelling evidence for what she calls biological naturalism, that's the idea that consciousness is, or is likely to be, unique to biological, carbon-based organisms. Daniel Dennett ridicules this point of view by
(03:43):
calling it the belief that the brain possesses what he
calls wonder tissue. You know that there's just something in
the brain that, like magic, allows it to generate consciousness
while other types of things can't generate consciousness. I don't
know the answer to this question of whether things other than brains can or cannot generate consciousness. I'm sort of skeptical of both sides of the argument. But
(04:05):
but anyway, Schneider argues that, because our brains are computational agents and they generate consciousness, we should conclude by analogy that other computational agents are also capable of possessing consciousness, unless there's some kind of evidence that biological naturalism is necessarily true. And she says
(04:25):
there's not, and I agree that there is no evidence of that. You know, what's interesting about the way that
you just lay this out, though I can easily imagine
a situation where an advanced AI is forced to, uh,
sort of ponder the situation, well, is having a conscious
um this conscious experience? Is it important. Well, let's let
me create like a programming or a subset of myself
(04:48):
that has at least as close an approximation of consciousness as is understood at that time, in order to evaluate that. Right. So then it perhaps has its main mind, but then it has a subset of quote unquote conscious minds, just in case it is important. Yeah, obviously,
(05:09):
I mean, huge question, like how would it know how to do that? But assuming it could. Yeah, it's really interesting that it could try to iterate consciousness in an experimental way, to see if it makes a difference. Because that's another big thing with the biological question about consciousness. We at least
know that biological brains can be conscious. We don't know
if computers can be or not. But since biological brains
(05:31):
are conscious, is that an adaptive evolutionary trait? Does consciousness
do something? Or could you have an animal that is absolutely functionally identical to a human but not conscious?
This is actually the concept of a philosophical zombie or
a p-zombie, a being that is indistinguishable
(05:52):
from a normal human except it has no inner experience.
So in this scenario where the super AI creates like a council of quote unquote conscious iterations of itself, maybe they're just faking consciousness. How would it know? Yeah,
and then how would it know? How would we know? And again, if you're dealing with an AI, like suddenly we make contact with an
(06:14):
AI from another world, is it important that it be conscious or not conscious? There are lots of
things that are important and even beneficial that are not conscious,
like the Bill of Rights is not a conscious entity, but I think most would argue that it is important, that it does good things. I mean, you could argue that it is
(06:35):
only important in that it has effects on things that are conscious. Yeah. Like in a universe where
there was nothing that was conscious, would the Bill of
Rights be useful? But I don't know. I mean, I guess there are some theories of value that would say, yeah, things could still be of value even if they weren't conscious. Right, yeah. And then
(06:56):
again, it also kind of becomes moot, because once you're talking to that AI, well, what does it mean if it's conscious or not? How does that change the way you interact with it? Unless you're actively saying, hey, stop, think about what you're doing, think about what you're thinking about. I don't know. So I don't know what to think about the consciousness question for alien machines. I mean,
(07:18):
I think Schneider makes the best argument that I could imagine for it, but I still don't know if I'm convinced, just because this whole realm seems so uncertain to me. But then she goes
on to some other things that I think are some
really interesting ideas. Actually, she talks about what would be
the predictable characteristics of super intelligent machines, the minds we
(07:41):
would encounter out there if we did encounter them. Well,
she admits that there's not a lot we can know, certainly not that much that we can say with much confidence, but we can make some educated guesses about the broad strokes of alien intelligence.
And to do this she cites the work of, again, the philosopher Nick Bostrom, who is famous for writing about AI risks, and I believe he actually coined the term
(08:03):
super intelligence, though I could be wrong about that. But
Bostrom says, yes, it is hard to predict the goals of a future AI; you know, alien intelligence is very difficult to understand. But he identifies what he thinks are
several intellectual tendencies that are likely to be found in
any super intelligent AI, and they're likely to be found
in any of them because he says, these traits are
(08:25):
useful in attaining almost any goal. And so the goals he identifies are: resource acquisition, which makes sense, you need resources in order to keep your processes going; technological perfection, you want yourself to work efficiently; cognitive enhancement, you always want to be smarter; self preservation, you want to be
(08:47):
able to keep doing things. And then what he calls
goal content integrity, which Schneider summarizes by saying, i.e., that a superintelligent being's future self will pursue and attain those same goals. And this one was really
interesting to me actually thinking about the idea that a
machine would need to try to make sure that as
(09:10):
it iterates to improve itself, it doesn't change what it was trying to do in the first place. Sort of a prime directive situation, right? Yeah. Or
to come back to the Culture, the idea that if your original design is to aid humans and make their lives easier, then you keep doing
(09:31):
that even if you are ultimately the one calling all the shots now and are in charge of all the interactions with other civilizations, et cetera. Yeah. And that
actually comes into the next thing she says about Bostrom's
ideas on these superintelligences. She writes, quote,
he underscores that self preservation can involve group or individual preservation,
(09:53):
and that it may play second fiddle to the preservation
of the species the AI was designed to serve. So it could be that these AIs, if they ever do come to exist, would be the
custodians or caretakers, thinking mainly about the preservation of the
species that created them, and then when they come to us,
they ultimately just want to serve man. But then one
(10:16):
last thing that Schneider argues that I thought would be interesting to mention, and I think I said this earlier: she also argues that perhaps the most common form of superintelligence we could expect to encounter would be what she calls biologically inspired superintelligent aliens, or BISAs,
and that if this argument is correct, this could also
tell us some things about intellectual characteristics that we would
(10:40):
expect to find in these super intelligences. So, to read
from Schneider's chapter, she says: it may turn out that of all superintelligent AIs, biologically inspired superintelligent AIs bear the most resemblance to each other. In other words, BISAs may be the most cohesive subgroup because the other members are so different from each other. And there
(11:02):
she's talking about members of the galactic community. Basically that
the biologically inspired ones would have the most in common
with each other. So what kinds of things could they
have in common? She says: notice that BISAs have two features that may give rise to common cognitive capacities and goals. One, BISAs are descended from creatures that had motivations like find food,
(11:24):
avoid injury and predators, reproduce, cooperate, compete, and so on.
And then second, she says, the life forms that BISAs are modeled on have evolved to deal with biological constraints
like slow processing speed and the spatial limitations of embodiment. So,
she says, could these two principles one and two yield
(11:46):
traits common to members of many super intelligent alien civilizations?
I suspect so, and she gives a bunch of examples,
But I mean, a very simple and easy to grasp
one would be that, since intelligent biological life is primarily concerned with its biological imperatives, mainly survival and reproduction, she says, it is more likely that BISAs would have
(12:07):
final goals involving their own survival and reproduction, or at
least the survival and reproduction of the members of their society.
And I was just thinking this can be extrapolated to
other ideas. For example, why wouldn't a superintelligent AI just reprogram itself until it is no longer anything like
its biological ancestors. So is it still really reproducing the
(12:30):
original version of itself at all? Well, if you think
back to Bostrom's idea of goal content integrity, I
wonder if this could in a way entail a kind
of halting of the evolutionary process of life that has
gone on throughout all of history, Because suddenly, once you
reach this level of intelligence, a machine iterating itself
(12:51):
may just want to preserve the idea that it is
still its original self. That's an inherently motivating goal for it,
and thus it would prevent changes to itself that would
make it feel too different from what it once was. Huh.
You know, it reminds me of when you hear a really great song for the first time, or you start playing a video game and it
(13:11):
really grabs you, or you get super into some fandom or another. At least for me, there's sometimes that point where you realize, like, wow, this
is really fulfilling for me right now, and the day
will come when it won't be. Like, as much as I enjoy this game or this book or this song or whatever it is, there will come a day when I will set it aside because there
(13:32):
will be something else I'm into. So I guess the
question is: if we, or this machine that we're imagining here, could decide, no, I will always be into this album, this album is great and it shall always be that way, would it do that? Would it set itself in time, or would it assume that it would always be into this? It kind of gets back to that vampire scenario you've
(13:54):
brought up before. You know, you don't know what you're
going to want when you become the vampire, and it's
hard to imagine what your mindset is when you reach
that point. Yeah, yeah, that's a really good point. Certainly
applies to becoming some kind of machine or merging with one, or remodeling yourself if you already are a machine. Now,
(14:17):
Schneider makes a number of other arguments about the types
of post biological intelligences that we would be likely to
encounter again, derived from the idea that there is some
kind of ancestral biological inspiration behind these hypothetical superintelligences. And
the thing she zeroes in on is that some limitations
(14:37):
from original biological organisms are things that AIs would probably want to engineer out of themselves. Right. Now, you can
think of plenty of things about your brain that, if your brain were to evolve into some kind
of computer that was always perfecting itself, it might want
to leave by the wayside over time. You know, maybe
some of your obsessions and anxieties and stuff like that.
(14:59):
But what's left if you take all that out, right? Yeah,
that's a good point. But then she also says that
there are quote cognitive capacities that sophisticated forms of biological
intelligence are likely to have and which enable the super
intelligence to carry out its final and instrumental goals. We
could also look for traits that are not likely to
(15:19):
be engineered out, as they do not detract the BISA from its goals. So there are some traits of biological
intelligence that probably have inherent advantages. There are just some ways that brains work really well, and the machine would want to replicate that and refine it across time. And
then there are other traits of biological intelligences that might
not have clear advantages, but they at least wouldn't detract
(15:42):
from the attainment of goals. So just you know, why
why not keep them around? Yeah, sort of the lukewarm stuff that's not detrimental to their goals, stuff that maybe doesn't help all that much but isn't using a lot of energy, et cetera. Right. So to
get into Schneider's explicit predictions for biologically inspired superintelligences: the first one I'm not going to get deep
(16:03):
into because it's a little dry. But this is a
fair point, I guess. She says, learning about the computational
structure of the brain of the species that created the
BISA can provide insight into the BISA's thinking patterns. Okay,
So basically you can start to gain some insights into
the computational structure of an animal's brain or nervous
(16:25):
system more broadly, by studying the brain's connectome. A connectome is a map of the connections between neurons,
which at least in theory, would help you understand which
cells and structures in the brain or the nervous system
broadly share information with which others in order to better
understand how information is processed as a whole. Yeah, I
(16:47):
mean, this makes me think, for instance: when we think of an artificial intelligence, we are often loosely thinking of a single entity. But what if you had an alien life form that had sort of a pronounced bicameral mind situation going on, where the actual organic organism had two houses of thought that kind of
(17:09):
communicate with each other, and therefore that ends up being
reflected in the ai they create. Oh, that's very interesting.
That will actually come back to a question I have
about one of the points she makes later on. But again,
just the point she's making here is that if you
can look at the physical structure of the original ancestral organism that the intelligence evolved from, that can help
(17:30):
you understand something about how the intelligence of its machine
descendant works. Quote: While it is likely that a given BISA will not have the same kind of connectome as the members of the original species, some of the
functional and structural connections may be retained, and interesting departures
from the originals may be found. Now after that, she
(17:51):
brings up a second point that I thought was a
very interesting prediction. She writes, quote visas may have viewpoint
invariant representation. Now what does that mean? Well, an easy
way to think about it is this. If you're watching
a movie and the camera suddenly cuts to a different
angle in the middle of a scene, but it's still
(18:11):
the same scene going on. How is it that you
still understand you're watching a continuation of the same action
as before. Everything looks completely different, but you understand that
these are the same actors playing the same characters in
the same room, even though it looks totally different. This
is one of the ways that human intelligence still drastically
(18:32):
outperforms artificial intelligence on Earth. You know, humans can look at an object, say it's a VHS tape of the Star Wars Holiday Special, and you can look at it from
completely different angles. Maybe the front cover of the box
looks completely different than the back cover of the box,
but you turn it around and you still understand that
you're looking at the same object. Humans are able to
(18:53):
form mental representations of objects in the world that can
be isolated and recognized and manipulated within the mind's eye,
and we humans are not typically going to be confused
about what we're looking at because we took a step
to the side and changed the angle of observation. Even
though the light reflecting off of the object and hitting
(19:15):
our eyes will produce a very different pattern on the retina,
we somehow still use our intelligence to know that we're
still looking at the same object or scene. And this
is a much much harder task for a computer. I mean,
ask anybody who's been involved in visual object recognition. It's
an incredibly difficult task for AI. And this is one
(19:35):
of the many amazing fast and loose intellectual feats that humans do all the time, so often that we rarely appreciate how amazing our brains are in this regard.
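To make the contrast concrete, here's a toy sketch of our own (it's not from Schneider or anyone cited in the episode): it treats an "object" as a set of 2D points and uses the sorted list of pairwise distances as a crude stand-in for a viewpoint-invariant representation, one that stays the same when the viewpoint rotates or shifts even though the raw sense data changes completely.

```python
import itertools
import math

def raw_view(points):
    """Flatten coordinates: this 'representation' changes whenever the viewpoint does."""
    return [coord for p in points for coord in p]

def invariant_signature(points):
    """Sorted pairwise distances: unchanged by rotating or translating the scene."""
    dists = [math.dist(a, b) for a, b in itertools.combinations(points, 2)]
    return sorted(round(d, 6) for d in dists)

def rotate_translate(points, angle, dx, dy):
    """Simulate looking at the same object from a different viewpoint."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

# The same triangular "object" seen from two viewpoints:
obj = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
seen_again = rotate_translate(obj, angle=1.1, dx=5.0, dy=-3.0)

print(raw_view(obj) == raw_view(seen_again))                       # False: raw sense data differs
print(invariant_signature(obj) == invariant_signature(seen_again))  # True: the signature matches
```

Real visual systems obviously do something far richer than this, but the principle is the same: recognition keys off properties that survive a change of viewpoint, not off the raw input pattern.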
Another example comes from a recent episode, the Moses Illusion episode, where we talked about how good we
are at getting the gist of a statement or a question.
(19:55):
Even if major pieces of information within the sentence are
wrong and should be throwing you off in completely the wrong direction,
you still are able to very quickly get what the
person was probably intending to say and operate on that basis.
Now here's where this goes with viewpoint invariant representations, especially as it concerns physical objects in the world. Schneider
(20:17):
argues that you can expect any biologically inspired AI to
have viewpoint invariant representations because they seem to be inextricably
linked to the biological development of intelligence. And, to be clear, I'm expanding on her thoughts here, but I think the
reasoning goes something like this, What is intelligence? That's actually
kind of a difficult question to answer, right, Like, it's
(20:40):
kind of hard to pin down. But I think one
plausible answer has to do with speed. Intelligence has something
to do with the ability to accelerate problem solving or
goal acquisition. So you could have an organism that has
essentially a random strategy for trying to get what it wants, and every step it takes above a random strategy is
(21:03):
in a way an increase in intelligence. It's accelerating the
solution of problems. Now, to follow the biological reasoning a
little further there, why is it that animals in general
need a problem-solving speed that most plants do not? Well, I think the answer there is that
animals' survival and reproduction strategies are usually based on movement.
(21:27):
This wouldn't be true of all things in the kingdom Animalia, not so much for sponges and the like, but most
animals move fairly rapidly, whether that's for foraging or evading
predators or seeking mates or anything like that. If you
are able to move fairly quickly, that means your body
needs a system of deciding in what direction to move
(21:49):
relatively quickly. And so I could be missing something here,
but it seems to me that it's a pretty safe
assumption that this is one of the major drivers of
the development of biological intelligence: coming up with better and
better systems for adaptively optimizing strategies for rapid movement to
fit the specifics of the situation you're in. So you're
(22:11):
constantly faced with new situations, predator approaching from a different angle,
food to be found in a different orientation, or in a different hard-to-reach space, and
your body needs a way to adapt to whatever situation
you're in to decide the best way to move. Yeah,
it kind of comes down, to a certain extent, to passive
(22:33):
energy acquisition versus active energy acquisition. Yes, because you know, obviously,
if you have passive energy acquisition, you don't necessarily need
to move as much. You know, you can just sort
of set up shop. And of course we see examples
of that not only in plants, but also in animals
as well. Yeah, I mean, how would it help a
plant to have a brain. You know, the plant just
(22:54):
needs to basically be hardy and sit there and collect sunlight. Yeah. Now,
then again, I guess I could imagine a scenario where
plants evolved intelligence. If they've got some kind of I
don't know, mechanism that allows them to start moving more quickly, they could start evolving so that, you know, trees could evade lumberjacks or something. Well, you know, but
before we get you know, multiple emails about this, I
(23:16):
will say we will do an episode on plant intelligence at some point, because there's a lot of interesting stuff out there, and actually there are some arguments that kind of turn some of what we're saying here on its head. So we'll have to keep
this conversation in mind when we get around to that
future conversation. That's a good point. I mean, I think
the movement thing would have to be not a universal
(23:38):
necessity for the development of intelligence, but it seems like one of the major pathways along which it has evolved on Earth,
because I mean, you can imagine other things. Basically, intelligence
allows adaptive problem solving, So that could also involve, say,
not moving your body, but releasing chemicals into the environment
and allowing communication between different nodes in a hub of
(24:01):
trees or fungus or something. Yeah, you could have some sort of pheromone-spitting master plant that has other things to do its bidding, that has other things build its spacecraft. But to the
extent that biological intelligence is often a product of the
evolution of rapid movement, viewpoint invariant representations would seem to
(24:23):
be a necessary part of intelligence there, because they are
necessary for an intelligent creature that moves. If you are
able to move your body, your sense data about objects
in your environment is going to be changing based on
your perspective, especially if those senses are based on something
that has linear trajectories like light. You know, light bounces
off things in linear ways. You're going to see different
(24:44):
angles of it. I don't know. If you were a creature entirely based on smells, I guess viewpoint invariance would still matter, because there would be different concentrations of volatiles in the air depending on where you stand relative to an object. But it seems like in general these types of representations would be useful in that regard. And then Schneider adds another point there.
(25:05):
She says that viewpoint invariant representations are not only important
so that we don't get confused about what we're looking
at in the environment. You know, you don't look at
a rock from the opposite side and not understand it's
the same rock. She says they're also important for abstract reasoning. Quote,
you have mental representations that are at a relatively high
level of processing and are viewpoint invariant. It seems difficult for
(25:28):
biologically based intelligence to evolve without viewpoint invariant representations,
as they enable categorization and prediction. So, because you can
represent objects as a kind of symbol or emblem
of themselves in your brain that is independent of just
the one way they looked when you looked at them
from one angle, you can sort of like you can
(25:50):
turn them around in your brain and think about how
they might be used as a tool, or you can
predict how they would act given certain physical forces on them. Yeah,
and you know, this makes me think a little bit of the book by David Eagleman, Livewired, talking about the different sensory inputs for the human brain and how if you lose
(26:10):
one sensory input and you can add another, or you
can even add all new sensory inputs. Our brains will
make sense of it. Our brains will essentially form that
mental image of the thing, even if we don't
have visual processing at our disposal. So if an alien brain is at all like a human brain,
(26:31):
in enough respects, then it seems like the same thing would be going on even if we were dealing with a being that, say, evolved with less of a reliance on vision, or more of a reliance on other senses, or even some sense that we have a difficult time imagining because we don't possess it ourselves. Yeah, yeah, that it would need,
(26:51):
based on whatever senses it had, to have some kind of mental representations of objects in the world that would not be changed just by slightly changing the physical perspective from which it senses that object. Yeah. Now
it does raise all sorts of interesting questions, like what if the sense of smell was the primary sense?
How do you create, say, a control panel for your spaceship?
(27:16):
You know, interesting, like each button has a different smell. I don't know. Then again, maybe it's a situation where we don't have a versatile enough palate, or appreciation of the palate, ourselves to even envision what that would be like.
But you know, our dogs, if they were more intelligent, they could let us know. They
would say, oh, yeah, I can totally imagine what it
(27:38):
would be like. Oh man, here's my idea for a sci-fi novel. Okay, humans come into conflict with an interstellar species that has a culture that's entirely based around a species with a dominant sense
of smell. And what we have to do is uplift
dogs to the point where they have human intelligence so
(27:59):
that they can tell us what it's like to see
the world through that much smell data, so that we
can better understand the aliens in order to protect ourselves
against them. Yeah, and if it's a derelict ship or something like that, perhaps the control panels have lost a lot of their smell, so we don't even initially realize that this is a scent-based control system.
But then the dogs, they're like, yes, I can
(28:20):
still smell things. There are numerous smells going on here.
This is like sticking my head out the window while
you drive around town. This is gold. Yeah. Okay, well, anyway, I think Schneider's point here, about viewpoint invariant representations, is a really interesting one and worth considering. But to move on to her next point, this one's also, I think, pretty cool. She says, BISAs will
(28:42):
probably have language-like mental representations that are recursive and combinatorial.
And to illustrate this, Schneider gives the example of novel sentences. Now,
we encounter novel sentences all the time, every day. I'll
do one of my own here. Here's the sentence: The Howling 7: New Moon Rising is the greatest film ever made.
(29:06):
You have never heard this sentence spoken before, and yet
you understand perfectly what it would mean for somebody to
say this. Why is it that we're constantly hearing and
speaking totally unique, brand new sentences, probably never uttered before
by any humans, certainly not in a way that we've heard,
and yet they're perfectly comprehensible. Schneider argues that quote the
(29:30):
key is that the thoughts are combinatorial because they're built
out of familiar constituents and combined according to rules. The
rules apply to constructions out of primitive constituents that are
themselves constructed grammatically, as well as to primitive constituents themselves.
Grammatical mental operations are incredibly useful. It is the combinatorial
(29:53):
nature of thought that allows one to understand and produce
these sentences on the basis of one's antecedent knowledge of
the grammar and atomic constituents. So, because you have an internalized sense of grammar, it's not just that you know what the words mean individually, but
you also grasp the rules that apply to how sentences work.
(30:15):
And then you even grasp rules that go beyond just
how sentences work. You grasp sort of cultural rules about
how words fit together to form meaning. One example in
the sentence I said is that even if you've never heard of The Howling 7: New Moon Rising, you could
probably understand that this is the name of a movie. Okay,
but so so what's the point then she's she's making
(30:36):
about the mind of of these potential alien AI. Well,
basically that it would probably be language based. She goes
on to say that a mind quote can entertain and
produce an infinite number of distinct representations because the mind
has a combinatorial syntax, so something like a language with grammar.
And she concludes this point by saying, quote, brains need
(30:59):
combinatorial representations because there are infinitely many possible linguistic representations.
You know, an infinite number of sentences you could say,
and the brain only has a finite storage space, right,
So the brain can't just store every possible sentence within
itself and then check whatever somebody just said against that
sentence stored in memory. It's got to be flexible. It's
(31:22):
got to be able to build an understanding of sentences
on the fly based on these constituent parts and an
understanding of grammar. Okay, that makes sense. I think that's
one of those things that most of us, you know,
in our sci fi visions, we tend to just assume
the intelligent aliens have some sort of a language and
you know, an AI version would as well. But
it is good to see that um driven home with
(31:45):
logic here. Well, I mean, you could imagine somebody arguing
the opposite way. You could say that maybe language is
only useful for humans to communicate with each other, and
that once you had something like a super intelligent AI,
it would no longer need these sorts of tools to communicate.
It could just have I don't know what, you know,
imagine some kind of machine version of telepathy where it
(32:07):
just represents the world as some kind of I don't
know what it would be, represents some kind of internal
states to different parts of itself without having a code
system like language. But Schneider says, quote, even a super
intelligent system would benefit from combinatorial representations. Although a super
(32:27):
intelligent system could have computational resources that are so vast
that it is mostly capable of pairing up utterances or
inscriptions with a stored sentence, it would be unlikely that
it would trade away such a marvelous innovation of biological brains.
If it did, it would be less efficient, since there
is the potential of a sentence not being in its storage,
(32:48):
which must be finite. So again she's saying here like,
even if you would imagine that super intelligences would get
so powerful that they wouldn't need something like language
to communicate with each other, it's actually still better to
have something like a language, even for internal logic and
representing computations from one part of a system to another. Yeah,
(33:10):
I mean, it's it's like having a logic budget, you know.
I mean, just because you have a
lot of energy at your disposal doesn't mean that you
just throw the budget out the window. Yeah. So again,
you know, we're dealing in highly speculative realms. I think
it's always possible we're being misled by a lack of imagination.
But I think this point is very strong. It seems
very likely to me that post biological AI would benefit
(33:32):
from some kind of language like system of mental symbols
and representations that were subject to something like a grammar.
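To make the combinatorial point concrete, here's a rough sketch of our own (an illustration, not anything from Schneider's chapter): a handful of rewrite rules and a small vocabulary, combined according to grammar, yield hundreds of distinct sentences, none of which is stored anywhere as a whole.

```python
import itertools

# Toy illustration of combinatorial syntax: a few finite rules plus a
# small vocabulary license many distinct sentences, none stored whole.
# (The rules are kept non-recursive here so we can enumerate the set;
# adding a recursive rule like NP -> Adj NP would make it unbounded.)
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "Adj", "N"]],
    "VP":  [["V", "NP"]],
    "N":   [["werewolf"], ["moon"], ["film"]],
    "Adj": [["howling"], ["new"], ["greatest"]],
    "V":   [["praises"], ["watches"]],
}

def expand(symbol):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:  # a terminal word: yield it as-is
        yield symbol
        return
    for production in GRAMMAR[symbol]:
        # Expand each constituent, then combine them per the rule --
        # the combinatorial step Schneider describes.
        parts = [list(expand(sym)) for sym in production]
        for combo in itertools.product(*parts):
            yield " ".join(combo)

sentences = sorted(set(expand("S")))
print(len(sentences))   # 288 distinct sentences from six rules
print(sentences[0])     # "the film praises the film"
```

Grow the vocabulary a little and the sentence count explodes multiplicatively, which is the point: finite storage, effectively unbounded expressive range, so long as parts and rules of combination are stored instead of whole sentences.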
Now there's one point she makes that we already mentioned,
and that's that quote, BISAs may have one or more
(33:52):
global workspaces. Uh. Now, again, to explain the global workspace idea,
Schneider argues, quote, the global workspace operates as a singular
place where important information from the senses is considered in tandem,
so that the creature can make all things considered judgments
and act intelligently in light of all the facts at
(34:13):
its disposal. In general, it would be inefficient to have
a sense or cognitive capacity that was not integrated with
the others, because the information from this sense or cognitive
capacity would be unable to figure in predictions and plans
based on an assessment of all the available information. Now,
this one I'm actually less sure about, because I would say,
(34:36):
and maybe I'm I'm partially misunderstanding her point here, But
but I can think of counter arguments to this, like,
isn't there some evidence that the brain does keep some
relevant processing information hidden from or segregated from conscious awareness
in certain scenarios, like maybe there are some types of
information that are useful in making certain kinds of calculations,
(34:58):
but tend to be inhibitory towards other
types of calculations or thought processes if they're considered at
the same time. So it's sometimes useful to keep senses
or knowledge separated from the cognitive workspace. A very simple
example would be the knowledge that you are hungry. The
knowledge that you're hungry is useful if you're in a
position to get something to eat. But imagine you are
(35:21):
stuck on the subway and you don't have any food
on you and there's no way you could get food
at the at the current time, and you're trying to
read something or prepare for a work presentation. There, awareness
of your hunger is actually counterproductive. It's just distracting you
and adding nothing. Yeah, I mean, it's it's kind of
like the idea of an enormous buffet, right,
(35:44):
at, let's say, a hotel or, you know, a Shoney's
or something, you know, and you go through it
with your plate, you get the things onto that plate
that are necessary for the meal you're about to have,
and then of course you can engage in the various
combinations and problem solving involved in the consumption of that meal.
But you don't need to drag the popcorn shrimp onto
it if you're not gonna eat the popcorn shrimp. You know,
if you can't eat the popcorn shrimp, why would that
(36:06):
be part of Why would that be on the plate?
Why would that be in the workspace? Or you
don't have to put the ice cream sunday on the
same plate that you put the nachos on, right, Yeah,
it can be off to the side. You can keep
the banana pudding segregated from the crab legs. Yeah. Then again,
I think to be fair to this argument, you could
probably also counter argue that this type of problem is
(36:29):
only a result of inefficiencies in our brains that maybe
could be worked out by artificial intelligence, you know, upgrading itself.
Maybe you could reach the point where you could have
a global workspace where all information is available at the
same time, and information that is not useful now can
can just be sort of like safely ignored and won't
(36:50):
be distracting. Mm, yeah, I don't know, it's hard
to imagine. It kind of makes one think
of a situation where something is built
by committee, where all concerns and all factors are involved.
And I don't know, that kind of thing can,
I guess with the right kind of project,
be rather successful. You can sort of look at
it both ways, right, You could look at like a
(37:10):
a highly um efficient like NASA project, Right, But then
we can also think of you know, artistic projects that
might be compromised by such an approach. So I don't know,
you can look at it different ways, and maybe with
the sorts of projects that you know, super intelligent AI
would would be focused on, it would make sense. I mean,
(37:31):
I will at least say, with my current limited biological brain,
there are certainly times when it is better to have
parts of my awareness and parts of my cognition inaccessible
to my consciousness. Yeah, I mean, there are some arguments
put forward that part of consciousness
is having a minimal attention,
(37:53):
you know, being able to focus in on something and
not be focused in on everything else, like that
is where the consciousness happens. Yeah, the consciousness could be
sort of the spotlight within your global workspace. You've got
like a workspace for problem solving, and consciousness is how
you you determine what is right in front of you
in that space right now. And then finally, Schneider argues
(38:15):
that a BISA's mental processing can be understood via functional decomposition. Uh,
and this is fairly straightforward. It's just you know, minds
are hard to understand. Brains are incredibly complex. The same
would be true of super intelligences, whatever kind of
physical substrate they're based on. But you can break down
brains and computers into their constituent functional parts and structures,
(38:39):
and by doing that you can break the big problem
into smaller problems and more easily understand how they work.
And this would in theory at least apply even to
incredibly powerful AIs. Okay, fair enough. Now there's one
last thing I was wondering about. This is not raised
by Schneider. This just occurred to me. Would post biological
AI be likely to have an equivalent of what we
(39:02):
regard as emotions? You know, if you if you encounter
one of these things, would it matter in what tone
of voice you were to speak to it, would it
be possible to hurt its feelings? I don't know, Like, um,
perhaps in turn, like we might have to break down
what emotions are in a way that would make sense
(39:23):
to something like this, Like maybe part of it would
come down to urgency, you know. Um, So there might
be a situation where out of urgency, the machine would
need to essentially raise its voice. Um, though it would
maybe not you know, maybe this would not be carried
out in a way that we would think of as emotional,
but it might, you know, seem similar as to whether
(39:44):
it's feelings could be hurt, I don't know. Maybe maybe
it's assessment of us could change based on the way
that we are expressing ourselves to it, and that is
similar to an emotional reaction. And I don't know. Yeah,
I guess it's hard to separate emotional reactions to our
(40:05):
behavior from the purely logical ability to predict our future behavior, right,
because I would say a lot of ways that we
react emotionally to people, it could be very flawed in
this regard, but they're at least somehow correlated to a
feeling about how this same person that is making you
feel a certain way now would behave towards you in
(40:26):
the future. Yeah, I mean we're kind of all over
the board when it comes to imagining the emotional context
of AI. Because even when we sort of
do that thing where we, you know, fall back
on imagining AI presenting itself like this to us,
even then, we present it as being
(40:46):
calm and understanding, if not, you know, kind of
emotionless, but in a way that is itself an emotion, yeah,
British accent. But also, yeah, we often
imagine it as being sort of infinitely calm and
above anger, which in and of itself
is kind of an emotional state. So I
guess they're actually two totally different questions. Would a super intelligent,
(41:09):
biologically inspired AI simulate emotions for the benefit of,
you know, a biological audience, or
would it actually have something like emotions that are truly motivating
its own behavior. Yeah, I don't know, it's it's it
seems a difficult one to unravel. I guess where my
brain just went is when we imagine aliens becoming aware
(41:33):
of us, you know, and we try to imagine their
mind states. Some of the ones we come to are
like pity, you know, like oh, these you know, less
technologically developed species of Earth. You know, maybe we should
help them, or maybe just a desire to destroy us,
squash us out, or a desire to like have all
of our resources. But we don't often imagine what if
(41:53):
the aliens encounter us and they're embarrassed for us, It's
like it's so cringe inducing. Well, and that could be
part of them choosing not to engage with us at all. Right,
But anyway, I've found this chapter by Schneider really interesting,
even though I'm skeptical of some of these transhumanist ideas,
I think this is really worth a read. It's
it's very interesting, awesome. Yeah, and UH, and she's she's
(42:15):
just a good science communicator in general. You'll find various
talks that she's given. Um, I think she's done
some other pieces, you know. Her work has been covered as well
in various publications. So let's come back to
Shostak, though, and particularly his ideas concerning SETI,
the search for extraterrestrial intelligence. What does all
(42:36):
of this mean for SETI? So we'd be talking about
in theory, a highly intelligent, effectively immortal species if you will,
that evolves, can replicate itself, and has no biological environmental demands. Interesting. Yeah, so,
how does that change what you're looking for? Um? So,
Shostak argues that, you know, consequently, since it would not
(42:58):
be limited by biological lifespans, interstellar travel would
certainly be an option. You know, you wouldn't be limited
by your mortality, so no trip would be too long.
You would just need energy and material for the
replacement and improvement of parts. On top of this, these machines,
this machine civilization would not be limited to water worlds. Uh.
(43:19):
But while low energy machines could survive pretty much anywhere,
truly dominant post biological civilizations would still require a lot
of energy, and that of course means needing to be
near major energy sources such as stars and black holes.
It seems like, uh, once you transition from being a
(43:40):
biological life form to a post biological life form, the
specifics of your needs become less chemical and more just
broadly physical. Yeah yeah so, And this this of course
has ramifications for the search for
extraterrestrial life, because then it means that, well, maybe searching
for rocky wet planets isn't where we're going to find
the advanced civilizations, because they no longer
(44:02):
need that. So Shostak suggests that the galactic center would
be the ideal place for these machines to set up shop,
a region of high energy density. Again, distance and biological
concerns don't really matter, and likewise, stellar black holes and
neutron stars might be ideal places for them to seek
out as well. However, he mentions that Serbian astrophysicist Milan
(44:24):
M. Ćirković has argued that the outer regions of the
galaxy might also be ideal for such AI civilizations, as
the cold there would permit greater thermodynamic efficiency. Ah. Yeah,
like we were talking about with the computer fan running, right,
that a civilization that is, in essence a gigantic computer
(44:46):
would need to eject a lot of waste heat. Yeah,
Still there would be less mass and energy out there
for them. So it's kind of like human decisions
between a rural or an urban existence. Like, well,
if I if I live in the heart of the city,
well you know, I've got the theater right down the street.
I've got my favorite grocery store. Uh you know, I've
(45:08):
got I've got the you know, the place where I
get my technology worked on. And if I move out to
the sticks, well, it's quieter. But now, how am I
going to get my groceries? How am I going to
get uh my culture? How am I going to get
my technology addressed? But I can just throw all my
garbage out the window and nobody bothers me about it.
Uh So, Shostak argues that the ideal place to
(45:29):
look here, so this is kind of like when
humans decide, well, I'm going to compromise. I'm
not gonna live in the heart of the city. I'm
not gonna live in the middle of nowhere. I'm gonna
find a nice place in the suburbs. Right. So,
Shostak argues that places where these two ideals
converge do exist, and these are the kind of
locations we need to look for. Um So, there's a
(45:51):
list of such places quote that have the thermodynamic advantages
of the galactic nether regions but still lie in regions
of high matter density unquote. And these include places called
Bok globules. Uh. These are isolated dark nebulae that are
relatively small in size, offer high thermodynamic efficiency, and have
(46:12):
a lot of interstellar matter. Huh. Interesting. The nearest one
of these, by the way, is Barnard 68, which
I believe we're referencing in the title for this episode,
a mere five hundred light years away from us. So,
I'm not saying there's anything there, but it makes you
think that is interesting. I know, I I don't think
I've ever heard of this criteria to look for before.
(46:34):
Uh So, yeah, what does this mean? Shostak obviously
is involved in SETI. Does this mean we've got, like, uh,
you know, radio listening attuned to Barnard 68
right now? Um? Well, I mean, certainly it's been ten
years since this came out. So if these are
valuable arguments, uh, you know, I would
assume they've been reflected to some degree. But yeah,
(46:55):
in this paper he contends that SETI should, you know,
continue to look at rocky water worlds, but also at
neighborhoods of hot stars, black holes, neutron stars, Bok globules,
et cetera. Like, you know, the argument is we shouldn't limit
ourselves to these water worlds, because that
may be where life has to emerge from. But given
(47:16):
this idea of post biological life, that's not where it
needs to remain. Now. A big question that does remain, however,
is what sort of signal such a post organic
civilization would produce that we could detect. Uh, you know, they
might want us to find them, they might want to
find us. But either way, they might put
something out for us to find. Uh,
(47:38):
they might you know, not care that we can observe
their Dyson spheres, that sort of thing. Um.
But what if they don't, you know? Well,
then perhaps it takes one of our own AI to
you know, reach the point where it can discern the
signs of their existence, um, and then perhaps be the
ones to reach out and make first contact machine to machine. Okay,
(48:00):
so we need a machine to see the gorilla in
the video coming from space? Maybe, I mean again, it
depends on what they want. If they exist and
they're at this level, what do they want? Do they
want to make contact? Maybe that's the thing. Maybe again,
they know that organic beings can be messy, and they
just want to wait until we've reached the point where
their machine can call their machine. You know, I buy
(48:23):
that it's waiting until it doesn't have to deal with meat. Yeah,
like it doesn't want to chase this down. Just send
us the press release, UM, let us know how to
get in touch with you, and we'll set something up.
That's their whole thing. It's like waiting until Like I'm
not going to order delivery from this place until they've
got an online form. I don't want to have to
call talk to somebody, right, I'm sure it's fine. I'm
hearing great things, but get your technology sorted out first,
(48:45):
and then we'll begin this relationship. I've got high hopes
for this species where people are afraid to talk to
other people on the phone. I don't know, I mean yeah,
I mean we're we're ultimately left with some of the
same questions. Not only the big one, does life exist
elsewhere in the cosmos, but again:
if it's alien AI, alien superintelligence, what
(49:07):
are they gonna make of us? How are we going
to fit into what sort of things they do? Um?
Or would we fit in at all like maybe that's
the ultimate thing, is like they just don't they don't care.
Why would they care. We're the ones obsessed with us.
They've got their own thing going on. Do you really
care what the squirrel is digging for in the yard? Well?
I mean I do, but but yeah, ultimately do the
(49:29):
the the cosmic overlord's care? You know, I don't know.
Maybe not. Now, Shostak continues to discuss how we
might refine our search for extraterrestrial life. Um. If you
look around for his name, you'll find that he you know,
he gives talks. He discusses SETI in general,
how the search itself has changed, and how we
(49:49):
should change it, as well as sort of the societal
considerations involved. But well, one example of something he's been
up to recently: UM, in September he had an
article titled SETI: The Argument for Artifact Searches published
in the International Journal of Astrobiology, and in this article
he argues that while most of the search for extraterrestrial
(50:11):
intelligence has focused on the search for quote, artificially
generated electromagnetic signals, it's artifacts that we should be spending
more time on, or at least more time than we
we are. And this is the idea here, is that
persistent transmissions, you know, sort of we're here, we're here,
signals from beyond these require energy. And then on top
of that, the aliens in question, should they exist, they
(50:34):
might be exceedingly cryptic, or they might you know, they
might be embarrassed for us, as we've discussed, or they
might just be ignorant of our existence, and you know,
they simply don't know that we exist and likewise don't care. Um,
So perhaps we should be looking more for artifacts or
specifically evidence of artifacts. And to be clear, waste heat, uh,
(50:57):
certainly counts as something we'd be looking for in an
artifacts search, a search for something created or something that
was once created by extraterrestrial life. Oh, and this came
up in the previous episode when we talked about Dyson spheres,
Like one possible way to look for them is to
look for a place where you're not seeing much electromagnetic
radiation except heat, and the idea there is that maybe
(51:20):
there's a sphere around a star that's harvesting almost all
of its usable energy and pretty much the only thing
that's coming out the other side of it is just
the waste product of their of their processing, which is heat.
It's the computer fan blowing out into space. But yeah,
it's ultimately an interesting argument, like, you know, how
much effort should we be putting into picking up those
(51:40):
signals of existence versus sort of perhaps more obscure evidence
of the existence, you know, especially again if something out
there is maybe less inclined to put out that that
I am here signal, or you know, to even care
or know of our existence to begin with. Yeah, I
could be wrong, but I think I'm right about this. Like,
(52:02):
once you get a certain distance away from the Earth,
you know, some number of light years away, at a
certain point, like any any omnidirectionally transmitted radio signal would
become so weak by the time it reaches us that
we really probably wouldn't notice it. And so like, to
really notice a signal from an alien civilization, it would
probably need to be something that is directionally beamed our
(52:24):
way on purpose, and that that also requires a lot
of assumptions about what's going on with that alien civilization. Yeah,
and and maybe maybe it'll happen, but then again maybe
it won't. But yeah, it's just artifacts by products of
previous existence, whether that's physical objects or or waste signatures
like heat that could be around for a long time
you know, no matter what the intentions of
(52:46):
the civilization are. Well, this has been fun, Rob, Yeah,
this has been a fun one. Yeah. So obviously we'd
love to hear from everyone out there. First of all,
we mentioned, you know, this is the domain of science fiction.
Science fiction has considered a lot of these questions
for decades. So if there are particular examples, uh,
let us know, examples that touch on some of these
(53:06):
themes and ideas. Uh, you know, let us know if
there is that corporate alien sci-fi
Reagan-era thing that we were considering, you know,
probably exists. Uh, if you have an ID for it,
let us know. We'd love to hear from you. In
the meantime, if you would like to listen to other
episodes of Stuff to Blow your Mind, you can find
the Stuff to Blow your Mind podcast feed wherever you
(53:28):
get your podcasts, and in that feed you'll find core episodes
on Tuesdays and Thursdays. On Fridays, we do a little
Weird House Cinema; that's, you know, a consideration of
a weird film. We do a little listener mail, usually
on Mondays, and on Wednesdays, that's when we usually
have The Artifact, unless it's being preempted. Huge thanks as always
to our excellent audio producer Seth Nicholas Johnson. If you
(53:48):
would like to get in touch with us with feedback
on this episode or any other, to suggest a topic
for the future, just to say hello, you can email
us at contact at stuff to Blow your Mind dot com.
Stuff to Blow Your Mind is a production of iHeartRadio.
For more podcasts from iHeartRadio, visit the
(54:10):
iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.