
March 30, 2021 60 mins

Even as humans reach out into the void with robotic probes and turn to artificial intelligence to aid in the search for extraterrestrial life, we face the possibility that the life we find out there might be mechanical and governed by artificial intelligence as well. In this episode of Stuff to Blow Your Mind, Robert and Joe discuss alien AI and post-organic life.

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, a production of
iHeartRadio. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick and
Rob I wanna ask you a question. I think I've
talked about this on the show before, but now I
can't quite recall. You've seen the movie adaptation of Carl

(00:26):
Sagan's Contact, right, Yes, it's been a while. I saw
it when it came out in theaters and I haven't
seen it since. Oh wow, that is a long time ago.
But yeah, it's I mean, it's really worth the watch
that movie. Uh, it always makes me emotional. But like,
one of the things about it that always sticks
in my brain the most is the very opening sequence
where you you're starting um on Earth and you're pulling

(00:49):
out away from Earth, and as you get farther away
out into interstellar space, the signals that you are hearing
coming from Earth, like you're hearing like radio broadcasts or
television broadcast or something, and it just gets older
and older because you're you're pulling out to where older
and older signals are the only ones that have reached
that far. Yeah, and of course there's this very chilling

(01:11):
moment where you get really far out there and I
think you're just getting like a signal of Hitler reading
a speech or something that's just like, oh god, And
it really makes you think about what kind of impression
humanity is making on the broader galaxy. Yeah, I I
specifically remember this this from the film. Yeah, it makes
makes quite an impression and makes you, yeah, a little

(01:32):
reflective on the on on human civilization itself. And and
and if anyone's receiving these signals, anything is receiving these signals,
what they're picking up on and what their impression is
going to be of the of human civilization? Yeah, Like,
what if aliens the only thing they intercepted and had
to go on was a TV edited broadcast of Batman Forever?

(01:57):
What would they what would they conclude about Earth life?
That's it's it's a it's a fun game. Uh, And
it also plays into some fun sci fi to think
about this. Uh. There's of course the Futurama episode where
it's essentially um, uh, what was it? Ally Mc-uh,
what was the lawyer show? Ally McBeal. Yeah, it's like
an Ally McBeal-esque show that was canceled, or it's um,

(02:19):
it's it's a season finale didn't air, or somehow they
didn't receive it. And that's what the aliens have come
to Earth in order to to get. They want the
season finale for this television show. Oh. I think that's
also sort of the premise of Galaxy Quest, isn't it
that they see like a Star Trek style show, but
they think it's a documentary about real life on Earth. Yeah, yeah,

(02:39):
that's right now. Of course, radio signals and so forth,
they're not the only things that we have sent out
into the void. Uh. We of course have sent machines
as well. And I want us to to think back
for a second to the Pioneer plaques, the gold anodized
aluminum plaques attached to the nineteen seventy two Pioneer ten
and the nineteen seventy three Pioneer eleven spacecraft. These were

(03:03):
the first human made objects to reach escape velocity from our
Solar System, and the first physical emissaries of Earth life
and Earth civilization. I think in the years since, they've
actually been outpaced by the voyager probes in leaving the
Solar System? Is that right? I think? I believe so
and there's of course a similar story to tell with
those uh spacecraft as well, but but uh specifically with

(03:25):
the plaques, because of you know, these were of course machines,
they were not human beings. They were powered by nuclear batteries,
they had antenna, uh antenna, they had an assortment of
scientific equipment on board, so they didn't look like us
or in any way really represent biological life, except in
the case of these plaques, which include a number of

(03:46):
symbols detailing the origin of the spacecraft and then to
sort of convey you know, you know, human understanding of
where we are in the Solar System and then the
larger cosmos. But then also it contained these these now
iconic depictions of two human beings, a nude male and
a nude female. Now it's worth noting Carl Sagan regretted

(04:08):
that the humans on the plaque do not appear pan racial,
but rather appear very Caucasian. And also the line representing
the female's vulva was removed, so she's kind of like, um,
like a Barbie doll on this, you know, so they're
not completely anatomically correct, and they seem to only represent

(04:29):
uh Caucasians as opposed to like a the idea of
representing the broader human species as a whole. Now, one
of the things that's super interesting about all of this,
especially given what we're gonna be talking about in this episode,
is that the Pioneer probes and subsequent spacecraft are non
human machines that merely bear in some cases the inscriptions

(04:50):
of human beings, be they you know, actual inscriptions or
media of some sort. Uh. And at the same time,
these are our mechanical works, our machine utterances that are
cast out into the void. They are us reaching out
for and to other life forms. Now today, humans maintain
a small orbital presence, and humans did visit the Moon
in the previous century, but our outreach continues to take

(05:13):
the form of these technological utterances. And even though it
is the work of human beings on our planet to
analyze the data we receive in search of possible signs
of alien life, we also use artificial intelligence in many
scientific and technological applications, including the search for extraterrestrial intelligence.
That is strange. Yeah, and uh, I guess it's interesting

(05:36):
on a couple of levels. So, first of all, you know,
one of the things humans and we've discussed this in
the show before. One of the things that humans and
their AI creations look for are technosignatures, and these
include both radio signals and things like megastructures like Dyson
spheres. You know. Uh so, just as we are
reaching out with our mechanical utterances, we are seeking the
mechanical utterances of others. Yeah, we haven't talked about Dyson

(05:59):
spheres in a while, but unless my memory is betraying me,
I think one of the ways to look for something
like that would be look out there and see if
there's some kind of structure object that is basically only
emitting heat. And the idea there would be, you know,
if all the other frequencies of radiation are being used
up and only heat is coming out of it, that
looks like that's probably a waste product of doing work.
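As a quick aside, here is a minimal back-of-the-envelope sketch of that waste-heat idea, assuming a Sun-like star completely enclosed by a shell at roughly one AU; the shell radius and the blackbody assumption are illustrative choices for the sketch, not anything from the episode:

```python
# Rough estimate of the waste-heat signature of a Dyson-like shell.
# Illustrative assumptions: Sun-like luminosity, shell radius of about
# 1 AU, and the shell radiating as a blackbody. Just a sketch.
import math

L_STAR = 3.828e26       # W, solar luminosity
R_SHELL = 1.496e11      # m, ~1 AU (assumed shell radius)
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant
WIEN_B = 2.898e-3       # m*K, Wien's displacement constant

# All the star's power eventually leaves the shell as thermal radiation,
# so the equilibrium temperature follows from L = 4*pi*R^2 * sigma * T^4.
area = 4 * math.pi * R_SHELL**2
T = (L_STAR / (area * SIGMA)) ** 0.25
peak_wavelength = WIEN_B / T

print(f"Shell temperature: ~{T:.0f} K")
print(f"Peak emission wavelength: ~{peak_wavelength * 1e6:.1f} micrometers")
# Roughly 390 K peaking around 7 micrometers: mostly heat, in the
# mid-infrared, which is the "only heat coming out" signature described here.
```

A shell a few hundred kelvin warm, glowing mainly in the mid-infrared, is the classic infrared-excess signature that Dyson sphere searches look for.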

(06:20):
So it's like, you know, it's the fan on your
computer just blowing out into space. Yeah. Yeah, so, and
and basically coming back to the idea that advanced civilizations
are going to have advanced energy requirements and therefore they're
going to have to harness the energy of entire suns. Now,
the other angle on this that is interesting, and
one that I really hadn't thought about, uh, is that

(06:41):
there may be problems with our use of AI for
such searches, as pointed out by Spanish clinical neuropsychologist Gabriel G.
De la Torre in a paper published in Acta Astronautica. Um. Basically,
the idea is AI could confuse us or tell us

(07:02):
that it has detected impossible or false things in the data.
And our AI creations can certainly reflect our own biases.
We we've discussed that as well, you know, like we
can and and you know this this applies to things
like facial recognition, etcetera. Like we can we can easily
program our own um, you know, uh overt or hidden

(07:23):
wants and desires into the AI we create, Yeah, or
not even program them. AI can acquire them from data
sets based on our own reality. If it's just trying
to like read what has happened in the world and
learned from that. It can internalize biases that we didn't
even try to explicitly give it because those biases are

(07:43):
reflected in how the world is. Yeah. So the AI
we unleash on on on such a search for alien
life might simply be more inclined to find evidence of
it dragging in human bias, or it could simply identify
things that are not there. It could find patterns
that simply aren't actually there in a meaningful way.

(08:05):
Oh well, this immediately makes me think of what was
it called the Google Deep Dream that found you know,
dog faces in everything, where like have a have a
picture and have Google analyze it, and I think it
would try to extract recognizable patterns and then amplify them.
So you take a picture of your couch and suddenly
your couch, you know, Google happens to detect that your

(08:27):
couch is made out of crabs, dogs, and human faces. Yeah,
so you know you wouldn't want your your your AI
reporting back and saying we found it. It's a planet.
We're calling it Good Dog one. It's composed entirely of
dog faces, so let's celebrate. And it's under threat from
the nearby crab nebula, not the crab nebula, you know,
the literal crab nebula, which is made of crabs. So

(08:51):
there's actually a specific situation that the author points out
in this paper, and it concerns the Vinalia Faculae
of Ceres, the largest object in the asteroid belt. Basically
the situation here is bright spots were observed in a
crater there, which turned out to be volcanic ice and

(09:11):
salt emissions. You might remember seeing pictures of this on
the internet. So, yeah, Ceres is an object in the
asteroid belt, sometimes referred to I think as a dwarf
planet or something. It's basically spherical, so it looks kind
of like a moon, uh. And that Yeah, there was
a big crater in it where right in the middle
of the crater there was there were these white, bright
white spots there. And obviously, you know, without knowing better

(09:33):
and having learned our lesson from the face on Mars
and all this stuff, you know, people's natural inclination was
to pattern recognize out the butt and go,
like, that's technology or something, this is alien. Yeah, clearly. Yeah,
you start looking for geometric shapes and uh and and
looking for artificiality in it. And so this this particular paper,

(09:53):
this this team from the University of Cadiz, they had
already looked at what they called the cosmic gorilla effect
in twenty eighteen. Um. This is this is um referring
of course to these uh, these attention based experiments that
we've we've discussed before in the show, and a lot
of you've probably seen in YouTube clips where you have
somebody in a gorilla costume walk through a scene and

(10:15):
see afterwards if anybody noticed it. Yeah, human cognition has
amazing blind spots for attention that will astound you. Now
we've already warned you, so if you've never tried this
experiment before, you might be on your guard and already
knowing what to look for. Yeah, Basically, the way it
goes is like, you can do something like have a
bunch of people stand in a circle throwing a basketball

(10:35):
to each other, and you ask people to judge how
many times the basketball has passed from person to person,
and they'll do that, and in the middle of the video,
a person in a gorilla costume just walks through the
middle of the group, and huge numbers of people while
they're counting the basketball passes do not see the gorilla.
And it's like, if you go back and watch the

(10:55):
video again looking for the gorilla, it is unmissable. But somehow,
when we're trained in on a certain type of cognitive
task and visual processing, you can completely miss gross stimuli
that that would seem impossible to miss if you were
looking for them. Yeah, And of course one can imagine
that if an artificial intelligence were watching the same scene,

(11:18):
they would pick up on the gorilla. They would they
would it would be able to say, oh, gorilla, an unexpected
gorilla has appeared in this scene, and then report it as such.
And so the cosmic gorilla effect basically deals with the
idea that even if there are intelligent, non earthly signals out there,
they could be written in dimensions that escape our perceptions, such

(11:38):
as dark matter for example, and it would be like
the gorilla suit. You know, you just wouldn't see it.
But an AI would potentially have an advantage in catching
those sorts of signals. Oh okay, yeah, I see what
they're saying there. So in this newer study looking at
the Vinalia Faculae, uh, they did the following. They used
a hundred sixty volunteers,

(12:02):
human volunteers with no grounding in astronomy. I wanted to
stress they're not gorillas or robots. Um. Plus, they used
an artificial vision system based on convolutional neural networks
or CNNs. Both groups detected square structures in the image
of the Vinalia Faculae. But the AI also saw a triangle,

(12:25):
and when the triangle option was then presented to humans,
um the number of humans claiming to also see a
triangle increased significantly. So while AI could certainly detect something
that we cannot that we cannot see, it might also
detect something that isn't there and then confuse us into
seeing something that isn't there as well. So you can

(12:47):
see the the sort of spiraling effects of this. Uh.
And ultimately, with the aid of AI, we end up
seeing signs of life where there weren't any to begin with. Okay,
I see, I see what you're saying. So the idea
is that humans already have a certain tendency for pareidolia,
or pareidolia, the detecting of patterns or signal within noise.
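As a toy illustration of that kind of false detection, here is a small sketch in which an over-eager detector "finds" a triangle in pure random noise; the template, threshold, and image are all made up for the example, and this is not the method used in De la Torre's study:

```python
# Toy illustration of pareidolia-style false positives: an over-eager
# pattern detector "finds" a triangle template in pure random noise.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))   # an "image" with no real structure at all

# A small triangle-shaped template (lower-left triangle of a 5x5 patch),
# made zero-mean so the correlation score is meaningful.
template = np.tril(np.ones((5, 5)))
template -= template.mean()

best_score, best_pos = -np.inf, None
for i in range(64 - 5):
    for j in range(64 - 5):
        patch = noise[i:i + 5, j:j + 5]
        score = float((patch * template).sum())
        if score > best_score:
            best_score, best_pos = score, (i, j)

# With a loose enough threshold, something always clears the bar.
threshold = 2.0
print(f"Best match score {best_score:.2f} at {best_pos}; "
      f"'triangle detected' = {best_score > threshold}")
```

Run enough templates over enough featureless data and something will always clear a loose threshold, which is roughly the machine version of seeing faces in the clouds.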

(13:08):
So that's the reason that we see faces in the clouds,
or see a face on Mars, or any number of things.
We look at something that in fact has no encoded
information in it, and we think we can extract meaningful information,
I mean no meaningful information, and we think we can
extract meaningful information. Uh. You know, listening to tape hiss,
you might think you hear a word or something like that.

(13:29):
And the example here is we think we see I
don't know a pyramid or a you know, a building
on this asteroid or this dwarf planet, and then you
can actually make it worse by if you add on
an AI. The AI may in fact contribute to priming
that makes you even more likely to engage in pareidolia.
The same way that if somebody plays you a tape

(13:49):
hiss and doesn't just play it for you, but says,
you know, hey, listen for the part where it says
worship Satan or whatever, that you're probably more likely to
hear it because you've been primed. Yeah, yeah, exactly. I
mean it's kind of like imagine, you know, you're thinking
about Fleetwood Mac albums and then you learn, oh, um,
you know one of this, you know, Watson AI or

(14:09):
whatever has determined that Tusk is the best Fleetwood Mac album.
And you might think, well, you know, it wasn't my favorite,
but the AI has identified it as the best Fleetwood
Mac album. Perhaps it is the best Fleetwood Mac album,
even though deep down you know it's Rumours, even if
deep down you know it's one of those early albums
before Stevie Nicks was in the band. Yeah, I mean

(14:32):
exactly basically. Yeah, it comes back to that, it
comes back to the idea, yeah, that we're
very susceptible to priming. And the argument here by the
authors is that you could set up a situation where
your AI, dragging in certain biases, is setting you up,
is priming you to see, along with it, things that aren't there, which

(14:55):
could ultimately just make the search for actual, you know,
evidence of intelligent alien life elsewhere in the galaxy all
the more difficult. So this is kind of a conundrum
because the AI could it could be helpful and harmful,
Like it could help with the problem of the gorilla effect,
where we uh, you know, we just totally miss things
that we should have seen. But it can also, on

(15:17):
the other end, cause us to see things that aren't there. Yes,
absolutely. Uh, and while some of this
isn't completely crucial to where we're going from here
in the episode, it's worth thinking about, because
here's the other side of things. What's out there might
not simply be the mechanical utterances of biological life as well,

(15:38):
it could be the mechanical echoes of biological life, what
is sometimes referred to as post biological life, and even
post biological intelligence. And this this has some huge implications
um all its own. Okay, So the idea here would
be not that you know, we we already expect that
it's possible we could encounter alien technology rather than biological

(16:02):
aliens themselves, just because alien technology is say a you know,
an artifact of their previous occupation of a planetary surface,
or a piece of technology could be their probe like
our voyager probes. You know, these do not have humans
in them, They're just going out there. Yeah, but this
idea goes beyond that to say, well, maybe it's not

(16:22):
just that we're encountering the mechanical residue of biological life,
but we're encountering a civilization that at this point only
consists of machines that there that is inherently post biological. Yeah.
At what point does the residue become the thing itself
as a civilization becomes increasingly technological. At what point is

(16:43):
the technology the defining or sole aspect of the civilization? Yeah. Now,
this is an idea that's certainly been discussed in science
fiction a lot. I think Gene Wolfe had one
version of this, where you have an entire mechanical society
and they have evolved from advanced space suits for biological
beings that no longer exist. Uh that sort of thing.

(17:05):
Oh yeah, okay, not to give away too much, but
this is also explored in one of our favorite video
games that we've talked about on the show before, a
really cool game called Soma that is sort of an
undersea sci fi horror game that involves a post biological existence. Yeah, yeah,
a good connection. I wasn't even thinking about Soma, but
but that that is a great example of this as well.

(17:28):
So a couple of sources that we we looked at
for this that I want to go and mention here,
and of course we'll get into in greater depth the
work of Sti's Seth show Stock and the work of
Susan Schneider, a cognitive scientist and philosopher. I was just
trying to look up Susan Schneider's affiliation. I think at
some point she was affiliated with the University of Connecticut.

(17:49):
It looks like maybe the more recent one is Florida
Atlantic University. But anyway, yeah, she She is a philosopher
whose work we have discussed on the show before. Actually,
her work came up in an episode we did about
whether machines could be conscious, because she was one of
the authors who advanced the idea of a test for
AI consciousness that I thought was pretty interesting, and it

(18:13):
was actually very simple. The test was basically just variations
on can this machine grasp and manipulate supernatural concepts from
fiction and folk belief, such as ghosts and astral projection
and body swapping like in the movie Freaky Friday and stuff.
You know, it might sound kind of silly, but actually

(18:36):
these are concepts that I think you can make a
good argument only intuitively make sense to us because we
have a subjective internal experience, and to an intelligent machine
or even a biological automaton that didn't have an internal experience,
it would not make any sense to to envision something

(18:56):
like being a ghost or an astral projection where your
consciousness leaves your body, because what would be doing the
leaving of the body? Mm hmmm. Yeah, you know now
that I'm thinking about Susan Schneider, I think I saw
her at World Science festival at some point in the past. Um,
but I didn't think of it till now. I forgot
to check my my old notes to see if I

(19:17):
had anything. I wanted to start with Shostak though,
uh, specifically his two thousand ten paper What ET Will
Look Like and Why Should We Care? And this, uh, basically,
this paper discusses, um, this idea of post biological life,

(19:40):
the search for extraterrestrial life, and it starts off by
discussing our carbon bias in the hunt for
ETs. Uh. You know, we we look for
rocky worlds that contain liquid water as this is the
path towards organic life. This is where organic life emerges from.
All of our models are built on this, uh. And

(20:00):
and that's that's the softer version of our bias, while
the harder version is where he references an individual
by the name of Simon Conway Morris, who argues that
any evolved intelligent life form is going to roughly look
like us, at least in Shostak's words, quote, in
a dark night and from a distance. And I believe

(20:22):
we've discussed this idea at length on the podcast. Yeah,
I think this was one of the earliest episodes of
the show I ever did, so it was a years
and years ago at this point, but we talked about
Simon Conway Morris, who I think is an evolutionary biologist
from Great Britain if I'm not mistaken, but he uh oh,
it was the episode called Grizzly Bears from Outer Space,
where so they're there are two very opposing schools of

(20:45):
thinking about, you know, the forms intelligent aliens could take.
Some people say, you know it, we can't even imagine
how different they could be from us. You know, it's
it's impossible for us to get outside of our own
anthro, anthropomorphic paradigm to imagine how biologically different and
strange aliens could be. And Morris was on the other side.
He was saying, no, there are actually principles of evolution and

(21:07):
sort of bio chemical constraints on what life could evolve.
And basically he says, there's a pretty narrow range for
what types of organisms can evolve, just based on the
physics and chemistry of the universe, and so we actually
shouldn't expect aliens to be all that different from us.
We should actually expect them to be pretty similar. In uh,

(21:27):
in very dependable ways. Yeah, this kind of the idea
whereherever you go, they're probably gonna be things like crabs,
and there is going to be something like a human
um chasing those crabs around with some sort of a
tool that's made to catch those crabs. Yeah. I mean
it's been a while, so I'm sure i'm somewhat oversimplifying.
Apologies to Conway Morris, but but that's the rough outline.
It is that that that biology is constrained by physics

(21:50):
and chemistry and evolution, and those factors are going to
be universal no matter what kind of planet you're on
or you know, what star you're orbiting, and so there
are some patterns we should see repeating all throughout the galaxy.
So so that's one part of it. But then apparently
a lot of this bias is present, arguably, Shostak,
you know, argues, in the Drake equation itself,

(22:11):
as we factor in the time it would take for
life to evolve and the average lifetime of a technological society.
Now, recall, the Drake equation was a hypothetical way
of trying to calculate the number of technological civilizations that
would be present in our galaxy by multiplying together a
bunch of numbers, and I don't remember what all the
variables are now, but it would be something like you

(22:32):
multiply the probability that life will arise on a planet
at all, times the probability of any life
becoming intelligent, times, you know, a number of things
like that. And then I think you would also have
to factor in the average lifespan of a technological civilization
because at some point it will probably go extinct. Yeah,
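For reference, the standard form of the Drake equation being paraphrased here is N = R* x fp x ne x fl x fi x fc x L, where R* is the rate of star formation in the galaxy, fp is the fraction of stars with planets, ne is the number of potentially habitable worlds per planetary system, fl is the fraction of those on which life actually arises, fi is the fraction of those where intelligence evolves, fc is the fraction of those that develop detectable technology, and L is the average lifetime of such a technological civilization.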

(22:53):
and we keep coming back to the Drake equation, uh,
you know, and not just here but in general, because it
breaks a big question down into these different factors
that you can then, um, you know, work with independently. Yeah,
that's very useful. It decomposes the problem into a discrete
set of smaller questions, many of which also we still
don't know the answers to. But it is at least

(23:15):
helpful to know what those questions would be so they
can be investigated individually. Now, the chance of detecting a
technological civilization close to our own level of development is
apparently small. Chances are if we were to detect one,
they'd be thousands of years or more beyond us. And
when we extrapolate that, Shostak says, what

(23:37):
we tend to do is we tend to base it
on our current state of human evolution and imagine something
he points out, with less hair, with fewer teeth,
with less muscle, with less reliance on physical labor, um, which
you know, to me this instantly makes me think of
like the gray ones, right, and you know the various
extraterrestrial tropes that we have, which yeah, are kind of

(23:59):
an idea of what if we continued to get less exercise,
we continued to stare at screens, continue to type and
stay indoors, you know, for you know, you know million
years or so, Uh, what could begin to happen? It's hilarious.
The gray aliens are just nerds. They're the nerds of
the galaxy. They're all brains, no brawn, huge head to

(24:19):
contain that huge brain that can design their interstellar spaceships,
and then skinny little arms and they stand around with
their huge eyes, poking us with with sticks and going like,
oh what you know, what have we learned? And yet
with those huge brains, like how many cattle are they
going to have to mutilate before they finally figure out
what makes a cow work a lot a lot, you know.

(24:39):
Um so, so Shostak ultimately makes the argument
that that this idea should evolve, that that or should
have evolved more than it has. And he does this
by pointing out that, you know that that our ideas
evolved concerning life on Mars. You know, initially, uh, we
were looking at, we were considering, oh, the possibility

(25:00):
of intelligent canal builders on Mars. And we've discussed
where that idea came from on the show before, you know, uh,
sort of misinterpretation and and and straining to to see
things that weren't there a little bit of that that
that bias as well, uh, regarding our some of our
earlier views of the red planet. But then just within
a few decades that is forced to evolve when we realize, oh,

(25:22):
there aren't canals and uh, and you know, there's instead
of looking for the technological society, we're looking at the
possibility of subterranean microbes. So our ideas concerning life in
other star systems, he argues, have not evolved in a
similar way. Well, certainly not in the popular consciousness. I

(25:43):
would say, I mean, at least in some of the
astro biology literature we read, it seems like it it
is uh, pretty sober from my point of view, and
the like, looking for um for biosignatures often has to
do with looking for the kinds of say, gases in
the atmosphere that you would expect if there were a
photosynthesizing organism, which could just be a microbe. And that

(26:04):
seems like a reasonable thing to look for, to me.
But yeah, obviously, like when you're trying to think beyond that,
think like if we were to make contact with another
uh you know, type of alien from another type of planet,
what would it be. I think that we're still pretty
close to the gray aliens point of view, right, And
of course I should also again point out that this
is like a decade old paper at this point, so

(26:26):
you know, to some extent, Shostak himself may have
helped move the needle, but um he points out that,
you know, in addition to the purely organic model for
a more advanced alien life form, we also have to consider,
you know, the cybernetic what if humans and indeed more
advanced alien life forms have gone cyborg to some extent,
they've augmented their organic forms with mechanical precision.

(26:47):
And there are multiple examples of this we might turn
to in science fiction, you know, and it's going to range.
The Hands of Steel is a good example to draw
on, from a recent Weird House Cinema episode. But you
have stuff like the Culture from Iain M. Banks novels,
where it's more of a you know, positive spin on
the idea, to stuff like the Borg and the Cybermen,

(27:07):
you know, where everyone is majority or almost entirely machine
and with only some slim vestige of organic life in there.
You know, So everybody's a RoboCop to everybody's a grievous uh,
that sort of thing. Just a planet of tom Noonan's
from RoboCup two. Yeah, just screaming for their space drugs. Um.

(27:28):
But actually no, I literally do want to come back
to this point later on. Okay, But then there's one
step beyond all this, and that is the complete mechanical
replacement capped off by the birth and explosion of artificial intelligence.
So for this in sci fi, one can certainly turn
to the terminator model, you know, where AI emerges and

(27:48):
then it kills off everything that came before UM and
This is of course very popular in science fiction. Uh,
you know. But then another common trope is that the
machine part of a society alone survives, so the servants
outlived the masters due to, you know, some sort
of cataclysm or disease, what have you. But the other
way of looking at it as well is it's simply
the mechanical utterance is not something you know, extending from

(28:12):
the civilization. You know, it's not just an echo, but
it is the next phase of its evolution, that the
machine utterance is post organic life. Perhaps the organic aspect
of a civilization simply fades away and you know, given
these advancements. Or perhaps, to use the Culture model
from Banks's books, the organic source remains, but the predominant

(28:34):
shape of the civilization in question is entirely post organic
because with the Culture, for instance, in his
books, it's mostly the AI, it's mostly the ships,
it's mostly their, uh, you know, robots and whatnot. But
the humans are still there. But they're kind of like, uh,
they're kind of a thing that is preserved for the
sake of of preserving it. You know, they're the remora

(28:56):
on the shark. Yeah. But a but a remora
that is sort of share. You know, it's almost like
m You know, at times there's a sense that the
robots and the AI, the Minds of the Culture. You know,
they're they're babysitting for the humans. The humans are this
thing that is nurtured in preserved because they are the
machines passed. You know, Oh, I want so it would

(29:16):
it be kind of like if there's a country that
still has a ceremonial monarchy but the monarchs have no
actual political power. Yes, yes, that would be a prime example.
I think so. Shostak also points out that, given Moore's law,
the successful creation of human level AI is of course
going to lead to even greater AI. Quote, assuming that

(29:37):
our own technological time scales are not grossly atypical. This
implies something important for SETI. Once any society invents the
technology that could put them in touch, once they reach
a level that's comparable to our own and become detectable
with our listening experiments, they are at most only a
few hundred years away from changing their own paradigm of

(29:58):
sentience to artificial intelligence. This is almost identical to
a point that's made in the Susan Schneider chapter that
we're going to talk about in a bit. Yeah, so
he stresses that such an emergence wouldn't necessarily affect the
biological ancestors at all, but it makes sense that post
biological life would outlast and outperform the organic. We could

(30:18):
therefore assume that any life form we encounter in the
galaxy at large would be a machine. Okay, well, maybe
this is a good place to get into Susan Schneider's
chapter on this, because she makes a similar argument and
covers some similar ground, and we can look at that
in detail now and then come back to the rest
of her argument after that. But so this chapter is

(30:39):
by Susan Schneider, and it's from a book called The
Impact of Discovering Life Beyond Earth, edited by Steven J. Dick,
published by Cambridge University Press in twenty fifteen, and in this book,
Schneider has a chapter called Alien Minds where she makes
the same argument that Shostak is making here about
the nature of minds we would be most likely to
encounter if we make contact with another civilization, and so

(31:03):
several of her main points would be the following. She
does argue that in the most likely scenario, if we
ever encounter alien agents, it is likely that they will
not be biological life forms, but rather forms of super
intelligent artificial intelligence, or SAI. And then she
also says, of course that intelligence can take many forms,
but there are reasons to think these machines would be

(31:25):
modeled on the intelligence of biological organisms that arose through evolution,
and you could call these agents biologically inspired super intelligent
aliens, or BISAs. And there are
a number of arguments she makes about what the cognition
of those aliens would consist of, But I just want
to go back to her first argument that we would

(31:46):
be more likely to encounter post biological super intelligent AI
than we would to encounter biological organisms like ourselves. And
so there are three main points to her argument. The
first is what she calls the short window of observation,
and the argument goes like this, once a society has
the level of technology that would allow them to come

(32:07):
into contact with the rest of the cosmos, and this
could include things like radio reception and transmission, rocketry and
so forth, at that point that society is less than
a few hundred years from changing their paradigm from biology
to artificial intelligence to you know, silicon based AI. And
she makes an argument for this based on previous accelerating

(32:30):
rates of computation. So you already mentioned show stack referencing
Moore's law. That would be in parallel to what he's
saying there. Uh so the advance of digital technology. But
she also makes reference to a thought experiment from her
previous work. Uh and so I just want to read
the thought experiment as she describes it, and then we
can discuss pros and cons. Schneider writes, quote, suppose it

(32:51):
is twenty twenty five and, being a technophile, you purchase brain enhancements
as they become readily available. First you add a mobile
internet connection to your retina. Then you enhance your working
memory by adding neural circuitry. You are now officially a cyborg.
Now skip ahead to twenty forty. Through nanotechnological therapies and enhancements,

(33:12):
you are able to extend your lifespan, and as the
years progress, you continue to accumulate more far reaching enhancements.
By after several small but cumulatively profound alterations, you are
a post human. To quote philosopher Nick Bostrom, post humans
are possible future beings quote whose basic capacity so radically

(33:33):
exceed those of present humans as to be no longer
unambiguously human by our current standards. At this point, your
intelligence is enhanced, not just in terms of speed of
mental processing. You are now able to make rich connections
that you were not able to make before. Un Enhanced
humans or naturals seem to you to be intellectually disabled.

(33:54):
You have little in common with them, but as a
transhumanist you are supportive of their right not to enhance.
It is now a D two hundred. For years, worldwide
technological and developments, including your own enhancements, have been facilitated
by super intelligent AI. Indeed, as Bostrom explains, quote, creating
superintelligence may be the last invention that humans will ever

(34:17):
need to make, since super intelligences could themselves take care
of further scientific and technological developments over time, the slow
addition of better and better neural circuitry has left no
real intellectual difference in kind between you and super intelligent AI.
The only real difference between you and an AI creature
of standard design is one of origin. You were once

(34:41):
a natural, but you are now almost entirely engineered by technology.
You are perhaps more aptly characterized as a member of
a rather heterogeneous class of AI life forms, and so
her thought experiment ends there, But she's trying to sketch
how it would be plausible to imagine humans existing today
actually becoming machines little by little over time and

(35:03):
by extending their lifespans. Now, I will say, I do
think there's there's value in this thought experiment, and I'm
glad we're pursuing it. But I also do feel like
I need to flag that I am significantly more skeptical
of these types of common extrapolations about trans humanism and
artificial intelligence than I used to be. I think my
skepticism comes down to a suspicion that scenarios like these

(35:28):
make a lot of assumptions that are just taken as obvious,
but I think are actually somewhat speculative. For example, would
it actually be possible to increase human cognitive capacity with
neural implants that that just seems obvious. It is taken
as an assumption because obviously computers can do things that
human brains can't do, or at least they can do

(35:50):
them at speeds that human brains can't match. But what
if there are inherent biological throttles or gates on consciousness
and cognition in brains that make the neural cyborg not
much smarter than a human with access to a computer.
What if there's just something physically about the properties of

(36:10):
brains that doesn't allow you to augment them with technology
like this, It just doesn't work. Or what if becoming
a neural cyborg with computer enhanced cognition is actually a
subjectively dreadful, miserable experience, and it turns out that once
people have tried it and reported on what it's like,
nobody wants to do it because it feels awful. Yeah,

(36:32):
I'm like, I'm thinking, like what you have, some sort
of an upgrade you received made it possible for you
to say, well, let's say, be better at personal finance.
But as a result, that means that there is constantly
an additional background narrative in your brain and your consciousness
about your personal finances. And maybe that's good for for

(36:54):
just to you know, your your your pocketbook and your investments,
but ultimately maybe it sucks for life, you know, because
it's this is not the sort of balance of inattention
that makes life worth living or makes it like like
it was before like it. It changes you to such
an extent that you want to go back you were,
Like part of the joy of life is maybe not
thinking about personal finance all the time. Yeah, what if

(37:16):
part of what makes it fun to be a human
is not being a computer. And if you the more
you make your brain into a neural cyborg, the more
miserable your life becomes, and you desperately seek to regress. Yeah.
Another thing, what if consciousness is just inherently non transferable
to machinery. I don't know this is the case. Some

(37:37):
people do make this argument, and I have no reason
to assume this is true. But I also have no
reason to assume the opposite. There's no reason to assume
that you can actually upload your mind to any kind
of computer substrate. I think this is just a big
question mark. We just don't know if such a thing
is possible. Yeah, I mean, I tend to believe at
this point that we could create something that acts like us.

(38:00):
You can create something that is essentially like the the
machine avatar of who we were, or who we thought
we were, who we want to be thought of after
the fact. But to the point, like is that I
think when you start asking more specific questions about like
is that us? Then? I don't know, I feel like
it isn't is it? Could it be conscious at all?
Even if it could be conscious, is there any reason

(38:22):
to believe that you would experience it as a conscious
continuation of your previous mind? Or would it just be
a conscious copy of you? Yeah? Or I mean when
you start asking questions like that and then you get
into questions of like, well, and who I am now?
Is this really a continuation of who I was five
years ago? You know? I mean, you start seeing all
the flaws in this um narrative of self and identity,

(38:46):
and maybe it becomes maybe that's the thing. Maybe we
reach a kind of we reach a point where we
realized none of it is real, Like there is no
real continuation of the self, and therefore why not create
like three different machine avatars of my self and have
them continue my legacy for me? I just want to
mention a few other questions that just popped into my
mind this morning. Uh, what if there are actually hard

(39:08):
limits on certain kinds of intelligence, whether you're talking about
a biological brain or a computer. What if certain types
of complex problem solving within a coherent agent system, meaning like,
you know, a single sort of mental workspace that always,
that is coherent and communicates with every part of itself.
What if there are limits on what kind of intelligence

(39:30):
can happen in an agent system like that or different thing.
What if biological organisms in general, even across the galaxy,
have an overwhelming tendency to revolt against the cultural transition
to machine life and will always or almost always end
up engaging in something like Frank Herbert's Butlerian Jihad,

(39:51):
you know, where you shall not make a machine in
the image of a human brain. Yeah, yeah, you want
to end up moving towards that sort of Star Wars
model where yeah, you have all these advanced machines everywhere,
but they're only working as servants you know there, Uh,
with a few exceptions that I guess kind of prove
the rule in that universe. So anyway, literally hundreds of

(40:12):
questions like this I think I could list, and they
start coming to mind when I think about it. And
while I don't assume that any of them are strong
enough to completely disable the trans humanist proposition, I also
wonder if some trans humanist and super intelligence thinking is
too quick to hand wave past these kinds of questions.

(40:33):
But like I said earlier, I do think this type
of scenario that Schneider is talking about is plausible enough
to entertain as a thought experiment, So I want to
keep going with it. And one thing I will say
in favor of of her argument is that, at least intuitively,
I think her timeline is reasonable, meaning that I think
if it is possible to create an AI super intelligence

(40:56):
and that humans or their biological alien counterparts do at
some point merge with or fade into that machine AI superintelligence,
I don't see why it would take more than a
few hundred years after the invention of computers basically for
that to happen. And even if it took tens of
thousands of years, I think Schneider's point on this first

(41:17):
point she's making is basically correct. The time between when
a species starts technologically interacting with the universe beyond its
home planet and when it becomes dominated by post biological intelligence,
if this is possible, that that time gap seems very
small and vanishingly small compared to the lifespan of a
planetary biosphere. Yeah, so you come back to that scenario

(41:41):
that Shostak was talking about, where once you're detectable,
it's just a matter of time before the machine administration
moves in. So one instantly thinks that you can imagine
the the the aliens out there, if they're listening in
on this, they're like, well, should we contact them now?
They're like, well, no, they're they're about to change administration,
Like the humans in charge now are about to hand

(42:01):
off, in relatively little time from our standpoint, to machines,
and it'll be just easier to communicate with
those machines, and they'll be a lot more
pleasant to deal with as opposed to these organic beings.
So yeah, I would say I'm more bullish on the
second half of Schneider's proposition here than the first half.
I don't know if the age of machines is coming,

(42:24):
that's a big question mark for me, but I will
agree that if it's coming, it's coming very fast, yes,
and if it is coming, we welcome our machine overlords.
But anyway, that that was all just Schneider's first point
about the short window of observation. A couple of other
points that are quicker to make. The second one that
she makes is the greater age of alien civilizations. So

(42:46):
here she cites some pre existing statistical work making the
point that and I think show Stack made this point
as well. If you assume a random distribution of biological
evolution across the galaxy, most alien civilizations should be expected
to be millions or billions of years older than us.
So either there's something very special and rare about Earth life,

(43:08):
or we're one of many planets with with with powerful
intelligence and civilization. And if we are, we we should
expect to be on the young side of that equation.
So if you couple this with the previous points, she argues,
you start getting toward an interesting conclusion. Again, these two
points are on average, we should assume that other alien
civilizations have been around for millions or billions of years,

(43:29):
and on average alien civilizations transform themselves into post biological
superintelligences very fast. There's a very short window of, uh,
technological civilizations that are still biological in nature. And so
if you put those things together, you should expect, Yeah,
if we're meeting something, it's probably post biological. And I

(43:50):
will say as far as my reaction, again, I have
lodged my moderate skepticism about the trans humanist and AI extrapolations,
mind uploading and so forth. But I followed the argument
so far. Her third point, and I think this is
an interesting one. She says silicon is a better medium
for intelligence, at least better than carbon, and this one

(44:11):
is interesting. Basically, Schneider argues that carbon based life forms
will recognize the inherent physical advantages in transferring themselves into
silicon based machines. Again, you know, flag my skepticism about
mind uploading, but if it's possible, okay, I follow the argument.
She writes, quote, silicon appears to be a better medium
for information processing than the brain itself. Neurons reach a

(44:34):
peak speed of about two hundred hertz, which is seven
orders of magnitude slower than current microprocessors. While the brain
can compensate for some of this with massive parallelism features
such as hubs and so on, crucial mental capacity such
as attention rely on serial processing, which is incredibly slow
and has a maximum capacity of about seven manageable chunks.
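For what it's worth, the seven orders of magnitude figure checks out arithmetically: 200 hertz times 10^7 is 2 x 10^9 hertz, or about 2 gigahertz, which is the clock-speed ballpark of the microprocessors contemporary with that chapter.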

(44:58):
I did not follow up on what she means by
chunks there, but she cites Miller from nineteen fifty six.
This must be a computational science paper. She goes on
further the number of neurons in a human brain is
limited by cranial volume and metabolism, but computers can occupy
entire buildings or cities, and can even be remotely connected
across the globe. Of course, the human brain is far

(45:20):
more intelligent than any modern computer, but intelligent machines can
in principle be constructed by reverse engineering the brain and
improving upon its algorithms. You know this. This reminds me
how in Banks's Culture books, there are parts where
the machines are working with humans, because you have human
characters that are playing an important role, because that that

(45:42):
makes it an interesting story. Um. But the machines, of
course are communicating with each other. The minds are communicating
with each other. It just blindingly fast speeds. And then
when they need to communicate with an organic being, it
just like it's just slow as Christmas, you know, it
just drags everything to a halt basically for them. Yeah,
that's funny, And it's also funny this last comment she makes,

(46:04):
I think is interesting about the cutthroat design idea, where
an intelligent machine could just say like, oh, I could
make myself better than a brain just by figuring out
how brains work reverse engineering that making myself into a
brain and then upgrading myself. But anyway, altogether, Schneider thinks
that these points should convince us that alien civilizations that

(46:24):
we encounter are way more likely to be post biological
machine superintelligent AIs than they are to be biological
organisms made of meat. And Schneider also makes one point
that I think is very good: if it's possible to
become a post biological superintelligence, but it's not a common

(46:45):
fate for all intelligent alien species. So maybe not all
alien civilizations go this direction. The ones we encounter are
still more likely to be the ones that do become
post biological super intelligent machines, because the beings will be
better at space travel and better at spreading across the galaxy.
Think about the fact that they have no biological risks

(47:07):
from space travel like we do. Yeah, Shostak gets
to this point as well, that yeah, there would still
be risks. Space is still incredibly dangerous, but the bio
risks would be effectively removed. And then since you would
uh as a machine intelligence, you would be effectively immortal
um in ways that in ways that even a in

(47:29):
a you know, a very long living biological organism would
not um All trips would be the same distance, all
trips would have the same duration, because time kind of
loses all meaning. If it takes you a hundred years,
a thousand years, uh, you know, several thousand years to
reach the place you're going, that kind of loses its importance.

(47:49):
If there is no endpoint to your existence. Ye, Rob
nine thousand does not care. Yeah, alright. So in dealing
with this question of post biological intelligence and potentially
encountering post biological intelligence, one of the big questions, of course,

(48:11):
is well, what would it mean for us? What would
what would the relationship be? What would a post biological
civilization want? And I guess the first way to tackle
that is to sort of look at the precursor, what
does a biological civilization want? Well, as a Stephen Hawking
and many others have pointed out, if we're to use

(48:33):
our only model of intelligent life that we have, which
is us, then obviously biological aliens would be interested in
things like domination, resource acquisition, possibly religious convergence. Or if
we were to tie the Simpsons into all of this, uh,
you know, we could think of the Citizen Kang Treehouse
of Horror segment. They might be interested in

(48:56):
us merely in order to point a giant space laser
at another planet. So resources, yes, but also maybe strategic
location in some greater interstellar conflict. I just had an
idea that I don't know if it makes any sense,
but I was thinking about some of the some of
the horrors of colonialism on Earth. We're not just about
the extraction of resources from the colony, but also about

(49:20):
the acquisition of customers within a colony for the businesses
in the in the home country. And I wonder could
there be some kind of comparison to this in in
a galactic sense, like, uh, could be possible that aliens
would want to initiate contact with Earth in order to
acquire some analogy to customers buyers for their products. Oh my, uh,

(49:44):
nothing come into mind. But I'm sure this, this has
got to have been This has had to have been
explored in in science fiction, especially like like Reagan era
sci fi. You know, that's a that's commenting on capitalism
and so forth. Like, in fact, surely Philip K.
Dick explored this idea a little. That was
up his alley. I can't think of one, but
that would be an amazing Philip K. Dick theme. I'm

(50:05):
sure he did it. Yeah, so again, you know, if
we only have our own intelligence really to
base most of this off of as a model, but uh,
this would it would seem to present a rather dark scenario.
Though certainly biological aliens could be different. You know, they
could they could just want to be our friends. They
could want it that they could have, you know, they

(50:25):
could come in peace, as they say. I mean Stephen Hawking, Yeah,
he was very cautious about the idea of contact. He
was like, we don't we don't want anything to do
with other aliens in the galaxy because the chances are
it would not go well for us. But people who
are involved in SETI itself, in SETI type research, it
seems to be more often, I mean, probably there's

(50:47):
a selection effect by nature of the fact that they
are part of this effort to reach out and establish
contact with other civilizations at least detect their presence. There
seems to be more optimism in the SETI crowd to me, like, yeah,
less a less of an automatic assumption that the way
aliens view us would be would be extractive. And you know,

(51:07):
more of an idea that, uh, as an alien
civilization progresses towards the point where it can reach
out into the cosmos, it also maybe matures, like it
reaches its own form of humanism, and maybe that
extends beyond its own species. Yeah, and I guess too.
There's also the argument it's kind of like moving into
a new neighborhood. Do you want to say hi to

(51:29):
your new neighbors, uh, you know, the first couple of weeks,
or do you want to wait until there's a conflict
you know? Uh, you know, what do you want? What
do you want your first communication to be?
Because non detection is not a long term possibility. You know,
they're going to see you leaving your house at some
point you're gonna have that awkward moment where do you
make eye contact and then you're like, oh, yeah, we

(51:49):
never actually said hi to each other, you know. So,
you know a lot of this concerns biological life. These
questions and some of these ideas don't entirely disappear when
we consider uh, post biological life. But again, the question
is what about alien AI? What would a post biological
species want with us, what would they, as Shostak

(52:10):
puts it, what would they, quote, find interesting to do? Um,
which I like. I like the way of pointing that out.
It's like, to a certain extent, it
goes beyond, like, goals and things that it needs, like
what what does it do with its time? Like? What
is its purpose? And Shostak points out that sci
Fi has certainly explored this topic, but he thinks only

(52:30):
three things seem plausible enough to consider discussion. So, first
of all, he argues that since quote high speed computation
requires compact configuration, the machines would likely remain localized and
this would better benefit you know, swarm or shared processing,
so they wouldn't be spread out over vast distances. They
might be localized into an area only thousands of light

(52:53):
years across. So if you're imagining you know, something like,
uh, the post biological Necrons from Warhammer 40K,
you know that they just want to spread out all
over the galaxy and take it over like that wouldn't
make as much sense because they want to maintain maximum uh,
you know, computational power, So they're going to stick to

(53:13):
their own kingdom. Coming back to Susan Schneider, she argues
that biologically inspired super intelligences would would tend to have
one or more what she calls global workspaces, And I
actually want to read her quote on this because I
thought this was interesting. She says, when you search for
a fact or concentrate on something, your brain grants that

(53:34):
sensory or cognitive content access to a quote global workspace,
where the information is broadcast to attentional and working memory
systems for more concentrated processing, as well as to the
massively parallel channels in the brain. The global workspace operates
as a singular place when important information from the senses
is considered in tandem, so that the creature can make

(53:57):
all things considered judgments and act intelligently in light of
all the facts at its disposal. In general, it would
be inefficient to have a sense or cognitive capacity that
was not integrated with the others, because the information from
this sense or cognitive capacity would be unable to figure
in predictions and plans based on an assessment of all

(54:19):
the available information. And this comes into play here because
it seems like a civilization based on a super intelligent
AI UH if it's spread itself too far, it would
become impossible to maintain a global workspace at speed. It
would start having information that was not shared, and that
would result in inefficiencies. Yeah, that lines up,

(54:40):
I think, rather well with this. Now. Now, the second
point that Shostak makes is that given the very short
time scale for improvement, uh, it would be winner takes all.
The first machine society to rise would dominate at least
within a certain volume of space, you know. Going back
to point number one. Um. Now, he argues that there
there could be a little wiggle room for some machine

(55:01):
civilizations to overtake elder civilizations. Um. But that a sufficiently
advanced machine civilization could rule its fiefdom indefinitely. Um uh Now,
But but I wonder if if another way of looking
at this sort of thing would be, you know, a
resulting confederacy of machine cultures, a kind of multicultural
machine super civilization where maybe you have the you know,

(55:23):
the one older, more advanced, and you know, unconquerable, um,
machine culture, but then it ends up absorbing other ones
that are part of it, that have some purpose or
role within the machine whole, but are not like the
driving force. Kind of like subservient machine cultures, I guess.
And then number three, even for machines, he points out

(55:46):
space is dangerous and Darwinian selection would take place. Quote,
if a machine exists now, it's because its mode of
existence has kept this device from natural disaster, or possibly
even from deliberate disaster. If such a phenomenon exists
for machines, perhaps it makes a lot of copies, or
at least a few copies, updating as necessary. It does

(56:07):
something to withstand inevitable catastrophe. Yeah, that's very interesting. I
mean to pick up on this. There's no reason to
say that biological evolution is a process, that is, that
is inherently tethered only to carbon based organisms that reproduce,
you know, that that have genetic code based on DNA,
anything that's subject to survival and reproduction. And I would

(56:30):
guess that machines, you know, computational machines, would in some
way be subject to survival and reproduction. They can make
copies of themselves, Uh, they can iterate their code. That
it seems like those things would be subject to a
form of natural selection. Though. The interesting thing there would be,
I guess, would would it be useful to think about

(56:50):
their code in terms of something like genes, because of course,
you know, genes within biological organisms can have gambits to
survive on their own regardless of the success of the
overall organism. Right Like, if an individual gene in your
body figures out a way to make lots of copies
of itself without regard to the health of its you know,

(57:10):
to to the health of the body as a whole,
it will do that. You know. It's it's the genes
just trying to get out there. I wonder if you
could look at individual pieces of I don't know what
code or nodes or processing functions within a machine intelligence
that would behave in the same way. Yeah. Yeah, So
it seems like that idea you could you could come

(57:31):
up with a concept where a machine civilization would have
a tendency to colonize new areas, you know, because it
would give itself room to uh to copy itself. Uh.
And then of course you have to think about the
constraints about processing speed. It's that, you know, having,

(57:51):
you know, sticking to a local domain. But maybe that
would allow for some level of mechanical budding to
take place. Yeah, maybe cutting off pieces of itself would
actually make it more resilient, to say, infection by viral
bits of code. Yeah, well, you know, thinking about it
even more now, So say say you have this mechanical
supercivilization and it's again, is staying within a certain area? Well,

(58:16):
if it is, if it definitely, if it wants to survive,
if that is like a driving force in it, that
is like just coded into it maybe from its biological
you know, elder creators, then then perhaps copying itself not
only within its realm, but in other realms like that
is one way to try and survive, not only like

(58:37):
nearby realms, maybe far flung realms, you know, uh, you know,
to get outside of not only this star system, but
this system of systems, to get outside of the galaxy
if possible. That's interesting. Okay, folks, this is one of
those episodes that went very long, and we have decided
it is best to divide this talk in two parts.
So we're gonna have to cut part one right here,

(58:58):
but come back and join us on Thursday for the
continuation of our discussion in part two. In the meantime,
if you would like to check out other episodes of
Stuff to Blow your mind, you know, where to find
them in the Stuff to Blow Your Mind podcast feed
and you'll get that wherever you find your podcast, wherever
that happens to be if the platform gives you the
ability to do so. Just make sure you rate, review,
and subscribe. Huge thanks as always to our excellent audio

(59:21):
producer Seth Nicholas Johnson. If you would like to get
in touch with us with feedback on this episode or
any other, to suggest a topic for the future, just
to say hello, you can email us at contact at
stuff to Blow your Mind dot com. Stuff to Blow

(59:42):
Your Mind is a production of iHeartRadio. For more
podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you're listening to your favorite shows.