July 14, 2025 • 76 mins

Our exploration of artificial intelligence continues as we bridge the boundary between current AI technology and fictional representations of artificial minds. While last episode focused on the technological trajectory toward superintelligence, this time we tackle the philosophical dimensions of machine consciousness.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
No, not robo-Nazi.
Don't stop, why do?

Speaker 2 (00:02):
you always default to Nazi when we do this, because
it's the closest thing.

Speaker 1 (00:07):
What word is closer to Yahtzee than Nazi?
I mean, granted, but why does that keep coming up?
Because it's the first thing I think of. Yahtzee, Nazi. Nazis
are the first thing you think of when you say Yahtzee, and I
have to riff off of Yahtzee.

Speaker 2 (00:22):
Okay, we really do have to evaluate.
Perhaps, as Cake once said, perhaps, perhaps, perhaps.
I don't think I've ever quotedCake before.

Speaker 1 (00:37):
That's weird, because you were just doing it in a
short skirt and a long jacket.
That's true. Gentlemen,
let's broaden our minds.

Speaker 2 (00:44):
Are they in the proper approach pattern for
today?
Negative. All weapons. Now charge the lightning field.

(01:18):
Charge the lightning field. To assess its values,
quote: If our society is concerned with profits, then we
may end up sacrificing human life and well-being, no matter
what technology we use.
If we want to live in a society that values human life, a
society where a self-driving car will damage itself rather than

(01:41):
run over a child, for example, then we need to work at building
these principles into our technology from the beginning.
Well, welcome back to Dispatch Ajax.
I'm Skip.
Yeah, that's Skip.
I'm Jake.

Speaker 1 (01:57):
This is going to be a fun, loaded one. So last episode
Skip delved into the coming singularity, the ascension of
artificial intelligence to superintelligence and beyond
human control.
This episode we will sift through popular culture, mostly
movies if we get to it, to explore examples of AI and their

(02:17):
depictions, those sci-fi waves.
Skip's gonna recap and kind of realign focus on what he
discussed last episode, before I discuss a little bit more of AI
consciousness itself and the philosophy of understanding that,
outside of what Skip had talked about. Yeah, take it away,

(02:42):
Skipper.

Speaker 2 (02:43):
Whoa, I wish I had one of those whirring whistles,
you know, or a slide whistle. Whoop.

Speaker 1 (02:53):
Yep. If you could... if every time one of us talked, it
could be like we were beaming in that sound effect.

Speaker 2 (03:01):
Oh well, I mean, I could do that. I could do that
for sure.
So let's recap a little bit about modern real-life AI and
how it differs from what we're going to talk about today.
So modern AI, what we call AI, while based on computational
neural networks which have been around for decades, actually

(03:26):
they're not as complex as you would think.
I mean, they're complex, but their concepts aren't.
They're basically just chatbots, which we have all kind of become
accustomed to, but with more information to draw on.
They're designed to mimic human conversation through trial and
error, based on large language models and machine learning.

(03:51):
So large language models aren't all that mystifying in and of
themselves.
They just do what Google did to create its original not-shitty
search engine.
That entailed getting a bunch of servers with racks made of
Legos (that's why they have their color scheme), and they basically

(04:16):
just indexed every website.
That was their entire concept.
They just downloaded the entire internet.
They then created an algorithm to not just search for the
frequency of keywords used online, as previous engines did,
but figure out which websites most frequently linked to other

(04:37):
websites.
That would eventually end the rabbit hole of your query, and it
worked and it was great.
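The link-counting idea described here, ranking pages by who links to them, is essentially PageRank. A minimal sketch of that iteration, with a made-up three-page link graph (purely illustrative, not anything from the episode):

```python
# Toy PageRank: rank pages by how inbound links pass rank along.
# The three-page link graph below is a made-up example.
links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85
rank = {page: 1 / len(links) for page in links}

# Repeatedly pass each page's rank along its outbound links.
for _ in range(50):
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outbound in links.items():
        share = rank[page] / len(outbound)
        for target in outbound:
            new_rank[target] += damping * share
    rank = new_rank

# Pages with more (and better-ranked) inbound links score higher;
# here "c" wins, since both "a" and "b" link to it.
best = max(rank, key=rank.get)
```

The ranks always sum to 1, so they behave like a probability of where a random surfer following links would end up.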
And then, enshittification. Large language models aren't
dissimilar.
They have algorithms that model and index text, conversations,
tweets, books, everything.

(04:59):
They create categories in which different language falls.
It then uses these examples to predict how to respond to
prompts. And I'm really glad you did.
I did watch The Artifice Girl before this, and a lot of the
summation of that is what I'm talking about here, so I'm glad

(05:20):
that that worked out.
On a basic level, this is how predictive text works on your
phone, just a little more complex, with enormous amounts of
data on which to draw.
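That "predict the next word from past examples" idea can be sketched as a bigram model. This toy uses a made-up ten-word corpus (a hypothetical illustration, nowhere near the scale or sophistication of a real large language model):

```python
from collections import Counter, defaultdict

# Tiny made-up "corpus" to learn from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]
```

With this corpus, `predict("the")` returns "cat", because "cat" follows "the" more often than "mat" or "fish" do; real phone keyboards and LLMs do something far richer, but the prediction-from-frequency core is the same.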
With generative AI, the algorithms do much of the same,
but with images, music, movies and

(05:40):
all sorts of other data, and then spit out an approximation
of whatever we ask of it.
But because it only remixes and regurgitates stuff that already
exists, it does not have the ability to generate anything
wholly original.
This is important.
Everything is derivative in the truest sense of the word.

(06:03):
And because it can only understand technical context, not abstract
context, eventually, inevitably, it begins to go off the rails.
It begins

(06:25):
to give you wrong answers to questions, or images of Shrimp
Jesus, or it starts endorsing eugenics on Twitter, or X, which
is where eugenics apparently lives.
It doesn't fundamentally understand what it's saying or

(06:45):
doing outside of its technical computations.
It can't have an opinion, an emotional one, nor can it
express a cold, calculating one like Skynet or in The Matrix.
It can only express what it thinks you want to hear.

(07:09):
That is current AI.
That is what we're dealing with right now.

Speaker 1 (07:15):
So, Jake, let's get into it. Yes, let's. Yes, yes,
have some. Yes, have some.

Speaker 2 (07:26):
Just thinking of the same thing.
There's a second behind you. Yes, let's have some.

Speaker 1 (07:32):
We gotta get these AI and supercomputers together.
I think that would be extraordinarily bad.

Speaker 2 (07:39):
Define the whole good-bad thing.
You know what?
That's what we're gonna try and tackle today.
Yeah, yeah, kind of, kind of. We're not gonna succeed, but
we're gonna do it. No, no, and this is kind of, um...

Speaker 1 (07:56):
Uh, yeah, I kind of felt like we had a little more
hard, data-driven analysis last time, and I wanted to do a little
more soft, thoughtful approach to this one.

Speaker 2 (08:14):
You wanted to go more abstract.

Speaker 1 (08:17):
A little bit.
I kind of want to get into some of the philosophy behind it.
I'm going to focus on a particular thing when it comes
down to it, but obviously this is going to be cursory and
casual.

Speaker 2 (08:30):
There's no way we could possibly delve into
everything that is out there, all the thoughts. Oh, you mean all
the questions humankind has ever had since the beginning of
consciousness? You know... Yeah, it may take a little while. It may
be longer than this, uh, hour-long podcast.

Speaker 1 (08:45):
But we're just going to dance on the idea of a
singular thing.

Speaker 2 (08:51):
This is bold.
We're going to try it.
We're going to do it.
Let's see where this goes. All right.

Speaker 1 (08:58):
So AI consciousness isn't just a tricky intellectual
puzzle.
It's also a very morally weighty problem with possibly
unknowable consequences.
Mistaking a conscious AI for an unconscious one may even
mean causing extreme pain and nullifying the thoughts

(09:27):
and emotions of a being whose interests should matter.
But mistaking an unconscious AI for a conscious one, you might
risk compromising the safety and happiness of humans, all for
the sake of this box spouting ones and zeros.
Now, both mistakes are easy to make.

(09:52):
As Liyad Mudrik, a neuroscientist at Tel Aviv
University who has researched consciousness since the early
2000s, says, consciousness poses a unique challenge in our
attempts to study it because it's hard to define.
It's inherently subjective.
Now, consciousness is often fused with terms like sentience
and self-awareness, but according to the definitions

(10:15):
that many experts use, consciousness is a prerequisite
for those other, more sophisticated abilities.
To be sentient, a being must be able to have positive and
negative experiences, in other words, pleasures and pains. And
being self-aware means not only having an experience, but also
knowing that you are having an experience.

(10:36):
Now, current AI is not considered conscious, as we have
discussed in the last episode and at the beginning of this episode.
AI systems, even the most advanced ones so far, primarily mimic
human cognition and lack true self-awareness, subjective
experience and an understanding of their own existence.
But if we were to look for indicators of consciousness,

(11:00):
researchers that explore these various indicators point to
awareness of one's own awareness, i.e. the ability to
be aware of its own thoughts and internal states.

Speaker 2 (11:14):
I think, therefore I am. Yes.

Speaker 1 (11:17):
Yes. Virtual models of the world, that is, the
capacity to create and maintain internal representations of the
environment. Predicting future experiences, the ability to
anticipate future events and their consequences.
Self-recognition, similar to the mirror test in animals,

(11:37):
where an AI might recognize its own image or its own
characteristics when they're placed in front of it.
But in more popular culture, some things that really pop out
are, of course, self-awareness, a superintelligence otherwise
surpassing humanity in its cognitive abilities,
self-preservation, emotions, and conscious volitions often based

(11:58):
on those emotions.
Now, those are all things that we're going to get to when we
come around to movies and depictions of AI, but I just
want to lay some of those elements out there.

Speaker 2 (12:08):
Oh, absolutely.

Speaker 1 (12:09):
I spoke last episode about Ilya Sutskever, chief
scientist at OpenAI, the company behind the chatbot ChatGPT.

Speaker 2 (12:24):
You were right with chatbot.
I think that's more appropriate. Chatbot GPT.

Speaker 1 (12:31):
Tweeted, or X-posted, that some of the most
cutting-edge neural networks, quote unquote, might be slightly
conscious.

Speaker 2 (12:41):
What the fuck does that mean?

Speaker 1 (12:43):
I... yeah, I don't know. But again, this is just like
when we get into the ideas of how we depict AI, which I think we're
going to talk about down the road.
There's a lot of gray area when it comes to this, and maybe, uh,
really nailing down and defining consciousness as applied

(13:05):
to artificial intelligence might be at least in the gray area
where we're at now, as opposed to some years ago where it was
strict black and white.
At the... there was this paper I was reading, about 19

(13:26):
neuroscientists, philosophers and computer scientists, and they
were trying to figure out how to define AI consciousness. And
these researchers focused on phenomenal consciousness,
otherwise known as the subjective experience. This is
the experience of being: what it's like to be a person, an
animal, or an AI system, if it does turn out to be conscious.
They argue that it is a better approach for assessing

(13:48):
consciousness than simply putting a system through
behavioral tests.
Say, if you were to ask ChatGPT whether it's conscious, or
challenging it to see how it responds, that might not be a
good indicator.
That's because AI systems have become remarkably good at
mimicking human behavior, so just judging by
that, it may lead to a false impression.

(14:11):
I think that's where, like, a Turing test or a Voight-Kampff,
where it is very cinematic and an easy way to identify with
that intelligence you're talking to, but whether it actually
possesses consciousness based on that is probably a faulty

(14:34):
premise. 100%, you're completely... yeah. And I have a lot of stuff.
Yeah.

Speaker 2 (14:39):
Yeah, yeah, yeah, I agree with you, yeah.

Speaker 1 (14:42):
Yeah, I agree with you. So they had selected some
theories and extracted from them a list of consciousness
indicators.
Now, these are some ways that consciousness might be
constructed with artificial intelligence.
Now, one of them is emergent complexity.

(15:03):
This is that some researchersbelieve that consciousness is an
emergent property arising fromcomplex interactions within a
system.
As AI systems become morecomplex and sophisticated,
particularly with advancementsin neural networks architecture,
consciousness may emerge in amanner analogous to how
consciousness arises from thecomplex interactions within the
human brain.

Speaker 2 (15:24):
I think that's some of what you got into last
episode.
Yeah, with David...
I quoted David Eagleman, the neurologist, a lot.
He talks about that a lot, about emergent properties.

Speaker 1 (15:34):
Yeah, that's exactly right, eagleman.

Speaker 2 (15:37):
Dude, he actually totally rules.
If you ever get a chance, watch his network series The Brain.
It's fucking brilliant, and it really just changed my entire
perspective on neurology and what the universe was and
whatever.
It's really good.
It's really good. I would do that.
Nice.

Speaker 1 (15:55):
Try that noodle.
Somebody has to.
I think you're doing a fine job, Skip.
Another one would be self-reflection and learning.
AI systems are already capable of learning from experiences and
adapting their behavior, engaging in what some might
interpret as a rudimentary form of self-reflection.

(16:15):
If self-awareness is a component of consciousness, then AI
systems with enhanced learning and self-monitoring capabilities
could potentially develop a form of artificial
self-awareness.
Another one would be Integrated Information Theory, or IIT.
IIT suggests that consciousness is related to the capacity of a
system to integrate information.
According to IIT, systems with a high level of integrated

(16:39):
information, whether biological or artificial, could be
considered conscious.
The human brain, with its vast interconnectedness, is believed
to have a high level of integrated information.
This theory suggests that if AI systems can achieve a similar
level of integrated information through complex architectures,
they could potentially develop consciousness.
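As a rough illustration of what "integrated information" is getting at, here is a toy measure: the mutual information between two halves of a small system. To be clear, this is not Tononi's actual phi, which is far more involved; the state sequences below are made up, and the point is only that a coupled whole carries information its parts don't carry separately:

```python
from math import log2
from collections import Counter

def mutual_information(states):
    """Mutual information (bits) between the two halves of each state pair."""
    n = len(states)
    joint = Counter(states)
    left = Counter(a for a, _ in states)
    right = Counter(b for _, b in states)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

# A "disintegrated" system: the two halves vary independently.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# An "integrated" system: each half's state tracks the other's.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
```

Here `mutual_information(independent)` comes out to 0 bits and `mutual_information(coupled)` to 1 bit: knowing one half of the coupled system tells you the other half, which is the kind of whole-beyond-its-parts structure IIT tries to quantify.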

(17:01):
Another would be GlobalWorkspace Theory theory, or GWT.
Now, this proposes thatconsciousness arises from the
global broadcasting ofinformation across different
specialized modules within asystem.
This theory suggests that AIsystems with architecture
similar to the human brain'sglobal workspace, allowing for
the integration and sharing ofinformation across various

(17:22):
subsystems, could potentiallylead to the emergence of
consciousness.
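A cartoon of that broadcast architecture might look like this (purely illustrative; the module names are made up, and real GWT models are much richer):

```python
# Cartoon Global Workspace: specialized modules compete for the
# workspace, and the winner's message is broadcast to all the others.

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # messages broadcast to this module

    def receive(self, message):
        self.inbox.append(message)

modules = [Module("vision"), Module("hearing"), Module("memory")]

def broadcast(message, sender):
    """Share the winning module's message with every other module."""
    for module in modules:
        if module is not sender:
            module.receive(message)

# Suppose the "vision" module wins access to the workspace this cycle.
vision = modules[0]
broadcast("red light ahead", sender=vision)
```

After the broadcast, every module except the sender holds the message, which is the "globally available" part of the theory: one subsystem's content becomes input for all the rest.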
And the last one I'm going to cover would be functionalism.
Now, functionalism is this theory in the philosophy of
mind that defines mental states by their functional roles,
regardless of their physical realization.
This perspective suggests that if an AI system can functionally
replicate the cognitive processes and behaviors

(17:44):
associated with consciousness, it could be considered conscious,
even if its underlying architecture is vastly different
from a biological brain.

Speaker 2 (17:55):
Yes, I touched on that a tiny bit before.
One of the things we did skip over was the definition of the
consciousness, or the... What was it, the consciousness
of the mind, or no?
The... What did you just say? The...

Speaker 1 (18:11):
Which thing? I'm sorry. No, no, no.

Speaker 2 (18:15):
The concept of the mind, that whole... fuck.
One of the most important things we didn't define was the
concept of the mind. That entire, like... that's an entire, like...
I looked it up, trust me, I looked it up too.
This was a... this is a big thing.
What is it?

(18:35):
What did you... go back, go back in your script for a
second. Okay, okay, I'm sorry.
The concept, the concept of the mind.
What was it?

Speaker 1 (18:45):
I'm sorry, I don't see that. Not phenomenal
consciousness, the sentience or the...
I'm sorry, I don't know which part you're talking about.
I'm sorry.

Speaker 2 (19:00):
No, because this was actually a huge part of this
field of science that I kind of intentionally skipped over,
because I didn't want to spend a lot of time defining it, but
it's super important.
I'll come back to it later.
I'll figure it out later.
Theory of the mind... well, I mean, there's that, because that's not...

(19:28):
I don't know. Maybe, maybe it will come back as we get through.
Maybe, because that's something we, like, probably should have
defined earlier on, but we didn't, and it's super important. But,
but, I mean, we're never going to be able to figure out any of
this stuff.

Speaker 1 (19:41):
So, I mean... no, no. Yeah, when we, when we get to
the end of this little bit, I think that kind of will fall
like a turd out of our pant leg.

Speaker 2 (19:52):
Nice.

Speaker 1 (19:54):
Now, obviously, those are some ways that consciousness
could come about and evolve within an artificial structure,
but there is at least one philosophical dialogue I want to
generally highlight before we depart from the conjecture and
analysis.
Okay, this is something that, according to a 2020 PhilPapers

(20:15):
survey (that's a collection of philosophers, Geordi), 62.42% of
the philosophers surveyed said they believed that this is a
genuine problem, while, in fairness, 29.72% said that it
doesn't exist.
And that is the hard problem of consciousness.
Now, this is something that was coined by cognitive scientist

(20:41):
David Chalmers, when he first formulated it in his hard
problem paper, Facing Up to the Problem of Consciousness, in 1995,
and then expanded upon it in The Conscious Mind, a 1997 book.
Now, this posits that there's an easy problem of consciousness
and a hard problem of consciousness, and when we get to the
hard problem, it,

(21:04):
it gets hard.
Obviously, this is all going to be difficult.
You know, it's tough.
It's a tough one, it's a real... Shocking.

Speaker 2 (21:10):
We're going to try and figure out the greatest
questions mankind has ever posed.

Speaker 1 (21:15):
This one's a difficult one to crack, folks.

Speaker 2 (21:17):
Okay.

Speaker 1 (21:18):
Shocker.

Speaker 2 (21:22):
We're trying to figure out the nature of
consciousness in our two episodes or more of our show, and
we don't have those kinds of... well,
I mean, you have that degree, but still. Yeah, well, you know,
that was a wing and a prayer, really.

Speaker 1 (21:41):
So by Chalmers' definition, there is an easy and
a hard problem. First,
we'll deal with the easy problem.
Now, easy problems are amenable to reductive inquiry.
These are logical consequences of facts about the world, like
how a clock's ability to tell time is a logical consequence
of its clockwork and structure, or a hurricane being a logical

(22:02):
consequence of the structures and functions of certain weather
patterns.
These are easy problems.
They're logically defined by the sum of their parts, as most
things are.
This is relevant to consciousness concerning the
mechanistic analysis of neural processes that accompany
behavior.
Examples of these include how sensory systems work, how

(22:23):
sensory data is processed in the brain and how data influences
behavior or verbal reports, the neural basis of thought and
emotion, and so on.
They are problems that can be analyzed through structures and
functions. But the hard problem, in contrast, is the problem of
why and how those processes are accompanied by experience.

(22:44):
In other words, the hard problem is the problem of
explaining why certain mechanisms are accompanied by
conscious experience.
So, for example, why should neural processing in the brain
lead to felt sensations of, say, hunger? And why should those
neural firings lead to feelings of hunger rather than some other
feeling, for example being tired or being thirsty?

(23:10):
Chalmers argues that experience is irreducible to physical
systems such as the brain.
An explanation for all of the relevant physical facts about
neural processing would leave unexplained facts about what it
is like to feel pain.
This is in part because functions and physical structures of any

(23:30):
sort could conceivably exist in the absence of experience.
Alternatively, they could exist alongside a different set of
experiences.
For example, it is logically possible for a perfect replica
of Skip to have no experience at all, or for it to have a
different set of experiences, such as an inverted visible

(23:50):
spectrum, so that the blue-and-yellow and red-and-green axes
are completely flipped.

Speaker 2 (23:58):
To quote Battlestar Galactica: I wanted to see X-rays.

Speaker 1 (24:05):
Now, as opposed to, like we said, a clock or
a hurricane or other physical things,
the same cannot be said about consciousness. The difference is that
physical things are nothing more than their physical constituents, but
consciousness is not like this.
Knowing everything there is to know

(24:25):
about the brain, or any other physical system, is not to know
everything there is to know about consciousness.
Consciousness, then, must not be purely physical.
Now, I bring this up because Chalmers' idea contradicts
physicalism, sometimes labeled as materialism.

Speaker 2 (24:43):
Which we touched on last time.

Speaker 1 (24:44):
Which we touched on a little bit last time.
This is the view that everything that exists is a
physical or material thing, so everything can be reduced to
microphysical things.
There's always, like, a way to break it down and then, based on
those structures and functions, it logically flows, one to the
other, one to the other.

(25:05):
But with this theory of consciousness and the complexity
of this problem, Chalmers suggests that this isn't
possible.
Now we'll get into some further details of that, but let's just
keep going on with this for a bit.

Speaker 2 (25:18):
This doesn't sound like a... Okay.
Here's the problem with that, though.
Like, you can't give a scientific paper and then
all of a sudden shrug your shoulders and go, I don't know,
maybe it's not real, you know what I mean?
Like, you get into some of that What the Bleep Do We Know
territory at that point.
Let me get a little further in and let's see where we go.

(25:42):
That's fair.

Speaker 1 (25:43):
That's fair.
According to physicalism, everything, including consciousness,
can be explained by appeal to its microphysical constituents.
But Chalmers' hard problem presents a counterexample to
this view, since it suggests that consciousness, unlike other
phenomena like swarms of birds, cannot be reductively
explained by appealing to its

(26:04):
physical constituents.
Thus, if the hard problem is a real problem, then physicalism
must be false, and if physicalism is true, then the
hard problem must not be a real problem.
Now, proponents of the hard problem argue that it is
categorically different from the easy problems, since no
mechanistic or behavioral explanation could explain the

(26:26):
character of an experience, not even in principle.
After all of the relevant functional facts are explicated,
they argue, there will still remain a further question: why is
the performance of these functions accompanied by
experience?
Not only is there a hard problem, it actually has moral

(26:48):
consequences.
Now, we generally agree that, say, chairs or this microphone
do not have conscious minds.
We generally agree that those that are our neighbors, or the other
podcaster I'm talking to, do.
Now, we often assign conscious minds to our closest mammal
relatives, like chimps, dogs, pigs, thus to give them moral

(27:12):
consideration, more than, say, a fruit fly or even a fish.
However, we actually have no idea which things are conscious.
You cannot prove that my fellow podcaster, Skip, is conscious.

Speaker 2 (27:48):
Again, this is one of those...
Like, you know it's a thought exercise. You know, you know, I
know it's a thought exercise, but it's one of those, like,
masturbatory, like, I'm-the-only-important-person-in-the-universe
exercises, you know? Like, get the fuck out of here.
I mean, just by definition, the idea of consciousness, the idea
of sentience.
I mean, literally, we talked about this last week.

(28:10):
It's like shared experience.
Consciousness is about shared, understood experience between
everyone involved.
And so when people start being like, well, maybe it's a

(28:31):
hologram, maybe we're living in a simulation, you're like, get
the fuck out of here.
You think you're way more important than you fucking are.
Get the fuck out of here.

Speaker 1 (28:40):
But consciousness also has to function without the
representation of another conscious mind.
Just because you would be alone on an island, you would not be
unconscious, because you'd have no other conscious beings around
you.
Your consciousness is still extant and viable.

Speaker 2 (28:59):
Yes. I mean, yes, by modern definitions,
you're correct.
Consciousness is, by definition, a subjective experience.
However, considering we're able to communicate with each other
and share language and art and just interact with each other,
there has to be at least similarity between our

(29:23):
experiences.
So it cannot possibly be completely...
I mean, yes, it's subjective, but we're all experiencing the
same sensory input, and we're all, at least for the most part, and
obviously, and we'll probably get to this, there are

(29:45):
differences in this, but we're all sort of, like, experiencing it
the same way, or at least in a similar way, and there are
really great examples of how we don't,
that will help flesh this out as well.
But most of the time, when people talk about, like, oh, this
is a singular, you know, experience, I'm the only person

(30:08):
in the universe and everyone else is a fucking fantasy of
mine... it's just masturbatory, fucking, like, hubristic bullshit
that I do not entertain. There was a strong rebellion from that
conceptual framework.

Speaker 1 (30:27):
For sure. I'm gonna...
I'm...
I found this, actually. It's this Reddit comment, um, that I
wanted to share.

Speaker 2 (30:34):
Well, now we're really digging into the real science of
it here. You want to get down to the hardcore facts? Reddit is
where to find us. Well, it's better than stepping up against
4chan.
Yeah, because they've seen 4chan.

Speaker 1 (30:50):
Oh yeah. If it was a video of a cat getting smashed
with a cinder block, or an anime girl getting railed by a demon,
maybe it'd have strong complexities.
Alright, let me just read this out.
The crux of the hard problem is that even if you were to figure
out the so-called neural correlates of consciousness, the

(31:14):
informatic pattern required for you to be conscious, you could
still not prove that other people are conscious, except by
referencing the fact that you have those very same correlates.
And that's weird.
Most people who believe in a hard problem do not deny the
explanatory power of science.
They don't deny that prodding the three pounds of meat inside

(31:35):
your brain can cause experiences, but they do point out that in
no way can you jump from neuroscience to phenomenal
experience.
So what's going on?
What is a first-person perspective?
It's formed by patterns of information.
Does which matter composes it dictate who experiences that

(31:56):
consciousness?
If I were to use a nano-Xerox to copy every brain cell, would my
consciousness be split in two?
What would that be like?
Is this quote-unquote isolated feeling of the first-person
frame a kind of evolutionarily selected illusion, intended to
neurally shackle us to try to maximize our fitness?

(32:16):
Does conscious experience, particularly the feeling of
volition or willing, affect my behavior?
Or does consciousness come after the fact, after my brain
is done making all of the relevant choices subconsciously?
Is it simply an epiphenomenon or a byproduct?
If I upload my mind into a virtual world, is it still me in

(32:39):
there?

Speaker 2 (32:40):
Can I?

Speaker 1 (32:41):
morally kill animals?
Does the thermostat feel things?
Should an advanced AI be given moral consideration?
Are we even morally allowed to program AIs with
phenomenological experiences?
Well, I mean, there's a lot to tackle in just that last couple
of sentences, and these questions have different answers

(33:04):
depending on the perspectives you choose to take.
Let's look at some of these. One: reductive eliminativism, more or
less the null hypothesis. There is no
consciousness outside of what science can study.
Everything else is either confusion or illusion.
Although he might reject being placed in this category,
Metzinger fits here, in my opinion.

(33:30):
Two: materialism. Yes, we have all of these qualia things,
but they are completely equivalent to their neural
correlates.
Just because we can't see how yet doesn't mean it isn't true.
Three: dual-aspect monism. There is one reality, and two
aspects to it, the physical and the mental.
Neither gives rise to the other.
They're supervenient on something else we can't see.

(33:55):
And four: panpsychism. Everything is conscious, more or less, even
that thermostat.
It all feels, it's all thinking.
Now, there are other perspectives.
Now, this person, they said they used to think
the hard problem was an ontological question, what kinds
of things there are, but they've changed their mind.
The hard problem consists primarily of a series of

(34:16):
epistemic dilemmas.
How do you know you're conscious?
How do you know others are conscious?
How do you know your introspection of your own mental
state is accurate?
End Reddit thread quote.

Speaker 2 (34:32):
Okay, well, okay, yeah, there's a lot to deal with
there.
Some of it easily addressed, some of it I would like to know
more about to answer it. Jeez, yeah, okay.

Speaker 1 (34:49):
I have other stuff, if you want to think on some of
that.

Speaker 2 (34:53):
Oh yeah, Let me chew on that, yeah.

Speaker 1 (34:55):
No, by all means.
Yeah, this is kind of like the reverse of when you were talking.
I was trying to think of all these things and typing and
searching, and it's like, finally, by the end of the episode, I was
able to, like, here.
Here's some other things I wanted to point out. Absolutely,
yeah. Like,
the hard problem of consciousness highlights our

(35:17):
inability to bridge the gap between subjective experience
and observable behavior.
We have experiences of our own consciousness directly, yet we
never experience anyone else's.
Instead, we infer (that's the key word) consciousness in
others, based on behavior, language and perceived
self-awareness.
This inference is so deeply embedded in human action that we

(35:40):
rarely question it.
Now, some physicalists have responded to the hard problem by seeking
to show that it dissolves upon analysis.
Other researchers accept the problem as real and seek to
develop a theory of consciousness's place in the
world that can solve it, by either modifying physicalism or

(36:01):
abandoning it in favor of an alternate ontology such as
panpsychism (which, that's maybe even beyond my thing) or a
dualism:
there's both a mind and a body, which, if we extrapolate that
into religious terms, you have a self and a soul.
Could you possibly have three different things,
a self, soul and mind?

(36:22):
That's a whole nother thing.
A third response has been to accept that the hard problem is
real, but deny human cognitive faculties can solve it.

Speaker 2 (36:33):
Okay.
Now the philosopher Peter.

Speaker 1 (36:37):
Now, the philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of human organisms. He says the hard problem isn't a hard problem at all; the really hard problems are the problems that scientists are dealing with. The philosophical problem, like all philosophical problems, is

(36:58):
a confusion in the conceptual scheme. Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly.

Speaker 2 (37:12):
So that's really interesting too, because I looked into some of those same people, and I went back to Eagleman when he was talking about emergent traits, emergent phenomena. He actually cited
(37:33):
Chalmers and others in that sense. But he was like, okay, think about it. Human flight: the idea that human beings can fly in a plane is crazy if you really think about it from a physical standpoint, right? And if you look at the

(37:57):
component parts of a plane, you have metal and you have carpet, you have all these different things. Individually, how does that equal flight? Because when you put them together in the right order, with the right different components pushing on each other

(38:19):
, making each other go in a mechanical way, then you have an emergent phenomenon in flight. But individually they make no sense as to how we have a phenomenon like that, until you see how all the little tiny pieces work together.

(38:40):
And I feel like some of those commentaries sort of overlook that. Maybe sometimes they reinforce it. But anyway, continue, go, go.

Speaker 1 (38:53):
Well, again, we could go on. There's a lot of different schools of thought just on this singular problem, and a lot of other things we could get into: more about philosophical zombies, or the argument of what knowledge is, or the mind-body dualism debate.

(39:14):
We could also get a lot into type-A materialism, type-B materialism, type-C materialism, other monisms and illusionisms.

Speaker 2 (39:24):
There's a lot just in this basic idea of trying to figure out how to cognitively understand consciousness, just in and of itself. How to cognitively understand cognition, which is like the craziest thing to try and

(39:46):
think.

Speaker 1 (39:48):
How to think about thinking, yeah. I love thinking about these things. I mean, that's why I spent my college years doing it. And at some point there is, one might say, a cascading failure, in that you tumble down the

(40:14):
mountain of knowledge, trying to break things down: what is real, what isn't real, what can we experience, what can we know?
And the climb back up that mountain, if you're really taking it finger by finger and step by step, can be a

(40:34):
lifelong journey, depending on how deep you want to go. Most people just intuit and infer the road, and they walk that path every day. But there is another way of viewing things that

(40:55):
you know, some might say it isn't really a worthy endeavor, or that there is a more general understanding of the world that you're trying to break down needlessly. But I find the complexities of the arguments and the ideas

(41:15):
and the thoughts fascinating and riveting to me. I just wanted to juxtapose this with what you kind of laid out in the first episode: if we build it out technologically, this will, you

(41:36):
know, if you keep putting it together, this will come about. And I just wanted to show, hey, here's another idea that maybe is a little more ephemeral, and how do we decipher that? Just something else to think about in conjunction with the first episode.

Speaker 2 (41:55):
100%.
That's really important.
You're absolutely right.

Speaker 1 (41:59):
Yeah, but again, we could go on and on and on. Oh, we could go round and round forever. Yeah, we really could.

Speaker 2 (42:10):
I could just keep going on and on.

Speaker 1 (42:12):
I think it might be fun to look at some of the popular iterations of artificial intelligence in popular culture and see how they play off of these particular elements, and how we view those interactions between humanity and a

(42:36):
consciousness not our own, that isn't from outer space or another dimension.

Speaker 2 (42:43):
Well, sometimes it's from outer space.

Speaker 1 (43:00):
Well, when it's from outer space, another place, and it's mechanically based, it feels more like you're dealing with an alien than with the artificial intelligence, the mechanical mind, that we're kind of talking about. Do you know what I mean?

Speaker 2 (43:20):
yeah, yeah.
Well, I didn't actually include any of that. Yeah, I mean, I thought about Gort from The Day the Earth Stood Still, but I was like, no, that doesn't really count.

Speaker 1 (43:29):
artificial intelligence came about, or, you know, juxtaposing it with our own minds. But it's really kind of not the same as if we give

(43:53):
rise to AI. Yeah, I mean, okay, but that's a great example of the weird debate we're going to have now.

Speaker 2 (44:03):
Because if you take the examples of Nomad from Star Trek: The Original Series, or V'Ger from Star Trek: The Motion Picture, which is basically the same fucking character, they were Earth-created artificial intelligences that then merged with alien artificial intelligence and

(44:28):
then lost its original purpose. Does that count? Because it is human-born, it is Earth-born; it found its own consciousness and elevated its consciousness with other artificial consciousness that is alien, and then came back to

(44:52):
Earth, or at least came back into contact with people, right?

Speaker 1 (44:56):
I think I would lean towards no.

Speaker 2 (45:01):
I didn't include those.
It goes outside the scope of what we're talking about.

Speaker 1 (45:04):
It's a similar cyborg situation to, like, a RoboCop.

Speaker 2 (45:09):
Oh yeah, we definitely don't include RoboCop.
No, no.

Speaker 1 (45:12):
We can't include RoboCop.

Speaker 2 (45:14):
That's not fair.

Speaker 1 (45:15):
The merging of machine and man into something. I mean, especially in the way that RoboCop is portrayed, it's not about creating a new consciousness; it's more like a modification of an existing human consciousness. Oh, absolutely.

Speaker 2 (45:33):
It's more about the mechanizing of his brain, and not successfully, or even in that complex a way of doing so. Yeah, he still has his own brain; that was kind of the point. And that was the difference between him and, like, ED-209: he's not artificial, he's organic and

(45:56):
still human.
He just has things implanted in him that he's conflicting with.
That's way different.

Speaker 1 (46:03):
Way different. And I also don't think the EDs meet any criteria for sentience or consciousness. They're drones.

Speaker 2 (46:11):
They're not. Exactly. Yeah, they're just drones, and that's the same thing. They're not meant to think on higher levels. Okay, this is one of the reasons we did this: talking about RoboCop is important because it doesn't fit into these categories, into these things we're talking about. Because RoboCop was created to have the ability to

(46:35):
think on his own, on Murphy's, you know, with a cop's brain, with his own experiences, with his own judgments, but within this corporate-mandated interest. But he was never meant to be an artificial intelligence; he was

(46:58):
meant to be an enhanced human intelligence, and that's a completely different thing, which is important to define if we're going to talk about these things. 100%.

Speaker 1 (47:10):
Is someone there?

Speaker 2 (47:12):
No, they're just somebody.

Speaker 1 (47:15):
I can't hear anything.

Speaker 2 (47:17):
Sorry, oh, okay, good. As long as you can't hear anything, then it's fine.
As long as you can't hearanything, then it's fine.

Speaker 1 (47:21):
I just saw you turn your head.
That's why I was.

Speaker 2 (47:23):
Yes, somebody is sawing something. It's not a body, is it? Well, there's no apartment next door to me on the right, so I'm very curious. But anyway, let's just keep going.

Speaker 1 (47:40):
I think a similar question again: at what point do we count it? I did say Ghost in the Shell. Ooh, good, that's good. Where again it's a cyborg, where consciousness is more than just the

(48:03):
ones and zeros programmed to be. Is it Major Kusanagi? I can't remember.

Speaker 2 (48:22):
They just call her the Major, at least in the Scarlett Johansson one. It's Scarlett Johansson, thank you. Definitely, yeah, it's ScarJo. That's a terrible, terrible version of that fucking story. It's so bad, it's the worst thing you could possibly do.

Speaker 1 (48:42):
I mean, I think it is so bad also because of what Ghost in the Shell means to anime fans, and to have it portrayed in that way.

Speaker 2 (48:53):
Yeah. It's almost like the Attack on Titan movie; it just so misses the point completely.

Speaker 1 (49:03):
Everyone forgets that even exists. Yeah, I mean, I don't think I ever saw that. I never got deep enough into Attack on Titan to go that route.

Speaker 2 (49:16):
First couple of seasons are great. The movie is terrible; it doesn't understand itself at all. But the show is good until you get to some of the later lore, because that actually deals with artificial intelligence as well. But we're not going to talk about that today. We have a lot of other things to get to.

(49:39):
I didn't know.

Speaker 1 (49:39):
That isn't about the large naked people eating people?

Speaker 2 (49:44):
Yes, yes, but then it turns out that those are... I don't want to ruin it for you, but trust me, it also deals with this kind of thing in a certain sense as well. So just watch it. You'll like it up until the last couple of seasons. You'll enjoy it a lot, actually; I think it's very good until the last couple of seasons, and then the movie ruins everything.

(50:06):
But it wasn't American, so at least we have that. It was Japanese, and they fucking ruined it themselves, so fuck them.

Speaker 1 (50:16):
I mean, yeah, at least we didn't cast random white actors to take the roles. We didn't do that.

Speaker 2 (50:22):
Yeah, it wasn't Chris Pratt playing one of the main characters; at least we have that. Okay, so it's interesting that you say that too, because I'd like to just for a moment interject. I did a bunch of research into this, and I actually did find some really interesting academic papers that deal with this concept as well, specifically depictions of artificial

(50:46):
intelligence in sci-fi and fantasy, but specifically sci-fi. I don't like getting into the fantasy thing because it doesn't mean the same thing. We're not going to talk about C-3PO or R2-D2. Are we not? Well, no, I don't think... I mean, we should. You and I should talk about droids, because that actually does bring up a lot of

(51:07):
really interesting, weird ethical questions. But I think only because there are a few elements there.

Speaker 1 (51:17):
Fantasy depictions of robots? Yeah, I think they might be the only fantasy ones that I plan to bring up. Okay, all right. Yeah, I mean, they're not the only fantasy depictions of artificial intelligence in the cybernetic or robotic sense, obviously, but they are

(51:38):
weird and unique. Yes, I think that's the only reason to discuss them in the realm of this thing. Let's just put a pin in that and come back to the droids in a second.

Speaker 2 (51:52):
Yeah, right, because we could definitely rant about that for a while. Let's see. So this is from a paper by Isabella Hermann called "Artificial Intelligence in Fiction: Between Narratives and Metaphors." Quote: taking science-fictional AI too literally distracts from the real-world implications and risks of AI, risks that are not about conscious machines but about the scoring, nudging, discrimination, exploitation and

(52:38):
surveillance of humans by AI technologies through governments and corporations.
Ai in science fiction, on theother hand, is a trope as part
of a genre-specific megatextthat is better understood as a
dramatic means and metaphor toreflect on the human condition

(52:59):
and socio-political issues beyond technology.
So AI in films often serves plots of machines becoming human-like and/or a conflict of humans versus machines. Science-fictional AI is a dramatic element that is a perfect antagonist, enemy, victim or hero, because it can

(53:24):
be fully adjusted to the necessities of the story. But to fulfill that role, it often has capabilities that are way beyond actual technology, be it natural movement, sentience or consciousness. If science-fictional AI is taken seriously as a representation of real-world AI, it provides the wrong impression of what AI can and should do in the future. And I
(53:47):
impression of what AI can andshould do in the future, and I
think that's really poignant, because we can have these debates and start talking about these things in fiction, and we're going to, and how important they are and the

(54:08):
metaphors they represent. But real-world AI is actually, ironically, far more insidious in its application than the craziest insidious stories that have ever been written about it, and I think that's something to really keep in mind going forward. Yeah, I think that's a lot of what

(54:29):
we'll get into.

Speaker 1 (54:30):
But you know, I think there are distinctly utopian and dystopian takes on artificial intelligence, asking you to empathize with

(54:51):
that, quote-unquote, fake human or robotic person. As Data would say, a synthetic... no, as Ash would say... no, I'm getting it wrong again: Bishop. What did Bishop say? He liked to be referred to as an artificial person.

Speaker 2 (55:10):
Back to Lance Henriksen, which, by the way, The Artifice Girl, one of the last, I have to imagine, great Lance Henriksen roles.

Speaker 1 (55:19):
I did not expect him to be in it, and he was great. I didn't know he was in it.

Speaker 2 (55:22):
He shows up and you're like, oh, that's great. Oh, and it skips ahead suddenly 50 years and you're like, wait, what?

Speaker 1 (55:28):
Oh hey, it's like the two Skips. I like how it jumps, and it does skip twice.

Speaker 2 (55:33):
That's cool. I like how it jumps. It does skip twice.

Speaker 1 (55:34):
Yeah, let's hit the points of what this movie's about, because we're kind of in and out. I love a tight 90 that actually has something to say.

Speaker 2 (55:43):
Yeah, and it's all in basically like three rooms.

Speaker 1 (55:48):
Yeah. And you have what, five actors total? Six actors, yeah. Well done.

Speaker 2 (55:57):
Yeah, well done. Especially for a Lance Henriksen film later on, because he made a lot of shit.

Speaker 1 (56:00):
That one was like, oh okay, you're saying something here, that's cool. I'm assuming the director was like, all right, I love Lance Henriksen, can I get him in my movie? A hundred percent, as opposed to Lance Henriksen's agent going, all right: Serbian action film, Bosnian dragon film.

Speaker 2 (56:21):
Well, I think there was a reason he was in a wheelchair in that, though, at the end. I think it's sundown for Lance Henriksen, honestly. So seeing him in something good, great, that's awesome.

Speaker 1 (56:41):
Yeah, it was good. We'll get to that, I think, when it comes up.

Speaker 2 (56:44):
Yeah we'll get to it.

Speaker 1 (56:46):
But just to put a pin on it: when they show AI representations, it's either a smiling face that is a little more human-like that we can identify with, or it is that terrifying visage of the killer robot. You kind of need these pantomime heroes and villains that are easily recognizable and that we can focus on, as opposed to

(57:10):
the way AI can insidiously... well, not insidiously, I mean, it can be both. The things that AI does now, there's many wonderful things for humanity. There's a lot. I mean, one thing the AI in The Artifice Girl does is combat human trafficking.
(57:31):
like it's combating humantrafficking.
One thing like for ArtificeGirl that it does is like it's
combating human trafficking, youknow, identifying pattern
indicative to human trafficking,helping law enforcement agents
detect and disrupt thosenetworks.
I mean it's disaster response,wildlife conservation, realistic
characters in fiction there's,you know, personalized learning,
improving accessibility,enhanced public services,

(57:53):
helping customer service in general. Scientific discovery, like protein structure prediction, climate prediction and trying to mitigate climate change, different medical breakthroughs.

Speaker 2 (58:11):
Can I push back on that for a second?

Speaker 1 (58:13):
Yeah. Because you're giving with one hand but cutting off with a huge sword with the other.

Speaker 2 (58:23):
Well, in the current American administration, yes, that's exactly what's happening here. They're trying to replace actual thinking human beings, with resources and time and actual experience, with AI

(58:44):
models that are 100% not ready to predict these things or help with these things. And we're already seeing the immediate results of this, with so many people dying from storms that should have been predicted earlier on and acted

(59:05):
upon.
The problem with that is all of those things rely on data, and AI doesn't create data. It consumes data. People create data. And so, if you're going to try and use predictive models, you have to have actual data to build those on, and if you don't have that data, you can't predict these

(59:29):
things.
It's going to start hallucinating. So those are not good things that come out of AI, but they are ways you could utilize it for the good of humanity, theoretically, if it worked the way the tech companies say it works. But it doesn't work that way. It does not have enough data to be able to do what they say it does, and it can't get more data if you keep cutting people out of the equation. Maybe someday it could do that, but that's definitely not where we are right now. But they are already trying to implement it like it's just there, and we're already seeing the horrible, tragic results
But they are already trying toimplement it like it's just
there and we're already seeingthe the horrible, tragic results

(01:00:13):
of that.

Speaker 1 (01:00:15):
It's like trying to ride your bike, but you haven't put the wheels on yet. Or you've never learned to ride a bike. Yeah, but not only did they not have a bike properly made and the skills to ride said bike, they decided that they're going to get rid of all the cars and just make everyone ride bikes. Yeah.

(01:00:36):
So, oh, we were talking about cyborgs and why they don't fit. And again, Ghost in the Shell straddles that line, because there's the digitization of Major Kusanagi and her ghost in the shell, her cybernetic spirit that exists inside the mainframe, and there's the AI that's created

(01:01:00):
inside it that they're trying to get, which I don't think is either. It's portrayed as the villain in ways, but it's really just kind of trying to live. It doesn't function the same way that a Westworld or a Terminator does, and I think it's trying to say more about what it is to be human rather than what it is to be an artificial living being.

Speaker 2 (01:01:24):
Right, to be a unique consciousness and a unique entity in and of itself.

Speaker 1 (01:01:29):
Now, I think there are ways that you can portray AI within a cybernetic structure, for example, Upgrade. Did you ever see Upgrade?

Speaker 2 (01:01:41):
I've seen it.
I don't remember much from it.

Speaker 1 (01:01:44):
Yeah, the basic idea of Upgrade is that a man is paralyzed. He gets a neural chip implanted into the lower part of his brain that gives him the ability to walk and use his limbs again. But there is an artificial intelligence that is in control, and obviously it takes a dark path and turns to its own evil

(01:02:08):
ends, one might say, without giving it away. Everybody knows what 2001 is. Everyone knows The Matrix, Terminator: kill the robot, Cyberdyne from the future, go back to kill. You know, we get that. But you might not know what happens at the end of Upgrade or The Artifice Girl.
I don't want to necessarily give everything away, if it's something someone else can still experience for

(01:02:30):
themselves.

Speaker 2 (01:02:31):
But the overall concepts, at least.

Speaker 1 (01:02:34):
That'll say a lot. At least that is an experience of the integration of technology and artificial intelligence within a human body, forming a cybernetic relationship just on the basis of the core concept. But it isn't about being a cyborg. A lot of it's about, you know, you're giving this artificial intelligence this

(01:02:57):
much power, and look what it does with it, and you have no control over that. But at least that's an example of the cyborg premise working in conjunction with the core idea of what we're trying to talk about with artificial intelligence. Transformers, though. Transformers are robotic entities, but do they give a

(01:03:18):
shit about discussing what that means?
Does that have anything to say?

Speaker 2 (01:03:22):
Their entire world is cybernetic life. In Transformers: The Movie they show, like, weird ecosystems where they have cybernetic fish. There's no organic life on Cybertron.

Speaker 1 (01:03:34):
At that point it's no longer cybernetic.

Speaker 2 (01:03:36):
That's why it's more like fantasy than it is like science fiction. They used that term, I think, partially because it was new and popular at the time.

Speaker 1 (01:03:43):
But there is no organic life there; it is all mechanical. But I also don't count it, one, because it's alien. So the spark of consciousness and sentience in these machine people that turn into... You mean the AllSpark? Yes, the AllSpark. If only Marky Mark was here to just tell us more about it, that

(01:04:05):
would be.

Speaker 2 (01:04:07):
Marky Mark and the AllSpark. But no, you're right, it doesn't have anything to say. No, it has nothing to say. I mean, it could. If Michael Bay wasn't the one in charge of that franchise, it could have something to say and would be really interesting. But I'm sure there are some comics that have something to say. Yeah, maybe. What, a 40-, 50-year run?

(01:04:29):
You know, but I think you and I wish that were true. I don't think it is true. I mean, Jason Aaron has never taken a crack, Grant Morrison has never gone after Transformers, you know what I mean? You're never gonna get anybody to say anything about it, especially at this point, now that Michael Bay has made like 17 of those movies and they're all completely superficial and awful.

(01:04:50):
It's just not gonna happen.

Speaker 1 (01:04:52):
But point taken. How do you want to approach this? Because I kind of had them listed, as we discussed off air a little bit, like The Good, the Bad and The Grey with Liam Neeson. Yes, someone's gotta fight wolves with broken bottles tied to their fists and then be as ambiguous as Inception at the

(01:05:15):
end.

Speaker 2 (01:05:16):
By that I mean not at all.
Well, okay, it's a good question, because I had examples of stories and then characters. I did not write this in any sort of logical order. So here we go. We are going to talk a little bit about examples of AI in pop culture. I mean, there are so many we could go into. It's one of the most pervasive things in science fiction

(01:05:38):
especially, and sometimes in fantasy. We're probably not going to tackle WALL-E or Johnny Five necessarily, mostly because of... but those are things we could definitely talk about. Or batteries not included, or, you know, all sorts of things that we really wish we could talk about. Weren't batteries not included?

(01:05:59):
Weren't those aliens?
They were alien robots, yeah, but you're right.

Speaker 1 (01:06:04):
But you're right, we've already sort of excised aliens.

Speaker 2 (01:06:06):
You're right, but for the scope of this, just follow your bliss. Well, I'm not going to yuck your yum. AI in films often serves... I haven't said this yet, have I? I don't think so.

Speaker 1 (01:06:21):
I don't know, have you? My subjective experience of what?

Speaker 2 (01:06:24):
This podcast is... oh, get the fuck out of here. AI in films often serves plots of machines becoming human-like and/or a conflict of humans versus machines. This reflects your idealizing in a human representation, giving, like, emotions and empathy to the machine.

(01:06:45):
I think that's a lot of the utopian versions of these tales, and the exact opposite is where a lot of times you're giving a human-ish face to the killer robot to highlight those dystopian elements, à la Terminator. Yeah, and we're going to get into the weird inherent sexism in that as well when we get to

(01:07:06):
the whole Ex Machina thing, because that's a huge part of this that I did not really take into account as a whole, but it definitely is a problem. Science-fictional AI makes a really good enemy, or even hero sometimes, like you're saying with Ghost in the Shell; it can be tweaked for the narrative.

(01:07:28):
No matter what you do. But to fulfill that role, it often has capabilities that are way beyond technology that exists, which is why it's often set in the future or the near future. Like Max Headroom. God, we haven't even talked about Max Headroom; that's kind of a big one. I think we should do a whole episode on Max Headroom.

(01:07:49):
Personally. It's always, like, just in the future, or right now but nobody knows that this exists. And these capabilities also come in the form of sentience or consciousness, which obviously are really debatable terms. Take that for what you will.
Science-fictional AI as a representation of real-world AI

(01:08:13):
kind of gives you the wrong impression of what AI can and should do, obviously, if you've seen Terminator or The Matrix, right? But here's just a couple of examples from history that I think are relevant today and exemplify this type of AI. "The Feeling of Power," which is a 1958 short

(01:08:37):
story by Isaac Asimov.
In it, in a futuristic society, humans rely on AI for everything; they don't even educate people anymore. And then a human being discovers arithmetic, trains himself to do multiplication, and his knowledge starts to snowball. Then he becomes sort of a vocal advocate of human knowledge, and

(01:09:03):
at first the people in charge write off what he dubs human math as useless. But then somebody in the military sees it as valuable, because if humans can understand calculation, they can replace expensive computers on warships. It's a short story, so it doesn't really need to get into the universe too much, but it's a very obvious reverse of how we

(01:09:25):
see AI now.
But that was in 1958, right? Asimov is actually one of the most pioneering people we're going to talk about in this; the Three Laws of Robotics alone were a huge thing in storytelling. So now I'm going to get a little obvious, okay, but I have very good reasons why I want to talk, for just a brief moment, about Frankenstein by Mary Shelley.

Speaker 1 (01:09:46):
I was just wondering if this was going to be brought up.

Speaker 2 (01:09:49):
I know, I debated this because it is obvious, but I think it's really fundamentally important to discuss.

Speaker 1 (01:09:59):
And I think it's something we can debate: the fundamental Promethean principle behind Frankenstein applies to everything else.

Speaker 2 (01:10:06):
In many ways. You know what, in talking about AI, Frankenstein may be the most important fundamental thing we talk about. It applies in so many different contexts and so many different eras, even though everybody knows Frankenstein. I mean, the subtitle for Frankenstein is The Modern

(01:10:33):
Prometheus.
It is a far more nuanced text than just that, or even the Universal Pictures take. Victor Frankenstein literally creates an artificial intelligence from spare parts, then neglects it. It's pretty relevant today for that context alone.
In both the Prometheus tale and the story of, let's say, Adam

(01:11:02):
and Eve, a game-changing element is gifted to mankind, and now we're on our own, able to guide our fates for better or worse. While Frankenstein does tell that story, Victor has harnessed an ability that previously was possessed only by the gods, and now man's fate has changed forever with the ramifications of that knowledge.
It also shows us the danger of intentionally unleashing a

(01:11:24):
world-changing discovery without an ethical framework around it. Just because he can, should he, right? I would argue that Frankenstein's monster refers not just to the creature he created, but also to his hubris and the choice he made. I think Frankenstein's monster also metatextually refers to his

(01:11:46):
folly.
He is the monster. His actions are the monster, his choices are the monster, and that's something we really need to think about.
And then we can get into some of the fun stuff, because this is really, really interesting.

(01:12:07):
The longing for creation, as we talked about last episode, is connected with the anxiety that the creature will grow over our heads, that we will lose control and finally be dominated by it.
This sort of primeval desire and fear, for which Asimov literally coined

(01:12:32):
the term "the Frankenstein complex," kind of defines 20th- and 21st-century AI fiction. We talked about that before with the golem and all sorts of other examples throughout human history: the idea that we want to sort of beat the gods, become our own gods. I mean, Battlestar Galactica does a really great job of talking about this, where we have to create and, by doing so, become the stewards of our own fate. But now, because of that,

(01:12:54):
we don't have the gods toprotect us or save us or keep us
pure, and now we're on our ownand we are going to fuck it up
or not.
Let's find out.
Frankenstein is one of the first mainstream examples of artificial intelligence in narrative structure, and it's really relevant because coders and big tech went all in on AI

(01:13:14):
implementation, ignorant of or, perhaps more cynically, intentionally ignoring the ethics around it, free of any guidelines or regulation, motivated by greed.
Consequences be damned.

(01:13:37):
And this is from a paper called "Robot Rights? Let's Talk About Human Welfare Instead," by Birhane and van Dijk, in 2020.
Quote: once we see robots as mediators of human being, we can understand how the quote-unquote robot rights debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labor exploitation and erosion of privacy, all impacting society's
(01:13:59):
exploitation and erosion of allprivacy, all impacting society's
least privileged individuals.
We conclude that if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling and deploying

(01:14:22):
such machines, remain the most pressing ethical discussions in
AI.
So, all that being said, just like Victor Frankenstein, people
like Elon Musk, Anthropic, OpenAI, Google created this
thing.
They saw a financial model for it which didn't really make a
lot of sense.
It still doesn't make a lot of sense, and who gives a shit what

(01:14:43):
the implications are on humanity.
And those that go out there and are like, well, no, this will
mean we have to work less, this will mean that things will be
easier for people, we can solve problems.
Most of them know that's not true, and it's pure hubris.
Just like Victor Frankenstein.
Let's go down that list we're talking about, Jake.
Let's go down that pop culture list, because each of those, I

(01:15:05):
think, has different things to say.
I think that's great.

Speaker 1 (01:15:07):
I think we should probably stop here.
Okay, we've got two hours and we're just getting into the...
we're just getting to the premise of the episode.
Yeah, that's fucking great.

Speaker 2 (01:15:20):
We knew this was going to be a long one, so it's
fine.

Speaker 1 (01:15:22):
I think the fact that we are excited to talk about
these, yeah, we're not going to have any problem.
Fantastic bringing up Frankenstein and also James Bond.

Speaker 2 (01:15:31):
That was, that was important.

Speaker 1 (01:15:32):
I don't think James Bond is going to be in the
episode.

Speaker 2 (01:15:35):
No, but we did talk about that for, like, half an hour.
That's how it always goes.
Well, that's what we've got for now.

Speaker 1 (01:15:40):
Yeah, like Frankenstein, we came up with
this pod and we decided to delve into it and create it, but it
has grown beyond our wildest imaginations and has taken over,
so it will definitely need to be pushed into next episode.
So do come back, as we will delve into the thoughts and
ideas, the portrayals of both good and ill artificial

(01:16:03):
intelligence within pop culture.
Next time, on Dispatch Ajax.
If you wouldn't mind liking, sharing, subscribing, we'd really
appreciate it.
Tell whatever bot that you can find.

Speaker 2 (01:16:14):
Do not encourage them to use AI.
It's in everything.

Speaker 1 (01:16:19):
If you want to give us five quarts on the podcatcher app of
your choice, ideally Apple Podcasts.
It's the best way for us to get heard and thus seen.
We'd really appreciate it.
If you do want to hear about our thoughts on artificial
intelligence in popular culture, do come back next episode.

(01:16:39):
We're excited that we're able to talk about this and to share
it with you all, but until we're all reborn into our mechanical
bodies, thus blending human sentience and artificial
intelligence into one ungodly creation... Skip,
what should they do?

Speaker 2 (01:16:57):
Well, they should all ask themselves how many gourds are in a
gallon, and then they should probably clean up after
themselves to some sort of reasonable degree.

Speaker 1 (01:17:06):
Is this a Die Hard thing?
Is a bomb going to blow up if I can't figure out how
many gourds are in a gallon, and then it's going to turn out to
just be, like, maple syrup?
I have a sign that says I hate robots.

Speaker 2 (01:17:17):
You better get out of this neighborhood, man.
Hey Zeus, you got to help this dude out.
He's going to be dead in a few minutes.
He's in Robo-Harlem.

Speaker 1 (01:17:31):
I've got a very, very bad headache.

Speaker 2 (01:17:33):
It's good.
Everyone should probably, you know, clean up after themselves
to some sort of reasonable degree.
Make sure they've paid their tabs, and not just through some
sort of AI app.
Make sure they have tipped their actual human bartenders
and/or KJs and/or podcasters.
And make sure that they support their local comic shops and
retailers, not their online apps.

(01:17:55):
That being said, we would like to say godspeed, fair wizards.
Please go away.