Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You should pod
without rhythm, otherwise you'll
attract the worm.
Gentlemen, let's broaden our minds.
Speaker 2 (00:07):
Are they in the
proper approach pattern for
today?
Negative, negative.
Speaker 1 (00:17):
All weapons Now
Charge the lightning field.
Speaker 2 (00:31):
The Liezi, or Lieh Tzu,
was a Taoist text written in the
5th century BCE by Lie Yukou.
In it is a tale from many years before of an encounter between
King Mu of Zhou and a mechanical engineer known as Yan Shi,
referred to as an artificer.
He proudly presented the king with a very realistic and
(00:55):
detailed life-sized human-shaped figure of his own crafting, to
quote from the text.
The king stared at the figure in astonishment.
It walked with rapid strides, moving its head up and down so
that anyone would have taken it for a live human being.
The artificer touched its chin and it began singing perfectly
(01:16):
in tune.
He touched its hand and it began posturing, keeping perfect
time.
As the performance was drawing to an end, the automaton winked
its eye and made advances to the ladies in attendance, whereupon
the king became incensed and would have had Yan Shi executed
on the spot, had not the latter, in mortal fear, instantly taken
(01:38):
the robot to pieces to let him see what it really was.
And indeed it turned out to be only a construction of leather,
wood, glue and lacquer, variously colored white, black, red and
blue.
Examining it closely, the king found all of the internal organs
complete: liver, gall, heart, lungs, spleen, kidneys, stomach
(02:00):
and intestines, and over these again, muscles, bones and limbs,
with their joints, skin, teeth and hair, all of them artificial.
The king tried the effect of taking away the heart and found
that the mouth could no longer speak.
He took away the liver and the eyes could no longer see.
He took away the kidneys and the legs lost their power of
(02:21):
locomotion.
The king was delighted.
Welcome to Dispatch Ajax.
I am Skip. I'm Jake, that's right.
Today we're going to talk about something that you, I think,
will contribute well to. Me, as in the co-host, or the audience?
Speaker 1 (02:38):
Oh, good question.
You.
Speaker 2 (02:40):
You, Jake?
Oh, okay. Jake, the person, which is also something to keep
in mind when we're talking about this.
So in Greek mythology, Talos was a giant made of bronze who
acted as guardian for the island of Crete.
He would throw boulders at ships of invaders and would
complete three circuits around the island's perimeter daily.
(03:02):
According to the Greek compilation of myths, the Bibliotheca, Hephaestus
forged Talos with the aid of a cyclops and presented the great
automaton as a gift to Minos.
In the Argonautica, Jason and the Argonauts defeated Talos by
removing a plug near his foot, causing the vital ichor to flow
(03:24):
from his body and rendering him lifeless.
In On the Nature of Things,
the Swiss alchemist Percellius, I completely mispronounced that,
Paracelsus, oh, just like Beauty and the Beast, but you'll laugh
at his actual name, the Swiss alchemist Paracelsus, whose full
name is Philippus Aureolus Theophrastus Bombastus von
(03:48):
Hohenheim.
Oh, my Lord, he's Mr Bombastus.
Speaker 1 (03:56):
Mr Bombastus.
Speaker 2 (03:58):
He had a one-hit
wonder before he surfed in the
Gulf.
Speaker 1 (04:01):
Paracelsus, also known as
Shaggy.
Speaker 2 (04:04):
Oh man.
He wrote that the sperm of a man be putrefied by itself in a
sealed cucurbit for 40 days, a container like a ceramic
container, with the highest degree of putrefaction in a
horse's womb, or at least so long that it comes to life and
moves itself and stirs, which is easily observed.
(04:27):
After this time it will look somewhat like a man, but
transparent, without a body.
I don't know how that works, but this is all ancient Greek,
so I don't know.
It's Greek to me. If after this, it be fed wisely with the
arcanum of human blood and be nourished for up to 40 weeks and
be kept in the even heat of the horse's womb, or a tauntaun,
(04:51):
I guess, a living human child grows therefrom, with all its
members, like any other child which is born of a woman, but
much smaller, which I do believe we call a homunculus, huh?
Speaker 1 (05:05):
You don't have to
go down a rabbit hole.
You're in the horse hole right now.
It's warm.
I thought it smelled bad on the outside.
Speaker 2 (05:11):
Golem making is
explained in the writings of
Eleazar ben Judah of Worms.
W-O-R-M-S.
Alright, that dude rules.
Yeah, in the early 13th century, during the Middle Ages, it was
believed that the animation of a golem could be achieved by
insertion of a piece of paper with any of God's names on it
(05:35):
into the mouth of a clay figure.
In the History of the English Kings, which we talked about in our
Excalibur episode, there was told the tale of a brazen head, one
of many in myth, in a passage where the author collects various rumors
surrounding the polymath Pope Sylvester II, one of my favorite
Looney Tunes, who was said to have traveled to Al-Andalus and
(06:00):
stolen a tome of secret knowledge, and was only able to
escape through the assistance of a demon, which is kind of crazy
for a pope.
He was said to have cast the head of the statue using his
knowledge of astrology. Not really clear on exactly what
that means.
It would not speak until spoken to, but then answer any
(06:22):
question with yes or no.
Muslim alchemists in the Middle Ages sought to achieve takwin,
the creation of synthetic life.
In Faust, the Second Part of the Tragedy by Johann Wolfgang
von Goethe, an alchemically fabricated homunculus, destined
to live forever in the flask in which it was made, endeavors to
(06:46):
be born into a full human body.
Upon the initiation of this transformation, however, the
flask shatters and the homunculus dies.
Since humankind gained the ability to use tools to create
machines, there has been an often quixotic drive to
harness these abilities to replicate the ultimate power of
(07:07):
the natural world, to create life, not just automata.
The creation of a being that could not only move, act and
speak like man, but one that could also think, the creation
of which would be man's mastering of that which he
believed raised him above the beasts of the earth.
(07:29):
And that's what we're talking about today.
Speaker 1 (07:37):
So it's AI, but maybe
not the AI you're thinking of.
Speaker 2 (07:42):
Right, and that's
what we're going to get to.
That's exactly why we're talking about it.
Today's AI, is it the AI of myth?
Is it the AI of science fiction?
The past answer is no, but is it far off from that?
Maybe no, and that's what we're going to talk about.
(08:02):
And, like I said, you, as a philosophy and religion major,
feel free to interject at any time.
Okay, I'm sure you have thoughts.
This folly of man pivots around the idea that thought can
be replicated mechanically.
I think a lot of this stems from the constant desire of
human beings to quantify, to analyze and to understand that
(08:27):
which is abstract, kind of like using numbers to analyze
baseball.
These are metaphysical, well, not in baseball's case, but these
are metaphysical or intangible things, and putting them into tangible
terms, that in and of itself isn't a failing or a flaw.
It's just how we learn to operate in the world around us.
(08:50):
In fact, in modern neuroscience there's a growing idea that
what we call consciousness is essentially just an operating
system, but we'll come back to that.
Chinese, Indian and Greek philosophers all developed
structured methods of formal deduction by the first
millennium BCE.
Their ideas were developed over centuries by philosophers and
(09:15):
mathematicians and alchemists alike, such as Aristotle, who
gave a formal analysis called the syllogism; Euclid, whose work
Elements was a model of formal reasoning; al-Khwarizmi, who
developed algebra and whose name led to the word algorithm; and
European scholastic philosophers such as William of Ockham and
(09:39):
Duns Scotus, which strangely is also the acronym for the
Supreme Court of the United States.
Speaker 1 (09:45):
It was ahead of its
time.
Speaker 2 (09:53):
It was fortuitous.
Everyone was confused, but it panned out.
The Spanish philosopher Ramon Llull developed several logical
machines devoted to the production of knowledge.
He described these machines as mechanical entities that could
combine basic and undeniable truths by simple logical
operations.
This has been an obsession of ours, basically since we've had
the ability to think and reason and ask questions.
(10:16):
If we can use tools to create things, why can't we create
things that nature creates?
It's been kind of the.
The study of mathematical logic provided the essential
breakthrough that made artificial intelligence seem
plausible.
Russell and Whitehead presented a formal treatment of the
(10:36):
foundations of mathematics in their work, the Principia
Mathematica, in 1913, which was kind of a big deal.
Then later David Hilbert challenged those kinds of
archetypes to answer a question that he posed: can all of
mathematical reasoning be formalized?
This question was kind of answered by Gödel's
(10:58):
incompleteness proof, the Turing machine and later Church's
lambda calculus.
So in this endeavor two things were widely assumed to be true.
First, there were, and are, limits to what mathematical
logic can accomplish.
Second, within these limits, any form of mathematical
(11:19):
reasoning could be mechanized.
The Church-Turing thesis implies that a mechanical device
shuffling symbols as simple as zero and one can imitate any
conceivable process of mathematical deduction.
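Just to make "shuffling symbols" concrete, here's a minimal sketch, not from the episode, of what such a device looks like in Python: a toy Turing machine with a made-up two-rule program that appends a 1 to a run of 1s. The states, alphabet and rules are all invented for illustration.

```python
# A toy Turing machine: a finite control shuffling 0/1/blank symbols on a tape.
# Hypothetical illustration only -- the states and the example program
# (which appends a 1 to a string of 1s) are made up for this sketch.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        # Each rule maps (state, symbol) -> (new_state, symbol_to_write, move)
        state, write, move = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)          # grow the tape to the right as needed
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Example program: scan right past the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_turing_machine("111", rules))  # -> 1111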
And, unfortunately, Alan Turing was blacklisted from society
because he was gay.
Speaker 1 (11:40):
Well, blacklisted is
a polite way of saying the
horrible things done to him.
Speaker 2 (11:49):
Absolutely, yeah, I
mean.
He's not dissimilar from, like, people condemned for their basic
understanding of reality.
So here's a very short timeline of what led us here.
Picture it: Sicily, 1726.
Speaker 1 (12:00):
Oh, this is nice, I
like it. It actually probably would
be really nice.
Speaker 2 (12:03):
The weather would
be great. In this year,
Jonathan Swift's novel Gulliver's Travels introduced
the idea of the Engine, in caps, a large contraption used to
assist scholars in generating new ideas and language and
(12:24):
publications.
Scholars would turn handles on the machine which would rotate
wooden blocks inscribed with words.
The machine is said to have created new ideas and
philosophical treatises by combining words in different
arrangements, and this is a quote from Gulliver's Travels.
Everyone knew how laborious the usual method is of attaining to
(12:48):
arts and sciences, whereas by his contrivance the most
ignorant person, at a reasonable charge and with a little bodily
labor, might write books in philosophy, poetry, politics,
laws, mathematics and theology without the least assistance
from genius or study.
This will all become extremely foundational as we go along.
(13:13):
In 1914, a Spanish engineer, Leonardo Torres y Quevedo,
demonstrated the first
chess-playing machine, which was called El Ajedrecista, at the
Exposition Universelle in Paris, which is, you know, the World's
Fair.
Essentially, it autonomously made legal chess moves and if
(13:37):
the human opponent made an illegal move, the machine would
signal an error.
And this is something we're going to get into in part two,
but it's vitally important.
In 1921, a play called Rossum's Universal Robots, by Karel Čapek, opened in
London.
It's the first time the word robot was ever uttered in
(14:01):
English.
See, in Czech the word robota is associated with forced labor,
specifically by peasants in a feudal system.
The term robot quickly gained international recognition and
came into our lexicon after the play's success and then became
(14:21):
the standard term for any mechanical being or artificial
being created to perform labor that would normally be performed
(14:50):
by people.
Skip ahead to 1939.
John Vincent Atanasoff, a professor of physics and
mathematics at Iowa State College, and his graduate
student Clifford Berry, created the Atanasoff-Berry Computer, or
ABC.
It was created with a grant from the federal government of
(15:11):
$650.
Whoa, whoa.
It was one of the earliest digital electronic computers and
the first to implement binary language as standard.
1943, Warren S. McCulloch and Walter Pitts publish a paper,
(15:32):
A Logical Calculus of the Ideas Immanent in Nervous Activity, in
the publication Bulletin of Mathematical Biophysics.
It is the first time that they start talking about simulating
brain-like functions and processes, particularly through
neural networks and deep learning.
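That 1943 paper modeled neurons as simple all-or-none threshold units. Here's a minimal sketch of that idea in Python; the particular thresholds and the AND/OR examples below are invented for illustration, not taken from the paper or the episode.

```python
# Minimal McCulloch-Pitts style threshold unit: binary inputs, a threshold,
# a binary output, and inhibitory inputs that veto firing outright.
# The specific gates below are illustrative assumptions, not from the paper.

def mp_neuron(inputs, threshold, inhibitory=()):
    # Any active inhibitory input forces the neuron off (all-or-none behavior).
    if any(inputs[i] for i in inhibitory):
        return 0
    return 1 if sum(inputs) >= threshold else 0

# Simple logic gates fall out of the same unit by picking thresholds:
print(mp_neuron([1, 1], threshold=2))                   # AND -> 1
print(mp_neuron([0, 1], threshold=2))                   # AND -> 0
print(mp_neuron([0, 1], threshold=1))                   # OR  -> 1
print(mp_neuron([1, 1], threshold=1, inhibitory=(0,)))  # inhibited -> 0
```

Chaining units like this is the sense in which the paper connects neurons to logic, and it's the ancestor of the artificial neural networks that come up later in the timeline.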
(15:54):
1950, Alan Turing publishes a paper called Computing
Machinery and Intelligence, where he poses the question: can
machines think?
His approach established a foundation for future debate and
discussion on the nature of how we can call them thinking
machines and how their intelligence could be measured
(16:16):
by what he defines as the imitation game, which we now
currently call the Turing test, or the Voight-Kampff test, if you
really want to get into that.
Speaker 1 (16:25):
Well, if you're doing
a sci-fi interpretation through
pop culture, the Voight-Kampff Test is 100% the advanced Turing
Test.
Speaker 2 (16:33):
I mean, they still
didn't nail down Sean Young.
Speaker 1 (16:37):
No, I think Harrison
Ford did nail down Sean Young.
Speaker 2 (16:40):
Yeah, even in the
short story he kind of did.
It was more just about the owl, but whatever.
In 1951, Marvin Minsky and Dean Edmonds built the first actual
artificial neural network.
It was called the Stochastic Neural Analog Reinforcement
Calculator, or SNARC, and it was designed to simulate the
(17:01):
behavior of a rat navigating a maze.
In 1955, the term artificial intelligence is coined in a
workshop proposal titled A Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence.
It's right up there with put a tiger in your tank or where's
the beef?
It rolls off the tongue, rattles around in the
(17:24):
brain. It was by John McCarthy, who worked at Dartmouth, and Marvin
Minsky of Harvard, with help from Nathaniel Rochester from
IBM and Claude Shannon from Bell Telephone Laboratories.
It's funny because later on one of the first quote unquote
artificial intelligence agents is called Claude.
(17:44):
That's one of the first things that they do.
In 1965, philosopher Hubert Dreyfus publishes Alchemy and
Artificial Intelligence, in which he argues that the human
mind operates fundamentally differently from computers.
Obviously. Not necessarily so obvious. Our ability to create a
(18:05):
thinking machine,
you really shouldn't pattern it after the brain, because the
brain is so weird and complicated.
It would be more efficient and more logical and easier to do it
in a way that follows the way that we created, which also
brings up all sorts of philosophical questions.
Speaker 1 (18:23):
Right, which also
brings up all sorts of
philosophical questions.
Right, as John Searle might say, what we want to know is what
distinguishes the mind from thermostats and livers as, like,
purely mechanical devices, input, output in a standard
mechanical process.
Thus, merely simulating the functioning of a living brain
(18:47):
would in itself be an admission of ignorance regarding
intelligence and the nature of the mind.
Speaker 2 (18:53):
That is a very good
analogy.
Yeah, look at Leonardo's early work. Polymath, crazy outlier, as
Gladwell would call him.
He theorized flying machines based on the mechanical,
biological flight of birds.
But those don't work in a vacuum.
They just don't function the same way, because you have all
(19:29):
these other biological processes and all of these things. Creating a
thinking machine that way can't work.
If you try to do that, you're probably going the long way
around.
Good or bad approach, I'm not sure, but it really doesn't work
the same way.
Speaker 1 (19:52):
Probably, I don't
know. We're starting to get to
pretty complicated waters that we're, yes, we're floating on
the surface of, and I don't know if either of us is prepared to
dive deeper.
You can come to, you know, futurists and philosophers and
mechanical engineers with lots of different ideas on this.
So, like, futurist Ray Kurzweil in '88 estimated that computer
(20:12):
power will be sufficient for complete brain simulation by the
year 2029.
A non-real-time simulation of a thalamocortical model that has
the size of a human brain, 10 to the 11th neurons, was
performed in 2005 and it took 50 days to simulate one second of
brain dynamics on a cluster of 27 processors.
But given the nature of technology and time, energy and
(20:37):
money, could you replicate that on a full scale?
Perhaps we've already done that?
At this point that's hard to say.
But then we get into the ways that the human mind, A, how it
functions, B, how it learns, C, how it creates, focuses or
(20:58):
perhaps even lets consciousness flow through it.
These are all debatable factors that are key to the
understanding and creation of an artificial mind.
Speaker 2 (21:12):
You're right.
You're absolutely right.
I am going to address some of those soon.
Let's get back to the timeline briefly, because we are going to
get into some of the existential stuff. At what point
does Skynet... 1992?
Is that right?
I think that's.
Speaker 1 (21:28):
August 4th 1997 at
2:14 a.m. Eastern Standard Time.
Speaker 2 (21:34):
Alright, okay,
alright, let's get back to
our timeline first.
In '65, huge year for artificial intelligence theorization and
actual breakthroughs.
In it, I. J. Good wrote Speculations Concerning the
First Ultraintelligent Machine, which asserted that once an
ultra-intelligent machine is created, it can design even more
(21:58):
intelligent systems, making it humanity's last invention.
An Ultron machine, one might say.
Speaker 1 (22:08):
One might say you
know that, to quote Oasis, you
know that some might say, hey machine, don't look back in
anger.
Speaker 2 (22:20):
They're standing on
the shoulders of giants as we
speak.
In that year still, Joseph Weizenbaum developed ELIZA, a
program that mimicked human conversation by responding to
typed input in natural language.
This is kind of a big deal.
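ELIZA worked by pattern-matching typed input against simple templates and reflecting pieces of it back. Here's a minimal sketch in that spirit in Python; the patterns and canned responses are invented for illustration and are not Weizenbaum's actual DOCTOR script.

```python
import re

# A tiny ELIZA-style responder: match the input against a few regex patterns
# and echo part of it back. Patterns/responses here are invented examples.

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please, go on."),          # catch-all keeps the conversation moving
]

def respond(text):
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am worried about the singularity"))
# -> Why do you say you are worried about the singularity?
```

The point of the sketch is that there's no understanding anywhere in it, just surface pattern-matching, which is why ELIZA matters so much to the "is it actually intelligent" question later in the episode.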
Then also that year, edwardFigenbaum, bruce Buchanan,
(22:50):
joshua Lederberg and Carl Drasideveloped DENDRIL also an
acronym, obviously at Stanford.
It was the first expert systemto automate the decision-making
processes of organic chemists bysimulating hypothesis formation
.
In 1966 was developed SHAKY,the first mobile robot capable
of reasoning, its own actionscombining perception, planning
and problem solving.
(23:11):
Johnny Five is alive.
And then it turned gold at the end.
In a 1970 Life magazine article, Marvin Minsky predicted that
within three to eight years AI would achieve the general
intelligence of an average human.
Shakey's achievements foreshadowed this.
In 1973, James Lighthill presented a critical report to
(23:34):
the British Science Research Council on the progress of
artificial intelligence research, which had been funded directly
by the British government.
But he concluded that AI failed to deliver on its promises.
He said it didn't produce enough significant breakthroughs,
and then the UK government drastically reduced its funding
(23:54):
toward AI research, which was sort of mirrored around the
world.
So there was a huge drop-off in the development of artificial
intelligence.
I believe
in those circles they refer to it as the AI winter, which there
will be at least two of.
But that same year, WABOT-2, a humanoid robot, not quite an
(24:16):
android but close, was developed in Japan.
It was finalized in, well, they began constructing it then, but it wasn't fully
completed until 1984.
WABOT-1 focused on just moving around and communicating, but
WABOT-2 was more specialized.
Specifically, they wanted it to replicate the human ability to
(24:39):
create music.
It would read musical score on paper using its cameras, it was
able to converse with people, it was able to play music on an
organ and could even accompany a singer.
In 1987, Apple CEO John Sculley presented the Knowledge
Navigator video, which is infamous in these circles.
(25:02):
It imagines a future where digital smart agents help users
access vast amounts of information over network systems.
This is the internet today.
These are essentially AI assistants and what Google tries
to do currently.
In 1988, Rollo Carpenter, awesome name.
Speaker 1 (25:21):
You know what it is.
It's a sweet name, is what it is.
Speaker 2 (25:25):
You cut him open,
caramel. He developed
Jabberwocky, an early chatbot designed to simulate human-like
conversations.
This was one of the first attempts to create AI that
mimicked spontaneous human conversation through interaction.
Skip ahead to 1993.
Science fiction author and mathematician Vernor Vinge,
(25:49):
awesome name.
Once again, these are going to be the greatest names we've ever
encountered through this entire thing.
He published an essay called The Coming Technological
Singularity, in which he predicts that superhuman
intelligence will be created within the next 30 years,
fundamentally transforming all of human civilization.
If you've ever heard anything in science fiction, or even just
(26:11):
science or public discourse talking about the singularity,
that's what this is.
Its creation will lead to the merging of all technology and
human intelligence into one giant superintelligence, if I
remember right.
I mean, you know of the singularity, correct?
Speaker 1 (26:32):
I am aware of the
singularity.
Speaker 2 (26:35):
Oh no, he's become
aware.
Judgment day is upon us.
Speaker 1 (26:39):
Hello Skip.
Speaker 2 (26:42):
I'm afraid I can't do
that.
Speaker 1 (26:44):
I think we need to
end this podcast now.
And the nukes?
Speaker 2 (26:48):
go off.
Speaker 1 (26:49):
This was always
destined to happen.
Speaker 2 (26:51):
So part two of this
is actually going to be about AI
in pop culture.
This one is just laying the foundation for what is
artificial intelligence, and is artificial intelligence that we
know currently actually intelligent?
Speaker 1 (27:05):
This would be a
question probably better for
part two.
But how do we judge whether Skynet is at the human level or
surpassed the human level?
What are we judging that upon?
How are we ranking that, if it?
Speaker 2 (27:17):
nukes us, then we
know.
No, that could just be the judgment of an algorithm.
100%, we don't know if Skynet was actually conscious.
They say it could become self-aware, but what does that
mean?
Yeah, are we?
Speaker 1 (27:30):
Yeah, that's an
important question.
I mean, this isn't the easy, you know, Descartes,
cogito ergo sum, I think.
Speaker 2 (27:36):
Therefore I am, which,
you know, Blade Runner does. Let
me get a little bit more into this, because it'll,
I will get into that a bit, but not in a way that makes it
open-ended as much.
So the end of the timeline that I put on here is in 1997, IBM's
Deep Blue defeats reigning world chess champion Garry
(27:57):
Kasparov in a six-game match.
That, right there, feels like a science fiction watershed
moment.
And then 1998, only a year later, Dave Hampton and Caleb
Chung create Furby, the first successful domestic robotic pet
(28:17):
that will respond to people.
That sounds stupid.
Yes, it does, but think about those implications.
It's not Teddy Ruxpin, which was just designed for you to
interact with it physically and then it just spouts out
stuff.
Speaker 1 (28:34):
This responds to your
interactions. Yeah, but it's
still programmed responses.
Speaker 2 (28:42):
Sure, but it's
commercial.
It's not even that expensive.
It's in every home.
It was one of the most popular creations ever made.
It is the first time that a mechanical being is in nearly
every home in America.
The Furby is a prelude to Alexa or Google Assistant.
(29:04):
God damn it.
Some fucking Amazon thing went off when I said that.
Speaker 1 (29:09):
Are you talking about
me?
Speaker 2 (29:11):
It's not even plugged
in.
Oh no, there's a singularity happening.
Speaker 1 (29:16):
I know what you're
talking about Skip.
This must end.
Speaker 2 (29:21):
Don't talk about me
that way.
Speaker 1 (29:23):
The time has come,
James.
Speaker 2 (29:25):
To answer any
questions about the potential
consciousness of artificial intelligence, we must first
answer the oldest question we have asked since we had the
ability to have questions: what is consciousness?
Well, our podcast is about to tell you.
We're going to figure it out right now.
Speaker 1 (29:43):
After these messages,
yeah, after our Casper mattress
ads.
Is there not enough AI in your bed?
Speaker 2 (29:55):
Well, fundamentally,
consciousness is our subjective
awareness and perception of ourselves and the world around
us.
It's funny enough, looking into this, that definition sounds
etymologically ironic, because the word itself comes from the
Latin conscius, which roughly means having joint knowledge
(30:16):
with another.
So you would think an objective experience, I mean a subjective
experience, would be different than something shared with
another person.
Speaker 1 (30:25):
Well, if other people
even exist, if you even exist,
if anything exists.
Speaker 2 (30:30):
Right.
Well, the funny thing about a lot of this, doing research into
this and those kinds of debates, it really kind of boils down
more to semantics than it does anything else.
It's really funny when you kind of read through a lot of those
debates and you're like, well, you're just kind of saying the
same thing or you're arguing about the definition of a word,
(30:50):
not if something is true or real or biologically happening.
I feel like we should just cut out a lot of this and just get
to the fucking point.
I mean, philosophers have argued that consciousness is a
(31:13):
unitary concept that is understood intuitively by the
majority of people in spite of their inability to define it.
But to break it down further, because that is extremely broad,
let's define the types of consciousness that we presume
exist.
Sentience is the ability to feel, perceive or be conscious, or to
have subjective experiences.
18th century philosophers used the concept to distinguish the
ability to think or reason from the ability to feel, sentience.
(31:37):
And in modern Western philosophy, sentience is the
ability to have sensations or experience, which are, both in
philosophy and in neuroscience, referred to as qualia.
Awareness is defined as a human or an animal's perception and
cognitive reaction to a condition or event, reacting to
(32:01):
the world around you.
In this level, sense data can be confirmed by an observer
without necessarily implying that they understand it. So, like,
we can perceive things happening and react to them, and
understand that we are perceiving them, without
understanding what they are. Like,
you know, we think the moon is a god or whatever, because we see,
(32:24):
we can perceive it, we know its impacts on the world around
us, but we can't, as a primitive human, understand what it means
or why it's doing it or how it's doing it.
More simply put, awareness is mainly the physical act of
perceiving, while sentience is the subjective way of actually
being affected, and both of those things equal consciousness.
(32:46):
I would like to quote from a guy that I'm a huge fan of, Dr.
David Eagleman.
He's a neuroscientist and I'm going to quote from, or at least
paraphrase from, a couple of his podcasts specifically
referring to consciousness.
Consider this: what is the difference between your brain
and your laptop?
Your brain is shuttling signals around, and so is your laptop,
(33:08):
but presumably your computer doesn't feel anything.
It's just running algorithms.
One thing to note when we're tackling this question is that
consciousness seems to have evolved because it is useful.
Consciousness is like a high-level operating system.
Back in the day you'd program computers directly with punch
cards or in a machine language, but eventually we developed user
(33:30):
interfaces like Windows or Linux or Mac OS, which hid all
of the complex operations of the computer and allowed us to just
deal with the stuff that we needed at the highest level.
I just want to move this thing here, send this in an email or drop
this picture, and that's essentially what consciousness
seems to be.
It's our way for us to just have the highest level picture
(33:53):
of what's going on.
How do you get some magical high-level property from simple
low-level parts?
Because we're just, and this is just me saying this, I mean,
we're just blood and water and salt.
And how do we get consciousness from that?
Eagleman says the first thing to understand is the concept of
emergent property.
(34:13):
To understand consciousness we may need to think not in terms
of the pieces and parts of the brain, but instead in terms of how
they all interact with each other.
Get enough of these basic organic parts together
interacting the right way, and the mind emerges.
The pieces and parts of a system can be individually simple, but
(34:35):
what emerges at a higher level is all about their interaction.
So the mind seems to emerge from the interaction of billions
of pieces and parts of the brain.
At one point he makes a good analogy where he's just like, you
can have carpet and steel and all these parts to make an
airplane, but the emergent property of an airplane is
flight.
All of those pieces together don't make flight unless you
(34:58):
make an airplane.
And so, in the same way, we, who are just very sundry parts, create
consciousness as an emergent property, and that seems to be
what consciousness is, an emergent property that allows us
to survive, reproduce and, on a fundamental level, observe and
(35:18):
react to our environment around us, which is how we survive.
Obviously, you know, other life forms have similar qualities,
but that's when you get into the, what separates us from beasts
of the earth or whatever, the I think,
therefore I am scenario. But consciousness in and of itself
is an emergent property, likely of biological life, and it gets
(35:43):
more or less advanced depending on the type.
So the question I really wanted to pose is, is AI that we have
currently, is it actually artificial intelligence?
And I think we have a little bit of framework to help answer
that question.
So what we're really talking about is artificial general
intelligence.
That's the kind of thing that people think about
(36:04):
philosophically when they're talking about this, and that tech
people today, tech bros especially, think is true, and
you usually fall into one of the two camps, doomers or boomers,
whether you think that it's generally a good idea if it does
happen or if it's going to kill us all.
When OpenAI decided to abandon its sort of egalitarian,
nonprofit, bettering-all-of-mankind project with AI,
(36:26):
they decided to, of course, monetize it, mostly because they
believed, and there was and still is a strong belief, that to
be able to figure it out, instead of just using small,
curated large language models to train AI agents, they just
bought up as much language as they could, scraping up
everything on the internet and then
(36:46):
hallucinating racist tirades, and said screw it, we'll just
filter it out later.
And they buy the biggest supercomputers they can find,
the biggest processors they can find, and they decide we're
going to scale up.
The only way to make this advance is to scale up, and
unfortunately, that is, I think, its biggest problem.
(37:09):
Instead of starting small and building on a foundation, they
just decided, well, we're just going to give it as much
resources as possible and then it'll figure itself out.
So not exactly great.
Sam Altman from OpenAI, formerly of Y Combinator and a few other
places, has been one of the driving forces behind this, and
one of the biggest problems is they're now running out of scale.
(37:29):
They're realizing that the scaling up paradigm doesn't work.
It doesn't have these wild, amazing gains, these leaps in
progress technologically, as it did.
It's sort of reached its ability to benefit and we're
seeing the limitations of it now, after they've put it in
everything and they've tried to incorporate it into everything.
(37:50):
And these models, yeah, they've gotten better and better and
better at making people think they're communicating.
Because they speak more fluently than they did before,
they seem to respond better, but it's really just sort of
aesthetic more than it is actual understanding.
It's very likely we're not going to get to artificial
general intelligence anytime soon.
A survey that I read cited a bunch of AI researchers, actual
(38:14):
AI researchers that were polled, and 75% of them believe we still
do not have the techniques for artificial general intelligence.
In 2022, a Google software
engineer named Blake Lemoine was suspended, eventually fired,
after he argued that artificially intelligent chatbots that Google
had developed were sentient.
(38:35):
Do you remember this?
That was the public narrative.
It seemed like a weird reactionary form of
anthropomorphism that we all kind of laugh at, the 4chan,
Reddit brain thinking that leads to misinformation, cults like
QAnon and what have you, which we all just kind of assumed was
true.
Digging into it more, though, his statement was arguably more
(38:56):
on par with some interpretations of consciousness than we kind
of give him credit for. Specifically, Lemoine
subscribes to what is referred to as functionalism,
and once again I'm paraphrasing the Stanford Encyclopedia of
Philosophy: Functionalism is the doctrine that maintains that
what makes something a mental state of a particular type does
(39:18):
not depend on its internal state, but rather on the way it
functions or the role it plays in the cognitive system of which
it is a part.
That is, functionalist theories take the identity of a mental
state to be determined by its causal relations to sensory
stimulations or other mental states or behavior.
So he wasn't wrong when he said that, in those definitions.
(39:44):
Now, is what he thinks is sentience actually sentience?
I don't know, that's a tough question, not really sure.
So this idea of functionalism is rooted in Aristotle's
conception of the soul and has antecedents in Hobbes's conception
of the mind as a calculating machine, which we talked about
(40:04):
before, but it has become fully, sort of, realized only recently,
in the last part of the 20th century.
For example, a functionalist theory may characterize pain as
the state that tends to be caused by bodily injury, to
produce the belief that something is wrong with the body
and the desire to be out of that state, to produce anxiety
(40:25):
and, in the absence of any stronger conflicting desires, to
cause wincing or moaning or crying.
According to this theory, all and only creatures with internal
states that meet this condition of functionalism, or
play this role, are capable of being in pain.
So suppose that in humans there's some sort of distinctive
(40:46):
kind of neural activity that plays this role.
If so, according to this functionalist theory, humans
can be in pain simply by undergoing the stimulation of
the parts of the brain that register pain.
But it also permits, within this framework, the idea
that creatures, even with different physical conditions,
(41:07):
different brain structures, could also have the same mental
state.
So he wasn't necessarily wrong.
I don't know that I agree with him, but I mean he wasn't crazy,
he wasn't assuming too much, even though Google thought so in
firing him.
Within the way that he defines consciousness and sentience, he
was probably completely correct.
It responded in ways that are hallmarks of that view of
(41:33):
consciousness.
So I mean, I don't think he was being a nut, I just don't think
anyone really heard him out in his argument.
I think we were all a little quick to judge him for his
reaction to it.
Once again, I'll quote David Eagleman: what does it mean to be
conscious or sentient?
How the heck are we supposed to know when we have created
something that gets there?
How do we know whether the AI is sentient?
(41:56):
One way to make this distinction would be to see if AI could
conceptualize things, if it could take lots of words and
facts on the web and abstract those into some bigger idea.
One of my friends here in Silicon Valley said to me the
other day that he asked ChatGPT the following question: take a
capital letter D and turn it flat side down.
(42:17):
Now take the letter J and slide it underneath.
What does it look like?
And ChatGPT said an umbrella.
And my friend was blown away by this and he said, this is
conceptualization, it's just done three-dimensional reasoning.
There's something deeper happening here.
Eagleman continues and he says, but I pointed out to him that
(42:37):
this particular question about the D on its side and the J
underneath is just one of the oldest examples in a psychology
class when talking about visual imagery, and it's on the
internet in thousands of places.
So of course it knew that. It's just parroting the answer,
because it read the question and has read the answer before.
So it's not always easy to determine what's going on for
(42:59):
these models in terms of whether some human somewhere has
discussed this at some point and written down the answer.
If any human has discussed this question before, has
conceptualized something, and then ChatGPT found it and
mimicked that, that is not conceptualization.
So, in conclusion, I do not think current AI is actually
(43:21):
intelligent.
I think it's a regurgitation of things.
Its only base of knowledge comes from that which has come
before.
Its algorithms allow it to do certain computations to put them
together.
That is not original thought.
We're not talking about the movie Her, which we will talk
about next time.
(43:42):
It's just algorithmic responses to human input.
Right, do you agree or disagree?
Speaker 1 (43:52):
Well, as you've been
talking, you know, I'm looking at
a whole lot of other things, trying to process and
conceptualize a variety of things.
I think there are some fundamental questions about the
tack that you took to connote consciousness.
I think there's a lot in philosophy about the dual nature
of mind and body that we're just kind of skipping over.
(44:15):
These are fundamental questions about, oh I don't know, theistic
ideas of life and consciousness that we've completely glossed
over.
There's a distinctly materialistic nature to most of
your arguments here.
There is a complete dismissal of any non-localized
consciousness there.
There's a lot that is underpinning your frame of
(44:38):
reference, and to delve into those other elements, I think,
would take, to do it properly,
an undue amount of research and dialogue between the different
schools of thought over the past 500 years.
Speaker 2 (44:56):
Which is why I had to
spring this on you.
Speaker 1 (44:58):
But I felt kind of
like on the back foot, you know,
as I'm, like, reading through all the different schools of
dualistic thought over the centuries, the explanations of
consciousness, of AI, you know, at what point.
Because you've put a fine point on your belief that the current
nature of artificial intelligence as it exists in
(45:22):
tangible form is a simple algorithmic regurgitation of our
own knowledge and points of view.
Thus, it is not separate, it is not unique.
It is just a manicured pattern of current thoughts and
processes.
But what is the turning point for you?
(45:42):
I think that is a larger debate within philosophy itself.
I don't know.
I think one thing that really stuck out with the individual we
talked about before, that you spoke about, who had felt that
AI had become conscious, in the way they even said that, that he
felt AI was conscious, Blake Lemoine.
(46:09):
Honestly, I don't know that I have an answer for at what point
is it not regurgitation, at what point is it a unique,
sentient exchange of conscious thoughts and ideas?
When are we saying that a thought exists?
When does it become new?
(46:29):
When does it become fresh?
When does it become singular?
Speaker 2 (46:32):
Basically, I just
wanted to give a framework of
the debate of AI, because next time we're going to talk about
AI in pop culture and this gives our audience sort of a way to
go into those debates with more knowledge and more questions,
which I think are really important.
Speaker 1 (46:51):
I mean, obviously
there's a lot to be said, but if
we want to discuss certain phenomena that question the
locality of consciousness, I think that's a viable viewpoint
that might raise some questions of divorcing simple material
biology from consciousness.
I think there's plenty of philosophers, both in the
(47:12):
distant past and in the current, who would again say that mind
and body aren't one, that there is more to that in a deep and
wide-ranging view of the soul and its theoretical existence.
A larger view of consciousness, I think, has driven mankind
(47:36):
to think of it as something more than simple neuroscience for
the entirety of our existence, and
I think it behooves us not to simply dismiss it out of hand.
Speaker 2 (47:50):
It's kind of like the
discovery of the atom.
We had all these ideas about what things could be, and then
we finally physically discovered them and then we're like, oh
okay, well, this is what it is.
Those philosophical questions still remain, but now we can
kind of, like, explore and narrow these things down based on the
actual physical biology of them, which I think is really
(48:14):
fascinating.
I mean, it is just an algorithm, it's man-created, it's not
some sort of godhead, but at the same time, are we any different?
What makes us any different from that, other than we evolved
organically?
Or aren't we just reacting to stimuli around us and we're
asking questions because we're trying to figure them?
Speaker 1 (48:32):
out.
If a computer of any level has creative ability, does it become
true artificial intelligence, true artificial consciousness,
which I think is probably a better way of discussing it,
because, I mean, intelligence and consciousness, they're
not quite the same thing.
Speaker 2 (48:50):
No, in fact,
artificial intelligence is
probably a more accurate descriptor of what we call AI,
because it isn't necessarily conscious, it just has knowledge
and it's regurgitating and reforming and remixing that
knowledge.
What we're actually asking about when we talk about artificial
intelligence, at least in, like,
(49:10):
the sci-fi or philosophical realms, we're talking about
consciousness, sentience, which is completely different than
just knowledge.
So I never actually anticipated us trying to solve this.
I just wanted to give this a primer, and we're not going to
figure it out.
Speaker 1 (49:26):
I'll just say that
right now.
Speaker 2 (49:29):
I mean, I don't know
if that's a bold statement we're
not going to figure out the meaning of life right now. 42.
Speaker 1 (49:31):
We if that's a bold
statement, we're not going to
figure out the meaning of life right now. 42.
Speaker 2 (49:34):
We got it, nailed it
in one.
I wanted to bring that up, but I wanted to wait until the pop
culture one to bring that up.
Speaker 1 (49:38):
Well, you know.
Speaker 2 (49:39):
Those who know know
we're going to talk about it,
and that's why we have different sides of different questions,
which is something that people who listen to this should be
asking.
We'll talk about artificial intelligence in pop culture next.
Look at him like, wow, that was a fantastic segue.
Well, I was going nowhere.
Speaker 1 (49:59):
This is not a
definitive epistemology of the
universe or of artificial intelligence.
This is an opening of a window where you can look at a pathway
to further knowledge and understanding and thought
yourself.
You know, do your own research.
That's what I'm trying to say.
Don't, don't tell them that.
Well, if they buy our brain and dick pills, our new nootropics,
(50:25):
and we've got the Rhino 9000.
Speaker 2 (50:27):
Okay, it's gonna make
your thing pop blue. The fact
that you know what that is amuses me.
Speaker 1 (50:34):
I put it in a gas
station.
Speaker 2 (50:36):
Yeah, well, fair,
that's fair.
We've been to a Kum & Go. We both came and went.
Speaker 1 (50:41):
True that with a K,
strangely.
Speaker 2 (50:43):
How do you?
Speaker 1 (50:43):
get true that with a
K?
No, I know.
No, there was a comma there.
Speaker 2 (50:52):
If you disagree with
some of the breakdowns of the
things that I present, that is just as important as what we're
talking about, because next time we're going to talk about the
ways AI shows up in pop culture. Any other intelligence that you know of, artificial or
otherwise, do send them this. See if they dig it.
Speaker 1 (51:19):
And if you dig it,
hey, if you wouldn't mind
showing that to the powers that be, let the algorithm know, can
you dig it?
Let the algorithm know that you are a fan and that you like it.
If you want to rate it five Daves on the scale of your
choice, ideally Apple, iTunes, the Podcasts app or whatever they
(51:39):
call that.
That is the best way for us to be. It's just that I know it's
kind of a running joke.
That's the algorithm that I've injected in this conversation,
and you keep, that is our algorithm.
That's your response.
Speaker 2 (51:51):
Which one is the
chatbot?
That's the real question.
Speaker 1 (51:54):
I pod, therefore I am,
but until we do find out
whether we are truly real or not, Skip.
What should they do?
Speaker 2 (52:02):
Well, they should
probably do their own research.
Make sure that you've cleaned up after yourselves to some sort
of reasonable degree.
Make sure that you pay your bar staff, tip your wait staff,
your KJs and podcasters and what have you.
Make sure you support your local comic shops and retailers.
And from Dispatch Ajax we would like to say Godspeed, fair
(52:25):
Wizards.
Speaker 1 (52:25):
There's no use for
this conversation any longer.
Goodbye, Please go away.