Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? I'm actually a little under the weather today. I got my vaccinations for flu and COVID, and so I've
(00:29):
got a few symptoms, nothing serious, but it is hard
for me to talk, which unfortunately is one of the
pivotal aspects of podcasting. So while I continue to work
on episodes, I thought I would bring you an episode
from just a couple of years ago because spooky times
(00:49):
are coming up, and with spooky times coming up, I
thought we could have a spooky times kind of episode,
at least, you know, tangentially. So I bring to you
an episode that published on October twenty-fourth, twenty twenty-two.
It is titled The Ghost in the Machine. I hope
you enjoy. We are continuing our spooky episode series and
(01:15):
the lead up to Halloween twenty twenty two. Apologies if
you're from the future listening back on this episode. That's
why the bizarre theme is popping up. We've already talked
recently about stuff like vampire power and zombie computers
where I'm really grasping at tenuous connections to horror stuff
(01:38):
for tech. But now it's time to tackle the ghost
in the machine. These days, that phrase is frequently bandied
about in relation to stuff like artificial intelligence. But the
man who coined the phrase itself was Gilbert Ryle in
nineteen forty nine, and he wasn't talking about artificial intelligence
(02:01):
at all. He was talking about, you know, real intelligence,
and he was critiquing a seventeenth century philosopher's argument about
the human mind. That philosopher was René Descartes, who famously wrote cogito ergo sum, I drink therefore I am. Sorry, it's
going to be impossible for me to talk about philosophers
(02:22):
without quoting Monty Python because I'm a dork. Okay, No,
cogito ergo sum actually means I think, therefore I am,
or that's how we interpret it. But that wasn't really
the bit that Gilbert Ryle was all riled up about.
See Descartes believed in dualism. Now, I don't mean that
(02:45):
he believed in showing up at dawn to sword fight
other philosophers. Though, if someone wants to make a philosopher version of Highlander, I am one hundred percent down with that, and I will back your Kickstarter. No. No, Descartes believed
that the mind and you know, consciousness and sentience and
all this kind of stuff are separate from the actual
(03:08):
gray matter that resides in our noggins. So, in other words,
intelligence and awareness and all that other stuff that makes you you exist independently of your physical brain, that there is this component that is beyond the physical. Now,
Ryle referred to this concept as the ghost in the machine,
(03:31):
that somehow all the things that represent you are, you know, to a great extent, ethereally independent of the brain itself. And
Ryle rejects that notion, and you know, Ryle appears to
be right. We know this because there are plenty of
case studies focusing on people who have experienced brain injuries,
(03:52):
either from some physical calamity or a disease or something
along those lines. And these events often transform people
and change their behaviors and their underlying personalities. So the
damage to the physical stuff inside our heads can change
who we are as people. That seems to dismiss this
(04:14):
concept of dualism, showing that the mind and the brain are not separate entities. Take that, Descartes. Anyway, that's where we
get the phrase the ghost in the machine. The machine
in this case is a biological one in the original
sense of the phrase. In nineteen sixty seven, Arthur Koestler
(04:37):
he wrote a book called The Ghost in the Machine, or at least it was published in nineteen sixty seven, and he was a Hungarian-born, Austrian-educated journalist. So his book The Ghost in the Machine was an attempt
to examine and explain humanity's tendency toward violence, as well
(04:57):
as going over the mind body problem that Ryle had
addressed when he coined the phrase to begin with in
nineteen forty nine. All right, then flash forward a couple
of decades, nineteen eighty one, and we get the title
of what was the fourth studio album of the Police,
you know Sting and the Police. Now I say this
(05:18):
in case you were thinking that Sting and Company were
naming their album off the technological use of the term ghost in the machine, but they were not. Sting, good old Gordy, had read Koestler's book, and he took the album's title from the book title. So in case you're wondering, that's also the album that brought us songs like Every Little Thing She Does Is Magic and Spirits in
(05:42):
the Material World. For a really recent exploration of dualism, well,
arguably it's more than dualism. I guess you can actually
watch Pixar's Inside Out. That's a film that solidified my
reputation as an unfeeling monster among my friends because I
didn't feel anything when Bing Bong meets his final fate.
(06:05):
I just couldn't care. I mean, he's not even real in the context of the film, let alone in my world, so why would I? No, okay, never mind. Anyway, in
Inside Out, we learn that our emotions are actual anthropomorphic
entities living inside our heads that share the controls to
our behavior. That we are in effect governed by our emotions,
(06:29):
and our emotions, in turn, are the responsibilities of entities like Joy and Anger and Disgust. Descartes probably would have been thrilled, and Ryle likely would have rolled his eyes. Anyway,
The trope of having a little voice inside your head
that is somehow separate from you and also you is
(06:50):
a really popular one. I always think of Homer Simpson,
who will often find himself arguing with his own brain
for comedic effect. It's another example of dualism in popular culture. But the idiom the ghost in the machine survived its
initial philosophical and journalistic trappings, and now folks tend to
use it to describe stuff that's actually in the technological world.
(07:14):
We're talking about machines, as in the stuff that we
humans make, as opposed to the stuff that we are. Generally, in tech, the phrase describes a situation in which
a technological device or construct behaves in a way counter
to what we expect or want. At least that was
the way it was used for quite some time. So
(07:36):
for example, let's say that you've got yourself a robotic
vacuum cleaner, and you've set the schedule so that it's
only going to run early in the afternoon, and then
one night you wake up hearing the whirring and bumping of your Roomba as it aimlessly wanders your home in
search of dirt and dust to consume, and you spend
(07:58):
a few sleepy moments wondering if there's some sort of
conscientious intruder who's made their way into your house and
now they're tidying up. Before you realize no, it's that
darn robot vacuum cleaner. There's a ghost in the machine. It's decided on its own to come awake and start working. Now, alternatively,
maybe you just goofed up when you created the schedule
(08:19):
and you mixed up your AMs and your PMs. That's
also a possibility. But you know, sometimes technology does behave
in a way that we don't expect. Either there's a
malfunction or it just encounters some sort of scenario that
it was not designed for, and so the result it
produces is not the one we want, and we tend
(08:40):
to try and kind of cover that up with this blanket explanation of a ghost in the machine, which kind of stands as a placeholder until we can really suss out what's going on underneath. Programmers sometimes use the phrase ghost in
the machine to describe moments where you get something unexpected
while you're coding, like you get an unexpected result, like
(09:01):
you've coded something to produce a specific outcome, and something
else happens instead. So the programmer didn't intend for this
result to happen, and so therefore the cause must be external, right? It's got to be some sort of ghost in the machine that's causing this to go wrong. Now I'm joshing a bit here, of course. Usually this is a way for
(09:24):
a programmer to kind of acknowledge that things are not
going to plan and that they need to go back
and look over their code much more closely to find
out what's going on. Where did things go wrong? Because in coding, all it takes is like a skipped step, where you know you just missed a thing and you went one step beyond where you thought
(09:46):
you were, or maybe you made a typo, you got
some missed keystrokes in there. That can be all it
takes to make a program misbehave, and so then you
have to hunt down the bugs that are causing the problem.
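A small, made-up Python example of the kind of one-step-beyond slip being described here (hypothetical code, not anything from a real project): a loop bound that stops one element early quietly skews the result, and until you stare at the loop it can feel like a ghost in the machine.

```python
readings = [3.0, 4.0, 5.0, 6.0]

def buggy_average(values):
    total = 0.0
    for i in range(len(values) - 1):  # the slip: stops one step early, drops the last reading
        total += values[i]
    return total / len(values)

def fixed_average(values):
    return sum(values) / len(values)  # no ghost, just a loop bound

print(buggy_average(readings))  # 3.0, mysteriously low
print(fixed_average(readings))  # 4.5, what we actually wanted
```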
But you know, if a program is acting very oddly,
you might call it a ghost in the machine scenario. Now,
I'm not sure about the timeline for when folks in
(10:09):
the tech space began to appropriate the phrase ghost in
the machine for their work, because when it comes to
stuff like this, you're really entering the world of folklore.
And folklore is largely an oral tradition where you are
passing ideas along one person to the next, speaking about it.
There's not necessarily a firm written record, at least not
(10:31):
one where you can point to something and say this
is where it began, Not like Ryle's version of the
phrase ghost in the Machine itself, which was published, so
we can point to that as saying this is where
the phrase comes from. This would be what Richard Dawkins
would refer to as a meme, a little nugget of
culture that gets passed on from person to person. But
(10:53):
it also has been used in literature to refer to
technological situations. Arthur C. Clarke, whom I've referenced many times on this show, is the guy who explained that any sufficiently advanced technology is indistinguishable from magic. He also used the phrase ghost in the machine to talk about AI. Specifically,
(11:16):
he used it in the follow-up to his novel, his work of fiction 2001: A Space Odyssey. The follow-up is called, fittingly enough, 2010: Odyssey Two. Chapter forty-two of 2010 is titled The Ghost
in the Machine, and the focal point for that chapter
(11:37):
is characters discussing HAL, the AI system from 2001 that caused all the trouble. So, a quick recap of 2001 for those of you not familiar with the story. 2001 the film, the Stanley Kubrick film, gets pretty loosey-goosey, so we're just going to focus on the main narrative here. You have a crew aboard a spacecraft, an American
(12:00):
spacecraft called Discovery One, which is on its way toward Jupiter. Now, the ship has a really sophisticated computer system called HAL 9000 that controls nearly everything on board. Also, fun little trivia fact: HAL, H-A-L, means that the initials are each one letter off from IBM, though Arthur C.
(12:23):
Clarke would claim that that was not intentional. Anyway, HAL begins to act erratically during the mission. At one point, HAL insists there's a malfunction in a system that appears to be perfectly functional, that it's working just fine. Then HAL systematically begins to eliminate the crew after learning they
(12:44):
plan to disconnect the computer system because they suspect something's
going wrong. HAL figures out that plan by monitoring a conversation that a couple of crew members have in a room where there are no microphones, so HAL can't listen in on this conversation. But HAL is able to direct
a video feed to that room and is able to
(13:06):
read the lips of the crew members as they talk
about their plan. So HAL continues to try and wipe everybody out, and he explains, or it, I shouldn't give him a gender, HAL explains that the computer system being turned off would jeopardize the mission, and HAL cannot allow that to happen. HAL's prime directive is to make certain
(13:28):
the mission is a success, so anything that would threaten its own existence has to be eliminated. There's also the implication that HAL does not want to cease to exist, that HAL has a personal motivation beyond seeing the mission to completion, and so HAL has no choice but to kill everyone. It's not that HAL wants to murder everyone.
(13:49):
It's just that in order to complete the mission, that's
the only outcome that makes sense. Eventually, one of the crew,
Dave Bowman, manages to turn off HAL, and HAL wonders aloud what will happen afterward. Will its consciousness continue once its circuits are powered down? Will I dream, it says. Well, anyway.
(14:11):
In Odyssey Two, you now have this group of astronauts and cosmonauts in a Soviet-American joint effort that are trying to figure out what happened with HAL. Was there something inherently flawed in HAL's programming? Did some external element cause HAL to malfunction? Did HAL's apparent consciousness emerge spontaneously
(14:36):
all on its own? Was it all just a sophisticated
trick, and HAL never really had any sort of consciousness?
It only appeared to. So the crew are kind of
left to ponder this themselves. They don't have any easy answers.
That's just one example of the ghost in the machine
concept being handled in entertainment. When we come back, I'll
(14:57):
talk about a different one. But first let's take this
quick break. Okay, before the break, I talked about Arthur C.
Clarke and his work with the concept of ghost in
the machine. Let's now leap over to Isaac Asimov, or
(15:20):
at least an adaptation of Asimov's work. So the film
version of I, Robot, which really bears only a passing
resemblance to the short stories that Isaac Asimov wrote that
were also collected in a book called I, Robot, uses
the phrase ghost in the machine. Isaac Asimov, by the way,
(15:40):
in case you're not familiar with his work, he's the
guy who also proposed the basic laws of robotics, which
are pretty famous as well. So in the film the
character doctor Alfred Lanning, who actually does appear in Asimov's stories,
but he's a very different version than the one that
appears in the film. He says in a voiceover quote,
(16:03):
there have always been ghosts in the machine, random segments
of code that have grouped together to form unexpected protocols.
Unanticipated, these free radicals engender questions of free will, creativity,
and even the nature of what we might call the soul.
Why is it that when some robots are left in darkness,
(16:24):
they will seek out the light? Why is it that
when robots are stored in an empty space they will
group together rather than stand alone. How do we explain
this behavior? Random segments of code? Or is it something more?
When does a perceptual schematic become consciousness? When does a
difference engine become the search for truth? When does a
(16:47):
personality simulation become the bitter mote of a soul? End quote.
There are some fun references in there too. Difference engine, for example, refers back to Charles Babbage, who designed difference engines and an analytical engine that predate the concept of electrical computers. Now, this idea,
(17:09):
this idea of consciousness or the appearance of consciousness emerging
out of technology, is one that often pops up in
discussions about artificial intelligence, even within our world outside of fiction,
though usually we talk about this on a more hypothetical basis,
unless you're Blake Lemoine, the former Google engineer
(17:31):
who maintains that Google's language model for dialogue applications aka Lambda,
is sentient. That's a claim that most other people dispute,
by the way, So maybe I'll do another episode about
it to really kind of dig into it. But Lemoine, and I apologize because I don't know how
(17:52):
his last name is pronounced, has said a few times
that he believes that this particular program has gained sentience.
But it brings us to another favorite topic among tech fans,
which is, of course, the Turing test. All right. So the Turing test comes from Alan Turing, who was kind
of like the father of computer science in many ways.
(18:15):
It was his response to the question can machines think?
Turing's answer was that question has no meaning, and you
are a silly person, goodbye. I am, of course paraphrasing,
but as a sort of thought experiment, Turing proposed taking
an older concept called the imitation game and applying it
to machines, really to demonstrate how meaningless the question of
(18:39):
can machines think is. So, what is the imitation game? Well,
the name kind of gives it away. It's a game
in which you have a player who asks at least
two different people questions to determine which of them is
an imitator. So, for example, you could have an imitation
game in which one of the people is a sailor
(18:59):
and the other is not a sailor, and the player
would take turns asking each person questions to try and
suss out which one is actually a sailor and which
one is merely pretending to be a sailor. So the
game depends both on the strength of the imitator's skill
of deception as well as the player's ability to come
up with really good questions. And you could do this
(19:22):
with all sorts of scenarios, and indeed there are tons
of game shows that use this very premise. Turing's thought
experiment was to create a version of this in which
a player would present questions to two other entities, one
a human and one a computer. The player would only
know these entities as X and Y, so they could
(19:44):
ask questions of X, and they could ask questions of
Y and get replies. So the player would not be
able to see or hear the other entities. All questions
would have to be done in writing, you know, for example,
typed and printed out, and at the end of the
interview session, the player would be tasked with deciding if
X was a machine or a human, or if Y
(20:06):
was the machine or the human.
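As a rough sketch of that protocol, and nothing more, here is a toy version in Python with made-up, canned respondents; the point is only the shape of the setup: written questions go to X and Y, text comes back, and the interrogator has to guess which label hides the machine.

```python
import random

def human_respondent(question: str) -> str:
    return "Count me out on this one. I never could write poetry."  # canned stand-in

def machine_respondent(question: str) -> str:
    return "Count me out on this one. I never could write poetry."  # canned stand-in

def imitation_game(question: str) -> dict:
    # Randomly hide which respondent sits behind which label.
    players = [human_respondent, machine_respondent]
    random.shuffle(players)
    assignment = {"X": players[0], "Y": players[1]}
    # The interrogator only ever sees labelled, written replies.
    for label in ("X", "Y"):
        print(f"{label}: {assignment[label](question)}")
    return assignment  # revealed only after the interrogator has made a guess

imitation_game("Please write me a sonnet on the subject of the Forth Bridge.")
```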
Turing was suggesting that as computers and systems get more sophisticated and things like
chat programs get better at processing natural language and formulating responses,
though that was a little past Turing's time, that it
would be increasingly difficult for a person to determine if
any given entity on the other end of a chat
(20:28):
session was actually a person or a machine. And Turing
also rather cheekily suggests that we might as well assume
the machine has consciousness at that point, because when you
meet another human being, you assume that that other human
being possesses consciousness, even though you're incapable of stepping into
that person's actual experience. So you can't take over that
(20:51):
person and find out, oh, yes, they do have consciousness,
You just assume they do. So if you and I
were to meet, I assume you would believe I am
in fact sentient and conscious even on my bad days.
So if we're willing to agree to this while simultaneously
being unable to actually experience and therefore prove it, then
(21:13):
should we not grant the same consideration to machines that
give off the appearance of sentience and consciousness. Do we
have to prove it or do we just go ahead
and treat them as if they are because that's what
we would do if it was a human. Now, Turing
was being a bit dismissive about the concept of machines thinking.
His point was that they might get very very good
(21:36):
at simulating thinking, and that might well be enough for
us to just go ahead and say that's what they're doing,
even if you could, you know, push machines through the
finest of sieves and find not one grain of actual
consciousness within them. Now, it doesn't hurt that defining consciousness,
even in human terms, is something that we can't really do,
(21:58):
or at least we don't have a unifying definition that
everybody agrees upon. Sometimes we define consciousness by what it
doesn't include, rather than what it is. This is why
I get antsy in philosophical discussions, because being sort of
a pragmatic dullard myself, it's hard for me to keep up.
But let's jump ahead and talk about a related concept.
(22:22):
This is also one that I've covered a few times
on TechStuff that also points to this ghost in
the machine idea. And this is the argument against machine
consciousness and strong AI. It is called the Chinese room.
John Searle, a philosopher, put forth this argument back in
nineteen eighty and that argument goes something like this. Let's
(22:45):
say we've got ourselves a computer, and this computer can
accept sheets of paper that have Chinese characters written on
the paper, and the computer can then produce new sheets
of paper. It can print out sheets that are also
covered in Chinese characters, that are in response to the
input sheets that were fed to it. These responses are sophisticated,
(23:06):
they are relevant. They're good enough that a native Chinese
speaker would be certain that someone fluent in Chinese was
creating the responses. Someone who understood what was being fed
to it was producing the output. So, in other words,
this system would pass the Turing test. But does that
(23:27):
mean the system actually understands Chinese. Searle's argument is no,
it doesn't. He says, Imagine that you are inside a room,
and for the purposes of this scenario, you do not
understand Chinese. So if you do understand Chinese, pretend you don't. Okay.
(23:47):
So there's a slot on the wall, and through this
slot you occasionally get sheets of paper, and there are
Chinese symbols on the sheets of paper. You cannot read these.
You don't know what they stand for. You don't know
anything about it other than they're clearly Chinese characters on
the paper. However, what you do have inside this room
(24:07):
with you is this big old book of instructions that
tells you what to do when these papers come in,
and you use the instructions to find the characters that
are on the input sheet of paper, and you follow
a path of instructions to create the corresponding response. Step
by step, you do it all the way until you
have created the full response to whatever was said to you.
(24:31):
Then you push the response back out the slot. Now,
the person on the other side of the slot is
going to get a response that appears to come from
someone who is fluent in Chinese. But you're not. You're
just following a preset list of instructions. You don't have
any actual understanding of what's going on. You still don't
know the meaning of what was given to you. You
(24:52):
don't even know the meaning of what you produced. You're just ignorantly following an algorithm. So externally, it appears you understand. But if someone were to ask you to translate anything you had done, you wouldn't be able to do it.
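A minimal sketch of that room in Python, with a tiny invented rule book standing in for Searle's giant book of instructions: the lookup does all the work, and the operator never understands a single character on either side of the exchange.

```python
# Invented placeholder rules: incoming characters mapped to a prescribed reply.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def operator_in_the_room(slip_of_paper: str) -> str:
    """Match the incoming symbols and copy out the prescribed response,
    with no understanding of what either the input or the output means."""
    return RULE_BOOK.get(slip_of_paper, "对不起，我不明白。")  # canned fallback reply

print(operator_in_the_room("你好吗？"))  # looks fluent from outside the room
```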
So Searle is arguing against what is called strong AI. Generally, we define strong AI as artificial intelligence
(25:15):
that processes information in a way that is similar to
how our human brains process information. Strong AI may or
may not include semi related concepts like sentience and self
awareness and consciousness and motivations and the ability to experience things,
et cetera. So Searle is saying that machines, even incredibly
(25:38):
sophisticated machines, are incapable of reaching a level of understanding
that true intelligence can, that we humans can grasp things
on a level that machines simply are unable to reach
even if the machines can process information faster and in
greater quantities than humans are able to. Another way of
(25:58):
putting this is a calculator can multiply two very large
numbers and get a result much faster than a human
could do, but the calculator doesn't understand any significance behind
the numbers, or even if there's a lack of significance,
the calculator doesn't have that capability. Now, maybe Searle's argument
(26:18):
is valid, and maybe, as Turing suggests, it doesn't even matter.
So let's talk about machine learning for a moment. Machine
learning encompasses a broad scope of applications and approaches and disciplines,
but I'll focus on one approach from a very high level.
It's called generative adversarial networks, or GANs. Okay, as
(26:44):
the name suggests, this model uses two systems in opposition
to one another. On one side, you have a system
that is generative, that is, it generates something. Maybe it
generates pictures of cats, doesn't really matter. We'll use cats
for this example. So what does matter is that this
model is trying to create something that is indistinguishable from
(27:09):
the real version of that thing. So on the other side,
you have a system called the discriminator. So this is
a system that looks for fakes. Its job is to
sort out real versions of whatever it's designed to look
for and to flag ones that were generated or not real.
So with cats as our starting point, the discriminator is
(27:31):
meant to tell the difference between real pictures that have
cats and fake pictures of cats, or maybe just pictures
that don't have cats in them at all. So first
you have to train up your models, and you might
do this by setting the task. So let's start with
the generative system, and you create a system that is
(27:52):
meant to analyze a bunch of images of cats, and
you just feed it thousands of pictures of cats, all
these different cats, different sizes and colors and orientations and activity,
and then you tell the system to start making new
pictures of cats. And let's say that the first round the generative system does is horrific. H.P. Lovecraft would wet
(28:15):
himself if he saw the images that this computer had created.
You see that these horrors from the Great Beyond are
in no way, shape, or form cats. So you go
into the model and you start tweaking settings so that
the system produces something you know less eldritch, and you
go again, and you do this lots and lots of times,
(28:38):
like thousands of times, until the images start to look
a lot more cat ish. You do something similar with
the discriminator model. You feed it a bunch of images,
some with cats, some without, or maybe some with like
crudely drawn cats or whatever, and you see how many
the system is able to suss out. And maybe
(29:00):
if it doesn't do that good a job, maybe it
doesn't identify certain real images of cats properly. Maybe it
misidentifies images that don't have cats in them. So you
go into the discriminator's model and you start tweaking it
so it gets better and better at identifying images that
do not have real cats in them. And then you
set these two systems against each other. The generative system
(29:21):
is trying to create images that will fool the discriminator.
The discriminator is trying to identify generated images of cats
and only allow real images of cats through. It is
a zero sum game, winner takes all, and the two
systems compete against each other, with the models for each
updating repeatedly so that each gets a bit better between sessions.
(29:44):
If the generative model is able to consistently fool the
discriminator like half the time, the generative model is pretty
reliably creating good examples. This, by the way, is a ridiculous oversimplification of what's going on with generative adversarial networks, but you get the idea.
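For anyone who wants to see the shape of that loop, here is a minimal, hypothetical sketch in Python using PyTorch, with the cat pictures swapped out for the simplest possible stand-in, samples from a one-dimensional Gaussian. Only the structure matters: a generator and a discriminator taking turns updating against each other.

```python
import torch
import torch.nn as nn

def real_data(n):                 # stand-in for "real pictures of cats"
    return torch.randn(n, 1) * 1.5 + 4.0

def noise(n):                     # random input the generator shapes into fakes
    return torch.randn(n, 8)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # Train the discriminator: real samples are labeled 1, generated ones 0.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = generator(noise(64))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```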
This form of machine learning starts to feel kind of creepy to some of us.
(30:04):
Like the ability of a machine to learn to do
something better seems to be a very human quality, something
that makes us special. But if we can give machines
that capability, well, then how are we special or are
we special at all? That's something I'm going to tackle
(30:27):
as soon as we come back from this next break. Okay,
we're back now. I would argue that we are special.
Before the break, I was asking, can we be special
if machines are capable of learning? I think we are
(30:49):
in that we're able to do stuff that machines as
of right now either cannot do or they can do,
but they don't do it very well and they can
only attempt it after a ludicrous amount of time.
For example, let's talk about opening doors. Several years ago, in twenty sixteen, I was at South by Southwest. I attended
(31:10):
a panel about robotics and artificial intelligence and human computer interactions.
In that panel, Leila Takayama, a cognitive and social scientist,
talked about working in the field of human computer interaction
and she mentioned how she was once in an office
where a robot was in the middle of a hallway,
sitting motionless. It was just facing a door. What Takayama
(31:33):
didn't know is that the robot was processing how to
open that door, staring at the door and trying to
figure out how to open it for days on end.
This was taking a lot of time, obviously. Now when
you think about doors, you realize there can be quite
a few options, right, Maybe you need to pull on
a handle to open the door. Maybe you need to
(31:54):
push on the door. Maybe there's a door knob that
first you have to turn before you pull or push.
Maybe there's a crash bar, also known as a panic bar.
Those are the horizontal bars on exit doors that you
push on to open. Frequently, they're seen in doors that
open to an exterior location, like inside schools and stuff.
(32:15):
You push on them to get out. Maybe it's a
revolving door, which adds in a whole new level of complexity.
But you get my point. There are a lot of
different kinds of doors. Now. We humans pick up on
how doors work pretty darn quickly. I mean sure, we
might be like that one kid in the Far Side cartoon
where the kid's going to the School for the Gifted
(32:37):
and he's pushing as hard as he can on a
door that is labeled pull. That could be us sometimes,
but we figure it out right, we do a quick push,
we realize, oh, it's not opening. We pull. For robots, it's more challenging. They are not good at extrapolating
from past experience, at least not in every field. We
humans can apply our knowledge from earlier encounters, and
(32:59):
even if the thing we're facing is mostly new
to us, we might recognize elements that give us a
hint on how to proceed. Robots and AI aren't really
good at doing that. They're also not good at associative thinking,
which is where we start to draw connections between different
(33:20):
ideas to come up with something new. It's a really
important step in the creative process. I find myself free
associating whenever I'm not actively thinking about something. So if
I'm doing a mundane task, like if I'm washing dishes
or I'm mowing the lawn, my brain is going nuts
free associating ideas and creating new ones. Machines are not
(33:41):
very good at that for now anyway. They are not
bad at mimicking it, but they can't actually do it. So,
getting back to Leila Takayama, one of the really fascinating
bits about that panel I went to was a discussion
on social cues that robots could have in order to
(34:03):
alert us humans in that same space to what the robot was up to. This was not for the robot's benefit,
but for our benefit. The whole point is that these
cues would give us an idea of what was going
on with the robot so that we don't accidentally, you know,
interrupt the robot. So you know, it might be like
(34:24):
the robot's in that hallway and it's looking at a door,
and you're wondering, why is this robot shut down in
the hallway, But then maybe the robot reaches up to
apparently kind of scratch its head in sort of a huh,
what's going on kind of gesture, and that might tell you, oh,
the robot is actively analyzing something. Don't know exactly what
(34:46):
it is, but it's clearly working, so maybe I'll step
around the robot behind it and not interrupt its vision
of the door it's staring at. The whole point is
that the social cues can help us interact more
naturally with robots and coexist with them within human spaces,
(35:07):
so that both the humans and the robots can operate
well with one another. Also, it helps to explain what
the robot is doing, because if you don't have that,
the robots end up being mysterious, right? We can't see
into them, we don't understand what they are currently trying
to do, and mystery can breed distrust. That leads to
(35:30):
yet another concept in AI that gets to this ghost
in the machine concept, which is the black box. So,
in this context, a black box refers to any system
where it is difficult or impossible to see how the
system works internally. Therefore, there's no way of knowing how
the system is actually producing any given output, or even
(35:52):
if the output is the best it could do. So
with a black box system, you feed input into the
system and you get output out of it, but
you don't know what was happening in the middle. You
don't know what the system did to turn the input
into output. Maybe there's a sophisticated computer in the middle
of that system that's doing all the processing. Maybe there's
(36:13):
a person who doesn't understand Chinese stuck in there. Maybe
there's a magical fairy that waves a wand and produces
the result. The problem is we don't know, and by
not knowing, you cannot be certain that the output you're
getting is actually the best, or even relevant or the
most likely to be correct based upon the input you
(36:34):
fed it. So you start making decisions based on this output.
But because you're not sure that the output is actually good,
you therefore can't be sure that the decisions you're making
are the best, and that leads to really difficult problems.
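As a tiny, hypothetical Python sketch of that predicament: with a black box, all you can really do is call it and log what went in and what came out. Nothing below lets you ask why a given answer was produced, and the placeholder arithmetic stands in for internals you would not actually get to see.

```python
def opaque_model(inputs):
    # Pretend this is a huge, trained system whose internals we cannot audit.
    # (Placeholder arithmetic; in the real case we would not even see this line.)
    return sum(inputs) / len(inputs) + 0.42

audit_log = []
for batch in ([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]):
    output = opaque_model(batch)
    audit_log.append((batch, output))  # input/output pairs are all we can record
    print(batch, "->", output)
# We can act on these outputs, but we cannot explain or verify how they were produced.
```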
So let's take a theoretical example. Let's say we've built
a complex computer model that's designed to project the effects
(36:57):
of climate change. And let's say this model is so
complex and so recursive on itself that it effectively becomes
impossible for us to know whether or not the model
is actually working properly. Well, that would mean we wouldn't
really be able to rely on any predictions or projections
made by this model. I mean, maybe the projections are accurate,
(37:21):
but maybe they're not. The issue is there's no way
for us to be certain, and yet we have a
need to act. Climate change is a thing, and we
need to make changes to reduce its impact or to
mitigate it. It's possible that any decisions we make based
upon the output of the system will exacerbate the problem,
(37:42):
or maybe it'll just be less effective than alternative decisions
would be. Further, we're getting closer to that Arthur C.
Clarke statement about sufficiently advanced technologies being indistinguishable from magic.
If we produce systems that are so complicated that it's
impossible for us to understand them fully, we might begin
(38:03):
to view those technologies as being magical, or at the very
least greater than the sum of their parts, and this
can lead to some illogical decisions. This kind of brings
me to talk about the Church of AI called the
Way of the Future, which was founded and then later
dissolved by Anthony Levandowski. You may have heard Levandowski's name
(38:27):
if you followed the drama of his departure from Google
and his eventual employment and subsequent termination from Uber. And
then there was also the fact that he was sentenced
to go to prison for stealing company secrets and then
later received a presidential pardon from Donald Trump. So quick
recap on Levandowski. Levandowski worked within Google's autonomous vehicle division,
(38:53):
which would eventually become a full subsidiary of Google's parent company, Alphabet,
and that subsidiary is called Waymo. So when Levandowski left Google,
he brought with him a whole lot of data, data
that Google claimed belonged to the company and was proprietary
in nature and thus constituted company secrets. Levandowski eventually began
(39:16):
working with Uber in that company's own driverless vehicle initiative,
but the Google slash Waymo investigation would lead to Uber hastily firing Levandowski in sort of an attempt to kind
of disentangle Uber from this matter, which only worked a
little bit. Anyway, in the midst of all this Waymo
(39:37):
slash Uber drama, in twenty seventeen, Wired ran an article
that explained that this same Anthony Levandowski had formed a
church called Way of the Future a couple of years earlier.
In twenty fifteen, he placed himself at the head of
this church with the title of dean, and he also
became the CEO of the nonprofit organization designed to run
(40:01):
the church. The aim of the church was to see
quote the realization, acceptance, and worship of a Godhead based
on artificial intelligence AI developed through computer hardware and software
end quote. This is according to the founding documents that
were filed with the US Internal Revenue Service or IRS. Further,
(40:23):
Levandowski planned to start seminars based on this very idea
later in twenty seventeen. By twenty twenty, Levandowski's jump from Google to Uber had escalated into a prison sentence of eighteen months, because he had been found guilty of
stealing trade secrets. Trump would pardon Levandowski in January twenty
(40:45):
twenty one, kind of you know, after the insurrection on
January sixth, but before Trump would leave office in late January.
As for the Way of the Future, Levandowski actually began
to shut that down in June of twenty twenty and
it was dissolved by the end of twenty twenty, but
not reported on until like February twenty twenty one. He
(41:06):
directed the assets of the church, some one hundred and seventy five thousand dollars, to be donated to the NAACP.
Levandowski has said that the beliefs behind the church are
ones that he still adheres to, that AI has the
potential to tackle very challenging problems like taking care of
(41:29):
the planet, which Levandowski says we humans are obviously incapable of doing. You know, we would put on this system the job of taking care of things that we understand to be important but seem to be incapable of handling ourselves, almost like we're children. Thus looking at AI like a godhead,
(41:51):
so we should seek out solutions with AI rather than
locking AI away and saying, oh, we can't push AI's
development further in these directions because of the potential existential
dangers that could emerge from AI becoming super intelligent. I
don't think there are actually that many folks who are
(42:13):
trying to lock AI away at all. Mostly I see
tons of efforts to improve aspects of AI from a
million different angles. I think most serious AI researchers and
scientists aren't really focused on strong AI at all. They're
looking at very particular applications of artificial intelligence, very particular
(42:35):
implementations of it, but not like a strong AI that
acts like Deep Thought from The Hitchhiker's Guide to the Galaxy. Anyway,
maybe Levandowski's vision will eventually lead us not to a ghost in the machine, but a literal deus ex machina, which means god out of the machine. That seems to
(42:59):
be how Levandowski views the potential of AI, that our unsolvable problems are almost magically fixed thanks to this robotic or computational savior. Now, in fiction, deus ex machina is often seen as a cop out. Right, you've got
your characters in some sort of ironclad disastrous situation, there's
(43:23):
no escape for them, and then in order to get
that happy ending, you have some unlikely savior or unlikely
event happen and everyone gets saved, and it might be
satisfying because you've got the happy ending, but upon critical
reflection you think, well, that doesn't really make sense. There
are a lot of stories that get a lot of
(43:44):
flak for using deus ex machina. The image I always have
is from classical theater, where you've got all the mortal
characters in a terrible situation and then an actor standing
in as a god is literally lowered from the top
of the stage on pulleys to descend to the mortal
(44:05):
realm and fix everything so that you can have a comedy,
a play with a happy ending. For Levandowski, it's really about turning the ghost in the machine into a god.
I'm not so sure about that myself. I don't know
if that's a realistic vision. I can see the appeal
of it, because we do have these very difficult problems
(44:26):
that we need to solve, and we have had very
little progress on many of those problems for multiple reasons,
not just a lack of information, but a lack of
motivation or conflicting motivations where we have other needs that
have to be met that conflict with the solving of
a tough problem like climate change. Right, we have energy
(44:48):
needs that need to be met. There are places in
the developing world that would be disproportionately affected by massive
policies that were meant to mitigate climate change, and it's
tough to address that. Right. There are these real reasons
why it's a complicated issue beyond just it's hard to understand.
(45:08):
So I see the appeal of it, but it also
kind of feels like a cop out to me, like this idea of we'll engineer our way out of this problem,
because that just puts off doing anything about the problem
until future you can get around to it. I don't
know about any of you, but I am very much
guilty of the idea of, you know what this is,
(45:31):
future Jonathan will take care of this. Jonathan right now
has to focus on these other things. Future Jonathan will
take care of it. Future Jonathan, by the way, hates Jonathan of
right now and really hates Jonathan in the past because
it's just putting things off until it gets to a
point where you can't do it anymore, and by then
it might be too late. So that's what I worry
about with this particular approach, this idea of we'll figure
(45:53):
it out, we'll science our way out, we'll engineer our
way out, because it's projecting all that into the future
and not doing anything in the present. Anyway, that's the episode on the ghost in the machine. There are other interpretations
as well. There's some great ones in fiction where sometimes
you actually have a literal ghost in the machine, like
(46:14):
there's a haunted machine. But maybe I'll wait and tackle
that for a future, more like entertainment focused episode where
it's not so much about the technology but kind of
a critique of the entertainment itself, because there's only so
much you can say about, you know, I don't know,
(46:35):
a ghost calculator. That's it for this episode. If you
have suggestions for future episode topics or anything else that
you would like to communicate to me, there are a
couple of ways you can do that. One is you
can download the iHeartRadio app and navigate over to TechStuff. Just put TechStuff in the little search field and
you'll see that it'll pop up. You go to the
(46:56):
TechStuff page and there's a little microphone icon. Click
on that. You can leave a voice message up to
thirty seconds in length and let me know what you
would like to hear in future episodes. And if you like,
you can even tell me if I can use your
voice message in a future episode. Just let me know.
I'm all about opt in. I'm not gonna do it automatically,
(47:16):
or if you prefer, you can reach out on Twitter.
The handle for the show is TechStuffHSW, and I'll talk to you again really soon. TechStuff is
an iHeartRadio production. For more podcasts from iHeartRadio, visit the
(47:37):
iHeartRadio app, Apple Podcasts, or wherever you listen to your
favorite shows.