Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says Marvin, I
love you. Remember, I'm programmed for you. I'm Jonathan Strickland,
(00:22):
and I'm Joe McCormick. And our other host, Lauren is
not with us today, but you had something to
say about Marvin there. Now, does that mean today we're
gonna be talking about somebody named Marvin? We are? Is
it going to be Lee Marvin? It will not be
Lee Marvin apart from this very moment where we are
talking about Lee Marvin. No, Lee Marvin wouldn't make a
lot of sense on this podcast. No. I mean, you know,
(00:43):
the lyric refers to Marvin the paranoid android from Hitchhiker's
Guide to the Galaxy. But we're not talking about that
Marvin either. Although we're talking about a Marvin who had
a lot to do with artificial intelligence, which Marvin the
paranoid android possessed a great deal of. That was kind of
a long way of going about that. We're talking about
Marvin Minsky. Marvin Minsky. So he passed away last month. Yes,
(01:07):
Marvin Minsky was an artificial intelligence pioneer associated with
MIT. And he passed away on Sunday,
January twenty-fourth, twenty sixteen, and a bunch of publications that I
read and I'd seen online had been running some retrospectives
of his life, looking at his influence on his main field,
which I guess you would say is artificial intelligence, but
(01:28):
also on the history of computational theory and on cognitive science.
You might say, yeah, it's interesting because uh, you know,
we often will refer to artificial intelligence as being multidisciplinary.
It's not. You know, you could argue artificial intelligence is
its own discipline, but within that you have other disciplines.
It's far more complex than just a label. And really,
(01:50):
if you're talking about the the entire scope of artificial intelligence,
it almost necessarily encompasses all of human knowledge. Yeah, you're
You're not wrong, I mean, and Minsky in many ways
was kind of a human example of this, because he
certainly had a wide variety of interests and um and
(02:12):
so we wanted to really kind of talk about him,
and in a way we're thinking about doing occasionally an
episode about a forward thinker of some sort. So we
may in the future do episodes about other Forward thinkers.
This is sort of a pilot program for that, and uh,
I know, there are a lot of people we would
love to talk about in the future, So we're gonna
(02:32):
kind of start with this one. And if you guys
out there have people, you know, Forward thinkers you would
love us to profile, you should definitely let us know.
And we'll talk more about that at the end. But
let's talk more about Minsky. Yeah, and so we we
thought it'd be good to talk about Minsky because we
so often talk about artificial intelligence on the program and
it's one of the great future frontiers that we keep
(02:55):
coming back to, and that his his influence on the
development of artificial intelligence in the second half of
the twentieth century has been so profound, and also his
views on where artificial intelligence had been going over the
last decade are really interesting. Yeah, we'll conclude with our
discussion on that, but to start off at the beginning,
Minsky himself was born in New York City on August ninth,
(03:18):
nineteen twenty-seven. Yeah. So there was a piece that
we read that was a profile of Minsky from The
New Yorker in nineteen eighty one that was written by
the physicist Jeremy Bernstein. It was just shy of a
full biography. I mean it was, yeah, it was really
comprehensive and and one of the things that makes me
realize is that we totally do not have space on
(03:39):
this podcast to cover all of the interesting aspects of
his life. So we're just going to do a kind
of highlight reel of some of the things that stuck
out to us. But if you're interested in the stuff
we have here, I would highly recommend checking that out
to learn more about him. But anyway, that piece is
going to be the source of several quotes that I've
pulled about Minsky's childhood and education that I
(04:01):
thought would help give you a better picture of sort
of the color of his personality and life. Right, Yeah,
because this guy was a lot of people described him
as being imaginative and humorous and maybe some people would
say eccentric. Certainly they would say, you know, he was
very enthusiastic, so a vibrant personality, not like some person
(04:22):
who would cloister himself away from everybody else in order
to work on ideas. He strikes me as uh a
quintessential outside the box thinker, you know what I mean,
somebody who would always approach a problem in
a strange and usually fruitful way. And he loved to
incorporate students in his thinking. He loved to
(04:44):
collaborate with students because I think, although I don't think
he ever necessarily articulated it this way, to me, it
sounds like he loved to talk with people who had
not yet learned what was impossible, because that meant that
they didn't put those constraints on their ideas from the get-go.
And that's where you see a lot of innovation. Yeah,
I know exactly what you mean, and I think it's
(05:05):
kind of inspiring in that way, right. I agree. But
I want to start with a little picture
of little Marvin. So he's talking about the different interests
he had in in subjects in school when when he
was a kid, and he he talks about his interesting
chemistry and this is his sort of hands on approach
to doing experiments and learning things firsthand. So he says,
(05:27):
I've been reading some chemistry books and I thought it
would be nice to make some chemicals. In particular, I
had read about ethyl mercaptan, which interested me because
it was said to be the worst smelling thing around.
I went to, and this is his teacher, Mr. Zim.
I went to Zim and told him that I wanted
to make some. He said, sure, how do you plan
(05:48):
to do it? We talked about it for a while,
and he convinced me that if we were going to
be thorough, we should first make ethanol, from which we
were to make ethyl chloride. I did make the ethanol
and then the ethyl chloride, which instantly disappeared. It's about
the most volatile thing there is. I think Zim had
fooled me into doing this synthesis, knowing that the product
would evaporate before I had actually got to make that
(06:10):
awful mercaptan. I remember being sort of mad and deciding
that chemistry was harder than it looked on paper, because
when you synthesize something, it can just disappear. I thought
this was an interesting metaphor also for the way you
would end up chasing the physical basis of intelligence. Sure, yeah,
I mean it's it's you know, there's one thing that
(06:31):
he would refer to, uh, you know. He would say
that intelligence was sometimes what you would call a suitcase
word because he would cram so many different concepts into
the the suitcase of intelligence. And uh, we've also mentioned
this when he would say the same thing about consciousness.
But I know that we've on this episode or not
this episode, but on the show I've talked about how
(06:52):
consciousness is kind of one of those ideas where you
almost define it by striking things out from under the
umbrella of consciousness, right, and then you're like, okay, so
whatever's left, that's what consciousness is. It's just, some people
have made the criticism of consciousness theory
that you know, it's almost like when you're saying what
(07:12):
consciousness is, you're just making a list of all the
things the brain does and then striking out everything that
we fully understand, or not fully understand, but everything that
we understand the physical basis of, where we've got a good grip
on the actual mechanisms that are going on behind the scenes.
And so actually, I'm okay with using consciousness as a
placeholder until we figure everything else out. That will kind
(07:34):
of come into play with his ideas on what thought
was all about. But before we get to that, we
also need to talk about how, when he was,
uh, in nineteen forty-four, he joined the U.S. Navy and
served in the Navy until nineteen forty-five. Yeah. I think he
explains that he he joined the Navy because he was
saying that he knew they would send him to electric
(07:55):
electricians electrical school whatever they called it back then, They
would send him to school if he if he joined.
So I think he was going to he was on
track to be a radar technician or something like that.
But of course that you know, he was in the military,
so they had him do basic training, he says. And
he talks about this group that he was in the
in the Navy with, and he says, our little group
(08:16):
was a strange kind of mini Harvard in the middle
of the Navy. Everything seemed unrealistic. I practiced shooting down
planes on an anti aircraft simulator. I held the base record.
I shot down a hundred and twenty planes in a row.
I realized I had memorized the training tape and knew
in advance exactly where each plane would appear. But I
must have some odd skill in marksmanship. Many years later,
(08:38):
my wife and I were in Mexico on a trip.
We came across some kids shooting at things with a rifle.
I asked them if I could try it, and I
hit everything. It seems that I have a highly developed
skill at shooting things for which there is no explanation.
I also, I also love that he he talks about
how there were maybe four people in my company who
were really remarkable, including a mathematician and an astronomer. And
(09:00):
hearing this, you start to think there's like a nerdy
version of Inglourious Basterds that could be made from Minsky's
experience in the Navy. So instead of being these these
tough like special forces guys, it's like the brilliant mathematicians
and scientists who were part of the Navy and then
went on to go and do other things. Um Minsky,
(09:24):
after he left the Navy, joined... well, he attended Harvard University,
and this is where we really get a first look
at how he was interested in so many different fields
that collectively lent themselves to this idea of artificial intelligence.
He studied psychology, and he studied neurophysiology and physics. When
(09:45):
he graduated, his degree was in mathematics. But he was
interested in all this stuff while he was in school. Yes,
he moved around a lot like he He says quote.
I was nominally a physics major, but I also took
courses in sociology and psychology. I got interested in neurology
around the end of high school. I started thinking about thinking.
(10:05):
One of the things that got me started was wondering
why it was so hard to learn mathematics. You take
an hour a page to read this thing, and still
it doesn't make sense. Then suddenly it becomes so easy,
it's trivial. I had never thought about that before, but
he's exactly right about understanding math concepts. It's always been
that way for me. That you can go over how
(10:26):
to use a certain operator. You know, you're learning a
new type of mathematical function or operation, and it's just
banging your head against a wall until you get it.
And then as soon as you get it, it's it
seems so simple, it's stupid. Yeah, this was how I
experienced math when I was in high school. I remember
by the time I got to trigonometry, uh it was
(10:48):
it didn't take very long for that switch to click
in my head where I would see what I was
supposed to do and understand why I was doing it
that way. It wasn't until I hit Calculus, and I'm
not certain what the roadblock was, but for some reason,
when I hit Calculus, that switch would take longer and
longer to click. And I would often attribute that to
(11:11):
the fact that I think the way it was being
taught was here's how you do this, not here's why
you do this. So I think it also depends on
your approach to learning that concept. But I totally get
what he's saying. Where you look at something and it
just feels like I could read this for the twentieth time,
but it's still not going to become any more clear
to me, and then two hours later, when you're doing
(11:32):
something totally different, you just think, oh wait, now I
get it. Yeah, yeah, it says something very
interesting about human cognition. And I think this insight that
he mentions here could very well come into play when
we're talking about how you construct intelligence from base parts, uh,
because there's something happening here. There's something about intuition and
(11:54):
about maybe the formation of pathways like you would have
in your neural network, where you know, once the
pathway is set, now you can find your way back
there quite easily. Yeah, you could even think of
that as being, you know, an analogy of
a physical pathway through a forest. Like the first time
you go and make a path, you're cutting your way through.
(12:15):
It's a lot of work. Uh. It might even be
hard for you to retrace it the first time, But
after you've done it a couple of times, there's a
pretty worn path there that's much easier to follow. It's
it's a fitting analogy in many ways. But Minsky also had,
as we've said, very eclectic interests when he was in school.
For example, there is all throughout his life he was
(12:36):
interested in music, and I love what he says about
music here. This is another interesting thing about cognition that
I'll get to in this. He says, quote, I had
also taken a number of music courses with Irving Fine.
He usually gave me C's or D's, but he kept
encouraging me to come back. He was a tremendously honest man.
Is that referring to the C's and D's? I'm not sure. Uh.
(12:59):
He says he was a tremendously honest man. I think
the problem was that I was basically an improviser, one
of those people who can occasionally improvise an entire fugue
in satisfactory form without much conscious thought or plan. The
trouble is, the more I work on a piece deliberately,
the worse it gets. I can totally get behind this too,
(13:21):
because you know, we're both writers, and I'm sure that
I know what he's talking about. There have been experiences where
you'll sit down and you just you get a nugget
of inspiration and you just start writing. And while you
may have to go back and
revise a little bit, in large part, it's just
it feels really satisfying. And there are other times when
you think I have an idea, I'm gonna go ahead
(13:43):
and start the whole process of outlining all of this
and then blocking it all out, and then I'll actually
get around to writing it, and then, like you know,
two hours later, you're just like, I don't know, whatever
made me think this was worth putting down on paper. Yeah,
I know exactly what you mean. I mean usually, I
would say for most people and for myself, more work
leads to improvement, but not all the time. Sometimes you
(14:05):
can just write a thing to death. The more you
keep tinkering with it, the less interesting it becomes. Uh So,
by nineteen fifty one, he had graduated from Harvard the year before,
and then he goes and joins Princeton University for postgraduate studies,
and uh that same year he built the world's first
(14:25):
neural network simulator. And this is this is a thing
that is worth noting. It's a neural network simulator in
nineteen fifty one, So try to imagine that this is
not based on microchips. No. Um. Also, it was called SNARC,
which is great. It's S-N-A-R-C. And
that stands for Stochastic Neural Analog Reinforcement Calculator, which really
(14:46):
clears it all up. Uh. Stochastic is one of those
words that's going to pop up a couple of times
as we talk about this. In case you aren't familiar
with the term, it essentially means random. That's kind
of an easy way of translating it. Uh So. He
graduated Princeton in nineteen fifty four with a doctorate in mathematics.
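The stochastic-reinforcement idea behind the SNARC mentioned above can be sketched in software. This is a loose, assumption-laden illustration of ours, not the actual machine, which was analog hardware built from vacuum tubes and surplus parts: a connection fires at random with some probability, and that probability gets nudged upward whenever firing led to a reward.

```python
import random

# A loose software sketch of SNARC-style stochastic reinforcement
# (our own illustration; the real 1951 machine was analog hardware,
# not code). A "synapse" fires at random with probability p, and p
# is nudged toward firing again whenever firing led to a reward.
class StochasticSynapse:
    def __init__(self, p=0.5):
        self.p = p  # current probability of firing

    def fire(self):
        return random.random() < self.p

    def reinforce(self, reward, step=0.1):
        # Move p toward 1 on reward, toward 0 otherwise
        target = 1.0 if reward else 0.0
        self.p += step * (target - self.p)

syn = StochasticSynapse()
for _ in range(50):
    fired = syn.fire()
    syn.reinforce(reward=fired)  # stand-in for "the maze run succeeded"
```

Because each update is a convex step toward 0 or 1, the probability always stays between 0 and 1, and repeated rewards drive the connection toward firing reliably.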
The following year, in fifty five, he invented the confocal
(15:07):
scanning microscope, which actually uses a little spatial pinhole inside
the lens, and the purpose of that is to filter
out all the light that would not be in focus,
so it therefore creates a higher resolution image of whatever
it is you're looking at through the microscope. So it's
kind of just really improving resolution. Now, in nineteen fifty-seven,
(15:28):
Marvin Minsky began to work for MIT, the Massachusetts
Institute of Technology, and he was specifically interested in researching
computers in order to understand human thought, which uh might
seem counterintuitive to some people, like why would you look
at computers in order to get a better understanding of
how humans think? There was a really good analogy,
(15:51):
I thought, in um, one of the pieces we looked
at. It was on edge dot org, and it was
interviewing different people with recollections about Minsky's
life. But there was one part of this piece
that talked about how, even though the analogy was not perfect,
if you were a person today who wanted to understand
how birds fly, probably one of the easiest ways to
(16:14):
start would be to look at how airplanes work. Even
though airplanes and birds work in a different way, you
can start getting the principles about what you know how
things stay aloft in the air by looking at what
an airplane needs to do in order to not fall.
And I think the same thing could be true about
computers and brains. Both do computation, both do information processing.
(16:36):
So if you look at a thing that's kind of
graceful and mysterious, like a human mind, and you want
to try to understand it, it might be a good
place to start to say, Okay, how does information processing
work in a machine? Yeah, I mean, I'm always
hesitant about that. There are a lot of things
that Minsky talks about that I like a lot, but
(16:58):
it's because it relates to the mind, not the brain.
And uh, it's because I know that computers process information
in a very different way than the way we think
in general. I mean, if you're talking about classical computers
and the neural networks that we have in
the wetware we have in our heads. Uh, so
(17:18):
I'm always hesitant to make that comparison. However, when you
go to an abstract level of the human mind as
opposed to the human brain, then suddenly these conversations make
a lot more sense to me, and I'm a lot
more um inclined to agree and engage on that level
as opposed to just crossing my arms and going yeah, well,
(17:39):
I mean I think it plays on the same principle
as the idea of the universal computer, right that if
you have a Turing machine, you know you have a
basic universal computer. It doesn't matter what the hardware is.
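That universality point can be sketched with a toy example of ours (not anything from the episode): the loop below is the only "hardware," and any particular machine is just data handed to it as a transition table.

```python
# A toy Turing machine runner (our own illustration of universality):
# the loop is the generic "hardware," and a particular machine is just
# data -- a table mapping (state, symbol) to (write, move, next state).
def run_tm(tape, rules, state="start", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One machine expressed as pure data: flip every bit, halt at a blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", flip))  # prints 0100
```

Swapping in a different transition table runs a completely different machine on the same loop, which is the sense in which the hardware doesn't matter.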
If you can do the basic computing functions, you can
do the same job as a different kind of computer
that uses different hardware, right? Right. Well, moving on with
(18:01):
our little biography on Marvin Minsky before we get into
some more details about his specific ideas in fifty eight
or fifty nine. And the reason why I put that
down is because depending on what source you read, some
cite that uh, this happened in nineteen fifty eight, others
in nineteen fifty nine. My suspicion is this particular thing
took a long time to happen and probably started in
(18:22):
fifty eight and became official in fifty nine. Uh. Minsky
partnered with a man named John McCarthy who was a
professor of electrical engineering at MIT, and together
they formed the MIT AI Laboratory. And as
a side note, John McCarthy is generally credited as the
person who actually coined the phrase artificial intelligence in the
mid nineteen fifties, I didn't know. Yeah, so he was another, uh,
(18:46):
founding father of the science of artificial intelligence. Like you know,
if you were to make a list, you'd have people
like Ada Lovelace and Alan Turing and Marvin Minsky and
John McCarthy all on that list easily. I mean, you
would not want to leave them off. He would, Minsky
that is, stay with MIT for the
rest of his career. He became the Donner Professor of
(19:07):
Science in nineteen seventy four, and the Toshiba Professor of Media
Arts and Sciences at the MIT Media Lab
in nineteen ninety. Yeah. Well, I think we should now just
transition to a more general discussion of what were some
of Minsky's influential ideas, concepts, and books, because, as we've
(19:28):
said earlier, he was massively influential. We we don't have
time to talk about everything, but we want to highlight
a few interesting things that he brought forward and and uh,
A lot of his work kind of relates to this
running theme of the whole and its parts. Yeah, like
so whole as in w h O L E uh,
(19:49):
the entirety and its parts, specifically with reference to intelligence.
Right that that's a running theme throughout a lot of
his work. One of his early ideas was something that he
called frames. It was this concept that he proposed in
nineteen seventy-four, and defined frames as the general information a computer
(20:09):
system would have to possess before it can make specific decisions.
So what do you mean by that? All Right? So
let's say that you've built yourself a robot and
you want the robot to do things. In order for
the robot to do the things you want it to do,
you have to teach the robot certain concepts first. I
love that sentence. You want the robot to do things? Yeah, well,
let's you know, you could just build a robot, right,
(20:31):
I mean maybe, maybe you're Rossum and you're just like,
I just want some universal robots running around this place.
But you know, I don't care if they do anything. No,
you must send it upon the world with a mission.
But if you, here's a simple example, you've got a Roomba.
You've built a Roomba. Well, before you can just set
a Roomba down and have it vacuum a room,
you've got to teach it general concepts, things like uh, walls,
(20:52):
you know, uh, what happens if you come up
to a ledge, all this kind of stuff. You have to
teach it all of this before it can complete
the task it was built for. So one example that
is commonly cited is imagine you've got a computer system
and you've got a series of rooms, and these rooms
are connected to each other through doorways that actually have
doors on them. In order to have this computer
(21:15):
system be able to navigate through those rooms, you know,
presumably through some sort of robotic form, it would have
to understand how doors work. What a door is, that
a door could swing either inward or outward, the various
mechanisms that might be employed in order to work a door,
whether it's a door knob or um a handle that's
got a latch that you have to press down with
(21:37):
your thumb, or maybe even a bar that you have
to push or pull. And you have to teach the
computer system all these things. Now, these are things that,
once you teach them, humans are really good at.
Like once they get the basic concept of a door,
you've got pretty much all doors ready to go. Yeah,
you might get thrown by something like a revolving door
(21:58):
that you see for the first time. But most most
of the time, you're gonna see a door and you're
gonna think, all right, this is either going to open
inward or outward. It's not going to do anything else
unless it's Star Trek, and then it goes... But yeah, like
you said, if you're a robot and you come to a
door with different types of door knobs, or with door
knobs at different heights, or you only taught
the robot how to open a door if it opens outward.
(22:21):
What if it's a push door and it doesn't have
a knob. Yeah, And these are all sort of things
that that we take for granted as humans because we've
had some experience and we're able to extrapolate. Computer systems
in general are not good at this. Computer systems are
very good at performing tasks that they've been programmed to do,
but they're not so good at doing tasks they haven't
(22:43):
been programmed to do. Who'd have thunk it? So um,
But he was using this idea of
frames as a way of explaining this concept: these
are the sort of contextual information buckets
that you need to teach a computer system in order
for it to be able to do the thing you
designed it to do, in whatever environment that might be,
(23:06):
whether it's you know, a robot moving around rooms or
an autonomous submarine exploring underwater features. Anything. Really though, I
would suggest that, well, I guess I'd have to guess
because I don't know for sure, but I would guess
that Minsky would agree that if the human mind can
(23:27):
figure out things without having to be told them, a
computer potentially can too. It just needs the right equipment.
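The frames idea from a moment ago, general default knowledge that specific cases inherit and can override, can be sketched like this. The class and slot names here are our own illustration, not Minsky's notation.

```python
# A sketch of Minsky-style frames (names are our own illustration, not
# Minsky's notation): a frame is a bundle of slots with default values,
# and a more specific frame inherits any slot it doesn't override.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Look locally first, then fall back to the parent's default
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

door = Frame("door", swings="inward or outward", mechanism="knob")
push_door = Frame("push-bar door", parent=door, mechanism="push bar")

print(push_door.get("mechanism"))  # overridden: push bar
print(push_door.get("swings"))     # inherited default: inward or outward
```

The point of the default-plus-override structure is exactly the one in the discussion: the system doesn't need every variety of door spelled out, only the general frame plus whatever a specific door does differently.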
It needs the right processes to be able
to know things without being told them. It would
still need some frames, right?
Like for example, if I, uh, let's say that
your mind is completely wiped, Joe. So let's imagine it's
(23:52):
last Tuesday, because we all know what happened that day. Uh,
and I were to produce for you a coffee mug
and I point to this, and I say, this is
a mug. Sometimes people refer to it as a cup,
and it holds liquid. This is where the liquid goes.
As a human being... What's liquid? We've already covered that,
(24:14):
We've already gotten to that part. That's stuff we
already covered. This is actually pretty advanced in the day.
This is like four pm on Tuesday, and so, but
at that point you would be able to
recognize another cup, even if it were a different color,
different size, even if it were a slightly different shape.
Let's say it's like a novelty cup, so it's in
the shape of a TARDIS or something, you would know.
You might not know that that's a TARDIS, but you
would know that that was a cup, and you would
would know that that was a cup, and you would
be able to use it as such. Whereas computer systems,
if you haven't built in any sort of machine
learning so that they can actually start to extrapolate information,
they can't do that, right? It might
not even recognize the same cup if the light in
(24:54):
the room is different, or if it's a little too
far away from the camera, or a little too close,
because the size will look different to it by perspective.
So uh, you know, you would still need those frames
at least to exist for some amount of information so
that the machine could know what to do. But the
goal, of course, in artificial intelligence is to get machines
(25:17):
sophisticated enough that those frames can be more basic, that
you don't have to map out every single possibility in
order for a machine to be able to understand, that
the machine itself would be able, through perhaps even trial
and error, to learn how things work. Like if you taught
the uh, the machine how certain doors work. Like let's
(25:39):
say that you've got ten different varieties of doors in
this other scenario we mentioned, and you teach it about
five of them and how those five work, and it
has all the basic information of how all the doors work,
but the other five are slightly different variations on it.
And you have taught it how to uh do trial
and error so that it can actually experiment when it
(26:00):
encounters a door that doesn't fit the five that it
was taught. That would be more like, it
will do science in order to break on through to
the other side. Right yeah, it might just turn into
a robotic kool aid man, you know, and just crash through.
But your goal is so that it actually learns and
experiments and continues to uh grow its own knowledge. Okay, Well,
(26:23):
let's look at another one of Minsky's influential ideas, which
is his society of mind theory. Now, he had a
book called Society of Mind, I think in nineteen eighty five, right? Yeah.
Before that, he had started really playing around with this
concept all the way back in the nineteen sixties, and
what really inspired him was that he started to work
(26:44):
on a very basic robotics system. Uh. And it was
a very simple exercise in artificial intelligence. Simple in the
sense that it was elegant, not simple as in it
was easy to do. Yeah, and Minsky had done some
work with robotic motion and the manipulation of
arms and claws and stuff like that, right even back
(27:06):
when he was in school. This was one of the
great stories from that piece in The New Yorker that
Minsky tells. So once the Harvard zoology professor John Welsh
offered Minsky access to his lab and his equipment after
Minsky found out that scientists didn't know how the nerves
in crayfish worked and uh Minsky told The New Yorker,
(27:27):
I became an expert at dissecting crayfish. At one point,
I had a crayfish claw mounted on an apparatus in
such a way that I could operate the individual nerves.
I could get the several-jointed claw to reach down
and pick up a pencil and wave it around. I'm
not sure that what I was doing had much scientific value,
(27:47):
but I did learn which nerve fibers had to be
excited to inhibit the effects of another fiber so that
the claw would open. And it got me interested in
robotic instrumentation, something that I have now returned to, trying
to build better micro manipulators for surgery and the like. Yeah,
So in between his uh, Frankenstein-like experiments with crayfish
(28:09):
claws and developing uh, micromechanical systems for surgery, he
was experimenting with this very basic artificial intelligence robotic arm apparatus.
And it consisted of a computer that did calculations, a
camera that could focus in on what needed to be manipulated,
a robotic arm, and then a series of blocks. And
(28:31):
the idea was that if you could uh, teach the
computer system what certain terms were, like I want you
to build a tower, that you would then be able
to teach the robot how to pick up a block,
how to manipulate it so it's in the right place,
how to stack the blocks so that they're stable, and
(28:53):
also to teach it things that people kind of grasp
pretty quickly once they get out of the infant stage
of their lives, like if you're trying to build a
tower and you've got three blocks stacked on one another
and you need to put you know, your instruction is
make this tower four blocks high. One solution is not
(29:13):
to grab the block that's on the bottom of the tower,
pull it free, and then try and place it at
the top. The one I was thinking of was, would
it necessarily understand that you have to
lay down the lowest level first? Right, if you try
and say, all right, well, you know, let's start from
the top and work our way down. That doesn't... you
can't do that. This is something we've talked about before.
But I do think it's an interesting thing about artificial
(29:36):
intelligence that's often overlooked, which is that basic locomotion and physical
interaction with objects is a kind of intelligence. Absolutely, it's
it's not at all just like, well, that's the dumb
thing the robots do, and artificial intelligence is getting them
to be chatter bots, you know, to pass the Turing
test and have have conversations. I mean, knowing how to
(29:56):
move things in your environment in a smart way is absolutely
artificial intelligence. Sure, yeah, you know, knowing how to handle
any particular you know, object so that you are not
damaging it, that you can move it effectively, you might
even want to program in things where the robot knows
I cannot move this particular object because either it's too
(30:18):
delicate or it's too heavy, or whatever it may be. Yeah, okay,
but back to Minsky. So when Minsky was working on this,
he began to think about all the different elements that
are necessary in order to make this task possible, and
he began to look at kind of discrete facets of
intelligence that are required in order for you to do this,
and that's where he had this breakthrough, this idea that
(30:40):
led to the society of mind idea. So in the
nineteen seventies he began to develop this theory and he
published a lot of essays on the subject, and he
worked with an m. I. T. Mathematician named Seymour Papert
on several of the early ideas. So the book came
out in nineteen eighty six, and the argument he makes is that
the mind, not the brain, that the human mind is
(31:02):
made up of individual parts called agents, and agents, it's
important to note, have no mind of their own. So
agents themselves have no emotion, they have no thought. They
are aspects of the mind itself, and each agent is
responsible for a particular aspect of intelligence. It's through their
cooperation that conscious thought emerges, according to this society of
(31:25):
mind theory, and it's really about how the mind works
at a conceptual level as opposed to the biological level.
This is an idea I've encountered before in cognitive science,
but I wasn't aware in the past that it really
came from Minsky. Um. But I think there's a lot
to this. I think this is a very I would
consider this a very plausible and convincing way to think
(31:46):
about what consciousness and intelligence are. And even if you
are hesitant to argue for that, at the very least,
it is a very compelling way to think of artificial intelligence.
How do you get a machine to do any particular
thing that would require intelligence on behalf of that machine?
But if if you're not convinced, maybe we should look
(32:07):
at an example. And this comes straight from the book.
In fact, I read the book. Um. It's very easy
to read. Uh. Each idea is about a page long,
and each chapter is a collection of between eight and
nine ideas, maybe more or fewer depending upon the chapter,
and there are thirty-seven chapters. Uh, and I
(32:28):
actually also watched the beginning of a lecture that Minsky gave.
There's an open course where you can
go to MIT's website and watch a
lecture series led by Minsky himself from two thousand eleven. Oh,
that sounds fun. I kind of want to get on that.
It's pretty cool. And at the very beginning he talks
about how he really liked Society of Mind, and the
(32:48):
main reason he liked it is that each idea is
like a page long, and if you don't like it,
you can totally skip it and go to the next one.
It's really easy. Like this other book I wrote later,
the chapters are much longer, and if you don't like
an idea, you kind of have to just keep going. Uh.
You can't really hear the students, but I would hope
there was some good natured chuckling going on at any rate.
(33:10):
So he gives an example in his book, and he
presents a very simple scenario, the idea that you are
told to pick up a cup of tea and you're
gonna, you're gonna drink from this cup occasionally. But what
I'm immediately thinking of is something you mentioned
on the podcast a few episodes ago, which is
the office simulator: pick up the cup and throw it.
(33:31):
Just just start throwing things across the virtual reality office.
That that video is hilarious, by the way. So from
his book, he says, let's think about all the elements
that go into picking up a cup of tea, uh,
in this idea of a society of mind that's
made up of agents. He says your grasping agents want
(33:52):
to keep hold of the cup. And he uses the
word want as in, not that they have an
actual motivation, but that's their purpose. So your grasping agents
want to keep hold of the cup. Your balancing agents
want to keep the tea from spilling out. Your thirst
agents want you to drink the tea. Your moving agents
want to get the cup to your lips. So he
(34:13):
argues that these four agents working together, although each one
is independent and that is important, they're independent of one another,
but they're working together in concert, can accomplish the task
of allowing you to drink your tea. And more importantly,
you can do this while doing other things like you could.
His example was walking around like at a like at
a tea party type deal, and you're having conversations with people,
(34:37):
and you're just casually holding your tea and occasionally sipping it.
But you're not thinking about that, right, at least not
consciously thinking about it, right. But clearly your brain is
doing all this work, right, It's not like you're just
magically holding this cup and keeping the liquid from spilling
out and all that kind of stuff. But he said
that you know consciously, you're not really aware of it. Uh.
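To make that concrete, here is one way the tea-cup example could be sketched in code. This is our own toy illustration of the society-of-mind idea, not anything Minsky wrote: each "agent" is just a mindless condition plus a tiny action, and all of the names and the scheduling loop are invented for the example.

```python
# A toy sketch of Minsky's society-of-mind picture: each agent is
# mindless -- a "want" (a predicate on the world) plus one tiny action --
# and drinking the tea emerges only from their cooperation.

class Agent:
    def __init__(self, name, wants, act):
        self.name = name      # label only; the agent has no self-awareness
        self.wants = wants    # predicate on the world state
        self.act = act        # one small state change

def run(world, agents, steps=10):
    """Let each unsatisfied agent act in turn until all are satisfied."""
    for _ in range(steps):
        pending = [a for a in agents if not a.wants(world)]
        if not pending:
            return world
        for a in pending:
            a.act(world)
    return world

world = {"holding": False, "spilled": False, "cup_at_lips": False, "drank": False}

agents = [
    Agent("GRASP",   lambda w: w["holding"],     lambda w: w.update(holding=True)),
    Agent("BALANCE", lambda w: not w["spilled"], lambda w: None),
    Agent("MOVE",    lambda w: w["cup_at_lips"] or not w["holding"],
                     lambda w: w.update(cup_at_lips=True)),
    Agent("THIRST",  lambda w: w["drank"] or not w["cup_at_lips"],
                     lambda w: w.update(drank=True)),
]

run(world, agents)
print(world["drank"])  # True: the tea got drunk, yet no single agent "drank" it
```

Note that no single agent knows anything about "drinking tea"; the behavior only emerges from four narrow, mindless rules working in concert, which is the heart of the society-of-mind picture.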
(35:00):
So my example, I said, you could, uh, you could
drink your tea while not interrupting other stuff you might
be doing, such as telling the Queen of England that
hilarious story about the time you got drunk on the tube. Um,
because as soon as I think tea, I'm like, well,
clearly, if I'm drinking tea, I'm obviously having it
with the Queen, as I am wont to do. And so Minsky would
(35:22):
go on to argue that the concept of agents is
a necessary concept. He argues that if we cannot and
this is a quote, explain the mind in terms of
things that have no thoughts or feelings of their own,
we'll only have gone around in a circle. So, in
other words, he says that if your definition of thinking
requires you to talk about smaller elements that also think
(35:48):
you're not really describing thinking, you're just
shifting the definition around to different parts of the brain.
This is something that parallels an analogy that I
remember coming across in the works of Daniel Dennett, the
cognitive philosopher, and so he presents this idea of
the Cartesian theater. Have you ever heard this, I've heard
(36:10):
the term. Well, essentially, he says, okay, so some people
think that, look, what your eyes do is
that they take in light from your surroundings and they
paint a picture. And it's like the brain projects that
picture as a movie screen for you to see. But
who's doing the seeing? So then you have to imagine
(36:32):
that really inside your brain is a
little brain that gets to sit in the movie theater
of your mind and watch the screen that is made
by your eyes and so, but who's seeing within that
brain in that movie theater. So if you keep postulating
a little person inside you that is the audience of
(36:54):
your thoughts or the audience of what you are perceiving,
it's an infinite regress, right. Right. And that's not
helpful if you want to have an actual meaningful conversation
about how is this working? Um. So the book divides
up concepts into these categories that I kind of mentioned,
where you have like maybe up to eight or nine
(37:15):
of these one-sheet descriptions collected under these categories. And
those categories include things like wholes and parts, kind of
what I was referring to earlier, conflict and compromise, the self,
problems and goals, and lots of other ones. Like I said,
there are thirty-seven in that book, and each section details
Minsky's ideas on how the human mind processes this information
(37:37):
on a conceptual level. So Minsky uses the example of
building blocks early on in the book to demonstrate all
the considerations one has to take in order
to complete that simple task. So, uh, again, you know,
back to that idea of I want you to build
a steeple. I don't know what a steeple is. A
steeple is going to be two green blocks and one
(38:00):
orange triangle that goes on top. And then once you
teach it, then it, you know, it knows how to
do that, but you have to give
it, you know, all the agents
to identify things like a block versus a triangle, how
to pick that up, how to place them, the fact
that the blocks have to go on the bottom and
the triangle has to go on the top, all this
kind of stuff. Um. And he says that once you
(38:21):
break it down into those basic parts, then suddenly these,
uh, advances in artificial intelligence become possible. Um. And
like I said, you can take an open course on
the Society of Mind on the MIT website.
You can actually find that for free. So if you
want to check it out, even if you just want
to see some of the lectures and and hear what
(38:43):
the man himself had to say about this idea, you
can go and do that. And I highly recommend checking
it out, at least, you know, to satisfy your curiosity by
giving it a good ten or fifteen minutes. The first lecture is
two hours long, and there are a lot of lectures,
so um. But yeah, and that this kind of leads
to that idea of common sense and common sense is
(39:07):
one of those things that we kind of innately understand
as human beings. But what does that mean for artificial intelligence? Yeah,
and this is one of the things that I think
got mentioned most often, like in the obituaries after he
passed away, a lot of publications mentioned that he was
interested in giving computers common sense. But what does that
really mean from Minsky's point of view, Well, it goes
(39:29):
back to that that description I talked about earlier, like
if you're building a tower, you can't, you know, you
can't take a block from the bottom of the tower
and put it on the top, or you can't start
at the top and work your way down. Gravity is obvious
to us, but maybe not obvious to a computer. Right,
So things that are common sense we often kind
of dismiss as being easy or simple, or it's just
(39:49):
a matter of fact, and therefore it's not anything to
to really worry about, except if you're building an artificial
system to do those things. The artificial system doesn't know
any of that, so you have to teach it. And
I think his point is sort of that common sense
is not as simple as we think it is. It's
actually it's actually quite hard. We think common sense is
(40:10):
something that's very basic or very simple because it's intuitive
to us, but it's not basic. It's not simple. Common
sense is incredibly complex. Yeah, he he had a quote
um that says, common sense is not a simple thing. Instead,
it is an immense society of hard-earned practical ideas,
of multitudes of life-learned rules and exceptions, dispositions and tendencies,
(40:34):
balances and checks, which I think is a good way
of putting it. Like it's stuff that once we humans
have come in contact with it, you've got it, right?
It's like this little, this little box in our brains
gets checked and we understand that concept from that point forward,
even if we encounter it in a different context in
the future. Not so with machines, at least not naturally,
(40:55):
which is why it's a big problem in artificial intelligence
that if you can create a machine intelligence that is
able to mimic that sort of uh feature of human intelligence,
you're you're way ahead of the game. So um uh yeah,
it's it's interesting too because you've got this, I like
your fun fact in here. Well, yes, the fun fact
(41:16):
is that did you know that Marvin Minsky was consulted
by Stanley Kubrick as I don't know exactly what you
call it, maybe sort of a science advisor for two
thousand one, A Space Odyssey. I did, but only because
Minsky would often have Kubrick over to his house for parties,
as well as Arthur C. Clarke and Isaac Asimov. Minsky
(41:38):
moved in some awesome circles, like like people who were
really interested in robotics, not just from the academic side,
but from the literary side, were all, uh, in contact
with him at the time. He taught Ray Kurzweil,
didn't he? He may have. I don't know that for
a fact. I do know that he he had conversations
(41:59):
with Albert Einstein and said he couldn't understand a
word of it. Um, he uh. He was friends with Heinlein,
So I mean the guy was like, he was like
the guy who knew everybody. So it would not surprise
me to learn that he had taught Kurzweil. We have
some other stuff to talk about. He has another book
called The Emotion Machine, which came out in two thousand six,
(42:22):
and this is sort of following up on some of
the same ideas from earlier in his career. Yeah, it's,
most people refer to it as a sequel to
Society of Mind. His central argument in this one is
that emotions are really just different ways of thinking. Yeah,
and I've read several quotes of his along these lines
where he talks about he's sort of urging people not
to underestimate the the cognitive content of emotions, if that
(42:47):
makes any sense. He certainly says that, you know, the
ability to have these emotions, whether or not they're different
methods of thinking, uh, they lead to greater intelligence.
That it creates a new capability of looking at information,
and it is an interesting way of looking at it right, Like,
(43:08):
like if you are thinking about something and you're angry,
you might come to a different conclusion and learn something
that you otherwise wouldn't have if you were happy or sad. Well,
it also for me, raises an interesting question, which is
that we naturally make a distinction between thoughts and feelings.
We think there are two different species of things. You know,
(43:28):
I have feelings and then I have thoughts, And I
can have thoughts about feelings, and I can have feelings
about thoughts. But are they necessarily different species? Are are
feelings maybe just another type of thought? Are they just thoughts?
And this kind of brings us to that amazing documentary
Inside Out, which a lot of people have said, you
(43:48):
know it was it's based on some of the most
current information and scholarship on emotions and memory, and thought, yeah,
I really haven't seen it. I know. So, I know
some people who liked it a lot. I saw
it, so anyway, when I describe my reaction to Inside Out,
which I thought was entertaining, but that's about it. Most
(44:10):
people think I'm dead inside because like that movie destroyed me,
I cried, like crazy, like what's wrong with being dead inside? Yeah? Look,
a lot of Pixar's movies affect me deeply. That was
just not one of them. However, I did think it
was a very interesting approach with emotions their connection to
thought and memory, and it seems very similar in many
ways to what Minsky was saying. Now, not everybody was
(44:35):
totally thrilled with this approach. Some people thought it was
an interesting way of understanding the mind, but a misleading
way of thinking about how the brain actually works. So
neurologist Richard Restak wrote up a review about his, you
(44:55):
know, his work on emotions and criticized part of Minsky's approach,
saying that Minsky failed to show how emotional functions relate
to brain activity. Now, he acknowledged that Minsky explains this
by saying our knowledge of the brain changes so quickly
that it becomes outdated rapidly. But then Restak says, well,
how can you possibly draw any meaningful correlation between brains
(45:16):
and machines if you also are arguing our knowledge of
the brain changes so quickly as to essentially contradict itself,
So you can't make any conclusion if part of your
argument states our knowledge of the brain changes so quickly
that it changes our understanding. Like, how can you conclude
anything if at the very start of your argument you say, listen,
(45:39):
I'm not gonna write about the brain because our knowledge
of it changes so quickly. Anything I write will be
out of date by the time this book is published.
But then to say brains are totally like machines, like that, he says, you know,
that's a logical, there's there's a disconnect there. Now,
Restak also wrote that he had some reservations about some
of Minsky's other assertions, many of which seem to draw
(45:59):
conclusions about how the brain works based on how
large complex computer systems work. So Restak wasn't so sure
you could support such a connection. But he also said,
it may turn out that Minsky is completely right. We
just don't have the science to support it one way
or, you know, deny it one way or the other.
It's it's just that without knowing, we can't be sure.
(46:20):
But it may turn out that these are absolutely on target.
We just we just can't be so so sure of
it right now. But he did say you could learn
a lot about how the mind works by reading Minsky's book,
you just wouldn't learn about how that relates to the
way your brain functions. So again, the mind being this
more nebulous platform that rests upon the brain, like it's
(46:43):
a manifestation of the brain's abilities. Um, and that we
can learn more about how the mind works, but not
so much necessarily about the neurology underneath it. Um. Just
pretty cool, any hope coming through for my theory that
the brain is just for cooling the blood and we
really think with our toenails. Uh, I'm gonna, I'm
(47:05):
gonna say that science does not currently have very much
support for that particular belief, but shine on you, crazy diamond.
They also said Galileo was wrong. Okay, and moving on,
let's talk about Minsky and the concept of free will. Yeah,
Minsky had some really interesting thoughts on this, and I
want to read a quote from his paper Matter, Mind
(47:29):
and Models, and this was cited in another thing I
read about him. But this quote goes, if one thoroughly
understands a machine or a program, he finds no urge
to attribute volition to it. If one does not understand
it so well, he must supply an incomplete model for explanation.
Our everyday intuitive models of higher human activity are quite incomplete,
(47:53):
and many notions in our informal explanations do not tolerate
close examination. Free will or volition is one such notion.
People are incapable of explaining how it differs from stochastic caprice,
but feel strongly that it does. So stochastic caprice, in
case you're wondering, would mean random whimsy. Yeah, so he's
(48:15):
saying that even though we can't really explain, we can't
give any good account of where free will comes from,
we insist we must have it, and that it is
different from just random impulses that we have that we
act on. Yes, but he continues, I conjectured that this
idea has its genesis in a strong primitive defense mechanism. Briefly,
(48:36):
in childhood, we learn to recognize various forms of aggression
and compulsion, and to dislike them, whether we submit or resist. Older,
when told that our behavior is controlled by such and
such a set of laws, we insert this fact in our
model, inappropriately, along with other recognizers of compulsion. We resist compulsion,
(48:57):
no matter from whom, and whom is in quotes there.
Although resistance is logically futile, this resentment persists and is
rationalized by defective explanations since the alternative is emotionally unacceptable.
I think that's a very interesting insight. Yeah, And and
it applies to more than just intelligence, Yeah, you know,
(49:17):
because it It actually reminds me of any time where
we encounter something we've never encountered before, and by we,
I mean humans at large, and we naturally, as curious beings,
want to understand that thing we've just encountered, and often
in our first attempts we will we will create explanations
(49:42):
that don't necessarily correlate to any kind of reality in
order to explain it. And it's only later on, as
we start to peel things away that we really see
what's happening underneath the surface. Yeah, and then he
goes on to apply this same reasoning, uh, about the
origins of our resistance to, you know, the idea of determinism,
(50:05):
and our tendency toward free will as
a kind of rebellious impulse against compulsion. Uh. He's like, well,
hold on, now, if we create intelligent machines and they
have something like consciousness, will that inherently bring with it
the illusion of free will and the resistance to the
(50:26):
idea of compulsion by physical determinism. Would a robot with
consciousness also insist that it has free will? Yeah, it's
an excellent question that right now remains in the realm
of philosophy. One day it will not be though. One
day it will be a reality whether or not, and
(50:47):
it may turn out that the answers No, I don't
need to worry about that. I don't know that that's
going to be the case, because I I believe very
strongly that, uh, that our concept of free will
is based upon ultimately the activity going on in our brains.
So if we in fact build a system that is
(51:08):
truly simulating that activity, it stands to reason that whatever
entity is created from that would also experience that same
feeling that it possesses free will. Yeah. And even if
you argued, no, I built you so that you can
make me toast. It's you know, that's that doesn't matter.
If someone told me, no, I built you so that
(51:30):
you can make me toast, I would say, I'll
show you why you built me. Exactly. So, hey, we
talked about the future on this podcast. I have to
assume that Minsky, given all his thoughts about artificial intelligence,
made some comments somewhere about what he thought the future
would be like. He talked about the future. And
also, months before his death, there were a
(51:53):
few different interviews where people were asking him his opinions
on the current state of artificial intelligence. And I think
those answers are really interesting too. But uh, you know
the New Yorker piece that that you dug up from
nineteen eighty one, it was actually called like it was
talking about his vision of the future. Although spoiler alert,
(52:13):
if you read the whole thing, there isn't a lot
in there about that. It's mostly about a profile of him. Yeah,
though it is a really good one. It's fascinating. It's
just that the headline might be a tad misleading, but uh,
or or it could be more of like a broad
approach like you know, he he saw this stuff happening
well before it became reality. In fact, his work is
(52:36):
what allowed a lot of the artificial intelligence uh developments
to to take place in the first place. But one
of the things he talked about was he could envision
a future, and remember, this was nineteen eighty one, in which a
relatively small amount of technical improvement in robots would see
(52:57):
automatic factories in space. Dead on. And we do have
robots working in a lot of factories doing automated work,
just not in space. Oh wait, I forgot. You have
not seen the factories on the far side of the moon. Well, Jonathan,
you are in for a pleasant surprise. Yeah. Uh, this
actually reminds me of a terrible movie by the Asylum
(53:19):
I once watched, in which robots were being built in
a space station orbiting the Earth and then sent down
to Earth. And I thought, what a huge waste of resources.
Just build them on the Earth. Build them on Earth
if if that's where they're doing work, why would you
ever build them in space? It's too expensive? Uh So Yeah,
that obviously has not panned out. But we are seeing
(53:40):
a lot more of automated systems in factories around the world,
a lot more robots being employed to the point where
we're seeing like the thoughts about robotic drivers and robotic
drones delivering stuff to us. I mean, it's it's pretty
far along in that respect. Now, where did Minsky come
down on that? We talked about the possibility of machines
developing consciousness? Was he thumbs up to that? Uh,
(54:04):
It's interesting. I would argue that his answers, depending upon
what time of his life you were looking at, Uh,
went a little back and forth. It was always a
little vague to me. But he did say that he
could see a future where machines have minds of their own,
and their minds would be aware of the various parts
(54:25):
that make up those minds, kind of the agents, if
you will, and that what each of those agents would
be capable of doing, and be able to use that
knowledge to solve any problems that the machine would encounter.
It's not necessarily the same thing as consciousness, however. It
may just be, Oh, I have this task that I've
never had to do before that I have got to complete,
(54:47):
but it's similar to all these other things I know
how to do. Therefore, I'm going to employ all of
these agents that help me do those similar tasks to
complete this one. You wouldn't argue that that in itself
is a manifestation of consciousness. I think if it were
like, that rotten so-and-so wants me to do
this thing and didn't even tell me how to do it,
(55:07):
We'll fine, I'll do it, but I'm not gonna be
happy about it. Then he'd be like, Okay, that's a
pretty conscious machine. So I'll show you why you created me, right. So, so,
you know, I would hate to ascribe a position when
I myself, I'm not entirely certain where he fell on
that he may himself have been like, this is a
philosophical question that fascinates me, but I don't know what
(55:29):
the answer is, or I don't know what I feel
the answer will be. Um. So, but you mentioned that
right before his death he had thoughts about where we
stand with AI today. Yeah, they were not um complimentary thoughts. Actually,
you know, he he was very dismissive about certain things,
Like there was a Washington Post piece that was published
(55:50):
shortly after his death, and it contained a lot of
little nuggets about Minsky and what he felt, how he
felt about certain developments, you know, recent developments and artificial intelligence. So,
for example, they asked him about, um, IBM's Watson.
I think a lot of people would argue that IBM's
Watson is a very impressive display of artificial intelligence.
(56:12):
It's not, or at least a very impressive display of
word play. And that's kind of what Minsky would say.
He said he called it an ad hoc question answering machine,
that it wasn't intelligent, it was just a question answering machine. UM,
which is, I think if you would go to maybe
the Chef Watson stuff, where it's starting to try and
(56:33):
invent things based upon other things, it's not doing a
great job, but it's trying, I think it goes beyond
question answering machine. But that was his opinion, and he
also talked about how he felt AI developers were making
a mistake aiming for what he called the top of
the AI problem, So, in other words, trying to create
(56:55):
systems that on their surface appear to be similar to
human thought, but they lack the foundation of what thought
is really all about, and therefore it's just it's kind
of like a chat bot. It's just it's simulating it
enough so that it seems intelligent, but there's nothing underneath
it to actually support that supposition. I guess from his
(57:17):
point of view that the problem might be that it's
uh that it's lacking these agents, right, the society of mind,
the basic agents that populate the society that becomes thinking. Right. So,
in other words, instead of having agents, it's simply uh,
trying to follow a program that mimics the way humans
(57:38):
would respond to situations, but without that underlying you know,
network of agents that are actually making it happen. Uh,
it's kind of like skipping that
foundation in order to just get the result. But that
means that the underlying system is not as robust as
what you would need to have a truly intelligent uh computer.
(58:00):
He also said in an interview with MIT
Technology Review that the last decade of AI was about
quote improving systems that aren't very good end quote. So
he contrasted that with the era of the nineteen sixties,
the early era of artificial intelligence development, when he said
that they were having major breakthroughs on the order of
(58:21):
every couple of days, that he and students would talk
about these ideas and come up with new approaches and
new concepts about thought that would lead to enormous potential
breakthroughs in artificial intelligence. So these days it's every two
or three years you might see a breakthrough. And part
(58:43):
of that, he argued, was that we rely too heavily
on so called experts in AI. He was actually calling
back to the days when he would work with students who,
again, because they don't know what's impossible, end up asking
questions and coming up with ideas without those constraints, and
therefore push forward the discipline much further than people who
(59:06):
have a preconceived idea of what is and isn't a possibility,
that have already placed limitations on themselves that they aren't
aware don't really exist. So it was interesting, I, you know,
I can see his point. Also, there is something
to be said about how,
(59:26):
when a new discipline is created, you would probably expect
advancements in that discipline to be extremely rapid early on,
because there was nothing before. But as you build and
build and build by necessity, usually things slow down. You
just, you know, you've explored, you've picked up
all the, and I hate the phrase, but
(59:47):
you've picked all the low-hanging fruit. Why do you
hate the phrase because it's overused. I worked for consultants
for seven years. I hate low-hanging fruit. Let's come
up with an alternative expression, the easy cheese. All the
easy cheese has been eaten, and it's the difficult-to-get
cheese that is the only cheese that's left.
(01:00:09):
You've picked up the deli counter cheese. Yeah. Yeah,
the really good stuff that's like under a heavy glass
case and it's guarded by wolves. You just haven't been
able to get to that yet. I go to a
lot of weird cheese parties. Alright. So that kind of
wraps up our discussion of Marvin Minsky. Obviously, like you said, Joe,
(01:00:32):
there's so much more we could have talked about. Um,
the guy was absolutely fascinating. He has had an enormous
impact on the discipline of artificial intelligence, and I have
no doubt that impact will continue into the future. Uh.
And if you guys enjoyed this, let us know if
you have other people you would like us to talk about.
(01:00:54):
I mean, I would love to do a
full episode on Ada Lovelace. I think that she was
an absolutely phenomenal person and uh, it would be really
interesting to do a full rundown on on her ideas
and how how much of a pioneer she was. Uh,
but you know other people too, like people who are
still alive would be great too. Like, it doesn't
(01:01:16):
have to be someone from I think they must be dead. Okay, alright,
So if you have an idea of someone you would
like us to profile, living or dead, let us know
send us an email. The address is f W Thinking
at how Stuff Works dot com, or you can always
drop us a line on Twitter or on Facebook. At
Twitter we are f W Thinking. Just search fw Thinking
(01:01:38):
and Facebook's little search bar will pop up. You can
leave us a message there. We read all of them,
and we will talk to you again really soon. For
more on this topic and the future of technology,
visit forward thinking dot com. Brought to you by Toyota.
(01:02:07):
Let's go places,