Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, and welcome to Stuff to Blow
Your Mind. My name is Robert Lamb and I'm Joe McCormick,
and today we're gonna be talking about one of my
favorite comedy subjects, What's funny about the way that machines fail?
(00:24):
And here's just a heads up. This is gonna be
a two part episode because we ended up going super long.
Like the machines, we can't stop. So I want to
start with a particular example, probably my favorite funny thing
on the Internet these days, the hilarious almost successes of
artificial intelligence trying to generate examples of human language almost
(00:47):
but not quite human. Yes, I don't know why this
does it for me, but aside from Highlander II: The Quickening,
pretty much nothing makes me laugh harder than language generated
by artificial neural networks and machine learning. And we'll explain
a little bit more about exactly how this works in
a minute, but first I just thought we should look
at a few examples of what this is like. If
(01:08):
you're on the Internet, you've probably encountered this at some point,
because it's become popular in the past few years, especially
for its comedic value. If you're not on the Internet, boy,
are you in for a treat. Um, by far, I
would say the best of this stuff I've come across
is traceable back to the blog of a person named
Janelle Shane, whose day job, I think, is in
(01:30):
industrial optics research, but as a hobby, she
trains neural networks with, uh, text-based
data sets to spit out these amazing simulations of types
of human language, and so they'll be in categories,
like, she gets an AI program to write recipes
(01:50):
for foods or to come up with the names of
paint colors or something. And the way, of course, you
would do this, and we'll explain more of the
details in a bit, is you'd train it on existing
names of paint colors, or you'd train it on existing
recipes of food. Right. So the end result here is
you essentially have a machine trying to human but not
quite pulling it off, but doing so in
(02:14):
such hilarious fashion. Yeah, so you should absolutely look up
Janelle Shane's blog. It's called AI Weirdness dot com. It's
reliably such a source of joy. But I want to start.
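The core trick here, feed in a list of real examples and get out new ones with the same flavor, can be sketched with something far simpler than a neural network: a character-level Markov chain. This is a hedged illustration, not Shane's actual method (she used recurrent neural networks), and the training names below are made up:

```python
import random
from collections import defaultdict

def train(names, order=2):
    """Count which character follows each `order`-length context."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Sample one new name, character by character."""
    context, out = "^" * order, []
    while len(out) < max_len:
        ch = random.choice(model[context])
        if ch == "$":  # end-of-name marker
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

# Toy training set standing in for a real list of paint-color names.
colors = ["burnt sienna", "navy blue", "forest green",
          "dusty rose", "burnt umber", "sky blue"]
model = train(colors)
print(generate(model))  # e.g. something like "burnt blue": plausible but new
```

Because the model only knows local letter statistics, its output has the right texture but no sense, which is exactly the almost-but-not-quite quality being described.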
In fact, I will say that if you're
scratching your head and you're thinking, oh, didn't I
see something hilarious in this vein recently, there's a
high probability that it originated from AI Weirdness
(02:35):
dot com. Yes, that blog is just awesome, but
I wanted to look at a few examples of what
this is like. So one thing is, consider the work,
uh, that Janelle Shane did training a neural network
to come up with names for D and D spells.
So you take the Dungeons and Dragons manual and you'd
feed in all the actual names of spells to the
neural network, so it gets a sense of what these
(02:56):
things are like, and then it tries to come up
with similar types of names on its own. Now, to
give everybody just a quick idea first of what actual
Dungeons and Dragons spells are named. You have everything from
Magic Missile or Crown of Madness to Evard's Black Tentacles
or anytime there's a wizard name in there, you know
you're in for some good stuff. Um, well, there's you know,
(03:19):
stuff like Glyph of Warding or, what's the
one I'm trying to think of? Oh, stuff like
Leomund's Secret Chest or Leomund's Tiny Hut. That's another great one. Well,
so here are the ones that it came up with.
How about selections like Mister of Light, Confusing, Storm of
the Giffling, Song of Goom, Song of the Darn, Ward
(03:44):
of Snay, To the Pooda Beast, Primal Rear. You've got
to watch out for Primal Rear. Summon Storm Bear. Now
that one sounds legitimate, because one of the beauties
of these exercises is when there's one that either
almost works or actually does work. Because I think, with
Summon Storm Bear, I can easily imagine, I
(04:04):
can describe the Storm Bear blasting out of the portal
and rushing into combat on behalf of your, you know,
your storm mage. Yeah, it's almost kind of effortlessly evocative.
A Divine Boom. How about that? That one sounds pretty
good too. Soul of the Bill. Now we're flying
back down the hill. Farc Mate. What about Charm of the Cods,
(04:30):
Death of the Sun. Okay, that would have
to be a high-level spell. But okay, three
more: Greater Flick. Okay, that's just a cantrip there,
that's just a flick, a magical flick of the ear.
Curse Clam. I like it. Daving Fire's Confusing. Now, while
these phrases I think are mostly funny on their own,
(04:52):
I think they're probably even funnier if you're an actual
D and D player, because you not only get the
pleasure of the nonsense words and the, you know, the
syllables that seem out of place in their context, like,
Dave does not really seem to go very well with
some kind of magical fire spell. But if you actually
play D and D you probably also get some humor
(05:12):
from just like seeing the little resonances that these spells
have with actual spells that you would recognize. I now
want to run a gaming session where there's some sort
of warp effect in place where suddenly all the magic
users are forced to use spells from this list,
and they don't know what they're going to exactly do
(05:34):
until they cast them. Even better, what if your character suddenly had amnesia and then
had to act as if they had bios that were
also generated by an artificial neural network? That's right, because
AI weirdness dot Com also has a wonderful piece titled
D and D Character bios now making slightly more sense.
In fact, I would say this post, from, I think,
(05:55):
last week or something, was what we were looking at.
I think this inspired me to want to do this episode.
We should read a couple of these. I'll read the
first one here. Quote. Frick found his old family's fortune
and his curiosity, and he went to a small city
to see if he could find a work in the goldfish.
(06:15):
He heard stories of a goldfish, a goldfish, a sea monster,
and a silver fish, a sea monster, and a ship
that was a ship of exploration. The ship was full
of fish and evil some treasure, but it was not
to be. When Frick found the ship, he rushed back
and found the ship full of treasure and full of fish.
(06:38):
He wanted to be a pirate and fight it. Now,
I like that because there's a lot of
silliness in there, but it does have the basic shape
of a bio, and its weirdness,
the stuff that is more nonsensical, actually feels
suitably magical and fantastic. You know, there's this
(06:58):
fish that's not a fish, that's also a ship that's
full of treasure and fish, and he's going to become
a pirate and fight it, and so I think you could
run with that. This is one of the fascinating things
about neural-net-generated text: it often has
the format of what you're going for correct, it just
doesn't have the sense of it correct. Like, it will
(07:20):
aesthetically and in shape be very much like what you're
looking for, but key words do not make any sense at all.
Then again, another one that I thought was kind of
interesting in what it showed about common themes in
D and D character bios, or I guess maybe fantasy
more generally, was one bio that included the lines, um:
(07:41):
the orc Warlock was captured and killed by a group
of Orcs. He was imprisoned and forced to work for
a giant tome. He was imprisoned and imprisoned for a
while until he was rescued by a group of adventurers.
He was imprisoned and imprisoned for a while until he
was rescued by a group of adventurers who were looking
for a group of adventurers to help eradicate the orcish ferocity. See,
now that's wonderful. I actually really like that when it's
(08:02):
a little, you know, nonsensical. But, uh, first of all,
you know, being an orc is hard. It's
a warlike world for the orc. But then the
idea that he's working for a giant tome, the idea
that there's like a magical book employing him, and the
repetitive imprisonment. Yeah, it's a hard-knock life. You're always
(08:23):
getting imprisoned and then escaping and encountering a band of adventurers. Yeah,
and working for weird magic books. So that one, that
one works, I think. Of course, another great one on
this page is an entry that just seems to devolve
into endless, like, dozens of repetitions of variations of the
phrase big cat. Yes, little did he know. During the
monastery's course of time, when the monastery's training and growth
(08:47):
was complete, his mother told big cat, big cat and
big cat to drive their own path and test big
cat with big cats, Army of big cats, Army of
big cats and big cats and big cats. Big cat
is big cat, big cat, and it goes on, yes, something
for many, many lines. I'm trying to picture big cat. Uh,
though I can't. I don't actually go to, like, tiger
(09:09):
or lion there. I think more of a kind
of mental-power, great-basilisk type of fat house cat.
I think that would work. I also can't help but
think of Cheshire cats, and of course Catbus from
Totoro as being sort of in the vein of big cat,
big cat, big cat. Um. How about a neural network
(09:30):
trained on a corpus of more than forty-three thousand
question-and-answer-style jokes. I'll just read one of them:
Why do you call a pastor a cross the road?
He take the chicken. Yeah. Another great one from AI
Weirdness dot com was a neural network designs Halloween costumes. Okay,
(09:52):
so you just feed it a bunch of Halloween costume names. Yeah,
and this one, this one was tremendous fun. I remember
when it first came out. It just made me laugh so hard.
Highlights include, uh, Sexy Pumpkins, um, Disco Monster, Spartan Gandalf.
I really like that one. Starfleet Shark. Yeah, that's good. Um,
let's see Martian Devil. That was too believable. Panda clam though,
(10:15):
that's pretty good. Shark Cow. That's very interesting given
some of our past discussions. Was it a shark
cow that we were discussing? I believe it was. Oh,
that's Daniel Dennett's thought experiment exposing what he
believes to be flaws in Donald Davidson's Swampman
thought experiment. It was the cow shark, where he says,
(10:38):
is it actually meaningful if you posit a cow shark
to ask whether it's a cow or a shark? A
little accidental thought experiment here from this list. Uh, you'll
find less thought experimentation in such entries, though, as Snape
Scarecrow, or how about Lady Garbage.
(11:02):
Lady Garbage is good. So yeah, there are a
lot of great, great entries on that list. Okay, one
last thing from Janelle Shane's work, I just must mention
some titles for recipes that she wrote a script
to come up with. Um, these include chocolate pickle sauce,
whole chicken cookies, salmon beef style chicken bottom, artichoke gelatin dogs,
(11:28):
and crock pot cold water. Were any of these attempted,
these recipes? Well, actually, at one point, and those
were just names, I don't have these selected,
but she did have it, based on a corpus of
recipes, generate new recipes, which are just nightmares, you know,
like huge lists of ingredients that would be like, you know,
(11:52):
a third cup s'more goals, you know, a cup of horseradish. Uh,
then there was one website, I don't remember who actually
did it, might have been Super Deluxe. Somebody made a
video where they took one of these AI generated recipes
and just literally made it. Now, they had some trouble
because some of the ingredients were not real words. They're
(12:13):
just, you know, like... so they had to, I guess,
substitute something, or they put in, instead of s'more goals,
cocoa powder or something. But they ended up
with these like, uh, pasta shells. I think that had
like chocolate and stuff all over them. But anyway,
I think that's currently one of the best websites
on the internet. Go to AI weirdness dot com if
(12:33):
you want more joy and humor. But, um, I
was wondering, why is this the funniest thing out there
for me right now? What is so inherently funny
about the ways machines create things that are sort of
like real language, just close enough to be in the
zone where they are funny, but at the same time
(12:55):
far enough off that they're totally hilarious. And I
think that there are at least two parts to it, uh, that
make this stuff so golden. One is that there's
something inherently funny about machines trying to behave like humans
and failing. Specifically, it's the ways that they fail, and
we'll definitely explore this more throughout the episode. But
(13:16):
the way that they fail demonstrates a kind of pristine,
oblivious quality of stupidity. It's like a kind of platonic stupidity,
isolated from the ability to appreciate itself. It's like the
funniness of, you know, watching automatic doors repeatedly trying to
close on something that's blocking them. Like, that's not
(13:36):
itself so funny, but you see an inkling of the
same thing there that comes through in these AI generated texts.
But then we sort of mentioned this earlier. It's also
funny because it tends to reveal something interesting about the
human culture game that it's trying to play like it
brings this sort of cold objectivity to phenomena that we
(13:56):
don't necessarily always bring, and it can identify and awkwardly
replicate trends and behaviors that we might fail to notice
in the same way that like you might notice funny
things that are actually present in the names of D
and D spells by watching a computer try and replicate them. Yeah, yeah,
I think these are two strong reasons. Okay,
(14:17):
we're going to take a quick break. We'll be right
back with more. Alright, we're back now with
all things humor, And of course we'll get into this
in this episode as we continue to discuss what humor
is and then why machines in some cases achieve it,
Like, the absurd is funny, like the absurdity is hilarious.
(14:39):
And it seems like we're in a situation
where, a lot of times, with some of these, especially these
neural net situations, we are accidentally creating absurdity engines.
We're creating machines that produce absurdity, and, uh, you know,
what can you say? Like, why
(15:00):
is something that is absurd funny? You know? Because we'll
get into all that in a bit. But but sometimes
I feel like the answer might be it's funny because
it's funny. Well, yeah, I mean, there's definitely an ineffable
quality of humor. That is one of the
reasons there are so many different theories of humor. And, as
I said, we'll explore them more as the episode goes on.
But, um, yeah, it's obviously something that's really hard
(15:24):
to narrow down and put your finger on. It's that
there seem to be all these strange, conflicting, overlapping reasons
we find things funny. But I do feel like this
strain of machine humor, machine failure humor, being one of
the funniest types of humor, uh, is bigger than just
the AI text. Because it got me... I started thinking
(15:46):
about what are some of the funniest scenes in movies
I can think of? And when I tried to think
about that, I can't help but think of the dark
humor of the hyper-violent boardroom scene in RoboCop
with ED-209. To my mind, I would argue, it's one of the,
I mean, it's not for children, this is like a
hyper-violent, horrible, uh, scene, but it's also, in
(16:08):
a morbid way, one of the funniest
scenes, I think, in film history. And so, why
is it so funny? I think it hinges on parallels
between humans and machines in the scene, and the similarities
in the ways they fail. So, brief recap of the
scene is, uh, you know, Ronny Cox plays this, you
know, self-serious, bloviating businessman who's proudly proclaiming, you know,
(16:31):
I've got the technology that's the future of policing. And
he brings out this robot called ED two oh nine
that's got these big guns on it, and they're saying
that it's going to take over the police force and
it's this great new technology. And then they demonstrate it
on a guy and it malfunctions, and uh, it tells
him to drop his weapon in the demonstration, and he does,
and it doesn't seem to notice he has and then
(16:52):
it shoots him like a hundred times, just a ridiculous
amount of times. But it's something about the way
that the people in the room fail in the
same way the machine does. Like, he just plows through
this this horrible, violent encounter, and then afterwards somebody in
the background is like, can we get a paramedic after
this guy has been shot like a hundred times, as if,
(17:15):
like the machine, they're just sort of like carrying on
their, like, rote behaviors without understanding what they're doing or
thinking about them. Yeah. The ED-209 is
such a great design in the original RoboCop because
it has animal qualities to it. It looks
kind of like a bipedal dinosaur. Uh, and yet it's
(17:39):
also smooth and abstract in so many ways that it
looks like a highly designed piece of technology, which
of course it is. It, you know, looks
like a nice piece of stereo equipment, but
then it also lacks any additional details, like it's almost
a silhouette of an animal. Yes, um,
and it growls, it growls as well. But then
(18:01):
also, later, a really funny scene in the movie is
the discovery that this horrible, violent killer robot can
be defeated by stairs. Like, it can't use stairs. It
has these wonderful, like, all-terrain-looking, um, legs that
it walks on, and yet it can't manage stairs. It
falls down them, and then it's like, you know,
a ridiculous upside-down duckling. I mean, it's so hilarious too,
(18:25):
because, I mean, ED-209 is highly
effective in other situations. If it's just filling a guy
with bullets, highly effective. Um, just battling RoboCop, also highly
effective, for the most part. Um. And this falls in
line, I think, pretty well with our experiences of machines
and of AI. We can create highly effective specialists in
(18:47):
many areas of AI and robotics. So you create a
machine that just puts bullets into this guy, uh, you know,
it does a great job. But when it comes to
creating a general AI or a machine that can navigate
the complex natural, or even the human-created, world,
such as the stairs, uh, there's continual challenge there. Like,
(19:07):
that's kind of what a lot of
what's going on in robotics and AI is about. Yeah,
or just recognizing that it's not actually supposed to
shoot the board executive during the demonstration. Exactly. Yeah. But
then also just the idea that it's wonderful at
something and terrible at another thing. That imbalance is often
where we find a lot of hilarity in
(19:28):
other, um, you know, other comedic stories and bits of
fiction, or just situations. Like one of my favorite cut
scenes of all time, from Conan the Barbarian. So there's
a scene in it, and this is the Arnold Schwarzenegger original.
It's like a blooper outtake, a blooper
outtake. I don't think you find it on most of the
special DVDs, but it's available on YouTube as well. But basically,
(19:50):
Conan has just escaped or been released from servitude and
he's running across the, you know, the wasteland, essentially,
while dogs are chasing him in order to
eat his flesh. He manages in the finished film to
scramble up some rocks, and that begins this new story
arc, him discovering this great old sword. But this is
(20:11):
like young, beef-bodied giant Arnold Schwarzenegger, and this
is the Arnold Schwarzenegger that was so
ripped he had to, like, lose muscle so that he
could actually hold the sword correctly. Uh, so there's this
outtake, though, where he's running in chains, the dogs
are chasing him, the movie dogs, and then he's
scrambling up the stones and the dogs catch him and
(20:34):
drag him back down, and he's just screaming and
cursing the whole time. Yeah, I watched it
after you linked it, very, very funny, and you see
the PAs come in to get the dogs off
of him, and he's like, yeah... It's funny because
the idea of Conan the Barbarian, or even just prime
Arnold, failing like this, it's just stark contrast to the actual
(20:58):
or perceived strength of a character and/or individual. And
it's also funny because he wasn't actually mauled to death
by movie dogs. Right, of course, he wasn't actually seriously
harmed in this incident, but he was apparently inconvenienced and
had his pride wounded. Right. And we'll come back
to that idea, the degree to which, you know,
(21:18):
the severity of the outcome, um, comes into play in determining
if something is funny or not. Yeah, I think I
can definitely see what you're talking about that the failures
of technology are especially funny when there are other ways
that the technology is highly advanced or presented as highly advanced,
and as long as, like, nobody of course dies, you know. Um,
(21:41):
but I do wonder, too, if there's a darker
streak in all of this, you know, something that
ties into a deeply rooted human disdain for the other,
especially for other species. There is a quote from C. S.
Lewis's The Lion, the Witch, the Wardrobe and the Four Children.
That's my son's suggested alternate title for it. He's like, okay,
why don't they call it The Lion, the Witch, the Wardrobe
(22:02):
and the Four Children? Like, they're in it too, and they're
not in the title. Seems like a missed opportunity. But anyway,
there's this wonderful quote from it that I find I
found kind of creepy on a recent reread of the book. Quote,
but in general, take my advice: when you meet anything
that is going to be human and isn't yet, or
used to be human once and isn't now, or ought
(22:24):
to be human and isn't, you keep your eyes on
it and feel for your hatchet. Well, that is very creepy. Now,
I think in the context of the book, this is
going to be referring to, like, you know, possessed objects
and stuff, like creepy and magical. Basically, yeah, basically, the
message here was: talking animals are cool, you know, you
can hang out with them in Narnia. But there are
(22:46):
other things in Narnia that are dangerous, and you can
tell if they're dangerous based on how human they seem,
are trying to be, or used to be. Well,
that's one thing in a fantasy context. In
a science fiction or even just a real
technology context, that's a different thing entirely, and starts
making you think about, uh, well, you know, fear of
advancing technology mimicking human behavior, fears of AI. Yeah, yeah, yeah.
(23:12):
To what extent do we delight in the falls and
errors of inhuman entities because we don't wish to see
them succeed? You know, we celebrate the telltale
signs of their otherness because we kind of dread the
day when there will be no way to tell. And
I think there's a strong argument to
be made that that day will be here sooner than
we think, and in many respects it already is. We've
(23:34):
talked about, for instance, robocalls on the show before, um,
and also chatbots, and
it becomes frightening when you look at where we are
with the technology now. Um, certainly, you get a robocall,
it's hopefully not going to, um, deceive you
long term. But I think a lot of us are
having that experience where you pick one of these up
(23:56):
and at first you think it is a human you
are talking to, and then you realize that it is not. Well,
I think one of the funny things about stuff like chatbots,
which also deal in language but can be much, much
more convincing than these, uh, these neural networks that generate,
you know, lists of character bios or something,
is that the things that are generally more convincing these
(24:19):
days are programmed, I think, with more explicit rules. They
tend to have more kind of human meddling in exactly
what they're going to be doing, and by having less
freedom to be creative and all that, and ultimately having
less potential, they can actually be more kind of narrowly
convincing. I think one of the things that's interesting
about the neural-network-generated text is that it's
(24:41):
not anywhere close yet. You know, you can't really use
it yet to make things where you go, like, yeah,
that's definitely real human. I mean, maybe you can in
some, again, narrow scenarios. But, like, for instance, if
you have something that will tell you what your Wu-Tang
Clan name will be. You know, the sort of
random generation, uh, systems, which are a totally different thing. Uh.
(25:03):
And we'll get into the distinction here. But, you know,
systems like that, just through sheer random, um, matching of words,
those can be effective. Yeah, but it doesn't mean that
it's... you know, it's an entirely different kettle of fish
in terms of what's going on with neural networks
and where they seem to be going. Well, maybe we
should take a quick break and then we come back.
(25:23):
We can explain just the basics of how this these
kind of things actually work. Alright, we're back. So I'm
gonna try to do the simple version of how a
neural network works, because if you get in the weeds,
obviously neural networks become extremely complicated. I know, I
spent a lot of time deep in a bunch of
articles trying to understand technical details that I'm not actually
(25:44):
gonna end up using here. Um. So the simple version
is think of a neural network as a machine that
transforms values. That's it. You know, it has values that
come in, like number variables, and then it puts out
values at the end. It's like it's kind of like
the toaster conveyor belt at Quiznos or one of
those sandwich shops. You know, your untoasted sandwich comes in,
(26:07):
toasted sandwich comes out, if everything goes according to plan. Now,
if it doesn't go according to plan, maybe it spits
out something that's on fire or something, you know,
who knows what goes on in there, or nothing comes
out because the bagels are building up inside of it,
right, exactly. But a neural network's core job is to
just accept inputs and produce outputs. Okay. An example would
(26:27):
be image recognition. There are neural networks designed to look at
an image, a digital image, and come up with a
text string that says this is what that is. So
look at a picture of a dog and say that's
a dog. And you've probably seen examples of this on
the web. So you'd have numerical values going in. It
might be things like, you know, a field of
pixels with numerical values for their color and placement, and
then it would have an output that's, uh, say, a
string of text, which would actually be like a ranking,
like the top-ranked string of text that matches with
like the top ranked string of text that matches with
those pixels in that configuration. But so the question is
what's going on inside the machine? How does it turn
that input into the correct matching output, and then and
then of course, how does it fail? Because I've also
(27:12):
found this tremendously amusing. Tumblr, um, recently changed their guidelines
about what's acceptable content and what's not, and Stuff to
Blow Your Mind has, slash had, a Tumblr account,
recently just rebranded as the Transgenesis Tumblr account, but
it had a lot of old Stuff to Blow Your
Mind content on there. And suddenly I got a page
(27:32):
of all the things that had been flagged for
potentially violating the new terms. And it was amusing because
some of it was like, okay, well that has a
classical painting on it. It's got, like, you know,
something that might look like nudity, and
so it got flagged, makes sense, or the topic is
something that is a little too sketchy for them, and
they're like, okay, that's been flagged by the machine. But
(27:54):
the most hilarious one was a picture of a baby
bat on somebody's palm, and that was flagged
as likely inappropriate. And so I was just
trying to figure that one out, Like what is it
about a baby bat that it? I mean, did it
think this was genitalia or like, like what because because
I know it's somehow clicked off a number of boxes
(28:16):
and when and then the automated result was no baby
bats on tumbler. Well, I guess I don't know if
you know, but I don't know what the mechanism for
identifying that is. It might be something like this, but
yeah that I love seeing that kind of failure. And
notice that we are laughing now like it or we
were laughing. It is funny that it looks at that
and says, I don't know about this bad I think
this might be porno. Yeah, I mean then the stakes
(28:38):
are pretty low. Ultimately, one picture of a baby
bat no longer on a Tumblr page that we don't
really use doesn't really affect me personally, but you
could see where this could lead, uh, to far
worse problems if given the right scenario. Okay, so
back to the neural network. So you've got this machine.
Inside the machine, values are being transformed. You have inputs
and outputs. And so, on the inside, a neural
network consists of layers of, sort of, stations of value
transformation that are referred to as neurons. And each
neuron essentially accepts a series of numerical values as input,
and then it just performs some kind of mathematical transformation
of those values based on what's known as the weights
(29:22):
of its connections with the sources it received the inputs from.
So you've got these interlinking sources of information, inputs and
outputs throughout, and each neuron takes inputs, sums them, does
some transformation, and produces an output. So for each neuron,
you've got a bunch of numbers coming from different sources.
Each one gets treated with a certain bias based on
(29:42):
where it came from, and then the neuron spits out
a new value. And these neurons exist in layers, so
there are these waves of inputs, say, the pixels in an
image, getting passed and transformed through one layer after another
of these neurons, until finally the network produces numbers that
constitute its final outputs. In this case, this would be
(30:03):
something like its top guesses at the word string that
describes the image you put in at the beginning. Now
you'll see from this that the value of a neural network
depends completely on how well those connections between neurons are
weighted to produce the correct results. If they just have
random weights, then the network will just produce random output.
It won't be any better than making up numbers at random.
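To make that concrete, here is a minimal sketch of the forward pass just described: each neuron takes a weighted sum of its inputs and passes the result through a squashing function, layer by layer. The weights below are arbitrary placeholders, not a trained network:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs, plus a squashing function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Tiny network: 2 inputs -> 3-neuron hidden layer -> 1-neuron output.
# These weights are made up; training is what would make them useful.
hidden = layer([0.5, 0.9],
               [[0.1, 0.8], [0.4, -0.6], [0.7, 0.2]],
               [0.0, 0.1, -0.2])
output = layer(hidden, [[0.3, -0.5, 0.9]], [0.0])
print(output)  # a single number between 0 and 1
```

With random weights like these, the output is meaningless, which is exactly the point made above: the structure alone does nothing until the weights are tuned.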
(30:24):
So the network has to be calibrated or trained somehow
to produce outputs that are correct, and there are multiple
ways to do this. Uh, it of course could be
programmed to some degree by hand, right? You could have
a programmer explicitly, uh, going in and tinkering with
weighting rules to try to get the outcomes to be better.
But it can also be trained through machine learning,
(30:45):
which is a process where inputs are already associated with
correct outputs. Like, you've already got a text string associated
with the image that you put in, and you say,
this is what you should say when it comes out.
And each time it runs the process, it checks to
see how far off from the correct known output it
was, and then tries to change the internal weighting
(31:06):
to get closer to the correct answer. And of course,
with automatic machine learning, you can do this at scale.
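The loop just described, guess, measure the error against the known correct output, and nudge the weights, can be sketched on a single linear neuron. This is a hedged toy illustration of the idea (real frameworks do the same thing across millions of weights automatically):

```python
# Train one linear neuron, y = w*x + b, on known input/output pairs,
# nudging the weights toward the correct answers on every pass.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # secretly y = 2x + 1
w, b, rate = 0.0, 0.0, 0.1

for _ in range(2000):              # "thousands of times"
    for x, target in data:
        guess = w * x + b
        error = guess - target     # how far off were we?
        w -= rate * error * x      # adjust each weight a little...
        b -= rate * error          # ...in the direction that helps

print(round(w, 2), round(b, 2))    # approaches 2.0 and 1.0
```

After enough passes, the weights settle on the rule hidden in the examples, which is the "calibration" being described.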
You can do it thousands of times. You could potentially
do it millions of times just training over and over,
and you might be able to see a parallel here
with one of the ways that we actually learn. Uh,
you know, we we learned in multiple ways. Sometimes we
learned by being taught explicit rules to follow, like if
(31:27):
we're learning in school what an insect is, we might
learn that an insect is a small animal with an
exoskeleton that has six legs. Or sometimes, on the other hand,
we learn to generalize from particulars. We might see
pictures of lots of animals and notice that
the ones that are called insects all happen to have
six legs and exoskeletons, and therefore we derive this
(31:51):
category called insect from that survey. And in logic, of course,
this this process where we come up with general rules
from lots of individual examples, is known as induction. So
machine learning to train neural networks is kind of
like allowing computers to learn categories by induction, kind of
like we do when we just go out and look
at the world and see what we find. But one
(32:12):
of the things that really sets humans apart from computers
is that humans seem to have this amazing, remarkable ability
to generalize from particulars. We can often get the gist
of a category from just a tiny handful of examples.
You know, when you're giving somebody examples of something to like,
give them the gist of what you're talking about. You
(32:34):
don't usually need to list a million examples. You can
list two or three maybe, or sometimes even just one. Yeah.
This gets into the idea of judging a book by
its cover, right? You're not supposed to, but we often do,
and sometimes you can, if you pick up on,
you know, to use the book example, specific
things about the design or the era of the cover. Yeah. Yeah,
(32:57):
And in fact, sometimes you can you can judge things
about the contents of a book just by knowing certain
things about how certain types of books end up with
certain types of covers. Like you might think I tend
to like books that have hand drawn illustrations on the
cover more than I like books that have sort of
like CGI stock image cover photos, which means
(33:21):
you probably like books from like, you know, at least
the nineteen eighties and before. Yeah, because it seems like
we have far fewer hand drawn, uh you know, covers
on books these days. Yeah, why are people putting stock
photos on the covers of books? I do not get it.
But you you didn't need to read a million books
to come to that conclusion. You could probably come to
(33:42):
that conclusion after reading, I don't know, three books. Like,
really, we get the gist of things really fast. And
that's in contrast to computers, which really really don't at all.
This is one of those strange and amazing things about neuroscience,
about the human brain. How do we solve such a
difficult problem as generalizing from particulars with so few examples
(34:04):
to draw from? And of course, another example of the
generalizing power of the human brain is in language, Like
we've been talking about it, like, how is it that
most of the time kids learn how to speak a
language without being taught explicit rules of syntax and grammar
and the definitions and usages of all the common words,
and without hearing billions and billions of examples of sentences.
(34:26):
They just pick it up. Well, there are
some specific answers to that. We've talked about that
on the show before. Yeah, well, I mean I think
that there there is a good case to be made
that the human brain is specially geared towards language acquisition
in childhood. That's sort of like one of our species' superpowers.
And then those windows close, or they don't completely close,
(34:47):
but they the windows become much smaller later on in life.
You know. Speaking of children, they are, you know,
also frequently a font of weirdness and beauty as
they too are learning to function
in the adult human world, and then they
say and do things that sometimes hit a weird zone
(35:09):
that is either hilarious or sometimes a little frightening, or
or even a little bit elegant. Yes, I know exactly
what you mean. Like, kids often produce the same funny
outputs based on induction that these machine learning algorithms do.
Like you can see it's funny in the same way
that they might say something that's a little bit off
(35:29):
and kind of absurd, but you can sort of understand
the rules that got them there, right. Like one example
I always refer to is from years and years ago.
I went to a children's puppet show before I had
a kid; my wife and I went to check
this out because it was actors from a local improv group,
Dad's Garage in Atlanta. They put on the show Uncle Grandpa's
(35:52):
Hoo-Dilly Storytime, I think it was, and
so you had these sort of seasoned, uh you know,
improv vets, and they were doing a puppet show for kids,
and they're taking ideas from the audience and they said, like,
who should our main character be? And so they hand
the mic to some little little girl in the front
row and she says Batman the Girl, which is
so hilarious and I don't think an adult would be
(36:14):
able to come up with that, but you can sort
of tease it apart and figure out how she got
there, you know. But it's
just one of, you know, many examples, I'm sure,
that anyone with children in their life can
turn to, where they come up with something that is just
so goofy or weird or sometimes terrifying. Well, I
think the real funniness and pleasure in that is that
(36:36):
it's Batman the Girl and not Batgirl. Right, Yeah,
like it's almost there, but by not being there,
it's also even better. It's
not Batgirl, it's Batman the Girl. It's just
so nonsensical and beautiful at the same time. You know.
Another one of the great AI text generation experiments that
(37:00):
Janelle Shane did on her blog was generating,
you know, those Valentine's Day candy hearts. Yes, yes.
She had a program sample those and then try to
come up with examples, and it ended up saying things, I
think, like sweat poo and hole and time hug.
Time hug sounds good. Time hug, like Timecop. Yeah,
(37:22):
but it also sounds like something that like it might
be a term that aliens come up with for human love.
It's like they engage not in a hug, but in
a time hug. It is as if they are hugging
for the rest of their lives, you know, or something
like that. Another one: all hover. And then finally, bog love. Um.
My son, who as of this recording is six,
(37:45):
almost seven, drew a picture for my wife and
me for a Valentine's Day card. He did it at school,
and it depicted dinosaurs, as these are wont to do, um,
and there are some herbivores walking about, there are
some carnivores eating the flesh of fallen dinosaurs, and then,
(38:05):
as is typical in paleo art, there may be a
volcano in the background. But then there is a meteor
coming in hard and fast. Yeah, and he writes, uh,
I love you on the on the meteor, which which
is absolutely wonderful because it's like at once it is
like like this is beautiful, Like he totally means all
(38:26):
of this, and it's like the most beautiful Valentine I've
ever received, and yet to put it on the
instrument of the extinction event. It's just
so weird, and it's accidentally brilliant, you know. Yeah,
I saw that when you put it on the internet.
It was the sweetest thing I've ever seen. It was
so good, and it's like, yeah, this is that. This
(38:47):
is how love works. Love is a destroyer
as well as the creator. Now, we do want to
stress, especially with these text-based situations,
we're not talking about mere random mashups of text,
such as, like, the Wu-Tang Clan name generator, or,
a more literary example would be the cut-up
technique popularized by author William S. Burroughs, where you just
(39:11):
have, like, a random mechanism in play to sort
of splice words or sentences together to get something
that may have sort of accidental meaning to it. No,
the neural net programs are algorithms attempting as
best they can to approximate the quality of the
(39:33):
input texts, the text they're trained on. So they're doing
their best to make something like this. Uh. And then,
of course, I mean one of the things is you
might say, well, why don't they just perfectly spit back
out the text you've trained them on. In fact, if
you don't tell them not to, they'll do that. You know,
they'll just spit back out exactly what you fed in.
You have to sort of like change some values and
(39:55):
tinker with it to prevent what's known as overfitting in
this world. Uh, to sort of force the algorithms to
be more creative and try to come up with new
versions of the kind of thing they've seen instead of
just completely copying what you fed them. Yeah, because we
want these machines to rule the world, not just Hollywood, right?
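One common knob for that tinkering is a sampling "temperature": rather than always taking the network's single most likely next choice, which trends back toward the memorized training text, you sample from its whole probability distribution, sharpened or flattened by the temperature. A small sketch, assuming raw scores (logits) from some hypothetical model rather than any specific library:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick an index from raw scores. Low temperature is nearly
    greedy (safe, repetitive); higher temperature gives the weirder,
    more 'creative' picks this episode is all about."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random() * total
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e
        if r < cumulative:
            return i
    return len(logits) - 1

# At a very low temperature, the top-scoring option wins almost always:
random.seed(0)
picks = [sample_with_temperature([2.0, 0.5, 0.1], temperature=0.1) for _ in range(100)]
print(picks.count(0))  # nearly all 100
```

Raising the temperature spreads the samples across the lower-scoring options, which is what keeps the output from being a straight copy of the training data.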
(40:16):
But you know, we also have to distinguish it.
We're also not talking about fake AI-generated text,
which can certainly be tremendously entertaining as well. Yes, that's
such a great genre, people pretending to be neural networks
creating machine-learning-generated text, which is such an amazing
(40:37):
reversal of principles that humans have intuitively detected what's funny
about machine learning generated text and then made fake human
designed versions of it to exploit that inherent humor. Right,
I don't know how you get deeper into
the irony pit than that. Yeah, there's a wonderful two
thousand eighteen tweet by comedian Keaton Patti, and this was
(40:58):
the tweet, quote: I forced a bot to watch over
a thousand hours of Olive Garden commercials and then asked
it to write an Olive Garden commercial of its own.
Here is the first page. And then
he proceeded to include the images of this text, this
script rather, for an Olive Garden commercial. And it's
(41:18):
filled with hilarious nonsense like I shall eat Italian citizens
and unlimited stick, playing, seemingly, you know,
upon the whole catchphrase of, what is it, when you're here, you're home?
I think it's when you're here, you're family.
There's one from this fake
script that says leave without me, I'm home, which I
(41:40):
just I love. I remember laughing so hard at this
when it came out as well. I love that the
waitress says lasagna wings with extra Italy. Yeah. There's also
like really funny stage directions in it. Yeah, well, oh yeah,
you mentioned the bit, uh, where it says, like, the
gluten classico, we believe the waitress, that
she is from the kitchen. We have no reason not
to believe. Now, of course, this is just a comedian
to believe. Now, of course, this is just a comedian
doing this thing, trying to pretend to be an AI. Um.
But I was reading an article where they quoted Janelle Shane,
the author of the AI Weirdness blog, who trains all
these neural networks to come up with all this funny
stuff we were talking about at the beginning of the episode,
And you know, she talks about how there are ways
to notice when something was probably written by a
(42:24):
human instead of by an actual AI. Both can be
absurd in similar funny ways. But one of the problems
with this script, in passing as a real AI-generated
text, is that it's actually too coherent.
Its memory is too long. It remembers characters from many
lines earlier and still has them appearing and saying things.
(42:46):
Actual AI-generated texts have a much shorter memory. They're
not consistent in that way. Actually,
they make even less sense than the fake Olive
Garden commercial. So she's saying, don't reach for your matches
on this one, because a real neural net generated
text only mimics forms, it doesn't mimic meaning. And this
(43:07):
thing, it means too much. It's too clever. Okay,
So we're gonna go ahead and close out this episode now,
but again there's going to be a part two where
we continue this discussion and really get more into the
meat of the topic. In the meantime, check out Stuff
to Blow your Mind dot com. That's the mothership. That's
where we'll find all the episodes of the show. There's
also a little button at the top you can click
(43:28):
on that goes to our t-shirt store to get t-shirts
and stickers with a number of cool designs, designs that
line up with certain show topics, like the great basilisk
or various squirrel episodes, as well as just,
you know, show logo material, and if
you want to support the show, the best thing that
you can do is rate and review stuff to blow
(43:48):
your mind wherever you have the power to do so.
Wherever you get this podcast, give us some stars, give
us a nice review. It really helps us out in
our war against the almighty algorithms. Big thanks as always
to our excellent audio producers Alex Williams and Tari Harrison.
If you would like to get in touch with us
directly with feedback about this episode or any other, to
suggest a topic for the future, or just to say hi,
(44:10):
you can email us at blow the mind at how
stuff works dot com. For more on this and thousands
of other topics, visit how stuff works dot com.