
June 8, 2018 25 mins

On this week's episode, we speak with HowStuffWorks' host Anney Reese about the relationships between humans and machines in science fiction. We also visit Dr. Mark Riedl at Georgia Tech to learn more about the future of artificial intelligence.

The AI Revolution: The Road to Super-intelligence Article: bit.ly/2ty9KHI


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, here's the cool thing. Yeah, people were really
into the question, what's the difference between a human and
a machine? And they left us a lot of great comments.
Oh nice, like what? Nick said that humans are freethinking
creators and machines are just one of the things that
humans create. Mike said, with the advances in science and
technology that we're seeing, machines might be able to become

(00:21):
more human than some humans. And Matt said, one is
made of metal, one is made of meat, not really
a difference past that. That last one is a wild visual,
but all of those are great comments. Yeah, it definitely
sparked a conversation online and around the HowStuffWorks office,
and we've been thinking a lot about it over the
last week. So we actually have two more interviews to

(00:43):
share with you about humans and machines today. Mm.
Welcome to the Question Booth. I'm Dylan Fagan and I'm
Kathleen Quillian. The Question Booth is a place where strangers
come and answer a big question. We also talk to
people who know a lot about each week's subject to
get more insight on why people might feel a certain way,

(01:04):
and later on in the show we'll have an interview
with Dr. Mark Riedl, an associate professor at Georgia Tech's
College of Computing. But first we sat down with HowStuffWorks'
very own Anney Reese this week and also
asked her our big question. You may know Anney as
a co-host of the podcasts FoodStuff and Stuff
Mom Never Told You, and since this question sparked so
much conversation, we wanted to know what a sci-fi

(01:25):
lover like Anney thought about it. Mm, have either of
you read The Ghost in the Machine? So I guess
the answer would be a soul, right? If humans
have a soul? And The Ghost in the Machine
describes this thing that has been observed where machines will,
if you leave them alone, if you've abandoned them and

(01:46):
they can move, they will move into a corner together.
And no one's entirely sure why. It's really interesting, um,
and it's just this idea of like, what if machines
have something sort of similar to a soul? What if

(02:06):
a soul, we've always kind of thought of it as
this sort of like ghost-like thing that, once we die,
is somewhere in the atmosphere. But what if it's more
like experiences and energy that have defined you and shaped
your body, your cellular makeup, and it's almost more

(02:26):
of a drive to be with someone, to share those
experiences with someone. And in that case, if machines are
doing something similar, I mean, that is really interesting. It's
it's worth pondering for sure. There's also there's a definition
of what it is to be human and I can't
remember what it is, but it's something along the lines

(02:47):
of like feelings and can you feel fear our machines
afraid of dying, of not existing anymore? And that is
how like I used to watch a lot of um
Star Trek and Stargate, and so when they'd go to
a like alien planet, they're trying to decide some moral quandary,

(03:07):
is it okay to let these creatures die? They would
ask themselves these questions like what is it to be human?
And it's also along the lines of, like, what's cruel?
If there are these criteria and you ask yourself, well, they
don't feel this, they don't feel this, they don't feel this.
It's kind of like a standard set of questions that
we've been going by. Our other question was

(03:30):
can you ever love a machine? Or do you think
it's possible to love a machine? And the idea of
what's the difference between human and machine when machines banded
together for their own freedom, like in The Ghost in the
Machine, or Westworld? Are you talking about Westworld? I
was talking about it in Solo. I just haven't personally seen

(03:50):
Westworld other than the movie. I guess from what
I know, that's a big theme in Westworld. Oh yes,
oh yes. Um, I think you can have a relationship
with a machine personally, and I do think that we
just keep improving upon technology so much, and since machines
can be adaptive, I mean, I don't know if you
guys saw that thing where Google can now make

(04:13):
appointments for you? So what you're going to hear is
the Google Assistant actually calling a real salon to schedule
the appointment for you. Let's listen. Hi,
I'm calling to book a woman's haircut for a client. Um,
I'm looking for something on May third. So what can

(04:36):
we make of that? Mm hmm. I think the technology is
moving that way that we could have machines that if
they are not experiencing emotions, they can at least like empathize,
or there will be something in their code where they

(04:56):
can almost understand what those are. And I think that
at that point, absolutely you could fall in love with
with a machine. And we've seen it in so
many sci-fi things like that. The examples that are
coming to my mind are wild. I just watched Blade
Runner, and it's just like a little USB kind of

(05:17):
digital projection, and it learned him so well. And
that's interesting too, because then you're kind of falling in
love with something that you have created, that has
learned you and adapted to you. I think for
sure you could have a relationship with a
robot. I wonder how long-term it could be

(05:39):
if you were falling in love with your own creation.
I do feel like that's easy to do, but is
it truly love? Because in my opinion, and I don't
have factual evidence to back this up, but a lot
about loving someone and being someone's friend and growing close
with someone is not always agreeing with them about everything.

(06:02):
It's sometimes your disagreements and your ability to talk to
each other about your beliefs and your opinions and your
feelings in them not lining up is what makes someone
endearing and makes you love them, you know, And so
I wonder if you can fall in love with your
own creation. At what point, Well, aren't you just falling

(06:25):
in love with yourself? Because we curate all of these
machines to know everything about you, to agree with
everything that you say and what you do and how
you think. So it's like you're literally just falling in
love with a virtual version of yourself. And it's just like,
is that love? Or is that just vanity? Yeah?

(06:46):
Or is that narcissistic? Yeah. Actually, I
think people fall in love with versions of themselves
in real life all the time. Because we were talking
about this on this other show I do. Um, a
lot of people kind of project what they think the
other person will like, and it's almost always a reflection of

(07:09):
the other person and their likes. And then as the
relationship goes forward and the years pass, you
probably both realize you've sort of pretty much
faked being this person, and the relationship probably won't
work anymore. But I think a lot of us have

(07:29):
some type of relationship hang-up in one form or
the other. And so maybe you're afraid of commitment
or you're afraid of being left, so you have trouble
trusting people. But with a machine, in theory, it's never
going to leave you, unless we do reach a point

(07:49):
where they are pretty much human, so you don't have
that fear. And I could see, in the future, a
lot of people choosing that over a relationship with a human
because of the stability. Yeah. We'll have more Question Booth after the break.

(08:18):
Mm hmm, and we're back. Let's hear more from our
interview with Anney. I know that there's been development and
science is moving to find a way to make people
immortal by uploading their consciousness to the cloud. But
then if humans can exist in the cloud, it does

(08:40):
go back to that question of like what is the
soul and what makes a human human? And what is
it that makes you you. I've been thinking about that lately,
just in terms of, if someone had the
exact same DNA as you, but they didn't
have the same experiences as you, are they you? One

(09:02):
of my favorite things to ponder every time it comes
up is, if you have a boat and, over
like thirty years, you replace every single part of the
boat to where no original panels or parts exist, is
that still the boat that you started out with? I
don't think it is. Again, in theory, if we could
reach a point where we don't age the same way

(09:25):
we do now and you just kind of get replacement parts.
But then I guess, if your brain is still there,
that's true and that's integral. Yeah, I got to have
that part. Yeah. A lot of people came in and
ended up talking about their fear surrounding artificial intelligence. And
once again, as someone who watches a lot of sci-fi, I

(09:45):
do feel like there are conflicting views. I mean, there are
conflicting representations of what artificial intelligence will mean when it
reaches a certain point. But I wondered what you thought
about it. I mean, it's hard to think of an
example where artificial intelligence isn't the bad guy, like
the villain in a movie. I'm kind of more relaxed about it.

(10:08):
But I did read, um, an article that I very
much enjoyed. The article is called The AI Revolution:
The Road to Superintelligence, and you can check it out
in our show notes. And it was saying that right
now, this is the moment in history where we're
at the bottom of that curve, so soon we're gonna
see exponential growth. Like, in the next fifty years, things

(10:30):
could be completely different. And they were talking about artificial intelligence,
and the author was trying to say, like, all this
anxiety we have around artificial intelligence,
that it's basically going to destroy humanity,
it wouldn't be because it realized, like, I am superior
and humans are inferior. It would be because somebody coded

(10:53):
it poorly. Just last week, MIT fed an
AI named Norman data from an infamous board on Reddit.
They don't name which one they're pulling from in the
press release due to the graphic content of the subreddit,
but it's one centered around death. They compared Norman's responses
with a standard image-captioning neural network on Rorschach inkblots.
It turns out that Norman's answers insinuate that they may

(11:13):
have made a psychopath AI. But here's Anney with a
fictional example from the article she mentioned. It was a
program that was coded to, I think it was for
like a greeting card, right, the best greeting card for everybody.
But the code was such that the artificial intelligence that

(11:34):
they developed to do this, it infiltrated a network looking
at all these sources to try to find what's the best,
and then it's like, well, it would be the best
if everyone could see it. And the only way everyone
could see it is if I do this. And it
like shut down certain power grids and then somehow like
hijacked rockets, sent them into space and had them like

(11:59):
write out Season's Greetings or something, some message in space,
so everyone could see it. But a lot of people
end up dying, and it took over a lot of systems,
but it was because the code was written poorly. So
there is still anxiety around it if you think about
it that way. But to me it makes me feel

(12:21):
more we should just be very careful, proceed with caution.
But I'm trying to be optimistic, I guess, because I
think we are moving that way pretty quickly. If we
are reaching a point where the lines between being a
human and a machine are getting blurry, do you think
there are benefits to that? I do. I do think
there are benefits, depending on how you define things. Humans

(12:45):
have kind of been augmenting humanity for a long time,
so I do hope so. I do hope that it
frees up a lot of time and maybe it helps
us come to solutions that so far have eluded us.
That is my hope. Okay, so we also went on

(13:12):
a field trip this week. Since so many of our
participants talked about AI when we asked the question, we
wanted to talk to someone who actually works with artificial intelligence. Yes,
so we went down the street to Georgia Tech to
speak with Dr. Mark Riedl. His research focuses on what
he calls the human centeredness of AI. He wants to
find out how humans and computers can interact in more

(13:32):
naturalistic ways and how AI can be used to tell stories.
We started off by asking him our big question, what,
in his opinion is the difference between a human and
a machine? That is actually not easy to answer. Um,
but when I think about kind of the nuts
and bolts, what we have is, um, the human brain

(13:53):
being something that has been evolving over many, many millennia,
very specialized to deal with the human environment, and
a computer being something that has to be programmed in
order to process certain inputs and outputs. Now, I think
one of the interesting things that is probably coming up
a lot is this notion of artificial intelligence and machine

(14:14):
learning in particular. Let's start with, um, what artificial intelligence
is. Artificial intelligence is simply, uh, the act of creating
computer programs that do information processing that we think is
typically something that only humans can do, so something that's
challenging and non-trivial. This could be anything from driving

(14:34):
a car to um, you know, searching the web or
processing millions of web pages, you know, just about anything
that we think of as being non trivial for humans.
We can think about a computer program trying to do
that that would be considered artificial intelligence. In the last
few years, we've been more interested in something called machine learning,
which is a type of artificial intelligence. But what machine

(14:56):
learning basically boils down to is a computer system
that learns from data. Uh, so machine learning is just
patterns, right, finding patterns, acting on them. A particular algorithm
that um a lot of people have found to be
very successful at finding these patterns is something called a
deep neural net. Now this is I think where some

(15:16):
of the confusion about human brains and human minds and
computer programs come in, because we called these things neural nets.
And human brains are made of vast networks of physical
cells called neurons. The neural nets are in fact kind
of weakly inspired by the human brain and the idea
that there's this network of connections inside these algorithms. But

(15:39):
if we're to look one level deeper, what we find
is that these neural networks seem to work more like
electronic circuits than human brains, in that they're laid out in
nice layers. Um, they're fully connected, meaning that there are certain
network properties that do not appear in the human mind.

(15:59):
Human cells are much more complicated. And I could
go on, but basically, what we've done is we've taken
something that we know happens in the human brain, we've
simplified it down, we've written nice mathematical formulas that don't
exist in the human brain, and we've found that in
very limited contexts, uh, these neural networks do really well.
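Those "nice mathematical formulas" can be sketched in a few lines of plain Python. This is an illustrative toy, not code from the episode; the weights, biases, and inputs are made up:

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum of all
    inputs plus a bias, squashed by a sigmoid. Just arithmetic --
    nothing brain-like happens here."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
    return outputs

# A tiny two-layer "network": every unit in one layer connects to every
# unit in the next, laid out in stages like an electronic circuit.
hidden = dense_layer([0.5, -1.0],
                     weights=[[0.1, 0.8], [-0.4, 0.2]],
                     biases=[0.0, 0.1])
result = dense_layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
```

Stacking layers like this and nudging the weights to fit data is, at bottom, what a deep neural net does; nothing in the arithmetic resembles a biological neuron.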

(16:21):
The human brain is a single thing that can do
lots and lots of different things. Um, we haven't quite
mastered how to make our little algorithmic approximations of the
human mind do anything close to as complicated as the
broad scope of human everyday activity in life. It's interesting
because one of the participants said, aren't we similar to

(16:42):
a machine? Because, in a way, DNA is just a very
intricate form of code. But obviously, as you said, it's
a very intricate form of code. Do you think we'll
ever come close to that kind of technology, to where
we almost will be able to mimic that? So there's
nothing in theory that says we can't get to that
level of sophistication, but I do, kind of from my perspective,

(17:05):
I want to say that, even though we have
these impressive AI systems doing amazing
things like driving cars, um, the level of sophistication of
the human mind is orders and orders of magnitude beyond it,
as amazing as the things we see in artificial intelligence are.
We really kind of take baby steps every single year,
and we talk about this explosion of interest and excitement

(17:27):
about, um, machine learning and neural nets, but it really
is still kind of scratching the surface, and it's not
really clear how we get from what computers can do
today to the vast complexity of the human mind. So
if we want to double the intelligence of a computer system,
we have to exponentially grow the amount of computing power.

(17:48):
So right now we don't know how to make the
physical hardware, uh, scale, and in many ways it's not
the software. In theory, um, we can say that there's
no limit as to what software can do, but the hardware
is something that is holding us back. We'll have more
with Dr. Riedl after one more quick break.

(18:17):
and we're back. Thanks for joining us. Also comparing human
and machine, some participants said that a machine can't have
a moral compass or you know, really know what's right
from wrong, because those things are built off experiences. And
like you said, the human mind is so complicated and
what we go through is so vast. To put that

(18:38):
into a machine or code it is very hard. Um.
I thought that was interesting because that was definitely a
constant pattern that someone was like, well, this machine doesn't
have these life experiences, Like how will you build up
these years and years of experiences? You know, that's a
really insightful comment by some of your guests, because, um,

(18:59):
they're right that experience is an important part of um
living in the human world. Right, So we've constructed social constructs,
social norms, cultural norms that allow us to interact with
each other kind of very fluidly, seamlessly, and to avoid
conflict with each other. Humans learn this because, you know,
from day one we're living in a social world and

(19:21):
we're mimicking and we're learning from our parents and our
teachers and and from our peers how to interact with
each other successfully. And computers know nothing when you turn
them on. These patterns of society, they're very complicated. How
do you do that without taking an AI system and
training it from, you know, a baby to a teenager, right,

(19:42):
which takes, you know, dozens of years. Okay, this did
come up from some of our participants. They said that
they are fearful for the future we're headed toward
with AI. I mean, how our technology is progressing and
how it's portrayed in the news, they do instill some fear.
So so it's you know, it's the first thing that
we go to when we try to understand what this

(20:03):
thing is. But what's happening here is that when we
don't understand things that look intelligent, or that solve interesting,
hard problems, or speak to us the way Siri does,
the first thing we want to do is anthropomorphize it. Right,
so we see this thing solving a really hard problem. Oh,
this AI can drive a car. I can drive a car,

(20:26):
I can do other things. Can this AI do these
other things as well, or is it one step away
from that? Um So, our self driving car is not
thinking about poetry when it's sitting in our parking lot.
You know, it's not plotting against us. It literally cannot
think about anything other than getting us from point A
to point B as safely as possible. And the same

(20:46):
thing with Siri or machine translation. They do one thing,
they do one thing very well. It may do those
things better than us, right, but they're very constrained in
what they're able to do, and there is not really
any serious kind of effort or we really don't understand
how to make these things broader to do more than
one thing at one time. So they're very, very laser

(21:09):
focused on the things that they do. Um, so this
anthropomorphization is actually the wrong lens to look at AI. Right,
we shouldn't think, because it can do something better than
me in this topic, that it can do other things
better than me in other topics. And there are things
that humans are just vastly superior at, like social interactions
and language and lots of things like that, that

(21:32):
we don't even have any idea how to crack. If
I have concerns about artificial intelligence, it's not about
the rise of the robots or the singularity, which I don't
believe are going to happen, at least not in my lifetime,
if at all. Uh. What I am concerned about is
a rush to deploy artificial intelligence before we um have

(21:53):
a full understanding of all the ways these systems can fail.
So we've seen lots of self driving cars crash for
really kind of strange reasons, right. Um, we see random
sorts of things happen on machine translation systems, where it
takes a sentence that seems really easy and spits out
some really strange things. We see bias in our data

(22:16):
sets that cause our AI systems to kind of say
incendiary things. It's not intentional maliciousness, right, it
just doesn't know. It picked up a pattern and is
applying that pattern. Sometimes the patterns get applied to the
wrong problems or the wrong situations. And these failures, um,
are the things that worry me because they can put

(22:38):
people in harm's way, or they can get people in
trouble or unintentionally insult people or or say things that
are hurtful. And those are the sorts of things that
I think we need to work on right now while
we're also trying to kind of figure out how to
make these things useful to society. This is this is

(23:01):
how you should address kind of the things that you
see out there. When someone says AI has done a
new thing, right, there is no one singular AI,
I'm doing hand quotes here, finger quotes. Um, what you
have is, you know, lots of little algorithms running that
are very specialized, uh, to very specific tasks. And at

(23:22):
the end of the day, when someone says this is
machine learning or this is a neural net, you should
ask: what is the data, what are the patterns it
is looking for, and how is it using the patterns?
And once you start changing your words from artificial intelligence
to pattern finder and things like that, then
I think that kind of brings things down to kind

(23:44):
of the level that things are actually happening on and
makes you less likely to anthropomorphize those things. So those
are kind of practical, everyday ways of cutting through, kind
of, you know, the hype. In some ways, I

(24:05):
think that Dr. Riedl assuaged some of my worries
around the future of AI. Yeah, things are progressing rapidly,
and like Anney said, it's hard to predict where
we'll be in fifty years. But it was cool to
hear from someone like Dr. Riedl, who works in AI,
to be reminded that, at this point at least, there's
no AI overlord watching us all. Yeah.
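Riedl's "pattern finder" framing is easy to make concrete. Here is a hypothetical, minimal pattern finder in Python, a one-nearest-neighbor labeler; the data points and labels are invented for the example:

```python
def nearest_neighbor(example, training_data):
    """A bare-bones 'pattern finder': label a new point with the label
    of the closest point it has already seen. No understanding, no
    intent -- just distance arithmetic over past data."""
    def distance(a, b):
        # Squared Euclidean distance between two points.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(
        ((label, distance(example, point)) for point, label in training_data),
        key=lambda pair: pair[1],
    )
    return label

# The "data" and the "pattern" are all there is to inspect.
seen = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog")]
print(nearest_neighbor((1.2, 0.9), seen))  # prints "cat"
```

Asking "what is the data, and what pattern is it applying?" of this toy tells you everything there is to know about it; the same two questions cut through the hype around much larger systems.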

(24:44):
and we want to know what you think. You can
email us at thequestionbooth at howstuffworks
dot com or find us on Instagram at The Question Booth.
We'd like to give a special thanks this week to
our executive producer Julie Douglas and to Anney Reese and Dr.
Mark Riedl for joining us. We'd also like to thank
Ponce City Market for hosting the Question Booth. The Question Booth
is written, edited, and scored by me, Dylan Fagan, and
my co-host Kathleen Quillian. One million likes, Kathleen. One million,

(25:08):
Thanks, Dylan. And if you're in Atlanta, you can visit
the Question Booth too. We're on the second floor of Ponce
City Market, twelve to five p.m. Friday through Sunday. Also,
if you like what you hear, we'd love it if you
gave us a quick review on iTunes. It helps other
people find the show. Okay, so before we go, what
are we talking about next week? We're listening to the
answers to the question if you could change one thing

(25:30):
about yourself, what would it be? I am looking forward
to that one, but until then, see you in the
Question Booth.
