January 18, 2020 · 15 mins

Guest host Ian Punnett and computer engineering expert Dr. Robert J. Marks discuss the continued advancements in artificial intelligence and robotics, how the military is harnessing these improvements to make better weapons, and whether they will pose a threat to mankind.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now here's a highlight from Coast to Coast AM on iHeartRadio. Robert J. Marks is a Distinguished Professor of Electrical and Computer Engineering at Baylor University, for whom my respect went up enormously after you all beat KU a couple of weeks ago. Yes, that was interesting. It was awesome,

(00:20):
is what it was. It was awesome, and remember, I'm here in the Big Twelve with you, but at, you know, much smaller K-State. But we celebrated that win. Our head coach just quit, for the football team. No, yeah, he went to the NFL. Oh yeah, that's right. I forgot about that. Lately,

(00:41):
college head coaching has become kind of a revolving door, so it's hard for me to keep track of where people are going. But I did hear that. And the NFL needs it, because they've got a bunch of stale coaches there. They've got to keep up. But that's part of the same thing here too. We always have to be forward-thinking, always thinking not necessarily about what we're facing today, but about what we could be facing five years from now.

(01:03):
And in your book, which, by the way, is available free if people want it. You can link up to that at coasttocoastam dot com and get a free copy of The Case for Killer Robots. The idea is that, again, we're arguing about some sort of convergence point in the future between what we need to have in our arsenal and what others are going to have

(01:24):
in theirs. Yes, and I think in order to do that we need to have a nice, sober, informed discussion about the limits of AI. Right. I think, the way things are portrayed in the media, there is this idea that AI someday will be sentient, or be creative, or understand. No, it will never do any of those things.

(01:47):
That's backed pretty solidly by evidence in computer science. So once you put that aside and look at artificial intelligence in an informed, sort of matter-of-fact manner, you can actually see some of the limitations that are going to be imposed on artificial intelligence and the weapons of the future. It's never going to become like Skynet in the Terminator movie,

(02:09):
right? It's never going to become like The Matrix, where we're all in bathtubs of goo in a literal virtual reality world. That's not too different from my life right now, by the way, so I'm not sure you can make that claim. But when you say that, I also have to point out that although you are right, and we

(02:29):
know, I mean, it's still sometimes easy to put everything down to, say, the media. But however we look at movies and TV, you know, there are still stories that come out, say out of Japan, where they're working on robots that mimic human emotions. So they may not generate it, but

(02:51):
there does seem to be an interest at some levels of science in creating at least an effect of human empathy or sympathy, or fulfilling a function, and this would be part of some future development of artificial intelligence. Well, the mimicking itself, the mimicking of human emotion, is not really that difficult. It turns out

(03:14):
that we are prewired as little babies. My daughter just had twins, so I know a lot about babies right now. Yeah, they can actually see about a foot in front of them, and their brains are prewired to notice faces. And this gives rise to something called the uncanny valley hypothesis, and it has to do with a dip in a regression curve.

(03:37):
But basically the idea is that the emotional response to things which are close to human is much more severe than to things that don't relate to humans. Think back to the nineteen thirty-one movie Frankenstein with Boris Karloff. Sure, he kind of walked around really slowly.
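[A rough sketch of the curve being described: affinity rises with human-likeness, then dips sharply for things that are almost, but not quite, human. This is a toy model; the curve shape and every number in it are illustrative assumptions, not measured data.]

```python
import math

def affinity(likeness: float) -> float:
    """Toy uncanny-valley curve: a rising trend minus a Gaussian
    'valley' centered near, but not at, full human-likeness.
    likeness runs from 0.0 (not human at all) to 1.0 (fully human)."""
    trend = likeness  # more human-like generally means more affinity
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return trend - valley  # the dip: almost-human reads as creepy

for likeness in (0.0, 0.25, 0.5, 0.7, 0.85, 0.95, 1.0):
    print(f"human-likeness {likeness:4.2f} -> affinity {affinity(likeness):+.2f}")
```

[Frankenstein's monster sits near the bottom of that dip: close enough to human to trigger the face-reading response, wrong enough to be creepy.]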

(03:58):
If you were on crutches you could outrun him, and probably Mike Tyson could take him out with a couple of punches. But he was creepy, and he still gives you the creeps, because he resembled a human being. The science fiction author Isaac Asimov actually coined the term the Frankenstein complex, and that is the fear that we have of robots

(04:21):
or anything that looks human. Yeah, and I think that it was a human version of an automaton, which had already existed in our culture and in our cultural imagination. So whether we were creating, um, you know, essentially arcade figures that

(04:43):
were very realistic-looking automatons, basically very complicated clockwork, but ones that sort of gave the effect of a human interaction, or if we even go back and look in religious lore at the creation of a mindless automaton that was, you know, created in flesh. This

(05:06):
is what people fear because we've always feared it, as you point out, but I don't know that the fear is entirely without basis. When we look at modern computer scientists who are still trying to come up with robots that will do better than that, that will appear empathetic and not scary,

(05:27):
and they will appear very sweet, and might be able to cradle a baby and comfort that baby instead of just having some bassinet rocking back and forth. Yeah. Absolutely, one of the things that always needs to be defined is what is meant by being better. Better in what sense? Yeah, I think you actually meant better at the cradling, by the robot. In terms of artificial intelligence,

(05:51):
the artificial intelligence itself has little to do with the packaging. So you can package artificial intelligence in a robotic, not in a robotic but a humanoid, sort of form, like the Transformers, or you can package artificial intelligence in missiles. So the actual artificial intelligence

(06:12):
has little to do with the packaging. And some of these things that I see in the media, I don't know if you've heard of the robot Sophia. She's supposed to be able to have conversations with you and express human emotions, and actually, to me, she's not that impressive, because basically what she does is raise an eyebrow, do a little grimace.
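[The "easy to program" point can be made concrete: shallow emotion mimicry can be as simple as a fixed lookup from a recognized cue to a canned gesture. A minimal sketch, with the cues and gestures invented for illustration; nothing in it understands anything.]

```python
# Scripted mimicry: recognized cue -> canned facial gesture.
GESTURES = {
    "greeting": "smile",
    "question": "raise an eyebrow",
    "insult": "grimace",
}

def mimic(cue: str) -> str:
    """Return the scripted gesture for a cue; default to a neutral face."""
    return GESTURES.get(cue, "neutral face")  # unknown cue -> blank stare

for cue in ("greeting", "question", "insult", "philosophy"):
    print(f"{cue:>10} -> {mimic(cue)}")
```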

(06:34):
And these are things, again, that we're tuned to recognize, and they're sure easy to program. And behind that, her conversational skills are kind of on the level of Alexa. Right, right. But look how much Alexa is taking over, you know, homes in many ways, and how people have, you know, I'll grant you that the human interaction with Alexa is something

(06:55):
that Alexa is not aware of, not in any sentient way. But we do with that type of technology what we do with dogs. I'm pretty sure my dog completely understands why I was bothered by having to write four syllabi yesterday. I'm pretty sure that he completely got my point, that it was wrong to have to do that in

(07:15):
one day. But you know, he's just looking at me. He has no idea what I'm talking about, but I feel like he knew. And I think maybe this is the great challenge: not getting the media to articulate it better, although there's truth to that, and look, we're trying to do it right now, but whether the people

(07:37):
who are developing it are going to continue to be as engaged with the public, making their case for its necessity, instead of doing that ivory tower thing, which is: we're going to go do this thing and we'll let you know later on when we're finished. Well, if you actually think about it, we are really blessed with artificial intelligence today. I think to some of these things we're kind of numbed by familiarity, right? The Alexa.

(08:00):
We've got Uber and Google search engines, Siri and Amazon shopping, Bitcoin. So we have artificial intelligence all around us that is doing some great and wonderful things. So that's the challenge, though, when we talk about this in a military context: how will great and wonderful translate

(08:22):
to military artificial intelligence? Or will it only feel great and wonderful when it's our drones doing the killing, and not us being killed by enemy drones? Well, here is the unfortunate conclusion, I would submit: we do not have an option in terms of pursuing artificial intelligence development.

(08:44):
Again, pointing to history, technology has actually increased the posture of nations. It has won wars, it's shortened wars. The atomic bomb shortened World War Two, as did the Norden bombsight, as did the decoding of the Nazi Enigma code. All of these were technological things which helped shorten

(09:07):
the war. A big one was actually radar that the enemy didn't know about. Neither the Nazis nor the Japanese knew about it until the day of Japan's surrender. But I had an uncle that was in the Pacific Theater, and he was supposed to jump behind enemy lines with twenty-four pounds of explosives on his legs. He was

(09:30):
a paratrooper. He was supposed to go behind enemy lines and blow up stuff. But he was so happy when he heard about the bomb, the atomic bombs, because he was able to come home to West Virginia, where I'm from, and raise a family and live to the ripe old age of ninety, whereas jumping behind lines would have been a suicide mission. Now, the atomic bomb killed, I think,

(09:53):
about two hundred and twenty thousand people. Just terrible. But if you look at historians, one of them is Philip Jenkins. He's a historian here at Baylor, and he estimates that the dropping of that bomb saved ten million lives: not only the Allies invading Japan, not only the Japanese fighting back, and they actually had death over surrender in their philosophy,

(10:15):
but all of the occupation that Japan was doing in China and North Korea. And there was a standing order in incarceration camps that in case a camp was to be overrun, all of the prisoners would be killed. Sure, and that makes ten million. So this is an unfortunate aspect of the nature of man, that

(10:37):
we have to do it. But remember, we have no choice. Okay, but this is where I think you are actually making a case against your point. And I'll tell you why. First of all, as chance would have it, I too had an uncle that was in the Pacific Theater, and he shared the same feelings with me many times. Even though he was very much a pacifist in other respects,

(10:59):
he had fought alongside, he was in Patton's Army in Italy, and he was being transferred to the South Pacific, and he felt his luck had just run out. He had survived, you know, a year in Europe, which was horror. He was the most senior guy in his little group, and he thought there was no way he was going to survive a Japanese invasion, and so there

(11:20):
was always part of him that was grateful for that bomb. But I think this is the point: we still, humans still, selected that target, right? So that's my point. I think that's where, again, we come back to something which might undermine a little bit of that parallel, although I know you're not making too big a comparison. Allowing machines to select their targets, allowing

(11:45):
machines to decide that this is the human I'm supposed to kill, that's the part where I think we all should shudder a little bit, because it's not as though machines are in fact infallible. And we know this from even a previous tech that you mentioned, facial recognition software, where people have been accused

(12:07):
of crimes or arrested on the basis of walking down a street in London, when they turned out not to be the person of interest that police were looking for. Or even, as we were talking about last week with a guest who was in the CIA, where they were just going on algorithms, and they were choosing people on the basis of their name, where they were from,

(12:29):
other aspects of their life, which turned out to be a false subset of data, to go arrest somebody, disrupt their life for like six months, and then let them go once they were able to determine that they weren't the person they were looking for. That's the part that's scary. In that case of the algorithm, law enforcement was subservient to the algorithm's equation, to the actual conclusion

(12:54):
that it came to. And they said, well, we have to go do it because the algorithm told us to. And it's the same with the facial recognition software as well: this is what it says, so we should. And I think that's the part that's scary about, you know, for example, lethal drones. Oh, by the way, I totally agree with you. I think that the human needs to be in the loop whenever at all possible, and autonomy

(13:16):
should actually be used as a last resort. But there are cases where autonomous weapons are going to be required. The military, in engagement, has something they call OODA. OODA stands for Observe, Orient, Detect... I'm sorry, Observe, Orient, Decide, and Act. OODA. And many times the success of

(13:40):
an engagement is determined by how quick your OODA loop is. And in some cases we're going to have OODA loops which, if humans are doing the deciding, are just going to be too long, especially if our adversaries are using artificial intelligence.
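[A minimal sketch of that timing argument: whichever side completes its Observe-Orient-Decide-Act loop faster acts first. All of the stage timings below are hypothetical, purely for illustration.]

```python
from dataclasses import dataclass

@dataclass
class Combatant:
    name: str
    observe: float  # seconds spent in each stage of one loop
    orient: float
    decide: float
    act: float

    def loop_time(self) -> float:
        """Total time for one trip around the OODA loop."""
        return self.observe + self.orient + self.decide + self.act

# Hypothetical numbers: a human decider versus an autonomous system
# whose 'decide' stage is effectively instantaneous.
human = Combatant("human-in-the-loop", observe=0.3, orient=0.5, decide=2.0, act=0.4)
machine = Combatant("autonomous", observe=0.3, orient=0.5, decide=0.01, act=0.4)

for c in (human, machine):
    print(f"{c.name}: {c.loop_time():.2f} s per loop")
fastest = min((human, machine), key=Combatant.loop_time)
print(f"fastest draw: {fastest.name}")
```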
So in some cases it's actually going to become like, I don't know, the gunslinger movies of the

(14:02):
Old West, where you have these two cowboys facing each other in a showdown on the street, and whoever's the fastest draw is going to be the winner, right? And so we are going to have scenarios like that. One of them, for example: I don't know if you remember the arcade game Space Invaders. Yeah, I was pretty good at it. Yeah, all right, we're okay. Well, you know that Space Invaders started out and it went really slow

(14:23):
at first, right? Then at the last, at least the way I played it, I wasn't as good at it, you couldn't aim anymore. There was just too much happening. You actually had to do a splatter hit on all of that and just hope you survived in some way. So that is an example of where autonomy might be required. If one is overwhelmed by attack in such a manner

(14:47):
that humans cannot comprehend what is going on, then, you know, you have to go to autonomy if you don't want to surrender. Listen to more Coast to Coast AM every weeknight at one a.m. Eastern, and go to coasttocoastam dot com for more.
