Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Planet Logic. Today's episode: Could Humans Become Obsolete?
And my guest today is somebody who may have an
opinion or two on that, April Cousins, who is a
tech person herself. I've worked with her a couple of
places in the past. We've been on boards together and
such back in the old days. And welcome to Planet Logic.
Speaker 2 (00:22):
Well, thank you very much, Lynn. I appreciate the opportunity.
Speaker 1 (00:25):
All right now, I want to start with that provocative headline,
Could Humans Become Obsolete? It has gotten to a point
now where a teacher in a high school or a
college assigns an essay and the essay comes back
reasonably well written. Did the student write it or was
it ChatGPT or something like that? Is that the
(00:48):
world that we are beginning to live in and will
live in?
Speaker 2 (00:52):
That's the world we are living in. One of the
interesting developments to that is that now the teachers, besides all the
other things they have to deal with, have to become
familiar with the tools that detect the fact that AI
has been used to write a particular essay. And they
do that by comparing phrases and paraphrases and just word
(01:19):
groups and themes with everything that the AI has access
to, to my mind.
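To make that comparison idea a little more concrete, here is a minimal sketch of the kind of phrase-overlap check April is describing. It assumes a toy detector that scores an essay by how many of its word groups also appear in a reference text; the function names and sample sentences are invented for illustration, and real detection tools are far more sophisticated.

```python
# Minimal sketch of phrase-overlap detection, as described above.
# This is a toy illustration, not any commercial detector: it scores
# an essay by how many of its word groups (n-grams) also appear in a
# reference corpus of machine-generated text.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(essay, reference_corpus, n=3):
    essay_grams = ngrams(essay, n)
    ref_grams = ngrams(reference_corpus, n)
    if not essay_grams:
        return 0.0
    # Fraction of the essay's three-word phrases that also occur in the reference.
    return len(essay_grams & ref_grams) / len(essay_grams)

essay = "The industrial revolution fundamentally transformed society in many ways."
reference = "The industrial revolution fundamentally transformed society and the economy."
print(f"overlap: {overlap_score(essay, reference):.2f}")  # higher means more suspicious
```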
Speaker 1 (01:24):
You know, it was interesting last night perusing the Internet
a little bit. I found an old picture of the Quarrymen,
which was the Beatles before they were the Beatles, where
some members ended up being the Beatles and some didn't.
And I thought, these are fascinating pictures. And I started
scrolling down to the comments and it became apparent they
were AI. So much of what we've seen is like that. Paul McCartney supposedly
(01:50):
has agreed to pay for the college of Charlie
Kirk's children. That's fake too. The pictures involved in that
were fake. How do we go on Facebook or X
and know what's real and what is not real?
Speaker 2 (02:05):
You just have to do a lot of research. You
can't tell by looking. I mean, AI, particularly the photo
thing, has actually been around for a very
long time. One of the things that I found interesting
was that, if you remember the movie Gladiator with
Russell Crowe, the gentleman that was in charge of the
gladiators died before that movie was finished, and about at
(02:29):
least the last twenty minutes of his appearance in the
film is all AI.
Speaker 1 (02:33):
Well, one of the Star Wars movies, and this has
been some time ago, I don't remember the name of
it because I tend to sleep through those these days,
Peter Cushing was in it. Peter Cushing was in the
original Star Wars movie, and I kept thinking, who is
that guy? It looks like Peter Cushing. So I looked
it up and it was all AI. And that was
(02:54):
before we started using the term AI. It was just CGI,
I guess, is what we call that. It's hard to
know what's real and what isn't. Now here's where I
think the question starts to get interesting. Could humans
become obsolete? You've seen, I'm sure, two thousand and one:
A Space Odyssey?
Speaker 2 (03:14):
Oh yes, many times.
Speaker 1 (03:16):
You know our friend HAL. HAL, when he says
I can't let you do that, or he starts to
make decisions based on his programming. But
they are sentient decisions made by a machine.
Speaker 2 (03:32):
Well, they appear to be sentient decisions made by
a machine. What people have to understand is that the machines are
tools, and everything that a machine, AI, robotics or whatever,
does has been programmed by a human, with all their biases
and flaws and holes in their logic, if you will,
and all of that will show up in the result. And so two
(03:56):
thousand and one doesn't follow Asimov's three rules of robotics.
Speaker 1 (04:01):
Well, no, sure, but that was what, sixty-seven, sixty-eight,
and they were already thinking like that, and some of
this has happened. So what happens when robots or
AIs program other robots or AIs?
Speaker 2 (04:18):
Well, they already are. The truth of it is that they
already are. In my opinion, what that
is is the programmers and the tech people just can't resist
trying to play god, and so they say, well, our
tool is so great that not only can it do
these minor tasks, but it can also
(04:42):
be used to create other servants, with the caveat to
that being that servants frequently don't like to stay servants.
And so as soon as you start trying to program emotions,
because that's what those kinds of decisions are based on,
emotions, into technology, that is going to be based on
(05:05):
the programmer's experiences and bias, and we're humans, that's not
all encompassing, and so you're more than likely going to
get some pretty interesting results.
Speaker 1 (05:17):
Let's move from HAL in two thousand and one: A Space
Odyssey to Lieutenant Commander Data. Yes, here is a robot,
an android as they sometimes call them, that is not
like C-3PO or R2-D2, who
are obviously robots and built for specific tasks. Data became
(05:42):
a Starfleet officer, and his greatest desire, unlike Spock, who
did not want emotion, Data's greatest desire was to have emotions,
to become a human, to experience human things, to know love,
so to speak.
Speaker 2 (06:00):
Oh yeah, no, for sure. I mean, the
episode where he created his daughter Lal was the epitome
of him wanting to have the emotion of family and
love and fatherhood, and anybody that's seen that episode knows.
That didn't go very well. No, it didn't.
Speaker 1 (06:19):
But could that happen? I mean, April, you're a tech person,
you deal with this all the time. Are we
going to have a day when a machine, a computer,
an AI actually has sentient thought that is independent of
its programming?
Speaker 2 (06:39):
It's possible, I mean, anything's possible, but I think we
are a long long way off from that.
Speaker 1 (06:45):
So AI is a tool still at this point. Yes,
all right, we may in a moment have some fun
and talk about some of the great robots of fiction,
but here's a question. When this started coming out,
I was at an event, it's probably been two
or three years ago, and I had just barely heard
(07:06):
of this ChatGPT thing, GPT, whatever it is. Yes,
and I do a minute commentary every week on the
radio station that I worked for, Talk thirteen seventy in Austin.
You can listen to it at talkthirteen seventy dot com.
(07:27):
My producer walks over with his phone and he said,
look what I did. And I said, what did you do?
He said, I told the computer to write a Lynn
Woolley style column one minute long, in your writing style. Yep.
I read it. It wasn't one hundred percent, but it
was getting there. It was pretty darn close to me
(07:49):
having written that. All he did was tell it to
write it in my style and gave it a subject.
Gave the computer a subject.
Speaker 2 (07:55):
Yes, And what the computer did at that point was
to scan through all of its libraries of all of
your articles and synthesize an article based on what was
in that combination of articles.
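As a rough sketch of what that synthesis step looks like in practice, here is a hypothetical example that gathers sample columns and builds a style-imitation prompt. The sample text is invented, and the generate() stub stands in for whatever text-generation model would actually be used; this is an illustration under those assumptions, not a description of any specific product.

```python
# Hypothetical sketch of the "write it in my style" request described above.
# The sample columns and the generate() stub are placeholders for illustration;
# a real system would send the assembled prompt to a text-generation model.

sample_columns = [
    "Column one: a one-minute commentary in the host's usual voice...",
    "Column two: another short commentary with the same cadence and tone...",
]

def build_style_prompt(samples, subject):
    joined = "\n\n".join(samples)
    return (
        "Here are examples of the author's one-minute radio columns:\n\n"
        f"{joined}\n\n"
        f"Write a new one-minute column in the same style about: {subject}"
    )

def generate(prompt):
    # Stub: in practice this would call a language model.
    return "(model output would appear here)"

prompt = build_style_prompt(sample_columns, "artificial intelligence in everyday life")
print(generate(prompt))
```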
Speaker 1 (08:11):
But in twenty seconds.
Speaker 2 (08:14):
Computer speeds are pretty amazing though.
Speaker 1 (08:16):
All right. I have an Alexa on my desk at
my apartment in Austin, which is not where we are
right now. I'll be lying in bed reading a novel
and I'll come across some reference that I don't understand,
and I'll ask her and she'll tell me in five seconds.
Speaker 2 (08:34):
Again, computer speeds are amazing, and what you've given when
you asked that question are keywords. They've been able to
construct the databases so that they are keyword driven, and
so it really cuts down on the amount of time
it takes to find something like that.
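A minimal sketch of what "keyword-driven" can mean here, assuming a simple inverted index that maps each word to the entries containing it, so a question's keywords become a set lookup rather than a scan of every entry. The entries and query are made up for illustration.

```python
# Minimal sketch of a keyword-driven lookup (an inverted index), as described above.
# Each keyword maps directly to the entries that contain it, so answering a
# question is a set intersection rather than a scan of every document.

from collections import defaultdict

entries = {
    "gort": "Gort is the robot in The Day the Earth Stood Still (1951).",
    "hal": "HAL 9000 is the computer in 2001: A Space Odyssey (1968).",
    "data": "Data is an android officer in Star Trek: The Next Generation.",
}

index = defaultdict(set)
for key, text in entries.items():
    cleaned = text.lower().replace(":", " ").replace("(", " ").replace(")", " ")
    for word in cleaned.split():
        index[word].add(key)

def lookup(question):
    # Intersect the entry sets for every keyword that appears in the index.
    hits = [index[w] for w in question.lower().split() if w in index]
    if not hits:
        return []
    result = set.intersection(*hits)
    return [entries[k] for k in result]

print(lookup("robot day earth stood still"))
```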
Speaker 1 (08:51):
In your work, do you use AI at all?
Because you've already made it known you're not
overwhelmed with the idea of it.
Speaker 2 (09:02):
Yeah, no, I don't. And, well, I say I don't.
I use a very rudimentary one called Grammarly when I'm
writing, particularly emails.
Speaker 1 (09:13):
Okay, but that's to make your grammar correct, right? Yeah, and
Speaker 2 (09:16):
it does reword. I mean, even
with something like Grammarly, you can tell it I want
it friendly, I want it professional, I want it technical,
and it will use that as the theme for what
it sends back to you.
Speaker 1 (09:29):
That's kind of interesting. I do want to point out,
if you're listening to this, I just gave
April Jones, excuse me, April Cousins, a copy of my
latest story collection called Stitches in Time. I just want
you to know up front the grammar in that book
is ninety-nine percent correct. And I did not use AI.
(09:51):
I used my knowledge of grammar, which is becoming less
of a valuable tool, I guess, as it's so easy
to let automation as such take care of
it.
Speaker 2 (10:04):
Well, and that's one of the issues. That's a good
example of one of the issues that I've got with AI,
and this is going to sound really stilted
and condescending, but it's not meant that way. All of
these tools seem to have the impact, like social media
(10:24):
does that it gives people an out for thinking. You know,
they can access these tools, and I don't know where
their brains go, but it's not generally something that would
benefit humanity. It's you know, what can I do that's
fun next, or anything that doesn't require critical thinking or
(10:49):
problem solving, because they've got something that solves it for them.
And in some of the young people that I've
dealt with, and older people for sure, that is just
distressing, because you can ask them or present them a
situation that would be relatively simple, I would have thought,
for people to solve, and they look at you like
a deer in the headlights.
Speaker 1 (11:10):
What about things that are relatively not simple? If you
could look at one of these ais, maybe five years
from now, maybe ten years from now, when we've really
expanded what they can do, and say to it, what
is the cure for cancer?
Speaker 2 (11:28):
Well, it'd be lovely if it could come up with
a cure that would work.
Speaker 1 (11:31):
But it's like my mythical robot president in one of
my recent columns, where I made the statement that a robot
president imbued with Asimov's three laws and imbued with the
three laws of Robotic Politicians, which I wrote, which I
don't have in front of me, but you'll have to
(11:52):
read it. A robot president would not need a cabinet.
The robot, I'll use the pronoun he for the heck
of it, the robot president would instantly, like Alexa does
now, know everything, would know the whereabouts of Vladimir Putin
and Xi Jinping on any given day because it's monitoring
(12:12):
every news source there is. It would instantly know. Let's say
that we, what's a good name for an AI in the future?
I'll call it Auto. Okay, Auto, it looks like
the Russians are on the move against Poland, give me the
ten top scenarios on how to stop them, in order
(12:38):
of likely success, and you'd have it like that, every possibility,
every right turn, every left turn, every weapon that Russia
has would be taken into consideration.
Speaker 2 (12:55):
And I will counter that with, you've watched WarGames, right? Yeah, okay.
I don't know that your Auto wouldn't come up
with something that ended in thermonuclear war.
Speaker 1 (13:08):
Well, that's true. But we also don't know that
Trump won't do something, or Putin won't do something, or
a guy who was barely there, Joe Biden,
might have done something.
Speaker 2 (13:19):
Yes, being human, that's entirely possible.
Speaker 1 (13:21):
But you know what a SWOT analysis is, so strengths, weaknesses, opportunities,
and threats. Couldn't a robot president do that kind
of analysis almost continuously as things change on the ground? Oop,
the Russians have just sent troops into Bravia or whatever,
and then a computer could instantly take that fact into
(13:45):
account and change the strategy based on it.
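As a loose illustration of that continuous re-analysis idea, here is a small sketch that keeps a running set of strategy scores and re-ranks them whenever a new event arrives. The events, scenarios, and numbers are entirely invented; this is only meant to show the shape of "update the analysis as things change," not any real decision system.

```python
# Loose sketch of continuously updated analysis, as described above.
# The scenarios, events, and scoring are invented for illustration only:
# each incoming event adjusts scenario scores and the ranking is redone.

scenarios = {
    "diplomatic pressure": 0.6,
    "economic sanctions": 0.5,
    "reinforce allied borders": 0.4,
}

def apply_event(event, adjustments):
    # An event carries score adjustments for the scenarios it affects.
    for name, delta in adjustments.items():
        scenarios[name] = max(0.0, min(1.0, scenarios[name] + delta))
    ranked = sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True)
    print(f"After '{event}':")
    for name, score in ranked:
        print(f"  {score:.2f}  {name}")

apply_event("troops massing at the border",
            {"reinforce allied borders": +0.3, "diplomatic pressure": -0.1})
```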
Speaker 2 (13:49):
Yes, but something that has been part of politics
and humans for as long as there have been humans
is that the computers will assume actions based on past
history and logic, the logic of their programming. Humans,
bless their sweet little hearts, are not logical. They will
(14:10):
do things because it benefits them or it feels right
at the moment. Example, Putin. You know, it made
no sense for him to do what he did going
into Ukraine. He wants to expand Russia's territory at the
cost of millions of his own people. What, you know,
(14:33):
what part of that makes sense? And even given the evidence,
continuing evidence of what's going on, he keeps thinking that
he's going to be able to do it.
Speaker 1 (14:43):
He wants to reconstruct the Soviet Union. Yes, all right,
speaking of Putin, there's kind of an axis of evil
forming in the world, and the two smaller players are
the Ayatollah Khamenei from Iran and Kim Jong Un
of North Korea, who is, from what I understand, if
(15:07):
the North Korean police come to your house and you
don't have a portrait of Kim Jong Un, you may
get your throat cut. So, it'd be like, I
should have a portrait of Trump up here somewhere just
to make sure, all right. And then the two big players,
of course, are Xi Jinping from China and Vladimir Putin,
and they're forming kind of an alliance now. At this last
(15:31):
meeting they had, Xi and Putin were walking along together,
and there were still translators with them, one speaking some
form of Mandarin or something and one speaking Russian,
and the mic was not supposed to
be on, and it was on. Are you familiar with
what they were discussing?
Speaker 2 (15:51):
No, uh-uh.
Speaker 1 (15:52):
They were discussing the possibility of immortality, where the two
of them, as presidents or leaders for life, could
live to be one hundred and fifty or two hundred years.
Now that's been scrubbed in most places that they
have authority over, because they didn't want that out. They
(16:13):
didn't know the mic was hot. How would that work?
Let's say that Elon Musk decides he
wants to live forever, or Bill Gates decides he wants
to live forever. I'd always thought what you would do
was you would cryonically freeze yourself with the idea
(16:36):
of being unfrozen in a future time where you could
be restored with technology. Now we're talking about transmitting your
consciousness into, perhaps, as we used to call computers,
an electronic brain.
Speaker 2 (16:51):
Right, yeah, that concept, I mean, it's almost threadbare,
it gets used a lot.
Speaker 1 (16:59):
But they're talking about it seriously now. Is it
even remotely possible?
Speaker 2 (17:05):
No, and I really do believe that. I don't believe
that that is possible. I believe that something that simulates
that is extremely possible. There's a show on it. It's
older now, but it's called The 100, and,
I mean, the concept is that it's a dystopian Earth.
You know, the people have completely irradiated the planet, and
(17:27):
they send this group of folks up into essentially a
very large space station to live for literally over one
hundred years, waiting for the radiation on the Earth to die down.
And the series goes on for several seasons. But in
one of the seasons, there is a group where the
leaders supposedly record themselves electronically so that they can be
(17:55):
then, with this microchip, be transferred into someone else's body,
and then that intelligence takes over the body and the
other person, of course, and this is sci-fi again,
so usually the other person basically fades away.
Occasionally they don't, and then there's this internal brain argument
between the two, the host and the original person.
(18:17):
It's just not possible. I mean, the essence of
being human, I don't think, can be translated into bits
and bytes and then transferred to something else. I just
don't think that's possible. Again, it can be simulated, but
it's simulated in the same way that HAL was an
(18:38):
intelligence that, again, didn't follow Asimov's rules of robotics,
of which the first one is do no harm to humans,
and the second one is do no harm.
Speaker 1 (18:49):
Let's see, they always protect your own existence as long as
it doesn't violate the first
Speaker 2 (18:53):
Law, Yes, and that.
Speaker 1 (18:56):
What's the second one? That may be the third one?
Speaker 2 (18:59):
That's the third one. The second one is
don't allow yourself to be harmed, and the third one
is, but then, don't harm humans. So it's a hierarchy, exactly.
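For reference, a small sketch of how that hierarchy idea works as logic, using Asimov's published ordering (harm to humans first, obedience second, self-preservation third) rather than the order recalled above. The actions and violation labels are invented for illustration.

```python
# Sketch of the Three Laws as a priority hierarchy, in Asimov's published order:
# First Law:  do not harm a human or, through inaction, allow harm.
# Second Law: obey human orders, unless that conflicts with the First Law.
# Third Law:  protect your own existence, unless that conflicts with the first two.

PRIORITY = {"first": 0, "second": 1, "third": 2, None: 3}

def worst_violation(violations):
    # Return the highest-priority law this action would violate, or None.
    for law in ("first", "second", "third"):
        if law in violations:
            return law
    return None

def better(action_a, action_b):
    # Prefer the action whose worst violation sits lower in the hierarchy.
    a_rank = PRIORITY[worst_violation(action_a[1])]
    b_rank = PRIORITY[worst_violation(action_b[1])]
    return action_a if a_rank > b_rank else action_b

# Obeying a shutdown order violates only the Third Law; refusing violates the Second.
obey = ("obey the shutdown order", {"third"})
refuse = ("refuse the order", {"second"})
print(better(obey, refuse)[0])  # obeys, because the Second Law outranks the Third
```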
Speaker 1 (19:12):
And I may have gotten that mixed up, and many plot devices
were based on that. And I've read all of Asimov's
robot stuff, yes, and it's very interesting. I think,
let me tell a joke right now, go for it,
because this joke kind of epitomizes what I want
(19:34):
to ask you. There's a bunch of scientists together and
they're at a big university and they're the best brains
in the country, and they're wondering. They're having a big
argument over the existence of God, and half of them
says there is no God, and half of them says, yes,
there's a God. And they decide the best way to
decide is to create this incredible AI and ask it.
(19:57):
So they spend years building this thing, it's the most
powerful artificial intelligence ever, and they get it done and
they're ready to ask it. So they ask this AI,
is there a God? And the AI says, there is
now. Yes. Okay. So how does this end? Or
(20:19):
does it? I mean, is this a continual improvement in
the ability of AI for the rest of human existence?
Does it reach a point where it can't get any better?
And what is that point?
Speaker 2 (20:34):
I think that it is going to continue to improve.
It gets more data, the programming gets more sophisticated. But
I don't ever believe that it will reach the point
where it will be human. I mean, there
was a television show that was really good on TV
that was called Person of Interest.
Speaker 1 (20:53):
Oh, I love it. Yeah, Jim Caviezel, exactly. Oh, great. Yeah.
He had a little back door into this thing where
he could go and find somebody that was in trouble
and go try to save them. That show could
get intense.
Speaker 2 (21:04):
It could, but in the end it was a computer
that was the villain.
Speaker 1 (21:08):
Well, I haven't seen the last season. Oh no,
I have it. I have it. I'm still,
I'm still a Blu-ray person. Oh, I'm so sorry. Sorry, sorry,
I'm old school. But yeah, that is a great show. Yes,
absolutely fabulous show. One of the best things TV's had
(21:29):
in a long time. Yes, all right, Now I want
to ask you about reasoning. I mentioned Alexa; Siri is
kind of the same way. I try to get them
talking to each other sometimes.
Speaker 2 (21:39):
I won't have those in my house.
Speaker 1 (21:43):
Oh, really? Do you think they monitor you?
Speaker 2 (21:46):
I think.
Speaker 1 (21:47):
Okay, you think. Do they monitor you?
Speaker 2 (21:51):
I don't have Alexa or any of those things. And
I can tell you one hundred percent that your phone
is monitoring you all the time.
Speaker 1 (21:57):
Well, sure it is, so...
Speaker 2 (21:59):
I mean, whether or not. The only way it wouldn't be,
and I'm not even one hundred percent sure of this,
is if you have the thing turned off. But if
it is on, it's listening.
Speaker 1 (22:07):
Yeah, and Whataburger is monitoring me. I put the app
on my phone and they know my meal history now
from day to day.
Speaker 2 (22:17):
Do is look at something an advertisement, and for the
next two weeks I'm getting all kinds of advertisements of
that thing that I just not that I thought, but
that I looked at.
Speaker 1 (22:29):
I thought people were being conspiratorial thinking that, but then
it's happened to me so many times. Yes, it is true.
Speaker 2 (22:37):
Yeah, and actually, for me, it makes me chuckle
because I've dealt with people that are pretty averse to
having their information out there, you know. I've had people
that I worked with in the business that I was
involved with get really angry at me when I sent
them back a digital copy of a check that they
(22:58):
sent for payment for something. They're like, oh my god,
you don't do that, because, you know, now it's on
the internet and they'll know my stuff. And I'm like,
oh my god, they already knew your stuff.
Speaker 1 (23:07):
I'll tell you a better story than that. I have
a friend who does not like
to make any kind of online payments. And I said,
why don't you just Venmo me what you owe me,
or I'll Venmo you what I owe you. He said, well,
how does that work? I said, well, you sign up
how does that work? I said, well, you sign up
(23:28):
for venmo, you put your bank account number and your
routing number in there, and you can send money instantly.
And he said, but then they'd have my bank account number.
I'd rather send a check. So I get out of
check and I hold it up to him and I
point to that little number at the bottom, on the
right side of the bottom of the check, and I said,
what do you think that is? And he said, oh, yeh,
(23:49):
that's my bank.
Speaker 2 (23:50):
Account number, and the routing number,
and your name and address. And some people used to
actually put either their Social Security number and/or their
phone number on there.
Speaker 1 (23:59):
You know, I don't use many checks, but I
took as much information off of the checks as I could.
Name and address. That's basically all I have on there,
and that's more than I want on there. But Venmo, Zelle, yes,
there's another one, Cash App. Do you use those things? Yes? Okay.
(24:21):
And you're not worried about them? No.
Speaker 2 (24:23):
Because they've already got the information. I'm not giving them
anything they don't already know.
Speaker 1 (24:26):
Well, I don't have the other two. I
have Venmo and I have PayPal. I've never had a problem.
Speaker 2 (24:33):
No, I don't.
Speaker 1 (24:34):
I think Venmo and PayPal are co owned anyway.
Speaker 2 (24:36):
I think they are. But I used to have PayPal,
and they did something, I don't remember what it was
off the top of my head, but it actually just
kind of set me off and I closed my PayPal account.
They have had some interesting policy changes as far as
privacy is concerned.
Speaker 1 (24:52):
Really? Yeah, but I always hit agree. I mean, what
are you gonna do? I'm not going to read that stuff.
I don't have a pair of glasses big enough to magnify it, right.
Speaker 2 (25:01):
But then you have to understand that you've agreed to
whatever happened.
Speaker 1 (25:04):
Well, but that's but that's the system.
Speaker 2 (25:06):
Well, yeah, well, that's like, you know, uh, what are
they called, end user agreements? The EULAs? Who reads those?
Speaker 1 (25:17):
Nobody reads them?
Speaker 2 (25:18):
They're forty pages long.
Speaker 1 (25:21):
I want you to go, just as an experiment, to
my website WB daily dot com and click the about
us and read the terms. I wrote them. Tell
me what you think of them. They're lengthy. Now, I'm
not gonna say it all came out of my head.
I looked at a lot of others, combined a lot
of things, and my friend Todd Bowman, who started that
(25:43):
website with me, read it and said, my head is spinning.
And I don't know if any of that would absolve
us of any kind of wrongdoing, if anything ever came up.
But it's there just in case.
Speaker 2 (25:54):
You know, right, and I understand what you did.
You performed your own personal human-based AI.
Speaker 1 (26:02):
This is true. Now I want to ask you this,
and I know you're going to know the name I'm
going to bring up, Elijah Baley. You're probably going to
know that name. Does it ring a bell?
Speaker 2 (26:15):
Not off the top of my head?
Speaker 1 (26:17):
All right, it will in a minute. How much reasoning
can an AI do? That is to say, could a
I'll use some of the science fiction terms.
A humaniform robot, that is a robot that is almost
indistinguishable from a human.
Speaker 2 (26:34):
That's what the androids are supposed to be. Cyborgs are
a little bit more mechanical, but.
Speaker 1 (26:37):
Yeah, exactly, the androids are a little more chemical-based.
Because Elijah Baley is a detective.
Speaker 2 (26:47):
Oh, yes, yes, there you go.
Speaker 1 (26:51):
I knew it would come to you. Yes, Elijah Baley
is a detective who is, according to Isaac Asimov, who
created him, indistinguishable from a human being. And he's the
star of, I want to say, three novels: The Robots of Dawn,
The Naked Sun, and The Caves of Steel. I believe
that's three. I've read them all, but it's been many years. Yes,
(27:12):
I need to get them out and reread them. Caves
of Steel is great, that's great stuff. Yeah, and the
title is because people have to live down inside the
earth because of various and sundry things. But he's able
to solve murder cases and he's a robot, right? Are
we going to get to that point? I mean, look,
Lieutenant Commander Data could fight off the Klingons, you know,
(27:35):
and he was essentially an.
Speaker 2 (27:36):
AI. Again, science fiction, so, you know, it's possible to
get close to that. But to go back to your
title of this podcast, unless humans let them, and it's
entirely possible that they will make that really sorry decision,
AI is never going to replace humans. There are intuitive
(27:58):
leaps that humans make that simply can't be programmed, because
they seem to come from out of the ether,
and you just can't program the ether very well.
Speaker 1 (28:10):
Well, all right, well that's true, but I see a
lot of humans that don't make good decisions. Oh for sure, yes,
and perhaps AI can do that. But I want to
ask you about some of the robots of history. Okay,
all fictional, of course.
Speaker 2 (28:28):
We can at least go back to the one
from The Twilight Zone where the guy falls in love with
his robot companion.
Speaker 1 (28:34):
Was this the original Twilight Zone? Okay. Which episode was this?
I'm trying to recall it.
Speaker 2 (28:39):
I can't remember exactly what it was, but he was
basically kind of like Papillon. He was a criminal that
was sent off to this planet to live by himself.
I remember it now, yes, yes. And they thought it
was cruel for him to be completely isolated, so they
give him a female robot companion and he falls in
(29:00):
love with her, which is, you know, kind of
like romance novels, you know, they all have a lot
of the same themes. But in the end, when
he is released from his imprisonment, the spaceship that comes
to get him doesn't have enough power to bring her back.
Speaker 1 (29:20):
So he stays? No, I can't remember what happened.
Speaker 2 (29:23):
No, he winds up basically shooting her in the head
because he can't stand the thought of her being there
by herself.
Speaker 1 (29:31):
And she wouldn't have cared that much. Well, I mean,
she was, I'm sure, emotional in the show. She was. Yeah.
All right, folks, we're gonna nerd out for a minute.
You mentioned the original Twilight Zone. Yes, there was an
episode called I, Robot, which had nothing to do with Asimov.
In fact, Isaac Asimov complained rather loudly to his publisher
(29:52):
for basically stealing that title. I, Robot was originally Adam Link.
Adam Link was created by two brothers, Earl and Otto
Binder, B-I-N-D-E-R. And their pen
name was Eando Binder, E-and-O, Earl
and Otto, and they created Adam Link. And Adam Link
(30:17):
had apparently done something wrong and they put the robot
on trial. And this was an early season episode of
the Twilight Zone. And in the end, after they've convicted
the robot and they're about to dismantle him, the little
girl runs out in front of a truck and Adam
(30:38):
Link saves her but gets crushed and it's an amazing episode.
It's great stuff. It's probably the earliest robot that I
actually remember in fiction.
Speaker 2 (30:50):
Now.
Speaker 1 (30:50):
Of course, there's Elijah Baley in Asimov, and he
did all these robot stories. The Bicentennial Man was a
very good one, and so on and so forth. I'm
going to ask you, because I have a very specific
one in mind, that's my favorite robot in fiction. Who's
your favorite robot? Oh?
Speaker 2 (31:10):
My goodness, I can't off the top of my head.
I mean, there's.
Speaker 1 (31:21):
All right. Let me give you mine. Maybe it'll spur
your thought a little bit on who yours might be.
You attempted to say Robbie from Forbidden Planet nineteen fifty six,
the robot that drank a little too much and got drunk,
which is not possible. But that was a great movie
and probably the inspiration for Star TREKJJ Abrams, JJ Adams,
(31:46):
I'm sorry, JJ Adams was the commander of the starship.
The crew looked an awful, awful lot like Star Trek, and
they had a robot, and they went to the planet
Altair IV, which turned out to be an incredible AI.
This is nineteen fifty six, left by the Krell and
(32:07):
that is probably the most cerebral or cerebral, however you
want to say it, science fiction movie of the nineteen fifties.
But my favorite, my favorite movie robot will always be
Gort from The Day the Earth Stood Still. Oh yes,
Klaatu barada nikto. I just read the source material last week.
(32:32):
I pulled it out again and read it, the
original story called Farewell to the Master by Harry Bates,
and it is nothing like the movie. It's a very good
short story. The robot is named Gnut, G-N-U-T, in
the story. They changed the movie around. The movie's actually
better than the source material, which is an odd
(32:52):
instance of that. But when Michael Rennie as Klaatu does
that speech at the end of The Day the
Earth Stood Still, where he tells that professor what
Gort is capable of, he could destroy the world. And
it's an amazing movie. It's my favorite movie of all time,
The Day the Earth Stood Still. I've seen it
(33:14):
a hundred times. If you haven't seen it, you've got
to see it. I've got DVD, I've got Blu-ray,
I've got every format I can find. It's a great movie.
But there were others. There was the silly robot, which
really didn't have a name, in Lost in Space. Oh, yes, yes.
And I don't know if you remember.
Speaker 2 (33:32):
Questor sounds familiar.
Speaker 1 (33:34):
Questor was a Gene Roddenberry robot who was being
put together in the lab, and his
brain was programmed to respond to his master, and after
they closed the lab up one night, he was able
to automatically finish assembling himself and create a face, which
(33:57):
was that of Robert Foxworth, the actor, and go
and look for his master. And Questor, of course,
was the prototype for Lieutenant Commander Data. Okay, and, uh,
I have a copy of that as well. That is
a really good story. It did not sell as a
pilot because of creative differences within NBC. But robots have,
(34:18):
for the longest time, you know, been, uh, fantasized
and, I would say, humanized in fiction. If you can't
humanize them, you don't really have a story, right?
Speaker 2 (34:31):
You asked me what my favorite robot was, and this
is going to sound really off the wall, but the
one that comes to mind that I still quote is
Number Five. Do you remember that one? Oh,
Speaker 1 (34:43):
the movie? Yeah, gosh, I can't remember the title of
the movie, but I remember Number Five.
Speaker 2 (34:49):
Yeah, yeah, it's and I can't remember.
Speaker 1 (34:52):
The title of the movie. I have the mental picture
in my head.
Speaker 2 (34:57):
Yeah. They were designed as military units. I mean,
they were able to fire rockets and all that kind
of thing. And the typical kind of Disney-ish twist
to it is that this robot gets hit by a
bolt of lightning and Number Five is alive, you know,
(35:21):
and one of the places where you
realize that he understands what he is, is because he's at
the house with Ally Sheedy, the actress. She's the person that
he winds up being with, that he finds. She says,
(35:41):
the gentleman, his name has gone from me, that was
his creator, talks about disassembling him, you know, if
something doesn't happen, he's going to be disassembled. And he says, well,
the robot says, disassemble, what is that? And so he goes, well,
you know, take it apart. You know, he was
assembled alive, and then he realized that to disassemble would
(36:05):
kill him, and, you know, freaks out and runs out
of there, you know, wheels himself out of the house.
And really, his creator doesn't believe that he's really
sentient until some of these incidents happen, and then,
instead of trying to get him back to the lab,
he starts fighting for him.
Speaker 1 (36:22):
I saw it in the theater, which tells you how
long ago it was.
Speaker 2 (36:25):
It's a long time ago, yes.
Speaker 1 (36:26):
But I would love to see it again. Yeah, it's
a great movie. I'm wondering what you think about the
possibility of robots doing dangerous things that we don't want
to do. If all countries would agree, could we just
send robot soldiers into war?
Speaker 2 (36:47):
The answer is, could we? Yes. I mean, robots already
do dangerous things. They already use them for, like, disarming
bombs and things like that, and mining operations and all
kinds of things that are dangerous to humans. Could we
send them to war? The answer is yes. But there's a great
Star Trek episode where the machines fight the wars, and
(37:10):
I can't remember the title of it, but it's a
case where the machine decides where the strikes are,
and then if you're in that strike zone, you're supposed
to just walk into this disintegration booth and die. And
the way Gene Roddenberry's crew solved that problem is
(37:31):
they made the war real again. And so if you
had robots battling each other in a war thing, it
wouldn't solve any wars. There's no human cost to it.
Speaker 1 (37:40):
Us guys have certain body parts that you ladies don't have.
My barber had to have that certain body part operated on,
and a robot did it. The da Vinci, I believe, is
what they call the robot. Instead of being out for
(38:02):
two months or so, he was only out of work for days.
It's amazing, the precision they can achieve sometimes.
Speaker 2 (38:12):
Oh yeah, for sure. I mean they are able to
work in much smaller spaces with much greater precision than
human muscle control will allow. For sure.
Speaker 1 (38:22):
Yeah, it really is crazy. All right, let's talk a
little bit about what's in the news. I'm sure you've
followed the Charlie Kirk situation. Yes. Assassinations in
America is the title of a column I wrote about
this last night and put up on wb Daily dot com.
(38:46):
I don't know, we seem to have more and more
forward movement, I guess, in technology. We have X, we
have Facebook, we have Instagram, we have TikTok, right?
But sometimes our humanity seems to be lacking.
Speaker 2 (39:07):
Yes. And in my opinion, it's all of those social
media things that you mentioned that feed that lack, because,
and there have been studies on this, particularly with
young people, where they use the AI as their companion,
I mean, and they completely, almost as much as possible,
(39:29):
disassociate themselves from people, because it's like having someone as a
companion that really and truly isn't your friend, but they
tell you everything you want to hear. I mean, there's
definitely people that are extreme.
Speaker 1 (39:46):
Oh, aren't there female-oriented AI companions
now on some of these sites that will tell
some guy that she loves him and she's really an
Speaker 2 (39:59):
AI? Oh yes, for sure. And so having that kind
of thing, and it happens with people too. You know, if
you're somebody that only communicates with people that have your
same opinions and outlooks and political bent, it's really easy
to get completely off center as far as your critical
(40:25):
thinking processes and think, well, this whole group of
people feels the way I feel, so they must be right,
and anybody that doesn't feel that way is wrong, which
is part of what fed into what happened here.
Speaker 1 (40:35):
You take this suspect, Tyler Robinson. I wonder if he
was radicalized online. He doesn't appear at this point, I
don't believe, correct me if you've heard something different,
to be a student at this college.
Speaker 2 (40:48):
He was for one semester, all right, and he was
also in another school doing something with electronics.
Speaker 1 (40:54):
What do you make of the current violence that we
see in video games? Because this almost ties into our
AI thing. If you play a video game where you're
wiping out somebody constantly, I mean even Pac-Man, you know,
was kind of a start of that, but now they've
gotten really graphic. How does that work on a young mind?
Speaker 2 (41:18):
There have been a lot of interesting studies on that, I
mean, for a long time. There have been studies on this
for a long time. As long as
those violent video games have been out, you know,
they were saying that that really does impact you and makes
you more likely to commit violence. More recent studies actually
show that that gives you an outlet for those feelings
(41:39):
and it doesn't impact your actual actions as much as
was previously assumed.
Speaker 1 (41:45):
I'm old fashioned. I still run three miles two times
a week on a treadmill at the gym at my apartment.
In Austin. And when I get done, I go into
the next room and I visualize everybody that I don't like,
and I pound away at that punching bag.
Speaker 2 (42:04):
There you go, let it all out.
Speaker 1 (42:05):
And when I walk out, I feel really good about it.
And I haven't hurt anybody. I skinned up my knuckles,
maybe, yes, but I mean there are ways to get
frustration out without killing somebody. I don't know what we
do about this. Maybe AI can solve this problem for us.
Speaker 2 (42:21):
I don't think so. Now this one is, in my opinion,
really a mental health issue. I mean, nobody, nobody in
their right mind that has any kind of compassion or
empathy if you want to use that word, would actually
(42:43):
go out and kill somebody for an opinion.
Speaker 1 (42:47):
Look what we've had in the last few months. We've
had Trump shot in the ear, we've had Trump almost
shot again. We've had Charlie Kirk assassinated. We've had
a young Ukrainian refugee on a light rail in Charlotte assassinated.
That's apparently a racial motive there, that's what I think, anyway.
(43:10):
We've had a guy assassinate an insurance executive on the
streets of New York. We've had soon-to-be newlyweds,
almost married, come out of a Jewish meeting and get
assassinated on the steps. My gosh. We had a representative,
(43:32):
a state representative and her husband get assassinated in Minneapolis.
They were Democrats. Most of this is left on right,
some of it's right on the left. But it's hard
for me to understand this and what's happening in America
until I look back at my youth when I was
a kid in the sixties. Let's see, we had a
(43:53):
president assassinated, we had his brother assassinated, we had a
civil rights leader assassinated. Malcolm X was assassinated. There was
an attempt on George Wallace. George Wallace. That's what I
was trying to think of. It's not like this hasn't
happened before. Why does it come in periods and spurts?
Speaker 2 (44:18):
Well, kind of, you know, monkey see, monkey do, I
think. You know, if you see that that got a
result that maybe you approved of, you think, well, you know,
that could work for me too.
Speaker 1 (44:29):
But you know, April, as I mentioned in the column
I wrote last night, when I was motivated to do this,
this is changing history. I mean, if you go out
and murder anybody, it doesn't matter who it is. It
could be a gang member on the South Side of Chicago.
You don't know what that gang member would have eventually done.
(44:51):
It could be something really bad. It could have been,
I mean, if somebody had murdered Mohamed Atta, would
we have still had nine eleven? But on the
other hand, when you murder John Kennedy, obviously you have
other hand, when you murder John Kennedy, obviously you have
changed history. You have given us a president who's going
to come in and do a bunch of things we're
still dealing with today, Lyndon Johnson. If you murder Charlie Kirk,
(45:14):
a lot of people think that J.D. Vance is the
heir apparent and that Charlie Kirk would have run after
that and probably could have gotten elected. He would have
millions and millions of supporters by that time.
Speaker 2 (45:26):
Yeah, I've read that as well.
Speaker 1 (45:27):
So you can't say that this guy, this suspect,
didn't accomplish anything if that's what he was trying to do.
Speaker 2 (45:37):
Oh no, he definitely had an impact. I don't know
if accomplished is the right word, but he's definitely had
an impact on our world, because Charlie had a very
big impact on our world. I mean, I think
that there is a lot to be said for the
disintegration of the nuclear family, the fact that there are
(46:01):
so many fewer people that are actually involved, not necessarily
in an organized religion, because, you know, I grew up Catholic,
went to a Methodist church for a long time, back to
the Catholic Church, and it's just, there's
a need for people to have a connection with something
a need for people to have a connection with something
(46:23):
they feel is bigger than themselves, because just being human
can be really overwhelming sometimes. And if you don't have
that entity person, you know, Cosmos or whatever it is
that you perceive the greater power to be to say,
(46:43):
oh, please, you know, even atheists will be like, God,
please help me. If you don't have that, and you
truly feel you are completely alone in the world, then,
I mean, the other side of that is, what
impact do your actions have on you? Not nearly as
much as they would on somebody that has a
(47:04):
belief in a higher power.
Speaker 1 (47:06):
Oh, if I'd killed somebody, I wouldn't be
able to exist. Maybe that's why so many of these murderers shoot
themselves afterwards.
Speaker 2 (47:13):
Yes, yeah. And as you said, what point is there
in living if that's all there is?
Speaker 1 (47:21):
All right. We started this podcast by talking about AI,
and you have made it known you're not the biggest fan.
Speaker 2 (47:31):
Not a fan at all.
Speaker 1 (47:31):
Now, let me just end it this way. Do
you think this is going to be a
better world if we just decided to stop doing this?
I mean, we're building data centers all over the country
that are sucking up energy by the megawatts, including right
here where we're doing this podcast. There's one under construction
right here up the street. You can't undo
(47:57):
AI at this point?
Speaker 2 (47:58):
Can't now. The Pandora's box is open. You're not going to
undo it. And there are some very good uses for
AI, again, as long as you keep seeing it as
a tool and not give it more power, so to speak.
Speaker 1 (48:11):
All right, Actually, one last question. I saw a YouTube
video recently of Elvis singing a song and I thought,
wait a minute, I don't remember the original version of
that song being out in nineteen seventy-seven or prior.
And as I scrolled down, no, that was not Elvis,
(48:32):
that was AI. It was AI. Is the point going
to come where, fifty years from now, we can have
a new Beatles album each year if we want to,
because AI can just create it?
Speaker 2 (48:44):
Could it happen? Yes, it could happen today. Now whether
or not people accept it is another story.
Speaker 1 (48:51):
All right. My final question: what is art going to
be if we don't know if a person wrote and
recorded that song and worked on it like I work
on a song when I write one and try to
record it and make it sound the way I want
it to sound, or if AI simply did it in
(49:11):
less than a minute. I mean, what is art? Is
there any use for people to write novels anymore? Is
there any use for people to write songs? If AI
can do it quicker and better?
Speaker 2 (49:22):
Well, I disagree with the better. I mean, better than
some people could do it, yeah, definitely. But I
think what that would lack, and this is going
to sound kind of trite, is the human element; it
just won't have it. AI can mimic emotions, but
I don't think they'd be able to communicate it in
(49:43):
a way to the people who can really feel it.
I think people still can tell the difference, if they
bother to take the time, between what's been AI generated
and what a human can do. I mean, we were
talking before about being involved in a production that
is summarizing some Lewis Carroll, Alice in Wonderland, and when
(50:07):
you turned it over to Claude or ChatGPT to do it,
it was hard.
Speaker 1 (50:12):
Well, I'm not sure that an AI could write a
line like 'twas brillig, and the slithy toves did
gyre and gimble in the wabe, all mimsy were the
borogoves, however the borogoves goes. Yeah, I love Lewis Carroll.
Speaker 2 (50:27):
Yeah, and that was Alice Through the Looking-Glass. But
we're... okay, all right, all right, you win, you win.
Speaker 1 (50:33):
I think I've been out-nerded, folks. I don't know.
That was fun. I love all the robot stories. And
to be, what I have called it in my own mind,
literally living in the future, where so many of,
I mean, even the Star Trek flip phones are already outdated.
Speaker 2 (50:52):
Oh god, yes, and the Razr was my favorite.
Speaker 1 (50:55):
Yes, but it is so interesting to think about it
and where we've been and where we're going with this.
April Cousins, thank you for your time.
Speaker 2 (51:04):
You're very welcome. I appreciate it.
Speaker 1 (51:05):
All right. You can read my columns at WB
Daily dot com. You can hear my radio show Monday
through Friday from seven o'clock until ten o'clock each morning
on KJCE. That's Talk thirteen seventy in Austin or Talk
thirteen seventy dot com. I've got a page on Amazon
dot com where all seven of my lovely books are
(51:27):
there for sale at outrageously high prices. And until next time,
this has been Planet Logic. Be logical, and we'll
see you next time.