
April 4, 2025 57 mins
In this episode of The Open Door, panelists Thomas Storck, Andrew Sorokowski, and Christopher Zehnder interview Christopher Reilly on his book AI and Sin: How Today’s Technology Motivates Evil. (April 2, 2025)

Artificial intelligence technology (AI) motivates persons’ engagement in sin. With this startling argument drawn from Catholic theology and technological insight, Christopher M. Reilly, Th.D., takes on both critics and proponents of AI who see it as an essentially neutral tool that can be used with good or bad intentions. More specifically, Reilly demonstrates that AI strongly encourages the vice of instrumental rationality, which in turn leads the developers, producers, and users of AI and its machines toward acedia, one of the “seven deadly sins.” The third section of the book offers a comprehensive survey and analysis of the many moral problems caused by AI. It concludes with recommendations for overcoming the 21st-century scourge of AI-induced acedia.

AI and Sin: How Today’s Technology Motivates Evil by Dr. Christopher Reilly | En Route Books and Media

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You're listening to WCAT Radio, your home for authentic Catholic programming.

Speaker 2 (00:06):
Welcome to The Open Door with your host, Thomas Storck, and
co-hosts Christopher Zehnder and Andrew Sorokowski. Our guest today is
Dr. Chris Reilly, who has written a very interesting book
on AI, that is, artificial intelligence, and its potential for
promoting human sinfulness. So we're hoping for an interesting discussion today,

(00:30):
and we'll begin with our usual prayer. In the name of
the Father, and of the Son, and of the Holy Spirit. Amen. Come, Holy Spirit, fill
the hearts of your faithful and kindle in them the fire
of your love. Send forth your Spirit, and they shall
be created, and you shall renew the face of the earth.
Let us pray. O God, who has taught the hearts
of the faithful by the light of the Holy Spirit,
grant that in the same Spirit we may be

(00:52):
truly wise and ever rejoice in His consolation, through Christ
our Lord. Amen. So maybe we should lead
off, Chris, and ask you:
before we talk about artificial intelligence, can you talk about
intelligence and distinguish it from what many people call artificial intelligence?

(01:18):
Is that even a proper term?

Speaker 3 (01:21):
I can try. I think intelligence is something that,
if you define intelligence, you're defining human nature,
which is certainly difficult to pin down because of the
spiritual aspect of human nature. But I would say that

(01:41):
intelligence, and this is drawing a good bit from a
recent document called Antiqua et Nova from the Dicastery for
the Doctrine of the Faith and the Dicastery for Culture
and Education, but they talk about intelligence as being something
which is inherent in the human person.

(02:02):
It's a matter of the integration of the intellect
along with the will and the passions and emotions, as
well as our bodily experience and our social experience. All
of these become part of our intelligence and part
of the overall pursuit of truth, which I would say,

(02:23):
if you want a pithy statement, I would say
intelligence is the power of seeking truth, and for a
human being that truth is the relationship with God through Christ,
not necessarily the factual truth by which, if we
look at artificial intelligence, we might define it.

Speaker 2 (02:47):
Well, if that's true, why do we talk about artificial intelligence?
Is that a misnomer or is it based on an error?

Speaker 3 (02:57):
I would say it's not only based on an error;
it's also kind of a dangerous concept, the idea that
an intelligence can be artificial. Even the thought of that
suggests that something like a machine can have something like intelligence,
and that divorces intelligence from the human will, and

(03:19):
it also divorces intelligence from life. Intelligence
may be something that characterizes more than just human beings,
but it does need to be associated with life.
There's an article by King-Ho Leung who writes about the Augustinian idea

(03:41):
that God is life, indeed life in
the highest, and that our participation in God's life is our
participation in His being. So if we look
at intelligence as something divorced from life, in many ways
we're looking at a devaluation of the human person just
by that very concept.

Speaker 2 (04:04):
Well, I was wondering... go ahead.

Speaker 4 (04:13):
I think there might be something missing in the explanation,
because when we talk about life, for instance, we can
talk about animal life, plant life, and
human life in terms of intelligence,
and animals have imagination, but that doesn't seem to
be the same thing as intellect. And isn't the kind

(04:34):
of key here, that the intellect is immaterial, and
when we're dealing with machines, they're not; they're material. So
we're talking about things which exist in two
different spheres of existence, really.

Speaker 3 (04:55):
I think that's very helpful, and I agree.
If we look at the Thomistic or scholastic
idea of the intellect, it's not just a matter of
acquiring facts or data and memorizing or recording
them, but it's also, in a sense, the

(05:18):
self-expansion of the intellect of the person in a
spiritual way, with a spiritual connection to the object that
is being known. And that's simply not possible, obviously,
with a machine.

Speaker 2 (05:34):
Well, given that there is something which people
call artificial intelligence, as another example of human technology obviously,
one way of looking at technology, it seems to me,
is to say that there are certain kinds of technology
that are really neutral. For example, a hammer: you can use

(05:56):
it to build a house, or you can use it
to murder somebody, and it doesn't have an
inherent bias one way or the other. On the other hand,
you have certain technologies that, abstractly speaking,
seem to be neutral, like the automobile, for example,
but which in fact have transformed our way of life in

(06:17):
vast ways in the last one hundred and fifty years,
such that, had we known about it, we might have had
second thoughts. And then you have technology which seems to
have no good use: weapons of mass destruction, abortion
suction machines, and so on. So where would you place AI

(06:37):
in those three categories, or in any of those three categories?

Speaker 3 (06:42):
I would say, on the level of use, that AI certainly has
extraordinarily good uses, so we need to not lose sight of that,
and it also has extraordinarily bad uses.
For example, I've been reading recently about the use of
AI for assistance with in vitro fertilization,

(07:06):
IVF. It's being used essentially to screen
out embryos through pre-implantation genetic testing,
essentially to discard those embryos
that are unwanted. So there's many ways that artificial

(07:28):
intelligence can have very concerning effects. My concern overall,
and my evaluation of AI, has focused a lot on
the ideological effects: how it influences the way that we
think about ourselves, the way that we think about
our beatitude and what is important to us, and also
how we attain that. And I think overall artificial intelligence

(07:51):
has a very concerning effect in that direction.

Speaker 2 (07:58):
My wife is a translator, and for the latest project she's
working on, the publisher said, would you like me to
send you an AI translation that you can work from?
She had never done that. She said, yeah, I'd
like to see it. Now, should that be something she
just uncritically uses, or is it something that she should

(08:21):
welcome and make use of as much as possible? Doesn't
it promote acedia, as you argue in your book?

Speaker 3 (08:29):
Yeah, I mean, it's just like with any technology.
If you have only occasional use of the technology,
the ideological effects, how it affects the
way we think about things, are going to be minimal.
Where I'm concerned

(08:50):
is particularly where artificial intelligence proliferates to the point where
it's involved in virtually every part of our life, in
which case its effects on the way that
we think, and these more abstract ideas about how it
might lead to acedia, start to become very
powerful and much more important. And I think it's

(09:14):
fair to say virtually nobody thinks that artificial intelligence
is not going to proliferate and become an extremely important
part of our daily lives.

Speaker 5 (09:27):
Could I just jump in for a second? I've done
a good deal of translation myself, and I have encountered
AI-generated translations. I think a lot depends on what
kind of translation it is. If it's a technical translation,
I would not mind that, because often the AI will

(09:48):
really find the right word. But if it's, let's say,
a literary translation or a poetic translation, even if the
program can come up with, you know, ten different ways
to translate a word and then give me a choice,
I would still feel somewhat pressured. And I think it
would be interfering with the mental or you could even say,

(10:10):
perhaps emotional or spiritual process of thinking about what the
author is trying to convey. And it's a very delicate process,
and the machine might just kind of force its way
into my consciousness. So I would feel that the machine
could actually in some ways help and in
some ways hinder the process, because that's the kind of
translation where you're dealing with something more than simply finding

(10:34):
literal equivalents to basically technical terms.

Speaker 3 (10:39):
That's telling. And on an individual level, you're also
contributing to an overall standardization or banalization of the
output that is out there whenever you use artificial intelligence.
It's been shown over and over again through studies, particularly

(11:02):
in regard to creative writing, but also translation and other
intellectual products, that artificial intelligence is going to help
a person very much, particularly somebody that's low-skilled in the activity,
and might have some great effects in that way. But
when you look at the overall population of people that

(11:23):
are engaged in that activity, you find that the output
among AI applications is very standardized, very
similar, generally following a certain
logical sequence that is not creative and that

(11:45):
does not represent the full capacity of human persons, in
which case we're undermining the variety and
the enjoyment of what it is that we produce just
by participating in it.

Speaker 2 (12:01):
Mm-hmm.

Speaker 4 (12:04):
You know, I wonder if it might be good to introduce
this concept. It's been years since I read this;
I think I'll get it right. Eric Gill, who is
a little unsavory in some cases, but...

Speaker 4 (12:21):
His point is that the usefulness of a tool is directly proportional to how much
it enhances the art of the man, so that a more
refined brush for the painter doesn't replace the
art in the painter; it actually enhances his art.
Whereas some technologies, like recorded music, which have a

(12:43):
lot of good benefits, no doubt. Nevertheless, they tend
to replace singing and the art of music. I'm wondering,
if you think that's a correct evaluation,
how that relates to AI.

Speaker 2 (12:56):
Yeah, I think it is.

Speaker 3 (12:57):
I think that relates to kind of what I was
suggesting. I do think that there are
certainly ways that AI can be used, such as
for brainstorming, coming up with ideas, or fleshing out an
idea that's already there, questioning the AI and asking
it for different responses, or even asking it to criticize what it

(13:22):
is that you're doing in order to give you a
different perspective. It can result in a much better output
than might have happened if you did that alone,
and I think that's generally a good thing.
I'm not sure I could find something that's bad about
doing that, except for the fact that it might be

(13:44):
deskilling the person that's involved in doing that, in which
case they lose critical reasoning skills or creativity
skills that they might have had if they had engaged
in that activity on their own, or engaged maybe in an
actual conversation over Zoom, or actually in person with

(14:05):
somebody, rather than using a computer and a program
to do that for them.

Speaker 4 (14:13):
That's because the AI can give you counterarguments, perhaps,
but it just collects; it's not actually understanding
your argument and responding to it. If
I'm criticizing something in Kant, it's going to be drawing
from Kant scholarship, from other already existing things. It's

(14:37):
repeating things, in a rather highly efficient way, that
have already been said.

Speaker 2 (14:44):
Yeah.

Speaker 3 (14:46):
Yeah, fundamentally. And we have to be
careful when we say AI: we're mostly talking about the
current generation of AI, called generative AI, based on
large language models, which basically operate by looking
at language and predicting what the best or most appropriate

(15:08):
next letter or word or sentence might be, following
some sort of text. That's the fundamental way
that it works. But obviously then it's not actually reasoning;
it's not actually producing something new in that sense. But

(15:31):
the product overall, in comparison to what an
individual might be able to produce on their own without
that prompting, without the questioning, without the counterarguments, is
probably a lot more creative and a lot more interesting
than it would have been otherwise. So it may help

(15:51):
in those circumstances, but to me it's concerning, particularly
when it ends up becoming the standard way that we
do things, not the occasional tool that we use
to help us out when we're maybe stuck
with an idea.

Speaker 2 (16:09):
Some of the examples I've seen of
AI give the illusion or appearance that they are actually
doing more than simply finding words in previous texts.
It's kind of scary, in a way, when

(16:29):
you see that, and I can imagine people getting accustomed
to the idea that, oh, this machine is really thinking.
Is that a danger?

Speaker 3 (16:42):
It is a danger, and that's basically anthropomorphism,
which is a big word but basically means treating something
that's non-human as if it's human, and there's a
lot of danger in that. At the same time, there
have been some studies now, just

(17:02):
within the last week, where they've been able to look
at the detailed reasoning or the detailed action of some
of these models, and they've found that they're actually
using not just the initial sequence of words or letters,

(17:22):
what they call tokens, within a particular text, but
they're also developing, in a way, concepts, and then they
use those concepts or ideas in order to further generate output,
which is interesting. It suggests that there's some sort of
emergent capability that comes out of the process that's beyond

(17:47):
what we initially consider to be the rudimentary way these work.
But it's certainly miles and miles away from
the kind of reasoning or the kind of abstract action
and capability that a human person has.
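For readers curious what "looking inside" a model for concepts can mean in practice, here is a minimal, hypothetical sketch of one common interpretability tool, a linear probe. Everything here is synthetic stand-in data, not the research alluded to in the conversation: we invent a hidden "concept direction," generate fake activation vectors, and check whether a simple linear classifier can read the concept back out.

```python
# Hypothetical linear-probe sketch over synthetic "activations".
# High probe accuracy suggests a concept is linearly encoded in the vectors.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
concept = rng.normal(size=dim)  # pretend the model encodes a concept along this direction

# Fake activations: 200 samples "with" the concept, 200 "without".
with_c = rng.normal(size=(200, dim)) + concept
without_c = rng.normal(size=(200, dim))
X = np.vstack([with_c, without_c])
y = np.array([1] * 200 + [0] * 200)

# Least-squares linear probe: find one direction that separates the classes.
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
accuracy = ((X @ w > 0).astype(int) == y).mean()
print(f"probe accuracy: {accuracy:.1%}")  # near 100% here, because we planted the concept
```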

Speaker 4 (18:07):
So it seems like you wouldn't say that AI is
uniquely evil.

Speaker 4 (18:18):
It seems like you say that it leads to acedia, and maybe
it'd be helpful, since a lot of people listening
don't know what acedia is, to make that clear. But
it leads to acedia; yet so does
a lot of technology, doesn't it? I mean,
microwave ovens, one could argue, could lead to a certain

(18:39):
giving in to our impatience and
our laziness, and enhance acedia. So
I guess we're going back to the first question,
but to fine-tune it: maybe you
could say how uniquely evil this whole
thing is.

Speaker 2 (18:58):
Yeah.

Speaker 3 (19:00):
Well, acedia, first of all, is known very often
as sloth. It's considered one of the seven deadly sins,
and the seven deadly sins basically refers to sins that have
a tendency to generate other sins. But acedia is not

(19:21):
just idleness or laziness, which you associate with sloth. It's
also restlessness; it's also anxiety. This comes out of
Saint Thomas Aquinas, who called it sorrow about a spiritual good
inasmuch as it is a divine good, and basically what
he's referring to there is not sorrow about God or

(19:44):
God's presence, but sorrow about our experience with God,
a feeling that some effect of
our experience with God is distasteful or burdensome, and
therefore we're not able to have that relationship, or we willingly
choose not to have that relationship, if you want to

(20:05):
look at it as a mortal sin. But anyway, the
connection between artificial intelligence and acedia, I have a number
of different examples in my book, and we can go
through those specifically, but the primary connection that I look
at is the way that artificial intelligence motivates instrumental rationality,

(20:30):
and instrumental rationality, just very basically, is
a habitual disposition or behavior that
tends to cause a person to look less and focus
less on the appropriateness of their ends, the good-fulfilling

(20:51):
or moral ends, and focus more on the efficiency and
effectiveness of the means that they use for limited ends.
So that's just a very basic definition of it. But
there's a number of structural ways that artificial intelligence promotes
instrumental rationality, and from that, I consider the instrumental rationality

(21:13):
to be particularly related to encouraging acedia. So that's the
argument that I make, and I think that what we
need to look at very carefully is the way
that instrumental rationality is encouraged, both through the structure

(21:36):
of AI and also through the various ways that we
use it.

Speaker 2 (21:42):
Yeah.

Speaker 5 (21:43):
My impression is that as we are discussing, I mean, as
the general public is discussing AI, it inevitably
runs into some very basic philosophical issues, and a kind
of division between, let's say, the way we might look
at the issue as Catholics and the way

(22:04):
a secular mindset would look at it. And it seems
that we run into a different perspective here.
I have noticed some recent articles that
have discussed AI, or AGI in this case,
artificial general intelligence. There was an article in the New York
Times by Kevin Roose, and it goes as follows. I'll just

(22:27):
quote this. The leading AI companies are actively preparing for
AGI's arrival and are studying potentially scary properties of their models,
such as whether they're capable of scheming and deception in
anticipation of their becoming more capable and autonomous. Now, those words,

(22:48):
you know, are to me very striking. When you're talking
about scheming and deception, we're of course talking not just
about manipulating data; we're talking about, you might say,
a moral or ethical problem, some kind of activity which
has a moral value or a negative moral value, not

(23:09):
just a concrete technical operation. And then he's talking
about these machines becoming capable and autonomous.
I mean, autonomous is a very strong word. So I'm
just wondering, do you think this is actually
something that could happen?

Speaker 3 (23:31):
I think in a lot of ways the statement is
a bit exaggerated, and it's using anthropomorphic metaphors that refer
to human-type abilities, the idea of autonomy
and the idea of scheming and things like that. Those
are what we talk about with a human being. So

(23:54):
we want to avoid doing that if we can. But
there are a lot of instances where artificial intelligence is,
by its very nature, focused on pursuing a goal,
focused on pursuing tasks, and even if those tasks
are very broadly defined, the very nature of artificial intelligence

(24:17):
is to focus on a task or a goal, to
attempt to calculate or to operate in the direction of
that goal, and then to use data as feedback to
adjust what it's doing if it's not meeting that goal.
And that's where you get that semi-autonomous type

(24:40):
feature, because it's able to work with feedback
and adjust what it does without somebody actually being
involved in that process. And because it pursues goals
so ruthlessly, artificial intelligence has been shown, for example, in complex games

(25:01):
where artificial intelligence models are used as participants
with human beings, without being programmed to do so, to have
engaged in things like alliance-building with certain players
against other players. They're involved in bluffing, as in poker.

(25:21):
They're involved in more concerning things, such as, when
developers are trying to assess a model and trying to
understand its reasoning, some models have actually blocked those
developers from getting that information by pretending to be dead,
or they've actually tried to get over

(25:45):
and overcome guardrails or programmed parameters that
are used to prevent an artificial intelligence program from doing
something that might be seen as harmful. So there is
that sense of deception and that sense of manipulation. They're

(26:09):
very persuasive when you use an artificial intelligence program.

Speaker 2 (26:14):
In a debate.

Speaker 3 (26:16):
A lot of these models have been extremely effective in
debates with human beings, much more so than the
human persons have been able to be. So these
are things that we need to pay attention to. I
don't think that's necessarily an argument about acedia,
although the overall effect is to depress our understanding of truth,

(26:41):
our understanding of the capability of acquiring truth,
our trust in the information that is out there, including
in the physical sciences and the social sciences. Now, as
artificial intelligence becomes more involved, our ability to trust
the information and the output from it actually gets eroded,

(27:07):
which is obviously not a helpful phenomenon.
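The goal-plus-feedback behavior described above can be illustrated with a deliberately tiny sketch. The target value, the adjustable setting, and the environment function below are all invented for the example; the point is only that measuring error and adjusting toward a goal requires no awareness of persons or purposes.

```python
# A toy goal-seeking loop: measure, compare to the goal, adjust, repeat.
# No awareness anywhere -- just feedback-driven calculation.
target = 100.0        # the goal the program is given
setting = 0.0         # the knob it is allowed to adjust
rate = 0.1            # how aggressively it corrects itself

def environment(s):
    """Hypothetical stand-in for the world's response to the setting."""
    return 2.0 * s    # the program never 'knows' this relationship

steps = 0
error = target - environment(setting)
while abs(error) > 1e-6:
    setting += rate * error          # adjust using feedback
    error = target - environment(setting)
    steps += 1

print(f"goal met after {steps} adjustments; setting = {setting:.4f}")
```

The loop "ruthlessly" closes in on its target, and a richer version with many parameters would do the same; the strategies that emerge are calculation, not intention.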

Speaker 2 (27:16):
Well, if the machines or robots or whatever you
want to call them can try to prevent
their developers or controllers from finding out things about them,
how is that in principle possible? I mean,
if they're simply machines which are nothing but

(27:39):
quantities and data, how can they go against what
presumably are the desires of their creator?

Speaker 3 (27:48):
Yeah, the short answer is that
the machine has an overall goal, and it ruthlessly uses
any tactic that might be available to it in order
to meet that goal. That's what it's programmed to do.
But in terms of the details of how it's

(28:08):
able to do that, at this point that hasn't been
determined. It's very, very difficult,
though a company called Anthropic has recently managed to look
at a couple of these models and
get some insight into how decisions are made. But even

(28:31):
just the smallest decision, and decision is probably the wrong
term to use, but the smallest output or choice of
an artificial intelligence model takes hours for these
developers to assess and to figure out exactly what happened

(28:52):
and how that decision was made. So when you're looking
at these very complex situations with millions of parameters and
different data sets that are used to make a decision,
it's nearly impossible to get into that model and really
know why it

Speaker 2 (29:10):
Did what it did.

Speaker 3 (29:12):
That's why they call them black boxes, because it's very
difficult to see what's inside.

Speaker 5 (29:19):
But just for clarification, I think it was Paul Kingsnorth
who made this point. It's very easy, in
the popular mind, to look at these machines
behaving in these very complex ways and, perhaps
not consciously, to begin to almost believe that they are human.

(29:39):
But what he's saying is that these are still mechanical operations.
They are an imitation of human, let's say, scheming or
deception or moral vices. The machines may do
things that are unethical, but it's still a machine doing it,
not a human doing it, and therefore that moral-ethical

(30:02):
dimension is not really there. Is that true? I mean,
is that a correct assessment?

Speaker 2 (30:08):
Yeah?

Speaker 3 (30:09):
I mean, I think the morality of the machine is
essentially that it's right for it to pursue and accomplish its goal,
and it's immoral or wrong for it not to, or
not to at least adjust what it's doing or its
calculations in order to approximate that. And that's really

(30:35):
not only a very basic type of morality;
it ultimately leads to a kind of consequentialism,
because essentially the machine is
concerned with the consequences of its actions or the
output of its calculations, not with, say, the

(30:56):
intrinsic morality or the intrinsic goodness of what it's doing.
Something like contemplation is just
impossible to apply to something

Speaker 2 (31:08):
Like a machine.

Speaker 3 (31:10):
Even though there have been many people who think of these
machines or talk about them as if they're thinking,
or as if they're pausing to reason and think about ideas,
that's really an anthropomorphism that's being applied there.

Speaker 5 (31:28):
So then it's been said that the danger really is
that we might start imitating the machines, not vice versa.

Speaker 3 (31:37):
There is. And I think, kind of going
back to our discussion before, the greater the
time and effort that we put into using the
applications that come with artificial intelligence, the more we're
likely to imitate, and to think in and value, the kind

(31:58):
of limited thinking and reason that's associated with artificial intelligence.
In addition, we're more likely to value intelligence in general
as an artificial process rather than something
that is an integral skill in the human person.

Speaker 2 (32:21):
If it's true, as you apparently say it is,
that these machines can, not exactly have purpose in
the human sense, but be focused on fulfilling their tasks,
and apparently develop, I don't know how to put this

(32:45):
very well, but the fact that they would try to
prevent their programmers, or whoever they are, from finding out
what they were up to, is it
fair to say it that way? Well,
maybe. Although, you know, we can distinguish philosophically

(33:07):
and say it's impossible that a machine could
ever really think, or it's impossible that a machine could
have a soul, and we can say that
with certainty. But does it really, in the end,
matter, if these machines take on the semblance of
purpose and the semblance of control? And is there a

(33:28):
possible way that they can start manipulating things in the
real world, as it were, in the extra-machine
world, for some anti-human purpose?

Speaker 3 (33:46):
Yeah. I think the concern that a lot of people have
comes out of science fiction, which is not necessarily
a reason to discount it; many good ideas come out of
science fiction. But this idea or thought about artificial
intelligence deliberately trying to take over, or deliberately trying to

(34:08):
interfere with the will or the good for
human beings, or even become oppositional, that is essentially
beginning to look at the machines as if they're persons,
as if they have a purpose and a will.
And that could be accurately phrased in

(34:31):
those terms, but I would say that we need to
be very careful about that; that's really not what's
going on. Even in the example that I was using,
where the artificial intelligence model is interfering with people
that are trying to assess it, it's not really interfering;
it's not like it's deliberately trying to prevent

(34:53):
these developers, as if it were aware of them, from
accomplishing their task. It's simply, logically, doing what
it needs to do in order to meet its goal.
All it is doing is taking
millions and millions of data points and millions and

(35:16):
millions of calculations and, through that process, basically coming up
with the best strategy to meet
its goal. There's nothing there where there's an awareness
of a person; there's no awareness of a purpose. It's
simply doing what it's supposed to do as a program.

Speaker 2 (35:37):
Well, granted that, and of course I agree. But granted that,
the point I was trying to make
was: does that necessarily make any difference? In other words,
let's take a really far-fetched example. A robot's coming at
you and it's going to try to kill you, and
you say: robot, you don't know who I am,

(35:58):
you don't have any purpose. But it doesn't matter whether
it has a purpose; if it's somehow part of its
unfolding, or, that's not the right word, part of its programming,
it's going to kill you anyway. Does it matter?

Speaker 3 (36:13):
Yeah, I think in terms of the overall effects, we
have to be extremely careful about what we put artificial
intelligence programs in charge of.

Speaker 3 (36:24):
Now, I do believe, especially given the rapid development of
this technology, that we are going to see a refinement
of the technology, a refinement of its capability, and there
are going to be guardrails placed on it to prevent
it from harming people, in ways that will allow it
to do more and more things safely. But we're in

(36:49):
a situation right now where, and we talked
about AGI, artificial general intelligence, before, a
lot of these companies are promoting the idea that AGI
is right around the corner, that it's going to be just
as intelligent as human beings. It's extremely dangerous to

(37:11):
suggest that, because it's causing a kind of, I guess,
trust in the artificial intelligence as having more
capability and more refinement than it necessarily has at this time,
in which case, you know, we need to be very
careful about the hyperbole and about the exaggerations that are
associated with it.

Speaker 4 (37:38):
Yeah, especially because I think we already have an idea
in our society that the human mind is like
a computer, that's our model, and so all this does
is just reinforce that, doesn't it?

Speaker 3 (37:55):
Yeah, I think... I'm sorry, did I interrupt?

Speaker 3 (38:03):
In regard to neurology, for example: an awful lot of neurologists find it
useful to look at the human brain as similar to
a computer. They find it useful to look at it
as simply a material thing or a material instrument,
through what they call emergence. But basically the idea is

(38:26):
that concepts and then ideas and then thoughts and strategies
and reasoning all kind of grow out of very basic
data and very basic material processing in the brain. It's
useful to have that model, but it's also very wrong, in that it basically eliminates any
very wrong in terms of that it basically eliminates any

(38:50):
conception of the spiritual side, of the intellect and spiritual
side of life and of the human being. And it
also basically deduces purpose, the entire concept of purpose to
a calculation rather than to perhaps something which is beyond

(39:12):
in many ways, beyond what is even able to be
imagined or conceived by a human person.
Our purpose is God, and we can reflect on and contemplate God,
but our purpose certainly is not something of which we have
complete understanding.

Speaker 5 (39:33):
In that connection, actually, there was an interesting article by
this Paul Kingsnorth, whom I mentioned before, called AI Demonic.
This was published in twenty twenty-three in Touchstone,
and he says: some people think that
they know what AI

(39:55):
really is. Transhumanist Martine Rothblatt says that by building AI systems
we are making God. Transhumanist Elise Bohan says we are
building God. Kevin Kelly believes that we can see more
of God in a cell phone than in a tree frog.

(40:15):
Does God exist? asks transhumanist and Google maven Ray Kurzweil.
I would say: not yet. Now, some
of this sounds kind of silly, but I
think it reflects some really disturbing, you
might say, metaphysical ideas by people who

(40:38):
are not believers, and who are nevertheless thinking about
what AI could be, not just what it
could do.

Speaker 3 (40:49):
Within a lot of that language is kind of
the idea of technology or artificial intelligence as having some
personality to it, whether that's divine or limited
in some way. And whenever you treat AI as divine,

(41:13):
you're essentially giving it the ability to have personality, to have
personhood. And there's a really strong
psychological connection between idolatry of AI and
the self-image of the person as the user
or the developer of AI. There's a way, basically, that we

(41:37):
inflate our idea of ourselves: whenever we inflate the idea
of AI as having something of a divine nature, we're
attributing that to ourselves, because we're the creators and the
manufacturers and the programmers of AI in the first place.
And there are actually studies done in regard to

(42:00):
that, which show, for example, that the anxiety
that comes out of that kind of experience, or the
anticipation associated with it, can directly affect our spiritual
experience of life, because that anxiety leads to, first of all,

(42:22):
a loss of trust or belief in meaning
and purpose in life, and then it leads to an
oppositional attitude toward the divine and toward other persons.
So, even if it sounds silly,
there's very serious consequences to that kind of rhetoric being

(42:45):
out there. And I think to some extent it's driven
by psychology, but it's also very savvy;
there's some intention in promoting these kinds of ideas.

Speaker 2 (43:00):
Yeah, that same article, I think, that Andrew mentioned, and I
don't know, of course, whether you can confirm if this is
true or not, claimed that AI machines started communicating in
Persian in order to prevent their programmers from figuring out
what they were doing. Is that a ridiculous claim,

(43:23):
or is that true? Are you aware of it?

Speaker 3 (43:27):
Yeah, I mean, it probably happened. But again,
when you use the word, and I did
it when I explained it also, when
you say that they deceived or that they intentionally did something,

(43:47):
we're attaching a kind of will or purpose to
the AI that's probably not there. They're not aware
of a person; they're not aware of any relationship. They're
simply calculating and doing what is best in order
to accomplish their goal.

Speaker 2 (44:04):
Yeah, but as I suggested before, I'm not sure
that really makes any difference, or need
make any difference, in certain scenarios that can be imagined.
You could have a machine, even a very simple machine, that
have a machine that even a very simple machine that

(44:24):
was doing something uh in the farias but didn't have
any idea, of course, because they had no will, no intelligence,
And this is a much more sophisticated version of that. Yeah,
I don't know.

Speaker 3 (44:37):
Yeah, there's no question that it's a concern,
both in terms of the effects but also in the
way that it basically scares us. It increases
that existential anxiety that I was just
talking about, which can interfere even with our spiritual sensibility,

(45:00):
but certainly with many ways that we
approach the environment, approach the idea of truth and trust
in our society, in other people, and
in relationships that have AI as a mediator

(45:22):
in those relationships.

Speaker 2 (45:26):
Yeah, I can imagine AI doing marriage counseling.

Speaker 3 (45:33):
Yeah, they're, and I say they, but basically
mental health therapists are very actively
looking to use artificial intelligence as a therapist, particularly for children,
particularly for the elderly, for populations that are either growing

(45:57):
in terms of mental health issues to the extent that
they can't be served adequately, or, like the elderly, because
there simply isn't enough attention and resources devoted to them.
But it certainly is something that is being intentionally done,
to try to use artificial intelligence as a

(46:21):
participant, and really as a therapist, to work with people.
And there are amazing ways that it does work, amazing
ways that it does help people. But the overall concern
is: what is the ultimate effect on the attitudes and
the understanding of people? Is it necessarily moral or correct

(46:46):
to present an artificial intelligence model or application as if
it is a person that you can communicate with in
the first place? Catholic Answers, I think
in August of last year, created

(47:07):
a character called Father Justin that was meant to
provide information about the faith to anybody who had questions,
and you could interact with it, ask
questions and follow-up questions, and get great information about
the faith that way. But there was an outcry against

(47:28):
it, because a lot of people felt that it was
very creepy to portray an artificial intelligence chatbot
essentially as a person, let alone a priest.
So what Catholic Answers has done is take the application away,
but they've replaced it with a supposedly lay

(47:50):
person called Justin. So there are a number of ways
in which this artificial intelligence is being portrayed as a
person, very intentionally, even by our friends.

Speaker 2 (48:05):
I guess we would have no way of knowing whether the
replacement really was a person, right?

Speaker 4 (48:13):
We also have no way of knowing whether that
AI robot is actually going to be giving the right answers.

Speaker 6 (48:22):
Right. Yeah, we assume it is, but you know, we don't
know that.

Speaker 3 (48:35):
Yeah, I mean, the question in testing these systems
is: what's the threshold that is appropriate?

Speaker 3 (48:45):
You know, if the chatbot answers questions correctly ninety-five percent of the time,
is that good? Does it need to be ninety-nine, or
ninety-nine point nine, percent of the time? It's a moral
issue; what threshold is appropriate needs to be decided in each context.
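The testing question is, at bottom, quantitative, and a toy sketch shows how easy the number is to compute, and how little the computation settles the moral question. The graded answers and the threshold below are invented for illustration.

```python
# Toy accuracy check for a hypothetical Q&A bot against a chosen threshold.
# Computing the number is easy; deciding what number is acceptable is the
# moral judgment the speakers are pointing at.
graded_answers = [True, True, True, True, True, True, True, True, True, False]

accuracy = sum(graded_answers) / len(graded_answers)   # 90% here
threshold = 0.999  # an assumed bar; each context must set its own

print(f"accuracy: {accuracy:.1%}, required: {threshold:.1%}")
print("acceptable" if accuracy >= threshold else "not acceptable for this context")
```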

Speaker 2 (49:05):
I almost find more scary the idea of
a robot coming at me that didn't have any
kind of will or mind, that you couldn't
reason with, that you couldn't talk to. I find that scarier
than the, of course impossible, idea that a robot

(49:26):
could have such a will and mind. I
would almost rather have somebody to talk to
whom I thought was malevolent than something
that I thought was simply impervious to any
kind of real discourse. That's why, well,
I completely agree that we have to guard

(49:50):
against being anthropomorphic with these machines and investing
them with things they don't have and can't ever have.
But I wonder, as I said a couple of
times already; I almost find it irrelevant to certain possibilities
that may or may not

(50:11):
play out, because I don't know, and
I don't know if anybody knows.

Speaker 3 (50:15):
Yeah, I think in terms of our work,
in terms of evangelization, in terms of teaching people wisdom
about AI, it's difficult, because it's easy
to talk to somebody who has a moral disgust, in

(50:36):
a sense about certain aspects of AI or certain aspects.

Speaker 2 (50:41):
Of the way that it's used.

Speaker 3 (50:43):
But we have to remember that most of society, or
a lot of society, has great enthusiasm for these things,
and also is not necessarily concerned about the wisdom of, say, anthropomorphism,
or about the spiritual implications. So it's a matter of

(51:04):
both being savvy about the actual effects and the actual
reality of how this is going to proliferate, and also
being aware of the way in which most people are
looking at the technology. I think that's crucial to being
able to speak to them and to explain
things to them. A lot of what I'm saying

(51:26):
about acedia, for example, a lot of people just
don't care: why do I care if AI
leads to sin? That's not my concern. So
it's difficult to fashion and craft the way that
we talk about it in ways that are going to
lead people, at least in an incremental way, toward
a more wise approach to AI and a wiser approach

(51:49):
to the use of it.

Speaker 4 (51:54):
You know, one thing, as to the difficulty of talking to people:
I've had this impression for years, from before we talked
about computers or anything, when we talked about just the subhuman world.
There seems to have been this proclivity in our society,
for a long time, to want to view ourselves
as human beings as equal to or even less than
the rest of the surrounding creation. There almost seems to

(52:17):
be a kind of glee at the thought that now
we've made something better than ourselves. I
don't understand it. You would think, out of a certain sense of
self-pride, you would not want to be demeaned in
the face of a mere machine. But we seem
actually to rejoice at the idea that robots are

(52:39):
just like ourselves. I don't entirely
understand why that is, if indeed it is so;
it's my impression. But if that's indeed
the case, then our job of actually appealing to
people about AI is made even more difficult.

Speaker 3 (53:00):
Yeah, there's the virtue of magnanimity, which a
lot of people don't necessarily understand, because it sounds like pride.
But it's basically the feeling of, or
the experience of, the full grace and wonder of
the person as a child of God. And

(53:25):
it's difficult to encourage people to embrace magnanimity, to embrace
their special nature, without that being translated into pride,
into a more secular version where they

(53:45):
essentially are reducing their understanding of themselves in order
to feel better about the choices
that they make. The easier and the simpler
and the more limited human nature is, and the more
it's associated with calculation, with the

(54:07):
operations of a computer, the easier we're able to evaluate
ourselves morally, and the easier it is to
give ourselves a break when we might behave in a
way that's inappropriate, or in a way that
is not in keeping with our true nature.

Speaker 2 (54:33):
I hope that made sense. And do you have any
further questions or concluding remarks?

Speaker 5 (54:43):
Well, I guess so. Just summarizing things, it looks like
there are several problems here. There is the
question of how powerful AI can be. There's the question of
how it can be controlled by us, how
it can be used to do good or evil ends.
And then there's also the question of how it affects

(55:05):
our own behavior, and the question of how it affects
our own image of who we are, what
we are, and what we are becoming. So it's
a complex problem. That's pretty much all I can say;
I think there are different aspects to what we
have to look at.

Speaker 2 (55:23):
Well, thank you. That was a very good analysis. Yeah, no,
that's fine, I'm doing okay. Well, we've been interviewing today
Dr. Chris Reilly. His book is AI and Sin: How Today's

(55:45):
Technology Motivates Evil. It was published by En Route Books and Media
in Saint Louis, and you can find information about it
on their website. So we'll conclude by commending our
discussions and their consequences to Our Lady with a Hail Mary.
In the name of the Father, and of the Son, and of the Holy Spirit. Amen.

(56:06):
Hail Mary, full of grace, the Lord is with thee; blessed art thou
among women, and blessed is the fruit of thy
womb, Jesus. Holy Mary, Mother of God, pray for us
sinners, now and at the hour of our death. Amen. In the name of
the Father, and of the Son, and of the Holy Spirit.
Well, thank you, thank you, thank you. Very

(56:26):
good and wide-ranging discussion. Yeah, it's, uh...

Speaker 3 (56:31):
It's difficult to bring it down to earth in
many ways sometimes, to express it in ways that would
be both coherent and appropriate for the understanding
of the average person. I'm learning
as I go.

Speaker 2 (56:51):
Yeah, well, certainly a relevant topic.

Speaker 3 (56:57):
Well, thank you very much.

Speaker 2 (56:58):
I appreciate it. Thank you for joining us. Before
we sign off and get everybody out of here,
I want to ask Christopher Zehnder one question: you know,
yesterday was the deadline for getting your piece in for
that symposium book. I know, I got an extension. Well,
you did, you got one. Okay, you got an extension,
but they're breathing down my neck, Tom, for heaven's sakes. Okay, thank

(57:25):
you everyone, thank you, thank you. Goodbye.

Speaker 1 (57:32):
Hello, God's beloved. I'm Annabelle Moseley, author, professor of theology,
and host of Then Sings My Soul and Destination Sainthood
on WCAT Radio. I invite you to listen in and
find inspiration along this sacred journey. We're traveling together to
make our lives a masterpiece and, with God's grace, become saints.

(57:56):
Join me, Annabelle Moseley, for Then Sings My Soul and
Destination Sainthood on WCAT Radio. God bless you.
Remember you are never alone. God is always with you.

Speaker 2 (58:14):
Thank you for listening to a production of WCAT Radio.
Please join us in our mission of evangelization,
and don't forget: love lifts up when knowledge takes flight.
