
July 8, 2025 (75 mins)


As artificial intelligence weaves itself into the fabric of our lives, how should people of faith respond?

This conversation between two pastors explores the spectrum of reactions to AI in ministry settings—from those who eagerly use it for sermon preparation to those who refuse to touch it. Rather than settling for simplistic answers, they dig into the nuanced ethical terrain of this emerging technology, drawing parallels to previous communication revolutions like the Gutenberg printing press and radio that similarly upended society.

What makes this discussion particularly valuable is its focus on human formation. While acknowledging legitimate concerns about job displacement and environmental impacts, the deeper questions emerge around how these technologies shape us as people. Can we outsource our thinking to AI without sacrificing something essential about our humanity? Recent studies showing deterioration in critical thinking skills among regular ChatGPT users suggest the answer might be no.

The conversation takes a fascinating turn when examining how AI might amplify existing problems we've seen with social media—the illusion of connection leading to actual isolation, the absence of natural boundaries, and the addictive pull of technologies designed to keep us engaged. Through a Wesleyan lens of accountability and virtue formation, they suggest communities of faith might offer exactly the kind of intentional discernment needed to navigate this new frontier.

Whether you're curious about AI's implications, concerned about its impacts, or simply trying to develop a thoughtful approach to technology, this episode offers a compassionate framework for moving forward with both wisdom and hope. As one pastor notes, "The church has seen a lot of new things over 2,000 years, and we've come up with some pretty good wisdom for how to deal with them."

What boundaries have you set around technology in your own life? Join the conversation and share your thoughts on navigating faith in the age of AI.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Nafis (00:03):
All right, all right. Well, hey, Derek, it's great to see you. We were just talking off-screen. It's been like 15 years since we've had a real conversation, and I'm so grateful you're willing to spend some time. Thank you for coming on the podcast.

Derek Kubilus (00:22):
Thank you for having me. This should be fun.

Chris Nafis (00:24):
I think it will be fun. Yeah, I saw your Facebook posts; that's how I ended up contacting you. You posted a New York Times article about a former OpenAI employee who was making all these doomsday predictions, and I was like, man, I have been thinking a lot about this, and

(00:45):
you're someone who, at Duke, I knew as really opinionated in some ways, but really sharp, intelligent, and thoughtful. There are not that many people with strong opinions who I've seen change their opinions as they've had conversations, and I've seen you go

(01:05):
through some of that, and so I just have a lot of respect for you and how you think through things. (Derek: Well, thank you, I appreciate that.) I was like, yeah, I should talk to Derek about AI. This would be great; we could have a really fun conversation. How did you start thinking about AI? Or how did you come to focus on it?

Derek Kubilus (01:24):
Well, I've been aware of it just through the cultural hype and stuff. I probably didn't interact with it until way late, but it was actually my wife who got me thinking really deeply about it, because she is an executive assistant, so she's like the

(01:48):
assistant for a CEO at a big non-profit, and she's constantly taking classes and reading books to take her skills to the next level. And one of those classes was on using AI, and I was like, wait a

(02:09):
second, you're telling me you took a super practical, essentially continuing-ed course on using this thing? This was like a couple of years ago, maybe a year ago, and I was like, whoa, this is way more integrated into life

(02:36):
already than I would have thought. Right, like, we've all seen people make the funny pictures and stuff like that. But we've had several conversations, and that has led

(03:03):
me down a path to understand the technology itself, which is so complicated. I think I'm not wrong to say that the people who design the technology don't absolutely know how it works.

Chris Nafis (03:22):
That's what I'm reading, that they don't really know how this thing is happening. They've kind of lost track of it.

Derek Kubilus (03:28):
Well, it's kind of designed to do that. They call it a black box: you set it up and just let it go, and it grows in ways that are almost impossible to chart. But even more than how it works, I'm interested in how we

(04:09):
respond to it, and I have friends who are all over the gamut, especially in ministry. I'm a United Methodist clergy person, and I have one friend who is all aboard the AI train. He is unabashed about it. He uses it to write sermons and develop Bible studies and all of these things. He's constantly posting pictures and graphics and videos that he's creating for his church using AI.

(04:32):
And I have some friends who will not touch it for their own ethical reasons, who do not believe that it should exist, and who, when they see their friends posting AI

(04:55):
things on social media, will actually confront them in all capital letters and tell them to stop. So there's this huge, wide spectrum right now in people's feelings about AI. (Chris: Have you used it much yourself?) I have. I think you could characterize it as: I've dipped my toe in the water. I refuse to use it in sermon writing or in book

(05:23):
writing; those are the places where I just don't want things to be tainted. Where I have used it, and I actually got this from... I feel like I'm sharing a deep, dark secret right now, but that's part of it too, because it almost feels a

(05:45):
little dishonest: there have been a couple of times when I've had to write an email over an issue that could potentially have a lot of conflict surrounding it, and, as you may recall from our days at

(06:07):
Duke Divinity School, I don't always have that much tact with my words. So I have essentially written out in an email everything I want to say and then told ChatGPT, hey, make this sound warm and professional and inoffensive

(06:31):
but get all these points home. And it has done an okay job. Nothing that I could copy and paste whole cloth, but I've definitely changed some things in light of what it said. But I also feel kind of dirty about that.

(06:56):
Yeah, how about you?

Chris Nafis (06:59):
Me? Yeah. The first time I really tried it, I was trying to make a lineup for my son's soccer team that I was coaching. It was complicated, because I wanted everybody to play three quarters of the game and each of them to play only two positions, and I was like, this is a math problem. I wonder if Gemini, which was Google's one at the time,

(07:22):
could do this. I put it in, this was last fall, and it was incapable of doing it. I had four quarters, and it kept having the same players play the same positions for three quarters, and then it would mix them up for the fourth quarter.

Derek Kubilus (07:38):
And it wasn't.

Chris Nafis (07:39):
And then I would tell it, no, this is incorrect, you're not doing what I'm asking you to do. And it would say, oh, I'm sorry, I messed up, and then it would give me another version of the same thing. So the first time I used it myself, I was very unimpressed, because it couldn't do this simple math thing. And then, you know, learning that's really not... these are language models, so it's really not the strength of

(08:11):
what it does. I have been using it, and I also feel like this is a deep, dark secret or something, but I've been using it in this podcast, actually, to make a transcript of the podcast, which is helpful to have for people who are hard of hearing. It's very labor intensive to do that, but it does it super fast and easy. There's a tool in our hosting site that does it, and then it writes a little synopsis that I've used. Again, I'm the same as you:

(08:32):
for a book or a sermon or anything that I feel is the content itself, I'm a hard no, I don't want anything to do with it. But since it's a summary of something that I've already done, it feels a little bit more like grunt-work-type work. It's a little more like, all right, this is where it's actually useful, and it has saved me a lot

(08:54):
of time. So I don't know if everyone's going to tune out of this podcast now that they know.

Derek Kubilus (08:58):
It's all fake, yeah.

Chris Nafis (09:00):
Actually, I have cloned myself... I'm just kidding. But other than that, I really haven't used it at all. I haven't really done emails and stuff. But I'm sure that I've interacted with it a fair amount, which is part of what we're working through: all right, this thing is

(09:20):
coming, whether we are intentionally engaging with it or not. We're getting spam calls from it. We're seeing it on...

Derek Kubilus (09:28):
Oh yeah. I've read a couple of books and a lot of online stuff, and I've read in several different places all the telltale signs of AI-generated text, and I feel

(09:49):
like I'm getting pretty good at spotting it in the wild. I'm seeing it every day. I see the pictures, which I think are really easy to spot, although I'm told baby boomers might struggle with them. I see those every day on social media, absolutely every day.

(10:12):
If you're at all interested in Pinterest... I have a few things that I sometimes like to follow on Pinterest, and that app is quickly becoming just unusable, because it is so full of AI slop.

(10:34):
I mean, just every post becomes AI really quick in some categories.

Chris Nafis (10:42):
Yeah, I had some friends who told me there was a John Oliver episode just on that, specifically, on his show, so I watched it before this conversation. He has a whole episode on it and specifically points to Pinterest. There's a woman complaining on the show; they're showing a video of her saying, I just searched "garden," and almost every picture is not of a real garden, it's all

(11:05):
AI, and you can tell because of this, this, and this. I think right now I mostly feel like I can tell, but I think it is getting better, and it will be harder to tell. The videos that are coming out now are crazy.

Derek Kubilus (11:20):
Yeah, they have a kind of softness to them that sometimes can key me in. But the New York Times just had a little quiz, five videos, or maybe it was ten, and I got exactly five right and five

(11:41):
wrong, which means I wasn't picking up on anything, essentially. So, yeah, think about where we came from three years ago and what it was capable of, and then go out three more years.

(12:02):
The big essay that got everyone talking was called AI 2027, a big thought experiment about where AI could take us through 2027, with lots of very serious doomsday scenarios and stuff like that. It was very sensational.

Chris Nafis (12:23):
The possibilities are wide open, and I think that's what makes a lot of people nervous right now. Yeah, that's what makes me nervous, because it feels like we're on the cusp of a major revolution in how we discern what's true

(12:47):
and what's not. We're getting that with the videos and the pictures, where we've already kind of been there. But now... I had my first moment this week. I was on Twitter, and someone had shared a picture of an apartment complex in Ukraine that had been

(13:08):
bombed by Russia, and it was the first time, I feel like I've thought about this, but it was the first time I was looking at a thing and thinking, I don't know if this is real, I don't know, and I can't tell, you know. And I think that's going to be pretty normal now. There have been pretty mainstream news stories where

(13:29):
the broadcast network was either fooled or didn't care enough to check, and they've shared videos or photos that were AI stuff. Or, Rachel, my wife, told me about, I forget what newspaper it was, but they published a whole summer book reading list of books you should read this summer, and the entire list was AI generated. None of the books were real.

Derek Kubilus (13:51):
I heard about that. They published it, yeah.

Chris Nafis (13:54):
So there's that side of it. And then it feels like there's going to be a lot of money and power involved, because it's going to change how we work. It feels like we're heading towards an industrial-revolution sort of situation, where everything changes very quickly and we're all figuring out how to live in this new world that has not only the

(14:17):
internet but artificial intelligence running around.

Derek Kubilus (14:20):
Yeah, absolutely. There's a really great book called The Gutenberg Parenthesis that I think puts all of this in context really well. If you haven't checked that out, I think it's a good book on its own, but it has a lot to say specifically about the

(14:51):
revolutions that happen in the wake of new communication devices and new forms of communication, and what that does to society. It's really interesting.

(15:12):
Sorry, you can probably hear my dog in the background. If you break it down, one of the main theses of that book has to do with what happened with the Gutenberg printing press, and the early Renaissance

(15:49):
came from this technology that people had to disperse information more quickly and more widely than they ever had before. So not only do you have the Reformation, the Protestant Reformation, obviously; you have the peasant revolt in Germany, you have the wars of religion in England and France, you have the English Civil War. And I think

(16:14):
there's a case to be made that in all these instances you can link them in some way to the advent of the printing press. They had these things they would call pamphlet wars, where people in communities would constantly hand out pamphlets on different sides of an issue

(16:35):
, and then those pamphlet wars would eventually turn into real wars. And coincidentally, you also see conspiracy theories start to pop up around this time.

(16:58):
With the advent of new forms of communication, you always get the spreading of these conspiracy theories, which are almost like a virus that gets attached to the way we communicate. So this is where you see a lot of the anti-Semitism; you see the Protocols of the Elders of Zion and the shadowy

(17:25):
cabals of secret people. That's where a lot of it has its start. And then when radio is invented, you see something very different but similar. Radio causes another kind of revolution, but it's much more

(17:48):
centralized, because radio happens on bandwidths that can be easily controlled. You see revolutions in nations. So this is not instability between people and tribes and towns and villages; this is instability that's created

(18:09):
between nations themselves. That's when you have the rise of radical nationalism, fascism, communism. Those are made possible by the airwaves. And that's what we really have until television kind of starts to work out

(18:31):
the kinks of that, and then you wind up with Walter Cronkite and Woodward and Bernstein, in an age of ethics in journalism, where it's this really big deal to get your facts straight and everything is so well organized. And then the internet comes along, and social media, and blows

(18:52):
that all up again with this very egalitarian kind of communication style. So you get the internal pressure, and you get, as we've seen, the conspiracy theories. I would say ultimately you get QAnon and COVID conspiracies and those kinds of things, and then

(19:17):
people start taking the moral implications of the technology more seriously.

Chris Nafis (19:23):
Yeah, yeah. Well, part of it feels like the internet revolution is still in progress. It feels like it's been a long time for us. You know, I don't know how old you are, Derek; I'm guessing you're about my age. I'm a 42-year-old millennial. Yeah, and so we kind of grew up in the time

(19:43):
before the internet, and, like, Facebook came out when I was in college, so we've kind of seen it. It feels like it's been forever in some ways, but in terms of generations and in terms of history, the internet is really still pretty new. And now we're adding a whole other layer, because AI isn't a thing without the internet, I don't think. But we're

(20:06):
adding another layer to it. And on the internet, it's a mixed thing with social media and stuff too, because on the one hand, like you said, it's very egalitarian: everyone can say what they want and be heard and build a platform and a voice for themselves. You don't have to go through an editor or a newsroom or journalism school. But then, at the same time, that unfilters

(20:30):
everything, which just opens things up to lots of wild stuff. And now AI feels like it will be a huge generator of fake things, I guess. Because it's not going to be communicating true things in the sense of facts and things,

(20:51):
you know. We can talk about what truth is, I guess, but it's going to be almost exclusively used, probably, to have conversations with, you know, not real people. Well, do you see it being different? Or what do you see?

Derek Kubilus (21:08):
I mean, yeah. Number one, what's really good right now, I think, and why conversations like this are good, is that we have a little bit of an interregnum. Before AI completely saturates our lives, we have this moment

(21:31):
where we actually know that it is coming, which is something I don't think we really had with social media in particular. Right? We were just kind of like, oh, this is a new thing, and before we knew it, we were all using it like crazy.

(21:51):
I remember being in seminary, having to force myself not to check Facebook when we were studying for exams, because everyone would post their page count of how many pages they had yet to write and stuff like that, and it was just

(22:11):
on us before we knew what we were doing. And it took the crisis of, you know, and I don't want to offend any of your listeners, the twin crises of a Trump presidency and a pandemic to open up all of our lives, open

(22:33):
up all of our eyes, to say, whoa, there is some really nasty stuff this can be used for. And just now, I feel like we're starting to catch up with it. We're starting to intentionally teach children how to use social media appropriately, how to spot fake news and things

(23:00):
like that. Hopefully we can get a jump start on AI. Hopefully we can start having these conversations and have a little bit better footing, because we see the wave coming, if that makes sense.

Chris Nafis (23:15):
Totally, and that's part of why I wanted to talk to you. I feel like I've done this with friends and even with congregants at times, just gone down doomsday lists of what happens next with AI: what happens politically, what happens to the workforce, what happens to creative work, what happens to

(23:38):
all of these things. And we don't really know. We can conjecture and guess, and it seems like whatever's going to happen is going to be big. But the real question I'm interested in is, how do we respond to it? How do we prepare for what's coming? How do we sharpen our tools for discernment as we figure out the ethics and

(24:00):
morals of a new technology? What do we need to prepare for as churches, or as followers of Jesus, in order to be able to do that work well, to look out for people? So, for example, I think AI is already beginning to whittle away at some white-collar jobs, the way technology has been whittling away at blue-collar and low-wage jobs forever. But

(24:21):
I think the whole job market is going to change pretty rapidly. So how do we prepare for that? What does it mean to think about? We get so much of our sense of identity and value

(24:42):
from the work that we do. What happens when, for a lot of people, their work is suddenly taken from them? How do we re-understand what it means to be human, and how do we come alongside people? Those are the kinds of things I'm looking to figure out. That's like a million questions all in one, I guess. But what do you see as the main tools we need to think about as we enter this new season in

(25:06):
the church? How do we work through these things in terms of ethics and practice?

Derek Kubilus (25:12):
Well, I'm a United Methodist, so as a fellow Wesleyan, and I'm not sure if you guys use this term or not, we're really into something we call holy conferencing. Is that a Nazarene term?

Chris Nafis (25:30):
You know, I haven't heard any Nazarenes use that term, but I've heard it around. We went to a Methodist school together. Sure, but yeah, tell us what it is.

Derek Kubilus (25:36):
It's just all about the power and the grace that's found in conversations, for lack of a better word, and we Methodists talk about it a lot. The problem is that we can be really bad at it. A good example of how we're bad at it is the LGBTQ

(26:03):
debate in our church. We just had a major split and everything. And through that whole process, we talked about holy conferencing a lot, but I don't think we ever did it. Does that make sense? What we did was shout our talking points at one another,

(26:29):
but what we didn't do was just sit in a posture of listening to one another's stories. So, in United Methodism, in my own

(26:54):
denomination, what I would like is for us to call a whole conference just to talk, not with the goal of making hard-and-fast rules at the end of it. Sometimes people think, oh, if you're going to have a meeting, the meeting has to produce a piece of paper, a statement or something. I don't even think we need that. I just want to hear from people and have discussions. I

(27:17):
think we could invite some academics to present papers, and we could invite some people just to tell their stories. Where have you used AI? What have you found? What has been good? What has been bad? Is there someone who has lost their job because of AI?

(27:38):
Is there someone who has a career in AI? What do the environmental sciences have to tell us about it? What's their story? It's tough, though, because I can already feel the

(28:02):
polarization happening around this issue. Like I said, I have some friends who are all about it and some friends who are dead set against it, and so those sides are already forming

(28:35):
up. But what I'm interested in is this fuzzy middle. I'm interested in getting a lot of people together who have the humility to say, we don't know. We don't know how this will affect us, we don't know what the environmental impacts are going to be, or the social and economic impacts, but we want to try to figure out how we can respond carefully and faithfully to it.

(29:04):
So I mean, I don't have a great answer for you, other than I hope churches and denominations have intentional conversations about how to move forward. And it was interesting: my own denomination, the United Methodist Church Board of Discipleship, just released a thing onto social media, just into the ether, as I was

(29:26):
preparing for this conversation. They talked very flippantly about AI and why you should use it, why it's a tool, and so on and so forth. And they said, well, we'll be sensitive to the environmental things and misinformation

(29:47):
and all of that, but go ahead and use it, essentially. And I was like, well, I don't know, I'm not there yet. I need to have more conversations with people and figure it out. You know, I know someone on social media who makes their living as an illustrator, who says, look, I'm staring down

(30:11):
the barrel of losing my job. I went to art school, and AI has just learned from everything that I've ever made and dumped on the internet, and it can do what I do very easily. I think we need to hear those kinds of stories, you know.

Chris Nafis (30:34):
Yeah, for sure. And to talk about success stories too, maybe: how I've distinguished myself from this AI juggernaut, you know what I mean? I don't know that these stories even exist yet. But, so, my wife runs a small business. She's a flower farmer who has built a whole business around Instagram and stuff, and she could tell stories

(30:58):
about how she's built up a sense of connection and authenticity with locally grown flowers in the face of mass-market flowers that are mostly shipped from South America or Asia to Southern California, which is just a long way, and environmentally unhelpful, all those kinds of things. There is space to carve out something better if you can figure out how to tell the story well and how to

(31:21):
connect. I don't know if that's there for illustrators. I know my brother, who edits this podcast for me, is feeling the squeeze in the video-editing world and the photography world. So, yeah, it would probably be helpful for people in those fields to have an open forum to talk to each other about, all right,

(31:41):
how do we, and "fight this thing" is maybe not even the right word, but how do we distinguish real art from generated art, and how do we tell that story to the rest of the world so that people can support human artists, or whatever the case might be?

Derek Kubilus (31:59):
And I know that was part of what the big Hollywood writers' strike was about a couple of years ago. They saw the writing on the wall, so to speak: we're going to be out of a job if we don't do something to try to protect ourselves. Now, that wasn't so much a conversation as it was just sort

(32:22):
of sticking up for themselves, but it just goes to show, you know, some people have lost their jobs already, and some people are staring down the tunnel at it. I mean, I have seen advertisements on television

(32:42):
that I know were made with AI; I can just see it. I read copy all the time online that is obviously AI generated. But here's the thing, and this is where it gets difficult. When we ask that question about jobs and AI, we should be

(33:07):
aware that there is a deeper question that lies behind it, which is: how do you feel, not just about AI taking people's jobs, but how do you feel about capitalism?

Chris Nafis (33:26):
Yeah, right, right behind it.

Derek Kubilus (33:29):
If you're uncomfortable with a new, more efficient technology putting people out of work, whether you're talking about the car displacing the horse and buggy, or solar power displacing coal workers, or AI displacing

(33:54):
coders and illustrators and filmmakers, then your problem is with capitalism, which is totally appropriate. My problem is with capitalism. You know, other than war, I think it's probably the most evil thing that's ever been invented. So I think we should be asking those questions.

(34:17):
But what we need to keep in mind is that CEOs and shareholders are not going to be asking those kinds of questions. They want efficiency, they want higher profit margins, they want fewer HR things to deal with.

(34:40):
So you're not going to get rid of it, because there are too many stakeholders in the profit possibilities. You're not going to get them to limit it when it could potentially make so much more money.

Chris Nafis (35:02):
So, and maybe, I mean... oh, go ahead. No, please, please. So this reminds me: I had this conversation with a guy, kind of a stranger, at a camp where I was staying, and I was asking him what he does for a living. This was several years ago. He works in agricultural engineering, and he told me

(35:25):
this story, and you could tell he's still processing the thing that he does for a living. He goes to these agricultural processing factories and installs machines that displace all these workers. So say he'll go to a factory, wherever, Central Valley, California, where there are

(35:47):
20 women whose whole job is just to slap stickers on peaches or avocados or whatever it is they're growing, and they're working all day, just slapping sticker after sticker after sticker. And he'll install a machine, and all of a sudden the machine can do the work of all 20 of them. And, I mean, their job is miserable, they get

(36:10):
paid crap for it.

Derek Kubilus (36:12):
But now it's the only thing they have.

Chris Nafis (36:14):
It's the only thing they have and now they're out
of work and what it kind of like.
It kind of hit me in a new waythat, like, someone is
benefiting from that increasedefficiency but it's not ever
going to be the workers or thelower class.
It's always going to be theones who own the capital and who

(36:35):
own the technology.
So he and his company that's installing these things are going to make a lot of money on these agriculture machines. This is a very rudimentary thing, but it's still happening. They're going to make a lot of money installing all these things all over the place.
And then whoever owns that farm or that production line, they are going to benefit from the

(36:57):
efficiency, when it would be nice if we could all benefit from the efficiency. It would be nice if no one had to put stickers on oranges or whatever, but we could all live a little easier and work a few fewer hours, because there's less work to do, because we can do it more efficiently.
But that's not how it works in our current system of capitalism.

(37:20):
And you know, people will say capitalism has made things much more efficient, that it's brought a lot of thriving and wealth to the world, all these kinds of things. But the problem is when it's unchecked or unregulated, when it's just kind of in this free-market way. Then the wealth always funnels its way up, especially when there's a new technology that makes things more efficient.
(37:41):
Everyone on the bottom end ends up suffering, and the only real check is that now those people who were slapping stickers on the oranges aren't going to be able to afford to buy as many oranges, which might affect the bottom line if it happens on a mass scale. But generally speaking, the people who are making the money off it don't really care, you know. That's what you're talking about, right?

Derek Kubilus (38:04):
Yeah, and not to put too fine a point on it, but what you're talking about is who owns the means of production.

Chris Nafis (38:09):
Yes, right, yeah.

Derek Kubilus (38:10):
Yes, exactly. And it strikes me, one of the most depressing things I ever read was about this huge bill that was debated in Congress in, I think, the 1960s, that was all about going to a four-day work week with the

(38:39):
federal government, and it was assumed that private industry would follow the government's lead, as it does with holidays and things like that. And the argument was that we have much more efficient means of producing things that require less labor, and so we should be inviting

(39:03):
people to have more leisure time. Like, we have washing machines now, rudimentary computers that are supposed to help us by sort

(39:23):
of taking the time we would spend on those things and allowing us to direct it elsewhere. And it turns out the answer is no. You keep working just as hard as you always have, if not harder, and you just always produce more, right?
And so if you were designing a society with this advanced

(39:44):
technology for yourself, you might say, well, if I could have robots (sorry, this got very sci-fi very quickly), if I could have robots that do everything for me, then I would be free to do what I wanted, and we should just do that for everyone.

(40:06):
We make robots that can fix other robots, and then it would be great. Except that when it plays itself out in a capitalist world, only some people will get the benefit of the robots. Right, and that's what we're seeing. It's not like those farmers are saying to that woman, oh, great

(40:31):
news, we have this machine that puts the stickers on the fruit, so we can just pay you and you can go chase your dreams, because we've got a machine that does it automatically. Right, that's what we would want, but we live in a world where people don't yet know how to share. There's a really good book, and I can't remember the author

(40:55):
of it, please forgive me, but it's very Google-able. It's called Habits of the High-Tech Heart, and I think it was written in the early 2000s as a response to the internet. There was another sort of classical virtue ethics book called Habits of the Heart that was written many years before, and this tried to apply

(41:22):
the lessons of that book to what was then known as the internet age. Right, which seemed so naive back then, but the idea being that the classical ethics, if you want to talk about the fruit of the Spirit: peace, patience, kindness, gentleness,

(41:43):
self-control, those things. The task in every technological age is to apply those things to the technology that you're using, and that's the big question. How do we use AI in a generous manner, right?

(42:03):
How do we use AI in a kindly manner? How do we use it gently? How do we use it in such a way that it allows us to be better stewards of the creation that's been handed to us, and not just absolutely destroy it, which is what we're

(42:26):
on the cusp of doing? The amount of power that these servers that generate this AI require is just incredible. I mean, I think it's Google that is building a nuclear power plant just to serve its AI.

Chris Nafis (42:48):
Microsoft too, I think... maybe it was Microsoft. There's a nuclear power plant here near us that's been decommissioned, and I think they're working on bringing that back online. I think some of the motivation's got to be the AI stuff, but it's going to need an incredible amount of power. If they scale this thing the way it seems like they're going to, power might be one of the few limiting factors.

(43:10):
Actually, that checks the technology a bit. But in the meantime, I've heard stories of people who live near some of these servers who are in and out of power themselves in their own homes. Really. And I get it, you know, maybe that's all AI-generated nonsense, I don't know. But I just think there's going to be a cost

(43:31):
to it environmentally, just with how do we produce that power cleanly? And then, in terms of sharing it, all the wealth and everything gets mixed up with those questions also. Yeah.

Derek Kubilus (43:45):
So when I was using it for the sake of writing those emails, what I was thinking to myself is, okay, I am using this right now to make peace with someone, and therefore I sort of judged it to be okay in that

(44:08):
circumstance. You know what I'm saying? Now, as humans, our ability to rationalize anything is really high, right, but I guess I'm just upholding that as an example of, can we use this with an eye toward virtue and have

(44:31):
discussions about it? Because we know it's not going anywhere, and because we know it's getting more powerful and more ubiquitous every day, can we talk about how to use it responsibly? Otherwise, you know, is it our moment to become Amish?

(44:56):
Like, the Amish kind of pick an arbitrary moment in time. For them it was, you know, 1790 or whatever in America, and it's like, okay, technology here pretty much stops, and we have to give the okay for anything new

(45:17):
to come in.
Are we just saying that for us, that date is 2025? Or are we going to learn how to integrate this technology with how we live as Christians? And for me, that means learning how to use it virtuously, which I need to have more conversations in order to figure out.

Chris Nafis (45:41):
Yeah, that's an interesting
approach to it that I really like.
I think part of what comes to mind for me is that most people are not going to do that. You know what I mean? But it's kind of like a lot of Christian practices, where

(46:06):
the world is not going to suddenly turn on its virtue and its care for actual justice and love for fellow humans. But we can, in the midst of it, and maybe in some ways we can at least be some salt and light. That's not going to change everything. It's not going to resolve all the issues that are going to

(46:27):
come up with AI, but at least we can find our own ethic and work out our own way of discerning: is this an appropriate use? For example, I don't think that I want to use it in sermon writing. It just feels like the wrong thing to do, and maybe that changes over time. I don't know. Others might do it, though;

(46:50):
I'm sure there are. In fact, someone told me today about an AI pastor somewhere in Europe, where the whole congregation has an AI person doing all of the things, which is insane to me. It seems like one of those things that's going to get lots of attention, and that's probably why they're doing it. But I'm not interested in that, you know.

(47:12):
But where do we draw those lines? We can kind of collectively discern for ourselves, and maybe other industries don't do that, or other churches don't do that. But having those conversations can at least be helpful for us to figure out what we do.

Derek Kubilus (47:26):
Yeah, and here's the thing too. I mean, you talk about the AI pastor and how that seems almost wacky, and I agree it does sound almost wacky. But here's the thing about AI: the wacky things are already happening, right? And so, as silly and as

(47:47):
stupid as it is, we need to have those conversations, right?
There are people who are having parasocial relationships with AI. There are people who are treating AI as if it's a

(48:08):
religious oracle or something like that.
Yeah, that sounds wacky.
If you would have told me in 2015 that one day a majority of conservative Christians would

(48:29):
believe that Joe Biden and Hillary Clinton were drinking the blood of children, I would have said that's wacky. But then it happened, right, and millions of people just sort of got on board with that story.
So, as silly as it is, I think we need to talk about those

(48:50):
things, and perhaps we need to hear from those people who have had those relationships, or who go to the priest. I can't stop thinking about the movie Priest, which is about the vampire hunter in the future who confesses to an

(49:12):
AI in a booth, and it offers him absolution. I mean, there will be people who go down that route.

Chris Nafis (49:22):
Because, to me, in talking about how we use AI responsibly, one of the halves is kind of what you were talking about before: all right, what is a good use, what is a worthwhile use? This is a peace-building activity. And the other thing to think about for me is, what is this doing to me? How is this affecting me as a person and affecting the

(49:43):
way, you know? So, one of the things that came out recently was the first study of brain scans of people who have been using ChatGPT regularly for their work, and one of the things they found was that it's deteriorating their ability to think critically, to write, to do the tasks themselves. Their skills in the areas that they're

(50:06):
handing things off to ChatGPT are beginning to atrophy. And so if I'm leaning on this thing... you know, sometimes that might be okay, if I don't need this skill anymore. I don't need the sticker-slapping skill, I don't need to slap the thing on the oranges anymore, because I'm never going to do this again, because there's a machine that does it.
But when it comes to my sense of purpose in the world and my

(50:27):
ability to navigate the world well and love others well, that is being robbed from me, because I'm essentially outsourcing it to AI. And so how are these things shaping us as

(50:47):
people?
I think maybe that's part of my sermon writing thing: sermon writing for me is a spiritual practice. It's part of how I stay sharp and discern my way forward. And if I hand that off to AI, you know, the study shows that I'm not even going to remember what I wrote an hour later. And so how is that affecting me?

(51:08):
You know what I mean.

Derek Kubilus (51:10):
I mean, I have felt it myself. Every once in a while, maybe once a year or something, I'll take a vacation, and then we'll have a guest preacher, and then maybe a service like a church picnic or something, and before I know it, it's been three weeks since I've actually sat down to write a sermon.

(51:30):
And then when I go to do it, I don't quite have that edge. I'm not quite as incisive with my thoughts. I have to do a little more research and kind of have a couple of false starts, stuff like that. Relying on AI, I think, would put me in that

(51:51):
place all the time, where I would just lose my ability to write altogether. Some people may not think of writing that way, but I do. I don't do many things well, you know, so the one thing that I do pride myself on, I want to keep doing,

(52:16):
and it forms us in other ways too, you know. Think about the way social media has changed our relationships, right? I remember, this was years ago, I was mentoring a kid. I had

(52:40):
been mentoring him, hadn't seen him for a couple of years, met up with him for coffee, and he had broken up with his girlfriend that he'd been with for like two or three years. And I said, well, how'd she take it? Was she crying? And he said, oh, I don't know.

(53:01):
And I said, well, what do you mean you don't know? Well, it was because it happened over text, right, and that was totally fine for him. And I don't know, maybe I'm just a stodgy old man, but that's not fine.
When you break off a relationship with someone, you

(53:23):
should be there to see their tears, because that will elicit your compassion. Increasingly, there are more stories about people who are having what I would call parasocial relationships with AI,

(53:43):
whether they're friendships or whether they are in some way romantic. And those things, as AI gets better, are going to become easier and easier for people to have. And when you use Gemini or ChatGPT or whatever, you know how obsequious it is, right?

(54:06):
Like, oh, how was that? Is there anything else we can do for you? Is that okay? And so on. Imagine how obsequious an AI boyfriend or girlfriend would be. Well, that's going to teach you to use people like tools to meet

(54:27):
your own ends, instead of treating them as ends in themselves. An AI doesn't need anything from you, so you can't really, truly have a relationship with it. So yeah, I think personal formation should have a lot to

(54:47):
do with how we think about it.
How does it form us as people?

Chris Nafis (54:52):
Yeah, yeah. I think that's where you're saying we can take some of the lessons from the social media revolution. I think I can say most of us agree at this point that social media has turned out to be a sort of hollow social identity or social experience, where it has this illusion of connecting us

(55:14):
to all of these people, but at the end we have an epidemic of loneliness happening, because no one actually feels that connection. Because we're doing it in this way that's not actually real. We're not seeing people in person. We're not sharing our whole selves with people. We're sharing a facade of what we want to present to the world.

(55:37):
Which is creating all the FOMO stuff, the fear of missing out, where you just see all these people on their best day with a super flattering picture that may be edited, and then you feel like you're less, because that's the relationship we have with everybody. That's what we see from everybody's feeds, instead of actually having friendships where we're walking alongside people on good days and bad days and where we actually see them. And some of the trauma work that we've been

(56:00):
doing in our church lately has kind of highlighted the somatic significance of being around other people, the smells and the heart rates and all the things that align us with one another when we're in person. So social media has kind of shown us: all right, you can go down this path, and as a society it's going to make for loneliness and hollowness and a divided people. And so

(56:22):
maybe we should be kind of more cautious stepping into all these AI things that are coming: AI therapists, AI relationships, leaning on AI for all of these things. Because we can kind of see some of those patterns re-emerging with the new technology. And again, I don't know exactly how we navigate that well or perfectly,

(56:44):
but it does give me some pause when thinking about it. You know, this tool seems so great. This AI thing never judges me when I share my darkest thoughts with it, you know.
But is that good? I mean, maybe some of that judgment and shame is helpful sometimes, so that I can know: all right, I don't want to feel bad about myself because someone's judging me, but also I should know that that's a territory I shouldn't let my thoughts wander and dwell in too much, because it's bad for humanity if I'm always thinking about this violent image or whatever. We supply boundaries for one another.

Derek Kubilus (57:25):
Yeah, and too often we have used our technology to try to escape those boundaries in various ways. I guess the most obvious place is something like internet pornography, right? That is about boundary transcendence.

(57:45):
That's about imagining yourself in situations that normally no one would ever be okay with you being in; they would put a little stop to you and say, well, you're indulging yourself a bit too much, right?

(58:06):
AI would have no reason not to give you what you want, and in fact it would have every reason to give you everything you wanted, so that you will keep using it.

(58:29):
Right. It will be interesting also, because internet things do get worse over time. Number one, like you said, we figure out how hollow they are. But number two is, once technological industries kind of

(58:54):
get everyone using everything, that's when they start to monetize it, right? So I was really shocked when I sat down to watch a movie with my wife on Amazon Prime, after several months of not watching it, to find that they had added commercials to it. And if I wanted the commercial-less experience, I

(59:15):
would have to pay extra, and now I'm less likely to watch something on Amazon Prime. Eventually, when AI is monetized, however that's going to look, it will give us that gentle kind of boundary. But we can't rely on that; we need to think about it before.

(59:38):
We need to be thinking about it right now and having conversations about how to respond to it.

Chris Nafis (59:48):
Yeah, I mean, part of what's coming out for me in this conversation is that self-control is going to be a super important skill for us, or fruit, actually; it's one of the fruits of the Spirit. It's going to be an important task for us to take on, you know, because I think in our church we have a lot of folks who struggle or have struggled with addiction, and these are all addiction-type things.

(01:00:09):
Oh yeah, they're all dopamine hackers. Yeah, for sure.
And yeah, I think the pornography stuff is definitely an addiction. I think even the social media stuff has addictive elements to it because, like you said, there's no natural boundary. And unless we are able to

(01:00:30):
control ourselves, which is really hard to do... it's very hard to have self-control all the time, in all areas of life. It takes work and effort. It takes habit building, which is not an easy process: repetition and reminders and, you know, discipline, sometimes imposed from outside of ourselves, so

(01:00:50):
that in our moment of weakness we have something that's keeping us from going to the liquor store and getting the drink or whatever. You know what I mean? Something that helps us to stay disciplined. I don't feel like we're very good at that. How do we foster those things?

Chris Nafis (01:01:06):
I don't mean to put you on the spot, but how do you foster self-control in congregants?

Derek Kubilus (01:01:12):
Well, as a good Methodist, I'm going to say a big part of that is holy conferencing. The method to Methodism is meeting with a small group of people, hopefully every week, and talking about how that's

(01:01:32):
going, essentially, and asking questions around: what harm have you done, either to yourself or to someone else, and what good have you done? Where are your bad habits? Where are your good habits? How are those things going? And it's tough right now.

(01:01:58):
I think in this society there are some good things that are happening, along with some bad things.

(01:02:24):
We've seen lots of examples of poor behavior modification that relies on shame and fear and anxiety, and we're starting to call those things out now. I see that especially among young people, who are talking a lot about the way people in authority try to shame

(01:02:47):
them into doing not necessarily the right thing, but the thing that they want them to do.
So in light of that, we need to start growing new ways based on community: boundaries that we enter into willingly in order to

(01:03:11):
consciously shape ourselves.
So essentially, what that means in the church is you say: I am going to make myself accountable to this community for the way I show up in the world; I recognize that this community has something to teach me about how to walk this

(01:03:33):
journey of life and how to build virtue, over and against other communities that might shape me into making money, or succeeding in the corporate world, or finding the perfect mate, or having the perfect body.

(01:03:54):
This community exists only for the sake of helping shape my soul, and so I am going to submit to it, and to the process of being formed by it, even as I help form other people.

(01:04:16):
And that's about the most radical thing I think you can do in a society like ours.

Chris Nafis (01:04:24):
It really is, and it's a hard thing to do. So actually, just in our Tuesday night Bible study, we finished a book that we were reading through, First and Second Thessalonians with a commentary, and so I was like, all right, what are we going to do next? Let's try out the Wesley questions. Which, I'm sure you know them, Derek.

(01:04:45):
But for those who don't: John Wesley, the founder of Methodism. We're in a Wesleyan denomination, the Church of the Nazarene, and he's a hugely influential theologian; you can look him up if you don't know who he is. Hopefully most people know who John Wesley is at this point. But he had these questions that he would ask his accountability groups, his small groups, that they would run through, and they're very intense questions

(01:05:06):
about, you know, were you honest with who you are, or were you putting on a front? In other words, are you a hypocrite? I think that's like the first question; I didn't say it perfectly. And it was interesting to present these questions to our Tuesday night group, because there were definitely people in our group who seemed to be kind of triggered by the

(01:05:28):
questions, because they've had what you talked about at the beginning, that negative, shame-based reinforcement of, you are bad if you can't answer these questions well. And so I think we've got to find some ways to ask accountability-type questions of ourselves, but it's got to be voluntary, and it has to be asked in a way that's

(01:05:48):
like we're trying to shape one another positively and not just bash people, not set up another sort of legalism, another you're-not-good-enough-unless, but a sort of coaching.
You know, like this is us trying together to get better, to

(01:06:09):
do better, to be shaped in the ways that we need to be shaped by one another and by the Holy Spirit. But it's hard, I don't know. It's a hard sell on people these days. Do you find it different in your context?

Derek Kubilus (01:06:20):
Oh no, no, it's very difficult. I will say, we call them class meetings; a lot of people call them small groups or accountability groups. In my class meeting we have some rules that make it just a little bit easier. So we take that whole list of questions that Wesley had, and he

(01:06:45):
himself actually boiled that all down into three more general questions that we answer. What harm have you done? What good have you done? And the third one is more complicated: how have you attended upon the ordinances of God?

(01:07:06):
Which is, what have your spiritual practices been like this week? And so what we do is, first we answer the biggest question, a very famous Methodist question, which is: how goes it with your soul?

(01:07:29):
And then we go around and answer it, which is just a general, how are things going with you? And then we answer those three more specific questions. But the rule is, no one is allowed to comment on them.

(01:07:50):
You can ask a question, but you have to ask permission to ask a question, and the other person has to give you permission to ask it. Or if you're the one answering the questions, you can say, you know, what do you guys think about this?

(01:08:10):
I want to hear from you. And so your vulnerability is totally up to you. And that works really well in terms of getting people to share more deeply as time goes on, as they build up trust and

(01:08:34):
they become convinced that no one really wants to shame them or posture in front of them or anything like that. Now, it's not foolproof, but it just works a little better for me, I guess. Does that make sense?

Chris Nafis (01:08:53):
I like that. I actually wrote down the questions, because it happens to be Tuesday when we're recording this, and we've done the questions the last several weeks, so I'll probably bring some of that to my group tonight in a few hours. Yeah, I appreciate that, that's good. I mean, I know we started this conversation based around kind of the AI stuff

(01:09:14):
, but I think this is where I wanted to get with the conversation: how do we intentionally shape our church life, our own individual lives, our spiritual lives, in order to be ready to handle whatever AI throws at us? Because we don't know all that's coming.
And I do think that some of these kinds of practices, right,

(01:09:34):
like learning how to be intentional about who we are becoming, thinking about the different influences on us, and then being able to make decisions based on our sense of ethics and harm.
And you know, I think in some ways the answer is the church needs to just be the church and do discipleship well, and of course
(01:09:54):
discipleship well, and of course, that's like that's not
foolproof either.

Derek Kubilus (01:09:59):
But you know, you used the word intentional. It is about making intentional choices with regard to things. That's the number one thing that I'm walking away with from this conversation.

(01:10:20):
If we're going to learn a lesson from social media, it's that most people just aren't intentional about the way they use it. They use it the way they like to use it, the way that it appeals to them, but they don't intentionally make decisions

(01:10:42):
around ethics and virtue and holiness when it comes to the technology. With AI, I hope we can get to a place where it's not that we need to reject it outright, or embrace it fully, but that everyone sort of makes

(01:11:06):
choices about how to use it and about where to use it.

Chris Nafis (01:11:19):
And we can argue about those choices but ultimately respect them. Yeah, I think that's great. And, as I think we've kind of mentioned...

Derek Kubilus (01:11:25):
I don't think that's going to solve all the problems that are going to arise with AI.

Chris Nafis (01:11:27):
It's not going to save everybody's jobs or anything, but it at least gives us a path, as Christians who are trying to navigate a tricky new world that's emerging, for how we can navigate it well. And then, the politics of it and all the other sides of it, some of that we can maybe try to have some role of advocacy in or whatever, but I think most of it's going to happen whatever we do.

(01:11:49):
I know I feel very powerless in the face of the onslaught of AI that's coming our way. And so you've got to kind of let go of what you can't control in some ways, and then try to be faithful in the decisions that we do have.

Derek Kubilus (01:12:02):
You got it, you got it.

Chris Nafis (01:12:05):
Oh well, I feel like we could talk about AI for hours, and there's a chance I may talk more about it in future episodes. But I really want to thank you for coming on, Derek. It's really good to get to spend some time with you. It's been fantastic.

Derek Kubilus (01:12:20):
Thank you for asking me.

Chris Nafis (01:12:21):
Thanks for doing it. I hope you have a great evening over there. Any final thoughts? Anything you want to send us out with, or have the last word on?

Derek Kubilus (01:12:30):
No, I just think... I'm scared too. I think it's okay to be scared, because there are a lot of scenarios that can go very poorly for a lot of

(01:12:51):
people. But what gives me comfort is the fact that I'm part of a community that not only offers me support but offers me guidance on how to deal with things that seem very new.

(01:13:13):
And the church has seen, not AI, but a lot of new things over the last 2,000 years, and we've come up with some pretty good wisdom for how to deal with them. We've talked about a lot of that here today, and that gives me a lot of comfort.

Chris Nafis (01:13:34):
Yeah. Well, thank you, Derek, I appreciate it. That's a great final word. For those of you listening, thank you for giving us a little more of your time. Don't forget to subscribe if you haven't, or share this with someone who's thinking about AI, and we'll catch you next time. Thanks, Derek, thanks, man.