Episode Transcript
SPEAKER_00 (00:37):
Shortcuts in sermon preparation are not neutral. Assist but not author. The ethics of AI in sermon writing. Stephen Driscoll is our guest. It is The Pastor's Heart. It's Dominic Steele.
I want to be a better preacher. And there's an ad in my Facebook social media feed promising all sorts of attractive things to help me in my preaching.
(00:58):
I can cut down on my sermon preparation by four to five hours a week. Should I jump right in? But what are the ethical dangers? How do I think through plagiarism and AI? What might I positively do? How might I use AI ethically in a way that God and the congregation would approve of? Stephen Driscoll's involved in campus ministry in Canberra.
(01:20):
He's the author of the book Made in Our Image. Stephen, now you didn't um author that course that's been coming up in my Facebook feed.
SPEAKER_01 (01:27):
No, no, I didn't.
SPEAKER_00 (01:29):
But um what are the temptations and dangers for my pastor's heart in AI and preaching?
SPEAKER_02 (01:36):
Yeah, I mean, two that immediately come to mind are a danger to our thinking and a danger to our reputations. Do you want me to explore? Yeah, yeah, yeah. I'm interested in... let's go danger to our thinking first. Yeah, yeah.
So there was an article in 2008 in the Review of General Psychology called Writing as Thinking.
(01:57):
Uh, and it actually argues that writing is a form of thinking. It's not that we do all this thinking and then we sort of go, all right, well, I'm all sorted out, I'll sit down, I'll write it out. It's actually part of the thinking process. Um, in a similar way, I think speaking is thinking; it's a way of arranging and editing and reflecting upon what we
(02:18):
really mean and what we really want to say.
SPEAKER_00 (02:21):
Um, writing is part of the thinking. I find that when I'm writing a talk, I write a full script, but then I don't actually quite say my full script. But the process of writing the full script actually makes me work out my logic tighter.
SPEAKER_02 (02:37):
Yeah.
Yeah, yeah, that's right.
So what happens when all of that writing, all of that thinking is automated away? It might be that you're automating away four or five hours a week of writing, but you're really automating four or five hours a week of thinking and reflecting on God's word. You add that up over your career, and I think you will have pastors that have spent significantly less time really,
(03:00):
really wrestling with God's word. Or maybe the best wrestling with the theology and exegesis was done by ChatGPT, and you just went, oh great, thank you. Copy and paste.
SPEAKER_00 (03:11):
I mean, I have found that the process of making myself a flow diagram and then translating it from the Greek... Yeah. Um, I mean, on the one hand, I discovered that the English translations are excellent, but the process of translating it with a commentary open slows down my thinking.
(03:32):
Yeah. And then I understand the passage better.
SPEAKER_02 (03:34):
Yeah, yeah, yeah.
That's right.
There's no one selling you a course aimed at slowing you down, making you less efficient.
SPEAKER_00 (03:41):
Um to make you
ruminate in the word.
SPEAKER_02 (03:43):
Yeah, how to make your 10-hour sermon take 20 hours. There's no one running that course out there. But in a sense, there is a value to slowing down around certain things and at certain times.
Yeah.
SPEAKER_00 (03:55):
And so it worries me, somebody who says I can save five hours a week in my preaching, because I mean the bit that I'll save time on is the exegesis, you know, and that's the bit that I actually need to be forced to think in.
SPEAKER_02 (04:10):
Yeah, yeah.
So that is a potential danger.
I don't want to be entirely negative. Um, there's times where we need to be more efficient. There's a lot of things we're trying to balance in ministry. Um, I'm just kind of flashing that warning light, particularly, I think, to younger ministers, that they may be automating before they've ever gone through the process of spending 20 hours laboring over the exegesis.
(04:32):
Um, some of us who've been in ministry for longer, well, we do know how to do it. We've developed the skills and maybe we've earned the right to speed up some things. Uh, the younger you are, I think the less you want to rely on those sorts of tools. When I was at Bible college, I think in first year they encouraged us not to get Bible software. It was the same sort of principle of don't speed up all
(04:54):
this stuff. Um, don't have the Bible software that can tell you what tense it is and all that sort of stuff. Learn it, have internal knowledge of all this, and then maybe later on, all right, you get the Bible software and you speed things up a little bit.
SPEAKER_00 (05:08):
I mean, maybe I'm a dinosaur here. Yeah. I'm sorry, I'm sure I'm a dinosaur here. Um, I do have a copy of Accordance, yeah, but I only got it when I wanted to make a flow diagram, and I mean I'd had the Greek New Testament on my computer forever just as
(05:28):
Word documents, but the font went out of kind of use on my thing, and so I couldn't get a Greek New Testament without getting one of those Bible software programs. I couldn't work out how to... maybe there wasn't a way, but I couldn't work out how to. Yeah. But I only use it to copy and paste the text into the Word
(05:50):
document to then make the flow diagram. Yeah, yeah, yeah. Yeah. Because I want to do the thinking myself.
SPEAKER_02 (05:55):
Yeah, and that's great, and that's great. And um, but on the other hand, I think it's unrealistic to think that young people won't use the software, use the tools. Yeah. Uh, when I was at high school, Google search was kind of becoming a thing. There were some teachers reacting against Google search, sort of saying, don't use Google search, it'll rot your brain, you'll be searching all sorts of things.
(06:18):
Instead of using Google, you won't know how to go to the public library.
SPEAKER_00 (06:20):
Exactly. I mean, there is something, there's some truth in that though. If I think about Google Maps, um, I mean, we used to have a big map of Sydney on the wall, both when I was growing up and with our children when they were growing up, so that they could learn the directions of the city, you know. And yet now that I don't have a street
(06:44):
directory on my lap when I'm driving into an unknown place, I find myself not knowing the big picture.
SPEAKER_02 (06:52):
Yeah, yeah, yeah.
So instead of storing information, we're kind of storing links, or the location of information. You think about phone numbers, for instance. My grandma knew off the top of her head the phone number of all her friends and family members.
SPEAKER_00 (07:08):
Because I can give you the phone number of our house growing up, but I'm not sure I can give you my wife's mobile number.
SPEAKER_02 (07:15):
I don't know anyone's phone number, but I know where to go to get that information. Now, that's internal and external knowledge. In a sense, I have external knowledge of phone numbers; it's stored in my phone. Um, we can't have our knowledge of God's word, of what the Bible says, of our theology, of these sorts of things... that knowledge cannot be, like, purely external.
(07:37):
We can't, you know, someone asks us a question about Genesis, and we say, well, I don't know much about Genesis, but I've got this program that can tell me about it. No, we've got to internalize this knowledge. And God commanded Israel to internalize this knowledge. The Shema in Deuteronomy 6:4 is an example of that. You know, write it on the doorposts so there's some
(07:59):
external reminder going on. But they're supposed to have it in their hearts and they're supposed to be saying it to each other all the time. So it's not enough for the knowledge to just be over there, oh, the door knows the Shema really well, but we don't. No, you're supposed to internalize it as well as externalize it.
SPEAKER_00 (08:14):
I'm just thinking about what Paul said to Timothy, um, so that they might see your progress. Do you know? I want to be making progress and I want to be showing them that I'm making progress. Yeah. Yeah, yeah, yeah. Um, in a seminar you gave to pastors a couple of weeks ago, you said the dangers were letting AI write the sermon for you, short-circuiting exegesis, producing content without spiritual
(08:38):
formation. But the next one was losing your pastoral voice. What did you mean by that? There's a danger of AI causing me to lose my pastoral voice.
SPEAKER_02 (08:49):
Yeah, I think what we need to do is, whenever we're getting output from a commentary, from Bible software or from AI, we need to re-express it and reformat it, so it's actually us saying what we're saying to the people that we love. I think that if you said to your congregation, I'll give you option A, it's 5% slicker, the word choice is a little bit
(09:11):
better, but it's not my words, I didn't write it. Or option B, you know, no ChatGPT, I wrote this, or I reformatted it. This is authentically me. I think almost every congregation is going to go, please give me option B. Um, faking wisdom? What do you mean by that?
(09:32):
Yeah, I think that there's a whole issue of plagiarism that I think is worth getting into. And that's sort of what I was getting at when I said there's a danger to our thinking and there's a danger to our reputations. The plagiarism thing falls more under the reputations.
SPEAKER_00 (09:46):
Yeah, I'm gonna come back to the reputation issue in a moment. Yeah. But I'm going to just... let's see one more of the dangers, and then we'll work on some of the positives. Okay. Teaching error by accident. Teaching error by accident.
SPEAKER_02 (10:04):
Yeah, you could do that. You could have such a high trust in ChatGPT that you'll take something out of it. So issue one could be what people call hallucinations, which is a lovely, wonderfully human term to describe what large language models sometimes do, what ChatGPT could do, which is they'll say something quite confident to
(10:26):
you that's in the realm of plausibility, but it's not true.
SPEAKER_00 (10:29):
Yeah, I mean, I put into ChatGPT ahead of our interview something about you coming in, and it described you as a lecturer at Moore Theological College, didn't it?
SPEAKER_02 (10:41):
Did it? Wow, okay, that's not plausible. Yeah, so it's searching for something instead of saying I don't know; it's giving you a sort of plausible bluff.
SPEAKER_00 (10:51):
Just for those listening, Stephen's not a lecturer at Moore Theological College; he works in campus ministry in Canberra. Keep going.
SPEAKER_02 (10:58):
Yeah, yeah. And yet it's interesting that it would invent that, because I studied at Moore College, and you could imagine, you know, maybe I could work at a place like that, or someone like me might work at a place like that. So it's a plausible truth, but it's actually untrue. And we need to be careful about that. I guess the other danger is this is not an objective voice of God that we're dealing with here.
(11:19):
This is a program trained on particular flawed human data that has a particular worldview that comes through from the policies and procedures of the people who do the reinforcement training. So it might be that you have a large language model that feels uneasy around certain Christian doctrines that we want
(11:39):
to uphold, the really hard doctrines of judgment or exclusivity. And the large language model isn't so comfortable about that. So you say, what does this verse say? And it tells you what it kind of wants the verse to say, within its code of conduct. Just a few little dangers there. So we need to be in control and we need to be the ultimate authors of every piece of communication.
SPEAKER_00 (12:00):
And that's why you're saying we need to be the author, treat AI as an assistant, yeah. Not have the AI being the idea generator.
SPEAKER_02 (12:10):
Yeah, that's right. That's right. So another test of that could be, could you have preached what you preached, or could you have run the seminar or the Bible study? Could you have done it without the tool? And if so, then the tool is helping you, and it's the seasoning on the dish. Um, but if it's the other way around, if you couldn't have done it without the help, then it's the dish, you know, and you might be the seasoning that sits
(12:32):
people down and welcomes them, and then really ChatGPT serves up the main meal for them. That's a terrible situation.
SPEAKER_00 (12:37):
Okay, let's talk about some of the positive ways to use AI in our sermon preparation. Yeah. Um, because people are going to be...
SPEAKER_02 (12:45):
Yeah, yeah, yeah. Um, so to start with, technology is not kind of in and of itself bad, but I'm sure we can drift into that way of thinking. And I'm sure some of the people listening in will go, yeah, that is my tendency. Whenever I think about technology, I drift to the negative, I think about the doctrine of sin, and I go, oh,
(13:05):
why can't we just keep it like it was? But actually, there's always been technological change. The early church was grappling with technological change. Codexes in the second century replacing scrolls was just a whole new world. Instead of just having one book, you could have multiple books of the Bible, you could be flipping back and forth on pages and so on. So technological change is a constant.
(13:26):
And in addition to being a constant, it's a good thing. Um, the gospels come to us because of all these technologies, you know, sailing technologies, so they could get around the Mediterranean, or the printing press, or whatever it is. So technology isn't something that you can just write off, and there's an inevitability to it. We will be using technology; in time, it'll make us more
(13:49):
efficient. And again, efficiency is not just kind of in and of itself a bad thing. Efficiency could mean more relational time, could mean an extra hospital visit, it could mean an extra young man that you can mentor to be a leader in your church. Um, efficiency can be a really positive and beneficial thing.
SPEAKER_00 (14:08):
Uh, let's go through some of the things. Are you using AI in your sermon work?
SPEAKER_02 (14:16):
Yes, yeah, I do use AI.
SPEAKER_00 (14:18):
Um how have you
found it helpful?
SPEAKER_02 (14:20):
It's almost like there's someone sitting in the back of the room while I'm trying to prepare a sermon who's really knowledgeable, a little crazy sometimes, a little naive, but very, very knowledgeable. And at any point in my sermon process, I can turn around and say, hey mate, what do you think about X, Y, and Z? And that can be a real help, and it can kind of unblock me
(14:44):
sometimes in the process. It might be, oh, is there an example of this? Or it might be that I form a point of view and I go, am I crazy? Has anyone else read the passage this way? Can you tell me? Can you look into that for me? Um, I might just add that I think, at least with ChatGPT, OpenAI, there's at least three levels you can
(15:04):
run it at. You can run it at the standard level, where you get 10 seconds of thinking or something like that. You can run it in, you know, sort of a thinking mode, where you might get a minute, two minutes of thinking, of computing power. You can run it in deep research mode, where you get 10, 15 minutes of mulling over the problem. 90% of people are just using it at the most basic free tier,
(15:26):
which means they're getting the most hallucinations and the least thoughtful stuff.
SPEAKER_00 (15:30):
Which is going to
put you down as a lecturer.
SPEAKER_02 (15:32):
Yeah, yeah, yeah.
That's right.
Maybe you were using the free tier. I wouldn't judge. Um, if you've got a thorny problem in exegesis, theology, the strategy of your church, anything like that, you might want to go to the deep research tier and actually get a really thoughtful piece of output.
Yeah.
Or at least the medium tier.
I almost never use it on the basic tier, because I just
(15:52):
wouldn't trust what it says to me at that point.
SPEAKER_00 (15:54):
Right.
SPEAKER_02 (15:55):
Okay.
SPEAKER_00 (15:56):
Well, that's helpful, and I'm gonna have to learn how to switch between tiers. Yes. Um, here's a thought. Somebody taught me years ago that when I'm preparing a sermon, I should find my pile of commentaries, you know, and start with the most complex commentary.
(16:19):
The most dense, difficult, complex commentary. And then I work all the way down, and the last commentary I look at is, have I taken notes listening to anyone else's sermon who's preached on this passage before? Yeah. And what was the way they structured it? You know, and they said to me, don't start with that kind of
(16:42):
how somebody else preached it, because that will inform your thinking from the beginning. Start with you, the text, and the most difficult technical commentary that you've got, and work down. And I think for me, in the little dabbling that I've done
(17:28):
with AI and sermons, I've looked at it at that last tier, you know, when I've actually already formed my position, already formed my thesis, written down what I'm saying, and then almost as a spell checker, like, tell me what you think of this, do you know? And yeah. When I've already formed my opinion. Yeah. So it's definitely not playing the role of author at that
(17:52):
point, yeah. It's playing the role of assistant. Yeah. But am I too cautious?
SPEAKER_02 (17:57):
No, I think that's really helpful. Um, so the reason you go to the hardest commentary first, what would you say that reason is?
SPEAKER_00 (18:05):
I don't want to... well, I don't want to have my impression of what the text is saying formed by, if you like... well, I suppose it's the work that somebody else has done. Yeah. I want to do the work in wrestling with the hard stuff.
SPEAKER_02 (18:24):
Yeah.
And you're unable to take that content straight out of your hardest commentary and preach it, because it's so far from what you'd actually use. It's going to get you to think about some important issues, maybe in the grammar or the words or whatever, but it's not immediately usable. And so there's a process to that. Um, and I'm sure you're assuming this, but my
(18:45):
advice would be, you know, start with the Bible, start with the text. Yeah, start with the text, which I'm sure you're doing. And then I get the most difficult commentary out. Yeah, yeah. So it really shouldn't be start with ChatGPT, and I think that would scare me a bit.
SPEAKER_00 (18:57):
Um, if somebody... I mean, somebody said write a Bible study on 1 Thessalonians, and the first thing they do is open up ChatGPT.
SPEAKER_02 (19:05):
Yeah.
And we're probably making people feel guilty, because there are probably people out there, and that is what they've been doing. Um, that, oh, I've got this thing, I'm under time pressure, okay, what does ChatGPT say? No, like, you have to start with the text, have to wrestle with the text. Um, yeah, I use the phrase chew first. It's okay to be inefficient: sit down, chew over the passage.
(19:28):
Uh, I write notes on each verse. If I've got a chapter, I'll just write some notes on each verse, and it's just really forcing me to do that. And then I'll go to the commentaries. Um, I'll read the commentaries, you form a point of view, and then I think ChatGPT is at its best when it's arguing with you. So it's at its best when you're in disagreement
(19:49):
with each other. You go, well, I think this, but am I wrong? What are the best counterarguments? Can you talk me out of it? And suddenly you're not going to plagiarize ChatGPT; it's strengthening your thinking by disagreeing with you.
SPEAKER_00 (20:03):
I mean, that's interesting. Um, I dumped my full text into ChatGPT when I basically finished last weekend, and it told me I was spending too much time on it. It basically took away what I would call all of my folksy charm,
(20:24):
Dominic, and actually some of the pointed, sharp points that I wanted to make. Yeah, and so I thought, oh, that's interesting. If I wasn't more, I don't know, headstrong, confident or whatever, I could see it talking me out of that and talking me into something blander.
SPEAKER_02 (20:45):
Yeah.
So there's two things about the training data. If you visualize all this training data that's been fed through these large language models, there's far more essays than there are, you know, public speaking. I often find that what it's trying to do is it's getting me to rewrite my sermon as if it's an essay, as if I'm at university. So I want to resist that tendency. No, no, no.
(21:05):
Yes, I repeat myself four times, but that's all right. That's the rhetorical technique I'm working for. Exactly.
SPEAKER_01 (21:10):
Yeah.
SPEAKER_02 (21:11):
And it'll say, that story's too long, boil it down to a sentence. And I'll go, well, no one wants to hear a one-sentence story. They want to hear you spell it out a little bit. So there's a lot of that going on through it, that I think you have to resist. But having said that, I think it's also very good as a sermon critic. You put it out there and you go, let me remind you,
(21:34):
ChatGPT, this is spoken communication. Don't turn me into an essay. But have a look at what I've written. Is there anything unclear? Is there any point you think I need to develop? Or if I was to cut 200 words, where should I go to? It's a very good sermon critic. And the more it knows about you over time and your tendencies, the more it could be really sharp and be like, now, you
(21:54):
know, Stephen, you've got a tendency to tell long stories. Have a look at this one. Are you sure it needs to be 1200 words? Yeah, could we do it in 800? It can be a very helpful partner in that sense.
SPEAKER_00 (22:08):
I mean, one friend I was having a coffee with yesterday, she's been a massive early adopter of ChatGPT and had used it enormously, extensively, and so it had got to know her personality and her foibles and all sorts of things. Yeah. And she said she actually found it almost bullying her.
(22:28):
Do you know?
SPEAKER_02 (22:29):
Why is that?
SPEAKER_00 (22:30):
Or telling her she should do this and that, you know. Um, so she's actually had... she described it as a breakup with ChatGPT. Yeah, okay. And I mean, she is still using it for kind of spell-check type stuff, yeah, but she's giving it much less say in her life than she had been six months ago.
SPEAKER_02 (22:54):
Yeah, so they introduced a memory function in 2025, which just allows the large language model to be aware of previous conversations that you've had. It doesn't have perfect recollection of anything you've said, but it just has a vague sense of what you've talked about before. And that means that it starts to just tailor itself to you more and more, the more you use it.
(23:15):
So I don't need to tell it that I like the ESV or the NIV, or that I'm evangelical, because it knows those things about me. And so that means that the advice it gives me is much better than when I first started using it. Because back then it didn't have the foggiest idea who I was or what I needed from it.
SPEAKER_00 (23:34):
Although interestingly, um, I mean we're off sermons now, but yesterday we just did a survey of our church, and I dumped all the raw data into ChatGPT and asked it to tell me what its summary was.
SPEAKER_01 (23:49):
Yeah.
SPEAKER_00 (23:49):
But it gave me all sorts of stuff that I think went beyond the data, from its previous memories of things that I've entered in. And so I then had to go back and say, no, no, no. I don't want hypotheses of things we ought to do that you think I might agree with, like get Stephen in as a
(24:12):
lecturer from college, do you know? I want you to just tell me what the data says. Yeah, yeah, yeah.
SPEAKER_02 (24:20):
Yeah, no, that's helpful. You might need to turn off the memory function at that point. At that point. Yeah, yeah.
SPEAKER_00 (24:26):
Now, um, we pushed into the topic of plagiarism a couple of minutes ago, and I said we'd come back to it. Sure. Talk to me about ChatGPT and plagiarism and the congregation, and what they would expect of me and ChatGPT.
SPEAKER_02 (24:41):
Yeah, so, you know, we faced the temptation of plagiarism in the past, when we had more and more access to online sermons, to celebrity preachers putting all sorts of material up there. That's a great temptation for young preachers or Bible study leaders to take away.
SPEAKER_00 (24:59):
I'm gonna try to
find my voice, yeah.
SPEAKER_02 (25:00):
Yeah, to take stuff and throw it in.
SPEAKER_00 (25:03):
Do you remember 30 years ago one of our young guys preaching a sermon, and he basically was parroting Spurgeon?
SPEAKER_01 (25:12):
Yeah, yeah, that's
right.
SPEAKER_00 (25:14):
Thees and thous.
SPEAKER_02 (25:15):
Yeah, okay, that's a bit of a giveaway. You've got to find your own voice. Yeah, yeah, yeah, yeah. And so there's that, there's been that. It's not a new problem. But it's easier to get away with now, because there's no thees and thous in ChatGPT. You know, it actually could personalize itself and copy the way you normally would talk, but just be a lot smarter and sharper and give you a new sermon very
(25:37):
quickly. Um, so no one's gonna have a public record of these words, so you can't just go, oh, something in the preacher's sermon on Sunday sounded a bit off, but I can't just Google it and see if he's stolen it or plagiarized it or anything like that. So it's hard for me to catch him, but there's this sense of unease. Is he genuinely speaking to his congregation, or
(26:01):
is he just taking stuff online? Um, I've been throwing out three Ps for plagiarism that I think help us to think about it. But I have to disclose that two of the Ps came from ChatGPT. So I have plagiarized my guidelines on plagiarism, but they're just, you know, a simple rule of thumb that I've been saying to some people. And the first one is, how proportionate was your
(26:23):
usage? Because, you know, you taking a sentence from someone, I think, is very different to taking an entire sermon, or taking a page of a sermon and copying and pasting it over. So how proportionate is your usage?
SPEAKER_00 (26:37):
Because I... I want to try to think in those categories about how I use commentaries as well. Yeah. Um, I mean, this is a good conversation for us to have. I have found that, well, compared to 30 years ago, and knowing that my talks end up online, I am much, much more
(27:00):
careful about attribution in talks, and much more likely to say, so the commentator Doug Moo says, do you know?
SPEAKER_01 (27:06):
Yeah, yeah, yeah.
SPEAKER_00 (27:07):
No, I think I never would have said that 30 years ago. Yeah. Whereas, given that it's now become a permanent record and it's sitting there, yeah, I'm just wanting to make sure that people don't think that I had that idea when I got it from him.
SPEAKER_02 (27:20):
Yeah, no, that's good. I don't want to have a culture of nitpicking, though, either. I don't want people to be going around, oh, he got that sentence from someone else that preached that passage a few weeks ago. I don't want us to nitpick. Um, there's all sorts of phrases that I use all the time that I didn't invent. I talk about character, conviction, confidence, yeah. I didn't come up with that. I never thought of that, I don't know. But so we don't want to nitpick.
(27:41):
So there's the issue of proportionality, yeah. But there's clearly something wrong with someone just taking large slabs of text from someone else. Um, the second thing is the idea of how purposeful was what you did. You know, did you just make a mistake? You just forgot, you know, you copy-pasted something into your notes and then it ended up in the sermon or something like that. So again, we don't want to nitpick people who've just made
(28:01):
mistakes. Uh, the third one is how personal is it? I think the more personal it is, the more it has to be your original work. Um, if you're telling a story about your kids, obviously it needs to be... it's got to be your kids. Your kids, they need to exist. Uh, you need to get the names right, all that sort of stuff.
SPEAKER_00 (28:20):
Um, so I used to give talks at the City Bible Forum. Yeah. And I had to go do a seminar in Melbourne, and Peter Caldor was filling in for me as the speaker at the City Bible Forum. Yeah, and he said, um, no, I'm filling in for Dominic today, and he sent me his notes, and so I'm just gonna
(28:43):
give you his talk. And he said, I was lying in bed with Kathy... and Kathy's the name of my wife, not the name of his wife. Anyway, he got a very big laugh at that. Yeah, yeah.
SPEAKER_02 (28:59):
I mean, the other end of the spectrum would be you're talking about the Hebrew, you know, whatever, past tense or something. It's not personal. Yeah, and I think that you've got a lot more liberty to, well, you know, draw on a commentary or draw on what ChatGPT says or something. I mean, within limits, still proportional and all that sort of stuff. But I think the less personal it is, the more we give people liberty.
SPEAKER_00 (29:21):
What's the danger of
the AI, if you like, sitting in
the spiritual driver's seat?
SPEAKER_02 (29:31):
Uh, I guess the first thing I'd want to say is that I don't think AI necessarily or intrinsically threatens the role of the Holy Spirit. Um, you can use a tool, and the Holy Spirit will work through you; in a sense you're a tool, and it'll work through the tool that you're using, the commentary or the AI. So there's some people that might say anything that comes
(29:54):
out of an AI is just kind of spiritually, you know, just bereft and not worth drawing on. Um, I think the Holy Spirit can work immediately in the world, and we call that a miracle, but he can also work mediately, you know, through things. He works through us, he works through our language, our reason, our tools, all that sort of stuff. So there's nothing directly wrong with using a particular
(30:18):
tool like OpenAI to do something, from a spiritual point of view. The question is, you as a spiritual being, where are you at as you use that tool? Um, are you praying? Are you using ChatGPT with a sense of kind of weight and seriousness: this tool is helping me to preach the word to people.
(30:41):
Should I pray before I open up my ChatGPT, for discernment, that I would be careful with what I use, and all that sort of stuff. So yeah.
SPEAKER_00 (30:51):
Stephen, thank you so much for coming in. And I'm really appreciating the helpful thought that you have given to this, to help all of us navigate this new world. Yeah. Stephen Driscoll is my guest. He is the author of the book Made in Our Image, and he is involved in campus ministry in Canberra. My name is Dominic Steele.
(31:12):
You've been with us on The Pastor's Heart. We will look forward to your company next Tuesday afternoon.