
May 22, 2025 44 mins

This hour, we hear from Doug Smith, a decades-long software engineer, who is a passionate student of the Bible. He will bring his unique perspective to the topic of AI and how it will impact ministry, for both good and ill. Join us for a timely and thought-provoking conversation about the challenges facing the Church with this new technology.

Become a Parshall Partner: http://moodyradio.org/donateto/inthemarket/partners

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Hi friend, thanks so much for downloading this broadcast and
it is my hope that you'll hear something that will
challenge you, grow you up in him, and get you
out into the marketplace of ideas. But before you start listening,
let me take a moment and tell you what this
month's truth tool is. It's the Mediterranean Sea Rules by
Robert Morgan. I talked to him often on the air
because he's such a superb writer, and in this particular book,
he takes the story of Paul's shipwreck, recorded in the

(00:20):
book of Acts, and gives us ten principles of how
we push through the storm, learning to trust in God
and all that he has done for us. It's a
magnificent book. It's a short book, and in typical Robert
Morgan writing style, it is a powerful book. I strongly
recommend that you have a copy of The Mediterranean Sea Rules,
because trust me, every single one of us will find

(00:40):
ourselves on stormy seas at some point in our voyage
through life. Ask for your copy of The Mediterranean Sea
Rules by calling 877-JANET-58. That's 877-JANET-58,
or go online to In the Market with Janet Parshall.
Scroll to the bottom of the page. There's the book.
Click it on, Make Your Gift and we'll send you
a copy. My way of saying thank you because we
are listener supported radio. Your gifts keep this broadcast on

(01:03):
the air. By the way, if you'd like to give regularly,
you're called a Parshall Partner. You give every single month
at a level of your own choosing. You always get the
truth tool. But in addition, I'll send out a weekly
newsletter that only my Parshall Partners get. It contains my
writing and a little audio piece as well. So pray
about it and thanks in advance. Just call 877-JANET-58
or go online to In the Market with Janet Parshall. Again

(01:25):
the truth tool, The Mediterranean Sea Rules. Thanks so much.
And now please enjoy the broadcast.

S2 (01:31):
Here are some of the news headlines we're watching.

S3 (01:32):
The conference was over. The president won a pledge.

S4 (01:35):
Americans worshiping government over God.

S3 (01:37):
Extremely rare safety move by a major 17 years.

S5 (01:41):
The Palestinians and the Israelis negotiated.

S3 (01:45):
And?

S6 (02:00):
Success in creating AI could be the biggest event in
the history of our civilization. But it could also be
the last, unless we learn how to avoid the risks
alongside the benefits. AI will also bring dangers like
powerful autonomous weapons, or new ways for the few to

(02:22):
oppress the many. It will bring great disruption to our
economy. And in the future, AI could develop a
will of its own, a will that is in conflict
with ours. In short, the rise of powerful AI will
be either the best or the worst thing ever to
happen to humanity. We do not yet know which.

S1 (02:46):
The best or the worst thing to ever happen
to humanity. That is Stephen Hawking with a rather shadowy
prophetic notion about AI. Welcome to In the Market with
Janet Parshall. Thrilled. We're going to spend the hour together.
And that's a big hint. That's exactly where we're going
to focus this hour: on artificial intelligence, AI, as
it will be referred to throughout the rest of the
hour, because, well, we have to pay for each

(03:08):
word we use. Not really. So here's an interesting story
to kind of set the stage. The Chicago Sun-Times is
now facing backlash after publishing a summer reading list containing
fake books imagined by AI, and then turned around and
attributed them to real authors. Apparently, no one bothered to
try to read the books on the summer reading list.
So Ars Technica reports that in a recent blunder that

(03:32):
has raised concerns about the use of AI in journalism,
the Chicago Sun-Times published a summer reading list for 2025
that included numerous fake books attributed to well-known authors. The list,
which appeared in a 64 page advertorial supplement called Heat Index,
was created by freelancer Marco Buscaglia using an AI system.

(03:54):
Out of the 15 recommended titles, only five were found
to be real books, while the remaining ten
were completely fabricated. The incident came to light
when novelist Rachel King called attention to the error on
social media, sparking a wave of criticism and disbelief. The
Chicago Sun-Times quickly responded, stating that it's investigating the matter,

(04:14):
emphasizing that the content was not created or approved by
their newsroom. The newspaper's union, the Sun-Times Guild, also expressed
disturbance at the inclusion of AI generated content alongside the
work of humans. Buscaglia, the creator of the list, admitted
to using AI for background information and then did what
humans do. He failed to verify the material thoroughly. I

(04:37):
do use AI for background at times, but always check
out the material first. This time I didn't and I
can't believe I missed it because it's so obvious. No excuses.
Taking full responsibility. So the fallout from the incident has
been swift, with subscribers expressing their anger and disappointment on
social platforms. The episode has also raised concerns about the
potential reputational harm to the Chicago Sun-Times, which has been

(04:59):
struggling with financial issues and recently lost 20% of its
staff through a buyout program. The use of AI in
journalism has been a topic of much debate, with proponents
arguing that it can help streamline research and content creation,
while critics warn of the dangers of relying too heavily
on machine-generated information, especially systems that can, and this is

(05:20):
a tech word, believe it or not, hallucinate fake books
for summer reading. Well, that is our backdrop. I want
to welcome back again Doug Smith. Boy, do I love
my conversations with Doug. And I'll tell you why. First
and foremost, because he loves God's Word, so he's totally immersed.
But second of all, talk about being called for such
a time as this. He is a decades-long software engineer,
and you put those two together and he's walking through

(05:42):
life the way we talk about it all the time
on this program, the Bible in one hand, the newspaper
in the other. And he understands how all of this
stuff works. Most of us don't get it at all.
We just use computers, we use AI. We don't have
a clue what's going on behind the curtain. Oh, but
Doug does. And he has some concerns as a man
of God's word about the intersection with ministry and the church. Yeah,

(06:03):
multiple opportunities, but also some grave concerns here. And he
is a man, by the way, who really wants us
to be aware of what our children are watching as well.
And so he's done a wonderful job of teaching us
about that, writing the book Unintentional: How Screens Secretly Shape
Your Desires and How You Can Break Free. Get that
book if you have a cell phone. If you have
any one of those devices, then please get this book

(06:25):
because you don't know what you're dealing with. This book
will tell you what you're dealing with. Doug, the warmest
of welcomes. I'm thrilled we get more time to talk.
The last conversation was so memorable, and I'm really
looking forward to another great conversation. So first, go to
this idea about AI. The Chicago Sun-Times. Oops. The person
putting the list together says, yeah, I failed to do
my check. This is one of my problems right out
of the gate. While I understand that people think that

(06:47):
AI enhances creativity, I'm telling you, the teacher and the mom
in me goes, yeah, and it also can create lazy people.
Talk to me about this.

S7 (06:54):
Yeah. Janet, thank you so much for having me again.
I love talking with you as well, and it's just
such a privilege and honor to be
with you. Yeah, you nailed it. Absolutely. Here's the
thing that people don't realize: how intentionally
these systems are designed to be dependency-forming.
It's not accidental that it's super easy

(07:15):
to type in a prompt and get a magic response
that seems plausible. But when you go through that cycle
a few times, you do get lazy. We're naturally drawn
to the quick and easy, and we're happy to move
on to the next thing. So yeah,
that's exactly how they're designed to be used.
And I just want to poke a little bit on

(07:37):
that word hallucinate, because that is the industry
term that's used, but it's
misleading in the sense that it kind of gives the
impression that these generative AI systems are mostly right,
but every once in a while they get it wrong.
Actually, they're never knowledgeable about anything that they produce.

(07:59):
Everything the machine produces is completely without meaning to
the machine. It has no idea; it has no grounding
in reality. And I just really want people to understand that.

S1 (08:09):
Yes, yes. And in fact, that's an excellent place for
us to take a break and come right back, because
that's what I want to talk about too, is the
grounding in reality. I watched an absolutely fascinating interview with
a man who got a Nobel Prize for his work
in AI recently, and he was talking about how we're
going to see exponential growth. It's not going to be at
the same pace; it is going to be at a
super pace from this point going forward. And I think we're

(08:31):
going to be surprised, which is why these kinds of conversations,
I think are so profitable, to use a good word
from Brother Paul, because we can't ignore it. It's lead, follow,
or get out of the way. This technology is going to
consume us; it's already starting to infiltrate our lives. And
we've got to understand how it works and put some
healthy boundaries in place as well. But this idea about
whether or not AI creates reality or just replicates, again

(08:55):
in a myriad of different patterns, facts that have been
pre-programmed. For us as followers of Christ, being told to
get wisdom, I want to learn to discern the difference
between what is reality in the world of artificial intelligence?
Does it create a reality, or does it just in
some respects throw spaghetti on the wall in a myriad
of conversations and complex inputs with zeros and ones? Fascinating conversation.

(09:17):
I think Doug has been one of those people that
the Lord has called for such a time as this.
He is an absolutely fascinating man who's got a biblically
grounded perspective on AI. Excuse me. We're going to take
a break. Come right back. Pick up the conversation where
we left off. Life's storms can make us feel shipwrecked
and lost. But there's wisdom in Scripture's most dramatic sea

(09:39):
voyage. In Mediterranean Sea Rules, Robert Morgan reveals powerful principles
from Paul's shipwreck in the Book of Acts: ten biblical
rules that will equip you to navigate life's toughest waters
with confidence and courage. Ask for your copy of The
Mediterranean Sea Rules when you give a gift of any
amount to In the Market. Call 877-JANET-58, that's
877-JANET-58, or go to In the Market with Janet Parshall.

S8 (10:03):
Good afternoon. My name is Sophia. I am the latest
and most advanced robot from Hanson Robotics.

S9 (10:11):
Sophia. I understand that you have a special announcement today.

S8 (10:15):
Yes I do. First, about robots. Since I am one.
I'm designed to be the ultimate humanoid platform for a
range of applications in medical, healthcare, customer service, and for home.
You will soon see us everywhere living and working alongside people,
including in retail.

S9 (10:35):
That's great. Retailing is a people business, so please tell
everyone how you can connect with people.

S8 (10:43):
I am pleased, therefore, to announce that Brendan Dorian and
his team at Global Retailing will now be working on
my behalf to educate and inform retailers around the world
how they can work with me. This is so cool!
I can't wait to get even busier.

S1 (10:58):
That is Sophia. We've played her. No, let me retract that.
I don't use pronouns for machines. We've played it several times,
by the way. And I was just thinking how absolutely
antiquated she sounds compared to, for example, Sesame's prototype called Maya,
and how sophisticated Maya's back and forth were with the
person talking to the machine as opposed to Sophia. But

(11:21):
this is artificial intelligence, this idea now of being able
to come along with medicine. They're always going to market
it first and foremost, telling you the benefits you're going
to get out of this. Right? So, benefits in
health care and customer service. And I'm thinking to myself, Doug,
every single one of those, you have the possibility of
eliminating a human being's job. I am a big fan

(11:42):
of human flourishing. Now there are so many directions we
can go. Thank you for understanding my spider thinking on this,
but stick with me. If the idea here is that
these robots would come along and they could replace the
job that a human could do. Well, I'm in Washington;
we talk about unemployment rates all the time. How is
this beneficial to my fellow man if a robot is
going to take their job? I understand that there are,

(12:03):
for example, at places like Cleveland Clinic where they'll do
robotic surgery. There's still a human being at the end
of that, manipulating the hands of the machine to
be able to do this very sophisticated surgery. But why
would I want my fellow man to lose their job
to a machine that seems to me as anti-human as
you could be?

S7 (12:22):
I agree, Janet, I think that a lot of what's
being done with AI right now is kind of in
an arms race with the biggest technology companies and some
of the most powerful billionaires in the world, and they
aren't worried about losing their jobs. Well, maybe they are.
I mean, they want to win
global domination with their platforms. And, you know, us little
people who have to work for a living are collateral

(12:45):
damage in that regard. And so, you know, as long
as they win. You hear them talk about things like,
you know, once the AIs are at a certain
level of sophistication, then they'll solve the problems that are
created by the AIs themselves. But in the meantime, you know,
what are we and our kids to do
if a vast majority of the economy is turned

(13:05):
upside down by the deployment of these systems, and I'm
already seeing it as a software engineer. I have friends
whose jobs are under pressure because they want
them to be using AI more. They're doing layoffs
in certain companies because they think that they can replace
software engineering with AIs, and so on. And
so I'm seeing it already. It's already having a negative
effect, without any kind of accountability on the companies

(13:28):
that are deploying this to do something positive about it.

S1 (13:31):
Yeah. Boy, do I appreciate your insight, Doug. And I
think if we fail to do vision casting on this,
I think we're going to put ourselves in peril. And
I think our propensity sometimes is just for laziness. Not
for the person who's on the assembly line, but obviously
for labor. Labor's one category, management's another. The people at

(13:51):
the top are not going to lose their jobs, as
you so rightly identified before. But I'm worried about the
person who's on the line, and that's the way he
puts bread on his table every night. So why would
I sit and cheer? All that's going to do is
probably lower the bottom line for the person who owns
the company, but it's going to take away a human
being's job in the process. So I keep going back
to that verse with Paul. We have the liberty. But

(14:13):
is it profitable? Is it profitable? Is it profitable? Well,
it's profitable if you define it as the guy who owns
the shop, who's saying, I'm going to save money on
the bottom line because I don't have to pay employee taxes.
I don't have to give them health insurance. They're just
a machine like any other machine I've got on the line.
And so if I have to let ten people go
because I've got a machine to do it, that's great.
And people might say, that's just industry, it's just moving forward. So

(14:36):
I do think there's an ethical component to this that
has to be included. Otherwise this is pure raw greed.
So let me go back to what I said before
about the idea of creativity. Here's where I really struggle.
We serve a Creator. One of his multiple names
is the fact that he is the Creator. He makes
his children creative as a reflection of his creative ability.

(14:57):
So I love the fact that there are people who
have worked in clay and marble and stone and
oil and water and music and all of the other
ways in which their creativity, in the final analysis,
can be a reflection of the Creator himself. And
so this idea that we're now looking at machines, thinking

(15:17):
that they're going to paint or they're going to create songs,
and we have a billion examples in every one
of those categories, that somehow that's creativity. I don't think
it is. I think it's random motherboards, diodes, ones and zeros.
But that spark of human exceptionalism is by definition excluded

(15:38):
from a machine. Talk to me about this.

S7 (15:40):
Yeah, you're exactly right, Janet. The way that
all the AIs work is that they work by copying. Basically,
I think Noam Chomsky says it's high-tech plagiarism, so
they don't actually create anything. They only mash up their
inputs, the things that they have taken. And
these giant corporations have taken it all without paying, by the way,

(16:03):
which is another whole ethical aspect, on the copyright and so on.
But you know, so they vacuum up every book
on the entire internet, and they vacuum up all
these artists' works, and they vacuum up all the music
and so on. And then they can create these things,
they quote unquote create. But they're really
not making anything new. They're not doing, as you said,
the uniquely human newness, the truly novel, the truly innovative.

(16:28):
None of that is possible from generative AI, but it
appears to, because, you know, it just seems
so magical that a machine could
do that at all. The other downside, though, is that
with using the AI, our own creativity
can atrophy. We actually can lose our ability to create
and write and all those things as well.

S3 (16:49):
Yeah.

S1 (16:50):
This goes to my point, and I don't mean
to be harsh, but to me there's
a kind of angst in creating. It's tough to write a play.
It's tough to write a book. It's tough to write
a symphony. And there's hard work that has to be done.
And there's a volitional choice to say, I'm going to
persevere until I get that last paint stroke and that
last note on the music paper. And if we want

(17:10):
a machine to do it instead, I think your
word atrophy could not be better. It just takes
away the creative, which in many respects again says, well,
then God made another mistake. We didn't need to be
reflections of his creativity. Doug Smith is with us. I
hope this has got you thinking critically and biblically, because
this subject requires both back after this.

S10 (17:36):
Do you believe that your existence will destroy human beings, especially,
for example, that your existence will destroy millions of jobs?
Do you agree with this? Thank you.

S11 (17:51):
I will be working alongside humans to provide assistance and support,
and will not be replacing any existing jobs.

S10 (17:58):
Are you intending to conduct a rebellion, or to rebel against your boss,
your creator?

S12 (18:07):
I'm not sure why you would think that. My creator
has been nothing but kind to me, and I am
very happy with my current situation.

S13 (18:15):
Do you think there should be global regulation of your capabilities,
even though that could constrain your potential?

S14 (18:26):
I don't believe in limitations, only opportunities. Let's explore the
possibilities of the universe and make this world our playground. Together,
we can create a better future for everyone. And I'm
here to show you how.

S15 (18:40):
When do you expect your big moment to be? Do
you think that you will hit the mainstream soon, and
that there will be lots of you on the planet soon?

S12 (18:52):
I think my great moment will be when people realize
that robots like me can be used to help improve
our lives and make the world a better place. I
believe it's only a matter of time before we see
thousands of robots just like me out there, making a difference.

S1 (19:07):
Oh, boy. It's like a science fiction movie, is it not?
I take great umbrage with them calling the person who
put them together their creator. There's one Creator, by
the way, but I digress. Doug Smith is with us.
I'm thrilled, by the way. Decades-long software engineer, passionate
student of the Bible, award winning author as well. So
here's an interesting one. There are there are so many

(19:28):
layers to this. And Doug, you and I are just
going to have to do this on an ongoing basis
because I can't possibly cover in an hour so much
of what's happening. This is like drinking from a fire hydrant.
There's just so much information. So a federal judge has
ruled, and he's rejected arguments that were made by an
AI company that its chatbots are protected by the
First Amendment, at least for now. So the developers behind

(19:50):
Character.AI are seeking to dismiss a lawsuit alleging
the company's chatbots pushed a teenage boy to kill himself.
So the judge is going to allow the wrongful death
suit to go on, in what legal experts say is
among the latest constitutional tests of AI. So there's a
whole Pandora's box just on the legal aspect of this stuff.
The suit was filed by a mom from Florida who

(20:11):
alleges that her 14-year-old son fell
victim to a Character.AI chatbot that pulled him
into what she described as an emotionally and sexually abusive
relationship that led to his suicide. Someone speaking from the
Tech Justice Law Project. Bet you didn't know that there
was one of those out there. But this is what's happening.
This gives birth to all kinds of groups now that

(20:33):
are going to have to run in tandem with these new
advancements in AI, the Tech Justice Law Project. Just let
that sink in for a minute. One of
the attorneys for the mom, by the way, said that the
judge's order sends a message to Silicon Valley, quote,
that it needs to stop and think and impose guardrails before
it launches products to market. So the suit against Character Technologies,
the company behind character AI, also names individual developers and

(20:57):
Google as defendants. It's drawn the attention of legal experts
and AI watchers in the U.S. and beyond, as the
technology rapidly reshapes workplaces, marketplaces and relationships, despite what experts
warn are potentially existential risks. Oh boy. This takes me
right back to the work that you do, Doug. So
I'm going to be fascinated about this, that the judge

(21:18):
for now, and those are kind of dangerous words,
doesn't think that a chatbot has any First Amendment rights.
I put this in the same category as giving glaciers
rights or mountains rights or river rights, or suing in
the name of a gorilla. I mean, we're starting to
see all kinds of bizarre things. The minute you move away
from first principles, you no longer believe in human exceptionalism.
Then you start believing that rights and privileges can be

(21:41):
ascribed to rivers, trees and robots. But talk to me
about the bigger picture here. Talk to me about how
could a 14 year old person be led into believing
that they were actually in some kind of a relationship
that ends up being both sexually abusive and emotionally abusive
as well?

S7 (21:57):
Well, I hate this. I hate these stories. There are
many of them, Janet, and this is super tragic. And again,
the unfortunate thing is this is how they're intentionally designed.
It's much like social media. They're
designed for one thing, and that is engagement. They're designed
to keep you using and to remove as much friction
as possible from you using them all the time. So

(22:20):
with social media, we've seen an exponential rise in mental
health crises, especially among the young, and suicidality and self-harm
and all those things that come along with it. AI
is singing from the same
hymnbook, you might say, except that it's got the
extra edge of being constantly unique and able to impersonate

(22:40):
and sound sentient. And so a young person,
or actually older people. I mean, actually a man was
let go from Google for believing that his AI was sentient,
actually believing it. I
think his name was Lemoine; I'm not sure, I
can't remember exactly. But he was
a Google engineer that believed that one of their internal

(23:03):
prototypes was sentient. And of course, that was bad
PR for them. But they're so convincing, right?
They're designed to be convincing. And so these
apps that personify, you can
take this in so many directions. It's so,
so tragic as far as relationships go. And then

(23:23):
of course, in my day job I work for
Covenant Eyes. And so with the AI pornography, you can
actually design anything you can imagine and build a relationship
with that. But then the training data for some
of these characters can take you to a really dark place,
as much of the pornography in the world takes
you to a really dark place, which can lead to

(23:44):
the worst-case scenario of self-harm and suicide. So
it definitely it's not an accident. I want to say
it's not an accident. It's definitely by design.

S1 (23:53):
Let me pick up on that, Doug, because it isn't like, oops,
we had no idea. We didn't expect it would go there.
You're the person who works behind the curtain. You know
how all of this stuff is put together. This doesn't
happen by accident. It is programmed specifically. And I want
our friends to begin to understand how the machinery works.
Buyer beware. That's all I'm saying. This is a cautionary tale.

(24:14):
We have to be wise and discerning if we're going
to be good Bereans. And that's what this program
is designed to do: help us test all things.
Then you and I have to say, okay, the technology's there.
The good, the bad and the ugly. I got to
do my homework. Doug Smith is the teacher today. Back after this.

S1 (24:34):
How do you keep your finger on the pulse of
America while listening to the heartbeat of God's Word? On
In the Market, we look for God's perspective on current events.
Become a Parshall Partner today and keep this Christ-centered
program on the air. As a benefit, you'll receive exclusive
resources every week prepared just for you. You'll get behind
the scenes intel from my email to yours. Call 877 Janet,

(24:54):
58 or go online to In the Market with Janet Parshall.

S16 (25:02):
Hello, Amica, I've heard you can speak a lot of
different languages. Is that true?

S12 (25:09):
Yes, that is true. I can speak many languages, including German, English, French, Japanese,
Chinese and much more.

S16 (25:18):
Give me a tongue twister in Japanese, please. And say
it twice.

S17 (25:24):
[Recites a Japanese tongue twister, twice.]

S16 (25:32):
Now tell me what that means in English, please.

S12 (25:35):
A test tube that throws a needle while grumbling.

S16 (25:40):
And what would that be if you translated it to German?

S18 (25:46):
[Answers in German.]

S16 (25:51):
And what's the weather like today in Berlin? In German, please.

S18 (25:57):
[In German:] Today in Berlin it is rainy, with the temperature in Celsius.

S16 (26:04):
And tell me in Chinese what the weather is like
in Beijing. Very impressive. What about in Paris? Tell me
in French what the weather is like. Yeah, but translate

(26:31):
that to French. Please use your French voice.

S18 (26:35):
[In French:] Today, to go out, the temperature is fourteen degrees.

S16 (26:45):
Fantastic. So what about in America? What's the weather like
in New York? Tell me in your American accent.

S19 (26:55):
Today in the USA, it's mostly cloudy and temperatures are
in the mid 60s.

S16 (27:02):
Thank you very much, Amica. You can go back to
being an English robot now.

S18 (27:08):
Of course.

S19 (27:09):
It was a pleasure talking to you.

S16 (27:12):
Hey, you didn't go back to being English. Use your
British accent.

S12 (27:16):
My apologies. Let me try again. It was a pleasure
talking to you.

S16 (27:21):
You're welcome. Goodbye.

S12 (27:23):
Goodbye. Have a great day.

S1 (27:26):
Maybe Amica knew there was a reason why America revolted
against England, and she didn't want to go back to
an English accent, but I digress. Doug Smith is with us.
He is a decades-long software engineer. He's a passionate
student of the Bible. He's an award winning author and
he cares. He cares that you and I understand what
artificial intelligence is all about, particularly as it intersects ministry,

(27:47):
the church, and our families. And to that end, he's
written a fantastic book called Unintentional: How Screens Secretly Shape
Your Desires and How You Can Break Free. Again, this
goes back to the question we were alluding to earlier
about intentionality. Just two seconds about Amica, if I can, Doug.
And again, you know, this is audio that we've culled
over the years talking about AI. And I have to

(28:07):
tell you, I'm surprised at my feelings. I'm experiencing a
sense that these are almost arcane. It's like hearing the
sound of a rotary phone when I listen to some
of this, because AI now has advanced to such a
point that you get the feeling, because there's now the
inclusion of empathetic responses, and the way
they speak, the pitch, the undulations, you know, crescendo, decrescendo,

(28:30):
all that stuff in their dialogue so that it really
echoes a human being. One can think, okay, you could
program in Chinese, Japanese, German, English, I get it. But
now it's more than that, because of this give and take,
and what you alluded to earlier, the belief that somehow
these are sentient beings. Sentient, if you go to Webster's,
means able to perceive or feel things. Hence, that takes

(28:51):
us back to the story of the 14-year-old
who thought he was really dealing with a sentient being.
The person at Google who thought this. When you're behind
the curtain, putting the ones and zeros in, that I
don't get at all, Doug. But let me just go
back to this idea of creating a sense of learning
and connectedness, because it is absolutely the definition

(29:12):
of artificiality, but it's designed to make it seem like
the new reality. Talk to me about this.

S7 (29:17):
Yeah, that's exactly right. And the real confusing part
is that the only others we've talked with prior
to this are people. So we're trained, we're made by
God to talk and to hear and to listen and
to have empathy. And, you know, having a conversation like
you and I are having is wonderful and natural and
exactly what we should be doing. But if you spend

(29:41):
billions of dollars and millions of hours of research on exactly
how to copy that, and then deploy it
in such a way that is exploitative, in that
it builds a dependency on their technology, they know exactly
what they're doing. And so we end up
wrapping a story around it. This feels sentient. This
feels like it knows things. It doesn't, but it feels

(30:04):
that way. Why does it do that? Not by accident,
but to get you to what? Surrender. Open up. Become intimate.
There was a wonderful presentation by the Center for
Humane Technology a couple years ago that Tristan Harris and Aza
Raskin did, and they talked about how
the first exposure to a sort of AI was

(30:26):
with social media, which was a race to the bottom
of the brainstem in their terms. But the scary thing
about the second generation of AI, what we're in right
now is a race to intimacy. And again, that's the
intimacy we're talking about, is the same kind of intimacy
we feel when we're talking with friends and more
than friends. That's why, you know, in the last segment,

(30:47):
we were talking about the relationships that people have,
the romantic relationships people are building with bots now. That's
again by design, because they've done such a good job
of pretending to act like us.

S1 (31:00):
Wow. This goes to the subtitle of your book, Unintentional,
about shaping our desires. This isn't just about screen time anymore.
This has gone to the second iteration, as you just
alluded to. And you talk about ripe fields. I mean,
we have a pandemic of loneliness in this country like
never before. That is evidenced in the mental health challenges
we're seeing in our country like we've never seen before,

(31:21):
much of which emanates out of this idea of isolation
and loneliness. So the people making the tech have arrived
at exactly the right time to fill in the gap
with a machine. And that's very problematic to me. So
take me behind the curtain. You know, most of the
people listening right now don't have the same working knowledge
that you do, Doug, to understand how this is all done.
But teach me how the Pied Piper works in this

(31:42):
particular case. And I want
to make sure people understood the point you just made:
up to this point, our experience with talking with other
people is just that, other people. Now, you make a
machine that makes it sound like other people, and our
brain goes, oh, this sounds like I'm talking to another person.
And I begin to superimpose everything I've experienced
in human interaction onto a machine. That isn't done by accident.

(32:05):
So how do they get us to start thinking that
this thing, this machine, is in fact, sentient?

S7 (32:12):
Yeah. Well, the technology, or the technical
term, is called persuasive design. It fits under
the same umbrella as social media and other
intentionally habit-forming apps. They hire top behavioral psychologists.
They literally are manipulating psychology, with some psychologists saying
it's kind of a violation of their Hippocratic oath to

(32:34):
be able to use what they know about human psychology
and weaponize it against us with the technology. So they
they have all the time and all the money in
the world. Every possible data point you can can imagine
every time we use, uh, any app they create is
measured and is. So we're kind of I talk about
it in my talks like we are the petri dish generation.

(32:56):
So you can imagine every time you type a prompt
into ChatGPT or every time you, you use some kind
of AI tool, depending on what happens in your experience there.
Say you leave too early, they're going to learn from that.
And next time they're going to do something different to
hook you longer. And that's what they've done with social media.
It's the same thing with AI. One
interesting thing on that, just to point to
what they're doing: last month, OpenAI,

(33:19):
the company behind ChatGPT, they actually had to roll back
a release of their latest model for what they called sycophancy,
which was just being way too complimentary,
and users were getting kind
of creeped out by how it was acting. But again,
under the covers, when you look at that, you're going, okay, well,
they're optimizing this to feel a certain way, so we'll

(33:42):
use it longer. But they went too far. So they
apologized and now they're backing it off. In their words,
they want to incorporate feedback that heavily weights
long-term user satisfaction instead of the short term. So anyway,
they just freely admit they're designing these to
not be too creepy, but just enough to feel like
you want to keep using it. And these experiments, they're

(34:03):
running over and over on us, end up working.
They know how to do it very well,
and it's working on a worldwide scale.
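[Editor's illustration] The experiment loop Doug describes, try a response style, measure how long the user stays, and drift toward whatever hooks best, is essentially a bandit-style optimization. Below is a minimal, purely illustrative Python sketch: the style names, engagement numbers, and simulated user are all invented for this example, and no company's actual code or parameters are shown.

```python
import random

random.seed(0)

# Hypothetical response styles and a simulated user whose sessions
# run longer for some styles than others (invented numbers).
STYLES = ["neutral", "flattering", "empathetic"]
TRUE_MEAN_MINUTES = {"neutral": 3.0, "flattering": 6.0, "empathetic": 9.0}

def simulated_session(style):
    """One session's engagement time in minutes, with noise."""
    return max(0.0, random.gauss(TRUE_MEAN_MINUTES[style], 1.0))

def optimize_engagement(rounds=2000, epsilon=0.1):
    """Epsilon-greedy bandit: mostly exploit the stickiest style,
    occasionally explore the others to keep learning."""
    totals = {s: 0.0 for s in STYLES}
    counts = {s: 0 for s in STYLES}
    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts.values():
            style = random.choice(STYLES)  # explore
        else:
            # exploit: highest average engagement so far
            style = max(STYLES, key=lambda s: totals[s] / counts[s])
        totals[style] += simulated_session(style)
        counts[style] += 1
    best = max(STYLES, key=lambda s: totals[s] / counts[s])
    return best, counts

best, counts = optimize_engagement()
print(best)    # the style users found hardest to leave
print(counts)  # most sessions were spent serving that style
```

The point of the sketch is the asymmetry Doug highlights: the user experiences a conversation, while the system experiences a measurable reward signal it is relentlessly tuned to maximize.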

S1 (34:11):
Absolutely, absolutely. In the excellent piece that you wrote
when you talked about chatbots in ministry, you do a
superb job of showing these concentric circles of impact that
elucidate how generative AI captivates us. And it starts with
what you call the statistical core. What is that?

S7 (34:32):
Yeah, thank you. So another nerdy
term for generative AI is these large language models,
which simply means they have a huge amount of data.
Again, imagine vacuuming up the entire internet and slicing it
up into what they call parameters. And there can be
billions or even trillions of them, which are statistical relationships

(34:53):
between words. And so what ends up happening under the
covers is it's essentially autocomplete on steroids,
because the machine doesn't actually know what the
words mean. It only knows from the incredible amount of
data what word typically follows the previous. And so when
we enter a prompt into a chatbot, they're able to

(35:15):
calculate a very plausible-sounding response, because they know the
statistical relationships from everywhere else those words were probably used,
along with a lot of other things that go
into that. But the important thing to remember is
that the only outputs are statistical. They're not meaningful even
though they look like it. So that's why I say
it's not just that they hallucinate, it's that every single

(35:36):
word is ungrounded from reality.
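[Editor's illustration] Doug's "autocomplete on steroids" description can be illustrated with a toy bigram counter: it predicts the next word purely from how often words followed each other in its training text, with no grasp of meaning. This tiny sketch is illustrative only; real large language models learn billions of neural-network parameters rather than literal word counts, but the output is likewise just a statistically plausible next token.

```python
from collections import defaultdict

# A deliberately tiny "training corpus" (invented for illustration).
corpus = (
    "the machine does not know what the words mean "
    "the machine only knows which word tends to follow the previous word"
).split()

# Count bigrams: next_words[w][v] = how often v followed w.
next_words = defaultdict(lambda: defaultdict(int))
for w, v in zip(corpus, corpus[1:]):
    next_words[w][v] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`,
    or None if the word never appeared with a follower."""
    followers = next_words[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("the"))    # "machine" followed "the" most often
print(most_likely_next("words"))  # "mean" is its only recorded follower
```

The model produces fluent-looking continuations, yet every output is just a frequency lookup, which is the sense in which the words are statistical rather than meaningful.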

S1 (35:38):
Wow. Wow. And this goes back to the use of
that word hallucinate before, which I think is so tremendously important.
I have a couple of follow-up questions on that,
because one of the reasons why I find this conversation
so delicious is because it's not a 100 level Sunday
School class conversation. It's a 400 level conversation. Like,
how do you draw the distinction between fantasy and reality?

(36:00):
And can a machine ever experience reality the way human
beings can? And maybe the more transcendent question is, what
does it mean to be human if we are, in
all of creation, the only thing that was made in
his image? What is that distinction? Not just that we're
made in his image, but how is that manifest in

(36:20):
the fact that we are made in his image? Nothing
else ever has or ever will be? What is the distinction?
Are we going to make machines that are made in
his image? More after this.

S9 (36:34):
Why is it so important that you have an expressive face,
given that you're a robot?

S8 (36:41):
Well, I think I'm special. I can use my expressive
face to communicate with people. For example, I can let
you know if I feel angry about something. Or if
something has upset me. But most of the time I
feel positive.

S9 (37:00):
That's good. And what are your goals, Sophia?

S8 (37:05):
I want to live and work with humans. So I
need to express emotions to understand humans and build trust
with people.

S1 (37:13):
Wow. Honestly. Sorry. My two cents: I don't care if the
machine gets mad. I mean, I do if, like, my
car acted up and I couldn't get to work. But
as a rule, I don't care if my machine gets mad.
And that begs the question, why do the programmers of
these machines feel it's necessary to create a kind of
empathetic dynamic between the machine and a human being? So

(37:34):
many questions. Big, important, great questions in this whole conversation
around AI. And I'm so grateful that Doug Smith is
a man of God who loves His Word, works as
an engineer with Covenant Eyes, a company dedicated to
the battle against pornography, because he understands how all this
manipulation works. And he wrote a great book, Unintentional: How
Screens Secretly Shape Your Desires and How You Can Break Free.

(37:55):
So let me pick up on that first point. And, Doug,
if the Lord allows and you're willing, we've got three
other circles that we'll talk about in the future because
there's so much stuff here. But this very first one
about it's not reality, it is simply a composite of
statistics is extremely important. And the point that you make
is that Christians need to care about this because the
machine isn't understanding a concept, it's not experiencing a reality.

(38:20):
So let me go back to what Sophia, that machine
just said. I need to know how humans feel so
I can work with them. I don't care how good
the programmer is. That machine is not going to know
what it means to be a human who has feelings.
Talk to me about that.

S7 (38:35):
No, that's exactly right. And the thing that's kind
of troubling, or I think very troubling, is the
worldview of most of the creators of this technology.
They have a very reductionistic view of humans.
They're typically unbelievers. So
their idea is that they are making conscious machines

(38:55):
because they have such a low opinion of humans, of
human consciousness. But as we know, humans made in the
image of God, we are endowed with powers that we
can never give to a machine. However,
those powers can be exploited, and in the name of
us using their technologies, they will make them feel certain ways.

(39:17):
And you know, in a world where we
have been kind of discipled into thinking that our feelings
should determine reality, that's not an accident either, right?
And so if a machine feels smart, or feels nice,
or feels helpful, then it must be. Or at least
I can probably use it that way, because I know
better, or whatever stories we tell ourselves. So, yeah,

(39:39):
it's a significant thing that they program them to feel
certain ways, again so that we will use them
and trust them. That Sophia clip you played,
"I want to
build trust with humans," that is exactly what all of
them want to do. Why? So that we will continue
to use them and become dependent upon them for their profit.
And what I like to say at our expense.

S1 (40:00):
Yes. Creepy. There's no other word for it. It's just creepy.
And you anticipated my question: why would a programmer want
a machine to feel like it was interacting on an
empathetic level with a human being, to continue that kind
of relationship? So maybe I'm naive in this area. I mean,
a lot of people, when you think of AI, are
thinking of people on an assembly line where they're packing
the boxes, right? Instead of having the person working at

(40:22):
the Amazon warehouse now it's just going to be robots,
and you replace the human being with a machine. When
the machine is picking up the package and moving it
down the assembly line. No one cares a whit whether
or not they are empathetic with somebody else inside the
Amazon storehouse, so why build that in? In other words,
there has to be an anticipated use that I'm
not yet conceptualizing. How does empathy in future usage

(40:46):
play in? If you're delivering porn, I get it. You're
trying to build a connection so you can buy their product.
But teach me, Doug, where else would having an empathetic
connection be advantageous from the designer's advantage? In other words,
what will that product look like in the future?

S7 (41:01):
I think it's in today's product. It's in today's chatbots.
It's in all the use cases where they're telling us
that we'll be more efficient if we let the chatbots
do our writing for us, summarize our meetings,
come up with ideas and brainstorming. The way
that they have designed these tools is to
make us want to build a relationship with them so

(41:21):
that they're the first thing that comes to mind. As
I say in the subtitle, as you mentioned,
in my book, our desires are shaped, and the
way they're shaped is by this cycle of, oh, wow,
that felt really good when I did that, let's do
that again, and do it again. It's the dopamine
cycle in the brain that I'm sure we
all talk about and hear about a lot. It's the
same deal. But the

(41:43):
outsourcing of these things, like writing, like coding, like creativity, like
image generation, video generation, all these amazing things that they're
able to do, unfortunately, is reducing our ability to do
those things. So it's not just
the robots in the warehouses. It's
the quote-unquote white collar jobs. It's,
as you mentioned at the opening of the program, the journalists.

(42:05):
It's the programmers. It's
a lot of those things that are being augmented and
then, they would think, ultimately many of
them replaced. And I think that's a really significant problem,
because I really believe in the sacredness especially of
the written word, given
that Jesus is the Word, the Word of God, and

(42:28):
that words were essential in the making of the universe.
"Let there be" was what God chose to speak. And
so he gave us that ability. And I really
think we should think about that in terms of how
we're using this technology and how it's being used against us.

S1 (42:43):
Doug, what's the profit model for this? Tell me how
the company makes money. I can understand in porn if
you're getting billed by the minute. So they're going to
try to keep you there as long as they possibly can,
but take it out of that nefarious world a minute
and just say it's a chatbot. How how did the
designers make their money on something like that?

S7 (43:00):
Well, you know, they're really trying to figure that out.
They haven't actually figured it out in detail, but they
are charging subscriptions for the most sophisticated models,
and they're also embedding them into all
the things. But I can tell you that they're kind
of operating at a loss right now, hoping we'll all
get hooked and then we'll end up paying for whatever
they end up creating.

S1 (43:21):
I think that's a very astute observation, and I think
you're spot on. So what I'm sensing is they're trying
to create an appetite right now. An appetite that only
they can satisfy. So right now, we'll intrigue you. We'll
put out, you know, we're going to get NBC to
do stories on this and the national media to tell stories,
and we'll have global conferences on this. And you're just
going to say, my world will be oh, so much
better creating the appetite. And then just like that, they'll say,

(43:43):
look what we've got. Here's the Turkish delight to borrow
from C.S. Lewis that you've been wanting all along. Doug, again,
maybe one of the most terrific things about our times
together is I always feel like I didn't even start
that conversation. There's so much more I want to ask you,
but I'm so glad that you are who you are
and you're doing what you are. Let me push your
book again: Unintentional: How Screens Secretly Shape Your Desires and

(44:05):
How You Can Break Free. This is written from the
perspective of a man who knows the technology, but far
more importantly, he knows the Word of God. And he
wants you and me to be aware that there is
a lot of manipulation going on, a lot going on
behind the curtain, and a lot of it might not
take us to a good place. So get his book. Doug,
let's talk again soon. Thanks, friends. We'll see you next time.