
October 3, 2024 48 mins

Ben Harris is a computer scientist and business professor researching LLMs. Ben and Marlin discuss AI optimism (and pessimism) with a focus on the nature of artificial so-called intelligence.

Statement on AI risk

CNN report on CEO survey

Business, Math, and Righteous Living with Dr. Benjamin Harris – Episode 004

Into the AI Flood by Ben Harris

This is the 235th episode of Anabaptist Perspectives, a podcast, blog, and YouTube channel that examines various aspects of conservative Anabaptist life and thought. 

Sign-up for our monthly email newsletter which contains new and featured content!

Join us on Patreon or become a website partner to enjoy bonus content!

Visit our YouTube channel or connect on Facebook.

Read essays from our blog or listen to them on our podcast, Essays for King Jesus

Subscribe on your podcast provider of choice

Support us or learn more at anabaptistperspectives.org.

The views expressed by our guests are solely their own and do not necessarily reflect the views of Anabaptist Perspectives or Wellspring Mennonite Church.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
One of the things they have to do is write down: here are the rules of the game, and here's what doing a certain good thing pays you, right?
They call it a payoff function.
And so if you take the opponent's pawn, a little piece, here's the payoff.
Or if you take their queen, it's a bigger payoff.
And you can define that in chess, you can actually define that.

(00:21):
You can make a system that does very well.
But think about a dating relationship or a marriage.
We pretty quickly realize, in our attempts to quantify and define value, that we're in a different layer of abstraction.
These don't go together.

(00:47):
So today on Anabaptist Perspectives,
I'm joined by Ben Harris,and we're going to be diving into
the ontological limitsof artificial intelligence.
Ben, you want to start with a
little introduction,and we'll jump into the topic after that.
Sure, Marlin.
Good to be with you. So,my name is Ben Harris.
Hi, everyone.

(01:07):
So I'm a professor up at SattlerCollege in Boston, Massachusetts.
I coordinate the business program up here,
but my background comes out of the engineering world.
I spent more than a decade working in machine learning, artificial intelligence,
just many of the classical engineeringdisciplines.
So, I think it's hard to define what an expert in the field is, but

(01:29):
I spent a long time thinking about it and continue to do so, because it affects us not only back in the technical world, but also in academia.
We contend with AI every day.
So, Marlin, it's good to be with you.
Yes. Thanks for coming on, Ben.
Excited to have the kind of engineering and computer background that you bring to it.

(01:49):
So to start with some of the drama around AI:
a little over a year ago, in May of 2023, there was this famous statement on AI risk, signed by a bunch of the big names.
And their short version was: mitigating
the risk of extinction fromAI should be a global priority

(02:14):
alongside other societal scale riskssuch as pandemics and nuclear war.
And so that got a bunch of press.
I found it interesting.
A few weeks later, there was a CNN report on a survey of a number of CEOs.
They got 119 responses.
And out of those, 50 of them said,

(02:37):
yeah, AI could potentially destroy humanityin the next 5 to 10 years.
And the other 69 were unconcerned.
So I guess we can start with: where are you at on that question?
Your view of artificial intelligence and that kind of dramatic fear statement.

(02:58):
Oh, that's a classic question for anybody who thinks about AI.
I am not what we call an AI doomer.
That's probably where you'd put the Geoffrey Hintons of the world; he's had a much longer background here, and this is his view.
I do not see AI, within that short period of time, 5 to 10 years,

(03:20):
as being an existential threat to humanity.
Both for a theological reason, I think, like God has defined the end of humanity for us, but also because when we look at the technology, the forecast of AI improving or growing to a certain level assumes no major hurdles that we encounter on the way.

(03:42):
And I just think we're starting to see the hurdles.
Right?
At the moment we cannot produce data fast enough to support these bigger and bigger AI models.
We're running out of raw materials for creating training chips and the systems that do this.
So we're starting to see the hurdles, and I think that's going to skew the forecast fairly dramatically.

(04:03):
I am not particularly concerned about existential issues,
partly because what we're going to talk about today is the ontology of AI, right?
The movie versions of AI being a threat to humans tend, in the narrative, to revolve around a moment when the AI realizes what it is and what it could be,

(04:25):
and it has this self-defensive response or reaction to humanity trying to shut it down.
I don't yet think that AI has an existential understanding of what it is.
You know, it probably can give you a text response, yes, I am an AI system,
but that does not yet imbue value in a self-defensive reaction.

(04:48):
Right? The kind that we naturally experience as humans.
So I think there's still a gap ontologically, as well as with what we would call artificial general intelligence, or AGI, right?
Systems that can think and reason for themselves.
I mean, it's still an ongoing open research area in what they call argument mining, right?
You can present a bit of text to an engine and say, is this a good argument?

(05:13):
And it has a really difficult time determining yes or no, whereas we as humans read that and say, oh, that's a terrible argument, or, that's a great argument and I find it very persuasive.
And so that's one particular gap I think about a lot.
But there are a number of them that, in my view, set a

(05:33):
fairly big chasm today between where AI is and anything that we would need to worry about from an existential standpoint.
So you're positing a chasm.
And you're saying those time frames are way overinflated, or, sorry, not overinflated.
Opposite of overinflated.
Way too ambitious, in a sense.

(05:54):
To push on it a little bit:
I still hear you using words like "not yet," implying that the time is coming; it's just not yet.
And I guess, even to push on that a little bit more, you know, we talk about things in computing like Moore's Law, which I think has a fairly specific technical definition.

(06:17):
But in a general sense, you know, computing power changes very quickly.
I mean, you even noted in an email to me, it seems like there are new AI applications coming out every couple of weeks.
so I guess
how would you respondto an argument like that that says, well,

(06:39):
you know, computing powerhas been doubling
frequently for the last number of years,and we're just going to see,
you know,we can't see what's around the corner.
You know, it's a fair question.
You know why? Why the "not yet"?
I think one of the reasons is, if you project out some number of years, you allow Moore's Law, which is beginning to show asymptotic behavior.

(07:03):
Right?
We're not doubling anymore, you know; our speeds are getting quicker, but we're running into computation issues with these kinds of things.
It's largely why GPUs have become the standard architecture for doing this kind of work.
One, the "not yet" is an admission that we don't know.
Right.
Can you create a technical system

(07:24):
that is a version of intelligence?
That's a bigger question than I think AI can answer.
That's a, you know, what has God defined intelligence as, and can you create a system that has that?
You know, we sort of think of it in a narrow band, in that intelligence is the ability to collect and synthesize

(07:45):
information, to answer a question or to develop something.
But there are other types. Right?
We have a sense of aesthetics.
Right? Humans look at what is beautiful.
We have a sense of awe that is uniquely human, right?
And if you ask a large language model whether a given painting is beautiful, or whether a piece of artwork

(08:09):
is just abhorrent to you, it likely does not have an opinion.
And if it does, it has to create it based on some quantitative aspect of the artwork.
It has to look at, okay, the color contrast is within this range, and most people think that means it's not a very attractive painting.
That's the way it has to think, because of its computational nature.

(08:31):
We don't think that way. Right?
We look at it. And what is it? What does it do to our spirit, to our soul, to our mind?
And we have a response.
And so, you know, I think that the "not yet" is a fair question.
I think we're also running into some mathematical challenges, in that the number of,

(08:52):
I'll call them synapses or nodes, that we'd have to simulate for actual human thought and cognition is still many orders of magnitude above what the most powerful systems can handle right now.
So even if Moore's Law holds, which is in question, we have a long time to go before we exponentially reach the point we would need to be at.
And who knows?

(09:15):
On the way, we may encounter a whole other obstacle we didn't anticipate.
So the challenge is that, when you make a forecast, you have to acknowledge what could cause the forecast to fail.
Right?
Things don't always continue as they were.
You look at, you know, the population explosion in the 1970s, right.
There was this huge concern

(09:35):
that it was going to lead to global famine and billions of deaths.
But that never happened.
And so we want to look back soberly at those examples, I think with a bit of wisdom,
and just encourage ourselves that the Lord has this in hand, right?
He's not going to let AI ruin his creation.

(09:58):
Okay, so a theological answer there.
Confidence in God.
Yeah.
And the title, Ontological Limits, kind of highlights the question I kind of want to press here.
if we talk about limits of AI,
maybe the first limitwe think about is technological.

(10:20):
Can we build it?
What can we build?
What kind of
computers can we build,what kinds of neural networks and so on?
Or epistemological: what do we know how to do? Knowledge.
When we say ontological limits, we're getting into, in some ways, the strongest of those terms, because we're trying to say:

(10:43):
what is the limit, by the very nature of what this thing is, that produces the limit?
And yeah, I think maybe you've hinted at some of that, but...
Yeah, kind of at its core,

(11:04):
what do you see as that biggest core limit, or way of talking about that core limit?
I think probably the most stark ontological limit of AI is just what we've created it out of, right?
This is a creation of, well, it sounds funny, a

(11:25):
creation of creation.
Right.
Our ingenuity has developed this, and it's immensely powerful in some areas.
Right.
I don't want to minimize its effectiveness and its aid in some things, but
it does not involve the supernatural, right.
It does not.
It is, by definition, a natural development.

(11:48):
It is limited by the laws that God has put in place in terms of physics and computation and information and mathematics. Right.
We look at questions like the famous halting problem in computer science, which we now know is an unsolvable question for a computer system.

(12:10):
And so computing is not a finished business, right?
It's not that we know how to do everything as long as we have enough power, enough systems, enough electricity.
We don't, and we can prove that we don't.
Not only that, but we never will.
Right.
Mathematicians have worked on that.

(12:32):
There are some fundamentally unknowable things in computation, in the technical fields.
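To make that concrete: here is a minimal sketch, in Python, of the diagonalization argument behind the halting problem. The halts oracle is hypothetical (the stub below is not a real implementation), which is exactly the point: no correct implementation can exist.

```python
def halts(program, data):
    """Hypothetical oracle, assumed to answer correctly whether
    program(data) eventually stops. No such function can exist."""
    raise NotImplementedError("no correct implementation can exist")

def paradox(program):
    # Ask the oracle about the program applied to its own source.
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt immediately

# Does paradox(paradox) halt? If the oracle answers True, paradox loops
# forever; if it answers False, paradox halts. Either answer is wrong,
# so a correct `halts` cannot be written. This is one of the provably
# unknowable things in computation that Ben refers to.
```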
And so I think you're going to encounter some of those questions in the growth of AI, based on its ontology.
It is bound by the laws

(12:53):
that all physical, digital systems are bound by.
Right?
It can only go so fast and get so hot before it breaks down.
It's limited by the laws of physics.
It is also limited by the ingenuity which humans can insert into it.
Right?
We create its steps, but

(13:15):
we are limited and finite.
So I have a difficult time imagining how a system is going to, by its own volition, exceed that, based on what it's created on.
Well, is part of the argument there,
especially for the, I don't know, either AI optimist or AI pessimist,

(13:35):
whichever one it is that has these very high expectations for AI.
I suppose it's more a question of your outlook, whether that makes you an AI optimist or pessimist.
But is part of the argument that, well, these are neural networks, they're functioning the same way as the brain?

(13:56):
We don't have to give it a precise algorithm, because it has more capacities for self-learning and self-adjustment.
And is the expectation there that something about that architecture is going to let it get past what normally applies to other computer systems, or...

(14:21):
It's a good question.
I remember there was a French institute some years ago that had begun to try and create neural networks at a scale that would attempt to approximate some of the cognition of the human brain.
And, you know, they had the resources of the French government; they had an immense amount of computational power behind them.

(14:43):
And even with all that, they were able to roughly get to, if I remember the number correctly, so don't hold me to the citation, something like 2 to 3% of brain function.
You know, this is immensely complex, because network complexity does not grow linearly.
It grows exponentially. Right?
To grow a system that can get strong enough,

(15:08):
to be a big enough neural network, even that is not going to give you human cognition.
And think about the use cases that we use neural networks for.
There are really two major ones in machine learning.
One is classification.
Basically telling, you know, you take as input

(15:28):
something and your brain tells youwhat the thing is, right?
Our brains do that naturally. You have image recognition software or other neural networks that do the same thing.
They take in inputs and they identify something.
The other one is regression.
It's making a relationship between two quantities, allowing you to do what we call an in-sample prediction.

(15:49):
Right.
If you know house size A and house size B, and you see something in the middle, what should the cost of the home be, right?
That's a regressive, I'm sorry, it's an example of regression.
It's not regressive, but
those are the two main neural network applications.
And there's variations on them now.

(16:10):
The base layers of those are what's been built up to create these transformers that LLMs are built on.
So they exist and they've been extended.
But I would say we are too far away from actual human brain function to predict that it will continue along

(16:32):
that road, even on an uninterrupted path, and finally get to how the human brain works.
There are going to be breaks or discontinuities in the progress, and who knows, some of those may scupper the whole thing.
Right?
You may only be able to get so far with the applications of AI.
You just can't go further.
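As an aside, the two basic tasks Ben names can be made concrete with a minimal sketch, assuming the scikit-learn library is available; the feature vectors and prices are invented toy numbers:

```python
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

# Classification: take inputs in, identify what the thing is.
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]   # toy image-like features
labels = ["cat", "cat", "dog"]
classifier = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(classifier.predict([[0.85, 0.15]]))          # -> ['cat']

# Regression: relate two quantities, then make an in-sample prediction,
# as in the house-price example: sizes A and B known, ask about the middle.
sizes = [[1000], [2000]]                           # square feet
prices = [150_000, 250_000]                        # dollars
model = LinearRegression().fit(sizes, prices)
print(model.predict([[1500]]))                     # -> [200000.]
```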

(16:53):
They estimate that the cost to train (training is what's most expensive right now for these systems), to go beyond GPT-4,
so from GPT-4o to GPT-5, 6, 7, is trillions of dollars per iteration:
to accumulate all the data, to do the training, the electricity cost to run the models.

(17:15):
At some point we're going to run out of money.
Humanity will either have to decide, okay, we're invested in this, or we're not.
We just can't keep spending that amount of resources to develop a system like this.
So fortunately, I don't have to make that choice.
I'm glad I'm not in charge of an AI company where I'd have to make that choice, but,

(17:37):
whether it's the actual technology, whether it's the mathematics behind it, or the funding to generate these things, any combination of those can cause this thing to fail.
So we almost have to have the perfect storm to go up the graph of progress towards human cognition, at least in my view.
Yeah. Those are helpful.

(17:59):
One point you made there at the end: those literal physical constraints.
I mean, we have all kinds of energy available, but as you try to exponentially scale up the amount of energy, you're literally running into, I don't know what the scale is, but it's registering on things like grid capacity and electricity generation capacity

(18:21):
and that kind of thing, at some point.
Not a small thing.
No. The other thing that was really helpful for me in what you said there was: AI is really only doing, at the base, two operations.
Classification, classifying objects. And it's gotten

(18:42):
sophisticated at that by training, by humans coding examples, and then it going off of those examples.
And regression problems, which I'm not familiar with the mathematics of, but

(19:03):
again, those are fairly simple operations.
I mean, it takes a lot of power to carry them out.
Fairly simple compared to the human mind and living a human life.
So, yeah, I think just breaking it down and saying it does these two things

(19:24):
and builds on them and puts them to good use, to me really helps to see the limits, or demystify things a little bit.
Yeah.
And one thing I would add in, too, is that you see, maybe on social media at points, some very AI-negative people, who say AI is not any good at anything.

(19:46):
And they'll bring up an example. Like, I saw one on LinkedIn the other day: there was a stone pillar in a field, and on one side of it was the rear half of a cow.
And the stone pillar was maybe ten feet wide.
And on the other end of the stone pillar was the head of a cow poking out.
Now we would say, okay, there are two cows in the picture, but an AI system might put a box around it and say,

(20:09):
here's one cow, and its length is 15 feet, right?
And so they would look at an example like that.
And humans know the error.
We see it and we're like, oh, okay. A computer system doesn't.
And so I think those examples get brought up to say, AI is no good, it's never going to be a threat.

(20:30):
But I think what it points to is there are limits, and easy mistakes that AI makes; especially in the early ChatGPT days it made tons of mistakes.
But humans will continue to try and sort of plug the holes in the dam.
Right.
We'll say, okay, here's an error

(20:50):
that we're getting ridiculed for, this thing that the system can't do.
We'll fix the thing.
And I'm led to believe we're going to keep finding more of those mistakes that humans have to fix.
And it may get better and better and better, but again, you're going to keep finding reasons to make fun of it, ultimately.

(21:10):
Now, that doesn't mean it's not powerful, or that it's not something to think about.
But I think the satirical view of AI is becoming more and more prevalent.
And I don't know what that's going to do to people's view of it.
Probably very little.
But I just want to try and think clearly about: what is the truth here about AI? Yes,

(21:33):
that's a silly mistake that it made.
But does that invalidate the whole project?
Well, of course not. Right.
Amongst the billions of things it can do, here's one that you thought was silly. All right. So move on.
Yeah, exactly.
I think just to pursue that a little bit, though.
Yeah, I like the case you've made, kind of,

(21:57):
limits and it's limited.
But would you agree that we can get a lot better at using some of the tools, even without really fundamental changes in capacity?
Like, for example, I was listening to an accountant, and his statement was: there's

(22:19):
going to come a time sooner or later where
for your basic bookkeeping, categorizing transactions, and so on,
Somebody is going to come up with a toolthat does that well enough
that it's barely worth your timeto have a human bookkeeper go through
in detail, because it's going to get it
close enough, especially for the purposesof small business.

(22:41):
It's going to get it close enough that
it's not going to matter.
And I don't know, that kind of strikes me as plausible.
It also doesn't strike me as requiring anything radically new from AI, maybe just some human ingenuity in how to apply it right to the process.
I don't know.
Does that sound like a fair estimate?

(23:02):
Oh, I think so.
I think with accounting as an application of AI, sure: the mechanics are not tremendously complicated.
You know, they take care and some background knowledge, but nothing about the mathematics is incredibly difficult.
But I think, like with that example, one of the constraints we're going to encounter is that at the end of it, you make a legally binding declaration.

(23:26):
Right. These are the truth of the accounts.
And so who is going to put their legal weight behind that?
Is it going to be the individual who's used the system, like, I certify this is correct?
Or are we going to try and pass that off to the AI system?
Like, well, no, the AI did it, so if there's a mistake it isn't my fault.
It's the AI's fault.

(23:48):
And if we do that, which AI companies are going to shoulder the legal burden for that?
I think it's going to sort of be the fingers-pointed-at-one-another issue with why this use case hasn't become widespread.
I think it's because no one is confident enough yet.
And I'll say yet, because I think a day may come when people will be,

(24:12):
and the dam will break loose.
But I don't think anyone yet is ready to sign up and put their legal business life behind this.
I think people are still waiting for more and more evidence that it's okay.
Yeah.
And that could apply to a lot of pieces where you have software doing a lot of it.
I mean, I'm thinking of friends that build roof trusses,

(24:35):
and this isn't necessarily AI, but the software is doing basically the engineering and saying, yes, this truss will meet specs, or it won't.
But if they're actually doing a job that requires an engineer's stamp, well, it's still got to be an engineer that does it, which is an extra step.
That liability part there. So, good.
Yeah.

(24:57):
Okay.
So I'm guessing, given the arguments you've made, that, you know, artificial general intelligence, actual purposeful behavior by AI systems and so on,
I'm taking it you're very skeptical of that.
Or very skeptical of that being anywhere close.

(25:22):
Yeah, I'd say that's true.
I have a good degree of skepticism about that.
I think humans have struggled to define what AGI actually is.
Right. How do you test it? How do you verify it?
And so I think that particular question of what AGI is, is going to remain in academic circles for a while.

(25:43):
There are going to be arguments for and against of various kinds that may or may not prove fruitful.
Right.
I don't know that they're going to come to any conclusion.
And I think in the background, AI companies are going to continue to build systems that more and more closely approximate human cognition, even though we might say

(26:04):
you're still light years away. But we're making progress, and that's probably true.
So I'm not tremendously worried about the development of AGI.
You know, sometimes I think of the cases of chess or Go, these board games that have had a long and storied history

(26:25):
in computation.
Companies like DeepMind, which is now owned by Google, but that's out of London,
have done some really neat work, like developing superhuman chess engines and Go engines that play the game at a level that surpasses human understanding.
Which is pretty cool, what they had to develop to do that.

(26:46):
But those are still games with a defined feature space, defined rules, and defined options.
Right.
When you can do that, when you can box in the system, the universe, you can make something really neat.
But, boy, humanity resists being boxed in like that.
We just don't.
That's not our nature.

(27:07):
And so part of the root of my skepticism is this: look into a field called reinforcement learning.
It's a subpart of machine learning, and it's how the chess engine from DeepMind was built.
To do it well, one of the things they have to do is write down: here are the rules of the game, and here's what,

(27:31):
doing a certain good thing pays you, right?
They call it a payoff function.
And so if you take the opponent's pawn, a little piece, here's the payoff.
Or if you take their queen, it's a bigger payoff.
And you can define that in chess, you can actually define that.
You can make a system that does very well.
But think about a dating relationship or a marriage.

(27:54):
We pretty quickly realize, in our attempts to quantify and define value, that we're in a different layer of abstraction.
These don't go together.
And so, you know, those are some of the, I guess, musings to me of why I think AGI is still some ways off, if we even can define it.
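The payoff-function idea Ben describes can be sketched in a few lines of Python; the point values are the conventional chess material values, and the function name is purely illustrative:

```python
# Conventional chess material values, used as capture payoffs.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def capture_payoff(captured_piece: str) -> int:
    """Reward an agent receives for capturing an opponent's piece."""
    return PIECE_VALUES.get(captured_piece, 0)

print(capture_payoff("pawn"))    # 1, a small payoff
print(capture_payoff("queen"))   # 9, a much bigger payoff
```

The contrast being drawn is that nothing like PIECE_VALUES can be written down for a dating relationship or a marriage; the value lives in a different layer of abstraction.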

(28:20):
Yeah.
So along the lines of AGI, and again these questions about intelligence: the Turing test.
I used to use a version of this with students, long before generative AI was on the horizon.

(28:41):
It came from my days in college, philosophy 101.
And basically, I don't have a technical definition in front of me, but we're asked to consider the question:
if a computer can give the same answers, so that if you're communicating with it through text or whatever, you can't tell if there's a computer or a person on the other end,

(29:04):
well, then should we ascribe it
the same mental life and intelligencethat we ascribe to a person?
Yeah, how do you think about that?
And maybe you have a more precise computer science definition of the test.
No, the Turing test was all the way back in the 1950s: Alan Turing, when he was working in the early theory of computation.

(29:26):
And you had a good working definition.
You know, if there are two rooms behind you and you're getting text responses to questions you ask, how do you tell if one is a human and one's a computer?
And if it's a system that is statistically indistinguishable from a human, we would say, okay, it passes the Turing test.
I think we are actually already there with the Turing test.
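Ben's phrase "statistically indistinguishable" can be made concrete with a toy harness; everything here (the judge and the two reply functions) is hypothetical scaffolding, not a real benchmark:

```python
import random

def turing_test(judge, human_reply, machine_reply, questions, trials=100):
    """Fraction of trials in which the judge correctly spots the machine."""
    correct = 0
    for _ in range(trials):
        q = random.choice(questions)
        replies = [("human", human_reply(q)), ("machine", machine_reply(q))]
        random.shuffle(replies)            # the judge doesn't know which room
        guess = judge(q, replies[0][1], replies[1][1])   # returns 0 or 1
        if replies[guess][0] == "machine":
            correct += 1
    return correct / trials

# If the score stays near 0.5 (chance level), the machine's answers are
# statistically indistinguishable from the human's: it passes the test.
```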

(29:48):
I think we have systems that you can write fairly complex questions to.
And this is always an interesting thought experiment: what question would you ask that an artificial intelligence system, one you know is an AI system, would fail to answer well?
Right.
That's kind of an interesting subfield here. But

(30:10):
I think we already have systems that pass the Turing test.
But I think the current application of the Turing test is a bit of a moving standard, in that you have a system that seems to pass the Turing test, and then humans get used to how it communicates.
Right?
You begin to pick up the flavor or the nuance of, like,

(30:33):
this sounds like it was AI generated, right?
I think, if we've read AI-generated text, we all sort of know what that feels like.
It's very linear, very clear.
There's no halting, no pausing.
It's mechanical, in a sense.
And so now what an AI company might say is, okay, I'll adjust my generation algorithm

(30:55):
to make it a little more clunky, a little more human-like.
Right.
And then it passes the new standard of the Turing test, that people can't distinguish.
But whenever you have a test that relies on human perception of something, we grow, we learn, right? The benchmark for that test is going to change over time.
So, you know, I think we have systems,

(31:18):
your early GPTs, that probably passed the Turing test.
Now the standard is even higher.
Right.
For a system to be indistinguishable from a human.
And it's not because the systems have changed so much as that we have changed and learned.
And so we've learned how to deal with it.
Yeah, that's a good perspective.
And just, yeah, as we're talking about this,

(31:39):
it occurs to me that it'd be very interesting to read what philosophers wrote about the Turing test, and how that has changed as computer capacities have gone up.
Because that was the angle from which I came, in which I would have encountered it: it was kind of philosophy of mind.
And, what is it about the human mind,

(32:01):
what makes mind, and so on.
And it does strike me that those kinds of questions are really relevant to how we're thinking about AI here, because if you're coming into the whole discussion with a philosophy that says:

(32:21):
you know, we can explain this all physically, or we can explain this by the functions of physical things; the human mind just is the human brain, or it's the functions and processes that the human brain runs.
Then it seems to be a fair question: well, can we duplicate it with something else?
Yeah.
If we're coming in as Christians,

(32:42):
we still may have a large variety of philosophies of mind.
We're going to believe that the brain is important.
But generally, you know, God is a spirit.
God has a mind without having a brain.
Generally we believe that our mind is, in a certain sense, independent of our brain,

(33:03):
although obviously it functions through our brain.
Yeah.
I just have to wonder how much that plays into the whole discussion.
If you simply are a materialist, then it seems like that would kind of naturally lead to this more AI-optimistic view of things.

(33:24):
I don't know how that maps onto the academic landscape either, but...
There's certainly, in the academic world, a sense of techno-optimism.
And, you know, if you come at it from a largely materialist worldview, you end up with just,

(33:46):
you know, an understanding that all I'm creating technically is a smaller approximation to me.
And it's going to get closer and closer and closer over time.
And so there isn't really a connection between soul, spirit, and mind in the secular worldview, because we're just,

(34:06):
we're just a random collection of atoms that have no lasting eternal value.
I think that's the conclusion.
Whereas from a Christian perspective, that's where we come at AI with a different view. Like, you know, yeah, you can create a neural network that has the trillions of connections that our brain does;

(34:27):
you're still missing a piece that you can't simulate.
There's a spirit, there's a soul.
Humans are, you know, by a mathematical definition, irrational creatures.
We do things that are silly, not in our best interest.
But from a theological perspective,

(34:48):
that makes sense, because we identify this conflict between our sin nature and our redeemed nature in Christ.
And so if you don't have theology, this leads to a natural techno-optimism, but I think only in the sense that it is self-beneficial.
Right.
Like, I'm optimistic, technically, if it benefits me, right?

(35:09):
If it's a threat to me, then I'm not as optimistic about it.
And I have no moral qualms either way. Right.
That's the materialist view.
It's not a moral question. It's pragmatic.
Yeah.
So actually, while I was getting ready for this episode, I was sitting in my little

(35:30):
office at the school I help with, and I looked out the window and I noticed plants, flowers.
It's a beautiful sunny day.
And butterflies flitting around on top of them.
And yeah, so I'm drawn into the beauty, something a lot of people are.
And so it just got me thinking, you know, those are the things we paint,

(35:53):
we draw pictures of.
But not only that; we really have dug into that.
What is the life cycle of the flower?
How can we breed the flower to maximize blooms?
You know, we've traced the life cycle of those butterflies and learned how they function,

(36:14):
learned the biology of how they function, the life cycle, all of that.
And, you know, it even had me thinking about a book we're using in our science program.
It's called The Girl Who Drew Butterflies, and it tells the story of a girl who was a painter.
Through her art, she was really closely observing

(36:35):
insects, butterflies, and she came to understand these stages of metamorphosis and so on, at a time when a lot of people didn't understand those.
And it just kind of really hit me, like, okay, this is what humans do with things that we see and pay attention to.
Is there any reason to think that AI

(37:00):
models or agents would do any of the same thing, would have any of that same,
really, it's a relationship, an ongoing relationship with reality?
And does that get at any of this kind of difference we're talking about between a human mind and an AI model?

(37:20):
Yeah, I'd love to hear some of your thoughts on that.
I love the question.
I mean, my thoughts first go to the genesis of what we would call curiosity.
Right?
You know, I have children at home. Some of them are very curious.
And we love curiosity.

(37:42):
It's how we develop an ever-expanding knowledge of the world.
You know, so on the one hand, I think AI could be constrained, in that it doesn't really have the ability to explore aside from the digital space it inhabits.
But, you know, that could be overcome

(38:03):
if a human, if we give it inputs from a world beyond its own, or we give it pictures and soil samples and scientific reports, and we give it enough information to discern connections and patterns and systems.
But I think at the very bottom of a contrived system like that, we have human volition; we've done it.

(38:24):
We have to push AI into the world to begin to collect this information.
So, I don't know.
You know what, I'll say this: in the world AI inhabits, I think it already is acting as a curious agent within its world.
Right.
I think by its incentive structure, right,

(38:45):
it says, I want to be the best AI system, so I'm going to collect everything that I can, and look for patterns, and begin to create more and more of an approximation to reality.
But there's so much more to the world than just the digital space, right?
So we have to somehow connect AI with that space.
That's a more challenging question.

(39:07):
So I don't think that AI is going to have agency in that way.
And I think we're already seeing an inflection point with that kind of pattern in AI, because they estimate the amount of data on the planet doubles every six months, something like that.

(39:27):
So just the amount of data is exploding, based on what we're creating: movies, films, text, all these things.
Now what we're realizing is that some of the data that's being fed back into these models, to learn and remain current, is actually AI generated.
Right?
So AI models are consuming AI-generated things and taking them as new truth sources.

(39:51):
Right?
So they're building up on what they've already created.
And so any error or bias or anything that's built into those generated sets of data has now been replicated and virally expanded across the AI space.
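This feedback loop is sometimes called model collapse. A toy sketch, under invented assumptions (the "model" is just a Gaussian fitted to a small sample, and each generation trains only on synthetic samples from the previous fit), shows the spread of the data tending to drift toward zero over many generations:

```python
import random
import statistics

data = [random.gauss(0, 1) for _ in range(20)]   # generation 0: "real" data

for generation in range(201):
    mu = statistics.mean(data)        # fit the "model" to the current data
    sigma = statistics.stdev(data)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # The next generation sees only synthetic output of the last fit;
    # with finite samples the fitted spread usually shrinks over time.
    data = [random.gauss(mu, sigma) for _ in range(20)]
```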
And so one thought I have

(40:13):
is: there's going to be a bit of a pendulum swing when you realize AI is spitting out nonsense, because that's all it knows.
And the pendulum swing will say, well, don't let AI have access to anything new yet.
Right?
We need to curate what it's learning from, in order to make this truthful.
And so, you know, humans have this beautiful

(40:35):
curiosity to know: why do things work the way that they do?
Right.
It's led to advances in cosmology, in biology, in mathematics.
You know, we do this naturally.
You know, I loved seeing recently there was a paper that came out of, I think it might have been Google DeepMind as well, where they actually had developed an

(40:56):
AI mathematical theorem-proving system.
They could give it a mathematical question, and it could prove something mathematically.
I said, that's fascinating, because that's an area I had thought required human-like creativity and intuition; that it was not an accessible region.
But the more I am

(41:17):
reading about it, the more I'm thinking, okay, they've not done human things, but they've made progress, right?
They've done some things that I thought they couldn't do.
And I think that's how progress oftentimes goes in AI: it appears they can't do anything, and then there's this discontinuity and a jump to what they now can do.

(41:37):
So I think we've seen those.
Now, do I think that AI is ever going to have the beautiful observation of the butterflies, like the girl who drew butterflies, to understand the biological cycles of these things?
Again, I probably hold a healthy skepticism, just like with AGI.
I don't think that AI has that, because we haven't given it the reason to yet.

(41:59):
We've not incentivized it.
You know, we learn because it's beautiful to learn, and AI learns because either we tell it to, or we threaten its extinction if it doesn't.
So, you know, I don't think it has the natural curiosity we do.
And I think that makes us human.

(42:19):
Yeah. Thanks.
So I keep hearing you come back to the "not yets" a little bit.
But if there is one theme in the limits that you talk about, the kind of ontological limits of AI, is it always being downstream of humanity, in some way or another?
Yeah. We don't understand exactly what it's doing.
We set it up, and we don't always understand exactly how it works.

(42:43):
The whole black box effect.
But it's downstream, from what you were saying: the incentives we give it, the way we program it, the parameters, what we want it to do, the inputs we curate for it.
So yeah, that is helpful.
Any particular concluding thoughts you'd

(43:06):
like to leave on this topic?
I think just the point you're right to highlight.
Yeah, so there is a sense of "not yet."
And again, part of why I go there is because, in candor, I don't actually know.
I make a forecast, but there's a margin of error in the forecast, and there are things that could go wrong or right that make it wrong.

(43:29):
And so I only forecast out sort of as far as I can see.
But think about something like quantum computing: a new technology with engineering challenges to solve.
But say we solve them, and we have quantum computers that can do incredibly quick calculation on some previously inaccessible problems.

(43:52):
All of these developments, you know, AI developments, growth in neural networks or training or, say, GPT-8, or quantum computing, right,
they all solve sort of little niche things, and they do it very well.
But the analogy I go back to in my own mind is that without a unified system or understanding of reality,

(44:14):
all of these things will remain niche solutions, right?
It'd be a little bit like you try to fix something wrong with your body: you sprain your finger, and so you splint it.
Right. You have a solution to the problem.
You've not cured cancer, right?
You've solved a problem, and you've made progress towards the solution to the whole.
But there is not a systemic understanding yet.

(44:37):
And so for all of these developments, you know, I'll say not yet.
My prediction is it's a long way off.
But for all of these things to work in concert to develop anything that might be an ontological break for artificial intelligence, to me there are still too many unanswered questions.

(44:59):
None of these things see reality in the same way.
Quantum and classical computing also don't see reality in the same way.
So you've got to find a way to bridge that before these are effective together.
And I think we're finding more questions than we have answers these days, at least in my view.

(45:19):
That's an interesting way, maybe, to put your "not yet."
You said you don't think we'll get to an ontological breakthrough.
And so in some ways, what you're saying is: look, the ontology of what we have now is, in reality, nothing like artificial general intelligence.
You can't conclusively rule out that at some point

(45:43):
we would be able to develop systems that are really fundamentally different in how they work.
And that's kind of the "not yet."
And part of your emphasis is that it wouldn't just be a development; it would be a very fundamental breakthrough.
Yeah, I'd say that's a good assessment of it.
It's very hard to prove a negative.

(46:04):
Right. So that's one of the challenges you run into in logic.
And so, yeah, it always seems impossible to solve some of these problems until someone solves it.
Right. That's true of mathematics, right?
We have all these unsolved problems until someone,

(46:24):
you know, goes off into the mathematics for long enough and comes up with a solution.
So I think my skepticism lies in the nature of the questions being asked, in that, you know, we're saying: in order to create the things that make us uniquely human, more and more silicon chips doesn't get us there, right?

(46:46):
It needs to be something else, right?
Some other way of understanding reality, processing information, that we just don't have.
And again, I don't think too long about what it would take to do that.
But we've reached a point in innovation where every time we innovate, we get more questions.

(47:06):
I'm waiting for, well, I'm not really waiting, but in a sense, if the tide began to turn and we began to answer a lot of these things, or major breakthroughs were covering lots of them, that might tilt the needle of optimism versus pessimism for me.
I might say, okay, maybe I was in error before.
But at the moment we just keep getting more questions, and yeah,

(47:28):
that's sort of the root of my "not yet" answer.
Yeah.
Well, thank you, Ben, for diving into this.
I've enjoyed this, and thanks for sharing that with our audience here.
Thank you to the audience for tuning in here to Anabaptist Perspectives.

(47:50):
And if you've enjoyed this,you can subscribe to the channel
or share this episode.
And we'll catch you in the next episode.