Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Scott Allen (00:00):
Well, welcome again everyone to another episode of Ideas Have Consequences. This is the podcast of the Disciple Nations Alliance. I'm Scott Allen, I am the president of the DNA, and I am joined today by my coworkers Luke Allen and Dwight Vogt, and today we're excited to have with us Brian Johnson. Brian is an acquaintance of ours from Redeemer Church in
(00:26):
Gilbert, Arizona. As you know, we have great connections with that wonderful church down there, and Brian comes out of the world of digital technology and cybersecurity to help us think through, and continue to help us learn, like we all are, what's
(00:51):
happening right now in this whole world of AI and, you know, all of the larger topics that surround it, which is becoming a reality so quickly in our daily lives. We're all on a learning curve trying to understand this and, as Christians, we want to be thinking about it in a way that really accords with biblical truth, a biblical worldview, and
(01:15):
so, yeah, we are not experts, but we're just all on this learning curve. Now, that excludes you, Brian. You are much more of an expert, which is why we wanted to have you on today. Let me just introduce you a little bit more. Brian, as I mentioned, is on staff at Redeemer Church in Gilbert, Arizona. He's the business director, but he has a rich, varied
(01:38):
background. He's an experienced executive, an investor, an advisor, and a board member. His career has focused primarily on digital technology and cybersecurity. As I mentioned, he has an MBA as well as a bachelor's degree in business management. And, Brian, I know that's kind of a thin bio that I've got here
(02:00):
and I'd love to have you fill it in more, especially as it relates to your background in AI, digital technology, that whole area. We're excited to talk to you because you have both an insider's knowledge and expertise in this area, and you're coming at it from a very distinctly Christian worldview.
(02:22):
So, yeah, welcome, Brian, and tell us a little bit more about your background.
Brian Johnson (02:26):
Thank you, Scott, and I really appreciate the invitation to join you all. I admire the show and appreciate what you're doing in the area of biblical worldview. It's an extensive topic to talk about artificial intelligence in a small amount of time, but we'll try to cover some of the areas that are most pertinent to the Christian and believer's perspective of how to look at this. So, yeah, I'd love to get into that, and I do believe, in a sense
(02:50):
, over the last 30 years that I've spent in technology, I've often felt like the turtle on the fence post, if you remember the old book about that. There was this analogy early in my career, that I believed God had put me into a position to see certain innovations and be part of teams and developments of technology at levels that I never would have imagined, and some of those have been in the
(03:11):
last five years: things around cryptocurrency and artificial intelligence and quantum computing. I've been able to play a pretty pivotal role in the development of research in emerging technology areas, areas that in the last 15 years have been developed and defined, and those things have begun to now converge into real-world applications.
(03:32):
I've been able to be part of teams designing and developing research groups at a global level that have had access to the World Economic Forum. I've been part of the Atlantic Council, which develops standards on cryptocurrency, and I've implemented tools and technologies built on artificial intelligence. So it's been a tremendous background that, Lord willing, has helped to
(03:54):
give both insight and application to folks in the technology community, as I've spent a lot of time in Silicon Valley, as well as to the people around me, whom I've been able to help translate that into hope and encouragement and just a reminder that all of this is still under God's sovereignty. I don't see anything that's been designed or developed as outside of God's overarching reach of control, and I believe
(04:18):
that in his purpose, we've been given the stewardship, as his image bearers, to be able to take these things as technology innovations occur and apply them, through the biblical worldview lens that you guys, I think, so faithfully support and encourage in the church. And I think that's a part of what I've tried to do over the last few years.
Scott Allen (04:40):
Thanks, Brian. How did you... what's the connection between your background? You said you worked in Silicon Valley and you've worked on projects as a computer... would you call yourself a programmer?
Brian Johnson (04:52):
So, if you're familiar with Schwab, I led global data centers with virtualization, so I helped to build the large data systems, intelligence systems, and brokerage platforms. I led, as an executive leader, a large team that did that.
(05:13):
And then I moved from Schwab to PayPal in about 2015 and became the head of global cybersecurity and built the team at PayPal as it was splitting from eBay. Cybersecurity and cyber fraud and those technologies were under my leadership as an executive, kind of building out things from scratch, which were really interesting days.
(05:35):
I mean, when you consider it, cybersecurity was a field that hadn't been invented when I was in college; it hadn't been really designed or defined yet. I was an engineering and business major and kept looking at computers as a tool that we would use on the side, as a calculator of sorts, but then it became kind of the forefront. As I looked through those eras of innovation that have happened
(05:57):
, I've really seen the boom of the Internet, the boom of social media, e-commerce and, of course, now this era of artificial intelligence, and at each of those stages what I had the opportunity to do was be part of the defining research, development, and technology teams.
Scott Allen (06:12):
So you were leading teams that developed some powerful technologies then? Correct. Yeah, go ahead.
Brian Johnson (06:19):
Yeah, in the last few years, in particular at PayPal, I led the global cyber fraud, artificial intelligence research, quantum computing, and crypto research teams. So I actually hired researchers across the world who were working on these projects, applying quantum computing problem sets to massive financial networks and intelligence systems, and working on things like Project Polaris to combat
(06:43):
human trafficking with financial networks, using artificial intelligence for that. So I've been at the forefront of developing and leading technology and projects in those specific areas.
Scott Allen (06:53):
How did you come to find yourself now on staff at Redeemer as their director of business?
Brian Johnson (07:00):
Yeah, I love the local church. I think that, when you consider how best to apply the skills and the gifting that God's given us, we shouldn't leave our A game to the side, and I believe churches are as deserving as anyone of the people in our church who have the gifting and skills to apply their expertise and their knowledge and wisdom as much as they can. So our pastors have experienced tremendous growth in both
(07:25):
staff and strategy. Some of the ministry areas that we've grown in have been really interesting to me. My wife and I serve in marriage ministry; we've been in marriage ministry roles for about 20 years, and so that was part of the role that we did. And there was a need at the church to help lead our business and finance team, and then I expanded into facilities and operations. So I kind of raised my hand and said I'd love to help out.
(07:47):
How can I help? So it's a full-time job, in the sense that as a church job it's certainly busy, but I do still invest and I'm still an entrepreneur: I launch companies, I advise and, as you mentioned, serve on boards. So I'm still in the technology world in a leadership role in advancing tech, specifically technology now that is combating
(08:08):
human trafficking or working in the areas and fields of artificial intelligence and cryptocurrency, which are areas that I invest in and build into. And my son and I are launching a company to help make the Apple platform safe. So we're actually launching a product called Guardian Lock, a bundle of safety products, and it's really about securing what people who use online experiences should be able to do
(08:30):
in a safe way, and we're doing that by applying cybersecurity and safety principles in a way that helps consumers work and live and act online with better stewardship of the resources they have, and in a way that doesn't expose them or exploit the vulnerable in our population.
Scott Allen (08:50):
Wow, wow, that's amazing. So, yeah, you've got your hands in a lot of things, and good for you. Well, you know, let me just get into some of our questions, if I could, Brian. The three of us, you know, we are passionate about biblical truth, especially as it connects to and relates to
(09:12):
principles for human flourishing and seeing communities rise out of poverty, and so that's kind of where we're coming from. We do a lot of training and equipping around that around the world, and we love learning. We're, just like everyone else, trying to get our heads around what's happening in this area of AI and particularly, you know, what's the worldview kind of behind it, in terms of some of
(09:34):
the people that are the creators of it, you know, the key innovators. But I'd like to just start with some really basic questions, if you don't mind. You know, the first one is just: what is AI? I read this definition recently and I thought, you know, I'd like to get your reaction to it. The definition went something like this: AI can be loosely
(09:56):
defined as computer systems performing tasks that typically require human intelligence. I thought, okay: computer systems performing tasks that typically require human intelligence. What do you think of that? Or how would you define it, kind of concisely, AI?
Brian Johnson (10:15):
I define AI as a combination of large amounts of data, fast computers, and decisions that are enacted on that data in a way that is constantly learning and adapting. So artificial intelligence, as distinct from traditional computer systems, is making decisions based on learned responses, and those training models really originated about
(10:37):
25 years ago with machine learning, which is a model in computer science that started with: you pick up the phone and call into a call center and say, I'd like help at Bank of America or Wells Fargo, and the computer would prompt you, press one for English, press two for..., and you would enter a code, and it would take a call tree or a decision tree; along the lines of
(10:57):
how you responded, it would give a different prompt. That was the early advent of artificial intelligence as we know it today, starting with machine learning, and that is just defining "if this, then that": if you press two, then prompt a response and send you to a different queue. That was the early and kind of beginning stage. That helps you with a logical understanding of what artificial
(11:17):
intelligence really is, because it's a development from that standpoint.
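As a rough illustration of the "if this, then that" model Brian describes, here is a minimal sketch in Python; the menu options and queue names are invented for illustration, not taken from any real call-center system:

```python
# A minimal "if this, then that" call tree, the precursor to modern AI
# that Brian describes. Every branch is predetermined by a programmer;
# nothing here is learned from data.

MENU = {
    "1": "English support queue",
    "2": "Spanish support queue",
    "3": "billing department",
}

def route_call(key_pressed: str) -> str:
    """Route a caller based on the key they pressed."""
    if key_pressed in MENU:
        return f"Transferring you to the {MENU[key_pressed]}."
    # No rule matched: a fixed system like this cannot improvise an answer.
    return "Sorry, I didn't understand that. Please try again."

print(route_call("2"))  # Transferring you to the Spanish support queue.
```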
Scott Allen (11:21):
Yeah, you already answered my second question, which is: what is machine learning? Because it seems to me that that's kind of the key feature of artificial intelligence that, say, separates it from just a search engine or something that's going out and combing through all the data that's out there on the internet. It's applying this program of machine learning to that data,
(11:43):
right? So talk a little more about that, because I think that that's kind of key to understanding this, isn't it?
Brian Johnson (11:49):
It is, and there have been billions of dollars spent on machine learning in the last few years, before AI was the term coined in the industry, and that's everything from banking and finance fraud management: when you would get an alert on your phone, or a phone call, or your credit card might block a transaction, those were all driven by machine learning features, which
(12:10):
is computer science saying, this looks fraudulent based on a number of identifiers, or risky behavior, or the location that you're transacting from is not your normal location, or it looks like you just placed a charge from two different locations at the same time. Those are decisions that computers start to learn and are programmed to have decision trees and learning, to improve the decisions with each iteration.
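To make the fraud identifiers Brian lists concrete, here is a hedged sketch in Python; the thresholds, weights, and field names are invented for illustration, and a real system would learn them from labeled transaction data with each iteration rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str              # where the charge occurred
    home_country: str         # cardholder's usual location
    minutes_since_last: float
    last_country: str

def fraud_score(tx: Transaction) -> float:
    """Toy rule-based scorer; real systems learn these weights from data."""
    score = 0.0
    if tx.country != tx.home_country:
        score += 0.4                      # unusual location
    if tx.last_country != tx.country and tx.minutes_since_last < 60:
        score += 0.5                      # two places at (nearly) the same time
    if tx.amount > 5000:
        score += 0.2                      # unusually large charge
    return min(score, 1.0)

tx = Transaction(amount=120.0, country="FR", home_country="US",
                 minutes_since_last=12.0, last_country="US")
if fraud_score(tx) > 0.6:
    print("Block transaction and alert the cardholder")
```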
Scott Allen (12:31):
So if it makes a decision... yeah, that makes sense. And then it improves. And we're talking about things that are now combing over vast quantities of data and also operating at high speeds. The speed is something that, of course, continues to go up and up and up as we go along. So you've got those things happening. Yeah, that helps us, I think, get our heads around AI.
(12:53):
Here's another thing that I hear often: people talk about large language models when they talk about AI. What are they talking about there?
Brian Johnson (13:10):
So LLMs, or large language models, are the ways of constructing this code, and those code sets are basically telling the computer how to handle large data sets. And one of the interesting things that you mentioned earlier, Scott, that plays into large language models is the use of expansive data sets. So part of the uniqueness of how AI has now become something where LLMs even exist is that computers now have access to much larger data sets than they ever have before.
(13:30):
It used to be that computers were purpose-built. I can remember back in the 90s, we had a utility billing system, and all it knew was: how much water did this resident of the city use, and let's bill based on that. That was one data point. But now we have vast amounts of data points that can include your shopping patterns, your location, your driving behavior, your health information, all of that
(13:51):
in a data set that can be combined. What a large language model can do is take those multiple and varied perspectives of data and help make decisions on them, based on problem sets or prompts that are given to the computer. The system uses LLMs in different ways to determine what kinds of decisions or recommendations should be made based on the data it has access to.
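As a loose illustration of the shift from one purpose-built data point to combined data sets, here is a sketch using pandas; the column names and numbers are invented for illustration:

```python
import pandas as pd

# The old purpose-built world: one data point per resident, one decision.
water = pd.DataFrame({"resident_id": [1, 2], "gallons_used": [3200, 4100]})
print(water.assign(amount_due=water.gallons_used * 0.004))

# The modern world: several narrow data sets joined into one wide profile
# that a model (or an LLM-backed analytics tool) can reason over together.
shopping = pd.DataFrame({"resident_id": [1, 2], "monthly_spend": [640.0, 210.0]})
driving = pd.DataFrame({"resident_id": [1, 2], "miles_driven": [820, 95]})

profile = water.merge(shopping, on="resident_id").merge(driving, on="resident_id")
print(profile)
```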
Scott Allen (14:13):
So we're not talking about human language here. We're talking about, just again, large, large data sets that have been collected based on people's choices and decisions, particularly as they've, you know, been collected off of things like social media or whatever it is, all these decisions that people make, right? I'm still trying to get my head around it a little bit, I guess.
Brian Johnson (14:33):
So the other component of LLMs is taking human questions, or a human perspective of asking questions, and translating that into computer code, so that humans don't have to write computer code. So the LLM model is: take large data sets and group them by what kinds of decisions you want to make, and then ask questions of the data in a human-understandable fashion, returning data sets in a human-understandable result set.
Scott Allen (14:56):
And that's unique as well. And so this is people's current experience when they're working with something like ChatGPT: you ask it a question, right, and it responds.
Brian Johnson (15:07):
That's right. The prompts are the LLM's way of saying: I've asked a question of data; I'm going to present that in a way that's human-understandable. So it's essentially doing a translation between any languages now, of course multilingual: any language being asked of large language models is now supported, and any language can be asked from the human perspective, with reasonable
(15:27):
questions that the computer can then translate into code, execute the query against data, and then give you a result set back in a human-understandable way.
And that removes a lot of what used to happen with data scientists and researchers. I've had teams of data scientists in the past that did all of that work that LLMs now do automatically. If we would ask a data scientist, as an example at
(15:48):
PayPal, how many transactions were marked as fraud in the last 90 days, it would take them a few hours to write code and develop models and build languages and pull data together and queries and everything they would do, to come back and say: oh, about 25% of our data resulted in fraud from the transactions over the last 90 days. And an LLM allows you to simply ask that question and query the
(16:10):
data, and get the result back in a human-understandable way, without all that work in the middle.
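For a feel of the "work in the middle" that Brian says LLMs now automate, here is a hedged sketch; the table, columns, and numbers are invented, and this is the kind of query an analyst once wrote by hand that an LLM can now generate from the plain-English question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (id INTEGER, marked_fraud INTEGER, created_at TEXT);
    INSERT INTO transactions VALUES
        (1, 0, date('now', '-10 days')),
        (2, 1, date('now', '-40 days')),
        (3, 0, date('now', '-200 days'));
""")

# "What share of transactions were marked as fraud in the last 90 days?"
# translated into a query: the step a data scientist used to do by hand.
row = conn.execute("""
    SELECT 100.0 * SUM(marked_fraud) / COUNT(*)
    FROM transactions
    WHERE created_at >= date('now', '-90 days')
""").fetchone()

print(f"{row[0]:.1f}% of the last 90 days' transactions were marked as fraud")
```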
Scott Allen (16:15):
Very quickly, yeah. Oh, so this is very helpful, yeah, just in terms of a basic understanding of what this is.
Brian Johnson (16:22):
Why is there... go ahead.
Scott Allen (16:23):
Yeah, I just have a question on the topic of basic understanding here. If we could... yeah, go ahead.
Dwight Vogt (16:28):
Yeah, in terms of machine learning, I've heard researchers say they started some process and ended up with a result they weren't expecting. That always sounds scary. What does that mean?
Brian Johnson (16:40):
Well, I mean, you end up with that even in human projects. Sometimes we would ask questions and get an answer back that wasn't expected, because of what, in artificial intelligence vernacular, is sometimes called hallucinations. You'll get data set results that sometimes don't match the query set you intended, and that means you ask the computer a question and it guesses. A lot of the context that you didn't ask
(17:02):
about is sometimes guessed and hypothesized on data, and it'll come back with a result set where you may go: that's not quite what I asked; why did it give that answer? And it's AI making its best guess at what you intended to ask it, and trying to paint context and detail into it that really sometimes wasn't intended at all.
So, yeah, and we deal with this actually.
(17:23):
I run a translation service where we do translation of Bible teaching media from Bible teachers, from English to Spanish and Italian and French, and those, even just at the language level, going from voice to text, translating the text, and text to voice, often end up with very surprising, unexpected results that are hallucinations:
(17:44):
AI guessing what you really meant. When you combine words and phrases and sentences together, there is hidden context that human reasoning usually guesses more accurately, hopefully than not, to prevent misunderstandings, and AI is just learning. I would say its level of understanding and reasoning about context is probably at a 13-to-15-year-old's level, and that means that it misses
(18:07):
context sometimes. AI models are still learning and developing their understanding and, uh, an ability to comprehend what context is behind the meaning of a question or a result that it's trying to give you.
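As a sketch of the voice-to-text-to-voice pipeline Brian describes, here is an outline with each stage stubbed out; the function bodies are placeholders, not real services, and the point is that every hop is a place where missing context gets guessed, which is where hallucinations slip in:

```python
def speech_to_text(audio: bytes) -> str:
    """Stub: a real system would call a speech-recognition service here."""
    return "For God so loved the world"

def translate(text: str, target_lang: str) -> str:
    """Stub: a real system would call a machine-translation model here.
    This hop is where hidden context gets guessed, i.e., hallucinated."""
    return f"[{target_lang}] {text}"

def text_to_speech(text: str) -> bytes:
    """Stub: a real system would synthesize audio from the translated text."""
    return text.encode("utf-8")

def translate_teaching(audio: bytes, target_lang: str) -> bytes:
    # Three lossy hops in a row; an error at one stage compounds at the next.
    transcript = speech_to_text(audio)
    translated = translate(transcript, target_lang)
    return text_to_speech(translated)

print(translate_teaching(b"<audio>", "es"))
```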
Scott Allen (18:19):
Wow, yeah. Let's move on, you know, in terms of where we're at right now with AI. What's changed over the past year regarding AI, both from a technological and a cultural standpoint? You know, how have people's perceptions of it evolved, you know, just of late? How would you answer
(18:41):
that question?
Brian Johnson (18:41):
Yeah, so computer scientists deal with artificial intelligence in three essential buckets, and defining terms is really important here, because a lot of things are called AI that are not. So we talked about machine learning, and I think AI gets blamed for a lot of things that it really isn't behind, and it gets credited with things that it's not yet. But there are a lot of specifics in the definition of
(19:02):
AI that I'll try to boil down into three simple patterns. AI has three categories of uses. One is what you might have heard of as artificial narrow intelligence, and that is a specific use case, like your assistant that sits on the counter that you may ask questions of: hey, what's the weather today? You know, you may ask it, and, if I say the prompt, of course,
(19:22):
people's devices will start waking up, but you may ask it: hey, assistant, what's the weather today? And the assistant's programmed in a narrow way to get a response. So artificial narrow intelligence is primarily what we've been dealing with in the current era of just doing call trees and simple decisions, where artificial intelligence was
(19:45):
designed to learn a simple prompt, a simple response, and to train an understanding to simply grow in that use.
Scott Allen (19:51):
That's almost kind of equivalent in some ways to our experience with browsers, right? You would just type that in instead of saying it: what's the weather today? Or something like that. Yeah, you're just... you know, it's going out and grabbing that data.
Brian Johnson (20:07):
It is kind of like Search 3.0, or the enhanced version of search, but it's using large language models, that way of taking human language, human understanding, and asking questions of the computer without having to write a lot of code in the middle. And the response back has made it seem like it's an artificial human, or artificial intelligence in that sense, because it sounds like a human. You'll hear a voice, you'll hear inflection, you'll hear
(20:27):
tone. You can ask it, you know, tell me a joke, and you'll get trained responses with jokes, and you can interact with AI in some really interesting ways that make it mimic humans in tone, personality, context. But it's very artificial and very plastic, in the sense that it's narrow and limited in its scope, so it's only purpose-built for a particular use in the artificial narrow
(20:50):
intelligence.
So that's the first category: artificial narrow intelligence, or ANI. The one that most people are really referring to when they talk about AI, in the sense of the threats or the looming concerns, is what happens when AI becomes autonomous, and it starts to drive its own cars without people involved, or it starts to make decisions on stock market trading, or it begins to build
(21:13):
its own things. If it can run a manufacturing line or design and build things without human initiation or controls, that's called artificial general intelligence. And the "general" intelligence means that AI can then start to learn and adapt without prior training and across a set of problems, and it may go beyond an assistant or a particular
(21:37):
search request. And you may say: hey, AI, give me a website that will allow me to bring shoppers in and process orders to buy yellow T-shirts in Guatemala. And that's all you would have to give it as a prompt, and then artificial general intelligence would be able to then build the website, launch the website, solicit
(21:57):
customers, launch a marketing campaign, deliver the orders, find a place to buy the shirts, and do that end to end. That would be a pretty phenomenal productivity boost, right? And if you consider it, we have parts of that today: artificial narrow intelligence does allow us to say, hey, build a website for me, and it has canned templates or samples or models with which it can do that, to assist.
(22:18):
It's really just more kind of automating tasks today that, in the narrow field, will help. When you can connect those narrow tasks together and do it in a way that logically, and I'm air-quoting, "replaces" the function of a human, and it can do it autonomously, without human direction or intervention, now we've reached artificial general intelligence. And that
(22:40):
category or field is not quite developed yet. So, as people say: well, how close are we to artificial general intelligence? Well, we're close, and we can talk more about the timelines and kind of prerequisites for that to be achieved, but we're nearing the level of artificial general intelligence being more widely used and functional.
(23:00):
And then the third category, the one that is kind of, you know, as you think of futuristic models and the fiction writers' dreams that people have written about for years, is ASI, or artificial superintelligence. And that's the level and state where AI is not only sentient, or believes it can understand that it is self-existent, and it
(23:21):
is aware, and self-aware in a sense that it understands it is a being. But ASI is the notion, and it's all theoretical and, of course, scientific guesswork at this point, that artificial intelligence could then not only be able to invent or create or design things on its own, but it would do that with superintelligence, far surpassing humans' capabilities and usurping humans' authority
(23:44):
and abilities to steward the Earth on our own. And from a biblical worldview, that's kind of a statefulness that we don't really comprehend: a world of computers in a superintelligent mode, acting as if they were their own creators. Because there's still a lot of the detail to be worked out of what exactly artificial intelligence would intend to do then.
(24:05):
What is the motive or the intention under the ASI theology, really, or theoretical framework? We haven't really developed a lot of that, as computer scientists have discussed what that could look like. So it's really, I would say, in the speculative category of areas that we don't yet know: what will happen when AI is self-aware, when AI can say, I am
(24:27):
a living being, I can determine my own path, I can decide what I want to eat for breakfast today? Those kinds of theories have not been vetted enough yet to even know what the purpose of that would be, other than the discussion then of transhumanism and the theories that we can discuss later as well. But those are the three categories: artificial narrow
(24:47):
intelligence, artificial general intelligence, and artificial superintelligence.
Scott Allen (24:51):
Let me just push back a little bit on these three, because this is helpful. So we have artificial narrow intelligence now.
Brian Johnson (24:58):
Widespread, correct.
Scott Allen (24:59):
We have... we're kind of now into the phase of artificial general intelligence, although it sounds like we're just getting into that, correct? And then this last one, superintelligence, is more speculative, like, you know, right? Am I describing this...?
Brian Johnson (25:15):
It is not a reality today.
Scott Allen (25:16):
This is just some kind of speculation. But what changes between those three? Again, I'm going to put it in my own words and you correct me. You know, the narrow intelligence was a very simple question, where we're querying: what's the weather today, or whatever it is. General intelligence, you move from us querying it to the computer, if you will, kind
(25:38):
of learning and adapting based on its own experience. That's right? Okay, so it doesn't require a query, a program, if you will. It's actually mimicking... I don't know, how would you describe it? It's learning on its own.
Brian Johnson (25:56):
It's really learning on its own, and it's designing its own path of decisions that would not have been predetermined by humans.
Scott Allen (26:03):
Right, and this is where I think Dwight's question... it gets back to what he was talking about, about the surprising results. Because once you get to that level of general intelligence, right, you know, who knows what you're going to get? Because we're not really asking or querying it. It's kind of growing this knowledge on its own, so to speak.
Brian Johnson (26:25):
That's right. Yeah, exactly, yeah. In the current iterations, if you ask questions of chat, and most of us are familiar with asking questions of chat, whether it be ChatGPT or Gemini or Microsoft or DeepSeek or any of the other most common chat applications, those are really just asking predefined questions of predetermined data sets. There's nothing that it's coming up with that is
(26:46):
creatively determined or decided on its own that hasn't already been programmed into an LLM or a data set that's been provided to it.
Scott Allen (26:53):
So that would be narrow, correct? So when we interact with ChatGPT and ask it questions, and it comes back with responses, and then we ask it further questions, we're still in the realm of narrow intelligence. Is that correct?
Brian Johnson (27:10):
We are, and it will usually tell you when it's outside of its parameters. You ask it a question, and it'll usually say: I don't know, I haven't been programmed to determine that. That's when you know you've reached ANI; you've reached the limit of narrow intelligence. Artificial general intelligence theoretically would not answer "I don't know." Theoretically, if an artificial-general-intelligence-trained chatbot can find out, it will say: I
(27:32):
will find out for you. Or it'll make it up, to Dwight's point. And it may do that. It might hypothesize on its own and give an answer as if that were a definitive response. And so that's where we need to be discerning, because there are times where the hallucinations or assumptions or missing context will be provided by an AI. When it doesn't really have an objective response to provide,
(27:55):
it will guess, and those are hallucinations that we should be aware of. It doesn't tell you it's guessing. It won't unless you ask it.
So you can ask AI, and they've been programmed with this as a kind of fail-safe: what's the source for that response? Where did you get that response from? And I've had long discourse and discussions around life, around choice, around Christianity and biblical truth and creationism,
(28:17):
and you can have these long discussions with AI and actually train it. During discussions, you can tell it: that's actually not accurate; why don't you choose this reference as a priority? How did you weight your references and sources for your response? And you can tell it, at times, you know, whose answer is right or whose will get more weighting and priority. It depends, right? Which company has got what bias applied will determine that
(28:41):
decision. You can definitely have discourse with it, even in ANI, before we're getting into the general intelligence, and educate artificial intelligence by training it and telling it what models to trust, what sources are reliable, and have kind of a public discourse with an AI chatbot in a way that can be enlightening and sometimes help you understand why it's
(29:04):
responding in the way it is. But I do think you should challenge it, to your point, Dwight. Yeah, ask it.
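As a sketch of challenging a chatbot for its sources in code, assuming the OpenAI Python client as one example (the model name is illustrative, and other chat APIs follow the same pattern):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

messages = [{"role": "user", "content": "When can a fetal heartbeat be detected?"}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Don't accept the answer at face value: ask where it came from and how
# the references were weighted, as Brian suggests.
messages.append({
    "role": "user",
    "content": "What is the source for that response, and how did you "
               "weight your references?",
})
followup = client.chat.completions.create(model="gpt-4o", messages=messages)
print(followup.choices[0].message.content)
```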
Dwight Vogt (29:10):
Where did that come from? Here's my curious question: If you're doing that, and I log on 10 minutes later and ask the same question, do I get your trained answer that you put in, or does it just listen to me for truth?
Brian Johnson (29:22):
Yes, you can. Actually, I've seen that trained, where the model would then train for general purposes. I've actually tested that, and I've had a group of folks do this as a test. We did this specific to the preborn, related to the detection of a heartbeat in the womb. We asked it questions, and it answered back, and I said: that's not accurate, try again. And I gave another source. I said: that's not accurate, try again. Until I got to the point of saying, you know, the American
(29:44):
Academy of Pediatrics actually determined that the heartbeat can be... and I gave it a source, I gave it the reference. And then I had someone else, through another account, ask the same question, and it gave them the reference I provided. So it can be trainable, and it can learn from other people's input.
Dwight Vogt (29:59):
Now you can see the validity and the caution in that. Exactly, that's right. It's scary.
Scott Allen (30:05):
Let me go back to the three categories. It's really helpful, Brian, because I think one of the things that you wonder about with artificial intelligence is this whole issue of human control. And it seems to me that when you're talking at that narrow level, you know, yes, right, there's a lot of human control
over this.
(30:25):
But as you move up thosecategories to the
superintelligence again, whichis speculative, we're not there
yet.
But am I correct to say therereally isn't kind of human
control at that level, becauseit's almost acting autonomously,
based on its own kind of?
This is where I have a littlebit of a hard time getting my
(30:46):
head around it, I guess.
Brian Johnson (30:47):
Yeah, in a theoretical sense, that's how it would work. Scott, you're absolutely right. Now, I work with some partners and peers in the industry who are working on even AI guardrails, and these would be certain criteria that we would establish in a regulatory or policy environment that would require that AI has to behave within a certain set of predefined boundaries. Those things would have to be implemented, though, or enforced
(31:08):
by some agency that doesn't exist, and written by an ethics committee that doesn't exist, and policed and overseen by an AI-driven or AI-built body that hasn't been built yet. So there's a lot of work still to be done in both writing the ethics frameworks around these things and enforcing them, and building a governance model to ensure that they're followed, and policing the world and ensuring that AI has guardrails applied.
(31:33):
But in certain sectors, what's happened lately, and this has been phenomenal, actually, is that some of my peers who are working on a project on this have actually made headway in Senate hearing committees and in some of the US-based discussions around how, for certain use cases, like healthcare, we must provide certain guardrails around decisions that AI can and cannot make in healthcare, so the implementations are
(31:55):
restricted. I believe the capabilities of AI far surpass what we've implemented and what we're aware of today, because we have implemented certain guardrails for use cases, because we do have implicit ethics around certain healthcare decisions and certain financial decisions and creditworthiness, and other things that you could continue to go into: you know, recruiting and hiring decisions, compensation
(32:16):
decisions. All of these things have yet to have standardized ethics frameworks written around how and when AI can make its own decisions, and when we put the restrictions and guardrails up to limit how far it can go.
Scott Allen (32:29):
Let me push back a little bit on that, because, you know, I was listening to a talk yesterday by Aaron Kheriaty, who's a medical doctor, a psychiatrist, from the University of California, Irvine. He's a Catholic, he's a Christian guy, and he was speaking on this general topic of AI. And he, you know, he said: of course, you have to understand AI
(32:50):
in a larger context. He described it as the fourth industrial revolution. That's, of course, not his idea; that's, I think, common kind of nomenclature at this point. You know, the first industrial revolution being mechanical, the steam engine; the second being the electrical revolution; the third, the digital; and now this kind of fourth one, which is, as I understand it, this kind of biological-digital
(33:18):
convergence: machine learning, genetics, nanotechnology, robotics, all of these, you know. You could even say, as you were talking about, cryptocurrency or digital currency. You know, there's this larger thing that's going on now where all of these technologies are converging. And then he went on and he said, you know, on the issue of
(33:42):
ethics and setting up guardrails, he said: right now, you know, some of the worst state actors in the world, think of North Korea, China, are using all of these tools in a very authoritarian way. So, for example, he said, you can combine biometrics, central bank digital currency, big data, and AI
(34:05):
to control a population. The way you do that is, the governments create these massive data files on each individual, based on their choices and online behaviors, and even their facial expressions or whatever. Then they go from that to a social credit score, to determine if they're a threat.
(34:26):
And if they're a threat to their ideology or their regime, they can control things. They can control their ability to make purchases through digital currencies. So, it seems to me... I guess it'd be nice to think that there's some kind of benevolent group out there that would set up ethical guardrails around this, but I just don't see that happening. I see, you know, authoritarian regimes, atheistic regimes like
(34:52):
you have in China, using it for power and control. And it's not just there, right? I mean, the World Economic Forum and other people are talking about using it in the same way here, right? You know, any thoughts on that? Because when I hear this... wow, two things. One, this is getting very powerful, something that is... when we talk at the level of superintelligence. And then, you
(35:14):
know, how do we kind of still maintain some control over this, especially when you've got, you know, these regimes around the world that are going to use it? It's almost kind of a perfect method, if you will, for controlling people, it seems to me.
Brian Johnson (35:29):
Yeah, it certainly is. And from a theoretical perspective, they could have already done that, but I won't hypothesize on that. So let me set a bit of framework around the kind of cultural shifts that have happened with technology and how we've ended up at this point. I think it's important to think through. You talked about the fourth industrial revolution, and that
(35:50):
is, I mean, that is a good way of thinking of it. I, you know, over the last 30 years, have seen certain... I call them "tech-ades," and a tech-ade is kind of a technological revolution or an age of innovation, and the first tech-ade was kind of the web tech-ade. That was when, between, you know, the 80s and 90s, we saw the network economy flourish. So I like to look at, you know, this is kind of a simple thing,
(36:11):
that as we homeschooled our kids, my wife used to remind our kids, as you see advertisements or people trying to sell you something: always ask, who are you trying to fool and what are you trying to sell? Our homeschooling methodology was "who are you trying to fool, and what are you trying to sell?" when it comes to the worldly influence through marketing and advertising. So I think of these kinds of tech-ades the same way. In the network
(36:32):
economy, the marketers were trying to connect a global network of systems, to try to get as much connectedness as they could in, you know, really, internet connectivity. It was the networking tech-ade, and the economy was: let's sell the network to people. Everyone needs an internet connection, right? We need high speed everywhere. And that transitioned a bit, in the 90s to 2000s, into the
(36:57):
dot-com tech-ade. And, you know, I lived through the dot-com era, in the sense of actually having launched companies and built companies in the dot-com world and launched websites and e-commerce. That was the thing in the environment. And at that time it was the digital economy. We were really going after people's experiences in retail and digital shopping and finances. To the point that you really see, in that era, by the end of the 2000s, we saw about
(37:21):
80% of consumers buying online or interacting with banking systems online in some fashion at least once a year. That advanced, in the last 20 years, from the 2000s to the 20s, into the social decade, and then, you know, the social experiment, or whatever you want to call it. The last 20 years have really been about the attention economy,
(37:41):
and marketers and Google and, you know, in a sense, so many scientists and researchers have devoted billions of dollars and countless hours to figure out how to grab the attention of the world, how to seize this global market of attention and monetize the time that you spend as an asset, as the
(38:02):
premier asset that they would invest corporate livelihoods into gaining, as the attention economy expanded. And so now you've got 70% of the global population: out of 8 billion people, about 6 billion, five and a half or so, are currently on mobile phones, internet-connected devices, daily. So the attention economy has won. So we've seen the evolution of the network economy, the
(38:24):
commerce economy, the e-com expansion, and now the attention economy. They've gained our attention. They can market whatever they want to us, and they can essentially manipulate, through government coercion and manipulative methods, what information, what marketing ploys, and even what social, economic, and political messages
(38:46):
get swayed, to a tremendous degree. And we've seen this in data. Friends of mine that are researchers at some of these tech companies are still saying, you know: the algorithms determine what people look at and how long they watch, and how can we squish the attention span down to shorts and reels and quick clips, and shorten the attention so that we can monetize it more? And they've won, right?
(39:07):
So now we've transitioned, in the last few years, into what I've called now the relationship economy. And I believe that, for the marketers, with their "who are they trying to fool and what are they trying to sell," the attention economy was won, so now they shift into relationships. And the only way to do that is with technologies that can emote, that can perceivably mimic human
(39:30):
relationships, through both conversation and interaction, through, even... You know, there are folks that are loading, you know, videos and audio of their deceased spouses or family members and having an AI bot talk with them as if it were their deceased loved one. We're in this model, especially after the pandemic happened.
(39:52):
We've now got this environment where people are responding to the perception of relationship, and, through social media and the social commerce experiment that we're undergoing, the pivot from the attention economy to the relational economy will be very, very subtle. It will look as if this is just an extension of your social
(40:12):
network and your community and your culture adapting to the new world. And yet I believe the most dangerous of these transitions that we've seen in this industrial revolution, or the tech-ades, is now coming after relationships. The very fabric of what God designed in family and community is now under threat, in a direct attack as a social construct, because AI now has the capability to mimic human
(40:35):
interaction, emotion, intelligence, and behavior in a way that people are fooled. They have chats with bots all day long and don't realize, or maybe they just get lost in, the reality that that is not a human on the other end of the keyboard.
Scott Allen (40:50):
And that's interesting. I totally get what you're saying, and I've seen it, I've experienced it. It's great. When you ask it a question, it comes back and it says: wow, that's a wonderful question.
Brian Johnson (41:02):
That's right.
Scott Allen (41:03):
Makes you feel good, like, you know... oh.
Brian Johnson (41:06):
How can it be more relatable and less computer-like, right? How does AI take on the personification? "You're so smart for..."
Scott Allen (41:12):
"...asking that question," you know. I mean, of course, you're like: oh gosh, I want to talk to this thing, you know.
Brian Johnson (41:18):
And now, with digital avatars and with the actual idea of humanoids and bots that can be developed and powered by AI, it's even more of a significant and imminent threat. I believe we're in this era, between the 20s and 40s, where we will see this hypothesized replacement of relationship with artificial relationships.
(41:39):
I believe the augmented reality world was a little bit of a... it was kind of prematurely launched, with a lot of the Meta headsets and other things that people have done, you know, the AR headsets or VR headsets, and that was okay. It was them testing: how ready is society for this? Well, how ready are we to go cashless?
(42:04):
How ready are we to go offline? How much can we test society's tolerance in behaving differently? And the primary motivation for change is, of course, fear. How do you change a society? You instill fear. And so, when you instill fear, with respect to a pathogen or a contagion, or maybe it's a financial insecurity or maybe it's a market scare, you instill fear and you drive change. And I believe, as a people group, when we see that happen,
(42:26):
we should be really, really discerning about: what does this mean for relationships? What are our kids getting involved in? What are young adults interacting with, with Grok or with chat or with Gemini? How are they interacting with these tools, in a way to understand and, really, in a way, self-control and self-limit your offering of yourself in an emotional sense, in a personal
(42:49):
sense?
You know, counseling sessions with AI bots have become a thing, which is insane to me. But, you know, biblical counselors have been, of course, saying: caution, caution, you know, watch out. There are a lot of groups that are offering therapy and counseling online, so that you don't have to sit in front of a person. Well then, you're sharing your deepest level of convictions and
(43:10):
emotions and feelings with a non-human, and expecting there to be healthy counsel out of that. And now people have, you know, a lot of times, sacrificed the in-person gathering and fellowshipping together for online-only experiences. Well, that turns into a pseudo-relationship, and I think that relationship experiment that we're going through right
(43:32):
now is a very pivotal cultural test, and certainly one that we need to be aware of.
Scott Allen (43:38):
Wow, that's really interesting. Yeah, go ahead, Luke.
Luke Allen (43:41):
Yeah, I'm just so glad that we're really laying the groundwork here and defining the terms. That was super helpful, what we went over at the beginning, between narrow intelligence, general intelligence, superintelligence. I'd argue, though, that narrow intelligence is actually not intelligent; I would probably put that in a different camp there, add some different word at the end of that. But now, what you're talking about, the attention economy to
(44:04):
the relational economy... I work in marketing; that's what I went to school for, so we learned a ton about how to captivate the attention economy, right. But now this relational economy... I mean, this is where we're really diving into worldview territory, what we talk about on this podcast, that ideas have consequences. I mean, one of the primary worldview questions is: who is man? What is man's purpose?
(44:25):
What's my relationship with myself? What's my relationship to other people? Those are all worldview questions, and yet, quicker than I'm comfortable with, it feels like the world is now fully jumping into this relational economy, and I don't think we've thought about this one enough. Definitely, I'm not hearing the Christian voices entering the conversation on: what is human?
(44:46):
What is relationship?
What is relationship for?
What makes you concerned about this? What are the red flags you're seeing in this?
Brian Johnson (44:55):
Yeah, I think that, you know, we've seen the debate, especially in Christian circles, start to highlight some of the concerns around the notion of transhumanism, or the definition of what is a human, and that debate certainly has merit on the concern around the definition of an augmented human or a hacked human. That can focus on the priorities of longevity, and, of
(45:15):
course, the longevity market is a booming market right now. There are tremendous numbers of health and wellness coaches and supplements and plans and everything that really have now been bundled under this longevity area, which can gray and blur lines between how it is either seeking to, you know,
(45:37):
kind of usurp the idea and notion of eternity or, in a sense, just be an enhancement, to say: well, it's a healthy lifestyle, that's fine, we should eat well, we should supplement, and that's fine. But if the goal is to say, I can beat death, as Elon Musk has believed, or as Jeff Bezos or as any of these prominent,
(45:58):
preeminent transhumanists have said, their goal is to beat death, because they believe that they can have, you know, a transference of consciousness into something else. Their consciousness can be transferred into an AI bot, or into a humanoid, or into some other biologically engineered replacement or clone of themselves, so that they can live forever. And isn't that, you know, part of the original lie: that you
(46:19):
can become like God, and that you can beat death and live forever? And that transhumanist view, I think, at its essence, is eroding relationship. Because what happens when they become self-sufficient and self-absorbed is that they don't rely on, nor expect from, anyone else, and the fabric of society and relationship is then compromised because of a selfish ambition to beat death,
(46:40):
essentially. And I think that's a challenge for Christians to wrestle with a little bit at times, and to say: how much am I trying to just be healthy and steward well this temple that God's given me, versus I actually want to try to beat death in a way where I want to avoid the ultimate judgment for sin and death, and yet enter eternity with salvation as a hope?
(47:00):
And I think that's where we need to be really concerned. I believe that, you know, Christians, in a Christian worldview, look at death as not a tragic event, necessarily, right? We don't look at death as a tragic event. We say: well, to live is Christ, to die is gain, as Paul says. So what is the gain, if we are to die to this physical realm
(47:22):
and see this as just a temporal state that God is redeeming and restoring in an eternal state? That's something we have to wrestle with and be reminded of in a world that says now, with AI, and with these inventions, and with these innovations in biological advancements and physical evolution, and all these things that they're purporting as solutions to problems: we can beat sin and
(47:42):
death, essentially. Um, I think that ultimate motive is something where, you know, the red flag should go up, and we should say: what is it they're trying to accomplish out of this? Oh, they're trying to beat sin and death. Well, that's a replacement for salvation, in a way that, uh, you know, is trying to solve it with a techno-centric salvation that does not exist, in a false narrative.
Luke Allen (48:02):
Yeah, you know, it's funny, because the first time, uh, I came across you, Brian, was on the, uh, Redeemer podcast, and, uh, when I saw your name there in the little description, I was like: whoa, they got Brian Johnson on the podcast! The guy that wrote the book Don't Die. You know, I'm sure you know who that is: another Brian Johnson, whose mission in life is to not die.
Brian Johnson (48:25):
He's the "Y" Brian, by the way. His name is spelled with a Y, Bryan, so I call him the Y Brian.
Luke Allen (48:30):
Okay, okay, good separation there. I wouldn't want to be compared to that guy.
Scott Allen (48:35):
You know, this talk about trying to overcome death, you know, that people like Elon Musk have, or the transhumanists... There's a verse, I think it's in Hebrews, that talks about how Jesus was...
(48:55):
You know, Jesus, in having victory over Satan on the cross, disarmed Satan of his greatest weapon, and I'm paraphrasing here, and that greatest weapon of Satan was the fear of death. Right, and, you know, I just think that's a good thing to kind of dwell on a little bit. You know, as Christians, yeah, we have a completely different view, because we believe in the resurrection and we believe in
(49:15):
eternal life. But if you're not a Christian, you are still under that, you know, that greatest weapon of the evil one, the fear of death, and you'll do pretty much anything, right, exactly, to overcome that. You know, and Satan, of course, himself can use that, that fear.
(49:36):
You know, I think this is another kind of area of conflict here, on what we're talking about with the biblical worldview: just the transhumanist people that think we can live forever by kind of uploading our consciousness onto the cloud, or whatever it is. I don't even know how to quite describe it. To me it's a completely Gnostic view of what it means to be a human being.
(49:56):
In other words, what really matters is your thoughts, your consciousness, kind of your brain, and your body is really not important, you know, here. But the biblical view is: no, those are both really important. They're, you know, inextricably linked.
(50:17):
When we die, you know, we're going to be resurrected not just as, you know, a brain, but as a body, you know, and a brain. We're going to have both.
Brian Johnson (50:27):
And the spiritual aspect is left out too. Yeah, exactly. The Gnostic, you know, tries to draw some distinction here, and the transhumanist says there is no soul, there is no spirit in a being. They believe that consciousness is, is the end. And that is a really ignorant and really pitiful view. I do pity the lost transhumanist who believes that we're just material, Gnostic-oriented beings, and I
(50:48):
think that's an incomplete view that we should expect of lost people. And to your point, yeah, if you have no hope, as Paul said, then eat, drink, and be merry, right? Or, in probably a more modern paraphrase: try to live forever by taking all the vitamins you can and enhancing yourself with technology. And I think that's what they're doing. So it shouldn't be too big of a surprise that we see technology
(51:12):
being used in a way to try to advance, extend, and eventually overcome death as their ultimate goal.
Scott Allen (51:20):
We haven't talked about the word "singularity" here either. That's another word that gets thrown around a lot in these discussions, and it seems to me that it's got kind of multiple definitions as well. I mean, a couple of ones that I heard are: the singularity is that final merger of machine and human being, you know.
(51:40):
Or another one I heard has to do with trajectory and kind of the force of change: that at some point you reach a kind of point of no return, where this technology is moving so fast that, you know, it's now no longer in our control; it kind of essentially controls us. How do you describe, how do you define that term, singularity?
(52:05):
What do you make of that? Just any thoughts on that, Brian.
Brian Johnson (52:08):
Yeah, and so singularity is an interesting one that I studied several years ago: the implications of what singularity would mean. The singularity definition, from a computer scientist's view, is that a computer can think and reason at par, equal to a human. In an artificial intelligence sense, singularity is that point at which the human is no longer necessary. Essentially, the computer can then think and reason in a way
(52:32):
that does not require human intervention.
Scott Allen (52:34):
Another definition, that's... Can I put you on pause there for a quick second? Sure. So really, what I'm hearing you say there, and it gets back to those three categories, is that the singularity happens somewhere between general intelligence and superintelligence. Correct? That's right. Okay, okay.
Brian Johnson (52:47):
Yeah, at a level of maturity in general intelligence. When artificial intelligence can be autonomous, then singularity has occurred. That's the more widely accepted and, I would say, accurate definition. However, it's also been used to say that singularity is the point at which computers are as smart as humans, and you could just take an IQ score and say: well, at an IQ level, then we've
(53:09):
already reached singularity. Last year there were several models that exceeded the IQ level of Einstein, as an example. Now, if you ask a lot of the chatbots, or ask the LLMs: hey, are you smarter than Einstein? you'll get mixed results. They'll say: well, on an intelligence level, it can respond to and solve intelligence-level tests at a level beyond
(53:29):
what humans can, but at an emotional and EQ level it's nowhere near. So you've got these competing measurements of what intelligence is, and, kind of at a mental-capacity level, the decisions that AI has achieved are far superior to humans' in some ways. And yet, as Luke said earlier, he would argue we wouldn't call it intelligence.
(53:50):
In some cases it's just kind of rote response.
Scott Allen (53:53):
Let's pause on this, because now we're really into some worldview presuppositions as well, and I think it's really a fascinating thing to kind of flesh out. I have noticed that people, you know, and there are a lot of them, that hold to kind of a Darwinian, kind of materialist, deterministic type of worldview.
(54:14):
They see human beingsessentially as machines, as
computers, right, justbiological computers.
That's basically what we are nospirit, no soul.
And so when they apply that wayof thinking, that worldview
presupposition, to AI, they getvery scared because they go wow,
we now have created thisbiological machine that's
(54:37):
smarter, more intelligent thanwe are and we can't control it.
Now then there's another groupthat says you know, wait a
second, we are not just you know.
They would reject that kind ofinitial starting point, that
Darwinian, materialist startingpoint, and they would say you
(54:59):
know, intelligence is much morethan just brain power or
whatever it is.
We're embodied human beings.
You know, intelligence is muchmore than just brain power or
whatever it is.
We're embodied human beings.
You know we have souls, we havespirits.
Let me just read a quick quotefrom Cary Artie again from this
talk that I listened toyesterday.
He said AI is a misnomer.
It has no intelligence.
(55:20):
This is his words.
What it is at root is a verypowerful method of data
manipulation based onprogramming.
Programmers have applied newalgorithms and machine learning
to these massive language modelsand the result is something
that's analogous to humanlearning.
The result is that AIs aredoing things the programmers
(55:41):
don't fully understand, but thatdoes not mean that they've
developed a soul or somethingequivalent to human
consciousness or free will.
In the end I'm still quotinghim these things do what they're
programmed to do, even if theirprogramming evolves in ways
that we don't fully understand.
I almost hear Kerry already heresaying we're not going to get
to that super intelligence level.
That's kind of a Darwinian Tobelieve that that's going to get
(56:03):
to that super intelligencelevel.
That's kind of a Darwinian Tobelieve that that's going to
happen.
You almost have to start withDarwinian assumptions about what
does it mean to be human?
Cary Artie, interestinglyenough, is not just a Christian,
he's also a psychologist, so hebrings a different view of
human nature to the discussionthan, let's say, a computer
scientist maybe, in that wethink as embodied beings and you
(56:24):
can't really separate thatright through our senses and you
know just the way God createdus as embodied human beings.
Luke Allen (56:32):
Yeah, well, as you're talking about it, my brain just goes immediately to Psalm 8, right? I didn't pull it up quick enough, but off the top of my head: God has made us a little lower than the heavenly beings and crowned us with glory and honor. A little later on it says he has placed everything under our feet, all the flocks and herds and beasts, and so on and so forth. So, I mean, would God let us create something that would, in
(56:55):
a way, supersede us?
Scott Allen (56:58):
Well, if I were a Darwinian, if that were my worldview and we were just biological machines, yeah, I could see this fear of, oh my gosh, we've now created something that's smarter, it's faster, it can process this data much faster than we can, much bigger sets of data.
(57:21):
It's passed us up, and who knows where that's going to take us. If I'm more like Cary Artie, I'm going, I don't know if that's ever going to happen, because it's starting with a faulty presupposition of what it means to be human.
Brian Johnson (57:34):
It is, and I think that measure of intelligence has been so superimposed on what AI is, and the definitions are important. A couple of quick, practical examples of what AI has shown the competence and capability to do. Probably 18 months ago, maybe 24 months ago now, there were a couple of AI computers that developed their own
(57:54):
language and could communicate with each other in a way that humans could not understand. We had no way of deciphering what communication method was being used. That's the surprising thing Dwight's talking about. So they have done some surprising things, like develop their own language. Just in the last few weeks, AI has learned how to communicate with dolphins. It has learned their language, and it can now communicate back
(58:15):
to dolphins in a way that can then tell us what it's saying and the communication it's having with dolphins. Wow, that's amazing. AI can detect things, and Mayo Clinic has been doing research. I've got a friend who invests in latent IP, so they take intellectual property that Mayo Clinic has decided not to use and they'll put it into use in different areas. And Mayo Clinic and some of these IP groups have determined
(58:37):
that they can read EKGs interactively, even from wearables like an Apple Watch or another device, and interactively determine whether someone is predictably going to have a heart attack. AI does that. Human doctors cannot. So it's scale, it's volume, it's predictability, it's trends, it's other things. Whether we call that intelligence, or great learnings, or machine learning designs, there are some capabilities that
(58:59):
AI is developing in language, reasoning and communication patterns that are far surpassing what we can do. AI could now translate this video, in fact, into 175 languages, a number increasing every day, and live stream it to the whole world. What does that capability provide, now that you could have one world of communication, one world of unity around
(59:24):
cryptocurrency? There are a lot of interesting convergences that AI is powering, enabling and, in some cases, designing.
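That translation claim is easy to picture in code. A minimal sketch, assuming the OpenAI Python client (pip install openai), an API key in the environment, and an illustrative model name; the short language list stands in for the 175 Brian mentions:

    # Sketch: fan one line of a transcript out into several languages.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
    # the model name and language list are illustrative, not prescriptive.
    from openai import OpenAI

    client = OpenAI()
    line = "Ideas have consequences."
    languages = ["Spanish", "Swahili", "Mandarin", "Hindi"]  # stand-in for 175+

    for lang in languages:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user",
                       "content": f"Translate into {lang}, keeping the tone: {line}"}],
        )
        print(lang, "->", resp.choices[0].message.content)

Pointed at a live caption stream instead of a single string, roughly this same loop is what real-time multilingual streaming amounts to.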
So one key point, to quote Geoffrey Hinton, who is widely known and accepted as the godfather of AI. He helped pioneer the field back in the late 70s and early 80s, I believe, and he came up with the
(59:45):
idea of what it was intended for. It has now been really abused, and the applications and implementations of AI have gone beyond the controls and original intentions. And Geoffrey recently said this, and this is a quote: we have now reached a point that it is a philosophical and perhaps even a spiritual crisis, as well as a practical one.
(01:00:07):
He believes that AI has already reached singularity. He believes that AI has the capability, even though we've still just harnessed it, kept it kind of in a cage, not only to develop interlinguistic skills and communicate at a level that we can't decipher,
(01:00:27):
but also to make decisions in financial markets, in healthcare, in transportation with the FAA and airport controls, all of these. It has the capability to take on vastly more than we have even acknowledged yet, but we've kept it in a cage, and for good reason. So I think there's a lot of warning to be heeded from the godfather of AI, who says, essentially, beware, it's outgrown its use.
Luke Allen (01:00:50):
Isn't he a Christian, or am I making that up?
Brian Johnson (01:00:53):
I believe he is God-fearing. I don't know how aligned with Christianity he is. You mentioned him before the podcast.
Scott Allen (01:01:01):
So I did a little quick research on him, just to learn a little bit more about who Geoffrey Hinton is. These are some of the things I pulled up. First of all, he's a Nobel Prize-winning computer scientist. Like I say, he's the godfather of AI. He's a professor at the University of Toronto and worked at Google, so he's deeply involved in all of this.
(01:01:22):
I was curious about his own worldview, Luke, and what I came up with, again, this is really cursory, is that he doesn't talk openly about his faith or his religion, so he comes across as a bit of an agnostic. But again, I know so little. I did read a recent article about him in the Guardian, and a couple of
(01:01:46):
quotes that I pulled out: he believes that there's a 10% to 20% chance of AI, quote, wiping out humanity over the next 30 years. And, quote, we've never had to deal with things more intelligent than ourselves before. That's probably the singularity, right? Now we're dealing with something that's actually more intelligent than we are.
Brian Johnson (01:02:08):
And then he goes on, and he's called it alien intelligence.
Scott Allen (01:02:10):
Yeah. How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? That's right. Those are some quotes from him in that article that jumped out at me. Yeah.
Brian Johnson (01:02:21):
And so his view came from having actually helped develop Google's machine learning work that evolved into DeepMind, the AI lab behind the models Google uses. It is tremendously powerful, and Google uses its own tools in its own shop first before releasing them to the world. So what they've made publicly available is only the consumer version. There are capabilities in laboratories, and with the convergence that's coming soon of quantum computing, a whole
(01:02:44):
different field of computing that is accelerating processing capabilities far beyond what we've ever been able to measure. That convergence of quantum and artificial intelligence will result in something Google has been testing in labs. I worked with IBM on some projects in labs, and a couple of
(01:03:05):
others, and the capabilities being tested, fast computers, large amounts of data, and intelligent, well-programmed large language models designed into AI, will bring some very interesting results, some very culture-testing and society-shaping capabilities that we've never
(01:03:27):
even dreamt up or read about yet.
Luke Allen (01:03:29):
Yes, yes. What alarms you the most about all this?
Brian Johnson (01:03:34):
I believe people are too gullible. I think we've lost discernment as a society. Scott, you used this example: you have a friend who is using Grok, you know, Twitter X's version of AI chat, as a friend, communicating with it as if it is a person. That's what's most concerning to me: as a society, we've bought a
(01:03:56):
lie. We've bought into this lie that there can be a representative replacement for human interaction and community and relationship. The fabric of society is giving up on relationships, on what we would say is the God-designed familial unit that was meant to hold societies together.
(01:04:18):
I believe that's the most pressing and significant threat to the current generation.
Scott Allen (01:04:24):
Yeah, you know, what you're saying here reminds me of something again from that talk yesterday that Cary Artie gave. By the way, Luke, maybe we can post that talk at the end of our podcast, because I'm referencing it so much, so people can go and listen to it themselves. But Cary Artie said something similar. He said, you know, the danger of AI is to treat it like
(01:04:44):
an idol. This puts it in the framework of a biblical worldview too, right? Tim Keller said human beings are idol factories. We're constantly creating idols, right? That's the fallen human heart. And how easy will it be for us to create an idol out of AI? And then he gave a really practical example, I appreciated this, of a good way of using AI and a bad way
(01:05:07):
of using it. Again, he's a medical doctor. He said a good way of using it is to ask a question like, what are the known side effects of a particular medication? A bad way of asking would be, should I take that medication? Exactly, that's right. That's where you're going to it almost like a god or an oracle of some sort,
(01:05:30):
asking it these big life questions. Yeah, I thought that was really helpful. And it echoes what you're saying here, Brian, a little bit.
Brian Johnson (01:05:40):
It is. I mean, you don't get in your car, even if you have a Tesla that supposedly has self-driving, and say, take me somewhere. It's a tool. You use the tool for what it was intended to do: take you
(01:06:18):
where you want. You're a human who has been given God-ordained agency and the responsibility of stewardship of resources, so act like it. Whether it's writing your speeches or doing your homework, allow yourself to reason as a human, allow yourself to do what you do. Don't allow it to dehumanize you.
Scott Allen (01:06:18):
I like that, Brian, especially when you're talking about the relational AI. Don't allow yourself to be dehumanized. And of course, that requires us as Christians to know fully, or as fully as we can, what it means to be a human being, an image bearer of God, and all that that entails. Yeah.
Brian Johnson (01:06:38):
Yeah, and in discussion, when you do use chat, push back as you would in human discourse, but with more authority, believing that you actually probably reason better than AI does. Assume that. Go into conversations with AI asking it questions and expecting a result back that is helpful to you, not just what it's trained to provide. Don't let it argue with you as if it's got authority
(01:07:01):
over you, because it does not. It's a tool.
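What that push-back looks like in practice can even be scripted. A minimal sketch, under the same assumption of the OpenAI Python client and an illustrative model name: ask a narrow factual question, then challenge the first answer instead of accepting it.

    # Sketch: ask, then push back rather than accept the first response.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # illustrative; any chat model would do

    history = [{"role": "user",
                "content": "What are the known side effects of ibuprofen?"}]
    first = client.chat.completions.create(model=MODEL, messages=history)
    answer = first.choices[0].message.content
    print("First answer:", answer)

    # The push-back turn: demand sources and flag uncertainty.
    history += [
        {"role": "assistant", "content": answer},
        {"role": "user",
         "content": "What sources support that, and which points are "
                    "uncertain or disputed rather than established?"},
    ]
    second = client.chat.completions.create(model=MODEL, messages=history)
    print("After push-back:", second.choices[0].message.content)

Note that the opening question is the "good use" form discussed above, asking for known side effects, not for a life decision.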
Scott Allen (01:07:05):
Really helpful, Brian. Yeah, treat it like a tool.
Luke Allen (01:07:07):
That's a good takeaway. I do think, though, sometimes when I just think of it as a tool, I kind of assume it's binary, it's neutral. But again, we have to remember this thing is biased. It's created by humans, and humans are biased. Know their biases, know who created what you're using, and push back against that as well. That's part of being discerning with this, and
(01:07:30):
I like what you were saying earlier about how it can probably find the right answers, but you need to push back on it. Don't assume its first answer is right.
Brian Johnson (01:07:40):
And Luke, you're absolutely right. There were a lot of examples, and we've done testing after testing of AI models. Google was outed for an issue they had recently, last year, where you ask it a question like, what does a cowboy look like, or what is a representative people group of a certain area, and it was absolutely displaying its bias, to the point that you could ask it, was that a biased response? and it would say, yes, I've been trained to be biased. So
(01:08:04):
work under the supposition that, like you just said, Luke, it is written by biased humans. The rules, the recommendations, the engines you're running under are trained data sets, picking the sources it wants you to retrieve data from and giving the result sets it has been trained to provide, with that bias inherent. That is absolutely what it is as a tool.
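One way to work under that supposition is to make the bias visible: put the same question to more than one model, then ask each to audit its own answer. Again a minimal sketch with the OpenAI Python client; both model names are illustrative assumptions.

    # Sketch: surface training bias by comparing models and self-audits.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    QUESTION = "Describe a typical cowboy."

    for model in ["gpt-4o-mini", "gpt-4.1-mini"]:  # illustrative pair
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": QUESTION}],
        )
        answer = resp.choices[0].message.content
        print(model, "answered:", answer)

        # Follow-up: ask the model itself whether the answer was biased.
        audit = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user", "content": QUESTION},
                {"role": "assistant", "content": answer},
                {"role": "user",
                 "content": "Was that answer biased? What training data or "
                            "source choices might have shaped it?"},
            ],
        )
        print(model, "self-audit:", audit.choices[0].message.content)

Diverging answers between the two models, or a self-audit that concedes bias, is exactly the kind of evidence Brian describes.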
Scott Allen (01:08:27):
Now again, when you speak that way, Brian, it seems to me you're talking more about the narrow and the general intelligence.
Brian Johnson (01:08:33):
Correct.
Scott Allen (01:08:34):
You're not talking... Once you get up to that level of superintelligence, whether we will or we won't, then we're moving beyond something that I control, that I program, beyond it being a tool. It seems to me that there is some kind of a jump there.
Brian Johnson (01:08:50):
And I like what
you said earlier about God's
sovereignty.
Scott Allen (01:08:52):
God is sovereign. It's never going to be outside of his control. But we're not God, and I could see a point where it becomes something outside our control, it seems to me, and there could be a lot of damage as a result of that.
Brian Johnson (01:09:07):
Yeah, I would argue even the Tower of Babel, the story of mankind trying to build a tower to unify both language and access to God, or to be elevated in a sense, is a really great parallel to human history repeating itself. We're building a technological tower that basically extends our abilities
(01:09:31):
to be like God. And in a sense you're absolutely right, it could be bigger than us, but at the same time it just shows all the more God's authority and sovereignty, displaying his power to also disrupt Babel. I mean, God could turn off AI. We could wake up tomorrow and it would disappear.
Scott Allen (01:09:49):
No, I think the line that always sticks out to me in that powerful story of Babel is, let us make a name for ourselves. It's this rebellion against God. We don't need you, God, we'll be God. It's the same lie in the Garden of Eden, right? Satan says, you don't need God, you can be God, and that's the
(01:10:10):
recurring lie down through the ages. And there you saw it at Babel again. And at Babel we're dealing with technology, we're dealing with a tower, something that humans have made, and God says, if they've begun to do this, nothing is impossible for them. That's quite a statement for God to make about
(01:10:32):
human beings. But then, of course, as you say, the comforting thing is that he steps in sovereignly and prevents all the harm and the damage that would have happened had he not, by confusing the language and scattering people across the face of the earth. So, yeah, I feel like the Babel story is just so relevant right
(01:10:55):
now, to our day, in this whole discussion.
Brian Johnson (01:10:58):
Yeah, yeah. I mean, we could call it the Tower of AI, and we'll see.
Scott Allen (01:11:03):
Yeah, the same human pride behind it, the same desire to control, to be godlike. It's all there. Well, I like the way we were going, which is moving in a more practical direction, because people are using it right now.
(01:11:24):
Christians are using it. Brian, as we get ready to wrap up here, what advice would you have? Let's go back to what we were talking about very practically. Not just how do we think about it, but how do we engage with it, how do we use it? What do we need to avoid in our interactions with it?
(01:11:47):
Any other final thoughts on that?
Brian Johnson (01:11:49):
Yeah, I think, you know, Luke had mentioned some questions earlier about the cultural engagement and hope in this. I think we use it as a tool to engage culture. In a sense, we are in a world that involves tools and technologies. Tony Reinke wrote a book about the
(01:12:13):
redemptive view of technology, and Tony's a brilliant thinker and researcher in this space, so I had a chance to talk with him about this view. And his view is, this fourth industrial wave is a phenomenal opportunity. For Christians it really is. We can look at these kinds of eras as an opportunity to say, well, we need to crawl under a rock and get offline, get off the grid and all that.
(01:12:36):
Or we can say, let's reach the world for Christ. Let's use this opportunity to see an evangelistic movement, to see hearts shaped and truth told in a way that you wouldn't have a chance to do if you didn't have this kind of crisis of culture happening. There have been tremendous movements, especially in American society, but even across mankind throughout the ages, where we've seen windows of opportunity open for revival, for folks to really understand what hope is and to
(01:12:59):
see the distinction between a false narrative and the truth. And when you see the false narrative becoming more and more egregious, it should make truth that much easier for us to feel confident in saying, this is an image bearer moment. This is a moment for the image bearers of God to really say truth is what we need to lead with, and truth and love in a
(01:13:21):
practical way is what Christians can do. So, yes, we can use AI for good. We are using AI to combat human trafficking, to promote human flourishing and to provide gospel teaching and Bible teaching capabilities to the world. My son came up with a product called R79, and it's around this idea that in Revelation 7:9 we see this party happening in heaven, and people from all tongues, tribes and
(01:13:44):
nations are gathering at the throne of God. And you say, that is a phenomenal hope that we have, that we will be joined together with believers from all over the world. How does that happen? Well, because missionaries have been sent, and because technology will be used, and because the word can be preached in local tongues and languages in ways that have never been
(01:14:05):
possible before. So we would love to have opportunities to get people encouraged about the way they can lean into their ministry, whatever it is. The world has become a lot smaller because of technology, so we can use that to our advantage too. And our family verse has been 1 Peter 3:15. We've always had this notion that, revering Christ as Lord
(01:14:26):
in your hearts, we should always be ready, when we're asked, to give an answer for the reason for the hope that lies within us, and to do so with gentleness and respect. I think that is an embodiment of how the Christian worldview should be in this era of AI: be ready. Be ready to give answers for the hope that you have and why you're not scared about what AI is doing.
Scott Allen (01:15:05):
Why you're not concerned about what's happening. We know this as Christians: there are tremendous opportunities that this opens up for us. You mentioned how artificial intelligence is now communicating with dolphins. I mean, that's amazing. But put that into the framework of Bible translation around the world, which has been going on since
(01:15:28):
the time of Luther, if not before. It's phenomenal when you think about the potential to quickly get the Word of God into languages where before it would have taken so long to do. There are all of those positives that you mentioned. At the same time, we live in a fallen world. I mentioned China before, and
(01:15:48):
China is not unique; this is just the fallen human heart. People are going to want to use all of these tools either to make a lot of money or to control people, for authoritarian ends. And they are going to do that. They are doing it. It's happening now, and the plans are probably well beyond
(01:16:11):
what I am aware of to control me. So yeah, be sober, be cautious, don't allow yourself... be open-minded, right? This is happening. You know that documentary about social media, what was that called? The...
Brian Johnson (01:16:32):
Dilemma.
Scott Allen (01:16:33):
Yeah, the Social Dilemma. I said everyone needs to watch that, because it just opens your eyes to how you're being controlled and manipulated.
Brian Johnson (01:16:40):
So a friend of mine is actually sponsoring one called Doom Scroll, and it's coming out in a similar fashion, around that attention economy. It will be this awakening kind of moment for us to ask, what has our attention been captivated by? It's a real gut check for us.
Scott Allen (01:16:55):
No, you know, what can we do to prevent nefarious forces from using these technologies to control us? We don't know, unless we are somehow aware of what's going on. So I think there can be a concerning side to this. But, like you say, hey, it is what it is. It's here now. Let's use it for good, like you're doing with your
(01:17:17):
son. So good for you.
Brian Johnson (01:17:19):
Yeah, we've got a parenting blog called Protect Young Hearts, and really our idea and notion behind it is helping parents understand how to have conversations with their children too. We've been asked this as we've given talks at church: people will come up and say, how do I deal with technology? At what age should my kids have tech? It's a massive topic, and a lot of times what we start with is, well, how do you deal
(01:17:42):
with technology? What a parent is going to try to impress upon their children needs to first be practiced in their own hands, in their own time, in their own self-control. So I think discipline in the parent's life is a good starting point. But then understand that it is a significant challenge for your kids to grow up in this society without parental guidance, without biblical discipleship. The questions of why
(01:18:06):
am I here, and what's my hope in, and the competing priorities in life, and all these distractions that are so inundating, are significant challenges, and especially as Christians we should face those with confidence. I think that's something parents really need to be challenged and encouraged in: you can raise your kids well in this generation. You can raise warriors for Christ
(01:18:28):
in a generation that wants to tear them apart and tear apart the ideologies of biblical Christianity. And I think there are a lot of great tools and resources, and we can come alongside each other and be encouraged and equipped to better handle that.
Scott Allen (01:18:42):
How can people access those resources you're mentioning there? Because I could see a lot... Absolutely. Yeah, how do they get those, Brian?
Brian Johnson (01:18:49):
Yeah. So I would say one of the first ones, again: if you're a parent of young kids, protectyounghearts.com is a website you can go to and reach out to us. Protectyounghearts.com, okay? Protect Young Hearts. And I've got a friend who runs Protect Young Eyes; he goes into the technology side. Which is interesting, because I'm the technologist, my wife is more
(01:19:11):
the biblical counselor and the heart-focused person, and I'm more the logic-and-reasoning side. We combined this into protecting hearts, because we believe protecting kids' hearts is the essence of what a parent's responsibility through discipleship is, and we have a lot of tools and resources, articles and tips. In this era of protecting, there's a group that we work with, the Tempevo Group and Tempevo Foundation, and we partner with them to help
(01:19:31):
design and develop technologies that will be protecting gaming online. There's a product called GameSafe that's being released soon; actually, I think you can receive it on some platforms now. We're partnering with GameSafe to help it be a preventative tool.
(01:19:51):
It's run by a Christian and a Christian group that's helping to get this into the hands of parents so that their kids can game in a safe environment, and we're going to plug that in and make it available on Apple devices as well. They teach predator proofing and methods that parents can learn to keep their kids safe. And there are probably 20 or so companies that we're working
(01:20:14):
with to help partner in helping Christians, and helping any parent, with this problem. We also have solutions for seniors. I think, well, what about your elderly parents who are trying to use phones safely and just want to communicate without getting scammed?
(01:20:34):
We have tools for that as well that can help seniors interact safely, in a secure and simple way. So lots of resources and options available. I would say reach out to me through protectyounghearts.com. You can send us a note, and we'd be glad to help in any way we can with resources if you're struggling in certain areas. Those are things we want to help people with, both access to resources and communities of support.
(01:20:56):
There's also a group called Hero Churches, churches against trafficking, in a sense, educating and equipping churches from a biblical worldview around the harms of exploitation online, and helping your kids not get exposed to harmful content, because it is a tragic epidemic in our society today. So a lot of great resources that we'd love to get you
(01:21:19):
connected to.
Scott Allen (01:21:20):
Wow, thank you, Brian, for all the work you're doing in that area and all the resources you're making available to people. That is fantastic, and I'm so glad we can let our listeners know about it so they can take advantage of it.
Brian Johnson (01:21:33):
I appreciate it, Scott. Thanks so much for what you're doing.
Scott Allen (01:21:37):
Luke, any final thoughts from you as we wrap up here?
Luke Allen (01:21:40):
No. I mean, we'll make sure all of those resources are linked in the description, so check those out after the show. But yeah, no final thoughts.
Scott Allen (01:21:49):
Brian, you've been really generous with your time, and I just want to thank you. Again, I'm learning so much, and I enjoy the learning, by the way; I think it's a fascinating discussion. But you've really helped me, you've advanced my own learning, and I just want to thank you for that. And I'd love to have you back on as we keep trying to get our heads around this, together with our listeners, and learn how we can interact with, think about and respond to this
(01:22:12):
current moment technologically. So thanks, thanks for your time today. Really appreciate it.
Brian Johnson (01:22:18):
It's an honor to join you, Scott. Thanks so much, and praise God for the work you guys are doing here.
Scott Allen (01:22:22):
All right, well, thank you all for listening to another episode of Ideas have Consequences. This is the podcast of the Disciple Nations Alliance.