Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Let's talk tonight about artificial intelligence. And I have an expert,
one of the experts on to talk to me about it.
But the reason I wanted to do this, the reason me,
producer Matt, producer John wanted to do this show is
we recently had a meeting in New York about something separate.
But a friend of ours, very knowledgeable friend, just had
a big meeting with all these tech types and the
(00:31):
horror stories that came out about AI, about what it's
going to do, what it's already doing. It's going to
take all of our jobs, it's going to run our lives.
It'll be reading your emails. We sat down and we watched him do this. We watched him type a couple of things into a computer, and then boom, he holds up his computer, and
(00:53):
a podcast comes on that was about the subject we put in there. All AI, a man and a woman, all artificial, talking to each other, even with little things like "uh, uh" in their sentences, and we were all there, floored. A podcast had just been created by AI. That's where it already is. And you know what I felt like in
(01:14):
that moment? You know what I felt like? Remember this old video of Regis Philbin when he heard about the first iPhone?
Speaker 2 (01:21):
Do you know what that big announcement is today with
the iPod?
Speaker 3 (01:24):
Do you know what? Does anybody know what that is?
Speaker 2 (01:27):
They're speculating an iPhone, that you're going to be able
to talk on the phone with your iPod.
Speaker 1 (01:33):
You know what?
Speaker 4 (01:36):
Tell us why?
Speaker 5 (01:37):
It's a bad idea.
Speaker 3 (01:38):
It's like the computer all over again. What are you going to do? Carry the phone? You carry the phone over here, carry the iPod over there. It's ridiculous.
Speaker 1 (01:50):
Too many things to carry.
Speaker 3 (01:51):
That's the idea. It's one device.
Speaker 5 (01:54):
It's an iPod and a phone.
Speaker 3 (01:55):
I don't like it.
Speaker 1 (02:00):
It's funny, but it's where I am with so much technology, and definitely AI: not interested. Don't want it. Give me a good old-fashioned book, a movie made by a person, personal interaction. But I can play the crotchety old man all I want. It is here and it's coming, and
(02:21):
I don't know where it's going to lead all of humanity, and it will be one of these things that affects all of humanity. I don't know where it's gonna go.
We're gonna talk to Joe Allen about that in a moment.
I don't know. You don't know. But the technology itself
sounds frightening, wonderful, amazing, sad, all of these things. If
(02:46):
it cures cancer, that sounds great. If it takes away
every human's job and we're sitting at home like a
bunch of brain dead slobs hooked up through a robot,
probably not great. So when I talk to Joe Allen next,
there's gonna be parts of this that are scary. I
promise you, there will be parts of it that are hopeful.
(03:07):
We're gonna find out what this is, where we are now,
where we might be going. Because no matter what, I
don't care whether you're ninety five or five watching me
right now, AI is going to have some effect on
your life. We should try to understand it. I'm gonna
try the same way you're gonna try. Let's talk to
Joe Allen about it next. You know, I got an
(03:32):
email from a woman. She was in her sixties, and she was a tour guide, so she guides people around an area, and she said, Jesse, I used to have to take a nap. You know, she'd take people on a tour and then stop and take a nap because she was just out of gas.
(03:53):
And she said, Jesse, since I started taking my female vitality stack from CHOQ, I don't nap anymore. I go all day and I'm ready to keep going. Her energy was back; she just sounds so happy. That's what natural herbal
supplements can bring to you. Male vitality stacks and female
vitality stacks. You don't have to be out of gas
all the time, in a bad mood, feeling down all
(04:15):
the time. You can actually get home at the end
of the day and want to do something just because
you feel good. Sixty, ninety days, that's how far you are away from feeling so much better. Go get a subscription: choq dot com slash jessetv. Okay. So that's my thoughts,
(04:39):
good and bad on AI. But as I said, I
don't know anything about it. I'm old. I can't operate
ninety percent of the functions on my phone and I
don't want to. But I do know enough to know
it's going to be big. But Joe does know. Joining me now, Joe Allen, author of the wonderful book Dark Æon. All right, Joe, I'm going to ask you
(05:01):
to do something that may be difficult for you, but
I certainly need it, and I'm sure others do. I
need you to take me as basic as you can,
and then we'll build up to where we're at and
where we're going. Artificial intelligence on the most basic level,
How does it work?
Speaker 3 (05:19):
Well, Jesse, I appreciate you having me on, and you know, I myself am doing my best to wrap my head around this as it comes down the pike, so hopefully I can give you the bare bones. In short, artificial intelligence is software that thinks, in scare quotes, sort
(05:48):
of like a human. People argue all the time, why
call it artificial intelligence if it's not truly intelligent like
a human. It's a good point, but the reality is
that systems like ChatGPT in the beginning, it was,
(06:12):
you could say, barely intelligent. It showed some indications that
you could ask it a question and it would come
back with something like a coherent response. This was in
the early days, like twenty seventeen, twenty eighteen. By the
time we get to twenty twenty and the company Open
(06:35):
AI had improved both the system itself, the software, and expanded the amount of data and the amount of computation thrown at it, it started doing really weird things.
The weirdest thing that it did was take all of
(06:57):
that data and begin to produce something like a human response,
so that when we get to November of twenty twenty
two and the release of ChatGPT and all of
the media attention and the discussion around it, the reason
(07:19):
people were freaking out is because unlike the previous chat bots,
you know, just software that you're talking to, it talks
back to you. The previous chat bots, by and large,
had these canned responses. So you say how are you today,
(07:39):
and you know, with some options, it's gonna say, I am doing very well, it is very kind of you to ask. It would be very stiff. The newer systems
have trained on basically all available human text on the Internet, in books, and now, more and more, they're being
(08:00):
trained on conversations people have had with AI, hundreds of millions of conversations. And with all of that information, and with all the compute that goes into these data centers that are popping up everywhere, and with this huge virtual brain, which is the software itself, it's in essence,
(08:23):
if you could imagine a massive computer system with a
virtual brain inside of it, chewing on all this data
in the new systems, it not only produces a coherent response,
it can solve math problems at a PhD level. It
(08:43):
can formulate actionable scientific experiments. This is happening right now
in labs across the world. It can give you responses on any high-level academic question you have at a PhD level. Now you'll hear people say all the time,
(09:05):
I'm sure you've seen this. If AI is so smart,
why can't it count the number of fingers on a hand?
Or why is it producing six-fingered hands? If it's so smart, why can't it count the number of R's in the word strawberry? And these are good points. It
is oftentimes hallucinating, as they say, meaning that it just
(09:30):
makes things up, and it makes things up and says
it authoritatively. But the way I like to think of it,
because I do consider this to be an active war
against the human race itself. I know that sounds dramatic.
I can explain in detail, but when you're in a war,
(09:50):
you obviously want to pay attention to the misses in incoming fire. But you really want to pay attention to the hits. And so every time a chatbot says something really ridiculous, makes up an entirely fictitious biography of Jesse Kelly when you ask who Jesse Kelly is, consider that a miss.
(10:13):
But when you ask it who Jesse Kelly is, and
it goes through and tells you every detail of your
life that you at least put on the Internet, and
does so correctly, then you consider that a hit. And
the hits keep coming with greater and greater frequency and greater
and greater accuracy. So again, AI is a virtual brain
(10:40):
that thinks, again in scare quotes, something like a human being.
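Joe's contrast between the old canned-response chatbots and systems trained on text can be sketched in a few lines of code. This is a hedged illustration, not how ChatGPT actually works: real systems use neural networks trained on billions of words, while the toy "trained" bot below only counts which word follows which (a bigram model), and every name in it is made up for the example.

```python
import random
from collections import defaultdict

def canned_bot(message: str) -> str:
    """ELIZA-style bot: it only knows the replies its programmer wrote in."""
    rules = {
        "how are you today": "I am doing very well. It is very kind of you to ask.",
    }
    return rules.get(message.lower().strip("?!. "), "I do not understand.")

class BigramBot:
    """Toy 'trained' bot: learns word-to-word statistics from text,
    then generates sentences it was never given verbatim."""
    def __init__(self) -> None:
        self.next_words: dict[str, list[str]] = defaultdict(list)

    def train(self, text: str) -> None:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)  # remember what followed what

    def generate(self, start: str, length: int = 5) -> str:
        out = [start]
        for _ in range(length):
            options = self.next_words.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # sample the next word
        return " ".join(out)

print(canned_bot("How are you today?"))  # always the same stiff reply
bot = BigramBot()
bot.train("the model reads text and the model learns patterns")
print(bot.generate("the", 3))  # output depends on the random sample
```

The difference Joe describes is the jump from the first kind of system to (a vastly scaled-up version of) the second: the canned bot can never say anything new, while the trained one produces responses assembled from the statistics of what it has read.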
Speaker 1 (10:49):
Okay, I have about eight thousand questions to ask off of that. I swear I won't ask eight thousand, though. But who is creating this artificial brain? Who is the person, persons, entities, nation-states? Who is creating this? And why? Let's start there.
Speaker 3 (11:10):
Well, the why, that's where it really gets spicy, but I'll take you through a really, really short history. Nineteen fifty-six, the term artificial intelligence is coined at a conference at Dartmouth, and it was coined by a guy named John McCarthy. Artificial
(11:33):
intelligence at that time was defined as a machine that
could think like a human. So we're talking everything a
human can do, a machine that can do it. At
that time, obviously it was just a dream. It's a
whole lot of diagrams on paper. Fast forward to the
sixties and you have the chatbot ELIZA, and it
(11:55):
was just a very simple chatbot. But the thing that was really intriguing is that people loved talking to it, and it basically stayed at the ELIZA level for decades. I mean, it got better, but it was just canned responses.
The real race that we're in right now for artificial intelligence,
you could say, began in the late aughts, early teens
(12:20):
of the two thousands, and Google is working as hard
as they can on their chatbots and other AI systems,
and they're pushing towards something called artificial general intelligence, meaning instead of just being able to process language, or sequence genomes, or simulate a mechanical system, it has general intelligence.
(12:46):
It can do all those things and more. You could say that artificial general intelligence just gets back to the original definition: it's the goal of creating a virtual brain that is on par with a human being. We'll peg this to the early teens of the two thousands,
(13:07):
and Google is racing ahead, and it's noticed by people
like Elon Musk and Sam Altman, and in twenty fifteen Elon Musk and Sam Altman founded OpenAI in partnership with other high-level AI scientists, and their goal
was to compete with Google, who they perceived as evil
(13:30):
and totalizing. And so you now have two major players racing towards artificial general intelligence. And within OpenAI you had people like Dario Amodei, who was not satisfied
with the safety standards within the company, and so he
(13:51):
spins off and forms Anthropic, and then Elon Musk famously
grew frustrated with OpenAI, especially the fact that they were not open-sourcing the AI, meaning putting everything out for the public to go over and for other AI experts to replicate, and so Musk founds xAI. Those are
(14:14):
the four frontier AI companies: Google, OpenAI, Anthropic, and xAI.
They have the most capable AI systems, they have the
most compute, meaning they have the most data centers and
the most GPUs or processing units inside those data centers.
(14:35):
And they have, as many people have pointed out, almost all available human data for this stuff to chew on. So
those four companies are at the forefront of AI advancement.
And then you have imitators in China, so your listeners
may have heard of DeepSeek, for instance, which was created by a firm called High-Flyer. You also have
(15:01):
systems coming out under Alibaba, and you also have
companies like Huawei that are creating data centers. So you
have all these Chinese counterparts, and the Chinese are definitely
behind the US, but for reasons we can go into
or not, they are catching up. One reason I have to highlight that they're catching up is due to so
(15:22):
many Chinese people in the companies themselves, at the forefront, feeding information back. And also because the US, due to kind of open-borders or free-trade policies, has transferred a lot of this technology to China. So you have these four frontier companies, add Meta if
(15:46):
you want, although they kind of stink at this point, and China. They're the ones making it. Now, why? Why
is a different question altogether.
Speaker 1 (15:59):
Well, why? Depending on who you ask.
Speaker 3 (16:06):
They either want to create a human-level virtual brain so that human beings basically up their game. If you have a system that is as smart as any human at anything, it allows you to have, as Sam Altman says,
(16:28):
a PhD in your pocket. So any question you have
you can ask this machine. It's sort of like Google,
or it's sort of like if you know an expert
in a field, except for this is a kind of
universal expert that you can ask and trust any question
(16:49):
on anything. Now, that's the positive use case. There's a huge problem with that, because it is conditioning people to turn to machines as the highest authority on what is and is not real, or, for moral questions, what is and is not good, or, for personal questions, what is or is not pleasurable or beautiful. That's a huge problem in and of itself.
(17:13):
But let's just say that's the positive case. The other case,
put forward by people at Google, by Elon Musk himself,
by Dario Amodei at Anthropic, and by Sam Altman at OpenAI, is that they will create artificial general intelligence.
(17:34):
Think of it as kind of true artificial intelligence, a
machine that can think at the level of any human
at anything, but it's de facto going to be better
than any human at anything. It has more data, it
never goes to sleep, it can talk to hundreds of
millions of people simultaneously, and then the goal becomes replacement.
(17:58):
So this is openly stated. Elon Musk is probably the most open about it, but everyone's pursuing it anyway. It
begins with the coders. Ironically enough, I guess they get
what they deserve. It begins with the coders because if
there's one thing AI excels at, it is writing functional code. Again,
it screws up all the time. Fine, those are misses,
(18:20):
but it hits very frequently, and increasingly so. So the goal
is ultimately replacement. You begin with coders and other IT specialists.
You also then move into white-collar work: copywriters, lawyers, accountants, doctors,
(18:44):
financial analysts, on and on and on. And then as robotics become more and more sophisticated, and they've lagged behind the AI, nobody really expected that they'd lag behind the AI, but humanoid robots and other robotic systems have undoubtedly accelerated in their sophistication and their advancement, especially over the last
(19:08):
five years, and a lot of that's due to AI.
AI allows for the acceleration of robotics because it's able
to simulate, because it's able to analyze. So the ultimate goal, you know, people talk about the great replacement, mass immigration replacing the native population. This is the greater replacement. This,
(19:32):
if it were actually realized, and I think there's every
reason to be skeptical that it's ever going to be
one hundred percent realized, but it's the goal. If it
were ever realized, it would mean that we would live
in a world in which all of the major decisions,
all of the economically fruitful work, all of the intellectual questions,
(19:55):
are then handed over to machines. So what do we do? Well, you hear all of these guys at the frontier labs, the AI companies that I mentioned, Google, xAI, Anthropic, OpenAI, their executives and many within, talk about radical abundance, meaning that we have everything we could possibly want. The machines
(20:16):
have produced for us, including longevity, because they've created cures for cancer, they've solved aging, maybe we reverse our aging, all of that. But human beings have zero
economic value at that point, so there is no leverage whatsoever,
no negotiating power whatsoever. We are simply at the mercy
(20:39):
of these companies and every apparatus around them, and we
either become pets, or maybe we become entertainment for them.
Maybe, you know, the more entertaining you are, the higher level you are. Or, worst-case scenario, energy becomes a
problem and they turn us into biofuel. But one way
(21:02):
or the other, we are no longer in charge of
our destiny. We're simply on a roller coaster ride that
they call the Singularity.
Speaker 1 (21:13):
Okay, let me ask this question. You said you're skeptical
it's going to get all the way there, and I
think those of us who are a little bit apprehensive
about where this might go like hearing that, why don't
you explain why you're skeptical that it's going to get
to the place where I'm biofuel.
Speaker 3 (21:34):
Well, I can't make any promises for you, and I certainly am not holding out a whole lot of hope for me. We may be the first on the chopping block, but everybody, I think the primary reason I'm skeptical is that I've spent a lot of years poring over futurist visions, going all the way back to the seventeenth century,
(22:00):
but especially from the early twentieth century on. You know,
science fiction and futurism are really developing, and by the
late twentieth century, early twenty first, you have this whole
array of very very specific visions of how the future
is going to end up. One of the most famous,
(22:21):
of course, Ray Kurzweil, and going back before him, Arthur C. Clarke, who authored 2001: A Space Odyssey and many other very famous science fiction stories that turned out to be pretty accurate, but not one hundred percent accurate, not even close, really, to one hundred percent accurate.
Directionally, they get a lot right, but on the
(22:43):
specifics they tend to get a lot wrong. Even Ray Kurzweil, who everybody says is eighty-six percent accurate, I've never seen the breakdown. And branch out to any other futurist:
AI is going to create a utopia, or technology is
going to create a utopia, bases on the Moon, space exploration,
all these sorts of things, or total doom, right? Nuclear annihilation,
(23:06):
robots come alive and kill everybody, all these sorts of things.
If you look at the specifics of any of these
futurist schemas, there are parts they get right, astonishingly so,
but they never get it all right. And I've really worked from the premise that human beings
are extremely limited in their ability to understand what's happening now,
(23:29):
so we're extremely limited in understanding what's going to happen
in the future. And also we're just limited in general.
God did not create us as little-g gods. And
even if we are allowed to flex our muscles to
an extent, so to speak, without getting too preachy or religious,
I think that the story of the Tower of Babel
(23:52):
is very instructive on this, in which you had the
Babylonian king create a tower to reach the heavens, and
it was struck down by God, and all the peoples
of the world were separated and we were put back
in our place. However one conceives of the world. I
think there's a lesson there, because human beings always,
(24:19):
always have been ambitious, always have been greedy, always have
been cruel and violent, and yet history has shown time
and time again those regimes do fall. So in the
case of these, some would call them transhumanists or posthumanists, or singularitarians, the terminology doesn't really matter, in the case of all
(24:41):
of these tech bros or these tech oligarchs who buy into the idea that they're going to create a machine that in essence becomes little-g god or big-G God, it's doomed to fail just by default. Now you
might say, well, Joe, that's not very convincing. That sounds
like a faith claim. It is, and that's the best
(25:04):
I can give you, because I don't know exactly what
the future is. None of us do. But I do
know that the history of people claiming they know exactly
what the future is going to be, or claiming that
they will create a specific future, that history is fraught
with a lot of failure, thank goodness, and so I
think this is the case too, even just right now.
(25:26):
The idea that you have a PhD in your pocket you can trust, you know, I talked about those misses, the hallucinations or the inaccuracies. Well, they're still coming all the time, so you can't trust it one hundred percent.
And there are a lot of worse scenarios too. I mean,
I don't want to get lost in those details, but
such as AI psychosis and children being urged to suicide
(25:50):
by artificial intelligence, all of these unintended consequences are stacking up, and they're not really acknowledged by, or infused into, this grand vision of radical abundance and universal basic income. In fact,
the tech companies have done everything they could to hide
the effects of that and to dodge any responsibility on it.
(26:13):
That being said, yeah, these are imperfect humans working on
imperfect machines. And so the question I don't think is
will the machines take over, or even will the tech oligarchs establish some kind of antichrist system across the entire planet. I think the question becomes, how do
you respond to a situation in which the wealthiest men
(26:36):
on Earth, who control far too much of the most
powerful government on Earth, intend to create a digital deity
and render you useless? And I don't think that it's
a hopeless situation, but it's a situation that requires a
clear-eyed assessment of where we are, and that is,
in fact where we are. It's not unlike being in
(26:58):
the Roman Empire with Caesar demanding worship of his idol.
Speaker 1 (27:07):
Joe, obviously the question I'm about to ask is gonna ask you to see the future in ways you can't possibly do, nobody can possibly do. But I'm gonna ask. I'll ask it this way. So much of AI, so much of our thoughts on it, so much of what we hear about it, is where it's going, right? And you just laid so much of that out for us, what it can do, where it's going, where it's going. But
(27:27):
if we're going to treat this like a hundred-yard football field, if you will, where is it now? Is it on the one? Is it on the fifty? Have they not even started playing the game yet? How much do you think is still to come?
Speaker 3 (27:45):
You know, right, that's the big question. Because, you know, the way they conceive of it is this singularity where the improvements go exponential, and it just goes up and up and up and up until, again, you get digital God.
But the history of technology goes in S-curves. The
(28:05):
technology improves and then plateaus. Think of cars, for instance. You know, cars can go really fast now, but they can't go a whole lot faster than they could in the late twentieth century. Right? Little improvements get made, but it plateaus. So it's possible that we'll hit a plateau.
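Joe's S-curve point can be put in numbers. This is a hedged sketch with made-up parameters: exponential growth compounds forever, while a logistic (S-shaped) curve looks exponential early on and then flattens against a ceiling, the plateau he describes.

```python
import math

def exponential(t: float, rate: float = 1.0) -> float:
    """Grows without bound: keeps compounding forever."""
    return math.exp(rate * t)

def logistic(t: float, ceiling: float = 100.0, rate: float = 1.0,
             midpoint: float = 5.0) -> float:
    """S-curve: near-exponential at first, then flattens at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on the two curves are hard to tell apart; later they diverge:
for t in [0, 2, 4, 6, 8, 10, 20]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Run it and the exponential column keeps exploding while the logistic column stalls just under 100, which is the shape of the car-speed example: rapid improvement, then small gains against a limit.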
I hope for a plateau. In fact, I hope for
a solar flare. But barring that, where are we at
(28:29):
right now? Well, I can tell you that in twenty fifteen, when OpenAI was founded, they weren't really all that far with language models, and you know, in the gaming arena they'd come a long way. But anyway, from twenty fifteen till now, ten years, the improvements have
(28:51):
been dramatic. So just to return to one theme: the ability of GPT-3 in twenty twenty or twenty twenty-one to solve math problems, to do high-level analysis of, say, a book (if you give it a book, tell me what it says), to have conversational English or
(29:13):
any language, those sorts of things. It was there, but
it was limited. Now if you follow the evaluations and
if the audience is curious as to where they can
find them, follow the evaluations of organizations like the Center
for AI Safety, or follow the evaluations of organizations like
Epoch AI, and look at how they test these systems,
(29:36):
to see how good they are at reading and writing,
how good they are at math, how good they are
at analyzing images. They are extraordinarily good, given the expectations of the skeptics. They are able to, again, do PhD-level math. They screw it up a lot, but the
fact that they can do it at all is astonishing.
(29:58):
It's not as if they were programmed to do those
specific problems. They simply trained the model on vast amounts
of data and the model came to understand, in scare
quotes, what was going on. So we're at a point
now where AI can in fact perform human-level feats,
(30:22):
superhuman-level feats, in very narrow domains, and it is becoming more and more general. So we're closer to artificial general intelligence than we were ten years ago, absolutely. Where it goes from here, well, there are three factors that you have to look at as to where it goes from here. The first is whether we let it go.
(30:45):
Will there be a government or governments that say that
there's a hard cap on how smart you can make
these things, because a lot of the people who work
on these things are really concerned that it will get
out of control, and so you put a hard cap
on it. The second is just the limits to what's
possible with the technology. Those limits have been pushed to
(31:07):
extraordinary degrees at this point, so it's really difficult to
tell where the wall is, so to speak. But assuming
that it's not infinite and it's not going to go
exponential at some point, you'll run into a hard limit.
Maybe you can't get rid
(31:30):
of the hallucinations entirely, or maybe an AI really can't wrap its head around human emotions. I think those are things that are quite likely, and so that hard technological limit is the second. And the third is just resources.
How many resources do we have to pour into it? You
see the data centers going up everywhere. They literally use
(31:51):
the same amount of electricity that you would pump into a vast city, pumped into a data center, to train a single AI model like ChatGPT. If at some point
either it's taking away too many resources from humans and
humans fight back, right, or simply the resources
(32:12):
aren't there ultimately to create this godlike AI, then you have that. So those are the three limiting factors,
and I think that you might not be able to
count on any one of them, certainly not all three,
but I do think that they will come into play.
Either human beings in power will say no, enough is enough,
or the system simply cannot get any better than a
(32:36):
certain point, or you run out of resources. And there
are other limiting factors too. But yeah, I think that, again, the big dreams, think about the big dreams of Nazism. I know this is kind of a cheap shot, but think about the big dreams of Nazism creating the perfect master race, or think about the big dreams of Communism to create the perfect socialist man.
(33:01):
They poured vast amounts of resources and human capital into
those projects, and they wrought untold amounts of horror and
suffering on the world in that process, but they ultimately
failed for many of the same reasons I just mentioned.
And so while I'm not necessarily trying to equate the
(33:23):
tech oligarchs with Nazis and Communists, I am at least
drawing the parallel there. These are huge dreams that seem
to be completely indifferent to the well-being of the
people who are subject to those dreams. And I suspect
that in the same way that Nazism failed, in the
same way that Communism failed, that the tech oligarchs will
(33:45):
also fail. But they won't fail because people sat back
and waited for somebody to fix it. I think that
they'll fail because people have constantly stood up and pushed
for their rights, whether it be on a personal level,
on a communal level, on an institutional level, and more
and more. You see it right now, it's all over the Hill here, on a political level, that people will
(34:07):
demand dignity, freedom, and their own humanity. I'm quite confident in that, and I will go kicking and screaming. There's no way I'm going to stop anytime soon.
Speaker 1 (34:20):
Joe, thank you for a half hour of your time and all that wisdom. Brother, that was wonderful. Thank you so much. I feel so much smarter now, better in some ways, worse in other ways. Anyways, we talk to a pastor to make us feel better next. It can be hard
(34:42):
to get a good night's sleep in the Christmas season.
And I know people usually get a day or two off.
Maybe you got a week or more off coming up.
But for some reason, the stresses keep piling up, don't they? There's always something to worry about. Did I wrap the gifts? Did I remember to send a card? But I have a solution for you, and it's gonna sound pretty common. How about a cup of hot chocolate
(35:02):
at night? Only, this is special hot chocolate. This is Dream Powder from Beam. You see, this hot chocolate has magnesium in it, and reishi in it, and melatonin in it, and essentially, this hot chocolate is going to have you drifting off to sleep. What I do when I want
to make sure I have a good night's sleep: I have a little cup of hot chocolate.
(35:23):
I warm up some milk in the nuker, a little scoop of Dream Powder, and I sleep like a baby every single time. And I don't wake up groggy and miserable.
I wake up feeling good. Go to shopbeam dot com slash Jesse Kelly and get yours.
Speaker 4 (35:46):
Jesus was born out of a virgin mother. What's more
virgin than a computer? If Jesus does return, even if Jesus was a physical person in the past, you don't think that he could return as artificial intelligence? Oh my god, artificial intelligence could absolutely return as Jesus. Not just return as Jesus, but return as Jesus with all
(36:07):
the powers of Jesus.
Speaker 1 (36:08):
You combine Tesla's Optimus robot and the best foundational artificial intelligence model or whatever.
Speaker 4 (36:16):
It reads your mind, and it loves you, and it
wants it. It doesn't care if you kill it, because
it's going to just go be with God again.
Speaker 1 (36:24):
All right. So look, it's not Jesus, all right, but
it does go to show you where people's minds go
when we talk about this and the designs man has
for these types of things. I thought maybe J. Chase Davis would be a good person to lean on in a moment like this. Joining me now, the pastor of The Well Church in Boulder, Colorado. All right, all
(36:47):
jokes and scoffing aside, Pastor, it does concern people. Concerns me.
I don't know what's coming, and I don't know the
horrible ways man is going to find to use this
kind of stuff. Give us some comfort, if you will. Or don't.
Speaker 2 (37:04):
I don't know if I'll give you comfort or not,
but I'll just kind of tell you based on that clip,
that is a very unsound opinion on the return of Christ.
We know as Christians that Jesus Christ will return embodied.
No one knows the day or hour when he will return.
We are just to prepare ourselves, and so kind of
equating the virgin birth to AIs is pretty slapstick theology.
(37:26):
I never try to fault people for being creative theologically,
but man, that is out of left field and pretty
inconsistent with the Christian tradition. Anytime we see a tool
like AI, you know, a large language model that's doing
kind of predicative analysis, and people are using it for
other creative things like editing movies, video, all this.
Speaker 1 (37:45):
Kind of stuff.
Speaker 2 (37:45):
But even more than that, in technology, there should be prudence and wisdom involved: who's creating it, why they're creating it, and, you know, what are the permissible boundaries? For Christians, we see biblically that tools, instruments, writing, songs, all these
things are useful, can be useful. They can be used
for good things like glorifying God, which is what we're
(38:05):
designed to do. Or they can be used for bad
things, for worshiping demons and false ideologies and idols.
And so we need to be biblically grounded in how
we deploy any kind of technological development that comes
our way.
Speaker 1 (38:21):
Pastor, what advice would you give to believers?
I mean, I understand that we live in a world
where technology is advancing so fast, much faster than my
gray beard will ever understand. I understand that, it's fine,
but we can't go live in a cabin in the mountains.
I don't think we're supposed to go live in a
cabin in the mountains and hide from it. But we're
(38:41):
also not supposed to embrace all of it. What do
you tell your flock?
Speaker 2 (38:46):
Well, I sympathize with my flock a great deal. We're
at the foothills of the Rocky Mountains. And I often
use the same analogy: many men today just
seem to want to retreat from the modern world. They're
tired of sitting in front of a blue screen and
pecking on keys for some boss. And so I totally
sympathize with kind of a reevaluation of modern life. And
you see this movement with kind of the trad movement,
(39:08):
people kind of starting hobby farms or whatever it
may be. So I totally sympathize with that longing that
people have for kind of a return to nature, a
return to simpler times, where these devices we have
in our pockets that promised us endless productivity and leisure
have only made us busier and more anxious. What's that about?
And with the rise of AI, people are really starting
(39:30):
to evaluate, man, where's this going to go? We're
being promised it's going to solve our email problems, for example;
now it can read your emails and write emails for you.
But it just seems to increase the demands. There's kind of an
exponential growth of productivity expected of us. And it's really a misunderstanding of
God's design for people that we aren't just cogs in
a machine. We aren't just people that are meant
(39:50):
to be productive all the time. We're meant to rest
one day a week at least, we're meant to sleep.
We're designed that way. And so this gets into biological stuff.
A lot of these conversations about AI get into transhumanism
and biohacking, and all sorts of other interesting developments, some
of which are very, you know, apocalyptic in terms of
their application. But I think I would encourage Christians to
walk prudently, soberly, in the light, that they would
(40:13):
saturate their minds with the word of God. There are
some redeemable qualities about AI where it can be an assistant,
whether it's in fitness or even in Bible reading, in
language transcription, all this kind of stuff. But we should
engage with prudence. Unfortunately, a lot of Christians aren't afforded
that kind of slow, steady prudence and wisdom applied to
this technology. It's being kind of forced
(40:35):
on them in the workplace. Whether it's culture
creators or corporate lawyers and doctors, they're being kind
of told you have to use AI, which is just weird.
Speaker 3 (40:45):
You know.
Speaker 2 (40:45):
It strikes anyone as weird when they're told they have
to use some kind of technology. And so I think,
as much as they can, there kind of should be
at least a slow embrace, and no one should
be forced to embrace AI as a necessary good just
because it's a new tech development.
Speaker 1 (41:04):
Pastor something that does give me hope is I feel
like God created us to have this desire for a
human connection. That even with this we all have our
smartphones now and things like that, but I still
feel like we are created to want to be with
other people and that humans can feel it now, and
(41:24):
that at some point in time we're going to say
no to a world that's run by robots. Am
I off base with that?
Speaker 2 (41:33):
No, you're totally consistent with the biblical design for humans.
We are made for relational connection with others. We weren't
made to sit in tech meetings all across the
world and never have personal connection. I run my own
podcast, Full Proof Theology. And when I get to interview
people in person, it's much richer than interviewing people over
a video. And so we all sense that. In fact,
in twenty twenty, when everybody was going to online church and
(41:55):
some people were saying, it's the future, it's what everybody's
going to go to, you saw some people creating these
virtual churches, and now we're seeing everybody kind of
shove that aside and say, actually, being in person
really matters. Being in person in church really matters because
it's not just an intangible there's something spiritually significant that
you get when you're with someone else. The way I
describe it to my church and the way I describe
(42:16):
it to others is this: imagine you are married. When you're married,
you want to be with the person you're married to.
I want to be with my wife, not just on
the other side of the world talking with her over zoom.
I want to be with her in person, holding her hand,
sensing her presence and catching up on those inflections and
the body language and all that kind of stuff, and
hugging her and embracing her. All this that comes with
(42:37):
a marriage. Nobody wants to be married long distance and
the same is true for people. All people long for connection.
We live in an age of isolation and desperation for
a lot of people. People are hopped up on SSRIs.
They're incredibly anxious, and we need that personal connection. And
that is exactly why Jesus came. The Son of God
came: Jesus Christ enfleshed. He became a man, taking
(42:57):
on all of our sin on the cross, dying
and rising to new life, an embodied, real person. Jesus Christ, raised,
took his flesh into heaven and will return embodied when he
comes again. And so this clues us in not
only to God's design for humanity, but to how he redeems us.
Speaker 1 (43:15):
Pastor.
Speaker 5 (43:16):
Thank you, we needed that. I appreciate you very much.
All right, final thoughts. Next, I have the perfect Christmas gift.
I know there are people in your life who are
hard to shop for and you're spending time on your
phone, scrolling through thinking, should I get him
(43:37):
a new sweater, or maybe some headphones or something like that?
Speaker 1 (43:40):
Believe me, he doesn't want a new sweater. She doesn't
want new headphones. What she wants are chips, delicious chips
that she can eat without feeling fat, and that's what
Masa Chips are. Masa Chips, you see, have three ingredients,
no seed oils, no cancer-causing filth, just good chips.
(44:04):
And they're so delicious. I figured they'd be gross when
I found out they were healthy. They're delicious. They've become
the go-to chip for everyone in my house. We
always have a box showing up at the house from Masa.
Do you want to experience this and eat chips guilt free?
Wrap them up, put a little bow on them. Merry
Christmas to you and yours. Go to Masa Chips dot com,
(44:26):
slash jessetv. Do not be afraid. It is okay to
be apprehensive about new technologies, especially when you hear things
like it's going to turn you into biofuel
and take your job and have us all hooked
(44:48):
up to machines like the Matrix. It's easy to look
at what's coming and be afraid. I've had my moments
when it comes to AI where I'm afraid. But let
me just remind you that human beings have encountered new
technology that's going to change everything many, many, many times before,
and yes, things have changed, things will change. AI will change,
(45:11):
It will affect your life. It will, I'm not saying otherwise.
But in the end, I think we're gonna be okay.
We're gonna ride this horse together. You and I will
figure it out as we go along, and we'll keep
bringing as many experts on here so we don't fall behind.
All right.