
September 20, 2016 71 mins

What is artificial intelligence? What's the difference between weak, strong, narrow and general AI? And what's state of the art today?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says, I'm a
little intellectual, someone who knows it all. I'm Jonathan Strickland,

and I'm Joe McCormick, and our other host, Lauren Vogelbaum,
is not with us today. She is in the city
of New York, within the state of New York. Yes,
the city of apples. Well, that's what it is now,
according to The IT Crowd. You were there not long
ago too, Joe. Yeah, that's right. So

my co-hosts on the other podcast I do here, Stuff
to Blow Your Mind, Robert Lamb and Christian Sager, and
I did a live podcast at the Star Trek Mission
New York convention that was held at the Javits Center.
That's fantastic. I'm so envious that you guys got to

do that. And then of course Lauren is up there
with a lot of our crew just kind of exploring
New York and also taking part in a bunch of
different activities, including a trivia night. She's having a blast.
We are having a blast here talking about something that
could potentially blast us off the face of the planet.
I knew you were going to make a blast pun
to get there. Today we're gonna be talking about

the potential scenario of an AI arms race. Yeah, and
this is really going to be a two part series.
So this first part we're talking mainly about just the
concept of artificial intelligence, the various definitions that we have
created to talk about AI. And in our next episode,
we're really going to dive into some of these various

scenarios people have proposed that we could see as a
result of this emerging discipline. And it's funny to call
it an emerging discipline, because really, if you look back
at the history of AI, it's more than a century old.
But I think most people would argue that what the
general public would consider artificial intelligence, that's the kind of
stuff we're just now starting to see kind of peek
around the corner a little bit. So
starting that all off, let's start defining artificial intelligence, because
this is a term that gets used all the time.
Sometimes it is misused, sometimes liberally misused or misrepresented. And,

to me, it's sort of like virtual reality. It's one
of those terms that people have heard, and they
generally know what it means, but sometimes they have a
different vision than what was necessarily intended. It means a
stock image of a sexy robot with a gun, yeah yeah,
or sometimes a cyborg lady with an ear of corn.
Ear of corn? You've never seen those? I'm not sure.
The ear of corn? The ear-of-corn cyborg lady? I am
ashamed to admit I have used that image in an
article for HowStuffWorks.com. Tangent, and I know we're going on
a tangent already: Dylan, who works here,
he edits a lot of the podcasts, but he also
does a lot of photo work, and he held a

workshop here in the office to talk about the kinds
of photos that you should and should not use when
you're searching stock photo websites to illustrate your work. And
I had to cringe at some of the ones he
was picking out as ones you should not use. I
know for a fact I have used at least one
of the images from that series, if not that specific
one, then one from the same photography series that generated
the particular image he used as an example. Yeah. So
if you're ever looking for
a good belly laugh, go to a stock photo website
and look up pictures of artificial intelligence or pictures of hacker. Yes,
and so this actually tells me that we don't need
to worry about artificial intelligence. We need to worry about
just human intelligence first, at least in my case. Well,
that's not necessarily all that far off the money, according
to some people. So how would we define artificial intelligence?
One really simple way to characterize it would be intelligence
possessed or performed by machines. But this is already complicated,
right, because you used a word, intelligence, to define artificial
intelligence. Yeah, and it's not that simple, because it's there
in the name: artificial intelligence. Now, if it were really
just about machines meeting some objectively defined criterion of intelligence,
there would be nothing artificial about it, right? You might
just refer to it as machine intelligence then. Yeah. And
machine intelligence, in fact, is a term that many experts
writing in this area seem to prefer. They like it
better. But when
some people use the term artificial intelligence, I think they're
implying that machines are merely performing some kind of simulation. Generally,
the simulation of the behavior of a human brain, right,
like the science fiction depiction of artificial intelligence, is a

robot or computer that thinks, at least on some level,
similar to the way a human being thinks. Yeah, And
this does come through in many of the public facing
criteria for determining what constitutes a successful example of artificial intelligence,
for example, the so-called Turing test, which we will refer
to numerous times throughout this, and we've talked about

it on the show before, you've probably heard about it
from us before. But essentially, as it's understood today, a
computer program is designed to chat back and forth with
a human via text messages on a
and forth with a human via text messages on a
computer terminal, and ultimately its goal is to trick the
human into thinking it's not a computer program but another
live human, right. Essentially, the idea being that you are
communicating with a series of entities, some of which
are communicating with a series of entities, some of which
are humans, some of which are computers, and you are
unable to reliably tell the difference between the two. And
so the test as conceived today arises out of a
philosophical concern originally voiced by Alan Turing over essentially our
inability to tell the difference between a machine that truly

possesses intelligence and a machine that merely gives the appearance
of intelligence. Right, we give people the benefit of
the doubt that they too are intelligent when we interact
with them. Correct. Like Joe and I are sitting across
from one another at a table. We both have microphones
in front of us. We are interacting, We're having this conversation,
and Jonathan has no idea that I'm not conscious. Right

to me, you know you are behaving in a way
that I would associate with being intelligent. Correct, Like you
are hearing what I say, you are processing it, you
are responding with your own thoughts. So to me, that
is the appearance of intelligence. I know that I am
intelligent because I have that personal experience. Therefore, I assume
you also have a personal experience that is similar, if

not identical, to my own, and you must possess intelligence.
If a computer is able to behave in such a
way that it also appears to have these faculties, even
if you don't know what's going on in the back
end, Turing would say you should at least extend the
courtesy to say the machine is intelligent, because you

would have done the same thing for another human being. Yes,
but I think this all gives a very human gloss
to the definition of intelligence. Interpreted like this, intelligence
sort of seems to mean that property of a conscious
mind that is possessed by a normally functioning brain of
an adult Homo sapiens. Yes. In other words, we have
couched the word intelligence to be a distinctly human type
of experience. Like, if we're talking about artificial intelligence and
we're framing it in the human experience, then machine intelligence
is almost meaningless. Yeah, and so what I would be
on the search for
is a definition of intelligence that's more kind of an
objective definition that could apply to any object, including humans

or machines, depending on what they could do. So do
we have anything that's like an objective, universal definition of
intelligence that doesn't just mean acting like a human? We
might. I came across one that I like a lot.
I'll see what you think of this, Jonathan.
So this comes from the systems theorist David Krakauer, and

he's got definitions of intelligence and of stupidity that I
think are actually very illuminating. So, according to Krakauer, intelligence
is finding very simple solutions to complex problems. And he
gives an example I'd like to read from a piece
that he did, published in Nautilus. So

he says, quote, let's take a very simple example, and
the example I'd like to choose is the Rubik's cube.
If I give you a Rubik's cube and you try
to solve it just randomly, so just imagine you're turning
it in all directions until, you hope, by random chance,
it gets solved. He continues, it will take many, many lifetimes.

There are a billion billion solutions to the Rubik's cube.
That's several lifetimes. That would be ignorance. That would be
where you just don't know what to do, and so
you perform essentially at a random level. Stupidity for the
Rubik's cube is if you just consistently moved and manipulated
one face. Maybe if I just rotate this one face forever,
eventually the cube will be solved. And it will never
be solved, even in infinite time, unlike the random case,
which will. Intelligence is a series of rule manipulations that
will guarantee that you reach a solution in n steps
or less. So my way of translating this is
that he's saying intelligence is the ability to come up

with strategies for improving one's chance of success at a
given goal. So, in other words, to generalize this, to
go beyond the Rubik's cube to just a general problem:
let's say that
you are faced with a problem and you try to
come up with a solution. It turns out your solution
doesn't work, so then you adjust, You try a different

solution to see if that perhaps is more applicable, and
eventually you hit on it. You get to a point
where it may not be the most elegant, it may
not be the most efficient, but you finally find a way
to solve that problem. Whereas the stupidity issue is, you
go with that first attempt, it doesn't work, and you
just keep going with that same attempt. You do worse
than random chance. Yes, it ends up almost falling into
the same definition as the idea of insanity, you know,
doing the same thing over and over but expecting a
different result. Same kind of argument there. And you could
see this easily with robotics, right? A robot that all
it can do is pick up

an object and turn it and set it down. And
that's all it can do. It's never going to
be able to solve a problem that doesn't involve picking
up an object, turning it, and then setting it down.
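Krakauer's three regimes, ignorance, stupidity, and intelligence, can be sketched on a toy stand-in for the Rubik's cube: unscrambling a short list where the only move is swapping two adjacent entries. Everything below (the puzzle, the function names, the step limits) is invented for illustration, not taken from the episode:

```python
import random

def solved(state):
    return list(state) == sorted(state)

def swap(state, i):
    """Apply one 'move': swap the adjacent entries at positions i and i+1."""
    s = list(state)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def random_strategy(state, limit=100_000):
    """Ignorance: make random moves until the puzzle happens to be solved."""
    steps = 0
    while not solved(state) and steps < limit:
        state = swap(state, random.randrange(len(state) - 1))
        steps += 1
    return steps if solved(state) else None

def stupid_strategy(state, limit=100_000):
    """Stupidity: repeat the same single move forever; it never solves."""
    steps = 0
    while not solved(state) and steps < limit:
        state = swap(state, 0)  # always manipulate the same 'face'
        steps += 1
    return steps if solved(state) else None

def smart_strategy(state):
    """Intelligence: rule-based swaps (bubble sort), guaranteed to reach
    a solution in at most n*(n-1)/2 moves."""
    state, steps = list(state), 0
    while not solved(state):
        for i in range(len(state) - 1):
            if state[i] > state[i + 1]:
                state, steps = swap(state, i), steps + 1
    return steps
```

On a four-entry scramble, the rule-based solver finishes within the guaranteed bound, the random solver gets there eventually, and the one-move solver loops forever, which is exactly the ignorance/stupidity/intelligence split Krakauer describes.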
And we do see stupidity in robotics. Sometimes it's necessary,
right? Because with the stupid solution, if you're talking about
something that is always going to be the same, you
are never going to have any variance from the approach.
You want a stupid robot, right. Right, you just want
it to do this series of three steps and that's
all it does. Oh, well, if I'm hearing you correctly,
that's a different thing. Like,

a robot cannot have any innovative solutions, but it can
still do its job correctly because it's been programmed to
do so. You could even have, what I mean is,
a stupidly programmed robot in our sense, where it performs
worse than random performance at a task. Like, the example
would be the robot that is trying to hand you
something but instead stabs you. That would sort of be
stupid robotics. Yes, I would definitely feel
stupid for using that robot. Now, moving beyond just
trying to define what is intelligence, even beyond what is
artificial intelligence, you start encountering subsets of artificial intelligence. And

this is where a lot of other confusion comes in,
and we're gonna be talking about some of these subsets
in detail. One thing you may have encountered is the
concept of weak AI versus strong AI. Yeah, and so
this distinction is sometimes invoked as a philosophical one rather
than a technological one, and for good reason. I mean,

for one thing, we don't have anything that I think
approaches the strong AI definition, and so we can't say
what the technological distinction is, because we haven't achieved it,
right. But let me explain. So, in the philosophical sense,
weak AI is a machine that can simulate or generate the

outward appearance of intelligence or quote thinking, and strong AI
is a machine that is quote actually thinking. So I
would say this distinction is sometimes extended to sort of
assert that strong AI is conscious, you know, like that
there's something it's like to be that robot. And
I do find the debate about whether machines can be

conscious and whether machines can really think. I do find
that interesting, but I also think that it's sort of
a different question than the question of implementing AI technology
in the real world, because you can't know whether another
person is conscious. So it's just different than talking about
strategies for how to create an intelligent machine, right, And

I see your point, and I agree with it mostly.
I think that it's important only in the sense that
you have a lot of people talking in terms of
AI that, at least from the way they are talking
about it, seems to indicate strong AI, this idea of
machines that are quote-unquote actually thinking. And there's a lot
of debate on whether or not

we will ever achieve that. There's some people who don't
think it's achievable at all. There are others who say, well,
that seems like it's pretty egotistical, to suggest that only
the gray matter in our heads is capable of this,
and that we would never be able to achieve this
in a machine environment. It may be that it's impractical.
Some would say, how would you

know the difference? And in fact, there are lots of
different philosophical, you know, thought-experiment-type things. I think we've even
mentioned one in a previous episode. Have we ever talked
about the Chinese Room? Yeah, we
did a whole episode on the Chinese Room experiments, all
about whether machines can think right or are they just
responding in a way that they've been programmed to respond
so that it seems like they are thinking, that the

illusion of thought is there, but there's no actual comprehension
going on. But I agree that overall that conversation is
not really relevant to most of what we're concentrating on,
other than the fact that I think at least some
of the people we'll chat about appear to have that
kind of concept of AI in mind when they're talking
about it. Well, and to be fair to them, I
think some people would make a not-stupid case, like an
interesting case, that these things we're talking about, consciousness,
something deep about the very human, distinctly human things we
think of as thinking and consciousness are actually in some
way crucial to what we're about to talk about, which

is general intelligence. But another way we could address the
idea of weak AI, and this is sort of different,
but the same terms are used sometimes: it's also known
as narrow AI. It's the AI that you're already familiar
with, any device or computer program that simulates some mental ability

of human beings. So you could argue that facial recognition
in photographs on a computer is a very specific form
of narrow AI, or those Turing-test chat bots, you know,
those programs, are another form of narrow AI.
We're already surrounded by examples of this kind of thing. Yeah.
In fact, we see advances in this all the time
in interesting ways that maybe you weren't necessarily associating with
artificial intelligence. One great example of this is CAPTCHAs, you
know, that step you need to go through in order
to prove you're not a robot to various online services.
Typically, you know,

early on it was just you read a word and
then you had to type in what the word was,
or maybe it was just a string of letters and
numbers, whatever it might be, and then the computer would
verify that you are in fact a human being and
not some sort of automated bot that's just trying to
spam a service. Then there was a point
where the object recognition or the text recognition capabilities of

computers began to improve, and that is one aspect of
artificial intelligence, the ability to recognize characters and understand what
they correspond to. And so the CAPTCHAs began to get
more difficult. They were distorted in weird ways or somewhat
covered up by other images in order to fool
these image recognition programs, which then made the people designing

the image recognition software go a step further so that
they could recognize those examples too. And it became a
back-and-forth between security systems and artificial intelligence. And if you
read up on the folks behind CAPTCHA, they said, yeah,
we were kind of thinking like this: this will help
services figure out that, you know, the people who are
trying to use their services are in fact human beings,
but ultimately we wanted to drive the development of
artificial intelligence. It seems so weird because we often think
of AI as being this much more broad concept, nothing
so specific as being able to recognize that this string
of weird images is in fact a series of letters

and numbers, but that is wrapped up in the concept
of artificial intelligence and would be very much an example
of narrow AI. My favorite example, are the ones that say,
which of these pictures is not a clown? Right? Five
clowns and then a pocket watch and a tiger right right,
I've I've seen those two. Yeah. Where you you get
like a bank of like nine images and about five

of them tend to be whatever it is that you're
supposed to be looking for, or you get one that
really kind of makes you feel stupid, like it says,
pick the images that have a river in them, and
you're looking at one you see like a house on
a shore and you're thinking, all right, is that a river?
Is that a lake? And does this know the difference? Well,

regular CAPTCHAs make me feel stupid. Sometimes I can't see
what those letters are. Yeah, I sometimes will go there.
You know, there's audio ones too, and in fact audio
CAPTCHAs are the same thing: the voice recognition, the audio
recognition, plays into artificial intelligence, and natural language processing plays
into artificial intelligence. All of these different things. So really,
these are all aspects, or rather manifestations, of narrow AI.
And some of them are better than others. Some of
them are more accurate than others, some of them are
further along than others, but it shows how diverse this
discipline is. Yeah, but at the other end of the

spectrum from narrow AI, you would get to the main
concept that we're gonna be talking about in these two episodes,
which is general AI. Artificial general intelligence, often abbreviated AGI.
And under some usage, I think some people use the
term strong AI to mean this, right,

they interchange the two terms to mean the same thing,
but more accurately. Yeah, what we're trying to talk about
is artificial general intelligence, and it's the property possessed by
a machine that can apply intelligence to many or even
all problems, rather than just one problem or some small
number of them. And it's more like the diverse and

adaptable problem solving engine within the complex animal brain, except
it's going to presumably be free from biological and psychological
limitations that we have in our human brains and probably
be able to outperform the human brain at most, if
not all, problem-solving tasks. So this would be the difference between

an AI implementation that can recognize a face and go
beyond that, to one that not only can recognize a
face but also knows the context of what's going on,
and can even respond to questions that are related to
the picture, and ones that are tangential to the picture.
So you might say, what is this a picture of?
And the answer might be, well, that is a group
of human beings. All right, well, how many of them
are wearing blue T-shirts? And it could tell you, and
then you could start asking more questions that start building
off of that. It would continue to answer those. And
it goes beyond just a single task or a single
course of action. Right, like,
another great example is that a lot of the robots
we depend upon today are functional because they focus on
a very specific task. They don't branch beyond it. And
we've seen that there are a lot of challenges to
creating robots that are able to tackle numerous challenges, and
thus we come to the conclusion that coming up with
an artificial general intelligence is going to be incredibly challenging,
just from the baby steps that we've seen so far
in that arena. And even then, with the DARPA Robotics
Challenge, which we will mention again later on, even with
the advancements that we've seen in that realm, we see
the limitations that are there. And that's only a tiny,
tiny slice of everything, whereas general intelligence, we tend to
say, could deal with any topic or any task.

Anything humans can do, it should be able to do too,
and probably some other stuff too. Yes, yes, so yeah.
I think I'm going to suggest that it makes more
sense for us to speak of narrow AI and general
AI for the purpose of these episodes than weak AI
and strong AI, just so we can avoid these side
questions about, you know, the nature of consciousness and the
phenomenological nature of thinking and stuff. I agree. And I
also want to point out that general
artificial intelligence can still refer to a machine that is
dedicated to a specific set of tasks. It doesn't have
to mean that you've got, you know, the Deep Thought
computer where you can ask any question and it will
give you the answer. It may be that we look

at examples like a machine that could exist within the
medical field is the example I picked, where it helps
doctors diagnose and treat patients, and IBM's Watson computer is
meant to do those sorts of things. But I'm talking
about going a step beyond what Watson can do, and
being almost like a partner or collaborator with the doctors,
the medical staff. I've got something to say about that
in a bit, because there are some critics of AI
theory who don't like Watson, and that's understandable. Yeah, I'm
glad that you put
that note in as well. Also, I think that it's
possible we'll see that there's not a firm line between

narrow AI and general AI, that maybe that line is
actually pretty fuzzy, and that once you get to sufficiently
complex narrow AI, it may be difficult for us to
say, well, is this still narrow AI, or would we
call this general AI?
It may not be something as simple as, you know,
well, this is distinctly general AI and this is distinctly
narrow. But it is useful, for the purposes of this
conversation, to have that distinction between the two. Okay,
one more topic we should define before we start getting
into these debates is the idea of super intelligence. Yeah,
this is what Superman has. No, but it is, well, actually,
I guess Superman does have sort of superhuman intelligence. I
mean, if you watched Batman versus Superman, clearly he does
not, because he would never agree to be in that
movie if he had super intelligence. Yeah, I watched most
of that on the plane back from New York. Yeah.
Wow, that was a slog. So what

is super intelligence? That's just intelligence surpassing the most powerful
biological intelligence of humans, which means, clearly, by definition, that
we humans cannot possess it unless we've undergone some form
of enhancement, unless you get to your transhuman kind of
brain. Right, right. And you could even argue that this
would be by degrees, right? It's

not necessarily that we would be leaps and bounds more
intelligent once we hit that point. It may mean that
we find, through genetic modification, we can improve our intelligence
by degrees over the course of time, as we get
more and more adept with it. And so it may
be that it's a transition where it's not like the
common conception of the singularity, where one day everything is
different, but rather, over the course of a decent amount
of time, we get to a stage where we can
no longer meaningfully define the present. But it does mean
changing humans in some way, if we want to achieve
superhuman intelligence, either through medicine or through technology
or both or whatever. Right. But superhuman intelligence in machines:

One thing that I think is interesting about this is
it's always just assumed. It's assumed that if we create
AGI, that will lead to super intelligence in machines.
And I'm not necessarily disputing that assumption, but I do
think it's interesting that it's always just kind of assumed
as a given. I think it's funny because we're all
familiar with how computer services can sometimes mess up. Right

where, you know, any service that you might use where
you start looking into it and you're like, well, this
is clearly either unintelligent or the work of a crazy
person. A great example of that is Chef Watson, where
you start looking at the recipes and you're thinking, all
right, well, we've got a long way to go, because
these recipes either do not sound appetizing, do not reflect
what the title of the recipe says, or make no
sense, or some combination of those. But then again, I kind
of wonder what would it mean for a computer chef
to actually be super intelligent in the realm of designing recipes,
I imagine it would have to mean that it was
just able to make the most nutritious, delicious food covered

in queso. I mean, queso has to be on top
of it for it to be super intelligent. Yeah, really,
it is cheating. Any time I make something and I
realize that I have made a terrible dish, I just
add melted cheese, and that covers all sins. But you
know, this may also be a fuzzy area,

right because we can already see that computers are better
or at least faster at processing certain types of information
than human beings are. If they weren't, we wouldn't use
computers, right? They would be unnecessary. They would actually slow
things down. But they are better than we are for
certain types of complex processing. And we've built computers that
are better than us at a lot of tasks, like
chess. We've got computers and computer programs that can beat
the best human chess masters on the planet. And Go.
We've solved Go now; the computers are better at playing
Go than humans are.

But what I want to know is when computers are
going to be able to beat humans at Magic: The
Gathering. Yeah, when they start tapping mana like crazy. Yeah.
I'm still pretty darn good at Clue. I think I
can go up against the best computers in a game
of Clue, and not because I cheat. But you gotta
keep in mind that these computers that we're talking about,
they're very specific machines running very specific software for a very
specific purpose. Right. It's like, the computer that can beat
you in chess may not be better than you
are at some other tasks. And there are certain cognitive
tasks that computers just can't even compete in right now.
Yeah, you know, especially anything that involves creativity. That is
one of those areas where we're still way ahead. We've
talked about that in a previous episode, and we're going
to get into that more in a bit. But yeah,
these tasks all have very highly specified outcomes. Yes,
it's very clear what the parameters of success are, and
thus it's easy for the computer to follow them because

all it has to do is run step-by-step algorithmic execution.
And it can do it thousands of times faster than
we can. So yes, it beats us.
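That enumerate-every-move, keep-the-best loop is, at bottom, minimax search. A minimal sketch, assuming a toy Nim game (take 1 to 3 sticks, taking the last stick wins) instead of chess; the game interface and names here are invented for illustration:

```python
def minimax(game, state, maximizing=True):
    """Brute force: enumerate every legal move, score each resulting
    position by recursing to the end of the game, and keep the most
    advantageous one."""
    if game.is_over(state):
        # The player who just moved took the last stick and won,
        # so the side now to move has lost.
        return (-1 if maximizing else 1), None
    best_value, best_move = None, None
    for move in game.legal_moves(state):
        value, _ = minimax(game, game.apply(state, move), not maximizing)
        if (best_value is None
                or (maximizing and value > best_value)
                or (not maximizing and value < best_value)):
            best_value, best_move = value, move
    return best_value, best_move


class Nim:
    """Toy game: players alternate taking 1-3 sticks; whoever takes
    the last stick wins."""
    def legal_moves(self, sticks):
        return range(1, min(3, sticks) + 1)

    def apply(self, sticks, take):
        return sticks - take

    def is_over(self, sticks):
        return sticks == 0
```

From a pile of 10 sticks, the exhaustive search finds the forced win (take 2, leaving a multiple of 4), with no innovation involved: it simply looks at every possible continuation and picks the most advantageous one.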
It's essentially brute-forcing the task, right? It looks at the
task and it says, all right, well, what are all
the different possible moves for this particular situation? Which one
is the most advantageous? Now, we've seen those programs become
more sophisticated over time, where it becomes less of a
brute force and more of a probabilistic approach. But it's
still going about it this way. It's not innovating. It's
not creating a new defense or a new attack like
you would see chess masters do. However, there are computers that

are capable of actually learning, right? There is such a
thing as machine learning. That's essentially a process of trial
and error. You give a machine a task, it attempts
to do this task, you evaluate how the machine performed
and indicate in some way in the machine's programming whether
it succeeded or failed, and then it attempts to do
it again, perhaps improving upon its performance. And through this
process of trial and error, which takes quite some time
even in the machine world, it quote-unquote learns. It learns
how to best approach whatever task you've given it. It
doesn't mean that

the machine actually understands what it's doing. Right. For the
example that we've given before, the idea of teaching a
computer what a cat is, like how to recognize a
cat in a picture. It recognizes cats to some degree
of accuracy in photos, but it doesn't know what a
cat is. It doesn't know how a cat is different
from a dog. It is only able to recognize the
image of a cat inside a picture, and it could
never understand why cat memes are funny, right? It would
just see that there's an awful lot of cat content
on the Internet, which might lead a stupid computer to
believe that cats are the dominant species on this planet,
which I think most domestic cats would agree is the
truth. But I don't own a cat, so I dispute that. Now.
A super intelligent general AI is typically the type we
see in science fiction stories that pit man against machine.
Skynet, yeah, that's the big one. Or HAL in 2001.
But again, we can't

be sure that we'll ever be able to create such
a computer or the software. Really, when we say a
computer or a machine, we're also we also mean the
software that would be required to make this happen. And
in fact it maybe that software becomes the big impediment
and not not the technological processing part. We don't know,

but either way, just keep in mind when we talk
about computers or machines, we're lumping software into that as well. Right,
So I guess we should transition to talking about where
we are right now in the development of artificial intelligence.
And I would say, maybe you can dispute this, but
the most basic picture of the lay of the land

today is that we're making lots of progress with individual
examples of narrow AI and apparently nowhere near anything like
strong or general AI. I would agree with that. I
would say that we've got some great examples of very
compelling, very accomplished narrow AI, stuff that is impressive, but
then you just recognize on the face of it that
it cannot do anything outside of its intended purpose. Like
facial recognition. Again, facial recognition software has gotten really good.
I mean, just using something like Facebook and going on Facebook.
The ability for Facebook Facebook's AI, you know, it's facial
recognition software to identify a person with a pretty good

accuracy is kind of surprising us. Even if you're you've
got part of the face obscured, it's pretty good. It's
it's not perfect. I don't know. It keeps identifying my
left elbow as Gary Abuse. Well, to be fair, I
thought I thought for a while that you had actually
grafted Gary Bucy's head onto your left elbow. Uh so,

I mean that's really more of a personal problem. I guess. No,
I guess I shouldn't have gotten that Gary Bucy tattoo. Yeah.
But again, we wouldn't expect that facial recognition software to be able to diagnose a medical issue, or be able to give us the projection for weather three months out. Like, none of that would make any sense. It doesn't do those things. It is a very specific application of AI for a specific purpose, and when you think of it that way, a lot of our narrow AI is pretty good. Here's some examples I would give. I'd say driverless cars are a good example of narrow AI. It's maybe a little more broad than your facial recognition is, because it's got to do a lot of different stuff. It's got to navigate changing, dynamic environments. It has to be able to sense things that are going on around it. It has to be able to take in that information and process it in a way that then results in actions that it takes, whether that's speeding up or braking or swerving out of the way, whatever that might be.
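To make that sense-process-act loop concrete, here's a minimal Python sketch. Everything in it, the sensor readings, the distance thresholds, and the action names, is invented purely for illustration; a real autonomous vehicle fuses lidar, radar, and camera data through far more sophisticated models.

```python
# Hypothetical sketch of a driverless car's sense-process-act loop.
# Sensor values, thresholds, and action names are all made up for
# illustration; real systems fuse lidar, radar, and camera data.

def decide(distance_ahead_m, closing_speed_mps):
    """Map processed sensor readings to a driving action."""
    if distance_ahead_m < 10 and closing_speed_mps > 0:
        return "brake"
    if distance_ahead_m < 25 and closing_speed_mps > 5:
        return "swerve"
    return "speed_up"

def control_loop(sensor_frames):
    """Sense -> process -> act, once per incoming sensor frame."""
    return [decide(dist, speed) for dist, speed in sensor_frames]

print(control_loop([(50, 0), (20, 8), (5, 3)]))
# → ['speed_up', 'swerve', 'brake']
```

The point of the sketch is just the shape of the loop: each incoming frame of processed sensor data maps to one action, over and over.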

There are a lot of different things going on here. Now, we have had some fairly tragic, really tragic, examples of failures of this technology recently that show that it has its limitations. It cannot process every scenario with perfection. In general, these driverless cars have performed better than human drivers. They're able to have full 360-degree awareness, beyond what a human can do, really. But it is no longer true that they've had no at-fault accidents. That was true for a long time. Yeah. Now, in the case of the Google one, it's funny, because it's kind of a backhanded thing where Google's like, yeah, our car assumed that the bus driver would let the car in, and it turns out that assumption was wrong, which is kind of a passive aggressive way of saying it's our fault. They need to adjust their algorithm to include a lower view of human nature. Yes, yeah, assume that people are jerkfaces. Uh, then there's the Tesla Autopilot, which really is more of a driver assist system. It's not meant to be a driverless car system. Uh.

And we've seen several accidents with people in those cars. But, you know, Tesla, the company itself, has said this is not meant to be an autonomous car, it's meant to be a driver assist feature. Uh, so you could say, well, in that case, some of those accidents are at least partly the fault of the operators, because they were not operating the vehicle the way the company had communicated you were supposed to, which is, like, you don't take your hands off the wheel, and you maintain awareness, and you don't just let the car take over, because that wasn't the purpose of the technology. Beyond that, we've got things like virtual assistants. I think we're all familiar with these, things like Siri or Alexa or Google's Assistant. These are all voice activated interfaces. They involve some natural language processing. But then cheeky jokes. Cheeky jokes, ultimately, but they're pre-programmed cheeky jokes. They're not coming up with new jokes, right. Someone had to record all those things.
And I mean, even if you were to record every single word in whatever language you were programming in, you'd still have to figure out a way of placing them in the right order for the syntax to make sense. But ultimately, these are interfaces, right. It's not any different than, say, a graphical user interface, or GUI, or even a text based interface. It's just a level of interaction that seems to suggest the presence of intelligence, but doesn't actually mean it's intelligent. It still involves artificial intelligence, because it involves that natural language processing and probabilistic determination of, you know, what it was that you were asking for and what's the most likely answer to your query, so that you get what you want. If that didn't work, then it would be pointless using these things, right? You would ask Siri a question and the response would be completely nonsensical or irrelevant, and you would never use it again. So there are some elements of AI there, but it is very narrow. Uh.
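As a rough illustration of that kind of probabilistic matching, here's a toy intent matcher in Python. The intents and the simple word-overlap scoring are invented for the example; real assistants use trained statistical language models, not keyword lists.

```python
# Toy intent matcher: score a spoken query against hand-written
# intents and pick the most likely one. The intents and the simple
# word-overlap score are invented for illustration; real assistants
# use trained statistical language models.

INTENTS = {
    "weather": {"weather", "rain", "forecast", "temperature"},
    "timer":   {"timer", "set", "minutes", "countdown"},
    "music":   {"play", "song", "music", "album"},
}

def best_intent(query):
    """Return (intent, score) for the highest-scoring intent."""
    words = set(query.lower().split())
    # Probability-like score: fraction of an intent's keywords present.
    scores = {name: len(words & kws) / len(kws) for name, kws in INTENTS.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

print(best_intent("what is the weather forecast"))  # → ('weather', 0.5)
```

Even this crude version shows why the narrow AI label fits: it picks the most likely of a fixed set of intents, and anything outside that set is simply invisible to it.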
There's the facial recognition stuff that we've talked about already. There's the discipline of machine learning, which is, you know, not specific to any one application. It's a discipline, not a specific example, but it does show a concept of narrow AI, which is, again, that trial and error approach. Uh, you can also have computer programs that are able to infer information based upon some sort of input. Stanford researchers showed this off when they had a computer program that observed the movements of a pendulum and then was able to infer the laws of motion based upon that, and it was able to do it in the course of, I think, maybe a day or so. And when you think about how long it took humans to put all that together, it's pretty impressive, right, that a computer was able to put these pieces together within a matter of a few hours compared to centuries. Pretty extraordinary, but, again, limited in that respect. It can't look at just anything. It's not like you could show it reality television and it could explain that to you. Some things are just impossible to understand. Um.
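A toy version of that law-inference idea can be sketched in a few lines: generate pendulum periods from the known formula, then search candidate power laws for the one that best fits the observations. This is only a stand-in for what the researchers actually built, which searched a vastly richer space of equations.

```python
import math

# Toy law inference: generate pendulum periods T for several lengths L
# using the known formula T = 2*pi*sqrt(L/g), then grid-search candidate
# power laws T = c * L**a for the best fit. A stand-in for the real
# research system, which searched a far richer space of equations.

g = 9.81
data = [(L, 2 * math.pi * math.sqrt(L / g)) for L in (0.5, 1.0, 2.0, 4.0)]

def fit_error(a, c):
    """Sum of squared errors of the candidate law against the data."""
    return sum((c * L ** a - T) ** 2 for L, T in data)

best = min(
    ((a / 100, c / 100) for a in range(1, 101) for c in range(1, 401)),
    key=lambda params: fit_error(*params),
)
print(best)  # → (0.5, 2.01), i.e. it recovers T ≈ 2.01 * sqrt(L)
```

The search lands on the square-root law, which is the actual physics (T is proportional to the square root of the length), without ever being told it. The narrowness is in what it can't do: it only considers power laws, and only for this one data set.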
And then there are, of course, robotic platforms designed to observe humans as they go through a series of steps and then just repeat those series of steps. We've seen these in robotics. Like, pick up this block, move it over here, pick up this bottle, pour stuff on the block, move the block over here, and then, once the robot has observed that, it can repeat it. That's a type of machine learning too. Now, the further we go from single purpose devices, especially with robotics, the more we start to see the limitations of AI. Right.
The DARPA Robotics Challenge is a great example of this. Yes, and we've talked about all of the DARPA robotics fails before, and we should emphasize, as always, that in talking about how funny it is to see these robots just fall over and be conquered by a door or some sand or something, we're not saying that the people who created them didn't achieve an amazing technological accomplishment. I mean, they did. These things are cutting edge. They're very impressive. It's just a testament to how hard it is to make a robot that's able to do twenty six different physical tasks. Right. Yeah, so again, it demonstrates that artificial intelligence and robotics design are both hard problems. They're not easily conquered, and we often forget that when we see really compelling demonstrations of either artificial intelligence or robotics. It seems natural that the next step forward would be right around the corner. That's not necessarily the case. And then, uh, we've got to get into how we're treating the whole idea of artificial
general intelligence in the first place. Like, how would we get to that point? Right? Well, we're just sort of assuming up until now that we can just keep working on the types of artificial intelligence projects we've been doing and eventually maybe we'll get to some AGI. And that could be a very flawed assumption. I mean, it could be that there is a problem with our basic approach to AGI and we need to go back to ground level and start over. For example, the Oxford physicist David Deutsch has written about this in a piece from October 2012 in Aeon magazine that I thought was interesting. He was talking about our failure to build AGI, and, uh, so he says in principle AGI must be possible, given the universality of computation. Quote: this entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general purpose computer, provided it is given enough time and memory. And so
he talks about how the first people to really grapple with this were Charles Babbage and Ada Lovelace, who did their work with the Difference Engine, which was sort of the first computer in many ways, an enormous, big mechanical computer, right, designed to flawlessly replicate the computation power of human, quote, computers, these people who, you know, just did computations to fill out mathematical tables. So you'd have a big table that had all these cosine values or logarithm values in it, and they'd make mistakes. And so, you know, Babbage wanted a machine that could put these numbers out without making any mistakes along the way. And of course there was a later spiritual successor to the Difference Engine, which was the general purpose outworking of its principle, the Analytical Engine, which could use an early form of computer memory to reprogram itself for any computational task, even to the point where Lovelace herself had envisioned a future in which music and art could be reduced into, I shouldn't say reduced, but converted into mathematical expressions that a computer would be able to process and recreate, which was incredibly prescient. Yeah. So this is
more than a hundred years ago, people having the idea of general computer intelligence. And it's true, what they envisioned is in fact correct, because Deutsch says he himself formally proved the principle of universality of computation a few decades ago using the quantum theory of computation. So this is now an established principle in computation theory.
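That universality idea can be illustrated with a toy example: one general-purpose interpreter that can emulate any machine handed to it as a state-transition table. The little two-state machine below, a binary incrementer, is invented just for the illustration; the point is that the same interpreter could run any other table.

```python
# Toy illustration of universality: one general-purpose interpreter
# that emulates any machine described as a state-transition table of
# the form {(state, symbol): (write, move, next_state)}. The two-state
# binary incrementer below is invented for the example; the point is
# that the same interpreter can run any other table you hand it.

def run(table, tape, state="carry"):
    """Emulate the described machine on the tape, starting at the right end."""
    tape, pos = list(tape), len(tape) - 1
    while state != "halt" and 0 <= pos < len(tape):
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape)

# Add one to a binary number: flip trailing 1s to 0, first 0 to 1.
INCREMENT = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
}

print(run(INCREMENT, "1011"))  # → '1100' (11 + 1 = 12 in binary)
```

The interpreter itself knows nothing about binary arithmetic; all the machine-specific behavior lives in the table, which is the essence of the universality claim.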
But so, AGI must in principle be possible, and yet we have made abysmally little progress toward it. We've got all these narrow AI functions, but as far as AGI goes, there's just nothing. Uh. And so, concerning the study of artificial intelligence, Deutsch writes, I
cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. So that's an intense critique. And what does he have to say? Well, we'll get to this in a second. But he ties it
back into some more progress in the history of computing theory.
So he brings up Alan Turing. He says Turing came along much later, after Babbage and Lovelace, and articulated the idea of the universality of computation, that any of the distinctive attributes of the human brain could be reproduced by a properly designed computer. And then Deutsch says that since Turing, that sort of sparked a debate, and since then the intellectual world has been mostly split between two camps. There's one camp who says that AGI is impossible, and the other camp says it's imminent, you know, any day now. Yeah. Well, and Deutsch says both of these are wrong. And the
first camp is usually motivated either by supernaturalism, you know, they say there's something about the mind that is magical, right. Well, those are the ones who argue the mind over the brain, right, the idea that the mind somehow is more than the manifestation of activity within our bodies. Right. Or, if it's not necessarily a supernatural objection, it's some kind of philosophical objection that Deutsch thinks is incoherent. None of these ideas hold water for him, right. Like, for some inarticulate reason, machines would be incapable of possessing this faculty. Huh. And yet, Deutsch says, the second camp, meanwhile, has been led severely astray because they've failed to understand the primary feature distinguishing human
minds from other physical objects, like computers. And that's creativity, right, that idea of innovation, and being able to come up with an idea that is not necessarily a product of just experimentation or trial and error. I would argue also that that second camp has largely been led astray by equating the idea of progress towards AGI with other types of progress, like Moore's law, where you see Moore's law's progress and it seems to have this very steady rate. Yeah, you'll see this sometimes in, like, a Kurzweilian type of thought, where people say, look at the rate of progress through Moore's law. We can extrapolate this to a general progress of computational power, and thus machine intelligence, and from this rule we can predict that the singularity will come in twenty-whatever. Yeah,
but the problem there is that, one, it does not necessarily take into account the progress of software sophistication, which does not necessarily keep up with Moore's law. Nor does it take into account the fact that artificial intelligence depends on more than just processing power, and that our understanding of intelligence in general doesn't follow the same pattern as Moore's law. So there are a lot of other factors that could put the brakes on that journey to AGI. Not saying that it makes it impossible, but rather that the timeline may be longer than what some of these enthusiasts project. Yes. And so, to come back to Deutsch: at the end of his argument, he seems to suggest, at least the way I read him, that we need a philosophical revolution in epistemology, and that's the study of how we know things, you know, how you know. A revolution in epistemology before we can create AGI, because we don't even have a correct philosophical model of how humans actually generate creativity. And I'd agree with him there. I think we don't have a coherent understanding of what exactly creativity is. Yeah, and I think
understanding of what exactly creativity is. Yeah, and I think
that's something that we probably I'm not sure I fully
agree with him that we have to understand that before
we could create a g I. But I do think

he makes a strong point. We're trying to make a
machine that can be as good as a human at
at creative leaps, and we don't even understand how humans
make creative leaps, right. We we understand the products of
that creativity, but we don't understand the mechanism of creativity itself,
which is and he he illustrates this with common examples.

You know, you can get a computer to do amazing things as long as you have specified parameters. You're telling it, here are the rules of Go, here are the types of moves you can make, here are the conditions for winning, now figure out how to win. Okay, well, that's clear. Now instead try to say, can you figure out what dark matter is? Go, computer. You can't write a program for that, because to program such a machine, you would have to know what the parameters of success were. And if you knew what the parameters of success were, then you would have the answer. You wouldn't need to ask the computer in the first place. Yeah. Yeah,
this is where you get into that issue of, well, the way to get around that, ultimately, is, assuming you get to that first step where you can create a machine that has at least some element of AGI, have it design its own successor, and then you get into the Deep Thought and Earth model. You know, that might be a solution. But so, anyway, to finish up with Deutsch, what he says in the end is, really, we're making no significant progress at all on AGI, not because people aren't working hard on the problem, but because we lack a fundamental, ground level piece of information that's necessary to designing this capability in machines: an understanding of the nature of the thing we call intelligence. And until we make that philosophical breakthrough about the nature of creativity and epistemology, we're just fumbling around in the dark with AI research. Yeah,
so it would be like landing on a planet and someone tells you, hey, it's a planet that's like Earth, to the point where there's a Starbucks. Okay, okay, there's only one Starbucks. And you land on that planet, and you have no map, you have no coordinates or anything. You're just told you need to go to Starbucks. Well, you don't know what direction to head in. You don't know where it is in relation to where you are, you don't know where to look. The same sort of thing. Like, you might find it, you might stumble upon it, but odds are you would just spend a whole lot of time just randomly walking around, not making any real progress. That's kind of the same thing. Like, we're fumbling about because we don't have a big picture, we don't have a map, we don't know how to get the pathway to the destination we want to reach. And this idea is actually kind of
exciting to me, because it makes me think about, oh wow, what if somebody figures this out in my lifetime? What if somebody discovers, here, really, this is the best way to characterize what creativity is and how it happens in the brain? Wouldn't that be a fascinating thing to understand? Well, and it would be neat, because you could then take that information and think back on individuals that we identify as being incredibly creative, right. Like, then you could sit there and think, wow, so whoever was the author of Shakespeare's plays... It was William Shakespeare. You could then start to characterize it in a more kind of analytical way, which would be fascinating to me, because you could say, like, these are the qualities that Shakespeare must have possessed in order... No, Oxfordians, come on. You're kidding, insulting my intelligence. That's a button that you can push very easily. But then we have some other ideas about artificial general intelligence, and some of the problems that have come along with misrepresenting, either intentionally or otherwise, what artificial intelligence is. Yeah,

another thing I want to mention is a criticism I've read before by the, uh, what would you call him, just a general very smart, interesting guy, Jaron Lanier. He's a technologist, virtual reality pioneer. He writes about technology and culture. Super smart dude. Yeah, and I like him a lot. I really like reading what he has to say, even when I don't agree with him. He's one of those people who I like to read even when he disagrees with me. Yeah, yeah, I'm on the same page.
But yeah, anyway. So he says that sometimes the way we invoke the very concept of artificial intelligence can have negative impacts on research progress and on human culture. And so, the example he gives, and I'm gonna just try to give a summary rather than getting into the details. In this New York Times article of his that I read from a while back, called The First Church of Robotics, he says that framing advancements in computer software and robotics as, quote, artificial intelligence often oversells a sort of fake theatrical advancement and undersells the real function of the advancement. And the example he gives is Watson on Jeopardy. And the way I'd interpret him, I think what he's saying is that, you know, the Jeopardy competition, Watson going on there and beating the human contestants, is sort of theatrical and misleading. If you're trying to imply that Watson has the sort of general conversational intelligence of a human who can win at Jeopardy, you're misleading people.
It doesn't have that. But meanwhile, what IBM did do is put together a really interesting, powerful, formidable phrase-based search engine, which is an achievement that is useful and deserves recognition in its own right. And you kind of undercut that when you try to frame it as something that it isn't. I agree entirely with that, by the way. I think that's an excellent point, and I think we see it all the time in lots of different realms of artificial intelligence. And I also think it's totally fitting for a guy who was a pioneer in virtual reality to bring that up, because virtual reality had the exact same thing happen. Right. Virtual reality, when it was blossoming in the late eighties and early nineties, just getting started, fizzled out largely because there was a misrepresentation of what virtual reality was. People had an idea of what it actually was capable of doing, and the reality was so far away from that that public interest and funding went away, and it took two decades to get back to where we should have been by the end of the nineties. Misleading marketing killed public interest in something that was genuinely interesting. Yeah. And
so another point he makes, and this is kind of beside the point, but I do think it's interesting, is he points out that in trying to construe the latest robot or computer program as in some way significantly resembling human intelligence, we not only overestimate what these machines can do, we also tend to start underestimating and devaluing human intelligence and personhood. And I think there's something to that too. Yeah,
And so here's my take on both Deutsch's and Lanier's points. So, first, I think, and I hesitate to say that Deutsch is overlooking anything, because I'm sure Deutsch is far more intelligent than I am and has thought about this in much greater detail than I have, and I've not read all of Deutsch's work. But based upon what we looked at, I think there is the possibility that you're overlooking, not you, Joe, but Deutsch is overlooking, the potential for us to achieve the task of creating an AGI without fully understanding the mechanisms that actually create it. Right. This is something that does happen, where occasionally someone makes something that works, but we don't fully understand why it works until later. Right. So it's one of the amazing things
about human ingenuity. Sometimes we create a thing, it does something unexpected, or it does something better than what we had anticipated, and it's not because some sort of magical light shone down on that innovation. It's that we had a limited understanding of what we were actually doing, and we did it, and later on, as we get a grander understanding of it, a deeper, more thorough understanding of it, then we say, oh, well, that's why it works that way. We didn't know it at the time. So I think it's entirely possible, not necessarily plausible, but possible, that we would create a general artificial intelligence without
knowing the secret sauce that makes a general intelligence possible. It's one of those things that could just arise as a product of complexity, once we have sufficiently sophisticated narrow AI and we've banded enough of them together, that it seems like it's emergent. And it's not necessarily truly emergent; it may just seem that way to us because we don't fully understand the mechanisms that made it a general intelligence. So I don't necessarily agree that in order to get to that point, we have to have a deeper understanding of what makes human creativity possible. It may turn out that way. It may just, by happenstance, turn out that we get a better understanding of human creativity before we ever get AGI. But I don't know that it's necessary. I don't know if there's, like, a direct path, and human creativity is a stop that we have to hit before we hit AGI. Um. As for Lanier, I do agree with your interpretation, and I do think that
when you have these misrepresentations, whether they were intentional or not, and often I think they are unintentional. Often I think it's an interpretation that, say, a media outlet takes, or, you know, some sort of PR person has created a communication that is not entirely accurate, again, not through any attempt to mislead the public, but just through a misunderstanding of what's going on. It is harmful in the long run, both because we overestimate where we are in AI and also we don't appreciate where we actually are. Right. And again, if you do it enough, the same thing that happened with VR could happen with AI, in that you get people less interested in the discipline. It means you have fewer computer scientists going into the field, you have less funding going into projects that could push the field forward, and ultimately we are all left behind as a result of it. So, to all you journalists out there, do your best to make sure you're representing AI stories accurately, but also say what's genuinely cool about them.
Oh yeah, no, you know, don't dismiss it, but at the same time, don't bill it as something that it is not. I think people can sometimes, in service of trying to be skeptical about things, also sort of overcorrect into just kind of pessimism and hating everything. Yes. You know, it's very easy to go to the extremes, where, again, you want to be realistic, but you don't want to disregard stuff. You don't want to say, well, it's not general artificial intelligence, therefore it's not important. Yeah, that's not true. So what is it going to take for us to get to AGI? Joe, tell me, what's the step? How do we do it? If we knew, we'd do it.

We don't know. Nobody knows. Well, there goes my plans for tomorrow. But there's a few hypotheses, right. There are a few different avenues that we could, in theory, take. One we can discuss, I think, is the incremental approach, right, sort of the additive, emergent AGI approach, and that would be narrow AI plus narrow AI plus narrow AI, and you just keep adding them up somehow, and eventually you can add together enough of these narrow AIs that you have generated, by addition, an artificial general intelligence. Is this a good hypothesis? And of course, we don't know the answer to that. It may very well be that artificial general intelligence becomes the product of, or rather the summation of, a whole bunch of different types of narrow AI.
You could think of it in this way: if you look at a human being and you say, all right, well, what are the faculties that contribute to intelligence as we understand them? And you start with the senses, all the stuff that allows us to take in information, and you say, all right, well, we've got to make sure that we can simulate all of that in our machine, so that it has some equivalent to the senses of sight and smell and touch and taste and hearing and all this kind of stuff. But then also maybe we add on some memory, and then maybe also some object recognition, and then also some basic limbic responses, and then you just keep on adding things until you get to a level of complexity that is enough for you to say, well, if this isn't general AI, it's close enough to not matter. Right? If it's sufficiently sophisticated or complicated to the point where it appears to be

general AI, then it kind of is general AI, even if that wasn't a start-from-square-one-and-build-up-to-general-AI approach. And maybe it will be that that's the path we have to take, because, as Deutsch points out, we lack this key piece of understanding of what makes human intelligence so special, at least as far as we can tell. I mean, we're the only ones who seem to possess it. Um, if we cannot figure that out, and that's a necessary step to go from scratch to build out AGI, it may very well be that it's just through the process of building out a machine that has all of these different narrow AI elements
that are working together in some way. And I mean, that alone is a monumental task, right? Not just to develop each type of narrow AI, but then to figure out how you build a framework where these can all work together, and there can be some sort of processing unit that acts like the conductor, that understands what all of this data means and assigns it importance and is able to create responses to it. That's a non-trivial problem all by itself. But it may be that that's the approach to AGI. Or maybe it just happens accidentally. You know,
we've talked about this before, the idea that if you were to create a sufficiently complicated neural network, perhaps intelligence would just be an emergent faculty, that it would manifest simply through the complexity. Somehow, I find that intuitively unlikely. But then again, what's our intuition worth? You know, I mean, almost nothing. Yeah, I mean, you could argue, well, the brain is a really complicated organ, and maybe if we were to create an artificial neural network that had a sufficient level of complexity, it wouldn't necessarily have to match the human brain, but it would have to be at least closer than what we can accomplish right now, that intelligence would naturally arise from it.

There have been people who have argued that perhaps the Internet itself could become an artificial general intelligence, because if you think of all the different nodes on the Internet as being elements of a neural network, it's getting more and more complicated every single day. But if it is, it's a brain that has lots and lots of mental issues, because those machines aren't always on all the time, the communication lines can get fouled by various means, and, as far as we have been able to tell, the Internet has not become an intelligent entity of its own. And if it has, it is obsessed with cats to a level that is clearly unhealthy. I mean, it's just a problem. But it's got a huge, massive subconscious, right? You know, only a tiny bit of the Internet is what we see. Well, not only that, but it has, like, you know, the classic depiction of the angel and the devil on the shoulders that guide a person's decisions. It has several billion of those, and many of them are terrible, terrible, terrible entities that comment only on YouTube. Okay, so I've got another scenario.
What would you call it if you just have, like, say, some project at Google, and they just say, yeah, we just built it. Here it is, we built your AGI. Yeah, I call this one the Grand Slam: the idea that you start off with the intent of building artificial general intelligence, and this would mean, instead of going that narrow AI plus narrow AI approach, you identify a strategy to create a general intelligence from scratch, from a starting point. But again, I would argue that this kind of falls into that category that Deutsch was pointing at, saying that without that full understanding of human intelligence, it would be really, really hard, I think, to go this route, because it presupposes that we understand enough about intelligence to be able to create an artificial version of it. Um, I don't know that that is something that we could possibly do without that deeper understanding that Deutsch described. We might be able to create machines that outwardly appear to possess it, and you could argue that we've already done that with some of the examples we've already given, but it doesn't really have that capability. So, one last question before we
wrap up part one, how do we know once we've
made it? It seems like it would be obvious, but
would it be necessarily I don't know, because again, if
if you have a sufficiently complex device that is capable
of giving what seems to be until eligent answers to
various questions, you get back to that Chinese room problem,

right like, is it actually an intelligent machine? Or is
it just complicated enough that it appears to be intelligent
to us and doesn't matter at that point. Well, I mean,
if we go with the Krakauer definition of intelligence, one way we could check to see if it's truly an artificial general intelligence is to check whether its solutions to various general problems are working. That's an excellent point. You know, has it reduced the number of steps or the complications in solving problems in multiple areas? If we accept that as the definition, then I think that would make it pretty easy to figure out, because it kind of becomes like that Simpsons episode where Homer goes to work for Scorpio and he walks in: Are you guys working? Yeah? Could you work harder? Yes? The same sort of thing, right? Like, can you solve this Rubik's Cube, going back to that example. And then it solves the Rubik's Cube, and then you say, all right, we're gonna reset it to the state it was in when you started. Can you solve it in fewer moves than you did the previous time? And can you continue to do that? Can you improve on your performance until you've reached an ideal version of that?
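That reset-and-resolve check can be sketched as a tiny test harness. To be clear, this is a toy illustration, not anything from the episode: `improvement_test` and the stand-in solver are hypothetical names, and the "scramble" here is just an opaque token rather than a real cube state. The point is only the shape of the test: hand the system the same starting state over and over and watch whether its move count keeps shrinking toward some floor.

```python
def improvement_test(solver, scramble, attempts=5):
    """Hand the solver the same scrambled state repeatedly and record
    how many moves each solution took. A system that is improving at
    the task should show non-increasing move counts across attempts."""
    counts = []
    for _ in range(attempts):
        moves = solver(list(scramble))  # fresh copy: reset to the starting state
        counts.append(len(moves))
    improving = all(later <= earlier for earlier, later in zip(counts, counts[1:]))
    return counts, improving

def make_toy_solver():
    """A stand-in 'solver', purely illustrative: it starts with a
    wasteful 12-move solution and trims one redundant move per retry,
    bottoming out at a pretend optimum of 8 moves."""
    best = list(range(12))
    def solver(state):
        nonlocal best
        if len(best) > 8:
            best = best[:-1]
        return list(best)
    return solver

counts, improving = improvement_test(make_toy_solver(), scramble="toy-scramble")
print(counts, improving)
```

Swapping in a real solver and a real cube representation would turn this from a toy into an actual benchmark; the harness itself doesn't care what the moves are, only how many there were.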
I guess, though, the real test for AGI would be not just that, because maybe you could program into it some kind of, you know, parameters for success in solving a Rubik's Cube. Yeah, there are known algorithms for that; that's how human Rubik's Cube competitors do it. Yeah. I mean, we can design a computer program right now to solve Rubik's Cubes, and that's not all that impressive, because it's a specified outcome. We know what all the parameters are. The really impressive thing would be to do the thing Deutsch said and say, hey, can you figure out what the heck dark matter is for us? And it comes up with a solution, and we say, wow, that's really interesting, and we do some empirical tests, and we figure out, son of a gun, it was right. Yeah, yeah. If you were to argue that that's not AGI, I think at that point most people would say you're arguing semantics at best, because the result is exactly what we would expect from artificial general intelligence. Well,

that kind of wraps up this laying of the groundwork, which is pretty extensive. We laid down a big old foundation for our next episode. Well, next time we're gonna build on what we've talked about today and specifically talk about the idea of an AI arms race, taking this idea of AGI one step further into the geopolitical realm. Yeah, and that's where things really get messy. So you're gonna want to tune into that and pay attention to what we have to say. We're gonna be talking about some interesting characters out there who have a lot of opinions on the matter. So tune into that, and remember, you can send us any questions or comments you might have. Our email address is FWThinking at HowStuffWorks dot com, or

drop us a line on Facebook or Twitter. If you search FW Thinking on Facebook, our profile will pop right up. Our Twitter handle is FWThinking. We look forward to hearing from you, and we'll talk to you again really soon. For more on this topic and the future of technology, visit ForwardThinking dot com. Brought to you by Toyota. Let's go places.

Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
