
January 10, 2020 42 mins

What is the technological singularity? How might the technological singularity come about? What do critics say about the idea of the singularity? Listen in as Jonathan and Lauren explore the technological singularity.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
iHeartRadio and I love all things tech. It's
time for another TechStuff classic episode, where Lauren
Vogelbaum and I sat down back in February of twenty thirteen.

(00:28):
This episode published February eleventh, and we talked about the singularity.
The episode was titled TechStuff Enters the Singularity. So
sit back and enjoy this discussion about a somewhat controversial
hypothesis of the future of humanity. Today we wanted to

(00:48):
talk about the future. The future. Yeah, really, we're talking
about kind of a science fiction future. We're talking
about the singularity. And longtime listeners to TechStuff,
and I'm talking about folks who listened way back before
episodes ever topped out at thirty minutes, let alone an hour,
may remember that we did an episode about How Ray

(01:10):
Kurzweil Works. And Ray Kurzweil is a futurist, and one
of the things he talks about extensively, particularly if you
corner him at a cocktail party, is the singularity. And
so we wanted to talk about what the singularity is,
what this idea is, you know; we really wanted to kind
of dig down into it. Why is this a
big deal, and how realistic is this vision of the future? Yeah,

(01:33):
because some people would
argue with your characterization of it as science fiction.
They take it extremely seriously. Oh yeah, they say it's
science fact. Science fact, science inevitability. Yeah. The
term was actually coined by a mathematician, John von
Neumann, in the nineteen fifties, but it was popularized
by a science fiction writer. Yeah. There

(01:58):
are a lot of different concepts that are tied up together,
and it all depends upon whom you ask what
is meant by the singularity. For instance, there are some people
who, when you hear the term the singularity, what they
say is, okay, that's a time when we get to
the point where technological advances are coming so quickly that
it's impossible to have a meaningful conversation about what the

(02:19):
state of technology is, because it changes from second to second. So that's
one version. But most of the versions that we're
familiar with, that the futurists talk about, incorporate an idea
of superhuman intelligence, or the intelligence explosion, right, a kind
of combination of human and technological development that just

(02:41):
dovetails into this gorgeous, you know, Space Baby from 2001.
Kind of. That's an excellent way of putting it.
The documentary 2001. Uh, I remember specifically when
the Space Baby looked at Earth. Okay, that documentary example
doesn't work at all. It usually does, but not this time,
not this time. Space babies are a poor example

(03:02):
in this one instance. But metaphorically speaking, yes,
you're right on track. Because the intelligence explosion, that was
a term introduced by someone known as Irving John Good,
or, if you want to go with his birth name,
Isadore Jacob Gudak. I can see why he
changed it. Yeah. He actually worked for a

(03:23):
while at Bletchley Park with another fellow who made
sort of a name for himself in computer science, a
fellow named Alan Turing. Oh, I guess I've heard of him. Yeah,
Turing will come up in the discussion a little bit later,
but for right now, about Irving John Good, just
a little quick anecdote that I thought was amusing.
So Good was working with Turing to try and help

(03:47):
break German codes. I mean, that's what Bletchley Park was
all about, right. So Good apparently one day drew the
ire of Turing when he decided to take a little
cat nap, because he was tired.
It was Good's philosophy that being tired
meant that he was not going to work

(04:08):
at his best, and he might as well go ahead
and take a nap, get refreshed, and then
tackle the problem again, and be more likely to solve it.
Whereas Turing was very much a workhorse; you know, he
was the no-rest type. So
Turing, when he discovered that Good had been napping, decided
that Good was not so good,

(04:30):
and Turing sort of treated him with disdain.
He began to essentially not speak to Good. Good, meanwhile,
began to think about the letters that were being used
in Enigma codes to encode German messages, and he
began to wonder: what if these letters are not completely random?

(04:51):
What if the Germans are relying on some letters more
frequently than others? And he began to look at the frequency
of these letters being used. He made up a table
and mathematically analyzed the frequency with which certain letters were used,
and discovered that there was a bias, there was a pattern. Yeah.
So he said, well, with this bias, that means that
we can start to narrow down the possibilities of these codes.
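Good's frequency-counting idea translates directly into a few lines of modern code. This is only an illustrative sketch, not his actual wartime method (which used far more sophisticated statistical weighting); the sample "intercept" and the cutoff value here are made up for demonstration:

```python
from collections import Counter
import string

def letter_bias(ciphertext: str) -> dict[str, float]:
    """For each letter, how far its observed frequency deviates
    from the uniform 1/26 you'd expect of truly random output."""
    letters = [c for c in ciphertext.upper() if c in string.ascii_uppercase]
    total = len(letters)
    counts = Counter(letters)
    expected = 1 / 26
    return {c: counts.get(c, 0) / total - expected
            for c in string.ascii_uppercase}

# A toy "intercept" where E and N are over-represented,
# as they are in German plaintext.
sample = "ENEENXNEQENWEENZNEEN"
bias = letter_bias(sample)
biased = [c for c, dev in bias.items() if dev > 0.05]
# biased == ['E', 'N'] — a pattern that narrows the search space
```

Once you know certain letters dominate, candidate keys that don't produce that skew can be discarded, which is exactly the "narrowing down" described above.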

(05:14):
And in fact, he was able to demonstrate that this
was a way to help break German codes. And Turing,
when he saw Good's work, said he could have sworn he had tried that,
but clearly Good showed that it worked. Well, and
then Good, at another point, apparently went to sleep one
day, and they'd been working on a code that they
just could not break. And while he was sleeping, he

(05:36):
dreamed that perhaps, when the Germans were encoding this particular message,
they used the letters in reverse of the way they
were actually printed. And so he tried that when he
woke up, and it turned out he was right. And
so then his argument was, Turing, I need to go
to bed. So yeah. The moral of the
story here is that naps are good and no one

(05:57):
should talk to you, right? Yeah, that's how I live
my life. But yeah, so, Good's point. Anyway, he
came up with this term, the intelligence explosion, and
it was this sort of idea that we're going
to reach a point where we are increasing either our
own intelligence or some sort of artificial intelligence so far

(06:20):
beyond what we are currently capable of understanding that life
as we know it will change completely. And because it's
going to go beyond what we know right now, there's
no way to predict what our life will be like,
because it is, by definition,
out of our comprehension. Yes. As the Scots would say,

(06:41):
it's beyond our ken. Are we going to be doing
accents this episode? That was a terrible one. I actually
regret doing it right now. I already knew I couldn't
do Scottish, and yet there I went. Anyway, we're getting
off track again. So, to kind of backtrack a bit before
we really get into the whole singularity discussion, that was

(07:05):
just a brief overview. A good foundation to start from
is the concept of Moore's law. You know, originally Gordon Moore, who,
by the way, was a co-founder of a
little company called Intel, he originally observed, back
in nineteen sixty five, in a paper, and I'm going
to wing this, but it was called something

(07:27):
like Cramming More Components onto Integrated Circuits, something like that.
Cramming was definitely one of
the words used, and circuits probably was too. Anyway,
he noticed that, over the course of, I think originally
it was twelve months, but today we consider it two years,
eighteen to twenty-four months, I think, is the

(07:47):
official unofficial figure, right, right. Yeah, that the number
of discrete components on a square inch of silicon wafer
would double, due to improvements in manufacturing and efficiency. So,
in effect, what this means to the layman is
that our electronics, and particularly our computers, get twice as

(08:09):
powerful every two years. So if you bought a computer
one year, and then bought another computer two years later, in theory,
the newer computer would be twice as powerful
as the older one. This is exponential growth. That's an
important component, this idea of exponential growth. And it goes

(08:29):
without saying that, if you continue on this path, if this
continues indefinitely, then, you know, you quickly get
to computers of almost unimaginable power just a decade out. Certainly.
Although, I mean, I still don't really understand what a
gigabyte means, because when I first started using computers,
we were not counting in that. I mean,

(08:49):
I was still impressed by kilobytes at the time. So yeah. Now,
I remember the first time I got a hard drive,
I think it was like a two-fifty-megabyte hard drive,
and I thought, like, who needs that much space? And granted,
that's space we're talking about, not even processing speed. Absolutely.
So yeah, it's one of those things where
the older you are, the more incredible today is, right,

(09:12):
because you start looking at computers and you think, I
remember when these things came out, and they were essentially
the equivalent of a really good desktop calculator. Right. So,
Moore's law states that this advance will continue indefinitely
until we hit some sort of fundamental obstacle that we
just cannot engineer our way around. Right, you know, and

(09:32):
people, that's why it's kind of in contention
right now, because people are saying, well, there's
only so much you can physically fit
with silicon. There's a physical limitation to
the material, in which case there's only so much you can
do about it. And so does Moore's law still
apply if we're talking about other materials? And, you know, right?
And how small can you get before you start

(09:52):
to run into quantum effects that are impossible to work around?
And then, do you change the geometry of a chip?
Do you go three-dimensional instead of two-dimensional? Wouldn't
that help? And, you know, a lot of
engineers are working on this. And frankly, pretty much every
couple of years, someone says, all right, this is the
year Moore's law ends; it's over, it's gone,

(10:13):
it's done with. Five years later, it's still going strong.
And then in the sixth year, someone else says, Moore's
law is gonna end. It's a little bit of a
self-fulfilling prophecy. I think that a lot of companies
attempt to keep it going. To keep it going, sure, yeah.
No one wants to be the one to say, guys,
guess what, we can't keep up with Moore's law anymore.
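The doubling arithmetic the hosts describe is easy to make concrete. A minimal sketch, assuming a clean two-year doubling period (real cadences have varied between twelve and twenty-four months, and the starting count here is arbitrary):

```python
def moores_law_projection(transistors_now: int, years: float,
                          doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming one doubling
    every `doubling_period` years (the classic Moore's law cadence)."""
    return int(transistors_now * 2 ** (years / doubling_period))

# Ten years at a two-year doubling period is five doublings: a 32x increase.
projected = moores_law_projection(1_000_000, years=10)
# projected == 32_000_000
```

The exponent is what makes "just a decade out" so dramatic: five doublings is 32x, but twenty years is ten doublings, or roughly a thousandfold.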

(10:34):
No one wants to do that. So it is a
good motivator. Also, if I can call myself out real quick,
I'm pretty sure that I just pronounced silicon as
silicone, and I would like to
state for the record that I know that those are
two different substances. Okay, that's fair. Anyway, I
was going to ask you about it, but by the
time you were finished talking, I thought, let's just
go. Yeah, that's cool. It's all right. If you knew

(10:54):
how many times I have used that particular pronunciation, to
hilarious results. So, moving on with this whole
idea of Moore's law. I mean, the reason this plays
into the singularity is, with the technological advances, you start
to be able to achieve pretty incredible things. And
even within one generation of Moore's law, which is kind of

(11:18):
a meaningless term, but let's say you arbitrarily
pick a date, and then two years from that date
you look and see what's possible with the new technology.
Getting to twice as much power, however you want
to define it, doesn't necessarily mean that you've only doubled
the amount of things you can do with that power.
You may have limitless things you can do. So with

(11:41):
that idea, you're talking about being able to power through
problems way faster than you did before. And there are lots
of different ways of doing that. For example, grid computing.
Grid computing is when you are linking computers together to
work on a problem all at once. Now, this works
really well with certain problems, parallel problems we call them.

(12:03):
These are problems where there are lots of potential solutions,
and each computer essentially is working on one set of
potential solutions. And that way, you have all these different
computers working on it at the same time. It reduces
the overall time it takes to solve that parallel problem.
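The divide-and-conquer pattern behind grid computing can be sketched with Python's standard concurrency tools. Here the "grid" is just four local threads and the "hard problem" is a stand-in predicate, but the shape is the same: carve the candidate space into slices and let each worker test only its own slice:

```python
from concurrent.futures import ThreadPoolExecutor

def search_slice(candidates: list[int]) -> list[int]:
    """One 'node' of the grid: test every candidate in its slice."""
    # Stand-in for an expensive test (a protein fold, a signal match, ...).
    return [c for c in candidates if c * c == 144]

space = list(range(1_000))                 # the full candidate space
slices = [space[i::4] for i in range(4)]   # deal it out to four workers

with ThreadPoolExecutor(max_workers=4) as pool:
    hits = [hit for part in pool.map(search_slice, slices) for hit in part]
# hits == [12]
```

Because every candidate is independent, adding workers (or machines) divides the wall-clock time almost linearly, which is why "embarrassingly parallel" problems are such a good fit for volunteer grids.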
And so, like, if you've ever heard of anything
like Folding@home or the SETI@home project, where you

(12:24):
could dedicate your computer's idle time, the
idle processes, the processing that is not being used while
you're surfing the web, or writing How the Singularity Works,
or, I don't know, building an architectural model in
some sort of CAD application. Anything that you're

(12:45):
not using can be dedicated to one of these projects.
It's the same sort of idea: you don't necessarily have
to build a supercomputer to solve complex problems if you
use a whole bunch of computers, a whole bunch of small ones.
The Large Hadron Collider does this. Although they use very
nice, advanced computers, they do a lot of grid
computing as well. So, just using those kinds

(13:08):
of models, we see that we're able to do much
more sophisticated things than we could otherwise. Certainly. Yes, networks,
as it turns out, are pretty cool. Yeah, and networks
play a part in this idea of the singularity. Actually,
I guess now is a good time. We'll kind
of transition into Vernor Vinge, and, honestly, I don't
know how to say his last name. I say Venge,

(13:29):
and it could end up being Vin-gee. But I just
went with what you said. So that's great, that's fine.
We'll say that Vinge says everything is silicone.
So, Vernor, though, he, Vern, I call him Vern, he
suggested four different potential pathways that humans could take,

(13:50):
or really that the world could take, to arrive at
the technological singularity. Okay, what are they? The four ways
are: we could develop a superhuman artificial intelligence, so computers
suddenly are able to think on a level that's analogous
to the way humans think and can do it better

(14:11):
than we can, right. Whether or not that means computers are conscious,
that's debatable. We'll get into that too. Computer networks could
somehow become self-aware. That's number two. So, yes, Skynet.
So, like the grid computing we were just talking
about, somehow, using these grid computers, the network itself,

(14:32):
having enough cycles and enough pathways and enough loops back around,
it starts going, like, hey, I recognize this. It starts thinking.
Think about IBM's Watson,
but distributed across a network. So you can
think of computers as all being super powerful neurons in
a brain, and the network is actually the neural pathways.

(14:54):
And it's definitely a science-fiction-y way of looking
at things. Doesn't mean it won't happen. But stranger things,
my friends. It feels like a Matrix kind of thing
to me. Then we have the idea that computer-human
interfaces are so advanced and so intrinsically tied to who
we are that humans themselves evolve beyond being human; we

(15:17):
become transhuman. So this is an idea that we
almost merge with computers, at least on some level, via
kind of nanobot technology, you know, stuff running through
our bloodstream, stuff in our, yep. Or we have just
brain interfaces, where our consciousness is connected to it.
So, for example, we might have it where, instead

(15:40):
of connecting to the Internet via some device like a
smartphone or a computer, yeah, it's right there in our
meat brains, so that, you know, you're sitting there
having a conversation with someone, and then you're like, oh, wait,
what movie was that guy in? Let me just look
up IMDb in my brain. And then, you
know, depending on how good your connection is, which means,

(16:03):
by the way, if you are a journalist and you
attend CES, you will automatically be dumber, because
all the internet connectivity will be taken up,
and so you'll be sitting there trying to ask good
questions, and drool will come out of your mouth, which
to me is a typical CES. I can
only assume that wireless technology would advance
also at this point. But one can only hope, fingers crossed.

(16:24):
There are certain technologies that are not advancing at the
exponential rate of Moore's law, which is another problem. We'll
talk about that. And then the fourth and final
way that Vinge had suggested the world may go would
be that humans would advance so far in the biological sciences
that that would allow us to engineer human intelligence, so

(16:45):
that we could make ourselves as smart as we wanted
to be. This is sort of that Gattaca future, where,
another great documentary, where
we engineer ourselves to be super smart. So those are
the four pathways: artificial intelligence, computer networks that become self-aware,
computer-human interfaces becoming really, really awesome, or we

(17:07):
have biologically engineered human intelligence. And all four
of these lead to a similar outcome, which is this
intelligence explosion. And this is the idea that some form
of superhuman intelligence is created, either artificially or within ourselves,
and that at that point we will no longer be
able to predict what our world will be like,

(17:31):
because, by definition, we will have a superhuman intelligent entity involved.
And because that's superhuman, it's beyond our ability to predict,
which makes thought experiments
about it a little bit, uh, philosophical. That's the kind
way of putting it. Pointless would be another way

(17:53):
of putting it. Like, we could, you know,
sit there and spitball a whole bunch of
possible futures. But that's the thing: they're possible. We don't
know which one could come about. We don't even know
if these four pathways are inevitable. We have futurists who
truly believe that this is something that will happen at
some point. There are other people who are more skeptical,

(18:16):
but we'll talk about them in a bit. So one
of the outcomes that Vinge was talking about, and
it's a fairly popular one in futurist circles, is
the idea of the robo-apocalypse. Essentially, this is where
you've got the humans-are-bad, destroy-all-humans idea. Essentially,

(18:37):
the idea is that humans would become extinct, either by
definition, because we've evolved into something else, or because whatever
the superhuman intelligence is, it decides we are a problem. Yeah,
and a lot of futurists are a lot
more positive about that. They're more looking forward to
it than being scared of it. It's less of, oh no,
big scary robots are coming to take over society, and
more of, the robots are coming to take over

(19:00):
society, like a free day. Yeah, exactly. Yeah, I don't have
to work anymore. And I don't, because robots are
supplying all the things we need. There's no need
for anyone to work anymore. There's no need for money anymore,
because the only reason you need money is so you
can buy stuff, but if everything is free, then you
don't need it. So it becomes Star Trek, and we

(19:20):
all run around in jumpsuits and punch people. And
if you're Kirk, you make out a lot, I mean
a lot. That dude, every week. Picard and Riker, if you
add them together, make one Kirk. And yes, in this
documentary series Star Trek. I don't know about Archer, because

(19:41):
I never watched Enterprise. So you guys have to get
back to me on that. Yeah, sorry about that.
It's also a gap in my personal understanding. I
took one look at that decontamination chamber and said, yep, I'm
out. Anyway. So that's Vernor Vinge. He's
sort of popularized this idea, but there are other
people whose names, I think, are synonymous with it.

(20:02):
Hey guys, it's robo-Jonathan here to interrupt, because
even in the future, we need to take a break
to thank our sponsors. And now, back to the show.
So, Vernor Vinge, again, very much associated with the idea

(20:23):
of the singularity. But there's another name that comes up
all the time: Ray Kurzweil. Ray Kurzweil. And this is
a fellow who has been referred to in various circles
as the Thomas Edison of modern technology,
or perhaps more colorfully, the Willy Wonka of technology.
That was by Jeff Duncan of Digital Trends, and I
just wanted to shout that out, because that was great. But

(20:46):
you get nothing. I shared a remix of Willy Wonka
earlier today and it's still playing through my head. We're fans,
we might be fans, of the Gene Wilder Willy Wonka.
Everyone, homework assignment: go watch that. It has
nothing to do with the singularity at all. I
don't know, there's some chocolate singularity in there. Chocolate Singularity,
I want an episode on that band. If I were better

(21:09):
at cover band names, I totally would have said something
witty right there. Yeah, all right, well, fair enough. We'll
say it's the Archies. Sugar, Sugar. Oh dear, oh my goodness. Okay.
So, Ray Kurzweil is the kind
of cat who, you know, when he was in high school,
invented a computer program. And this was in the
mid nineteen sixties; this isn't like last year or something.

(21:30):
In the mid nineteen sixties, he created a computer program that
listened to classical music, found patterns in it, and then
created new classical music based on that. So it's a
computer that composed classical music following the rules of classical
music that other composers had created. Yes. That's kind of cool.
That's just something he did, you know. And yeah,
that dude's got credentials. Yeah. He also kind of

(21:55):
invented flatbed scanners, and has done a whole bunch of stuff
in speech recognition, which, that's interesting because, well,
we'll talk about that in a second. But
one of Kurzweil's big points is that he thinks
that, and this all depends upon which interview you

(22:15):
read of Kurzweil's, but in various interviews he's
said that, essentially, by twenty thirty we will reach a
point where we will be able to make an artificial brain. Well,
we'll have reverse engineered the brain, and we'll be
able to create an artificial one. And there's a
lot of debate, in smarter circles than

(22:37):
the ones I move in, and that's not a
slap against my friends, they're pretty bright, but
none of us are neurologically gifted on that point,
and I include myself in that circle. But
there are some very bright people who debate about this
point: whether or not we'll be able, by the year
twenty thirty, to reverse engineer the brain and design an

(22:59):
artificial one. And I think the debate is not so
much on whether or not we'll have the technological power
necessary to simulate a brain. We can simulate
brains on a certain superficial level today. I mean, hypothetically,
we could connect enough computers that we could make it go.
I think, yeah, we could probably get the computer horsepower,

(23:21):
especially by twenty thirty, to simulate a human brain. The question
is whether we will understand the human brain. Exactly. So
that's sort of where the debate lies. It's not
so much on the technological side of things as it
is the biological side of things, which is kind of interesting.
I've read a lot of critics who have really

(23:45):
jumped on Kurzweil for this. Particularly, PZ Myers has written
some pretty, yeah, strongly worded criticisms of
Kurzweil's theories, saying that Kurzweil simply does not understand
neurological development and activity, and that, you know, the interplay

(24:05):
between the environment and the way our brains
develop over time, you know, nurture versus nature,
all of this stuff with hormonal changes and electrochemical reactions, saying
that there are so many little bits that make up
our brains, so many hormones, so many processes, and we
understand such a small fraction of what they do. This
is why a lot of psychiatric drugs, for example, are
kind of like, oh, well, we invented this thing, and

(24:28):
we guess it does this thing, right? Take it and
see what happens. It tends
to make you happy; it also makes you perceive the
color red as having the smell of oranges. Like, you
know, we don't understand it fully.
And in fact, there are other people, like Steven Novella,
who is the author of the NeuroLogica Blog,

(24:51):
and he also is a host on a wonderful podcast
called The Skeptics' Guide to the Universe. If you haven't
listened to that, you should try it out,
especially if you like skepticism and critical thinking. But he's
a doctor and a proponent of evidence-based medicine,
and he talks about how, you know, we don't

(25:13):
know how much we don't know about the brain. Like,
we have no way of knowing where the endpoint
is as far as the brain is concerned, and therefore
we cannot guess at how long it will take us
to reverse engineer it, simply because we don't know where the
finish line is. Right, right. Yeah. Kurzweil

(25:33):
has a new book, new as of when we're recording this
in January; it just came out in November, called How to
Create a Mind: The Secret of Human Thought Revealed. And
in the book, he theorizes that, okay, if
you'll follow me for a second: atoms are tiny bits
of data, DNA is a form of a program, the
nervous system is a computer that coordinates bodily functions, and

(25:58):
thought is kind of simultaneously a program and the data
that that program contains. Gotcha. Now, this is another problem
that some scientists have: reducing the human brain to
the model of a computer. Right, because, you know,
it's a very elegant, interesting proposition,

(26:19):
and it's kind of sexy like that, because you
go, like, oh, well, that sort of makes sense,
man. Like, let's go get a pizza and talk about
this more. Let me go get a program that
will allow me to suddenly know all of kung fu. Right.
And when you're a programmer, that's a great plan. Yeah,
I mean, that sounds terrific. But yeah, there's
one specific guy, Jaron Lanier, who wrote a terrific

(26:44):
thing called One Half of a Manifesto, which is a
really entertaining read if you like this kind of thing,
where he was saying that what futurists are talking
about when they talk about the singularity is basically a religion.
He was calling it cybernetic totalism, you know, like
a fanatical ideology. He compares it to Marxism at
some point. But yeah. And he was saying that,

(27:08):
you know, this theory is a terrific theory
if you want to get into the philosophy of who
we are and what we do and what technology is,
but that, you know, cybernetic patterns aren't necessarily the best
way to understand reality, and that they're not necessarily the
best model for how people work, for how culture works,

(27:29):
for how intelligence works, and that saying so is a
gross oversimplification. That's a good point. And we should
also point out that it all depends on how you
define intelligence as well, because Kurzweil himself has worded his
own predictions in such a way that, some would
argue, Novella argues, for example, that he has given himself

(27:51):
enough room where he's going to be right no matter what.
Like, saying that we will be able to reverse engineer
basic brain functions. And Novella says, well, technically you could
argue that now, so that kind of gives you a
lot of room. Yeah. But whether or not it
means total brain function, that's a totally different question.

(28:12):
And so the other point is that we could theoretically
create an artificial intelligence that does not necessarily reverse engineer
the brain, that doesn't follow the human intelligence model. I mean,
that's IBM's Watson, again, a good example of artificial
intelligence. You know, in some ways it mimics the
brain, because it kind of has to. You know, we're

(28:32):
coming at this as human beings; human beings are the ones creating this technology,
and so, as human beings creating this technology, it's going
to follow the rules as we understand them. So
there's going to be some of that pedigree there. But IBM's
Watson, you know, you think about it, it doesn't
really understand, necessarily, the data that's passing through it. It's

(28:54):
looking for the connections, making connections
and recognizing patterns and spitting out useful information. Yeah, it's
looking for whatever answer is most likely the right one.
It's all probability-based, right? And if an answer doesn't
reach a certain threshold, it doesn't provide the answer. So,

(29:14):
arbitrarily speaking, I don't know what the threshold is,
so I'm just making up a number, let's say it has
to be a certain percentage or higher for it to give that answer.
If the certainty falls below that threshold,
no answer is given. That's essentially how it worked when
Watson was on Jeopardy. It would analyze

(29:35):
the answer, in Jeopardy terms, and then come
up with what it thought was probably the most accurate
question for that answer. And occasionally it was wrong, to
hilarious results. But it did sort of seem to kind
of mimic the way humans think, at least on a

(29:57):
superficial level. And, oh, I mean, the thing about humans
is that they're wrong a lot more than,
what, that fifteen percent of the time. Yeah,
you know, we give
answers even if we're not sure of the question. I certainly do,
as we all know from going to trivia nights. Yeah,
and there's a lot, I've read a
lot online about arguments of how it's our deficiencies,

(30:21):
our memory biases, our irrational behavior, our weird hormonal stuff
going on, that makes us human, and that you
can't teach a computer to be irrational. That's true. Although
you can teach it to swear. You can. We just,
we read a story last week, yeah, where IBM allowed
Watson to read the Urban Dictionary, and then Watson got

(30:44):
a little bit of a potty mouth. It got kind of fresh. It did, it did.
It started to say, oh, see, what was it?
Oh, I'm going to say something and it's going to
be bleeped out, right, Tyler? Tyler just said so. Anyway,
so there's one point where a researcher asked a question
of Watson, and Watson included within the answer the word,

(31:10):
well, since it was bleeped out, you probably don't know
what it was, so go look it up. It was funny.
It was really funny. Yes. And then they
basically nuked that part of Watson from orbit. They were like,
you know what, never mind. It was the only way
to be sure. They wiped the Urban Dictionary from
Watson's memory. They also said that a very similar thing
happened when they let Watson read Wikipedia. No judgments here,

(31:31):
just saying what IBM said. Anyway, again, the computer
was unable to determine when it was appropriate, and
what the appropriate context was, for dropping a swear word. It
didn't know, so it just started to speak kind of
like my wife does. So yeah, I'm going
to pay for that later. So anyway,

(31:52):
that's an interesting point though again you're you're showing how
machine intelligence and human intelligence are different because the machine
intelligence doesn't have that content. Sure, and of course you
know we're talking about about science fiction or science future,
however you want to term it, so that you know,
we might very well come up with a fancy little
program script that lets you that lets you introduce that

(32:14):
kind of bias. But you know, but again from that
documentary Star Trek, I mean Data never figured out these contractions.
Hi guys, it's Jonathan. I have merged entirely with technology
at this point, but I still got to pay the bills.
So let's take a quick break. Turing actually had a great,

(32:40):
uh mental exercise really, and it's called the Turing test,
and this this applies to artificial intelligence. Turing's point, and
we've talked about the Turing test in previous episodes of
Tech Stuff, But just as a refresher, Turing had suggested
that you could create a test and that if a
machine could pass that test at the same level as

(33:01):
a human. In other words, if you were unable to
determine that the person who took that test was human
or machine, the machine had passed the Turing test and
had had essentially simulated human intelligence. And uh. It usually
works as an interview, So you have someone who's who's
conducting an interview, and you have either a machine answering

(33:22):
or a human answering, and there's a barrier up so
that of course the person asking the questions cannot see
who is responding, and of course they're responding through you know,
text usually, because if they're responding through a voice,
you'd be like, well,
either it's a robot or the most boring person in
the world. Uh. The idea being you would ask these

(33:42):
questions over a computer monitor, get text responses, and if
you were unable to answer with a certain degree
of accuracy whether or not it was a machine or
a person, then you would say the machine passed the
test and uh. And and you could argue, well, that
could just mean that the machine is very good at

(34:02):
mimicking human intelligence, it does not actually possess human intelligence.
Turing's point is, does that matter? Because I know that
I am intelligent, I speak with someone like Lauren, who
I assume is also intelligent based upon the responses she gives.
But she could just be simulating intelligence. However, I have
already bestowed in my mind the feature

(34:27):
of intelligence upon Lauren, because what she does is very
much akin to what I do. So Turing said, if
you extend the courtesy to your fellow human being that
they are intelligent based on the fact that they act
like you do, why would you not do the same
thing for a machine. Does it matter if the machine
can actually think. If the machine simulates thought well enough

(34:49):
for it to pass as human, then you're giving it
the same benefit of the doubt as you would anyone else.
This is what a lot of science fiction movies are about. Actually,
there's a lot of philosophy, a lot of
Isaac Asimov, a lot of Blade Runner,
and and and that's not an author. Sorry, well no,
but you know Philip K. Dick. Look him up. So anyway,
Do Androids Dream of Electric Sheep. I won't.

(35:11):
I won't spoil it for you. Uh. So, to kind
of wrap this all up, getting back into the discussion
of philosophy. Uh, we had very recently we did a
podcast about are we living in a computer simulation? And
that kind of plays into this idea of the singularity
because that argument stated that if the singularity is in

(35:34):
fact possible, if it's inevitable, if we are going to
reach this level of transhumanism where we are no longer
able to really predict what the future will be like
because it will be beyond our understanding, then one thing
we would expect to do is create simulations of our
past to kind of study ourselves and to see what
happens, play around with variables. Yeah. We like good experiments. Yeah,

(35:57):
and we could. We could, if we're that advanced, could
in theory, create such a realistic simulation that the inhabitants
of that simulation would be incapable of knowing that they
were artificial and would be completely you know, self aware
of themselves. You know that was totally redundant, self aware
and uh, but unable to know that they were a simulation. Uh.

(36:21):
He said that if those things are possible, then there's
no way of knowing that, you know, the overwhelming
uh possibility is that we are in a computer
simulation right now, because yeah, if that's if that's
what's gonna happen, then there's no way of saying with
certainty that we are not in fact the product of that.
And so uh, the point being not necessarily that we

(36:44):
are in fact living in a computer simulation, but that
perhaps this singularity, this transhumanism thing might not be realistic.
It might not be the future that we're headed to.
Maybe it ends up being a pipe dream that's not
really possible for us to attain. Or maybe we'll wipe
ourselves out through some terrible war or catastrophic accident. Um,

(37:09):
maybe we create a biological entity that wipes us out.
a la The Stand. Or we create a black hole at the LHC,
which come on, people, don't write me. I already know
about that and how tiny and and and almost non
existent they are because they evaporate so quickly. Let's say
that they do that thing where you look at that

(37:30):
one website where the black hole forms in the parking
lot outside the LHC and you just see the whole picture
go. Um, funny video. Anyway, that that argument plays
back into this. So I don't know, I don't know
if we're going to ever see a future where the
singularity becomes the thing. Oh and we never really talked

(37:51):
about it. But one of the big points that Kurzweil
really punches in his Singularity talks is the idea
of digital immortality, right right, And he's been obsessed with
this. And and obsessed is probably a judgmental word.
I apologize. But he's been very focused on this concept.
His father died when he was about twenty four, and

(38:12):
he's been exploring theories on life extension ever since then,
and supposedly takes all kinds of supplements and sells them
as well to extend life, has all kinds
of health plans, dietary plans, exercise plans,
the idea being that if he can
preserve his own life long enough that we hit

(38:32):
the singularity, then he can become immortal. Right and either that,
you know, we attain immortality through one of a thousand
different ways. For example, we end up uploading our own
intelligence into the cloud, and then we've become part of
a group consciousness, so we are no longer really individuals,
or we merge with computers in some other way so

(38:53):
that we are technically immortal that way. Or we just
conquer the genes that guide the aging process,
and we stop it, and we stop disease, you know,
we we take, like in Transmetropolitan, you just take a
cancer pill and then you don't get cancer, because that's
what you do. Yeah, so I think again the singularity.
That's kind of why I think a lot of critics
also point to it as being more of a religion,

(39:14):
because it's kind of this sort of utopian pipe dream
in their minds. There's the former CEO of Lotus, Mitch
Kapor, I'm not sure how you say it, once
called it the intelligent design for the IQ
140 people. Yeah ouch ouch, Well, I mean, well,
Kurzweil's kind of laughing all the way to the bank.
I hear that company that rhymes with schmoogle hired him

(39:36):
little little people, I mean, you probably wouldn't have
heard of them, but yeah, they just hired him on
to be, I have it in my notes, the
official title, I think it's the Director of Engineering, a
director of engineering over there. Um, they get they get
some big names. I mean, they had Vint Cerf as
their chief evangelist, and of course he was one of
the fathers of the Internet. So Google's Google's got a knack

(40:00):
for for getting some really smart people. And and
to be fair, while the singularity may or may not
ever happen, I think it's important that we have optimists
in the field of technology who are really pushing for
our development to try and make the world a better
place for people. Now, you know, absolutely so, even if

(40:22):
we're even if we never reach the point of digital
immortality in our lifetimes or any other it's I mean,
if if someone wants to think so big that that
they want to put in nanobots to make my body awesomer,
I mean and and not that, that came out,
that came out possibly crude. I mostly mean that I
don't get cancer and die, um kind of stuff. That's
that's terrific. I can I can't argue with any part

(40:44):
of that. Yeah, I'm gonna be on video so much
this year that I definitely need my body to be awesomer,
So I'm all for that. Well either way, yes, and
and and Google. You know, Google looks forward so much
to augmented reality. Augmented reality. I'm sorry, I'm I
can't pronounce anything today. I am on a non-roll

(41:04):
in the Internet of Things and all of that wonderful
future tech that it seems like a terrific fit. Yeah. Yeah,
so we'll see how how it goes. I mean, obviously,
the nice thing about this is that all we have
to do is live long enough to see it happen
or not happen. And most predictions have the Singularity hitting
somewhere between twenty... Yeah, it all depends upon which

(41:27):
futurist you're asking. I hope you guys enjoyed that classic
episode about the singularity. I have touched on that topic
multiple times, both on tech Stuff and on other shows
like Forward Thinking. So if you're really interested in it
and you want to hear even more, feel free to
do searches both on Forward Thinking and on tech Stuff.

(41:47):
You can search on tech Stuff going to the website
tech stuff podcast dot com. We have an archive there
of every episode that's ever published. Forward Thinking, you might
have to do a little more googling, but it is there,
including podcasts and videos, so go check that out. If
you would like to get in touch with me and
suggest future topics for tech Stuff, you can do so

(42:08):
with the email address tech stuff at how stuff works
dot com. You can also drop a line on Facebook
or Twitter. The handle for both of those is tech Stuff.
Hs W. Also add that tech stuff podcast dot com
link I mentioned earlier. There's a link to our online store,
so if you've ever wanted any sort of tech stuff
merchandise you can find it there. Go check it out.
Every purchase you make goes to help the show. We

(42:30):
greatly appreciate it, and I'll talk to you again really soon.
Tech Stuff is a production of iHeartRadio's How
Stuff Works. For more podcasts from iHeartRadio, visit
the iHeartRadio app, Apple Podcasts, or wherever you
listen to your favorite shows.
