
January 2, 2024 44 mins

What is the technological singularity? How might the technological singularity come about? What do critics say about the idea of the singularity? Listen in as Jonathan and Lauren explore the technological singularity.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? So it's January second, twenty twenty four.

(00:24):
As you listen to this, it's actually back in the past, in twenty twenty three, when I record it, and we are on vacation. You know, I brought you a whole bunch of new episodes over the holidays, but I figured it's about time for Jonathan to maybe rest a little bit. So we're going to listen to a classic episode today. This episode is called TechStuff Enters the Singularity.

(00:48):
I mean, after all, we're in a new year, so why not talk about the future a little bit. This was recorded way back on February eleventh, twenty thirteen. Lauren Vogelbaum, now host of BrainStuff and Savor, along with other things, was my co-host at the time, and we sat down to talk about this vision of the future,

(01:10):
the singularity. What does it mean when we're talking about the technological singularity? So sit back, relax, and enjoy this classic episode from twenty thirteen called TechStuff Enters the Singularity. Get in touch with technology with TechStuff from HowStuffWorks dot com. Hey there, everyone, and

(01:38):
welcome to TechStuff. My name is Jonathan Strickland, Host Extraordinaire.

Speaker 2 (01:43):
And I'm Lauren Vogelbaum, Host Extraordinarier.

Speaker 1 (01:46):
That's very true, and today we wanted to talk about the future. The future. Yeah, really, we're talking about kind of a science fiction future. We're talking about the singularity. And long-time listeners to TechStuff, and I'm talking about folks who listened way back before we ever topped

(02:06):
out at thirty minutes, let alone an hour, may remember that we did an episode about how Ray Kurzweil works. And Ray Kurzweil is a futurist, and one of the things he talks about extensively, particularly if you corner him at a cocktail party, is the singularity. And so we wanted to talk about what the singularity is, what this idea is. You know, we really wanted to kind of dig down

(02:27):
into it: why is this a big deal, and how realistic is this vision of the future?

Speaker 2 (02:34):
Yeah, because some people would argue with your concept of it being science fiction. They take it extremely seriously.

Speaker 1 (02:41):
Oh yeah, they say it's science fact, science fact, it's
science inevitability.

Speaker 2 (02:46):
Yeah. The term was actually coined by a mathematician, John von Neumann, in the nineteen fifties, but it was popularized by a science fiction writer.

Speaker 1 (02:56):
Yeah. It's also, there are a lot of different concepts that are tied up together, and it all depends upon whom you ask what is meant by the singularity. For instance, there are some people who, when you hear the term the singularity, what they say is, okay, that's a time when we get to the point where technological advances are coming so quickly that it's impossible to have a

(03:18):
meaningful conversation about what the state of technology is, because it changes by the millisecond, right? So that's one version. But most of the versions that we're familiar with, that the futurists talk about, incorporate an idea of superhuman intelligence, or the intelligence explosion, right?

Speaker 2 (03:37):
A kind of combination of human and technological development that
just dovetails into this gorgeous you know, space baby from
two thousand and one kind of that's.

Speaker 1 (03:46):
An excellent way of putting it. The documentary two thousand
and one. I remember specifically when the space baby looked
at Earth. Okay, that documentary example doesn't work at all.
It usually does, but not this Yeah, not this time.

Speaker 2 (04:00):
Sorry, space babies are a poor example in this one instance.

Speaker 1 (04:04):
But metaphorically speaking, yes, you're right on track, because the intelligence explosion, that was a term introduced by someone known as Irving John Good, or, if you want to go with his birth name, Isadore Jacob Gudak. I can see why he changed it. Yeah. He actually worked for a while at Bletchley Park with another fellow who made

(04:29):
sort of a name for himself in computer science, a fellow named Alan Turing. Oh, oh, I guess I've heard of him. Yeah. Turing will come up in the discussion a little bit later, but for right now: so, Irving John Good, just a little quick anecdote that I thought was amusing. So Good was working with Turing to try and help break German codes. I mean, that's what

(04:50):
Bletchley Park was all about, right? Right. So Good apparently one day drew the ire of Turing when he decided to take a little cat nap, because he was tired, and it was Good's philosophy that being tired meant that he was not going to work at his best, and he might as

(05:10):
well go ahead and nap, exactly, take a nap, get refreshed, and then tackle the problem again, and you're more likely to solve it. Whereas Turing was very much a workhorse, you know, he was no rest, no rest, we have to.

Speaker 2 (05:23):
Do. So Turing,

Speaker 1 (05:25):
when he discovered that Good had been napping, decided that Good was not so good, and Turing sort of treated him with disdain. He began to essentially not speak to Good. Good, meanwhile, began to think about the letters that were being used in Enigma

(05:46):
codes to encode German messages, and he began to think, what if these letters are not completely random? What if the Germans are relying on some letters more frequently than others? And he began to look at the frequency of these letters being used. He made up a table and mathematically analyzed the frequency with which certain letters were used, and discovered that

(06:07):
there was a bias. There was a pattern. Yeah. So he said, well, with this bias, that means that we can start to narrow down the possibilities of these codes. And in fact, he was able to demonstrate that this was a way to help break German codes. And Turing, when he saw Good's work, said, I could have sworn I tried that, but clearly that showed that it worked well.

(06:28):
And then Good, at another point, apparently went to sleep one day. They'd been working on a code that they just could not break, and while he was sleeping, he dreamed that perhaps, when the Germans were encoding this particular message, they used the letters in reverse of the way they were actually printed. And so he tried that when he woke up, and it turned out he was right.

(06:49):
And so then his argument was, Turing, I need to go to bed.
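Good's frequency-counting insight is easy to sketch in code. A toy Python example, not his actual wartime method, and the intercepted messages here are made up:

```python
from collections import Counter

def letter_frequencies(intercepts):
    """Tally how often each letter appears across a batch of messages."""
    return Counter(ch for text in intercepts for ch in text.upper() if ch.isalpha())

# Hypothetical intercepts -- with truly random letter usage,
# every letter would show up about equally often.
messages = ["EEEXAMPLEE", "KEENE", "QXZ"]
freq = letter_frequencies(messages)
print(freq.most_common(3))  # a skew toward one letter is the kind of bias Good exploited
```

A lopsided count like this is what let Good narrow down the space of possible code settings instead of searching it blindly.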

Speaker 2 (06:53):
So yeah, yeah, the moral of the story here is that naps are good?

Speaker 1 (06:57):
Yes, and no one should talk to you, right? Yeah, yeah, that's how I live my life. But yeah, so, Good's point. Anyway, he came up with this term, the intelligence explosion, and it was this sort of idea that we're going to reach a point where we are increasing either our own intelligence or some sort of

(07:18):
artificial intelligence so far beyond what we are currently capable of understanding that life as we know it will change completely. And because it's going to go beyond what we know right now, there's no way to predict what our life will be like, right, because it's beyond our, because it

Speaker 2 (07:38):
Is Yeah, by definition out of our comprehension.

Speaker 1 (07:41):
Yes, as the Scots would say, it's beyond our ken.

Speaker 2 (07:46):
Are we going to be doing accents of this episode?

Speaker 1 (07:48):
So that was a terrible one. I actually regret doing it right now. I already knew I couldn't do Scottish, and yet there I went. Anyway, I'm getting sidetracked again. Yeah. So, to kind of backtrack a bit before we really get into the whole singularity discussion, that was just a brief overview. A good foundation to start from is

(08:12):
the concept of Moore's law. You know, originally Gordon Moore, who by the way was a co-founder of a little company called Intel, he originally observed back in nineteen sixty five, in a paper that, I'm going to wing this, but it was called something like Cramming More Components onto Integrated Circuits, something like that. That

(08:33):
was actually, cramming was definitely one of the words used, and circuits probably was too. Anyway, he noticed that over the course of, I think originally it was twelve months, but today we consider it two years,

Speaker 2 (08:47):
Eighteen to twenty four months, I think is the official, unofficial.

Speaker 1 (08:50):
Right, right, right. Yeah, that the number of discrete components on a square inch of silicon wafer would double, due to improvements in manufacturing and efficiency. So, in effect, what this means to the layman is that our electronics, and particularly our computers, get twice as powerful every two years.

(09:12):
So if you bought a computer in nineteen ninety eight
and then bought another computer in two thousand, in theory,
the computer in two thousand would be twice as powerful
as the one from nineteen ninety eight. This is exponential growth.
That's an important component, this idea of exponential growth, right,
And it goes without saying that if you continue on

(09:33):
this path, if this continues indefinitely, then you know, you quickly get to computers of almost unimaginable power just a decade out.
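The doubling arithmetic being described is simple enough to put in a few lines. A quick sketch, assuming a clean two-year doubling period:

```python
def relative_power(years, doubling_period=2):
    """Relative compute power after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period)

# One doubling period: a 2000 machine vs. a 1998 machine -> twice the power.
print(relative_power(2))   # 2.0
# A decade out: five doublings -> thirty-two times the power.
print(relative_power(10))  # 32.0
```

That 32x in ten years (and roughly 1000x in twenty) is the "exponential growth" point: small, steady doublings compound into numbers that quickly outrun intuition.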

Speaker 2 (09:42):
Certainly. Although, I mean, I still don't really understand what a gigabyte means, because when I first started using computers, we were not counting in that. I mean, I was still impressed by kilobytes at the time.

Speaker 1 (09:52):
So yeah. Now, I remember the first time I got a hard drive, I think it had like a two hundred and fifty megabyte hard drive, and I thought, who needs that much space? Granted, that's space we're talking about, not even processing power. Right, absolutely. So yeah, it's one of those things where the older you are, the more incredible today is, right? Because you

(10:13):
start looking at computers and you think, I remember when these things came out, and they were essentially the equivalent of a really good desktop calculator. Right. But Moore's law states that this advance will continue indefinitely until we hit some sort of fundamental obstacle that we just cannot engineer our way around.

Speaker 2 (10:32):
Oh right, you know, and that's why it's kind of in contention right now, because people are saying, well, there's only so much physical space that you can fit onto with silicone. There's a physical limitation to the material; there's only so much it can do. And so does Moore's law still apply if we're talking about other materials? And what do you

Speaker 1 (10:50):
Know, right, and and how small can you get before
you start to run into quantum effects that are impossible
to work around. Uh, and then do you change the
geometry of a chip? Do you go three dimensional instead
of two dimensional? Would that help? And yeah, there are
a lot of engineers are working on this, and frankly,
pretty much every couple of years, someone says, all right,

(11:10):
this is the year Moore's law is going to end. It's over, it's gone, it's done with. Five years later, it's still going strong. Yeah. And then in the sixth year, someone else says Moore's law is going to end.

Speaker 2 (11:21):
It's a little bit of a self fulfilling prophecy. I
think that a lot of companies attempt to.

Speaker 1 (11:26):
Keep it going, to keep it going, oh sure, yeah, yeah, yeah. I mean, no one wants to be the one to say, uh, guys, guess what, we can't keep up with Moore's law anymore. No one wants to do that, so it is a good motivator.

Speaker 2 (11:37):
Also, if I can footnote myself real quick, I'm pretty sure that I just pronounced silicon as silicone, and I would like to state for the record that I know that those are two different substances.

Speaker 1 (11:45):
Okay, that's fair. Anyway, I was going to ask you about it, but by the time you were finished talking, I thought, let's just go. Yeah, that's cool. It's all right. If you knew how many times I have used that particular pronunciation to hilarious results. Excellent. So,
moving on with this whole idea about Moore's law, I mean,
the reason this plays into the singularity is with the

(12:08):
technological advances, you start to be able to achieve pretty incredible things, and even within one generation of Moore's law, which is kind of a meaningless term. But let's say you arbitrarily pick a date, and then two years from that date you look and see what's possible with the new technology,

(12:28):
and getting to twice as much power however you want
to define it doesn't necessarily mean that you've only doubled
the amount of things you can do with that power.
You may have limitless things you can do. So with
that idea, you're talking about being able to power through
problems way faster than you did before. And there's lots

(12:49):
of different ways of doing that. For example, grid computing. Grid computing is when you are linking computers together to work on a problem all at once. Now, it works really well with certain problems, parallel problems we call them. These are problems where there are lots of potential solutions, and each computer essentially is working on one set of potential solutions.

(13:12):
And that way, you have all these different computers working on it at the same time. It reduces the overall time it takes to solve that parallel problem. And so, like, if you've ever heard of anything like Folding at Home or the SETI at Home project, where you could dedicate your computer's idle time. So the idle processes, the processes that are not being used while you're surfing the web or

(13:34):
writing How the Singularity Works, or, I don't know, building an architectural program in some sort of CAD application. Anything that you're not using can be dedicated to one of these projects. Same sort of idea: you don't necessarily have to build a supercomputer to solve complex problems if

(13:57):
you use a whole bunch of computers, a whole bunch of small ones. The Large Hadron Collider does this; although they use very nice, advanced computers, they do a lot of grid computing as well. So just using those kinds of models, we see that we're able to do much more sophisticated things than we could otherwise.
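The carve-up-the-search-space idea behind grid computing can be sketched in a few lines. A toy stand-in for projects like Folding at Home, with threads playing the role of the grid's computers and a made-up predicate as the "problem":

```python
from concurrent.futures import ThreadPoolExecutor

def search_chunk(bounds):
    """One 'computer' on the grid checks its own slice of candidate solutions."""
    lo, hi = bounds
    # Arbitrary stand-in problem: find n whose square is 1 more than a multiple of 97.
    return [n for n in range(lo, hi) if n * n % 97 == 1]

def grid_search(limit, workers=4):
    """Split the search space into independent chunks and farm them out."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(search_chunk, chunks)  # chunks run concurrently
    return sorted(n for partial in partials for n in partial)

print(grid_search(100))  # [1, 96, 98]
```

Because each chunk is independent, no worker ever needs another worker's results, which is exactly what makes a problem "parallel" in the sense described above.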

Speaker 2 (14:15):
Certainly. Yes, networks, as it turns out, are pretty cool.

Speaker 1 (14:19):
Yeah, and networks play a part in this idea of the singularity. Actually, I guess now is a good time; we'll kind of transition into Vernor Vinge's, and honestly, I don't know how to say his last name. I say Vinge, and it could end up being Vingy. But I just went with what you said. So that's great, that's fine. Let's do it. We'll say that Vinge, like everything else, is silicone. So Vernor, though, he's Vern, I

(14:43):
call him Vern. He suggested four different potential pathways that humans could take, or really that the world could take, yes, to arrive at the technological singularity. Okay, what are they? The four ways are: we could develop a superhuman artificial intelligence,
The four ways are we could develop a superhuman artificial intelligence,

(15:03):
so computers suddenly are able to think on a level that's analogous to the way humans think, and can do it better than we can. Right. Whether or not that means computers are conscious, that's debatable. We'll get into that too. Computer networks could somehow become self-aware. That's number two. Okay,

(15:24):
So yes, Skynet. So, like the grid computing we were just talking about, somehow, using these grid computers, the network itself.

Speaker 2 (15:34):
Having enough cycles and enough pathways and enough loops back around,
it starts going like, hey, I recognize this, yeah, and
starts thinking about.

Speaker 1 (15:41):
Like thinking about IBM's Watson. But it's distributed across a network.
So computers. You can think of computers as all being
super powerful neurons in a brain, and that the network
is actually neural pathways. And it's definitely a science fiction
ye way of looking at things. Doesn't mean it won't happen,

(16:02):
but stranger things have happened, my friends. It feels like a Matrix kind of thing to me. Then we have the idea that computer-human interfaces are so advanced and so intrinsically tied to who we are that humans themselves evolve beyond being human; we become transhuman. Okay. So this is an idea

(16:22):
that we almost merge with computers, at least on some

Speaker 2 (16:25):
level, via kind of nanobot technology, you know, stuff running through our bloodstreams, stuff in our cells, yep.

Speaker 1 (16:32):
Or we have jacks in our brains where our consciousness is connected. So, for example, we might have it where, instead of connecting to the internet via some device like a smartphone or a computer, yeah, it's right there in our meat brains. So that, you know, you're sitting there having a conversation with someone, and you're like, oh, wait,

(16:55):
what movie was that guy in? Let me just look up IMDb in my brain. And then, you know, depending on how good your connection is, which means, by the way, if you are a journalist and you attend CES, you will automatically be dumber, because all the internet connectivity will be taken up, and so you'll be sitting there trying to ask good questions, and drool will

(17:15):
come out of your mouth, which to me is a typical CES.

Speaker 2 (17:19):
I can only assume that wireless technology would advance also at this point, but one can only hope. Fingers crossed.

Speaker 1 (17:25):
There are certain technologies that are not advancing at the exponential rate of Moore's law, which is another problem. We'll talk about that. Yeah. And then the fourth and final method that Vernor had suggested the world may go would be that humans would advance so far in the biological sciences that they would allow us to engineer human intelligence, so

(17:46):
that we could make ourselves as smart as we wanted to be. This is sort of that Gattaca future, another great documentary, where we engineer ourselves to be super smart. Right. So those are the four pathways: artificial intelligence, computer networks become self-aware, computer-human interfaces become really, really awesome, or we have

(18:09):
biologically engineered human intelligence, and all four of these lead
to a similar outcome, which is this intelligence explosion. And
this is the idea that some form of superhuman intelligence
is created, either artificially or within ourselves, and that at
that point we will no longer be able to predict

(18:29):
what our world will be like, because, by definition, we will have a superhuman intelligent entity involved. And because that's superhuman, it's beyond our ability to predict. Right, which is, you know, which

Speaker 2 (18:44):
Which makes that experience experiments about it a little bit uh.

Speaker 1 (18:49):
Philosophical. Yeah, that's the kind way of putting it. Pointless would be another way of putting it. Like, we could, you know, sit there and spitball a whole bunch of possible futures, but that's the thing, they're possible. We don't know which one could come out. We don't even know if these four pathways are inevitable. We have futurists who truly believe that this is something that will

(19:11):
happen at some point. There are other people who are more skeptical, but we'll talk about them in a bit. So one of the outcomes that Vernor was talking about, and it's a fairly popular one in futurist circles, is the idea of the robo-apocalypse, essentially. Right. This is

(19:32):
where you've got the humans-are-bad, destroy-all-humans idea. Essentially, the idea is that humans would become extinct, either by definition, because we've evolved into something else, or because whatever the superhuman intelligence is, it decides we are a problem.

Speaker 2 (19:49):
Yeah, and a lot of futurists are a lot more positive about that. They're more looking forward to it than being scared of it. It's less of a, oh no, big scary robots are coming to take over our society, and more of a, robots are coming to take over our society, hooray.

Speaker 1 (20:02):
Yeah, yeah, exactly. Yeah, I don't have to work anymore, and I don't, because robots are supplying all the things we need. There's no need for anyone to work anymore. There's no need for money anymore, because the only reason you need money is so you can buy stuff. But if everything's free, then you don't need money. So it becomes Star Trek, and we all, you know, run around

(20:23):
in jumpsuits and punch people. And if you're Kirk, you make out a lot. I mean a lot. That dude, every week. Picard and Riker, if you add them together, make one

Speaker 2 (20:37):
Kirk. And yes, in this documentary series.

Speaker 1 (20:39):
Yeah, Star Trek. Yeah, I don't know about Archer, because I never watched Enterprise, so you guys will have to get back to me on that. Yeah, sorry, sorry about that.

Speaker 2 (20:46):
It's also a gap in my personal understanding.

Speaker 1 (20:48):
I just took one look at that decontamination chamber and I said, yep, I'm out. Anyway, so that's Vernor Vinge. He sort of popularized this idea, but there are other people whose names, I think, are synonymous with it, and we will talk about them in just a minute. And now back to the show.

(21:09):
So, Vernor Vinge, again, very much associated with the idea of the singularity. But there's another name that comes up all the time: Ray Kurzweil.

Speaker 2 (21:18):
Ray Kurzweil. And this is a fellow who has been referred to in various circles as the Thomas Edison of modern technology, or, perhaps more colorfully, the Willy Wonka of technology. That was by Jeff Duncan of Digital Trends, and I just wanted to shout that out, because that was great. Nice.

Speaker 1 (21:36):
But you get nothing. I shared a remix of Willy
Wonka earlier today and it's still playing through my head.

Speaker 2 (21:43):
We're fans. We might be fans of the Gene Wilder Willy Wonka. Everyone, homework assignment: go watch that. It has nothing to do with the singularity at all.

Speaker 1 (21:51):
I don't know there's some chocolate singularity in there, Chocolate Singularity.
I want to do an episode on.

Speaker 2 (21:58):
If I were better at cover band names, I totally would have said something witty right there.

Speaker 1 (22:01):
Yeah, all right, well, fair enough, we'll say it's the
Archies for Sugar Sugar, Oh dear, Oh my goodness.

Speaker 2 (22:07):
Okay. So, Ray Kurzweil. Yeah, Ray Kurzweil is the kind of cat who, you know, when he was in high school, invented a computer program. And this is in the mid nineteen sixties. This isn't like last year or something. In the mid nineteen sixties, he created a computer program that listened to classical music, found patterns in it, and then created new classical music based on that.

Speaker 1 (22:29):
So it was a computer that composed classical music, yes, following the rules of classical music that other composers had created. Yes. That's kind of cool. That's just something he did, you know. And yeah, the dude's got credentials. Yeah.

Speaker 2 (22:43):
He also kind of invented flatbed scanners, has done a
whole bunch of stuff in speech recognition, and.

Speaker 1 (22:54):
Which, that's interesting, because we'll talk about that in a second. But one of Kurzweil's big points is that he thinks that, by, and this all depends upon which interview you read of Kurzweil's, but in various interviews he has said that, essentially, by twenty thirty, we will reach a point where we will be able to make an artificial brain. We'll have reverse engineered the brain, and we'll

(23:19):
be able to create an artificial one. And there's a lot of debate in smarter circles than the ones I move in, and that's not a slap against my friends, they're pretty bright, but none of us are neurologically gifted to that point. I include myself in that circle. So,

(23:40):
but there are some very bright people who debate this point: whether or not we'll be able, by the year twenty thirty, to reverse engineer the brain and design an artificial one. And I think the debate is not so much about whether or not we'll have the technological power necessary to simulate a brain. Sure, we can simulate brains

(24:02):
on a certain superficial level today. Well, I

Speaker 2 (24:04):
Mean hypothetically we could connect enough computers that we could
make it go.

Speaker 1 (24:08):
I think, yeah, we could probably get the computer horsepower,
especially by twenty thirty, to simulate a human brain. The
question is whether we will understand the human brain enough
to do so exactly. So that's sort of where the
debate lies. It's not so much on the technological side
of things as it is the biological side of things,

(24:29):
which is kind of interesting. I've read a lot of critics who have really jumped on Kurzweil for this. Particularly, P.Z. Myers has written some pretty, yeah, strongly worded criticisms of Kurzweil's theories, saying that Kurzweil simply does not

(24:50):
understand neurological development and activities, and that, you know, the interplay between the environment and the way our brains develop, you know, nurture versus nature, all of this stuff, with the hormonal changes, electrochemical reactions.

Speaker 2 (25:07):
Saying that there are so many little bits that make up our brains, so many hormones, so many processes, and we understand such a small fraction of what they do. This is why a lot of psychiatric drugs, for example, are kind of like, oh, well, we invented this thing, and we guess it does this thing, right? Take it and see what happens.

Speaker 1 (25:21):
We don't know what it does. Yeah. It tends to make you happy. It also makes you perceive the color red as having the smell of oranges. Like, you know, we don't understand it fully. And in fact, there are other people, like Steven Novella, who is the author of the NeuroLogica blog, and he also

(25:42):
is a host on a wonderful podcast called The Skeptics' Guide to the Universe. If you guys haven't listened to that, you should try it out, especially if you like skepticism and critical thinking. But he's a doctor and a proponent of evidence-based medicine, and he talks about how we don't know how much we don't know

(26:05):
about the brain. We have no way of knowing where the endpoint is as far as the brain is concerned, and therefore we cannot guess at how long it will take us to reverse engineer it, simply because we don't know where the finish line is.

Speaker 2 (26:20):
Right, right. Yeah, Kurzweil has a new book, new as of our recording this in January twenty thirteen; it just came out in November of twenty twelve, called How to Create a Mind: The Secret of Human Thought Revealed. And in the book he theorizes that, okay, if you'll follow me for a second: atoms are tiny bits of data, okay. DNA is a form of a program. The nervous system

(26:44):
is a computer that coordinates bodily functions, and thought is
kind of simultaneously a program and the data that that
program contains.

Speaker 1 (26:54):
Gotcha. See, now, this is another problem that some scientists have, yeah,
is reducing the human brain to the model of a computer.

Speaker 2 (27:03):
Right, because, you know, it's a very elegant, interesting proposition. Sure. And it's kind of sexy like that, because you go, like, oh, well, that sort of makes sense, man. Like, let's go get a pizza and talk about this more.

Speaker 1 (27:16):
Yeah, let me let me get a program that will
allow me to suddenly know all kung fu.

Speaker 2 (27:22):
Right, and when you're a programmer, that's a great plan. Yeah, I mean, yeah, that sounds terrific. But yeah, there's one specific guy I found. Jaron Lanier wrote a terrific thing called One Half of a Manifesto, which is a really entertaining read if you guys like this kind of thing, where he was saying that what futurists are talking about when they talk about this, the singularity, is

(27:44):
basically a religion. He was calling it cybernetic totalism, you know, like a fanatic ideology. He compares it to Marxism at some point. Interesting. Yeah. And he was saying that, you know, this theory is a terrific theory if you want to get into the philosophy of who

(28:05):
we are and what we do and what technology is. But that, you know, cybernetic patterns aren't necessarily the best way to understand reality, and that they're not necessarily the best model for how people work, for how culture works, for how intelligence works, and that saying so is a gross oversimplification.

Speaker 1 (28:24):
That's a good point. And we should also point out that it all depends on how you define intelligence as well, because Kurzweil himself has worded his own predictions in such a way that, some would argue, Novella argues, for example, that he has given himself enough room where he's going to be right no matter what. Like saying that by

(28:44):
twenty thirty, we will be able to reverse engineer basic brain functions, and Novella says, well, technically you could argue that now. So that kind of gives you a lot of room, a little bit of a gimme there. Yeah. But whether or not it means total brain function, that's a totally different question. And so the other point is that we could theoretically create an artificial intelligence

(29:07):
that does not necessarily reverse engineer the brain. It doesn't follow the human intelligence model. I mean, that's IBM's Watson again, a good example of artificial intelligence that, you know, in some ways mimics the brain, because it kind of has to. You know, we're coming at this, human beings are the ones creating this technology, and so as human

(29:27):
beings creating this technology, it's going to follow the rules as we understand them. So there's going to be some mimicry there, right. But IBM's Watson, you know, you think about that. It doesn't really understand, necessarily, the data that's passing through it. It's looking for the connections and making.

Speaker 2 (29:45):
It's really savvy at making connections and recognizing patterns and spitting out useful information.

Speaker 1 (29:52):
Yeah, it's looking for whatever answer is most likely the right one. It's all probability based, right. And if it doesn't reach a certain threshold, it doesn't provide the answer. So, arbitrarily speaking, I don't know what the threshold is, so I'm just making up a number, eighty five percent. Let's say it has to be eighty five percent certain or higher for it to give that answer.

(30:14):
If the certainty falls below that threshold, no answer is given. That's essentially how it worked when Watson was on Jeopardy, right. It would analyze the answer, in Jeopardy terms, and then come up with what it thought was probably the most accurate question for

(30:34):
that answer, and occasionally it was wrong to hilarious results.
But it did sort of seem to kind of mimic
the way humans think, at least on a superficial level.
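That answer-or-abstain behavior boils down to a confidence threshold. A minimal sketch; the eighty-five percent figure here is the made-up number from the conversation above, not IBM's real cutoff:

```python
def respond(candidates, threshold=0.85):
    """Return the highest-confidence answer, or abstain if nothing clears the bar."""
    answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return answer
    return None  # below threshold: stay silent, like Watson declining to buzz in

print(respond({"Who is Turing?": 0.91, "Who is Good?": 0.42}))  # Who is Turing?
print(respond({"Who is Kirk?": 0.60, "Who is Picard?": 0.35}))  # None
```

The key design choice is that abstaining is an explicit outcome: a wrong answer costs points on Jeopardy, so staying quiet below the threshold beats guessing.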

Speaker 2 (30:48):
And I mean, the thing about humans is that they're wrong a lot more than that fifteen percent of the time.

Speaker 1 (30:55):
Yeah, you know, we give answers even if we're not eighty-five percent sure. I certainly do.

Speaker 2 (31:02):
Because we all know from going to trivia nights. Yeah, and I've read a lot online about arguments that it's our deficiencies, our memory biases, our irrational behavior, our weird hormonal stuff going on, that make us human, and that you can't teach a computer to be irrational.

Speaker 1 (31:21):
That's true, although you can teach it to swear. We just read a story last week, yeah, where IBM allowed Watson to read the Urban Dictionary, and then Watson got a little bit of a potty mouth.

Speaker 2 (31:35):
It got it got kind of fresh.

Speaker 1 (31:37):
It did. It started to say, uh, oh, see, what was it? Oh, I'm going to say something and it's going to be bleeped out, right? Tyler just said so. Anyway, there's one point where a researcher asked a question of Watson, and Watson included within the answer the word bo. So since that

(32:01):
was bleeped out, you probably don't know what it was,
so go look it up. It was funny. It was
really funny.

Speaker 2 (32:05):
Yes, And then and then they basically nuked that part
of Watson from orbit. They were like, you know what,
never mind.

Speaker 1 (32:10):
It was the only way to be sure. They wiped the Urban Dictionary from Watson's memory. They also said that a very similar thing happened when they let Watson read Wikipedia. Oh, no judgments here, just saying what IBM said. Anyway, again, the computer was unable to determine when it was appropriate, and what the appropriate context was, for dropping a swear word. Yeah, it

(32:32):
didn't know, so it just started to speak kind of like my wife does. So yeah, I'm going to pay for that later. Anyway, that's an interesting point, though. Again, you're showing how machine intelligence and human intelligence are different, because the machine intelligence doesn't have that context, for sure.

Speaker 2 (32:50):
And of course, you know, we're talking about science fiction, or science future, however you want to term it, so we might very well come up with some fancy little program script that lets you introduce that kind of bias. But again, from that documentary Star Trek, I mean, Data never figured out those contractions.

Speaker 1 (33:09):
That's true, that's true. Turing actually had a great mental exercise, really, and it's called the Turing test, and this applies to artificial intelligence. We've talked about the Turing test in previous episodes of Tech Stuff, but just as a refresher, Turing had suggested that you could create

(33:30):
a test, and that if a machine could pass that test at the same level as a human, in other words, if you were unable to determine whether the person who took that test was human or machine, the machine had passed the Turing test and had essentially simulated human intelligence.
And it usually works as an interview, so you have

(33:51):
someone who's conducting an interview, and you have either a machine answering or a human answering, and there's a barrier up so that, of course, the person asking the questions cannot see who is responding. And of course they're responding through text, usually, because if they're responding through a voice and it sounds robotic, you know, you'd be like, well, either it's

(34:13):
a robot or the most boring person in the world. The idea being, you would ask these questions over a computer monitor, get text responses, and if you were unable to say, with a certain degree of accuracy, whether it was a machine or a person, then you would say the machine passed the Turing test.

(34:33):
And you could argue, well, that could just mean that
the machine's very good at mimicking human intelligence, it does
not actually possess human intelligence. Turing's point is, does that matter?
Because I know that I am intelligent. I speak with
someone like Lauren, who I assume is also intelligent based
upon the responses she gives. But she could just be

(34:54):
simulating intelligence. However, I have already bestowed, in my mind, the feature of intelligence upon Lauren, because what she does is very much akin to what I do. So Turing said, if you extend the courtesy to your fellow human being that they are intelligent, based on the fact that they

(35:14):
act like you do, why would you not do the same thing for a machine? Does it matter if the machine can actually think? If the machine simulates thought well enough for it to pass as human, then you're giving it the same benefit of the doubt that you'd give anyone else.
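The interview setup just described can be sketched as a toy Python simulation. Everything in it is invented for illustration: the judge's heuristic, the canned replies, and the pass criterion, which formalizes "unable to tell with a certain degree of accuracy" as the judge scoring no better than chance.

```python
# A toy sketch of the Turing-test setup described above. The judge's rule and
# the canned replies are invented; the "pass" criterion treats judge accuracy
# at chance level (50% here, with one human and one machine) as a pass.

def judge_guess(reply):
    """A naive judge: all-caps, mechanical-sounding text gets called a machine."""
    return "machine" if reply.isupper() else "human"

def judge_accuracy(respondents):
    """Fraction of (kind, reply) respondents the judge identifies correctly."""
    correct = sum(judge_guess(reply) == kind for kind, reply in respondents)
    return correct / len(respondents)

# A machine that gives itself away: the judge nails both respondents.
clunky = [
    ("human", "Honestly, I never remember my own phone number."),
    ("machine", "QUERY RECEIVED. PROCESSING."),
]
print(judge_accuracy(clunky))   # 1.0 -- the machine fails the test

# A machine that mimics human phrasing: the judge is reduced to guessing.
smooth = [
    ("human", "Honestly, I never remember my own phone number."),
    ("machine", "Oh gosh, give me a second to think about that one."),
]
print(judge_accuracy(smooth))   # 0.5 -- the machine "passes"
```

With one human and one machine, chance is fifty percent, so the smooth machine passes under this toy criterion while the clunky one is caught; a real test would use many interview rounds and human judges rather than a one-line heuristic.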

Speaker 2 (35:28):
Right, this is what a lot of science
fiction movies are about.

Speaker 1 (35:32):
Actually, yeah, there's a lot of philosophy.

Speaker 2 (35:33):
And yeah, a lot of philosophy, a lot of Isaac Asimov, a lot of Blade Runner, and that's not an author.

Speaker 1 (35:39):
Sorry, well no, but you know, Philip K. Dick, look him up. So anyway, Do Androids Dream of Electric Sheep? I won't spoil it for you. To kind of wrap this all up, getting back into the discussion of philosophy: very recently, we did a podcast about "are we living in a computer simulation," right, right,

(36:02):
and that kind of plays into this idea of the singularity, because that argument stated that if the singularity is in fact possible, if it's inevitable, if we are going to reach this level of transhumanism where we are no longer able to really predict what the future will be like, because it'll be beyond our understanding, then one thing we

(36:23):
would expect to do is create simulations of our past
to kind of study ourselves, sure, right.

Speaker 2 (36:28):
And to see what happens, play around with variables, yeah.

Speaker 1 (36:32):
Yeah, if we're that advanced, we could, in theory, create such a realistic simulation that the inhabitants of that simulation would be incapable of knowing that they were artificial, and would be completely, you know, self aware of themselves. You know, that was totally redundant, self aware,

(36:53):
but unable to know that they were a simulation. The argument goes that if those things are possible, then there's no way of knowing, you know; the overwhelming possibility is that we are in a computer simulation.

Speaker 2 (37:06):
It's a computer simulation, yeah, right now.

Speaker 1 (37:08):
Because, yeah, if that's what's gonna happen, then there's no way of saying with certainty that we are not in fact the product of that. And so, uh, the point being not necessarily that we are in fact living in a computer simulation, but that perhaps this singularity, this transhumanism thing, might not be realistic. It might not

(37:28):
be the future that we're headed to. Maybe it ends up being a pipe dream that's not really possible for us to attain. Or maybe we'll wipe ourselves out through some terrible war or catastrophic accident. Maybe we create a biological entity that wipes us out, a la The Stand, or

(37:51):
we create a black hole at the LHC. Which, come on, people, don't write me. I already know about that, and how tiny and, um, nonexistent they are, because they last so briefly. But let's say it totally happened: there's that thing where you look at that one website where the black hole forms in the parking lot outside the LHC and you just see the whole picture

(38:11):
go. Which is a funny video. Anyway, that argument plays back into this.
So I don't know. I don't know if we're going
to ever see a future where the singularity becomes a thing.
Oh, and we never really talked about it, but one of the big points that Kurzweil really punches in his

(38:32):
Singularity talks is the idea of digital immortality, right right.

Speaker 2 (38:35):
And he's been obsessed with this, and obsessed is probably a judgmental word, I apologize, but he's been very focused on this concept. His father died when he was about twenty-four, and he's been exploring theories on life extension ever since then, and supposedly takes all kinds of supplements, and sells them as well, to extend life. Has

(38:56):
all kinds of health plans.

Speaker 1 (38:58):
Yeah, dietary plans, exercise, all of that. The idea being that if he can preserve his own life.

Speaker 2 (39:06):
Last long enough that we hit the singularity, then he
can become immortal.

Speaker 1 (39:09):
Right. And, you know, we attain immortality through
one of a thousand different ways. For example, we end
up uploading our own intelligence into the cloud, right, and
then we become part of a group consciousness, so we
are no longer really individuals. Or we merge with computers
in some other way so that we are technically immortal

(39:29):
that way. Or we just conquer the genes that all
guide the aging process and we stop it, and we
stop disease.

Speaker 2 (39:37):
You know, like in Transmetropolitan, you just take a cancer pill and then you don't get cancer, because that's what you do.

Speaker 1 (39:42):
Yeah, So again the singularity. That's kind of why I
think a lot of critics also point to it as
being more of a religion because it's kind of this
sort of utopian pipe dream in their minds.

Speaker 2 (39:54):
There's the former CEO of Lotus, Mitch Kapor, I'm not sure how you say it, who once called it the intelligent design for the IQ 140 people.

Speaker 1 (40:03):
Yeah, ouch, ouch. Well, meanwhile, Kurzweil's kind of laughing all the way to the bank. I hear that a company that rhymes with Shmugel hired him. Little, little people.

Speaker 2 (40:12):
I mean, you probably wouldn't have heard of them, yeah, but they just hired him on to be, uh, I have it in my notes, the official title, I think it's the director of engineering. Yeah, a director of engineering over there.

Speaker 1 (40:24):
Yeah, they get some big names. I mean, they had Vint Cerf as the chief evangelist, and of course he was one of the fathers of the Internet. So Google's known for getting some really smart people. And to be fair, while the singularity may or may not ever happen, I think it's important

(40:44):
that we have optimists in the field of technology who are really pushing development to try and make the world a better place for people.

Speaker 2 (40:55):
Oh, absolutely. Even if we never reach the point of digital immortality in our lifetimes, or anyone else's, I mean, if someone wants to think so big that they want to put in nanobots to make my body awesomer, and, that came out possibly crude, it mostly means the "I don't get cancer and die" kind of stuff. That's terrific. I can't argue

(41:18):
with any part of that.

Speaker 1 (41:19):
Yeah, I'm going to be on video so much this
year that I definitely need my body to be awesomer, so I'm all for that. Well, either way.

Speaker 2 (41:28):
Yes. And Google, you know, Google looks forward so much to augmented reality. Augmented reality. I'm sorry, I can't pronounce anything today. I am not on a roll. It's okay. The Internet of Things and all of that wonderful future tech, it seems like a terrific fit.

Speaker 1 (41:45):
Yeah. Yeah, so we'll see how it goes. I mean, obviously,
the nice thing about this is that all we have
to do is live long enough to see it happen
or not happen. And most predictions have the singularity hitting
somewhere between twenty thirty and twenty fifty. Yeah, it all
depends upon which futurist you're asking. And also, I think it's one of those

(42:06):
rolling goalposts as well. You know how certain technologies are
always twenty years away, or five years away or ten
years away. So we'll see. Maybe by twenty twenty we'll
be saying, all right, we've revised our figures.

Speaker 2 (42:19):
By the twenty seventies, definitely, but who knows.

Speaker 1 (42:22):
We'll see. Guys, if you have any suggestions for future episodes of tech Stuff, well, here's what you can do.
You can write us an email. And a lot of
people have been asking about our email address. I do
say that every episode, but in case you've missed it,
listen carefully. Our email address is tech Stuff at Discovery
dot com. Send an email. I'll prove it by writing back,

(42:45):
or drop us a line on Facebook or Twitter. Our
handle there at both of those is TechStuffHSW,
and Lauren and I will talk to you again in
the future. And that was the classic episode tech Stuff
Enters the Singularity, way back from twenty thirteen, more than

(43:06):
a decade ago. Holy cats, I have been doing this
show for so long because Lauren, of course, was my
second co host. I had already done a couple hundred
shows with a different co host. Wow really does hit
me right in the brain. So I hope you all

(43:26):
enjoyed that. I'm looking forward to seeing what comes next.
I don't think we're gonna hit the Singularity this year.
Maybe I should have put that as one of my predictions,
but who knows. Maybe OpenAI is gonna create the next generative chatbot, and it'll program the Skynet-like system that will bring us into the Singularity kicking

(43:48):
and screaming, or if you're me, I'm probably already kicking
and screaming just just because I'm grouchy. Anyway, I hope
you're all well. I hope you had a happy and
safe New Year, and I'll talk to you again really soon.
Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,

(44:13):
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.
