
August 5, 2016 38 mins

By 2021, we won't be able to shrink transistors down any further. What does this mean for Moore's Law?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the podcast that looks at the future and says, out on the wily, misty moors. I'm Jonathan Strickland and I'm Joe McCormick,

(00:23):
and our other host, Lauren Vogelbaum, is not with us this time, but hopefully she will be back next time. Jonathan, I suspect you of equivocating on moors, maybe a little bit, a teeny tiny bit. Okay. So today we are
going to be talking about Moore's Law. It won't be
the first time on this podcast, probably won't be the
last time, because it contributes very much to how we

(00:45):
grapple with the future of consumer electronics and computer technology.
And we've used, I mean we being the humans in general,
have used Moore's Law to kind of be shorthand for
the progression of computational power for pretty much as long
as it's been around. So we are specifically focusing on

(01:05):
how it's gonna go through a little bit of a
transformation in the not too distant future, to quote MST3K. Jonathan recently wrote a piece for How Stuff
Works Now about a recent development in the world of
people thinking about Moore's law. Yeah, so first, before we
even get into that, we should probably chat a little
bit about what Moore's law is. The observation was originally

(01:27):
made back in nineteen sixty five. That's when Gordon Moore
came up with this. Well, it really was just an observation, right? It wasn't a law. He noticed something and he wrote a paper about it. But
maybe we should make it a law. Well, we certainly have. No,
you're right that it is not a physical law. It is more, yeah, an observation or a prediction. Right,

(01:50):
It's just a prediction that has held more true than
not throughout the years, and so people referred to it
like a law, kind of like Murphy's law. Murphy's law
is not a real law. Moore's law is not a
real law. But when Gordon Moore made that observation back in sixty-five, he was the director of research and development for Fairchild Semiconductor. He was also a co-founder of Fairchild. He would go on to be a

(02:12):
co founder of Intel. So here's a guy who was
really really smart about integrated circuits. Integrated circuits were a relatively new thing because transistors had not been around that long; before that, you're talking about vacuum tubes. And
so he was taking a look at this and seeing
that there was an interesting situation going on. He wrote

(02:35):
an article for Electronics magazine, and it has one of my favorite titles of all time for technical articles: "Cramming More Components onto Integrated Circuits." Because there's nothing delicate about that cramming. Cramming is such a great word.
You get this image of Gordon Moore with a bag
full of components and he's reaching in and pulling out

(02:58):
fistfuls and trying to cram them into a machine. And maybe he has like a real large wooden mallet sitting by just in case. Yeah, but that's not quite
the actual way that this is being done. But he
was talking, in fact, I think he was talking as much about economics as he was about technology. In fact,

(03:21):
he was talking more about economics. Yeah, "more," m-o-r-e, not M-o-o-r-e, in this case. Yeah, he was talking about economics primarily because he wanted to see where the price point fell on an individual component: at what point in the density of an integrated circuit do you have the best price per component

(03:41):
so that the circuit that you're done with is at its lowest cost, right, the lowest price to make it. And there's a certain rule that he saw: you had to hit a certain volume in order for that price to come down. It's kind of like buying in bulk. Yeah, you know, if you're buying in bulk, you buy more and the individual

(04:02):
cost of each individual item is lower. That's why it makes so much more sense to buy that barrel of cheese balls instead of the small little can of cheese balls, or to buy the hot-tub-sized mayonnaise jar and put it in the garage. Yeah, it's always a great idea. Well,
he noticed that there was a certain volume at which

(04:24):
the components would be at their ideal cost. But it was a sweet spot: if you went too much over that, then the finished integrated circuit would be too expensive. If it were too much under that, the individual price per component would be too high, thus also making the finished circuit too expensive.
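
As a rough illustration of that sweet-spot idea, here is a toy Python model. The numbers and the exponential yield curve are illustrative assumptions for this sketch, not figures from Moore's paper: per-chip overhead gets amortized as you add components, while yield falls the more you cram on, so cost per component bottoms out at an intermediate count.

```python
# Toy model of the cost-per-component sweet spot (illustrative numbers only,
# not Moore's actual data). Fixed per-chip costs are amortized over more
# components, but manufacturing yield drops as you cram more on.
import math

CHIP_OVERHEAD = 10.0   # fixed cost to make/package/test one chip (arbitrary units)
COST_PER_PART = 0.05   # incremental fabrication cost per component
DEFECT_RATE = 0.01     # chance any one component ruins the whole chip

def cost_per_component(n):
    yield_fraction = math.exp(-DEFECT_RATE * n)  # classic exponential yield model
    return (CHIP_OVERHEAD + COST_PER_PART * n) / (n * yield_fraction)

for n in (10, 25, 50, 100, 250, 500):
    print(f"{n:4d} components -> cost/component {cost_per_component(n):7.3f}")

# The minimum sits at an intermediate count. As manufacturing improves
# (DEFECT_RATE falls), that sweet spot moves toward larger counts -- which
# is the "sweet spot moved over time" point made in the episode.
```
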
But he also noticed that not only was there a

(04:46):
sweet spot, that sweet spot moved over time. There was
an incentive for companies to develop more powerful processors. There
was a need in the marketplace, which meant that there were economic factors that pushed the development of more sophisticated approaches to manufacturing transistors and

(05:08):
integrated circuits. That meant that the individual component prices began
to shift over time. So at the time that he
made this observation, he said that the ideal number of components for an integrated circuit was fifty. Folks, we're in the billions now, but at the time it was fifty. Fifty components. That was where you hit the lowest price

(05:29):
per component. But he said, this was twice as good as the year before, and that was twice as good as the year before it. So he said, well, I think this is going to continue: I project that within five years the lowest cost per component on an integrated circuit will be realized with circuits having around a thousand components. So on the day he makes the observation,

(05:54):
it's fifty; five years from then, he's saying, it's going to be a thousand. We're just gonna see this trend continue. He said, there's no reason you can't project this out further, and he said that by the time you would get to nineteen seventy five, it would be sixty-five thousand components. He was talking about it essentially doubling, more or less, every single year. He says,

(06:15):
eventually you would probably have to make some adjustments. You
might hit a point where this trend continues, but it slows down. And in fact, that is what we saw, right.
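
A quick sketch of the arithmetic behind that projection, in Python: strict doubling every year from fifty components in nineteen sixty-five lands in the same ballpark as the roughly one thousand and sixty-five thousand figures quoted above.

```python
# Moore's 1965 extrapolation: the component count at lowest cost per
# component, assuming it doubles every year from 50 components in 1965.
def components_at_lowest_cost(year, base_year=1965, base_count=50):
    return base_count * 2 ** (year - base_year)

for year in (1965, 1970, 1975):
    print(year, components_at_lowest_cost(year))

# 1965 -> 50; 1970 -> 1,600; 1975 -> 51,200: the same order of magnitude
# as the ~1,000 and ~65,000 figures quoted in the episode.
```
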
We saw it go from every twelve months to really more like between eighteen and twenty-four, depending upon the time span you're looking at. And that was

(06:38):
incredible that he made this observation and that it was
so prescient and so accurate. And that's why we often
call it Moore's law, although, I think, the wider interpretation isn't focused so heavily on the economic side of things. So as a point of illustration of
exactly what this looks like in terms of the hardware

(07:01):
in our devices. There was a great example I found
in a piece that was in The Economist, and it
highlighted this fact. When Intel released its first microprocessor in
nineteen seventy one, this was called the four zero zero four, the four thousand four. That chip was twelve square millimeters, and it had two thousand three hundred transistors

(07:24):
and the gap between each transistor was ten thousand nanometers, which is about the width of a red blood cell. They say, you know, a kid with a decent microscope could see the individual transistors. Intel's Skylake chips in twenty sixteen are a little bit different. These chips have a spacing

(07:45):
of fourteen nanometers between transistors. So that's ten thousand nanometers down to fourteen, and they're not only invisible to the naked eye, they're even invisible to any normal microscope, right? Light, the wavelengths of light, are too big for us to be able to see this. You have to use, like, a scanning electron microscope.
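
Back-of-the-envelope math on that comparison, in Python. The two spacings are the numbers quoted from The Economist; the squaring is just the observation that packing density scales with area.

```python
# Rough scaling between the 1971 Intel 4004 and a Skylake-era chip,
# using the transistor spacings quoted from The Economist piece.
spacing_4004_nm = 10_000   # ~width of a red blood cell
spacing_skylake_nm = 14

linear_shrink = spacing_4004_nm / spacing_skylake_nm
area_density_gain = linear_shrink ** 2  # density scales with area, not length

print(f"linear shrink:     ~{linear_shrink:,.0f}x")      # ~714x
print(f"area density gain: ~{area_density_gain:,.0f}x")  # ~510,000x
```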

(08:10):
So here's two factors that are important in this. One is that the individual components, we were able to shrink them down to increasingly smaller sizes, which is a weird way of putting it, but it gets the point across. And we were able to pack them together more densely; we were able to decrease the spaces between those components. Because of both miniaturization and architecture, right, we're getting really, really

(08:33):
good at various types of lithography and other methods of developing and laying out these circuits. And that was what allowed us to continue on to a ridiculous degree.
When you think about it, I mean you're talking about
from a time where it was fifty components to literally billions.

(08:53):
Now, it's hard to wrap your mind around it. And again, this was really all about how Moore was saying, this only makes sense if it makes economic sense. Right,
It will only work to the point where companies can
do this and have it be a product that they

(09:14):
can make a profit selling. If it gets to a point where it's too difficult to do, well, too difficult translates
to too expensive. Right, the harder something is to do,
the more expensive it is to do it. Well, there's
all kinds of stuff you can do in the lab,
and it just doesn't make sense from a consumer perspective. Yeah, and if it's one of those things where you get to a point where it's just more trouble

(09:35):
than it's worth, then Moore's law breaks down. Because again, it's not that it's technically impossible; it's that it's economically not productive. And if it's not economically productive, no one's gonna lose money just trying to get across an engineering hurdle that has

(09:57):
been problematic. It's sort of like saying, you know, could you engineer me a car that goes three hundred miles per hour? You know, I bet, if we were willing to pour all of our resources into that, we could do that. But why? There's no reason to do that. It doesn't make any economic sense.
There's no economic imperative for it. I can do that easily.

(10:19):
You just have to find a height tall enough to
drop the car off. And then, I mean, you know, I'm just saying, technically I can get it up to three hundred miles per hour. Yeah, but you're not gonna be able to do anything with it. Well, you also, if you were to make a car, a consumer car that goes three hundred miles per hour, you'd introduce problems based sort of on physics and the limitations of drivers,

(10:43):
Like, I would say, how should I put this? You'd increase the error rate of that car. And that in fact leads us into one of the fundamental issues that we face now in our production
of these incredibly complicated microprocessors. Now, one thing I should
say is that we haven't necessarily seen the number of

(11:06):
components double every eighteen to twenty four months over the
past few years. Right. That was the original definition of Moore's law, though Moore's law does still seem to hold in a more abstracted sense. The way it trickles down to the consumer is that you can expect, every eighteen to twenty-four months, your computing devices

(11:27):
will be twice as fast, or the processing power is twice as much, some variation of that. That tends to be how we define Moore's law now: every eighteen to twenty-four months,
like if you buy a computer today, in two years,
the computer you buy that day will be twice as
powerful as the one that you just bought. And that

(11:49):
it just shows the rapid development of technology and the
speed at which we can improve processing. That tends to be how we focus on Moore's law. But even that is starting to get difficult. So even when we got to the point where we were no longer saying, all right, we're not so much worried about how many components you're adding, but how you best utilize those so that you get the most out

(12:12):
of them. We still run into problems. So one of
the things we've seen is that we've seen companies like
Intel take a tick-tock approach. That's what they actually call their strategy when they're creating new types of microprocessors. The tick-tock approach means that you have two different generations

(12:33):
of processors that are kind of piggybacked onto each other, and in fact they chain up to the previous generations. So the tick generation is not blue and nigh-invulnerable with a sidekick named Arthur. The tick generation is when you shrink down those components to a smaller size.

(12:55):
We're talking on the nanoscale now, right? Like, it used to be that forty nanometers was considered the super-small component size. Now we're talking about getting down to, like, less than ten nanometers per component. That's insanely small.
When you get down to that size, that becomes the

(13:15):
tick, where you say, all right, we're gonna build it on the same architecture as the previous generation's microprocessors, but now everything is just smaller, so we can pack more pieces in. We're using the same chassis, but now we've got way more components in there. The tock is when they figure out, hey, now we know

(13:37):
how to design all those little components so that they work best. Because the architecture from one generation to the next may not be the most efficient, right, like, you may need to change that, especially for stuff like heat dispersal. Heat is a real problem with microprocessors. So the tock, that would be when you figure out, all right,

(13:57):
here's the architecture that works best with this size, and then you would go to the next tick, which means everything is even smaller, and then the next tock, where everything has been laid out in the most ideal way possible.
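
For reference, Intel's publicly described tick-tock cadence from roughly that era looked like the sequence sketched below. The generation names and process nodes are from Intel's published roadmaps of the time, not from the episode itself.

```python
# A sketch of Intel's tick-tock cadence as described above: a "tick" shrinks
# the process node with the same microarchitecture, a "tock" introduces a
# new microarchitecture on that node.
generations = [
    ("Penryn",       "45 nm", "tick"),  # shrink of the Core microarchitecture
    ("Nehalem",      "45 nm", "tock"),  # new architecture, same node
    ("Westmere",     "32 nm", "tick"),
    ("Sandy Bridge", "32 nm", "tock"),
    ("Ivy Bridge",   "22 nm", "tick"),
    ("Haswell",      "22 nm", "tock"),
    ("Broadwell",    "14 nm", "tick"),
    ("Skylake",      "14 nm", "tock"),
]
for name, node, phase in generations:
    print(f"{phase}: {name} ({node})")
```
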
So here's a question, though. Yeah? Is that tick-tock of the clock counting up or counting down? It's counting down, buddy. So we're hitting some fundamental limitations just

(14:21):
because of our good old buddy physics, right, Like, you know,
it's not like the nano scale is as small as
it gets. You can go down to the atomic scale, right.
The only problem is that as you're getting down to
the nanoscale and the atomic scale, something else starts to
play a role in your designs, and that's quantum physics.
You don't need to worry about that on the classical side. Really,

(14:41):
quantum physics at that point becomes negligible. You're not so much worried about weird quantum effects. To bring back the car: I mean, you don't have to go
to relativity to understand the physics of designing a car, right? Yeah, we don't have components on our cars that go down to the nanoscale, where you have to think, oh, wait a minute, quantum tunneling. But we do have to

(15:02):
worry about it with microprocessors and quantum tunneling in particular.
I mean, there are a lot of quantum effects that
we could talk about, but quantum tunneling in particular is
a problem. It leads to what some folks call electron leakage, which just sounds gross. But here's what's happening.
So quantum tunneling is this phenomenon you get where you

(15:22):
have a quantum particle. In this case, we're talking about electrons, and a quantum particle has kind of a range of probabilities of where it can be at any given time. So we've talked a little bit in previous episodes about the Heisenberg uncertainty principle. Right, we don't

(15:43):
know where it is. We know where it could be, right? Right, we've got a general idea of the area; somewhere within that area, that's where the particle is. But it could be anywhere within that one area. So instead of thinking of an electron as a point, think of a kind of nebulous, fog-of-war-like circle,

(16:07):
and the electron could be anywhere within that circle. All right. Now,
with microprocessors, you have these things called gates. The gates either allow electrons to pass through or do not allow electrons to pass through. In terms of effects, I would say they're logic gates. The gates in a processor are sort of the ability of your

(16:29):
brain to tell the difference between yes and no. Right,
And so if you think about computer processing, ultimately you're talking about very, very complicated approaches to just having lots and lots of yes-or-no questions, right? I mean logic gates. You build logic gates by having all these different channels, and by opening some channels and

(16:51):
closing off others. That's where you create the language that ultimately becomes the commands for the computer, and you get to play your Call of Duty game or whatever it may be.
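
A minimal sketch of that idea in Python: treat a gate as a yes/no function, then compose gates into the logic processors are built from. Starting from NAND is this sketch's choice, a standard universal gate, not something specified in the episode.

```python
# All of computing reduces to yes/no gates. Here NAND is used to build the
# other logic operations, and then a one-bit adder out of those.
def NAND(a, b):
    return not (a and b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```
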
So let's say you've got this Heisenberg uncertainty principle at play, and let's say you've shrunk down the components to such a degree that when you

(17:13):
get close to one of these gates, it's so thin that part of the cloud of probabilities that the electron could inhabit overlaps the gate, so that part of it is on the other side of the gate. So you can think of it like a flashlight: imagine that you're shining a flashlight on an actual little barrier,

(17:36):
and part of the flashlight, you can see, like, the circle of light. Part of it is on one side of that barrier, and part of it's on the other side of the barrier. Think of that like,
that's the potential of where the electron could be. Now,
if there's a probability for an electron to be in
a specific location, I mean sometimes the electron is in
that specific location. So if the probability field of the

(18:00):
electron actually overlaps the gate, that means sometimes the electron's on the other side of the gate, whether you've opened the gate or not. And so sometimes the brain of your electronic device can't tell the difference between yes and no. Right, it may think that it's a yes when in fact
it was supposed to be a no. That's where you
get electron leakage, where electrons are leaking through the system

(18:21):
and inserting errors into your calculations. For computers, this is what we call a bad thing. Like, you want your calculations to be reliable, and if they're not, then programs are gonna crash, files will get corrupted, you won't get good behavior out of your computer.
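
An order-of-magnitude sketch of why thinner barriers leak more, in Python. It uses the standard rectangular-barrier tunneling approximation T ≈ exp(−2κd); the one-electron-volt effective barrier height is an illustrative assumption for this sketch, not a model of any real transistor.

```python
# Tunneling probability through a rectangular barrier falls off
# exponentially with barrier width: T ~ exp(-2 * kappa * d), where
# kappa = sqrt(2 * m * (V - E)) / hbar for an electron below the barrier.
import math

M_E  = 9.109e-31   # electron mass, kg
HBAR = 1.055e-34   # reduced Planck constant, J*s
EV   = 1.602e-19   # one electron-volt in joules

barrier_height_J = 1.0 * EV  # illustrative barrier height above electron energy
kappa = math.sqrt(2 * M_E * barrier_height_J) / HBAR  # decay constant, 1/m

for width_nm in (5.0, 2.0, 1.0, 0.5):
    T = math.exp(-2 * kappa * width_nm * 1e-9)
    print(f"{width_nm:4.1f} nm barrier -> tunneling probability ~ {T:.1e}")

# Halving an already-thin barrier raises leakage by orders of magnitude,
# which is why shrinking gates eventually runs into electron leakage.
```
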
And we've been hitting up against this. But I was trying

(18:42):
to say, no, don't go into the room with the grue. Yeah, exactly. Like, yeah, so I wanted to go east and open the mailbox. You know, when we get to that point, then that's an issue, and that's what we've been bumping up against over the
last few years now. We keep seeing engineers come up
with new materials that help edge us away from that

(19:04):
but it's something that has been an issue for the
last several generations of microprocessors. Another issue with the design of microprocessors, you already touched on it, but I was gonna mention: wasting energy as heat. Yeah, and not just wasting energy; so there's wasting energy, but also just the problem of accumulating heat. Right, if a chip gets too hot, it'll lock up, it'll shut down, exactly. So you

(19:26):
continually shrink these components and increase transistor density, but this
cuts into your ability to disperse all of the waste
energy that they create as heat. Yeah, how do you
end up cooling all those billions of components so that
they can continue to operate without getting to that critical threshold of heat. It's not a perfect analogy, but

(19:47):
I would say it's sort of like trying to cram
more computers into a server bank. Sure, and if you
just keep cramming more and more in, well, that's great.
You can fit more computers into that tiny little room. But eventually the room's gonna get really hot, and if you don't have a good capacity for cooling that room down, then eventually those machines break down. They stop. You know,

(20:11):
you may have heard that in overclocking competitions, where people
are trying to massively push the limits of what their processors can do, they might go to such extremes
as cooling their systems with liquid nitrogen in order to
get rid of that heat, because since they're running them
so hard, they're generating even more heat than they normally would.

(20:33):
And these are, like, top-of-the-line processors. Oh, I have a friend actually who is buying himself an insane computer, and he's going to get a liquid-cooled computer. Liquid cooling is pretty standard for things like a high-end gaming rig these days because it's more efficient than air cooling. I did not know that.
I mean I knew that you could do that, but

(20:54):
I did not know that liquid-cooled computers were things somebody would just have in their house as opposed to in a lab. Right. Yeah, I would say that about maybe
four or five years ago, it became kind of like
the gold standard of how to cool your gaming rig.
Before that, it was all you know, GPUs that had

(21:14):
their own dedicated fans, so you'd get more and more fans inside your computer, which would make it louder and louder.
But it's not as unusual now as it used to be. It used to be that, if you had ridiculous amounts of money and you wanted to really increase your swagger around the PC gaming circles, you would

(21:35):
invest in a water-cooled system. It's less extravagant now. It's still pretty extravagant, but it's not ridiculous. It's not like the Rolls-Royce it used to be.
But yeah, we've seen engineers work really really hard to
get around these problems. But that can only go so long, right,
you do start to bump up against this issue, and

(21:57):
And by saying engineers work really really hard to get around these problems, that's where we're starting to see where the issue really is. Like we said at
the very beginning, Moore said, as long as it's economically advantageous,
as long as it makes sense from a money standpoint,
this will continue. So those engineers working really hard, somebody's

(22:20):
got to pay their salaries, exactly. And if they have
to work really hard for a really long time, that
technology gets really expensive. So this kind of leads us
to a recent report called the International Technology Roadmap for Semiconductors, or ITRS, which was an annual report. Or "itters." It was an annual report until

(22:43):
this most recent one, which was technically the two thousand fifteen ITRS, but it came out this year. I'm sorry, I didn't mean to step on your emphasis. You were saying, it's done. This was the last one that came out; the most recent one is the final one. Bye-bye. Shed a tear for "itters." Yep. So they found that by twenty

(23:03):
twenty one, so not far at all, we will no
longer be shrinking transistors. We will have reached a limit.
That's five years, folks, five years, five years. We will
no longer be making smaller components for these microprocessors. We
will have gone as small as we can go. Now,
when I say as small as we can go, I

(23:25):
don't necessarily mean that we'll have reached a physical limitation, like, as in, the laws of physics will deny us the ability to go any smaller. But from an economic standpoint,
that will be as small as we can go, because
at that point it will be so complicated that to
try and go smaller would be more expensive than you
could ever recapture. So I've got an idea. I think

(23:48):
instead we need to be paying these engineers to design
a simulated universe that we can all plug into, in
which there are ways to make cheaper, smaller microprocessors. But
if they do that, then it's clear that we already
exist in a simulated universe, and then we all have
existential dread. Well, of course we do. We all got
to wait until that time somebody gets bored and turns

(24:08):
us off. If we get to a point where we
can create a simulated universe, the argument from a philosophical
standpoint is that we must already be in a simulated universe. Yeah,
I know that argument, because the simulated universes will outnumber the real universe, right? And the odds of us being the first universe to create a simulated universe would be incredibly small. Well, we're getting off topic. Yeah,

(24:30):
we are. But that's a fun, fun thing to talk
about anyway. So, existential dread set aside, let's talk real dread: by twenty twenty-one we cannot get those components any smaller, for whatever reason. But that doesn't necessarily mean
that we've reached the end of the advancement of computer

(24:51):
power, computer speed. Yes. So some people have said, does this mean Moore's law itself would end? And the answer is, not really. And the reason for that answer is because, again, we have tweaked our definition of Moore's law enough. If we're taking the definition of every eighteen to twenty-four months we're able to double the processing speed or power of a computer, that

(25:15):
is something we could probably continue to do for a
few more years. So what we would have to do is find new ways of designing microprocessors beyond what traditionally we have concentrated on. So your typical microprocessor you can think of as essentially two-dimensional, right?

(25:36):
It's laid out on an x-y grid, essentially. And we've made the various little components very tiny, so you can think of the actual squares in that grid as being on the nanoscale. They're really, really small. So we've
packed a ton of them into that little form factor.
But we've still really only focused on width and length.

(25:57):
We haven't gone into another dimension, the height dimension. Now
that sounds like it introduces a whole other world of design constraints. And yeah, yeah, when you want to go vertical, which is exactly what a lot of people are talking about now doing, vertical stacks of transistors. So
you're building transistors on top of transistors, not just left
and right, but on top of each other. How do

(26:19):
you have them communicate with each other? How do you
build vertical transistor gates and logic gates? How do you
cool that? Because now you're increasing the density of transistors
even more by going up, not just going out, and
by that you're going to create more heat. So you've
gotta figure out how to disperse that heat. One of
the arguments I've seen is using microfluidic channels and

(26:43):
some form of magic heat dispersal liquid. I think I've
seen stuff about microfluidic channels. I should, I guess, maybe insert here that, to manage the heat issue, I have seen proposals of getting away from silicon. Yeah.
Now I don't know if that's ever going to be

(27:04):
an economically feasible alternative, but that's one thing people talk about,
is like going to carbon nanotube based computing or something
like that, which they think can do a better job
of managing heat problems. Yeah, and that would definitely at least buy us some more time. In fact, the ITRS says that we will

(27:25):
still hit a fundamental limit just a few years after our first fundamental limit. But this limit would be due to the fact that we would reach peak heat density in a silicon-based chip. And if
we are using silicon and we're building vertically, we won't
be able to pack more transistors into a space after that point,

(27:48):
because we'll be generating so much heat that we will
not be able to disperse it fast enough to allow
it to continue to work. It will break down just
because the heat it generates will be too much for
it to handle. So that's a real issue. We probably will see a lot of innovation around those cooling systems. The Ars Technica piece that I read that kind of

(28:09):
inspired this episode had mentioned the possibility of electronic blood. Yeah, kind of an electronic version of blood that would just recirculate throughout the system and pull heat away. I did not click on it to go into another piece to read up

(28:29):
on exactly what that was, because, I love Ars Technica. I love that site. Great site. Those guys know their stuff. And when you get through one Ars Technica piece, you're like, I might need to take a break before I read another one. But they're great. That's more of a limitation on my own brain power than on Ars Technica. I really do like that

(28:51):
site a lot. So you need some of that electronic blood for your brain. Obviously I do. So it's possible that we will hit a limit within our lifetimes that we just cannot work around using the silicon-based technology. Right, we may hit a point where we say, okay, that's it, that's as many transistors as we

(29:11):
can fit. Well, we'll be able to play with the arrangement for a while, but eventually we're going to hit the idealized version of that, and then that's as peak as we can get with this particular form factor. Now, it might mean at that point that
we have to build bigger processors. Right, like, the size may increase, which means that computer sizes will increase.

(29:33):
It also means that you will quickly get to a
point where it's impractical for handheld devices. So we may
see our handheld devices max out on their processing power well before we see computers do. What if we go back to desktop machines? What if we go back to machines that are taking up, like, an entire room or an
entire floor of a building. We used to make jokes

(29:55):
about that all the time. Yeah, about how big computers used to be, and now they fit in your hand. Well, that might be the future. Yeah, we may not be
able to avoid it once we hit these fundamental limits, unless, again,
like Joe was saying, we find a totally new way
of creating either the chips that we're used to now

(30:17):
but using different materials, so that with better heat dispersal we're able to push that off a little further,
or we come up with an entirely new way of
processing information so that we can build on that. Well,
that's something that I kind of wanted to end with here.
Not that we have any answers on this, but I
do think it's an interesting question to ponder, essentially, what

(30:40):
will the future physical side of computing look like? Because
we've entered a sort of stage of humanity where exogenous computing, information processing taking place outside of the human brain, is a fundamental part of our culture. And here we are, and I think that's always going to have to be a part of what humanity is from now on.

(31:01):
But will it always have to take place based on
the same physical architecture. So right now we have silicon chips,
people are playing with other things like you know, carbon
nanotube computers and stuff like that. There are experimental things,
but mostly it's still all these silicon microprocessors. But what else could there be? I mean, computing is an

(31:23):
abstract concept, and it's obvious how the concept of a semiconductor as an electronic device enables it, by allowing sort of on-and-off switches that let you perform logical operations. But I wonder, what are the things that are next? What's out there that we're not seeing? What are ways to use the material and

(31:43):
the energy of the universe to process information for us?
I think in the short term, and this is just me kind of talking off the top of my head, it would be offloading all of that computing to a third party that doesn't worry so much about the space constraints of having enormous numbers of processors to use. So in other words,

(32:08):
it's again that movement to having your terminal be more like a dumb terminal. It would have processing capabilities of its own, but it would
be leveraging the processing capabilities of a much more powerful
machine with lots of processors that is run by some
company like Amazon or Google or something along those lines,

(32:31):
and at that point you're really more focused on the bandwidth issue between your device and the home base that's doing all the computing. It's like streaming-based gaming. Yeah, which has not worked out great so far because of things like latency and other issues.

(32:53):
I mean, we've seen streaming-based gaming services come and go. I think it was a fine idea. I just don't think that the realization of that idea was powerful enough to justify moving to one of those platforms. But that might not always be the case. We may see that change in the long term. If you're

(33:17):
talking about, like, well, at some point you're gonna hit the peak of that too, right? You're like, it's just not gonna make sense that, hey, we need to add eighteen new machines to the farm over there, you know, because a new Halo game has come out and it

(33:38):
had a jet pack, so it's a thing now.
I think we will be looking at truly experimental ways to do computing, well beyond what classic computer science has taught us. It'll be out of necessity; we'll have to develop that. And you know, you will see other variations, like we'll see

(34:00):
quantum computing. But as we've stated in previous episodes, quantum
computing is great for certain types of computer problems, but
is no better than classical computers with other types of
computational processing. So if you wanted to do a classic
computer problem, or you wanted to play a game, you know, let's say, let's stick with gaming. You want to play

(34:20):
a high end computer game. A quantum computer is not
going to run it better. In fact, it may run
it much worse than a classical computer. But if you
have a problem that is specifically suited to be solved by things like parallel processing, the quantum computer could do a great job

(34:41):
of that. It could also end up breaking all the encryption all over the world in record time, which is kind of terrifying, but we've talked about that in previous episodes. Yeah, so we've still got some time with Moore's law. Maybe a little less than a decade from now, we might actually say, well, guys, don't make more complicated

(35:05):
software for a while. Okay, you can't stop them. It's gonna continue to bloat. You can't stop them. Well, yeah,
but if you no longer have an increase in processing power,
that bloat eventually gets to a point where you can't
run it on the machine, and then you do have
to stop. That's not their problem. Yeah, it kind of is.

(35:26):
It kind of is when that PC World magazine article comes out: software doesn't run because it's too complicated for any existing computer. You gotta back
off a little bit. But this was an interesting subject.
I'm glad that we tackled it, and it was fun
to revisit Moore's law. It's one of those subjects I

(35:47):
always love to chat about because, one, I think it's widely misrepresented in casual conversations and media coverage. A lot of people don't talk about what the actual original intent behind it was. And also just, you know,
every few years you see discussions of Moore's Law coming

(36:11):
to an end. I wonder if we could find, I actually tried to look for this before the episode, but I couldn't find anything, but I'm sure somewhere out there is the great first "The End of Moore's Law" article. Yeah, I would love to see it. Did it happen in the nineteen eighties? I'm sure it was in the eighties. I'm sure it was in the eighties. I would love to

(36:31):
see a timeline of all the major articles that have
come out that said Moore's Law is over or whatever,
because I'm sure it has happened at least a dozen times. And you know, the remarkable thing is that
engineers have found new ways to defy the end of

(36:51):
Moore's Law to keep it going. So it's entirely possible
that within a decade we're talking about a totally new
technology that does continue the spirit of Moore's Law, even
if it has moved away from what we think of as traditional integrated circuit components. Who knows, it's a

(37:12):
tall order, but you know, once upon a time the transistor didn't exist. So it's not like we're talking about something that is beyond us entirely. It's not that it's completely implausible. I mean, it might be a long shot, but it's still possible. All right. Well, that wraps up
this episode, and hopefully in our next one, Lauren will

(37:33):
be back with us and we'll be able to jump
back into the future again like we love to do.
If you guys have suggestions for future episodes, or you've got any comments or questions, send them to us. Our email
address is fw thinking at HowStuffWorks dot com, or drop us a line on Twitter; we're fw thinking there,
or just search fw thinking on Facebook. Our profile

(37:56):
will pop up. You can leave us a message and
we will talk to you again really soon. For more on this topic and the future of technology, visit FW Thinking dot com. Brought to you by Toyota. Let's

(38:23):
go places,
