Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to Tech Stuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
How Stuff Works and iHeartRadio, and I love
all things tech, and today we're going to look at
a classic episode that aired back in July of two
(00:28):
thousand twelve, July sixth to be precise, and it is
titled Tech Stuff Looks at Supercomputers. This was a
fun discussion to have with my former co host Chris
Pollette, because I, you know, grew up in an era
of supercomputers and only had sort of a vague
idea of what that term meant for many years. I'm
(00:51):
sure there was a time when I was a kid
where I thought of it as a computer that wore
a cape. But as it turns out, it gets a
little more complicated than that. Let's listen in So, Chris,
if I were to ask you, just off the top
of your head, how would you define a supercomputer? What
would you say? Well, if I hadn't already made the joke,
I would have said it was a computer in the
cape and tights. But no, honestly, I would say supercomputer.
(01:15):
Is a computer that can do a lot more calculations
in a shorter period of time than the machines sitting
on our desktop. Yeah. I think of it as sort
of the bleeding edge of what a computer is capable
of doing. Something that, that still fills a room,
even though typical computers these days don't need to fill
a room because it's that big. It still has that
(01:37):
much computing power, right, right, And the term comes from
the nineteen sixties, and, uh, in order to really kind
of understand the span of this, I think
I was going to talk a little bit about the
last computer I could find that was a powerful computer
that existed before people started talking about supercomputers, which
(02:02):
was the IBM 7030 Stretch. Yes, that was the
one that was made with elastic. Yes, you gain a
couple of pounds, your computer can still, you know, fit.
It was the Mr. Fantastic of the computer world. No, because
it was not a supercomputer. It took up two
thousand square feet back in the day, this being the
(02:26):
early sixties. Two thousand square feet. It cost thirteen million dollars, which,
if you were to translate to today's cash, would be
ninety one million dollars. It's a lot of money. It's
a lot of cash. So that was the fastest computer
at the time until a fellow named Seymour Roger Cray
(02:46):
showed up. Yes, Mr. Cray. Yeah, Cray ends up
being a big name in supercomputers, particularly in the sixties,
seventies and up to the mid eighties. That was the
name in supercomputers. And he was working for a company
called Engineering Research Associates, or ERA, which actually
(03:07):
grew out of a naval operation, um, naval being the U.S. Navy,
not belly button. I was gonna, you were looking at
me like, make the joke. No, no, not that. It was a
navy project. How about that as in as in the
military force, not the color It was a Navy project
that was all about code breaking, all right. So there
(03:28):
was this project about code breaking that eventually kind of
spun off and became an actual company all on its
own called Engineering Research Associates, and it branched out beyond
code breaking, although it took all the code breaking work
it could get. Yeah, we talked about the Enigma, um,
some episodes back, um, and we were talking about the
(03:48):
bombe, um, and yeah, there was early, uh, that was
really the early application for supercomputers was you know, needing
to crunch a lot of data very quickly, and there weren't.
There weren't the kind of applications that we have now.
We'll get into that, I'm sure in a few minutes.
But but yeah, I mean that was you know, why
(04:09):
would you need a supercomputer? You know, that was That's
probably about the only thing I could think of where
people were needing to crunch that kind of information as
quickly as possible. And defense. Yeah, typically, especially with the
early supercomputers, they were really designed for very specialized computing,
so not necessarily specialized from the ground up for
(04:33):
one particular type of computing, but they were, they were
not meant to be general computers. They were meant to
do, no, admiral computers, because they were the Navy,
that's true. Uh, no, they were, they were meant to
do a specific task very very well, and that's that's
all they were meant to do. Now, Craig had an
interesting philosophy he said, and this is this is a
(04:55):
quote from him. He said, anyone can build a fast CPU.
The trick is to build a fast system. And that
was the secret to creating the first supercomputer. He
realized that if you created a processor that was really,
really fast, that did not matter if it couldn't get
the data it needed to execute operations upon fast enough.
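To put some rough numbers on Cray's point, here is a tiny back-of-the-envelope model; this sketch is not from the episode, and the function name and every figure in it are invented purely for illustration:

```python
# Toy model of Cray's point: a fast CPU is wasted if memory can't feed it.
# All names and figures here are invented for illustration.

def effective_flops(cpu_flops, memory_bytes_per_sec, bytes_per_operation):
    """Sustained rate is the lesser of the compute speed and the rate
    at which the memory system can deliver operands."""
    memory_limited_flops = memory_bytes_per_sec / bytes_per_operation
    return min(cpu_flops, memory_limited_flops)

# A "fast" 10-megaflop CPU starved by a slow memory path:
print(effective_flops(10e6, 8e6, 8))    # only 1 megaflop sustained

# The same CPU with a memory system sized to keep it busy:
print(effective_flops(10e6, 160e6, 8))  # the full 10 megaflops
```

However fast the processor, the sustained rate is capped by how quickly operands arrive, which is exactly the "fast system, not just fast CPU" idea.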
(05:19):
So he saw the need to create a system that
would move data through very, very quickly, not just process data,
but move it so that means it needs a lot
of memory, it needs a very fast pathway from memory
to processor. There are a lot of pieces that have
to be put in place, and he saw this very
early on, and so using that philosophy, he designed a
(05:40):
computer back in nineteen sixty-two that was called the
CDC 6600. Now, CDC stands for Control
Data Corporation. Yeah, um, ERA was taken
over by Remington Rand, um. And that's, uh, that's the
name I remember because, you know, uh, I still remember a
(06:01):
lot of those old machine names UM from stuff that
I found in my uh dad's collection. Of course, he was,
you know, a mechanical engineer UM before he retired, and
you know, so he was interested in all kinds of machines.
And I didn't know what I was looking at at
the time, of course, you know, but they were all
these UM science and computing magazines laying around and that
(06:23):
name I recognized, also Unisys, because Remington Rand became
Unisys, um, and probably a lot more of our listeners are
familiar with that name. But he partnered with William Norris
to start Control Data Corporation, um, back in nineteen fifty-seven,
um, and really at that point, the UNIVAC from Remington
Rand and IBM were the computing companies. And you know,
(06:46):
IBM has been the heavyweight for so long, but CDC
was the first, uh, you know, upstart to really make
a dent in their, uh, stranglehold on the industry. And
Cray wanted to join CDC fairly early on, but apparently
he was needed for a project UM that would not
(07:08):
let him leave exactly when he wanted to. So once
he did leave, that's when he designed the CDC 6600,
which was officially announced in nineteen sixty four, so designed
in sixty two, announced two years later, and it was
the first commercially successful supercomputer, with a price tag of
between seven and eight million dollars, sometimes going up as
(07:28):
high as ten million, depending upon the configuration that the
customer wanted. UM. Now, in today's cash, that would equal
about sixty million dollars, so thirty one million dollars cheaper
in today's money than the Stretch computer, and it was
actually much more powerful. It had four hundred thousand transistors
(07:50):
and one hundred miles of wiring, and it was the
size of about four filing cabinets, so it's also significantly
smaller than the Stretch. Didn't take up two thousand square feet.
The clock speed was around a hundred nanoseconds, and it
had sixty-five thousand sixty-bit words of memory. So
(08:11):
this is kind of an odd time in computing. We
hadn't really settled on the thirty-two or sixty-four bit
kind of model. This was before that. Um. It also
used six high speed drums as sort of a temporary
storage area. It had a central storage that used magnetic tape,
and it used the FORTRAN 66 compiler. Um,
(08:33):
the equivalent to today's machines means that it would have
about a ten megahertz processor. Yeah, well, that could
work up to forty megahertz in speed. Well, it
could do three million floating point operations per second.
Yeah, so three million, that would be
a megaflop, three megaflops, right? So we're gonna
(08:55):
get into lots of different flop terms later as well,
and they get incredibly huge. Of course, you have to
keep it cool because otherwise it breaks out into a
flop sweat. And that's true. Uh, well, not the flop
sweat part, but you do have to keep it cool.
As we know electronics, when you're running electricity through them,
one of the byproducts is heat, and heat, as it
(09:16):
turns out, is not a great thing for electronic components.
It can make stuff expand, contacts can lose connections, so
that stuff starts to malfunction. An entire system could shut down.
So the CDC had a cooling system that was provided
by a special chemical: Freon. Yeah, they used Freon
(09:39):
to cool the system. In fact, they
would use Freon for a while before finally having
to switch to a different coolant, because Freon just
wasn't efficient enough eventually. Now, at the time it was still doing
the job. So Cray was also an innovator in another way.
(10:00):
The Stretch, the IBM Stretch, um, was sort of a hybrid machine.
It had transistors and vacuum tubes in it, um, and
that's, I think, one of the reasons why Cray's
machines were smaller. The 1604, which preceded the
6600, um, was one of the very first
(10:21):
to use transistors only, so it was also a transistor machine,
and so it would take up a lot less space
than the vacuum tubes. And I would imagine that based
on my knowledge, my personal knowledge of vacuum tubes, might
have been a little cooler simply because of that. Yeah,
I would imagine that they wouldn't have had to have
as dramatic an AC system to keep the
(10:44):
room bearable, because vacuum tubes do put off a lot
of heat, um. Another interesting IBM-CDC connection
here is that Thomas Watson Jr., who was IBM's
CEO at the time, wrote a famous
memo at that time to IBM employees, and he said, last
(11:06):
week Control Data announced the 6600 system. I understand that
in the laboratory developing the system, there are only thirty-four
people, including the janitor. Of these, fourteen are engineers
and four are programmers. Contrasting this modest effort with our
vast developmental activities, I fail to understand why we have
lost our industry leadership position by letting someone else offer
(11:28):
the world's most powerful computer. Cray's response was reportedly, well,
there's your problem. Essentially, Cray was saying that, you know,
perhaps IBM's approach was a little burdened by size,
That IBM had grown so large that to manage a
project like this was very difficult to do because it
(11:51):
was just too big. So that's an interesting idea that
an organization needed to be kind of small and nimble
in order to pull something off like creating the
world's fastest computer. He followed up the CDC
6600 with the 7600, which had a sixty-five thousand
five hundred thirty-six sixty-bit word memory and a
(12:13):
clock speed of twenty-seven nanoseconds, uh, and actually
in practice ran about five times faster than the 6600.
But then Cray left CDC and he formed
his own company, Cray Research, and in nineteen seventy-six
he introduced the Cray-1, which, if you've ever heard of
the Cray supercomputer, that's what this is. The
(12:37):
Cray-1 was the first of those. It had a clock
speed of, well, its processor ran at eighty megahertz.
And back at this time these supercomputers were still
using a single CPU, so that was kind of interesting,
too. These were single-CPU systems. So it had an eighty
megahertz processor, a sixty-four bit system. It ran at
(12:59):
a hundred thirty-six megaflops, so one hundred thirty-six
million floating point operations per second, and it had one
thousand six hundred sixty-two printed circuit boards that made
up the components of this computer. It cost between five
and eight million dollars, depending on how you wanted it
set up, and in today's cash that's about twenty five
million dollars. So we see that the processor speed is
(13:21):
increasing and the price is coming down. Often the size
of the computer is decreasing as well, although that that
also flip flops over the years because while the solid
state electronics definitely brought the size down, eventually the way
we pack in more speed requires more space. But we'll
(13:45):
get into that. Okay. So after the Cray-1 came
the Cray X-MP. Yeah. This is, uh, this is
interesting because Cray realized, also, in addition to the fact that
he knew that the components, all of the components,
the entire machine, was important and not just the processor,
(14:07):
he also realized, uh, early on, that parallel processing
could also speed things up. Um. Now it's common for
us to have multi-core processors in our desktop machines
or laptops, or in fact, now we're starting to see
them in our mobile devices. UM. But you know, at
(14:27):
the at the time in the seventies and eighties, this
was still something sort of newish, UM, and it's not
something that everybody realized. Uh. So the X-MP actually
was two Cray-1 computers linked together, um, and using
those two machines together in a multiprocessing effort, um, they
(14:49):
could triple the performance of just one Cray-1, um,
which is something interesting to note. And, uh, the Cray-2
had four processors in the same machine, and that was
the first to exceed one billion flops, as Britannica
tells me. Yeah, uh, it actually could have up to
(15:09):
eight CPUs, the Cray-2. Um. These machines often
over time were upgraded, so the initial specs you
would get when they were first released were one thing,
and then by the end of the run of production
they would be better. I mean, which makes sense. I
mean we see that in computers all the time. We
definitely we tend to call them different model numbers now,
(15:31):
but the same sort of thing happens. So back in nineteen
eighty-two, you had the Cray X-MP with a hundred five
megahertz CPUs running around two hundred megaflops each. Uh,
then, if they had up to four CPUs, you could
get eight hundred megaflops going, and that was pretty impressive.
It had the equivalent, by the way, of a hundred
(15:53):
and twenty-eight megabytes of RAM. So yeah, you think
about that: one hundred twenty-eight megabytes of RAM in
nineteen eighty-two, that was considered bleeding edge for
a supercomputer. Um, and the storage units for the Cray
X-MP were the size of a file cabinet and they
could hold up to twelve gigs of storage. Because, hey, I
have a flash drive in my bag with me that
(16:16):
has eight gigs. Yeah, and you can find, you can
easily find twenty gig or more flash drives,
which you know, you think about that, that's something that
is small enough for you to carry on a key chain. Well,
back in nineteen eighty-two you had a file cabinet sized device
that could hold twelve gigs and that was considered massive,
like a massive amount of information. So yeah, time really
(16:39):
does change things, doesn't it? So, yeah, the Cray-2.
That's when they switched from Freon to Fluorinert
as their coolant. I'm sorry, but that sounds like a
made-up alien name from a, from an animated movie.
Technically all names are made up. I know. That's I
(17:03):
just blew your mind. What if there were no hypothetical questions?
Turn. So, the Fluorinert. The reason why they switched
was because they had at that point packed the components
so tightly together that Freon was not efficient enough
to cool them. So they switched from Freon to
Fluorinert. And there's a little Fluorinert I've had
(17:25):
around somewhere. And then they also had to figure out
a new way to access the memory on the Cray-2,
because at this point they had reached that, that point
that Cray had mentioned earlier, about creating a CPU that
can process information faster than it can pull information in.
So they found they would actually dedicate processors to getting
(17:46):
data from memory and funneling it into the central processing units.
And uh, this was this was really important. It was
what kind of led the way into threading and loading memory. UH,
CPUs that have that capability to load information from memory,
preloading things that kind of came out of this work.
(18:09):
In fact, a lot of the uh, the advances that
we see in personal computers um are really possible because
of the pioneering work that was done in supercomputers. It
was stuff that that found its way from the engineering
of supercomputers into personal computers, often at a completely different
scale, but a similar approach. Now, after the Cray-2,
(18:32):
that's when Japan started to produce some supercomputers that were
uh that were actually faster than anything that was being
produced in the United States. So up until this point,
it was all US; that country dominated
the supercomputer industry. But, so this is,
(18:54):
you know, again, the Cray craze, if you will, lasted
from the sixties all the way into the eighties. Well,
in ninety-six, Japan introduced the SR2201,
which had two thousand forty-eight processors. So remember, the
Cray-2, that was up to eight processors. The SR
2201: two thousand forty-eight processors. Yep,
(19:17):
Yeah, I count two thousand forty more processors with that computer
than with the Cray-2. My math could be off,
I'm an English major. And it could have
up to six hundred gigaflops of processing. That's kind of crazy. Um. Yeah.
I also, I also feel like we would be remiss
not to mention the efforts of Danny Hillis, um. W. Daniel
(19:39):
Hillis was a grad student at MIT, the Massachusetts
Institute of Technology, when he realized that distributed computing was
the way of the future, if you will. Um. He
started Thinking Machines Corporation, um, and the CM-1,
which was the first of his machines, came out
in eighty-five. It had sixty-five thousand five hundred thirty-six one-bit processors
(20:05):
grouped sixteen to a chip. Interesting. Um, that's a, that's
a really interesting approach tiny tiny processors. Yeah, so you
know, wow. But yeah, that didn't come across in my, um, my,
my research, which is why this is actually really, like,
my mind is really going as I'm thinking about that
sort of architecture. That's really an interesting approach. Well, it's
interesting, too, to see how different, uh, see, Jonathan and
I do our research separately on purpose so that we,
uh, come up with different things in these cases, and, um,
so it's funny that I would have come across that. Also, well,
I think of Danny hillis because I've seen his name
a lot in things like a long Now Foundation and
(20:49):
people, uh, he hangs out with people like Stewart Brand,
Kevin Kelly, UM, fascinating people. But UM anyway, Yeah, that
that's uh, that was one of his contributions. And you
see that in again in today's machines. I mean we
have this, you know, with us every day. But you
know this is uh, this is when we started to
realize that you don't necessarily have to go buy More's
(21:11):
Law and wait until next year's chip comes out with
twice as many processors on it. You can you can
do this by dividing up the work. Yeah. And, and
in fact, that's another good point about the SR
2201, the computer from Japan, because, uh, in
order to use these two thousand forty-eight processors, there
was a new development in computer science, which was called
(21:34):
multiple instruction, multiple data, or MIMD. Yes. Now,
this is the idea of being able to solve problems
by pulling in information from from memory and feeding it
to different processors that are all using different operations on
that data to come to a single solution, not necessarily
(21:55):
a single solution, but, that's, I'm using that as
an example for this, for this explanation. So this
MIMD approach is what allowed us to develop
multi core processors, because in this case we're still talking
about single processors that are all grouped together. Eventually we
will get to the point where we have multi core
(22:16):
processors where a single processor has multiple cores and each
core can work on part of a problem or separate
problems to solve things faster, to to get to a
conclusion faster than they would if it was just one
single processor, even if it was a really really fast
processor working on a series of problems. So I always,
(22:39):
I always use this analogy. Imagine that you have one
super smart math genius taking a math test, and the
math genius is going through and solving all of these problems,
and he or she is able to do this flawlessly,
able to solve all the problems, but it takes a
(22:59):
certain time to get through the test. Then you get
that same test to four above average math students. They're
not geniuses, but they're there. They can hold their own.
And you divide it up, say all right, you take
this this one fourth of the test. You take this quarter,
you take this quarter, and you take that quarter and
the four students together start to work. Those four students
(23:21):
are very likely going to be able to finish the
entirety of that test much faster, each of them working
on a quarter of it, rather than the genius who
is working on the full thing at the same time.
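As a rough sketch of that test-taking analogy, here's a short Python model; the problem counts and per-problem times below are invented for illustration, not figures from the episode:

```python
# Toy model of the "one genius vs. four students" analogy.
# Problem counts and per-problem times are invented for illustration.

def time_to_finish(problems, seconds_per_problem, workers):
    """Wall-clock time when `workers` split the problems evenly and
    work in parallel; the largest share sets the pace."""
    largest_share = -(-problems // workers)  # ceiling division
    return largest_share * seconds_per_problem

# One genius: faster on each problem, but working alone.
genius = time_to_finish(problems=40, seconds_per_problem=30, workers=1)

# Four above-average students: slower per problem, but in parallel.
students = time_to_finish(problems=40, seconds_per_problem=45, workers=4)

print(genius)    # 1200 seconds solo
print(students)  # 450 seconds split four ways
```

Even though each "student" is fifty percent slower per problem, splitting the work four ways still finishes in well under half the time, which is the whole case for parallel processing.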
Even though the genius is smarter and can work faster
on each individual problem, collectively, those four students are going
to solve that test faster. That's the philosophy behind both
(23:42):
grouping cores together and making them a parallel processing unit
or taking a multi-core approach to a CPU. Yep,
and you can, you can thank Danny Hillis for figuring
out the idea of massively parallel computing, um. But you
know that that's a problem though too, because instead of
having two machines running side by side and linked together,
(24:05):
now you have to figure out how you're going to
parse all those instructions between all those different processors. So
you have to have the software or the operating system
that will give instructions to each of the processors actively,
essentially directing traffic. Yes, this is, this is
kind of like, it's not It's not a simple thing
to figure out. It reminds me of Intel's tick-tock
(24:28):
approach to developing processors. You think of the tick being
the physical machinery that's going to do the processing, and
you think of the talk as the software that's optimized
to work on that physical hardware to make it really
live up to its potential. And then you have another
tick where you've got an advancement in the physical hardware,
(24:51):
but perhaps the last generation of software isn't really optimized
to work on that, so you have to make new software.
This is a continuation. In fact, that's one of the
things that people say is a barrier to artificial intelligence
to the point of having a computer that has self awareness.
It's not necessarily that we can't reach the physical uh
(25:14):
requirements we would need in order to have a computer
be able to have some form of self awareness. It's
the idea that we could throw as much horsepower at
the problem as we wanted to. Without the software, it
just won't happen. We'll be rejoining this classic episode in
just a moment, but first let's take a quick break
to thank our sponsor. There's one company name we haven't
(25:43):
really mentioned yet, and it's big. I mean, we talked
about it a little bit just then, but not in the
terms of supercomputers. It's a big name in computer architecture,
but it wasn't a really big name in the whole
supercomputer story. And that's Intel. Now, Intel was not just
sitting back during this whole time. Now, granted, Intel's main
focus is on enterprise and consumer processors, which are not
(26:10):
completely analogous to what you find in supercomputers
at this time. Right, that would change, but not immediately.
But Intel did develop something called the Paragon, which was
supposed to be, you know, another fantastic supercomputer and it
could support up to four thousand processors using this M
(26:32):
I M D architecture. But it did not succeed in
the market. It just sort of, well, it flopped in
a different way. The other kind of flop, yeah, the
bad kind. So that didn't really go anywhere, but it
did again sort of push this trend of parallel processing
and MIMD. Uh. The Japanese came out
(26:53):
with a couple of other computers called ASCI Red
and ASCI White. Intel also had an ASCI Red, um. Yeah. Well,
actually this, this goes back to the Comprehensive Test Ban Treaty, uh,
that the United States signed, Um, they needed a certification
program for the nuclear weapons that they had built up
(27:16):
and, uh, so what they started was the Accelerated Strategic
Computing Initiative: ASCI, with only one I, instead of ASCII,
characters, yes, with two I's, just to clarify. Yes, I'm
glad you did, thank you. Uh, and ASCI Red, yes,
was built at Sandia National Laboratories in Albuquerque, New Mexico.
Intel helped them out with that, and that, that was
(27:38):
the first machine to get a teraflop. Yeah, and
it was the first one to break the teraflop barrier.
It did that with six thousand two-hundred-megahertz
Pentium Pro processors, nine thousand seventy-two of them, well,
six thousand at first. It then eventually was upgraded. The
very first one had six thousand and the very last
one had nine thousand, um, Xeon processors. And
(28:01):
it actually hit three point one teraflops at the
end of its production life. So yeah, like I said,
you know, when we give these numbers, there are different
ones because there's a certain amount that was available when
the computer first premiered. Then there was like the average
amount during the computer's lifetime and then the amount that
was available at the very end of its run time.
(28:21):
So these numbers do change a little bit depending upon
which source you're reading and which version of the computer
they're looking at, because again, these computers are they come
in a range of models, so not all of them
are exactly the same. Now, while we talk about playing
games like chess, you know that that's one of the
(28:42):
big, uh, consumer visibility issues with supercomputers. You don't
see what supercomputers do. And that was a way for them,
IBM in particular, to achieve notice: taking on
people like Garry Kasparov, chess masters worldwide, with a supercomputer.
(29:04):
Can a computer, quote unquote, outthink a human? Well,
the point of ASCI was, again, one of those behind-the-scenes
things. It was a very military thing. It
was more like WOPR in WarGames, actually, uh, actually
exactly like that. The point was to simulate nuclear tests, um.
(29:25):
And that was why they needed a lot of computing power,
uh and something a machine that could run a lot
of calculations very quickly, because they wanted to uh, you know,
this is not something you want to do. Hey, well
let's uh, let's test out fifty nuclear warheads. Yeah, this.
You know, they wanted to do this with a computer simulation,
(29:45):
and uh so that's why they started the initiative. It
was not a game, but a challenge. Hey, let's you know,
let's keep coming up with newer faster machines because we
need newer faster machines to run nuclear simulations and simulations
in general were a big part of what these supercomputers
were put to use for. I mean like climatology for example,
(30:09):
weather predictions that was a big requirement as well, as
supercomputers have been put towards that to try and help
map and predict climate change and just weather patterns, not
not just climate but weather, day to day weather, and
also other simulations as well. Not to mention crunching data
from facilities that generate lots and lots of information. So um,
(30:34):
things like the SETI Institute, the Search for Extraterrestrial Intelligence. Yes,
that they would use very powerful computers to try and
crunch all the data they would get from radio telescopes.
You also had things like the Large Hadron Collider and
other super colliders that generate lots and lots of data
and they need these really fast computers in order to
process the data and make it meaningful. So UM. Moving on.
(30:58):
So right around this time, when the ASCI Red
comes out, um, that's when there was a shift in supercomputing.
So before there were all these customized UH computers that
had their own processors or had thousands of processors running together. UH.
But at this point it became possible to actually build
(31:21):
a supercomputer with off the shelf parts. You could actually
get enough computers together and link them together to perform
as a supercomputer. And this was also when there became
a shift to using the Linux operating system. UH. So
Linux kind of replaces Unix as the OS of choice
(31:43):
for people who are designing supercomputers, which is nice because
now you can tell the company... never mind. In
two thousand two, Japan comes back with the ASCI
White, which was a thirty-five teraflops computer.
It was the NEC Earth Simulator, and it cost a
(32:03):
hair under a billion dollars. It's a lot
of hairs, actually. Hundreds of millions, a lot of hairs. If
anyone wants to give me a hair in that sense,
I will take it. Uh. And two thousand four, IBM
comes out with the blue Gene slash L computer and
had sixteen thousand computer nodes and each node had two CPUs.
(32:25):
I'm gonna be thinking Bowie the rest of the day now,
So yeah, thirty two thousand CPUs. Ultimately, if my math
is correct, and then that could run at seventy teraflops,
so twice as fast as the ASCI White, and a
two thousand seven version of this could actually manage up
to six hundred teraflops. And it had a hundred
thousand computer nodes, so two hundred thousand processors. With that, we're
(32:48):
starting to get into some pretty ridiculous computers from, you know,
if you're looking at it as, hey, I own a
computer that's got a single processor, this one has
two hundred thousand of them. Yeah. Yeah. It also sort
of, uh, makes Apple's claim in the late nineties, um,
sort of silly, um, because, well, the federal government classified
(33:12):
a supercomputer, um, I can't remember exactly when it was,
it was in the late nineties, uh, as
a machine that would run a gigaflop, and, um,
back when Macs were still running on, on IBM
PowerPC processors, um, there was a Mac that they
advertised as being a supercomputer because it could reach a
gigaflop, um, and I just thought at the time
(33:33):
it was kind of weird to think about, um. But
now it's just kind of silly when you take it
into context with these, these actual supercomputers at the time. Now, uh, yeah,
a gigaflop is good, but, no, right. So a
megaflop is a million floating point operations per second, a
gigaflop is a billion floating point operations per second, a
(33:54):
teraflop is a trillion floating point operations per second. Well,
and then there's a petaflop, which is a quadrillion floating
point operations per second. Yeah, quadrillion. And the first
supercomputer to hit that and break that barrier was another
IBM machine, the Roadrunner. And, uh, it had twenty
(34:17):
thousand CPUs, and it was the first computer to break
that petaflop barrier. So one quadrillion floating point operations per second.
It's a serious machine. Chris and I will return to
discussing supercomputers in just a moment. After this quick break.
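The flop prefixes the hosts have been using scale by factors of a thousand, and writing them out as plain multipliers shows why the jump from gigaflops to petaflops matters. This quick sketch uses invented names and a made-up workload purely for illustration:

```python
# The flop prefixes from the episode, as plain multipliers.
FLOPS_UNITS = {
    "megaflop": 10**6,   # one million floating point operations per second
    "gigaflop": 10**9,   # one billion
    "teraflop": 10**12,  # one trillion
    "petaflop": 10**15,  # one quadrillion
}

def seconds_for_workload(operations, flops):
    """Time for a machine sustaining `flops` to finish `operations` ops."""
    return operations / flops

# A made-up workload of three quadrillion operations:
work = 3 * 10**15

print(seconds_for_workload(work, FLOPS_UNITS["gigaflop"]))  # 3000000.0 seconds, roughly 35 days
print(seconds_for_workload(work, FLOPS_UNITS["petaflop"]))  # 3.0 seconds
```

A million-fold difference in sustained rate turns a month-long simulation into a coffee-break run, which is why each new prefix opened up whole new applications.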
(34:41):
In two thousand ten, there was an interesting development, because China entered
the supercomputer fray. Now, at this point it was really
a battle between the United States and Japan, and
Germany also has quite a few supercomputers as well. But,
but US and Japan were the ones that were stealing
the record back and forth between each other. And then China
came out with a computer which I'm sure I'm gonna
(35:03):
mispronounce, because I, I don't know how to pronounce Chinese,
but Tianhe is how it would be spelled in English,
and, and someone's probably gonna say it's sheen-hey or
something like that. Please let us know, yeah, because I don't.
But it was a computer from China that could run
at two point five pedal flops and uh it had
fourteen thousand three hundred thirty-six Intel Xeon X five
(35:27):
six seventy CPUs and seven thousand one hundred sixty-eight
Nvidia Tesla GPUs, and so that was, you know,
a really impressive machine that stole all the
titles away that year. But another important moment for China
around that time was that China developed the Sunway,
(35:50):
which was slow by supercomputer standards, because it could only
run a petaflop, and they had already gotten
up to two point five petaflops. A petaflop is still
incredibly fast, people; I just mean slow
in relative terms. But the cool thing about the Sunway,
at least from China's perspective, is that it was the
first supercomputer China had designed with all Chinese processors, so
(36:15):
they weren't depending upon some other company's or some
other country's processors. They wanted to be
self-reliant when it came to developing computers, and so
China really pushed its computer engineering industry and
was able to design, you know, the Chinese
(36:36):
engineers were able to design this supercomputer. Then you
had Fujitsu's K supercomputer, which until recently held the record.
It was capable of running up to ten petaflops, with
eighty-eight thousand one hundred twenty-eight SPARC64 processors, and
each CPU had sixteen gigabytes of local RAM, and it
(36:57):
had one thousand three hundred seventy-seven terabytes of memory,
and eventually it got up to seven hundred five
thousand processor cores. Yeah, it sits in Japan's RIKEN Advanced
Institute for Computational Science. And that's funny,
it's spelled differently than it sounds, I mean the letters
(37:19):
are different. Anyway, sorry, I just noticed that
as I was looking down in my notes. Um, that's
actually sort of why we decided to do this now,
because it was just the week that we're recording this
that we found out about the test. Now, they do
these tests twice a year, every six months. They have
the top five hundred supercomputer sites, so computers from
(37:42):
all over the world. Uh, they put them on wheels
at the top of this big hill, push them
down the hill, and race them, like a big computer soapbox derby,
you know. No, they give them problems to solve
and see who's the fastest of the top five hundred
supercomputers in the world. Which, in a way, is kind of silly,
(38:02):
but at the same time very very cool. And you
can actually see the results of this if you want to,
if you go to top five hundred dot org. The
organizations that put it on publish
this, and those happen to be the University
of Mannheim, Lawrence Berkeley National Laboratory, and the University of
Tennessee. They actually do this, and they are trying to figure
(38:25):
out the fastest. And the new fastest was just
announced this week, and we
thought it would be a great time to talk about it.
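For what it's worth, the "problems to solve" in the Top500 test are the LINPACK benchmark, which times how fast a machine can solve a huge dense system of linear equations. Here's a toy, single-machine sketch of that idea using NumPy; the real benchmark is distributed across the whole machine and enormously larger, so this is only an illustration of what is being measured:

```python
import time

import numpy as np

# Toy version of the LINPACK idea: time the solve of a dense
# random linear system A x = b and estimate the floating-point
# rate from the operation count.
n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

# The LU factorization dominates: roughly (2/3) * n^3 operations.
flops = (2.0 / 3.0) * n**3
print(f"~{flops / elapsed / 1e9:.2f} gigaflops on this machine")
```

A laptop running this lands in gigaflop territory, which is a handy reminder of how far petaflop machines are beyond everyday hardware.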
It's a machine actually named for a tree. Yes, it
is the IBM Sequoia. And when we say we're
recording this week, it's June two thousand twelve. And so the
(38:47):
Sequoia has taken the title of fastest supercomputer, and
that's from IBM, so it means the USA has
the title once more, at least until the next Supercomputer Olympics.
And um, yeah, it's a giant gold medal that is
stamped on the outside of it. So you're probably
all asking, hey, what are some stats on this, uh,
(39:10):
the Sequoia computer? How fast can it go? And
what makes it tick? Well, I do want to
point out that it is owned by the Department of Energy,
so this isn't really a military machine. It is at
the Lawrence Livermore National Laboratory. And yes, the specs
(39:32):
on this are pretty impressive. I mean, it uses seven
thousand kilowatts. Yeah, it's actually fairly efficient for a supercomputer. Yeah.
It has one million five hundred seventy-two thousand eight
hundred sixty-four processors and one point six petabytes
of memory. It takes up three thousand four hundred twenty
(39:56):
two square feet of space, so we've finally gotten back
to those enormous computers. Remember, the Stretch was
two thousand square feet; now this one's three thousand four
hundred twenty-two square feet, and it can run at sixteen
point three two petaflops, so six point three two petaflops faster,
well, not even quite that much, because the K
(40:19):
eventually got up to ten point five, but it is
significantly faster than the K. So IBM now holds
the distinction of having, or having designed, the
fastest supercomputer in the world. Now, I thought it'd be
kind of fun to compare that to IBM's Watson computer,
(40:40):
because that made headlines last year when Watson was designed
in part to compete against humans in a very human game.
Because we've already talked about computers playing chess against humans,
we've also talked about computers playing other games against humans.
In fact, we did a full episode about this particular computer.
So i'd be Watson was designed to play in a
(41:01):
game show Let's Make a Deal, So they called out
Watson and if you didn't know what was behind, well
it was it did have a dress on no it wasn't.
It wasn't let's make a deal. It was jeopardy and uh.
And in Jeopardy, of course you are given an answer.
You have to come up with the appropriate question. And
(41:22):
it's really tricky for a computer to do this,
because it's not just a matching game where you match
an answer to a question. You also have to take
in context. Sometimes there's wordplay, sometimes there's a riddle.
It's a lot more complicated than just question-and-answer. Yeah.
They specifically wanted it to play a human game.
(41:43):
They didn't alter the clues. They're actually called clues in this show,
if you've never seen it. They give you the answer,
and you are supposed to supply the question.
They use wordplay and things in these clues, and
the IBM engineers specifically needed it to
play a human game to test its natural language
processing ability. Can it figure out from context what
(42:07):
it is you're talking about? And it did very well. Yeah.
So what was powering Watson, if you want to
compare it to, say, the Sequoia? Well, it
was using ninety IBM Power 750 servers in
ten server racks, and it had sixteen terabytes of memory
and two thousand eight hundred eighty processors, or
(42:32):
processor cores, I should say, not just processors. And so two
thousand eight hundred eighty sounds like a lot, but then you compare
that to the one million five hundred seventy-two thousand processors that
the Sequoia has, and you realize that Watson, as far
as supercomputers go, doesn't merit a mention. But then again,
Watson was designed for a very specific purpose, this whole
(42:54):
natural language processing, being able to recognize that and being able
to come up with information. That's a very specialized computer,
so it doesn't necessarily have to have the incredible, by
comparison, processing speed and number-crunching ability, which might be
used for other very intensive tasks, so things like
(43:17):
very realistic simulations, that kind of thing, and predictions. So
I just wanted to compare that so that people could
understand, because Watson is one of those names that we've heard
a lot about, and we think of it as like
a supercomputer, but really, if we define a supercomputer as a
computer that is on that bleeding edge of what
a computer is capable of doing, it
doesn't measure up. But when you talk about comparing the
(43:40):
top five hundred or putting a computer in a chess
match or in a game of Jeopardy, you know,
I made the joke that it was a
little silly, and yeah, you could say that
instead of using a computer
for scientific purposes, you're taking
time off to do something else. But really, it's nice
(44:00):
that, for one thing, people understand what a
supercomputer is and can do. And also,
it's a way to test out these machines and make
them better, you know. Even like I was talking
about the power used by the Sequoia machine,
it's considerably more efficient than the K computer. The
(44:24):
roughly seven thousand kilowatts beats K's twelve thousand kilowatts. So with
every time they come out with a new supercomputer,
it's more efficient. They find better ways to route instructions,
you know, and they can make things smaller than
(44:45):
before. So you really do see the implications in
our everyday computers, because now we have multicore
processors in these everyday devices that we use.
You don't necessarily need that to write a letter or
surf the internet, but it does make things faster and
more efficient, and computers are more reliable. You see
(45:09):
advances in operating systems that we use every day because of
the things that they found out in the process
of making these supercomputers. They find better ways to route
instructions in a simpler computer. And so it's really
worth it to do these tests and find
(45:29):
out just what a computer can do. So, you know,
having a challenge just for the fun of it, you know,
I don't see that necessarily as a bad thing,
especially when we can make advances
and build on those for the next generation of machines.
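To put the efficiency point in numbers, dividing each machine's speed by its power draw gives a rough flops-per-watt figure. The petaflop and kilowatt values below follow the approximate figures quoted in the episode and the published June 2012 Top500 list, so treat them as ballpark, not exact specs:

```python
# Rough performance-per-watt comparison. The speed and power
# numbers are approximate published June 2012 figures, used
# here only for illustration.
def gflops_per_watt(petaflops, kilowatts):
    """Gigaflops of sustained performance per watt of power drawn."""
    return (petaflops * 1e6) / (kilowatts * 1e3)

sequoia = gflops_per_watt(16.32, 7890)      # IBM Sequoia
k_computer = gflops_per_watt(10.51, 12660)  # Fujitsu K

print(f"Sequoia: {sequoia:.2f} gigaflops/watt")
print(f"K:       {k_computer:.2f} gigaflops/watt")
```

By this rough measure, Sequoia delivered well over twice the work per watt that the K computer did, which is the efficiency trend Chris describes.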
And just to kind of sum this up, I thought
I would share just kind of a fun fact. If you
look at the top ten fastest supercomputers in the world,
(45:52):
three of them are in the United States, two of
them are in Germany, two of them are in China,
and the other three are in Japan, Italy, and France.
And that's it for this classic episode. I hope you
guys enjoyed it. Again, it was recorded in two thousand twelve.
We've had bigger and better supercomputers come out since then,
(46:15):
and we've also seen the rise of graphics processing units,
which have largely supplanted supercomputers in many, but not all, applications.
I've done other episodes about that. You can search our
archive if you want to see those. The way you
do that is you pop on over to our website,
tech stuff podcast dot com. We have the archive there.
(46:36):
It is searchable, so you can look for specific episodes
that cover, you know, specific topics. Maybe you don't see
the topic you want. Maybe you do a search and
nothing comes up. Well, then you can write me an
email and suggest that topic to me. The addresses tech
stuff at how stuff works dot com, or you can
pop on over to Facebook or Twitter. We have the
(46:58):
handle tech stuff h s w at both of those.
You can send us a suggestion that way as well.
And I hope to talk to you again really soon. Yeah.
Tech Stuff is a production of iHeartRadio's How
Stuff Works. For more podcasts from iHeartRadio, visit
the iHeartRadio app, Apple Podcasts, or wherever you
(47:20):
listen to your favorite shows.