
September 21, 2020 46 mins

Recently, news broke that Nvidia is seeking to buy ARM Holdings, designers of ARM processors, for 40 billion dollars. Where did ARM come from and what's the big deal about it?

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio.
Hey there, and welcome to TechStuff. I'm your host,
Jonathan Strickland. I'm an executive producer with iHeartRadio,
and I love all things tech. And when it comes
to microprocessors, there are a few names that tend

(00:27):
to pop up. Intel is obviously a big one. AMD
is another, and those are the two that get
talked about when you're discussing stuff like desktop computers,
you know, PCs. But when it comes to more lightweight devices,
you know, like mobile devices, there's another name: ARM,

(00:48):
A-R-M. Recently, news broke that the graphics card company
Nvidia would be acquiring ARM for a staggering forty
billion dollars, a princely sum, so I thought it would
be a good idea to kind of go on a
full rundown on what ARM is, its history, and what

(01:08):
this acquisition means for the industry and for people like
you and me. And this is gonna be a two
parter because ARM has been around for a while and
its story is actually really interesting. Plus it gives me
opportunities to go off on crazy tangents and tell you guys,
how various stuff works, which you know is kind of

(01:29):
my jam. As the kids say, you know, fifteen years ago.
We'll start with some history lessons. Now, typically when I
cover the history of a company or a technology, I
run into a few cases where dates may be a little confusing.
Sometimes one source will have a specific date for an

(01:50):
event that conflicts with a date that's found in another source,
and so at that point I will usually say that
I'm sorry, I apologize. I can't get too specific. And
you would think that ARM wouldn't have this issue. The
company isn't that old. We can measure its age in
a few decades, but we only go back to the

(02:13):
nineteen eighties or so to look at origins, really
the late nineteen seventies. And yet when it comes to
particulars such as which events really got things started for ARM,
there's actually a lot of disagreement. So I'm going to
give you a version of ARM's history. But you know,
don't think of this as the definitive version, because some

(02:35):
people say, no, you shouldn't trace its history to that point.
That's silly, go to this other point instead. Here in
the United States, when we talk about the early days
of personal computers, the names that typically pop up in
those discussions are Apple, Texas Instruments, Commodore, maybe Tandy, and

(02:56):
then IBM would follow not too long behind those. But
across the pond in the UK there was another computer
company that was trying to get an early part of
the personal computer era, and this company was called Acorn
Computers Limited. The three co-founders of the company were
Chris Curry, Hermann Hauser, and Andy Hopper, who I suppose

(03:21):
was just not dedicated enough to go all in with
the alliterative names of the other two founders. Way to
go, Andy. Chris Curry was born in nineteen forty
six in Cambridge, England. He studied math and physics in
school and went on to work for various technology companies,
including Pye Limited, that's P-Y-E, not, you

(03:43):
know, the kind of pie that I love, the Royal
Radar Establishment, and Sinclair Radionics. While his stints at Pye
and Royal Radar were fairly short, he stuck around at
Sinclair for several years. By the late nineteen seventies, Curry
was interested in developing computers, but he was finding no

(04:04):
real support at Sinclair. He had been working on some
stuff he was trying to pitch the idea to Sinclair,
but he wasn't finding them very receptive. So we're gonna
leave off for now with Curry and move on,
and we'll regroup in a second, swapping over to Hermann Hauser,
who was born in Austria in nineteen forty eight. He

(04:26):
came to Cambridge as a teenager to attend school and
learn English, but he really enjoyed it. He went on
to pursue advanced studies in places like Vienna University, but
he came back to England attending King's College in Cambridge
and getting an advanced degree there. Smart dude. His work
in physics led him to become friends with Curry, and

(04:49):
when Curry was ready to start a company to manufacture
and market personal computers, Hauser was on board. Andy Hopper
was the youngest of the three co founders, having been
born in nineteen fifty three in Warsaw, Poland. Hopper studied
in London and later Swansea University before pursuing postgraduate studies

(05:11):
at the University of Cambridge. He focused on computer science
and he was researching networking technologies. He co founded a
company called Orbis Limited, which focused mostly on networking tech,
and this entity would end up merging with Curry and
Hauser's efforts to bring Acorn Computers Limited to life. Although

(05:32):
the original name for the actual company was Cambridge Processing
Unit Limited. CPU, cute, right? But the co-founders decided
to use the name Acorn Computers as the trading name
for the company, allegedly choosing the name Acorn so that
their computers would appear ahead of rival Apple Computers whenever

(05:56):
it was in an alphabetical listing. One of the earliest
jobs that the group had was to develop microcontrollers
for fruit machines, and folks, I consider myself an Anglophile,
but when I first read that, I have to
admit I had no idea what the heck it meant.

(06:16):
In my imagination, it was some sort of harvesting device
that depended on micro controllers to do something like pick apples.
But no, that, of course, is not what a fruit
machine is. And my guess is there's more than a
few of you out there giggling at my ignorance right now.
It's well warranted. So a fruit machine in this context

(06:38):
is what Brits call a slot machine. I guess they
do it because of the symbols of fruit that appear
on the various parts of the slot machine. So the
first big job that this new company had was designing
microcontrollers for these gambling machines, the slot machines, to
make them more difficult to tamper with, as there were

(07:00):
some clever hackers who were finding ways to rig big
payouts from the machines. And while the machines are designed
to take money from you, uh, they don't like the
money to go in the opposite direction. The house is not
a big fan of that. A few years later, the
UK launched an initiative to put a computer in every classroom.

(07:21):
Acorn Computers secured the contract to provide these computers, to
produce them, and the machine was called the BBC Micro. The
Micro used a processor called the 6502.
This was an eight-bit processor from Rockwell, though
engineers from MOS Technology originally developed the 6502

(07:42):
processor. The 6502 was a low-cost
processor that totally changed the processor market. It was
much less expensive than Intel's 8080 processor at the time,
and as such, the 6502 had found
its way into numerous technologies, including the Atari

(08:02):
video game consoles and the Apple II computer systems, among others.
The Micro contract gave Acorn Computers some momentum. In nineteen eighty-three,
the company wanted to free itself from dependence upon processors
from other companies. Two computer scientists from Cambridge University, Sophie
Wilson and Steve Furber, became the head designers for the

(08:26):
new thirty-two-bit processor. Furber focused on the actual
physical design of the chip's architecture, while Wilson was focusing
on the instruction set. The limited resources forced the pair
to come up with a simplified approach to processors, and
they chose to go with a specific approach to processor
design in a category called reduced instruction set computing or

(08:52):
RISC, R-I-S-C. This is in contrast with
complex instruction set computing, or CISC, C-I-S-C.
But what does that actually mean? Well, let's take a
step back to understand this. A processor's job is to
perform arithmetic and logic operations on data, and this includes

(09:15):
basic stuff like you know, adding and subtracting, kind of
like your basic calculator, and it can also involve transferring
numbers and comparing different numbers to one another. The data
comes in as binary, or let's say, zeros and ones.
The processor follows instructions given to it by a program.

(09:35):
So the processor gets its instructions like, you know, add
the next two numbers together and then send it on,
and then the processor executes that instruction on the supplied
zeros and ones that come in. And that's basically what's
going on with a processor at a very high level.
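To make that fetch-and-execute idea concrete, here's a rough illustrative sketch in Python. The three-instruction program, the register names, and the instruction names are all invented for illustration; they aren't taken from any real chip's instruction set.

```python
# A toy processor: it steps through a program one simple instruction per
# cycle and executes it on data held in a few registers. Everything here
# is invented for illustration, not modeled on any real instruction set.

registers = {"r0": 0, "r1": 0, "r2": 0}

program = [
    ("LOAD", "r0", 5),           # put the number 5 into register r0
    ("LOAD", "r1", 7),           # put the number 7 into register r1
    ("ADD", "r2", "r0", "r1"),   # add r0 and r1, store the result in r2
]

for cycle, instruction in enumerate(program, start=1):
    op = instruction[0]
    if op == "LOAD":
        _, dest, value = instruction
        registers[dest] = value
    elif op == "ADD":
        _, dest, a, b = instruction
        registers[dest] = registers[a] + registers[b]
    print(f"cycle {cycle}: {instruction} -> {registers}")

# After the third cycle, r2 holds 12, the sum the program asked for.
```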
In addition, we describe the number of operations a processor

(09:59):
can complete in a second as its clock speed, which
we talk about in hertz. A single pulse of the
processor is a cycle. A one-hertz processor would only
be able to complete a single operation every second. It
would be unfathomably slow to us. A decent processor speed

(10:19):
today is somewhere between two point five and three point
five gigahertz. That's decent. I'm not talking about top
of the line, but that would mean two point five
to three point five billion cycles per second. So the
processors in modern computers are pulsing billions of times every second,
and each pulse can power an operation. Now, some instructions

(10:44):
are pretty simple and they might only require one or
two cycles to complete that instruction. Other instructions are more
complicated and might have lots more steps involved, and this
is where we get to the RISC versus CISC approach.
A RISC-based processor handles very simple instructions, so it

(11:05):
handles each individual instruction very quickly, like within a cycle.
CISC systems can handle much more complicated instructions. The flip
side of that is that while a RISC-based processor
can execute individual instructions very very quickly, you might need
a lot more instructions to complete your overall task. The

(11:27):
CISC approach might take longer to execute a single instruction,
but you need fewer instructions overall to complete your task.
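As a back-of-the-envelope illustration of that tradeoff, here's a hedged little Python sketch. The instruction and cycle counts are made-up round numbers, not measurements of any real RISC or CISC chip; the point is only that more instructions at fewer cycles each can land in the same ballpark as fewer instructions at more cycles each.

```python
# Made-up numbers to illustrate the RISC versus CISC tradeoff. Suppose the
# same task can be written as 12 simple instructions or 4 complex ones.

clock_hz = 2.5e9                  # a 2.5 gigahertz clock: 2.5 billion cycles per second

risc_instructions = 12
risc_cycles_per_instruction = 1   # simple instructions, roughly one cycle apiece

cisc_instructions = 4
cisc_cycles_per_instruction = 3   # complex instructions, several cycles apiece

risc_cycles = risc_instructions * risc_cycles_per_instruction
cisc_cycles = cisc_instructions * cisc_cycles_per_instruction

print("RISC:", risc_cycles, "cycles,", risc_cycles / clock_hz, "seconds")
print("CISC:", cisc_cycles, "cycles,", cisc_cycles / clock_hz, "seconds")
# Both work out to 12 cycles with these invented numbers; on real chips the
# balance differs, which is exactly what the RISC versus CISC debate is about.
```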
Now that is a little confusing, so I'll use an analogy.
If I told the typical person, I need you to
go outside and check the weather, that's a deceptively complicated
instruction because there's a lot of other stuff that's nested

(11:49):
in that request. For example, if I were to try
and tell this to a robot, I might have to
include what direction the robot needs to go in, how
fast it should move, where the door is, whether
that door opens inward or outward, the actual mechanism the
robot would have to manipulate to open the door, and
so on. So what appears to be a simple task

(12:12):
is actually, when you break it down into its individual components,
much more complicated. So a program running on a RISC-
based processor has to break down instructions kind of in
that way, in a longer series of simple tasks that
add up to whatever your end goal is. But RISC
chips are highly optimized, so for certain applications a RISC-

(12:36):
based chip can be ideal. These days, we have a
lot of RISC chips in stuff like mobile devices, for example,
because these devices are rarely running super complicated software and
they need that low power, high efficiency output. One related
thing I'd like to mention, though it doesn't tie directly

(12:56):
into ARM's history or anything, is what is called a
semantic gap. Now, remember when I said that processors take in
data in the form of zeros and ones. This binary
code is a type of machine language, or the kind
of information a machine can actually process. Machines are not
able to process information in other forms directly. The information

(13:20):
must ultimately be converted into machine language, in this case zeros
and ones. And information that we can express pretty succinctly
with language and numerals, beyond just the ones and zeros,
ends up taking up a lot of space. You
have to use a lot of ones and zeros to
represent that kind of information. But computers are really good

(13:41):
at processing machine code. It happens lightning fast. However, programming
computers in machine code is really really hard. I mean,
imagine having to type in a string of tens of
thousands of zeros and ones while you're trying to program
a machine, and you know that if you make just
one mistake, you mess up the whole program because the

(14:04):
whole chain is screwed up after that. Heck, if you
did make a mistake, it would be really hard for
you to track down where you made the mistake in
the programming. You would have to compare two different, very
long sheets of zeros and ones, and you'd probably lose
your mind. That's one of the big reasons computer scientists
have developed various programming languages. The idea is that the

(14:26):
programming language is something that's easier for human beings to
work with, but machines can't understand programming languages without the
use of something called a compiler. The compiler's job is
essentially that of a translator. It takes the program that's
been written in whatever programming language and converts the instructions
into machine code so that the computer can process it.

(14:50):
The compiler is essentially a middleman between the program and
the processor. We describe programming languages by calling them stuff
like low level or high level. This refers to how closely
the language resembles the machine code. So a low level
programming language is really only a couple of steps away

(15:11):
from machine code itself. It's much easier for a compiler
to handle that kind of language, but it's much harder
to program in. You have to frame your programming closer
to machine code, but it's still easier than programming instructions
in just ones and zeros. A high level programming language
is modeled closer to how we would think in terms

(15:33):
of a typical language. There are still rules you have
to follow, and if you're not familiar with that particular
language and you're looking at an example of it, it's
not likely going to make a whole lot of sense
to you. But it's much easier for humans to work
with these kinds of languages. However, it's less efficient for
compilers to handle that and compile that into machine language.
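To picture what a compiler is doing across those layers, here's a toy sketch in Python. The high-level statement, the four made-up low-level instructions, and the translation itself are all invented for illustration; a real compiler targeting a real instruction set is vastly more involved.

```python
# A toy "compiler": it takes one high-level statement and translates it into
# a sequence of simple low-level instructions. The instruction names are
# invented for illustration; real compilers and machine languages are far richer.

def compile_statement(target, left, right):
    """Translate the high-level statement 'target = left + right'."""
    return [
        ("LOAD", "r0", left),        # fetch the first value into a register
        ("LOAD", "r1", right),       # fetch the second value into a register
        ("ADD", "r2", "r0", "r1"),   # add the two registers
        ("STORE", target, "r2"),     # write the result back to memory
    ]

# One line of high-level code -- total = price + tax -- becomes four low-level steps.
for instruction in compile_statement("total", "price", "tax"):
    print(instruction)
```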

(15:55):
We call this adding layers of abstraction. The programming
language provides an abstract platform that represents the various tasks
that the processor will ultimately carry out, and the gap
between what the programming language says and what the processor
does is the semantic gap. CISC and RISC designs deal

(16:17):
with this gap in different ways. A CISC design includes
a lot of addressing modes and lots of different instructions.
A RISC design has a much more simplified instruction set
that can meet the requirements of user programs. It's really
just two different methods to achieve a similar result. Depending
upon the history you read, Acorn Computer slash Cambridge Processing

(16:41):
Unit called their RISC-based design Acorn RISC Machines, or
they called it Advanced RISC Machines. Most histories say that
originally it was Acorn RISC Machines and only later changed
to Advanced RISC Machines. But either way, the initialism for
this technology became A-R-M, or ARM. When

(17:04):
we come back, I'll talk about how this technology would
ultimately transcend the company that spawned it, but first let's
take a quick break. Steve Furber, Sophie Wilson, and Robert
Heaton programmed the initial instruction set for the ARM processor

(17:27):
in BASIC. That's a programming language that originated back in
nineteen sixty-four. BASIC stands for Beginner's All-purpose Symbolic
Instruction Code. It's a high level programming language that, as
the name implies, simplifies things for beginner programmers. The Acorn
team weren't beginners, but they wanted to keep instructions as

(17:50):
simple as possible to optimize the processors. Their first effort
yielded the ARM1 processor. That endeavor took two years
of development, with the ARM1 debuting in nineteen eighty-five, though only
debuting internally. VLSI Technology, a chip fabrication company,

(18:13):
actually produced the working chips. The chip
had fewer than twenty-five thousand transistors on it and
used a process with a resolution of three microns, or micrometers.
That's one millionth of a meter. For comparison's sake, today's
Intel processors have more than a billion transistors and they

(18:36):
use a fabrication process with a resolution of just a
few nanometers, and a nanometer is one billionth of a meter,
so we've definitely come a long way since the early eighties.
The team learned a great deal through their experience of
developing the ARM1, and rather than immediately go into production,
the design team began to work on refining their product,

(18:57):
creating the next generation of processors based on the architecture.
They wanted to improve certain processes, and they added instructions
for stuff like multiply and multiply and accumulate. They built
in capabilities that would allow the processor to perform real
time digital signal processing, a necessity if they wanted the

(19:18):
processor to be able to handle processes meant to you know,
generate sounds for example, which the company considered an important
part of a computer's capabilities. They increased the number of
transistors on the microprocessor from twenty-five thousand for ARM1
to thirty thousand for ARM2. The team also
developed a coprocessor, which, as that name implies, is a

(19:42):
processor that can work in concert with the primary processor.
Coprocessors typically handle specific tasks. They're meant to kick in
when something specific happens, and it offloads those tasks from
the responsibility of the primary processor. So this would
be kind of like having two people dividing up work

(20:04):
among them, and one person handles a subset of chores
and the other person has to do all the rest
of the chores. Now, in this case, the coprocessor was
powering a floating point accelerator. Ah, but that leads us
to ask what is a floating point? Well, I'm sad
to be the one to have to tell you this.

(20:25):
Please set yourself down and prepare yourself. Computers have
a limited capacity. Computing memory is not infinite, and so
we have to start making some concessions when we're working
with numbers. Now, as you may be aware, some numbers
can be really really big or really really small, and

(20:49):
they might have a super long, perhaps even infinite number
of digits behind a decimal point. Computers can't cope with that.
They have limitations on what they can handle. So we
have to make some concessions, and floating points are one
of the ways we make concessions. Now, at some point
we have to cut off numbers, and when and where

(21:12):
we cut off numbers depends upon what we're doing. So,
for example, if we are making a tool like a rake,
you know, just an old lawn rake, and you want
the handle for this rake to be five feet long,
you probably don't actually care if the handle comes out
to be four feet eleven inches and some change, or

(21:33):
five feet and a fraction of an inch. That level
of precision isn't really important to you. It needs to
be five feet-ish, but if it's not exactly at
five feet, it's not a deal breaker. But let's say
you're building a transistor for a processor. Well, in that case,
you're working in a very very very small frame of reference,

(21:55):
and so the difference of a fraction of a meter
represents a gargantuan difference. On the flip side, you're
not likely to ever have to worry about distances of
a centimeter. That would be way too big. So you
just have to have a way to maintain accuracy relative
to what you're doing. You have a different, you know,
context for your work. This gets a bit more complicated

(22:19):
when you need to work with both really big and
really small numbers at the same time. For example, let's
say you're a scientist and you're working with Newton's gravitational constant.
That is a very small number that starts with a decimal.
Then you have ten zeros before you get to the
first non zero number, which is a six. By the way,
you might also be working with the speed of light.

(22:42):
That's a very big number, but the computer memory can't
really handle number sizes that include that wide a spectrum
of numbers, and that's why floating points are used, and
they're sort of like using numbers in scientific notation. You've
got a significand, which contains the digit of the number

(23:02):
or the digits of whatever number you're talking about, and
you've got an exponent which tells you where the decimal
point needs to be in relation to the first digit
in the significand. So if I have a significand of
one point seven and I have an exponent of six,
it would be the same as if I wrote that
number in the scientific notation as one point seven times

(23:25):
ten to the sixth power, which is the same thing
as one million, seven hundred thousand. These are all just
different ways to represent the same value. So one point
seven significand with an exponent of six is one million,
seven hundred thousand. Likewise, if I had a significand of
one point seven and an exponent of negative six, this

(23:46):
would be the same as one point seven times ten
to the negative sixth power or point zero zero zero
zero zero one seven. By using floating points, we can
simplify how we represent numbers without damaging the value of
those numbers, and get around the limitations of computer

(24:07):
memory and how many bits a processor can handle at
a single time. We call the operations that processors perform
on these types of numbers floating point operations, and we
measure it in flops, which stands for floating point operations
per second. A gigaflop would be a billion floating

(24:27):
point operations per second. The Japanese supercomputer Fugaku can
reach more than four hundred fifteen petaflops. A petaflop
is a thousand million million floating point operations per second,
so a petaflop would be a one followed by

(24:47):
fifteen zeros. Yowza. So the ARM2 architecture included a
coprocessor for floating point acceleration, not a full floating point processor,
but one to accelerate floating point calculations, as well as
the possibility of adding other coprocessors with the basic ARM architecture.
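If you want to poke at the significand-and-exponent idea yourself, here's a small Python sketch. It sticks to base ten, like the one-point-seven examples above, rather than the base-two form real hardware uses, and the flops figures are just the arithmetic from a moment ago.

```python
import math

# Base-ten version of the examples above: a floating point value is a
# significand plus an exponent saying where the decimal point belongs.

def value(significand, exponent):
    return significand * 10 ** exponent

print(value(1.7, 6))    # 1700000.0 -> one million, seven hundred thousand
print(value(1.7, -6))   # 1.7e-06   -> 0.0000017

# Python's own floats actually use a base-two significand and exponent;
# math.frexp pulls them apart: 0.81062... times 2 to the 21st is 1,700,000.
significand, exponent = math.frexp(1_700_000.0)
print(significand, exponent)

# And the flops arithmetic: a gigaflop is a billion floating point operations
# per second, a petaflop is a thousand million million.
gigaflop = 10 ** 9
petaflop = 10 ** 15
print(415 * petaflop)   # roughly Fugaku's four hundred fifteen petaflops, written out
```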

(25:09):
It was kind of a sort of a modular design.
This generation was called, fittingly enough, ARM2, and the
first product to market that featured the ARM2 wasn't
a fully fledged computer, but rather the ARM development system,
which included the ARM processor, four megabytes of RAM, three
support chips, and some development tools. So essentially this was

(25:31):
a product meant for programmers. It wasn't like it was
meant for your average end consumer. Meanwhile, at the company
at large, things were not going so super well. Acorn
Computers was in a bit of a financial crisis and
an Italian company known for computer systems and office equipment
in Europe called Olivetti, Ing. C. Olivetti & C., and I know

(25:54):
I've butchered it, but Olivetti is what it's best known as.
It swept in and acquired the English computer company.
At the time, Olivetti was reportedly unaware that within Acorn
Computers there were engineers who were working on new processors,
because the original Acorn computers were using processors made by

(26:14):
other companies, so the acquisition would slow things down a
little bit. That's one of the reasons why there was
a delay between the development of the original ARM one
processor and an actual Acorn computer system running on an
ARM2 processor. However, the day did eventually come around,
and that day arrived in nineteen eighty-seven, and that is

(26:35):
when Acorn Computers launched the Archimedes. It was a home
computer running on an ARM2 processor with a clock
speed of eight megahertz, meaning it would send out
eight million pulses per second. I wish I could say
that the Archimedes revolutionized computing right away, but that just
wouldn't be true. The delays meant that Acorn Computers was

(26:59):
way behind the chief competitor, which in nineteen eighty-seven was IBM,
or rather computers running on IBM's design, a.k.a.
IBM compatibles. While Acorn Computers was working on developing its
ARM processor technology and then afterward as it sorted itself
out post-acquisition by Olivetti, the computing world was consolidating

(27:22):
behind the IBM compatible approach. Apple's market share was already
heading toward decline at this point. The company had released
the Macintosh computer in nineteen eighty-four. Steve Jobs had been ousted
or had left in a huff; reports differ on this.
IBM had taken aim at dominating the office computer space
and then expanded beyond to home computing. But IBM had

(27:45):
also made some decisions that allowed some other manufacturers to
build machines with essentially the same components as IBM's personal
computers and licensed essentially the same operating system, allowing any
company the chance to build their version of an IBM
PC but offer it for a much more competitive price.
IBM had effectively set its own course to ultimately withdraw

(28:10):
from the home PC market further down the line, though
that would take several more years. The point, however, is
that the IBM design was firmly entrenched in the market.
There were tons of options for machines, and more importantly,
there was an enormous amount of software available that had
been developed specifically for the IBM design of computers. The

(28:33):
Archimedes, a computer with a totally different processor and
a different operating system, was just getting started in this market,
and there was no enormous library of software to support that
system. Sales, as a result, were slow. I mean, what
good is a computer if there's no software to run
on the computer. You could program your own software, but

(28:56):
that sort of approach tends to appeal to, you know,
a super narrow sliver of the overall computer market. So
it would take a few years for programmers to develop
software for the ARM architecture and for the Archimedes platform
to a point where it could stand as a worthy
alternative to the IBM PC. And I want to be
clear here, I'm not saying the Archimedes was a bad computer.

(29:18):
It wasn't. It was just that it was starting at
a point where it was at a huge disadvantage to
the IBM PC, which had an enormous head start. Meanwhile,
the R&D team within Acorn was hard
at work at the next generation of ARM architecture, which
would be the ARM3, and man, it is so
much easier to follow this naming convention compared to some

(29:41):
other technologies, but don't get used to it, because before
long things are going to get confusing again. So the
ARM3 saw further improvements in design, with an on-chip
data and instruction cache and a four-kilobyte capacity
for that cache. Oh, and a byte is eight bits. A
kilobyte is one thousand bytes, or really, because of the

(30:05):
powers-of-two properties, it's more properly one thousand twenty-
four bytes. We'll get more into that later. Essentially, this
meant that more instructions could load into the pipeline for
the processors simultaneously, which sped things up considerably. In addition,
the team was able to get a much faster clock speed.
The previous generation ran at eight megahertz, but the

(30:25):
ARM3 hit twenty-five megahertz. The first Acorn
computers running on ARM3 technology would launch in nineteen ninety.
The team also worked to build a version of ARM2
tech that had a lower power requirement than the
standard ARM2 processors, and this became known as ARM2aS,
little a, big S. This design was

(30:46):
aimed at filling a market need for companies that were
building lower cost portable and handheld devices like communication handsets
or portable computers, and the team got as far
as developing working prototypes of the chip, but never got
to bring it to market. One thing that was working
really well, however, was the general dedication to RISC-based architecture.

(31:07):
The chips required less power than CISC-based systems, and
with the right software they were incredibly powerful and efficient,
and they cost much less than the more complicated CISC-
based systems did. As a result, more companies were getting
interested in developing RISC-based technologies. The ARM family of
processors was a clear candidate for that model, but not

(31:28):
everyone was keen on the idea of relying on a
technology that belonged to a specific computer manufacturer, that being Acorn.
There was, however, a solution to this problem, and I'll
explain what it was after we return from this quick break.

(31:51):
Behind closed doors, a series of meetings had been pushing
the idea of breaking the ARM technology division out of
Acorn and into its own entity, its own company. Acorn
itself was part of these discussions, and the idea would
be that the ARM branch would spin off into a
new company, and that company would then develop new ARM technologies,

(32:14):
acting as a business to business enterprise. It would actually
fabricate the technologies as well. It would be an original
equipment manufacturer or o e M. That's a type of
company that makes products that are used as components in
products made by other companies under their own branding. The
two other companies that were part of this discussion in

(32:35):
addition to Acorn were VLSI Technology, that
was the company that had fabricated the original ARM1
processor, and, drumroll please, Apple Computers. Presumably, Apple was keen on
making use of ARM based processors, but didn't want to
put out computers that could be said to have Acorn

(32:55):
computing technology inside them. Spinning off ARM would sidestep that awkward fact. However,
there is another explanation that isn't quite so, you know, petty,
and this is that Apple had taken a keen interest
in the ARM3 processors in an effort to develop
computers that could go up against the IBM-compatible 486
generation, but the ARM3 lacked an integrated

(33:17):
memory management unit, or MMU, and as such, Apple
felt that the ARM processor design wasn't quite where Apple
needed it to be. However, developing a new ARM processor
with an integrated MMU was going to be expensive and
Acorn Computers just didn't have the resources to do it itself,

(33:38):
so it really necessitated a move to an independent spinoff
that had more support behind it. So Acorn Computers would
supply the design and engineering behind the development of the
ARM architecture, primarily in the form of a workforce of
twelve engineers. VLSI would supply the fabrication
facilities to make physical chips, and Apple would supply the

(34:01):
cold hard cash needed to fund the whole thing. That's
oversimplifying things a little bit, but generally that's how the
arrangement worked. The new company was Advanced RISC Machines Limited,
a.k.a. ARM Limited. The main goal for the
new company was to advance ARM microprocessors. This new company

(34:22):
had its fancy schmancy headquarters in a barn in Cambridge, England.
Typically with tech companies, I talk about starting out in
a garage. But with ARM it was a barn. And
so while our story started in the late nineteen seventies
with Acorn Computers, some ARM histories really point to nineteen
ninety as the beginning of ARM. I think that ends

(34:45):
up skipping some important early work. However, that's just my
own personal opinion. Hermann Hauser of Acorn Computers slash Cambridge
Processing Unit reached out to Robin Saxby to serve as
the CEO of this new company. Saxby had come from
Motorola and had worked closely with Acorn Computers back when
the PCs the company made were running on Motorola-based chips.

(35:08):
The first processor this new company developed was called, wait
for it, the ARM6. Wait, I'm sorry, wait, hang on,
that can't be right. Six? Hang on. Wasn't the last
full processor the ARM3? What the heck happened to four?
And five? Why did we jump to six? What is

(35:28):
it with tech companies and the desire to leap over
entire numbers when releasing new versions of products? You know,
I I wish I had answers for these questions, but
my research didn't pull up anything definitive. Now that's not
to say there aren't answers out there. It's entirely possible
that there is, and I just missed it. But based

(35:49):
on what I could find, there was never any announcement
for ARM4 or ARM5 as planned commercial products,
nor any record of an ARM4 or ARM5
processor being produced, either as a potential product or even
as just an internal prototype. Based on the information I
can find, the fourth generation ARM processor was in fact

(36:12):
the ARM6, and the new company skipped four and
five for reasons that are beyond my ken. As it were,
one thing that definitely shaped the development of the ARM6
was an intended use for the tech within an
ambitious Apple product, the Apple Newton. Now a lot has

(36:33):
been said of the Newton, much of it unkind and
for arguably justifiable reasons. The Newton was meant to be
a defining example of personal digital assistants, or PDAs.
In fact, the story goes that the Apple
CEO of the time, John Sculley, coined the phrase personal

(36:53):
digital assistant to refer specifically to the Newton. It was
in many ways a precursor to the iPhone, which would
debut twenty years after the company had first started working
on the Newton. So you could say that the Newton
came out twenty years too early, and I think I
think a lot of people would agree with you. The

(37:14):
Newton had a tablet style form factor and it used
a touch screen input with a stylus. Apple was pushing
really hard for a device that could actually interpret handwriting.
So theoretically you would be able to write on the
tablet in normal handwriting and the Newton would interpret each
letter and capture it in text on screen. And that

(37:37):
was a super cool and innovative idea. And Apple really
needed a processor that could power operations without requiring too
much juice, because a handheld computing device isn't really that
useful if it can only operate for an hour or
so before it needs a recharge. With that in mind,
the ARM6 microarchitecture began to take shape, with
lots of decisions in the development guided by the

(38:00):
needs of the Newton. The name of the family of
ARM6 microprocessors, because there were a few chips that
fell under this designation, was the ARM6 macrocell.
And I'll give a few of the changes that happened
between the ARM3 generation and the ARM6. For
one thing, the process had become more precise. The

(38:23):
ARM3 microarchitecture used a one point five micron process,
whereas the ARM6 shrank that down to point eight microns.
So what does that mean? Well, it means that the
individual components on the chip could be made much smaller,
which also means you could fit more components onto a
microprocessor without having to increase the size of the actual

(38:45):
processor chip. This falls in line with an observation that
Gordon Moore had made decades earlier, where he observed that
market influences incentivized companies to develop new ways to cram
smaller and smaller components onto a square inch of silicon wafer.
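Purely as illustrative arithmetic, here's a short hedged Python sketch of how that observation is usually summarized, a doubling roughly every two years. The starting transistor count and time span are made-up round numbers; only the two-year cadence and the one point five and point eight micron figures echo what's described here.

```python
# Illustrative arithmetic only: the "double every two years or so" observation,
# plus the 1.5 micron to 0.8 micron shrink mentioned for ARM3 versus ARM6.

transistors = 30_000            # made-up starting point, roughly ARM2-era scale
for years in range(0, 9, 2):    # four two-year doublings
    print(f"after {years} years: about {transistors:,} transistors")
    transistors *= 2

# Features on a 0.8 micron process are a little over half the linear size of
# features on a 1.5 micron process.
print(0.8 / 1.5)                # about 0.53
```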
The effect of this is that the number of transistors

(39:06):
you could find on microprocessors would effectively double every two
years or so. Now these days we tend to reinterpret
this to say that a computer's processing power doubles every
two years or so due to Moore's law, and it's
really more of an observation, but that's a matter for
another episode. The point I really want to make is

(39:26):
that moving from a one point five micron process to
a point eight micron process is pretty much in line
with that observation, as the point eight micron components were
just a little over half the size of the one
point five micron version found in ARM3 microprocessors. In addition,
the ARM6 increased the address space from twenty-six

(39:49):
bits to thirty two bits. Address space means the amount
of memory that's set aside for a particular computational component,
like a file or a connected device. Essentially, a computer's
processor uses memory addresses in order to access information stored
within the computer's actual memory. It's how a processor can
pull relevant information for whatever process it needs to perform.

(40:13):
The term twenty six bit or thirty two bit tells
us how much memory the system can address. Now, remember
that a bit is a unit of binary information, either
a zero or a one. So each bit can have
one of two states, or two values, zero and one,
and you can have twenty-six bits with the twenty-

(40:34):
six-bit system of that older ARM3 address space,
and that meant that you had a maximum of two
to the twenty-sixth power number of address spaces. That translates
to more than sixty-seven million values. However, a thirty-

(40:56):
two-bit address space knocks this up to two to
the thirty-second power values, and that goes up to
nearly four point three billion values. So you see how
a relatively small increase in bit size can have a
much bigger effect. It's not doubling or quadrupling, it's much
bigger than that. This meant that the ARM6 could

(41:19):
map up to four gibibytes of memory. Gibibytes.
I said that correctly. This is a peculiar measurement, right,
because I'm sure you've heard of gigabytes, but this is
gibibytes. Gibi, G-I-B-I, means two to
the power of thirty, so a gibibyte means one billion,
seventy-three million, seven hundred forty-one thousand, eight hundred twenty-four bytes. You

(41:43):
can see how saying one gibibyte is more efficient.
Isn't that helpful? Anyway, the ARM6 microarchitecture can
map up to four of those bad boys in memory.
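Here's the arithmetic from this stretch worked out in a short Python sketch; the powers of two are the only thing being demonstrated, nothing ARM-specific beyond the twenty-six and thirty-two bit widths already mentioned.

```python
# The address-space and binary-prefix arithmetic from this stretch.

addresses_26_bit = 2 ** 26      # the older twenty-six-bit address space
addresses_32_bit = 2 ** 32      # the ARM6's thirty-two-bit address space

print(f"{addresses_26_bit:,}")  # 67,108,864 -> "more than sixty seven million"
print(f"{addresses_32_bit:,}")  # 4,294,967,296 -> "nearly four point three billion"

# Binary prefixes: a kibibyte is 2**10 bytes, a gibibyte is 2**30 bytes.
gibibyte = 2 ** 30
gigabyte = 10 ** 9

print(f"{gibibyte:,}")          # 1,073,741,824 bytes in one gibibyte
print(gibibyte / gigabyte)      # about 1.074, so a gibibyte is ~1.074 gigabytes
print(f"{4 * gibibyte:,}")      # the four gibibytes the ARM6 could map
```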
A gibibyte, in case you're curious, is equal to
about one point oh seven four gigabytes. The whole story
behind the various binary prefixes, because there's also kibi, mebi,

(42:04):
and tebi and more, all that is really interesting, but
I'll save that for some other episode. The ARM6
was backwards compatible with the old ARM3 architecture. It
had a twenty-six-bit mode of operation that it
could switch to instead of its thirty-two-bit mode. This
helped keep the older software that had been designed
for ARM3 systems from going totally obsolete with the

(42:26):
release of the new microarchitecture. It had the integrated
memory management unit that Apple wanted. It also had some
new processor instructions, but I'm not going to go too
far into the details, as I feel like it would
largely be lost and we've got a lot more to
say about ARM coming up anyway. The first Newton model
launched in nineteen ninety-three with an ARM610 RISC microprocessor, and unfortunately,

(42:52):
it would ultimately be something of a clunker. The chief
problem with the Newton was not the fall of the
ARM processor. It was that the most anticipated feature, the
handwriting recognition capability, just wasn't very good. There were lots
of reviews that criticized the implementation of this feature, documenting
times when the system performed poorly and just got stuff wrong.

(43:16):
Fans of the cartoon sitcom The Simpsons might remember an
episode where they made fun of this. The school bully,
Nelson had one of his cronies take down the note
"beat up Martin" on his Newton, but the device interpreted
it as "eat up Martha." So Nelson then grabs the
Newton and throws it at Martin, thus fulfilling the prophecy. Anyway,

(43:39):
the Newton had a troubled launch, which is putting it mildly,
and it would transition into a troubled life cycle. The
device failed to get a really good hold in the marketplace,
even as new versions of the hardware were released, and
upon his return to Apple, Steve Jobs would discontinue the
Newton in nineteen ninety-eight. Meanwhile, over at ARM, CEO Robin

(44:03):
Saxby had a brilliant idea. He saw that depending on
a single source of fabrication for the ARM microprocessors
was way too limiting. ARM needed more customers and would
also need to meet the production needs of those customers.
But being a small operation, this was a tough position
to be in; you couldn't easily scale up. The solution

(44:26):
was an interesting one. Saxby led the company toward moving
to a more intellectual property approach to micro architecture. So
rather than produce the chips themselves, ARM Limited would license
the design and instruction sets of these chips out to
other fabricators. They would become what is called a fabless

(44:49):
chip designer. They didn't produce the hardware themselves. The other
chip manufacturers could produce their own ARM microprocessors built on
the licensed designs coming from ARM itself. This move would
prove to be a game changer for the company. We're
gonna leave off there for this episode. In our next episode,
I'll pick up from that point forward, and we'll talk

(45:12):
about how ARM evolved over the years and cemented itself
as a huge player in the microprocessor space, as well
as talk about the acquisition by Nvidia, at least
the proposed acquisition at the time of this recording, and
what that means. If you guys have suggestions for future
topics that I should cover on TechStuff, reach out

(45:34):
to me. You can do so on Twitter. The handle
is TechStuffHSW, and I'll talk to
you again really soon. TechStuff is an iHeartRadio
production. For more podcasts from iHeartRadio, visit
the iHeartRadio app, Apple Podcasts, or wherever you

(45:55):
listen to your favorite shows.
