
June 12, 2019 40 mins

Virtual machines open up a lot of options for information infrastructure. Where did the idea come from and how did it evolve over time?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's How Stuff Works. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with How Stuff Works and iHeartRadio, and I love all things tech. And today's episode comes to us due to a listener request. Martin, many, many moons ago, asked

(00:29):
if I could do an episode talking about the evolution
of virtualization, and so that's what this episode is all about.
But first, what is virtualization? Well, as the name implies,
it's all about creating a virtual representation of something else. Now,
in computing, we typically use it to mean creating virtual

(00:50):
computer platforms, and there are a lot of reasons why
you would want to do this. It would be a
virtual computer platform that exists on top of actual physical hardware.
And you might want to do it for security reasons,
to separate different programs and storage systems from one another.
You might want to do it to provide different stable

(01:11):
development environments so that a mistake in one partition doesn't
bring everything else down on the machine. You might want
to do it to run different operating systems on the
same physical hardware, or maybe you want to do it
to maximize efficiency in your computer system, or perhaps some
combination of some or all of the above, but we'll

(01:31):
get into that later. For now, the important thing to know is that the goal is to create a virtual version
of something that runs on top of actual physical hardware. Now,
the history of virtualization stretches back to the nineteen sixties,
well before the days when the average person had any
sort of interactions with a computer. Early computers were useful

(01:53):
but had some limitations. This is like the mainframe
age of computers. So one of those limitations was that
early computers could really only run one process at a time,
and there had to be a person there to initiate
a new program. At first, that wasn't that big of
a problem because computers were such a niche gadget. Very

(02:13):
few people had any access to them in the first place,
so it wasn't that big of a limitation. The scientific
and academic communities were using them, but beyond them, very
few people had any experience with a computer. They were
mostly unknown. But as computers were becoming more available, as
businesses were starting to use them, and they were becoming

(02:35):
more capable of handling processes that went beyond academic interests,
it became clear that the limitations were a liability, and
so you had companies like IBM researching ways to get
around these limitations. One such way was an approach later
called batch processing. This dates from the time when programs
were represented on physical cards with holes punched in them,

(02:59):
known as punch cards for obvious reasons. So in an
early computer, you'd have a stack of cards representing a
single program, and you would feed those cards into the
computer's hopper, which is essentially its intake for cards, and
the computer would analyze the cards, which would have instructions

(03:19):
on them. The computer would then follow those instructions and
produce a result and either print it out or create a
new punch card or stack of punch cards, or later
on it would display the results on a monitor. IBM
developed computers that could accept stacks of cards that represented
more than one process. The computer would be able to

(03:40):
read through the stacks and complete each process in turn, so you could feed batches of cards to the computer. Thus, batch processing. It cut down on the amount of time people had to spend babysitting a computer or waiting for their turn to run a process, freeing them up to work on other stuff. But it was still limited, and
you were still stuck running just one process at a time.

(04:03):
You would just run them in sequence, and you couldn't
easily have multiple people using the same computer at the
same time. At best, you could use batch processing to
run a second job right after the first one, but
it was restrictive, and as computers were becoming more important,
this was a problem. In the late fifties and early nineteen sixties, some engineers, or computer scientists, though that term

(04:27):
was hardly in use yet, and many in academic fields were very snooty. They didn't quite yet view the subject as worthy of standing on its own. Anyway, some of these innovators began to experiment with ways to get around these limitations. A fellow named John McCarthy, a professor at the Massachusetts Institute of Technology, better known as MIT,

(04:48):
proposed in nineteen fifty nine that an IBM 709 mainframe could be tweaked to allow for a few people to use the machine more or less at the same time. He then began to recommend some changes to the university's IBM 704 mainframe to allow what he called a time-stealing mode. In

(05:10):
this mode, you could have a batch of jobs running
on a computer, then someone else with an unrelated job
shows up, and, using time stealing, the new person could
interrupt the batch process to run this other job, and
then the batch process could resume as per normal. Another
MIT professor named Fernando Corbato worked on a similar project, tweaking the university's IBM 709 mainframe

(05:33):
so that four people could use the system at
the same time. But these were all solutions using systems
that weren't natively designed to support multiple users. They were workarounds,
and MIT computer scientists found manufacturers were uninterested in changing that because there seemed to be very little call for it. On July one, nineteen sixty three, MIT

(05:55):
launched a project originally called Mathematics and Computation, or MAC, but later the acronym would be retroactively applied to the phrase multiple access computer. Funding for the project came courtesy of ARPA, which would later get its own acronym update to DARPA. And in case you are unfamiliar with that organization, it's a division within the

(06:19):
United States Department of Defense, and its mission is to
fund research and development into technologies that contribute to the
defense of the country in some way. So why was
the Department of Defense interested in this? It largely had
to do with Russia and Sputnik. Now, I've talked about
how Sputnik helped spur on a ton of innovation

(06:40):
in the United States. It scared the daylights out of
people in the US. It suggested that Russia was much
further along technologically speaking than the US had suspected. And
it lit a fire under the proverbial backside of the
US military. And so there was a strong incentive to
advance computer science and technology in the US to

(07:01):
outpace the Russians. Project MAC's primary purpose was to advance computer science in several ways, including the development of new operating systems and computational theory. It largely grew out of a meeting between MIT Professor Robert Fano and Joseph C. R. Licklider, who had previously established a

(07:21):
psychology group in the Electrical Engineering department of MIT, then gone on to join a research firm called Bolt, Beranek and Newman, better known as BBN, and then was named the first director of ARPA's Information Processing Techniques Office, or IPTO. Licklider convinced Fano to head up Project MAC, which would receive funding

(07:43):
from the IPTO through ARPA. But standing in the way of this goal were the limitations I've mentioned already. It's hard to make real progress if you're limited to just one person working on one computer at any given time, and so MIT sought out
proposals from companies like General Electric and IBM to develop
a computer system that could support multiple users through this

(08:06):
time sharing idea. Now, the basic idea behind time sharing
is that you have multiple workstations that all connect back
to the same computer. The workstations are all dumb terminals
or thin clients. The computer can only work on one
job at a time, but it can do so at
a pretty darn fast pace, and so as you run

(08:26):
a job at your workstation, the computer waits for a break in its processing of other jobs, then slots your job into that break and runs your job. To you, it's like you've got the full attention of the computer,
but in reality, the computer is actually switching back and
forth between users, finishing each job quickly before moving on
to the next one, rather than doing all the jobs

(08:46):
at the same time. The benefit of time sharing is that it's as if you've increased the number of computers by the number of workstations, and so more people can take advantage of the computer than with older systems. You reduce downtime, you increase efficiency, and more people can actually get stuff done.
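To make that switching concrete, here is a minimal illustrative sketch in Python, not anything MIT or IBM actually ran: several users' jobs share one processor by each getting a short slice of work in rotation, so every user feels like they have the machine to themselves. The names and numbers are made up.

```python
# Toy simulation of time sharing: jobs take turns getting a short slice
# of a single "CPU". All names and numbers are made up for illustration.
from collections import deque

def time_share(jobs, slice_units=2):
    """jobs maps a user's name to the units of work their job still needs."""
    queue = deque(jobs.items())
    clock = 0
    while queue:
        user, remaining = queue.popleft()
        work = min(slice_units, remaining)
        clock += work
        remaining -= work
        print(f"t={clock:2}: gave {work} unit(s) to {user}, {remaining} left")
        if remaining:
            queue.append((user, remaining))  # back of the line for another turn
        else:
            print(f"t={clock:2}: {user}'s job is done")

if __name__ == "__main__":
    time_share({"alice": 5, "bob": 3, "carol": 4})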

(09:07):
IBM was initially not interested in working with MIT. The company wasn't convinced that a multi-user computer was going to be that big of a deal, and it figured most of its customers would have no need for such a computer. So rather than dedicate the time and resources needed to develop something that the company wasn't convinced would ever be an actual product, that it would just be a one-off, they bowed out, and so GE became the vendor for the early days of

(09:30):
Project MAC. But over at IBM, minds were slowly changing. The Project MAC contract was a big deal with a lot of funding at that time, and then it became known that Bell Labs was also looking for a system that would grant access to multiple users to a
system that would grant access to multiple users to a
computer at the same time, and this was enough for
the folks at IBM to say, huh, maybe we were wrong,

(09:52):
and they changed their minds and they put some work
into developing such a system. They created a prototype called the CP-40 mainframe computer. The computer never became a product sold by IBM, but was used internally in IBM's labs. But the CP-40 would be really
important in our story because it was the starting point

(10:13):
for a journey that would take us to the first
commercial mainframe computer system that could support virtualization. The time
sharing approach involved sharing parts of a computer's processing capabilities,
such as its system memory or its storage, and sharing
that with each user. But while this was useful, it was also limiting. If something went wrong for one user,

(10:35):
it would affect everyone on that computer. Creating more meaningful
partitions that would isolate each user would be more useful. Now, typically we describe the software that does this as a hypervisor, or sometimes
as a control program. A hypervisor's job is to separate
the applications and operating system running on a computer from
the computer's actual hardware. There are a couple of different

(10:58):
types of hypervisors. Type one hypervisors are also known as bare metal hypervisors because they exist in a layer that's directly on top of a machine's hardware. Then you have type two hypervisors: a software layer that sits on top of the existing machine's operating system, so there's an extra layer of abstraction there. Now, through a hypervisor, a single machine can host multiple virtual guest machines.
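As a present-day illustration of that one-host, many-guests idea, here is a small sketch that asks a hypervisor which guest virtual machines it is hosting. It assumes a Linux host running QEMU/KVM with the libvirt daemon and the libvirt Python bindings installed; the connection URI is just the usual local default, not anything specific to the systems discussed in this episode.

```python
# Sketch: connect to a local hypervisor through libvirt and list the guest
# virtual machines it manages. Assumes the libvirt-python package and a
# running libvirt daemon on a QEMU/KVM host; adjust the URI for other setups.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)               # connection to the hypervisor
    try:
        for dom in conn.listAllDomains():  # every defined guest, running or not
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():<20} {state}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()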

(11:22):
This idea would be further explored in the next phase, that of true virtualization, which I'll get to in just a moment. But first, let's take a quick break. Okay, so we just got

(11:43):
into what you could think of as the prehistoric virtualization era,
with hypervisor technology just coming into development. IBM was at the forefront of this research with the CP-40 mainframe system in the nineteen sixties. The system ran on a special operating system called CP/CMS, which originally

(12:05):
stood for Control Program/Cambridge Monitor System, though later CMS would change to mean Conversational Monitor System, and I've even seen a couple of other variants out there. The control program part was what allowed for this early virtualization in the CP-40. It effectively created a full virtualization of the underlying hardware of the CP-40 for each

(12:28):
virtual machine, so people on dumb terminals, terminals that did not have their own solely dedicated computer, could have a virtual computer at their disposal. All the work was actually being done on top of the single CP-40 prototype,
but to each user it was as if they had
access to their own individual computer. The virtualization software partitioned

(12:51):
computer assets to each user. This wasn't just a matter
of convenience. It also allowed for rapid innovation. Programmers could
work on their own virtual machines, creating applications without having
to worry about a bug affecting everyone else. The partitions
provided protection, so if you were plugging away on a
difficult section of code and your coworker fouled something up

(13:14):
on their virtual machine, you didn't have to worry about
their mistake bringing you down with them. Each virtual machine
acted like its own individual physical computer. You can see
some similarities between this model of computing and what would
come much later in the era of cloud computing and
the brief age of netbooks. Technically, netbooks are still around,

(13:37):
but they are no longer the heyday type device that
they were for a very short while. The netbook essentially
had a five year run from two thousand seven to
two thousand twelve, and it's a category of lightweight laptop
computers that had modest specifications because the real purpose of
the computer was to use cloud based services. You didn't

(13:58):
have to have a beefy computer because most of the processing was taking place somewhere else. The netbooks were a type of thin client, kind of like those dumb terminals used in time sharing mainframe systems. You didn't
have to have a lot of horsepower. You just needed
input devices like a keyboard and output devices like a display,
and the actual computing was happening on some other machine

(14:21):
out on the internet. We'll talk about that more a
bit later in this episode. So while work on and
with the CP-40 system was going on, IBM researchers were
looking into other breakthroughs that would become important for computers
in general and virtualization in particular. In nineteen sixty five,
IBM announced it was developing a thirty two bit central

(14:42):
processing unit that included virtual memory hardware in it. This was called the IBM System/360. IBM researchers used what they learned developing the CP-40 to create the CP-67, which was built on this IBM System/360-67 hardware. I love

(15:02):
these names, by the way; they're so much fun to say. Anyway,
this would be the first computer to support virtualization that
would actually be widely available as a commercial product, not
just a prototype. This was an important component for mainframe users,
but it wouldn't really transfer to micro or mini computers.
So this is still in the mainframe age where

(15:23):
you have the big centralized computer, and it's not that
big of a surprise. You know, a mainframe is a
centralized machine, but later computers didn't necessarily follow that model. Instead,
you could have a computer at your own desk. The
need to partition and separate your machine from everyone else
wasn't as pressing since you were working on an actual

(15:43):
separate piece of hardware rather than sharing a centralized computer
amongst everybody else, and the hardware you were using was
capable enough to do whatever it was you were doing.
Miniaturization reached the point where it was possible to produce
a relatively small computer, especially compared to the giant mainframes
of the nineteen fifties and nineteen sixties, and it could

(16:04):
handle all the tasks that your average user could throw
at it. So we started to see a shift away
from this centralized model of the mainframe strategy to a
decentralized approach in which everyone would work on their own
separate machine. This would continue and accelerate in the mid
to late nineteen seventies when the personal computer emerged and
the general public began to dip their proverbial toe in

(16:27):
the computational pool or something. Now that's not to say
that all work on virtualization stopped completely in the nineteen seventies.
It certainly didn't speed up, and it pretty much remained
in the domain of mainframe computers, which still had their uses,
but in niche cases, in fact increasingly niche cases. So

(16:47):
whether they were legacy systems that companies depended upon to keep business going, or they were more powerful machines than could be realized in the smaller formats. You know, they were these powerful machines that were being used for very specific tasks. They still had a place, it was just
a reduced place, and that was pretty much where virtualization

(17:08):
was stuck until the mid nineteen eighties. That's when a company called Locus Computing Corporation developed special software in collaboration with AT&T for a computer called the AT&T 6300 Plus. The computer's operating system was Unix SVR2. Now, the

(17:28):
thing about Unix is, it's an operating system that can support Unix programs, but it couldn't run DOS-based programs on its own. And for those of you who aren't familiar with DOS, it was the text based operating system used by many computers. There were lots of different flavors of DOS. Apple had its own sort of version,

(17:49):
but the one that everyone really knew was MS-DOS, the Microsoft flavor of DOS that was the most popular version of the text based operating system out there for IBM and IBM-compatible machines. So there were a lot of programs developed for MS-DOS and DOS in general, but you couldn't run them on a Unix based device.

(18:10):
So Locus developed some software that would handle the interaction between DOS programs, designed for the 8086 instruction set that DOS programs were reliant upon, and the underlying Unix operating system, so it's kind of like a liaison between those. The software would become known as Merge and

(18:30):
would be one of the first virtual machine manager, or VMM, products on the market. So now you could have a Unix machine and still run DOS programs on top of it, by having this virtual DOS machine running on the Unix hardware. This would mark a new era in
which companies would make software that would allow computers running

(18:51):
on one type of operating system and hardware to run an instance of a different operating system, and it opened up access to other types of programs. Most of us don't tend to work on Unix systems. But what about Macs versus PCs? You know that a Mac computer can't run a DOS or, later on, a Windows program, and

(19:12):
vice versa, a Windows computer can't run a Mac program. But developers created virtual machine software that would allow a Macintosh computer to run a virtual instance of DOS on a Mac, meaning you could access those DOS programs on a Macintosh
while running this virtual machine software. Suddenly you could take
advantage of programs meant for another type of computer on

(19:36):
your own machine. By the way, it probably comes as very little surprise to those of you who are familiar with Apple that the company was not a fan of anyone running the Mac operating system on a non-Apple machine through virtualization. This was technically possible, but Apple maintained that
the only legal way to do it was to run

(19:57):
Mac OS on a virtual platform on top of another Apple branded computer. So why would you want to run a virtual version of Mac OS on top of a computer already running Mac OS? One reason would be to
test a new program against multiple versions of a single
operating system. So you might want to run your new

(20:18):
Mac program against the latest version of Mac OS and
then the previous versions of Mac OS to see if
it's still compatible, making sure you have backwards compatibility written
in there. That would be handy to know. For about
a decade, that was the state of virtualization. Companies made
programs that would allow users to run one operating system
on top of a machine running a different operating system.

(20:41):
It was useful for people who were developing software for
other platforms, but beyond that, there wasn't much general use
for it, because, again, everyone was relying on their own
individual computers anyway; you didn't have to worry about creating
partitions between users for the most part. There were other
manifestations of virtualization that would come on the scene a
little bit later. In the early nineteen nineties, a team

(21:03):
at Sun Microsystems was hard at work developing a new
programming language that would eventually take on the name Java. Initially,
the idea was that this programming language would allow developers
to create programs running on home appliances like televisions. The
need for a language that developers could use to create
programs for different platforms was obvious because these appliances would

(21:25):
be coming from different manufacturers who would be working with
different microprocessor companies, so there was no guarantee that televisions
from two different companies would have similar microchips in them. Now,
this is a non trivial problem. If you are a programmer,
you have to make some practical decisions that have little
to do with the actual purpose of your application, and
one of those decisions is: what platform will I develop for? Frequently,

(21:49):
the answer that many developers gravitate toward is I want
to develop for the most popular platform out there because
it represents the largest potential customer base. To put it another way, let's say you're making a video game, and let's say there are only two consoles on the market. One of those consoles has a much larger market share and the other has a ten percent market share, and they're

(22:11):
both great game consoles, but you're more likely to focus on developing the game for the one that has the larger market share, because that's where most of the gamers are. But what if you could create a programming language that could work on different platforms regardless of the underlying hardware? That was the idea behind Java. So programmer James Gosling and

(22:31):
his team set out to create a programming language that
could work on top of any device that was running
a Java virtual machine. While the resulting language wasn't used
in TVs at that time, it quickly became recognized as
a valuable tool for web development by the time the Java Development Kit debuted in nineteen ninety six. By then, the web was really starting to take off, but it was also pretty limited.

(22:54):
All sorts of computers were acting as servers on the Internet, running on different hardware and different operating systems, and there was a need for more dynamic, interesting, rich experiences online. But with everyone running different systems, it was impossible to guarantee that a web app would work for everyone. Java helped take that pressure off. It was a write once,

(23:16):
run anywhere programming language, and so the language was adopted widely on the web, with web browsers building in support for Java applets within the browser itself. This virtualization made rich Internet experiences possible. As it turns out, the Internet in general and the Web in particular would create the
perfect environment for the evolution of virtualization. This appeals to

(23:39):
common sense. After all, the Internet is a network of networks, and each network can potentially include millions of computers running on different operating systems and hardware. When we come back, I'll

(24:00):
talk about how virtualization really took off in the late
nineties and what's been going on up to today. But
first let's take a quick break. In a way, you
could say that the Internet made the evolution of virtualization

(24:21):
not just possible but necessary. Typically, I T departments like
to dedicate servers to a specific task. They handle that
one task and nothing else. This dedication helps keep things
streamlined and reduces the chance that different tasks will start
to interfere with one another and cost stability issues. But

(24:41):
as the Internet was growing faster and faster, it became
clear that this particular strategy was going to be unsustainable.
It was just getting too big and too complex. Servers
are expensive and they're not just expensive to purchase, they're
also expensive to operate and maintain. If you're dedicating a
single server to every single task and you're adding to

(25:05):
the number of services you have on offer, your server
room is going to get really crowded really quickly. Those
machines take up physical space. Then you have other considerations
you have to take into account, such as having a
sufficient cooling system to keep everything operational, because computers don't
do well when they overheat, and as you add more machines,
you're generating more and more heat, so you've got to

(25:26):
cool them down more effectively. Then you have to figure
out how much electricity these things are gobbling up. It's getting more expensive as you're adding more servers, and then there's the question of downtime. It's quite possible
that some servers will be in less demand than others.
So you might have one server operating at about fifteen

(25:48):
percent of its overall capacity and it's essentially idling the rest of the day. That's not an efficient use of resources. If you've got fifty servers but all of them are working at fifteen percent, you're saying, wow, I'm really inefficient with how I'm using these machines.
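As a rough, back-of-the-envelope sketch of that inefficiency (the target utilization figure here is illustrative, not from the episode): if fifty workloads each keep a dedicated server about fifteen percent busy, consolidating them onto virtual servers packed to roughly seventy-five percent utilization would need only about ten physical hosts.

```python
# Back-of-the-envelope consolidation math. All figures are illustrative.
import math

def hosts_needed(num_workloads, avg_utilization, target_utilization):
    """How many hosts are needed if each workload uses avg_utilization of one
    host and we are willing to pack hosts up to target_utilization?"""
    total_demand = num_workloads * avg_utilization   # measured in whole hosts
    return math.ceil(total_demand / target_utilization)

if __name__ == "__main__":
    dedicated = 50                                # one server per task
    consolidated = hosts_needed(50, 0.15, 0.75)   # 50 * 0.15 / 0.75 = 10
    print(f"dedicated servers:  {dedicated}")
    print(f"virtualized hosts:  {consolidated}")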
Virtualization would prove to be a solution to this problem, but it wasn't

(26:10):
an easy solution. One of the big challenges facing developers at that time was that the x86 architecture that many modern computers rely upon wasn't designed with virtualization in mind, and there were some hurdles to overcome. And
you may have heard of x86 and maybe

(26:31):
you wonder what the heck that actually means. It's a reference to Intel's 8086 microprocessor, which originally debuted in nineteen seventy eight. The 8086 architecture is the foundation for successive processors such as the 80286, the 80386, and the 80486. So if you ever heard someone talking about

(26:52):
like a 386 or a 486 computer, they
were actually talking about this particular architecture. It's a computer
that's based off this microprocessor architecture, which was largely designed
so that it could be backwards compatible while adding to
the various features and the processing speed generation to generation.
More importantly, it became the dominant architecture for computing platforms,

(27:17):
including Internet servers. But as I said, that architecture was not created with virtualization in mind, and there were several key instructions that, when virtualized, would tend to cause an x86 system to crash. So while there was a
a growing need for virtualization as the Internet was growing,

(27:38):
there was this challenge of implementing virtualization without actually making
everything crash all the time. Enter VMware. Now, I'll have to do a full episode on the company VMware someday. It's known for its virtualization software. In two thousand one, VMware would introduce virtualization platforms for

(28:00):
Internet servers that handled the operations of virtualization in a
way that wouldn't trigger these system crashes. So now you
could run virtualization software on x86 architecture systems and not have to worry about them just completely crapping out on you at a moment's notice. At first, the products were actually slow to catch on. It was not

(28:22):
yet apparent how useful they would be, but that would change,
and VMware's early entry into the space would mean that the company would end up holding a dominant portion of the virtualization market even as other software companies caught on to it. Now, to avoid this episode just becoming a list of release dates for virtualization software, I'm gonna summarize

(28:45):
this to say that companies like Virtutech, AMD, Connectix, Virtus, Sun, and then later Oracle when it would buy Sun Microsystems, also Microsoft itself, all of them released various virtualization solutions over the years. Some were
built to take advantage of changes in architecture, such as

(29:06):
sixty four bit instruction sets. But beyond these technical specifications,
there's not really that much to talk about. So I
think it's better to go back to a broad picture,
because otherwise, all I'm telling you is that the virtualization
software improved so that it could take advantage of improvements
in microprocessor design. That gets really boring really fast. So

(29:29):
how did virtualization actually help in the Internet age? Well,
by using this special software, an IT admin could
take a single existing Internet server and then create multiple
virtual servers, and then each virtual server would perform as
if it was its own physical machine, similar to the
way previous implementations of virtualization would do. But you know,

(29:52):
in those cases you were talking about running a different
operating system on top of a machine or something like that.
But in this case, all of the virtual servers would
exist on top of a single device, and with careful planning,
you can build out virtual servers to make each physical machine more efficient by making it run closer to capacity. Right, so instead of running at a small fraction of capacity, you might have it

(30:15):
running at seventy or eighty percent capacity. Now, you never
really want to run at full capacity because you could have response time issues. If the demand is exceeding the server's ability to serve up data, then you're
gonna have performance problems. But if you could get it
closer to capacity, then you have less downtime for your

(30:37):
machine and you're making better use of your equipment. And
with the right virtual machine management software, each individual virtual
server is totally partitioned from the others, which helps you
maintain security and stability. If one virtual server does go down,
the other virtual servers on top of that same physical
machine should be able to continue to operate without being affected.

(30:59):
So let's say you're a company and you have four
different apps. This is just a very basic example. So
you've got a server and you've divided it into four virtual servers. If the virtual server for one of those apps fails, the other three apps should still be able to operate just fine. Meanwhile, you've got an IT admin frantically trying to fix the problem

(31:22):
on the virtual server of the affected app and get
it back online as fast as possible. But ideally you're
doing this in a way where your other operations aren't
being affected at all. So not only could you get
better use out of your hardware using virtualization, you could
also reduce the number of physical servers you had in
your data centers, and this would become increasingly important as

(31:42):
cloud services came on the scene and grew in popularity.
A large cloud service, whether it's offering an operational platform,
or storage space or something else, requires an awful lot
of hardware, but that requirement would be gargantuan if it
weren't for virtualization. For example, one of the many important
concepts in computing is redundancy. That is, making sure the

(32:05):
service you offer remains available even in the event of
a failure in the system. And this is true for
all sorts of services, but it's probably easiest to understand
if we take something that most of us have had
experience with. So let's talk about cloud storage. Let's say
you're using an online data storage service like Google Drive,
so you've stored some files on Google Drive. You've created

(32:26):
numerous documents, or you've uploaded files into your personal Google Drive,
and you expect to be able to access those files
whenever and wherever you need them, assuming you have some
sort of Internet connection, and you know that your files
are not living on your own personal computer or device. They're living in the cloud, so you can use whatever
device you want to connect to those files. So the

(32:48):
files live somewhere on some server that is owned and
operated by Google, and that's partly true, but it's more
accurate to say that those files live on multiple servers owned and operated by Google, because hardware occasionally fails, and
if that were to happen to the machine that's holding
your files, you wouldn't be able to get to your stuff,

(33:09):
and that would be a problem for you and thus
a problem for Google. So to avoid that, Google has
copies of all of your files, and they're spread across
numerous servers to provide redundancy. If one server should fail,
Google can switch to a different one to serve up
your files to you when you request them, and you
get what you want and you're none the wiser that

(33:30):
anything is wrong on the back end. Redundancy is important
for any online service, and virtualization can help reduce the
physical hardware requirements for redundancy. So now, let's say you're
the one that's running Google Drive. It's under your direction,
and while you'd be using virtual servers, you would likely
still want to store the same information from the same

(33:53):
client on different physical machines. So let's say you've got
two physical servers. You've got server one and server two,
and each physical server has four virtual servers on it.
So server one has virtual servers A, B, C, and D, and server two has virtual servers E, F, G, and H. Now,
technically you could dedicate virtual servers A and B, both

(34:16):
of those living on the same physical machine, to act as backups for each other, and if something were to go wrong in one of the virtual environments, it wouldn't affect the other one; they're partitioned from each other. However, if something were to go wrong with the underlying physical hardware, the actual machine that's running everything, you'd be out of
luck because you'd lose both your copies or you'd at

(34:37):
least not be able to access them until you've
repaired whatever the problem was. So for that reason, you
want to spread the load around your physical machines, and
you want to keep everything running at a good efficiency,
so you want to use virtual servers to reduce downtime.
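Here is a minimal sketch of that placement rule, sometimes called anti-affinity: when choosing where a backup copy should live, pick a virtual server whose physical host is different from the primary's. The server and host names are made up to mirror the example above.

```python
# Sketch of an anti-affinity rule: a backup copy must live on a virtual
# server whose physical host differs from the primary's, so one hardware
# failure can't take out both copies. Names are illustrative.

def pick_backup(primary, placement):
    """placement maps virtual server name -> physical host it runs on.
    Returns a virtual server on a different physical host, or None."""
    primary_host = placement[primary]
    for vm, host in placement.items():
        if vm != primary and host != primary_host:   # the anti-affinity check
            return vm
    return None

if __name__ == "__main__":
    # Mirrors the episode's example: server one hosts A-D, server two hosts E-H.
    placement = {"A": "server one", "B": "server one", "C": "server one",
                 "D": "server one", "E": "server two", "F": "server two",
                 "G": "server two", "H": "server two"}
    print(pick_backup("A", placement))   # prints "E", never B, C, or D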
And of course virtualization is used for all types of
cloud services, including app development. In fact, that's a

(34:59):
great use of virtualization as it gives developers the chance
to program in an environment where they are reassured they're
not gonna mess everything else up if something goes wrong.
And so there are companies out there that get massive
amounts of revenue from customers who are essentially renting out virtual server space to develop apps on top of before they develop and deploy those apps

(35:21):
into the wild to get them to actual customers. Now,
I've talked a lot about why virtualization is useful, but
it also presents challenges. It's not a perfect solution to
all problems. It also presents opportunities for problems to arise,
and one of those is in security. If someone is
able to get access to the underlying hardware, either physically

(35:43):
or remotely, then they could potentially affect all the virtual
servers that are running on that physical machine. So instead
of corrupting one service, a bad actor could possibly affect several services. So security is a big concern in
the virtualization world and in the information world in general.

(36:03):
Another challenge is that there's a lack of established standards
in virtualization, though there's movement in the space to change that,
and that means that a data center might use several
different virtualization strategies, which in turn means the IT
department has to be able to handle all that, and
this can get pretty tough as organizations and systems grow
more complicated. There's also the problem that as systems grow,

(36:25):
things can be forgotten. Just keeping track of all those
virtual machines and what's running on each of them can
become challenging in itself, and with a complicated enough system,
it's possible for virtual servers to go overlooked. And if a server is overlooked and no one is using it, that brings
down the operational efficiency of the physical machine that's running
that server, and it's hard to get around performance issues

(36:47):
as well. A dedicated physical server that is only running
one task can more efficiently handle that task than a
computer that has a virtual layer on top of the
physical layer. Think of it sort of like getting permission from a boss. In a smooth operation, you just go
to your boss and you ask for permission to do something,
and you get it or you don't get it. But

(37:09):
in a company that has, say, a bloated middle management layer,
maybe first you have to go through a lower level
manager who doesn't have that authority, but you still have
to talk to them before you can get access to
the boss who actually does have the authority to say
yes or no to your request. It slows things down. Well,
virtual platforms can also slow things down a little bit,

(37:33):
so that's another potential drawback to virtualization. However, overall, virtualization
has been a huge reason why we've seen such explosive
growth in online services, and it continues to be important
for developers who are creating programs for multiple platforms. And
so I hope this episode has sort of given you

(37:56):
an idea of what virtualization is all about, where it
came from, and why it's important. It's definitely something
that has enabled us to have a world where we
have discussions about things like the Internet of Things. Without virtualization,
we wouldn't have the ability to support all those services

(38:16):
because you wouldn't have a data center large enough to
hold all the actual physical servers you would need to
dedicate to each and every task. It would be a
terrible waste of space, of energy, of resources, of money.
So virtualization has really helped create an efficient and cost
effective approach to deploying numerous services across the Internet. So

(38:42):
we'll probably see a lot more development in that space, we'll see more competition in it, although the big players will probably hold on to a pretty large market share. They've got time on their side and their reputations are built on that. So I'll probably do more episodes about virtualization or the companies involved in it in the future.

(39:04):
In the meantime, thank you so much for this suggestion.
I greatly appreciate it. It's the sort of topic that
I used to cover all the time on TechStuff, but it's been a while since I've covered one like this. So Martin, thank you very much. I hope you enjoyed
this episode, and I hope the rest of you were
able to learn something from it. If you guys have
suggestions for future episodes, you can send me an email.

(39:26):
The address is tech stuff at how stuff works dot com.
Drop on by our website that's tech stuff podcast dot com.
There you're gonna find an archive of all of our
past episodes, as well as links to things like our
social media presence and our online store, where every purchase
you make goes to help the show and we greatly
appreciate it, and I'll talk to you again really soon.

(39:56):
TechStuff is a production of iHeartRadio's How Stuff Works. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
