Virtualization tools: virtual machines. Implementation of virtualization tools as a solution for centralized management of enterprise infrastructure

Virtual environment concept

A newer direction of virtualization that provides an overall, holistic picture of the entire network infrastructure by means of aggregation techniques.

Types of virtualization

Virtualization is a general term that covers the abstraction of resources for many aspects of computing. The types of virtualization are given below.

Software virtualization

Dynamic translation

In dynamic translation (binary translation), problematic instructions of the guest OS are intercepted by the hypervisor. After these instructions are replaced with safe ones, control returns to the guest OS.
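As a rough sketch of the idea (not how a real hypervisor is implemented), the Python fragment below treats guest code as a list of textual instructions and rewrites the "privileged" ones into calls that a hypothetical monitor would emulate; the opcode set and the vmm_emulate name are invented for illustration.

```python
# Toy model of dynamic (binary) translation: guest "instructions" are
# plain strings, and privileged ones are rewritten into safe calls to a
# hypothetical virtual machine monitor before the block is executed.

PRIVILEGED = {"cli", "sti", "hlt", "out"}   # assumed "unsafe" opcodes

def translate_block(block):
    """Rewrite privileged instructions into hypervisor traps."""
    translated = []
    for insn in block:
        opcode = insn.split()[0]
        if opcode in PRIVILEGED:
            translated.append(f"vmm_emulate {insn}")  # replaced with a safe call
        else:
            translated.append(insn)                   # harmless, kept as-is
    return translated

guest_code = ["mov eax, 1", "cli", "add eax, 2", "hlt"]
print(translate_block(guest_code))
# ['mov eax, 1', 'vmm_emulate cli', 'add eax, 2', 'vmm_emulate hlt']
```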

Paravirtualization

Paravirtualization is a virtualization technique in which guest operating systems are prepared for execution in a virtualized environment by slightly modifying their kernels. Instead of using resources such as the memory page table directly, the operating system communicates with the hypervisor, which provides it with a guest API.

The paravirtualization method achieves higher performance than the dynamic translation method.
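A minimal Python model of the hypercall-style guest API described above is sketched below; every class and method name is invented, and a real hypervisor validates and applies such requests in far more elaborate ways.

```python
# Purely illustrative model of paravirtualization: the modified guest
# kernel never touches the page table directly; it asks the hypervisor
# through a hypercall API instead. All names here are made up.

class Hypervisor:
    def __init__(self):
        self.page_table = {}                  # real mapping, owned by the VMM

    def hypercall_map_page(self, guest, virt, phys):
        # the hypervisor validates the request before applying it
        if phys in self.page_table.values():
            raise PermissionError("page already owned by another guest")
        self.page_table[(guest, virt)] = phys

class ParavirtGuestKernel:
    def __init__(self, name, hypervisor):
        self.name, self.hv = name, hypervisor

    def map_page(self, virt, phys):
        # in the modified kernel, a hypercall replaces the privileged operation
        self.hv.hypercall_map_page(self.name, virt, phys)

hv = Hypervisor()
guest = ParavirtGuestKernel("vm1", hv)
guest.map_page(0x1000, 0x8000)
print(hv.page_table)                          # {('vm1', 4096): 32768}
```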

The paravirtualization method is applicable only if the guest OS is open source and may be modified under its license, or if the hypervisor and the guest OS were developed by the same vendor with paravirtualization of the guest in mind (in which case, provided a hypervisor can run on top of a lower-level hypervisor, the hypervisor itself can also be paravirtualized).

The term first appeared in the Denali project.

Built-in virtualization

Advantages:

  • Sharing of resources between the two operating systems (directories, printers, etc.).
  • A user-friendly interface for application windows from different systems (overlapping application windows, the same window minimization as in the host system).
  • When fine-tuned for the hardware platform, performance differs little from that of the native OS, and switching between systems is fast (less than 1 s).
  • A simple procedure for updating the guest OS.
  • Two-way virtualization (applications on one system run on the other and vice versa).


Hardware virtualization

Advantages:

  • Simplified development of virtualization software platforms, thanks to hardware management interfaces and support for virtual guest systems. This reduces the labor and time required to develop virtualization systems.
  • The ability to increase the performance of virtualization platforms. Virtual guest systems are managed directly by a small middleware layer, the hypervisor, which improves performance.
  • Improved security: it becomes possible to switch between several running, independent virtualization platforms at the hardware level. Each virtual machine can operate independently, in its own hardware resource space, completely isolated from the others. This eliminates the performance losses associated with maintaining a host platform and increases security.
  • The guest system becomes independent of the architecture of the host platform and of the implementation of the virtualization platform. For example, hardware virtualization makes it possible to run 64-bit guests on 32-bit host systems (with 32-bit virtualization environments on the hosts).

Application examples:

  • testing laboratories and training: it is convenient to test applications that affect operating system settings (for example, installers) in virtual machines. Because virtual machines are easy to deploy, they are often used for training in new products and technologies.
  • distribution of pre-installed software: many software developers create ready-made virtual machine images with pre-installed products and provide them on a free or commercial basis. Such services are offered by VMware VMTN and Parallels PTN.

Server virtualization

  1. placement of several logical servers within one physical server (consolidation)
  2. combining several physical servers into one logical server to solve a specific problem (examples: Oracle Real Application Cluster, grid technologies, high-performance clusters).

Implementations include:

  • SVISTA
  • twoOStwo
  • Red Hat Enterprise Virtualization for Servers
  • PowerVM

In addition, server virtualization makes it easier to restore failed systems on any available computer, regardless of its specific configuration.

Workstation virtualization

Resource virtualization

  • Resource sharing (partitioning). Resource virtualization can be thought of as dividing one physical server into several parts, each of which is visible to its owner as a separate server. It is not a virtual machine technology; it is implemented at the OS kernel level.

In systems with a type-2 (hosted) hypervisor, both operating systems (the guest and the host OS running the hypervisor) consume physical resources and require separate licensing. Virtual servers operating at the OS kernel level lose almost no performance, which makes it possible to run hundreds of virtual servers on one physical server, none of which require additional licenses.

Partitioning divides one resource, such as disk space or network bandwidth, into a number of smaller components of the same type that are easier to use.

One example of resource partitioning is the OpenSolaris Crossbow project, which allows several virtual network interfaces to be created on top of one physical interface.

  • Aggregating, distributing, or adding multiple resources into larger resources or resource pools (see the sketch below). For example, symmetric multiprocessor systems combine many processors; RAID arrays and disk managers combine many disks into one large logical drive; network equipment combines multiple channels so that they appear as a single high-bandwidth channel. At the meta level, computer clusters do all of the above. Sometimes this also includes network file systems abstracted from the data storage on which they are built, for example VMware VMFS, Solaris/OpenSolaris ZFS, and NetApp WAFL.
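As a toy illustration of aggregation in the spirit of RAID-0, the sketch below stripes data across several in-memory "disks" so that they behave like one larger logical volume; the stripe size and data structures are purely illustrative.

```python
# Toy resource aggregation: several small "disks" (byte arrays) are
# combined into one larger logical volume by striping data across them.

STRIPE = 4  # bytes per stripe unit (unrealistically small, for clarity)

def write_striped(disks, data):
    """Spread `data` across `disks` in round-robin stripe units."""
    for i in range(0, len(data), STRIPE):
        disks[(i // STRIPE) % len(disks)].extend(data[i:i + STRIPE])

def read_striped(disks, length):
    """Reassemble the logical volume from its stripes."""
    out, unit = bytearray(), 0
    while len(out) < length:
        disk = disks[unit % len(disks)]
        start = (unit // len(disks)) * STRIPE
        out.extend(disk[start:start + STRIPE])
        unit += 1
    return bytes(out[:length])

disks = [bytearray(), bytearray(), bytearray()]       # three small "disks"
payload = b"resource aggregation makes many disks look like one"
write_striped(disks, payload)
assert read_striped(disks, len(payload)) == payload   # one logical volume
```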

Application Virtualization

Advantages:

  • application execution isolation: absence of incompatibilities and conflicts;
  • the application runs in its original form every time: the registry is not cluttered and no configuration files are left behind, which is especially useful on servers;
  • lower resource consumption compared to emulating the entire OS.


The history of virtualization technologies goes back more than forty years. However, after a period of their triumphant use in the 70s and 80s of the last century, primarily on IBM mainframes, this concept faded into the background when creating corporate information systems. The fact is that the very concept of virtualization is associated with the creation of shared computing centers, with the need to use a single set of hardware to form several different logically independent systems. And since the mid-80s, a decentralized model of organizing information systems based on mini-computers and then x86 servers began to dominate in the computer industry.

Virtualization for x86 architecture

In the personal computers that appeared over time, the problem of virtualization of hardware resources, it would seem, did not exist by definition, since each user had at his disposal the entire computer with his own OS. But as PC power increased and the scope of x86 systems expanded, the situation quickly changed. The “dialectical spiral” of development took its next turn, and at the turn of the century, another cycle of strengthening centripetal forces to concentrate computing resources began. At the beginning of this decade, against the background of growing interest of enterprises in increasing the efficiency of their computer resources, a new stage in the development of virtualization technologies began, which is now mainly associated with the use of x86 architecture.

It should be immediately emphasized that although there seemed to be nothing previously unknown in the ideas of x86 virtualization in theoretical terms, we were talking about a qualitatively new phenomenon for the IT industry compared to the situation 20 years ago. The fact is that in the hardware and software architecture of mainframes and Unix computers, virtualization issues were immediately resolved at a basic level. The x86 system was not built with the expectation of working in data center mode, and its development in the direction of virtualization is a rather complex evolutionary process with many different options for solving the problem.

Another and perhaps even more important point is the fundamentally different business models for the development of mainframes and x86. In the first case, we are actually talking about a single-vendor software and hardware complex to support a generally rather limited range of application software for a not very wide range of large customers. In the second, we are dealing with a decentralized community of equipment manufacturers, basic software providers and a huge army of application software developers.

The use of x86 virtualization tools began in the late 90s with workstations: along with the growing number of client OS versions, the number of people (software developers, technical support specialists, software experts) who needed several copies of various OSes on one PC at once was constantly growing.

Virtualization for server infrastructure began to be used a little later, primarily in connection with consolidating computing resources, and here two independent directions formed at once:

  • support for heterogeneous operating environments (including for running legacy applications). This case most often occurs within corporate information systems. Technically, the problem is solved by running several virtual machines simultaneously on one computer, each containing an instance of an operating system. Implementations follow two fundamentally different approaches: full virtualization and paravirtualization;
  • support for homogeneous computing environments, which is most typical of application hosting by service providers. Here the virtual machine option can be used, but it is much more effective to create isolated containers based on a single OS kernel.

The next stage in the life of x86 virtualization technologies started in 2004-2006 and was associated with the beginning of their mass use in corporate systems. Accordingly, while developers had previously been concerned mainly with creating technologies for executing virtual environments, the tasks of managing these solutions and integrating them into the overall corporate IT infrastructure now came to the fore. At the same time, there was a noticeable increase in demand from personal users (in the 90s these were developers and testers; now they are end users, both professional and home).

To summarize the above, we can highlight the following main scenarios for the use of virtualization technologies by customers:

  • software development and testing;
  • modeling the operation of real systems on research stands;
  • consolidation of servers in order to increase the efficiency of equipment use;
  • consolidation of servers to solve the problems of supporting legacy applications;
  • demonstration and study of new software;
  • deployment and updating of application software in the context of existing information systems;
  • work of end users (mainly home users) on PCs with heterogeneous operating environments.

Basic software virtualization options

We have already said earlier that the problems of developing virtualization technologies are largely related to overcoming the inherited features of the x86 software and hardware architecture. And there are several basic methods for this.

Full virtualization (full, or native, virtualization). Unmodified instances of guest operating systems are used, and their execution is supported by a common emulation layer running on top of the host OS, which is an ordinary operating system (Fig. 1). This technology is used, in particular, in VMware Workstation, VMware Server (formerly GSX Server), Parallels Desktop, Parallels Server, MS Virtual PC, MS Virtual Server, and Virtual Iron. The advantages of this approach include relative ease of implementation and the versatility and reliability of the solution; all management functions are taken over by the host OS. The disadvantages are high additional overhead on hardware resources, failure to take the features of the guest OS into account, and less flexibility in the use of the hardware than needed.

Paravirtualization. The guest OS kernel is modified so that it includes a new set of APIs through which it can work directly with the hardware without conflicting with other virtual machines (VMs; Fig. 2). In this case there is no need to use a full-fledged OS as the host software; its functions are performed by a special system called a hypervisor. This option is today the most topical direction in the development of server virtualization technologies and is used in VMware ESX Server, Xen (and solutions from other vendors based on this technology), and Microsoft Hyper-V. The advantages of this technology are the absence of a host OS (VMs are installed essentially on bare metal) and efficient use of hardware resources. The disadvantages are the complexity of the approach and the need to develop a specialized hypervisor OS.

Virtualization at the OS kernel level (operating-system-level virtualization). This option involves using a single host OS kernel to create independent, parallel operating environments (Fig. 3). For the guest software, only its own network and hardware environment is created. This option is used in Virtuozzo (for Linux and Windows), OpenVZ (a free version of Virtuozzo), and Solaris Containers. The advantages are highly efficient use of hardware resources, low technical overhead, excellent manageability, and minimal license costs. The disadvantage is that only homogeneous computing environments can be implemented.
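The core mechanism can be hinted at with a deliberately crude sketch: the Unix chroot call (exposed in Python as os.chroot) confines a process subtree to its own filesystem root. Real OS-level platforms such as Virtuozzo, OpenVZ, or Solaris Containers add resource limits, separate process and network spaces, and management tooling on top of this idea; the directory path below is an assumption, and the script must run as root.

```python
# Crude sketch of OS-level isolation: one host kernel, a confined
# filesystem view per "container". Not a complete container runtime.

import os

def run_in_container(new_root, command):
    pid = os.fork()
    if pid == 0:                        # child: becomes the "container"
        os.chroot(new_root)             # its filesystem root is now new_root
        os.chdir("/")
        os.execvp(command[0], command)  # replace the child with the workload
    os.waitpid(pid, 0)                  # parent: wait for the container

# assumed, pre-populated root filesystem for the container:
# run_in_container("/srv/containers/web1", ["/bin/sh", "-c", "ls /"])
```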

Application virtualization implies a model of strong isolation of application programs with controlled interaction with the OS, in which each application instance is virtualized along with all of its main components: files (including system files), the registry, fonts, INI files, COM objects, and services (Fig. 4). The application is executed without an installation procedure in the traditional sense and can be launched directly from external media (for example, from flash cards or network folders). From an IT department's perspective this approach has obvious benefits: faster deployment and management of desktop systems, and minimization not only of conflicts between applications but also of the need for application compatibility testing. This virtualization option is used in the Sun Java Virtual Machine, Microsoft Application Virtualization (formerly called SoftGrid), Thinstall (which became part of VMware in early 2008), and Symantec/Altiris products.
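One ingredient of this model, redirecting an application's file accesses into a private sandbox, can be sketched as follows. Real products do this transparently by intercepting OS APIs; here the redirection is explicit, and every path and file name is made up.

```python
# Conceptual sketch of application virtualization: the application's
# writes always land in its own sandbox, and reads prefer the sandboxed
# copy, so the real system stays untouched.

import os

SANDBOX = "/tmp/appv-sandbox/myapp"     # assumed per-application sandbox

def virtual_path(path):
    """Map an absolute path the application asks for into its sandbox."""
    return os.path.join(SANDBOX, path.lstrip("/"))

def open_virtualized(path, mode="r"):
    vpath = virtual_path(path)
    if "w" in mode or "a" in mode:
        os.makedirs(os.path.dirname(vpath), exist_ok=True)
        return open(vpath, mode)        # writes never touch the real file
    # reads prefer the sandboxed copy, falling back to the real file
    return open(vpath if os.path.exists(vpath) else path, mode)

with open_virtualized("/etc/myapp.ini", "w") as f:   # hypothetical config file
    f.write("cache=off\n")
print(virtual_path("/etc/myapp.ini"))   # /tmp/appv-sandbox/myapp/etc/myapp.ini
```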

Questions about choosing a virtualization solution

To say: “product A is a solution for software virtualization” is not at all enough to understand the real capabilities of “A”. To do this, you need to take a closer look at the various characteristics of the products offered.

The first of them is related to the support of various operating systems as host and guest systems, as well as the ability to run applications in virtual environments. When choosing a virtualization product, the customer must also keep in mind a wide range of technical characteristics: the level of application performance loss as a result of the appearance of a new operating layer, the need for additional computing resources to operate the virtualization mechanism, and the range of supported peripherals.

In addition to creating mechanisms for executing virtual environments, systems management tasks are coming to the fore today: converting physical environments into virtual ones and vice versa, restoring a system in case of failure, transferring virtual environments from one computer to another, deploying and administering software, ensuring security, etc.

Finally, the cost indicators of the virtualization infrastructure are important. It should be borne in mind that the main element in the cost structure may be not so much the price of the virtualization tools themselves as the possibility of saving on licenses for base OSes or business applications.

Key players in the x86 virtualization market

The market for virtualization tools began to take shape less than ten years ago and has today acquired quite a definite shape.

Created in 1998, VMware is one of the pioneers in the use of virtualization technologies for x86 computers and today occupies the leading position in this market (by some estimates, its share is 70-80%). Since 2004 it has been a subsidiary of EMC Corporation, but it operates independently under its own brand. According to EMC, VMware's workforce grew during this time from 300 to 3,000 people, and sales doubled annually. According to officially announced information, the company's annual income (from the sale of virtualization products and related services) is now approaching $1.5 billion. These figures reflect well the overall growth in market demand for virtualization tools.

Today VMware offers a comprehensive third-generation virtualization platform, VMware Virtual Infrastructure 3, which includes tools both for the individual PC and for the data center. The key component of this software package is the VMware ESX Server hypervisor. Companies can also take advantage of the free VMware Server product, which is available for pilot projects.

Parallels is the new (as of January 2008) name of SWsoft, which is also a veteran of the virtualization technology market. Its key product is Parallels Virtuozzo Containers, an OS-level virtualization solution that allows you to run multiple isolated containers (virtual servers) on a single Windows or Linux server. To automate the business processes of hosting providers, the Parallels Plesk Control Panel tool is offered. In recent years, the company has been actively developing desktop virtualization tools - Parallels Workstation (for Windows and Linux) and Parallels Desktop for Mac (for Mac OS on x86 computers). In 2008, it announced the release of a new product - Parallels Server, which supports the server mechanism of virtual machines using different operating systems (Windows, Linux, Mac OS).

Microsoft entered the virtualization market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then it has consistently increased the range of offerings in this area and today has almost completed the formation of a virtualization platform, which includes the following components:

  • Server virtualization. Two different technological approaches are offered here: Microsoft Virtual Server 2005 and the new Hyper-V Server solution (currently in beta).
  • Virtualization for PCs. Performed using the free Microsoft Virtual PC 2007 product.
  • Application virtualization. For such tasks, the Microsoft Application Virtualization system (formerly called SoftGrid) is offered.
  • Presentation virtualization. Implemented using Microsoft Windows Server Terminal Services; in general, this is the long-familiar terminal access mode.
  • Integrated management of virtual systems. The key role here belongs to System Center Virtual Machine Manager, released late last year.

Sun Microsystems offers a multi-tiered set of technologies: traditional OS, resource management, OS virtualization, virtual machines and hard partitions. This sequence is built on the principle of increasing the level of application isolation (but at the same time reducing the flexibility of the solution). All Sun virtualization technologies are implemented within the Solaris operating system. In hardware terms, there is support for x64 architecture everywhere, although UltraSPARC-based systems are initially better suited for these technologies. Other operating systems, including Windows and Linux, can be used as virtual machines.

Citrix Systems Corporation is a recognized leader in remote application access infrastructures. It seriously strengthened its position in virtualization technologies by purchasing XenSource, the developer of Xen, one of the leading operating system virtualization technologies, in 2007 for $500 million. Just ahead of this deal, XenSource introduced a new version of its flagship product, XenEnterprise, based on the Xen 4 kernel. The acquisition caused some confusion in the IT industry, since Xen is an open source project and its technologies underlie commercial products from vendors such as Sun, Red Hat, and Novell. There is still some uncertainty about Citrix's position in the future promotion of Xen, including in marketing terms. The company's first product based on Xen technology, Citrix XenDesktop (for PC virtualization), is scheduled for release in the first half of 2008; an updated version of XenServer is expected after that.

In November 2007, Oracle announced its entry into the virtualization market, introducing software called Oracle VM for virtualizing server applications from this corporation and other manufacturers. The new solution includes an open source server software component and an integrated browser-based management console for creating and managing virtual pools of servers running on systems based on x86 and x86-64 architectures. Experts saw this as Oracle's reluctance to support users who run its products in virtual environments from other manufacturers. It is known that the Oracle VM solution is implemented based on the Xen hypervisor. The uniqueness of this move by Oracle lies in the fact that this seems to be the first time in the history of computer virtualization that the technology is actually tailored not to the operating environment, but to specific applications.

The virtualization market through the eyes of IDC

The x86 architecture virtualization market is at a stage of rapid development, and its structure has not yet been established. This complicates the assessment of its absolute indicators and comparative analysis of the products presented here. This thesis is confirmed by the IDC report “Enterprise Virtualization Software: Customer Needs and Strategies” published in November last year. Of greatest interest in this document is the option for structuring server virtualization software, in which IDC identifies four main components (Fig. 5).

Virtualization platform. It is based on a hypervisor, as well as basic resource management elements and an application programming interface (API). Key characteristics include the number of sockets and number of processors supported by one virtual machine, the number of guests available under one license, and the range of supported operating systems.

Managing virtual machines. Includes tools for managing host software and virtual servers. It is here that the most noticeable differences in vendor offerings appear today, both in the set of functions and in scalability. But IDC is confident that the capabilities of the leading vendors' tools will quickly level out, and physical and virtual servers will be managed through a single interface.

Virtual machine infrastructure. A wide range of additional tools that perform tasks such as software migration, automatic restart, load balancing of virtual machines, etc. According to IDC, it is the capabilities of this software that will decisively influence the choice of suppliers by customers, and it is at the level of these tools that the battle will be waged between vendors.

Virtualization solutions. A set of products that enable the above-mentioned core technologies to be linked to specific types of applications and business processes.

In its general analysis of the market situation, IDC identifies three camps of participants. The first divide is between those who virtualize above the OS (SWsoft and Sun) and those who virtualize below the OS (VMware, XenSource, Virtual Iron, Red Hat, Microsoft, Novell). The first option produces the most efficient solutions in terms of performance and additional resource costs, but implements only homogeneous computing environments. The second makes it possible to run several operating systems of different types on one computer. Within the second group, IDC draws another line, separating suppliers of standalone virtualization products (VMware, XenSource, Virtual Iron) from manufacturers of operating systems that include virtualization tools (Microsoft, Red Hat, Novell).

From our point of view, the market structuring proposed by IDC is not very accurate. Firstly, for some reason IDC does not highlight the presence of two fundamentally different types of virtual machines - using a host OS (VMware, Virtual Iron, Microsoft) and a hypervisor (VMware, XenSource, Red Hat, Microsoft, Novell). Secondly, if we talk about the hypervisor, it is useful to distinguish between those who use their own core technologies (VMware, XenSource, Virtual Iron, Microsoft) and those who license others (Red Hat, Novell). And finally, it must be said that SWsoft and Sun have in their arsenal not only virtualization technologies at the OS level, but also tools for supporting virtual machines.

Annotation: Information technologies have brought many useful and interesting things to the life of modern society. Every day, inventive and talented people come up with more and more new applications for computers as effective tools for production, entertainment and collaboration. Many different software and hardware, technologies and services allow us to improve the convenience and speed of working with information every day. It is becoming more and more difficult to single out truly useful technologies from the stream of technologies falling upon us and learn to use them with maximum benefit. This lecture will talk about another incredibly promising and truly effective technology that is rapidly breaking into the world of computers - virtualization technology, which occupies a key place in the concept of cloud computing.

The purpose of this lecture is to present virtualization technologies, their terminology, types, and main advantages; to introduce the main solutions of the leading IT vendors; and to consider the features of the Microsoft virtualization platform.

Virtualization technologies

According to statistics, the average level of processor utilization for servers running Windows does not exceed 10%; for Unix systems this figure is better but nevertheless averages no more than 20%. The low efficiency of server utilization is explained by the "one application, one server" approach widely used since the early 90s, i.e., each time a company purchases a new server to deploy a new application. In practice this obviously means rapid growth of the server park and, as a consequence, rising costs of administration, energy consumption, and cooling, as well as the need for additional premises to install ever more servers and to purchase licenses for server OSes.

Virtualization of physical server resources makes it possible to distribute them flexibly between applications, each of which "sees" only the resources allocated to it and "believes" that a separate server has been allocated to it; that is, the "one server, several applications" approach is implemented, but without reducing the performance, availability, and security of server applications. In addition, virtualization solutions make it possible to run different operating systems on one server by emulating their system calls to the server's hardware resources.


Fig. 2.1.
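As a back-of-the-envelope illustration of the consolidation argument (the utilization and overhead figures below are assumptions in the spirit of the statistics above, not measurements):

```python
# Consolidation math: how many lightly loaded servers fit onto one
# virtualization host? All percentages are illustrative assumptions.

def consolidation_ratio(avg_util_pct, target_util_pct, overhead_pct=10):
    """Servers at avg_util_pct that fit on one host driven to
    target_util_pct, reserving overhead_pct for the hypervisor."""
    usable = target_util_pct - overhead_pct
    return usable // avg_util_pct

# Windows servers at ~10% load, one host driven to ~70%:
print(consolidation_ratio(10, 70))   # -> 6 guests per physical host
```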

Virtualization is based on the ability of one computer to perform the work of several computers by distributing its resources across multiple environments. With virtual servers and virtual desktops, you can host multiple operating systems and multiple applications in a single location. Thus, physical and geographical restrictions cease to have any meaning. In addition to saving energy and reducing costs through more efficient use of hardware resources, virtual infrastructure provides high levels of resource availability, more efficient management, enhanced security, and improved disaster recovery.

In a broad sense, virtualization means hiding the real implementation of a process or object from the one who uses it. The product of virtualization is something convenient to use that in fact has a more complex, or completely different, structure than the one perceived when working with the object. In other words, representation is separated from implementation. Virtualization is designed to abstract software from hardware.

In computer technology, the term “virtualization” usually refers to the abstraction of computing resources and the provision to the user of a system that “encapsulates” (hides) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter to him how the object is structured in reality.

Nowadays, the ability to run multiple virtual machines on a single physical machine is of great interest among computer professionals, not only because it increases the flexibility of the IT infrastructure, but also because virtualization actually saves money.

The history of the development of virtualization technologies goes back more than forty years. IBM was the first to think about creating virtual environments for various user tasks, then still on mainframes. In the 60s of the last century, virtualization was of purely scientific interest and was an original solution for isolating computer systems within a single physical computer. After the advent of personal computers, interest in virtualization weakened somewhat due to the rapid development of operating systems, which placed adequate demands on the hardware of that time. However, the rapid growth of computer hardware power in the late nineties of the last century forced the IT community to once again recall the technologies of virtualization of software platforms.

In 1999, VMware introduced virtualization technology for x86-based systems as an effective means of transforming them into a shared hardware infrastructure providing complete isolation, portability, and a wide choice of operating systems for application environments. VMware was one of the first to bet seriously and exclusively on virtualization; as time has shown, this was absolutely justified. Today VMware offers a comprehensive fourth-generation virtualization platform, VMware vSphere 4, which includes tools both for the individual PC and for the data center. The key component of this software package is the VMware ESX Server hypervisor. Later, companies such as Parallels (formerly SWsoft), Oracle (Sun Microsystems), and Citrix Systems (XenSource) joined the "battle" for a place in this fashionable direction of information technology development.

Microsoft entered the virtualization market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then it has consistently increased the range of offerings in this area and today has almost completed the formation of a virtualization platform, which includes solutions such as Windows Server 2008 R2 with the Hyper-V component, Microsoft Application Virtualization (App-V), Microsoft Virtual Desktop Infrastructure (VDI), Remote Desktop Services, and System Center Virtual Machine Manager.

Today, virtualization technology providers offer reliable and easy-to-manage platforms, and the market for these technologies is booming. According to leading experts, virtualization is now one of the three most promising computer technologies. Many experts predict that by 2015, about half of all computer systems will be virtual.

The current surge of interest in virtualization technologies is no accident. The computing power of today's processors is growing rapidly, and the question is not so much what to spend this power on as the fact that the modern "fashion" for dual-core and multi-core systems, which has already penetrated personal computers (laptops and desktops), is perfectly suited to realizing the rich potential of virtualizing operating systems and applications, raising the convenience of using a computer to a new qualitative level. Virtualization technologies are becoming one of the key components (including in marketing) of the newest and future processors from Intel and AMD, and of operating systems from Microsoft and a number of other companies.

Benefits of Virtualization

Here are the main advantages of virtualization technologies:

  1. Efficient use of computing resources. Instead of 3, or even 10, servers loaded at 5-20%, you can use one loaded at 50-70%. Among other things, this saves energy and significantly reduces financial investment: one high-performance server is purchased to perform the functions of 5-10 servers. Virtualization achieves significantly more efficient resource utilization because it pools standard infrastructure resources and overcomes the limitations of the legacy one-application-per-server model.
  2. Reduced infrastructure costs: virtualization reduces the number of servers and associated IT equipment in a data center. As a result, maintenance, power, and cooling requirements are reduced, and much less money is spent on IT.
  3. Reduced software costs. Some software manufacturers have introduced separate licensing schemes specifically for virtual environments. For example, by purchasing one license for Microsoft Windows Server 2008 Enterprise, you get the right to use it simultaneously on 1 physical server and 4 virtual ones (within one server), while Windows Server 2008 Datacenter is licensed only per processor and can be used simultaneously on an unlimited number of virtual servers.
  4. Increased flexibility and responsiveness of the system: virtualization offers a new method of managing IT infrastructure and helps IT administrators spend less time on repetitive tasks such as provisioning, configuration, monitoring, and maintenance. Many system administrators have experienced the trouble of a server crash: you cannot simply take out the hard drive, move it to another server, and start everything up as before; there is installation, driver hunting, configuration, and launch, all of which take time and resources. With a virtual server, instant launch is possible on any hardware, and if no suitable server is available, a ready-made virtual machine with an installed and configured server can be downloaded from libraries supported by companies that develop hypervisors (virtualization programs).
  5. Incompatible applications can run on the same computer. With virtualization, Linux and Windows servers, gateways, databases, and other applications that are completely incompatible within a single non-virtualized system can be installed on one server.
  6. Increased application availability and business continuity: with reliable backup and migration of entire virtual environments without service interruptions, you can reduce planned downtime and ensure rapid system recovery in critical situations. The "fall" of one virtual server does not lead to the loss of the remaining virtual servers. In addition, in the event of a failure of one physical server, it is possible to replace it automatically with a backup server, and this happens unnoticed by users, without rebooting. Business continuity is thereby ensured.
  7. Easy archiving. Since a virtual machine's hard drive is usually represented as a file of a specific format located on some physical medium, virtualization makes it possible to archive and back up the entire virtual machine by simply copying this file to backup media. The ability to restore the server completely from the archive is another great feature; you can also bring the server up from the archive, without destroying the current server, and inspect its state for a past period.
  8. Increased infrastructure manageability: centralized management of virtual infrastructure reduces server administration time and provides load balancing and "live" migration of virtual machines.

We will call a virtual machine a software or hardware environment that hides a real implementation behind its visible representation.

A virtual machine is a completely isolated software container that runs its own OS and applications, just like a physical computer. A virtual machine acts just like a physical computer and contains its own virtual (i.e., software-implemented) RAM, hard drive, and network adapter.

An OS cannot tell a virtual machine from a physical one. Neither can applications or other computers on the network; even the virtual machine considers itself a "real" computer. Nevertheless, virtual machines consist solely of software components and include no hardware, which gives them a number of unique advantages over physical systems.


Fig. 2.2.

Let's look at the main features of virtual machines in more detail:

  1. Compatibility. Virtual machines are generally compatible with all standard computers. Like a physical computer, a virtual machine runs its own guest operating system and runs its own applications. It also contains all the components standard for a physical computer (motherboard, video card, network controller, etc.). Therefore, virtual machines are fully compatible with all standard operating systems, applications and device drivers. A virtual machine can be used to run any software suitable for the corresponding physical computer.
  2. Isolation. Virtual machines can share the physical resources of a single computer and yet remain completely isolated from each other, as if they were separate physical machines. For example, if four virtual machines are running on one physical server and one of them fails, the availability of the remaining three machines is not affected. Isolation is an important reason why applications running in a virtual environment are much more available and secure than applications running on a standard, non-virtualized system.
  3. Encapsulation. Virtual machines completely encapsulate the computing environment. A virtual machine is a software container that bundles, or “encapsulates,” a complete set of virtual hardware resources, as well as the OS and all its applications, in a software package. Encapsulation makes virtual machines incredibly mobile and easy to manage. For example, a virtual machine can be moved or copied from one location to another just like any other program file. In addition, the virtual machine can be stored on any standard storage medium: from a compact USB flash memory card to enterprise storage networks.
  4. Hardware independence. Virtual machines are completely independent of the underlying physical hardware on which they run. For example, for a virtual machine with virtual components (CPU, network card, SCSI controller), you can configure settings that are completely different from the physical characteristics of the underlying hardware. Virtual machines can even run different operating systems (Windows, Linux, etc.) on the same physical server. Combined with the properties of encapsulation and compatibility, hardware independence provides the ability to freely move virtual machines from one x86-based computer to another without changing device drivers, OS, or applications. Hardware independence also makes it possible to run a combination of completely different operating systems and applications on one physical computer.

Let's look at the main types of virtualization, such as:

  • server virtualization (full virtualization and paravirtualization)
  • virtualization at the operating system level,
  • application virtualization,
  • presentation virtualization.

Only the lazy have not heard of virtualization today. It is no exaggeration to say that it is one of the main trends in IT development. However, many administrators still have very fragmentary and scattered knowledge of the subject, mistakenly believing that virtualization is available only to large companies. Given the relevance of the topic, we decided to create a new section and begin a series of articles on virtualization.

What is virtualization?

Virtualization is a very broad and diverse concept, but we will not consider all of its aspects today; that goes far beyond the scope of this article. For those who are just getting acquainted with this technology, a simplified model will be enough, so we have tried to simplify and generalize this material as much as possible, without going into the implementation details of any particular platform.

So what is virtualization? This is the ability to run several virtual machines isolated from each other on one physical computer, each of which will “think” that it is running on a separate physical PC. Consider the following diagram:

Special software runs on top of the real hardware: the hypervisor (or virtual machine monitor), which emulates virtual hardware and mediates the virtual machines' interaction with the real hardware. It is also responsible for communication between the virtual PCs and the real environment via the network, shared folders, a shared clipboard, and so on.

The hypervisor can work either directly on top of the hardware or at the operating system level; there are also hybrid implementations that work on top of a specially configured OS in a minimal configuration.

Using the hypervisor, virtual machines are created; for each of them the minimum required set of virtual hardware is emulated, and access is provided to the shared resources of the main PC, called the "host". Each virtual machine, like a regular PC, contains its own instance of the OS and application software, and subsequent interaction with it is no different from working with a regular PC or server.

How is a virtual machine structured?

Despite the apparent complexity, a virtual machine (VM) is just a folder of files. Depending on the specific implementation, their set and number may vary, but any VM is based on the same minimal set of files; the presence of the rest is not critical.

The virtual hard disk file is of greatest importance; its loss is equivalent to the failure of the hard disk of a regular PC. The second most important is the VM configuration file, which contains a description of the virtual machine's hardware and the shared host resources allocated to it. Such resources include, for example, virtual memory, which is a dedicated area of the host's shared memory.

In principle, the loss of the configuration file is not critical: with only the virtual hard disk file, you can start the virtual machine by recreating its configuration. In the same way, with only a physical hard drive, you can connect it to another PC of similar configuration and get a fully functional machine.

In addition, the virtual machine's folder may contain other files, but they are not critical, although their loss may be undesirable (for example, snapshots, which allow you to roll back the state of the virtual PC).
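The "folder with files" view can be made concrete with a small sketch. The extensions and paths below are hypothetical; each product uses its own formats (for example, .vmdk and .vmx files in VMware products).

```python
# Sketch of the "a VM is just a folder with files" idea, with made-up
# extensions: .vdisk for the virtual hard disk, .vconf for the config.

import os, shutil

CRITICAL  = {".vdisk"}   # hypothetical virtual hard disk extension
IMPORTANT = {".vconf"}   # hypothetical configuration extension

def classify_vm_files(vm_dir):
    for name in sorted(os.listdir(vm_dir)):
        ext = os.path.splitext(name)[1]
        if ext in CRITICAL:
            print(f"{name}: virtual hard disk - losing it loses the data")
        elif ext in IMPORTANT:
            print(f"{name}: configuration - can be recreated by hand")
        else:
            print(f"{name}: auxiliary (snapshots, logs) - loss is unpleasant")

def backup_vm(vm_dir, dest):
    """Archiving a whole VM reduces to copying its folder."""
    shutil.copytree(vm_dir, dest)

# assumed paths, for illustration only:
# classify_vm_files("/vms/web-server")
# backup_vm("/vms/web-server", "/backups/web-server-snapshot")
```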

Benefits of Virtualization

Depending on purpose, virtualization is divided into desktop and server virtualization. The first is used primarily for training and testing. Today, to study some technology or to test the deployment of a service in a corporate network, all you need is a fairly powerful PC and a desktop virtualization tool. The number of virtual machines you can keep in your virtual laboratory is limited only by disk size; the number of simultaneously running machines is limited mainly by the amount of available RAM.

The figure below shows the window of a desktop virtualization tool from our test laboratory, in which Windows 8 is running.

Server virtualization is widely used in IT infrastructures of any scale and allows one physical server to run several virtual servers. The advantages of this technology are obvious:

Optimal use of computing resources

It is no secret that the computing power of even entry-level servers, and of merely average PCs, is excessive for many tasks and server roles and is not fully used. This is usually addressed by adding server roles, but that approach significantly complicates server administration and increases the likelihood of failures. Virtualization allows you to put free computing resources to use safely by dedicating a separate server to each critical role. Now, to perform maintenance on, say, a web server, you do not have to stop the database server.

Saving physical resources

Using one physical server instead of several allows you to effectively save energy, space in the server room, and costs for related infrastructure. This is especially important for small companies that can significantly reduce rental costs due to the reduction in the physical size of the equipment, for example, there is no need to have a well-ventilated server room with air conditioning.

Increased infrastructure scalability and extensibility

As a company grows, the ability to quickly and without significant costs increase the computing power of the enterprise becomes increasingly important. Typically, this situation involves replacing servers with more powerful ones, followed by migration of roles and services from old servers to new ones. Carrying out such a transition without failures, downtime (including planned ones) and various kinds of “transition periods” is almost impossible, which makes each such expansion a small emergency for the company and administrators, who are often forced to work at night and on weekends.

Virtualization allows us to solve this issue much more effectively. If there are free host computing resources, you can easily add them to the desired virtual machine, for example, increasing the amount of available memory or adding processor cores. If it is necessary to increase performance more significantly, a new host is created on a more powerful server, where the virtual machine in need of resources is transferred.

Downtime in this situation is very short and comes down to the time required to copy VM files from one server to another. In addition, many modern hypervisors include a “live migration” feature that allows you to move virtual machines between hosts without stopping them.

Increased fault tolerance

Perhaps the physical failure of a server is one of the most unpleasant moments in the work of a system administrator. The situation is complicated by the fact that a physical instance of the OS is almost always hardware dependent, which makes it impossible to quickly launch the system on another hardware. Virtual machines do not have this drawback; if the host server fails, all virtual machines are quickly and without problems transferred to another, working server.

In this case, differences in the hardware of the servers do not play any role; you can take virtual machines from a server on the Intel platform and successfully launch them a few minutes later on a new host running on the AMD platform.

The same circumstance allows you to take servers down temporarily for maintenance, or change their hardware, without stopping the virtual machines running on them; it is enough to move them temporarily to another host.

Ability to support legacy operating systems

Despite constant progress and the release of new software versions, the corporate sector often continues to use outdated software versions; 1C:Enterprise 7.7 is a good example. Virtualization allows such software to be integrated into a modern infrastructure at no extra cost; it can also be useful when an old PC running an outdated OS has broken down, and it is not possible to run it on modern hardware. The hypervisor allows you to emulate a set of outdated hardware to ensure compatibility with older operating systems, and special utilities allow you to transfer a physical system to a virtual environment without data loss.

Virtual networks

It's hard to imagine a modern PC without some kind of network connection. Therefore, modern virtualization technologies make it possible to virtualize not only computers but also networks. Like a regular computer, a virtual machine can have one or more network adapters, which can be connected either to an external network, through one of the host's physical network interfaces, or to one of the virtual networks. A virtual network is a virtual network switch to which network adapters of virtual machines are connected. If necessary, in such a network, using the hypervisor, DHCP and NAT services can be implemented to access the Internet through the host’s Internet connection.

The capabilities of virtual networks allow you to create quite complex network configurations even within the same host; for example, let’s look at the following diagram:

The host is connected to the external network via the physical network adapter LAN 0; the virtual machine VM5 is connected to the external network through the same physical interface via its network adapter VM LAN 0. To other machines on the external network, the host and VM5 are two different PCs: each has its own network address and its own network card with its own MAC address. The second network card of VM5 is connected to the virtual switch VMNET 1, to which the network adapters of virtual machines VM1-VM4 are also connected. Thus, within one physical host we have organized a secure internal network that reaches the external network only through the VM5 router.

In practice, virtual networks make it easy to organize several networks with different levels of security within one physical server, for example, placing potentially unsafe hosts in the DMZ without additional costs for network equipment.
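The behavior of such a virtual switch can be modeled in a few lines. The MAC addresses and VM names below are invented, and a real hypervisor switch additionally handles VLANs, uplinks to physical adapters, and so on.

```python
# Toy model of a virtual switch: VMs' virtual NICs attach to "ports",
# and frames are forwarded by destination MAC, flooding unknown ones.

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = {}                    # MAC address -> attached VM

    def attach(self, mac, vm_name):
        self.ports[mac] = vm_name

    def forward(self, src, dst, payload):
        if dst in self.ports:              # known unicast destination
            print(f"{self.name}: {src} -> {self.ports[dst]}: {payload}")
        else:                              # flood unknown/broadcast frames
            for mac, vm in self.ports.items():
                if mac != src:
                    print(f"{self.name}: flood to {vm}: {payload}")

vmnet1 = VirtualSwitch("VMNET1")
vmnet1.attach("02:00:00:00:00:01", "VM1")
vmnet1.attach("02:00:00:00:00:05", "VM5 (router)")
vmnet1.forward("02:00:00:00:00:01", "02:00:00:00:00:05", "packet to gateway")
```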

Snapshots

Another virtualization function whose usefulness is difficult to overestimate. Its essence is that at any moment, without stopping the virtual machine, you can save a snapshot of its current state, and more than one. For an administrator it is a real gift to be able to return easily and quickly to a known-good state if something suddenly goes wrong. Unlike creating a disk image and then restoring the system from it, which can take considerable time, switching between snapshots takes a matter of minutes.

Another use for snapshots is for training and testing purposes; with their help, you can create an entire state tree of a virtual machine, being able to quickly switch between different configuration options. The figure below shows a tree of images of a router from our test laboratory, which you are very familiar with from our materials:
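One common way such a tree can be organized, with each snapshot storing only a delta against its parent, can be sketched as follows; this is a purely illustrative model, not the on-disk format of any particular hypervisor.

```python
# Toy snapshot tree: each snapshot keeps only the changes relative to
# its parent, and restoring a state replays deltas from the root.

class Snapshot:
    def __init__(self, name, delta, parent=None):
        self.name, self.delta, self.parent = name, delta, parent

    def state(self):
        """Reconstruct the full VM state by walking back to the root."""
        base = self.parent.state() if self.parent else {}
        return {**base, **self.delta}

base  = Snapshot("clean install", {"os": "installed"})
cfg   = Snapshot("configured",    {"role": "router"}, parent=base)
exp_a = Snapshot("experiment A",  {"nat": "on"},      parent=cfg)
exp_b = Snapshot("experiment B",  {"dhcp": "on"},     parent=cfg)

print(exp_a.state())   # {'os': 'installed', 'role': 'router', 'nat': 'on'}
print(exp_b.state())   # {'os': 'installed', 'role': 'router', 'dhcp': 'on'}
```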

Conclusion

Although we tried to give only a brief overview, the article has turned out to be quite lengthy. We nevertheless hope that this material lets you realistically assess the possibilities that virtualization technology provides and, with a clear picture of the benefits your IT infrastructure can gain, move on to our new materials and to the practical implementation of virtualization in everyday practice.

Recently, many different companies operating not only in the IT sector, but also in other areas, have begun to take a serious look at virtualization technologies. Home users have also experienced the reliability and convenience of virtualization platforms that allow them to run multiple operating systems in virtual machines simultaneously. At the moment, virtualization technologies are among the most promising, according to various information technology market researchers. The market for virtualization platforms and management tools is currently growing rapidly, with new players periodically appearing on it, and the process of acquisition of small companies developing software for virtualization platforms and tools for improving the efficiency of the use of virtual infrastructures is in full swing by large players.

Meanwhile, many companies are not yet ready to invest heavily in virtualization because they cannot accurately assess the economic effect of introducing this technology and do not have sufficiently qualified personnel. While many Western countries already have professional consultants who can analyze an IT infrastructure, prepare a plan for virtualizing the company's physical servers, and assess the profitability of the project, in Russia there are still very few such people. Of course, the situation will change in the coming years, and as companies come to appreciate the benefits of virtualization, specialists with sufficient knowledge and experience to implement virtualization technologies at various scales will appear. At the moment, many companies are only conducting local experiments with virtualization tools, mainly using free platforms.

Fortunately, many vendors offer, in addition to commercial virtualization systems, free platforms with limited functionality, so that companies can make partial use of virtual machines in the production environment while evaluating the possibility of moving to serious platforms. In the desktop sector, users are also starting to use virtual machines in their daily activities and do not place high demands on virtualization platforms, so free tools are considered first of all.

Leaders in virtualization platforms

The development of virtualization tools at various levels of system abstraction has been going on for more than thirty years. However, only relatively recently have the hardware capabilities of servers and desktop PCs made it possible to take this technology seriously for the virtualization of operating systems. For many years, various companies and enthusiasts have developed tools for virtualizing operating systems, but not all of them are actively supported today or are in a state fit for effective use. Today the leaders in the production of virtualization tools are VMware, Microsoft, SWsoft (together with its Parallels company), XenSource, Virtual Iron, and InnoTek. In addition to the products of these vendors, there are also developments such as QEMU, Bochs, and others, as well as virtualization tools from operating system developers (for example, Solaris Containers), which are not widespread and are used by a narrow circle of specialists.

Companies that have achieved some success in the market for server virtualization platforms distribute some of their products for free, while relying not on the platforms themselves, but on management tools, without which it is difficult to use virtual machines on a large scale. In addition, commercial desktop virtualization platforms designed for use by IT professionals and software development companies have significantly more capabilities than their free counterparts.

However, for small-scale server virtualization in the SMB (Small and Medium Business) sector, free platforms may well fill a niche in a company's production environment and deliver significant cost savings.

When to use free platforms

If an organization does not require mass deployment of virtual servers, constant monitoring of physical server performance under changing loads, or a high degree of availability, virtual machines on free platforms can support its internal servers. As the number of virtual servers grows and their consolidation on physical hosts increases, powerful tools for managing and maintaining the virtual infrastructure become necessary. If you need storage networks such as a Storage Area Network (SAN), backup and disaster recovery tools, or hot migration of running virtual machines to other equipment, the capabilities of free virtualization platforms may no longer suffice. It should be noted, however, that free platforms are constantly updated and gaining new features, which widens their scope of use.

Another important point is technical support. A free virtualization platform either lives within the Open Source community, where many enthusiasts develop and support the product, or is maintained by the platform's vendor. The first option assumes active user participation in development and the filing of error reports, and does not guarantee that your problems with the platform will be solved; in the second case, technical support is most often not provided at all. The staff deploying free platforms must therefore be highly qualified.

Free desktop virtualization platforms are best used for isolating user environments and decoupling them from specific hardware, for educational purposes and the study of operating systems, and for safe testing of various software. Free desktop platforms are unlikely to suit large-scale software development or testing in software companies, since they lack sufficient functionality for that. For home use, however, free virtualization products are quite adequate, and there are even cases of virtual machines on free desktop systems being used in a production environment.

Free server virtualization platforms

In almost any organization with a server infrastructure there is a need to run both standard network services (DNS, DHCP, Active Directory) and several internal servers (applications, databases, corporate portals) that do not experience heavy loads and are spread across different physical servers. These servers can be consolidated into a few virtual machines on a single physical host. This simplifies migrating servers from one hardware platform to another, reduces hardware costs, streamlines the backup procedure and improves manageability. Depending on the operating systems running the network services and the requirements for the virtualization system, a suitable free product can be chosen for the corporate environment. When choosing a server virtualization platform, consider its performance characteristics (which depend both on the virtualization technique used and on the quality of the vendor's implementation of the platform's components), ease of deployment, the ability to scale the virtual infrastructure, and the availability of additional management, maintenance and monitoring tools.


OpenVZ

OpenVZ is an open source virtualization platform developed by a community of independent developers with the support of SWSoft. The product is distributed under the GNU GPL license. The OpenVZ core is part of Virtuozzo, a commercial SWSoft product with greater capabilities than OpenVZ. Both products use a distinctive virtualization technique: virtualization at the level of operating system instances. This method is less flexible than full virtualization (only Linux operating systems can run, since a single kernel serves all virtual environments), but it keeps performance losses minimal, at about 1-3 percent. Systems running under OpenVZ cannot be called full-fledged virtual machines; they are rather virtual environments (Virtual Environments, VE) in which hardware components are not emulated. This approach allows different Linux distributions to be installed as virtual environments on the same physical server. Each virtual environment has its own process tree, system libraries and users, and can use network interfaces in its own way.

To the users and applications running in them, virtual environments appear to be almost completely isolated environments that can be managed independently of one another. Thanks to this isolation and high performance, OpenVZ and SWSoft Virtuozzo are most widely used to support virtual private servers (VPS) in hosting systems. On an OpenVZ host, clients can be given several dedicated virtual servers on the same hardware platform, each with its own set of installed applications and each rebootable independently of the other virtual environments.
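To give a feel for how lightweight VE management is, here is a minimal sketch of provisioning a virtual environment with OpenVZ's vzctl utility, driven from Python. It assumes an OpenVZ-enabled kernel with the vzctl package installed; the container ID 101, the IP address, the hostname and the OS template name are placeholder values for illustration.

```python
import subprocess

def vzctl(*args):
    """Run a vzctl command, raising an error on a non-zero exit code."""
    subprocess.run(["vzctl", *args], check=True)

# Create a virtual environment (VE) from a precreated OS template;
# CTID 101 and the template name are placeholders.
vzctl("create", "101", "--ostemplate", "centos-5-i386-default")

# Assign an IP address and hostname; --save persists the settings
# in the VE's configuration file.
vzctl("set", "101", "--ipadd", "10.0.0.101",
      "--hostname", "ve101.example.com", "--save")

# Start the environment and run a command inside it: the VE has its
# own process tree and users but shares the host's kernel.
vzctl("start", "101")
vzctl("exec", "101", "ps", "ax")
```

Stopping and destroying an environment (vzctl stop 101, vzctl destroy 101) are just as quick, which is part of what makes OpenVZ attractive for hosting and training scenarios.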

Independent experts have compared the performance of virtual servers for hosting purposes on the commercial SWSoft Virtuozzo and VMware ESX Server platforms and concluded that Virtuozzo copes better with this task. The OpenVZ platform on which Virtuozzo is built naturally offers the same high performance, but it lacks the advanced management tools that Virtuozzo has.

The OpenVZ environment is also well suited for training purposes, where everyone can experiment in their own isolated environment without endangering the other environments on the host. For other purposes, however, using the OpenVZ platform is currently inadvisable because of the obvious inflexibility of virtualization at the operating system level.


Virtual Iron

Virtual Iron entered the virtualization platform market relatively recently, but quickly began competing with such serious server platform vendors as VMware, XenSource and SWSoft. Its products are based on the free Xen hypervisor maintained by the Open Source Xen community. Virtual Iron is a virtualization platform that requires no host operating system (a so-called bare-metal platform) and is aimed at large enterprise environments. Virtual Iron products provide all the tools needed to create, manage and integrate virtual machines into a company's production environment. The platform supports 32- and 64-bit guest and host operating systems, as well as virtual SMP (Symmetric Multi-Processing), which lets virtual machines use multiple processors.

Like XenSource's products built on the Xen hypervisor, Virtual Iron originally used paravirtualization to run guest systems in virtual machines. Paravirtualization relies on special versions of the guest systems whose source code has been modified to run on the virtualization platform. This requires changes to the operating system kernel, which is not a big problem for open source operating systems but is unacceptable for proprietary closed systems such as Windows. In practice, paravirtualization does not deliver a dramatic performance gain, and operating system manufacturers have proved reluctant to include paravirtualization support in their products, so the technology has not gained wide popularity. As a result, Virtual Iron was one of the first to adopt hardware virtualization techniques, which allow unmodified guest systems to be run. The latest version of the platform, Virtual Iron 3.7, runs virtual machines only on server platforms with hardware virtualization support. The following processors are officially supported:

  • Intel® Xeon® 3000, 5000, 5100, 5300, 7000, 7100 Series
  • Intel® Core™ 2 Duo E6000 Series
  • Intel® Pentium® D-930, 940, 950, 960
  • AMD Opteron™ 2200 or 8200 Series Processors
  • AMD Athlon™ 64 x2 Dual-Core Processor
  • AMD Turion™ 64 x2 Dual-Core Processor

In addition, on the Virtual Iron website you can find lists of equipment certified by the company for its virtualization platform.

Virtual Iron products come in three editions:

  • Single Server Virtualization and Management
  • Multiple Server Virtualization and Management
  • Virtual Desktop Infrastructure (VDI) Solution

Currently the Single Server edition is free; it allows Virtual Iron to be installed on one physical host in the organization's infrastructure. It supports the iSCSI protocol, SAN networks and local storage systems.

The free Single Server edition has the following minimum installation requirements (a quick pre-flight check of a candidate host is sketched after the list):

  • 2 GB RAM
  • CD-ROM drive
  • 36 GB disk space
  • Ethernet network interface
  • Fiber channel network interface (optional)
  • Support for hardware virtualization in the processor
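
Before installing, it can be useful to check a candidate host against these minimums. The following Python sketch, assuming a Linux host, is a rough pre-flight check: it derives total RAM from sysconf values, measures free space on the root filesystem, and looks for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. The thresholds simply mirror the list above.

```python
import os
import shutil

GIB = 1024 ** 3

# Total physical memory via POSIX sysconf (Linux host assumed).
ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

# Free space on the root filesystem; point this at the target volume.
disk_free = shutil.disk_usage("/").free

# Hardware virtualization appears as the 'vmx' (Intel VT) or 'svm'
# (AMD-V) flag in /proc/cpuinfo. Note that a present flag does not
# guarantee the feature is enabled in the BIOS.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("RAM >= 2 GB:        ", ram_bytes >= 2 * GIB)
print("Disk >= 36 GB free: ", disk_free >= 36 * GIB)
print("VT-x/AMD-V flag:    ", bool(flags & {"vmx", "svm"}))
```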

Virtual Iron makes it possible to appreciate the full capabilities of hardware virtualization and of virtual machine management tools. The free edition is intended primarily for evaluating the effectiveness and convenience of the platform and its management tools, but it can also be used in an enterprise production environment to support the company's internal servers. The absence of a separate host platform means, first, that no money is spent on a host OS license and, second, that performance losses for supporting guest systems are reduced. A typical use of the free edition of Virtual Iron is deploying several virtual servers in the infrastructure of a small SMB organization in order to decouple vital servers from the hardware and improve their manageability. Later, on purchasing the commercial version of the platform, the virtual server infrastructure can be expanded and features such as effective backup tools and “hot” migration of virtual servers between hosts can be brought into play.


VMware Server

In terms of convenience and ease of use, VMware Server is the undisputed leader, and in performance it does not lag behind commercial platforms (especially on Linux host systems). Its disadvantages are the lack of hot migration support and of backup tools, which, however, are most often provided only by commercial platforms. VMware Server is certainly the best choice for quickly deploying an organization's internal servers, not least because pre-installed virtual server templates for it can be found in abundance on various resources.
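
For routine administration, VMware Server can also be driven from scripts. The sketch below is a hedged example that assumes the vmware-cmd utility shipped with VMware Server is on the PATH; the .vmx path is a placeholder.

```python
import subprocess

def vmware_cmd(*args):
    """Invoke vmware-cmd (shipped with VMware Server) and return its output."""
    result = subprocess.run(["vmware-cmd", *args],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()

# List the .vmx configuration files of all registered virtual machines.
for vmx in vmware_cmd("-l").splitlines():
    print("registered VM:", vmx)

# Query and change the power state of one VM; the path is a placeholder.
vmx_path = "/var/lib/vmware/vms/intranet/intranet.vmx"
print(vmware_cmd(vmx_path, "getstate"))
vmware_cmd(vmx_path, "start")
```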

Results

Summing up this review of free server virtualization platforms, each of them currently occupies its own niche in the SMB sector, where virtual machines can significantly increase the efficiency of the IT infrastructure, make it more flexible and reduce hardware costs. Free platforms, above all, let you evaluate the capabilities of virtualization not just on paper and experience the advantages of the technology first-hand. In conclusion, here is a summary table of the characteristics of free virtualization platforms to help you choose a server platform suited to your purposes. After all, the path to further investment in virtualization projects based on commercial systems runs through free virtualization.

OpenVZ (an open source community project supported by SWSoft)
  • Host OS: Linux
  • Officially supported guest OS: various Linux distributions
  • Virtual SMP support: yes
  • Virtualization technique: operating system-level virtualization
  • Typical use: isolation of virtual servers (including for hosting services)
  • Performance: no losses

Virtual Iron (Virtual Iron Software, Inc.)
  • Host OS: not required (bare-metal)
  • Officially supported guest OS: Windows, Red Hat, SuSE
  • Virtual SMP support: yes (up to 8 processors)
  • Virtualization technique: hardware virtualization
  • Typical use: server virtualization in a production environment
  • Performance: close to native

Virtual Server 2005 R2 SP1 (Microsoft)
  • Host OS: Windows
  • Officially supported guest OS: Windows, Linux (Red Hat and SUSE)
  • Virtual SMP support: no
  • Virtualization technique: native virtualization, hardware virtualization
  • Typical use: virtualization of internal servers in a corporate environment
  • Performance: close to native (with Virtual Machine Additions installed)

VMware Server (VMware)
  • Host OS: Windows, Linux
  • Officially supported guest OS: DOS, Windows, Linux, FreeBSD, NetWare, Solaris
  • Virtual SMP support: yes
  • Virtualization technique: native virtualization, hardware virtualization
  • Typical use: consolidation of small enterprise servers, development/testing
  • Performance: close to native

Xen Express and Xen (XenSource, supported by Intel and AMD)
  • Host OS: NetBSD, Linux, Solaris
  • Officially supported guest OS: Linux, NetBSD, FreeBSD, OpenBSD, Solaris, Windows, Plan 9
  • Virtual SMP support: yes
  • Virtualization technique: paravirtualization, hardware virtualization
  • Typical use: developers, testers, IT professionals, server consolidation of small enterprises
  • Performance: close to native (some losses under network and intensive disk load)