KAMAL KUMAR: VIRTUALIZATION


Sunday, July 26, 2009

VIRTUALIZATION



Virtualization is the process of running multiple operating systems on the same set of physical hardware to better utilize that hardware. Companies with strong plans to implement virtualized computing environments look to gain many benefits, including easier systems management, increased server utilization, and reduced datacenter overhead.

Traditional IT management has relied on a one-to-one relationship between the physical servers implemented and the roles they play on the network. When a new database is to be implemented, we call our hardware vendor of choice and order a new server with specifications to meet the needs of the database. Days later we may order yet another server to play the role of a file server. This process of ordering servers to fill the needs of new network services is often time-consuming and unnecessary given the existing hardware in the datacenter.

To ensure stronger security, we separate services across hosts to facilitate the process of hardening the operating system. We have learned over time that the fewer the functions performed by a server, the fewer the services that need to be installed, and, in turn, the easier it is to lock down the host to mitigate vulnerabilities. The byproduct of this separation of services has been the exponential growth of our datacenters into racks upon racks of servers, most of which barely use the hardware within them.

Virtualization involves the installation of software commonly called a hypervisor. The hypervisor is the virtualization layer that allows multiple operating systems to run on top of the same set of physical hardware. Virtual machines that run on top of the hypervisor can run almost any operating system, including the most common Windows and Linux operating systems found today as well as legacy operating systems.
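
As a rough sketch of how a hypervisor exposes its guests to management software, the snippet below uses the libvirt Python bindings (my own choice of API for illustration; the post does not prescribe one) to connect to a local hypervisor and list its virtual machines.

```python
# Minimal sketch, assuming the libvirt Python bindings (libvirt-python)
# and a local KVM/QEMU hypervisor; the URI below is an illustrative choice.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, _ = dom.state()
        running = state == libvirt.VIR_DOMAIN_RUNNING
        # Each domain is a separate guest OS sharing the same physical host.
        print(f"{dom.name()}: {'running' if running else 'not running'}")
finally:
    conn.close()
```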

By virtualizing servers into virtual machines running on a hypervisor, we can make better use of our processors while reducing rack space needs and power consumption in the datacenter. Depending on the product used to virtualize a server environment, there are many more benefits to virtualization. Think of the struggles IT professionals have had throughout the years and you’ll gain terrific insight into why virtualization has become such a popular solution. The simple act of moving a server from a datacenter in Tampa, Florida, to a datacenter in Atlanta, Georgia, is a good example of a common pain point for IT pros. The overhead of removing an 80-pound server from a rack, boxing it, shipping it, unboxing it, and placing it back into another rack is enough to make you want to virtualize. With virtual machines, the same relocation can be reduced to copying a directory to an external media device, shipping the device, and copying the directory back onto another ESX implementation. Other approaches, such as virtual machine replication and full or delta imaging of virtual machines, are available through third-party tools.
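
To make the "copy the directory" step concrete, here is a minimal sketch in Python; the paths and VM name are hypothetical, and in practice the virtual machine would be powered off (or snapshotted) before its files are copied and re-registered on the destination host.

```python
# Sketch of relocating a VM by copying its directory to external media.
# All paths and names are hypothetical examples.
import shutil
from pathlib import Path

vm_dir = Path("/vmfs/volumes/datastore1/web-server-01")  # VM files on the source host
external = Path("/mnt/external/web-server-01")           # mounted external media

# Copy the whole VM directory: configuration, virtual disks, and logs.
shutil.copytree(vm_dir, external)

# At the destination datacenter, the reverse copy places the directory on the
# new host's datastore, where the VM can be registered and powered back on.
```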

Virtualization is an abstraction layer that breaks the standard paradigm of computer architecture, decoupling the operating system from the physical hardware platform and the applications that run on it. As a result, IT organizations can achieve greater resource utilization and flexibility. Virtualization allows multiple virtual machines, often with heterogeneous operating systems, to run in isolation, side by side, on the same physical machine. Each virtual machine has its own set of virtual hardware (CPU, memory, network interfaces, and disk storage) upon which an operating system and applications are loaded. Each guest operating system sees only its own hardware set and is unaware that the underlying physical platform is shared with other guests.
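
One way to picture the per-guest virtual hardware set is as a simple record the hypervisor keeps for each virtual machine. The sketch below is purely conceptual; the field names are illustrative and do not correspond to any particular product's configuration format.

```python
# Conceptual model of the virtual hardware presented to each guest.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    name: str
    vcpus: int                  # virtual CPUs scheduled onto physical cores
    memory_mb: int              # guest-visible RAM carved out of host memory
    nics: List[str] = field(default_factory=list)   # virtual network interfaces
    disks: List[str] = field(default_factory=list)  # virtual disk files

# Two isolated guests with heterogeneous operating systems share one host,
# yet each sees only its own hardware set.
db_vm = VirtualMachine("win-db01", vcpus=2, memory_mb=4096,
                       nics=["vnic0"], disks=["db.vmdk"])
web_vm = VirtualMachine("linux-web01", vcpus=1, memory_mb=1024,
                        nics=["vnic0"], disks=["web.vmdk"])
```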

Virtualization technology and its core components, such as the Virtual Machine Monitor (VMM), mediate between the guest operating system's calls to the virtual hardware and the actual execution that takes place on the underlying physical hardware.

Virtualization was first introduced in the 1960s to allow partitioning of large, mainframe hardware, a scarce and expensive resource. Over time, minicomputers and PCs provided a more efficient, affordable way to distribute processing power. By the 1980s, virtualization was no longer widely employed.

However, in the 1990s, researchers began to see how virtualization could solve some of the problems associated with the proliferation of less expensive hardware, including underutilization, escalating management costs, and vulnerability.

Today, virtualization is growing as a core technology in the forefront of data center management. The technology is helping businesses, both large and small, solve their problems with scalability, security, and management of their global IT infrastructure while effectively containing, if not reducing, costs.

A Brief History of Virtualization

Virtualization technologies have been around since the 1960s. Beginning with the Atlas and M44/44X projects, the concept of time-sharing and virtual memory was introduced to the computing world.

Funded by large research centers and system manufacturers, early virtualization technology was only available to those with sufficient resources and clout to fund the purchase of the big-iron equipment.

As time-sharing evolved, IBM developed the roots and early architecture of the virtual machine monitor, or VMM. Many of the features and design elements of the System/370 and its succeeding iterations are still found in modern-day virtualization technologies.

After a quiet period in which the computing world took its eye off virtualization, interest resurged in the mid-1990s, putting virtualization back into the limelight as an effective way to gain high returns on a company’s investment.

Why Virtualize?

Virtualization offers many significant benefits, including server consolidation, rapid server provisioning, new options in disaster recovery, and better opportunities to maintain service-level agreements (SLAs), to name a few. Perhaps the most common reason is server consolidation.

Most servers in a datacenter are running at less than 10 percent CPU utilization. This leaves an overwhelming amount of processing power available but not accessible because of the separation of services. As virtualization technology transitioned from the mainframe world to midrange and entry-level hardware platforms and the operating systems they ran, there was a shift from having either a decentralized or a centralized computing model to a hybrid of the two. Large computers could now be partitioned into smaller units, giving all of the benefits of logical decentralization while taking advantage of physical centralization.

While companies will realize many benefits as they adopt and implement virtualization solutions, the most prominent are consolidation of their proliferating server sprawl, increased reliability of the computing platforms upon which their important business applications run, and greater security through isolation and fault containment.
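
A quick back-of-the-envelope calculation shows why that 10 percent figure drives consolidation; the server counts and utilization targets below are illustrative assumptions, not figures from any particular datacenter.

```python
# Rough consolidation estimate; all numbers are illustrative assumptions.
physical_servers = 20           # existing one-role-per-box servers
avg_utilization = 0.10          # roughly 10% average CPU utilization each
target_host_utilization = 0.70  # leave headroom on each consolidated host

# Total demand expressed in "whole servers" worth of CPU work.
total_demand = physical_servers * avg_utilization             # 2.0

# Hosts needed if each virtualized host runs at the target utilization
# (ceiling division, since you cannot buy a fraction of a host).
hosts_needed = int(-(-total_demand // target_host_utilization))  # 3

print(f"{physical_servers} lightly loaded servers consolidate onto about {hosts_needed} hosts")
```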

How Does Virtualization Work?

The operating system and the CPU architecture historically have been bound and mated one to the other. This inherent relationship is exemplified by secure and stable computing platforms that segregate levels of privilege and priority through rings of isolation and access, the most critical being Ring-0. The most common CPU architecture, the IA-32 or x86 architecture, follows a similar privilege model containing four rings, 0 through 3. Operating systems that run on x86 platforms are installed in Ring-0, called Supervisor Mode, while applications execute in Ring-3, called User Mode.

The Virtual Machine Monitor (VMM) presents the virtual, or perceived, Ring-0 to guest operating systems, isolating each guest from the others. Each VMM meets a set of conditions referred to as the Popek and Goldberg requirements, written in 1974. Though composed for third-generation computers of that time, the requirements are general enough to apply to modern VMM implementations. While striving to hold true to the Popek and Goldberg requirements, developers of VMMs for the x86 architecture face several challenges, due in part to the non-virtualizable instructions in the IA-32 ISA. Because of those challenges, the x86 architecture cannot be virtualized in the purest form; however, x86 VMMs come close enough that they can be considered true to the requirements.
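
The VMM's role when a guest issues a privileged (Ring-0) instruction can be pictured as trap-and-emulate: the instruction traps out of the guest, the VMM emulates its effect against that guest's virtual hardware state, and control returns to the guest. The sketch below is a toy model of that idea, not real x86 semantics or a workaround for the non-virtualizable instructions mentioned above.

```python
# Toy trap-and-emulate model of a VMM; instruction names and state
# fields are illustrative, not actual x86 behavior.

PRIVILEGED = {"HLT", "LGDT", "OUT"}  # pretend these require Ring-0

class ToyVMM:
    def __init__(self):
        # Per-guest virtual hardware state maintained by the VMM.
        self.guest_state = {"halted": False, "gdt_base": 0, "io_log": []}

    def run(self, instr, operand=None):
        if instr in PRIVILEGED:
            # The guest believes it is in Ring-0; the instruction traps to
            # the VMM, which emulates it against the virtual hardware.
            return self.emulate(instr, operand)
        return f"{instr}: executed directly on the CPU"

    def emulate(self, instr, operand):
        if instr == "HLT":
            self.guest_state["halted"] = True
        elif instr == "LGDT":
            self.guest_state["gdt_base"] = operand
        elif instr == "OUT":
            self.guest_state["io_log"].append(operand)
        return f"{instr}: trapped and emulated by the VMM"

vmm = ToyVMM()
print(vmm.run("ADD"))           # unprivileged: runs natively
print(vmm.run("LGDT", 0x1000))  # privileged: emulated by the VMM
```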

Types of Virtualization

Server virtualization is the most common form of virtualization, and the original. Managed by the VMM, physical server resources are used to provision multiple virtual machines, each presented with its own isolated and independent hardware set. The three most common forms of server virtualization are full virtualization, paravirtualization, and operating system virtualization.

An additional form, called native virtualization, is gaining in popularity; it blends the best of full virtualization and paravirtualization with hardware acceleration logic. Other areas, including storage, network, and application technologies, have also benefited, and continue to benefit, from virtualization.

Common Use Cases for Virtualization

A technology refresh of older, aging equipment is an opportune time to consider implementing a virtual infrastructure, consolidating workloads and easing migrations through virtualization technologies. Businesses can reduce recovery facility costs by incorporating the benefits of virtualization into their business continuity planning (BCP) and disaster recovery (DR) architectures. Virtualization also provides greater flexibility and allows IT organizations to achieve on-demand service levels; this is evident in easily deployed proof-of-concept, pilot, or mock environments that require virtually no overhead to facilitate or manage. The benefits of virtualization can also be driven beyond the walls of the datacenter to the desktop. Desktop virtualization can help organizations reduce costs while maintaining control of their client environment and providing additional layers of security at no additional cost.

Virtualization is, and has been, at home in the software development life cycle. Such technologies help streamline development, testing, and release management processes while increasing productivity and shortening the window of time from design to market.

