Matthew Drummond

Virtualization in a Nutshell

What is virtualization?

According to Arwen Hutt et al. in the paper “Employing Virtualization in Library Computing: Use Cases and Lessons Learned” (2009), virtualization is defined as:

“…to partition the physical resources (processor, hard drive, network card, etc.) of one computer to run one or more instances of concurrent, but not necessarily identical, operating systems…”

More colloquially, virtualization is explained as “a computer within a computer,” since the user simply sees an operating system installed within a window, behaving as if dedicated physical hardware were running the observed instance. The colloquial definition isn’t far off, but as Hutt’s definition suggests, the process is considerably more involved.

Virtualization involves a computer placing artificial boundaries around its hardware resources, allocating set amounts of processing power, memory, storage, network bandwidth, and other case-specific resources to a virtual machine while leaving the remainder to the host. The allocated resources are then treated as their own computer, free to have an operating system installed and to be managed like any other physical machine.
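
For a concrete sense of what that allocation looks like in practice, the minimal sketch below uses the libvirt Python bindings (one common management API for KVM and other hypervisors) to define a virtual machine with a fixed slice of the host’s resources. The VM name, disk path, and the specific amounts of memory and vCPUs are illustrative assumptions, not values from any particular setup.

import libvirt

# A minimal domain description: 2 virtual CPUs, 2 GiB of RAM, and one disk
# carved out of the host's storage. The name and disk path are examples,
# and the disk image is assumed to already exist.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the VM with the host
dom.create()                            # power it on; a guest OS installs as usual
conn.close()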

Origins

Virtualization is achieved by the host computer granting different pieces of software different privileges depending on their role. An operating system, for example, has more privileges than ordinary programs, including direct access to hardware such as RAM or the CPU; software running within the operating system must play nicely and request access to hardware, which closes security holes and reduces the potential for breaking the system. Because of this, older virtualization software had to run without direct hardware access, translating the virtual machine’s instructions into a form the host OS (operating system) could understand. Translating an entire computer’s operations in this way was tedious and added immense overhead, requiring powerful, power-hungry server-class hardware that was simply not available in the mainstream.

This changed in the 2000s with standards such as AMD’s AMD-V and Intel’s VT-x/VT-d: extensions to the physical processor that allow virtualization software to communicate with the processor directly, bypassing the host operating system’s translation layer. Giving virtual machines this direct path to hardware significantly reduced the overhead of virtualization and dramatically increased average virtual machine performance. Even modest consumer hardware could now run virtual machines, while the high-end hardware already in production excelled further at virtualization, opening new and unforeseen possibilities. It was this lowered barrier to entry that allowed virtualization to blast past its previous levels of adoption, earning it a place in every IT professional’s proverbial toolbox.
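
Whether a given machine exposes these extensions can be checked from software. The short, Linux-only sketch below reads /proc/cpuinfo and reports whether the processor advertises Intel’s VT-x (“vmx”) or AMD’s AMD-V (“svm”) flag; it is a convenience check for illustration, not part of any particular virtualization product.

def detect_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    """Return which hardware virtualization extension the CPU reports, if any."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x (vmx)"
                if "svm" in flags:
                    return "AMD-V (svm)"
    return None

if __name__ == "__main__":
    print(detect_hw_virt() or "No hardware virtualization extensions reported")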

Virtualization Around us

Virtualization is used everywhere, from legacy software developed for older systems, such as banking software for ATMs and business inventory tracking, to older video games. It allows software to be sandboxed in an emulated environment, making the software think it is running on a machine from the 1980s when it may in fact be running on the most modern of hardware.

Because of this sandboxing, virtualization has found a place in testing. Whether it is a systems administrator trying the latest software against a copy of their company’s production OS, or a security researcher reverse engineering malicious software, virtual machines are a disposable way to simulate a real machine without the ramifications of using actual production hardware and software.

Due to virtual machines’ sandboxed nature, many tools can manipulate them externally. Most virtualization software comes with features such as snapshots, which allow a user to configure a virtual machine to their liking, capture an image of the machine in its current state, and then continue experimenting; if the user reaches undesirable results, they can revert to the saved image at any time.
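
The sketch below shows what that workflow can look like using the libvirt Python bindings against a KVM host: take a snapshot of a running machine, experiment, and roll back if needed. The VM name (“demo-vm”) and snapshot name are made-up examples.

import libvirt

# Connect to the local hypervisor and look up an existing VM by name.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")   # hypothetical VM name

# Capture the VM's current state as a named snapshot.
SNAPSHOT_XML = """
<domainsnapshot>
  <name>clean-install</name>
  <description>Known-good state before experimenting</description>
</domainsnapshot>
"""
dom.snapshotCreateXML(SNAPSHOT_XML, 0)

# ... experiment with the VM here ...

# If the results are undesirable, revert to the saved state.
snap = dom.snapshotLookupByName("clean-install", 0)
dom.revertToSnapshot(snap, 0)

conn.close()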

Because of the enhanced performance of virtualization software on server-class hardware and the ease of recovery that snapshots allow, virtualization has taken over the centralized server space. Virtual machines fulfill the needs of servers: easy, constant backups through snapshots; redundancy of those backups with the proper storage setup; and modularity that grants power users access to more hardware as required.

Applications of virtualization

The high modularity of virtualization has opened markets we could previously only dream of. As mentioned before, virtualization allows hardware resources to be allocated to users as needed. Combine this with processor features that allow hardware such as graphics cards to interface directly with a virtual machine, and you have an interesting proposition for users requiring rendering power, such as gamers, digital artists, and even mathematicians. Although the underlying technology had been defined for years, it gained momentum in the late 2000s as AMD and Intel revised their specifications for the IOMMU (input-output memory management unit). An IOMMU isolates individual hardware components from one another and from the software that directly controls them. This separation allows specific hardware to be used outside the parameters set by the host computer’s operating system, which ties into virtualization because a virtual machine with virtual processors can then operate physical hardware such as graphics cards, network cards, and nearly any other device as if it were a physical machine with that hardware installed directly in it.
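
As an illustration of how this pass-through is typically driven in software, the sketch below uses the libvirt Python bindings to hand a host PCI device (for example, a graphics card) to an existing KVM virtual machine. The VM name and the PCI address are made-up examples, and the host is assumed to already have its IOMMU (Intel VT-d or AMD-Vi) enabled with the device prepared for pass-through.

import libvirt

# PCI address of the host device to pass through (example values; use the
# address reported by the host's PCI listing for the real device).
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")  # hypothetical VM name

# Attach the physical device to the VM's persistent configuration so it is
# visible inside the guest as if it were installed directly in it.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

conn.close()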

This advance allowed companies such as NVidia to open services like GeForce Now, in which virtual machines in massive server farms are assigned graphics cards for graphical processing. Gamers then pay a fee for access to these virtual machines, giving them a live connection to the latest powerful hardware on which to play video games.

In parallel, services such as Amazon’s AWS (Amazon Web Services) have opened programs such as “Elastic GPUs,” in which AWS users can rent the horsepower of graphics cards attached to their remote virtual machines and use them for various processing needs, such as rendering 3D digital animation, calculating complex equations, or running physics simulations.

Competing technologies

Due to the benefits and possibilities of virtualization, there have been countless software implementations of it over the years, as seemingly every IT professional and company with programming expertise has the desire and drive to build a virtualization solution that works for them.

Out of this abundance of implementations, however, a few technologies lead the rest, for several reasons:

VMWare

VMWare is a virtualization software suite now operated by Dell. Starting off as a startup in late 1998, VMWare built a lighter-weight, optimized piece of software. Its rapid maturation, combined with the impending launch of the new processor virtualization instruction sets, led to the company’s acquisition (by EMC, and later Dell), and the software has since grown into a modern industry standard.

VMWare offers a vast range of products built around its virtualization software, such as cloud computing, network security, and vSANs (virtual storage area networks), on top of maintaining support for its consumer and professional virtualization managers.

At the core of VMWare is ESXi (formerly known as ESX), software made to create, run, and maintain virtual machines. It does not fit the traditional definition of an operating system: it communicates directly with the hardware, takes user-generated configurations, and outputs virtual machines built to the desired parameters. ESXi supports cutting-edge features such as up to 128 virtual CPUs per virtual machine, 4 terabytes of system memory, direct interfacing with the latest I/O such as USB 3, and virtual disk sizes of up to 62 terabytes per virtual disk.

Hyper-V

Hyper-V is the virtualization software developed by Microsoft. In response to third-party software taking over the virtualization market, Microsoft released its own solution incorporated directly into Windows, allowing seamless integration of virtual machines into preexisting networks built on Windows Server. Hyper-V launched successfully and was even incorporated into the higher-end editions of desktop Windows.

Because it is already present on most IT professionals’ computers, Hyper-V is a simple, quick way to set up virtual machines that are feature-rich and can compete with other leading virtualization technologies.

Hyper-V shares most of the same features as its competing technologies but is unique in its implementation through Windows, carving out its share of the market through ease of adoption and allowing preexisting servers to bring it into production with relative ease.

KVM

KVM is an open-source virtualization technology developed within the Linux kernel community. KVM started out as a method of virtualizing on Linux-based operating systems but has grown into a much larger project, adopting and reproducing the features of industry-leading competitors, with a selling point much like Hyper-V’s: ease of implementation.

KVM has been a module of the mainline Linux kernel, and by extension of every Linux-based operating system, since 2007. Due to the widespread use of Linux in servers and in more esoteric, niche applications, KVM is already installed on, and compatible with, countless systems.
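
Because the module ships with the kernel, checking whether a given Linux host is ready for KVM is straightforward. The rough sketch below looks for the /dev/kvm device node that the module exposes and reports which vendor module is loaded; it is an illustrative convenience check, not an exhaustive one.

import os

def kvm_status():
    """Rough check of KVM availability on a Linux host (a sketch, not exhaustive)."""
    # /dev/kvm is exposed by the kvm kernel module; its presence is the usual
    # signal that hardware-accelerated virtual machines can be created.
    available = os.path.exists("/dev/kvm")
    vendor = None
    try:
        with open("/proc/modules") as f:
            modules = f.read()
        if "kvm_intel" in modules:
            vendor = "kvm_intel"
        elif "kvm_amd" in modules:
            vendor = "kvm_amd"
    except OSError:
        pass
    return available, vendor

if __name__ == "__main__":
    available, vendor = kvm_status()
    print(f"/dev/kvm present: {available}, vendor module: {vendor or 'unknown'}")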

KVM is also community developed, with programmers around the world free to contribute and suggest changes to the code. KVM changes direction quickly and adds new features based on what the community desires. Due to this open philosophy, KVM underpins multiple products, ranging from those of long-standing, professionally oriented companies such as Red Hat to smaller, niche implementations from newer companies such as Proxmox.

KVM is arguably the most adaptable virtualization technology, but it lacks the adoption rates of VMWare or Hyper-V due to perceptions surrounding open-source software. Its adoption is growing, however, as it begins to lead in areas such as hardware pass-through via the IOMMU, open standards for virtual machines, system agnosticism, and the inherent modularity that comes with being part of the Linux kernel.

Works Cited

Amazon Web Services. (2017). Amazon EC2 Elastic GPUs. Retrieved September 25, 2017, from Amazon AWS: https://aws.amazon.com/ec2/elastic-gpus/

Davies, K., & Poggemeyer, L. (2017, September 21). What’s new in Hyper-V on Windows Server 2016. Retrieved September 25, 2017, from Microsoft IT Pro Center: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/what-s-new-in-hyper-v-on-windows

Hutt, A., Stuart, M., Suchy, D., & Westbrook, B. D. (2009, September). Employing Virtualization in Library Computing: Use Cases and Lessons Learned. Information Technology & Libraries, 110-115. Retrieved September 25, 2017.

Microsoft. (2008). Hyper-V Feature Overview. Retrieved September 26, 2017, from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/cc768521(v=bts.10).aspx

NVidia. (2017). NVidia Geforce Now FAQ. Retrieved September 25, 2017, from NVidia: https://shield.nvidia.com/support/geforce-now/faq/2

VMWare. (2017). ESXi Features. Retrieved September 25, 2017, from VMWare: https://www.vmware.com/products/esxi-and-esx.html