Virtualization with KVM
Virtualization has made a lot of progress during the past decade, primarily due to the development of a myriad of open-source virtual machine hypervisors. This progress has nearly eliminated the barriers between operating systems and dramatically increased utilization of powerful servers, bringing immediate benefit to companies. Until recently, the focus has always been on software-emulated virtualization. The two most common approaches to software-emulated virtualization are full virtualization and paravirtualization. In full virtualization, a layer commonly called the hypervisor or virtual machine monitor sits between the virtualized operating systems and the hardware, multiplexing system resources among competing operating system instances. Paravirtualization differs in that the hypervisor operates in a more cooperative fashion: each guest operating system is aware that it is running in a virtualized environment, so each cooperates with the hypervisor to virtualize the underlying hardware.
Both approaches have advantages and disadvantages. The primary advantage of paravirtualization is that it allows the fastest possible software-based virtualization, at the cost of not supporting proprietary operating systems, which cannot be modified to cooperate with the hypervisor. Full virtualization, of course, does not have this limitation; however, full virtualization hypervisors are very complex pieces of software. VMware, a commercial virtualization solution, is an example of full virtualization. Paravirtualization is provided by Xen, User-Mode Linux (UML) and others.
With the introduction of hardware-based virtualization, these lines have blurred. With the advent of Intel's VT and AMD's SVM, writing a hypervisor has become significantly easier, and it now is possible to enjoy the benefits of full virtualization while keeping the hypervisor's complexity at a minimum.
Xen, the classic paravirtualization engine, now supports fully virtualized MS Windows, with the help of hardware-based virtualization. KVM is a relatively new and simple, yet powerful, virtualization engine, which has found its way into the Linux kernel, giving the Linux kernel native virtualization capabilities. Because KVM uses hardware-based virtualization, it does not require modified guest operating systems, and thus, it can support any platform from within Linux, given that it is deployed on a supported processor.
KVM is a unique hypervisor. Instead of creating major portions of an operating-system kernel themselves, as other hypervisor developers have done, the KVM developers devised a method that turned the Linux kernel itself into a hypervisor. This was achieved in a minimally intrusive way by implementing KVM as a loadable kernel module. Integrating hypervisor capabilities into the host Linux kernel as a module simplifies management and can improve performance in virtualized environments, which probably was the main reason KVM was accepted into the Linux kernel.
This approach has numerous advantages. By adding virtualization capabilities to a standard Linux kernel, the virtualized environment benefits from all the ongoing work on the Linux kernel itself. Under this model, every virtual machine is a regular Linux process, scheduled by the standard Linux scheduler. Traditionally, a Linux process has two modes of execution: kernel and user. User mode is the default for applications; a process enters kernel mode when it requires a service from the kernel, such as writing to the hard disk. KVM adds a third mode: guest mode. Guest-mode processes are processes that run from within the virtual machine, and guest mode, just like normal (non-virtualized) operation, has its own kernel and user-space variations. From the host, a KVM virtual machine appears as a normal process, so the standard kill and ps commands work on it, and it can be killed just like any other process. KVM makes use of hardware virtualization to virtualize processor state; memory management for the virtual machine is handled from within the kernel, and I/O in the current version is handled in user space, primarily through QEMU.
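Because a guest is just a host process, ordinary process tools can inspect and stop it. A minimal sketch, assuming the guest was launched as a qemu-kvm process (the process name varies by distribution; it may be kvm or qemu instead):

```shell
# List running KVM guests by process name; a guest is a regular
# host process, so ps finds it like anything else.
ps -C qemu-kvm -o pid,cmd || echo "no qemu-kvm guests running"

# Stopping a guest is the same as killing any other process:
#   kill <pid-of-guest>
```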
A typical KVM installation consists of the following components:
- A device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device, /dev/kvm.
- A user-space component for emulating PC hardware; currently, this is handled in user space and is a lightly modified QEMU process.
- The I/O model, which is directly derived from QEMU's, with support for copy-on-write disk images and other QEMU features.
How do you find out whether your system will run KVM? First, you need a processor that supports hardware virtualization. For a detailed list, have a look at wiki.xensource.com/xenwiki/HVM_Compatible_Processors. Additionally, you can check /proc/cpuinfo: if you see vmx (Intel VT) or svm (AMD SVM) in the flags field, your system supports KVM.
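That check can be done in one grep from the shell:

```shell
# vmx marks Intel VT support, svm marks AMD SVM support.
if grep -Eq '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag found: KVM will not run on this CPU"
fi
```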