Linux in Government: How Linux Reins in Server Sprawl
Operating systems manage the hardware on which they run. Like any operating system, Linux schedules and arbitrates CPU cycles, allocates memory and handles input/output (I/O) devices. When the CPU, memory and I/O are virtualized, an operating system, whether UNIX or Windows, becomes divorced from the hardware: it runs as a guest on the physical machine but no longer manages the hardware directly.
Linux has many features that make it a better host for guest operating systems than other OSes, and contributions from IBM have helped make this possible. Linux has long run well on servers, but it never enjoyed advanced mainframe capabilities. With IBM's OpenPower initiative, features taken from mainframes have become available to Linux. IBM considers the most important of these to be its Virtualization Engine, a collection of technologies that enables systems to create dynamic execution partitions and to allocate I/O resources to them dynamically.
Linux also has become outstanding at simultaneous multithreading (SMT), known on Intel processors as hyper-threading. These technologies enable two threads to execute simultaneously on the same physical processor, a capability that becomes essential when a system hosts guest operating systems.
The 2.6 Linux kernel fits well with IBM's SMT technology. Prior to the 2.6 release, Linux thread scheduling was inefficient, and thread arbitration took a long time. The 2.6 kernel's O(1) scheduler fixed this problem and greatly expanded the number of processors on which the kernel could run.
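On an SMT system, the kernel reports each hardware thread as a separate logical processor in /proc/cpuinfo, so comparing the number of processor entries with the number of distinct physical packages reveals whether hyper-threading is active. The sketch below uses illustrative sample data standing in for /proc/cpuinfo; on a live system you would read the real file instead.

```shell
#!/bin/sh
# Sample /proc/cpuinfo fields from a single hyper-threaded package:
# two logical processors share one physical id.
cpuinfo='processor	: 0
physical id	: 0
siblings	: 2
processor	: 1
physical id	: 0
siblings	: 2'

# Logical CPUs: one "processor" line per hardware thread.
logical=$(printf '%s\n' "$cpuinfo" | grep -c '^processor')

# Physical CPUs: count distinct "physical id" lines.
physical=$(printf '%s\n' "$cpuinfo" | grep '^physical id' | sort -u | grep -c .)

echo "logical CPUs:  $logical"
echo "physical CPUs: $physical"
if [ "$logical" -gt "$physical" ]; then
    echo "SMT/hyper-threading appears to be enabled"
fi
```

With the sample data, two logical processors map onto one physical package, so the script reports that SMT appears to be enabled.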
Although a viable, low-cost solution to server sprawl existed three years ago, we're only now seeing buzz around it. Look around and you can see the IT industry gearing up to solve the problem. The scalability and development of Linux clusters and grid computing not only have led the way in this area, they currently provide the best solutions.
Several approaches to Linux virtualization are available. We already have discussed VMware to some extent; it runs Windows and Linux on the same server, and it also benefits from the advances made in the Linux 2.6 kernel. In many cases, enterprises choose VMware because it runs Linux, Windows and Solaris.
Xen has created quite a stir in virtualization circles even though it does not run Windows. An open-source project, Xen uses paravirtualization. Novell bundled Xen with SUSE 9.3, and in February 2005, the Linux kernel team said the Xen modifications would become part of the standard Linux 2.6 kernel. Essentially, then, Linux will come with the ability to run virtual machines natively. Imagine the benefits of a computer system able to run multiple instances of Linux at the same time; I can think of several situations in the past when I wanted exactly that capability.
With paravirtualization, Xen modifies the guest kernel so that Linux knows it is running virtualized, which gives Xen performance advantages over VMware. Many people expect that, ultimately, Xen will run Windows as well.
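For a taste of how this works in practice, a Xen guest of that era is described by a short configuration file and started with the xm tool. The sketch below is a minimal, hypothetical example; the kernel and disk-image paths are assumptions, not values from this article.

```
# /etc/xen/guest1 -- hypothetical Xen domain configuration
kernel = "/boot/vmlinuz-2.6-xenU"              # Xen-aware (paravirtualized) guest kernel
memory = 128                                   # megabytes of RAM for the guest
name   = "guest1"                              # domain name shown by 'xm list'
disk   = ["file:/var/xen/guest1.img,sda1,w"]   # file-backed root disk
root   = "/dev/sda1 ro"
```

The guest then would be started with `xm create guest1` and would appear alongside other running domains in the output of `xm list`.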
Another technology worth noting is Virtual Iron. Formerly Katana Technology, Virtual Iron offers a product that lets a collection of x86 servers allocate anywhere from a fraction of one CPU to 16 CPUs to run a single OS image. Where Xen and VMware chop up the resources of a single system, Virtual Iron aggregates many systems; it makes kernel modifications and requires specialized interconnects between servers.
Some startups, including Virtual Iron, have formed and found funding from investment banks. As these startups begin to market their products, we can only wonder if IT managers will recognize the value proposition.
The usual suspects have started their campaigns to discredit Linux and the kernel team. One of the most vocal, Sun Microsystems, says Linux doesn't belong in the data center. If Microsoft were to say that, it would look pretty dumb.
Linux has come a long way since I began using it to learn UNIX. Today, Linux has a place in a world of devices, such as digital phones and PDAs, in the making of feature movies, in running the most powerful computers in the world, in running sonar arrays on nuclear submarines and as a desktop platform. As a solution for on-demand business, it appears to be taking the lead because of its capability as a host for virtual guest operating systems.
Tom Adelstein is a Principal of Hiser + Adelstein, an open-source company headquartered in New York City. He's the co-author of the book Exploring the JDS Linux Desktop and author of an upcoming book on Linux system administration to be published by O'Reilly. Tom has been consulting and writing articles and books about Linux since early 1999.