Linux in Government: How Linux Reins in Server Sprawl

The use of Linux and virtualization makes more sense every day.
Get Used To Linux

Operating systems manage the hardware on which they run. Like any operating system, Linux schedules or arbitrates CPU cycles, allocates memory and handles input/output (I/O) devices. When we virtualize the CPU, memory and I/O, an operating system, whether UNIX or Windows, becomes divorced from the hardware. The operating system becomes a guest on the physical hardware, but it no longer manages that hardware directly.

Linux has many features that make it a better host for guest operating systems than other OSes, and contributions from IBM have made much of this possible. Linux has long run well on servers, but it never enjoyed advanced mainframe capabilities. With IBM's OpenPower initiative, features taken from mainframes have become available to Linux. IBM sees the most important of these features as its Virtualization Engine, which is composed of many technologies. The engine enables systems to create dynamic execution partitions and to allocate I/O resources to them dynamically.

Linux also has become outstanding at exploiting simultaneous multithreading (SMT) and hyper-threading technology, which allow two threads to execute at the same time on a single processor. This capability becomes essential when a system acts as a host for guest operating systems.
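
For the curious, a quick way to see whether SMT or hyper-threading is active on a given box is to compare logical siblings to physical cores. The short Python sketch below is only an illustration, assuming an x86 Linux system whose /proc/cpuinfo exposes the "siblings" and "cpu cores" fields; it is not tied to any tool discussed in this article.

    # Rough check for SMT/hyper-threading: more logical siblings than
    # physical cores on a package implies two threads share each core.
    # Assumes /proc/cpuinfo exposes "siblings" and "cpu cores" (x86).
    def smt_active(path="/proc/cpuinfo"):
        siblings = cores = None
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key = key.strip()
                if key == "siblings":
                    siblings = int(value)
                elif key == "cpu cores":
                    cores = int(value)
                if siblings is not None and cores is not None:
                    break
        if siblings is None or cores is None:
            return None          # fields missing; cannot tell
        return siblings > cores  # True when SMT/hyper-threading is on

    if __name__ == "__main__":
        print("SMT/hyper-threading active:", smt_active())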

The 2.6 Linux kernel fits well with IBM's SMT technology. Prior to the 2.6 kernel, Linux thread scheduling was inefficient, and thread arbitration took a long time. The 2.6 kernel fixed this problem and greatly expanded the number of processors on which the kernel can run.

Although a viable, low-cost solution to server sprawl existed three years ago, we're only now seeing buzz around it. If you look around, you can see the IT industry gearing up to solve the problem. The scalability and development of Linux clusters and grid computing have not only led the way in this area, they currently provide the best solutions.

What's Real Today

Several different approaches to Linux virtualization are available today. We already have discussed VMware, which runs Windows and Linux on the same server and also benefits from the advances made in the Linux 2.6 kernel. In many cases, enterprises choose VMware because it runs Linux, Windows and Solaris.

Xen has created quite a stir in virtualization circles even though it does not run Windows. An open-source project, Xen uses paravirtualization. Novell bundled Xen with SUSE 9.3, and in February 2005, the Linux kernel team said Xen modifications would become part of the standard Linux 2.6 kernel. So essentially, Linux will come with the ability to run virtual machines natively. Imagine the benefits of a computer system able to run multiple instances of Linux at the same time; I can think of several situations in the past when I wanted exactly that capability.

Paravirtualization means Xen modifies the guest kernel so that Linux knows it is running virtualized, which gives Xen performance advantages over VMware. Many people expect that, ultimately, Xen will run Windows as well.
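
For readers who have not seen Xen in action, the sketch below shows roughly what a minimal guest (domU) configuration file looks like; Xen reads these files as plain Python. Every value here (the kernel path, memory size, device names and domain name) is a made-up example for illustration, not a configuration taken from the article.

    # /etc/xen/guest1: a hypothetical Xen domU configuration (plain Python)
    kernel = "/boot/vmlinuz-2.6-xen"    # Xen-aware guest kernel (assumed path)
    memory = 256                        # megabytes of RAM for the guest
    name   = "guest1"                   # domain name reported by xm list
    vif    = [ '' ]                     # one default virtual network interface
    disk   = [ 'phy:hda3,hda1,w' ]      # export physical hda3 as hda1, writable
    root   = "/dev/hda1 ro"             # root device handed to the guest kernel

With the Xen tools installed, a guest defined this way typically is started with xm create guest1 and inspected with xm list.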

Another technology worth noting is Virtual Iron. Formerly Katana Technology, Virtual Iron offers a product that lets a collection of x86 servers allocate anywhere from a fraction of one CPU to 16 CPUs to run a single OS image. Where Xen and VMware chop up the resources of a single system, Virtual Iron spans several; it makes kernel modifications and requires specialized interconnects between servers.

Some startups, including Virtual Iron, have formed and found funding from investment banks. As these startups begin to market their products, we can only wonder if IT managers will recognize the value proposition.

Final Thoughts

The usual suspects have started their campaigns to discredit Linux and the kernel team. One of the most vocal, Sun Microsystems, says Linux doesn't belong in the data center. If Microsoft were to say that, it would look pretty dumb.

Linux has come a long way since I began using it to learn UNIX. Today, Linux has a place in a world of devices, such as digital phones and PDAs; in the making of feature films; in running the most powerful computers in the world; in running sonar arrays on nuclear submarines; and as a desktop platform. As a solution for on-demand business, it looks to be taking the lead because of its capability as a host for virtual guest operating systems.

Tom Adelstein is a Principal of Hiser + Adelstein, an open-source company headquartered in New York City. He's the co-author of the book Exploring the JDS Linux Desktop and author of an upcoming book on Linux system administration to be published by O'Reilly. Tom has been consulting and writing articles and books about Linux since early 1999.

______________________

Comments

Consolidation - again.

Posted by Anonymous

The consolidation mantra is nothing new; only the solution's name is new. But people seem to forget the reason why there are so many servers and silos out there. The reason is stability, reliability and availability.

While it's great to have all of your enterprise's data on a SAN, the idea turns sour when the SAN requires a firmware update and has to be brought down, thereby taking your company down with it. Consolidating your systems onto a central storage facility isn't so enticing when all of the systems go down because of that central storage facility.

It's great to have your mail system sharing hardware with your intranet server, your file and print server and your database server. That is, until you have some type of hardware failure and your entire company is shut down.

While stupid vendors insisting on their own database servers do create server sprawl, IT departments also have intentionally created server sprawl in an effort to eliminate single points of failure. Even in cases where you run clusters off of SANs, you sooner or later will suffer from the fact that your data is stored in a single location that must go offline for whatever reason. When the data goes offline, no amount of clusters, virtual servers or anything else will save you.

That is one of the advantages of "server sprawl": your company is not totally reliant on a single S/390 whose failure, although rare, is devastating. When your services are split up amongst many independent servers, a single failure has far less impact. You can survive the mail system being offline for a few hours, provided that all the other systems stay online. But when a memory module fails and corrupts the data on a dozen virtual servers at once, you'll wish you'd stuck with the server sprawl.

Consolidation - again -- I beg your pardon

Posted by Anonymous

I respectfully disagree with your premise and argument.

In fact, I object to your entire message and demeanor.

This isn't consolidation again; it's more like putting things together. Having lots of silos doesn't create stability and reliability. It creates confusion and chaos.

Too many big companies have worked these situations out and created valuable case studies for you to dismiss them without careful consideration.

Perhaps you might consider the author's premise instead of knee-jerking like some ridiculous animal that just burnt its nose on a hot frying pan.

In case you don't, then I'll have broken one of the golden rules: don't feed the trolls.

Get clustering to work better, on databases & elsewhere

Posted by Anonymous

The real disruption will come when we can add a $200 Microtel computer to a group of other computers to add computing capacity to PostgreSQL or MySQL. As far as I can figure out from the docs, OpenMosix doesn't currently work in this configuration, either because it fares poorly with shared memory or because the databases run as a single process; one or the other. But when we can run a database on a commodity x86 desktop, and add computing capacity to that database server simply by adding more commodity x86 desktops to the application instead of moving up to dual- and quad-processor machines, then we can really claim technological disruption.

Virtualization is one area. Another area just as disruptive, and just as likely to affect end users for the better, would be the capability of adding a $200 desktop computer to a cluster to increase the computing power available to whatever application you are running, whether Apache or a database, and being able to do that without reprogramming the application.

There was an application announced a while back, related to OpenMosix-type clustering, called Chaos. It may be a solution to the technological challenge I'm discussing, or it may not. But it seems to me that if efforts were directed in this direction, many, many end users would benefit.
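
As a small illustration of the kind of stopgap that exists today, the Python sketch below spreads read-only queries across a few cheap replica boxes in round-robin fashion. The host names, database name, credentials and the psycopg2 driver are all assumptions for the example; writes still have to go to a single master, so this is nowhere near the transparent, OpenMosix-style scaling the comment asks for.

    # A stopgap, not transparent clustering: send read-only queries to
    # cheap replica boxes in rotation. Hosts and credentials are made up.
    import itertools
    import psycopg2   # assumes the PostgreSQL driver is installed

    REPLICAS = ["db-replica1", "db-replica2", "db-replica3"]   # hypothetical hosts
    _rotation = itertools.cycle(REPLICAS)

    def run_read_query(sql, params=None):
        host = next(_rotation)                     # pick the next replica
        conn = psycopg2.connect(host=host, dbname="appdb", user="app")
        try:
            cur = conn.cursor()
            cur.execute(sql, params or ())
            return cur.fetchall()
        finally:
            conn.close()

    # Example use: rows = run_read_query("SELECT count(*) FROM orders")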

btw, good work on the article. Keep 'em coming, and stay away from the Java/Sun junk. More articles on local/state/federal governments using FOSS to save money are welcome.

Specifically, local governments need to be made aware of, or exposed for, putting jobs out to bid where the agency gets a holiday basket full of licensed software as part of the construction project. I've seen the bid sheets. Every software application that is part of the bid, every application used by the government employee sitting in the construction trailer during the project, is a new, fully licensed, full-priced, boxed version of the application. And because the taxpayers pay for it through the bidding process to contractors, the agencies go all out purchasing every little app, every little gimmick, every little toy they can get their hands on. Simply replacing Microsoft Office with OpenOffice on these RFP/RFB jobs would save $500 per copy, multiple copies, per job. The contractors don't care, because they have the secretaries figure out list prices, they add the costs into the bid, and their competitors do the same. So there is no incentive to get the government employees to switch, especially when they can't get the applications through the agency due to budget constraints.

There need to be some exposés on this. Just pick up the RFP/RFBs, make a list of all the proprietary software costs, make a list of all the FOSS substitutes, then show up at the next public meeting when tax increases or teacher/fireman/police/library job cuts and senior hot-meal cuts are being proposed.

Can't virtualize software vendors and licenses

Posted by Waleed Hanafi

Unless your application software is completely open source, there is a huge disconnect between virtualizing the servers and the language in most commercial software licenses. The charging model is per CPU, with some vendors using dongles and other locking mechanisms to enforce single-system usage.

Unless we get the software vendors on side, no amount of hardware virtualization is going to cut costs.
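
To put a rough number on that point, here is a toy calculation with entirely hypothetical list prices; it only illustrates how per-CPU terms can erase the savings from consolidating many small servers onto one large host.

    # Toy figures, not real vendor prices: per-CPU licensing can cost more
    # after consolidation even though the hardware bill shrinks.
    PER_CPU_LICENSE = 5000                      # assumed price per licensed CPU

    cpus_before = 10 * 1                        # ten single-CPU servers
    cpus_after = 16                             # one consolidated 16-CPU host

    print("Licenses before consolidation:", cpus_before * PER_CPU_LICENSE)  # 50000
    print("Licenses after consolidation: ", cpus_after * PER_CPU_LICENSE)   # 80000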

It's time to unlearn bad habits

Posted by Anonymous

The current mess goes back to Microsoft servers. Think NT. How many services could you reliably run on an NT server? Answer: maybe one. It was foolhardy to try to run two services. Let the sprawl begin.

Even at the time, Unix was reliable, expensive and supported multiple services on a single server. The cost was the problem: "Let's go with NT; it's cheaper." Once into it, it becomes apparent that more (cheap) servers are needed. Sprawl on.

The real answer to this mess is to look at what can be done by consolidating multiple services on Linux. This permits the elimination of many of the low-usage machines.

It's time to unlearn bad habits

Posted by Anonymous

While that seems logical and often works on low-volume networks, it doesn't work at high-volume sites with many users. You have a heterogeneous environment even if you use the same operating system, because one task may want more CPU capacity while another is memory-intensive and another uses lots of disk I/O.

Sure, in small shops, you're absolutely right. But then they don't have too much concern about server sprawl.

Isn't the next logical step...

Posted by Raj

Isn't the next logical step skipping Linux/BSD and going back to bulletproof mainframe-class hardware/software with on-demand resource allocation?

iSeries News: Law firm picks iSeries over Windows -
http://www.iseriesnetwork.com/content/f3/index.cfm?fuseaction=news.viewA...

iSeries

Posted by Dan

I'm glad to see someone mentioning the IBM iSeries. For this type of workload, there simply is no better system. I'm currently supporting one application that requires four dedicated Intel servers. What a headache for something that should be simple! In my iSeries days, this same app would not even have required one low-end server, and that same server would easily support many, many apps, all while performing dynamic resource allocation with nary a reboot in sight.

Unfortunately, Ralph is correct as well. Because the iSeries is a proprietary hardware/software solution with its prices kept artificially high by IBM, it is doomed to remain a poorly understood niche product.

That would not get commodity pricing

Posted by Ralph

It might be the next step for some processes, but not for most things, I think. Running virtual servers on PCs gives you improved utilization on hardware that offers the best price/performance in the market, because the market has commoditized it.

XEN

Posted by Ralph

Good article. I had not realized that companies were putting out so many servers running one set of services apiece. This would seem to give Linux another big boost. If you virtualize six servers on one piece of hardware with Windows, how much are the licenses going to cost you? The price advantage of Linux multiplies there, I think. And with virtualization like XEN being built in, we could see a lot of use.

What I would like to see is an article on XEN. Is it really ready for production use? If not, what is missing?
