Linux in Government: How Linux Reins in Server Sprawl

The use of Linux and virtualization makes more sense every day.

People write a lot about utility computing these days. The interest seems high. VMware gave a seminar in Dallas this past week and had 850 attendees. That followed a well-attended seminar by IBM's business development group on "On-Demand Business". Yet even with the high visibility in the media over the past year, many IT managers seem lost when I discuss utility computing with them.

I realize buzzwords come and go, and people find it easy to dismiss "utility computing" as another fad. Even after noting its undeniable benefits, people's eyes glaze over when one attempts to discuss the topic. I think many of my colleagues avoid the subject because some vendors have said they want to sell IT as an independent service, similar to water or telephone service.

I personally find that objectionable. One can see the benefit to the vendor but not to IT departments. Within the context of cost containment and efficient use of resources, utility computing doesn't mean installing a meter.

When I think of utility computing, I think of frugality. I want to get the most out of what I already have. In business, we often say, "if it ain't broke, don't fix it". In other words, don't rip and replace the technologies that work. Instead, acquire tools that pull resources together and allow us to manage and consolidate, become more productive and eliminate duplication of effort. Linux has addressed this area more than any other operating system.

Server Sprawl

In typical data centers, you find one application tied to one or more physical servers. Most applications require different amounts of computing power depending on use. In the past, we always sized hardware based on peak usage. This habit has resulted in what analysts call server sprawl. You may reach peak usage only one day a year. The rest of the time, usage goes down. That concept works great for electric companies, but not for computing.

Ultimately, dedicated servers create the "silo" effect we discussed in last week's article. Silos do not provide for efficient use of hardware resources. Overall server utilization rates for an organization often run around 10-15 percent. Obviously, the ROI on such environments becomes unacceptable, especially to stakeholders.
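
As a rough sketch of how peak-based sizing produces those numbers, consider the following back-of-the-envelope calculation. Every figure in it is hypothetical; it illustrates the arithmetic, not any particular study:

    # Hypothetical numbers only: they show why sizing every silo for its peak
    # load leaves the fleet mostly idle.
    applications = [
        # (name, peak load, average load), both in arbitrary "CPU units"
        ("mail",       4.0, 0.5),
        ("intranet",   2.0, 0.3),
        ("database",   8.0, 1.2),
        ("file/print", 2.0, 0.2),
    ]

    server_capacity = 4.0   # capacity of one dedicated server, same units

    total_servers = 0
    total_capacity = 0.0
    total_average_load = 0.0

    for name, peak, average in applications:
        # Silo model: each application gets enough dedicated servers for its peak.
        servers = int(-(-peak // server_capacity))   # ceiling division
        total_servers += servers
        total_capacity += servers * server_capacity
        total_average_load += average

    print("dedicated servers:", total_servers)
    print("average utilization: %.0f%%" % (100 * total_average_load / total_capacity))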

Blame the situation we have today on process automation. A decade ago, capturing and managing transactions and eliminating processes that did not add value brought on the prominence of enterprise resource planning programs. As we collected transactional data, the number of ways to store it grew proportionately. That has given rise to products such as network attached storage and storage area networks, NAS and SAN, respectively.

Ultimately, we used technology to create efficiencies, and those technologies became our next inefficiencies. Some business theorists used to say that the solution to the problem becomes the next problem. That has happened within the enterprise.

Attacking Server Sprawl and Low Utilizations

Numerous studies exist discussing server utilization rates. Companies such as IBM and HP tell us that Intel server utilizations run in the frighteningly low range of 10 to 15%. We easily can see how the application-silo syndrome results in these low rates and high storage costs. We also can find numerous case studies that demonstrate how to raise rates, consolidate hardware and integrate processes across numerous silos.

Linux virtualization has become the primary technology in use by major solution providers today. Linux and virtualization technology, including VMware, allow for

  • a consolidation ratio of four to five workloads per CPU or higher (see the sketch following this list)

  • decreased capital and operational costs

  • improvements in server management

  • more robust infrastructures
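
To make the consolidation ratio concrete, here is a minimal sketch of the arithmetic. Only the four-to-five-workloads-per-CPU figure comes from the list above; the workload count and host size are invented for illustration:

    import math

    workloads = 60            # lightly used, siloed servers to absorb (invented)
    cpus_per_host = 4         # CPUs in each virtualization host (invented)
    workloads_per_cpu = 4     # conservative end of the cited 4-5 ratio

    workloads_per_host = cpus_per_host * workloads_per_cpu
    hosts_needed = math.ceil(workloads / workloads_per_host)

    print("%d workloads fit on %d hosts (%d workloads per host)"
          % (workloads, hosts_needed, workloads_per_host))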

In earlier times, we solved the problem of needing dedicated resources that could grow and shrink by running VM/370 on mainframes. Linux on the IBM S/390 and zSeries mainframes rekindled the concept. Then, about three years ago, IBM and VMware got together and co-marketed a solution using IBM's xSeries 440 and VMware ESX Server.

Note: You can find a downloadable Redbook on the subject (note the date) here.

Little did we know that IBM and VMware were starting an industry. According to Dan Kuznetsky of IDC, "The switch to commodity-based servers has resulted in more companies pursuing a virtualization strategy." Referring to overall virtualization software revenue, he said, "It's growing three times faster than the revenue growth for operating system software."

______________________

Comments

Consolidation - again.

Posted by Anonymous

The consolidation mantra is nothing new; only the solution's name is new. But people seem to forget the reason why there are so many servers and silos out there. The reason is stability, reliability and availability.

While it's great to have all of your enterprise's data on a SAN, the idea turns sour when the SAN requires a firmware update and has to be brought down, thereby taking your company down with it. Consolidating your systems onto a central storage facility isn't so enticing when all of the systems go down because of the central storage facility.

It's great to have your mail system sharing the hardware with your intranet server and your file and print server and your database server. That is, until you have some type of hardware failure and your entire company is shut down.

While stupid vendors insisting on their own database server does create server sprawl, the IT department has also intentionally created server sprawl in an effort to eliminate a single point of failure. Even in cases where you run clusters off of SANs, you will sooner or later suffer from the fact that your data is stored in a single location that must go offline for whatever reason. When the data goes offline, no amount of clusters, virtual servers or anything else will save you.

That is one of the advantages of "server sprawl": your company is not totally reliant on a single S/390 whose failure, although rare, is devastating. When your services are split up amongst many independent servers, a single failure has far less impact. You can survive the mail system being offline for a few hours, provided that all the other systems stay online. But when a memory module fails and corrupts the data on a dozen virtual servers at once, you'll wish you'd stuck with the server sprawl.

Consolidation - again -- I beg your pardon

Posted by Anonymous

I respectfully disagree with your premise and argument.

In fact, I object to your entire message and demeanor.

This isn't consolidation again, it's more like putting things together. Having lots of silos doesn't create stability and reliability. It creates confusion and chaos.

Too many big companies have worked these situations out and created valuable case studies for you to dismiss them without careful consideration.

Perhaps you might consider the author's premise instead of knee-jerking like some ridiculous animal that just burnt its nose on a hot frying pan.

In case you don't, then I'll have broken one of the golden rules: don't feed the trolls.

Get clustering to work better, on databases & elsewhere

Posted by Anonymous

The real disruption will come when we can add a $200 Microtel computer to a group of other computers to add computing capacity to PostgreSQL or MySQL. OpenMosix doesn't currently work in this configuration, either because it fares poorly with shared memory (from what I can figure out in the docs) or because the databases run as a single process. One or the other. But when we can run a database off of a commodity x86 desktop, and add computing capacity to that database server simply by adding additional commodity x86 desktops to the application instead of moving up to dual- and quad-processor solutions, then we can really claim technological disruption.

Virtualization is one area. Another area just as disruptive, and just as likely to affect end users for the better, will be the capability of adding a $200 desktop computer to a cluster to increase computing power for whatever application you are running, whether Apache or a database, without reprogramming any application.

There was an application announced a while back, called Chaos, that had to do with OpenMosix-type clustering. It may or may not be a solution to the technological challenge I'm discussing. But it seems to me that if efforts were directed in this direction, many, many end users would benefit.

btw, good work on the article. Keep 'em coming. And stay away from the Java/Sun junk. More articles on local/state/federal governments using FOSS to save money are welcome.

Specifically, local governments need to be made aware of, or exposed for, putting jobs out to bid where the agency gets a holiday basket full of licensed software as part of the construction project. I've seen the bid sheets. Every one of the software applications that is part of the bid, every application used by the government employee sitting in the construction trailer during the project, is a new, fully licensed, full-priced, boxed version of the application. And because the taxpayers pay for it through the bidding process to contractors, the agencies go all out on purchasing every little app, every little gimmick, every little toy they can get their hands on through this bidding process.

Simply replacing Microsoft Office with OpenOffice on these RFP/RFB jobs would save $500 per copy, multiple copies, per job. The contractors don't care, because they get the secretaries to figure out list prices, they add the costs into the bid, and their competitors do the same. So there is no incentive to get the government employees to switch, especially when they can't get the applications through the agency due to budget constraints.

There need to be some exposés on this. Just pick up the RFP/RFBs, make a list of all the proprietary software costs, make a list of all the FOSS substitutes, then show up at the next public meeting when tax increases or teacher/fireman/police/library job cuts and senior hot-meal cuts are being proposed.

Can't virtualize software vendors and licenses

Posted by Waleed Hanafi

Unless your application software is completely open source, there is a huge disconnect between virtualizing the servers and the language in most commercial software licenses. The charging model is per CPU, with some vendors using dongles and other locking mechanisms to enforce single system usage.

Unless we get the software vendors on side, no amount of hardware virtualization is going to cut costs.
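
As a rough sketch of the arithmetic behind this point, consider the following; all prices and counts are invented for illustration, not any vendor's actual terms:

    license_per_cpu = 10000      # assumed per-CPU price for the application
    siloed_servers = 10          # ten single-CPU servers, one instance each
    host_cpus = 4                # the four-CPU host they are consolidated onto

    cost_before = siloed_servers * 1 * license_per_cpu
    # If the license counts the physical CPUs of whatever box an instance runs on,
    # each of the ten virtualized instances may be billed for all four CPUs.
    cost_after_worst_case = siloed_servers * host_cpus * license_per_cpu

    print("before consolidation: $%d" % cost_before)
    print("after, worst-case per-CPU terms: $%d" % cost_after_worst_case)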

It's time to unlearn bad habits

Posted by Anonymous

The current mess goes back to Microsoft servers. Think NT. How many services could you reliably run on an NT server? Answer: maybe one. It was foolhardy to try to run two services. Let the sprawl begin.

Even at the time, Unix was reliable, expensive and supported multiple services on a single server. The cost was the problem: "Let's go with NT, it's cheaper." Once into it, it becomes apparent that more (cheap) servers are needed. Sprawl on.

The real answer to this mess is to look at what can be done by consolidating multiple services on Linux. This permits the elimination of many of the low-usage machines.

It's time to unlearn bad habits

Posted by Anonymous

While that seems logical and often works in low-volume networks, it doesn't work in high-volume, large-user sites. You have a heterogeneous environment even if you use the same operating system, because one task may want more CPU capacity, while another is memory-intensive and another uses lots of disk I/O.

Sure, in small shops, you're absolutely right. But then they don't have too much concern about server sprawl.

Isn't the next logical step...

Posted by Raj

Isn't the next logical step skipping Linux/BSD and going back to bulletproof mainframe-class hardware/software with on-demand resource allocation?

iSeries News: Law firm picks iSeries over Windows -
http://www.iseriesnetwork.com/content/f3/index.cfm?fuseaction=news.viewA...

iSeries

Posted by Dan

I'm glad to see someone mentioning the IBM iSeries. For this type of workload there simply is no better system. I'm currently supporting one application that requires four dedicated Intel servers. What a headache for something that should be simple! In my iSeries days, this same app would not even require one low-end server. That same server would easily support many, many apps, all while performing dynamic resource allocation with nary a reboot in sight.

Unfortunately, Ralph is correct also. Because the iSeries is a proprietary hardware/software solution with its prices kept artificially high by IBM, it is doomed to remain a poorly understood niche product.

That would not get commodity pricing

Posted by Ralph

It might be the next step for some processes, but not for most things, I think. Running virtual servers on PCs gives you improved utilization on hardware that is the best price/performance part of the market, because the market has commoditized it.

XEN

Posted by Ralph

Good article. I had not realized that companies were putting out so many servers running one set of services apiece. This would seem to give Linux another big boost. If you virtualize 6 servers on one piece of hardware with Windows, how much are the licenses going to cost you? The price advantage of Linux multiplies there, I think. And with virtualization like XEN being built in, we could see a lot of use.
What I would like to see is an article on XEN. Is it really ready for production use? If not, what is missing?
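
To put rough numbers on that licensing question, here is a minimal sketch; the per-guest Windows figure is a placeholder, since actual pricing varies by edition and agreement:

    guests = 6                   # the six virtualized servers from the example above
    windows_license_each = 800   # placeholder per-guest Windows Server price
    linux_license_each = 0       # community Linux distributions carry no license fee

    print("Windows guests: $%d" % (guests * windows_license_each))
    print("Linux guests:   $%d" % (guests * linux_license_each))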
