The consolidation mantra is nothing new; only the solution's name is new. But people seem to forget the reason there are so many servers and silos out there. The reason is stability, reliability, and availability.
While it's great to have all of your enterprise's data on a SAN, the idea turns sour when the SAN requires a firmware update and has to be brought down, taking your company down with it. Consolidating your systems onto a central storage facility isn't so enticing when every one of those systems goes down because of the central storage facility.
It's great to have your mail system sharing the hardware with your intranet server and your file and print server and your database server. That is, until you have some type of hardware failure and your entire company is shut down.
While stupid vendors insisting on their own database server do create server sprawl, the IT department has also intentionally created server sprawl in an effort to eliminate single points of failure. Even in cases where you run clusters off of SANs, you will sooner or later suffer from the fact that your data is stored in a single location that can go offline for any number of reasons. When the data goes offline, no amount of clusters, virtual servers, or anything else will save you.
That is one of the advantages of "server sprawl": your company is not totally reliant on a single S390 whose failure, although rare, is devastating. When your services are split up amongst many independent servers, a single failure has far less impact. You can survive the mail system being offline for a few hours, provided that all the other systems stay online. But when a memory module fails and corrupts the data on a dozen virtual servers at once, you'll wish you'd stuck with the server sprawl.
I respectfully disagree with your premise and argument.
In fact, I object to your entire message and demeanor.
This isn't consolidation again; it's more like putting things together. Having lots of silos doesn't create stability and reliability; it creates confusion and chaos.
Too many big companies have worked these situations out and produced valuable case studies for you to dismiss them without careful consideration.
Perhaps you might consider the author's premise instead of knee-jerking like some ridiculous animal that just burnt its nose on a hot frying pan.
In case you don't, then I'll have broken one of the golden rules: don't feed the trolls.
The real disruption will come when we can add a $200 Microtel computer to a group of other computers to add computing capacity to PostgreSQL or MySQL. OpenMosix doesn't currently work in this configuration; from what I can figure out in the docs, either it fares poorly with shared memory, or it fares poorly because the database runs as a single process. One or the other. But when we can run a database off of a commodity x86 desktop, and add computing capacity to that database server simply by adding additional commodity x86 desktops instead of moving up to dual- and quad-processor solutions, then we can really claim technological disruption.
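A minimal sketch of the idea being described: an application-side router that spreads queries across a pool of cheap database nodes, where "adding capacity" is just appending another box. The node names are hypothetical, and a real deployment would also need replication and failover; this only illustrates the scaling model, not a working database cluster.

```python
from itertools import cycle

class ReadRouter:
    """Round-robin queries across a pool of commodity database nodes.
    Adding capacity means appending another cheap box to the pool,
    with no changes to the application that issues the queries."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rotation = cycle(self.nodes)

    def add_node(self, node):
        """Grow the pool; rebuild the rotation so the new node is included."""
        self.nodes.append(node)
        self._rotation = cycle(self.nodes)

    def route(self, query):
        # In a real system this would open a connection and run the query;
        # here we just report which node would serve it.
        return next(self._rotation)

pool = ReadRouter(["db-node-1", "db-node-2"])
print(pool.route("SELECT 1"))   # db-node-1
pool.add_node("db-node-3")      # the new $200 box joins; the app is unchanged
```

The point of the sketch is the last line: capacity grows by adding nodes to the pool, not by buying a bigger multiprocessor server.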
Virtualization is one area. Another area just as disruptive, and just as likely to affect end users for the better, will be the capability to add a $200 desktop computer to a cluster to increase the computing power of whatever application you are running, whether Apache or a database, without reprogramming any application.
There was an application announced a while back, called Chaos, that had to do with OpenMosix-style clustering. It may or may not be a solution to the technological challenge I'm discussing. But it seems to me that if efforts were directed this way, many, many end users would benefit.
By the way, good work on the article. Keep 'em coming. And stay away from the Java/Sun junk.

More articles on local/state/federal governments using FOSS to save money are welcome. Specifically, local governments need to be made aware of, or exposed for, putting jobs out to bid where the agency gets a holiday basket full of licensed software as part of the construction project. I've seen the bid sheets. Every one of the software applications that are part of the bid, every application used by the government employee sitting in the construction trailer during the project, is a new, fully licensed, full-priced, boxed version of the application. And when the taxpayers are paying for it through the bidding process to contractors, the agencies go all out, purchasing every little app, every little gimmick, every little toy they can get their hands on. Simply replacing Microsoft Office with OpenOffice on these RFP/RFB jobs would save $500 per copy, multiple copies, per job.

The contractors don't care: they get the secretaries to figure out list prices, they add the costs into the bid, and their competitors do the same. So there is no incentive to get the government employees to switch, especially when they can't get the applications through the agency due to budget constraints.

There need to be some exposés on this. Just pick up the RFP/RFBs, make a list of all the proprietary software costs, make a list of all the FOSS substitutes, then show up at the next public meeting where tax increases or teacher/fireman/police/library job cuts and senior hot-meal cuts are being proposed.
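To make the scale of the savings concrete, here is the back-of-envelope arithmetic. The $500-per-copy figure is from the comment above; the seat and project counts are purely assumed for illustration.

```python
# Back-of-envelope savings estimate. Only the $500/copy figure comes from
# the discussion above; the other numbers are assumptions for illustration.
office_cost_per_seat = 500   # MS Office list price vs. OpenOffice ($0), per the comment
seats_per_project = 10       # assumed: licensed copies bundled into one bid
projects_per_year = 25       # assumed: bids an agency lets per year

annual_savings = office_cost_per_seat * seats_per_project * projects_per_year
print(f"${annual_savings:,} per year")  # $125,000 per year under these assumptions
```

Even with modest assumptions, the Office substitution alone reaches six figures per agency per year, which is exactly the kind of number worth bringing to a public meeting.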
Unless your application software is completely open source, there is a huge disconnect between virtualizing the servers and the language in most commercial software licenses. The charging model is per CPU, with some vendors using dongles and other locking mechanisms to enforce single system usage.
Unless we get the software vendors on side, no amount of hardware virtualization is going to cut costs.
The current mess goes back to Microsoft servers. Think NT. How many services could you reliably run on an NT server? Answer: maybe one. It was foolhardy to try to run two services. Let the sprawl begin.
Even at the time, Unix was reliable, expensive, and supported multiple services on a single server. The cost was the problem: let's go with NT, it's cheaper. Once into it, it becomes apparent that more (cheap) servers are needed. Sprawl on.
The real answer to this mess is to look at what can be done with consolidating multiple services on Linux. This permits the elimination of many of the low usage machines.
While that seems logical and often works in low-volume networks, it doesn't work in high-volume sites with large user bases. You have a heterogeneous environment even if you use the same operating system, because one task may want more CPU capacity, while another is memory-intensive and another does lots of disk I/O.
Sure, in small shops, you're absolutely right. But then they don't have too much concern about server sprawl.
Isn't the next logical step skipping Linux/BSD and going back to bulletproof mainframe-class hardware/software with on-demand resource allocation?
iSeries News: Law firm picks iSeries over Windows
I'm glad to see someone mentioning the IBM iSeries. For this type of workload there simply is no better system. I'm currently supporting one application that requires four dedicated Intel servers. What a headache for something that should be simple! In my iSeries days this same app would not even require one low-end server. That same server would easily support many, many apps, all while performing dynamic resource allocation with nary a reboot in sight.
Unfortunately, Ralph is correct also. Because the iSeries is a proprietary hardware/software solution with its prices kept artificially high by IBM, it is doomed to remain a poorly understood niche product.
It might be the next step for some processes. But not for most things, I think. Running virtual servers on PCs gives you improved utilization on hardware that sits at the best price/performance point of the market, because the market has commoditized it.
Good article. I had not realized that companies were putting out so many servers running one set of services apiece. This would seem to give Linux another big boost. If you virtualize six servers on one piece of hardware with Windows, how much are the licenses going to cost you? The price advantage of Linux multiplies there, I think. And with virtualization like Xen being built in, we could see a lot of use.
What I would like to see is an article on Xen. Is it really ready for production use? If not, what is missing?