High Availability Cluster Checklist

With a variety of clustering products on the market, you need a way to determine how well each option meets your specific business needs.
System Hang

This is the most serious failure scenario that can confront any cluster implementation. If you didn't buy the bridge from me earlier, perhaps I could interest you in one now, if you believe that systems never hang. This is another unfortunate fact of life in the computer biz. We've all seen systems mysteriously “lock up”, where the only recourse is to reset or power cycle the system to get it responding again. Fortunately, this is a relatively rare occurrence.

Just as mysteriously as computers can hang, they can also unhang. Surely you've been in scenarios where a system will “lock up” and then after a period of time become responsive again. This can happen on any operating system.

The pivotal question in evaluating cluster products is how the cluster responds to a hang/unhang scenario. Here's why the question is so important: in a hang scenario, node A becomes completely unresponsive. Suppose you learned your lesson from the prior section on communication failures and constructed a cluster with two Ethernet connections and a serial connection, so that if any one of them failed your cluster would still be operational. Well, in response to a system hang it wouldn't matter if you had 50 redundant connections; all of them would fail to receive any response to their heartbeat requests. Node B would notice that node A has failed to respond to heartbeats over all three channels and conclude that node A has gone down. It would then mount the file systems or start up the databases formerly served by node A.
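To make the trap concrete, here is a rough Python sketch of the kind of monitoring loop node B might run. It is only an illustration, not any particular product's code, and the channel addresses, timeout, and miss limit are made-up values:

import socket
import time

# Hypothetical heartbeat channels: two Ethernet links and a serial bridge,
# each represented here as a UDP address for simplicity.
CHANNELS = [("10.0.0.1", 694), ("10.0.1.1", 694), ("192.168.99.1", 694)]
TIMEOUT = 2.0          # seconds to wait for a reply on each channel
MISSED_LIMIT = 5       # consecutive all-channel failures before declaring node A dead

def channel_alive(addr):
    """Send one heartbeat probe and wait briefly for any reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(TIMEOUT)
        try:
            s.sendto(b"heartbeat", addr)
            s.recvfrom(16)
            return True
        except OSError:
            return False

missed = 0
while True:
    # A hung node A answers on *none* of the redundant channels, so link
    # redundancy alone cannot distinguish "hung" from "down".
    if any(channel_alive(addr) for addr in CHANNELS):
        missed = 0
    else:
        missed += 1
        if missed >= MISSED_LIMIT:
            print("node A presumed down -- initiating takeover")
            break
    time.sleep(1)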

At this point, node A could become unhung and begin updating the file system. Now two nodes are concurrently mounting and modifying the same file system, which is a data integrity violation.

This is the true litmus test of any cluster implementation. In order to protect against data integrity compromises (i.e., system crashes or invalid data) a cluster member must, before taking over services of a failed node, ensure that the failed node cannot modify the file system or database. This is commonly referred to as I/O Fencing or I/O Barrier.
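In practice, this means the takeover sequence must fence first and mount second. Below is a minimal sketch of that ordering, assuming IPMI power fencing; the BMC address, credentials, and device paths are hypothetical, and real cluster software would do this through its own fencing agents:

import subprocess

FAILED_NODE_BMC = "nodea-ipmi.example.com"   # hypothetical BMC address for node A
SHARED_DEVICE = "/dev/sdb1"                  # hypothetical shared storage device
MOUNT_POINT = "/srv/data"

def fence_node(bmc_host):
    """Power off the failed node so it can never 'unhang' and write again."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", "admin", "-P", "secret", "chassis", "power", "off"],
        check=True,   # if fencing fails, raise and do NOT proceed to mount
    )

def take_over():
    # Order matters: fence the failed node *before* mounting the shared
    # file system, otherwise a hung-then-unhung node A could still write to it.
    fence_node(FAILED_NODE_BMC)
    subprocess.run(["mount", SHARED_DEVICE, MOUNT_POINT], check=True)

if __name__ == "__main__":
    take_over()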

In order to dodge this scrutiny, some vendors of cluster products will dismiss the node hang/unhang scenario as an unlikely occurrence. Thankfully, in practice, system hang/unhang scenarios are infrequent. But before dismissing this criterion entirely, remember that it is your data, and all the implications of having it corrupted, that are on the line.

Summary

If you care enough about your system's availability to warrant a cluster deployment, then it is crucial that you select a fail-over cluster implementation that ensures data integrity under each of the four failure scenarios. Keep in mind that the most valuable asset of your IT infrastructure is valid, accurate data. The cost of failing to maintain data integrity is prolonged system downtime or lost transactions, either of which can be catastrophic.

Tim Burke is Cluster Engineer at Mission Critical Linux, Inc. He can be reached at burke@missioncriticallinux.com.

______________________

Comments

High-availability clusters

Emil Koutanov

The problem with using Linux-based (or any OS-specific) clustering software is that you'll always be tied to the operating system.

The folks at Obsidian Dynamics have built a Java-based application-level clustering solution that isn't tied to the operating system.
(www.obsidiandynamics.com/gridlock)

I think this is the way forward, particularly seeing that many organisations are running a mixed bag of Windows and Linux servers - being able to cluster Windows and Linux machines together can be a real advantage. It also makes installation and configuration easier, since you're not supporting a dozen different operating systems and hardware configurations.

The other neat thing about Gridlock is that it doesn't use quorum and doesn't rely on NIC bonding/teaming to achieve multipath configurations - instead it combines redundant networks at the application level, which means it works on any network card and doesn't require specialised switchgear.
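As a rough idea of what application-level multipathing means (my own sketch, not Gridlock's actual implementation; the addresses and port are made up): send each heartbeat over every redundant network from its own socket, so any single NIC or switch can fail without bonding or special switchgear.

import socket

# Illustrative only: one UDP socket per redundant network, same payload on each.
PATHS = [
    ("10.0.0.2", ("10.0.0.1", 7000)),   # hypothetical net 1: local addr, peer addr
    ("10.0.1.2", ("10.0.1.1", 7000)),   # hypothetical net 2: local addr, peer addr
]

for local_addr, peer in PATHS:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((local_addr, 0))    # pin the socket to one specific interface
    s.sendto(b"heartbeat", peer)
    s.close()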

Re: High Availability Cluster Checklist

Anonymous

http://www.gfs.lcse.umn.edu/ doesn't work.
Same goes for the tricord.com link in the comments.

Re: High Availability Cluster Checklist

Anonymous

For a real clustered file system with FT and failover, check out www.tricord.com! See the Illumina and Lunar Flare product families.

It's highly scalable and offers great performance.

elhaddi@cs.umn.edu
