High Availability Cluster Checklist
This is the most serious failure scenario that can confront any cluster implementation. If you didn't buy the bridge from me earlier, then perhaps I could interest you in one if you believe that systems never hang. This is another unfortunate fact of life in the computer biz. We've all seen systems mysteriously “lock up”, leaving no recourse but to reset or power cycle the system to get it responsive again. Fortunately, this is a relatively rare occurrence.
Just as mysteriously as computers can hang, they can also unhang. Surely you've seen scenarios in which a system “locks up” and then, after a period of time, becomes responsive again. This can happen on any operating system.
The pivotal question in evaluating cluster products is how the cluster responds in a hang/unhang scenario. Here's why the question is so important: in a hang scenario, node A becomes completely unresponsive. Suppose you learned your lesson in the prior section describing communication failures, and constructed a cluster with two Ethernet connections and a serial connection, so that if any one of them failed, your cluster would still be operational. In response to a system hang, it wouldn't matter if you had 50 redundant connections; all of them would fail to receive any response to heartbeat requests. Node B would notice that node A has failed to respond to heartbeats over all three channels, conclude that node A has gone down, and then mount the file systems or start up the databases formerly served by node A.
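To make the trap concrete, here is a minimal sketch in Python of multi-channel heartbeat monitoring of the sort described above. The channel addresses, timeouts, and miss thresholds are illustrative assumptions rather than the behavior of any particular cluster product; the point to notice is that a hung peer goes silent on every channel at once, exactly as a dead one would.

```python
# Hypothetical sketch of multi-channel heartbeat monitoring.  Channel
# addresses and thresholds are illustrative assumptions, not the
# configuration of any real cluster product.
import socket
import time

CHANNELS = {
    "eth0": ("10.0.0.1", 5001),   # first Ethernet link (assumed addresses)
    "eth1": ("10.1.0.1", 5001),   # second Ethernet link
    # a serial channel would be polled similarly via its tty device
}
TIMEOUT = 2.0          # seconds to wait for a heartbeat reply
MISSED_LIMIT = 3       # consecutive misses before a channel is declared dead

def channel_alive(addr):
    """Send a heartbeat probe on one channel and wait for any reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(TIMEOUT)
        try:
            s.sendto(b"heartbeat", addr)
            s.recvfrom(64)
            return True
        except socket.timeout:
            return False

def monitor():
    missed = {name: 0 for name in CHANNELS}
    while True:
        for name, addr in CHANNELS.items():
            missed[name] = 0 if channel_alive(addr) else missed[name] + 1
        # Here is the trap described above: a hung node stops answering
        # on *all* channels simultaneously, exactly like a dead one.
        # Redundant links cannot distinguish "hung" from "down".
        if all(m >= MISSED_LIMIT for m in missed.values()):
            return "peer presumed down"   # takeover decision point
        time.sleep(1)
```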
At this point, node A could become unhung and begin updating the file system. The result is two nodes concurrently mounting and modifying the same file system, which is a data integrity violation.
This is the true litmus test of any cluster implementation. To protect against data integrity compromises (i.e., system crashes or invalid data), a cluster member must, before taking over the services of a failed node, ensure that the failed node can no longer modify the file system or database. This is commonly referred to as I/O fencing or an I/O barrier.
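As a rough illustration of what I/O fencing means in practice, here is a hedged sketch of the “fence before takeover” rule. One common fencing technique is to power cycle the failed node through a network-attached power switch; the powerswitch-ctl command and the device paths below are hypothetical placeholders, not the interface of any real product.

```python
# Hedged sketch: node B must make it physically impossible for node A to
# touch shared storage before B mounts A's file systems.
import subprocess

POWER_SWITCH_CMD = "powerswitch-ctl"   # hypothetical CLI for a network power switch

def fence_node(node):
    """Cut power to the failed node via a network-attached power switch,
    one common fencing method (often called STONITH: Shoot The Other
    Node In The Head).  The command name here is an assumption."""
    result = subprocess.run([POWER_SWITCH_CMD, "--off", node])
    return result.returncode == 0

def take_over(node, volumes):
    # The ordering is the whole point: fencing must succeed *before*
    # any mount.  If node A later unhangs, it finds itself powered off
    # rather than writing to a file system node B now owns.
    if not fence_node(node):
        raise RuntimeError(f"refusing takeover: could not fence {node}")
    for device, mountpoint in volumes:
        subprocess.run(["mount", device, mountpoint], check=True)

# Example: node B taking over node A's shared volume (paths are illustrative).
take_over("nodeA", [("/dev/sdb1", "/export/data")])
```

The design point is the ordering: if fencing fails, the takeover is refused, because mounting first would recreate the dual-mount scenario described above.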
To dodge this scrutiny, some vendors of cluster products will dismiss the node hang/unhang scenario as an unlikely occurrence. Thankfully, in practice, hang/unhang scenarios are infrequent. But before dismissing this criterion entirely, remember that it is your data, and all the implications of having it corrupted, that are on the line.
If you care enough about your system's availability to warrant a cluster deployment, then it is crucial that you select a fail-over cluster implementation that ensures data integrity under each of the four failure scenarios. Keep in mind that the most valuable asset of your IT infrastructure is valid, accurate data. The cost of failing to maintain data integrity is prolonged system downtime or lost transactions, either of which can be catastrophic.
Tim Burke is Cluster Engineer at Mission Critical Linux, Inc. He can be reached at http://firstname.lastname@example.org/.