A High-Availability Cluster for Linux
If a node fails in some way, it is vital that only one of the nodes performs the IP and MAC address takeover. Determining which node has failed in a cluster is easier said than done. If the heartbeat network failed while using a simplistic takeover algorithm, both nodes would wrongly perform MAC, IP and application takeover, and the cluster would become partitioned. This would cause major problems on any LAN and would probably result in some kind of network and server deadlock. One way to prevent this scenario is to have the node which first detects a remote node failure log in remotely to each of that remote node's interfaces and put it into a standby run level (e.g., single-user mode). This run level would prevent the failed node from attempting to restart itself and thus stop an endless failure-recovery loop. There are problems with this method. What if node A (which has a failed NIC) thinks node B is not responding, then remotely puts node B into single-user mode? You would end up with no servers available to the LAN. There must be a mechanism to decide which node has actually failed. One of the few ways to do this on a two-node cluster is to rely on a third party. My method of implementing this is to use a list of locally accessible devices which can be pinged on the LAN. By a process of arbitration, the node which detects the higher number of unreachable devices will gracefully surrender and go into the standby run level. This is shown in Figure 2.
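The arbitration step can be sketched as a small shell function, assuming each node works from the same list of third-party device addresses (the function name, addresses and standby command shown are illustrative, not the article's actual script):

```shell
#!/bin/sh
# Count how many third-party LAN devices fail to answer a ping.
count_unreachable() {
    n=0
    for addr in "$@"; do
        # One packet, two-second timeout; no reply counts as unreachable.
        if ! ping -c 1 -W 2 "$addr" >/dev/null 2>&1; then
            n=$((n + 1))
        fi
    done
    echo "$n"
}

# Each node computes its own count from the shared device list, exchanges
# it with its peer over the heartbeat link, and the node seeing the higher
# count surrenders, for example:
#   mine=$(count_unreachable 192.168.1.10 192.168.1.20 192.168.1.30)
#   [ "$mine" -gt "$peer" ] && /sbin/init 1   # drop to single-user mode
```

The point of the third-party list is that a node whose own NIC has failed will see most devices as unreachable, so it loses the arbitration rather than wrongly shutting down its healthy peer.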
To implement this solution with minimal risk of data loss, the data on the two servers must be constantly mirrored. It would be ideal if the data written to serv1 were simultaneously written to serv2 and vice versa. In practice, a near-perfect mirror would require a substantial kernel implementation with many hurdles along the way, such as file system performance and distributed lock management. One method would be to implement a RAID mirror using disks from different nodes: in effect, a cluster file system. This is supposed to be possible in later incarnations of the 2.1 and probably the 2.2 kernel by using md, NFS and network block devices. Another solution, which also remains to be evaluated, is the use of the CODA distributed file system.
A practical way to have a mirror of data on each node is to allow the frequency of the file mirroring to be predefined by the administrator, not only per node but on a per-file or per-directory basis. With this fine-grained level of control, the data volatility characteristics of a particular file, directory or application can be reflected in the frequency of mirroring to the other node in the cluster. For example, fast-changing data such as an IMAP4 e-mail spool, where users are constantly moving, reading and deleting e-mail, could be mirrored every minute, whereas slow-changing data such as the company's mostly static web pages could be mirrored hourly.
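Under cron, such a per-directory schedule might look like the fragment below: a sketch assuming rsync is used for the transfers, with illustrative paths and the peer host name serv2 standing in for the real configuration:

```shell
# crontab fragment on serv1 (paths and host name are illustrative)

# Fast-changing IMAP4 mail spool: mirror every minute.
* * * * *  rsync -a --delete /var/spool/mail/ serv2:/var/spool/mail/

# Mostly static web pages: mirror hourly.
0 * * * *  rsync -a --delete /home/httpd/ serv2:/home/httpd/
```

Because rsync transfers only the changes since the last run, the cost of each mirror pass tracks how much the data actually changed, which is what makes the per-directory frequencies worth tuning.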
Trade-offs must be considered when mirroring data in this way. One major trade-off is between mirror integrity and CPU and I/O resource consumption. It would be nice if I could have my IMAP4 mail spools mirrored each second. In practice, this would not work, because the server takes 15 seconds to synchronize this spool each time. The CPU and disk I/O usage could be so high that the services would be noticeably slowed. This would seem to defeat the objective of high availability. Even if the CPU had the resources to read the disks in less than one second, there might still be problems transferring the data changes between the nodes due to a network throughput bottleneck.
This mirroring approach does have flaws. If a file is saved to a Samba file share on serv1, and serv1 fails before the file is mirrored, it will remain unavailable until serv1 fully recovers. In a worst-case scenario, the serv1 file system will have been corrupted and the file lost forever. However, compared to a single server with a backup tape, this scenario is less risky, because traditional backups are made far less frequently than the mirroring in the cluster. Of course, a cluster is no replacement for traditional backups, which are still vital for many other reasons.