A High-Availability Cluster for Linux

Mr. Lewis tells us how he designed and implemented a simple high-availability solution for his company.
Cluster Load Balancing

Spreading the workload evenly across the nodes in a cluster is preferable to having one node do all the work until it fails, then having another node take over; the latter arrangement is known as a hot standby system.

Load balancing can take many forms, depending on the service in question. For web servers, which often serve simple, read-only static pages, a round-robin DNS solution can be quite effective. Unfortunately, for read/write or transactional services such as e-mail or database access, seamless load balancing across the cluster is very difficult unless the connection and session state held by the service on one node can be shared with the other nodes. It would also require near-instantaneous disk mirroring and heavy use of distributed locking, which most daemons will not support without complex modifications. To avoid these drawbacks, I decided on a simpler approach which sits between a hot standby and full network-level load balancing.
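Round-robin DNS needs nothing more than multiple A records for the same name; BIND rotates the order of the answers, spreading client connections across the servers. The names and addresses below are illustrative, not our actual configuration:

```
; Zone file fragment: round-robin DNS for a static web site.
; BIND returns these A records in rotating order.
www    IN  A   192.168.1.10   ; node A (serv1)
www    IN  A   192.168.1.11   ; node B (serv2)
```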

In my two-node cluster, I put half of the required services on node “A” (serv1) and the other half on node “B” (serv2). A mutual failover configuration was employed so that if node A failed, node B would take over all of its services and vice versa.
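The mutual-failover idea can be sketched as a small shell script: each node knows which services its peer primarily runs and starts them when the peer is declared dead. The service-to-node mapping follows the arrangement described here, but the script structure and function names are my own illustration:

```shell
#!/bin/sh
# Mutual-failover sketch: a node starts the *other* node's primary
# services when that node is declared dead. Services that already run
# on both nodes concurrently (httpd, bind, radius) need no takeover.

services_of() {
    case "$1" in
        serv1) echo "smb nmb" ;;    # Samba runs primarily on serv1
        serv2) echo "cyrus" ;;      # Cyrus IMAP4 runs primarily on serv2
    esac
}

takeover() {
    for svc in $(services_of "$1"); do
        echo "starting $svc"        # a real script would run the init script
    done
}
```

In the real cluster the takeover is triggered by the heartbeat monitor, and starting the services is combined with the address takeover described below.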

Service Suitability for Clustering

We had to decide which services needed to run on the overall cluster. This involved comparatively rating how much computing resource each service would consume. On our previous Linux server, Samba and Cyrus IMAP4 were the most resource-intensive services, with FTP, httpd and Sendmail following close behind.

Careful consideration had to be given to which services were suited to running on two or more nodes concurrently. Examples of such services include Sendmail (for sending mail only, or as a relay), bind, httpd, ftpd (downloading only) and radius. Examples of services which cannot be run this way are Cyrus IMAP4, ftpd (uploading) and Samba. Samba cannot run on two servers at once, as that would result in two servers broadcasting the same NetBIOS name on the same LAN, and it is not yet possible to have a PDC/BDC (primary and backup domain controller) arrangement with the current stable versions of Samba. The pages of a simple web site, on the other hand, do not change often, so mirroring is quite effective and parallel web servers can run quite happily without major problems.

The servers were configured so that each one took primary care of a specific group of services. I put Samba on serv1 and Cyrus IMAP4 on serv2. The other services were shared out in a similar way; httpd, bind and radius run on both nodes concurrently.

Node Takeover

In the event of a node failure, the other node takes over all the services of the failed one in such a way as to minimize disruption to the network users. This was best achieved by having the surviving node take over the failed node's IP (Internet protocol) and MAC (media access control) addresses on an unused Ethernet card. In effect, that node then appears to the network users to be both serv1 and serv2.

The use of MAC address takeover was preferred in order to avoid potential problems with the clients' ARP (address resolution protocol) cache still associating the old MAC address with the IP address. MAC address takeover, in my opinion, is neater and more seamless than IP takeover alone, but unfortunately has some scalability limitations.
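For illustration, the takeover boils down to a few privileged commands run on the surviving node against its spare NIC. The interface name, addresses and MAC below are placeholders, and `arping` here is the iputils tool; these are a sketch of the idea rather than our exact procedure:

```shell
# Sketch of MAC + IP address takeover onto the spare third NIC (eth2).
# Run as root; addresses and MAC are illustrative.
ifconfig eth2 down
ifconfig eth2 hw ether 00:A0:C9:12:34:56             # failed node's MAC
ifconfig eth2 192.168.1.11 netmask 255.255.255.0 up  # failed node's IP
# Announce the move so switches and clients update their tables at once
arping -U -c 3 -I eth2 192.168.1.11
```

Because the original MAC address moves with the IP address, the clients' ARP caches stay valid, which is exactly why this is more seamless than IP takeover alone.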

Networking Hardware

While designing the HA network setup, it was very important to eliminate all possible single points of failure. Our previous server had many: the machine itself, the network cable, the Ethernet hub, the UPS and so on. The network was therefore designed to be both inexpensive and reliable, as shown in Figure 1.

Figure 1. Network Diagram for Our Two Nodes

The network diagram shows the three network interface cards (NICs) in each server. The first NIC in each server is used for main LAN access by the clients. Each node is plugged into a separate Ethernet switch or hub to give redundancy in case of a switch lockup or failure. (This actually happened to us not so long ago.) The second NIC creates a private inter-node network over a simple cross-over 100BaseTX full-duplex Ethernet cable; a cross-over cable is far less likely to fail than two cables plugged into an Ethernet hub or switch. This link is primarily used for disk mirroring traffic and the cluster heartbeat. It also takes network traffic load off the main interface and provides a redundant network path between the nodes. The third NIC is the redundant LAN access card used for MAC and IP address takeover in the event of a remote node failure in the cluster. Again, these cards are plugged into different Ethernet switches or hubs for greater network availability.
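The heartbeat side of this setup reduces to a simple timeout rule: the peer is declared dead once no heartbeat has arrived over the private link for some dead-time interval. The sketch below does only the timeout arithmetic; the function name and the dead-time value are my own assumptions, not the cluster's actual tunables:

```shell
#!/bin/sh
# Heartbeat timeout sketch: in the real cluster, heartbeats arrive over
# the private cross-over link and the time of the last one is recorded.

DEADTIME=10   # seconds of silence before declaring the peer dead (assumed)

peer_alive() {
    # $1 = epoch time (seconds) of the last heartbeat seen from the peer
    now=$(date +%s)
    [ $(( now - $1 )) -le "$DEADTIME" ]
}
```

A monitoring loop would call `peer_alive` every few seconds and, on the first failure, kick off the address and service takeover described earlier.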

______________________

Comments


High-availability clusters

Emil Koutanov

The problem with using Linux-based (or an OS-specific) clustering software is that you'll always be tied to the operating system.

The folks at Obsidian Dynamics have built a Java-based application-level clustering solution that isn't tied to the operating system.
(www.obsidiandynamics.com/gridlock)

I think this is the way forward, particularly seeing that many organisations are running a mixed bag of Windows and Linux servers - being able to cluster Windows and Linux machines together can be a real advantage. It also makes installation and configuration easier, since you're not supporting a dozen different operating systems and hardware configurations.

The other neat thing about Gridlock is that it doesn't use quorum and doesn't rely on NIC bonding/teaming to achieve multipath configurations - instead it combines redundant networks at the application level, which means it works on any network card and doesn't require specialised switchgear.

In connection with his article on A High-Availability Cluster

Steve Thompson

I am trying to get in touch with Mr. Phil (Philip) Lewis by e-mail, but I have the impression there is something wrong with the e-mail address. Can you confirm it? I have: lewispj@e-mail.com
Thanks in advance

Updated email

Anonymous

You can contact me at:

linuxjournal (at sign) linuxcentre.net

Thanks

Phil
