Building a Two-Node Linux Cluster with Heartbeat

C T shows you how to set up a two-node Linux cluster with Heartbeat.

The term "cluster" is actually not very well defined and could mean different things to different people. According to Webopedia, cluster refers to a group of disk sectors. Most Windows users are probably familiar with lost clusters--something that can be rectified by running the defrag utility.

However, at a more advanced level in the computer industry, cluster usually refers to a group of computers connected together so that more computing power, e.g., more MIPS (millions of instructions per second), or higher availability (HA) can be obtained.

Beowulf, a Supercomputer Approach for the "Poor"

Most supercomputers in the world are built on the concept of parallel processing--high-speed computing power is achieved by pooling the power of many individual computers. "Deep Blue", the IBM supercomputer that played chess against world champion Garry Kasparov, was a computer cluster consisting of several hundred RS/6000s. In fact, many big-time Hollywood animation companies, such as Pixar and Industrial Light and Magic, use computer clusters extensively for rendering (the process of translating all the information, such as color, movement and physical properties, into a single frame of film).

In the past, a supercomputer was an expensive, deluxe item that only a few universities or research centers could afford. Started at NASA, Beowulf is a project for building clusters from "off-the-shelf" hardware (e.g., Pentium PCs) running Linux, at very low cost.

In the last several years, many universities worldwide have set up Beowulf clusters for scientific research or simply to explore the frontier of supercomputer building.

High Availability (HA) Cluster

Clusters in this category use various technologies to gain an extra level of reliability for a service. Companies such as Red Hat, TurboLinux and PolyServe have cluster products that allow a group of computers to monitor each other; when the master server (e.g., a web server) goes down, a secondary server takes over its services, much like "disk mirroring" among servers.

Simple Theory

Because I do not have access to more than one real (or public) IP address, I set up my two-node cluster in a private network environment with some Linux servers and some Win9x workstations.

If you have access to three or more real/public IP addresses, you can certainly set up the Linux cluster with real IP addresses.

In the above network diagram (fig1.gif), the Linux router is the gateway to the Internet, and it has two IP addresses. The real IP, 24.32.114.35, is attached to a network card (eth1) in the Linux router and should be connected to either an ADSL modem or a cable modem for Internet access.

The two-node Linux cluster consists of node1 (192.168.1.2) and node2 (192.168.1.3). Depending on your setup, either node1 or node2 can be your primary server, and the other will be your backup server. In this example, I will choose node1 as my primary and node2 as my backup. Once the cluster is set up, with IP aliasing (see the IP Aliasing mini-HOWTO for more detail), the primary server will be running with an extra IP address (192.168.1.4). As long as the primary server is up and running, services (e.g., DHCP, DNS, HTTP, FTP, etc.) on node1 can be accessed through either 192.168.1.2 or 192.168.1.4. In fact, IP aliasing is the key concept for setting up this two-node Linux cluster.
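
For example, on the primary server the floating address can be brought up by hand with ifconfig; the interface name eth0 and the netmask below are assumptions, so adjust them for your own setup:

    # bring up the floating address as an alias on eth0
    /sbin/ifconfig eth0:0 192.168.1.4 netmask 255.255.255.0 up

    # verify that the alias is up and answering
    /sbin/ifconfig eth0:0
    ping -c 3 192.168.1.4

    # take the alias down again
    /sbin/ifconfig eth0:0 down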

When node1 (the primary server) goes down, node2 will take over all services from node1 by bringing up the same IP alias (192.168.1.4) and starting the corresponding services. Some services can co-exist between node1 and node2 (e.g., FTP, HTTP, Samba, etc.); however, a service such as DHCP can have only a single running copy on the same physical segment. Likewise, we can never have two identical IP addresses running on two different nodes in the same network.

In fact, the underlying principle of a two-node, high-availability cluster is quite simple, and people with some basic shell programming skills probably could write a shell script to build one. We can set up an infinite loop in which the backup server (node2) simply keeps pinging the primary server; if a ping fails, node2 brings up the floating IP (192.168.1.4) as well as the necessary dæmons (programs running in the background).
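
A bare-bones sketch of such a monitor script, run on node2, might look something like the following; the interface name eth0, the netmask and the httpd init script are assumptions, and this is exactly the sort of work Heartbeat does for us in a much more robust way:

    #!/bin/sh
    # minimal failover monitor, run on the backup server (node2)
    # assumes the floating IP 192.168.1.4 lives on eth0 and that
    # httpd is the only service to take over

    PRIMARY=192.168.1.2
    FLOAT=192.168.1.4

    while true; do
        if ! ping -c 1 -w 2 $PRIMARY > /dev/null 2>&1; then
            # primary is unreachable: grab the floating IP and start the service
            /sbin/ifconfig eth0:0 $FLOAT netmask 255.255.255.0 up
            /etc/rc.d/init.d/httpd start
            break
        fi
        sleep 5
    done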

A Two-Node Linux Cluster HOWTO with "Heartbeat"

You need two Pentium-class PCs with a minimum specification of a 100MHz CPU, 32MB of RAM, one NIC (network interface card) and a 1GB hard drive. The two PCs need not be identical. In my experiment, I used an AMD K6 350MHz and a Pentium 200 MMX. I chose the AMD as my primary server because it completes a reboot (you need to do a few reboots for testing) faster than the Pentium 200. With the great support of CFSL (Computers for Schools and Libraries) in Winnipeg, I got some 4GB SCSI hard drives as well as some Adaptec 2940 PCI SCSI controllers. The old and almost obsolete equipment is in good working condition and is perfect for this experiment.

node1

  • AMD K6 350MHz CPU

  • 4GB SCSI hard drive (you certainly can use an IDE hard drive)

  • 128MB RAM

  • 1.44MB floppy drive

  • 24x CD-ROM (not needed after installation)

  • 3COM 905 NIC

node2

  • Pentium 200 MMX

  • 4GB SCSI hard drive

  • 96MB RAM

  • 1.44MB floppy drive

  • 24x CD-ROM

  • 3COM 905 NIC

______________________

Comments

Re: defrag?

I have never even heard of running defrag for a Windows disk problem, and no one I know has ever run defrag for that purpose. Defrag is for consolidating data, not fixing errors. Why would you even suggest doing this? ScanDisk is the standard tool that everyone uses, and it runs automatically most times you have a lockup or improper shutdown.

Please, for anyone reading the above message: do NOT run defrag if you are getting disk errors. Run ScanDisk. If you continue to have problems, your disk may be going bad.

Re: defrag?

I try not to use Windows, but the last time I ran defrag (Win98), ScanDisk was first invoked automatically.

I also thought this article would be about clusters (a la Beowulf), at least based on the title. Anyway, it was easy to read and I learned something new!

(linux.com wouldn't let me log in)

Apache cluster on SAN

hi,

OK, Heartbeat works great - really appreciated! I am amazed at the way it worked perfectly.
How do we mount the SAN partition for that IP service in an active-active Apache cluster?
Do I make an entry in /etc/fstab, or is that a sin?
I mean, I need to mount /var/www/html on /dev/sdc1, so do I make an entry in the fstab file?

gops

No Separate Storage?

Hi,
So it means that if I have two Linux computers, I can have a cluster installed, and I do not need any separate storage device, right?

How does shared storage work in this case?

Thanks,
Data Sheet

Mistaken config

Why do you assign "node1" and "node2" in the hosts file if you will use "atm1" and "cluster1"?

If you choose that way, "auth errors" will happen and all nodes will be in the master role, which is a conflict, so assign as below.

In ha.cf on both nodes:
node node1
node node2

In haresources on both nodes (if you want node1 to be the master):

node1 virtualaddress httpd

And you must set the authkeys file the same on all nodes; otherwise it will not start:

auth 1
1 md5 "mysecret"

And for a professional HA cluster we need a null-modem cable; from the NIC perspective, single points of failure (SPOFs) will add extra bandwidth load, and for storage networks that is not permissible.

Thanks,
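
Putting the fragments from this comment together, a minimal Heartbeat (version 1) configuration for the two nodes might look something like the following; the heartbeat media, timings, floating IP and service name are only illustrative:

    # /etc/ha.d/ha.cf (identical on both nodes)
    logfile /var/log/ha-log
    keepalive 2                # seconds between heartbeats
    deadtime 10                # declare a node dead after 10 silent seconds
    serial /dev/ttyS0          # heartbeat over a null-modem cable
    bcast eth0                 # heartbeat over the LAN as well
    node node1
    node node2

    # /etc/ha.d/haresources (identical on both nodes; node1 is the master)
    node1 192.168.1.4 httpd

    # /etc/ha.d/authkeys (identical on both nodes, permissions 600)
    auth 1
    1 md5 mysecret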
