Building a Two-Node Linux Cluster with Heartbeat

C T shows you how to set up a two-node Linux cluster with Heartbeat.

The term "cluster" is actually not very well defined and can mean different things to different people. According to Webopedia, a cluster refers to a group of disk sectors. Most Windows users are probably familiar with lost clusters--something that can be rectified by running ScanDisk or chkdsk /f (defrag only reorganizes files; it does not recover lost clusters).

However, at a more advanced level in the computer industry, a cluster usually refers to a group of computers connected together so that more computing power, e.g., more MIPS (millions of instructions per second), can be achieved or higher availability (HA) can be obtained.

Beowulf: a Super Computer Approach for the "Poor"

Most super computers in the world are built on the concept of parallel processing--high-speed computing power is achieved by pooling the power of the individual computers. "Deep Blue", the IBM super computer that played chess against world champion Garry Kasparov, was a computer cluster consisting of several hundred RS/6000s. In fact, many big-time Hollywood animation companies, such as Pixar and Industrial Light & Magic, use computer clusters extensively for rendering (the process of translating all the information--color, movement, physical properties, etc.--into a single frame of film).

In the past, a super computer was an expensive, deluxe item that only a few universities or research centers could afford. Beowulf, a project started at NASA, builds clusters from "off-the-shelf" hardware (e.g., Pentium PCs) running Linux, at very low cost.

In the last several years, many universities worldwide have set up Beowulf clusters for scientific research or simply to explore the frontier of super computer building.

High Availability (HA) Cluster

Clusters in this category use various technologies to gain an extra level of reliability for a service. Companies such as Red Hat, TurboLinux and PolyServe have cluster products that would allow a group of computers to monitor each other; when a master server (e.g., a web server) goes down, a secondary server will take over the services, similar to "disk mirroring" among servers.

Simple Theory

Because I do not have access to more than one real (or public) IP address, I set up my two-node cluster in a private network environment with some Linux servers and some Win9x workstations.

If you have access to three or more real/public IP addresses, you can certainly set up the Linux cluster with real IP addresses.

In the above network diagram (fig1.gif), the Linux router is the gateway to the Internet, and it has two IP addresses. The real IP, 24.32.114.35, is attached to a network card (eth1) in the Linux router and should be connected to either an ADSL modem or a cable modem for Internet access.

The two-node Linux cluster consists of node1 (192.168.1.2) and node2 (192.168.1.3). Depending on your setup, either node1 or node2 can be your primary server, and the other will be your backup server. In this example, I choose node1 as my primary and node2 as my backup. Once the cluster is set up, with IP aliasing (read the IP aliasing section of the Linux Mini HOWTO for more detail), the primary server runs with an extra IP address (192.168.1.4). As long as the primary server is up and running, services (e.g., DHCP, DNS, HTTP, FTP, etc.) on node1 can be accessed through either 192.168.1.2 or 192.168.1.4. In fact, IP aliasing is the key concept for setting up this two-node Linux cluster.
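As a concrete illustration of IP aliasing (the interface name eth0 is an assumption here; substitute your own), the floating address can be brought up by hand with ifconfig:

```shell
# Attach 192.168.1.4 as an IP alias (eth0:0) to the existing eth0 interface.
# Run as root on the primary server; eth0 is an assumed interface name.
/sbin/ifconfig eth0:0 192.168.1.4 netmask 255.255.255.0 up

# Verify the alias: the node now answers on both 192.168.1.2 and 192.168.1.4.
/sbin/ifconfig eth0:0
```

Heartbeat's IP address resource does the equivalent of this automatically when it acquires the address, so the manual commands are only needed for testing the concept.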

When node1 (the primary server) goes down, node2 takes over all services from node1 by starting the same IP alias (192.168.1.4) and all the subsequent services. Some services can coexist between node1 and node2 (e.g., FTP, HTTP, Samba, etc.); however, a service such as DHCP can have only a single running copy on the same physical segment. Likewise, we can never have two identical IP addresses running on two different nodes in the same network.

In fact, the underlying principle of a two-node, high-availability cluster is quite simple, and people with some basic shell programming skills could probably write a shell script to build one. We can set up an infinite loop in which the backup server (node2) keeps pinging the primary server; if a ping fails, node2 starts the floating IP (192.168.1.4) as well as the necessary dæmons (programs running in the background).
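As a sketch only (this is not how Heartbeat itself is implemented), such a watchdog loop might look like the following; the interface name eth0, the IP addresses and the httpd init script are assumptions carried over from the example network above:

```shell
#!/bin/sh
# Minimal failover watchdog, run on the backup server (node2).
# Assumptions: eth0 is the NIC, 192.168.1.2 is the primary,
# 192.168.1.4 is the floating IP, and httpd is the service to start.

PRIMARY_IP=192.168.1.2     # node1, the primary server
FLOAT_IP=192.168.1.4       # the shared floating IP alias
CHECK_INTERVAL=5           # seconds between health checks

# Succeeds while the primary still answers pings.
primary_alive() {
    ping -c 2 "$PRIMARY_IP" >/dev/null 2>&1
}

# Bring up the IP alias and start the services node1 was providing.
take_over() {
    /sbin/ifconfig eth0:0 "$FLOAT_IP" netmask 255.255.255.0 up
    /etc/rc.d/init.d/httpd start
}

# Poll the primary forever; when it stops responding, take over.
monitor_loop() {
    while primary_alive; do
        sleep "$CHECK_INTERVAL"
    done
    take_over
}

# To arm the watchdog on node2, call: monitor_loop
```

A real deployment also needs split-brain protection (a second heartbeat path such as a serial or crossover link), which is exactly what Heartbeat adds on top of this basic idea.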

A Two-Node Linux Cluster HOWTO with "Heartbeat"

You need two Pentium-class PCs with a minimum specification of a 100MHz CPU, 32MB of RAM, one NIC (network interface card) and a 1GB hard drive. The two PCs need not be identical. In my experiment, I used an AMD K6 350MHz and a Pentium 200 MMX. I chose the AMD as my primary server because it completes a reboot (you need to do a few reboots for testing) faster than the Pentium 200. With the great support of CFSL (Computers for Schools and Libraries) in Winnipeg, I got some 4GB SCSI hard drives as well as some Adaptec 2940 PCI SCSI controllers. The old and almost obsolete equipment is in good working condition and is perfect for this experiment.

node1

  • AMD K6 350MHz CPU

  • 4GB SCSI hard drive (an IDE hard drive works just as well)

  • 128MB RAM

  • 1.44MB floppy drive

  • 24x CD-ROM (not needed after installation)

  • 3Com 905 NIC

node2

  • Pentium 200 MMX CPU

  • 4GB SCSI hard drive

  • 96MB RAM

  • 1.44MB floppy drive

  • 24x CD-ROM

  • 3Com 905 NIC

______________________

Comments


Linux Cluster Manager:

Posted by riotorious

Linux Cluster Manager is a graphical tool for managing multiple Linux systems from a central location. I have a problem installing this tool. I think it is an interesting monitoring tool, but the installation document is not well organised. Has anyone who installed this tool got clear step-by-step installation details?
This is the link to the tool:

http://linuxcm.sourceforge.net/



Heartbeat is not failing over when I stop the application.

Posted by Chandra

Greetings:

I have configured MQ HA with Heartbeat; we are running on a Red Hat server. When I stop Heartbeat, the failover works fine, but when I stop MQ or httpd, which are in the resource group, the node does not fail over; there is no response from Heartbeat, and the application simply stops.
I start Heartbeat with /etc/rc.d/init.d/heartbeat start.
How do I monitor application health, so that if there is a problem with an application it fails over to the next node?
httpd is a default application with Red Hat, where we can check the status with /etc/rc.d/init.d/httpd status; when I stop it, the status shows as stopped or not running.
Do I need to do any OS configuration to make Heartbeat continuously check the applications in the resources? I am new to Linux administration.

Thanks,
Chandra.

our problem is the same;

Posted by SAVAS

Our problem is the same; maybe the server itself works fine, but what if Apache or the MySQL server does not?
Will Heartbeat sense it and take the service over to the other node?
If yes, please tell me how, or send me a link that explains how to configure it.

thanks.



Hi, Is it possible to have a

Posted by Nachiketh

Hi,
Is it possible to have two nodes running different versions of Heartbeat (1.x/2.x) and different RHEL versions (RHEL3/RHEL4) working well in tandem? Do failover and the other Heartbeat functions work fine in such a Linux cluster?
Thanks in advance!

Works

Posted by Misafir

Works for us and for many many others :-)

so far it has been working

Posted by freddy

So far it has been working for us with a few glitches.


Hello, I'm developing a

Posted by Javier Andrés Alonso

Hello,

I'm developing a system for high availability and load balancing under Linux, with heartbeat, ldirectord, glusterfs, mon, MySQL Cluster, ... You can see the results in my blog:

http://redes-privadas-virtuales.blogspot.com/2008/12/alta-disponibilidad...

hi, i'm sure that you have

Hi,
I'm sure you have explained it very well, but there is a big problem (for me): your blog is in Spanish :) so I cannot understand it.
Well, you may say "that is your problem"; yes, that is true :)

Can you please write a good "how to" in English?
Thanks

ha.cf

Posted by kris_kk

This is the main ha.cf file.

ha.cf:
-------
logfile /var/log/ha-log
keepalive 2
deadtime 10
warntime 5
initdead 30
bcast eth1
auto_failback off
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
node A
node B
ping 192.168.1.1   # the gateway IP

heartbeat VIP failover is not happening.

Posted by kris_kk

Hi,
Thanks for providing such a good article.
I am trying to set up Heartbeat with the following configuration.
However, I am having an issue with VIP failover once node1 is down:
Heartbeat is not allocating the shared VIP on node2 when node1 goes down.

A:Master
eth1 = 192.168.100.2

B:Slave
eth1 = 192.168.100.4

on A eth1:2 = 192.168.100.24 (virtual interface for heartbeat)

ha.cf:
-------
logfile /var/log/ha-log
keepalive 2
deadtime 10
warntime 5
initdead 30
ucast eth1
auto_failback on
respawn hacluster /usr/lib/heartbeat/ipfail
node A
node B
ping 192.168.1.1   # the gateway IP

haresource:
============
B 192.168.223.24 mysql

authkeys:
==========
auth 2
2 sha1 Test_HB!

Node: A (3f36b1d6-90c0-4f61-9d76-f5bedee43c12): online
Node: B (7caa9321-001d-473c-b505-7081e4ec4d7f): online

Can someone please help in resolving the issue?
I am unable to find the reason for the failure.
Thank you.

eth0:0

How did you set up eth0:0? Is that a bridged eth0 interface? In any case, how did you accomplish that?
Thanks in advance.

Heartbeat with Tcp based Application

Posted by Cantek

Hi, my name is Cantek. I am from Turkey. I am trying to build a server application without application-server software. My server will only use TCP sockets to receive and send bytes, nothing more. I need a clustering and failover mechanism for the server side. I am using two Ubuntu servers, both running the server application; during my research I came across Heartbeat many times.
What I need is this:
When server 1 fails, server 2 must keep doing the job at exactly the point where server 1 failed.
Session and state information must be kept.
If you can help me with this subject, I will be very happy.
Good day.


When my server one fails

Posted by Cantek

When server 1 fails, server 2 must keep doing the job at exactly the point where server 1 failed.

Building 2node clus !!!

Posted by Johny Anthony

Hi,

Thanks a lot for the notes!
I had a query, though:

What's the configuration for the second set of NICs on both nodes?

Secondly, what kind of installation should I go for:
the cluster package in Red Hat, or a normal Red Hat server installation?

I have RHEL 5; will that be fine?

Regards,
Johny :)

Article has Incorrect IP Address for Node 2

Posted by Marcus

The ifconfig listings for node 2, I believe, are incorrect. The inet address for eth0 should be 192.168.1.3, *not* 192.168.1.2.

Sharing the same address as node 1 would create a conflict.

Re: Article has Incorrect IP Address for Node 2


I think so too. Please confirm anyone.

HA or Cluster not DHCP

Posted by Jim Balcomb

Heartbeat is excellent. If you do this for Apache, use ldirectord for load balancing and redundancy, then use Heartbeat to run ldirectord on two servers for failover redundancy.

ISC DHCP already supports multiple nodes in failover mode and is somewhat load balanced.

Defrag does not recover lost clusters.

SAN stands for Storage Area Network, not Server Area Network.

The discussion about the MAC/ARP on a switch is incorrect, and there is no need to run this setup on a hub.

Installation Headaches!

Hi All,

I am a student and I am working on a project for my class. I have no prior experience with Linux (which is one of the requirements for the project). I have two computers with Red Hat version 9, and I have tried to start the installation process of Heartbeat on one of them.
I have installed several RPMs but have run into a snag.
When trying to install heartbeat-ldirectord-1.2.3.cvs.20050404-1.rh.rl.um.1.i386.rpm, I receive this error:

error: Failed dependencies:
perl-Mail-IMAPClient is needed by heartbeat-ldirectord-1.2.3.cvs.20050404-1.rh.rl.um.1
perl-Net-DNS is needed by heartbeat-ldirectord-1.2.3.cvs.20050404-1.rh.rl.um.1
perl-ldap is needed by heartbeat-ldirectord-1.2.3.cvs.20050404-1.rh.rl.um.1

I understand that this means prerequisite RPMs are not available for the RPMs that need them (or at least I think I do), but I have the "Mail" and "ldap" RPMs and get similar results when I try to install them. I don't know where "Net-DNS" is. Any help would be greatly appreciated. Thank you.

Installing Perl Packages

RPMs are the devil's work.

Load up CPAN:

> cpan

install Net::DNS

...

Lather, rinse, repeat as needed.

Re: Building a Two-Node Linux Cluster with Heartbeat


no comment !

Re: Building a Two-Node Linux Cluster with Heartbeat

When this type of failover to the secondary node happens, shouldn't the virtual IP on the new node have the same physical (MAC) address as the old one? If not, won't this confuse the hell out of the switch that it's attached to?

Re: Building a Two-Node Linux Cluster with Heartbeat

That's why an ARP change broadcast is issued by Heartbeat :)

Re: Building a Two-Node Linux Cluster with Heartbeat

I thought that it would confuse the switch, but it sends an ARP command to release the IP address before taking over.

Re: Building a Two-Node Linux Cluster with Heartbeat

You should only use hubs on this kind of setup and NOT a switch.

Heartbeat just sends out gratuitous ARPs to take over the IP, so all the other machines simply update their routing tables.

Obviously, using a switch would cause the drastic problems that you are mentioning here.

No, you can use also .

Posted by Vipul

No, you can also use a switch; the main concern is that everything is physically connected.

regards,
Vipul Ramani

Re: Building a Two-Node Linux Cluster with Heartbeat


The HUB should be used, of course!

Re: Building a Two-Node Linux Cluster with Heartbeat

At any time, there will be just one virtual IP. If the master node is up, the virtual IP is tied to the master node; when the master node is down, the secondary node takes over the virtual IP. The process is just like assigning an IP address to different servers at DIFFERENT times.

As far as the switch is concerned, it only keeps a MAC address for a certain period of time; after seeing that the virtual IP has been reassigned to another server with a different MAC address, it simply updates its own MAC lookup table. In this case, I don't think it will confuse anyone or anything such as the switch.

Re: Building a Two-Node Linux Cluster with Heartbeat

Posted by eckes

It depends on your network equipment. Heartbeat (or fake) can be set up to send a gratuitous ARP for the address that has just moved. Most equipment will simply update the ARP cache and work with the new NIC. Of course, you have to check that your switch and router are happy with that.

This is the most common system for failover clusters, since in this scenario network outages usually happen anyway.

BTW: Heartbeat and fake work best if you have static applications; for replication, you may be better off with shared-storage clusters like Kimberlite. For some applications, especially web application servers, it is much better to run them in a load-balancing configuration, because the failover time is smaller and the hardware is better utilized.

Re: Building a Two-Node Linux Cluster with Heartbeat

If you want to see a small sampling of the kinds of mission-critical applications people are running on Heartbeat, you might want to look here:


http://linux-ha.org/heartbeat/users.html

Heartbeat has been in production for several years, and is in use in hundreds of mission-critical sites across the world.

Re: Building a Two-Node Linux Cluster with Heartbeat


My thanks to CT on the excellent article and to the replies adding useful information. Question --- Using Samba, Pentium 3's, 1GB RAM, 1GHz Processor, how many users can this safely handle? I/we initially plan to use it to serve approx six MS apps that require frequent upgrades/patches to approx 300 users. Thanks again, art557@pacbell.net

Re: Building a Two-Node Linux Cluster with Heartbeat


I've been using Heartbeat for over a year to provide high-availability to a mission critical application.

The application employs a replicated (2-way) database, a servlet container, and several batch processes.

I've augmented Heartbeat with a program that monitors all critical resources needed by the application. It initiates a Heartbeat failover should any of these resources become unusable.

This combination has worked flawlessly.

Re: Building a Two-Node Linux Cluster with Heartbeat

Interesting, but without replication or completely static data, it's not much use other than as a toy to play with. The reason for alternate NICs with a crossover cable OR a COM cable: if the network device both nodes are connected to has a problem and comes back after that 10 seconds (say some bozo kicks the power plug out of the switch), you now have two nodes responding to the same IP address and, to say the least, unpredictable results. The crossover or COM connection provides a non-DoS-able failsafe.

Re: Building a Two-Node Linux Cluster with Heartbeat

"A toy with which to play"? Never installed it, have you?

Oh, and split clusters are the reason why we use separate switches for each machine AND pass a heartbeat signal over the same network the clients see the service on.

So, no SPOF.

Works for us and for many many others :-)

Re: Building a Two-Node Linux Cluster with Heartbeat

Many people use Heartbeat in combination with shared storage, a general-purpose replication mechanism like DRBD, or an application-specific replication mechanism like the ones that come with LDAP or DNS. With replication, even very cheap hardware can be made to avoid all SPOFs. Heartbeat integrates well with DRBD to provide general partition-layer replication of data.

Certainly you want more than one heartbeat connection for lots of different reasons.

There are hundreds of production users of heartbeat all over the world. For a few examples see:

http://linux-ha.org/heartbeat/users.html

Re: Building a Two-Node Linux Cluster with Heartbeat

Posted by jsw

It is true that replication of dynamic data is an important issue. However, this example serves the purpose of creating the lowest layer of an HA system, namely knowing when to switch to node2. The next layer would be straight software-driven replication built on top of this example failover design.

An alternative to replication could be storage shared between the two nodes. I wonder if someone out there has an inexpensive shared-storage design?

Re: Building a Two-Node Linux Cluster with Heartbeat

The CVS version of Heartbeat has just added support for the IBM ServeRAID RAID controllers, so that two machines can share a SCSI string and fail over (correctly) between the two controllers. These RAID controllers guarantee that only one side at a time will access any given logical volume. I think the retail price of these devices starts at around $600 USD. It's a lot cheaper than an FC box.

how come the ServeRAID can

Posted by Afif

How come the ServeRAID can guarantee that only one node accesses the shared storage, while the other node (when booting for the first time) can't identify the logical volume of the enclosure?
If you have any experience, would you like to share how to set up this hardware? I would like to set up Oracle 10g RAC using IBM x346, ServeRAID 6M and EXP400 on Linux + Heartbeat.
Warm Regards,

Re: Building a Two-Node Linux Cluster with Heartbeat

This tool has worked almost flawlessly at our site for the last year. It turned our Samba and FTP servers into highly available services for the cost of a couple of obsolete PCs.

Now we're just left with Heartbeat clusters backed by EMC Celerras.

Re: Building a Two-Node Linux Cluster with Heartbeat

Be careful running a service like DHCP in this scenario! If the DHCP database from the master server is not being replicated to the HA backup, whenever the HA backup takes over, it will begin assigning IP addresses regardless of what the master had already assigned (i.e., a station that requests a DHCP address after a restart will probably get an IP address that duplicates one on a system already running).

Re: Building a Two-Node Linux Cluster with Heartbeat

You would probably not run a high-availability DHCP server. The DHCP protocol has redundancy built in; clients are expected to handle DHCPOFFERs from multiple servers on the same subnet. If you're going to throw together the hardware for a failover node, you might as well make both active with distinct scopes.

Re: Building a Two-Node Linux Cluster with Heartbeat

This is why getting a SCSI system on each host linked to a common array is really needed: data can be stored on the array so it can be shared.

To be really useful, "we" need something like this.

regards

Re: Building a Two-Node Linux Cluster with Heartbeat

If you make /var/lib/dhcp a mount point and mount it on a DRBD volume, then you should be in pretty good shape. The secondary machine will then have access to the DHCP leases when it takes over. Of course, you want /var/lib/dhcp to be on a journalling filesystem.

Re: Building a Two-Node Linux Cluster with Heartbeat

Posted by ghostdancer

This sounds more like a backup solution, right? When I first saw the title, I thought it was about Beowulf; I guess I was wrong.

Re: Building a Two-Node Linux Cluster with Heartbeat

It's about a high-availability cluster.

In this case, in its minimal expression, since the second node is no more than a hot standby (or hot backup, if you prefer).

However, without too much complication, you can add load-balancing capabilities to this setup and gain a lot more.

Re: Building a Two-Node Linux Cluster with Heartbeat


Most Windows users are probably familiar with lost clusters--something that can be rectified by running the defrag utility.

Now... Running defrag to recover lost clusters isn't a good idea. I'd rather run ScanDisk or, in the good old days, chkdsk /f. But then again, I don't use Microsoft products any longer ;P
