DRBD in a Heartbeat

How to build a redundant, high-availability system with DRBD and Heartbeat.

Configuring Heartbeat

Heartbeat is designed to monitor your servers, and if your master server fails, it will start up all the services on the slave server, turning it into the master. To configure it, we need to specify which servers it should monitor and which services it should start when one fails.

Let's configure the services first. We'll take a look at the Sendmail service we configured previously, because the other services are configured the same way. First, go to the directory /etc/heartbeat/resource.d. This directory holds all the scripts Heartbeat uses to start and stop services.

Now create a symlink in /etc/heartbeat/resource.d that points to /etc/init.d/sendmail.
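For example, assuming the paths above, the link for Sendmail could be created with:

ln -s /etc/init.d/sendmail /etc/heartbeat/resource.d/sendmail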

Note: keep in mind that these paths may vary depending on your Linux distribution.

With that done, set up Heartbeat to start the services automatically on the master computer and to promote the slave to master if the master fails. Listing 2 shows the file that does that, and in it you can see we have only one line, which lists the resources to be started on the given server, separated by spaces.

The first entry, server1, defines which server should be the default master of these resources; the second, IPaddr::192.168.1.5/24, tells Heartbeat to configure this as an additional IP address on the master server with the given netmask. Next, with datadisk::drbd0, we tell Heartbeat to mount this DRBD device automatically on the master, and after this, we can enter the names of all the services we want to start up; in this case, we put sendmail.

Note: these names should match the filenames of their startup scripts in /etc/heartbeat/resource.d.
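Put together from the pieces described above, such a line would look something like the following (a sketch based on the values used in this article, not a verbatim copy of Listing 2):

server1 IPaddr::192.168.1.5/24 datadisk::drbd0 sendmail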

Next, let's configure the /etc/heartbeat/ha.cf file (Listing 3). The main things you will want to change in it are the hostnames of the master/slave machines at the bottom, and the deadtime and initdead values. These specify how many seconds of silence should be allowed from the other machine before assuming it's dead and taking over.

If you set these too low, you might get false positives, and unless you've got a system called STONITH in place, which forcibly powers off the other machine once it is presumed dead, you can have all kinds of problems. I set mine at two minutes; that's what has worked best for me, but feel free to experiment.

Also keep in mind the following two points: for the serial connection to work, you need to connect the machines with a crossover (null-modem) serial cable; and if you don't use a crossover network cable between the machines but instead go through a hub shared with other Heartbeat nodes, you have to change the udpport for each master/slave node pair, or your log files will fill up with warning messages.
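As a rough sketch of the directives discussed here (this is not Listing 3; the serial device, port and hostnames are placeholders to adapt to your own setup), the relevant part of ha.cf might look like:

deadtime 120        # seconds of silence before declaring the other node dead
initdead 120        # allowance for the first heartbeat after a reboot
serial /dev/ttyS0   # heartbeat over the crossover serial cable
udpport 694         # change this per master/slave pair if several share a hub
bcast eth0          # heartbeat over the network interface
node server1
node server2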

Now, all that's left to do is start Heartbeat on both the master and the slave server by typing:

/etc/init.d/heartbeat start

Once you've got that up and running, it's time to test it. You can do that by stopping Heartbeat on the master server and watching to see whether the slave server becomes the master. Then, of course, you might want to try completely powering down the master server or running any other disconnection tests.
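For example, on the master you could run:

/etc/init.d/heartbeat stop

and then watch the slave (for instance, by tailing Heartbeat's log file, wherever your ha.cf sends it) to confirm that it takes over the IP address, mounts the DRBD device and starts sendmail.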

Congratulations on setting up your redundant server system! And, remember, Heartbeat and DRBD are fairly flexible, and you can put together some complex solutions, including having one server act as the master of one DRBD partition and the slave of another. Take some time, play around with them and see what you can discover.

Pedro Pla (pedropla@pedropla.com) is CTO of the Holiday Marketing International group of companies, and he has more than ten years of Linux experience.

______________________

Comments


Thanks

Posted by Kokai

Thank you! Check http://docs.homelinux.org for other tutorials about DRBD; they are well explained, like your article.

DRBD.conf

Posted by Anonymous

Hi!

The IP of your server1 is 192.168.1.1 rather than 192.168.1.3, isn't it?

Was 192.168.1.3 used here by mistake, or is it something else?

;-}

drbd after failure

Posted by Daniel

I have done the above and set up DRBD and Heartbeat. I'm having an issue where once, say, node-a loses its network connection, the failover happens as expected: node-b mounts the disk and everything is there. But when node-a comes back up, DRBD is not started as the secondary; node-a takes over as primary again, and I lose all the new data that node-b created.

How can I get DRBD to play nicer?

====DRBD.conf=====

global {
    minor-count 1;
}

resource mysql {

    # * for critical transactional data.
    protocol C;

    on server-1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.128:7788;
        meta-disk internal;
    }

    on server-2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.129:7788;
        meta-disk internal;
    }

    disk {
        on-io-error detach;
    }

    net {
        max-buffers 2048;
        ko-count 4;
    }

    syncer {
        rate 10M;
        al-extents 257;
    }

    startup {
        wfc-timeout 30;
        degr-wfc-timeout 120;
    }
}

=====END======

====ha.cf=====

logfacility local0
keepalive 500ms
deadtime 10
warntime 5
initdead 30
ucast eth0 192.168.0.129
#mcast eth0 225.0.0.1 694 2 0
auto_failback off
node server-1
node server-2
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
logfile /var/log/hb.log
debugfile /var/log/heartbeat-debug.log

====END=======

Is there an error in Listing 1 (drbd.conf)?

Posted by Laker Netman

I'm in the midst of setting up my first HA cluster and want to be sure I didn't miss something. Shouldn't the address line in the "on server" section be 192.168.1.1 rather than .1.3? If not, what did I miss?

TIA,
Laker

mounting drbd drives with heartbeat

Posted by Chris

I can get the DRBD drive to start up with Heartbeat, and I can fail over and have the primary and secondary switch. The problem I'm having is that I cannot get Heartbeat to mount the drives; I can mount them just fine with the mount command.

I'm not sure how the drives are being mounted in the article.

Does anyone know how I would mount /dev/drbd0 on /mnt/drbd0 with Heartbeat?

mount drives with heartbeat

Posted by Joseph Chackungal

I have the same issue!

Did you find a way out? My HA cluster with DRBD works great if I manually mount my replicated drive, but it refuses to do it automatically with Heartbeat.

Any help/leads will be appreciated.

Thanks

Try mounting with resource

Posted by Jan

Try mounting with the Filesystem resource script. This works for me and is mentioned in some other how-to articles:

Filesystem::/dev/drbd0::/data::ext3

To test it on the command line, run the script without the :: separators.
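Following that suggestion with the placeholder values used in the article (the mountpoint and filesystem type here are only assumptions, and depending on your DRBD version the promote step may be handled by datadisk or by the drbddisk script shipped with newer DRBD releases), the haresources line might become something like:

server1 IPaddr::192.168.1.5/24 datadisk::drbd0 Filesystem::/dev/drbd0::/mnt/drbd0::ext3 sendmail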

NFS support is generally quite tricky

Posted by frankie

That should at least be mentioned in the article. A Heartbeat-based service with a floating IP address can be extremely tricky with respect to locking when used with NFS servers. Also, I would avoid using mbox-based mail spool directories on DRBD partitions; maildirs are much safer. That's my 2 cents.

NFS support not that difficult

Posted by Alan Robertson

NFS works well, including locking. Dozens to hundreds of sites have it working quite nicely. You do have to set things up correctly, but the Linux-HA web site and several other articles (pointed to by the PressRoom page) explain how to do that in detail. There is a hole where an extremely active application can possibly fail to get a lock during a failover, but that happens rarely.

However, you do have to set it up correctly.
