DRBD in a Heartbeat
Heartbeat is designed to monitor your servers, and if your master server fails, it will start up all the services on the slave server, turning it into the master. To configure it, we need to specify which servers it should monitor and which services it should start when one fails.
Let's configure the services first. We'll take a look at the Sendmail we configured previously, because the other services are configured the same way. First, go to the directory /etc/heartbeat/resource.d. This directory holds all the startup scripts for the services Heartbeat will start up.
Now add a symlink from /etc/init.d/sendmail to /etc/heartbeat/resource.d.
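On most systems this is a single ln command. Since the real paths require root, the sketch below demonstrates the idea in a scratch directory; substitute /etc/init.d and /etc/heartbeat/resource.d on your own machine:

```shell
# Demonstrate the symlink step in a scratch directory.
# The real command would be:
#   ln -s /etc/init.d/sendmail /etc/heartbeat/resource.d/sendmail
mkdir -p /tmp/hb-demo/init.d /tmp/hb-demo/resource.d
touch /tmp/hb-demo/init.d/sendmail             # stands in for the real init script
ln -sf /tmp/hb-demo/init.d/sendmail /tmp/hb-demo/resource.d/sendmail
ls -l /tmp/hb-demo/resource.d/sendmail         # shows the link and its target
```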
Note: keep in mind that these paths may vary depending on your Linux distribution.
With that done, configure Heartbeat to start the services automatically on the master and to promote the slave to master if the master fails. Listing 2 shows the file that does this. It contains a single line, which lists the resources to be started on the given server, separated by spaces.
Listing 2. /etc/heartbeat/haresources
server1 IPaddr::192.168.1.5/24 datadisk::drbd0 sendmail
The first entry, server1, defines which server should be the default master of these services; the second, IPaddr::192.168.1.5/24, tells Heartbeat to configure this as an additional IP address, with the given netmask, on the master server. Next, datadisk::drbd0 tells Heartbeat to mount the DRBD device automatically on the master. After that come the names of all the services we want to start; in this case, sendmail.
Note: these names should be the same as the filename for their startup script in /etc/heartbeat/resource.d.
Next, let's configure the /etc/heartbeat/ha.cf file (Listing 3). The main things you'll want to change in it are the hostnames of the master and slave machines at the bottom, and the deadtime and initdead values. These specify how many seconds of silence to allow from the other machine before assuming it's dead and taking over.
If you set this too low, you might get false positives, and unless you have STONITH in place, which powers off a machine presumed dead so it can't come back up and run as a second master, you can run into all kinds of problems. I set mine to two minutes; that's what has worked best for me, but feel free to experiment.
Also keep in mind the following two points. For the serial connection to work, you need a crossover (null-modem) serial cable between the machines. And if you don't use a crossover network cable between the machines but instead go through a hub shared with other Heartbeat nodes, you have to change the udpport for each master/slave pair, or your log files will fill with warning messages.
Listing 3. /etc/heartbeat/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 120
initdead 120
serial /dev/ttyS1
baud 9600
udpport 694
udp eth0
nice_failback on
node server1
node server2
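To illustrate the udpport point above: if a second master/slave pair shares the same hub, give that pair a different port in its own ha.cf. 694 is the default; the value below is just an illustrative choice.

```
udpport 695
```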
Now, all that's left to do is start Heartbeat on both the master and the slave server by typing:
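On most distributions of that era, you invoke the Heartbeat init script directly, as root, on each node (the exact path, or a service wrapper, may vary by distribution):

```shell
/etc/init.d/heartbeat start
```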
Once you've got that up and running, it's time to test it. You can do that by stopping Heartbeat on the master server and watching to see whether the slave server becomes the master. Then, of course, you might want to try it by completely powering down the master server or any other disconnection tests.
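A minimal failover check might look like the following, assuming the haresources file from Listing 2; run the first command on the master and the rest on the slave (exact paths vary by distribution):

```shell
# On the master: stop Heartbeat to simulate a failure
/etc/init.d/heartbeat stop

# On the slave: the shared address from haresources should now be up...
ip addr show | grep 192.168.1.5
# ...and the DRBD device should be mounted
mount | grep drbd
```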
Congratulations on setting up your redundant server system! And, remember, Heartbeat and DRBD are fairly flexible, and you can put together some complex solutions, including having one server being a master of one DRBD partition and a slave of another. Take some time, play around with them and see what you can discover.
Pedro Pla (firstname.lastname@example.org) is CTO of the Holiday Marketing International group of companies, and he has more than ten years of Linux experience.