Database Replication with Slony-I
Listing 2. subscribe.sh
#!/bin/sh

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb_slave
H1=localhost
H2=localhost
U=postgres

slonik <<_EOF_
cluster name = $CLUSTER;

node 1 admin conninfo = 'dbname=$DB1 host=$H1 user=$U';
node 2 admin conninfo = 'dbname=$DB2 host=$H2 user=$U';

subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
_EOF_
Much like Listing 1, subscribe.sh starts by defining the cluster namespace and the connection information for the two nodes. The subscribe set command then causes the first node to start replicating the set, containing a single table and sequence, to the second node through the slon processes.
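For the subscription to proceed, a slon daemon must be running for each node. As a minimal sketch, reusing the values from Listing 2, the two processes could be started like this:

% slon sql_cluster "dbname=contactdb host=localhost user=postgres" &
% slon sql_cluster "dbname=contactdb_slave host=localhost user=postgres" &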
Once the subscribe.sh script has been executed, connect to the contactdb_slave database and examine the content of the contact table. You should see that the information was replicated correctly:
% psql -U contactuser contactdb_slave
contactdb_slave=> select * from contact;

 cid |  name  |   address    |  phonenumber
-----+--------+--------------+----------------
   1 | Joe    | 1 Foo Street | (592) 471-8271
   2 | Robert | 4 Bar Roard  | (515) 821-3831
Now, connect to the contactdb database and insert a row:
% psql -U contactuser contactdb
contactdb=> begin;
contactdb=> insert into contact (cid, name, address, phonenumber)
            values ((select nextval('contact_seq')),
                    'William', '81 Zot Street', '(918) 817-6381');
contactdb=> commit;
If you examine the content of the contact table in the contactdb_slave database once more, you will notice that the row was replicated. Now, delete a row from the contactdb database:
contactdb=> begin;
contactdb=> delete from contact where cid = 2;
contactdb=> commit;
Again, by examining the content of the contact table of the contactdb_slave database, you will notice that the row was removed from the slave node correctly.
Instead of comparing the information in contactdb and contactdb_slave manually, we can easily automate this process with a simple script, as shown in Listing 3. Such a script could be executed regularly to ensure that all nodes are in sync, notifying the administrator if that is no longer the case.
Listing 3. compare.sh
#!/bin/sh

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb_slave
H1=localhost
H2=localhost
U=postgres

echo -n "Comparing the databases..."

psql -U $U -h $H1 $DB1 >dump.tmp.1.$$ <<_EOF_
  select 'contact'::text, cid, name, address, phonenumber
    from contact order by cid;
_EOF_

psql -U $U -h $H2 $DB2 >dump.tmp.2.$$ <<_EOF_
  select 'contact'::text, cid, name, address, phonenumber
    from contact order by cid;
_EOF_

if diff dump.tmp.1.$$ dump.tmp.2.$$ >dump.diff ; then
  echo -e "\nSuccess! Databases are identical."
  rm dump.diff
else
  echo -e "\nFAILED - see dump.diff."
fi

rm dump.tmp.?.$$
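Such a script lends itself to periodic execution from cron. As a sketch, the crontab entry below relies on cron's default behavior of mailing a job's output to the crontab's owner; the installation path and ten-minute interval are only assumptions:

# Hypothetical crontab entry: compare master and slave
# every ten minutes.
*/10 * * * * /usr/local/bin/compare.sh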
Although replicating a database on the same system isn't of much use, this example shows how easy it is to do. If you want to experiment with a replication system on nodes located on separate computers, you simply would modify the DB2, H1 and H2 environment variables in Listings 1 through 3. Normally, DB2 would be set to the same value as DB1, so an application always refers to the same database name. The host environment variables would need to be set to the fully qualified domain names of the two nodes. You also would need to make sure the slon processes are running on both computers. Finally, it is good practice to synchronize the clocks of all nodes using ntpd or something similar.
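For example, reusing the host names that appear later in this article, the variable block at the top of each script would become something like this:

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb             # same database name on both nodes
H1=master.example.com     # fully qualified domain name of the master
H2=slave.example.com      # fully qualified domain name of the slave
U=postgres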
Later, if you want to add more tables or sequences to the initial replication set, you can create a new set and use the merge set Slonik command. Alternatively, you can use the set move table and set move sequence commands to split the set. Refer to the Slonik command summary for more information on these commands.
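As a minimal sketch of the first approach, the script below creates a second set holding a hypothetical public.fax table, subscribes it and merges it into set 1; the set and table IDs are assumptions:

#!/bin/sh

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb_slave
H1=localhost
H2=localhost
U=postgres

slonik <<_EOF_
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=$DB1 host=$H1 user=$U';
node 2 admin conninfo = 'dbname=$DB2 host=$H2 user=$U';

create set (id = 2, origin = 1, comment = 'added tables');
set add table (set id = 2, origin = 1, id = 2,
    fully qualified name = 'public.fax');

subscribe set (id = 2, provider = 1, receiver = 2, forward = yes);
merge set (id = 1, add id = 2, origin = 1);
_EOF_

Note that the merge can succeed only once the subscription of set 2 is active, so in practice the script may need to wait between the last two commands.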
In case of a failure of the master node, due to an operating system crash or hardware problem, for example, Slony-I does not provide any automatic capability to promote a slave node to become the master. This is problematic because human intervention is required to promote a node, and applications demanding highly available database services should not depend on this. Luckily, plenty of solutions are available that can be combined with Slony-I to offer automatic failover capabilities. The Linux-HA Heartbeat program is one of them.
Consider Figure 2, which shows a master and slave node connected using an Ethernet and a serial link. In this configuration, Heartbeat is used to monitor the nodes' availability through those two links. The application makes use of the database services by connecting to PostgreSQL through an IP alias, which is activated on the master node by Heartbeat. If Heartbeat detects that the master node has failed, it brings the IP alias up on the slave node and executes the slonik script to promote the slave as the new master.
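With the classic Heartbeat v1 configuration style, this amounts to a single resource line shared by both nodes; the IP address and resource-script name below are assumptions:

# /etc/ha.d/haresources -- identical on both nodes.
# Resources: the preferred owner, a floating IP alias, and a
# resource script (looked up in /etc/ha.d/resource.d/) that
# Heartbeat runs on takeover.
master.example.com 192.168.0.100 slony_failover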
The script is relatively simple. Listing 4 shows the content of the script that would be used to promote the slave node, running on slave.example.com, so that it starts offering all the database services that master.example.com offered.
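The heart of such a promotion script is the Slonik failover command. A minimal sketch, assuming the cluster and connection values used throughout this article:

#!/bin/sh

CLUSTER=sql_cluster
DB=contactdb
U=postgres

slonik <<_EOF_
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=$DB host=master.example.com user=$U';
node 2 admin conninfo = 'dbname=$DB host=slave.example.com user=$U';

failover (id = 1, backup node = 2);
_EOF_

The failover command abandons node 1, makes node 2 the origin of the set and redirects any subscribers to it; the failed node must later be rebuilt and re-subscribed before it can rejoin the cluster.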