Database Replication with Slony-I
Listing 2. subscribe.sh
#!/bin/sh

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb_slave
H1=localhost
H2=localhost
U=postgres

slonik <<_EOF_
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=$DB1 host=$H1 user=$U';
node 2 admin conninfo = 'dbname=$DB2 host=$H2 user=$U';
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
_EOF_
Much like Listing 1, subscribe.sh starts by defining the cluster namespace and the connection information for the two nodes. The subscribe set command then causes the first node to start replicating the set, containing a single table and sequence, to the second node using the slon processes.
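As a reminder, the slon daemons that perform the actual replication were started earlier in the article; their invocation takes the cluster name and a libpq-style conninfo string, one process per node:

```shell
# Start one slon daemon per node (both on localhost in this example);
# the conninfo strings mirror those used in the slonik scripts.
slon sql_cluster "dbname=contactdb host=localhost user=postgres" &
slon sql_cluster "dbname=contactdb_slave host=localhost user=postgres" &
```

On separate machines, each host would run only the slon process for its own node.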
Once the subscribe.sh script has been executed, connect to the contactdb_slave database and examine the contents of the contact table; you should see that the information was replicated correctly:
% psql -U contactuser contactdb_slave
contactdb_slave=> select * from contact;
 cid |  name  |   address    |  phonenumber
-----+--------+--------------+----------------
   1 | Joe    | 1 Foo Street | (592) 471-8271
   2 | Robert | 4 Bar Roard  | (515) 821-3831
Now, connect to the contactdb database and insert a row:
% psql -U contactuser contactdb
contactdb=> begin;
contactdb=> insert into contact (cid, name, address, phonenumber)
contactdb-> values ((select nextval('contact_seq')),
contactdb->        'William', '81 Zot Street', '(918) 817-6381');
contactdb=> commit;
If you examine the contents of the contact table in the contactdb_slave database once more, you will notice that the row was replicated. Now, delete a row from the contactdb database:
contactdb=> begin;
contactdb=> delete from contact where cid = 2;
contactdb=> commit;
Again, examining the contact table in the contactdb_slave database shows that the row was removed from the slave node correctly.
Instead of comparing the information in contactdb and contactdb_slave manually, we can easily automate this process with a simple script, as shown in Listing 3. Such a script could be executed regularly to ensure that all nodes are in sync, notifying the administrator if that is no longer the case.
Listing 3. compare.sh
#!/bin/sh

CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb_slave
H1=localhost
H2=localhost
U=postgres

echo -n "Comparing the databases..."

psql -U $U -h $H1 $DB1 >dump.tmp.1.$$ <<_EOF_
select 'contact'::text, cid, name, address, phonenumber
from contact order by cid;
_EOF_

psql -U $U -h $H2 $DB2 >dump.tmp.2.$$ <<_EOF_
select 'contact'::text, cid, name, address, phonenumber
from contact order by cid;
_EOF_

if diff dump.tmp.1.$$ dump.tmp.2.$$ >dump.diff ; then
    echo -e "\nSuccess! Databases are identical."
    rm dump.diff
else
    echo -e "\nFAILED - see dump.diff."
fi

rm dump.tmp.?.$$
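To execute such a comparison regularly, a crontab entry is one simple option. A hypothetical entry (the installation path of compare.sh is an assumption) might look like this; because cron mails any output it receives, filtering for the FAILED line means the administrator is notified only when the nodes diverge:

```shell
# Hypothetical crontab entry: compare the nodes hourly; cron mails
# output only when compare.sh reports a FAILED comparison.
0 * * * *  /usr/local/bin/compare.sh | grep FAILED
```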
Although replicating a database on the same system isn't of much use, this example shows how easy it is to do. If you want to experiment with a replication system on nodes located on separate computers, simply modify the DB2, H1 and H2 environment variables in Listings 1 to 3. Normally, DB2 would be set to the same value as DB1, so an application always refers to the same database name. The host environment variables would need to be set to the fully qualified domain names of the two nodes. You also would need to make sure that the slon processes are running on both computers. Finally, it is good practice to synchronize the clocks of all nodes using ntpd or something similar.
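For a two-host setup, the variable block at the top of each listing might look like the following sketch; the host names are hypothetical placeholders for your own nodes' fully qualified domain names:

```shell
#!/bin/sh

# Hypothetical two-host values for the variables used in Listings 1 to 3.
CLUSTER=sql_cluster
DB1=contactdb
DB2=contactdb               # same database name on both nodes
H1=master.example.com       # replace with your master's FQDN
H2=slave.example.com        # replace with your slave's FQDN
U=postgres
```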
Later, if you want to add more tables or sequences to the initial replication set, you can create a new set and use the merge set slonik command. Alternatively, you can use the set move table and set move sequence commands to split the set. Refer to the Slonik Command Summary for more information.
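For instance, adding a table might proceed by creating a temporary set, subscribing it, and then merging it into set 1. The sketch below assumes the node numbering from the earlier listings; the public.billing table and its id are hypothetical, and the merge can be issued only once the subscription of set 2 is active:

```shell
#!/bin/sh

CLUSTER=sql_cluster

slonik <<_EOF_
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=contactdb host=localhost user=postgres';
node 2 admin conninfo = 'dbname=contactdb_slave host=localhost user=postgres';

# Create a temporary set holding only the new (hypothetical) table.
create set (id = 2, origin = 1, comment = 'additional tables');
set add table (set id = 2, origin = 1, id = 2,
               fully qualified name = 'public.billing');

# Subscribe it to the slave, then fold it into the original set.
subscribe set (id = 2, provider = 1, receiver = 2, forward = yes);
merge set (id = 1, add id = 2, origin = 1);
_EOF_
```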
In case of a failure of the master node, due to an operating system crash or hardware problem, for example, Slony-I does not provide any automatic capability to promote a slave node to become a master. This is problematic, because human intervention is required to promote a node, and applications demanding highly available database services should not depend on this. Luckily, plenty of solutions are available that can be combined with Slony-I to offer automatic failover capabilities. The Linux-HA Heartbeat program is one of them.
Consider Figure 2, which shows a master and slave node connected together using an Ethernet and serial link. In this configuration, Heartbeat monitors each node's availability through those two links. The application makes use of the database services by connecting to PostgreSQL through an IP alias, which is activated on the master node by Heartbeat. If Heartbeat detects that the master node has failed, it brings the IP alias up on the slave node and executes the slonik script to promote the slave as the new master.
The script is relatively simple. Listing 4 shows the content of the script that would be used to promote a slave node, running on slave.example.com, so that it starts offering all the database services that master.example.com offered.
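In outline, the promotion boils down to slonik's failover command, which abandons the failed node and makes the surviving node the new origin of the set. A minimal sketch, assuming the node numbering from the earlier listings and a hypothetical host name, might look like this:

```shell
#!/bin/sh

CLUSTER=sql_cluster

# Only the surviving node (2) needs a conninfo line; node 1 is down.
slonik <<_EOF_
cluster name = $CLUSTER;
node 2 admin conninfo = 'dbname=contactdb host=slave.example.com user=postgres';

# Make node 2 the new origin of everything node 1 provided,
# then remove the failed node from the cluster configuration.
failover (id = 1, backup node = 2);
drop node (id = 1, event node = 2);
_EOF_
```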