Linux Clustering with Ruby Queue: Small Is Beautiful
Here, I walk through the actual sequence of rq commands used to set up an instant Linux cluster made up of four nodes. The nodes we are going to use are called onefish, twofish, redfish and bluefish. Each host is identified in its prompt, below. In my home directory on each of the hosts I have the symbolic link ~/nfs pointing at a common NFS directory.
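The shared directory can live anywhere; what matters is that every node sees the same data through the same path. Assuming the NFS export is mounted at /mnt/nfs on each machine (the mountpoint here is only an example), the link is created with:

onefish:~ > ln -s /mnt/nfs ~/nfs

The same link is made on twofish, redfish and bluefish.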
The first thing we have to do is initialize the queue:
redfish:~/nfs > rq queue create
created <~/nfs/queue>
Next, we start feeder daemons on all four hosts:
onefish:~/nfs > rq queue feed --daemon -l=~/rq.log
twofish:~/nfs > rq queue feed --daemon -l=~/rq.log
redfish:~/nfs > rq queue feed --daemon -l=~/rq.log
bluefish:~/nfs > rq queue feed --daemon -l=~/rq.log
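If you want to confirm that a feeder actually came up on a node, nothing rq-specific is needed; the log file we passed with -l and an ordinary process listing both tell the story:

bluefish:~/nfs > tail ~/rq.log
bluefish:~/nfs > ps -ef | grep 'rq.*feed' | grep -v grep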
In practice, you would not want to start feeders by hand on each node, so rq supports being kept alive by way of a crontab entry. When rq runs in daemon mode, it acquires a lockfile that effectively limits it to one feeding process per host, per queue. Starting a feeder daemon simply fails if another daemon already is feeding on the same queue. Thus, a crontab entry like this:
*/15 * * * * rq queue feed --daemon --log=log
checks every 15 minutes to see if a daemon is running, and it starts a daemon if and only if one is not running already. In this way, an ordinary user can set up a process that is running at all times, even after a machine reboot.
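One way to install that entry without opening an editor is to append it to whatever crontab already exists; this is plain cron usage rather than anything provided by rq:

onefish:~ > ( crontab -l 2>/dev/null; echo '*/15 * * * * rq queue feed --daemon --log=log' ) | crontab -

Repeat this on each node that should keep a feeder alive.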
Jobs can be submitted from the command line, from an input file or, in Linux tradition, from standard input as part of a process pipeline. When using an input file or stdin, the format is either YAML (such as that produced as the output of other rq commands) or a simple list of jobs, one job per line; the format is auto-detected. Any host that sees the queue can submit jobs to it.
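For instance, a single job can be handed to rq directly as a command-line argument; the job shown here is only an illustration:

onefish:~/nfs > rq queue submit 'echo hello && sleep 42'

More often, though, a whole batch is submitted at once. Here, four jobs are piped in from a file: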
onefish:~/nfs > cat joblist
echo 'job 0' && sleep 0
echo 'job 1' && sleep 1
echo 'job 2' && sleep 2
echo 'job 3' && sleep 3

onefish:~/nfs > cat joblist | rq queue submit
- jid: 1
  priority: 0
  state: pending
  submitted: 2004-11-12 20:14:13.360397
  started:
  finished:
  elapsed:
  submitter: onefish
  runner:
  pid:
  exit_status:
  tag:
  command: echo 'job 0' && sleep 0
- jid: 2
  priority: 0
  state: pending
  submitted: 2004-11-12 20:14:13.360397
  started:
  finished:
  elapsed:
  submitter: onefish
  runner:
  pid:
  exit_status:
  tag:
  command: echo 'job 1' && sleep 1
- jid: 3
  priority: 0
  state: pending
  submitted: 2004-11-12 20:14:13.360397
  started:
  finished:
  elapsed:
  submitter: onefish
  runner:
  pid:
  exit_status:
  tag:
  command: echo 'job 2' && sleep 2
- jid: 4
  priority: 0
  state: pending
  submitted: 2004-11-12 20:14:13.360397
  started:
  finished:
  elapsed:
  submitter: onefish
  runner:
  pid:
  exit_status:
  tag:
  command: echo 'job 3' && sleep 3
The output of the submit command shows, in YAML format, all of the information about each of the jobs. When jobs are complete, all of the fields are filled in. At this point, we check the status of the queue:
redfish:~/nfs > rq queue status
---
pending : 2
running : 2
finished : 0
dead : 0
From this, we see that two of the jobs already have been picked up by nodes and are being run. We can find out which nodes are running our jobs like this:
onefish:~/nfs > rq queue list running | egrep 'jid|runner'
  jid: 1
  runner: redfish
  jid: 2
  runner: bluefish
The record for a finished job remains in the queue until it is deleted, because a user generally wants to collect this information. At this point, we expect all jobs to be complete, so we check each one's exit status:
bluefish:~/nfs > rq queue list finished | egrep 'jid|command|exit_status'
  jid: 1
  exit_status: 0
  command: echo 'job 0' && sleep 0
  jid: 2
  exit_status: 0
  command: echo 'job 1' && sleep 1
  jid: 3
  exit_status: 0
  command: echo 'job 2' && sleep 2
  jid: 4
  exit_status: 0
  command: echo 'job 3' && sleep 3
All of the commands have finished successfully. We now can delete any successfully completed job from the queue:
twofish:~/nfs > rq queue query exit_status=0 | rq queue delete
---
- 1
- 2
- 3
- 4
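As a final check, the status command from earlier can be run again; with all four jobs finished and deleted, every count should be back to zero:

twofish:~/nfs > rq queue status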
Ruby Queue can perform quite a few other useful operations. For a complete description, type rq help.
Making the choice to roll your own always is a tough one, because it breaks Programmer's Rule Number 42, which clearly states, "Every problem has been solved. It is Open Source. And it is the first link on Google."
Having a tool such as Ruby is critical when you decide to break Rule Number 42, and the fact that a project such as Ruby Queue can be written in 3,292 lines of code is testament to this fact. With only a few major enhancements planned, it is likely that this code line total will not increase much as the code base is refined and improved. The goals of rq remain simplicity and ease of use.
Ruby Queue set out to lower the barrier scientists had to overcome in order to realize the power of Linux clusters. Providing a simple and easy-to-understand tool that harnesses the power of many CPUs allows them to shift their focus away from the mundane details of complicated distributed computing systems and back to the task of actually doing science. Sometimes small is beautiful.