Tech Tips

Make KPilot work with Ubuntu, watch processes, lock files and create passwords in PHP.
Locking Files

This tip comes courtesy of Linux Journal columnist Dave Taylor and No Starch Press.

Any script that reads or appends to a shared data file, such as a log file, needs a reliable way to lock the file so that other instances of the script don't step on its updates. The idea is that the existence of a separate lockfile serves as a semaphore, an indicator that the file is in use and cannot be touched. The requesting script waits and tries again, hoping that the file will be freed up relatively promptly, which is signaled by the removal of its lockfile.

Lockfiles are tricky to work with, though, because many seemingly foolproof solutions fail to work properly. For example, the following is a typical attempt at solving the problem:

while [ -f $lockfile ] ; do
  sleep 1
done
touch $lockfile

Seems like it should work, doesn't it? You loop until the lockfile doesn't exist, then create it to ensure that you own the lockfile and, therefore, can modify the base file safely. If another script with the same loop sees your lock, it will spin until the lockfile vanishes. However, this doesn't in fact work, because although it seems that scripts are run without being swapped out while other processes take their turn, that's not actually true. Imagine what would happen if, right after the done in the loop just shown, but before the touch, this script were swapped out and put back in the processor queue while another script was run instead. That other script would dutifully test for the lockfile, find it missing and create its own version. Then, the script in the queue would swap back in and do a touch, with the result that the two scripts both would think they had exclusive access, which is bad.
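
One way to close that window is to make the test and the creation a single atomic step. The sketch below is not part of Taylor's script; it leans on mkdir, which either creates the lock directory and succeeds or fails because the directory already exists, so no other process can sneak in between the check and the claim (the .d suffix on the lock name is purely illustrative):

while ! mkdir "$lockfile.d" 2>/dev/null ; do
  sleep 1                  # another process holds the lock; wait and retry
done

# ... read or append to the shared file safely here ...

rmdir "$lockfile.d"        # release the lock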

Fortunately, Stephen van den Berg and Philip Guenther, authors of the popular procmail e-mail filtering program, include a lockfile command that lets you safely and reliably work with lockfiles in shell scripts.

Many UNIX and UNIX-like systems, including Linux and Mac OS X, have lockfile already installed. You can check whether your system has lockfile simply by typing man 1 lockfile. If you get a man page, you're in luck! If not, download the procmail package from www.procmail.org, and install the lockfile command on your system. The script in this section assumes you have the lockfile command.
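
If you'd rather have a script do the checking, a quick test along these lines (a sketch, not part of the excerpt) also works:

$ command -v lockfile >/dev/null 2>&1 && echo "lockfile found" || echo "lockfile missing; install procmail"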

The Code:

#!/bin/sh

# filelock - A flexible file-locking mechanism.

retries="10"            # default number of retries
action="lock"           # default action
nullcmd="/bin/true"     # null command for lockfile

while getopts "lur:" opt; do
  case $opt in
    l ) action="lock"      ;;
    u ) action="unlock"    ;;
    r ) retries="$OPTARG"  ;;
  esac
done
shift $(($OPTIND - 1))

if [ $# -eq 0 ] ; then
  cat << EOF >&2
Usage: $0 [-l|-u] [-r retries] lockfilename
Where -l requests a lock (the default), -u requests
an unlock, -r X specifies a maximum number of retries
before it fails (default = $retries).
EOF
  exit 1
fi

# Ascertain whether we have the lockfile system app

if [ -z "$(which lockfile | grep -v '^no ')" ] ; then
  echo "$0 failed: 'lockfile' utility not found in PATH." >&2
  exit 1
fi

if [ "$action" = "lock" ] ; then
  if ! lockfile -1 -r $retries "$1" 2> /dev/null ; then
    echo "$0: Failed: Couldn't create lockfile in time" >&2
    exit 1
  fi
else    # action = unlock
  if [ ! -f "$1" ] ; then
    echo "$0: Warning: lockfile $1 doesn't exist to unlock" >&2
    exit 1
  fi
  rm -f "$1"
fi

exit 0

Running the Script:

Although the filelock script isn't one that you'd ordinarily use by itself, you can test it by opening two terminal windows. To create a lock, simply specify the name of the file you want to lock as an argument to filelock. To remove the lock, add the -u option.

The Results:

First, create a locked file:

$ filelock /tmp/exclusive.lck
$ ls -l /tmp/exclusive.lck
-r--r--r--  1 taylor  wheel  1 Mar 21 15:35 /tmp/exclusive.lck

The second time you attempt to lock the file, filelock tries the default number of times (ten) and then fails, as follows:

$ filelock /tmp/exclusive.lck
filelock: Failed: Couldn't create lockfile in time

When the first process is done with the file, you can release the lock:

$ filelock -u /tmp/exclusive.lck

To see how the filelock script works with two terminals, run the unlock command in one window while the other window spins trying to establish its own exclusive lock.

Hacking the Script:

Because this script relies on the existence of a lockfile as proof that the lock is still enforced, it would be useful to have an additional parameter specifying, say, the longest length of time for which a lock should be valid. If the lockfile routine times out, the last-accessed time of the existing lockfile could then be checked, and if it is older than the value of this parameter, it safely can be deleted as a stray, perhaps with a warning message, perhaps not.
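
A rough sketch of that idea, not part of the published script, using find's -amin test against the lockfile's last-accessed time (the 10-minute cutoff is arbitrary, and $1 is the lockfile name, as in the script above):

maxminutes=10     # hypothetical maximum age, in minutes, for a lock to be trusted
if [ -n "$(find "$1" -amin +$maxminutes 2>/dev/null)" ] ; then
  echo "$0: Warning: removing stale lockfile $1" >&2
  rm -f "$1"      # old enough to be a stray; clear it and retry the lock
fi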

This is unlikely to affect you, but lockfile doesn't work with NFS-mounted disks. In fact, a reliable file locking mechanism on an NFS-mounted disk is quite complex. A better strategy that sidesteps the problem entirely is to create lockfiles only on local disks.

Excerpted with permission from the book Wicked Cool Shell Scripts: 101 Scripts for Linux, Mac OS X, and UNIX Systems by Dave Taylor, published by No Starch Press (www.nostarch.com/wcss.htm).
