devfs for Management and Administration
Traditionally in a UNIX-style operating system, processes access devices via the /dev directory. Within this directory are hundreds of device nodes, allocated as either block or character devices, each with a major and minor number that corresponds to its device driver and its device instance. Whenever the kernel gains support for a new device, you have to create a node in /dev that corresponds to the new device so processes are able to access it. This chore can become tedious and makes life as a system administrator a little more complex.
A number of other problems come with having /dev on disk. For one, managing permissions and unused device nodes can be time-consuming and overly complex. Another major problem is that you cannot mount a read-only root filesystem that is practical for everyday use. There are also the issues of /dev growth and of using a non-UNIX root filesystem.
devfs provides a solution to these problems and also gives system administrators a new tool for checking which devices are available. devfs is written as a virtual filesystem driver; it keeps track of the device drivers currently registered while automatically creating and removing the corresponding device nodes in /dev. devfs comprises three different parts, but as a system administrator you will interact with only two of them. The first part is the kernel VFS module. The part of devfs that you will not deal with as a system administrator is the kernel API that devfs provides to drivers.
Each driver in use must make devfs_register() and devfs_unregister() calls to work with devfs. If you would like more details on how to write drivers that function with devfs, check out Richard Gooch's web site in the Resources section. The final piece of the devfs puzzle is called devfsd. devfsd is a system dæmon that does all the ugly tasks, including managing permissions, maintaining symbolic links within /dev and a host of other things that go beyond the scope of this article.
Managing /dev can be a big pain in the rump. For starters, on a typical system there are over 1,200 device nodes, and out of those, only a couple hundred are ever used. This results in an extremely messy /dev directory. How many of you out there actually go through and clean up all the entries in /dev that correspond to hardware you don't have and probably never will have? Not many, I bet. Skipping the cleanup does not seem to be too big a deal--device nodes do not take up a lot of space, and we all have multigigabyte hard drives. But it is somewhat problematic all the same, because as /dev grows, device lookup time increases.
With devfs in place, you now have an intelligent device management scheme that creates and removes nodes in /dev when you load and unload the kernel device driver modules. This is taken care of at the kernel level, so as a system administrator you do not have to worry about a thing. Having dynamic device node creation also allows you to use /dev as an administration utility to see whether your hardware is installed properly.
Yet another problem with having /dev on a disk is you cannot mount a practical read-only root filesystem. When you are working with embedded systems, this factor can be crucial. With /dev on a disk, if you were to mount the root filesystem as read-only, you would not be able to change tty ownerships. This results in a slew of problems and security issues. The other problem relating to this is having a non-UNIX root filesystem, because the majority of non-UNIX filesystems do not support character and block special files or symbolic links. devfs fixes both of these problems because /dev is now mounted as a virtual filesystem in read-write mode and is not dependent on the state of the root filesystem.
Getting devfs up and running on your system is a fairly easy task and can be completed on a Saturday afternoon. The steps involved are rebuilding the kernel, installing the new kernel, building devfsd, installing devfsd, configuring devfsd and rebooting. If you are unfamiliar with rebuilding your kernel, you should either wait until your distribution is shipping kernel packages with devfs support or check out the Linux Kernel HOWTO (see Resources).
The first step in installing devfs is ensuring that your kernel has devfs support built in. You can do a quick check to see if your currently running kernel has devfs support by executing:
grep devfs /proc/filesystems
If your kernel has devfs support, you should see the filesystem listed in the output, similar to:

nodev   devfs
If you do not have devfs support in your kernel, you are going to need to build a new kernel, specifically kernel 2.4.10 or greater. I would recommend getting the latest kernel source from www.kernel.org; for this article I was using 2.4.18. Configure the kernel to your liking with your favorite configuration method and add the following options:
CONFIG_EXPERIMENTAL
CONFIG_DEVFS_FS
CONFIG_DEVFS_MOUNT
You also should disable devpts, since devfs now takes care of this process. (Various users have reported that leaving devpts enabled creates serious operational problems with devfs.) Install your spiffy new kernel, and do not forget to make a backup of your old one in case something goes awry.
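The build itself follows the usual 2.4 kernel procedure. A minimal sketch, assuming your source tree lives in /usr/src/linux, you are on x86 and you boot with LILO (the paths and the boot-loader step are assumptions; adjust them for your setup):

```shell
cd /usr/src/linux
make menuconfig            # enable the devfs options listed above
make dep bzImage modules
make modules_install
cp /boot/vmlinuz /boot/vmlinuz.old       # keep a fallback kernel
cp arch/i386/boot/bzImage /boot/vmlinuz
lilo                       # re-run the boot loader so it sees the new image
```

Keeping the old image around means you can still boot if the devfs kernel misbehaves.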
You are now ready to install devfsd. devfsd is the portion of devfs that manages permissions, symbolic links, compatibility issues and other miscellaneous things. While it is not required for you to run devfsd, it is highly recommended; if you do not run it, all of your software must be configured to point to the new locations in /dev. Go out and download the latest version of devfsd from Richard Gooch's web site. As of this writing, the latest version is 1.3.25. Compiling and installing devfsd is pretty typical. The only minor change is if you do not keep your kernel in /usr/src/linux, you should set the environment variable KERNEL_DIR to point to your kernel source directory. Extract and install devfsd:
tar -xzvf devfsd-v1.3.25.tar.gz
cd devfsd/
make && make install
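If your kernel source is somewhere other than /usr/src/linux, set KERNEL_DIR before running make. A sketch, with a hypothetical source path:

```shell
export KERNEL_DIR=/usr/local/src/linux-2.4.18   # hypothetical location of your kernel tree
make && make install
```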
After installing devfsd you will need to create a startup script and modify the devfsd.conf file to your liking. The startup script for devfsd should run before anything else, so any dæmon or process that accesses /dev in the old way will still run. See Listing 1 for a basic startup script. Installing the startup script is going to be different for individual distributions. For Debian GNU/Linux, copy the devfsd script to /etc/init.d and create a symbolic link to /etc/rcS.d/S01devfsd, so devfsd always gets started. You also will want to link shutdown scripts to /etc/rc1.d/K99devfsd and /etc/rc6.d/K99devfsd. Refer to your distribution's documentation on how and where to place new startup scripts.
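On Debian GNU/Linux, those steps amount to roughly the following, run as root (the rc directory layout is Debian's; other distributions differ):

```shell
cp devfsd /etc/init.d/devfsd
chmod 755 /etc/init.d/devfsd
ln -s ../init.d/devfsd /etc/rcS.d/S01devfsd   # start devfsd before anything else
ln -s ../init.d/devfsd /etc/rc1.d/K99devfsd   # stop it when entering single-user mode
ln -s ../init.d/devfsd /etc/rc6.d/K99devfsd   # stop it on reboot
```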
The next step to getting devfs up and running on your system is to configure devfsd. The configuration file for devfsd is located at /etc/devfsd.conf (see Listing 2). This file allows you to tweak devfsd to do almost anything relating to /dev.
I like to keep my devfsd configuration pretty simple and include only compatibility entries, module auto-loading and /dev permissions. The following two lines in devfsd.conf create compatibility symlinks to the old device names, so all your currently configured software still works:
REGISTER    .*    MKOLDCOMPAT
UNREGISTER  .*    RMOLDCOMPAT
If you want to enable the module auto-loading functionality, add this line:
LOOKUP .* MODLOAD
This brings us to something that frustrates a lot of people when they first start using devfs. How do I get my permissions to come back after a reboot? This question has many answers: you can create a tarball of all the changed inodes prior to shutdown and then untar them during startup; you can store your permissions on a disk-based /dev and have devfsd copy and save them when starting up and shutting down; or you could simply add PERMISSIONS entries to your /etc/devfsd.conf file. Managing the device permissions for devfs in devfsd.conf via PERMISSIONS entries is great--you can have one entry for an entire group of devices. The following are some basic permissions I set up on my workstation:
REGISTER  ^cdroms/.*  PERMISSIONS  root.cdrom  0660
REGISTER  ^pty/s.*    PERMISSIONS  root.tty    0600
REGISTER  ^sound/.*   PERMISSIONS  root.audio  0660
REGISTER  ^tts/.*     PERMISSIONS  root.dip    0660
What those entries do is fairly simple. All the devices found under /dev/cdroms now have root as the owning user and cdrom as the owning group, with 0660 permissions, or u+rw g+rw o-rwx. I find this to be the easiest way to manage permissions. Using devfsd to manage permissions also saves you from doing a quick chmod on a device when you first install it, telling yourself that you will set up the permissions correctly later and, of course, quickly forgetting that promise.
So you've done all the prerequisite steps: configured the kernel, installed it, added the devfsd startup script to your init directories and configured devfsd. Now it's time to reboot and check out your fancy new devfs. After rebooting, you should be up and running with /dev mounted as a devfs filesystem. You can double check this by executing cat /proc/mounts. You should see a line that says none /dev devfs rw 0 0; if you don't see it, something went wrong. Double check that you configured everything properly, and look at your logs for any errors relating to devfs or devfsd.
Now that you have devfs up and running, poke around in /dev and check out how the symbolic links are set up from the different devices. One of the major changes is how the disk nodes are set up. If you look under /dev/discs, you'll see an entry for each physical hard disk installed on your system. If you look at the destination of the symbolic links, you will notice they point to an entry in either /dev/ide or /dev/scsi, depending on the type of interface you are using.
The entries in /dev/ide and /dev/scsi are fairly straightforward. The first level is hostX; these entries correspond to the IDE and SCSI controllers you have installed on your system. For example, if you have an on-board IDE controller and a PCI card with an IDE controller, host0 will point to the on-board controller and host1 will point to the PCI card's controller. The next level is busX for IDE devices, which corresponds to the primary and secondary controllers. The next level is targetX. This typically corresponds to the physical drive itself. With IDE, target0 points to your master drive and target1 to your slave drive. After the target you have lunX entries. IDE devices have only one LUN, so this will always be lun0. But if you have a SCSI system, the devices can have multiple LUNs, as with a CD changer.
Now you are at a level that actually points to a device. Within the lunX directory are the actual nodes that point to the disk, its partitions or another type of device. For example, a hard disk with four partitions will have five entries within the lunX directory. These will be disk, part1, part2, part3 and part4. If this is the first disk on the system, disk would correspond to the old /dev/hda, part1 would correspond to the old /dev/hda1 and so on. You can see that devfs manages devices with a logical name-space and is quite easy to navigate.
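For a concrete picture, here is roughly what you might see on a machine whose first IDE disk has four partitions (a hypothetical listing; your controllers and disks will differ):

```shell
ls /dev/ide/host0/bus0/target0/lun0/
# typically: disk  part1  part2  part3  part4
ls -l /dev/discs/
# typically: disc0 -> ../ide/host0/bus0/target0/lun0
```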
Now you might want to start editing your various configuration files to point to the new device locations instead of the old style entries. A good starting place is /etc/fstab. From what you learned above, you can deduce that your old /dev/hda6 is now located at /dev/discs/disc0/part6. I find that to be the easiest name-space to use, but you also could point it to the IDE name-space at /dev/ide/host0/bus0/target0/lun0/part6. Either way will get the job done, but by using the generic /dev/discs and /dev/cdroms you won't have to modify a bunch of configuration files if you move from IDE to SCSI down the road.
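An /etc/fstab entry rewritten for the new names might look like this (a sketch; the mount point and options are placeholders for your own):

```
/dev/discs/disc0/part6  /home  ext2  defaults  0  2
```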
You have seen how devfs provides a unique solution to the problems with a disk-based /dev. While it may not be the perfect solution, devfs and devfsd do solve a variety of problems and also provide you with an administration utility to see the current state of devices. By combining an intelligent device management scheme with a powerful dæmon to manage permissions and symbolic links, you can keep your system's /dev structure lean, clean and mean.
email: [email protected]