Dynamic Kernels: Discovery

This article, the second of five, introduces part of the actual code needed to create a custom module implementing a character device driver. It describes the code for module initialization and cleanup, as well as for the open and close system calls.
Using Minor Numbers

In the last article I introduced the idea of minor device numbers, and it is now high time to expand on the topic.

If your driver manages multiple devices, or a single device but in different ways, you'll create several nodes in the /dev directory, each with a different minor number. When your open function gets invoked, then, you can examine the minor number of the node being opened, and take appropriate actions.

The prototypes of your open and close functions are

int skel_open (struct inode *inode,
               struct file *filp);
void skel_close (struct inode *inode,
                 struct file *filp);

and the minor number (an unsigned value, currently 8 bits) is available as MINOR(inode->i_rdev). The MINOR macro and the relevant structures are defined within <linux/fs.h>, which in turn is included in <linux/sched.h>.
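
For instance, a bare-bones open() that only validates the minor could look like the following sketch (the SKEL_MAX_MINORS bound is invented here and is not part of the real listing):

#include <linux/fs.h>      /* struct inode, struct file, MINOR() */
#include <linux/errno.h>   /* ENODEV and friends */

#define SKEL_MAX_MINORS 32 /* hypothetical limit, for this sketch only */

int skel_open (struct inode *inode, struct file *filp)
{
    unsigned int minor = MINOR(inode->i_rdev);

    if (minor >= SKEL_MAX_MINORS)
        return -ENODEV;    /* no such device */

    /* per-minor setup would go here */
    return 0;
}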

Our skel code (Listing 3) will split the minor number in order to manage both multiple boards (using four bits of the minor), and multiple modes (using the remaining four bits). To keep things simple we'll only write code for two boards and two modes. The following macros are used:

#define SKEL_BOARD(dev) ((MINOR(dev)>>4)&0x0F)
#define SKEL_MODE(dev)  (MINOR(dev)&0x0F)

The nodes will be created with the following commands (within the skel_load script, see last month's article):

mknod skel0    c $major  0
mknod skel0raw c $major  1
mknod skel1    c $major 16
mknod skel1raw c $major 17
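
As a quick cross-check (this table is not part of the listing), here is how the four nodes decode through the macros above:

/*
 * node        minor   SKEL_BOARD   SKEL_MODE
 * skel0         0         0            0     (board 0, cooked)
 * skel0raw      1         0            1     (board 0, raw)
 * skel1        16         1            0     (board 1, cooked)
 * skel1raw     17         1            1     (board 1, raw)
 */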

But let's turn back to the code. This skel_open() sorts out the minor number and folds any relevant information into the filp, in order to avoid further overhead when read() or write() is later invoked. This goal is achieved by using a Skel_Clientdata structure that embeds any filp-specific information, and by changing the pointer to your fops within the filp, namely filp->f_op.

Changing values within filp may appear to be bad practice, and it often is; it is, however, a smart idea where the file operations are concerned. The f_op field points to a static object anyway, so you can modify it lightheartedly, as long as it points to a valid structure; any subsequent operation on the file will be dispatched through the new jump table, thus avoiding a lot of conditionals. This technique is used within the kernel proper to implement the different memory-oriented devices using a single major device number.
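
In sketch form, the technique looks like this; the Skel_Clientdata layout and the two jump tables (skel_fops_cooked and skel_fops_raw) are only stand-ins for the real ones in Listing 3:

#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/malloc.h>   /* kmalloc()/kfree(); <linux/slab.h> in later kernels */
#include <linux/module.h>   /* MOD_INC_USE_COUNT */

typedef struct Skel_Clientdata {   /* stand-in for the real structure */
    int board;
    int mode;
    unsigned long flags;           /* used later, when ioctl() is introduced */
} Skel_Clientdata;

extern struct file_operations skel_fops_cooked, skel_fops_raw;

int skel_open (struct inode *inode, struct file *filp)
{
    Skel_Clientdata *data;

    data = kmalloc(sizeof(*data), GFP_KERNEL);
    if (!data)
        return -ENOMEM;

    data->board = SKEL_BOARD(inode->i_rdev);
    data->mode  = SKEL_MODE(inode->i_rdev);
    data->flags = 0;

    /* fold everything into the filp, so read()/write() won't look it up again */
    filp->private_data = data;

    /* dispatch all further operations through the mode-specific jump table */
    filp->f_op = data->mode ? &skel_fops_raw : &skel_fops_cooked;

    MOD_INC_USE_COUNT;    /* old-style (1.x/2.0) module use count */
    return 0;
}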

The complete skeletal code for open() and close() is shown in Listing 3; the flags field in the clientdata will be used when ioctl() is introduced.

Note that the close() function shown here should be referred to by both fops structures. If different close() implementations are needed, this code must be duplicated.
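
Again as a sketch, built on the same stand-in Skel_Clientdata used above (the real code is in Listing 3), such a shared close() mainly has to undo what open() did:

void skel_close (struct inode *inode, struct file *filp)
{
    Skel_Clientdata *data = (Skel_Clientdata *)filp->private_data;

    /* the same function is entered through both jump tables,
       so it must not depend on which mode the file was opened in */
    if (data)
        kfree(data);
    filp->private_data = NULL;

    MOD_DEC_USE_COUNT;    /* old-style (1.x/2.0) module use count */
}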

Multiple- or Single-open?

A device driver should be a policy-free program, because policy choices are best left to the application. Actually, the habit of separating policy and mechanism is one of the strong points of Unix. Unfortunately, the implementation of skel_open() lends itself to policy issues: is it correct to allow multiple concurrent opens? If yes, how can I handle concurrent access in the driver?

Both single-open and multiple-open have sound advantages. The code shown for skel_open() implements a third solution, somewhat in-between.

If you choose to implement a single-open device, you'll greatly simplify your code. There's no need for dynamic structures because a static one will suffice; thus, there's no risk of memory leaks caused by your driver. In addition, you can simplify your select() and data-gathering implementation because you're always sure that a single process is collecting your data. A single-open device uses a boolean variable to know whether it is busy, and returns -EBUSY when open is called a second time. You can see this simplified code in the busmouse and lp drivers within the kernel proper.
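
A minimal single-open skeleton, in the spirit of the lp and busmouse code (the variable name is invented for this sketch), might look like this:

static int skel_busy = 0;      /* boolean: is the (only) device in use? */

int skel_open (struct inode *inode, struct file *filp)
{
    if (skel_busy)
        return -EBUSY;         /* already open: refuse a second open */
    skel_busy = 1;
    MOD_INC_USE_COUNT;
    return 0;
}

void skel_close (struct inode *inode, struct file *filp)
{
    skel_busy = 0;             /* free the device for the next open() */
    MOD_DEC_USE_COUNT;
}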

A multiple-open device, on the other hand, is slightly more difficult to implement, but much more powerful for the application writer to use. For example, debugging your applications is simplified by the possibility of keeping a monitor constantly running on the device, without the need to fold it into the application proper. Similarly, you can modify the behaviour of your device while the application is running, and use several simple scripts as your development tools, instead of a complex catch-all program. Since distributed computation is common nowadays, if you allow your device to be opened several times, you are ready to support a cluster of cooperating processes using your device as an input or output peripheral.

The disadvantages of using a conventional multiple-open implementation are mainly in the increased complexity of the code. In addition to the need for dynamic structures (like the private_data already shown), you'll face the tricky points of a true stream-like implementation, together with buffer management and blocking and non-blocking read and write; but those topics will be delayed until next month's column.

At the user level, a disadvantage of multiple-open is the possibility of interference between two non-cooperating processes: this is similar to cat-ing a tty from another tty—input may be delivered to the shell or to cat, and you can't tell in advance. [For a demonstration of this, try this: start two xterms or log into two virtual consoles. On one (A), run the tty command, which tells you which tty is in use. On the other (B), type cat /dev/tty_of_A. Now go to A and type normally. Depending on several things, including which shell you use, it may work fine. However, if you run vi, you will probably see what you type coming out on B, and you will have to type ^C on B to be able to recover your session on A—ED]

A multiple-open device can be accessed by several different users, but often you won't want to allow different users to access the device concurrently. A solution to this problem is to look at the uid of the first process opening the device, and allow further opens only to the same user or to root. This is not implemented in the skel code, but it's as simple as checking current->euid, and returning -EBUSY in case of mismatch. As you see, this policy is similar to the one used for ttys: login changes the owner of ttys to the user that has just logged in.
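
This check is not in the skel code, but in sketch form (skel_count and skel_owner are invented names) it would amount to little more than this:

#include <linux/sched.h>    /* current */

static int skel_count = 0;     /* how many times the device is currently open */
static uid_t skel_owner;       /* effective uid of the first opener */

int skel_open (struct inode *inode, struct file *filp)
{
    if (skel_count && current->euid != skel_owner && current->euid != 0)
        return -EBUSY;         /* busy: owned by another user (root is allowed) */

    if (!skel_count)
        skel_owner = current->euid;   /* first open: record the owner */
    skel_count++;
    return 0;
}

/* the matching skel_close() would just decrement skel_count */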

The skel implementation shown here is a multiple-open one, with a minor addition: it ensures that the device is “brand new” when it is first opened, and it shuts the device down when it is last closed.

This implementation is particularly useful for those devices which are accessed quite rarely: if the frame grabber is used once a day, I don't want to inherit strange settings from the last time it was used. Similarly, I don't want to wear it out by continuously grabbing frames that nobody is going to use. On the other hand, startup and shutdown are lengthy tasks, especially if the IRQ has to be detected, so you might not choose this policy for your own driver. The usecount field within the hardware structure is used to turn on the device at the first open, and to turn it off on the last close. The same policy applies to the IRQ line: when the device is not being used, the interrupt is available to other devices (provided they share this friendly behaviour).
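
In sketch form, assuming a per-board hardware structure with a usecount field and hypothetical helpers skel_hw_on()/skel_hw_off() that also request and release the IRQ, the first-open/last-close logic looks like this:

typedef struct Skel_Hw {       /* stand-in for the real hardware structure */
    int usecount;              /* how many open files refer to this board */
    /* base address, IRQ number, and so on would live here */
} Skel_Hw;

extern Skel_Hw skel_hw[];                /* one entry per board */
extern int skel_hw_on(Skel_Hw *hw);      /* power up, request_irq(), ... */
extern void skel_hw_off(Skel_Hw *hw);    /* free_irq(), power down */

int skel_open (struct inode *inode, struct file *filp)
{
    Skel_Hw *hw = &skel_hw[SKEL_BOARD(inode->i_rdev)];
    int err;

    if (hw->usecount == 0) {             /* first open: wake the board up */
        err = skel_hw_on(hw);
        if (err)
            return err;
    }
    hw->usecount++;
    return 0;
}

void skel_close (struct inode *inode, struct file *filp)
{
    Skel_Hw *hw = &skel_hw[SKEL_BOARD(inode->i_rdev)];

    if (--hw->usecount == 0)             /* last close: shut it down */
        skel_hw_off(hw);
}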

The disadvantages of this implementation are the overhead of the power cycles on the device (which may be lengthy) and the inability to configure the device with one program in order to use it with another program. If you need a persistent state in the device, or want to avoid the power cycles, you can simply keep the device open by means of a command as silly as this:

sleep 1000000 < /dev/skel0 &

As should be clear from the above discussion, each possible implementation of the open() and close() semantics has its own peculiarities, and the choice of the optimum one depends on your particular device and the main use it is devoted to. Development time may be considered as well, unless the project is a major one. The skel implementation here may not be the best for your driver: it is only meant as a sample case, one amongst several different possibilities.

Additional Information

Alessandro Rubini (rubini@ipvvis.unipv.it) Programmer by chance and Linuxer by choice, Alessandro is taking his PhD course in computer science and is breeding two small Linux boxes at home. Wild by his very nature, he loves trekking, canoeing, and riding his bike.


Comments


Where are functions defined


Hello,

In which file are all of the functions that are declared in struct file_operations (e.g., open(), read()) defined?

Regards
Anil

Wherever You Like


Those functions have to be supplied by you. So you can put them wherever seems most appropriate for your code.

Mitch Frazier is an Associate Editor for Linux Journal.
