The Network Block Device

A network block device (NBD) driver makes a remote resource look like a local device in Linux, allowing a cheap and safe real-time mirror to be constructed.
Writing Linux Device Driver Code

Writing the driver code has been a salutary experience for all involved. The best advice to anyone contemplating writing kernel code is—don't. If you must, write as little as possible and make it as independent of anything else as possible.

Implementing one's own design is relatively easy as long as things go well. The very first bug, however, reveals the difficulty. Bugs in kernel code often crash the machine, giving scant opportunity to detect and correct them. Twenty reboot cycles per day is probably near everyone's limit. On occasion, we have had to find a bug by halving the code changes between versions until the precise line was located. Since a moderate number of changes can lead to a patch of (say) 200 lines or more, eight recompilation cycles and tests might be required to locate the point change involved. That says nothing of the intellectual effort involved in separating the patch into independent parts so that each can be compiled on its own, nor of the effort of devising a test for the bug or identifying the behavioural anomaly in the first place. Between one and two weeks is a reasonable estimate for locating a bug via code-halving.

It is very important always to have working kernel code. It does not matter if the code lacks some of the intended functionality, but what it does do, it must do correctly. Development must be planned to move forward in stages from one working state to another. There should be no stage of development at which the driver does not work, such as having altered a protocol on one side and not yet balanced the change with the corresponding changes elsewhere.

Having a working version implies checking in working versions regularly (we used CVS). Check-in occurred only on working versions. On a couple of occasions, we had to fork the development line to allow two areas to proceed independently (moving the networking code out of the kernel while reworking the reconnection protocols, for example), then reintegrate the changes via a sequence of non-working minor revisions, but we always had a previous working version available, to which we tried to make minimal changes.

Debugging techniques essentially consist of generating usable traces via printk statements. We had printks at the entry and exit of every function, activity and branch. That helped us discover where a coding bug occurred. Often, however, the bug is not detectable from the code trace but must instead be inferred through behavioural analysis. We had a serious bug that was present through half the development cycle and was never detected until integration tests began. It was completely masked by normal kernel block-buffering and showed up as apparent buffer corruption only in large (over 64MB) memory transfers. An md5sum of the whole device would sometimes return differing results when the rest of the machine was under heavy load. It turned out to be two simple bugs, one kernel-side and one server-side, that had nothing to do with buffering at all. In this kind of situation, brainstorming possible causal chains, devising tests for them, then running the tests and interpreting the results is the only feasible and correct debugging technique. This is the “scientific method” as expounded in the 18th and 19th centuries, and it works for debugging.

Kernel coding really begins to bite back when kernel mechanisms not of one's own devising have to be assimilated. Interactions with the buffering code had to be taken somewhat on trust, for example, because reading the buffering code (buffer.c) does not tell the whole story in itself (for example, when and how buffers are freed by a separate kernel thread). It is good advice to limit interactions with other kernel mechanisms to those that are absolutely predictable, if necessary by patterning the interactions on other driver examples. In the case of the NBD driver, the original was developed from the loopback driver (lo.c), and the latter served as a useful reference throughout.


The Network Block Device connects a client to a remote server across a network, creating a local block device that is physically remote. The driver we have developed provides mechanisms for redundancy, reliability and security that enable its use as a real-time backup storage medium in an industrial setting as well as allowing for other more imaginative modes. A mobile agent that takes its home environment with it to every system it visits, perhaps? In terms of speed, an NBD supporting an EXT2 file system competes well with NFS.


P. T. Breuer

A. Martín Lopez

Arturo García Ares




throughput and availability


Sorry, got thrown off by those dd's. Reading the summary, it is clear: you actually mount /mnt/remote, whatever it is. But then it looks like you need to know the filesystem when you mount it on /nd0 on the client!

throughput and availability


"so that we can route through a second network card on both machines and thus double the available bandwidth through our switched network."

Only if you are limited by the throughput of your NIC (say your network card is an old 10Mbit/sec one and your network can do 100Mbit or better); otherwise, "switched" means switched: you wait on one another, and what you gain is just this synchronization overhead.

The other thing that may be missing: how do you "publish" a partition you already have? I did not get it: does the "remote" resource have to be dedicated only to the NBD, or can it still be available locally on the server? In other words, can I "publish" the /home directory? Does it then have to be a standalone partition, or may I publish just some part of it, as through NFS or Samba?

hi!!!I hope i can learn more


Hi!!! I hope I can learn more about operating systems.

The NBD document is really


The NBD document is really good.
But I have some doubts. Can you help me understand how NBD is different from an iSCSI device?
Which will perform better, NBD or iSCSI?