Why NFS?

There are lots of network file system protocols with better semantics than NFS. Better file systems offer consistency guarantees, such as ordering of writes and locking of opens, that you can build your applications on top of. Most offer higher performance and adapt better to high-latency, low-bandwidth network links. Some commercial examples are RFS (AT&T Unix's Remote File System), AFS (the Andrew File System), and 9fs (the Plan 9 file system). There are also research ones like Amoeba.

There are also lots of ways to add file systems to Unix. In fact, that is the problem—there are too many ways. A file system implementor needs to understand the insides of each Unix variant. The file system must run inside the kernel in a soft-real-time, shared-memory environment where debugging tools don't exist and a bug can lock the whole system. None of these properties makes kernel development cheap or easy. Because each kernel is different, significant new development is required for each port of a file system. Now add the price of kernel source licenses and development kits for half a dozen major commercial Unices, and the cost becomes truly exorbitant.

By contrast, I feel the only sensible place to prototype a file system is in user space, where bugs are cheap and debugging tools are plentiful. Several user-level NFS file servers already exist as examples, such as the Linux NFS server, the AMD automounter, and the cryptographic file system. A Pgfs port requires some include-file grepping, but the ubiquity of NFS has made Sun RPC support more or less standard. Arguments to the mount system call and interfaces to the export and mount daemons differ, but these are all user-level phenomena.
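The appeal of a user-level server is that the whole protocol reduces to ordinary request dispatch: decode a Sun RPC call, look up the NFS procedure number, run a handler, encode a reply. Here is a toy sketch of that shape (the class and handler names are mine, not from any real server; the procedure numbers come from the NFSv2 spec, RFC 1094; real servers decode XDR from a socket, which is elided here):

```python
# Procedure numbers from the NFSv2 protocol definition (RFC 1094).
NFSPROC_NULL = 0
NFSPROC_GETATTR = 1

class ToyNFSServer:
    """Hypothetical skeleton of a user-level NFS server's dispatch loop."""

    def __init__(self):
        # Map opaque file handles to attributes; a real server would map
        # handles to inodes (or, in Pgfs's case, database rows).
        self.files = {b"root": {"type": "dir", "size": 0}}
        self.procs = {
            NFSPROC_NULL: self.null,
            NFSPROC_GETATTR: self.getattr,
        }

    def dispatch(self, proc, args):
        # In a real server this is driven by RPC calls arriving on a UDP
        # socket; here we pass already-decoded arguments directly.
        handler = self.procs.get(proc)
        if handler is None:
            return ("PROC_UNAVAIL", None)  # RPC-level error in real life
        return handler(args)

    def null(self, args):
        return ("NFS_OK", None)  # the NULL procedure is just a ping

    def getattr(self, fhandle):
        attrs = self.files.get(fhandle)
        if attrs is None:
            return ("NFSERR_STALE", None)  # unknown/stale file handle
        return ("NFS_OK", attrs)

srv = ToyNFSServer()
print(srv.dispatch(NFSPROC_NULL, None))        # ('NFS_OK', None)
print(srv.dispatch(NFSPROC_GETATTR, b"root"))
```

Because all of this runs as an ordinary process, a crash is just a core dump and a debugger session, not a wedged machine.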

Don't let my championing of NFS mislead you into believing the NFS spec is open or complete, the NFS semantics useful, or the NFS protocol well-designed. It's not, they aren't, and it isn't. But until someone popularizes a better network file system, it's the best we have available.