Start Sharing Files with NFS

Olexiy and Denis provide an introduction to the features of the Network File System (NFS).
Setting up the NFS Client

First, make sure your Linux kernel supports NFS. The kernel's current capabilities are reflected in the /proc directory. To look inside the filesystems file, execute the cat command, like this:

tomcat> cat /proc/filesystems
         ext2
nodev    proc
nodev    devpts
         iso9660
nodev    nfs

This file shows you what kinds of filesystems you may use with your version of the kernel. If the line nodev nfs is missing, it may mean your NFS filesystem support module is not loaded. In this case, try to execute the command modprobe nfs. You must be root to execute it. The output of the command delineates the situation: if the module was compiled, the command loads it, and if you repeat the cat command, the telltale line will be shown. If your kernel does not support NFS at all, you probably will have to recompile it, but this topic is far beyond the scope of this article.
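The check-then-load sequence above can be sketched in a few lines of shell. This is a minimal sketch, to be run as root; the module name nfs is standard, but messages and module layout vary between kernel versions:

```shell
# Check whether the running kernel already supports NFS; if the
# telltale line is missing, try loading the module with modprobe.
if grep -qw nfs /proc/filesystems; then
    echo "NFS support is already available"
else
    modprobe nfs && echo "NFS module loaded"
fi
```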

If you have NFS support, you can mount files located on the server machine, as we showed in the example. The full format for the mount command is as follows:

mount -t nfs server:exp_dir local_dir -o options

Here server is the name of your NFS server, exp_dir is the full path to the exported directory on the server machine, and local_dir is the local mount point.
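Put together, a session might look like the following sketch. The server name tiger and the directory names are illustrative only, and you need root privileges:

```shell
# Create a local mount point, then mount the directory the
# server exports onto it.
mkdir -p /mnt/nfshome
mount -t nfs tiger:/nfs1/home /mnt/nfshome

# Verify the filesystem is really mounted:
mount | grep nfshome
```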

Usually this is the end of the story, but to get maximum performance from a connection via NFS, let's look at the options parameters. We will use them later for sure:

  • rsize=n and wsize=n: specify the size of the datagrams sent and accepted by the NFS client for read and write requests, respectively. The default value depends on the version of the kernel and may be as low as 1,024 bytes. The larger the piece of food your cat can eat at one time, the quicker it finishes eating. So, the bigger the value you put here, the quicker your work with a remote file will be. If you place 4,096 or even 8,192 here, throughput may be improved greatly.

  • timeo=n and retrans=n: control the reaction to RPC timeouts. If your pet (take a cat, for example) has eaten a lot, it is able to take a new little piece of food after a short period of time, once it has digested a little, and to eat a lot again only after a long period of time. These time intervals are called minor and major, respectively. Although NFS is not a cat, it also has minor and major timeouts, connected with the reaction of the server. Either the Net or the server, or even both, may be down temporarily or inaccessible. If this occurs, resending starts after timeo tenths of a second. The standard value for this minor timeout is seven, so if there is no reply within 0.7 seconds, the NFS client will repeat the same request and double the timeout (to 1.4 seconds) by default. If there is still no reply, the procedure is repeated until a maximum, major timeout of 60 seconds is reached. The number of minor timeouts that must occur before a major timeout is reached is set by the retrans parameter; its default value is three.

  • hard and soft: define the reaction when a major timeout is reached. In the first case, if the default value hard is set, the NFS client reports the error “server not responding, still trying” on the console and continues trying to connect. So when the NFS server is back on-line, the program continues to run from where it was. Conversely, if the value soft is set, the NFS client reports an I/O error to the process accessing a file on the NFS-mounted filesystem, and some files may be corrupted by losing data, so use the soft option with care.

  • intr: allows for interrupting an NFS call and is useful for canceling the job if the server doesn't respond after a long time.

The recommended options are intr and hard, and because the latter is the default, intr alone is enough. With these parameters, we can try to increase the performance of the NFS connection.
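A mount command combining the larger datagram sizes discussed above with the recommended intr option might look like this sketch. The server name, paths, and the particular values are illustrative starting points, not universal recommendations:

```shell
# rsize/wsize of 8,192 bytes can improve throughput over the old
# 1,024-byte default; timeo=14 doubles the default minor timeout
# to 1.4 seconds.  hard is the default but is spelled out here.
mount -t nfs -o rsize=8192,wsize=8192,timeo=14,hard,intr \
      tiger:/nfs1/home /mnt/nfshome
```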

Often we want the mounting to happen in a hidden way (transparently), perhaps while booting the system. In this case, we have to add the corresponding line to the file named /etc/fstab, which the system reads while booting. Every line in that file has at least four fields, usually six. Here's an example:

1)device           2)mnt point  3)fs type  4)options      5)dump  6)check order
tiger:/nfs1/home   /home        nfs        rw,hard,intr   0       2

The first field describes the device to be mounted. In this example, the NFS server tiger exports the /nfs1/home filesystem as the home directory. Field number three identifies the filesystem type as nfs. The last two fields mean the filesystem should not be dumped and should be checked in second order. You can find more details in section five of the man pages (man 5 fstab). It also is a good idea to start rpc.lockd and rpc.statd; these programs usually are started from init scripts. Once you've done those things, you should have the NFS client running.
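Once the fstab entry is in place, you can test it without rebooting. A possible check, assuming the illustrative server name tiger from the example above:

```shell
# Confirm the server really exports the directory we expect:
showmount -e tiger

# Mount everything listed in /etc/fstab that is not yet mounted:
mount -a

# Check that the rpc.statd and rpc.lockd helpers are running:
ps ax | grep -E 'rpc\.(statd|lockd)' | grep -v grep
```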
