Tux Knows It's Nice to Share, Part 1

Yes, ladies and gentlemen, it's another installment of the "SysAdmin's Corner" right here at the Linux Journal web site--your one-stop source for all things Linux, administrative or otherwise. This is the place to find discussion about things you are never going to hear about around the water cooler at the office. Who really cares what happened on last night's episode of "In-Laws Survivor" anyway?

Think back. Way-ay-ay back...Remember what your mother used to tell you? No? Well, among other things, I guarantee she told you it was good to share. This lesson usually came around about the time your little baby brother or sister had been screaming steadily for five minutes or more because you didn't want to share your toys. Well, Tux knows all about sharing, and this is precisely what we are going to talk about in the next few columns: how to share.

If you were to try to pick one (only one!) specific cool thing about Linux, you might have to say "networking". And I might just have to agree with you. Nothing networks like Linux, and what is networking about, really, but sharing? Nice segue, huh? Linux lets you share a single dial-up (or cable) modem connection among all the PCs in your house. It lets you share printers. It also lets you do the kind of sharing that drove local area network development to the state it's in today. That kind of sharing is "file sharing".

Now, as flexible as Linux is, you'll soon discover that there are dozens of different ways to share files across a network. We can choose from NFS, SAMBA, CODA, AFS, and others. When all that fails, there's always sneaker net. Understanding what they do, how they do it, where they are now and where they are going is the first step in deciding what makes sense for you. So...I'd like to start this discussion with NFS, the great Grand Daddy (or Grand Mama--who knows for sure?) of network file sharing utilities. Born when the 80s were still young, this child of Sun Microsystems is ubiquitous on just about every Linux or UNIX distribution you can think of. NFS stands for "Network File System", and you can even get it for that other OS, which makes it an ideal first choice for exploration of file sharing.

Like almost everything in the world of Linux, NFS is still evolving, and different incarnations exist on different releases. Nevertheless, NFS has been around a long time, and while it has problems (which we will discuss later), it is a good, stable file-sharing mechanism and worth a look. On most machines, the Linux implementation of NFS is sitting around version 2. Version 3 of NFS, which includes improved performance, better file locking and other goodies, is still in development and requires that your kernel be at level 2.2.18 or higher (although there are patches for other levels). Curious about your kernel version? Try this command.

     # uname -r
     2.2.12-20

If you want to see the latest and greatest in the world of Linux NFS, you can visit the NFS project at SourceForge at http://nfs.sourceforge.net.

But I digress...NFS is a client-server system. On the server side of things, you make a list of directories that you want to make available for use by another machine--a client. This is called exporting a directory. The client then mounts these directories as though they were local drives. In order to make all this possible, the server runs several dæmons. From the client, you then mount the directory on your local system with a command similar to this one:

      mount -t nfs remote_sys:/exported_dir /mnt/local_dir

Actually, the -t is kind of optional. The mount command can usually figure out what's on the other side without specifically identifying it as an NFS mount. Sounds pretty easy, huh? Okay, there's more to it than that. NFS uses RPC as its mechanism for communicating information. RPC stands for Remote Procedure Call. Using the rpcinfo program, we can determine what RPC services are running on a machine.

     # rpcinfo -p testsys
        program vers proto   port
         100000    2   tcp    111  portmapper
         100000    2   udp    111  portmapper
         100011    1   udp    764  rquotad
         100011    2   udp    764  rquotad
         100005    1   udp    772  mountd
         100005    1   tcp    774  mountd
         100005    2   udp    777  mountd
         100005    2   tcp    779  mountd
         100003    2   udp   2049  nfs

The -p flag tells the rpcinfo program to ask the portmapper on a remote system to report on what services it offers. This means, of course, that you need to be running the portmap dæmon. If we shut down the portmap dæmon, the results are less than stellar. Before I show you the results, let's shut down the portmapper. On a Red Hat or Mandrake system, you would use this command:

     /etc/rc.d/init.d/portmap stop

On a Debian or Storm Linux system, try this instead.

     /etc/init.d/portmap stop

Now, here's the result of an rpcinfo probe of the same system (testsys).

     rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused 

Before we can actually start exporting anything at all, we need to start the NFS dæmons. If NFS is already installed on your system (and it probably is), you can start it in the same way you started the portmapper, substituting "nfs" for "portmap" (on a Debian system, you might want to try "nfs-server" instead of "nfs"). You can then run the script again, using "status" instead of "start". You should see something like this:

     # /etc/rc.d/init.d/nfs status
     rpc.mountd (pid 17324) is running...
     nfsd (pid 17340 17339 17338 17337 17336 17335 17334 17333) is running...
     rpc.rquotad (pid 17315) is running... 

If you do a ps ax | grep rpc, you will also see another process, rpc.statd, show up in the list of RPC programs. Let's take a moment and visit those processes, starting with rpc.statd. This dæmon is started by the nfslock script along with rpc.lockd. Together, they handle file locking and lock recovery (in the event of an NFS server crash) upon reboot. Depending on the system, you may not see "rpc.lockd" in your process status; instead, you might simply see lockd. Next is rpc.rquotad, a program that handles quota support over NFS (quotas being limits imposed on disk space based on users or groups). Next on our list is the real workhorse of the whole NFS group of programs, nfsd. This is the program that actually handles all the NFS user requests.

Finally, we have rpc.mountd, which checks whether a request to mount an exported directory is valid by looking it up in the /etc/exports file. Entries in that file (which can be edited manually with your favorite editor) have this format:

     /name_of_dir    client_name(permissions)

The name part is pretty simple; it's the full path to the directory you want to share on your network. The client name is the host name of the client machine. This can just as easily be an IP address. Furthermore, you can use a wildcard to grant access to every host within a domain. For instance, say your domain is myowndomain.com, and you want all machines in that domain to be able to access /mnt/data1 on the machine acting as your NFS server. You would create this entry:

     /mnt/data1    *.myowndomain.com(permissions)

You could also use an asterisk to make every single host on every network privy to your NFS services, but this would be a very bad idea. Still, sharing that drive is pretty simple so far. Which leads us to the permissions aspect of things.
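If your client machines don't share a resolvable domain name, the exports file also accepts an IP network in address/netmask form. The entry below is a hypothetical example (the permissions field is covered next), not one taken from the test system:

```
# Share /mnt/data1 with every host on one class C network:
/mnt/data1    192.168.1.0/255.255.255.0(permissions)
```

This keeps access limited to your own network without your having to list every client, or rely on DNS, one by one.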

Permissions are a bit more complex and should be discussed in some detail, but we are running short on time and space (imagine that). To that end, we'll keep our permission list simple this time around. Here's my simple exports file from a machine at address 192.168.1.100.

     /mnt/data1         192.168.1.2(rw,no_root_squash)

What this says is that I want to make /mnt/data1 available to the computer at 192.168.1.2 and allow it read and write permissions, even as root. If we were to accept defaults, then root would be treated as an anonymous user and not be allowed access to the disk. This is for security reasons. In my example, I am assuming that I, as the system administrator, am the one with root access to all machines. Essentially, you could say I trust myself. Once I am satisfied with the information I have in my exports file, I have to make NFS aware of the information. You could, of course, restart the system or just NFS, but this is Linux. Instead, use the exportfs command in this way.

     /usr/sbin/exportfs -r

The -r option, in this case, tells exportfs to re-export the entries in /etc/exports. With that information, I could mount that directory on my client machine (192.168.1.2) to a local directory I had created (in this case, remote_dir).
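Before heading over to the client, it can be reassuring to confirm that the server really is exporting what you think it is. The commands below are a sketch (showmount ships with the standard NFS utilities, and the paths match the Red Hat layout used above):

```
# On the server: list what is currently being exported
/usr/sbin/exportfs

# Ask the server's mountd for its export list (this works from the client, too)
/usr/sbin/showmount -e 192.168.1.100
```

If /mnt/data1 doesn't show up in either listing, recheck /etc/exports and run exportfs -r again before blaming the client.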

     mount 192.168.1.100:/mnt/data1 /mnt/remote_dir
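A mount done by hand this way disappears at the next reboot. If you want the client to re-establish it automatically, a line in the client's /etc/fstab will do it. This is a sketch using the same addresses as above; "defaults" is a reasonable starting set of options:

```
# device                    mount point      type  options   dump  pass
192.168.1.100:/mnt/data1    /mnt/remote_dir  nfs   defaults  0     0
```

With that in place, a simple "mount /mnt/remote_dir" (or a reboot) brings the share back.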

Long-winded though I may be, I must cut this week's discussion off now, but there is more to discuss on this topic. There's the nasty side of NFS: security, rights, permissions, a serious question of who's who and lots of other fun stuff. We'll pick up next time with a discussion of permissions and security, and uncover the good and the bad of NFS. Until next time, remember what your Momma said: "It's nice to share".
