Tux Knows It's Nice to Share, Part 1
Think back. Way-ay-ay back...Remember what your mother used to tell you? No? Well, among other things, I guarantee she told you it was good to share. This lesson usually came around about the time your little baby brother or sister had been screaming steadily for five minutes or more because you didn't want to share your toys. Well, Tux knows all about sharing, and this is precisely what we are going to talk about in the next few columns: how to share.
If you were to try to pick one (only one!) specific cool thing about Linux, you might have to say "networking". And I might just have to agree with you. Nothing networks like Linux, and what is networking about, really, but sharing? Nice segue, huh? Linux lets you share a single dial-up (or cable) modem connection among all the PCs in your house. It lets you share printers. It also lets you do the kind of sharing that drove local area network development to the state it's in today. That kind of sharing is "file sharing".
Now, as flexible as Linux is, you'll soon discover that there are dozens of different ways to share files across a network. We can choose from NFS, Samba, Coda, AFS and others. When all that fails, there's always sneakernet. Understanding what they do, how they do it, where they are now and where they are going is the first step in deciding what makes sense for you. So...I'd like to start this discussion with NFS, the great Grand Daddy (or Grand Mama--who knows for sure?) of network file-sharing utilities. Born when the 80s were still young, this child of Sun Microsystems is ubiquitous on just about every Linux or UNIX distribution you can think of. NFS stands for "Network File System", and you can even get it for that other OS, which makes it an ideal first choice for exploring file sharing.
Like almost everything in the world of Linux, NFS is still evolving, and different incarnations exist on different releases. Nevertheless, NFS has been around a long time, and while it has problems (which we will discuss later), it is a good, stable file-sharing mechanism and worth a look. On most machines, the Linux implementation of NFS is sitting around version 2. Version 3 of NFS, which includes improved performance, better file locking and other goodies, is still in development and requires that your kernel be at level 2.2.18 or higher (although there are patches for other levels). Curious about your kernel version? Try this command:
# uname -r
2.2.12-20
If you want to see the latest and greatest in the world of Linux NFS, you can visit the NFS project on SourceForge at http://nfs.sourceforge.net.
But I digress...NFS is a client-server system. On the server side of things, you make a list of directories that you want to make available for use by another machine--a client. This is called exporting a directory. The client then mounts these directories as though they were local drives. In order to make all this possible, the server runs several dæmons. From the client, you then mount the directory on your local system with a command similar to this one:
mount -t nfs remote_sys:/exported_dir /mnt/local_dir
Actually, the -t is kind of optional. The mount command can usually figure out what's on the other side without your specifically identifying it as an NFS mount. Sounds pretty easy, huh? Okay, there's more to it than that. NFS uses RPC, which stands for Remote Procedure Call, as its mechanism for communicating information. Using the rpcinfo program, we can determine what RPC services are running on a machine:
# rpcinfo -p testsys
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    764  rquotad
    100011    2   udp    764  rquotad
    100005    1   udp    772  mountd
    100005    1   tcp    774  mountd
    100005    2   udp    777  mountd
    100005    2   tcp    779  mountd
    100003    2   udp   2049  nfs
The -p flag tells the rpcinfo program to ask the portmapper on a remote system to report on what services it offers. This means, of course, that you need to be running the portmap dæmon. If we shut down the portmap dæmon, the results are less than stellar. Before I show you the results, let's shut down the portmapper. On a Red Hat or Mandrake system, you would use this command:

/etc/rc.d/init.d/portmap stop
On a Debian or Storm Linux system, try this instead:

/etc/init.d/portmap stop
Now, here's the result of an rpcinfo probe of the same system (testsys).
# rpcinfo -p testsys
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
Before we can actually start exporting anything at all, we need to start the NFS dæmons. If NFS is already installed on your system (and it probably is), you can start it in the same way you started the portmapper, substituting "nfs" for "portmap" (on a Debian system, you might want to try "nfs-server" instead of "nfs"). You can then run the script again, using "status" instead of "start". You should see something like this:
# /etc/rc.d/init.d/nfs status
rpc.mountd (pid 17324) is running...
nfsd (pid 17340 17339 17338 17337 17336 17335 17334 17333) is running...
rpc.rquotad (pid 17315) is running...
If you do a ps ax | grep rpc, you will also see another process, rpc.statd, show up in the list of RPC programs. Let's take a moment and visit those processes, starting with rpc.statd. This dæmon is started by the nfslock script along with rpc.lockd. Together, they handle file locking and, upon reboot, lock recovery (in the event of an NFS server crash). Depending on the system, you may not see "rpc.lockd" in your process status; instead, you might simply see lockd. Next is rpc.rquotad, a program that handles quota support over NFS (quotas being limits imposed on disk space based on users or groups). Next on our list is the real workhorse of the whole NFS group of programs, nfsd. This is the program that actually handles all the NFS user requests.
Finally, we have rpc.mountd, which checks whether a request to mount an exported directory is valid by looking it up against the exported directories listed in the /etc/exports file. Entries in that file (which can be edited manually with your favorite editor) have this format:

/directory/being/exported   client_name(permissions)
The name part is pretty simple; it's the full path to the directory you want to share on your network. The client name is the host name of the client machine. This could just as easily be an IP address. Furthermore, you can use a wildcard to make the export available to every host within a domain. For instance, say your domain is myowndomain.com, and you want all machines in that domain to be able to access /mnt/data1 on the machine acting as your NFS server. You would create this entry:

/mnt/data1   *.myowndomain.com
You could also use an asterisk to make every single host on every network privy to your NFS services, but this would be a very bad idea. Still, sharing that drive is pretty simple so far. Which leads us to the permissions aspect of things.
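To make that concrete, here is a hypothetical /etc/exports showing all three ways of naming clients (the host names, addresses and paths here are invented for illustration, not taken from a real system):

```
# /etc/exports -- one exported directory per line
/mnt/data1      speedy.myowndomain.com    # a single host, by name
/mnt/archive    192.168.1.2               # a single host, by IP address
/home/shared    *.myowndomain.com         # every host in the domain
```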
Permissions are a bit more complex and should be discussed in some detail, but we are running short on time and space (imagine that). To that end, we'll keep our permission list simple this time around. Here's my simple exports file from a machine at address 192.168.1.100:

/mnt/data1   192.168.1.2(rw,no_root_squash)
What this says is that I want to make /mnt/data1 available to the computer at 192.168.1.2 and allow it read and write permissions, even as root. If we were to accept the defaults, root would be treated as an anonymous user and not be allowed access to the disk. This is for security reasons. In my example, I am assuming that I, as the system administrator, am the one with root access to all the machines. Essentially, you could say I trust myself. Once I am satisfied with the information in my exports file, I have to make NFS aware of it. You could, of course, restart the system or just NFS, but this is Linux. Instead, use the exportfs command in this way:

exportfs -r
The -r option, in this case, stands for "re-export" the entries in /etc/exports. With that in place, I can mount that directory on my client machine (192.168.1.2) onto a local directory I created (in this case, remote_dir):
mount 192.168.1.100:/mnt/data1 /mnt/remote_dir
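If you would rather not retype that mount command after every reboot, the client can remount the directory automatically at boot time with an entry in its /etc/fstab along these lines (a sketch; the options column here just uses the basic read-write default, and you can add NFS-specific options to taste):

```
# device                   mount point       type   options   dump  fsck
192.168.1.100:/mnt/data1   /mnt/remote_dir   nfs    rw        0     0
```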
Long-winded though I may be, I must cut this week's discussion off now, though there is more to discuss on this topic. There's the nasty side of NFS: security, rights, permissions, a serious question of who's who and lots of other fun stuff. We'll pick up next time with a discussion of permissions and security, uncovering the good and the bad of NFS. Until next time, remember what your Momma said: "It's nice to share".