Tux Knows It's Nice to Share, Part 1
Think back. Way-ay-ay back...Remember what your mother used to tell you? No? Well, among other things, I guarantee she told you it was good to share. This lesson usually came around about the time your little baby brother or sister had been screaming steadily for five minutes or more because you didn't want to share your toys. Well, Tux knows all about sharing, and this is precisely what we are going to talk about in the next few columns: how to share.
If you were to try and pick one (only one!) specific cool thing about Linux, you might have to say "networking". And I might just have to agree with you. Nothing networks like Linux, and what is networking about, really, but sharing? Nice segue, huh? Linux lets you share a single dial-up (or cable) modem connection among all the PCs in your house. It lets you share printers. It also lets you do the kind of sharing that drove local area network development to the state it's in today. That kind of sharing is "file sharing".
Now, as flexible as Linux is, you'll soon discover that there are dozens of different ways to share files across a network. We can choose from NFS, Samba, Coda, AFS and others. When all that fails, there's always sneakernet. Understanding what they do, how they do it, where they are now and where they are going is the first step in deciding what makes sense for you. So...I'd like to start this discussion with NFS, the great Grand Daddy (or Grand Mama--who knows for sure?) of network file-sharing utilities. Born when the 80s were still young, this child of Sun Microsystems is found on just about every Linux or UNIX distribution you can think of. NFS stands for "Network File System", and you can even get it for that other OS, which makes it an ideal first choice for exploring file sharing.
Like almost everything in the world of Linux, NFS is still evolving, and different incarnations exist on different releases. Nevertheless, NFS has been around a long time, and while it has problems (which we will discuss later), it is a good, stable file-sharing mechanism and worth a look. On most machines, the Linux implementation of NFS sits at version 2. Version 3 of NFS, which includes improved performance, better file locking and other goodies, is still in development and requires that your kernel be at level 2.2.18 or higher (although there are patches for other levels). Curious about your kernel version? Try this command:
# uname -r
2.2.12-20
If you want to see the latest and greatest in the world of Linux NFS, you can visit the NFS project at SourceForge: http://nfs.sourceforge.net.
But I digress...NFS is a client-server system. On the server side of things, you make a list of directories that you want to make available for use by another machine--a client. This is called exporting a directory. The client then mounts these directories as though they were local drives. In order to make all this possible, the server runs several dæmons. From the client, you then mount the directory on your local system with a command similar to this one:
mount -t nfs remote_sys:/exported_dir /mnt/local_dir
Actually, the -t nfs part is kind of optional. The mount command can usually figure out what's on the other side without your specifically identifying it as an NFS mount. Sounds pretty easy, huh? Okay, there's more to it than that. NFS uses RPC, which stands for Remote Procedure Call, as its mechanism for communicating information. Using the rpcinfo program, we can determine what RPC services are running on a machine:
# rpcinfo -p testsys
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    764  rquotad
    100011    2   udp    764  rquotad
    100005    1   udp    772  mountd
    100005    1   tcp    774  mountd
    100005    2   udp    777  mountd
    100005    2   tcp    779  mountd
    100003    2   udp   2049  nfs
The -p flag tells the rpcinfo program to ask the portmapper on a remote system to report on what services it offers. This means, of course, that you need to be running the portmap dæmon. If we shut down the portmap dæmon, the results are less than stellar. Before I show you the results, let's shut down the portmapper. On a Red Hat or Mandrake system, you would use this command:
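Assuming the standard SysV init-script location on those distributions (the exact path can vary from release to release, so treat this as a sketch):

/etc/rc.d/init.d/portmap stop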
On a Debian or Storm Linux system, try this instead.
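On those systems, the init scripts usually live directly under /etc/init.d, so the equivalent command would be:

/etc/init.d/portmap stop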
Now, here's the result of an rpcinfo probe of the same system (testsys).
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
Before we can actually start exporting anything at all, we need to start the NFS dæmons. If NFS is already installed on your system (and it probably is), you can start it in the same way you started the portmapper, substituting "nfs" for "portmap" (on a Debian system, you might want to try "nfs-server" instead of "nfs"). You can then run the script again, using "status" instead of "start". You should see something like this:
# /etc/rc.d/init.d/nfs status
rpc.mountd (pid 17324) is running...
nfsd (pid 17340 17339 17338 17337 17336 17335 17334 17333) is running...
rpc.rquotad (pid 17315) is running...
If you do a ps ax | grep rpc, you will also see another process, rpc.statd, show up in the list of RPC programs. Let's take a moment to visit those processes, starting with rpc.statd. This dæmon is started by the nfslock script along with rpc.lockd. Together, they handle file locking and lock recovery (in the event of an NFS server crash) upon reboot. Depending on the system, you may not see "rpc.lockd" in your process status; instead, you might simply see lockd. Next is rpc.rquotad, a program that handles quota support over NFS (quotas being limits imposed on disk space for users or groups). Next on our list is the real workhorse of the whole NFS group of programs, nfsd. This is the program that actually handles all the NFS user requests.
Finally, we have rpc.mountd, which checks whether a request to mount an exported directory is valid by looking the directory up in the /etc/exports file. Entries in that file (which you can edit manually with your favorite editor) have this format:
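In its simplest form, each line pairs the directory being exported with the client allowed to mount it, with any options following the client name in parentheses:

/directory/being/exported    client_name(permissions)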
The name part is pretty simple; it's the full path to the directory you want to share on your network. The client name is the host name of the client machine. This can just as easily be an IP address. Furthermore, you can use a wildcard to make every host within a domain available. For instance, say your domain is myowndomain.com, and you want all machines in that domain to be able to access /mnt/data1 on the machine acting as your NFS server. You would create this entry:
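Using the wildcard in the client-name field, the entry would look something like this (with no options given, the NFS defaults apply):

/mnt/data1    *.myowndomain.com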
You could also use an asterisk to make every single host on every network privy to your NFS services, but this would be a very bad idea. Still, sharing that drive is pretty simple so far. Which leads us to the permissions aspect of things.
Permissions are a bit more complex and should be discussed in some detail, but we are running short on time and space (imagine that). To that end, we'll keep our permission list simple this time around. Here's my simple exports file from a machine at address 192.168.1.100.
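Here it is, using the standard export options rw (read and write access) and no_root_squash (don't map root to the anonymous user):

/mnt/data1    192.168.1.2(rw,no_root_squash)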
What this says is that I want to make /mnt/data1 available to the computer at 192.168.1.2 and give it read and write permissions, even as root. If we accepted the defaults, root would be treated as an anonymous user and not be allowed access to the disk. This is for security reasons. In my example, I am assuming that I, as the system administrator, am the one with root access to all the machines. Essentially, you could say I trust myself. Once I am satisfied with the information in my exports file, I have to make NFS aware of it. You could, of course, restart the system, or just NFS, but this is Linux. Instead, use the exportfs command in this way.
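Run it as root on the server:

exportfs -r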
The -r option, in this case, stands for "re-export the entries in /etc/exports". With that information in place, I could mount the exported directory on my client machine (192.168.1.2) onto a local directory I had created (in this case, remote_dir):
mount 192.168.1.100:/mnt/data1 /mnt/remote_dir
Long-winded though I may be, I must cut this week's discussion off now, even though there is more to discuss on this topic. There's the nasty side of NFS: security, rights, permissions, a serious question of who's who and lots of other fun stuff. We'll pick up next time with a discussion of permissions and security, uncovering the good and the bad of NFS. Until next time, remember what your Momma said: "It's nice to share".