Corporate Linux: Coexisting with the Big Boys
Now that NIS is working, let's attend to NFS. Depending on whom you listen to, NFS is either an evil beast or the magic bullet for all your user-data problems. In my opinion, NFS makes a large network with huge amounts of user data easy and transparent to set up, but it comes with a massive performance penalty common to all networked file systems. Count on NFS access being on the order of ten times slower than local hard-disk access. Slow or not, large sites simply can't live without NFS.
That said, setting up an NFS client basically follows the same steps as for the NIS client: software installation, server-side configuration and client-side configuration changes.
NFS requires a kernel built with support for it, presumably as a kernel module, but you can compile it into the kernel itself if you wish. If your kernel does not yet have NFS support, you need to enable it under “Filesystems”. Go to your kernel source directory (most likely /usr/src/linux) and type make xconfig or make menuconfig. Obviously, to use NFS, the kernel needs to have network support enabled. After compiling and installing the NFS module, your system has all the software it needs. I'd suggest you install one piece of optional software, though, which is showmount. Look for a package called something like nfs*client* on your distribution CD-ROM.
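Before reaching for the compiler, it is worth checking whether the running kernel already knows about NFS. A quick look at /proc/filesystems settles the question; this is a sketch, and the exact output varies by distribution and kernel version:

```shell
# List the file systems the running kernel supports; an "nfs" entry
# (usually shown as "nodev nfs") means NFS support is built in or loaded.
if grep -qw nfs /proc/filesystems; then
    echo "NFS support present"
else
    echo "NFS support missing -- try 'modprobe nfs' or rebuild the kernel"
fi
```

If the module simply is not loaded yet, modprobe will often pull it in without any kernel rebuild at all.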
On the NFS server, there is usually a file stating which file systems are exported. Depending on the flavor of UNIX, it can be called /etc/exports (SunOS, Linux, *BSD), /etc/dfs/dfstab (Solaris, other System V variants), or something completely different. An OS-independent way of finding that information is to run the showmount command against the NFS server, e.g., showmount -e. This will list the exported file systems and also the machines or groups of machines allowed to mount them.
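To give an idea of what showmount is reporting, a Linux- or SunOS-style /etc/exports on the server might look something like this (the paths and netgroup name are made up for illustration):

```
# Hypothetical /etc/exports on the NFS server
/export/home   @desktops(rw)   # writable by members of the "desktops" netgroup
/export/ftp    *(ro)           # read-only for every machine
```

On Linux, changes to this file are typically activated with exportfs -a or by restarting the NFS daemons.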
Large sites usually have a need to manage machines in groups. For example, all users' desktop workstations should be able to mount any of the home directories, whereas only servers might be allowed to mount CDs from a networked jukebox. In NIS, this mechanism is provided by the netgroup map, and chances are the showmount command will list only the netgroups allowed to access specific exports. A sample output would be
/home/ftp   (everyone)
/home       desktops
/var/mail   mailservers
everyone is a special name denoting every machine, while desktops and mailservers are netgroups. Executing
ypmatch -k desktops netgroup

might produce:

desktops: penguin, turkey, heron

For your Linux machine to be able to access the /home NFS share, it must belong to the desktops netgroup. Otherwise, the server will deny access.
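For completeness, the netgroup map behind that ypmatch output is built from (host,user,domain) triples; an empty field acts as a wildcard. The source file on the NIS master might contain entries like these (hypothetical hosts matching the example above):

```
# /etc/netgroup on the NIS master -- (host,user,domain) triples
desktops     (penguin,,) (turkey,,) (heron,,)
mailservers  (mailhub,,)
```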
Once your server lets you in, the last obstacle is advertising the NFS exports to your client. The easiest way to handle this is a permanent mount entry in your /etc/fstab, such as:
bigboy:/export/home /home nfs defaults 0 0
This way, /home would be hard-mounted on each boot. While this approach certainly works very well, it has limitations. At our site, we have a mount point for each user's home directory; e.g., /home/joe for Joe and /home/sue for Sue. With 1200+ users distributed across ten file servers, hard-mounting each directory would require much housekeeping, and a server replacement or elimination would be a major headache.
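The fourth fstab field accepts NFS mount options; the defaults are usable, but entries like the following are common for hard-mounted home directories. The block sizes here are illustrative and should be tuned for your network:

```
# Hypothetical tuned fstab entry: hard mount with interruptible
# waits and larger read/write transfer sizes
bigboy:/export/home  /home  nfs  rw,hard,intr,rsize=8192,wsize=8192  0 0
```

A hard mount guarantees no silent data loss if the server goes away temporarily, while intr lets you kill processes stuck waiting on it.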
Fortunately, there is an elegant way around this, called the automounter. This enterprising little daemon watches a set of mount points specified in files for access by the operating system. Once an access is detected, the automount daemon tries to mount the export belonging to the mount point. Other than a slight delay, neither applications nor users notice a difference from a regular mount. As might be expected, the automounter will release (umount) a mounted file system after a configurable period of inactivity.
To make use of the automounter, install the autofs package and look at the /etc/auto.* files it installs. The first and most important is /etc/auto.master, which lists each mount point to be supervised by the automounter and its associated map, usually named /etc/auto.mountpoint. Each of these maps follows the basic schema set forth in /etc/auto.misc:
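A minimal /etc/auto.master tying mount points to their maps might read as follows (the /home line assumes a home-directory map named auto.home, as used later in this article):

```
# Hypothetical /etc/auto.master: mount point, then the map serving it
/misc   /etc/auto.misc
/home   /etc/auto.home
```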
cd   -fstype=iso9660,ro,user   :/dev/cdrom
fd   -fstype=auto,user         :/dev/fd0
In this example, the CD in /dev/cdrom is mounted on /misc/cd with the usual options associated with a CD drive, whereas the floppy currently in drive /dev/fd0 is mounted on /misc/fd. Note that the mounts will not occur until the directory is accessed, e.g., by doing ls /misc/cd, and the automounter will automatically create each of the mount points listed in the file.
“Great”, you say, “now, what's all that got to do with NFS and NIS?” Well, the automount maps are actually lists which can be maintained on the NIS server and distributed to the clients. For example, a typical NIS map named auto.home would look like this:
joe  bigboy:/export/home/2/joe
sue  beanbox:/export/home/sue
Here, then, is the reason to have the huge number of mount points mentioned earlier. If Joe changes jobs and joins the finance department, his home directory can be moved to beanbox. His new entry would then read:
joe  beanbox:/export/home/joe

but the mount point on his desktop machine is still /home/joe. In other words, even though he changed to another server, he does not need to adapt any of the environment settings, application data paths or shell scripts he might have. Not convinced? Type grep $HOME $HOME/.* to see how many instances of your home path are actually saved everywhere.
If, during NIS configuration, you edited your /etc/nsswitch.conf to contain the line:
automount: files nis
the automounter will read its startup files from /etc/auto.master. After that, it will query the NIS server for an NIS map named auto.master and will process the entries accordingly. Thus, the above change for user Joe needs to be made only one time on one system (the NIS master), and it will be known to all clients. No entries to forget, no conflicting client configurations. How's that for efficiency?
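On the NIS master, the auto.master map itself needs only one entry per automounted tree; for the home directories it could be as small as this (the map source location and name are an assumption, so check your NIS Makefile):

```
# Hypothetical NIS auto.master map source:
# the /home tree is served by the auto.home map
/home   auto.home
```

After editing the map sources, rebuilding and pushing them to the slaves is typically just a matter of running make in /var/yp on the master.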