Corporate Linux: Coexisting with the Big Boys
Now that NIS is working, let's attend to NFS. Depending on whom you listen to, NFS is either an evil beast or the magic bullet for all your user-data problems. In my opinion, NFS makes a large network with huge amounts of user data easy and transparent to set up, but it comes with the massive performance penalty common to all networked file systems. Count on NFS access being on the order of ten times slower than local hard disk access. Slow or not, large sites simply can't live without NFS.
That said, setting up an NFS client follows basically the same steps as the NIS client: software installation, server-side configuration and client-side configuration changes.
NFS requires a kernel built with support for it, typically as a kernel module, though you can compile it into the kernel itself if you wish. If your kernel does not yet have NFS support, go to your kernel source directory (most likely /usr/src/linux), run make xconfig or make menuconfig, and enable NFS under “Filesystems”. Obviously, to use NFS, the kernel also needs network support enabled. After compiling and installing the NFS module, your system has all the software it needs. I'd suggest installing one piece of optional software, though: showmount. Look for a package called something like nfs*client* on your distribution CD-ROM.
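A quick way to check for NFS support is to look for nfs in /proc/filesystems. The sketch below runs the check against sample file contents so it is self-contained; on a live system you would read /proc/filesystems directly, and load the module with modprobe nfs (as root) if it is missing:

```shell
#!/bin/sh
# Sample contents as /proc/filesystems might show them on an NFS-capable
# kernel; substitute "cat /proc/filesystems" on a real machine.
filesystems='nodev   proc
nodev   nfs
        ext2'

if printf '%s\n' "$filesystems" | grep -qw nfs; then
    echo "kernel has NFS support"
else
    echo "no NFS support found; try: modprobe nfs (as root)"
fi
```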
On the NFS server, there is usually a file stating which file systems are exported. Depending on the flavor of UNIX, it can be called /etc/exports (SunOS, Linux, *BSD), /etc/dfs/dfstab (Solaris, other System V variants), or something completely different. An OS-independent way of finding that information is to run the showmount command against the NFS server, e.g., showmount -e. This will list the exported file systems and also the machines or groups of machines allowed to mount them.
Large sites usually have a need to manage machines in groups. For example, all users' desktop workstations should be able to mount any of the home directories, whereas only servers might be allowed to mount CDs from a networked jukebox. In NIS, this mechanism is provided by the netgroup map, and chances are the showmount command will list only the netgroups allowed to access specific exports. A sample output would be
/home/ftp   (everyone)
/home       desktops
/var/mail   mailservers
everyone is a special name denoting every machine, while desktops and mailservers are netgroups. Executing
ypmatch -k desktops netgroup
might produce:
desktops: penguin, turkey, heron
For your Linux machine to access the /home NFS share, it must belong to the desktops netgroup. Otherwise, the server will deny access.
Once your server lets you in, the last obstacle is advertising the NFS exports to your client. The easiest way to handle this is a permanent mount entry in your /etc/fstab, such as:
bigboy:/export/home /home nfs defaults 0 0
This way, /home would be hard-mounted on each boot. While this approach certainly works very well, it has limitations. At our site, we have a mount point for each user's home directory; e.g., /home/joe for Joe and /home/sue for Sue. With 1200+ users distributed across ten file servers, hard-mounting each directory would require much housekeeping, and a server replacement or elimination would be a major headache.
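For reference, a somewhat more tuned fstab entry might look like the one below. The server name bigboy comes from the example above; the option values are illustrative choices, not prescriptions:

```
# device             mountpoint  type  options                          dump fsck
bigboy:/export/home  /home       nfs   hard,intr,rsize=8192,wsize=8192  0    0
```

Here, hard makes the client keep retrying if the server goes away (safer for data than soft), intr allows a hung mount to be interrupted, and rsize/wsize set the read and write transfer sizes.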
Fortunately, there is an elegant way around this, called the automounter. This enterprising little daemon watches a set of mount points specified in files for access by the operating system. Once an access is detected, the automount daemon tries to mount the export belonging to the mount point. Other than a slight delay, neither applications nor users notice a difference from a regular mount. As might be expected, the automounter will release (umount) a mounted file system after a configurable period of inactivity.
To make use of the automounter, install the autofs package and look at the auto.* files it installs under /etc. The first and most important is /etc/auto.master, which lists each mount point to be supervised by the automounter and its associated map, usually named /etc/auto.mountpoint. Each of these maps follows the basic schema set forth in /etc/auto.misc:
cd  -fstype=iso9660,ro,user  :/dev/cdrom
fd  -fstype=auto,user        :/dev/fd0
In this example, the CD in /dev/cdrom is mounted on /misc/cd with the usual options for a CD drive, whereas the floppy currently in /dev/fd0 is mounted on /misc/fd. Note that the mounts do not occur until the directory is accessed, e.g., by running ls /misc/cd, and the automounter automatically creates each of the mount points listed in the file.
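To tie such a map to its mount point, /etc/auto.master would contain a line like the following (the --timeout value, in seconds, is an illustrative choice):

```
# mount point   map file         options
/misc           /etc/auto.misc   --timeout=60
```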
“Great”, you say, “now, what's all that got to do with NFS and NIS?” Well, the automount maps are actually lists which can be maintained on the NIS server and distributed to the clients. For example, a typical NIS map named auto.home would look like this:
joe  bigboy:/export/home/2/joe
sue  beanbox:/export/home/sue
Here, then, is the reason to have the huge number of mount points mentioned earlier. If Joe changes jobs and joins the finance department, his home directory can be moved to beanbox. His new entry would then read:
joe  beanbox:/export/home/joe
but the mount point on his desktop machine is still /home/joe. In other words, even though he changed to another server, he does not need to adapt any of the environment settings, application data paths or shell scripts he might have. Not convinced? Type grep $HOME $HOME/.* to see how many instances of your home path are actually saved everywhere.
If, during NIS configuration, you edited your /etc/nsswitch.conf to contain the line:
automount: files nis
the automounter will read its startup files from /etc/auto.master. After that, it will query the NIS server for an NIS map named auto.master and process the entries accordingly. Thus, the above change for user Joe needs to be made only once, on one system (the NIS master), and it will be known to all clients. No entries to forget, no conflicting client configurations. How's that for efficiency?
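Conceptually, what the automounter does on each access can be sketched in a few lines of shell: look up the accessed key in the map, split the entry into server and path, and mount it. The map contents below are the auto.home example from above; on a live system they would come from ypcat -k auto.home:

```shell
#!/bin/sh
# auto.home map as 'ypcat -k auto.home' might print it (sample data)
map='joe bigboy:/export/home/2/joe
sue beanbox:/export/home/sue'

user=joe

# Look up the export for $user and split it into server and path
entry=$(printf '%s\n' "$map" | awk -v u="$user" '$1 == u { print $2 }')
server=${entry%%:*}
path=${entry#*:}

echo "would mount $server:$path on /home/$user"
```

A real automounter does the equivalent of mount -t nfs "$server:$path" "/home/$user" at this point, and unmounts it again after the configured idle timeout.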
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- Interview with Patrick Volkerding
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide