Introducing the Network Information Service for Linux
As networks of computers grow, it is increasingly important to share common system information between them to present a unified appearance to the computer user. To share files, we use the NFS or SMB (Samba) protocols to communicate with a file server. Mail is often kept in a central location and accessed from clients with the POP3 or IMAP protocols. However, most Linux distributions aren't ready “out of the box” to share configuration file information. This doesn't have to be the case.
The Network Information Service (NIS) is a system originally invented by Sun Microsystems for distributing shared configuration files seamlessly across a network. Files such as /etc/passwd and /etc/group become more and more difficult to maintain by hand as the number of machines grows beyond two or three. For instance, if user foo changes his password on one computer in a work group or cluster, he will probably expect to be able to sit down at a machine across the room and use his new password to log in. However, because the password configuration file is not shared, the reality is that he now has two passwords: the new one on the machine he just used and the old one on all the other machines. Each time he uses a new computer he will have to change his password. As the number of machines increases, this becomes more than merely annoying.
While schemes other than NIS exist for keeping files synchronized, they are generally “hacks” involving rcp and cron and leave much to be desired in the way of flexibility. Additionally, NIS has become a UNIX industry standard, supported by many of the commercial UNIX vendors, including IBM (AIX), Hewlett Packard (HP-UX) and of course Sun (SunOS/Solaris). As a result, when setting up Linux computers in a heterogeneous UNIX environment, NIS is often the rule rather than the exception.
There are two major versions of NIS. The first, called NIS but historically referred to as Yellow Pages or YP, is well-supported by Linux. The second, called NIS+, is a new implementation of NIS which attempts to address security issues in the original protocol through encryption and other methods. NIS+ support in Linux is of beta quality and is readily available only in the new glibc/libc6, which currently applies to a very small fraction of the installed Linux base. NIS+ is best suited for very large installations: in most cases the “original” NIS is sufficient and reasonably secure when the proper precautions are taken.
Linux has included most of the components necessary to support NIS for a long time, but unfortunately distributions such as Slackware and Red Hat fail to provide adequate setup information or a complete set of tools.
Documentation has also historically been severely lacking; the NIS-HOWTO was neglected and outdated for a very long time, and it was only in November of 1997 that it finally received some major updates from one of the new NIS maintainers. In general, setting up NIS on Linux has been perceived as one of the hardest administrative tasks an administrator could undertake. I hope to change that sentiment with this article.
NIS support involves several things. To serve configuration files, one computer acts as an NIS “master”. It is possible to have several NIS server machines, but machines in addition to the master are called “slaves” and are provided only for fail-safe redundancy. All of the other computers that receive information from the server computer(s) are called “clients”. This article deals primarily with setting up Linux NIS clients, but describes the basics of configuring Linux to act as a server as well. A description of configuring NIS slaves and other advanced features is beyond the scope of this article. If you need more information in these areas, I suggest that you obtain the excellent book from O'Reilly, Managing NFS and NIS, which, although Sun-oriented, has much information applicable to Linux as well.
If you do not have an NIS master server on your network, you will need to set one up. If you do have one, you can skip this section. The package for setting up NIS servers is logically called ypserv, and the latest stable version at the time of this writing (February) is 1.2.6. Unfortunately, Red Hat 4.2 ships with an older version that you probably don't want to use. There is a contributed RPM in the directory /contrib/hurricane/SRPMS on the Red Hat mirror sites, and if you get the source RPM, you can rebuild it and use it. Otherwise, the URL at which the original distribution can be found is ftp://ftp.uni-paderborn.de/linux/local/yp/.
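Building from the source RPM looks something like this (a sketch; the exact package file and version names in the contrib directory will differ):

```shell
# Rebuild the contributed source RPM and install the resulting binary
# package (file names are illustrative; the build output lands under
# /usr/src/redhat on a stock Red Hat 4.2 system).
rpm --rebuild ypserv-1.2.6-1.src.rpm
rpm -Uvh /usr/src/redhat/RPMS/i386/ypserv-1.2.6-1.i386.rpm
```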
After configuring and installing the ypserv package (the default locations it specifies are fine), make sure that it will be started at boot time. The RPM will install the appropriate SysV-style init scripts, but if you are using another Linux distribution, you will have to add a command to start /usr/sbin/ypserv in the configuration file from which network daemons are started, usually /etc/rc.d/rc.inet2.
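On a Slackware-style system, the addition to /etc/rc.d/rc.inet2 might look something like this (a sketch; NIS is RPC-based, so ypserv must start after the portmapper):

```shell
# Start the NIS server if it is installed.  This must come after the
# portmapper (rpc.portmap) is started, since ypserv registers with it.
if [ -x /usr/sbin/ypserv ]; then
  echo -n " ypserv"
  /usr/sbin/ypserv
fi
```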
The files an NIS server makes available to its clients are not the actual local configuration files, but “compiled” versions of these that are referred to as the NIS “maps”. The process of creating NIS maps is done with a Makefile, and so I refer to the process as akin to compiling. NIS maps are in the gdbm UNIX database format for quick random access. The main directory where this all takes place is /var/yp.
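The compilation analogy can be seen in miniature with a toy Makefile. This is only an illustration, not the real /var/yp/Makefile: the real one additionally pipes each file through a converter (makedbm) to produce the gdbm databases.

```shell
# Toy sketch of how the /var/yp build works: a make rule rebuilds a
# derived "map" only when its source file has changed.  The paths and
# file contents here are invented for the demonstration.
mkdir -p /tmp/yp-demo && cd /tmp/yp-demo
printf 'root:x:0:0:root:/root:/bin/sh\n' > passwd
# The recipe line must begin with a tab, which printf's \t provides.
printf 'passwd.byname: passwd\n\tcp passwd passwd.byname\n' > Makefile
make          # first run: builds passwd.byname from passwd
make          # second run: nothing to do, the map is up to date
```

Running make a second time does nothing, which is exactly why a periodic rebuild of the real maps is cheap: only maps whose source files changed are regenerated.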
You also have to update a few configuration files specifically for your site. If you don't want all the traditional files published via NIS to be available to clients, you will have to edit /var/yp/Makefile and remove the rules for files you wish to omit. A typical rule might look like this:
networks: networks.byaddr networks.byname
A stock Red Hat 4.2 system doesn't come with the /etc/networks file; you will get an error when running make if the file is not present. Comment out this rule with a “#” character at the beginning of the line. Luckily, the Makefile includes a great deal of documentation for all the different options, so making changes to it is a breeze.
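The edit itself is just a comment character in front of the offending rule, along these lines (the exact layout varies between ypserv versions; some versions instead list the maps to build in an `all:` target, in which case you remove the map name from that list):

```
# Commented out: this host has no /etc/networks file.
#networks: networks.byaddr networks.byname
```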
For security reasons you may also want to edit /etc/ypserv.conf and change a few defaults. The man page for ypserv.conf explains each of the options. The most important ones are sunos_kludge, which you will have to set to “yes” if you have old SunOS 4.1.x machines on your NIS network, and the “hosts” section that allows you to include and exclude certain hosts from accessing specific NIS maps on the server. The ypserv daemon also supports the TCP wrappers package, and this gives you another option for fine-grained access control. This is not an article on security, though, so if you are installing the server in an environment where security is of the utmost importance, read the documentation that comes with the server carefully.
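A sketch of such an edited ypserv.conf follows. The host-rule syntax shown here is an assumption based on the classic four-field format; check the ypserv.conf man page shipped with your version before relying on it:

```
# /etc/ypserv.conf (sketch -- consult ypserv.conf(5) for your version)
sunos_kludge: no
dns: no
# Host access rules take the form  host : map : security [: mangle]
# where security is one of none, port, deny or des.  For example, to
# serve the passwd map only to the (hypothetical) 192.168.1 network:
192.168.1.   : passwd.byname : port
*            : passwd.byname : deny
```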
You're almost done; the last thing you need to do is to create the initial build of NIS maps from the local configuration files. To do this, run make in /var/yp. This will create a lot of files that you will never have to alter, but these are the files that actually get served to the clients. If you get an error during the make process, chances are that one of the files make is trying to convert to an NIS map is not present. Check for its presence; if it is not there or if you don't want it to be built, comment it out of the Makefile.
If you want users to be able to change their passwords on the client machines instead of only on the server, you must run the rpc.yppasswdd daemon. It comes with the NIS server and has good documentation. All it has to do is run at boot time after ypserv starts up.
Now the setup is done. Reboot your machine, and when it comes back up, use ps to make sure ypserv and rpc.yppasswdd are running. If they are, clients should be able to connect. There are only a few final notes before moving on to client setup. First, remember that the NIS maps are not the same as the local configuration files, even though they are created from them. Each time you change a local file such as /etc/passwd or /etc/netgroup, you need to rebuild the NIS maps by going into /var/yp and running make. You might want to set up a cron job to do this once an hour in case you make some changes but forget to rebuild. Second, if you want to be able to see the NIS maps on your server, you will have to configure it as a client. Follow the steps in the next section to do so.
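Such a cron job can be a single root crontab entry along these lines (illustrative; adjust the schedule to taste):

```
# Rebuild the NIS maps at the top of every hour, in case a source
# file changed but make was never run by hand.
0 * * * * cd /var/yp && make > /dev/null 2>&1
```

Because make only regenerates maps whose source files have changed, the hourly run costs essentially nothing when there is nothing to do.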