Buy One, Get One Free

Configuring Linux as a dual-purpose user environment.

In an era of decreasing budgets and increasing costs, it is sometimes impossible to follow standard wisdom and use distinct computer systems to achieve different goals. The East Tennessee State University (ETSU) Department of Computer and Information Sciences confronted this problem in spring 1997, when course requirements suggested the need for two computer laboratories:

  • a stable computer laboratory that would support standard classroom use, and

  • an experimental, Linux-based computer laboratory (LBCL) that would allow students to change operating system source code.

The obvious solution, which involved establishing separate laboratories, could not be implemented due to space and budgetary limitations. Therefore, ETSU system administrator Luke Pargiter decided to establish a single networked laboratory that could operate as either an experimental or a stable platform, depending on how its machines were booted.

Pargiter's strategy for supporting a dual-purpose laboratory partitioned the available computers into one trusted PC and a set of client PCs. The trusted PC, known as the kernel server PC, acted as a secure file server for the network as a whole. The client PCs were set aside for user development. Users were directed to use only client PCs, and to boot these PCs according to how they intended to use the systems:

  • Those who wanted to use the standard Linux kernel would use a designated bootstrap floppy that directed a user PC to download the standard Linux kernel from the kernel server.

  • Those who wanted to use an experimental Linux kernel would bootstrap their PCs from the local hard drive, where they could store experimental kernels.

Pargiter's dual-boot strategy ensured the integrity of the standard network file system, while making it easier for an experimental user who created a bad kernel—and, consequently, rendered a PC unbootable from the local hard drive—to recover by rebooting from the kernel server PC.
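The article does not show how an experimental kernel was selected on a client's local hard drive. Assuming LILO was the boot loader, and using illustrative device and file names, a client's /etc/lilo.conf might have contained a fragment along these lines:

    # Hypothetical /etc/lilo.conf fragment on a client's local hard drive.
    # Device and image names are illustrative.
    boot=/dev/hda                 # install the boot loader on the local drive
    image=/boot/vmlinuz-exp       # an experimental kernel built by a student
        label=experimental
        root=/dev/hda1            # local root file system for experimental work
        read-only

Booting from the designated floppy instead brings a client up under the standard kernel supplied by the kernel server, bypassing the local loader entirely.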

Configuration Difficulties

Configuring the six-station network posed two key challenges. The first stemmed from the need to use surplus PCs with Micro Channel Architecture (MCA) buses, a bus type that the standard Linux 2.0 kernel did not support. The MCA problem was solved by applying Chris Beauregard's MCA kernel patch from the Micro Channel Linux home page.
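The article does not list the build steps; for a Linux 2.0 source tree, applying a patch and rebuilding would have looked roughly like the following (the patch file name is illustrative):

    # Hypothetical sketch: apply the MCA patch and rebuild a 2.0 kernel.
    cd /usr/src/linux
    patch -p1 < /tmp/mca-patch.diff    # patch file name is illustrative
    make config                        # answer the newly added MCA prompts
    make dep && make clean             # rebuild dependency information
    make zImage                        # produce a bootable kernel image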

A second configuration-related challenge arose from the need to conserve disk space on the kernel server PC. Configuring the kernel server PC as a trusted server would help ensure the integrity of its file system in the face of arbitrary kernel changes applied to the satellite PCs. Because every client would then boot and run from operating system images stored on the server, however, a naive configuration would require the server to host one complete copy of the Linux operating system for every networked PC, leaving little room for user home directories.

This “disk bloat” problem was solved by observing that most of the files in a Linux distribution could be stored once on the server and shared among multiple PCs. The set of shareable files included system-independent files like /etc/passwd, which remain constant across the entire configuration, and satellite-independent files like /etc/exports, which need to be configured only twice: once for the kernel server and once for all client PCs.
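As a concrete illustration, the kernel server's /etc/exports might have contained entries along the following lines. The host name lin1 and the addresses are hypothetical (lin2 is the client named later in the article), and the /tftpboot paths follow the layout described in the next section:

    # Hypothetical /etc/exports on the kernel server.
    # Each client may write only to its own boot image; the shared tree and
    # the user home directories are exported to all clients.
    /tftpboot/192.168.1.11   lin1(rw,no_root_squash)
    /tftpboot/192.168.1.12   lin2(rw,no_root_squash)
    /tftpboot/adm            lin1(ro) lin2(ro)
    /home                    lin1(rw) lin2(rw)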

Since Linux distributions are not ordinarily partitioned into system-dependent and system-independent files, the configuration files that could be shared in this way were moved to a new directory, /adm. Most of the relocated files originally resided in /etc. For each file moved to /adm, a corresponding soft link pointing to that file was created in each client's /etc directory. Trial and error was used to determine whether a given configuration file could safely be moved to /adm.
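For example, relocating one shared file and linking a client's /etc back to it might have been done on the kernel server roughly as follows; the client image path, and the assumption that clients see the server's /tftpboot/adm directory as /adm, are illustrative:

    # Hypothetical sketch, run on the kernel server for one client image.
    mv /tftpboot/192.168.1.11/etc/passwd /tftpboot/adm/passwd    # keep a single shared copy
    ln -s /adm/passwd /tftpboot/192.168.1.11/etc/passwd          # link the client's /etc to it

Repeating the link step in every client image leaves one shared copy of the file while preserving the familiar /etc path on each client.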

Server Mass Storage Device Configuration

The principal goal of the mass storage device configuration was to provide adequate disk space for system executables and files, while reserving the greater portion of disk space for user files. Initially, the kernel server held three mass storage devices: a CD-ROM drive, a 320MB hard drive, and a 545MB hard drive.

The 320MB hard drive was configured as the default boot drive, /dev/sda. This drive was partitioned into a 50MB swap partition, /dev/sda1, and a second partition, /dev/sda2, for the server's root file system. The first 80MB of the 545MB hard drive, /dev/sdb1, was reserved to house the clients' operating system images, /tftpboot/Client_IP_Address, and the directory containing the shared operating system files, /tftpboot/adm. The remainder of the drive, /dev/sdb2, was set aside for the server's /usr/local directory. In retrospect, it would have been better to partition /dev/sda into three partitions, reserving the third for temporary files such as those found in the /var directory, since the demand for temporary file space changes with a system's workload.
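Expressed as a hypothetical /etc/fstab fragment on the kernel server (the /tftpboot mount point for /dev/sdb1 is inferred from the paths above, and the options are illustrative), the resulting layout would be:

    # Hypothetical /etc/fstab fragment on the kernel server.
    /dev/sda2   /            ext2   defaults   1 1
    /dev/sda1   none         swap   sw         0 0
    /dev/sdb1   /tftpboot    ext2   defaults   1 2
    /dev/sdb2   /usr/local   ext2   defaults   1 2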

The CD-ROM drive was included in the server's initial hardware configuration to simplify and speed up the Linux installation. One of the last configuration steps was to replace the CD-ROM drive with a third hard drive, another 320MB unit. This drive, /dev/sdc, was initialized with a single partition, /dev/sdc1, intended for user home directories, /home. The CD-ROM drive was then installed in a client system, lin2, and exported so that the other clients and the server could still use the device.
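The article does not show how the drive was shared; one straightforward arrangement, sketched with illustrative device names and mount points, would be:

    # Hypothetical sketch: sharing lin2's CD-ROM with the rest of the laboratory.
    # On lin2 -- mount the drive and export the mount point read-only
    # (reload the NFS daemons after editing /etc/exports):
    mount -t iso9660 /dev/cdrom /mnt/cdrom
    echo '/mnt/cdrom  (ro)' >> /etc/exports
    # On the server or another client -- mount the shared drive over NFS:
    mount -t nfs lin2:/mnt/cdrom /mnt/cdrom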
