Buy One, Get One Free
In an era of decreasing budgets and increasing costs, it is sometimes impossible to follow standard wisdom and use distinct computer systems to achieve different goals. The East Tennessee State University (ETSU) Department of Computer and Information Sciences confronted this problem in spring 1997, when course requirements suggested the need for two computer laboratories:
a stable computer laboratory that would support standard classroom use, and
an experimental, Linux-based computer laboratory (LBCL) that would allow students to change operating system source code.
Pargiter's strategy for supporting a dual-purpose laboratory partitioned the available computers into one trusted PC and a set of client PCs. The trusted PC, known as the kernel server PC, acted as a secure file server for the network as a whole. The client PCs were set aside for user development. Users were directed to use only client PCs, and to boot these PCs according to how they intended to use the systems:
Those who wanted to use the standard Linux kernel would use a designated bootstrap floppy that directed a user PC to download the standard Linux kernel from the kernel server.
Those who wanted to use an experimental Linux kernel would bootstrap their PCs from the local hard drive, where they could store experimental kernels.
Pargiter's dual-boot strategy ensured the integrity of the standard network file system, while making it easier for an experimental user who created a bad kernel—and, consequently, rendered a PC unbootable from the local hard drive—to recover by rebooting from the kernel server PC.
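A client's local-boot path could have been arranged with a LILO entry along the following lines. This is only a sketch: the article does not describe the clients' boot loader configuration, and the device names, kernel path, and label here are assumptions.

```
# Hypothetical /etc/lilo.conf fragment for a client PC's local drive
# (device names, paths and labels are assumptions, not from the article):
boot = /dev/sda            # install LILO in the local master boot record
root = /dev/sda1           # local root file system
image = /vmlinuz.test      # a student's experimental kernel
    label = experimental
```

Under this arrangement, a kernel that fails to boot costs the student nothing permanent: rebooting from the designated floppy fetches the known-good kernel from the server.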
Configuring the six-station network posed two key challenges. The first stemmed from the need to use surplus PCs with Micro Channel Architecture (MCA) buses, a bus type the standard Linux 2.0 kernel does not support. The MCA problem was solved by applying Chris Beauregard's MCA kernel patch from the Micro Channel Linux home page.
A second configuration-related challenge arose from the need to conserve disk space on the kernel server PC. Configuring the kernel server PC as a trusted server would help ensure the integrity of the kernel server's file system in the face of random kernel changes applied to satellite PCs. The server, however, would then be required to host one copy of the Linux operating system for every networked PC, leaving little room for user home directories.
The Linux “disk bloat” problem was solved by observing that most of the files in the Linux distribution could be stored once on the server and shared among multiple PCs. This set of shared files included system-independent files like /etc/passwd, which remained constant across the configuration, and satellite-independent files like /etc/exports, which needed to be configured only twice: once for the kernel server and once for all client PCs.
Since Linux distributions are not ordinarily partitioned into system-dependent and system-independent files, the system-independent configuration files were moved to a new directory, /adm. Most of the relocated files originally resided in /etc. For each file moved to /adm, a corresponding soft link pointing to that file was created in each client's /etc directory. Trial and error determined whether a given configuration file could safely be moved to /adm.
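The relocate-and-link step can be sketched in a few shell commands. The file and paths below are illustrative assumptions, and the demonstration runs in a scratch directory rather than on a live file system:

```shell
# Hypothetical demonstration of the /etc-to-/adm relocation, run in a
# scratch directory instead of the live file system (paths are assumptions).
mkdir -p demo/adm demo/etc
echo "root:x:0:0:root:/root:/bin/bash" > demo/etc/passwd   # a shared file
mv demo/etc/passwd demo/adm/passwd                         # store it once in /adm
ln -s ../adm/passwd demo/etc/passwd                        # leave a soft link behind
cat demo/etc/passwd                                        # resolves through the link
```

On the real systems each link would point at an absolute path such as /adm/passwd; a relative link is used here only so the demonstration stays inside the scratch directory.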
The principal goal of the mass storage configuration was to provide adequate disk space for system executables and files, while reserving the greater portion of disk space for user files. Initially, three mass storage devices were attached to the kernel server: a CD-ROM drive, a 320MB hard drive, and a 545MB hard drive.
The 320MB hard drive was configured as the default boot drive, /dev/sda. This drive was partitioned into a 50MB swap partition, /dev/sda1, and a second partition, /dev/sda2, for the server's root file system. The first 80MB of the 545MB hard drive, /dev/sdb1, were reserved for the clients' operating system images, /tftpboot/Client_IP_Address, and the directory of shared operating system files, /tftpboot/adm. The remainder of the drive, /dev/sdb2, was set aside for the server's /usr/local directory. In retrospect, it would have been better to partition /dev/sda into three partitions, reserving the third for temporary files such as those in /var, since the demand for temporary file space changes with a system's workload.
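The server's layout could be captured in an /etc/fstab along these lines. Only /usr/local and the /tftpboot paths are named in the text; the remaining mount points and the ext2 file-system type are era-appropriate assumptions:

```
# Hypothetical /etc/fstab on the kernel server, reflecting the layout above
# (mount points other than those named in the text are assumptions):
/dev/sda1   none         swap   sw        0 0
/dev/sda2   /            ext2   defaults  1 1
/dev/sdb1   /tftpboot    ext2   defaults  1 2
/dev/sdb2   /usr/local   ext2   defaults  1 2
```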
The CD-ROM drive was included in the server's initial hardware configuration to simplify and speed up the Linux installation. One of the last configuration steps was to replace the CD-ROM with a third hard drive, a 320MB unit. This drive, /dev/sdc, was initialized with a single partition, /dev/sdc1, intended for user home directories, /home. The CD-ROM was then moved to a client system, lin2, and exported so that the other clients and the server could use the device.
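The export itself would amount to a short entry in lin2's /etc/exports. The mount point and host names below are assumptions for illustration; the article does not list the lab's host names beyond lin2:

```
# Hypothetical /etc/exports entry on lin2, sharing its CD-ROM read-only
# with the rest of the lab (mount point and host names are assumptions):
/cdrom   lin1(ro) lin3(ro) lin4(ro) lin5(ro) kserver(ro)
```

Each importing machine would then mount the drive over NFS, e.g. at its own /cdrom mount point.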