Buy One, Get One Free
In an era of decreasing budgets and increasing costs, it is sometimes impossible to follow standard wisdom and use distinct computer systems to achieve different goals. The East Tennessee State University (ETSU) Department of Computer and Information Sciences confronted this problem in spring 1997, when course requirements suggested the need for two computer laboratories:
a stable computer laboratory that would support standard classroom use, and
an experimental, Linux-based computer laboratory (LBCL) that would allow students to change operating system source code.
Pargiter's strategy for supporting a dual-purpose laboratory partitioned the available computers into one trusted PC and a set of client PCs. The trusted PC, known as the kernel server PC, acted as a secure file server for the network as a whole. The client PCs were set aside for user development. Users were directed to use only client PCs, and to boot these PCs according to how they intended to use the systems:
Those who wanted to use the standard Linux kernel would use a designated bootstrap floppy that directed a user PC to download the standard Linux kernel from the kernel server.
Those who wanted to use an experimental Linux kernel would bootstrap their PCs from the local hard drive, where they could store experimental kernels.
Pargiter's dual-boot strategy ensured the integrity of the standard network file system, while making it easier for an experimental user who created a bad kernel—and, consequently, rendered a PC unbootable from the local hard drive—to recover by rebooting from the kernel server PC.
Configuring the six-station network posed two key challenges. The first one stemmed from the need to use surplus PCs with Micro Channel Architecture (MCA) buses—a type of bus the standard Linux 2.0 kernel does not support. The MCA problem was solved by using Chris Beauregard's MCA kernel patch from the Micro Channel Linux home page.
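Applying a kernel patch of this kind follows the standard patch(1) workflow: unpack the patch in the directory above the source tree and let patch rewrite the affected files. A minimal, self-contained illustration of that workflow (using a throwaway file and a toy diff rather than the actual kernel tree and Beauregard's patch):

```shell
# Create a toy "source tree" to stand in for /usr/src/linux.
mkdir -p tree/drivers
printf 'int mca_supported = 0;\n' > tree/drivers/mca.c

# A unified diff, in the form kernel patches are distributed.
cat > mca.patch <<'EOF'
--- tree/drivers/mca.c
+++ tree/drivers/mca.c
@@ -1 +1 @@
-int mca_supported = 0;
+int mca_supported = 1;
EOF

# Apply it from the directory above the tree; -p0 keeps the full path.
patch -p0 < mca.patch

grep mca_supported tree/drivers/mca.c   # shows the patched line
```

A real kernel patch is applied the same way, typically with `patch -p1` from inside the kernel source directory, followed by the usual `make config` and kernel build.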
A second configuration-related challenge arose from the need to conserve disk space on the kernel server PC. Configuring the kernel server PC as a trusted server would help ensure the integrity of the kernel server's file system in the face of random kernel changes applied to satellite PCs. The server, however, would then be required to host one copy of the Linux operating system for every networked PC, leaving little room for user home directories.
The “disk bloat” problem was solved by observing that most of the files in the Linux distribution could be stored once on the server and shared among multiple PCs. This set of shared files included system-independent files like /etc/passwd, which remain constant across the configuration, and satellite-independent files like /etc/exports, which need to be configured only twice: once for the kernel server and once for all client PCs.
Since Linux distributions are not ordinarily partitioned into system-dependent and system-independent files, system-dependent configuration files were moved to a new directory, /adm. Most of the relocated files were originally stored in /etc. For each file moved to /adm, a corresponding soft link pointing to that file was created in each client's /etc directory. Trial and error was used to determine whether a configuration file could be moved to /adm.
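The move-and-link step can be sketched as follows; the scratch directory stands in for the real file system root, and the choice of /etc/HOSTNAME as the relocated file is illustrative:

```shell
# Scratch stand-ins for the real /etc and /adm trees.
mkdir -p root/etc root/adm

# A system-dependent file that must differ on each client.
printf 'lin2\n' > root/etc/HOSTNAME

# Move it into /adm, leaving a soft link behind in /etc so that
# software which opens /etc/HOSTNAME continues to work unchanged.
mv root/etc/HOSTNAME root/adm/HOSTNAME
ln -s ../adm/HOSTNAME root/etc/HOSTNAME

cat root/etc/HOSTNAME   # reads through the link
```

Because the link target is relative, the same /etc entry resolves correctly no matter where a client's tree is mounted.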
The principal goal of the mass storage device configuration was to provide adequate disk space for system executables and files, while reserving the greater portion of disk space for user files. Initially, three mass storage devices were found on the kernel server: a CD-ROM, a 320MB hard drive and a 545MB hard drive.
The 320MB hard drive was configured as the default boot drive /dev/sda. This drive was partitioned into a 50MB swap partition, /dev/sda1, and a second partition, /dev/sda2, for the server's root file system. The first 80MB of the 545MB hard drive, /dev/sdb1, were reserved to house the client operating system images, /tftpboot/Client_IP_Address, and the directory containing shared operating system files, /tftpboot/adm. The remainder of the drive, /dev/sdb2, was set aside for the server's /usr/local directory. In retrospect, it would have been better to partition /dev/sda into three logical disks, reserving the third partition for temporary files such as those found in the /var directory, since the demand for temporary file space changes in response to a system's tasks.
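Under this layout, the server's /etc/fstab would have looked roughly like the following; the file system type and option fields are reconstructed for illustration, not taken from the original configuration:

```
# <device>   <mount point>   <type>   <options>
/dev/sda1    none            swap     sw
/dev/sda2    /               ext2     defaults
/dev/sdb1    /tftpboot       ext2     defaults
/dev/sdb2    /usr/local      ext2     defaults
```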
The CD-ROM drive was included in the server's initial hardware configuration to simplify and speed up the Linux installation. One of the last configuration steps was to replace the CD-ROM with a 320MB hard drive. This third hard drive, /dev/sdc, was initialized with one partition, /dev/sdc1, intended for user home directories, /home. The CD-ROM was then placed on a client system, lin2, and exported to allow other clients and the server to utilize the device.
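On lin2, exporting the mounted CD-ROM would amount to an /etc/exports entry along these lines; the mount point and the names of the other lab hosts are illustrative:

```
# /etc/exports on lin2: share the CD-ROM read-only with the rest of the lab
/cdrom    kernelserver(ro) lin1(ro) lin3(ro) lin4(ro) lin5(ro)
```

The read-only option matches the medium; the other systems then mount the drive over NFS as if it were local.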