Automated Installation of Large-Scale Linux Networks
Installing Linux on a PC has long been considered a programming guru's domain. It usually takes a novice user weeks or even longer to get the system properly configured. However, with emerging installation techniques and package management, especially from Red Hat, Linux is on the verge of becoming user-friendly. Yet even with these newer methods, one aspect of Linux that remains frustrating is installation on a large scale. This is not because it is difficult, but because it is a monotonous and cumbersome endeavor. The aim of this article is to discuss the basics of a technique that simplifies large-scale installation. A scheme is also discussed for switching on a LAN automatically using Wake-on-LAN technology.
A standard Linux installation asks many questions about what to install, what hardware to configure, how to configure the network interface, etc. Answering these questions once is informative and maybe even fun. But imagine a system engineer who has to set up a new Linux network with a large number of machines. Now, the same issues need to be addressed and the same questions answered repeatedly. This makes the task very inefficient, not to mention a source of irritation and boredom. Hence, a need arises to automate this parameter and option selection.
The thought of simply copying the hard disks naturally crosses one's mind. This can be done quickly, and all the necessary software is copied without any option selection. The catch is that straight disk copying makes the individual computers too similar: it duplicates host-specific settings, which creates the new chore of reconfiguring each PC by hand. For example, the IP address of every machine has to be reset. If this is not done properly, strange and inexplicable behavior results.
Those of us who have worked with Red Hat Linux are probably aware that it already offers a method of automated installation called Kickstart. This useful feature forms the foundation of the methodology we developed. Kickstart allows us to specify beforehand our answers to all the questions asked during the installation process. The specification of the desired installation is listed in a special file called ks.cfg. This file is then placed on an NFS server, and the target system is booted from a floppy disk. The boot prompt of the Red Hat distribution allows you to choose from a number of installation methods; Kickstart is selected by simply entering “ks” at the prompt. If everything has been done properly, voilà! The only message you will get at the end is the declaration of a successful installation.
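To give a feel for what Kickstart automates, here is a minimal sketch of what a ks.cfg of that era might contain. The server address, directory, partition sizes and package group are made-up illustrations, not our actual values, and the exact set of directives varies between Red Hat releases:

```
# ks.cfg -- illustrative sketch only; values below are hypothetical
lang en_US
keyboard us
network --bootproto dhcp
nfs --server 192.168.1.1 --dir /export/redhat
zerombr yes
clearpart --all
part /boot --size 32
part swap --size 128
part / --size 1500 --grow
rootpw changeme
timezone --utc US/Eastern
lilo --location mbr
install
%packages
@ Base
%post
echo "Kickstart finished" > /root/install.log
```

With a file like this on the NFS server, every question the installer would normally ask is answered in advance, so the installation proceeds to completion unattended.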
We were given the task of setting up a Linux laboratory of sixty-four Pentium III machines connected via 100Mbps Ethernet. Sixty machines were to be set up as workstations and four as various servers. With such a large number of machines, it was clear that a powerful means of installation was sorely needed. The power of our technique is evident from the fact that the whole setup process took us about sixty hours, spread out over fifteen days. Let's take a detailed look at the method we adopted.
The sixty computers obtained for use as workstations in our laboratory (see Figure 1) had hard disks but no floppy drives. To get Kickstart running, we needed to remove the case and manually connect a floppy to each machine, boot the machine, install Linux and, finally, remove the floppy. This is a long procedure since floppies go bad all the time and, even if they do not fail, it takes a minute or two waiting for the floppy to load. This can turn into an unpleasant two minutes as you wait with your fingers crossed, watching the screen, just to get the dreaded “Boot failed” message. Moreover, if a disk does go bad, it takes even longer to write another image onto a new disk.
A wiser approach had to be adopted. We merged the Red Hat installation disk with a very fine net-booting package, Etherboot, to obtain a network-bootable image of the disk. Since we also placed this image on an NFS server, only a 16KB loader was needed on the floppy, which would boot up in under twenty seconds. This loader would then retrieve the actual image over the network. A new floppy could easily be made in less than thirty seconds.
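Making such a boot floppy amounts to writing the small loader image raw onto the disk. The sketch below shows the idea; since the real loader binary comes out of an Etherboot build for a specific network card, we stand in a dummy 16KB file for it here, and write to a local file rather than the real device /dev/fd0 so the sketch is safe to run anywhere:

```shell
# Stand-in for the ~16KB Etherboot loader image. In practice this
# would be a card-specific image built from the Etherboot sources
# (the filename here is hypothetical).
dd if=/dev/zero of=loader.zdsk bs=1024 count=16 2>/dev/null

# Write the loader raw onto the floppy. Replace floppy.img with
# /dev/fd0 to write a real disk.
dd if=loader.zdsk of=floppy.img bs=512 2>/dev/null

ls -l floppy.img
```

Because only 16KB is written, the whole operation finishes in seconds, which is why a replacement floppy could be produced so quickly.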
The loader is, in fact, a ROM image; hence, to make it even more reliable, we burned it onto an EPROM. The Red Hat boot-disk image for network installation was kept on a DHCP/TFTP server. To get the installation running, the ROM was plugged into the network card and the machine was booted from the network. The same ROM can be reused to boot other machines. Because the ROM is robust and small, this gave us an efficient way of getting installations running. We call this super-Kickstart.
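On the server side, the DHCP daemon has to hand each booting machine an address and point it at the image to fetch over TFTP. A sketch of an ISC dhcpd entry for one workstation is shown below; the MAC address, IP addresses and filename are invented for illustration, and the exact syntax depends on the dhcpd version in use:

```
# /etc/dhcpd.conf -- illustrative sketch; all values are hypothetical
subnet 192.168.1.0 netmask 255.255.255.0 {
    host ws01 {
        hardware ethernet 00:A0:C9:12:34:56;
        fixed-address 192.168.1.101;        # address for this workstation
        next-server 192.168.1.1;            # TFTP server holding the image
        filename "/tftpboot/bootnet.img";   # network-bootable install image
    }
}
```

One such host entry per workstation lets the loader in the ROM find and fetch the full installation image without any manual intervention at the machine.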