Linux System Initialization
As the title indicates, I will discuss, in one form or another, how Linux system initialization works. System initialization starts where the kernel bootup ends. Among the topics I intend to explain are system initialization à la Slackware—a BSD (Berkeley Software Distribution) knock-off—as well as System V (five) initialization à la Red Hat, Caldera, Debian, et al., and I'll also point out the differences between them. You'll soon see that the systems are truly more similar than they are different, despite appearances to the contrary. I will also cover passing switches through LILO to init during the boot process—this is used mostly for emergencies.
What I will not discuss (for brevity's sake) are the details in some of Red Hat's, Caldera's or other initialization scripts, specifically configuration information found in the /etc/sysconfig or /etc/modules directories. For those details, you're on your own. Besides, those details are more subject to change from one release to the next.
Back in the days when UNIX was young, many universities obtained “free” copies of the operating system (OS) and made improvements and enhancements. One was the University of California at Berkeley. This school made significant contributions to the OS, which were later adopted by other universities. A parallel development began in a more commercial environment and eventually evolved into what is now System V. While these two parallel systems shared a common kernel and heritage, they evolved into competing systems. Differences can be found in initialization, in the switches used by a number of common commands (such as ps: under BSD, ps aux is equivalent to System V's ps -ef), in inter-process communication (IPC), in printing and in streams. While most Linux distributions have adopted System V-style init, the BSD command syntax is still predominant. As for IPC, both are available and in general use in Linux distributions. Linux also uses BSD-style printcaps and lacks support for streams.
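The ps difference is easy to see side by side. On a Linux system with the procps tools, both invocations work; the exact columns shown are those of procps and may vary by version:

```shell
# BSD syntax: no leading dash; "aux" = all users' processes,
# user-oriented output (USER, %CPU, %MEM columns)
ps aux | head -3

# System V syntax: leading dash; "-ef" = every process,
# full-format listing (UID, PPID, STIME columns)
ps -ef | head -3
```

The process list is the same either way; only the option syntax and output format differ.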
Under initialization, the biggest difference between the two (BSD and System V) is in the use of init scripts. System V makes use of run levels and independent stand-alone initialization scripts. Scripts are run to start and stop daemons depending on the runlevel (also referred to as the system state), one script per daemon or process subsystem. System V states run from 0 to 6 by default, each runlevel corresponding to a different mode of operation; often, even these few states are not all used. BSD has only two modes (equivalent to System V's runlevels): single-user mode (sometimes referred to as maintenance mode) and multi-user mode. All daemons are started essentially by two (actually more like four to six) scripts—a general system script, either rc.K or rc.M for single- or multi-user mode, respectively, a local script and a couple of special scripts, rc.inet1 and rc.inet2. The system script is usually provided by the distribution creator; the local script is edited by the system administrator and tailored to that particular system. The BSD-style scripts are not independent, but are called sequentially. (The BSD initialization will be most familiar to those coming from the DOS world.) The two main scripts can be compared to config.sys and autoexec.bat, which, by the way, call one or two other scripts. However, the likeness ends there. Having only these few scripts to start everything does not allow for the kind of flexibility System V brings (or so say some). It does, however, make things easier to find. In System V circles (but only in System V circles), BSD initialization is considered obsolete—but what do they know? Like a comfortable pair of shoes, it won't be discarded for a very long time, if ever.
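To make the "one script per daemon" idea concrete, here is a minimal sketch of a System V-style init script. The daemon name exampled and its paths are hypothetical; real scripts live under /etc/rc.d/init.d (Red Hat) or /etc/init.d (Debian) and are invoked with an argument of start or stop as the runlevel changes:

```shell
#!/bin/sh
# Sketch of a System V-style init script for a hypothetical
# daemon called "exampled" -- one stand-alone script per subsystem.
exampled() {
    case "$1" in
        start)
            echo "Starting exampled"
            # /usr/sbin/exampled &                # hypothetical daemon binary
            ;;
        stop)
            echo "Stopping exampled"
            # kill "$(cat /var/run/exampled.pid)" # hypothetical PID file
            ;;
        *)
            echo "Usage: exampled {start|stop}"
            ;;
    esac
}

# The rc machinery would call this as, e.g., /etc/rc.d/init.d/exampled start
exampled start
```

Under BSD-style init, the same start command would simply appear as a line inside rc.M or the local script instead of living in its own file.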
Recall that earlier I said Slackware did a BSD knock-off, and yet it still uses the rc.S/rc.M, et al., scripts. This is because inittab (which we'll look at later) uses the same references to runlevels, and uses those (very much System V) runlevels to decide which scripts to run. In fact, the same init binary is used by all the distributions I have looked at, so there is really less difference between Slackware and Red Hat or Debian than appears on the surface, not at all like older BSD systems, which reference only modes “S” or “M”.
Once the kernel boots, we have a running Linux system. It isn't very usable, since the kernel doesn't allow direct interaction with “user space”. So, the system runs one program: init. This program is responsible for everything else and is regarded as the father of all processes. The kernel then retires to its rightful position as system manager, handling “kernel space”. First, init reads any parameters passed to it from the command line. This command line came from the LILO prompt you saw before the system began to boot the kernel. If you had more than one kernel to choose from, you chose it by name and perhaps put some other boot parameters on the line with it. Any parameters the kernel didn't need were passed on to init. These command-line options override any options contained in init's configuration file. As a good inspection of what's really going on will tell you, runlevels are just a convenient way to group together “process packages” via software. They hold no special significance to the kernel.
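As an illustration (the image label linux here is whatever name your lilo.conf assigns to the kernel), you might type at the boot prompt:

```
LILO boot: linux single
```

The word single means nothing to the kernel, so it is handed to init, which brings the system up in single-user mode; typing linux 3 would request runlevel 3 the same way. (A parameter such as init=/bin/sh, by contrast, is consumed by the kernel itself and replaces init entirely, a last resort for real emergencies.)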
When init starts, it reads its configuration from a file called inittab, which stands for “initialization table”. Any defaults in inittab are discarded if they've been overridden on the command line. The inittab file tells init how to set up the system. Sample Slackware, Red Hat and Debian inittabs are included later in this article.
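Before looking at the full samples, it helps to know that each active inittab line has four colon-separated fields: id:runlevels:action:process. A few illustrative lines (the rc.sysinit and mingetty paths are Red Hat-style; your distribution's will differ):

```
# Set the default runlevel to 3 (multi-user):
id:3:initdefault:
# Run the system initialization script once at boot:
si::sysinit:/etc/rc.d/rc.sysinit
# Keep a getty running on the first virtual console in runlevels 2-5:
1:2345:respawn:/sbin/mingetty tty1
```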