In UNIX parlance, the word “init” doesn't identify a specific program, but rather a class of programs. The name “init” is used generically to denote the first process executed at system boot—actually, the only process that is executed at system boot. When the kernel has finished setting up the computer's hardware, it invokes init and relinquishes control of the computer. From that point on, the kernel only services system calls, without taking any decisional role in system operation. After the kernel mounts the root file system, everything is controlled by init.
Currently, several choices of init are available. You can use the now-classic program that comes with the SysVinit package by Miquel van Smoorenburg, simpleinit by Peter Orbaek (found in the source package of util-linux), or a simple shell script (such as the one shown in this article, which has a lot less functionality than any C-language implementation). If you set up embedded systems, you can even run the target application as if it were init. Masochistic people who dislike multitasking could even port command.com to Linux and run it as the init process, although you won't ever be able to restrict yourself to 640KB when running a Linux kernel.
No matter which program you choose, it must be reachable at one of the path names /sbin/init, /etc/init or /bin/init, because these path names are compiled into the kernel. If none of them can be executed, the system is severely broken, and the kernel falls back to spawning a root shell to allow interactive recovery (i.e., /bin/sh is used as the init process).
To achieve maximum flexibility, kernel developers offer a way to select a different path name for the init process. The kernel accepts a command-line option of init= exactly for that purpose. Kernel options can be passed interactively at boot time, or you can use the append= directive in /etc/lilo.conf. Silo, Milo, Loadlin and other loaders allow specifying kernel options as well.
As you may imagine, the easiest way to get root access to a Linux box is by typing init=/bin/sh at the LILO prompt. Note that this is not a security hole per se, because the real security hole is physical access to the console. If you are concerned about the init= option, LILO can prevent interaction using its own password protection.
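As a concrete illustration of the append= directive, here is a minimal /etc/lilo.conf fragment; the kernel image path and root device below are hypothetical and will differ on your system.

```
# /etc/lilo.conf fragment (image path and root device are hypothetical)
image=/boot/vmlinuz
    label=rescue
    root=/dev/hda1
    read-only
    # pass init= to the kernel every time this image boots:
    append="init=/bin/sh"
```

Remember to re-run lilo after editing the file, and consider the password and restricted directives if unattended console access worries you.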
Now we know that init is a generic name, and almost anything can be used as init. The question is: what is a real init supposed to do?
Being the first (and only) process spawned by the kernel, the task of init consists of spawning every other process in the system, including the various daemons used in system operation as well as any login session on the text console.
init is also expected to restart some of its child processes as soon as they exit. This typically applies to login sessions running on the text consoles. As soon as you log out, the system should run another getty to allow starting another session.
init should also collect dead processes and dispose of them. In the UNIX abstraction of processes, a process can't be removed from the system table unless its death is reported to its parent (or another ancestor in case its parent doesn't exist anymore). Whenever a process dies by calling exit or otherwise, it remains in the state of a zombie process until someone collects it. init, being the ancestor of every other process, is expected to collect the exit status of any orphaned zombie process. Note that every well-written program should reap its own children—zombies exist only when some program is misbehaving. If init didn't collect zombies, lazy programmers could easily consume system resources and hang the system by filling the process table.
The last task of init is handling system shutdown. The init program must stop every process and unmount all the file systems when the superuser indicates that shutdown time has arrived. The shutdown executable doesn't do anything by itself; it only tells init that everything is over.
As we have seen, the task of init is not too difficult to implement, and a shell script could perform most of the required tasks. Note that every decent shell collects its dead children, so this is not a problem with shell scripts.
What real init implementations add to the simple shell-script approach is greater control over system activity, and thus a huge benefit in overall flexibility.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.