Linux Kernel Installation
We're now ready to begin building the kernel. There are several options for accomplishing this task:
make zImage: makes the basic, compressed kernel and leaves it in the /usr/src/linux/arch/i386/boot directory as zImage.
make zlilo: Copies the zImage to the root directory (unless you edited the top-level Makefile) and runs LILO. If you choose to use this option, you'll have to ensure that /etc/lilo.conf is preconfigured.
make zdisk: Writes zImage to a floppy disk in /dev/fd0 (the first floppy drive, the a: drive in DOS). You'll need the disk in the drive before you start. You can accomplish the same thing by running make zImage and copying the image to a floppy disk:

cp /usr/src/linux/arch/i386/boot/zImage /dev/fd0

Note that you'll need to use a high-density disk; the low-density 720K disks reportedly will not boot the kernel.
make boot: Works just the same as the zImage option.
make bzImage: Used for big kernels and operates the same as zImage. You will know if you need this option, because make will fail with a message that the image is too big.
make bzdisk: Used for big kernels and operates the same as zdisk. You will know if you need this option, because make will fail with a message that the image is too big.
Other make options are available, but they are specialized and not covered here. Also, if you need specialized support, such as for a RAM disk or SMP, read the appropriate documentation and edit the Makefile in /usr/src/linux (also called the top-level Makefile) accordingly. Since all the options discussed above are basically variations on zImage, the rest of this article deals with make zImage, the easiest way to build the kernel.
For those of you who wish to speed up the process and won't be doing other things (such as configuring other applications), I suggest you look at the man page for make and try out the -j option (perhaps with a limit like 5) and also the -l option.
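As a sketch, here is how those flags combine. A throwaway Makefile stands in for the kernel tree so the example is safe to run anywhere; for the real build you would run make -j5 -l5 zImage in /usr/src/linux, and the limit of 5 is just an example:

```shell
cd "$(mktemp -d)"      # scratch directory, not the real kernel tree
# A trivial two-target Makefile to demonstrate parallel jobs.
printf 'all: a b\na:\n\t@echo job-a\nb:\n\t@echo job-b\n' > Makefile
# -j5 allows up to five jobs at once; -l5 holds back new jobs when the
# system load average climbs above 5.
make -j5 -l5 all
```

With -j alone, make will happily saturate the machine; adding -l keeps the build from crowding out other work.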
If you chose modules during the configuration process, you'll want to issue the commands:
make modules
make modules_install
to put the modules in their default location of /lib/modules/2.0.x/, where x is the kernel minor number. If you already have this subdirectory and it has subdirectories such as block, net, scsi and cdrom, you may want to remove 2.0.x and everything below it, unless you have proprietary modules installed, in which case leave it in place. When the modules are installed, the subdirectories are created and populated.
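The cleanup described above can be sketched as follows. A throwaway directory stands in for /lib/modules so the example is safe to run, and the version number 2.0.30 is hypothetical:

```shell
MODDIR=$(mktemp -d)        # stand-in for /lib/modules
KVER=2.0.30                # hypothetical kernel version
# Create a fake old module tree, then remove it.
mkdir -p "$MODDIR/$KVER/block" "$MODDIR/$KVER/net"
rm -rf "$MODDIR/${KVER:?}" # ${KVER:?} aborts if KVER is unset or empty
[ -d "$MODDIR/$KVER" ] || echo "old module tree removed"
```

The ${KVER:?} expansion is a small safety net: if the variable were ever empty, the rm would abort instead of operating on the parent directory.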
You could just as easily have combined the last three commands:
make zImage; make modules; make modules_install
then returned after all the disk churning finished. The ; (semicolon) character separates sequential commands on one line and performs each command in order so that you don't have to wait around just to issue the next command.
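A harmless way to watch this behavior, with echo standing in for the make commands:

```shell
# Each command runs in sequence, whether or not the previous one succeeded.
echo "building kernel"; echo "building modules"; echo "installing modules"
```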
Once your kernel is built and your modules installed, we have a few more items to take care of. First, copy your kernel to the root (or /boot/ or /etc/, if you wish):
cp /usr/src/linux/arch/i386/boot/zImage /zImage
You should also copy the /usr/src/linux/System.map file to the same directory as the kernel image. Then change (cd) to the /etc directory to configure LILO. This is a very important step: if we don't install a pointer to the new kernel, it won't boot. Normally, an installed kernel is called vmlinuz. Old-time Unix users will recognize the construction of this name: the trailing “z” means the image is compressed, and the “v” and “m” mean “virtual” and “sticky” respectively, pertaining to memory and disk management. I suggest you leave the vmlinuz kernel in place, since you know it works.
Edit the /etc/lilo.conf file to add your new kernel. Copy the lines from the image=/vmlinuz line down to the next image= line (or the end of the file). Duplicate what you see, then change the first line of the copy to image=/zImage (assuming your kernel is in the root directory) and choose a different name for the label=. The first image in the file is the default; the others will have to be specified at the LILO prompt in order to boot them. Save the file and type:

lilo
You will now see the kernel labels, and the first one will have an asterisk. If you don't see the label that you gave your new kernel or LILO terminates with an error, you'll need to redo your work in /etc/lilo.conf (see LILO man pages).
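For reference, a stanza for the new kernel in /etc/lilo.conf might look like the following; the label, root device and path are illustrative, so keep your working vmlinuz stanza as the first (default) entry:

```
image=/zImage
    label=new
    root=/dev/hda1
    read-only
```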
We're almost ready to reboot. At this point, if you know your system will only require one reboot to run properly, you might want to issue the command:
depmod -a 2.0.x
where x is the minor number of the kernel you just built. This command creates the dependencies file some modules need. You'll also want to make sure you don't boot directly into xdm. For Red Hat type systems, this means ensuring the /etc/inittab file doesn't have a default run level of 5, or that you remember to pass LILO the run level at boot time. For Debian systems, you can just type:
mv /etc/init.d/xdm /etc/init.d/xdm.orig

for now and move it back later.
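On a Red Hat-style system, you can check the default runlevel before rebooting. This sketch greps a sample inittab line so it is safe to run anywhere; on a real system you would point it at /etc/inittab:

```shell
# 'id:5:initdefault:' would mean the system boots straight into xdm.
printf 'id:3:initdefault:\n' | awk -F: '/initdefault/ { print "default runlevel: " $2 }'
# → default runlevel: 3
```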