Self-Hosting Movies with MoviX
To create a bootable CD containing the kernel and filesystem created above, you need a boot image. The most convenient choice is the IsoLinux boot image, isolinux.bin, part of the SysLinux package; it lets the system load the initrd.gz no matter what its size.
Begin by creating a new directory, say /cdrom. Then, create a /cdrom/isolinux/kernel directory; put the initrd.gz and isolinux.bin files in /cdrom/isolinux and the kernel in /cdrom/isolinux/kernel. Finally, inside /cdrom/isolinux, create an isolinux.cfg file to tell the boot loader which boot options you want to use (see Listing 2).
The format of this file is similar to the lilo.conf format; consult the SysLinux web site for detailed information. A nice improvement is the ability to call up to ten text files from the boot prompt, using the F1-F10 keys. That is, users can access documentation about boot parameters right at boot time, directly from the CD. For the type of distribution we are talking about, this is a useful feature. Another nice feature is the ability to display images rather than text, making it possible, for example, to add a “splash” boot logo to the distribution (16 colors at most; otherwise, try the BootScriptor package).
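Listing 2 is not reproduced here, but a minimal isolinux.cfg along the lines described above might look as follows. This is a sketch, not the actual MoviX configuration: the label, kernel path and help-file names are illustrative assumptions.

```
display boot.msg
prompt 1
timeout 100

F1 help.txt
F2 params.txt

default linux
label linux
  kernel kernel/vmlinuz
  append initrd=initrd.gz root=/dev/ram0
```

The F1 and F2 lines are what make the boot-time documentation mentioned above available: pressing those keys at the boot prompt displays the named text files from the CD.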
To produce a bootable CD image, run mkisofs with the following options:
mkisofs -o /tmp/distro.iso -r -V "My distro" -v -no-emul-boot \
    -boot-load-size 4 -boot-info-table -b isolinux/isolinux.bin \
    -c isolinux/isolinux.boot /cdrom
Then, burn the image:
cdrecord dev=0,0 -v -eject /tmp/distro.iso
Now you can reboot the system to be sure the burning was successful and the CD is really bootable.
As is, this CD already is a pretty good hardware-checking tool, and it easily can be turned into a good recovery tool. Indeed, simply by booting from it, you can find out the brand and model of most PC hardware (everything except ISA cards) by looking at /proc/pci and /proc/cpuinfo. By taking a look at the kernel boot log with dmesg, you also may find information on the PnP ISA cards. By adding binaries such as e2fsck, you have all the tools necessary to recover a Linux system experiencing problems.
On the other hand, no card is supported by the system at this point—no NIC, no audio card, no SCSI card, nothing. Although this probably is okay for a rescue CD, it most likely is not okay for our mini-distribution.
The standard way of activating kernel hardware support is to use kernel modules, but simply loading all possible modules is not a good idea. You need some autodetection tool. Several autodetection tools have been developed by big Linux distributions: kudzu (Red Hat), libdetect (Mandrake, but now they use kudzu too), discover (Progeny). But these seem much too complex for the kind of small distribution we are building.
Luckily, you can base a simple autodetection procedure on the devfs. Indeed, its automatic creation of device nodes can be used as an effective way of checking whether some device has been recognized by the kernel. For example, the device node /dev/sound/dsp automatically is created only when you load the right module for your audio card. So, you can easily write a script that loads, one by one, every single audio module and verifies every time whether the audio device appeared. If it did, then you successfully loaded the driver and can stop the loop; otherwise, you can unload the module and go on. See Listing 3 for a simple Perl example.
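Listing 3 gives the Perl version; the same loop can be sketched in plain shell. This is a hedged sketch: the module names are illustrative examples of 2.4-era audio drivers (a real script would walk the modules directory instead), and the device path is a parameter so the logic can be exercised even without real hardware.

```shell
#!/bin/sh
# Probe audio modules one by one; the appearance of the devfs node
# (normally /dev/sound/dsp) is our evidence that a driver matched.
detect_audio() {
    dsp=${1:-/dev/sound/dsp}
    # Illustrative module names only; a real script would loop over
    # /lib/modules/2.4.20/kernel/drivers/sound instead.
    for mod in es1371 emu10k1 via82cxxx_audio cmipci; do
        modprobe "$mod" 2>/dev/null
        if [ -e "$dsp" ]; then
            echo "$mod"              # right driver found: stop the loop
            return 0
        fi
        rmmod "$mod" 2>/dev/null     # wrong driver: unload it and go on
    done
    return 1
}
```

On the booted CD you would simply call detect_audio with no arguments and keep whichever module it prints.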
Hence, our method for autodetecting a given kind of card, say the audio card, follows this path:
go back to the distribution kernel directory and activate the support for all possible audio cards as modules;
compile all the modules with make modules;
install the modules on your system with make modules_install (to avoid overwriting the “true” modules directory, make sure you rename it before installing the distribution's);
re-mount the initrd.gz file on /distro (remember to gunzip it before mounting it or it won't work);
copy the newly created /lib/modules/2.4.20 directory to /distro/lib/modules/; if less than about 0.5MB of space is left, build a new initrd, as explained above, and assign more space to it;
add a script that loads all possible modules, and add a line to call it in rc.S.
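Put together, the steps above amount to a short command sequence. The following is an illustrative transcript, not a script to run blindly: it assumes the article's paths and kernel version (2.4.20), and it must be run as root.

```shell
cd /usr/src/linux                # the distribution kernel tree
make modules                     # build all the newly enabled modules
mv /lib/modules/2.4.20 /lib/modules/2.4.20.orig   # keep the "true" modules safe
make modules_install             # installs into /lib/modules/2.4.20
gunzip /cdrom/isolinux/initrd.gz # must be uncompressed before mounting
mount -o loop /cdrom/isolinux/initrd /distro
cp -a /lib/modules/2.4.20 /distro/lib/modules/
umount /distro
gzip -9 /cdrom/isolinux/initrd
```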
I don't have enough data to tell you whether this method is always reliable, but I've been using it in all MoviX packages for four months. Thus far, I have received no negative feedback, so at least I can tell you it is not totally unreliable.
Repeating this procedure for every kind of driver you need, you can easily build a script able to autodetect all hardware supported by Linux on any PC. You can find working examples of such scripts in each MoviX package.