Most LJ readers are familiar with the various commercial distributions of Linux available for desktop and server systems. When one thinks of these commercial versions of Linux, one naturally might gravitate toward such names as Red Hat, SuSE or Mandrake. Some may even go so far as to think of UnitedLinux or, dare I go there, SCO Linux. Then, of course, there are your non-commercial community-supported distributions, including Debian and Slackware. When I mentioned to a friend that I was going to install Debian on a system so I could learn more about Linux, he suggested I try the Gentoo distribution.
According to Gentoo's architects and developers, Gentoo Linux is a "special flavor of Linux that can be automatically optimized and customized for just about any application or need". Depending on how deeply customized you want your system to be, this customization can be as simple as selecting only the applications and services you want on the system. On the other end of the spectrum, you can set compiler directives that target your processor's specific instruction set, producing executable code tuned for that exact chip. The ease with which this customization occurs is part of what gives Gentoo its strength. Modify one file (/etc/make.conf) with the compiler directives you wish to use, and let Gentoo's Portage system build executables optimized for your exact needs.
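A minimal /etc/make.conf sketch illustrates the idea. The specific -march value and job count below are assumptions for an illustrative Pentium III-class machine, not settings taken from my installation; adjust them for your own processor:

```shell
# /etc/make.conf -- sketch of processor-specific compiler directives.
# -march ties generated code to one CPU family; change it for your chip.
CHOST="i686-pc-linux-gnu"
CFLAGS="-march=pentium3 -O2 -pipe"
CXXFLAGS="${CFLAGS}"
# Run two compile jobs in parallel (rule of thumb: number of CPUs + 1).
MAKEOPTS="-j2"
```

Every package Portage builds from this point on picks up these flags automatically.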
Another, probably more important, hallmark of Gentoo's flexibility is that you build the system according to your exact needs. You determine, at the package level, what it is you require your system to do. For example, if you were building a desktop system on which you had no desire to run a Web or mail server, you could install only those packages that you want on your system. If you want GNOME, you install GNOME. If you want KDE, you install KDE. If you want the plain XFree86 windowing system with twm, you install the plain XFree86 windowing system with twm. What makes Gentoo perhaps the best distribution with which to do this sort of customized system building is the underlying package management system that is this distribution's foundation: Portage.
According to Gentoo's home page, Portage "is the heart of Gentoo Linux, and performs many key functions". Portage acts as the software distribution system; it also acts as an integrated package-building and installation system, as well as a system updater. In these ways, it is similar to Red Hat's RPM and Debian's apt-get functions, but it is more powerful than either. This power manifests itself in the use of the Portage tree, which is a set of scripts downloaded to the machine that control the dependency needs and compilation options of various source-based software packages (over 4,000 at last count).
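In day-to-day use, most of Portage's power is reached through a handful of emerge invocations. The commands below are an illustrative sketch; the package name is arbitrary and exact flags can vary between Portage releases:

```shell
# Refresh the local copy of the Portage tree (the ebuild scripts).
emerge sync

# Preview what installing a package would pull in, without building anything.
emerge --pretend mozilla

# Build and install the package, plus any missing dependencies, from source.
emerge mozilla

# Rebuild installed packages against updated ebuilds to keep the system current.
emerge --update world
```

Because every ebuild script declares its own dependencies, a single emerge command can resolve and compile an entire dependency chain.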
Installing Gentoo Linux is a more manual affair than installing the commercial distributions; however, there is talk in the Gentoo forums of building a graphical installer that will automate many of the tasks. Presently, in order to install the Gentoo distribution, you make your decision about how much customization you wish to introduce to your system, and then you download the appropriate ISO image from the Gentoo Web site or purchase CDs from the Gentoo store.
You can choose to optimize your system fully based on compiler directives and built-in dependencies, which includes setting the optimizations and then building the compilers used to compile the rest of the software. Alternatively, you can choose to use pre-built software from the Gentoo group. The advantage to optimizing the compiled code with your own settings is the code generally runs faster on your system if you optimize it for your processor's specific instruction set. The disadvantage of this option is the time spent on the compilation process, which can be quite extensive, even given the advances in modern chip architecture. For my installation, I chose to go with the Stage 1 tarball installation. This means I was building my system from the ground up, compiling the compilers that would be used to compile the rest of the software that would be installed on my machine.
Essentially, the installation follows the same broad outline as almost any other operating-system installation out there. The only difference is the manual nature of the individual steps, which are well documented by the Gentoo staff on the Web site. I started out by downloading the Live CD ISO image I wanted to use and burning it to CD. Using the Live CD, I booted my destination machine into a self-contained Gentoo environment included on the CD image. I enabled DMA on my hard drive and allowed the network to be configured by DHCP. Following this, I used fdisk to partition my drive. I created my filesystems and formatted them; I chose ext3 for my boot partition and used ReiserFS for my root and home partitions. After disk setup, the fun really starts.
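The disk-setup steps might look something like the following. The device names and partition layout here are assumptions for illustration, not my machine's actual layout, so adapt them before running anything:

```shell
# Partition the target disk interactively (layout is yours to choose).
fdisk /dev/hda

# Create the filesystems described above (assumed partition numbering):
mke2fs -j /dev/hda1          # ext3 on the boot partition
mkswap /dev/hda2 && swapon /dev/hda2   # activate swap
mkreiserfs /dev/hda3         # ReiserFS on the root partition
mkreiserfs /dev/hda4         # ReiserFS on the home partition

# Mount the new filesystems under the Live CD environment.
mount /dev/hda3 /mnt/gentoo
mkdir /mnt/gentoo/boot && mount /dev/hda1 /mnt/gentoo/boot
```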
The next step was to extract the Stage 1 tarball I chose to start with. Afterward, I wanted to make certain I was using the latest Portage tree, so I performed an emerge sync. Then, to make sure I was compiling my software with the appropriate compiler directives, I used nano to edit a single file, /etc/make.conf. After ensuring I had all the customizations I wanted in the configuration file, I started the bootstrap process, whereby Gentoo's scripts recompiled the GCC compiler. Following this step, I moved on to Stage 2, which essentially comprises more compilation of basic system components. This occurs automatically with the use of the Portage system and the command emerge system.
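Condensed into commands, that sequence looks roughly like this. The paths and tarball name follow the conventions of the Gentoo handbook from this era and are approximations; your release may differ:

```shell
# From the Live CD, unpack the Stage 1 tarball into the new root.
cd /mnt/gentoo
tar -xvjpf /mnt/cdrom/stages/stage1-*.tar.bz2

# Enter the new environment and refresh the Portage tree.
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile
emerge sync

# Set compiler directives, then build the toolchain (Stage 1 -> Stage 2).
nano -w /etc/make.conf
cd /usr/portage && scripts/bootstrap.sh

# Compile the rest of the base system with the freshly built toolchain.
emerge system
```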
After the several hours that the compilation took, I had to modify /etc/fstab manually to indicate where my partitions were. I also had to download the source for my kernel and compile it. After this, I downloaded and compiled a system logger and a cron dæmon, set my root password and configured a boot loader. I then cleaned up by unmounting the various filesystems I had mounted for the installation process, ejected the CD and restarted my machine. At this point, my machine was a clean shell, awaiting my command to install software using the Portage system. When all was said and done, it took me approximately 24 hours from start to finish to have a fully functional, fully customized desktop system.
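For reference, an /etc/fstab matching the example partition layout sketched earlier might read as follows; the device names are illustrative assumptions, not my actual configuration:

```
# /etc/fstab -- map the partitions created during installation to mountpoints.
# <device>          <mountpoint>  <type>    <options>        <dump> <pass>
/dev/hda1           /boot         ext3      noauto,noatime   1 2
/dev/hda3           /             reiserfs  noatime          0 1
/dev/hda4           /home         reiserfs  noatime          0 2
/dev/hda2           none          swap      sw               0 0
/dev/cdroms/cdrom0  /mnt/cdrom    iso9660   noauto,ro        0 0
```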
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
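That find-plus-grep combination can be sketched in a couple of lines. The sample directory and log contents below are fabricated for demonstration; the article's scenario used /home:

```shell
# Demonstration: create a couple of sample log files in a scratch
# directory, then string find and grep together to search every
# .log file for a particular entry ("error").
LOGDIR=$(mktemp -d)
echo "error: disk full" > "$LOGDIR/app.log"
echo "all systems go"   > "$LOGDIR/web.log"

# find selects the files; grep -H searches each and prefixes matches
# with the file name, so you can see where each hit came from.
find "$LOGDIR" -name '*.log' -type f -exec grep -H 'error' {} +
```

Swapping "$LOGDIR" for /home gives the exact tool described above, built from two small pieces.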
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide