GAR: Automating Entire OS Builds
I'm a member of the LNX-BBC Project. The LNX-BBC is a business card-sized CD-ROM containing a miniature distribution of GNU/Linux. In order to fit everything we wanted on the roughly 50MB volume, we had to build every single piece of software from scratch and weed out unnecessary files.
The first few LNX-BBC releases were done entirely by hand, pulling precompiled binaries from existing distributions and hand compiling some 200 software packages. While ultimately successful, it made the process of upgrading individual packages difficult. It was also impossible to keep our work in any sort of revision-control system.
User requests for source code revealed another problem with our development process. We had written some scripts to automate the creation of the compressed loopback filesystem and the El-Torito bootable ISO image, but the really difficult work was the compilation and installation of all the packages. We had no way of giving people a single tarball for building a BBC from scratch.
What we needed was a system for automating the compilation and installation of all of the third-party software. It needed to allow us to store our customizations in CVS and provide a simple mechanism for end users to build their own LNX-BBC ISO images.
These requirements had been noticed and met before. In 1994, Jordan Hubbard began work on the BSD Ports system. The FreeBSD operating system includes a great many programs and utilities, but it is not complete without a number of third-party programs. The BSD Ports system manages the compilation and installation of third-party software that has been ported to BSD.
Often when one asks for help with FreeBSD, an expert may answer by saying ``Just use ports!'' and listing the following commands:
cd /usr/ports/<category>/<package>
make install
to suggest that the user needs to install a particular software package. This is similar to the way many Debian experts will tell people to apt-get install <package>. It's a simple way for a user to install software and related dependencies.
The Ports system is written entirely in pmake, the version of the make utility that comes with BSD. The choice to use make is both an obvious and a novel one. Make can be thought of as a language designed to automate software compilation and has many facilities for expressing build dependencies and rule-based build actions.
On the other hand, make has very limited flow control and lacks many features traditionally found in procedural programming languages. It can be rather unwieldy when used to build large projects.
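Make's rule-based model is easy to see in a toy Makefile: each rule names a target, the prerequisites it depends on, and the commands that rebuild it (the file names here are purely illustrative):

```makefile
# Relink the program whenever either object file is newer than it.
program: main.o util.o
	cc -o program main.o util.o

# Pattern rule: how to turn any .c file into a .o file.
# $@ is the target, $< is the first prerequisite.
%.o: %.c
	cc -c -o $@ $<
```

Dependencies and build actions like these are make's native vocabulary; it's the loops, conditionals and data structures of ordinary programming languages that it lacks.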
As of this writing, the core Ports runtime for FreeBSD has undergone 400 revisions since 1994. You can see all of the revisions and their changelog entries at http://www.freebsd.org/cgi/cvsweb.cgi/ports/Mk/bsd.port.mk. The collection of software currently contains nearly 4,000 packages.
I had made the mistake, in 1998, of making a fuss about the GNU system needing something like Ports. I made claims about how much more elegant Ports would be if written in GNU make and spent a lot of time reading the FSF's make book and the NetBSD Ports source code. It wasn't until an LNX-BBC meeting in 2001 that someone called my bluff, and I actually had to sit down and write the thing.
GAR ostensibly stands for the Gmake Autobuild Runtime because it's a library of Makefile rules that provide Ports-like functionality to individual packages. (It's actually just named GAR because that's my favorite interjection: ``Gar!'')
From the user's perspective, the GAR system might as well be a tree of carefully maintained source code, ready to compile. In reality, the system is just a tree of directories containing Makefiles; the only thing stored in the GAR system itself is the information necessary to perform the steps a user would take to compile and install the software.
The base of the GAR directory tree contains a number of directories. These directories are package categories, and within each category is a directory for each package. Inside a package directory is (among other things) a Makefile.
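A small slice of such a tree might look like this (the category and package names match the examples below; any other files a package carries alongside its Makefile are omitted):

```
gar/
    lang/
        python/
            Makefile
    net/
        netcat/
            Makefile
```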
By way of the GAR system libraries, this Makefile provides seven basic targets for the package: fetch, checksum, extract, patch, configure, build and install.
Thus, to install Python using the BBC's GAR tree, one would cd to lang/python and run make install. To look at the source code to netcat, one would cd to net/netcat and run make extract.
Each of these seven targets runs all previous targets, though any of them may be undefined for a given package. If you run make patch, it's the same as running
make fetch checksum extract patch
fetch: this target downloads all files and patches needed to compile the package. Typically this is a single tarball, accompanied by the occasional patch file.
checksum: uses md5sum to ensure that the downloaded files match those with which the package maintainer worked.
extract: makes sure that all of the necessary source files are available in a working directory. In some cases (such as when downloading a single C source file), this will simply copy files over.
patch: if the package has to be patched (either via third-party patches or package maintainer patches), this target will perform that step.
configure: configures the package as specified in the Makefile. It will typically run the package's underlying configuration system (such as autoconf or Imake).
build: performs the actual step of compilation.
install: puts files in the proper locations and performs any necessary mop-up work.
These targets are named after their counterparts in the BSD Ports system and behave in the same manner.
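The chaining of the seven targets can be sketched in GNU make along the following lines. This is an illustrative skeleton, not the actual GAR library source; the variable names (MASTER_SITE, DISTFILE, DISTNAME, prefix) and the use of wget are assumptions made for the example:

```makefile
# Illustrative skeleton of GAR-style chained targets.
# Each target lists the previous one as a prerequisite, so
# "make patch" also runs fetch, checksum and extract.

fetch:
	wget -P download $(MASTER_SITE)$(DISTFILE)

checksum: fetch
	cd download && md5sum -c ../checksums

extract: checksum
	mkdir -p work && tar -xzf download/$(DISTFILE) -C work

patch: extract
	# apply third-party or maintainer patches here

configure: patch
	cd work/$(DISTNAME) && ./configure --prefix=$(prefix)

build: configure
	$(MAKE) -C work/$(DISTNAME)

install: build
	$(MAKE) -C work/$(DISTNAME) install
```

A real implementation also needs to record which steps have already completed (for instance, with timestamp files) so that rerunning make install doesn't redo work unnecessarily; that bookkeeping is omitted here for clarity.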