Automating Builds on Linux

by Adam Kolawa

An automated nightly build is a process for building an application every night using an infrastructure that automatically executes the required steps at the scheduled time, without any human intervention. A well-planned build process not only builds your application, but also provides you and your team with early detection of incompatible changes in the application components and early detection of errors introduced by newly integrated code. When configured and used properly, automated builds are a critical component for ensuring that the application satisfies quality requirements and continues to run as expected.

Understanding Automated Nightly Build Basics

At regularly scheduled intervals, the automated process should access the most recent set of source files from the source control system and then perform all tasks needed to build the application: compilation, initialization, linking, transfers and so on. Depending on the nature and complexity of the application being built, the automated build could take a long time and could involve multiple machines. If multiple versions of the product must be built, the build should automatically construct all required versions. In any case, messages related to build completion and issues should be made available to a designated team member; for example, results could be directed to a file or e-mailed to that team member.

If you are working on n-tier applications such as Web-based applications, the automated build process should be capable of building the application on a staging server as well as on the production server. The purpose of a staging area is to provide a safe zone where the application modifications can be exercised and tested thoroughly before they are made live. This way, errors can be found and fixed before they reach the public. Some files, such as images and static pages, can be tested thoroughly without a staging area, but those with dynamic functionality--such as programs, database connections and the like--cannot. The staging area should look like the actual application but should contain copies of the same components used in the actual application.

Builds can be automated using scripts, makefiles and build tools such as Ant. Once you have a process for automating all build tasks, you can use utilities such as cron to ensure that the necessary tasks are performed automatically at the same time each day.

Maximizing the Benefits of Automated Builds

For maximum effectiveness, builds should start with a clean slate by pulling all necessary code from the source code repository into an empty build sandbox, compiling the necessary components and building the application. Next, the application should be tested automatically to verify that it satisfies the quality criteria that the team manager and architect deem critical. At the very least, it should run all available test cases and report any failures that occur. By integrating testing into the build process, you can verify that no code has slipped through the tests that developers are required to perform before adding their code to the source code repository.

Often, groups shy away from integrating testing into the build during development and requiring that code pass designated tests in order for the build to be completed. They assume that as code is added and modified, errors inevitably are introduced into the application and that builds are going to fail frequently. These build failures are a blessing and not a problem, however: if there is a problem with the code, it is best to discover that problem as soon as it is introduced, when it is easiest, fastest and least costly to fix.

Moreover, if builds fail when errors are introduced, it introduces discipline into the group. If you implement policies stating that developers should add only clean, compiling code to the source control system and that the build will fail if any code does not pass the designated tests, it is easy for team members to identify anyone who is not adhering to this policy. If one developer introduces errors that cause a build to fail, the other team members can reprimand him the next morning. As a result, new developers quickly learn the value of adhering to the policy of adding only well-tested, compilable code to the source control system.

Build failures also lead all team members to value the quality of the code. Code is the group's greatest asset, as it is the main thing that they have to show for all of their work. It also serves as a means of communication: developers exchange the majority of their ideas by reading and writing code. Thus, by protecting the quality of their code, developers can preserve their best ideas in the clearest, most concise way possible, as well as ensure that their communications are as effective as possible.

Implementing an Automated Nightly Build

Now that we've discussed what an automated build should do and the benefits it can provide, let's take a look at the nuts and bolts of how to implement these steps on a sample application that should be built and tested automatically at the same time each night. Assume that you want to perform a full nightly build on a C++ application, and this application is divided into the following projects:


* $BLDROOT/alex
* $BLDROOT/mike
* $BLDROOT/mike_Linux
* $BLDROOT/paul
* $BLDROOT/paul/test_suites
* $BLDROOT/Makefile

$BLDROOT is the environment variable that represents the root location of the nightly build source. It is good practice to make this location a relative or parameterized project root path. This way, the build process can work on any machine that has the correct $BLDROOT environment variable--even machines that do not have the same directory structure as the initial build machine. In other words, it allows the build process to be independent of the machine's directory structure.

$ARCH is another environment variable that sometimes must be set. If you are building on more than one platform, $ARCH distinguishes which configuration file to use and which projects to build. For example, to build the complete mike project on Linux, you also need to build the mike_Linux project.
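The detection logic can be sketched in a few lines of shell; the mike_$ARCH name below simply follows the article's project-naming convention:

```shell
#!/bin/sh
# Minimal sketch: derive $ARCH from uname when it is not already set.
ARCH=${ARCH:-`uname -s`}
echo "Selected project: mike_$ARCH"
```

On a Linux machine, uname -s reports Linux, so the script selects the mike_Linux project.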

Each project (alex, mike, paul) has multiple source code files. For example, assume that the alex project has the following files: Alex_a.cpp, Foo.cpp, Boo.cpp and Test.cpp. There is a directory of test suites for the paul project, but test suites are not available yet for the other two projects.

Each project also has its own Makefile, which is part of a larger hierarchy of Makefiles. At the top level is the application-wide Makefile (in $BLDROOT/Makefile), which sets macro variables used throughout this build process and coordinates the builds of all available projects by calling the lower-level project Makefiles in the designated order. This file is parameterized so that it is portable in a multi-machine/multi-user environment. The lower-level Makefiles are the ones created for each individual project; when executed, they build one specific project and nothing else. The Makefile hierarchy is represented in Figure 1.


Figure 1. The Makefile Hierarchy

Before you start implementing an automated build, it's a good idea to create a special account for running the build. If you run the build from a special account, you eliminate the possibility of developer configuration errors and make nightly builds portable. For this example, assume that we have created a special account named nightly. Once you have a special build account created, you can start implementing the details of the nightly build. The ideal automated build performs the following tasks:

1. Cleaning: cleaning involves removing all elements of the previous nightly build(s), including sources, binaries and temporary files. Old files should always be removed before a new build begins.

2. Shadowing or cloning: if you have more than one machine using the same code for a nightly build, it is a good idea to shadow the code on only one machine and then clone that shadowed code onto the other machines. This way, the same code is used for all builds, even if changes are introduced into the source control system between the time the first machine shadows the code and the time the last machine accesses it. If you create a source tar file to archive the latest sources, the other machines can clone the build environment by retrieving that archive.

  • Shadowing: shadowing involves getting the latest project sources from the source control system. The sources should be stored in a directory accessible across the network. This way, if you have multiple machines running nightly builds, all of them can access the archived source or individual files shadowed on the original machine.

  • Cloning: cloning involves copying previously shadowed source files over any existing files in the build directory. This process is called cloning because the same source archive is used for multiple machines or multiple platforms.

3. Building: building is the process of actually constructing the application. It can be as simple as executing make on the $BLDROOT directory. In this case, the top-level Makefile in $BLDROOT can configure the build environment (for example, by creating a binary repository and setting environment variables) and call each project's Makefile to build the related project in the designated order.

4. Testing: testing involves automatically running the existing test suites for the application. Testing should occur only after a successful build.
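The shadow and clone steps above can be sketched with plain tar commands. This is a minimal illustration only: temporary directories stand in for real build roots, and a single source file stands in for a full CVS checkout.

```shell
#!/bin/sh
# Illustrative sketch of shadowing on one machine and cloning on another.
SHADOW=`mktemp -d`   # where the main machine shadows the sources
CLONE=`mktemp -d`    # build root on a secondary machine

# Shadow: check out the sources and archive them
# (the cvs get step is elided; a dummy file stands in for the checkout)
echo 'int main() { return 0; }' > "$SHADOW/Alex_a.cpp"
( cd "$SHADOW" && tar czf source.tar.gz Alex_a.cpp )

# Clone: copy the archive to the secondary build root and unpack it
cp "$SHADOW/source.tar.gz" "$CLONE"
( cd "$CLONE" && tar xzf source.tar.gz )
```

After the clone step, the secondary build root contains exactly the same sources that were shadowed on the main machine.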

The logistics of this procedure are illustrated in Figure 2.


Figure 2. The Structure of an Automated Build

The following nightly.sh shell script orchestrates these steps for the sample application:


nightly.sh
	#!/bin/sh
	# $BLDROOT must be set in the nightly account's
	# environment or hard coded here
	export BLDROOT=$HOME/build

	# Set $DATE to stamp the source archive
	export DATE=`date +%Y%m%d`

	# Detect $ARCH if it is not already set
	if [ "x$ARCH" = "x" ]; then
		export ARCH=`uname -s`
	fi

	# clean
	rm -rf $BLDROOT

	# shadow (run this on the main build machine)
	mkdir $BLDROOT
	cd $BLDROOT; cvs get source_modules; tar czvf source-$DATE.tar.gz *

	# clone (run this on a secondary build machine instead)
	mkdir $BLDROOT
	cd $BLDROOT; cp /path/to/archive/source-$DATE.tar.gz $BLDROOT
	tar xzvf source-$DATE.tar.gz

	# build
	make build ARCH=$ARCH MFLAGS=-g

	# test
	make test ARCH=$ARCH MFLAGS=-g

This shell script is parameterized so that it is portable in a multi-machine, multi-user environment. Once $BLDROOT and $ARCH are set, all build actions are performed automatically: the script invokes make build and make test, passing the architecture and make flags on the command line, and the top-level Makefile picks up the remaining parameters from the platform-specific include file.

This same script could be adapted easily to orchestrate the building of other projects. To make it work for your own projects, you need to modify the included configuration files and project Makefiles.

After the script performs the necessary shadowing and cloning, it runs $BLDROOT/Makefile to launch the building process. For this example, the Makefile would look something like this:


$BLDROOT/Makefile:
	# for multi-platform builds, read the platform-
	# specific config.def file to set up the build
	# environment (config.def is not attached)
	-include $(BLDROOT)/config/$(ARCH)/config.def

	MAKE=make # you can override make with your build script name

	all: setup build test

	setup:
		-mkdir $(BLDROOT)/$(BINDIR)

	build: setup build_alex build_mike build_paul

	test: test_alex test_mike test_paul

	build_alex:
		-cd $(BLDROOT)/alex && $(MAKE) $(MFLAGS)

	build_mike:
		-cd $(BLDROOT)/mike_$(ARCH) && $(MAKE) $(MFLAGS)
		-cd $(BLDROOT)/mike && $(MAKE) $(MFLAGS)

	build_paul:
		-cd $(BLDROOT)/paul && $(MAKE) $(MFLAGS)

	test_alex:
		-cd $(BLDROOT)/alex/test_suites && $(MAKE) $(MFLAGS)

	test_mike:
		-cd $(BLDROOT)/mike/test_suites && $(MAKE) $(MFLAGS)

	test_paul:
		-cd $(BLDROOT)/paul/test_suites && $(MAKE) $(MFLAGS)

This sample Makefile also could be adapted easily for use with your own projects.
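To see how the command-line parameters reach the Makefile, consider this minimal sketch. It generates a throwaway Makefile in a temporary directory; the echo rule is purely illustrative and stands in for a real project build.

```shell
#!/bin/sh
# Sketch: pass ARCH and MFLAGS to make on the command line, as
# nightly.sh does. printf writes the recipe with a literal tab,
# which make requires before recipe lines.
TMP=`mktemp -d`
printf 'build:\n\t@echo "building for $(ARCH) with flags $(MFLAGS)"\n' \
    > "$TMP/Makefile"
make -C "$TMP" build ARCH=`uname -s` MFLAGS=-g
```

On a Linux machine, this prints "building for Linux with flags -g", showing that variables assigned on the make command line are visible inside the Makefile.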

To ensure that this build is performed automatically each night, you would set up the following crontab for the nightly account:


         0 18 * * * /usr/bin/sh $HOME/bin/nightly.sh > $HOME/log/nightly-build.log 2>&1

This configures the machine to run the build process at 6:00 PM every day and to send all output to the nightly-build.log file.
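As noted earlier, build results also can be e-mailed to a designated team member. A hypothetical wrapper script along these lines could be scheduled from cron in place of nightly.sh itself; the script path, the log location and the mail invocation are illustrative assumptions.

```shell
#!/bin/sh
# Hypothetical wrapper: run the nightly build, record the result in
# the log and (optionally) mail the log when the build fails.
LOG=$HOME/log/nightly-build.log
mkdir -p "$HOME/log"

sh "$HOME/bin/nightly.sh" >> "$LOG" 2>&1
STATUS=$?

if [ "$STATUS" -eq 0 ]; then
    echo "nightly build: SUCCESS" >> "$LOG"
else
    echo "nightly build: FAILED (exit $STATUS)" >> "$LOG"
    # mail -s "nightly build failed" builds@example.com < "$LOG"
fi
```

Because cron jobs run without a terminal, capturing the exit status and writing an explicit SUCCESS/FAILED line makes the morning triage much easier.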

If you want to build a Java application, the same principles still apply. Here, you have the option of using a Makefile, a shell script or an Ant task to automate the required steps. For an idea of how this might be done by creating an Ant task and running that task with a script, consider the following sample Ant task and script:


$BLDROOT/build.xml
<project default="all">
	<property name="src.dir" value="${bldroot}"/>
	<property name="build.dir" value="${bldroot}/bin"/>

	<target name="init">
		<tstamp/>
		<path id="classpath">
			<fileset dir="${src.dir}/">
				<include name="**/*.jar"/>
			</fileset>
		</path>
		<property name="classpath" refid="classpath"/>
	</target>

	<target name="clean">
		<delete>
			<fileset dir="${src.dir}" includes="**/**"/>
		</delete>
	</target>

	<target name="shadow">
		<cvs cvsRoot=":pserver:nightly@cvs.hello.com:/home/cvspublic"
		     command="update -A -d"
		     dest="${src.dir}"/>
		<tar destfile="${src.dir}/source-${DSTAMP}.tar.gz"
		     compression="gzip">
			<tarfileset dir="${src.dir}"/>
		</tar>
	</target>

	<target name="build" depends="init"
	        description="build all java files in source directory">
		<javac srcdir="${src.dir}"
		       destdir="${build.dir}"
		       classpath="${classpath}"
		       source="1.4"
		       deprecation="true"
		       listfiles="true"
		       optimize="off"
		       debug="on"/>
	</target>

	<target name="test">
		<junit printsummary="yes" haltonfailure="yes">
			<classpath>
				<pathelement location="${build.dir}"/>
				<pathelement path="${java.class.path}"/>
			</classpath>
			<formatter type="plain"/>
			<batchtest fork="yes" todir="${reports.tests}">
				<fileset dir="${src.dir}">
					<include name="**/*Test*.java"/>
					<exclude name="**/AllTests.java"/>
				</fileset>
			</batchtest>
		</junit>
	</target>

	<target name="all" depends="init,clean,shadow,build,test"
	        description="Build Nightly"/>
</project>

$HOME/bin/nightly.sh

	#!/bin/sh

	# Make sure JAVA_HOME is set and ant is in the PATH
	cd $BLDROOT; ant -Dbldroot=$BLDROOT all

As with the previous example, a crontab could be used to ensure that the build process is run automatically at the same time every day.

Final Thoughts

When an automated build is configured and used properly, it not only ensures that the application is built correctly each night, but it also verifies code quality and exposes errors as soon as they are introduced. Even complex build processes, which involve everything from cleaning the build directory to testing the resulting application, can be automated once you understand the basic logistics. To see how an automated build can benefit your group, try to adapt this article's samples to automate and extend your own build process.

Automated builds are a powerful part of the software lifecycle. That's why Parasoft made them a cornerstone of our Automated Error Prevention (AEP) methodology, a methodology for improving software quality and reliability that is based on the process of learning from your mistakes. To learn more about automated builds and AEP, visit the Parasoft Web site.
