Automating Builds on Linux

Why nightly builds improve code integrity and how to incorporate them into your product's lifecycle.

An automated nightly build is a process for building an application every night using an infrastructure that automatically executes the required steps at the scheduled time, without any human intervention. A well-planned build process not only builds your application, but also provides you and your team with early detection of incompatible changes in the application components and early detection of errors introduced by newly integrated code. When configured and used properly, automated builds are a critical component for ensuring that the application satisfies quality requirements and continues to run as expected.

Understanding Automated Nightly Build Basics

At regularly scheduled intervals, the automated process should access the most recent set of source files from the source control system and then perform all tasks needed to build the application, including compilation, initialization, linking, file transfers and any other steps needed to construct the application. Depending on the nature and complexity of the application being built, the automated build could take a long time and could involve multiple machines. If multiple versions of the product must be built, the build should automatically construct all required versions. In any case, messages related to build completion and issues should be made available to a designated team member; for example, results could be written to a log file or e-mailed to that team member.
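The steps above can be sketched as a small driver script. This is only an outline: the checkout, build and notification commands are placeholders (the `myproject` name and `BUILD_ROOT` path are hypothetical), to be replaced with your own source control, make and mail commands.

```shell
#!/bin/sh
# Sketch of a nightly build driver. The three functions below are
# placeholders; substitute your real commands (e.g. a cvs/svn checkout,
# "make clean all", and "mail -s ..." to the designated team member).

BUILD_ROOT=${BUILD_ROOT:-/tmp/nightly-build}   # fresh sandbox for each run
LOG="$BUILD_ROOT/build.log"

checkout_sources() {   # placeholder for: cvs -q checkout myproject
    mkdir -p "$BUILD_ROOT/myproject"
}
build_all() {          # placeholder for: make clean all
    echo "compile, link and package the application"
}
notify_team() {        # placeholder for: mail -s "nightly build" buildmaster
    cat "$LOG"
}

# Start from a clean slate, run every step, and capture all output.
rm -rf "$BUILD_ROOT" && mkdir -p "$BUILD_ROOT"
{
    echo "Nightly build started: $(date)"
    checkout_sources && cd "$BUILD_ROOT/myproject" && build_all
    echo "Build exit status: $?"
} > "$LOG" 2>&1

notify_team
```

Because every step writes to one log, the designated team member sees both successful completions and failures in a single place.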

If you are working on n-tier applications such as Web-based applications, the automated build process should be capable of deploying the application to a staging server as well as to the production server. The purpose of a staging area is to provide a safe zone where application modifications can be exercised and tested thoroughly before they are made live. This way, errors can be found and fixed before they reach the public. Some files, such as images and static pages, can be tested thoroughly without a staging area, but those with dynamic functionality--such as programs, database connections and the like--cannot. The staging area should mirror the actual application environment and should contain copies of the same components used in the live application.
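A minimal sketch of that staging-then-promote flow follows. The directory paths, the `app.cgi` artifact and the smoke test are all hypothetical stand-ins; a real deployment would copy the actual build output and run a real test suite against the staged copy.

```shell
#!/bin/sh
# Sketch: deploy to a staging tree first; promote to production only
# after the staged copy passes its tests. All paths are hypothetical.
set -e

STAGING=/tmp/deploy/staging
PRODUCTION=/tmp/deploy/production
mkdir -p "$STAGING" "$PRODUCTION"

# 1. Deploy the freshly built application into the staging copy.
echo "app v1" > "$STAGING/app.cgi"     # placeholder for real build artifacts

# 2. Exercise the staged application (placeholder for a real smoke test).
staging_tests_pass() {
    test -s "$STAGING/app.cgi"         # e.g. hit the app, check responses
}

# 3. Promote to production only if the staged copy checks out.
if staging_tests_pass; then
    cp -R "$STAGING/." "$PRODUCTION/"
    echo "promoted to production"
else
    echo "staging tests failed; production left untouched" >&2
    exit 1
fi
```

The key design point is that production is written only inside the success branch, so a broken build can never reach the public site.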

Builds can be automated using scripts, makefiles and build tools such as Ant. Once you have a process for automating all build tasks, you can use utilities such as cron to ensure that the necessary tasks are performed automatically at the same time each day.
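For example, once a driver script exists (the path `/usr/local/bin/nightly-build.sh` here is hypothetical), a single crontab entry, installed with `crontab -e`, runs it at 2:30 AM every night:

```
# min hour day month weekday  command
30   2    *   *     *         /usr/local/bin/nightly-build.sh >/dev/null 2>&1
```

Output is discarded here on the assumption that the script itself logs and mails its results, as described above.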

Maximizing the Benefits of Automated Builds

For maximum effectiveness, builds should start with a clean slate by pulling all necessary code from the source code repository into an empty build sandbox, compiling the necessary components and building the application. Next, the application should be tested automatically to verify that it satisfies the quality criteria that the team manager and architect deem critical. At the very least, it should run all available test cases and report any failures that occur. By integrating testing into the build process, you can verify that no code has slipped through the tests that developers are required to perform before adding their code to the source code repository.
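One way to wire testing into the build is to have the build script run every test case, record the results and exit nonzero on any failure, so the build itself is marked broken. The sketch below uses trivial placeholder test cases; the report path and the `run_test` helper are illustrative, not part of any standard tool.

```shell
#!/bin/sh
# Sketch: run all test cases as part of the build and fail the build
# if any test fails. The test cases shown are placeholders.

FAILURES=0
REPORT=/tmp/test-report.txt
: > "$REPORT"                          # start with an empty report

run_test() {                           # run one test case, record the result
    name=$1; shift
    if "$@"; then
        echo "PASS: $name" >> "$REPORT"
    else
        echo "FAIL: $name" >> "$REPORT"
        FAILURES=$((FAILURES + 1))
    fi
}

# Placeholder test cases; replace with your real test commands.
run_test "config exists" true
run_test "smoke test"    true

cat "$REPORT"
if [ "$FAILURES" -gt 0 ]; then
    echo "Build FAILED: $FAILURES test(s) failed" >&2
    exit 1                             # nonzero exit marks the build broken
fi
echo "All tests passed; build can proceed"
```

The nonzero exit is what lets cron, a makefile or any wrapper script treat a test failure exactly like a compile failure.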

Often, groups shy away from integrating testing into the build during development and from requiring that code pass designated tests in order for the build to complete. They assume that as code is added and modified, errors inevitably are introduced into the application and that builds will fail frequently. These build failures are a blessing, however, not a problem: if there is a problem with the code, it is best to discover it as soon as it is introduced, when it is easiest, fastest and least costly to fix.

Moreover, if builds fail when errors are introduced, discipline develops within the group. If you implement policies stating that developers should add only clean, compiling code to the source control system and that the build will fail if any code does not pass the designated tests, it is easy for team members to identify anyone who is not adhering to this policy. If one developer introduces errors that cause a build to fail, the whole team knows about it the next morning. As a result, new developers quickly learn the value of adding only well-tested, compilable code to the source control system.

Build failures also lead all team members to value the quality of the code. Code is the group's greatest asset, as it is the main thing that they have to show for all of their work. It also serves as a means of communication: developers exchange the majority of their ideas by reading and writing code. Thus, by protecting the quality of their code, developers can preserve their best ideas in the clearest, most concise way possible, as well as ensure that their communications are as effective as possible.

______________________

Comments

Re: Automating Builds on Linux

In script file "nightly.sh", where you read:

export ARCH = `uname -s`

should be:

export ARCH=`uname -s`

Re: Automating Builds on Linux

Maven also makes automating builds quite simple, and it is fairly sophisticated.

multiple target machine

I have a question here:
The script will be run on one machine, but if we need to build on different machines with different architectures, how will the script connect to those machines and do the build there?
We have a product that supports multiple platforms, such as Windows, Linux, HP-UX, AIX, etc., and currently we manually log in to every machine and start the build there.
How can I automate this?
