One-Click Release Management
Say you have a large piece of software, a complicated Web site or a whole bunch of little ones. You also have a gaggle of coders and a farm of machines on which to deploy the end product. Worst of all, the client insists on a short turnaround time for critical changes. Proprietary products that may provide you with a systematic, unified development, testing and deployment process typically are expensive and offer limited deployment options. They often require new hardware resources and software licenses simply to support installation of the system itself. Such a solution can be difficult to sell to managers who are concerned about cost and to developers who are concerned about learning a new and complicated process.
However, managing the development process from end to end on a tight schedule without such a unified approach can lead to serious inefficiencies, schedule slippage and, in general, big headaches. If you're the administrator of such a project, chances are you're spending a lot of time dealing with the management of code releases. On the other hand, you already may be using an expensive piece of proprietary software that solves all of your problems today, but the higher-ups are balking at the ever-increasing license renewal fees. You need to present them with an alternative. It's also possible that you release code only once a year and have more free time than you know what to do with, but you happen to be an automation junkie. If any of these scenarios seem familiar to you, read on.
Various open-source products can be adapted to minimize costs and developer frustration while taming your out-of-control release process by serving as the glue between existing toolsets. Maybe you even can start making it home in time to play a round or two of Scrabble before bedtime.
I base the examples presented in this article on a few assumptions that hopefully are common or generic enough that the principles can be extrapolated easily to fit with the particulars of a real environment. Our developers probably already use a bug-tracking system (BTS), such as Bugzilla, ClearQuest or Mantis, or an in-house database solution to track change requests and bugs. They also may be using a version control system (VCS), such as Arch, CVS or Subversion, to manage the actual code changes called for in various BTS entries.
If they're not using a BTS and a VCS for a large project, these developers probably have either superhuman organization skills or a high level of tolerance for emotional trauma. Which BTS and VCS we use is not essential to this discussion, and any exploration of the pros and cons between one system and another requires much more text than I am allotted here. In short, they all should support the building blocks needed for the type of process we'd like to employ. Namely, most any BTS can:
Assign a unique ID to all issues or bugs in its database.
Allow you to use the unique ID to track the state of an issue and store and retrieve a listing of any source files it affects.
Any VCS worth its salt (sorry, VSS fans) can:
Allow some form of branching and merging of a central code hierarchy.
Allow a command-line client process to connect over a secure network connection in order to perform updates.
We use a Subversion (SVN) repository with the SVN+SSH access method enabled as our VCS and a generic MySQL database table as the BTS. We use Python, which tends to be quite readable even for the novice programmer, as our scripting language of choice. Chances are your distribution has packages for all of these products readily available; configuring them will be left as an exercise for the reader. The target machines are generic Web servers, all of which support SSH connections as well as the VCS client tools.
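To make the BTS side concrete, here is a minimal sketch of pulling work out of such a database with Python. The table name (issues), its columns (id, status, files) and the connection credentials are assumptions made purely for illustration; a real Bugzilla or Mantis schema, or your in-house table, will differ. It uses the MySQLdb module to list the repository paths for every issue a QA tester has marked “ready for production”:

# Sketch only: the table name, columns and credentials below are
# assumptions, not part of any particular BTS schema.
import MySQLdb

def files_ready_for_production():
    """Return a dict mapping issue IDs to the repository paths they touch."""
    conn = MySQLdb.connect(host="localhost", user="release",
                           passwd="secret", db="bts")
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT id, files FROM issues WHERE status = %s",
                       ("ready for production",))
        # Assume the files column holds a newline-separated list of
        # repository paths recorded against the issue.
        return dict((issue_id, files.splitlines())
                    for issue_id, files in cursor.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    for issue_id, paths in files_ready_for_production().items():
        print("Issue %s:" % issue_id)
        for path in paths:
            print("    " + path)

With a listing like that in hand, a release script knows exactly which paths belong to a given issue, and nobody has to paste file names into e-mail.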
Here's the 10,000-foot overview of the example end-to-end process we are likely to be starting out with:
1. An issue is generated in the BTS and is assigned an ID of 001 and an initial status of “new”. It includes, or will include, a listing of file paths that represent new or changed files within the VCS repository and is assigned to the appropriate developer.
2. The assigned developer makes changes to his local copy of the source code, checks these changes into the VCS repository and updates the status of BTS ID# 001 to “in testing”.
3. The testing server is updated with the new changes.
4. A QA tester charged with reviewing all BTS items with a status of “in testing” verifies that the changes to the code are what is desired and updates the status of BTS ID# 001 to “ready for production”.
5. A release manager then packages all changes affected by BTS ID# 001 into a release and updates the status of BTS ID# 001 to “in production” (a sketch of one way to tag such a release follows this list).
6. The live server is updated with the changes.
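Step 5 is where the VCS's cheap branching pays off. The following sketch shows one way a release manager might package things up, by copying trunk to a tag from a small Python wrapper; the repository URL, layout and tag name are hypothetical and should be adjusted to match your own repository:

# Sketch only: the repository URL, layout and tag name are hypothetical.
import subprocess

REPO = "svn+ssh://svn.example.com/repos/project"

def tag_release(tag_name, message):
    """Create a release tag by copying trunk to tags/<tag_name>."""
    subprocess.check_call(["svn", "copy",
                           REPO + "/trunk",
                           REPO + "/tags/" + tag_name,
                           "-m", message])

if __name__ == "__main__":
    tag_release("release-001", "Package changes for BTS ID# 001")

Because an SVN copy is a cheap, constant-time operation, tagging every release this way costs essentially nothing and leaves a permanent record of exactly what went out the door.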
For the most part, we're managing to fix bugs and add new features to the code base without bugging the system administrator for much, aside from the occasional password reset or RAM upgrade. But steps 3 and 6 require us somehow to get the code out of the VCS and onto a live system. We could cut and paste files from the VCS into a folder on our hard drive, zip it up, send it to the sysadmin and ask him to unzip it on the live system. Or, we could take advantage of the structure of our VCS and its utilities to do the work for us and completely avoid having a conversation with the administrator, whose time tends to be a hot commodity.
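Here is a minimal sketch of that idea, assuming each server already has a working copy checked out under a hypothetical /var/www/site and that passwordless SSH keys are in place for the deploying user; the host names are placeholders:

# Sketch only: the host names and deployment path are assumptions about
# the environment, not requirements of Subversion.
import subprocess

TEST_SERVER = "test.example.com"
LIVE_SERVER = "www.example.com"
DOCROOT = "/var/www/site"

def deploy(host):
    """Run 'svn update' on the server's working copy over SSH."""
    subprocess.check_call(["ssh", host, "svn", "update", DOCROOT])

if __name__ == "__main__":
    # Step 3 updates the testing server; step 6 does the same for live.
    deploy(TEST_SERVER)

Running the update on the server itself means only changed files cross the wire, and nothing has to be zipped, mailed or manually unpacked.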