Revision Control with Arch: Introduction to Arch
Arch is quickly becoming one of the most powerful tools in the free software developer's collection. This is the first in a series of three articles that teach the basic use of Arch for distributed development, for managing shared archives and for scripting automated systems around Arch projects.
This article shows you how to get code from a public Arch archive, contribute changesets upstream and make a local branch of a project for disconnected use. In addition, it provides techniques to improve performance of both local and remote archives.
Revision control is the business of change management within a project. The ability to examine work done on a project, compare development paths and replicate and undo changes is a fundamental part of free software development. With so many potential contributors and such rapid release of changes, the tools developers use to manipulate these changes have had to evolve quickly.
Early revision control was handled with tape backups. Old versions of a project would be dragged out of backup archives and compared line by line with the new copy. The process of restoring a backup from tape is not quick, so this is not an efficient method by any means.
To work around this lag, many developers kept old copies of files around for comparison, and this practice soon was integrated into early development tools. File-based revision control, such as that used by the Emacs editor, uses numbered backup files, so you can compare foo.c~7~ with foo.c~8~ to see what changed. Versioned backup files were even integrated into the filesystem on some early proprietary operating systems.
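Comparing two such numbered backups needs nothing more than the ordinary diff tool; the file names below are simply the illustrative ones mentioned above:

    # Show what changed between the seventh and eighth numbered
    # backups that Emacs kept of foo.c
    diff -u foo.c~7~ foo.c~8~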
For nearly two decades, the preferred format for third-party contributions to free software projects has been a patch file, sometimes called a diff. Given two files, the diff program generates a listing that highlights the differences between them. To apply the changes specified in the diff output, a user need only run it through the patch program.
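A typical exchange looks something like the following sketch, in which the directory and file names are only placeholders:

    # Contributor: produce a unified diff between the pristine
    # source tree and the tree containing the changes
    diff -ur project-0.1.orig project-0.1 > my-feature.patch

    # Maintainer: apply the changes inside a pristine copy of the tree
    cd project-0.1
    patch -p1 < ../my-feature.patch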
In the 1990s, the Concurrent Versions System (CVS) became the default for managing the changes of a core group of developers. CVS stores a list of patches along with attribution information and a changelog. A primitive system of branching and merging allows users to experiment with various lines of development and then fold successful efforts back into the main project.
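As a rough sketch of that branching workflow, assuming an existing checked-out working copy and an invented branch name, a CVS branch is created and later folded back in roughly like this:

    # Create a branch tag from the current working copy
    cvs tag -b experimental-feature

    # Switch the working copy onto the branch and commit work there
    cvs update -r experimental-feature
    cvs commit -m "Work on the experimental feature"

    # Later, return to the trunk and fold the branch back in
    cvs update -A
    cvs update -j experimental-feature
    cvs commit -m "Merge experimental-feature into the trunk"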
CVS has its limitations, however, and they are becoming a burden for many projects. First, it does not record metadata changes, such as a change to a file's permissions or the renaming of a file. In addition, check-ins are not grouped together into a set, making it difficult to examine a change that spanned multiple files and directories. Finally, nearly all operations on a remote CVS repository require that a new connection be opened to the server, which makes disconnected use difficult.
Efforts such as the Subversion Project have come a long way toward fixing the flaws found in CVS. Subversion is effectively a CVS++, and it supports file metadata change logging and atomic check-ins. What it still requires is a centralized server on the network that all clients connect to for revision management operations.
A new generation of revision control systems has sprung up in the past few years, all operating on a distributed model. Distributed revision control systems do away with a single centralized repository in favor of a peer-to-peer architecture. Each developer keeps a repository, and the tools allow easy manipulation of changes between systems over the network.
Projects such as Monotone, DARCS and Arch are finding popularity in a world where free software development happens outside of well-connected universities, and laptops are much more common.
One of the most promising distributed systems today is GNU Arch. Arch handles disconnected use by encouraging users to create archives on their local machines, and it provides powerful tools for manipulating projects between archives. Arch lacks any sort of dedicated server process and uses a portable subset of filesystem operations to manipulate the archive. Archives are simply directories that can be made available over the network using your preferred remote filesystem protocol. In addition, Arch supports archive access over HTTP, FTP and SFTP.
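For instance, assuming a project published at a hypothetical URL, pointing a local Arch installation at a remote archive and checking out a project takes only a couple of commands:

    # Register a remote archive published over plain HTTP
    # (the archive name and URL here are purely illustrative)
    tla register-archive jdoe@example.com--2004 \
        http://arch.example.org/archives/jdoe@example.com--2004

    # List the archives this machine now knows about
    tla archives

    # Check out a project version from the newly registered archive
    tla get jdoe@example.com--2004/hello--main--0.1 hello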
One advantage of not having a dedicated dæmon is that no new code is given privilege on your server machine. Your security concerns, therefore, remain with your SSH dæmon or Web server, which most system administrators are already keeping tabs on.
Another advantage is that, for most tasks, no root privilege is needed to make use of Arch. Developers can begin using it on their own machines and publish archives without even installing Arch on the Web server machine. This affects the pattern of adoption as well: using CVS or Subversion is a top-down decision made for an entire project team, whereas Arch can be adopted by one or two developers at a time until everyone in the group is up to speed.
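A minimal sketch of that bottom-up adoption, using invented names and paths, might look like this:

    # Record the identity Arch will attach to your changesets
    tla my-id "J. Random Hacker <jrh@example.com>"

    # Create a personal archive under your home directory;
    # no dæmon and no root privilege are required
    tla make-archive jrh@example.com--2004 ~/archives/jrh@example.com--2004

    # Because the archive is only a directory, publishing it can be as
    # simple as mirroring it into space an existing web server already serves
    rsync -av ~/archives/jrh@example.com--2004/ \
        ~/public_html/jrh@example.com--2004/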