Revision Control with Arch: Maintenance and Advanced Use
Arch is part of a recent generation of revision control systems that provide an important architectural advantage over the old Concurrent Versions System (CVS) and its work-alikes. As a decentralized revision control system, Arch allows remote users to join large development efforts without needing to acquire special access privileges. Arch also provides powerful inter-archive operations that encourage participation from third-party contributors.
The previous article in this series [LJ, November 2004] demonstrated basic Arch operations, such as checking out code and creating branches from remote archives. This installment shows how to revert changes in an archive, how to publish your private archives to public mirrors and how to move a copy of your changes from archive to archive when you forget to make a new branch.
The Arch program is called tla. The program name arch is taken by the POSIX standard, which requires that /bin/arch report system information. Running tla help lists the available commands. To learn the arguments a particular command accepts, such as commit, run tla commit -H to see everything the tla commit command can do.
One of the more immediate benefits of any revision control system is the ability to undo a change or set of changes. Everyone makes mistakes now and again, and it is important for your tools to provide the means to a graceful recovery.
The quickest way to return a checked-out tree to a state without your local changes is to run tla undo. This creates a directory called ,,undo-1/ that contains all of the changes that were made. If you so desire, you can simply run tla redo to re-apply those changes. For example:
$ tla register-archive http://www.lnx-bbc.org/arch
$ tla get \
    firstname.lastname@example.org/lnx-bbc--stable bbc
$ cd bbc/
$ echo "BIG MISTAKE" > robots.txt
$ echo "#smaller change" >> Makefile
$ tla undo
$ tla redo
The tla undo command is most useful during hold-that-thought moments, when a line of work needs to be set aside briefly for a quick change of some sort. Arch uses the undo and redo commands internally when performing operations such as update or star-merge.
If a mistake is localized to a single file, the entire changeset doesn't need to be backed out. Arch lets you revert the changes made to a single file by generating a unified diff representing that file's changes since the last commit. This diff then can be fed into the patch program in reverse mode, which causes the changes to be unpatched out of the file.
$ tla file-diffs robots.txt | patch -R
If the file had been deleted accidentally, it would be necessary to do touch robots.txt before executing this command. Without a file (even an empty one), Arch has no basis from which to generate the file-diffs. When working with complete changesets, however, Arch is far more intelligent.
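The reverse-patch mechanics behind this can be seen with plain diff(1) and patch(1). The sketch below (the scratch directory and file contents are invented for illustration) keeps a pristine copy of a file, edits it, then reverse-applies the resulting unified diff, which is essentially what piping tla file-diffs into patch -R does:

```shell
# Simulate reverting a single file with a reverse patch.
# /tmp/revert-demo and its contents are illustrative, not part of Arch.
set -e
demo=/tmp/revert-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

printf 'User-agent: *\nDisallow:\n' > robots.txt.orig   # pristine copy
cp robots.txt.orig robots.txt
echo "BIG MISTAKE" >> robots.txt                        # the edit we regret

# Generate the unified diff, then apply it in reverse to undo the edit.
diff -u robots.txt.orig robots.txt | patch -R robots.txt
cmp robots.txt.orig robots.txt && echo reverted
```

Note that patch needs the target file to exist, which is why an accidentally deleted file must first be re-created with touch.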
One of the big advantages Arch has over its predecessor, CVS, is that it permits the creation and manipulation of changesets. A changeset is a complete collection of all the edits, renames, added and deleted files and log entries recorded during a single tla commit invocation.
Sometimes a changeset is committed that shouldn't be, or a temporary approach to something needs to be backed out before a more permanent one can be implemented. In these cases, revert the changeset by replaying it in reverse:
$ tla replay --reverse \
    email@example.com/foo--bar--1.0--patch-4
$ tla sync-tree \
    email@example.com/foo--bar--1.0--patch-4
The first command reverts the fourth changeset in the 1.0 version of the bar branch of the foo tree, even if it is not the most recent revision. This has the added effect of backing out the log entry for that changeset as well, so you can use the tla sync-tree command to put the commit log back the way it ought to be.
The patch-4 changeset is still stored in the email@example.com--projects archive, and the tree still can be checked out in that state. Only the current working copy of the code has been affected by the above commands. When the above user runs tla commit, a new changeset will be added that includes the inverse of patch-4.
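The same back-out-a-middle-revision idea can be sketched with plain patches. In this hypothetical history, patch-2 introduced a temporary hack and patch-3 built on top of it; reverse-applying only patch-2's diff to the latest tree removes the hack while keeping the later work, which is essentially what tla replay --reverse does with a stored changeset (all file names and contents below are invented for illustration):

```shell
# Back out a non-latest change with a reverse patch.
# /tmp/replay-demo and the revision files are illustrative only.
set -e
demo=/tmp/replay-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

printf 'one\n' > file.r1                            # revision 1
printf 'one\ntemporary hack\n' > file.r2            # patch-2: the stopgap
printf 'one\ntemporary hack\nthree\n' > file.r3     # patch-3: later work

diff -u file.r1 file.r2 > patch-2.diff || true      # the changeset to revert
cp file.r3 file                                     # working tree at the latest revision

patch -R file < patch-2.diff                        # back out only patch-2
cat file                                            # "one" and "three" remain
```

As in the tla example above, this touches only the working copy; committing the result is what records the inverse of the reverted changeset as new history.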