Revision Control with Arch: Maintenance and Advanced Use
In the above cherry-picking example, Alice B. Hacker used a Web-accessible directory for her personal archive. This is convenient, but it poses a problem for disconnected use. What if Alice wanted to work from her laptop during a long airplane flight or train ride? She would either have to generate changeset tarballs with tla changes or star-merge her various branches one by one from her laptop to her Web-space archive once she reached a network connection. Fortunately, Arch permits the creation of archives that are simply mirrors of other archives:
$ tla make-archive -ls --mirror \
      firstname.lastname@example.org \
      sftp://email@example.com/public_html/arch/
In this instance of make-archive, J. Random Hacker is creating an archive in his public_html directory on an Internet server. Once the mirror archive is created, it shows up in a tla archives listing as firstname.lastname@example.org-MIRROR. Now data can be pushed to it with a single command:
$ tla archive-mirror email@example.com
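If you push at the end of every hacking session, it may be worth wrapping the step in a small script. A minimal sketch, assuming tla is installed and reusing the placeholder archive name from the example above:

```shell
# Sketch: push any new changesets from the local archive out to its
# -MIRROR counterpart. ARCHIVE is the placeholder name from the text.
ARCHIVE="firstname.lastname@example.org"

if command -v tla >/dev/null 2>&1; then
    # archive-mirror is incremental: only changesets the mirror
    # does not already have are copied.
    tla archive-mirror "$ARCHIVE"
else
    echo "tla not found; skipping mirror push for $ARCHIVE"
fi
```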
In addition to push mirrors that copy local archive data to remote systems, Arch allows pull mirrors that create local copies of remote archives:
$ tla make-archive -ls --mirror-from \
      firstname.lastname@example.org \
      /var/tmp/gar-cache
$ tla archive-mirror email@example.com
This can be handy during disconnected operation, when a local branch may not be sufficient. Pull mirrors allow read-only access to a remote archive's data while off the Net.
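Checking a tree out of the local pull mirror then works exactly as it would against the remote archive. A sketch, assuming tla is installed; the version name is the one that appears later in this article, and the destination path is made up:

```shell
# Sketch: check out a working tree from the registered local mirror.
VERSION="foo--stable--2.4.2"
DEST="$HOME/hack/foo-stable"    # hypothetical destination directory

if command -v tla >/dev/null 2>&1; then
    tla get "$VERSION" "$DEST"  # reads from the mirror, no network needed
else
    echo "tla not found; would have run: tla get $VERSION $DEST"
fi
```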
One drawback to the firstname.lastname@example.org--signed-MIRROR archive is that it is a separate signed archive in its own right. This means J. Random Hacker must sign each changeset as it is copied from the original archive to the mirror.
In some cases, this is the desired effect; a release manager may want to vouch personally for each changeset that enters the public mirror, for example. In most cases, however, it is enough simply to copy the existing signatures along with each changeset. This is achieved by creating a special file on the system where tla archive-mirror is run:
$ echo email@example.com > \
      ~/.firstname.lastname@example.org-MIRROR
Mirrors are extremely useful, but they are, by nature, read-only. The only way changes can be committed to a mirror is through the original archive by way of tla archive-mirror.
Consider Alice's laptop mirror situation. While sitting in the observation car of Amtrak's Coast Starlight, she pulls out her laptop and does tla get to grab some code out of a local mirror of email@example.com. Somewhere in the Willamette Valley, she finds inspiration and completes a remarkably useful hack.
Any attempt to commit her changes fails with the message "attempt to write directly to mirror". The simple solution is to wait until she reaches an Internet access point and use the undo and redo commands:
$ tla undo ,changes-to-mirror
$ cd ~/real-project/
$ tla redo ~/mirror-checkout/,changes-to-mirror/
$ tla commit
This works fine as long as your changes fit into a single changeset. For longer detached sessions, you'll want to make a new local branch.
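Either way, the failed commit can be anticipated: mirror archives are conventionally named with a -MIRROR suffix, so a quick look at the tree's fully qualified version string reveals whether a commit will be refused. A pure-shell sketch, with the string hard-coded where a real script would call tla tree-version:

```shell
# Sketch: decide whether a tree lives in a read-only mirror by inspecting
# its fully qualified version string. Hard-coded here; a real script
# would use: version=$(tla tree-version)
version="firstname.lastname@example.org-MIRROR/foo--stable--2.4.2"
archive="${version%%/*}"    # archive name is everything before the first /

case "$archive" in
    *-MIRROR) msg="read-only mirror: commit to the source archive instead" ;;
    *)        msg="safe to commit" ;;
esac
echo "$msg"
```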
After her trip down the Pacific Coast, Alice takes the Zephyr to Chicago. It is a longer trip, and she finds herself working in a local mirror of firstname.lastname@example.org on the foo--stable--2.4.2 code. After a few hours of work, she decides to move her changes to a new branch.
First, she makes a new archive and branch on her laptop:
$ tla make-archive -l email@example.com ~/arch
$ tla my-default-archive firstname.lastname@example.org
$ tla archive-setup foo--laptop-hacks--1.0
Next, she tags off the mirror branch to her new archive. She runs the tla logs command in shell backticks so she doesn't have to remember which patch level and version she was working in at the moment:
$ tla tag `tla logs -r -f | head -n 1` \
      foo--laptop-hacks--1.0
Finally, Alice coerces the checked-out copy into believing it is the first revision in her new laptop-hacks branch:
$ tla sync-tree foo--laptop-hacks--1.0--base-0
$ tla set-tree-version foo--laptop-hacks--1.0
At this point, she has shifted her checked-out copy from the read-only mirror over to a read-write archive hosted on her laptop.
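The whole detour can be collected into one script for the next long trip. A sketch, assuming tla is installed and using the same placeholder names as the steps above; it would be run from inside the checked-out tree:

```shell
# Sketch: turn a tree checked out from a read-only mirror into the first
# revision of a fresh read-write branch on the laptop. Names are the
# placeholders used throughout the article.
BRANCH="foo--laptop-hacks--1.0"

if command -v tla >/dev/null 2>&1; then
    tla make-archive -l email@example.com ~/arch
    tla my-default-archive firstname.lastname@example.org
    tla archive-setup "$BRANCH"
    # Tag whatever revision the tree is currently based on.
    tla tag "$(tla logs -r -f | head -n 1)" "$BRANCH"
    # Point the checked-out tree at the new branch.
    tla sync-tree "$BRANCH--base-0"
    tla set-tree-version "$BRANCH"
else
    echo "tla not found; skipping branch setup for $BRANCH"
fi
```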