Revision Control with Arch: Introduction to Arch
Of course, while you work on your branch, development may have continued on the original archive. Running tla update fetches changes only from your local branch and not the original project. To fold in changes from upstream, you need to star-merge:
$ tla star-merge \
    firstname.lastname@example.org/lnx-bbc--stable--2.1
In the event of conflicts (situations where both your branch and the upstream project have changes to the same lines of code), Arch uses the standard patch method of creating .orig and .rej files for each file that has conflicts. It is a good idea to use the find utility to seek out any rejects before committing your star-merge.
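A quick way to sweep for leftover conflict files before committing is a find invocation like the following (run it from the root of your project tree; the exact flags are one reasonable choice, not the only one):

```shell
# List any .rej or .orig files left behind by a conflicted star-merge.
# If this prints nothing, there are no unresolved rejects to deal with.
find . -type f \( -name '*.rej' -o -name '*.orig' \) -print
```

Resolve each reject by hand, delete the .rej and .orig files, and only then commit the merge.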
You may have noticed that revisions are named either base-0 or patch-#, where # is the number of patches that must be applied on top of base-0 to reach that revision. Arch uses a log-structured archive format, so archive operations only ever add information to a project. This means that for big projects with many revisions, certain tasks can take a long time, because reconstructing a revision means replaying every patch since base-0.
To speed up operations, you can make a snapshot of a given revision. Arch snapshots are simply a compressed tarball of a checked-out revision. When a checkout or other operation is performed, Arch looks for the highest-numbered snapshot and applies any necessary patches from there:
$ tla cacherev
Once this is finished, you can run tla cachedrevs to see which revisions have snapshots within your archive.
Because you do not always have access to create snapshots in an archive, it can be useful to make a local cache to speed up file operations. Arch provides a second kind of cache, called a library, that stores copies of checked-out files from various revisions. This is especially helpful for remote archives, because it means you do not even need to download the base snapshot revision before applying changesets:
$ mkdir ~/LIBRARY
$ tla my-revision-library ~/LIBRARY
$ tla library-config --greedy ~/LIBRARY
$ tla library-add \
    email@example.com/lnx-bbc--stable--2.1
This library is not small: the example above comprises over 78MB as of June 2004. Over a slow link, however, the advantage is well worth the disk space. In addition, laptops often have slow ATA hard drives, and involved archive operations can be a drag as the drivers eat up CPU cycles. A greedy (auto-updating) Arch library can make your revision control operations quicker and more responsive, even for local archives.
In the next article in this series, you'll learn how to make publicly available mirrors so that upstream developers can star-merge back from your branches. In addition, you'll learn how to cherry-pick changesets from a busy branch and how to use GnuPG to sign your changesets cryptographically for security purposes.
The third and final installment of this series will describe centralized development techniques with Arch. You'll learn how to manage a shared access archive using OpenSSH's SFTP protocol and how to write scripts to perform automated tasks on your archives.
Resources for this article: /article/7752.
Nick Moffitt is a Linux professional living in the San Francisco Bay Area. He is the build engineer for the LNX-BBC Bootable Business Card distribution of GNU/Linux and the author of the GAR build system. When not hacking, he studies the history of urban public transportation.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
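The .log-file search described above can be strung together in a single command; the path and search string here are placeholders for whatever you actually need to find:

```shell
# Find every .log file under /home and print the names of those
# containing the string "ERROR". The -exec ... + form batches the
# filenames so grep is invoked as few times as possible, and -l
# makes grep print only matching filenames rather than every line.
find /home -name '*.log' -type f -exec grep -l 'ERROR' {} +
```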
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
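For reference, a traditional crontab entry expresses nothing more than a fixed, clock-based trigger; the script path below is hypothetical:

```
# m  h  dom mon dow  command
# Rotate logs at 02:30 every day. Cron can express only clock times;
# it has no native notion of job dependencies, retries or event triggers.
30 2 * * *  /usr/local/bin/rotate-logs.sh
```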
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide