You can do just about everything you need to with ci, co, and rcsdiff. There are a few other commands that come with RCS that are also of interest.
The rcs command is used for changing the state of RCS files. In particular, it can be used to lock a file that is not locked or to break someone else's lock on an RCS file. This latter operation is perilous and should only be done in an emergency. There are a number of other operations that rcs can perform; see the man page for details.
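As a minimal sketch of the two lock operations just described (assuming a file hello.c that is already under RCS control):

```shell
# Lock the head revision of hello.c without checking out a
# writable working copy:
rcs -l hello.c

# Release a lock.  If the lock belongs to someone else, RCS breaks
# it and mails an explanatory message to the lock's owner -- which
# is why this should be reserved for emergencies:
rcs -u hello.c
```

Both commands operate on the RCS file itself (RCS/hello.c,v), so the working file is left untouched.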
It is possible to have “branches” off the main line (or “trunk”) of development. For instance, assume that the released version of hello.c is 2.6 and that version 2.7 will be the next released version. Programmer Mary is writing version 2.7, while programmer Joe has to maintain version 2.6. Normally, Joe would start a separate branch off the main development trunk, generating versions 2.6.1, 2.6.2, and so on. RCS can maintain an arbitrary number of branches off the main trunk, as well as branches off the branches. However, as you might imagine, keeping track of many levels of branching can become confusing.
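Under the scenario above (released revision 2.6, hypothetical file hello.c), setting up the two lines of development might look like the following sketch; note that RCS internally numbers branch revisions 2.6.1.1, 2.6.1.2, and so on:

```shell
# Mary keeps working on the trunk; her next check-in becomes 2.7:
co -l hello.c
# ...edit...
ci -u -m'Work toward the 2.7 release.' hello.c

# Joe locks the released revision and starts a maintenance branch
# off 2.6; this first check-in creates branch revision 2.6.1.1:
co -l -r2.6 hello.c
# ...fix a bug...
ci -u -r2.6.1 -m'Bug fix against the 2.6 release.' hello.c
```

Once the head has moved past 2.6, checking in from a lock on 2.6 creates a branch revision automatically, so the explicit -r2.6.1 is mostly documentation.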
At some point, Mary will want to make sure that all of Joe's fixes are incorporated into her version of hello.c; she would do this using rcsmerge. (rcsmerge uses a separate program that also comes with RCS, named merge, which does the actual work of merging the files.)
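A sketch of that merge, using the illustrative revision numbers from the branching example (Joe's fix lives on branch revision 2.6.1.1):

```shell
# Fold the changes between 2.6 and 2.6.1.1 into the current
# working file.  -p writes the merged result to standard output
# instead of overwriting hello.c:
rcsmerge -p -r2.6 -r2.6.1.1 hello.c > hello.merged.c
```

If the trunk and branch edits overlap, rcsmerge marks the conflicting regions in the output and exits with a nonzero status so Mary knows to resolve them by hand.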
Finally, the rlog command will print out all the log messages for a particular source file. This allows you to see the complete change history of a file.
$ rlog hello.c
RCS file: RCS/hello.c,v
Working file: hello.c
head: 1.3
branch:
locks: strict
        arnold: 1.3
access list:
symbolic names:
comment leader: " * "
keyword substitution: kv
total revisions: 3;     selected revisions: 3
description:
world famous C program that prints a friendly message.
----------------------------
revision 1.3    locked by: arnold;
date: 1994/11/07 03:41:32;  author: arnold;  state: Exp;  lines: +6 -0
add id and log keywords.
----------------------------
revision 1.2
date: 1994/11/07 03:40:21;  author: arnold;  state: Exp;  lines: +7 -1
Added -advice option, and made regular case use exit.
----------------------------
revision 1.1
date: 1994/11/07 03:38:50;  author: arnold;  state: Exp;
Initial revision
====================================================
Most of the initial stuff that rlog prints out is explained in the RCS man pages. Of interest to us are the description and log message parts of the output, which tell us what the program is, what changes were made, by whom, and when. Interestingly, the timestamps are in UTC, not local time. This is so that developers in different time zones can collaborate without getting discrepancies in their Id strings.
The main problems that RCS does not solve are having multiple people work on a file at the same time, and the larger issue of release management, i.e., making sure that a release is complete and up to date.
A separate software suite is available for this purpose: cvs, the Concurrent Versions System. From the README file in the cvs distribution:
cvs is a front end to the rcs(1) revision control system which extends the notion of revision control from a collection of files in a single directory to a hierarchical collection of directories consisting of revision-controlled files. These directories and files can be combined together to form a software release. cvs provides the functions necessary to manage these software releases and to control the concurrent editing of source files among multiple software developers.
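As a rough sketch of the concurrent-editing workflow this describes (the module name myproject is hypothetical):

```shell
# Check out a working copy of the "myproject" module:
cvs checkout myproject
cd myproject

# Edit files freely -- no locks are taken.  Pick up other
# developers' commits and merge them into your copy:
cvs update

# Record your own changes in the repository:
cvs commit -m "Describe the change here"
```

Instead of locking, cvs lets everyone edit simultaneously and merges the changes at update and commit time, flagging any conflicts for manual resolution.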
You can get cvs from ftp.gnu.ai.mit.edu in /pub/gnu. At the time of this writing, the current version is cvs-1.3.tar.gz. By the time you read this, CVS 1.4 may be out, so look for cvs-1.4.tar.gz, and retrieve that version if it is there.
RCS provides complete, flexible revision control in an easy-to-use package. Like make, RCS is a software suite that any serious programmer needs to learn and use.
Thanks to Paul Eggert for reviewing this article. His comments were very useful; several of them were incorporated almost verbatim. Thanks also to Miriam Robbins for forcing me to run spell.
Arnold Robbins is a professional programmer and semi-professional author. He has been doing volunteer work for the GNU project since 1987 and working with Unix and Unix-like systems since 1981.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
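The log-searching combination described above can be written as a single command line (the pattern 'ERROR' is just an illustration):

```shell
# Find every .log file under /home and list the ones that
# contain the string "ERROR":
find /home -name '*.log' -exec grep -l 'ERROR' {} +
```

Swapping grep -l for plain grep prints the matching lines themselves rather than just the filenames.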
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide!