Typesetting with groff Macros
“In the beginning was the word.” And from the wordy primordial void there soon arose the blank page, the toner cartridge and the now ceaseless human craving for print. If you have a desire to look good in print, or just need to knock out a memo, term paper or letter to mom, you should know about groff. groff is a rich yet accessible set of document formatting tools and is available as standard equipment on every Linux system. groff can help take your words and typeset them beautifully on the printed page.
groff refers specifically to the GNU and updated version of troff (that venerable document formatting system developed for UNIX in the prehistoric era, before the Internet, compact disc and microwave popcorn). Traditional troff was first written in the early 1970s by Joseph Ossanna at Bell Labs, rewritten a few years later by Brian Kernighan and designed for the computers and typesetting equipment available at the time. The GNU version of troff—first called gtroff, now simply groff—was written in the early 1990s by James Clark. While remaining compatible with traditional troff, groff offers several key enhancements that make it easier to use and more powerful, with fewer limitations than the program it supersedes. GNU groff is actively maintained and continues to evolve. In addition to Linux and other UNIX/UNIX-like systems, ports of groff are available for most of the other platforms out there. This ubiquity and open-source freedom lets you publish and share your documents portably and freely among platforms.
Using groff's macro capabilities for generating printed output is the focus of this article. It should also be mentioned that groff serves as the formatting engine for the on-line manual pages produced by the man command. If you need a sample of the typesetting prowess of groff, simply generate a printed manual page with the -t option to man:
man -t troff >troff.man.ps
This will produce a PostScript version of the manual page for troff, which you can then view on-screen with one of the PostScript previewers (gv, mgv), print directly with a PostScript printer or print to a non-PostScript printer using a PostScript interpreter such as Ghostscript. (You should really take a look at this man page, by the way. It provides a thorough summary of all the additional features available in GNU groff, with more detail than presented here.)
groff offers all the niceties of computerized typesetting, including automatic ligatures, kerning, hyphenation and end-of-sentence spacing. groff also provides low-level control over all aspects of page layout by means of typesetting commands embedded into an otherwise plain text file. Most often these commands—or, in groff parlance, requests—are specified with a period in the first column of the line containing the command. For example, the following snippet of document has embedded commands for increasing the left indent and decreasing the current line length:
This is an example of a
groff document.
.in +0.5i
.ll -0.5i
When formatted by groff, the text continuing
here will appear indented by one-half inch
from both of the previous margins.
Although it is possible to format a document completely using such “raw” groff requests, it is more typical for end users to work with a collection of predefined macros that encapsulate sequences of raw requests into single commands. For example, if we wanted to create a macro for the block indent commands in the previous snippet, it might look like this:
.de Bi
.in +0.5i
.ll -0.5i
..

The .de request begins the definition of our macro named Bi, and the double period on the last line marks the end. Invoking a macro within a document follows the same syntax as using a raw request (the name of the macro follows on a line with a period in the first column). Our new macro used in a document would look like:
This is another section of
my groff document.
.Bi
Oh boy, now the text continuing here
is indented from both margins!

If at some later time we want to increase the block indent to three-quarters of an inch, we need only change the macro definition. All instances of Bi throughout the document will then format with the new dimensions.
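If, say, we settle on three-quarters of an inch, the revised definition might look like the following. Only the measurements change; the macro's name, and therefore every .Bi already in the document, stays the same:

.de Bi
.in +0.75i
.ll -0.75i
..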
So far, we haven't seen a whole lot here to get excited about. One of the limitations of traditional troff is that the names of all commands, macros and other variables are limited to two characters. Two measly characters? As mentioned earlier, troff was developed in the veritable stone age of computing, when every bit mattered, and succinctness was sublime. While the developers of troff and the standard macro packages have done their best to devise naming schemes that are as mnemonic as possible within this two-character constraint, the resulting interface is about as user-friendly as 80x86 assembly language (which at least uses three characters for most of its instruction set!).
Fortunately, GNU groff eliminates this two-character naming limitation. For both the macro developer and the end user, the most significant enhancement of groff is that all names, including macros, numbers, strings, fonts and environments, can be of arbitrary length. groff also allows for the aliasing of troff commands, macros and variables to provide alternative names for existing ones. We will exploit this feature heavily throughout the rest of the article. In fact, let's begin right now by aliasing the groff alias command itself:
.als ALIAS als
We can now use this command to provide a set of longer names for other key groff commands:
.ALIAS MACRO de
.ALIAS NUMBER nr
.ALIAS STRING ds

Sure, your old-time, hard-core troff jocks will gnash their teeth at the syntactic sugar. But the rest of us will have an easier time figuring out what in Sam Hill some macro is doing when we get back to work on it after a long and pleasurable weekend—or some other lapse into real life.
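With the MACRO alias in place, we could even rewrite our earlier block-indent macro under a self-describing, arbitrary-length name. (BlockIndent here is just an illustrative name of our own choosing, not a standard macro.)

.MACRO BlockIndent
.in +0.5i
.ll -0.5i
..

A document would then invoke it with .BlockIndent on a line by itself, exactly as it would invoke any two-character macro.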