Unix Programming Tools
Author: Eric F. Johnson
Publisher: M&T Books
Price: US $34.95
Reviewer: Andrew L. Johnson
Although the front and back covers of Unix Programming Tools and the author's introduction seem to promise in-depth coverage, this book is really an introduction. In fact, the discrepancy between what is actually covered and the implied coverage is my major gripe—but I'll take that up later. First, let's run through what a reader can expect.
The book itself is divided into three major sections: “Building Programs” (Chapters 1-6), “Maintaining Programs” (Chapters 7-11) and “Documenting Your Work” (Chapters 12-13).
Chapter One gives a very brief introduction to Unix, including some basic shell commands and utilities. Chapter Two whisks you through the process of compiling and linking C and C++ programs and creating libraries. For anything beyond the basics, you'll need to consult the man pages. An example is given for creating and running a Java program, and Perl and Tcl are briefly discussed.
Chapter Three outlines the basics of using make to automate the build process. There is enough information here for the newcomer to begin creating and using their own Makefiles. The commands imake and xmkmf are given brief treatment, but not enough for the neophyte to begin comfortably using them.
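The level of make coverage on offer could be summarized by a Makefile along these lines (target and file names are hypothetical):

```make
# Hypothetical two-file C project; make rebuilds only what has changed.
CC = gcc
CFLAGS = -Wall -O2

hello: main.o greet.o
	$(CC) $(CFLAGS) -o hello main.o greet.o

main.o: main.c
	$(CC) $(CFLAGS) -c main.c

greet.o: greet.c
	$(CC) $(CFLAGS) -c greet.c

clean:
	rm -f hello *.o
```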
In Chapter Four, “Working with Text Files”, you are shown how to invoke vi or Emacs on text files, and a few tables of common editing commands are provided for each editor. A handful of graphical editors are mentioned and a few screen-shots of these editors are provided. The chapter finishes with passing mention of sed, awk and Perl.
Chapter Five is primarily an introduction to the grep and find commands, though again, the coverage is limited. The examples of using grep all focus on finding literal text. Regular expressions are mentioned and a few character classes are shown, but with the caveat that such expressions are beyond most grep usage.
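To give a sense of the gap: the chapter's examples stay at the level of the first grep below, while even a modest character-class pattern of the second kind is waved off (sample file and contents invented here):

```shell
# Illustrative only; the file name and data are made up.
printf 'alpha 1\nbeta 22\ngamma 3\n' > notes.txt
# Literal search -- the style most of the chapter's examples use:
grep 'beta' notes.txt
# A character-class regular expression of the kind the chapter only hints at:
grep '[0-9][0-9]' notes.txt        # lines containing two adjacent digits
# find: locate files by name under the current directory:
find . -name 'notes.txt'
```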
The sixth and final chapter of the first section covers installation. Here you are introduced to tar, shar, split, uuencode, compress and gzip. The install program is also mentioned as an alternative to tar in some situations.
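The tar-and-gzip workflow the chapter describes amounts to a few commands; a sketch of packing a source tree for distribution might be (directory and file names hypothetical):

```shell
# Pack a (made-up) source directory into a compressed archive.
mkdir -p myprog
printf 'int main(void){return 0;}\n' > myprog/main.c
tar cf myprog.tar myprog     # archive the directory
gzip -f myprog.tar           # compress it to myprog.tar.gz
tar tzf myprog.tar.gz        # list the archive's contents
```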
Section Two begins with Chapter Seven looking at the debuggers: dbx, gdb and xdb. You are shown how to compile with debugging turned on and how to get a stack trace, set a breakpoint and print variable values. A few graphical front ends are mentioned with screen-shots, and the C program checker lint and a few memory-checking utilities are mentioned. The chapter finishes with a quick run-through of the Java debugger jdb.
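The debugger material covers roughly the handful of commands in a session like this sketch (program name hypothetical; shown for illustration, not from the book):

```
$ gcc -g -o hello hello.c    # compile with debugging symbols
$ gdb hello
(gdb) break main             # set a breakpoint
(gdb) run
(gdb) backtrace              # show the stack trace
(gdb) print argc             # print a variable's value
(gdb) quit
```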
Chapter Eight offers basic information on diff and related programs, and instructions on using the patch program.
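The diff/patch material is at roughly this level (files and contents invented here); diff records a change as a unified diff, and patch would apply that diff to another copy of the file:

```shell
# Hypothetical files; diff records the change in a reusable form.
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\n2\nthree\n'   > new.txt
diff -u old.txt new.txt > change.diff || true   # diff exits 1 when files differ
cat change.diff
# To apply the change to another copy (not run here): patch old.txt < change.diff
```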
Chapter Nine, on version control, is probably the best chapter in the book. This chapter begins with the barest essentials of using RCS and moves on to list the important commands and uses of RCS. Although the author gives the impression that RCS sub-directories must be used to work with RCS, the coverage is more than enough for a newcomer to begin applying version control to their projects.
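The essentials taught here fit in a short session like this sketch (RCS is not installed everywhere, so the commands are shown for illustration only; the file name is made up):

```
$ mkdir RCS          # the book works from an RCS subdirectory
$ ci -u main.c       # check in a first revision, keep a working copy
$ co -l main.c       # check out with a lock for editing
$ rcsdiff main.c     # diff the working file against the last revision
$ ci -u main.c       # check in the change with a log message
$ rlog main.c        # show the revision history
```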
The remaining two chapters in Section Two very briefly discuss cross-platform development and using prof and gprof to check program performance. These chapters, like earlier ones, provide neither breadth nor depth in their coverage.
The final section of the book is on documentation, with Chapter 12 focusing on man pages and Chapter 13 on documentation in HTML format. In Chapter 12 you are shown the basic formatting commands for creating a man page and given an example that can be used as a template.
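The man-page template amounts to a handful of man macros; a minimal sketch along the chapter's lines might be (program name and text hypothetical):

```nroff
.TH MYPROG 1 "Month Year" "Local" "User Commands"
.SH NAME
myprog \- one-line description of the program
.SH SYNOPSIS
.B myprog
.RI [ options ] " file"
.SH DESCRIPTION
.B myprog
does something useful; this template is purely illustrative.
.SH SEE ALSO
.BR ls (1)
```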
Chapter 13 appears more focused on source code documentation and shows how one can use the tools cocoon and cxref to produce HTML formatted documentation from C++ header files or C source and header files respectively—as long as a specialized format for comments is used in those files. A similar tool for Java programs, javadoc, is also introduced in this chapter.
The chapter ends with a discussion of POD documentation for Perl scripts, but as POD is really a method for easily generating man pages, the discussion would have fit better in the previous chapter. Notably lacking in Chapter 13's discussion of documentation is any mention at all of Literate Programming techniques and tools. While Literate Programming is not mainstream, I see this as an unfortunate omission of a powerful set of tools and techniques.
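For readers unfamiliar with POD: it is plain markup embedded in (or alongside) a Perl script, which tools such as pod2man turn into man pages. A tiny invented example:

```pod
=head1 NAME

myscript - a hypothetical script with embedded POD

=head1 SYNOPSIS

  myscript [options] file

=head1 DESCRIPTION

Running pod2man on this file produces a man page, which is
why the material sits more naturally beside Chapter 12.

=cut
```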
As mentioned above, there is a definite discrepancy between what the covers of the book and the author's introduction claim, and what is actually covered in the book. The best example of this is the statement on the front cover: “Covers Perl, Tcl, Java, Emacs, make, sed, awk, grep, C, C++ and more.” In actuality, awk appears just twice in the book—once on page 110: “awk is another text file tool, although it's mostly used for creating files or reports on data kept in files.”; and once in the index, referencing page 110. sed gets three full sentences, also on page 110. Perl's coverage is limited to showing how to invoke perl on a script, or how to use the #! notation to create an executable script. Notably absent is any mention of Perl's built-in interactive debugging environment.
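For contrast, the kind of one-line report that quoted sentence alludes to takes only this much awk (the data file here is invented):

```shell
# A tiny awk report of the sort the book's single sentence describes.
printf 'widgets 3 2.50\ngadgets 5 1.25\n' > sales.txt
# Sum quantity * price across all lines and print a total:
awk '{ total += $2 * $3 } END { printf "total: %.2f\n", total }' sales.txt
```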
On the back cover you are told that you will find out how to “get the most out of your text editor”. However, in reality, only a basic introduction to vi and Emacs is provided in Chapter Four. In subsequent chapters, there are occasional passages on integrating these editors (mainly Emacs) with some of the other tools, but don't expect to get the most out of either of these editors with just the information contained in this book.
In the introduction, the author suggests that the book will be useful to newcomers and “hard-core UNIX developers”, and that this book will cover all the “nitty-gritty details”. However, in the summary of Chapter Two, he gives us the following description: “Well, that's the whirlwind tour of creating C, C++, Java, Perl and Tcl programs on Unix.” “Whirlwind Tour” is an apt description for this book's coverage of Unix programming tools.
Most of the tools discussed in this book are available on the included CD-ROM, but most of them already ship with typical Linux CD-ROM distributions or are easily obtained from Linux archives. A similar level of introduction to many of the tools in this book can also be found in some of the introductory Linux books (see Other Resources), which have the added benefit of providing greater detail on the Unix/Linux environment in general.
For those wanting better coverage of the major programming tools, as well as an exploration of some programming issues in the Unix/Linux environment (such as terminal programming, sockets, semaphores, pipes, data management and more), I would suggest Beginning Linux Programming (see Other Resources), which, as its title suggests, is suitable for those new to Unix programming, but offers far more information than the book reviewed here.
Andrew is working on his Ph.D. in Physical Anthropology. He currently resides in Winnipeg, Manitoba with his wife and two sons, where he runs a small consulting business and enjoys a good dark ale whenever he can. He can be reached at firstname.lastname@example.org.