Programming with GNU Software
Chapter 4 begins with an overview of the compilation process, including the preprocessor, translator/optimizer, assembler and linker, plus the many other tools (such as the libraries) that take part in the software development process. There are many compiler options, optimization levels and intermediate file formats you can use. Like the rest of the book, this chapter does not attempt to be a comprehensive reference. Instead, it does a good job of discussing the most frequently used commands and options, and adds tips that even power users will appreciate. The chapter ends with an introduction to cross-compilers and the requirements for building your own libraries in a cross-development environment. A table outlines the large number of host and target systems supported by the Cygnus libraries, as well as the output formats, such as a.out, COFF and ELF, that can be generated.
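The stages the chapter walks through can be seen individually by stopping GCC after each one. Here is a minimal sketch, assuming gcc is installed (the file names are arbitrary; the commands are skipped if gcc is missing):

```shell
# A trivial source file to walk through the stages.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF

if command -v gcc >/dev/null; then
  gcc -E hello.c -o hello.i   # preprocessor only: expand #include and macros
  gcc -S hello.i -o hello.s   # compile/optimize: preprocessed C to assembly
  gcc -c hello.s -o hello.o   # assemble: assembly to object code
  gcc hello.o -o hello        # link against the C library
  ./hello
fi
```

Each intermediate file (.i, .s, .o) is one of the formats the chapter describes; normally gcc runs all four stages in one invocation.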
Chapter 5 continues with more details about using the C and C++ libraries, and what is needed to support the system interface to Unix- or POSIX-like systems. So if you are interested in porting Linux to the latest 64-bit PDA or the new WebTV your aunt just bought on sale, this is the place to learn what it takes. Keep in mind that these two chapters are not about the C or C++ languages or how to write programs; many other books are more useful as learning aids (see the Resources sidebar). Chapter 5 ends with a brief discussion of library licensing issues.
The GNU debugging tool, gdb, is an interactive shell with its own commands, history (previously executed commands) and editor (Emacs-like, of course). The basic idea is that you can control and examine the internal workings of an executing process, and interact with its source code and variables. The coverage of gdb here is extensive, and I have not seen a good gdb tutorial in any other reference. This coverage alone could be worth the price of the book.
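To give a flavor of the kind of session the book teaches, here is a hedged sketch that compiles a toy program with debug symbols and drives gdb non-interactively; the file name and the particular commands are my own illustration, not taken from the book, and the gdb step is skipped if gdb or a compiler is not installed:

```shell
cat > demo.c <<'EOF'
#include <stdio.h>
int main(void) { int x = 42; printf("%d\n", x); return 0; }
EOF

if command -v gdb >/dev/null && command -v cc >/dev/null; then
  cc -g -o demo demo.c   # -g embeds the debug symbols gdb needs
  # Stop at main, execute one source line, and inspect a variable,
  # all in batch mode rather than at the interactive (gdb) prompt:
  gdb --batch -ex 'break main' -ex 'run' -ex 'next' -ex 'print x' demo
fi
```

In an interactive session you would type the same break, run, next and print commands at the (gdb) prompt, with command history and editing available.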
The make utility builds programs from multiple sources, compiling only the files in need of updating, based on the date stamps and dependencies for each file. It is fairly easy to write simple dependencies so that if an include file is changed, for example, only the files that use it are recompiled; automating these steps saves time when building a new executable program. The make utility has been around a long time and has become very sophisticated, and GNU make is one of the most comprehensive implementations. The coverage in Chapter 7 is brief, but it is an excellent tutorial introduction that covers both basic and advanced features. For more in-depth coverage, see the O'Reilly book on make.
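The include-file scenario above can be sketched with a three-file project; the file names are invented for illustration, and compiling each .c into a .o is left to GNU make's built-in implicit rule:

```shell
# Three tiny source files: a header and the two .c files that include it.
printf 'int add(int a, int b);\n' > util.h
printf '#include "util.h"\nint add(int a, int b) { return a + b; }\n' > util.c
printf '#include <stdio.h>\n#include "util.h"\nint main(void) { printf("%%d\\n", add(2, 3)); return 0; }\n' > main.c

# A recipe line must begin with a tab, so write the Makefile with printf.
printf 'adder: main.o util.o\n\tcc -o adder main.o util.o\nmain.o util.o: util.h\n' > Makefile

if command -v make >/dev/null && command -v cc >/dev/null; then
  make          # first run: compiles both sources, then links
  touch util.h  # pretend the header changed
  make          # both .o files list util.h as a dependency, so both are rebuilt
fi
```

The last line of the Makefile is the dependency described above: because both object files list util.h as a prerequisite, touching the header causes only the files that use it to be recompiled on the next run.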
The RCS revision control system is a tool to manage the versions of a program as it evolves over time. GNU make is aware of RCS and can automatically use the current revisions. Again, the coverage is brief but presents enough of the basics for you to start using it; further details are available in other works from O'Reilly and the FSF.
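A minimal, hypothetical RCS round trip looks like the following; it assumes the rcs package (which provides ci and co) is installed, and skips the demonstration otherwise:

```shell
printf 'int main(void) { return 0; }\n' > tracked.c

if command -v ci >/dev/null && command -v co >/dev/null; then
  mkdir -p RCS                       # ci stores the ,v revision files here if present
  ci -t-'demo file' -u tracked.c     # check in revision 1.1, keep a read-only copy
  co -l tracked.c                    # check the file back out, locked for editing
fi
```

After the check-in, make can retrieve the current revision automatically when the working file is missing, which is the integration mentioned above.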
There are two tools for timing and profiling your programs: time and gprof. The time command is built into the bash shell and is similar to the timex command in other shells. It simply gives the elapsed execution time of a program as a whole, broken down into user and system time, with a few additional system details. The gprof tool is a report generator that provides detailed information on where your program is spending its time. It can give either a flat, one-dimensional profile or a two-dimensional accounting that follows the call graph of your program. The call graph starts with the main() function and gives an execution breakdown for every function called, including both the time spent and the number of times the function is called. It even handles recursive programs. Learning to use gprof is one of the best ways to improve the performance of your programs. Coverage of this important tool has not been easily available elsewhere and is another reason this book is a valuable resource.
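The basic gprof workflow is to compile with profiling enabled, run the program to collect data, then ask gprof for a report. A hedged sketch follows; the source file is a stand-in for your own program, the profiling steps are skipped if gcc or gprof is missing, and in bash you could prefix the run with `time ./myprog` to also get the real/user/sys split described above:

```shell
cat > myprog.c <<'EOF'
#include <stdio.h>
static long spin(long n) { long s = 0; for (long i = 0; i < n; i++) s += i; return s; }
int main(void) { printf("%ld\n", spin(50000000)); return 0; }
EOF

if command -v gcc >/dev/null && command -v gprof >/dev/null; then
  gcc -pg -O0 -o myprog myprog.c   # -pg adds the profiling instrumentation
  ./myprog                         # running it writes gmon.out in the current directory
  gprof -b myprog gmon.out         # -b: flat profile plus call graph, without the legend
fi
```

The flat profile is the one-dimensional report; the call-graph section is the two-dimensional accounting, showing time and call counts for each function reached from main().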
What is particularly good about this book is the combination of an excellent tutorial style that makes it easy to get started, and depth that cuts to the important topics in each subject. Even if you are already experienced with C/C++ programming using Unix tools, you will find many useful tips. At only about 250 pages, the coverage is brief, and the one thing I might wish for is a more complete reference. For that we will have to turn elsewhere, such as to the info pages and the references listed below. I'm sure this book will be a valuable reference for me for some time. The authors, Mike Loukides and Andy Oram, are senior technical editors with O'Reilly and have done an excellent job that rises well above the average for software documentation.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality is why UNIX system administrators always seem to have the right tool for the job.
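The find-plus-grep combination described above can be sketched as follows; the example uses a scratch directory with invented file names instead of the real /home, so it is safe to try anywhere:

```shell
# Build a small stand-in for /home with two users and two log files.
mkdir -p scratch/home/alice scratch/home/bob
echo 'ERROR: disk full' > scratch/home/alice/app.log
echo 'all quiet'        > scratch/home/bob/app.log

# Find every .log file under the tree, then search each one for a
# particular entry; -H prefixes each match with its file name.
find scratch/home -name '*.log' -exec grep -H 'ERROR' {} +
```

Against the real system you would point find at /home itself; the `-exec … {} +` form hands batches of found files to a single grep invocation.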
Cron traditionally has been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, briefly describing how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Tech Tip: Really Simple HTTP Server with Python
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Doing for User Space What We Did for Kernel Space
- Returning Values from Bash Functions
- Rogue Wave Software's Zend Server
- Google's SwiftShader Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide