man make: a Primer on the Make Utility
$?— evaluates to the list of components that are younger than the current target. Can be used only in description file command lines (see the example after this list).
$@— evaluates to the current target name. Can be used only in description file command lines.
$$@— also evaluates to the current target name. However, it can be used only on dependency lines.
$<— the name of the related file that caused the action (the precursor to the target). This is only for suffix rules.
$*— the shared prefix of the target and dependent—only for suffix rules.
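As a quick illustration, here is a sketch of how $@ and $? might appear together in a rule for maintaining an archive (the library name libfoo.a and the object files are placeholder names, not anything prescribed by make):

# $@ expands to the target, libfoo.a; $? expands to only those
# object files that are newer than the archive, so ar updates
# just the out-of-date members
libfoo.a: alpha.o beta.o
	ar r $@ $?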
Common Variables for C++ Programming
CC— the name of the compiler.
DEBUG— the debugging flag. This is -g in both g++ and cxx. The purpose of the flag is to include debugging information into the executable, so that you can use utilities like gdb to debug the code.
LFLAGS— the flags used in linking. As it turns out, you don't need any special flags for linking. The option listed is -Wall, which tells the compiler to print all warnings. But, that's fine. We can use that.
CFLAGS— the flags used in compiling and creating object files. This includes both -Wall and -c. The -c option is needed to create object files—that is, .o files. (A short makefile pulling these variables together follows.)
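For instance, a minimal C++ makefile built from these variables might look something like this (main.cpp and the target name prog are placeholders):

CC = g++
DEBUG = -g
CFLAGS = -Wall -c $(DEBUG)
LFLAGS = -Wall $(DEBUG)

# link step: $(CC) expands to g++, $(LFLAGS) adds warnings and
# debugging information
prog: main.o
	$(CC) $(LFLAGS) main.o -o prog

# compile step: the -c in $(CFLAGS) produces the .o file
main.o: main.cpp
	$(CC) $(CFLAGS) main.cpp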
In certain situations, you will find that the rules for a certain file type are identical except for the filename. For instance, a lot of times in a C project, you will see rules like this:
file.o: file.c
	cc -c -O -Wall file.c
because for every .c file, you need to make the intermediate .o file, so that the end binary then can be built. Suffix rules are a way of minimizing the amount of time you spend writing out rules and the number of rules in your makefile. In order to use suffix rules, you need to tell make which file suffixes are considered significant (suffix rules won't work unless the suffix is defined this way), then write the generic rule for the suffixes. In the case described above, you would do this:
.SUFFIXES: .o .c
.c.o:
	cc -c -O -Wall $<
You may note that in the case of suffix rules, the dependency suffix goes before the target suffix, which is a reversal from the normal order in a makefile. You also will see that you use $< in the command, which evaluates to the .c filename associated with the .o file that triggered the rule.
There are a couple of predefined variables like this that are used exclusively for suffix rules:
$<— evaluates to the component that is being used to make the target—that is, file.c.
$*— evaluates to the filename part (without any suffix) of the component that is being used to make the target—that is, file.
Note that the $? variable cannot occur in suffix rules, but $@ still will work. The rule below shows $* at work.
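For example, you could rewrite the earlier suffix rule to name its output explicitly (this variant is only a sketch of the same rule):

.SUFFIXES: .o .c
# for file.o built from file.c, $* expands to "file" and $< to "file.c"
.c.o:
	cc -c -O -Wall -o $*.o $<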
Command Special Characters
Certain characters can be used in conjunction with commands to alter the behavior of make or the command. If you're familiar with shell scripting, you'll recognize that \ signifies a line continuation. That is to say, using \ means that the command isn't finished and continues on the next line. Nobody likes looking at a messy file, and using this character at the end of a line helps keep your makefile clean and pretty. If a rule has more than one command, use a semicolon to separate commands. You can start a command with a hyphen, and make will ignore any errors that occur from the command. If you want to suppress the output of a command during execution, start the command with an at sign (@). Using these symbols will allow you to make a more usable and readable makefile.
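A short (and entirely hypothetical) clean-up rule shows all four of these characters at work:

# @ hides the echo command itself, - tells make to ignore any error
# from rm, ; separates two commands on one line, and \ continues a
# long command onto the next line
clean:
	@echo "Removing generated files"
	-rm -f core; rm -f *.bak
	rm -f *.o \
		prog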
Sometimes, you need more control over how the makefile is read and executed. Directives are designed exactly for that purpose.
From defining, overriding or exporting variables to importing other makefiles, these directives are what make a more robust makefile possible. The most useful of the directives, though, are the conditional directives.
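For example, the include and override directives might be used like this (config.mk is a hypothetical fragment; make will complain if it doesn't exist):

# pull in shared settings from another makefile
include config.mk

# force -Wall on even if CFLAGS was set on the command line
override CFLAGS += -Wall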
Conditional directives allow you to define multiple versions of a command based on preexisting conditions. For example, say you have a set of libraries you want included in your binary only if the compiler used is gcc:
libs_for_gcc = -lgnu
normal_libs =

foo: $(objects)
ifeq ($(CC),gcc)
	$(CC) -o foo $(objects) $(libs_for_gcc)
else
	$(CC) -o foo $(objects) $(normal_libs)
endif
In this example, you use ifeq to check if CC equals gcc and, if it does, use the gcc libraries; otherwise, use the generic libraries.
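You could pick the branch at run time by setting CC on the command line (a hypothetical invocation, assuming the objects exist):

$ make foo CC=gcc    # links with $(libs_for_gcc)
$ make foo CC=cc     # links with $(normal_libs)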
This is just a small, basic sampling of the things you can do with make and makefiles. There are so many more complex and interesting things you can do; you just have to dig around to find them!
GNU make comes with most Linux distributions by default, but it can be found on the main GNU FTP server: http://ftp.gnu.org/gnu/make (via HTTP) and ftp://ftp.gnu.org/gnu/make (via FTP). It also can be found on the GNU mirrors at http://www.gnu.org/prep/ftp.html.
Documentation for make is available on-line at http://www.gnu.org/software/make/manual, as is documentation for most GNU software. You also can find more information about make by running man make, or by looking at /usr/doc/make/, /usr/local/doc/make/ or similar directories on your system. A brief summary is available by running make --help.