man make: a Primer on the Make Utility
Phony targets (also called dummy or pseudo-targets) are not real files; they
simply are aliases within the makefile. As I mentioned before, you can specify
targets from the command line, and this is precisely what phony targets are
used for. If you're familiar with the process of using make to build
applications on your system, you're familiar with
install (which installs the
application after compiling the source) or clean (which cleans up the
temporary files created while compiling the source). These are two examples
of phony targets. Obviously, there are no "install" or
"clean" files in the
project; they're just aliases to a set of commands set aside to complete some
task not dependent on the modification time of any particular file in the
project. Here is an example of using a "clean" phony target:
clean:
	-rm *.o my_bin_file
Some special targets are built into make. These targets hold special meaning, and they modify the way make behaves during execution:
.PHONY — this target signifies which other targets are phony targets. If a target is listed as a dependency of .PHONY, make skips the check on whether a file by that name exists and is up to date, and simply runs the commands. This is useful if your project ever actually produces a file named the same as a phony target; without .PHONY, make would see that file as up to date and never run your commands.
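As a quick sketch, here is how the clean rule from above, plus a hypothetical install rule, might be declared as phony:

.PHONY: clean install

clean:
	-rm *.o my_bin_file

# Hypothetical install rule; the destination directory is only an example.
install:
	cp my_bin_file /usr/local/bin/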
.SUFFIXES — the dependency list of this target is a list of the established file suffixes for this project. This is helpful when you are using suffix rules (discussed later in this article).
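As a minimal sketch (the .c and .o suffixes here are just an assumed example), a makefile might clear the built-in list and then declare only the suffixes it actually uses:

# Clear the default suffix list, then declare this project's suffixes.
.SUFFIXES:
.SUFFIXES: .c .o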
.DEFAULT — if you have a bunch of targets that use the same set of commands, you may consider using the .DEFAULT target. It is used to specify the commands to be executed when no rule is found for a target.
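A minimal sketch: if make is asked for a target it has no rule for, it falls back to these commands (the message here is just a placeholder):

.DEFAULT:
	@echo "No rule for $@, running the default commands"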
.PRECIOUS — all dependencies of the .PRECIOUS target are preserved should make be killed or interrupted.
.INTERMEDIATE — specifies which targets are intermediate, or temporary, files. Upon completion, make will delete all intermediate files before terminating.
.SECONDARY — this target is similar to .INTERMEDIATE, except that these files will not be deleted automatically upon completion. If no dependencies are specified, all files are considered secondary.
.SECONDEXPANSION — after the initial read-in phase, anything listed after this target will be expanded for a second time. So, for example:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: $(ONEVAR) $$(TWOVAR)
will expand to:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: onefile $(TWOVAR)
after the initial read-in phase, but because I specified .SECONDEXPANSION, it will expand everything following a second time:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: onefile twofile
I'm not going to elaborate on this here, because this is a rather complex subject and outside the scope of this article, but you can find all sorts of .SECONDEXPANSION goodness out there on the Internet and in the GNU manual.
.DELETE_ON_ERROR — this target will cause make to delete a target if it has changed and any of the associated commands exit with a nonzero status.
.IGNORE — if an error is encountered while building a target list as a dependency of .IGNORE, it is ignored. If there are no dependencies to .IGNORE, make will ignore errors for all targets.
.LOW_RESOLUTION_TIME — for some reason or another, if you have files that will have a low-resolution timestamp (missing the subsecond portion), this target allows you to designate those files. If a file is listed as a dependency of .LOW_RESOLUTION_TIME, make will compare times only to the nearest second between the target and its dependencies.
.SILENT — this is a legacy target that stops make from echoing the commands it runs. It is suggested that you use command echoing (discussed in the Command Special Characters section) or the -s flag on the command line instead.
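For comparison, a minimal sketch of the per-command approach: prefixing an individual command with @ suppresses the echoing of just that line, instead of silencing the whole makefile the way .SILENT does:

# The echo command itself is not printed (only its output); the rm line still is.
clean:
	@echo "Cleaning up..."
	-rm *.o my_bin_file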
.EXPORT_ALL_VARIABLES — tells make to export all variables to any child processes created.
.NOTPARALLEL — although make can run simultaneous jobs in order to complete a task faster, specifying this target in the makefile will force make to run serially.
.ONESHELL — by default, make will invoke a new shell for each command it runs. This target causes make to use one shell per rule.
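A minimal sketch of where that matters, assuming a hypothetical src directory: with .ONESHELL, a cd on one line still applies to the lines after it, because the whole recipe runs in a single shell:

.ONESHELL:
list-sources:
	cd src
	# Still inside src here, since the entire recipe is one shell invocation.
	ls *.c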
.POSIX — with this target, make is forced to conform to POSIX standards while running.
In other versions of make, variables are called macros, but in the GNU version (which is the version you likely are using), they are referred to as variables, which I personally feel is a more appropriate name. Nomenclature aside, variables are a convenient way to store information that may be used multiple times throughout the makefile. Their value becomes abundantly clear the first time you write a makefile and then realize that you forgot a command flag for your compiler in all 58 rules you wrote. If I had used a variable for my compiler flags, I'd have had to change it only once instead of 58 times. Lesson learned. Set these at the beginning of your makefile before any rules. Simply use:
VARNAME = information stored in the variable
to set the variable, and use $(VARNAME) to invoke it throughout the
makefile. Any shell variables that existed prior to calling make will exist
within make as variables and, thus, can be invoked the same way. You
can specify a variable from the command line as well. Simply add it to the
end of your make command, and it will be used within the make execution.
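Putting these pieces together, here is a short sketch (the compiler, flags and file names are assumptions for illustration, not anything make prescribes):

CC = gcc
CFLAGS = -Wall -O2

# The built-in %.o: %.c rule also picks up $(CC) and $(CFLAGS).
my_bin_file: main.o util.o
	$(CC) $(CFLAGS) -o my_bin_file main.o util.o

Running make CFLAGS="-Wall -g" would then override the CFLAGS assignment above for that one invocation, because variables set on the command line take precedence over those set in the makefile.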
If, at some point, you need to alter the data stored in a variable temporarily, there is a very simple way to substitute in this new data without overwriting the variable. It's done using the following format:
$(VARNAME:find=replace)
where find is the substring you are trying to find, and replace is the string to replace it with. So, for instance:
LETTERS = abcxyz xyzabc xyz

print:
	echo $(LETTERS:xyz=def)
will produce the output
abcdef xyzabc def.
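This substitution form is especially handy for deriving one file list from another. A small sketch, assuming a hypothetical list of source files:

SRCS = main.c util.c
# Swaps the .c suffix for .o in each word, yielding "main.o util.o".
OBJS = $(SRCS:.c=.o)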