man make: a Primer on the Make Utility
Phony targets (also called dummy or pseudo-targets) are not real files; they
simply are aliases within the makefile. As I mentioned before, you can specify
targets from the command line, and this is precisely what phony targets are
used for. If you're familiar with the process of using make to build
applications on your system, you're familiar with install (which installs the
application after compiling the source) or clean (which cleans up the
temporary files created while compiling the source). These are two examples
of phony targets. Obviously, there are no "install" or "clean" files in the
project; they're just aliases for a set of commands set aside to complete some
task not dependent on the modification time of any particular file in the
project. Here is an example of a "clean" phony target:
clean:
	-rm *.o my_bin_file
Some special targets are built in to make. They hold special meaning and modify the way make behaves during execution:
.PHONY — this target signifies which other targets are phony targets. If a target is listed as a dependency of .PHONY, the check to ensure that the target file was updated is not performed. This is useful if at any time your project actually produces a file named the same as a phony target; this check always will fail when executing your phony target.
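To illustrate the check described above, here is a small sketch (the directory and filenames are invented for the example) showing that a real file named "clean" does not block a target declared in .PHONY:

```shell
# Sketch: a real file named "clean" would normally make the target appear
# up to date; declaring it in .PHONY forces the recipe to run anyway.
mkdir -p /tmp/phony-demo && cd /tmp/phony-demo
touch clean                                     # file shadowing the target
printf '.PHONY: clean\nclean:\n\t@echo cleaning\n' > Makefile
make clean                                      # prints "cleaning"
```

Without the .PHONY line, make would report that 'clean' is up to date and skip the recipe entirely.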
.SUFFIXES — the dependency list of this target is a list of the established file suffixes for this project. This is helpful when you are using suffix rules (discussed later in this article).
.DEFAULT — if you have a bunch of targets that use the same set of commands, you may consider using the .DEFAULT target. It is used to specify the commands to be executed when no rule is found for a target.
.PRECIOUS — all dependencies of the .PRECIOUS target are preserved should make be killed or interrupted.
.INTERMEDIATE — specifies which targets are intermediate, or temporary, files. Upon completion, make will delete all intermediate files before terminating.
.SECONDARY — this target is similar to .INTERMEDIATE, except that these files will not be deleted automatically upon completion. If no dependencies are specified, all files are considered secondary.
.SECONDEXPANSION — after the initial read-in phase, anything listed after this target will be expanded for a second time. So, for example:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: $(ONEVAR) $$(TWOVAR)
will expand to:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: onefile $(TWOVAR)
after the initial read-in phase, but because I specified .SECONDEXPANSION, it will expand everything following a second time:
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: onefile twofile
I'm not going to elaborate on this here, because this is a rather complex subject and outside the scope of this article, but you can find all sorts of .SECONDEXPANSION goodness out there on the Internet and in the GNU manual.
.DELETE_ON_ERROR — this target will cause make to delete a target if it has changed and any of the associated commands exit with a nonzero status.
.IGNORE — if an error is encountered while building a target list as a dependency of .IGNORE, it is ignored. If there are no dependencies to .IGNORE, make will ignore errors for all targets.
.LOW_RESOLUTION_TIME — for some reason or another, if you have files that will have a low-resolution timestamp (missing the subsecond portion), this target allows you to designate those files. If a file is listed as a dependency of .LOW_RESOLUTION_TIME, make will compare times only to the nearest second between the target and its dependencies.
.SILENT — this is a legacy target that suppresses the echoing of commands as they are executed. It is suggested that you instead suppress echoing per command with the @ prefix (discussed in the Command Special Characters section) or by using the -s flag on the command line.
.EXPORT_ALL_VARIABLES — tells make to export all variables to any child processes created.
.NOTPARALLEL — although make can run simultaneous jobs in order to complete a task faster, specifying this target in the makefile will force make to run serially.
.ONESHELL — by default, make will invoke a new shell for each command it runs. This target causes make to use one shell per rule.
.POSIX — with this target, make is forced to conform to POSIX standards while running.
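As a quick sketch of one of these special targets in action, the following (directory and target names are invented) shows .ONESHELL keeping a shell variable alive across recipe lines:

```shell
# Sketch of .ONESHELL: every line of a recipe runs in the same shell, so a
# shell variable set on one line is still visible on the next.
mkdir -p /tmp/oneshell-demo && cd /tmp/oneshell-demo
printf '.ONESHELL:\nshow:\n\t@x=hello\n\techo $$x\n' > Makefile
make show    # prints "hello"; without .ONESHELL, x would be empty here
```

By default, each of those two recipe lines would get its own shell, and the variable assignment would be lost before the echo runs.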
In other versions of make, variables are called macros, but in the GNU version (which is the version you likely are using), they are referred to as variables, which I personally feel is a more appropriate title. Nomenclature aside, variables are a convenient way to store information that may be used multiple times throughout the makefile. Their value becomes abundantly clear the first time you write a makefile and then realize that you forgot a command flag for your compiler in all 58 rules you wrote. If I had used a variable to designate my compiler flags, I'd have had to change it only once instead of 58 times. Lesson learned. Set variables at the beginning of your makefile, before any rules. Simply use:
VARNAME = information stored in the variable
to set the variable, and use $(VARNAME) to invoke it throughout the
makefile. Any shell variables that existed prior to calling make will exist
within make as variables and, thus, are invoked the same way. You can
specify a variable from the command line as well: simply add it to the
end of your make command, and it will be used within the make execution.
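For example, here is a minimal sketch (the variable and target names are invented) of setting a variable from the command line:

```shell
# Sketch: a variable given on the command line overrides the makefile's value.
mkdir -p /tmp/var-demo && cd /tmp/var-demo
printf 'CFLAGS = -O0\nflags:\n\t@echo $(CFLAGS)\n' > Makefile
make flags              # prints "-O0", the makefile's default
make flags CFLAGS=-O2   # prints "-O2", the command-line value wins
```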
If, at some point, you need to alter the data stored in a variable temporarily, there is a very simple way to substitute in this new data without overwriting the variable. It's done using the following format:

$(VARNAME:find=replace)

where find is the substring you are trying to find, and replace is the string to replace it with. So, for instance:
LETTERS = abcxyz xyzabc xyz
print:
	echo $(LETTERS:xyz=def)

will produce the output abcdef xyzabc def.
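The same substitution can be verified from the shell; the $(patsubst ...) line is the equivalent long form, and the directory name is invented for the example:

```shell
# Sketch: the suffix-substitution reference and its patsubst equivalent.
# Only "xyz" at the end of a word is replaced, so "xyzabc" is untouched.
mkdir -p /tmp/subst-demo && cd /tmp/subst-demo
printf 'LETTERS = abcxyz xyzabc xyz\nprint:\n\t@echo $(LETTERS:xyz=def)\n\t@echo $(patsubst %%xyz,%%def,$(LETTERS))\n' > Makefile
make print    # prints "abcdef xyzabc def" twice
```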