Cross-Platform Software Development Using CMake
When looking through a large list of projects, one thing becomes apparent: a description of the build process is always stored in a group of files. These files can be simple shell scripts, Makefiles, Jam files, complex scripts built on tools such as Autoconf and Automake, or other tool-specific files.
Recently another player entered the software-building game: CMake. CMake is not itself a build system, because it drives native build tools, such as Make or even Microsoft Visual Studio. With support for numerous platforms, in-source and out-of-source builds, cross-library dependency checking, parallel building and simple configuration of header files, it significantly reduces the complexity of cross-platform software development and maintenance.
Looking at most software development projects, you are undoubtedly faced with a common problem. You have a bunch of source files, some depend on each other, and you want to make some final binary. Sometimes you want to do something more complicated, but in most cases, that is it.
So, you have this little project and you want to build it using your Linux desktop. You sit down and quickly write the following Makefile:
MyProgram: main.o
	cc -o MyProgram main.o -lm -lz

main.o: main.c main.h
	cc -c main.c
Once this file is ready, all you have to do is type make and the project is built. If any file is modified, the necessary files are rebuilt. Great, you can now congratulate yourself and go have a drink.
Except your boss comes by and says, "We just got this great new XYZ computer and you need to build the software on it." So, you copy files there, type make and receive the following error message:
cc: Command not found
You know there is a compiler on that XYZ computer and it is called cc-XYZ, so you modify the Makefile and try again. But that system does not have zlib, so you remove -lz, patch the source code, and on it goes.
As you see, the problem with the Makefile approach is that once the file is moved to a new platform, where the compiler name is not cc, where compile flags are different or even where the syntax of the compile line is different, make breaks.
As a more elaborate example of this problem, let us check our favorite compression library, zlib. Zlib is a fairly simple library, consisting of 17 C source files and 11 header files. Compiling zlib is straightforward. All you need to do is compile each C file and then link them all together. You can write a Makefile for it, but then you have to modify it on every single platform.
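A CMakeLists.txt for a library like zlib boils down to a single library target listing its sources. The sketch below names only a representative handful of zlib's C files, so treat the list as illustrative rather than complete:

```cmake
# Sketch of a CMakeLists.txt for a zlib-like library.
# The file list is abbreviated; a real build would name all 17 sources.
PROJECT(zlib C)
ADD_LIBRARY(z STATIC
    adler32.c compress.c crc32.c deflate.c
    inflate.c trees.c uncompr.c zutil.c)
```

CMake then picks the right compiler and flags for each platform, which is exactly the part that had to be edited by hand in the plain Makefile.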
Tools such as Autoconf and Automake do a good job of solving some of these problems on UNIX and UNIX-like platforms. They are, however, usually too complex. To make things worse, in most projects developers end up writing shell scripts inside Autoconf input files, and the results quickly become dependent on assumptions the developer made. Because the output of Autoconf depends on the shell, these configuration scripts do not run on platforms where the Bourne shell or another standard /bin/sh is not available. Autoconf and Automake also depend on several other tools being installed on the system.
CMake is a solution to these problems. As opposed to other similar tools, CMake makes few assumptions about the underlying system. It is written in fairly standard C++, so it should run on almost any modern platform. It does not use any other tool except the native build tools of the system.
For several platforms, such as Debian GNU/Linux, CMake is available as a standard package. For most other platforms, including UNIX, Mac OS X and Microsoft Windows, CMake binaries can be downloaded from the CMake Web site. To check whether CMake is installed, you can run the command cmake --help, which displays the version of CMake and usage information. If the location of the CMake executable is not in the system path, you can run it by specifying the full path to the executable.
Now that CMake is installed, we can use it for our projects. For this, we have to prepare the CMake input file, which is called CMakeLists.txt. For example, this is a simple CMakeLists.txt for a possible project:
PROJECT(MyProject C)
ADD_LIBRARY(MyLibrary STATIC libSource.c)
ADD_EXECUTABLE(MyProgram main.c)
TARGET_LINK_LIBRARIES(MyProgram MyLibrary z m)
Using CMake to build the project is extremely easy. In the directory containing CMakeLists.txt, supply the following two commands, where path is the path to the source code.
cmake path
make
The cmake step reads the CMakeLists.txt file from the source directory and generates appropriate Makefiles for the system in the current directory. CMake also maintains a list of all header files that objects depend on, so dependency checking is reliable. If you need to add more source files, simply add them to the list. Once the Makefiles are generated, you do not have to run CMake again, because a dependency on CMakeLists.txt is also included in the generated Makefiles. If you want to make sure that dependencies are regenerated, you can always run make depend.
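The same workflow supports the out-of-source builds mentioned earlier: run cmake from an empty directory and point it at the source tree, and all generated files stay out of the sources. Assuming the project lives in a directory named MyProject (a hypothetical path), the commands might look like this:

```
mkdir MyProject-build
cd MyProject-build
cmake ../MyProject
make
```

This keeps the source tree pristine and lets you maintain several build directories, for example one per compiler or configuration, from a single copy of the sources.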