Cross-Platform Software Development Using CMake
When looking through a large list of projects, one thing becomes apparent: the build process is almost always described by a group of files. These files can be simple shell scripts, Makefiles, Jam files, complex scripts based on tools such as Autoconf and Automake, or other tool-specific files.
Recently, another player entered the software-building game: CMake. CMake is not a build system itself; rather, it generates input for native build tools, such as Make or even Microsoft Visual Studio. With support for numerous platforms, in-source and out-of-source builds, cross-library dependency checking, parallel building and simple configuration of header files, it significantly reduces the complexity of cross-platform software development and maintenance.
Looking at most software development projects, you are undoubtedly faced with a common problem. You have a bunch of source files, some depend on each other, and you want to make some final binary. Sometimes you want to do something more complicated, but in most cases, that is it.
So, you have this little project and you want to build it using your Linux desktop. You sit down and quickly write the following Makefile:
MyProgram: main.o
	cc -o MyProgram main.o -lm -lz

main.o: main.c main.h
	cc -c main.c
Once this file is ready, all you have to do is type make and the project is built. If any file is modified, all the necessary files also are rebuilt. Great, you can now congratulate yourself and go have a drink.
Except your boss comes by and says, "We just got this great new XYZ computer and you need to build the software on it." So, you copy files there, type make and receive the following error message:
cc: Command not found
You know there is a compiler on that XYZ computer, and it is called cc-XYZ, so you modify the Makefile and try again. But that system does not have zlib, so you remove -lz, tweak the source code, and on it goes.
As you can see, the problem with the Makefile approach is that once the file is moved to a new platform, where the compiler name is not cc, where the compile flags are different or even where the syntax of the compile line is different, make breaks.
As a more elaborate example of this problem, let us consider our favorite compression library, zlib. Zlib is a fairly simple library, consisting of 17 C source files and 11 header files. Compiling zlib is straightforward: compile each C file, then link the results together. You can write a Makefile for it, but then you have to modify that Makefile on every single platform.
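What that hand-maintained Makefile looks like is instructive. The following is a sketch for a zlib-style static library, not zlib's actual Makefile; the object list is abbreviated, and the compiler name, flags and archiver invocation are precisely the parts that must be edited on each new platform:

```
# Hand-written Makefile sketch for a zlib-like static library.
# CC, CFLAGS and the "ar" line are the platform-specific parts.
CC     = cc
CFLAGS = -O2

# Abbreviated: the real zlib has 17 C source files.
OBJS = adler32.o compress.o deflate.o inflate.o zutil.o

libz.a: $(OBJS)
	ar rcs libz.a $(OBJS)

.c.o:
	$(CC) $(CFLAGS) -c $<
```

Every line that names a tool (cc, ar) or a flag is a candidate for breakage when the file moves to a new system.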
Tools such as Autoconf and Automake do a good job of solving some of these problems on UNIX and UNIX-like platforms. They are, however, usually too complex. To make things even worse, in most projects developers end up writing shell scripts inside Autoconf input files, and the results quickly become dependent on assumptions those developers made. Because the output of Autoconf depends on the shell, the resulting configure scripts do not run on platforms where the Bourne shell or another standard /bin/sh is not available. Autoconf and Automake also depend on several other tools being installed on the system.
CMake is a solution to these problems. As opposed to other similar tools, CMake makes few assumptions about the underlying system. It is written in fairly standard C++, so it should run on almost any modern platform. It does not use any other tool except the native build tools of the system.
For several platforms, such as Debian GNU/Linux, CMake is available as a standard package. For most other platforms, including UNIX, Mac OS X and Microsoft Windows, CMake binaries can be downloaded from the CMake Web site. To check whether CMake is installed, you can run the command cmake --help. This displays the version of CMake and the usage information. If the location of the CMake executable is not in the system path, you can run it by specifying the full path to the executable.
Now that CMake is installed, we can use it for our projects. For this, we have to prepare the CMake input file, which is called CMakeLists.txt. For example, this is a simple CMakeLists.txt for a possible project:
PROJECT(MyProject C)

ADD_LIBRARY(MyLibrary STATIC libSource.c)
ADD_EXECUTABLE(MyProgram main.c)
TARGET_LINK_LIBRARIES(MyProgram MyLibrary z m)
Using CMake to build the project is extremely easy. In the directory containing CMakeLists.txt, supply the following two commands, where path is the path to the source code.
cmake path
make
The cmake step reads the CMakeLists.txt file from the source directory and generates appropriate Makefiles for the system in the current directory; because CMake supports out-of-source builds, the current directory can be a separate build directory, leaving the source tree untouched. CMake also maintains a list of all header files each object file depends on, so dependency checking works correctly. If you need to add more source files, simply add them to the list in CMakeLists.txt. Once the Makefiles are generated, you do not have to run CMake again, because a dependency on CMakeLists.txt itself is included in the generated Makefiles. If you want to make sure that dependencies are regenerated, you can always run make depend.
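One feature mentioned earlier, simple configuration of header files, deserves a quick sketch. The fragment below is illustrative rather than part of the article's project; the file name config.h.in and the variable HAVE_ZLIB_H are names invented for this example:

```
# CMakeLists.txt fragment -- a sketch; config.h.in and HAVE_ZLIB_H
# are hypothetical names chosen for this example.
INCLUDE(CheckIncludeFile)
CHECK_INCLUDE_FILE(zlib.h HAVE_ZLIB_H)

# Copies config.h.in to config.h, substituting CMake variables.
CONFIGURE_FILE(${PROJECT_SOURCE_DIR}/config.h.in
               ${PROJECT_BINARY_DIR}/config.h)
```

The corresponding config.h.in would contain a line such as #cmakedefine HAVE_ZLIB_H, which CMake turns into a real #define (or a commented-out one) in the generated config.h, so the C code can test for zlib at compile time instead of someone hand-editing a Makefile on each platform.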