Compile C Faster on Linux
lcc is a small, fast C compiler now available on Linux. A perfectly good C compiler, gcc, comes with Linux. Why would anyone bother installing a second one? Because the two compilers make different tradeoffs, so they suit different stages of the development cycle. gcc has many targets and users, and it includes an ambitious optimizer. lcc is 75% smaller (more when counting source code), compiles more quickly, and helps prevent some porting bugs.
For those who have always wanted to customize or extend their compiler, our recent book, A Retargetable C Compiler: Design and Implementation, tours lcc's source code in detail and thus offers especially thorough documentation. Pointers to lcc's source code, executables, book, and authors appear at the end of this article.
lcc is fast. gcc implements a more ambitious global code optimizer, so it emits better code, particularly with full optimization options, but global optimization takes time and space. lcc implements a few low-cost, high-yield optimizations that collaborate to yield respectable code in a hurry.
For example, lcc compiles itself in 36 seconds on a 90MHz Pentium running Linux. gcc takes 68 seconds to compile the same program (the lcc source) with the default compiler options, and 130 seconds at its highest optimization level. Code quality varied less: the code gcc emitted by default took 36 seconds to reprocess this input, just like lcc's code, while gcc's best code (at optimization level 3) ran in 30 seconds, roughly 20% faster. This is only a single data point, and both compilers evolve constantly, so your mileage may vary. Naturally, one can save time by using lcc during development and switching to gcc for the optimized final release build.
Indeed, compiling code with two different compilers helps expose portability bugs. If a program is useful and its source code is available, sooner or later someone will try to port it to another machine, or compile it with another compiler, or both, and with a new machine or compiler, glitches are not uncommon. Which will net you less unwanted e-mail: finding and erasing these blots yourself while the code is fresh in your mind, or leaving the porter to face diagnostics about non-standard source code much later?
lcc follows the ANSI standard faithfully and implements no extensions. Indeed, one option directs lcc to warn about a variety of C constructs that are valid but give undefined results, and thus can behave differently on a different machine or with a different compiler. Some programmers use lcc mainly for its strict-ANSI option, which helps them keep their code portable.
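To make the point concrete, here is a small example of our own (not from lcc's sources or documentation): the assignment below is perfectly legal C syntax, yet the standard leaves its result undefined, so two correct compilers may translate it differently.

```c
#include <stdio.h>

int main(void)
{
    int i = 0;
    int a[2] = { 10, 20 };

    /* i is modified and its value used in the same expression with no
       intervening sequence point: a conforming compiler may store into
       a[0] or into a[1], and both translations are "correct". */
    a[i] = i++;

    printf("a[0]=%d a[1]=%d i=%d\n", a[0], a[1], i);
    return 0;
}
```

A compiler that flags such constructs at compile time lets you fix the bug before a port trips over it.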
Like gcc, lcc can be configured as a cross-compiler that runs on one machine and compiles code for another. Cross-compilers can simplify life for programmers with multiple target platforms. lcc takes this notion a step further than most cross-compilers: we can, and typically do, link code generators for several machines into each version of the compiler.
For example, we maintain code generators for the MIPS, SPARC, and X86 architectures. We both work on and generate code for multiple platforms, so it's handy to be able to generate code for any target from any machine. We usually fold all three code generators into all compiler executables. A run-time option tells lcc which target to generate code for. If you don't maintain code for multiple targets, you're free to use an lcc that includes just one code generator, saving roughly 50 KB for each code generator omitted.
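Routinely building for several targets pays off because C programs pick up machine dependencies silently. The fragment below (our own illustration, not lcc code) is valid C on every target mentioned above, yet it prints different answers on different machines:

```c
#include <stdio.h>

/* Target-dependent C: this program is valid everywhere, but it answers
   differently on a little-endian machine than on a big-endian one --
   the sort of hidden assumption that building and testing on several
   targets exposes. */
int main(void)
{
    unsigned int word = 0x01020304;
    unsigned char first = *(unsigned char *)&word;

    if (first == 0x04)
        printf("little-endian (X86, for example)\n");
    else
        printf("big-endian (SPARC, for example)\n");
    return 0;
}
```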
lcc is small. lcc's Linux executable with one code generator is 232 KB, and its text segment is 192 KB. Both figures for the corresponding phase of gcc (cc1) exceed a megabyte. lcc's small size contributes to its speed, especially on modest systems. A compact program benefits those who wish to modify the compiler. Most developers will use pre-built executables for lcc; they will never examine or even recompile the source code. But the Linux community particularly prizes the availability of source code, partly because it allows users to customize their programs or adapt them for other purposes.
When configured with the Linux PC code generator, lcc is 12,000 lines of C source code. gcc's root directory, without the target-specific description files, holds 240,000 lines. Some of this material is surely not part of the compiler proper, but the separation is not immediately apparent to anyone who hasn't browsed gcc's source recently. The machine-specific module is the part most often changed, because new target machines come along more often than, say, new source languages. lcc's target-specific module for the Linux PC is 1,200 lines, and half of that repeats boilerplate declarations or supports the debugger, so the actual code generator is under 600 lines. gcc's target-specific modules average about 3,000 lines. These comparisons illustrate that the two compilers embody different trade-offs and that neither beats the other at everything: gcc emits better code and offers many options, while lcc is easier to comprehend but otherwise less ambitious.
gcc and lcc use retargetable code generators driven in part by formal specifications of the target machine, just as a parser can be driven by a formal grammar of its input language. gcc's code generator is based in part on techniques that one of us (Fraser) originated in the late 1970s. lcc uses a different technique that is simpler but somewhat less flexible.
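To convey the flavor of the tree-matching approach, here is a toy instruction selector of our own devising; it is a drastic simplification, not lcc source. In lcc, the equivalent matcher is generated automatically from a compact rule file, so each case below corresponds to one rule pairing a tree pattern with an X86-style instruction template:

```c
#include <stdio.h>

/* A toy instruction selector in the spirit of a tree-pattern code
   generator.  Real specifications also attach a cost to each rule so
   the matcher can pick the cheapest covering of the tree. */

enum op { CNST, REG, ADD, MUL };

struct node {
    enum op op;
    int value;              /* CNST: the constant; REG: the register number */
    struct node *kids[2];   /* operands of ADD and MUL */
};

static int next_reg = 2;    /* r0 and r1 hold the variables in main below */

/* Emit X86-flavored assembly for a tree; return the register holding
   the result.  Each case acts as one rule: a pattern on the left of
   the comment, an instruction template on the right. */
static int emit(struct node *p)
{
    int l, r;
    switch (p->op) {
    case CNST:                      /* reg: CNST          "movl $c,r"   */
        r = next_reg++;
        printf("movl $%d, r%d\n", p->value, r);
        return r;
    case REG:                       /* reg: REG           already placed */
        return p->value;
    case ADD:                       /* reg: ADD(reg,reg)  "addl r2,r1"  */
        l = emit(p->kids[0]);
        r = emit(p->kids[1]);
        printf("addl r%d, r%d\n", r, l);
        return l;
    case MUL:                       /* reg: MUL(reg,reg)  "imull r2,r1" */
        l = emit(p->kids[0]);
        r = emit(p->kids[1]);
        printf("imull r%d, r%d\n", r, l);
        return l;
    }
    return -1;
}

int main(void)
{
    /* The tree for (a + 4) * b, with a in r0 and b in r1. */
    struct node a    = { REG,  0, { NULL, NULL } };
    struct node b    = { REG,  1, { NULL, NULL } };
    struct node four = { CNST, 4, { NULL, NULL } };
    struct node sum  = { ADD,  0, { &a, &four } };
    struct node prod = { MUL,  0, { &sum, &b } };
    emit(&prod);
    return 0;
}
```

Retargeting a compiler built this way means writing a new rule file, not rewriting the tree walker, which is why lcc's target-specific modules stay so small.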