Garbage Collection in C Programs
gctest -gu -s 128 -n 150000 -l 5
Under these conditions, the improvement of GC over malloc/free was around 20%: 0.85 vs. 0.60 seconds. Recall, though, that the GC library always clears the allocated blocks, while malloc() does not. The overhead of this clearing grows linearly with node size and thus matters more with larger nodes. To make a fair comparison, then, we need to substitute calloc() for malloc() at those points where GC_malloc() otherwise would be called, as happens in:
gctest -tuc -s 128 -n 150000 -l 5
This test yields an execution time of 0.88 seconds and brings the GC improvement to 32%. Heap expansion is greater in the GC case, with a value of 1.7 vs. 1.0. Allocation latency is practically the same for both traditional and GC management, although a larger latency variation was experienced in the latter. Enabling incremental collection (-i option) did not lower the variation, although introducing calls to GC_free() (-F option) to free the list nodes explicitly actually did yield better results than the malloc/free case, on both execution time and latency. In this case, however, we no longer are using a true garbage collection approach.
Testing even larger memory blocks makes the difference between traditional and GC memory management quite noticeable. On 4KB nodes, GC performed quite poorly in comparison with malloc/free: 2 vs. 0.85 seconds on execution time; 2.75 vs. 1 on heap expansion; and 0.7 milliseconds vs. 1.6 microseconds on latency. When compared with calloc/free performance, the execution time of GC still is quite competitive (40% faster), but the heap expansion and latency issues remain.
GC techniques often are surrounded by myths and legends. In this article, we have shown that GC actually can perform better than malloc/free. These advantages do not come for free, however, and correct use of the library requires a minimum of knowledge about its internal mechanisms.
There is no final verdict on the suitability of GC for C programs. For now, the BDW library can be one more tool in your box, to be given serious consideration the next time you deal with a complex piece of software. Several open-source projects, like the Mono Project and GNU gcj Java runtime, have been using it for a while.
Pros and Cons
Before deciding that GC is for wimps and a hard-core C programmer would never need it, it may be beneficial to consider the actual advantages GC may offer with respect to the traditional C/C++ memory management schemes. As Ellis and Stroustrup say in The Annotated C++ Reference Manual, “C programmers think memory management is too important to be left to the computer. LISP programmers think memory management is too important to be left to the user.”
That memory-related problems contain some of the most insidious and time-wasting bugs a C programmer can encounter is a well-known fact. Even an experienced programmer can have a hard time tracking down bugs caused by invalid accesses, overflowing writes, accesses to dead memory, memory leaks and the like. Furthermore, from a software design point of view, the need to prevent such situations often leads designers to unclean interfaces between modules that should be decoupled. Think of functions that return a dynamically allocated memory block that then must be freed by the caller, or that return a pointer to an internal static buffer, spoiling thread safety—strtok() is a typical example in this sense—not to mention problems arising from memory areas shared among several modules. Each module has to free such areas only if no other modules are or will be using them, resulting in a tighter coupling between the modules themselves. This problem usually is addressed by using reference counting, where an internal counter is kept for each active user of the area; multiple copies, where one copy of the area is kept for each module; or, for those who fancy C++, smart pointers.
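The reference-counting workaround mentioned above can be sketched as follows; the buffer type and function names are illustrative, not taken from any particular library:

```c
/* Minimal sketch of manual reference counting for a shared buffer:
 * a counter embedded in the block tracks the number of active users,
 * and the last one to let go frees the memory. */
#include <stdlib.h>

struct shared_buf {
    int    refcount;
    size_t len;
    char   data[];          /* flexible array member (C99) */
};

struct shared_buf *buf_new(size_t len)
{
    struct shared_buf *b = malloc(sizeof *b + len);
    if (b) {
        b->refcount = 1;    /* the creator holds the first reference */
        b->len = len;
    }
    return b;
}

struct shared_buf *buf_ref(struct shared_buf *b)
{
    b->refcount++;          /* each sharing module takes a reference */
    return b;
}

void buf_unref(struct shared_buf *b)
{
    if (--b->refcount == 0)
        free(b);            /* last user out frees the block */
}
```

Every module that stores a pointer to the buffer must remember to call buf_ref() and buf_unref() in matched pairs, which is exactly the kind of bookkeeping discipline GC is meant to remove.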
GC is an effective way of dealing with memory-related problems in C, relieving the programmer of memory accounting burdens. Each memory block knows when it is in use (actually, it is the GC engine that knows), and the block automatically disappears when it no longer is referenced. GC eliminates a priori all premature frees and memory leak problems.
GC has some drawbacks, of course, the most annoying being the feeling of having lost control that afflicts C programmers. More concretely, drawbacks stem from the automated management of resources, which translates into:
Not knowing when an unused memory block actually is freed and if it ever will be. This has further consequences when the memory block is a C++ object, whose destructor is called at an unspecified time.
Not being able to determine in advance how much time an allocation takes (allocation latency and jitter), which often is an issue for real-time systems. It must be said, though, that traditional malloc() also presents this problem sometimes. Incremental GC can help alleviate the problem and provide limited constraints to allocation times.
Longer execution time due to GC processing overhead. This often is not true, as sometimes the overhead associated with traditional free()s is equal to or even greater than that of GC. In this sense, only storage allocators written specifically for a given program might be faster. Even so, programmers often write their own memory management systems without a preliminary profiling activity, sometimes resulting in negative effects on performance.
Higher memory fragmentation (heap expansion), caused by unused objects not being freed by the GC engine when more memory is allocated by the program. It is quite common to have a heap much larger than the amount of memory in actual use at a certain time. Even worse, the heap size sometimes is proportional to the total amount of allocation performed since the program started. This may be a serious issue on systems with scarce RAM availability, where frequent collections and explicit free() calls might be necessary to limit fragmentation.
Reduced locality of reference, due to the garbage detection phase that has to traverse the whole addressable heap space to look for unused blocks. This results in cache and virtual memory misses more often than happens with traditional allocations. Generational GC algorithms limit the severity of this problem.
GC enthusiasts claim these issues are not so relevant when weighed against the cleaner design and greater robustness GC offers. Even if these advantages are difficult to evaluate concretely, they sometimes make for a convenient trade-off between code manageability and resource consumption.
Finally, the BDW library also can be used proficiently as a leak detector; the Mozilla team uses it this way, for example. For more information, have a look at the included documentation.