Coverage Measurement and Profiling
Maybe you've always wondered what the gcov utility that comes with GCC is used for, or maybe your new project at work has a regulatory or customer requirement that your delivered software be tested to a certain percentage of coverage, and you are looking for how to accomplish that task. In this article, I introduce the general ideas of coverage measurement and performance profiling, along with the standard GNU tools (gcov and gprof) used in these two techniques.
Coverage measurement is the recording of which paths were executed in the code. Coverage can be measured at different levels of granularity. The coarsest is function coverage, which records which functions were called; next comes statement coverage, which records which lines of code were executed; and finally, branch coverage, which records which logic conditions in branch statements were satisfied. When someone refers to coverage measurement, statement or branch coverage is usually what is meant. gcov is the standard GNU coverage measurement tool, and it requires GCC.
It's a sad fact, but slow software can outstrip Moore's Law and cheap hardware. Even if you have the latest CPU and memory to spare, it seems you always can find software to soak it up. Furthermore, when discussing resource-constrained systems, like PDAs and embedded systems, it's often not possible to solve performance problems simply by throwing more hardware at them.
Profiling allows you to measure which part of your code is taking the most time to run. It gives you a window into the actual runtime behavior of the program and lets you see what the hot spots are for optimization. These spots often are not obvious from simple code inspection or reasoning about how the program ought to behave, so a profiler is a necessary tool here.
Profiling is a superset of coverage measurement, but its intent is different. With coverage measurement, you hope to learn which parts of your code didn't run. With profiling, you hope to learn which parts of your code did run and which consumed the most time. A profiler measures the number of times each function was called, the number of times any particular line of code or branch of logic was executed, the call graph of which functions called which others, and the amount of time spent in each area of your program. gprof is the standard GNU profiler; it also requires GCC.
Although some industries, such as aerospace, require test coverage measurement, it seems to be an underused technique outside of those areas. This is because coverage measurement is a bit more indirect than other debugging techniques, such as memory leak detection and overwrite detection. Coverage measurement in and of itself is only a measurement; it does not automatically find bugs or improve the quality of your code.
What coverage measurement does do is provide information on how comprehensive your tests are. If you don't test at all or don't test in some systematic way, there's no point in measuring coverage. Furthermore, if you don't have some sort of standard, automated test suite (DejaGNU, for example), collecting coverage measurement data can be so labor-intensive, ad hoc and error-prone that it might be difficult to interpret your coverage measurements meaningfully.
Even test suites that seem comprehensive can leave large gaps. On my first project ever using coverage measurement, we ran our standard regression suite, which we were very proud of, and found the percentage of lines of code exercised was 63%. If you've never done this exercise before, you may be tempted to think “63%! Your tests must not have been very good.” We actually were quite happy to have a number this high; we knew our code had a large amount of error-handling cases for system faults that our test suite did not have the ability to trigger, such as out-of-memory or out-of-file-descriptor conditions. We also had read that the industry average for coverage on a newly instrumented test suite was close to 50%, so we were glad to have done as well as we did.
Let that number sink in: 50%. An average comprehensive test suite exercises only 50% of the code it is supposed to be checking. If you are not doing coverage measurement, you have no idea if you are doing even this well—and you do not have a measurement telling you how much better you could do. It's hard to optimize something that's unmeasured.
Knowing how much of your code was exercised during your test suite is the beginning, though, not the end. Once you know this, you can look at the details of the coverage reports, find what code was not exercised and start adding to your test suite.