Memory Access Error Checkers
Checker found the problem at the correct line (the printf) in Listing 2, and it pointed out that the program frees an address different from the one returned by malloc. In that listing, I decremented the foo pointer and then tried to free the resulting address.
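Listing 2 itself is not reproduced in this section; the sketch below only illustrates the general shape of the error described, assuming a buffer obtained from malloc, a decremented pointer, a read through it and a free of the shifted address (the names and sizes are illustrative, not those of the original listing):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *foo = malloc(16);   /* small heap buffer                           */
    strcpy(foo, "hello");
    foo--;                    /* shift the pointer one byte below the buffer */
    printf("%s\n", foo);      /* read starts below the allocated area        */
    free(foo);                /* free an address malloc never returned       */
    return 0;
}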
Mem-Test didn't find the problem, but this was expected.
If I link in the EF library without specifying any switch, Electric Fence reports only an error about the freeing of an address not returned by malloc:
Electric Fence 2.0.5 Copyright (C) 1987-1995 Bruce Perens.
ElectricFence Aborting: free(400b3ff3): address not from malloc().
Illegal Instruction (core dumped)
Then, I tried setting the protect-below switch:
export EF_PROTECT_BELOW=1

With this variable set, Electric Fence caused a segmentation fault. Using gdb, I tracked the program down to the printf, where the segmentation fault occurred.
For Listing 3, Checker found the bound violation at line 14 (the strcpy), and it also found an uninitialized data read at line 16 (the printf); the printf reads past the end of the allocated area.
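Listing 3 is not shown here either; an overflow of roughly the following shape, where the buffer is one byte too small for the string copied into it, would produce the two reports described (names and sizes are again illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *src = "hello, world";
    char *buf = malloc(strlen(src));   /* no room for the terminating '\0'        */
    strcpy(buf, src);                  /* bound violation: '\0' lands past the end */
    printf("%s\n", buf);               /* the read runs past the allocated area    */
    free(buf);
    return 0;
}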
Mem-Test gave no reports, as expected.
Again, the first run of Electric Fence (without any switch) did not report any error. I then set EF_ALIGNMENT to 0, and the strcpy caused a core dump.
The error in Listing 4, the incorrect free, was correctly detected by Checker. Mem-Test gave no reports. Without any switch set, Electric Fence detected only the wrong free; with EF_PROTECT_BELOW set, it also found the pre-write.
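The original Listing 4 is also not reproduced; an error of roughly this shape, writing just below an allocated block and then freeing the shifted pointer, is consistent with the two reports above (purely illustrative):

#include <stdlib.h>

int main(void)
{
    char *p = malloc(32);
    p[-1] = 'x';      /* pre-write: stores one byte below the allocated block      */
    free(p - 1);      /* incorrect free: this address was never returned by malloc */
    return 0;
}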
Checker found the exact line in Listing 5 where the bad assignment was made. No reports were expected from Mem-Test, and none were created; the program dumped core, and the log was not written. Electric Fence did not detect this error: when you run the program, you get a core dump whether EF is linked in or not.
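Listing 5 is not shown here; a crash that happens with or without Electric Fence linked in is typical of an assignment through a pointer that does not reference valid heap memory at all, something along these purely illustrative lines:

#include <stdlib.h>

int main(void)
{
    int *p = NULL;    /* an invalid pointer (NULL, or left dangling after a free)  */
    *p = 42;          /* bad assignment: the write crashes with or without EF      */
    return 0;
}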
By default, Checker does not find memory leaks. The documentation describes several switches you can set to modify the checking; they are passed through the environment variable CHECKEROPTS. The most interesting options are:
-o=file: redirect the output to a file.
-d=xx: disable a type of memory access error.
-D=end: do leak detection at the end of the program.
-m=x: define the behavior for a malloc(0).
I ran export CHECKEROPTS="-D=end" and then recompiled. This time it found the 50-byte memory leak in Listing 6. Checker implements a garbage detector to find this type of error; you can invoke it either by setting this option or by calling a specific Checker function inside your program.
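The actual Listing 6 (unfree.c) is not reproduced in this section; the idea is simply an allocation that is never released, along the lines of this illustrative sketch:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *data = calloc(50, 1);       /* 50 bytes allocated...              */
    strcpy(data, "never released");   /* ...used...                         */
    return 0;                         /* ...and never freed: a 50-byte leak */
}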
Mem-Test easily identifies the memory leak, with a clear report:
50 bytes of memory left allocated at 134524624
134524624 was last touched by licalloc at line 12 of unfree.c
Electric Fence returned no messages.
From these tests, it is clear that Checker is a complete product: it found all the errors without any problems. It is quite easy to use, and you don't have to set many switches because, by default, it checks for a wide range of errors. It does have one weakness when you use external libraries and functions (such as GDBM): to ensure it checks everything, all the external code should be recompiled with it. If you call a function that was not compiled with Checker, the memory bitmap used to track memory accesses is not updated, and this creates holes in your checks. You have two ways to handle this: recompile the library with checkergcc, or create function stubs. The stubs are special aliases for each function that perform checks on the parameters passed to and returned from the function; in particular, for every pointer, you must verify that the memory area it references has the correct status (readable, writable and so on).
Provided in the package are many ready-to-use stubs for the most popular functions (such as stdio and the string functions). After a look at these stubs, it should not be difficult to write your own for libraries you cannot recompile with checkergcc.
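As an illustration only, a stub for a function coming from a library that was not compiled with checkergcc might look roughly like the sketch below. The helper names here are hypothetical placeholders, not Checker's real API; a real stub would call Checker's own routines to query and update its memory bitmap before forwarding the call to the genuine function.

#include <stddef.h>
#include <string.h>

/* Hypothetical placeholders: a real stub would use Checker's own routines here. */
static void check_readable(const void *addr, size_t len) { (void)addr; (void)len; }
static void check_writable(void *addr, size_t len)       { (void)addr; (void)len; }

/* A stub wrapping strcpy(), just to show the shape such a wrapper takes. */
char *stub_strcpy(char *dest, const char *src)
{
    size_t len = strlen(src) + 1;   /* bytes read from src and written to dest */
    check_readable(src, len);       /* the source area must be readable        */
    check_writable(dest, len);      /* the destination area must be writable   */
    return strcpy(dest, src);       /* forward the call to the real function   */
}

int main(void)
{
    char buffer[32];
    stub_strcpy(buffer, "checked copy");
    return 0;
}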
On the other hand, Electric Fence showed some hesitation in error detection but is easier to use: it is enough to link it into the program and run it (there is no problem with external libraries). Used in tandem with Mem-Test, it will also detect memory leaks. For best results, be careful with the switches: spell them correctly, and set the alignment and the protection (below or above the allocated memory) appropriately for the error you are hunting.