A Memory-Efficient Doubly Linked List
The delete implementation given here erases the whole list. For this article, our objective is to show the dynamic memory usage and execution times of the implemented primitives. From these it should not be difficult to come up with a canonical set of primitive operations covering all the standard operations on a doubly linked list.
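As a brief illustration, here is a minimal sketch of the pointer-distance node and the NextNode() primitive on which the other operations can be built. The field name ptrDiff and the exact casts are illustrative rather than the article's listing verbatim; uintptr_t from <stdint.h> is used for the pointer arithmetic.

#include <stdint.h>

/* Pointer-distance node: a single field holds the XOR of the
 * previous and next node addresses, so one pointer-sized field
 * serves both directions of traversal. */
typedef struct listNode {
    int iData;
    struct listNode *ptrDiff;   /* prev XOR next (illustrative name) */
} listNode;

/* NextNode: given the current node and the node we arrived from,
 * XORing the stored difference with the previous node's address
 * recovers the other neighbor. With pPrev == NULL, it yields the
 * node after the head. */
listNode *NextNode(listNode *pCur, listNode *pPrev)
{
    return (listNode *)((uintptr_t)pCur->ptrDiff ^ (uintptr_t)pPrev);
}

Traversing backward uses the same function with the roles of the two arguments reversed, which is what makes a single canonical primitive sufficient for both directions.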
Because our traversal depends on having pointers to two adjacent nodes, we cannot free the current node as soon as we find the next node: the current node's address is still needed to compute the node after it. Instead, we always free the previous node once the next node has been found. When the current node is the end node, freeing it completes the deletion. A node is considered an end node if the NextNode() function applied to it returns a null node.
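A sketch of whole-list deletion under that rule might look like the following; DeleteList is a hypothetical name, and it assumes the listNode and NextNode() sketched above.

#include <stdlib.h>

/* Delete the entire list, walking forward from the head. A node
 * is freed only after its successor has been computed; the final
 * (end) node is freed once NextNode() has returned NULL. */
void DeleteList(listNode *pHead)
{
    listNode *pPrev = NULL;
    listNode *pCur  = pHead;

    while (pCur != NULL) {
        listNode *pNext = NextNode(pCur, pPrev);
        free(pPrev);        /* safe: free(NULL) is a no-op */
        pPrev = pCur;
        pCur  = pNext;
    }
    free(pPrev);            /* the end node */
}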
A sample program to test the implementation discussed here is available as Listing 2 from the Linux Journal FTP site (ftp.linuxjournal.com/pub/lj/listings/issue129/6828.tgz). On my Pentium II (349MHz, 32MB of RAM and 512KB of level-2 cache), the pointer-distance implementation takes 15 seconds to insert 20,000 nodes. Traversal and deletion of the whole list each take less than a second, so profiling at one-second granularity is not helpful; for a system-level implementation, one would want to measure timings in milliseconds.
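One rough way to get millisecond resolution in standard C is clock(), sketched below. InsertItems() is a hypothetical stand-in for the insertion routine in the test program, not a function from Listing 2.

#include <stdio.h>
#include <time.h>

/* Stub: replace with the real insertion routine under test. */
static void InsertItems(int n) { (void)n; /* ... build n nodes ... */ }

int main(void)
{
    clock_t t0 = clock();
    InsertItems(20000);            /* build a 20,000-node list */
    clock_t t1 = clock();
    printf("insertion took %.1f ms\n",
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}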
When we run the same pointer-distance implementation on 10,000 nodes, insertion takes only three seconds, and traversal and deletion of the entire list again take less than a second each. The whole list occupies 160,000 bytes for 20,000 nodes and 80,000 bytes for 10,000 nodes. With 30,000 nodes, insertion takes 37 seconds, while traversal and deletion still finish in under a second. This pattern is predictable: as the number of nodes grows, more and more of the dynamic memory (heap) is in use, so finding a free slot takes longer and longer, and insertion time grows superlinearly rather than linearly with the node count.
For the conventional implementation, inserting 10,000 nodes takes the same three seconds, and forward and backward traversal each take less than a second; the list occupies 120,000 bytes. For 20,000 nodes, insertion takes 13 seconds, traversal and deletion each take less than a second, and the list occupies 240,000 bytes. For 30,000 nodes, insertion takes 33 seconds, traversal and deletion again take under a second, and the list occupies 360,000 bytes.
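The per-node arithmetic behind these figures: on the 32-bit test machine, an int and a pointer each occupy 4 bytes, so a conventional node needs 12 bytes against 8 for the pointer-distance node. The struct layout below is illustrative.

/* Conventional doubly linked node: two pointers per node.
 * On a 32-bit machine: 4 (data) + 4 (next) + 4 (prev) = 12 bytes,
 * so 10,000 nodes occupy 120,000 bytes and 30,000 occupy 360,000.
 * The pointer-distance node sketched earlier carries one pointer
 * field instead of two: 4 + 4 = 8 bytes per node, or 80,000 bytes
 * for 10,000 nodes. */
typedef struct dllNode {
    int iData;
    struct dllNode *pNext;
    struct dllNode *pPrev;
} dllNode;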
A memory-efficient doubly linked list can thus be implemented without compromising much timing efficiency. A clever design gives us a canonical set of primitive operations for both implementations, and the times consumed by comparable primitives do not differ significantly.
Prokash Sinha has been working in systems programming for 18 years. He has worked on the filesystem, networking and memory management areas of UNIX, OS/2, NT, Windows CE and DOS. His main interests are in the kernel and embedded systems. He can be reached at email@example.com.