A Memory-Efficient Doubly Linked List
The delete implementation given here erases the whole list. For this article, the objective is to show the dynamic memory usage and execution times of the implemented primitives; from these, it should not be difficult to come up with a canonical set of primitive operations covering all the usual operations of a doubly linked list.
Because our traversal depends on having pointers to two nodes, we cannot delete the current node as soon as we find the next one. Instead, we always delete the previous node once the next node has been found. If the current node is the end node, we free it and we are done. A node is considered an end node if the NextNode() function applied to it returns a null node.
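A minimal sketch of this deletion strategy in C, assuming a node layout in which a single link field stores the XOR of the previous and next node addresses; the names NODE, link and DeleteList are illustrative, and only NextNode() is taken from the text:

#include <stdlib.h>
#include <stdint.h>

/* Hypothetical node layout: a single link field holds the XOR
 * ("distance") of the previous and next node addresses. */
typedef struct node {
    int       data;
    uintptr_t link;              /* prev XOR next */
} NODE, *PNODE;

/* Recover the next node from the current node and its predecessor. */
PNODE NextNode(PNODE cur, PNODE prev)
{
    return (PNODE)(cur->link ^ (uintptr_t)prev);
}

/* Free the whole list. The previous node is freed only after the
 * next node has been computed, because NextNode() needs both
 * addresses; the end node is freed last. */
void DeleteList(PNODE head)
{
    PNODE prev = NULL, cur = head;

    while (cur != NULL) {
        PNODE next = NextNode(cur, prev);
        if (prev != NULL)
            free(prev);          /* safe: next is already known */
        if (next == NULL) {      /* cur is the end node */
            free(cur);
            return;
        }
        prev = cur;
        cur  = next;
    }
}

Note how the two-node dependency shows up directly in the code: prev must stay alive until NextNode() has used it.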
A sample program to test the implementation discussed here is available as Listing 2 from the Linux Journal FTP site (ftp.linuxjournal.com/pub/lj/listings/issue129/6828.tgz). On my Pentium II (349MHz, 32MB of RAM and 512KB of level-2 cache), the pointer-distance implementation takes 15 seconds to insert 20,000 nodes. Traversal and deletion of the whole list each take less than a second, so profiling at one-second granularity is not helpful for them. For a system-level implementation, one would want to measure timings in milliseconds.
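One way to get millisecond resolution is to wrap the operation under test with gettimeofday(); a minimal sketch, in which work_under_test() is a placeholder for whichever list primitive is being profiled:

#include <stdio.h>
#include <sys/time.h>

/* Placeholder for the primitive being profiled, e.g., inserting
 * 20,000 nodes; replace with the real list operation. */
static void work_under_test(void)
{
    volatile long sink = 0;
    for (long i = 0; i < 1000000L; i++)
        sink += i;
}

int main(void)
{
    struct timeval start, stop;

    gettimeofday(&start, NULL);
    work_under_test();
    gettimeofday(&stop, NULL);

    long ms = (stop.tv_sec - start.tv_sec) * 1000L
            + (stop.tv_usec - start.tv_usec) / 1000L;
    printf("elapsed: %ld ms\n", ms);
    return 0;
}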
When we run the same pointer-distance implementation on 10,000 nodes, insertion takes only three seconds; traversal of the list and deletion of the entire list each take less than a second. The whole list occupies 160,000 bytes for 20,000 nodes and 80,000 bytes for 10,000 nodes. With 30,000 nodes, insertion takes 37 seconds, while traversal and deletion each still finish in under a second. This timing behavior is predictable: as the number of nodes grows, more and more of the dynamic memory (heap) is in use, so finding a free slot takes progressively longer, in a nonlinear, indeed superlinear, fashion.
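These figures work out to 8 bytes per node on this 32-bit machine: a 4-byte integer plus a single 4-byte link field holding the XOR of the neighbor addresses. A sketch of head insertion under the same assumed layout (InsertHead and the field names are illustrative):

#include <stdlib.h>
#include <stdint.h>

typedef struct node {
    int       data;
    uintptr_t link;              /* prev XOR next */
} NODE, *PNODE;

/* Insert at the head. The old head's link was NULL XOR next, so
 * XOR-ing in the new node's address rewrites its "prev" in place. */
void InsertHead(PNODE *head, int value)
{
    PNODE n = malloc(sizeof(NODE));
    if (n == NULL)
        return;
    n->data = value;
    n->link = (uintptr_t)*head;  /* prev is NULL, next is old head */
    if (*head != NULL)
        (*head)->link ^= (uintptr_t)n;
    *head = n;
}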
For the conventional implementation, inserting 10,000 nodes takes the same three seconds, and forward and backward traversal each take less than a second; total memory for 10,000 nodes is 120,000 bytes. For 20,000 nodes, insertion takes 13 seconds, traversal and deletion each take less than a second, and total memory is 240,000 bytes. For 30,000 nodes, insertion takes 33 seconds, traversal and deletion again take less than a second each, and total memory is 360,000 bytes.
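Here the arithmetic gives 12 bytes per node: the conventional node carries two full pointers in addition to the data, along the lines of this sketch:

/* Conventional doubly linked node: the data plus two full pointers,
 * i.e., 12 bytes per node on a 32-bit machine versus 8 bytes for the
 * pointer-distance version. */
typedef struct dnode {
    int           data;
    struct dnode *prev;
    struct dnode *next;
} DNODE;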
A memory-efficient doubly linked list can thus be implemented without compromising much timing efficiency. A clever design gives us a canonical set of primitive operations for both implementations, and the running times of comparable primitives are not significantly different.
Prokash Sinha has been working in systems programming for 18 years. He has worked on the filesystem, networking and memory management areas of UNIX, OS/2, NT, Windows CE and DOS. His main interests are in the kernel and embedded systems. He can be reached at email@example.com.