Exploiting 64-Bit Linux
Listing 1. A List of Memory Blocks and Their Usage
------address------ --size--
  0, 0x400000000000,        1024KB
Total in use               1024KB
Total free                    0KB
  0, 0x400000100000,        1024KB
  1, 0x400000200000,        2048KB
  2, 0x400000400000,        4096KB
  3, 0x400000800000,        8192KB
Total Uncommitted         15360KB
  0, 0x400001000000,       16384KB
  1, 0x400002000000,       32768KB
  2, 0x400004000000,       65536KB
  3, 0x400008000000,      131072KB
  4, 0x400010000000,      262144KB
  5, 0x400020000000,      524288KB
  6, 0x400040000000,     1048576KB
  7, 0x400080000000,     2097152KB
  8, 0x400100000000,     4194304KB
  9, 0x400200000000,     8388608KB
 10, 0x400400000000,    16777216KB
 11, 0x400800000000,    33554432KB
 12, 0x401000000000,    67108864KB
 13, 0x402000000000,   134217728KB
 14, 0x404000000000,   268435456KB
 15, 0x408000000000,   536870912KB
 16, 0x410000000000,  1073741824KB
 17, 0x420000000000,  2147483648KB
 18, 0x440000000000,  4294967296KB
 19, 0x480000000000,  8589934592KB
 20, 0x500000000000, 17179869184KB
Total Region free   34359721984KB
  0, 0x400000000000,       16384KB
Total Region used         16384KB
We choose a hash table-based lock manager using the virtual address as the lock ID. The addresses of data within SPH are unique, and active locks can be found quickly in a hash table.
Storage for the lock tables has to be shared but not persistent, so we allocate IPC shared memory for it. This memory is initialized with a block header and associated heap, which is used to allocate storage for the lock hash table and lock lists. IPC semaphores are also allocated and used to block threads waiting on contended locks.
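As a rough sketch (not the actual SPHDE code; the bucket count, structure layout and names here are assumptions for illustration), the address-keyed lock lookup works something like this:

#include <stdint.h>

#define LOCK_BUCKETS 1024  /* assumed: hash bucket count, power of two */

typedef struct LockEntry {
    uintptr_t lockID;          /* virtual address being locked */
    int semID;                 /* IPC semaphore that blocks waiters */
    struct LockEntry *next;    /* chain of colliding entries */
} LockEntry;

/* The lock hash table itself lives in the (shared but not
   persistent) IPC shared memory segment described above. */
extern LockEntry *lockTable[LOCK_BUCKETS];

/* Addresses within SPH are unique, so the address itself is the
   lock ID; shifting discards low-order alignment bits before
   masking down to a bucket index. */
static inline unsigned int lockHash (const void *addr)
{
    uintptr_t a = (uintptr_t) addr;
    return (unsigned int) ((a >> 4) ^ (a >> 16)) & (LOCK_BUCKETS - 1);
}

static LockEntry *findLock (const void *addr)
{
    LockEntry *e = lockTable[lockHash (addr)];
    while (e && e->lockID != (uintptr_t) addr)
        e = e->next;
    return e;   /* NULL means no active lock on this address */
}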
So far, we have blocks of shared persistent storage and an address-based lock manager. Blocks are useful for storing large uniform arrays but are awkward to use for complex structures, such as linked lists and trees. The SPH runtime includes utility objects that allocate and manage blocks for finer-grained allocations and complex lists and trees.
Utility objects all start with a block header and provision for an internal heap (using the same heap manager as the anchor block). The same signature words are used, but each utility object has a unique type. The type values define a simple type system for runtime checks. The signature words, plus the fact that blocks are all power-of-two sizes, simplify finding the block header for any utility object from any address within a block. This is the trick supporting the new-near allocation scheme discussed later.
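Concretely, and assuming hypothetical header fields, sizes and signature values (the real SPHDE layout differs), finding the enclosing block header is a matter of masking the address at successive power-of-two boundaries until the signature words check out:

#include <stdint.h>
#include <stddef.h>

#define MIN_BLOCK_SIZE 4096UL        /* assumed minimum block size */
#define MAX_BLOCK_SIZE (1UL << 44)   /* assumed largest block in the region */
#define SPH_SIG_WORD 0x53504821UL    /* hypothetical signature value */

typedef struct SPHBlockHeader {
    unsigned long sigWord;   /* signature identifying a valid header */
    unsigned int blockType;  /* unique type code for runtime checks */
    /* ... heap bookkeeping follows ... */
} SPHBlockHeader;

/* Because every block is a power-of-two size and aligned to that
   size, masking low-order address bits yields candidate block
   starts; the signature confirms a hit. */
static SPHBlockHeader *findBlockHeader (const void *addr)
{
    unsigned long size;
    for (size = MIN_BLOCK_SIZE; size <= MAX_BLOCK_SIZE; size <<= 1) {
        SPHBlockHeader *hdr = (SPHBlockHeader *)
            ((uintptr_t) addr & ~(uintptr_t) (size - 1));
        if (hdr->sigWord == SPH_SIG_WORD)
            return hdr;    /* real code would also verify the type */
    }
    return NULL;           /* address not inside a known block */
}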
The simplest utility objects are the SPHSimpleStack and SPHSimpleHeap. The SimpleHeap is simply a block header and internal heap. A CompoundHeap is a heap manager that allocates SimpleHeaps. The block header links multiple CompoundHeap blocks together to form an expandable CompoundHeap. This “heap of heaps” structure, combined with the “new-near” mechanism, is useful for maintaining locality of reference for large, complex list and tree structures.
The CompoundHeap is a framework (think superclass) for the SPHStringBTree, SPHIndex and SPHContext utility objects. The SimpleHeap is the framework for the BTree nodes internal to SPHStringBTree and SPHIndex. An SPHStringBTree maps a string (name) to an address. An SPHIndex maps an arbitrary binary value to an address. An SPHContext defines a two-way mapping between one or more strings and an address. These utility objects are useful for creating naming structures and content-addressable memories.
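For example, a naming structure built on an SPHStringBTree amounts to a persistent map from names to addresses. The entry-point names below are assumptions for illustration only, not the documented SPHDE API:

#include <stddef.h>

/* Assumed prototypes; the real SPHDE signatures may differ. */
extern void *SPHSimpleHeapAlloc (void *heap, size_t size);
extern int SPHStringBTreePut (void *btree, const char *key, void *value);
extern void *SPHStringBTreeGet (void *btree, const char *key);

static void *saveAndFind (void *btree, void *heap)
{
    /* Allocate a persistent object and register it by name. */
    void *obj = SPHSimpleHeapAlloc (heap, 256);
    SPHStringBTreePut (btree, "mandelbrot/top", obj);

    /* Any later process that joins the region can find the same
       object again by name, because SPH addresses are stable. */
    return SPHStringBTreeGet (btree, "mandelbrot/top");
}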
I needed a test for the Shared Persistent Heap runtime and thought storing and processing large images would be interesting. In a previous personal project, I implemented a fast Mandelbrot fractal display (see Resources) based on breaking the image into quadrants and storing the image in a quadtree structure. The Mandelbrot set is interesting because it is “self-similar” and shows detail at any zoom factor.
This program could pan and zoom over color renditions of the Mandelbrot set where most of the image was pre-computed and stored in the quadtree. The algorithm is incremental, so detail is computed and added to the quadtree as needed (for the current display).
At the time, I had no good way to store the resulting quadtree for later use. I did write recursive streaming code to write/read a flattened representation of the quadtree to/from a file. But, as the quadtree image accumulated detail, this slowed down noticeably. Also, I ended up with multiple files holding pre-computed detail for different areas of the Mandelbrot set, but had no good way to merge them into a single high-resolution image. At this point, the Mandelbrot project was set aside.
Later, when I was working on the SPHDE Project and was trying to think of a good demo, I remembered the quadtree Mandelbrot Project. The hard part was converting the original Mandelbrot program from Borland OWL to Linux and GTK2. The actual conversion to use SPHDE was much easier.
First, I added SPHJoinRegion and SPHCleanUp calls to the entry and exit of main. Then, I added code to handle first-time setup. This involved allocating a control block to anchor the quadtree and creating an expanding SPHCompoundHeap to manage storage for the quadtree. Subsequent runs of the program need only obtain the address of this control block, which can be stored in and retrieved from a free slot in the anchor block header.
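The resulting flow of main looks roughly like this. SPHJoinRegion and SPHCleanUp are the calls named above (their signatures are assumed here); the slot index, control-block type and slot helpers are hypothetical stand-ins:

/* Assumed signatures for the calls named in the text. */
extern void SPHJoinRegion (void);
extern void SPHCleanUp (void);

#define QUAD_SLOT 0   /* assumed: which anchor-header slot we use */

typedef struct TQuadControl TQuadControl;  /* anchors the quadtree */

/* Hypothetical helpers for the anchor block's free slots and for
   first-time allocation of the control block and CompoundHeap. */
extern void *getAnchorSlot (int slot);
extern void setAnchorSlot (int slot, void *ptr);
extern TQuadControl *allocQuadControl (void);

int main (void)
{
    TQuadControl *control;

    SPHJoinRegion ();   /* attach the shared persistent region */

    control = (TQuadControl *) getAnchorSlot (QUAD_SLOT);
    if (control == NULL) {
        /* First-time setup: allocate the control block and the
           expanding SPHCompoundHeap, then remember the address. */
        control = allocQuadControl ();
        setAnchorSlot (QUAD_SLOT, control);
    }

    /* ... pan and zoom over the quadtree stored in SPHDE ... */

    SPHCleanUp ();      /* detach on exit */
    return 0;
}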
The next step was to change the quadtree algorithm to use SPHDE storage. This is only slightly complicated by the desire to maintain good locality of reference within the quadtree itself. The SPHCompoundHeap allocates SPHSimpleHeaps, from which quadtree nodes are allocated. Allocating adjacent quadtree nodes from the same SPHSimpleHeap ensures physical locality in memory and in the backing file. This minimizes the number of pages touched to display the whole Mandelbrot set (the topmost part of the quadtree) or zoom in to any part of the set.
The simple call:

node = (TQuadTree*) SPHSimpleHeapNearAlloc (near, sizeof (TQuadTree));

allocates the new quadtree node from the same SPHSimpleHeap that holds the existing near node, which is what keeps adjacent nodes physically close together.