Parallel Programming with NVIDIA CUDA

Using hardware acceleration via general-purpose computing on graphics processing units (GPGPU), I've sped up my algorithms by more than tenfold. This article shows how you can achieve these results too!
Putting Two Stream Operators in Sequence

Now, let's apply a second operation that detects local minima on the computed vector field. Local minima are those places where all vectors converge (Figure 6). Flagging the local minima prevents the mobile robot from stopping at one of them with none of the vectors guiding it out.

Figure 6. Three Local Minima the Mobile Robot Should Avoid

Under the stream processing model, operators can be daisy-chained: a second operator consumes the output of the first, much like a pipe in an operating-system shell. In the example CUDA implementation, the second operator will consume the vector field matrix already stored in GPU memory, so no transfer back to the host is needed between the two steps. Here is the sequential local minima detection pseudo-code:


In Parameters: calculated vector field, a decimal threshold
Out Parameters: a boolean matrix called "minima"

detect_local_minima_cpu(in field, in threshold, out minima):

    for (y=0 to h):
        for (x=0 to w):
            minima[y][x] =
                (norm(field[y][x]) < threshold)? true : false

    return

The sequential algorithm takes the vector field as input and fills in a Boolean matrix of the same dimensions: the value of "minima" at position (x, y) is "true" when the norm of the vector located at (x, y) is less than the given threshold, and "false" otherwise. Here is the parallel local minima detection pseudo-code:


In Parameters: the calculated vector field, a decimal threshold
Out Parameters: a boolean matrix called "minima"

detect_local_minima_gpu(in field, in threshold, out minima):
    x = blockIdx.x * BLOCK_SIZE + threadIdx.x
    y = blockIdx.y * BLOCK_SIZE + threadIdx.y
    minima[y][x] = (norm(field[y][x]) < threshold) ? true : false

The output is a field of Boolean values that indicates whether a given point is a local minimum.
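
To make the pseudo-code concrete, below is a minimal sketch of how the kernel and its host-side launch could look in actual CUDA C. The float2 field layout, the norm2() helper, the BLOCK_SIZE of 16 and the bounds check are illustrative assumptions, not code from the downloadable source. Note how the host function consumes d_field, the device buffer the first operator produced, without ever copying it back to the host; this is the daisy-chaining described above:

#define BLOCK_SIZE 16

// Squared vector length; comparing it against threshold*threshold
// (valid for a non-negative threshold) saves a square root per thread.
__device__ float norm2(float2 v)
{
    return v.x * v.x + v.y * v.y;
}

__global__ void detect_local_minima_gpu(const float2 *field,
                                        float threshold,
                                        bool *minima, int w, int h)
{
    int x = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    int y = blockIdx.y * BLOCK_SIZE + threadIdx.y;

    // Guard threads that fall outside the field when its dimensions
    // are not multiples of BLOCK_SIZE.
    if (x >= w || y >= h)
        return;

    minima[y * w + x] = norm2(field[y * w + x]) < threshold * threshold;
}

// Host side: d_field is the vector field left in GPU memory by the
// first operator; the second operator consumes it in place.
void detect_local_minima(const float2 *d_field, float threshold,
                         bool *d_minima, int w, int h)
{
    dim3 block(BLOCK_SIZE, BLOCK_SIZE);
    dim3 grid((w + BLOCK_SIZE - 1) / BLOCK_SIZE,
              (h + BLOCK_SIZE - 1) / BLOCK_SIZE);
    detect_local_minima_gpu<<<grid, block>>>(d_field, threshold,
                                             d_minima, w, h);
}

The bounds check is the main difference from the pseudo-code: real fields are not always exact multiples of the block size, so edge threads must return early.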

Building Up a Test Benchmark

At this point, I have implemented four algorithms. You can, of course, download all the source code from our Web site for free and try the algorithms out yourself.

So, how does a CUDA algorithm stack up against its CPU equivalent? Next, I compare each parallel version against its sequential counterpart to find out. The hardware and software used for the benchmark include:

  • Intel Core 2 Duo E6320, running at 1.86GHz with 4GB of RAM.

  • NVIDIA GeForce 8600GT GPU.

  • Ubuntu Linux 8.10.

  • CUDA version 2.2.

I implemented all four algorithms in a single C++ program that can switch between the CPU and the CUDA versions of the algorithms dynamically. Not only does this make the benchmarking process easier, but it is also a good technique for developing programs that can fall back to the CPU on a computer where CUDA is not supported.
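
A sketch of how such a runtime fallback can be detected follows; cudaGetDeviceCount() is standard CUDA runtime API, while the surrounding program is a hypothetical stand-in for the benchmark's dispatch logic:

#include <cuda_runtime.h>
#include <cstdio>

// Returns true if at least one CUDA-capable device is present. On a
// machine without a usable CUDA driver, cudaGetDeviceCount() itself
// returns an error code, so both conditions are checked.
static bool cuda_available()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return err == cudaSuccess && count > 0;
}

int main()
{
    if (cuda_available())
        std::printf("Using the _gpu variants of the algorithms.\n");
    else
        std::printf("Falling back to the _cpu variants.\n");
    return 0;
}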

Benchmark Results

Each of the benchmarks uses a different vector field configuration, increasing the size of the field as well as the number of repulsors. The number of attractors is always set to just one. The sizes of the vector fields are: 16x16, 32x32, 64x64, 128x128, 256x256 and 512x512. The repulsors are randomly distributed on the field at a ratio of one repulsor per 32 vector field points. Hence, the number of repulsors is 8, 32, 128, 512, 2048 and 8192, respectively.
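
The repulsor counts follow directly from the one-per-32-points ratio; here is a quick, self-contained check (this program is purely illustrative and not part of the benchmark):

#include <cstdio>

int main()
{
    // One repulsor per 32 vector field points.
    const int sizes[] = {16, 32, 64, 128, 256, 512};
    for (int s : sizes)
        std::printf("%dx%d field -> %d repulsors\n", s, s, (s * s) / 32);
    return 0;
}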

Figure 7 shows the results of the benchmarks. I am using the notation “WxH/R”, where WxH denotes the vector field's dimensions and R the number of repulsors present. The execution time is in milliseconds on a logarithmic scale (so a small difference in graph size is actually a much larger speedup than it appears to be visually).
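
The execution times themselves have to be collected with care, because CUDA kernel launches are asynchronous. The fragment below, which reuses the names from the earlier kernel sketch, shows one standard way to time a kernel with CUDA events; the measurement code actually used for these benchmarks may differ:

cudaEvent_t start, stop;
float elapsed_ms = 0.0f;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);   // mark the start on the default stream
detect_local_minima_gpu<<<grid, block>>>(d_field, threshold,
                                         d_minima, w, h);
cudaEventRecord(stop, 0);    // mark the end
cudaEventSynchronize(stop);  // wait for the kernel to finish

cudaEventElapsedTime(&elapsed_ms, start, stop);  // result in milliseconds

cudaEventDestroy(start);
cudaEventDestroy(stop);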

Figure 7. Calculation Times

How much faster is the GPU? The speedup is calculated by dividing the execution time of the sequential algorithm by the execution time of the parallel algorithm; for example, a configuration that takes 400ms sequentially and 50ms in parallel yields a speedup of 8 (Figure 8).

Figure 8. Speedup

Computation times are the closest in the case of the smallest vector field. However, even in that case, we get a speedup of 2.5 times just by switching to the CUDA implementation of the vector field calculation. The local minima detection, on the other hand, becomes worth parallelizing only with larger, more compute-intensive data sets.

On average, the speedup is around eight times for our algorithms. In layman's terms, this means that if you have a computation that takes a full eight-hour work day to complete, just by switching to CUDA you can have your results in about an hour.

This provides significant benefits for computations that a user must run several times, adjusting the parameters after each run. Such iterative processes are frequent, for instance, in financial modeling.

______________________

Comments


Anonymous writes:

The statement

    minima[y][x] = (norm(field[y][x]) < threshold) ? true : false

may incur a branching penalty. You can just use the first part:

    minima[y][x] = (norm(field[y][x]) < threshold)
