Parallel Programming with NVIDIA CUDA

Using hardware acceleration via general-purpose programming on stock GPUs (GPGPU), I've sped up my algorithms by more than tenfold. This article shows how you can achieve these results too!
Putting Two Stream Operators in Sequence

Now, let's apply a second operation that detects local minima on the computed vector field. Local minima are the places where all the surrounding vectors converge (Figure 6). Flagging the local minima will prevent the mobile robot from stopping in one of them, with no vectors to guide it out.

Figure 6. Three Local Minima the Mobile Robot Should Avoid

Under the stream processing model, operators can be daisy-chained: a second operator consumes the output of the first, much like processes connected by a pipe in an operating system. In the example CUDA implementation, the second operator consumes the vector field matrix already stored in GPU memory, so no data has to travel back to the host between stages. Sequential local minima detection pseudo-code:


In Parameters: the calculated vector field, a decimal threshold
Out Parameters: a boolean matrix called "minima"

detect_local_minima_cpu(in field, in threshold, out minima):

    for (y = 0 to h):
        for (x = 0 to w):
            minima[y][x] =
                (norm(field[y][x]) < threshold) ? true : false

    return

The sequential algorithm takes the vector field as input and fills in a Boolean matrix of the same dimensions with the values “true” or “false”, depending on whether the length of each vector falls below the given threshold. In other words, the matrix “minima” at position (x, y) indicates whether the norm of the vector located at (x, y) is less than the given threshold.
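
For reference, here is a minimal C++ rendering of that pseudo-code. The Vec2 struct and the flat, row-major storage are illustrative assumptions; the downloadable source code may organize its data differently:

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Sequential detector: flag every point whose vector norm falls
    // below the threshold. "minima" is resized to match the field.
    void detect_local_minima_cpu(const std::vector<Vec2> &field,
                                 int w, int h, float threshold,
                                 std::vector<bool> &minima)
    {
        minima.assign((size_t)w * h, false);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                const Vec2 &v = field[(size_t)y * w + x];
                minima[(size_t)y * w + x] =
                    std::sqrt(v.x * v.x + v.y * v.y) < threshold;
            }
    }

Parallel local minima detection pseudo-code: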


In Parameters: the calculated vector field, a decimal threshold
Out Parameters: a boolean matrix called "minima"

detect_local_minima_gpu(in field, in threshold, out minima):
    x = blockIdx.x * BLOCK_SIZE + threadIdx.x
    y = blockIdx.y * BLOCK_SIZE + threadIdx.y
    minima[y][x] = (norm(field[y][x]) < threshold) ? true : false

The output is a field of Boolean values that indicates whether a given point is a local minimum.
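
To make the pseudo-code concrete, here is a minimal CUDA sketch of the detector kernel, together with how the two operators can be chained on the device. The exact signatures, the float2 element type and the kernel name compute_vector_field are assumptions for illustration; only the detection logic follows the pseudo-code above:

    #include <cuda_runtime.h>

    #define BLOCK_SIZE 16

    __global__ void detect_local_minima_gpu(const float2 *field,
                                            float threshold,
                                            bool *minima, int w, int h)
    {
        int x = blockIdx.x * BLOCK_SIZE + threadIdx.x;
        int y = blockIdx.y * BLOCK_SIZE + threadIdx.y;
        if (x >= w || y >= h)
            return;                 // guard threads outside the field

        float2 v = field[y * w + x];
        // The comparison itself yields a Boolean; no branch is needed.
        minima[y * w + x] = sqrtf(v.x * v.x + v.y * v.y) < threshold;
    }

    // Host side: both operators consume the same device buffer, so the
    // field never round-trips through host memory between stages.
    void run_pipeline(float2 *d_field, bool *d_minima,
                      int w, int h, float threshold)
    {
        dim3 block(BLOCK_SIZE, BLOCK_SIZE);
        dim3 grid((w + BLOCK_SIZE - 1) / BLOCK_SIZE,
                  (h + BLOCK_SIZE - 1) / BLOCK_SIZE);

        // First operator (the vector field calculation from the
        // previous section); its definition is not reproduced here:
        // compute_vector_field<<<grid, block>>>(d_field, w, h);

        detect_local_minima_gpu<<<grid, block>>>(d_field, threshold,
                                                 d_minima, w, h);
    }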

Building Up a Test Benchmark

At this point, I have implemented four algorithms. You can, of course, download all the source code from our Web site for free and try it out yourself.

So, how does a CUDA algorithm stack up against its CPU equivalent? To find out, I compare the parallel versions against their sequential counterparts. The hardware used for the benchmarks includes:

  • Intel Core 2 Duo E6320, running at 1.86GHz with 4GB of RAM.

  • NVIDIA GeForce 8600GT GPU.

  • Ubuntu Linux 8.10.

  • CUDA version 2.2.

I implemented all four algorithms in a single C++ program that can switch dynamically between the CPU and CUDA versions of each algorithm. Not only does this make the benchmarking process easier, but it is also a good technique for developing programs that can fall back to the CPU on a computer where CUDA is not supported.
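
As a minimal sketch of that fallback technique (reusing the illustrative Vec2 type and detect_local_minima_cpu from the earlier sketch; detect_local_minima_gpu_path is an assumed wrapper whose copy-to-device, launch and copy-back steps are not shown), the program can probe for a usable CUDA device at startup and dispatch accordingly:

    #include <cuda_runtime.h>
    #include <vector>

    // Assumed GPU wrapper: copies the field to the device, launches
    // the detector kernel and copies the result back (not shown).
    void detect_local_minima_gpu_path(const std::vector<Vec2> &field,
                                      int w, int h, float threshold,
                                      std::vector<bool> &minima);

    // Returns true when at least one usable CUDA device is present.
    bool cuda_available()
    {
        int count = 0;
        return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
    }

    // Single entry point that picks an implementation at run time.
    void detect_local_minima(const std::vector<Vec2> &field,
                             int w, int h, float threshold,
                             std::vector<bool> &minima)
    {
        if (cuda_available())
            detect_local_minima_gpu_path(field, w, h, threshold, minima);
        else
            detect_local_minima_cpu(field, w, h, threshold, minima);
    }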

Benchmark Results

Each of the benchmarks uses a different vector field configuration, increasing the size of the field as well as the number of repulsors. The number of attractors is always set to just one. The sizes of the vector fields are 16x16, 32x32, 64x64, 128x128 and 256x256. The repulsors are randomly distributed on the field at a ratio of one repulsor per 32 vector field points; hence, the number of repulsors is 8, 32, 128, 512 and 2048, respectively.
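
Here is a sketch of how such a configuration could be generated; the Point type and the helper name are illustrative, and the actual benchmark harness lives in the downloadable source:

    #include <cstdlib>
    #include <vector>

    struct Point { int x, y; };

    // Place one repulsor per 32 grid points, uniformly at random.
    // For a 256x256 field this yields 65536 / 32 = 2048 repulsors.
    std::vector<Point> random_repulsors(int w, int h)
    {
        std::vector<Point> repulsors((w * h) / 32);
        for (Point &p : repulsors) {
            p.x = std::rand() % w;
            p.y = std::rand() % h;
        }
        return repulsors;
    }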

Figure 7 shows the results of the benchmarks. I am using the notation “WxH/R”, where WxH denotes the vector field's dimensions and R the number of repulsors present. The execution time is in milliseconds on a logarithmic scale, so a small difference on the graph actually represents a much larger speedup than it appears to be visually.

Figure 7. Calculation Times

How much faster is the GPU? The speedup is calculated by dividing the execution time of the sequential algorithm by the execution time of the parallel algorithm (Figure 8).

Figure 8. Speedup

Computation times are closest in the case of a small vector field. However, even in that case, we get a speedup of 2.5 times just by switching to the CUDA implementation of the vector field calculation. The local minima detection becomes worthwhile to parallelize only with slightly larger, more compute-intensive data sets.

On average, the speedup is around eight times for our algorithms. In layman's terms, this means that if you have a computation that takes a full eight-hour work day to complete, just by switching to CUDA, you can have your results in about an hour.

This is a significant benefit for workflows in which a user must run a computation several times, correcting the parameters after each run. Such iterative processes are frequent, for instance, in financial models.

______________________

Comments


Anonymous:

The statement

    minima[y][x] = (norm(field[y][x]) < threshold) ? true : false

may incur a branching penalty. You can just use the first part:

    minima[y][x] = (norm(field[y][x]) < threshold)
