An Introduction to GCC Compiler Intrinsics in Vector Processing

Suggestions for Writing Vector Code

Examine the Tradeoffs

Writing vector code with intrinsics forces you to make trade-offs. Your program will mix scalar and vector operations: do you have enough work for the vector hardware to make using it worthwhile? You must balance the portability of plain C against the need for speed and the complexity of vector code, especially if you maintain both scalar and vector code paths. You must also weigh speed against accuracy; integer math may be fast enough and accurate enough to meet the need. One way to make those decisions is to test: write your program with a scalar code path and a vector code path and compare the two.

Data Structures

Start by laying out your data structures assuming that you'll be using intrinsics. This means getting data items aligned. If you can arrange the data for uniform vectors, do that.
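
For example, a data layout sketched along these lines keeps sources and destinations where 128-bit loads and stores want them (the names are purely illustrative):

/* 16-byte alignment suits the 128-bit registers of SSE, Altivec and Neon. */
float coefficients[4] __attribute__((aligned(16))) = { 1.2f, 3.5f, 1.7f, 2.8f };

struct pixel_run {
    unsigned char y[16];           /* one vector's worth of luma samples */
    unsigned char u[8], v[8];      /* matching chroma samples            */
} __attribute__((aligned(16)));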

Write Portable Scalar Code and Profile

Next, write your portable scalar code and profile it. This will be your reference code for correctness and the baseline to time your vector code. Profiling the code will show where the bottlenecks are. Make vector versions of the bottlenecks.
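
For instance, a scalar reference routine might be as simple as the following sketch (the function name and the saturating-add operation are only an example):

/* Scalar baseline: saturating add of two byte arrays.  The vector
   versions must match these results and beat this time.          */
void add_saturate_scalar(unsigned char *dst, const unsigned char *a,
                         const unsigned char *b, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        int sum = a[i] + b[i];
        dst[i] = (sum > 255) ? 255 : (unsigned char)sum;
    }
}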

Write Vector Code

When you're writing that vector code, group the non-portable code into separate files by architecture. Write a separate Makefile for each architecture. That makes it easy to select the files you want to compile and supply arguments to the compiler for each architecture. Minimize the intermixing of scalar and vector code.
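
One way to keep that boundary clean is a small common header that declares a single interface and lets each architecture-specific file supply its own implementation; the header and function names here are hypothetical:

/* colorconv.h (hypothetical): one function-pointer interface.
   common.c, x86/sse2.c and arm/neon.c each provide an implementation. */
#ifndef COLORCONV_H
#define COLORCONV_H

typedef void (*add_saturate_fn)(unsigned char *dst, const unsigned char *a,
                                const unsigned char *b, int n);

extern add_saturate_fn add_saturate;    /* selected once at startup */

void add_saturate_scalar(unsigned char *dst, const unsigned char *a,
                         const unsigned char *b, int n);
void add_saturate_sse2(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, int n);
void add_saturate_neon(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, int n);

#endif /* COLORCONV_H */

The rest of the program calls through the add_saturate pointer and never touches intrinsics directly.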

Use Compiler-Supplied Symbols if you #ifdef

For files that are common to more than one architecture but have architecture-specific parts, you can #ifdef on symbols the compiler defines when SIMD instructions are available (see the sketch after this list). These are:

  • __MMX__ -- X86 MMX
  • __SSE__ -- X86 SSE
  • __SSE2__ -- X86 SSE2
  • __VEC__ -- Altivec functions
  • __ARM_NEON__ -- Neon functions
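
As a sketch, continuing the hypothetical saturating-add example from above, the SSE2 file might guard its vector path like this (the intrinsics are real SSE2 calls from <emmintrin.h>; the function name is still hypothetical):

#if defined(__SSE2__)
#include <emmintrin.h>

void add_saturate_sse2(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, int n)
{
    int i;

    for (i = 0; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
    }
    for (; i < n; i++) {                      /* scalar tail */
        int sum = a[i] + b[i];
        dst[i] = (sum > 255) ? 255 : (unsigned char)sum;
    }
}
#endif /* __SSE2__ */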

To see the baseline macros defined for other processors:


touch emptyfile.c
gcc -E -dD emptyfile.c | more

To see what's added for SIMD, do this with the SIMD command-line arguments for your compiler (see Table 1). For example:


touch emptyfile.c
gcc -E -dD emptyfile.c -mmmx -msse  -msse2 -mfpmath=sse | more

Then compare the two results.

Check Processor at Runtime

Next, your code should check the processor at runtime to see whether it has vector support. If you don't have a vector code path for that processor, fall back to your scalar code. If you have vector support, and the vector code path is faster, use it. On X86, test processor features with the cpuid instruction, wrapped by GCC's <cpuid.h> header. (You saw examples of that in samples/simple/x86/*c.) We couldn't find anything as well established for Altivec and Neon, so the examples there parse /proc/cpuinfo. (Serious code might execute a test SIMD instruction; if that instruction raises a SIGILL signal, the feature is not available.)
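
By way of illustration (this is not the code from the samples), a minimal X86 check with GCC's <cpuid.h> wrapper might look like this; have_sse2() is a hypothetical helper:

#include <cpuid.h>

static int have_sse2(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                       /* cpuid leaf 1 unavailable      */
    return (edx & bit_SSE2) != 0;       /* bit_SSE2 comes from <cpuid.h> */
}

At startup, something like add_saturate = have_sse2() ? add_saturate_sse2 : add_saturate_scalar; then selects the code path once.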

Test, Test, Test

Test everything. Test for timing: see if your scalar or vector code is faster. Test for correct results: compare the results of your vector code against the results of your scalar code. Test at different optimization levels: the behavior of the programs can change at different levels of optimization. Test against integer math versions of your code. Finally, watch for compiler bugs. GCC's SIMD and intrinsics are a work in progress.
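
A test harness along the following lines, assuming the hypothetical scalar and SSE2 routines sketched earlier, covers the correctness and timing checks (link with -lrt on older glibc for clock_gettime):

#include <stdio.h>
#include <string.h>
#include <time.h>

void add_saturate_scalar(unsigned char *dst, const unsigned char *a,
                         const unsigned char *b, int n);
void add_saturate_sse2(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, int n);

#define N      (64 * 1024)
#define PASSES 1000

static unsigned char a[N], b[N], out_scalar[N], out_vector[N];

/* Time one code path and return nanoseconds per pass. */
static long time_passes(void (*fn)(unsigned char *, const unsigned char *,
                                   const unsigned char *, int))
{
    struct timespec t0, t1;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < PASSES; i++)
        fn(out_vector, a, b, N);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return ((t1.tv_sec - t0.tv_sec) * 1000000000L +
            (t1.tv_nsec - t0.tv_nsec)) / PASSES;
}

int main(void)
{
    int i;

    for (i = 0; i < N; i++) {                 /* arbitrary test pattern */
        a[i] = (unsigned char)i;
        b[i] = (unsigned char)(255 - (i % 97));
    }

    /* Correctness: the vector path must match the scalar reference. */
    add_saturate_scalar(out_scalar, a, b, N);
    add_saturate_sse2(out_vector, a, b, N);
    if (memcmp(out_scalar, out_vector, N) != 0)
        printf("MISMATCH between scalar and vector results\n");

    /* Timing: which path is actually faster on this machine? */
    printf("scalar: %ld ns per pass\n", time_passes(add_saturate_scalar));
    printf("vector: %ld ns per pass\n", time_passes(add_saturate_sse2));

    return 0;
}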

This gets us to our last code sample. In samples/colorconv2 is a colorspace conversion library that takes images in non-planar YUV422 and turns them into RGBA. It runs on PowerPCs using Altivec; ARM Cortex-A using Neon; and X86 using MMX, SSE and SSE2. (We tested on a PowerMac G5 running Fedora 12, a Beagleboard running Angstrom 2009.X-test-20090508 and a Pentium 3 running Fedora 10.) Colorconv detects CPU features at runtime and uses the matching code path, falling back to scalar code if no supported features are detected.

To build, untar the source file and run make. Make uses the "uname" command to look for an architecture-specific Makefile. (Unfortunately, Angstrom's uname on the Beagleboard returns "unknown", so that's what the directory is called.)

Test programs are built along with the library. Testrange compares the results of the scalar code to the vector code over the entire range of inputs. Testcolorconv runs timing tests comparing the code paths it has available (intrinsics and scalar code) so you can see which runs faster.

______________________

Comments

Hi! > Use GCC's "aligned"

Gluttton

Hi!

> Use GCC's "aligned" attribute to align data sources and destinations on 16-bit
> float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };

I'm not sure, but it seems to me that instead of "16-bit" it should say "16-byte" ( http://gcc.gnu.org/onlinedocs/gcc/Type-Attributes.html#Type-Attributes ). Isn't that right?

Correction

ftheile

The pattern for ARM Neon types is not [type]x[elementcount]_t, but [type][elementcount]x_t.

re Correction

G. Koharchik

You might take a look at:

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dht0004a/CHD...

In example 1.1 they use uint32x4_t as a four-element vector of 32-bit unsigned integers...

autovectorisation

ssam

http://locklessinc.com/articles/vectorize/ has some tips on helping GCC autovectorise code.

How old is this article?

Anonymous

It talks about ancient tech like MMX and SSE2; my guess is that these days you would write about AVX. Also, the links at the end often lead to nowhere, or to an article from 2005. This makes me wonder when this article was actually written.

re How old is this article?

G. Koharchik

Very perceptive. The article was accepted for publication in July of 2011. That's why the ARM and Freescale links have gone stale. (I'll post an updated set later this week.)

The choice of MMX and SSE2 for X86 was deliberate. For an introductory article, things that are simple and widespread are often the best choices.

I think an AVX article would be wonderful. Any volunteers?

no, intrinsics are no replacement for hand-optimized simd asm

holger

so far, i encountered only one case where intrinsics are somewhat useful - when trying to unroll a loop of non-trivial vector code. if you write a test implementation using intrinsics and let gcc unroll that a bit for you, gcc's liveness analysis and resulting register allocation may give you useful hints for writing the final asm function. but i have never seen a case where gcc produces optimal code from intrinsics for a non-trivial function.

and regarding vendor libraries - the functions they provide are of varying quality with regard to optimization, but even in the cases where the code is pretty good, they don't compete on equal grounds. they have to be pretty generic, which means you always have some overhead. optimizations in simd asm often come from specific knowledge regarding variable ranges, data layout, or data reuse. the vendor lib can't do that.

so write your proof-of-concept using intrinsics or vendor libs. and if performance satisfies you, just keep it that way. but if a function still is a major hotspot, you can do better if you go asm (maybe only a bit, more likely a lot)
