An Introduction to GCC Compiler Intrinsics in Vector Processing

Speed is essential in multimedia, graphics and signal processing. Sometimes programmers resort to assembly language to get every last bit of speed out of their machines. GCC offers an intermediate between assembly and standard C that can get you more speed and processor features without having to go all the way to assembly language: compiler intrinsics. This article discusses GCC's compiler intrinsics, emphasizing vector processing on three platforms: X86 (using MMX, SSE and SSE2); Motorola, now Freescale (using Altivec); and ARM Cortex-A (using Neon). We conclude with some debugging tips and references.

Download the sample code for this article here:

So, What Are Compiler Intrinsics?

Compiler intrinsics (sometimes called "builtins") are like the library functions you're used to, except they're built in to the compiler. They may be faster than regular library functions (the compiler knows more about them, so it can optimize better), or they may handle a smaller input range than the library functions. Intrinsics also expose processor-specific functionality, so you can use them as an intermediate between standard C and assembly language. This gives you assembly-like functionality while still letting the compiler handle details like type checking, register allocation, instruction scheduling and call stack maintenance. Some builtins are portable; others are processor-specific. You can find lists of the portable and target-specific intrinsics in the GCC info pages and the include files (more about that below). This article focuses on the intrinsics useful for vector processing.

Vectors and Scalars

In this article, a vector is an ordered collection of numbers, like an array. If all the elements of a vector are measures of the same thing, it's said to be a uniform vector. Non-uniform vectors have elements that represent different things, and their elements have to be processed differently. In software, vectors have their own types and operations. A scalar is a single value, a vector of size one. Code that uses vector types and operations is said to be vector code. Code that uses only scalar types and operations is said to be scalar code.

Vector Processing Concepts

Vector processing is in the category of Single Instruction, Multiple Data (SIMD). In SIMD, the same operation happens to all the data (the values in the vector) at the same time. Each value in the vector is computed independently. Vector operations include logic and math. Math within a single vector is called horizontal math. Math between two vectors is called vertical math.

Instead of writing 10 x 2 = 20 horizontally, express it vertically as:

           10
        x   2
           20

In vertical math, vectors are lines of these values; multiple operations happen at the same time:

        |  10   |   10  |  10  |  10  |   vector1
    x   |  2    |   2   |  2   |  2   |   vector2
        |  20   |   20  |  20  |  20  |   vector3

All 10s are multiplied by all 2s at the same time.

So, to convert Celsius to Fahrenheit using F = (9/5) * C + 32 for a vector of temperatures in Celsius:

        |  C0   |   C1  |  C2  |  C3  |   Celsius temperatures vector
    x   |  9    |   9   |  9   |  9   |   constant vector of 9s
        |  p0   |   p1  |  p2  |  p3  |   first partial result
    /   |  5    |   5   |  5   |  5   |   constant vector of 5s
        |  q0   |   q1  |  q2  |  q3  |   second partial result
   +    |  32   |   32  |  32  |  32  |   constant vector of 32s

        |  F0   |   F1  |  F2  |  F3  |   Fahrenheit temperatures vector

Saturation arithmetic is like normal arithmetic except that when the result of an operation would overflow or underflow an element of the vector, the result is clamped to the end of the range rather than allowed to wrap around. (For instance, 255 is the largest unsigned character, so in saturation arithmetic on unsigned characters, 250 + 10 = 255.) Regular arithmetic would let the value wrap past zero and become small. Saturation arithmetic is useful, for example, when a calculation pushes a pixel slightly past maximum brightness: the pixel should stay at maximum brightness, not wrap around and become dark.




Hi! > Use GCC's "aligned"

Gluttton's picture


> Use GCC's "aligned" attribute to align data sources and destinations on 16-bit
> float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };

I'm not sure, but it seems to me that instead of "16-bit" it should say "16-byte". Isn't it?




Correction

ftheile's picture

The pattern for ARM Neon types is not [type]x[elementcount]_t, but [type][elementcount]x_t.

re Correction

G. Koharchik's picture

You might take a look at:

In example 1.1 they use uint32x4_t as a four element vector of 32-bit unsigned integers...


ssam's picture

has some tips on helping GCC autovectorise code.

How old is this article?

Anonymous's picture

So it talks about ancient tech like MMX and SSE2; my guess is that these days you would write about AVX. Also, the links at the end often lead nowhere, or to an article from 2005. This makes me wonder when this article was actually written.

re How old is this article?

G. Koharchik's picture

Very perceptive. The article was accepted for publication in July of 2011. That's why the ARM and Freescale links have gone stale. (I'll post an updated set later this week.)

The choice of MMX and SSE2 for X86 was deliberate. For an introductory article, things that are simple and widespread are often the best choices.

I think an AVX article would be wonderful. Any volunteers?

no, intrinsics are no replacement for hand-optimized simd asm

holger's picture

so far, i encountered only one case where intrinsics are somewhat useful - when trying to unroll a loop of non-trivial vector code. if you write a test implementation using intrinsics and let gcc unroll that a bit for you, gcc's liveness analysis and resulting register allocation may give you useful hints for writing the final asm function. but i have never seen a case where gcc produces optimal code from intrinsics for a non-trivial function.

and regarding vendor libraries - the functions they provide are of varying quality with regard to optimization, but even in the cases where the code is pretty good, they don't compete on equal grounds. they have to be pretty generic, which means you always have some overhead. optimizations in simd asm often come from specific knowledge of variable ranges, data layout, or data reuse. the vendor lib can't do that.

so write your proof-of-concept using intrinsics or vendor libs. and if performance satisfies you, just keep it that way. but if a function still is a major hotspot, you can do better if you go asm (maybe only a bit, more likely a lot).

