An Introduction to GCC Compiler Intrinsics in Vector Processing

PowerPC Altivec Types and Debugging

PowerPC processors with Altivec (also known as VMX and Velocity Engine) add the keyword "vector" to their types. They're all 16 bytes long. The following are some Altivec vector types:

  • vector unsigned char: 16 unsigned chars
  • vector signed char: 16 signed chars
  • vector bool char: 16 unsigned chars (0 false, 255 true)
  • vector unsigned short: 8 unsigned shorts
  • vector signed short: 8 signed shorts
  • vector bool short: 8 unsigned shorts (0 false, 65535 true)
  • vector unsigned int: 4 unsigned ints
  • vector signed int: 4 signed ints
  • vector bool int: 4 unsigned ints (0 false, 2^32 - 1 true)
  • vector float: 4 floats

The debugger prints these vectors as collections of individual elements.

ARM Neon Types and Debugging

On ARM processors that have Neon extensions available, the Neon types follow the pattern [type]x[elementcount]_t. Types include those in the following list:

  • uint64x1_t - single 64-bit unsigned integer
  • uint32x2_t - pair of 32-bit unsigned integers
  • uint16x4_t - four 16-bit unsigned integers
  • uint8x8_t - eight 8-bit unsigned integers
  • int32x2_t - pair of 32-bit signed integers
  • int16x4_t - four 16-bit signed integers
  • int8x8_t - eight 8-bit signed integers
  • int64x1_t - single 64-bit signed integer
  • float32x2_t - pair of 32-bit floats
  • uint32x4_t - four 32-bit unsigned integers
  • uint16x8_t - eight 16-bit unsigned integers
  • uint8x16_t - 16 8-bit unsigned integers
  • int32x4_t - four 32-bit signed integers
  • int16x8_t - eight 16-bit signed integers
  • int8x16_t - 16 8-bit signed integers
  • uint64x2_t - pair of 64-bit unsigned integers
  • int64x2_t - pair of 64-bit signed integers
  • float32x4_t - four 32-bit floats

The debugger prints these vectors as collections of individual elements.

There are examples of these in the samples/simple directory.

Now that we've covered the vector types, let's talk about vector programs.

As Ian Ollman points out, vector programs are blitters. They load data from memory, process it, then store it to memory elsewhere. Moving data between memory and vector registers is necessary, but it's overhead. Taking big bites of data from memory, processing it, then writing it back to memory will minimize that overhead.

Alignment is another aspect of data movement to watch for. Use GCC's "aligned" attribute to align data sources and destinations on 16-byte boundaries for best performance. For instance:

float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };

Failure to align can result in getting the right answer, silently getting the wrong answer, or crashing. Techniques are available for handling unaligned data, but they are slower than using aligned data. There are examples of these in the sample code.

The sample code uses intrinsics for vector operations on X86, Altivec and Neon. These intrinsics follow naming conventions to make them easier to decode. Here are the naming conventions:

Altivec intrinsics are prefixed with "vec_". C++-style overloading accommodates the different argument types.

Neon intrinsics follow the naming scheme [opname][flags]_[type]. A "q" flag means it operates on quad-word (128-bit) vectors.

X86 intrinsics follow the naming convention _mm_[opname]_[suffix], where the suffix identifies the data type:

    suffix    s      single-precision floating point
              d      double-precision floating point
              i128   signed 128-bit integer
              i64    signed 64-bit integer
              u64    unsigned 64-bit integer
              i32    signed 32-bit integer
              u32    unsigned 32-bit integer
              i16    signed 16-bit integer
              u16    unsigned 16-bit integer
              i8     signed 8-bit integer
              u8     unsigned 8-bit integer
              pi#    64-bit vector of packed #-bit signed integers
              pu#    64-bit vector of packed #-bit unsigned integers
              epi#   128-bit vector of packed #-bit signed integers
              epu#   128-bit vector of packed #-bit unsigned integers
              ps     128-bit vector of packed single-precision floats
              ss     128-bit vector holding one single-precision float
              pd     128-bit vector of packed double-precision floats
              sd     128-bit vector holding one double-precision (64-bit) float
              si64   64-bit vector holding a single 64-bit integer
              si128  128-bit vector

Table 2 lists the intrinsics used in the sample code.

Table 2. Subset of vector operators and intrinsics used in the examples.

Operation            Altivec          Neon                    MMX/SSE/SSE2
loading vector       vec_ld           vld1q_f32               _mm_set_epi16
                     vec_splat        vld1q_s16               _mm_set1_epi16
                     vec_splat_s16    vsetq_lane_f32          _mm_set1_pi16
                     vec_splat_s32    vld1_u8                 _mm_set_pi16
                     vec_splat_s8     vdupq_lane_s16          _mm_load_ps
                     vec_splat_u16    vdupq_n_s16             _mm_set1_ps
                     vec_splat_u32    vmovq_n_f32             _mm_loadh_pi
                     vec_splat_u8     vset_lane_u8            _mm_loadl_pi
storing vector       vec_st           vst1_u8                 _mm_store_ps
                                      vst1q_s16
add                  vec_madd         vaddq_s16               _mm_add_epi16
                     vec_mladd        vaddq_f32               _mm_add_pi16
                     vec_adds         vmlaq_n_f32             _mm_add_ps
subtract             vec_sub          vsubq_s16
multiply             vec_madd         vmulq_n_s16             _mm_mullo_epi16
                     vec_mladd        vmulq_s16               _mm_mullo_pi16
                                      vmulq_f32               _mm_mul_ps
arithmetic shift     vec_sra          vshrq_n_s16             _mm_srai_epi16
                     vec_srl                                  _mm_srai_pi16
byte permutation     vec_perm         vtbl1_u8                _mm_shuffle_pi16
                     vec_sel          vtbx1_u8                _mm_shuffle_ps
                     vec_mergeh       vget_high_s16
                     vec_mergel       vget_low_s16
type conversion      vec_cts          vmovl_u8                _mm_packs_pu16
                     vec_ctu          vreinterpretq_s16_u16   _mm_cvtps_pi16
                     vec_unpackh      vcvtq_u32_f32           _mm_packus_epi16
                     vec_unpackl      vqmovn_s32
                                      vqmovun_s16
vector combination   vec_pack         vcombine_u16
                     vec_packsu       vcombine_u8
maximum                                                       _mm_max_ps
minimum                                                       _mm_min_ps
vector logic                                                  _mm_and_ps
                                                              _mm_andnot_ps
rounding             vec_trunc
misc                                                          _mm_empty


Comments

Hi! > Use GCC's "aligned"

Gluttton:

> Use GCC's "aligned" attribute to align data sources and destinations on 16-bit
> float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };

I'm not sure, but it seems to me that instead of "16-bit" it should be written "16-byte". Isn't it?




Correction

ftheile:

The pattern for ARM Neon types is not [type]x[elementcount]_t, but [type][elementcount]x_t.

re Correction

G. Koharchik:

You might take a look at:

In example 1.1 they use uint32x4_t as a four element vector of 32-bit unsigned integers...


ssam:

has some tips on helping GCC autovectorise code.

How old is this article?

Anonymous:

So it talks about ancient tech like MMX and SSE2; my guess is that these days you would write about AVX. Also, the links at the end often lead nowhere, or to an article from 2005. This makes me wonder when this article was actually written.

re How old is this article?

G. Koharchik:

Very perceptive. The article was accepted for publication in July of 2011. That's why the ARM and Freescale links have gone stale. (I'll post an updated set later this week.)

The choice of MMX and SSE2 for X86 was deliberate. For an introductory article, things that are simple and widespread are often the best choices.

I think an AVX article would be wonderful. Any volunteers?

no, intrinsics are no replacement for hand-optimized simd asm

holger:

so far, i encountered only one case where intrinsics are somewhat useful - when trying to unroll a loop of non-trivial vector code. if you write a test implementation using intrinsics and let gcc unroll that a bit for you, gcc's liveness analysis and resulting register allocation may give you useful hints for writing the final asm function. but i have never seen a case where gcc produces optimal code from intrinsics for a non-trivial function.

and regarding vendor libraries - the functions they provide are of varying quality with regard to optimization, but even in the cases where the code is pretty good, they don't compete on equal grounds. they have to be pretty generic, which means you always have some overhead. optimizations in simd asm often come from specific knowledge regarding variable ranges, data layout, or data reuse. the vendor lib can't do that.

so write your proof-of-concept using intrinsics or vendor libs. and if performance satisfies you, just keep it that way. but if a function still is a major hotspot, you can do better if you go asm (maybe only a bit, more likely a lot).
