# An Introduction to GCC Compiler Intrinsics in Vector Processing

### PowerPC Altivec Types and Debugging

PowerPC processors with Altivec (also known as VMX and Velocity Engine) add the keyword "vector" to their types. These vector types are all 16 bytes long. The following are some Altivec vector types:

- vector unsigned char: 16 unsigned chars
- vector signed char: 16 signed chars
- vector bool char: 16 unsigned chars (0 false, 255 true)
- vector unsigned short: 8 unsigned shorts
- vector signed short: 8 signed shorts
- vector bool short: 8 unsigned shorts (0 false, 65535 true)
- vector unsigned int: 4 unsigned ints
- vector signed int: 4 signed ints
- vector bool int: 4 unsigned ints (0 false, 2^32 -1 true)
- vector float: 4 floats

The debugger prints these vectors as collections of individual elements.

### ARM Neon Types and Debugging

On ARM processors that have Neon extensions available, the Neon types follow the pattern [type]x[elementcount]_t. Types include those in the following list:

- uint64x1_t - single 64-bit unsigned integer
- uint32x2_t - pair of 32-bit unsigned integers
- uint16x4_t - four 16-bit unsigned integers
- uint8x8_t - eight 8-bit unsigned integers
- int32x2_t - pair of 32-bit signed integers
- int16x4_t - four 16-bit signed integers
- int8x8_t - eight 8-bit signed integers
- int64x1_t - single 64-bit signed integer
- float32x2_t - pair of 32-bit floats
- uint32x4_t - four 32-bit unsigned integers
- uint16x8_t - eight 16-bit unsigned integers
- uint8x16_t - 16 8-bit unsigned integers
- int32x4_t - four 32-bit signed integers
- int16x8_t - eight 16-bit signed integers
- int8x16_t - 16 8-bit signed integers
- uint64x2_t - pair of 64-bit unsigned integers
- int64x2_t - pair of 64-bit signed integers
- float32x4_t - four 32-bit floats

The debugger prints these vectors as collections of individual elements.

There are examples of these in the samples/simple directory.

Now that we've covered the vector types, let's talk about vector programs.

As Ian Ollman points out, vector programs are blitters. They load data from memory, process it, then store it to memory elsewhere. Moving data between memory and vector registers is necessary, but it's overhead. Taking big bites of data from memory, processing it, then writing it back to memory will minimize that overhead.

Alignment is another aspect of data movement to watch for. Use GCC's "aligned" attribute to align data sources and destinations on 16-byte boundaries for best performance. For instance:

```
float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };
```

Failure to align can result in getting the right answer, silently getting the wrong answer or crashing. Techniques are available for handling unaligned data, but they are slower than using aligned data. There are examples of these in the sample code.

The sample code uses intrinsics for vector operations on X86, Altivec and Neon. These intrinsics follow naming conventions to make them easier to decode. Here are the naming conventions:

Altivec intrinsics are prefixed with "vec_". C++ style overloading accommodates the different argument types.

Neon intrinsics follow the naming scheme [opname][flags]_[type]. A "q" flag means it operates on quad word (128-bit) vectors.

X86 intrinsics follow the naming convention _mm_[opname]_[suffix], where the suffix encodes the operand type:

- s - single-precision floating point
- d - double-precision floating point
- i128 - signed 128-bit integer
- i64 - signed 64-bit integer
- u64 - unsigned 64-bit integer
- i32 - signed 32-bit integer
- u32 - unsigned 32-bit integer
- i16 - signed 16-bit integer
- u16 - unsigned 16-bit integer
- i8 - signed 8-bit integer
- u8 - unsigned 8-bit integer
- pi# - 64-bit vector of packed #-bit integers
- pu# - 64-bit vector of packed #-bit unsigned integers
- epi# - 128-bit vector of packed #-bit signed integers
- epu# - 128-bit vector of packed #-bit unsigned integers
- ps - 128-bit vector of packed single-precision floats
- ss - 128-bit vector of one single-precision float
- pd - 128-bit vector of double-precision floats
- sd - 128-bit vector of one double-precision (64-bit) float
- si64 - 64-bit vector of a single 64-bit integer
- si128 - 128-bit integer vector

Table 2 lists the intrinsics used in the sample code.

Table 2. Subset of vector operators and intrinsics used in the examples.

Operation | Altivec | Neon | MMX/SSE/SSE2
---|---|---|---
loading a vector | vec_ld, vec_splat, vec_splat_s16, vec_splat_s32, vec_splat_s8, vec_splat_u16, vec_splat_u32, vec_splat_u8 | vld1q_f32, vld1q_s16, vsetq_lane_f32, vld1_u8, vdupq_lane_s16, vdupq_n_s16, vmovq_n_f32, vset_lane_u8 | _mm_set_epi16, _mm_set1_epi16, _mm_set1_pi16, _mm_set_pi16, _mm_load_ps, _mm_set1_ps, _mm_loadh_pi, _mm_loadl_pi
storing a vector | vec_st | vst1_u8, vst1q_s16, vst1q_f32, vst1_s16 | _mm_store_ps
add | vec_madd, vec_mladd, vec_adds | vaddq_s16, vaddq_f32, vmlaq_n_f32 | _mm_add_epi16, _mm_add_pi16, _mm_add_ps
subtract | vec_sub | vsubq_s16 | 
multiply | vec_madd, vec_mladd | vmulq_n_s16, vmulq_s16, vmulq_f32, vmlaq_n_f32 | _mm_mullo_epi16, _mm_mullo_pi16, _mm_mul_ps
arithmetic shift | vec_sra, vec_srl, vec_sr | vshrq_n_s16 | _mm_srai_epi16, _mm_srai_pi16
byte permutation | vec_perm, vec_sel, vec_mergeh, vec_mergel | vtbl1_u8, vtbx1_u8, vget_high_s16, vget_low_s16, vdupq_lane_s16, vdupq_n_s16, vmovq_n_f32, vbsl_u8 | _mm_shuffle_pi16, _mm_shuffle_ps
type conversion | vec_cts, vec_ctu, vec_unpackh, vec_unpackl | vmovl_u8, vreinterpretq_s16_u16, vcvtq_u32_f32, vqmovn_s32, vqmovun_s16, vqmovn_u16, vcvtq_f32_s32, vmovl_s16, vmovq_n_f32 | _mm_packs_pu16, _mm_cvtps_pi16, _mm_packus_epi16
vector combination | vec_pack, vec_packsu | vcombine_u16, vcombine_u8, vcombine_s16 | 
maximum | | | _mm_max_ps
minimum | | | _mm_min_ps
vector logic | | | _mm_andnot_ps, _mm_and_ps, _mm_or_ps
rounding | vec_trunc | | 
misc | | | _mm_empty

## Comments

## Hi!

Hi!

> Use GCC's "aligned" attribute to align data sources and destinations on 16-bit

> float anarray[4] __attribute__((aligned(16))) = { 1.2, 3.5, 1.7, 2.8 };

I'm not sure, but it seems to me that instead of "16-bit" it should be written "16-byte" ( http://gcc.gnu.org/onlinedocs/gcc/Type-Attributes.html#Type-Attributes ). Isn't it?

## Correction

The pattern for ARM Neon types is not

[type]x[elementcount]_t, but [type][elementcount]x_t.

## re Correction

You might take a look at:

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dht0004a/CHD...

In example 1.1 they use uint32x4_t as a four element vector of 32-bit unsigned integers...

## autovectorisation

http://locklessinc.com/articles/vectorize/ has some tips on helping GCC autovectorise code.

## How old is this article?

So it talks about ancient tech like MMX and SSE2; my guess is that these days you would write about AVX. Also, the links at the end often lead nowhere, or to an article from 2005. This makes me wonder when this article was actually written.

## re How old is this article?

Very perceptive. The article was accepted for publication in July of 2011. That's why the ARM and Freescale links have gone stale. (I'll post an updated set later this week.)

The choice of MMX and SSE2 for X86 was deliberate. For an introductory article, things that are simple and widespread are often the best choices.

I think an AVX article would be wonderful. Any volunteers?

## no, intrinsics are no replacement for hand-optimized simd asm

so far, i encountered only one case where intrinsics are somewhat useful - when trying to unroll a loop of non-trivial vector code. if you write a test implementation using intrinsics and let gcc unroll that a bit for you, gcc's liveness analysis and resulting register allocation may give you useful hints for writing the final asm function. but i have never seen a case where gcc produces optimal code from intrinsics for a non-trivial function.

and regarding vendor libraries - the functions they provide are of varying quality with regard to optimization, but even in the cases where the code is pretty good, they don't compete on equal grounds. they have to be pretty generic, which means you always have some overhead. optimizations in simd asm often come from specific knowledge regarding variable ranges, data layout, or data reuse. the vendor lib can't do that.

so write your proof-of-concept using intrinsics or vendor libs. and if performance satisfies you, just keep it that way. but if a function still is a major hotspot, you can do better if you go asm (maybe only a bit, more likely a lot)
