Optimization in GCC

Here's what the -O options mean in GCC, why some optimizations aren't optimal after all and how you can make specialized optimization choices for your application.
Architecture Specification

The optimizations discussed thus far can yield significant improvements in software performance and object size, but specifying the target architecture also can yield meaningful benefits. The -march option of gcc allows the CPU type to be specified (Table 2).

Table 2. x86 Architectures

Target CPU Types              -march= Type
i386 DX/SX/CX/EX/SL           i386
i486 DX/SX/DX2/SL/SX2/DX4     i486
487                           i486
Pentium                       pentium
Pentium MMX                   pentium-mmx
Pentium Pro                   pentiumpro
Pentium II                    pentium2
Celeron                       pentium2
Pentium III                   pentium3
Pentium 4                     pentium4
Via C3                        c3
Winchip 2                     winchip2
Winchip C6-2                  winchip-c6
AMD K5                        i586
AMD K6                        k6
AMD K6 II                     k6-2
AMD K6 III                    k6-3
AMD Athlon                    athlon
AMD Athlon 4                  athlon
AMD Athlon XP/MP              athlon
AMD Duron                     athlon
AMD Tbird                     athlon-tbird

The default architecture is i386. Code compiled for i386 runs on all later x86 processors, but it can perform poorly on more recent ones because the compiler cannot use their newer instructions. If you're concerned about the portability of an image, compile it for the default; if you're more interested in performance, pick the architecture that matches your target processor.

Now let's look at an example of how performance can be improved by focusing on the actual target. We build a simple test application that performs a bubble sort over 10,000 elements; the elements in the array are reversed beforehand to force the worst-case scenario. The build and timing process is shown in Listing 1.
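Listing 1 itself is not reproduced here, but a minimal sketch of such a test program might look like the following (the file name sort.c and all symbol names are my own assumptions, not the article's):

    /* sort.c -- bubble sort over 10,000 reversed elements
       (hypothetical reconstruction of the article's test). */
    #include <stdio.h>

    #define NUM_ELEMENTS 10000

    static int values[NUM_ELEMENTS];

    static void bubble_sort(int *a, int n)
    {
        int i, j, tmp;

        /* Repeatedly swap adjacent out-of-order elements
           until the array is sorted. */
        for (i = 0; i < n - 1; i++) {
            for (j = 0; j < n - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    int main(void)
    {
        int i;

        /* Fill the array in descending order: the worst
           case for bubble sort. */
        for (i = 0; i < NUM_ELEMENTS; i++)
            values[i] = NUM_ELEMENTS - i;

        bubble_sort(values, NUM_ELEMENTS);

        /* Touch the result so the sort can't be optimized away. */
        printf("first element: %d\n", values[0]);
        return 0;
    }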

By specifying the architecture, in this case -march=pentium2 for a 633MHz Celeron, the compiler can generate instructions for the particular target as well as enable other optimizations available only to that target. As shown in Listing 1, by specifying the architecture we see a time benefit of 237ms (a 23% improvement).
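The exact commands from Listing 1 are not shown here, but a build-and-timing session along those lines (binary name assumed) would be:

    $ gcc -o sort sort.c                   # default i386 target
    $ time ./sort
    $ gcc -march=pentium2 -o sort sort.c   # tune for the Celeron
    $ time ./sort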

Although Listing 1 shows an improvement in speed, the drawback is that the image is slightly larger. Using the size command (Listing 2), we can identify the sizes of the various sections of the image.

From Listing 2, we can see that the instruction size (text section) of the image increased by 28 bytes. But in this example, it's a small price to pay for the speed benefit.
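For reference, the size command is invoked simply on the binary (name assumed here; the numeric values are omitted rather than invented), and its default output reports the text, data and bss sections along with decimal and hexadecimal totals:

    $ size ./sort
       text    data     bss     dec     hex filename

The text column counts instruction bytes and is where the 28-byte increase shows up; data and bss hold initialized and zero-initialized data, respectively.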

Math Unit Optimizations

Some specialized optimizations require explicit definition by the developer; those discussed here are specific to the i386/x86 architecture. For one, the floating-point math unit can be specified, although in many cases it is selected automatically based on the target architecture. Possible units for the -mfpmath= option are shown in Table 3.

Table 3. Math Unit Optimizations

Option    Description
387       Standard 387 Floating Point Coprocessor
sse       Streaming SIMD Extensions (Pentium III, Athlon 4/XP/MP)
sse2      Streaming SIMD Extensions II (Pentium 4)

The default choice is -mfpmath=387. An experimental option is to specify both sse and 387 (-mfpmath=sse,387), which attempts to use both units.
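As an illustration (a hypothetical command line, not one from the article), selecting SSE math for a Pentium III might look like:

    gcc -march=pentium3 -mfpmath=sse -o fptest fptest.c

Note that plain SSE handles only single-precision arithmetic, so double-precision operations still go through the 387 unit; that is why sse2 is listed separately for the Pentium 4.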

______________________

Comments


hi, follow the "Listing 3.


Hi, I followed "Listing 3. Simple Example of gprof", but when using -O or -O2 the profile is flat. How can I resolve this?

My steps are:
1: gcc -o test_optimization test_optimization.c -pg -march=i386
2: ./test_optimization
3: gprof --no-graph -b ./test_optimization gmon.out
4: the result is:

Flat profile:
Each sample counts as 0.01 seconds.
 no time accumulated

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 0.00      0.00      0.00        1     0.00     0.00  factorial

If I add -O2, the result is:

Flat profile:
Each sample counts as 0.01 seconds.
 no time accumulated

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name

single optimization flag without level


Any optimization can be enabled outside of any level simply by specifying its name with the -f prefix, as: gcc -fdefer-pop -o test test.c

In current versions of GCC this is incorrect (http://gcc.gnu.org/wiki/FAQ#optimization-options): a single optimization flag without an optimization level doesn't work. I don't know about older versions.

gcc 4.2.3 vs visual c 2005


hello:
I just compiled some code under gcc (Cygwin) and Visual C++ 2005 on a laptop with a dual-core Intel processor.

The debuggable gcc build was about 2x faster than the debuggable Visual C++ build.

However, the situation reversed when I used -O3 optimization in gcc and the "release" configuration in Visual C++.

Now the Visual C++ code is 2x faster than gcc's.

I did not expect that large a difference; it is HUGE!!
Am I missing something, or has anybody else noticed a similar thing?

visual c++ optimizations


Apparently, MSVC uses a few insecure optimizations, counting on the developer having written secure code. That's probably why its debug build is slower.

I've seen lots of situations where gcc code gives an error right away, promptly showing me the bug, while MSVC happily executes the code until it finally stumbles upon a non-static field of a class and only then gives an error. To me this is simply misleading, and that's why I prefer gcc.

Detailed article, that is great!


You wrote in great detail!
It is really useful for me right now, since I am doing my thesis work on optimization under Linux. Thank you so much!

Someone should write some


Someone should write some "C" code and a few scripts that will enable / disable every compiler option and then print out which options worked best for _your_ particular system.

A benchmark that would specifically test each option (as opposed to using a single, huge benchmark) could be written.

EG: no point in benchmarking if we should use:
gcc -O2 -O3 code.c -- One disables the other

gcc -fno-gcse SSE2_code.c

Each benchmark needs to be affected in a 'large' way by the option that is being switched.

This could be run overnight (or on multiple machines, each doing part of the testing) and the results provided on a web page somewhere.

Experts could put in their two cents, and a wiki of snippets could be fed into a code compilator (not a compiler, just a bunch of scripts) that would combine all the snippets and produce a final program to be compiled on many different machines.

This way we could figure out, for such-and-such a system, how often (what % of the time) we would simply be better off using a particular option, and when it is more likely to help based on the TYPE of program we are running (word processor vs. multimedia app).

EG: If you have a Pentium it is ALWAYS (or should be, if gcc is correct) best to use the -march=pentium option - BUT - it is NOT always best to use "-fcrossjumping" (though it _could_ be for certain applications).

The output of all this could simply be a half-dozen command-line choices for each processor - including a "general-purpose 'best'" setting and a "quick compile with great optimization" setting (for intermediate builds).

This is something that a few dozen people need to work on to get the ball rolling, and then the rest of us need to pitch in and run the resulting test scripts to check for errors. With everyone's help, we should have the so-called answer(s) to "which compilation options should I use for machine X when compiling application category Y?"

Just a thought ...

Looks like you have a good


Looks like you have a good project to set up now.

Got Table?


Where can I get a readable copy of Table 1? The copy here is too small to read, and can't be enlarged.

try clicking on it

-O versus -O0


Minor comment -- -O defaults to -O1, not to -O0.
