Maximize Desktop Speed

Are you a speed junkie who wants the fastest, most responsive machine? Try these changes and get even more speed out of your Linux box.


The kernel's vm.swappiness parameter (0 to 100) controls how aggressively memory pages are swapped out to disk. To change it for the current session, run the following as root:

echo 25 > /proc/sys/vm/swappiness

Note that this change will last only until the next time you restart your box. If you want to make the change permanent, edit /etc/sysctl.conf and add a line like the following:

vm.swappiness = 25

Now, it will be loaded every time you boot. If you want to test the changes, make the edit to /etc/sysctl.conf and then reload it with /sbin/sysctl -p.
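Before and after tuning, you can read the current value back; a quick sketch (the guard around sysctl is my own precaution in case the tool is not on your PATH):

```shell
# The value through the sysctl interface, if the tool is on your PATH
command -v sysctl >/dev/null 2>&1 && sysctl vm.swappiness

# Read the current value straight from /proc (always available on Linux)
cat /proc/sys/vm/swappiness

# To change it for the current session only (needs root):
#   echo 25 > /proc/sys/vm/swappiness
```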

Is it better to have lower values (down to 5 or 10) or higher values (up to 100)? Personally, I use 5, and I like the way my machines (desktop and laptop) work. If you notch it up, the kernel will use more CPU time to free RAM in advance; if you turn it down, the CPU will be freer, but there will be more I/O.

If you run CPU-intensive programs and have fast disks, I'd go with lower values, as I did myself. This produces noticeable improvements, such as when switching between applications, because they are more likely to reside in physical RAM instead of on the swap partition. Note that even if you set swappiness to zero, the kernel will still swap when it must, so once again, you would benefit from adding more RAM if possible.

However, Linux kernel developer Andrew Morton suggests using 100, and author Mladen Gogale reports finding no difference, so you may want to try different values and see which you prefer (see Resources for links to articles on this topic).

Make Applications Load Faster

Under Linux, most applications are stored in the Executable and Linkable Format (ELF), which allows them to be smaller. Instead of including all the libraries it needs, the program file merely references them, and those references are resolved (linked) when the code is loaded for execution. You might recognize here a classic time vs. space compromise: a smaller file size, but a longer loading time. If your program requires only a few libraries, the linking process is quick, but for larger programs that use many libraries, it gets noticeably longer.
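You can see how much linking work the loader must do by listing a binary's shared-library dependencies with ldd (I use /bin/ls here as an arbitrary example; the exact paths and counts will differ between systems):

```shell
# List the shared libraries the dynamic linker must resolve
# before /bin/ls can run
ldd /bin/ls

# Each line is one library to locate, map, and relocate at load time
ldd /bin/ls | wc -l
```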

If you are game to use a bit more disk space (and to spend some time preparing all the files), you can use the prelink command to do most of the linking work in advance: prelink assigns each shared library a fixed load address and stores the precomputed relocation information inside the program file, so the dynamic linker has much less to do when the program starts. (When the program is loaded, the libraries are checked to verify they haven't changed since the prelinking, but that check is much speedier than redoing the linking itself.) Note that prelink does not copy the libraries into each executable; it only adds the precomputed relocation data, so the extra disk space is modest.

In order to prelink your programs, you need to set up a configuration file (/etc/prelink.conf), so prelink knows where to search for shared libraries and what programs to work with should you opt for the -a option and prelink everything possible. The format of this file is simple: blank lines don't matter, comments start with a # character, and the rest of the lines should be something like the following:

-l aDirectoryToBeProcessed
-h anotherDirectoryButAllowingForSymlinks
-b fileToSkip

The -l lines specify directories that should be processed. The -h lines are pretty much the same, but allow for symlinks, which will be followed, so the prelink process might end up working with files actually residing in other directories than the ones you originally specified. Finally, the -b lines show blacklisted programs (patterns also can be used) that should be skipped by the prelinking process. I recommend leaving these lines alone. If your prelink experiments show that certain programs cannot be prelinked (you'll get an error message if you try), you should add an appropriate -b line to avoid future unnecessary warnings. As an example, Listing 1 shows some portions of my (already provided in OpenSUSE) /etc/prelink.conf file.
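As a hedged sketch only (the directory names below are illustrative assumptions, not taken from any distribution's shipped file, and the blacklisted program name is hypothetical), a minimal /etc/prelink.conf might look like this:

```
# Process these directories
-l /usr/bin
-l /usr/lib
# Process this one too, following symlinks
-h /usr/local/lib
# Skip a program that cannot be prelinked (patterns are allowed)
-b /usr/bin/some-unprelinkable-program
```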

If you want to prelink a single program, just do prelink theProgramPathAndName, and if the program can be prelinked successfully (remember my comment—this just isn't feasible for some programs), the original binary ELF file will be overwritten in place with the slightly larger, prelinked version.

You could start a massive prelinking session by executing prelink -a, which will go through all the -l and -h directories in /etc/prelink.conf and prelink everything it finds. Here are a few more options to note:

  • Do a dry run by including the -n option. This generates a report of all results, but no changes will be committed to disk. Use this to see whether there are unexpected problems or files to be excluded.

  • Include the -m option so prelink conserves virtual address space. With many libraries on your system (highly likely), assigning every library its own unique address range can exhaust the address space, especially on 32-bit machines; -m lets libraries that never appear together in the same program share addresses. On my own machine, prelink fails to work if I omit this option, so my usual command to prelink everything possible is prelink -m -a.

  • If you dislike the prelinked files, or if you tire of re-prelinking everything every time you get updated libraries, use the -u parameter to undo the changes. Executing prelink -u aPrelinkedProgramName will restore the program to its previous, unlinked format, with no fuss. To revert everything to the original situation, do prelink -a -u.
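The options above can be combined into a cautious session; this is a sketch under the assumption that the prelink package is installed (the script only reports and skips if it is not):

```shell
# Cautious prelink session: dry run first, real run second.
if command -v prelink >/dev/null 2>&1; then
    prelink -n -a    # dry run: report only, nothing written to disk
    prelink -m -a    # the real thing (needs root); -m conserves address space
    # Undo everything later with: prelink -a -u
    msg="prelink session attempted"
else
    msg="prelink not installed; skipping"
fi
echo "$msg"
```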

The prelinked versions of all programs are executed just like the normal ones, but will load a bit faster, thus providing a snappier feel. I have found conflicting opinions as to actual, measured results, but most references point to real speedups.
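If you want numbers for your own machine, you can time a program's start-up before and after prelinking. A rough sketch using GNU date (the program choice and the millisecond arithmetic are illustrative; results vary wildly between runs, so repeat the measurement several times):

```shell
# Crude start-up timing: run the same program twice and compare.
# The second run is usually faster because libraries are already cached.
start=$(date +%s%N)
/bin/ls >/dev/null
end=$(date +%s%N)
echo "first run: $(( (end - start) / 1000000 )) ms"

start=$(date +%s%N)
/bin/ls >/dev/null
end=$(date +%s%N)
echo "second run: $(( (end - start) / 1000000 )) ms"
```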




How to get rid of fonts?


OK, so 'xfontsel' shows I have over 10K fonts. Many of the families are clearly for foreign languages and came with the install. How do I get rid of some of them? 'apropos font' was no help, and 'adept' indicated I didn't have any font packages installed. I use Kubuntu. 'man apt-get' didn't give me a clue about finding font packages. Suggestions? Thanks.

Wrong information about prelink


The section about prelink is incorrectly stating that...

Using prelink in this way obviously requires more disk space (for there will be a copy of every prelinked library within each executable file), but with the current large disks, this won't even be noticed.

This is complete rubbish! The author is clearly confusing rewriting the relocation tables with hard-wiring the libraries into executables. The scheme described by the author would turn all binaries into statically linked programs. Nothing is less true! As a simple peek at the man page (or even Wikipedia) reveals, only the relocation tables are rewritten, so that when a library is loaded into memory, its symbols sit at the right spots in virtual memory; the calculation of the symbol locations no longer needs to be done, because they are exactly where the program expects them.

So prelinking does not use a single byte of extra disk space (apart from the lightweight checksum mechanism).

Another blatant fault:

Include the -m option so prelink will try to conserve memory, if you have many libraries in your system (highly likely) and not a very large memory.

This has *nothing* to do with your actual memory, but with the 4GB virtual address space limit on 32-bit systems! It just means that if each library were given its own unique address (which is what prelink does without the -m option), you'd exceed the virtual address space limit. The solution is to find which libraries are never linked into the same program and let them overlap in virtual memory, since they'll never occur at the same time in the same process.

This article is also quite mediocre in the section about compiling a kernel; you're supposed to know how much RAM your system has. Wrong again. It's perfectly safe and performant to turn on "High Memory Support (4GB)" even if you only have 1GB. And the instructions for compiling a kernel are among the poorest I've seen around. You, a daring non-kernel-hacking user, should nowadays only install a kernel through your packaging system (like make-kpkg for Debian/Ubuntu) and not with the good old 'make' command, which might well overwrite your current kernel image and leave your system in an unbootable state (for example, when initrd support has been omitted and your filesystem drivers are compiled as modules).

This article had good intentions, but the proofreading (if any) clearly missed some faults.