Maximize Desktop Speed
One of the best things about Linux is that you can get much more performance out of the same computer than with other operating systems. However, there always is room for improvement, and you should be able to get a bit more speed out of your box by applying some specific enhancements.
Don't expect miracles, however. No amount of tweaking can turn a Pentium II into a Quad Core monster (remember the old saying about silk purses and sow's ears?), but you can expect to get a more responsive machine that “handles” better. Although some of the changes are internal and hard to see, you will find that your system feels livelier, your clicks produce answers faster, you can switch between applications more quickly and programs run in less time.
Let's be practical. If you get a better CPU, there's probably nothing in this article that will match your results, and the same goes for a better graphics card or speedier disks. But, you expected that, didn't you? (Making such hardware upgrades would benefit not only Linux, but also every other operating system out there.) However, making such changes is practically the equivalent of getting a whole new machine, so you wouldn't really be enhancing the performance of your old box, but starting anew.
That said, this article discusses configuration changes with the aim of leaving everything (well, almost everything) as it was but making it perform better. Of course, these changes aren't all equal; some are more difficult (and riskier) than others, some require rebooting or other procedures, and some even require delving into the command line and editing configuration files. But, don't give up. The results are worth it.
As a final note, I use OpenSUSE (version 10.3) and KDE for the examples in this article. If you are using other distributions or desktop environments, you will find small differences in file locations or procedures. Currently, because most distributions offer largely the same packages and drivers, one of the largest remaining differences between them is precisely in the configuration tools, so you may need to do some searching on your own. In any case, it's a safe bet you will find a way to manage anything described here, only in a different way.
Similar to the old real-estate adage “Location, location, location”, getting more RAM, RAM, RAM will provide a great improvement. All processes need memory, and whenever the kernel runs out of RAM, it starts swapping to disk, but as this is orders of magnitude slower, your performance takes a hit. If you are willing to spend something, don't hesitate. Go out and get some extra RAM sticks for your machine. As soon as you plug them in, you will notice better performance. Getting more RAM isn't very costly, and it doesn't require any configuration or re-installation.
Even if you don't want to spend the money for more RAM, you can make Linux manage the available RAM in a more efficient way. Here are some simple changes to consider:
Change from KDE or GNOME to a lighter desktop environment. GNOME is about the worst in terms of RAM requirements (although its appetite is still far below that of Windows Vista), and KDE is a close second. Try using a less-demanding environment, such as Xfce or Enlightenment, which is used in gOS, the operating system pre-installed in the Everex Green gPCs sold at Wal-Mart [see Doc Searls' interview with David Liu on page 58 for more on the gOS]. Other possibilities include IceWM, Blackbox, Fluxbox, Fvwm, JWM or (the now seemingly defunct) Window Maker. Note that these window managers are not exactly equivalent to a full desktop environment, so you will have to adapt a bit. Plenty of popular distributions, such as DSL (Damn Small Linux) and Puppy Linux, use these lightweight window managers, and many are available as optional packages for Red Hat or SUSE.
Get rid of fonts you never use. I was once a fonts junkie and loaded my box with several hundred fonts (I'm not exaggerating) just in case I might use them some day. Each font requires memory, and the fewer fonts you have, the more RAM you will free. And, some programs will run faster, because they will have shorter lists of fonts to load.
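To see how large your own font collection has grown, you can ask Fontconfig to count it (this assumes the fontconfig package, which provides the fc-list command, is installed, as it is on most desktop distributions):

```shell
# Count every font Fontconfig knows about; a number in the
# hundreds means there's probably plenty to prune
fc-list 2>/dev/null | wc -l
```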
Reduce the number of virtual desktops. Windows users work with only one desktop, but do you really need 16 virtual desktops in Linux? Experiment a bit with this. I wouldn't go down to one desktop, but most of the time, having two or three virtual desktops is more than enough.
Linux (like most, if not all, modern operating systems) uses a technique called virtual memory to give programs the impression that they have plenty of memory available, even more than the actual RAM size of the machine. This technique implies using disk space (the swap partition) to simulate actual RAM, swapping pieces back and forth. Of course, whenever this swapping process runs, you will experience longer response times and slower performance.
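You can watch this mechanism at work, because /proc reports both the configured swap space and how much of it currently is in use:

```shell
# Total vs. free swap space, straight from the kernel
grep -E 'SwapTotal|SwapFree' /proc/meminfo
# Every active swap area, with its size and current usage
cat /proc/swaps
```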
The kernel tries to prevent future swapping by doing some of it in advance, and you can alter the degree to which this is done by changing the swappiness parameter, from 0 (minimum swapping, done only if needed) to 100 (try to free as much RAM as possible).
The standard value is 60. There are two equivalent ways to lower it; working as root, either do:

sysctl -w vm.swappiness=25

or:

echo 25 > /proc/sys/vm/swappiness
Note that this change will last only until the next time you restart your box. If you want to make the change permanent, edit /etc/sysctl.conf, and add a line like the following:

vm.swappiness=25
Now, it will be loaded every time you boot. If you want to test the changes, make the edit to /etc/sysctl.conf and then reload it with /sbin/sysctl -p.
Is it better to have lower values (down to 5 or 10) or higher values (up to 100)? Personally, I use 5, and I like the way my machines (desktop and laptop) work. If you notch it up, the kernel will use more CPU time to free RAM in advance; if you turn it down, the CPU will be freer, but there will be more I/O.
For CPU-intensive programs, if you have fast disks, I'd go with lower values, as I did myself. This will produce improvements, such as when switching between applications, because it's more likely that they reside in physical RAM instead of on the swap partition. Even if you set swappiness to zero, if needed, the kernel will do its swapping, so once again, you would benefit from getting more RAM if possible.
However, Linux kernel developer Andrew Morton suggests using 100, and author Mladen Gogala observes he found no difference, so you may want to try different values and see what you prefer (see Resources for links to articles on this topic).
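Whatever value you settle on, it's easy to check what your kernel currently is using before and after your experiments:

```shell
# The current swappiness setting (60 on most stock kernels)
cat /proc/sys/vm/swappiness
# How much swap remains unused right now
grep SwapFree /proc/meminfo
```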
Under Linux, most applications are in a special Executable and Linkable Format (ELF) that allows them to be smaller. Instead of including all needed libraries, the program file has references to them, which are resolved (or linked) when the code is loaded for execution. You might recognize here a classic time vs. space compromise: a smaller file size, but a higher loading time. If your program requires only a few libraries, the linking process is quick, but for larger programs that use several libraries, the linking process gets noticeably longer.
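You can see this linking work for yourself with the ldd command, which lists every shared library a dynamically linked binary needs (/bin/ls here is just a handy example; any dynamically linked program will do):

```shell
# Each line of output is a library the loader must locate and
# link at startup; a long list means more work before the
# program actually can run
ldd /bin/ls
```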
If you are game to use a bit more disk space (and to spend some time preparing all files), you can use the prelink command to do most of the linking phase in advance: prelink assigns each shared library a fixed load address and stores the precomputed link information in the program file itself, so the program is nearly ready to execute as soon as it is loaded. (Actually, I fudged a bit here. When the program is loaded, the libraries are checked to verify they haven't changed since the prelinking, but that check is much speedier than doing the linking itself.) Using prelink in this way obviously requires more disk space (each prelinked executable grows to hold the extra information), but with the current large disks, this won't even be noticed.
In order to prelink your programs, you need to set up a configuration file (/etc/prelink.conf), so prelink knows where to search for shared libraries and what programs to work with should you opt for the -a option and prelink everything possible. The format of this file is simple: blank lines don't matter, comments start with a # character, and the rest of the lines should be something like the following:
-l aDirectoryToBeProcessed
-h anotherDirectoryButAllowingForSymlinks
-b fileToSkip
The -l lines specify directories that should be processed. The -h lines are pretty much the same, but allow for symlinks, which will be followed, so the prelink process might end up working with files actually residing in directories other than the ones you originally specified. Finally, the -b lines show blacklisted programs (patterns also can be used) that should be skipped by the prelinking process. I recommend leaving the provided -b lines alone. If your prelink experiments show that certain programs cannot be prelinked (you'll get an error message if you try), add an appropriate -b line to avoid future unnecessary warnings. As an example, Listing 1 shows some portions of my (already provided in OpenSUSE) /etc/prelink.conf file.
Listing 1. Portions of the Provided OpenSUSE /etc/prelink.conf File
# Acrobat Reader
-b /usr/X11R6/lib/Acrobat5/Reader/intellinux/bin/acroread
-b /usr/X11R6/lib/Acrobat7/Reader/intellinux/bin/acroread
# RealPlayer
-b /usr/lib/RealPlayer8/realplay
[...some snipped lines...]
# Files to skip
-b *.la
-b *.png
-b *.py
-b *.pl
-b *.pm
-b *.sh
-b *.xml
-b *.xslt
-b *.a
-b *.js
# kernel modules
-b /lib/modules
[...more snipped lines...]
-l /lib
-l /lib64
-l /usr/lib
-l /usr/lib64
-l /usr/X11R6/lib
-l /usr/X11R6/lib64
-l /usr/kerberos/lib
-l /usr/kerberos/lib64
-l /opt/kde3/lib
-l /opt/kde3/lib64
If you want to prelink a single program, just do prelink theProgramPathAndName, and if the program can be prelinked successfully (remember my comment—this just isn't feasible for some programs), the original binary ELF file will be overwritten with the new, slightly larger version.
You could start a massive prelinking session by executing prelink -a, which will go through all the -l and -h directories in /etc/prelink.conf and prelink everything it finds. Here are a few more options to note:
Do a dry run by including the -n option. This generates a report of all results, but no changes will be committed to disk. Use this to see whether there are unexpected problems or files to be excluded.
Include the -m option so prelink will try to conserve memory, if you have many libraries in your system (highly likely) and not a very large memory. On my own machine, if I omit this option, prelink fails to work, so my usual command to prelink everything possible is prelink -m -a.
If you dislike the prelinked files, or if you get tired of prelinking everything every time you get updated libraries, use the -u parameter to undo the changes. Executing prelink -u aPrelinkedProgramName will restore the program to its previous, unlinked format, with no fuss. Of course, for a radical throwback to the original situation, do prelink -a -u.
The prelinked versions of all programs are executed just like the normal ones, but will load a bit faster, thus providing a snappier feel. I have found conflicting opinions as to actual, measured results, but most references point to real speedups.
No Prelink Needed in Ubuntu or Debian?
Recent Ubuntu and Debian distributions include a different mechanism for speeding up application loading, one that works without prelink.
To enable the faster startup times, do sudo apt-get install preload, and from that moment on, Linux monitors which applications you run and fetches those binaries and libraries into memory.
For example, if you use Firefox and OpenOffice.org every day, preload will determine that those two are common applications and will keep the needed libraries in RAM. Of course, should you change to Seamonkey and KOffice, preload eventually will detect your change of habits and do the appropriate thing.
Every time you create, modify or simply access a file, Linux dutifully records the current timestamp in its directory structures. Even if you merely read a file (without changing anything), Linux updates the file's inode (see Resources for more on inodes) with the current access time, which means that every read implies a write. Because writes obviously require some time, doing away with these updates results in performance gains.
In order to achieve this enhancement, you need to change the way the filesystem is mounted. Working as root, do cat /etc/fstab to see something like the following:
/dev/hda1  /boot              ext2         acl,user_xattr    1 2
/dev/hda2  swap               swap         defaults          0 0
/dev/hda3  /                  reiserfs     acl,user_xattr    1 1
/dev/hdd1  /media/disk2       reiserfs     defaults          1 2
/dev/hdc   /media/cdrom       udf,iso9660  ro,user,noauto    0 0
proc       /proc              proc         defaults          0 0
sysfs      /sys               sysfs        noauto            0 0
debugfs    /sys/kernel/debug  debugfs      noauto            0 0
usbfs      /proc/bus/usb      usbfs        noauto            0 0
devpts     /dev/pts           devpts       mode=0620,gid=5   0 0
Given this output, the best candidates for the optimization are / and /dev/hdd1; /boot is used only when booting, the swap partition is out of bounds for you, and the others are not hard disks.
Making the change is simple. With your favorite text editor, add ,noatime to the options in the fourth column. When you are done, issue the mount -a command to remount all partitions, and then issue a plain mount to check whether the changes were done (Listing 2).
Listing 2. Checking the New Parameters with mount
$ mount -a
$ mount
/dev/hda3 on / type reiserfs (rw,noatime,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hda1 on /boot type ext2 (rw,acl,user_xattr)
/dev/hdd1 on /media/disk2 type reiserfs (rw,noatime)
Notice the noatime parameters in the /dev/hda3 and /dev/hdd1 lines. Those mean you did everything right, and access times are no longer being recorded.
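For the record, the edited /dev/hdd1 line in /etc/fstab itself now looks like this (the device names, of course, match my machine, not necessarily yours):

```
/dev/hdd1  /media/disk2  reiserfs  defaults,noatime  1 2
```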
By the way, if you research this on the Web, you may find a reference to another option, nodiratime, which has to do with directories. Do not bother setting this option, because noatime implies nodiratime.
I ran some tests (creating lots of files, and copying them to /dev/null) and timed the results both with and without the noatime option and found some small performance enhancements—every little bit helps.
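If you want to try a similar (admittedly crude) measurement yourself, you could create a batch of small files and time how long reading them all back takes, once with noatime and once without; the /tmp/atime-test path here is just an arbitrary scratch directory:

```shell
# Create 1,000 small files, then time reading them all back;
# repeat after remounting with (or without) noatime to compare
mkdir -p /tmp/atime-test
for i in $(seq 1 1000); do echo "some data" > /tmp/atime-test/file$i; done
time cat /tmp/atime-test/file* > /dev/null
```

Remember to remove the scratch directory (rm -rf /tmp/atime-test) when you are done.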
Now, if you've gotten this far, you're ready for the big one: enhancing your kernel.
Data Integrity vs. Speed?
Googling for filesystem performance enhancements, you might come upon a suggestion for ext3 and ReiserFS, involving another mounting option: data=writeback. This option effectively undoes part of the advantage of those two filesystems by weakening their journaling: metadata still is journaled, but data writes no longer are ordered with it. (Journaling is what keeps your filesystem consistent, even after a system crash.) If you include data=writeback, you'll gain an increase in speed at the cost of possibly having old data show up in recently written files after a crash. I don't like this kind of risk, so I don't use that option.
All the tweaks we have done so far are just part of the job, and you even can get a bit more speed if you recompile your kernel and adjust it optimally for your specific hardware and needs. Note that even though compiling a full kernel isn't the challenge it used to be (mainly you just have to make a few choices and key in some commands), there still is room for botching things up. Don't try this unless you feel comfortable.
Most distributions usually provide a one-size-fits-all kernel compiled with the most generic options, which should work for everybody. Of course, this won't necessarily fit your specific case. If your box has an Athlon XP CPU (as my laptop does), or many processors, or a certain graphics card, the generic kernel won't take advantage of them. What to do? You can tweak some kernel options and recompile it for optimal performance. Here, I pay specific attention to the options that enhance speed and responsiveness.
The specific commands used in this article are appropriate for the OpenSUSE distribution, but do vary from one distribution to another. Check your documentation for the specific commands you will need before trying to recompile your kernel.
Compiling your kernel isn't that difficult, but remember there's a distinct probability of hosing your machine and turning it into a paperweight. (Okay, that may be a bit of an exaggeration. In the worst case, you simply would have to re-install Linux, and you wouldn't lose your data.) In my case, I used the YaST administration tool and installed two kernels, so I could choose either of them at boot time, and if I destroyed one, I could reboot with the other one, re-install the broken kernel and keep trying.
You need some specific packages to do this: kernel-source (the source files for the actual kernel), gcc (the compiler), ncurses (for the menus) and bzip2 (used internally to create boot images). You also need to know a bit about your hardware. Use cat /proc/cpuinfo to see how many CPUs you have and their brands, and cat /proc/meminfo for RAM information (Listing 3).
Listing 3. You will need information about your CPU and RAM before recompiling your kernel.
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 6
model           : 8
model name      : Mobile AMD Athlon(tm) XP 2200+
[...some lines snipped...]
$ cat /proc/meminfo
MemTotal:       483488 kB
MemFree:         11560 kB
Buffers:         19888 kB
Cached:         323408 kB
SwapCached:       2768 kB
Active:         166432 kB
Inactive:       230396 kB
[...more lines snipped...]
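If you'd rather pull out just the figures you need instead of scrolling through the full output, a couple of greps will do it:

```shell
# Number of CPUs the kernel sees
grep -c '^processor' /proc/cpuinfo
# Total installed RAM
grep MemTotal /proc/meminfo
```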
Start with a dry run and recompile the kernel without any changes, just to see if everything is set up okay. Working as root, do what's shown in Listing 4.
Listing 4. Do a dry run to ensure that you have everything you need for compiling the kernel.
cd /usr/src/linux
make clean
make
make modules_install
make install
The make processes will run for a while, and although they might produce some warnings, there shouldn't be any errors. If everything still is running okay after you reboot, you can start experimenting; you already have done a kernel build. (If things did go seriously wrong, reboot with the other kernel, re-install the trashed kernel, fix the problem, and try a dry run again.)
Tweaking the kernel is simply a matter of choosing the appropriate options from a (large) menu. As root, do the following:
cd /usr/src/linux
make clean
make menuconfig
and you will see a screen (Figure 1) with a menu full of hundreds of options, although luckily, you will have to change only a few of them.
If graphical interfaces are more your style, change the last command to make xconfig for a friendlier way of working (Figure 2).
The following are some of the options to change:
Under General Setup, uncheck Cpuset support.
Under Processor Type and Features, check Tickless System and High Resolution Timer Support. Select the right CPU type under Processor Family, so the compiled kernel code will be optimized for it, and uncheck Generic x86 Support, which is needed only for generic kernels. Choose the amount of RAM you have under High Memory Support. Check Preempt the Big Kernel Lock, and under Preemption Model, choose Preemptible Kernel (Low-Latency Desktop). Note that for a server machine, you should select the No Forced Preemption option instead. Under Timer Frequency, choose 1000 (standing for 1000Hz). Finally, if you have a machine with only one CPU, uncheck Symmetric multi-processing support. If you have two or more CPUs, check that box, and under Maximum number of CPUs, enter the correct number. (All this data comes from doing cat /proc/cpuinfo, as discussed previously.)
Under Block Layer, uncheck everything, unless you have disks larger than 2TB.
Under Kernel Hacking, uncheck Kernel Debugging, Collect kernel timer statistics, Debug preemptible kernel and Write protect kernel read-only data structures.
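For reference, these menu choices end up as symbols in the kernel's .config file. The following sketch shows roughly what they would map to on a 2.6-era kernel for a single-CPU Athlon XP box; exact symbol names vary between kernel versions, so treat this as an illustration, not something to paste in:

```
# Tickless System and High Resolution Timer Support
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
# Processor Family: Athlon/Duron/K7, without the generic fallback
CONFIG_MK7=y
# CONFIG_X86_GENERIC is not set
# High Memory Support (up to 4GB of RAM)
CONFIG_HIGHMEM4G=y
# Preempt the Big Kernel Lock; Preemptible Kernel (Low-Latency Desktop)
CONFIG_PREEMPT_BKL=y
CONFIG_PREEMPT=y
# Timer Frequency: 1000Hz
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# Single-CPU machine
# CONFIG_SMP is not set
```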
After you are done selecting options, exit the configuration program (say “yes” to save the new kernel configuration) and then do the following:
make
make modules_install
make install
Watch for unexpected error messages; there should be none. As with the dry run, you will need to wait; on my laptop, the complete process requires more than 30 minutes. If you get an error message, either go back to the menu to try to fix whatever was wrong, or reboot with your backup kernel, re-install the broken kernel, and try again. If everything is okay, simply reboot, and try out your new kernel.
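Once the new kernel boots, it's worth double-checking that you really are running it and not the old one; uname reports the version and build date of the active kernel:

```shell
# Kernel release (version string) of the running kernel
uname -r
# Build date: an easy way to tell a freshly compiled kernel
# from the stock one
uname -v
```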
By applying just a few changes to your Linux box, you can get faster response and greater speed, and you will be able to show off your machine to everybody. After following the suggestions in this article, look around the Internet on your own, and you will be able to pick up even more speed. But be careful: making these enhancements can become addictive!
“The ELF Object File Format by Dissection” by Eric Youngdale: www.linuxjournal.com/article/1060
“Making inodes behave” by Clay J. Claiborne, Jr.: www.linuxjournal.com/article/4404
“Wikipedia: Inode”: en.wikipedia.org/wiki/Inode
“Linux: Tuning swappiness”: kerneltrap.org/node/3000
“Wikipedia: Virtual Memory”: en.wikipedia.org/wiki/Virtual_memory
“Tuning Linux VM on Kernel 2.6” by Mladen Gogala: www.dba-oracle.com/t_tuning_linux_kernel_2_6_oracle.htm
“...and especially for your laptop”: beranger.org/index.php?article=1547&page=3k
gOS Features: www.thinkgos.com/technology.html
“Wikipedia: gOS”: en.wikipedia.org/wiki/GOS_(Linux_distribution)
Federico Kereki is an Uruguayan Systems Engineer, with more than 20 years' experience teaching at universities, doing development and consulting work, and writing articles and course material. He has been using Linux for many years, having installed it at several different companies. He is particularly interested in the better security and performance of Linux boxes.