Maximize Desktop Speed
echo 25 > /proc/sys/vm/swappiness
Note that this change will last only until the next time you restart your box. If you want to make the change permanent, edit /etc/sysctl.conf, and add a line like the following:

vm.swappiness = 25
Now, it will be loaded every time you boot. If you want to test the changes, make the edit to /etc/sysctl.conf and then reload it with /sbin/sysctl -p.
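For example, the whole sequence might look like the following; the first two commands need root, so they are shown as comments, while the verification step does not (vm.swappiness is the standard sysctl name for this setting):

```shell
# Make the change permanent (run these as root):
#   echo "vm.swappiness = 25" >> /etc/sysctl.conf
#   /sbin/sysctl -p
# Verify the value currently in effect (no root needed):
cat /proc/sys/vm/swappiness
```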
Is it better to have lower values (down to 5 or 10) or higher values (up to 100)? Personally, I use 5, and I like the way my machines (desktop and laptop) work. If you notch it up, the kernel will use more CPU time to free RAM in advance; if you turn it down, the CPU will be freer, but there will be more I/O.
For CPU-intensive programs, if you have fast disks, I'd go with lower values, as I did myself. This produces improvements, for example, when switching between applications, because it's more likely that they reside in physical RAM instead of on the swap partition. Even if you set swappiness to zero, the kernel will still swap when it must, so once again, you would benefit from getting more RAM if possible.
However, Linux kernel developer Andrew Morton suggests using 100, and author Mladen Gogale observes he found no difference, so you may want to try different values and see what you prefer (see Resources for links to articles on this topic).
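Whatever value you try, one way to judge its effect on your workload is to watch swap traffic while you work; here's a quick sketch using the standard procps tools:

```shell
# Memory and swap usage, in megabytes
free -m
# The si/so columns show pages swapped in and out per second;
# take five one-second samples while switching between applications
vmstat 1 5
```

If si/so stay near zero while you work, a lower swappiness value is costing you nothing; sustained nonzero rates suggest you're short on RAM, whatever the setting.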
Under Linux, most applications are in a special Executable and Linkable Format (ELF) that allows them to be smaller. Instead of including all needed libraries, the program file has references to them, which are resolved (or linked) when the code is loaded for execution. You might recognize here a classic time vs. space compromise: a smaller file size, but a higher loading time. If your program requires only a few libraries, the linking process is quick, but for larger programs that use several libraries, the linking process gets noticeably longer.
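You can watch this dependency resolution from the outside with ldd, which prints the shared libraries the dynamic linker must find at load time; /bin/ls is used here only as a familiar example binary:

```shell
# List the shared libraries a dynamically linked ELF binary depends on
ldd /bin/ls
```

The more lines ldd prints for a program, the more resolution work the loader does each time you start it.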
If you are willing to use a bit more disk space (and spend some time preparing all the files), you can use the prelink command to do most of the linking work in advance, storing the precomputed linking information in the program file itself, so it is ready to execute almost as soon as it is loaded. (Actually, I fudged a bit here. When the program is loaded, the libraries are checked to verify they haven't changed since the prelinking, but that check is much speedier than doing the linking itself.) Using prelink this way obviously requires more disk space (each prelinked executable grows to hold the extra information), but with the current large disks, this won't even be noticed.
In order to prelink your programs, you need to set up a configuration file (/etc/prelink.conf), so prelink knows where to search for shared libraries and what programs to work with should you opt for the -a option and prelink everything possible. The format of this file is simple: blank lines don't matter, comments start with a # character, and the rest of the lines should be something like the following:
-l aDirectoryToBeProcessed -h anotherDirectoryButAllowingForSymlinks -b fileToSkip
The -l lines specify directories that should be processed. The -h lines are pretty much the same, but allow for symlinks, which will be followed, so the prelink process might end up working with files actually residing in other directories than the ones you originally specified. Finally, the -b lines show blacklisted programs (patterns also can be used) that should be skipped by the prelinking process. I recommend leaving these lines alone. If your prelink experiments show that certain programs cannot be prelinked (you'll get an error message if you try), you should add an appropriate -b line to avoid future unnecessary warnings. As an example, Listing 1 shows some portions of my (already provided in OpenSUSE) /etc/prelink.conf file.
Listing 1. Portions of the Provided OpenSUSE /etc/prelink.conf File
# Acrobat Reader
-b /usr/X11R6/lib/Acrobat5/Reader/intellinux/bin/acroread
-b /usr/X11R6/lib/Acrobat7/Reader/intellinux/bin/acroread
# RealPlayer
-b /usr/lib/RealPlayer8/realplay
[...some snipped lines...]
# Files to skip
-b *.la
-b *.png
-b *.py
-b *.pl
-b *.pm
-b *.sh
-b *.xml
-b *.xslt
-b *.a
-b *.js
# kernel modules
-b /lib/modules
[...more snipped lines...]
-l /lib
-l /lib64
-l /usr/lib
-l /usr/lib64
-l /usr/X11R6/lib
-l /usr/X11R6/lib64
-l /usr/kerberos/lib
-l /usr/kerberos/lib64
-l /opt/kde3/lib
-l /opt/kde3/lib64
If you want to prelink a single program, just do prelink theProgramPathAndName, and if the program can be prelinked successfully (remember my comment: this just isn't feasible for some programs), the original binary ELF file will be overwritten with the new, slightly larger, prelinked version.
You could start a massive prelinking session by executing prelink -a, which will go through all the -l and -h directories in /etc/prelink.conf and prelink everything it finds. Here are a few more options to note:
Do a dry run by including the -n option. This generates a report of all results, but no changes will be committed to disk. Use this to see whether there are unexpected problems or files to be excluded.
Include the -m option so prelink will try to conserve memory, which helps if you have many libraries on your system (highly likely) and not very much RAM. On my own machine, if I omit this option, prelink fails to work, so my usual command to prelink everything possible is prelink -m -a.
If you dislike the prelinked files, or if you get tired of prelinking everything every time you get updated libraries, use the -u parameter to undo the changes. Executing prelink -u aPrelinkedProgramName will restore the program to its previous, non-prelinked form, with no fuss. Of course, for a radical throwback to the original situation, do prelink -a -u.
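Putting these options together, a cautious session might look like the following sketch; it guards on prelink actually being installed (not every distribution ships it), and only the harmless dry run is left uncommented:

```shell
#!/bin/sh
# Cautious prelink workflow sketch; assumes the prelink package is available.
if command -v prelink >/dev/null 2>&1; then
    prelink -n -a    # dry run: report what would happen, change nothing
    # prelink -m -a  # the real pass, conserving memory
    # prelink -a -u  # undo everything, should you change your mind
else
    echo "prelink is not installed on this system"
fi
```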
The prelinked versions of all programs are executed just like the normal ones, but will load a bit faster, thus providing a snappier feel. I have found conflicting opinions as to actual, measured results, but most references point to real speedups.