Tips for Optimizing Linux Memory Usage

by Jeff Tranter
Introduction

As with most Unix-compatible operating systems, the single most important factor in determining the performance you get out of Linux is often the amount of physical memory available. This is often a source of confusion to users accustomed to other systems such as MS-DOS. Since many Linux users are on a tight budget, the option of simply purchasing more memory is not always feasible. This article presents some ways in which you can make better use of the memory you already have.

Background

Linux implements a demand-paged virtual memory system. Processes have a large (4 gigabyte) virtual memory space. As virtual memory is referenced, the appropriate pages are transferred between disk and physical memory.

When there are no more physical memory pages available, the kernel swaps some older pages back to disk. (If they are code pages that have not been changed, then they are just discarded; otherwise they are written to the swap areas.)

Disk drives are mechanical devices; reading and writing to disk is several orders of magnitude slower than accessing physical memory. If the total memory pages required significantly exceed the physical memory available, the kernel starts spending more time swapping pages than executing code. The system begins thrashing, and slows down to a crawl. If this increases to a point where the swap device becomes fully utilized, the system can virtually come to a standstill. This is definitely a situation we want to avoid.

When extra physical memory is not in use, the kernel attempts to put it to work as a disk buffer cache. The disk buffer stores recently accessed disk data in memory; if the same data is needed again it can be quickly retrieved from the cache, improving performance. The buffer grows and shrinks dynamically to use the memory available, although priority is given to using the memory for paging. Thus, all the memory you have is put to good use.

Tools for Measuring Memory Utilization

In order to know what your memory situation is and whether any changes you make are resulting in improvement, you need to have some way of measuring memory usage. What tools do we have at our disposal?

When the system first boots, the ROM BIOS typically performs a memory test. You can use this to identify how much physical memory is installed (and working) in your system, if you don't know already. On my system, it looks something like this:

ROM BIOS (C) 1990
008192 KB OK WAIT......

The next piece of useful information is displayed during the Linux boot process. Output such as the following should be displayed:

Memory: 7100k/8192k available (464k kernel code, 384k reserved, 244k data)
...
Adding Swap: 19464k swap-space

This shows the amount of RAM available after the kernel has been loaded into memory (in this case 7100K out of the original 8192K). You can also see if the swap space has been properly enabled. If the kernel bootup messages scroll by too quickly to read, on many systems you can recall them at a later time using the “dmesg” command.
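If your system provides “dmesg”, a quick way to review just the memory-related messages afterwards is to filter its output (the exact message text varies between kernel versions):

dmesg | grep -i mem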

Once Linux is running, the “free” command is useful for showing the total memory available (which should match that shown during boot-up), as well as a breakdown showing the amount of memory being used and the amount free. (If you don't have a “free” command, you can use “cat /proc/meminfo”.) Both physical memory and swap space are shown. Here is a typical output on my system:

        total   used    free    shared  buffers
Mem:    7096    5216    1880    2328    2800
Swap:   19464   0       19464

The information is shown in kilobytes (1024 bytes). The “total” memory is the amount available after loading the kernel. Any memory being used for processes or disk buffering is listed as “used.” Memory that is currently unused is listed in the “free” column. Note that the total memory is equal to the sum of the “used” and “free” columns.

The memory indicated as “shared” is an indication of how much memory is common to more than one process. A program such as the shell typically has more than one instance running. The executable code is read-only and can be shared by all processes running the shell.

The “buffers” entry indicates how much of the memory in use is currently being used for disk buffering.

The “free” command also shows very clearly whether the swap space is enabled, and how much swapping is going on.

To better understand how the kernel uses memory, it is instructive to watch the output of the “free” command as the system is used. I'll show some examples taken from my own system; I suggest you try similar experimentation yourself.
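One simple way to watch is to run “free” repeatedly from a shell loop; for example, the following displays the memory statistics every five seconds until you interrupt it with Ctrl-C:

while true; do free; sleep 5; done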

On bootup, with one user logged in, my system reports the following:

        total   used    free    shared  buffers
Mem:    7096    2672    4424    1388    1136
Swap:   19464   0       19464

Note that we have considerable free memory (4.4MB) and a relatively small disk buffer (1.1MB). Now watch how the situation changes after running a command that reads data from the disk. (In this case I typed ls -lR /.)

        total   used    free    shared  buffers
Mem:    7096    5104    1992    1396    3460
Swap:   19464   0       19464

We see that the disk buffer has grown by over 2 MB. This brings the “used” memory up correspondingly, and the free memory down. Next, I start up the X Window system and examine the results:

        total   used    free    shared  buffers
Mem:    7096    7016    80      3112    3792
Swap:   19464   8       19456

This has caused the memory used to increase to 7MB, leaving only 80K free. The increase is to support the additional processes running (the X server, window manager, xterm, etc.). Note that the disk buffer didn't shrink, because there is still free memory. Remember: “free” memory means memory that is being wasted.

Now I start up the GNU chess program, having it play against itself. This starts two instances of a rather large program:

        total   used    free    shared  buffers
Mem:    7096    7016    80      1080    860
Swap:   19464   5028    14436

We see now that the disk buffer has shrunk down to less than 1MB and we are 5MB into swap to accommodate the large processes. Because of the swapping, the system has slowed down, and heavy disk drive activity can be heard. There is still a small amount of free memory. (The kernel tries to prevent user processes from taking all of the available memory; it reserves some for the “root” user only.)

The next step is to exit the X Window system and the applications running under it; here is the result.

        total   used    free    shared  buffers
Mem:    7096    2444    4652    412     1480
Swap:   19464   728     18736

We now have lots of free memory, the swap usage is almost gone (some idle programs are still presumably swapped out), and the disk buffer is starting to grow again.

The “top” and “ps” commands are also very useful for showing how memory usage changes dynamically, and how individual processes are using memory. For the scenario described earlier, we can see from the output of “ps” that each of the two chess processes was taking almost 8MB of virtual memory, obviously more than could fit in physical memory, causing the system to thrash.

USER  PID %CPU %MEM SIZE  RSS TTY STAT START   TIME COMMAND ...
tranter   282  4.1 34.4 7859 2448 v01 D  14:08 0:11 gnuchessx 40 5
tranter   285  7.9 30.7 7859 2180 v01 D  14:09 0:21 gnuchessx 40 5
...
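To find the largest processes on your own system, you can sort the “ps” output numerically on the SIZE column; the exact flags and column position vary between versions of ps, so treat this as a sketch:

ps aux | sort -rn -k5 | head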

Another facility for getting system status information is built into the virtual console driver. This depends on your keyboard mapping, but the default for the US keyboard is to use the Scroll-Lock key. Pressing <Alt><Scroll Lock> shows the current value of the CPU registers. The <Shift><Scroll Lock> combination shows memory information, similar to the “free” command, but more detailed. Finally, <Ctrl><Scroll Lock> will give information on individual processes, much like the “ps” command.

These keys can be particularly handy if your system is slow, or appears to have crashed. Note that if you are running the syslog daemon, this information will probably be logged to a file instead of being displayed on the console. On my Slackware system for example, it is logged to the file /var/adm/syslog.
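To dig the reports back out of the log afterwards, a simple filter works; adjust the file name to wherever your syslog configuration writes kernel messages:

grep -i mem /var/adm/syslog | tail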

Increasing Available Memory

Now that we have some measurement tools at our disposal, it's time to try to improve the memory situation. The first line of attack comes before Linux even boots: your ROM BIOS setup program has some options that may increase the amount of memory available. Many systems can shadow the ROM address ranges in RAM, since RAM is faster than ROM. Unlike MS-DOS, however, Linux doesn't use the ROM BIOS routines, so disabling shadowing can free close to 200K of memory. (If you still run MS-DOS occasionally, you may not want to do this.)

Incidentally, now is also a good time to look at your other setup options and do some experimentation. You may be able to improve CPU performance with options such as enabling caching and setting the CPU clock speed. One way to measure this is to use the BogoMips rating displayed when Linux boots as an indicator of CPU speed (this is not always accurate though, because as everyone knows, BogoMips are “bogus”). If you boot Linux from a hard disk, you may also be able to speed up reboot times by disabling the floppy disk drive seek at bootup. Don't change too many settings at once, or you may not know which changes are having a positive effect. Be sure to write down your original settings in case you put your system in a state where it will no longer boot.

Recompiling the Kernel

Are you still using the default kernel that came when you installed Linux? If so, shame on you! Kernel memory is special: unlike the memory pages used by processes, the kernel is never swapped out. If you can reduce the size of the kernel, you free up memory that can be used for executing user programs (not to mention reducing kernel compile times and disk storage).

The idea here is to recompile the kernel with only the options and device drivers you need. The kernels shipped with Linux distributions typically have every possible driver and file system compiled in so that any system can boot from it. If you don't have network cards, CD-ROM, SCSI, and so on, you can save considerable memory by removing them from the kernel. Besides, you can't really consider yourself a Linux hacker if you've never recompiled a customized kernel yourself.
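The exact procedure depends on your kernel version, but a typical build sequence, run from the kernel source directory (usually /usr/src/linux), looks something like this:

make config      # answer the prompts, saying "n" to drivers you don't need
make dep         # recompute dependencies for the new configuration
make clean       # remove objects left over from previous builds
make zImage      # build the compressed kernel image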

If there are drivers that you only need occasionally, consider building several kernels, and set up LILO to let you choose an alternate kernel when booting. If you have a math coprocessor, you can consider taking out the FPU emulation routines as well. You can also remove any of the Linux file systems that you do not require.
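As a sketch, the relevant part of an /etc/lilo.conf offering two kernels might look like the following (the image names and root device here are only examples; run /sbin/lilo after editing, then type the label you want at the LILO boot prompt):

image = /vmlinuz          # everyday kernel, minimal drivers
    label = linux
    root = /dev/hda1
image = /vmlinuz-full     # alternate kernel with the occasional drivers
    label = full
    root = /dev/hda1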

More advanced Linux hackers might want to look at the “modules” facility, which allows for loadable device drivers. With this you can dynamically add and remove drivers without rebooting. Long available to kernel hackers, this facility has now become part of the standard kernel. It is particularly useful for rarely used devices, such as a tape drive that is needed only for occasional backups.
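The commands involved are “insmod” to load a module, “lsmod” to list the modules currently loaded, and “rmmod” to unload one. For a tape drive used only during backups, the session might look like this (the module name is just an illustration):

insmod ftape     # load the floppy tape driver before the backup
lsmod            # confirm that it is loaded
rmmod ftape      # unload it afterwards, freeing the kernel memory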

Finally, make sure you are running a recent kernel. Newer kernels, as well as (in most cases) being more stable, also have improvements in memory usage.

Compiling Applications

If you develop your own applications, or compile code you obtain from the Internet or bulletin board systems, then using the right compile options can reduce the memory used. Turning on optimization will generally produce code which is smaller and executes faster, as well as requiring less memory. A few optimizations, such as in-line functions, can make the code larger. You should also check that your executables are dynamically linked and stripped of debug information.

Which optimizations are best depends on the specific application and even on the version of compiler used; you may wish to experiment.
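As a minimal example, compiling with optimization and then stripping the result might look like this (“myprog” is just a placeholder name):

gcc -O2 -o myprog myprog.c   # optimize; usually smaller and faster code
strip myprog                 # discard the debugging symbols
ldd myprog                   # confirm that it is dynamically linked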

Reducing Memory Usage Further

Once Linux is up and running your new kernel, it's time to look at where the memory is going. Before you even log on, how many processes are running?

The bare minimum for a Linux system would typically be:

  • init (this starts all other processes)

  • update (this periodically writes the disk buffers to disk)

  • a single getty (which becomes your shell when logged in)

Run “top” and see what is running on your system. How many getty processes do you need? Do you really need all those other processes such as lpd, klogd, syslogd, crond, and selection? On a standalone system, you don't need to run full networking software.
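To see how many gettys are started at boot time, and on which terminals, check /etc/inittab; each getty you can do without saves a process and its memory:

grep getty /etc/inittab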

If you are using an init package that supports multiple run levels, you might want to consider defining several different run levels. This way you could, for example, switch your system between full networking and running standalone, allowing you to free up resources when you don't need them.
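Assuming, for example, that you defined run level 2 as your standalone configuration, you could drop out of full networking with a single command (the level numbers depend entirely on how your inittab is set up):

telinit 2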

You can also examine some of your larger executables to see if they were built with the appropriate compiler and linker options. To identify the largest programs, try using a command such as this:

ls -s1 /bin /usr/bin /usr/bin/X11 | sort -n | tail

Strictly speaking this only finds the largest files, but file size is usually a good indication of the memory requirements of a program.
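You can then check whether a given large binary is dynamically linked and stripped; for example, using emacs purely as an illustration:

file /usr/bin/emacs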

The most common shell under Linux is GNU BASH. While very functional, it is also quite large. You can save memory by using a smaller shell such as the Korn shell (usually called ksh or pdksh).

The emacs editor is also big; you could use a smaller editor such as vi, jove, or even ed instead.

The X Window System

If you ran the command line described earlier, one of your largest binaries was probably the X server. The X Window System requires a lot of memory.

The first question to consider is, do you really need to run X? Using the virtual consoles and selection service you can have multiple windows supporting cut & paste of text using a mouse. Particularly while performing large compiles (such as the kernel), you should consider the option of simply not running X.

There is also a windowing system called “mgr” that can be used as an alternative to X and requires less memory.

If you decide to use X, you can obtain replacements for some of the standard tools that require fewer resources. “Rxvt” is similar to xterm, but requires significantly less memory. The window manager “fvwm” also uses fewer resources than most, and “rclock” is a small X-based clock program. These three tools, written by Robert Nation, can make running X feasible on a machine that constantly swapped before.

How many programs do you run on the X desktop? Run “top” to see how much memory is being taken by xclock, xeyes, xload, and all those other goodies you think you need.

The “Tiny X” package, put together by Craig I. Hagan, contains the Korn shell, fvwm window manager, rxvt, rclock, X server, and the minimum of other files needed to run X. The package is small enough to fit on one 3.5" floppy disk. Also included are some useful notes on saving memory under X.

With the techniques described here, you can run small X applications reasonably well on a machine with only 4 megabytes of memory. On machines with more memory, the same methods will allow you to run larger applications and free up memory to use for disk buffering.

Conclusions

By combining the techniques I've described, the net effect on system performance can be well worth the effort. I encourage you to experiment, and along the way you'll almost certainly learn something new.

For More Information

The software mentioned in this article is available on a number of Internet archive sites, including sunsite.unc.edu and tsx-11.mit.edu. I suggest getting a copy of the Linux Software Map to help track down the software you need.

If you want to learn more about how the Linux kernel implements memory management, check out “The Linux Kernel Hackers' Guide” by Michael K. Johnson, part of the Linux Documentation Project. Appendix A of that document includes an extensive bibliography of books covering operating system concepts in general.

“How to Maximize the Performance of X” is periodically posted to the Usenet newsgroup news.answers, and contains more ideas for improving X performance on small systems.
