Tips for Optimizing Linux Memory Usage

In a previous issue, Jeff discussed ways to reduce disk space usage under Linux. In this sequel article, he shows some useful techniques for making the best use of another valuable resource—memory.
Increasing Available Memory

Now that we have some measurement tools at our disposal, it's time to try to improve the memory situation. The first line of attack comes before Linux even boots: your ROM BIOS setup program has options that may increase the amount of memory available. Many systems can "shadow" the ROM address ranges in RAM, which is faster than ROM. Unlike MS-DOS, however, Linux doesn't use the ROM BIOS routines, so disabling shadowing can free close to 200K of memory (if you still run MS-DOS occasionally, you may not want to do this).

Incidentally, now is also a good time to look at your other setup options and do some experimentation. You may be able to improve CPU performance with the options for enabling caching and setting the CPU clock speed. One way to measure this is to use the BogoMIPs rating displayed when Linux boots as an indicator of CPU speed (this is not always accurate, though; as everyone knows, BogoMIPs are “bogus”). If you boot Linux from a hard disk, you may also be able to speed up reboots by disabling the floppy disk drive seek at bootup. Don't change too many settings at once, or you won't know which changes are having a positive effect. Be sure to write down your original settings in case you put your system in a state where it will no longer boot.

Recompiling the Kernel

Are you still using the default kernel that came when you installed Linux? If so, shame on you! Kernel memory is special: unlike the memory pages used by processes, the kernel is never swapped out. If you can reduce the size of the kernel, you free up memory that can be used for executing user programs (not to mention reducing kernel compile times and disk storage).

The idea here is to recompile the kernel with only the options and device drivers you need. The kernels shipped with Linux distributions typically have every possible driver and file system compiled in so that any system can boot from them. If you don't have network cards, CD-ROM, SCSI, and so on, you can save considerable memory by removing those drivers from the kernel. Besides, you can't really consider yourself a Linux hacker if you've never recompiled a customized kernel yourself.
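For kernels of this vintage, the build procedure looks roughly like the following (a sketch; the README in your kernel source tree has the authoritative steps):

  cd /usr/src/linux
  make config      # answer y/n for each driver and option
  make dep         # recalculate source dependencies
  make clean
  make zImage      # build a compressed kernel image

The "make config" step is where the savings happen: answering "n" for hardware you don't own keeps those drivers out of the kernel entirely.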

If there are drivers you need only occasionally, consider building several kernels and setting up LILO to let you choose an alternate kernel at boot time. If you have a math coprocessor, you can consider taking out the FPU emulation routines as well. You can also remove any of the Linux file systems that you do not require.
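As a sketch, a two-kernel setup might look like this in /etc/lilo.conf (the image names here are hypothetical, and you must re-run lilo after editing so the boot map is rebuilt):

  image = /vmlinuz           # everyday minimal kernel
    label = linux
  image = /vmlinuz-full      # kernel with the occasional-use drivers
    label = full

At the LILO boot prompt you can then type "full" to boot the larger kernel when you need it.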

More advanced Linux hackers might want to look at the “modules” facility, which allows for loadable device drivers. With it you can add and remove drivers dynamically without rebooting. This facility was available to kernel hackers for some time, and it has now become a part of the standard kernel. It is particularly useful for devices such as tape drives that are needed only occasionally, for backups.
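Usage looks roughly like this; the module name below (ftape, a floppy tape driver) is only an example, and you need the module utilities installed:

  insmod ftape     # load the driver just before a backup
  lsmod            # list the modules currently loaded
  rmmod ftape      # unload it afterwards and reclaim the memory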

Finally, make sure you are running a recent kernel. Newer kernels are (in most cases) more stable and also include improvements in memory usage.
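You can check which kernel you are currently running with:

  uname -r         # prints the running kernel's version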

Compiling Applications

If you develop your own applications, or compile code you obtain from the Internet or bulletin board systems, using the right compile options can reduce the memory used. Turning on optimization generally produces code that is smaller and executes faster, and that therefore requires less memory. A few optimizations, such as function in-lining, can make the code larger. You should also check that your executables are dynamically linked and stripped of debugging information.
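For example, with gcc (the flags shown are typical, not exhaustive):

  gcc -O2 -o myprog myprog.c   # optimize: usually smaller and faster
  strip myprog                 # discard symbol and debugging information
  file myprog                  # check for "dynamically linked" and "stripped" (wording varies)
  ldd myprog                   # list the shared libraries the program uses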

Which optimizations are best depends on the specific application and even on the version of the compiler used; you may wish to experiment.

Reducing Memory Usage Further

Once Linux is up and running with your new kernel, it's time to look at where the memory is going. Before you even log on, how many processes are running?

The bare minimum for a Linux system would typically be:

  • init (this starts all other processes)

  • update (this periodically writes the disk buffers to disk)

  • a single getty (which becomes your shell when logged in)

Run “top” and see what is running on your system. How many getty processes do you need? Do you really need all those other processes, such as lpd, klogd, syslogd, crond, and selection? On a standalone system, you don't need to run full networking software.
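Where these daemons get started varies by distribution; on Slackware-style systems the boot scripts live in /etc/rc.d, so a grep will show which script launches a given daemon (a sketch, not exact paths for every system):

  grep -l lpd /etc/rc.d/*     # name the boot script(s) that start lpd

You can then comment out the corresponding lines with an editor, and the daemon will no longer be started at boot.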

If you are using an init package that supports multiple run levels, you might want to consider defining several different run levels. This way you could, for example, switch your system between full networking and running standalone, allowing you to free up resources when you don't need them.
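As a sketch, with a SysV-style init the default run level and the number of gettys are both controlled from /etc/inittab. The entries below are illustrative only; the exact getty program and its arguments vary from system to system:

  id:3:initdefault:                        # default run level
  c1:235:respawn:/sbin/agetty 38400 tty1   # one getty per virtual console;
  c2:235:respawn:/sbin/agetty 38400 tty2   # remove the lines you don't need

Switching a running system to another run level is then just a matter of, say, “telinit 2”.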

You can also examine some of your larger executables to see if they were built with the appropriate compiler and linker options. To identify the largest programs, try using a command such as this:

ls -s1 /bin /usr/bin /usr/bin/X11 | sort -n | tail

Strictly speaking this only finds the largest files, but file size is usually a good indication of the memory requirements of a program.

The most common shell under Linux is GNU BASH. While very functional, it is also quite large. You can save memory by using a smaller shell such as the Korn shell (usually called ksh or pdksh).
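If ksh is installed on your system, changing your login shell takes a single command (the path below is an assumption; it must match an entry in /etc/shells):

  chsh -s /bin/ksh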

The emacs editor is also big; you could use a smaller editor such as vi, jove, or even ed instead.

______________________

Comments


Buffer+Cache grows with time and is not freed when needed


This is with regard to a piece of telecom hardware that is supposed to support some number of calls/UEs. It has a Cavium Octeon processor and runs an SMP Linux from Wind River Systems.
Now, when I admit a new call, the used memory grows (okay); after some 50 UEs are introduced, the system is left with only about 40MB free, and the buffer and cache portion has grown enormously.
Time (IST, 2010)    total   used   free   shared   buffers   cached
15:28:40             1383    702    680        0         0      141
15:45:04             1383    965    417        0         0      328
16:06:31             1383   1341     41        0         2      693

Isn't the Linux kernel supposed to free the buffer/cache portion when new applications need the memory?
Please, someone help me here. If any other info is needed, please let me know.

-Vikash

Need to tune memory


Cache memory is filling up very often, and the server is getting into a crashing state. Nothing is written to the swap space.

I need ideas for tuning the memory.

RAM 64G
swap 2 G

...


Thank you...

...I'm not just a "troll", but also a subscriber!

Nice article. It will be


Nice article. It would be great if you also explained the difference between cache and buffers...

-bash-3.2$ free -m
             total       used       free     shared    buffers     cached
Mem:         15364       6738       8625          0        211       4011
-/+ buffers/cache:        2514      12849
Swap:        12001          0      12001

Regards
Manish

It's really nice to see the


It's really nice to see that an article written in 1994 still helps us, though things have changed a bit since then.

Regarding your query,
The first row, labeled Mem, displays physical memory utilization, including the amount of memory allocated to buffers and caches. A buffer, also called buffer memory, is usually defined as a portion of memory set aside as a temporary holding place for data being sent to or received from an external device, such as a hard disk, keyboard, printer, or network.

The second line of data, which begins with -/+ buffers/cache, shows the amount of physical memory currently devoted to the system buffer cache. This is particularly meaningful for application programs, as all data read from or written to files on the system through the read() and write() system calls passes through this cache. The cache can greatly speed up access to data by reducing or eliminating the need to read from or write to the hard disk or other storage.

The third row, which begins with Swap, shows the total swap space as well as how much of it is currently in use and how much is still available.
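In other words, memory held by buffers and cache is available to applications on demand. Assuming the classic procps column layout of free shown above, you can recompute the -/+ buffers/cache row yourself:

  free -m | awk '/^Mem:/ { print $3-$6-$7, $4+$6+$7 }'

The two numbers printed agree with the -/+ buffers/cache line, up to megabyte rounding.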

CREDIT: http://www.linfo.org/free.html

- then again...


...it turns out that it was the Linux kernel itself that was to blame:

http://www.intellinuxwireless.org/bugzilla/show_bug.cgi?id=1716

let's be careful out there... :)

things are not so clear-cut...


Nice article; unfortunately, like the dozens of similar ones one can google, it implies that the 'seeker's' lack of knowledge of what is discussed above is all that's at issue. IMO, this is not (always!) the case, as some (many?) programs leak memory like a sieve! For instance, at this moment my Debian Lenny KDE ProDuo 1GB laptop, up for 6-8 hours but with all apps closed, produces the following memory footprint; again, there's NOTHING running apart from the desktop:

omelette@debian:~$ free
             total       used       free     shared    buffers     cached
Mem:       1034636     956996      77640          0       3944     153828
-/+ buffers/cache:      799224     235412
Swap:      1052216      84856     967360

One reboot later, here are the results:

omelette@debian:~$ free
             total       used       free     shared    buffers     cached
Mem:       1034636     167316     867320          0      10788      88568
-/+ buffers/cache:       67960     966676
Swap:      1052216          0    1052216

Some difference! Clearly, 400-500MB of memory is not being released by apps, and Linux seems completely oblivious to this! Note that running any decent-sized app in this out-of-memory state sees swap grow proportionally, so Linux definitely believes it has run out of memory. And this is not down to Debian or a specific kernel either; I have just transferred over from Ubuntu, which performed abysmally as well (far worse, it appears, though this may be because GNOME uses more resources than KDE, so it's more noticeable). Ironically, I tested Debian extensively for one day on an 'old' Athlon-based computer, where it performed magnificently, before putting it on the laptop as well. I'm now starting to wonder whether it might have something to do with dual processors...

BTW, I'm not out to bash Linux here; I have Ubuntu running continuously as a server of sorts on a really old 700MHz Celeron without problems. Anyway, as 'top' is just about useless for resolving this, I've just installed 'valgrind' in the hope that it will somehow shed some light on the problem (if I can ever figure out how to use it...)

Very useful


Nice article. I was wondering why free memory was so low on my system and why so much memory was shown in buffers. This article cleared it up.

The total of free is always


The total shown by free is always less than the amount of physical memory (RAM) actually present in the machine. Why is that?

You said the “total” memory is the amount available after loading the kernel, so is the difference the space occupied by the kernel and its data structures, such as the page tables?

Nice article but would be


Nice article, but it would be good if you explained how to do some of these things, particularly how to stop processes from running at startup, like reducing the number of gettys, etc.
