Tips for Optimizing Linux Memory Usage
Now that we have some measurement tools at our disposal, it's time to try to improve the memory situation. The first line of attack is before Linux even boots: your ROM BIOS setup program has some options that may increase the amount of memory available. Many systems can shadow the ROM address ranges in RAM, because RAM is faster than ROM. Unlike MS-DOS, however, Linux doesn't use the ROM BIOS routines once it is running, so disabling shadowing can free close to 200K of memory (if you still run MS-DOS occasionally, you may not want to do this).
Incidentally, now is also a good time to look at your other setup options and do some experimenting. You may be able to improve CPU performance with the options for enabling caching and setting the CPU clock speed. One way to gauge the effect is to use the BogoMIPS rating displayed when Linux boots as a rough indicator of CPU speed (this is not always accurate, though; as everyone knows, BogoMIPS are “bogus”). If you boot Linux from a hard disk, you may also be able to speed up reboots by disabling the floppy disk seek at bootup. Don't change too many settings at once, or you won't know which changes are having a positive effect. Be sure to write down your original settings in case you put your system in a state where it will no longer boot.
Are you still using the default kernel that came with your Linux installation? If so, shame on you! Kernel memory is special: unlike the memory pages used by processes, the kernel is never swapped out. If you can reduce the size of the kernel, you free up memory that can be used for running user programs (not to mention reducing kernel compile times and disk storage).
The idea here is to recompile the kernel with only the options and device drivers you need. The kernels shipped with Linux distributions typically have every possible driver and file system compiled in so that any system can boot from them. If you don't have network cards, a CD-ROM drive, SCSI devices, and so on, you can save considerable memory by leaving those drivers out. Besides, you can't really consider yourself a Linux hacker if you've never recompiled a customized kernel yourself.
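The exact steps vary with kernel version, but for kernels of this vintage a typical rebuild (assuming the source tree lives in /usr/src/linux) looks something like this:

cd /usr/src/linux
make config      # answer y or n as each driver and file system is offered
make dep         # rebuild the dependency information
make zImage      # compile a compressed kernel image

On x86 systems the new kernel usually ends up in arch/i386/boot/zImage, ready to be installed with LILO.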
If there are drivers that you only need occasionally, consider building several kernels and setting up LILO to let you choose an alternate kernel when booting. If you have a math coprocessor, you can also consider taking out the FPU emulation routines, and you can remove any Linux file systems that you do not require.
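A minimal /etc/lilo.conf sketch for two kernels might look like the following (the image paths and labels are assumptions; adjust them to your system, and rerun /sbin/lilo after editing):

# small everyday kernel (the default)
image = /vmlinuz
    label = linux
# kernel with the occasionally needed drivers
image = /vmlinuz-full
    label = full

Typing the label at the LILO boot: prompt then selects the corresponding kernel.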
More advanced Linux hackers might want to look at the “modules” facility, which allows device drivers to be loaded and unloaded dynamically without rebooting. Long available to kernel hackers, it has now become part of the standard kernel. It is particularly useful for devices such as tape drives that are needed only occasionally, for example during backups.
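As a sketch, driving it by hand might look like this (ftape is just an example module name; the drivers available depend on your kernel):

insmod ftape     # load the floppy tape driver before a backup
rmmod ftape      # unload it afterward, freeing the memory it used
lsmod            # list the modules currently loaded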
Finally, make sure you are running a recent kernel. Newer kernels are (in most cases) more stable, and they also include improvements in memory usage.
If you develop your own applications, or compile code you obtain from the Internet or bulletin board systems, using the right compile options can reduce the memory used. Turning on optimization generally produces code that is smaller and faster, and therefore needs less memory, although a few optimizations, such as function in-lining, can make the code larger. You should also check that your executables are dynamically linked and stripped of debugging information.
Which optimizations are best depends on the specific application and even on the compiler version; you may wish to experiment.
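As a sketch, building and checking a hypothetical program called myprog might look like this:

gcc -O2 -o myprog myprog.c    # optimize; usually smaller, faster code
strip myprog                  # remove symbol and debugging information
ldd myprog                    # confirm it is dynamically linked against shared libraries
size myprog                   # report the text, data, and bss segment sizes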
Once Linux is up and running with your new kernel, it's time to look at where the memory is going. Before you even log in, how many processes are running?
The bare minimum for a Linux system would typically be:
init (this starts all other processes)
update (this periodically writes the disk buffers to disk)
a single getty (which, via login, becomes your shell when you log in)
Run “top” and see what is running on your system. How many getty processes do you need? Do you really need all those other processes, such as lpd, klogd, syslogd, crond, and selection? On a standalone system, you don't need to run full networking software.
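A couple of quick checks help here (the inittab details vary between init packages, so treat the second command as an assumption):

ps ax                      # list every running process
grep getty /etc/inittab    # see how many gettys are started at boot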
If you are using an init package that supports multiple run levels, consider defining several different run levels. That way you could, for example, switch your system between full networking and standalone operation, freeing up resources when you don't need them.
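With a SysV-style init, switching is then a one-line affair; the level numbers here are assumptions that depend on how you define them in /etc/inittab:

telinit 3    # switch to the run level with full networking
telinit 2    # drop back to the standalone run level, stopping the network daemons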
You can also examine some of your larger executables to see if they were built with the appropriate compiler and linker options. To identify the largest programs, try using a command such as this:
ls -s1 /bin /usr/bin /usr/bin/X11 | sort -n | tail
Strictly speaking this only finds the largest files, but file size is usually a good indication of the memory requirements of a program.
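For any suspect binary, the file command gives a quick check (/usr/bin/emacs is just an example path):

file /usr/bin/emacs    # look for "dynamically linked" and "stripped" in the output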
The most common shell under Linux is GNU BASH. While very functional, it is also quite large. You can save memory by using a smaller shell such as the Korn shell (usually called ksh or pdksh).
The emacs editor is also big; you could use a smaller editor such as vi, jove, or even ed instead.