Linux Swap Space

Swap space isn't important, is it? Swap space just slows you down—or does it? Discover some little-known facts about your operating system's virtual memory that may change the way you think about swap.

Most distributions collect historical performance data with the sysstat package, which gathers data over a four-minute time span every five minutes. You can analyze the resulting data using a utility called kSar (ksar.atomique.net) or at the command line. kSar has the advantage of making graphs that compare one metric to another.
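One common way to feed kSar (a sketch; check kSar's documentation for the exact import format it expects) is to dump a plain-text report of everything in a daily data file and load that file into kSar:

# LC_ALL=C sar -A -f /var/log/sa/sa02 > /tmp/sar02.txt

The LC_ALL=C setting forces a consistent locale for the output, and -A dumps all of the collected statistics.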

You can check your historical usage by running sar against one of these data collection files. For example, to view data from the 2nd of this month:

# sar -f /var/log/sa/sa02

The most direct measure of swapping is reported from sar -W:

# sar -W -f /var/log/sa/sa02
00:00:01     pswpin/s pswpout/s
00:10:01         0.00      0.00
...

The raw page numbers are a good start. If nonzero, you're doing some swapping. How much is acceptable is more difficult to judge. You'll need to examine the I/O on your swap partition. To do this, you may need to figure out the device major/minor numbers of your swap device. For example, if you swap to /dev/sdb1, use:

# ls -l /dev/sdb1
brw-r----- 1 root disk 8, 17 Dec  1 14:24 /dev/sdb1

to find that your device numbers are 8/17. Many versions of sar will report disk I/O by major/minor number. The command to look at the gathered disk data is:

# sar -d -f /var/log/sa/sa02
23:15:01     DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await   svctm   %util
23:15:01     dev8-17  0.00      0.00      0.00      0.00      0.00    0.00    0.00    0.00

If you have a lot of disks, use egrep to see only the header and device:

# sar -d -f /var/log/sa/sa02 | egrep '(DEV|dev8-17)'
...

Any swapping could be bad, but if avgrq-sz (the average size, in sectors, of the requests issued to the device) stays below 1 and await roughly matches svctm (the average service time, in milliseconds, for I/O requests issued to the device), you may be okay. Check other indicators of system performance as well, such as CPU time spent waiting on I/O and high system CPU time.
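Those CPU indicators are in the same daily file; sar -u reports %iowait and %system alongside user time (column names vary slightly between sysstat versions):

# sar -u -f /var/log/sa/sa02
00:00:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
...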

Figures 2 and 3 show some custom graphs generated with kSar that make it easy to see things more clearly.

Figure 2 is a graph comparing the amount of swap space used to the amount of time the CPU spent waiting on I/O. There are spikes of I/O wait, but they show no pattern that tracks the swap use. This system still deserves more attention, because there is an I/O bottleneck of some sort; it is probably underperforming, but swapping is unlikely to be the cause (in this case, the I/O waits came from heavy database request loads).

Figure 2. This graph of swap use vs. CPU I/O wait shows no real correlation between nonzero swap use and performance degradation.

Figure 3 shows a graph comparing writes to the swap device vs. I/O wait (over a daytime interval). It is a bit hard to see, but the line for writes to the swap device is flat at zero across the whole time span, which indicates that the I/O wait had nothing to do with swapping. Note that this does not mean no paging of any sort occurred: non-anonymous (file-backed) pages could still have been reclaimed and reloaded.

Figure 3. A Graph of I/O Writes vs. CPU I/O Wait

If you are lucky enough to have all of your data on one partition and all of your code on another, you can analyze this sort of paging by watching the reads on your “code” partition. The reads will indicate program loading at startup (which may be infrequent on your systems) and also paging due to memory pressure. If the reads correspond to large I/O waits, you may benefit from more RAM.
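As a sketch, assuming your binaries live on a partition that sar reports as dev8-2 (a hypothetical name; find the real one with ls -l on the device node, as shown earlier), you can pull just that device's read activity out of the daily file and compare rd_sec/s against the I/O wait numbers:

# sar -d -f /var/log/sa/sa02 | egrep '(DEV|dev8-2)'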

The final parameter I want to talk about can affect the responsiveness of certain loads. Have you ever left your system idle for a few hours, only to return and find that it takes a long time to start responding again? Here is what is probably happening: while you were gone, a cron job (such as updatedb) scanned the filesystem, and because you weren't using the computer, all of the code pages for your “inactive” programs were thrown out in favor of disk cache! The kernel parameter vm.swappiness controls the system's tendency to throw out code pages rather than other pages.

The default value of vm.swappiness is 60 on most systems. Changing this parameter to 0 will prevent the kernel from throwing out code pages for filesystem caching and will prevent your desktop from becoming zombie-like due to an unmonitored filesystem scan.
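For reference, a typical way to inspect and change the value on the fly, and to persist it across reboots (the /proc path and sysctl usage are standard; the sysctl.conf location may vary by distribution):

# cat /proc/sys/vm/swappiness
60
# sysctl -w vm.swappiness=0
vm.swappiness = 0
# echo 'vm.swappiness = 0' >> /etc/sysctl.conf    # apply at boot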

The default value is fine for production systems, because any services that are heavily used will be kept in memory by their own activity. This lets the file cache expand over anything that is truly unused and generally works better. It may be advantageous to lower this value if you are fairly certain that the memory used by your programs should stay resident (for example, you run a database that does most of its own caching). It might also help to raise the parameter to 100 on systems running a set of very active programs, since their pages will stay resident through use anyway.

As with any performance parameter, it helps to have some benchmarks that simulate your real workload so you can measure the effects.
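sar also can sample live, which is handy while a benchmark runs; interval and count are standard arguments. For example, to report swap-in/swap-out rates every five seconds for one minute:

# sar -W 5 12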

Tuning these parameters will lead to a virtual memory system that provides a more stable foundation, particularly for production systems running programs that consume large amounts of memory.

Tony Kay has been working with UNIX and Linux systems for more than 20 years. He lives in Bend, Oregon, where he manages large Linux systems and develops mobile applications.
