Tweaking Tux, Part 3

by Marcel Gagné

Last time around, we looked at the uptime command (among other things) and what it tells you about what your system is doing right now. In particular, uptime reports the load average, the average number of processes waiting to run over the last one, five and fifteen minutes. As a refresher of what that information looks like, allow me to post one reader's uptime stats.

   7:47pm  up 458 days, 13:07,  2 users,  load average: 0.00, 0.00, 0.00

Not bad at all in terms of days without a reboot. As far as what this system is currently doing, the load average is quite low, and we can pretty much surmise that our CPU is bored much of the time (certainly at 7:47pm).

For a more comprehensive peek into what your system is doing, try the top command. The first thing you'll notice is that the load average numbers from the uptime command are also part of the top display. Running top delivers far more information than the uptime command does by itself, including the actual number of processes, the distribution of work between system and user processes, memory utilization, and much more.

To run top, simply type top. Here's a little sample output from one of my systems.

    1:39pm  up 127 days,  6:20,  4 users,  load average: 0.01, 0.01, 0.03
   43 processes: 42 sleeping, 1 running, 0 zombie, 0 stopped
   CPU states:  0.1% user,  0.1% system,  0.0% nice, 99.6% idle
   Mem:   61864K av,  57876K used,   3988K free,  33484K shrd,   1836K buff
   Swap: 136040K av,   4496K used, 131544K free                 32752K cached

     PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
   24382 root      19   0  1000 1000   824 R       0  0.3  1.6   0:00 top
       1 root       0   0   104   52    36 S       0  0.0  0.0   0:05 init
       2 root       0   0     0    0     0 SW      0  0.0  0.0   3:39 kflushd
       3 root       0   0     0    0     0 SW      0  0.0  0.0   0:00 kpiod
       4 root       0   0     0    0     0 SW      0  0.0  0.0   1:22 kswapd

Notice the load average numbers there as well. From this interactive screen, you can kill (send signals to) processes (hit "k" at any time and enter the PID and signal), renice processes that don't need the system's full attention (by hitting "r"), change the sort order of the fields (hit "o"), and much more. If you are curious about the various combinations, try hitting "h" for help while top is running. One of the cool things I like to do is run top in its own window, where it reports real-time activity, letting me get a feel for just what is eating a system's resources. If you are going to do this and you have a habit of walking away from your terminal without locking your screen -- say you are running the program as root (we're not going there today) -- you might want to start top like this.

     top -s

This starts top in secure mode. If I try to hit "k" now, I get a nice "Can't kill in secure mode", sort of like leaving the safety ON.
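While we have top running in its own window, one more trick: you can slow down the refresh rate so that top itself takes less out of the system it is watching. Here's a minimal sketch, assuming your copy of top accepts the usual -d (delay, in seconds) switch alongside -s; check "man top" on your own system to be sure.

     top -s -d 10

Ten seconds between updates is plenty for keeping an eye on things, and it keeps top from working quite so hard on its own behalf.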

The only real catch with top is that it has a fairly big footprint in memory, CPU and so on, which means you have to take that into consideration. If you run top on your system, you'll notice that it spends a fair amount of time as the top process in terms of its demands on the system. This is why I still find myself using small, lightweight command line tools like free.

   # free
                total       used       free     shared    buffers     cached
   Mem:         63004      60768       2236      24448       3720      19956
   -/+ buffers/cache:      37092      25912
   Swap:       128480      13996     114484

Free reports on memory, both real and swap. You get a snapshot of how real memory is being used: memory shared between programs (shared), buffers used by the kernel (buffers) and memory holding cached copies of data read from disk (cached). The "-/+ buffers/cache" line shows used and free memory with the buffers and cache taken out of the equation; since the kernel will happily give that memory back to programs that need it, those two numbers are a better picture of what is actually available.
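If you want to see where that "-/+" line comes from, you can check the arithmetic against the Mem line in the output above:

   used minus buffers and cached:  60768 - 3720 - 19956 = 37092
   free plus buffers and cached:    2236 + 3720 + 19956 = 25912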

There are plenty of other tools for determining how your system is spending its time, and we will cover them at a later time. But first . . .

Let's talk about your disks, shall we? Are they feeling a bit sluggish? Listless? Do you need to give them a bit of a pep talk to get them running instead of walking for your data? Well, ladies and gentlemen, we here at the SysAdmin's Corner (there are more than just one of me?) may have just what you need. Hidden away on your Linux system is a little command called hdparm. This is a command line utility with a fair amount of flexibility and power. With it, you can modify certain I/O-related parameters on your hard drives, which can lead to substantial changes in disk access performance. Now, before I go into what it can and can't do, allow me to repeat the tweaker's weasel words.

<weasel words> Be very careful when doing any kind of OS level tweaking. Some parameters can actually decrease your system performance rather than help it. If you are experimenting with live data, then always have a backup. This is fun, but there is an element of risk. </weasel words>

Ah, good. I feel much better. On to the fun stuff.

We should probably start by having a look at the hard drive before we go changing anything. Using the -i flag, hdparm will return a little information about the type of drive we have and some of the basic capabilities. Here's what happens when I run it on one of my Linux machines here in the office (devsys1).

[root@devsys1 /root]# hdparm -i /dev/hda

   /dev/hda:

    Model=QUANTUM BIGFOOT TS12.7A, FwRev=A21.0G00, SerialNo=38190552
    Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
    RawCHS=24876/16/63, TrkSize=32256, SectSize=21298, ECCbytes=4
    BuffType=3(DualPortCache), BuffSize=418kB, MaxMultSect=16, MultSect=off
    DblWordIO=no, maxPIO=2(fast), DMA=yes, maxDMA=2(fast)
    CurCHS=1658/240/63, CurSects=-1656749698, LBA=yes, LBAsects=25075008
    tDMA={min:120,rec:120}, DMA modes: sword0 sword1 sword2 mword0 mword1 mword2
    IORDY=on/off, tPIO={min:120,w/IORDY:120}, PIO modes: mode3 mode4

There's a lot of good stuff here. For instance, take note of that DMA information. This will come in handy as we start playing with the settings. To find out whether anything I do will have any benefit, it is a good idea to get a baseline reading of what kind of access I have beforehand. There are two hdparm flags that deliver this type of information. The first, -t, will provide a kind of benchmark report of a physical read of sequential data on the disk. The second parameter is -T and reports on cached buffer reads; in essence, this involves no real read of physical data but, rather, is more of a performance report of your processor, memory, etc. For simplicity, you can use the two parameters in conjunction with each other; hdparm will take this into consideration and make some corrections. The hdparm documentation suggests running this test a few times to get a good average reading. In this example, I will only show you the report of one such run (they were all extremely close).

   [root@devsys1 /root]# /sbin/hdparm -Tt /dev/hda

   /dev/hda:
    Timing buffer-cache reads:   64 MB in  0.74 seconds =86.49 MB/sec
    Timing buffered disk reads:  32 MB in 10.54 seconds = 3.04 MB/sec
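By the way, if you would rather run the test a few times in a row (as the documentation suggests) instead of retyping the command, a tiny shell loop will do it. A minimal sketch; there is nothing magic about three passes:

   for pass in 1 2 3
   do
       /sbin/hdparm -Tt /dev/hda
   done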

Keep those numbers in mind because the change will be quite dramatic. By default, data transfer from your disk happens in 16-bit chunks. On an IDE or EIDE drive, this is what the hardware does, but by the time it hits the controller, the data could travel across your system's bus in 32-bit chunks. To find out whether 32-bit I/O support is enabled, use the -c parameter.

   [root@devsys1 /root]# /sbin/hdparm -c /dev/hda

   /dev/hda:
    I/O support  =  0 (default 16-bit)

As you can see, we are running bare 16-bit. Let's change that with the -c3 switch to hdparm. The "3" tells the program to turn on 32-bit I/O with "sync".

   # /sbin/hdparm -c3 /dev/hda

   /dev/hda:
    setting 32-bit I/O support flag to 3
    I/O support  =  3 (32-bit w/sync)

When we run hdparm with the -Tt flags again, we get a set of numbers that is starting to look quite a bit more interesting than the first run-through.

   # /sbin/hdparm -Tt /dev/hda

   /dev/hda:
    Timing buffer-cache reads:   64 MB in  0.65 seconds =98.46 MB/sec
    Timing buffered disk reads:  32 MB in  6.23 seconds = 5.14 MB/sec

Notice that while the buffer cache reads (the reads from memory) have not changed in any great way, our disk reads have changed quite dramatically. Since disk access is generally among the slowest of random access operations on your system (not counting CD-ROM or floppy), this is starting to look very interesting <last two words stretched out with silly comedy voice>.

So 32MB of disk reads went from 10.54 seconds to 6.23 seconds. I suppose I should be happy with that, but can we do better? There is one other parameter you might want to consider. Whether you can use it should be evident from the -i (information) hdparm run earlier on. You'll notice that in my example, I have a DMA=yes reading followed by some additional DMA information: DMA modes: sword0 sword1 sword2 mword0 mword1 mword2. This tells me that my disk supports DMA, or Direct Memory Access. Essentially, this means the drive can transfer data directly to system memory; the processor does not have to be involved in the operation. To turn on DMA access, you use the -d1 parameter (-d0 means off).
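Incidentally, just as -c on its own reported on 32-bit I/O earlier, passing -d with no value simply reports the current DMA setting, which makes a handy sanity check before you change anything:

   # /sbin/hdparm -d /dev/hda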

   # /sbin/hdparm -d1 /dev/hda

   /dev/hda:
    setting using_dma to 1 (on)
    using_dma    =  1 (on)

All right.  Now, let's check that little benchmarking result again, shall we?

   /dev/hda:
    Timing buffer-cache reads:   64 MB in  0.65 seconds =98.46 MB/sec
    Timing buffered disk reads:  32 MB in  2.32 seconds =13.79 MB/sec

Wow! From an original 10.54 seconds to 6.23 seconds to 2.32 seconds. Some IDE/EIDE drives out there also support UDMA but, sadly, I cannot speak for those today.
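One last practical note: these settings live in the drive and driver, not on disk, so they are forgotten at the next reboot. If you like your new numbers, you'll want to reapply them at boot time. Here's a minimal sketch, assuming a Red Hat-style startup script at /etc/rc.d/rc.local; the path and the flags themselves are my assumptions, so adjust both for your own distribution and drive.

   # reapply the disk tuning at each boot
   /sbin/hdparm -c3 -d1 /dev/hda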

Just before I wrap up yet another column, let me give you a parting /proc tidbit. You might have noticed that when you accidentally press "Ctrl-Alt-Del" on your Linux system (a hold-over from previous days of the three-finger salute), it starts a nice, orderly shutdown and reboot. Well, that response to the three-finger salute is controlled from /proc/sys/kernel in a pseudo-file called "ctrl-alt-del". The value is set to "0" by default, which means that Linux will, upon seeing the salute, hand the job to init and do that nice, clean reboot. When set to anything else, this value will cause Linux to reboot immediately, without syncing your disks or any other preamble, and it could be messy. The lesson here, I guess, is that some things are better left alone.
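Looking is harmless, though, and this is /proc, so you can peek at the current setting the same way you would any other entry there:

   # cat /proc/sys/kernel/ctrl-alt-del
   0

And yes, echoing a number back into that file as root is how you would change it, but you heard the lesson.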

Until next we meet, here on this most sunny of corners, give Tux a tweak. You both might enjoy it.
