Hack and / - Linux Troubleshooting, Part I: High Load
This column is the first in a series dedicated to one of my favorite subjects: troubleshooting. I'm a systems administrator during the day, and although I enjoy many aspects of my job, it's hard to beat the adrenaline rush of tracking down a complex server problem when downtime is being measured in dollars. Although there are about as many causes of downtime as there are Linux text editors, and just as many approaches to troubleshooting, over the years I've found that I perform the same general steps to isolate a problem. Because my column is aimed more at tips and tricks and less at philosophy and design, I'm not going to talk much about overall approaches to problem solving. Instead, in this series I describe some general classes of problems you might find on a Linux system, and then I discuss how to use common tools, most of which probably are already on your system, to isolate and resolve each class of problem.
For this first column, I start with one of the most common problems you will run into on a Linux system. No, it's not getting printing to work. I'm talking about a sluggish server that might have high load. Before I explain how to diagnose and fix high load, though, let's take a step back and discuss what load means on a Linux machine and how to know when it's high.
When administrators mention high load, generally they are talking about the load average. When I diagnose why a server is slow, the first command I run when I log in to the system is uptime:
$ uptime
 18:30:35 up 365 days, 5:29, 2 users, load average: 1.37, 10.15, 8.10
As you can see, it's my server's uptime birthday today. You also can see that my load average is 1.37, 10.15, 8.10. These numbers represent my average system load during the last 1, 5 and 15 minutes, respectively. Technically speaking, the load average represents the average number of processes that had to wait for CPU time during the last 1, 5 or 15 minutes. For instance, if I have a current load of 0, the system is completely idle. If I have a load of 1, the CPU is busy enough that one process is having to wait for CPU time. If I have a load of 1 and then spawn another process that normally would tie up a CPU, my load should go to 2. The load average gives you a good idea of how consistently busy the system has been over the past 1, 5 and 15 minutes.
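The numbers uptime reports come straight from the kernel, so if you want just the load averages without the rest of the output, you can read /proc/loadavg directly (the sample values below mirror the uptime output above):

$ cat /proc/loadavg
1.37 10.15 8.10 2/189 28120

The first three fields are the 1-, 5- and 15-minute load averages. The fourth field is the number of currently runnable processes over the total number of processes on the system, and the fifth is the PID most recently assigned by the kernel.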
Another important thing to keep in mind when you look at a load average is that it isn't normalized according to the number of CPUs on your system. Generally speaking, a consistent load of 1 means one CPU on the system is tied up. In simplified terms, this means that a single-CPU system with a load of 1 is roughly as busy as a four-CPU system with a load of 4. So in my above example, let's assume that I have a single-CPU system. If I were to log in and see the above load average, I'd probably assume that the server had pretty high load (8.10) during the last 15 minutes that spiked around 5 minutes ago (10.15), but recently, at least during the last 1 minute, the load has dropped significantly. If I saw this, I might even assume that the real cause of the load has subsided. On the other hand, if the load averages were 20.68, 5.01, 1.03, I would conclude that the high load had likely started in the last 5 minutes and was getting worse.
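Of course, to interpret load this way, you need to know how many CPUs the machine actually has. Either of the following commands counts them on most modern Linux systems (the count of 4 here is just an example):

$ nproc
4
$ grep -c processor /proc/cpuinfo
4

A load average consistently at or above that number means every CPU on the system is being kept busy.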
After you understand what load average means, the next logical question is “What load average is good and what is bad?” The answer to that is “It depends.” You see, a lot of different things can cause load to be high, each of which affects performance differently. One server might have a load of 50 and still be pretty responsive, while another server might have a load of 10 and take forever to log in to. I've had servers with load averages in the hundreds that were certainly slow but didn't crash, and I've had one server that consistently ran at a load of 50, stayed pretty responsive and stayed up for years.
What really matters when you troubleshoot a system with high load is why the load is high. When you start to diagnose high load, you'll find that most of it falls into three categories: CPU-bound load, load caused by out-of-memory issues and I/O-bound load. Below, I explain each of these categories in detail and how to use tools like top and iostat to isolate the root cause.
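I get to iostat when I cover I/O-bound load, but as a quick preview: it ships in the sysstat package on most distributions, and a typical invocation samples CPU and per-disk statistics repeatedly at a fixed interval, for example every two seconds:

$ iostat 2

The first report it prints shows averages since boot; each subsequent report covers only the interval since the previous one. Press Ctrl-C to stop it.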
If the first tool I use when I log in to a sluggish system is uptime, the second is top. The great thing about top is that it's available for all major Linux systems, and it provides a lot of useful information in a single screen. top is a complex tool with enough options to warrant an article of its own; for this column, I stick to how to interpret its output to diagnose high load.
To use top, simply type top on the command line. By default, top will run in interactive mode and update its output every few seconds. Listing 1 shows sample top output from a terminal.
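If you don't have a busy system handy, top's output looks something like this (the processes and figures here are an illustrative sample, not the actual Listing 1):

top - 18:32:10 up 365 days,  5:31,  2 users,  load average: 1.37, 10.15, 8.10
Tasks: 172 total,   3 running, 169 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.0%us,  2.0%sy,  0.0%ni, 90.5%id,  1.2%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1024176k total,   997408k used,    26768k free,    85520k buffers
Swap:  1004052k total,     4360k used,   999692k free,   286040k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9463 mysql     16   0  686m 111m 3328 S   53  5.5 569:17.64 mysqld
18749 nagios    16   0  140m 134m 1868 S   12  6.6 345:01.10 nagios
24636 postfix   17   0 34712  10m  712 S    8  0.5  95:15.22 master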
Kyle Rankin is a director of engineering operations in the San Francisco Bay Area, the author of a number of books including DevOps Troubleshooting and The Official Ubuntu Server Book, and is a columnist for Linux Journal.