Book Excerpt: DevOps Troubleshooting: Linux Server Best Practices

This excerpt is from the book, 'DevOps Troubleshooting: Linux Server Best Practices' by Kyle Rankin, published by Pearson/Addison-Wesley Professional, ISBN 0321832043, Nov 2012, Copyright 2013 Pearson Education, Inc. For more info please visit: http://www.informit.com/store/devops-troubleshooting-linux-server-best-practices-9780321832047

Chapter 8: Is the Website Down? Tracking Down Web Server Problems

Get Web Server Statistics

Although there’s a fair amount of web server troubleshooting you can perform outside of the server itself, ultimately you will end up in a situation where you want to know this type of information: How many web server processes are currently serving requests? How many web server processes are idle? What are the busy processes doing right now? To pull data like this, you can enable a special server status page that gives you all sorts of useful server statistics.

Both Apache and Nginx provide a server status page. In the case of Apache, it requires that you enable a built-in module named status. How modules are enabled varies depending on your distribution; for example, on an Ubuntu server, you would type a2enmod status. On other distributions you may need to browse through the Apache configuration files and look for a commented-out section that loads the status module; it may look something like this:

LoadModule status_module /usr/lib/apache2/modules/mod_status.so
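
If you want to confirm that the module is actually enabled after making the change, a quick check from the shell looks something like the following. This is a minimal sketch assuming a Debian/Ubuntu-style system; on other distributions the commands may be apachectl or httpd instead of apache2ctl, and the reload command will differ:

# Debian/Ubuntu: enable the status module and reload Apache
sudo a2enmod status
sudo service apache2 reload

# verify that mod_status appears in the list of loaded modules;
# this should print a line such as "status_module (shared)"
apache2ctl -M | grep status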

After the module is loaded on Ubuntu systems, the server-status page is already configured for use by localhost. On other systems you may need to add configuration like the following to your Apache configuration:

ExtendedStatus On
<IfModule mod_status.c>
#
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Uncomment and change the ".example.com" to allow
# access from other hosts.
#
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from localhost ip6-localhost
#    Allow from .example.com
</Location>

</IfModule>

Note that in this configuration example, we have locked down who can access the page by denying access from all hosts and allowing it only from localhost. This is a safe default because you generally don’t want the world to be able to view this kind of debugging information. As you can see in the commented-out example, you can add additional Allow from statements to list IPs or hostnames that are allowed to view the page.
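The Order, Deny, and Allow directives above use the older Apache 2.2 access-control syntax. If your server runs Apache 2.4 or newer, the equivalent stanza would look something like the following sketch, where the Require directives from mod_authz_host replace Order/Deny/Allow (the IP range shown is just an illustrative example):

<Location /server-status>
    SetHandler server-status
    # Apache 2.4 syntax: only allow requests from the local machine
    Require local
    # Uncomment to grant access to other hosts or networks, for example:
    # Require ip 192.168.1.0/24
    # Require host .example.com
</Location>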

For Nginx, you would add a configuration like the following to your existing Nginx config. In this example, Nginx will only listen on localhost, but you could change this to allow other machines on your local network:

server {
    listen 127.0.0.1:80;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
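
Once Nginx has been reloaded, you can query the page from the machine itself. A quick check with curl might look something like this (the counters shown are only example values and will differ on your server):

$ curl http://127.0.0.1/nginx_status
Active connections: 2
server accepts handled requests
 16630948 16630948 31070465
Reading: 0 Writing: 1 Waiting: 1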

Once your configuration is set and you have reloaded the web server (and provided you have allowed your IP to view the page), open a web browser and access /server-status on your Apache server. For instance, if your web server were located at http://www.example.net, you would load http://www.example.net/server-status and see a page like the one shown in Figure 8-1.

Figure 8-1: Standard Apache status page

At the top of the status page, you will see general statistics about the web server including what version of Apache it is running and data about its uptime, overall traffic, and how many requests it is serving per second. Below that is a scoreboard that gives a nice at-a-glance overview of how busy your web server is, and below that is a table that provides data on the last request that each process served.

Although all of this data is useful in different troubleshooting circumstances, the scoreboard is particularly handy to quickly gauge the health of a server. Each spot in the scoreboard corresponds to a particular web server process, and the character that is used for that process gives you information about what that process is doing:

  • _ : Waiting for a connection

  • S : Starting up

  • R : Reading the request

  • W : Sending a reply

  • K : Staying open as a keepalive process so it can send multiple files

  • D : Performing a DNS lookup

  • C : Closing the connection

  • L : Logging

  • G : Gracefully finishing

  • I : Performing idle cleanup of worker

  • . : Open slot with no current process

Figure 8-1 shows a fairly idle web server with only one process in a K (keepalive) state and one process in a W (sending reply) state. If you are curious about what each of those processes was doing last, just scroll down the page to the table and find the process whose number matches its position in the scoreboard. So, for instance, the W process would be found as server 2-16. It’s not apparent in the screenshot, but that process was actually the response to the request for the server-status page itself. You will also notice a few _ (waiting for connection) processes in the scoreboard, which correspond to the number of processes Apache is configured to always have running to respond to new requests. The rest of the scoreboard is full of . characters, which symbolize slots where new processes could go, up to the MaxClients setting (the maximum number of processes Apache will spawn).
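If you would rather pull these numbers from a script or the command line than from a browser, mod_status also serves a machine-readable version of the page at /server-status?auto. A check from localhost might look something like the following sketch; the values here are only examples, and the exact fields depend on your Apache version and on whether ExtendedStatus is enabled:

$ curl -s http://localhost/server-status?auto
Total Accesses: 854
Total kBytes: 7184
Uptime: 43570
ReqPerSec: .0196007
BusyWorkers: 2
IdleWorkers: 5
Scoreboard: _W___K_...........................

The Scoreboard line uses the same characters as the graphical scoreboard above, so it is easy to count busy and idle processes from a monitoring script.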

______________________

Kyle Rankin is a systems architect and the author of DevOps Troubleshooting, The Official Ubuntu Server Book, Knoppix Hacks, Knoppix Pocket Reference, Linux Multimedia Hacks, and Ubuntu Hacks.
