Protecting Your Site with Access Controls

Portions of your web site can be kept secure using user name, password combinations.

Protecting Directories

Now that we have a list of user names and passwords in the correct format, we can use that list to protect the directories on our server. Each directory can use a different file containing user names and passwords—so your “top-secret” directory can have a different list of users than your “secret” directory.

There are two ways to protect files on your system. One is to put a file, called .htaccess by default, in the directory you wish to protect. This gives you the flexibility to modify individual directories quickly and easily and to give responsibility for different directories to the people in charge of those directories—but it also removes a certain element of central control.

We will thus look at the method in which access restrictions are defined in srm.conf, one of the Apache configuration files. Placing the access restrictions in srm.conf gives you centralized control of access to your server, but it also means you will have to restart the server each time you make changes.

Protected directories are declared in srm.conf within <Directory> and </Directory> statements with a relatively straightforward syntax. For instance, I added the following lines in this file to protect directories used in this article:

<Directory /home/httpd/html/private>
  AuthType Basic
  AuthName TestRealmName
  AuthUserFile /tmp/authusers
  require valid-user
</Directory>

The first and last lines confine these declarations to /home/httpd/html/private, the protected directory on my server. Someone requesting a file within /home/httpd/html (the document root on my web server) can do so without having to enter a user name or password. Someone trying to retrieve a file in /home/httpd/html/private (known as /private to the outside world), or in any subdirectory of /private, will have to enter a user name and password.

The user name, password pair is passed using the “basic” authentication scheme that we saw earlier, in which the user name and password are encoded using Base64 and sent as part of the HTTP headers following the request. Until browsers begin to support the “digest” method (or even more secure methods), all protected directories should declare the AuthType to be “Basic”.
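For example, if a hypothetical user “reuven” with the hypothetical password “secret” requests a document from the protected directory, the browser resends its request with an Authorization header containing the Base64-encoded string “reuven:secret”:

GET /private/index.html HTTP/1.0
Authorization: Basic cmV1dmVuOnNlY3JldA==

Note that Base64 is an encoding, not encryption; anyone able to read this header can trivially recover the user name and password.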

AuthName is a way of identifying this directory to the outside world. You might want to call the directory something meaningful, such as “Joe's private directory”, or “FYI”. You might use AuthName to distinguish between different protected sections of your web server, such as “private area” and “staff area”. AuthName is generally displayed in the dialog box into which a user can enter her user name and password.

Next, we indicate which password file should be used for this directory. As mentioned earlier, each directory can use a separate password file, so it is important to specify which one you wish to use. If you expect to use more than a few password files on your system, you might want to investigate the use of groups, which we will not address in depth here; groups allow you to associate each user in a single password file with one or more named groups, and then grant privileges to different subsets of those users.
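If you do decide to use groups, the declarations might look something like the following sketch, in which the group file /tmp/authgroups and the group name “staff” are hypothetical:

<Directory /home/httpd/html/private>
  AuthType Basic
  AuthName TestRealmName
  AuthUserFile /tmp/authusers
  AuthGroupFile /tmp/authgroups
  require group staff
</Directory>

The group file itself is a plain-text file in which each line names a group, followed by the users who belong to it:

staff: reuven reena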

Finally, we indicate that we will allow only valid users, meaning only those whose user names and passwords are in the password file named in AuthUserFile. You could also specify individual users who would be allowed into the site, such as:

require user reuven reena

Once you have placed this information in your server's srm.conf file, you need to tell the server to reread its configuration file. You can do this by shutting the server down and then restarting it or by sending it a HUP signal, as follows:

killall -v -1 httpd

This command sends a HUP signal (aka signal #1) to all instances of httpd currently running. Remember that Apache normally runs a number of servers simultaneously, so trying to identify individual processes and use the standard kill command is probably not a good way to go about it.
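If you would rather not signal every process named httpd, you can send the HUP signal to the parent server alone; Apache records its process ID in the file named by the PidFile directive, often /var/run/httpd.pid, although the exact location depends on your configuration:

kill -HUP `cat /var/run/httpd.pid`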

Once you have restarted the server, the protected directories are accessible only to someone whose user name and password appear in the password file named by the directory's AuthUserFile directive. If you want to test the protection mechanism, using TELNET (as described above) to pretend to be a web browser might be the best way to do it, since it avoids the browser's cache of user names and passwords.

Using This Information in CGI Programs

Just as you can protect directories containing HTML files and pictures, you can also protect directories containing CGI programs. For instance, if you want to make certain CGI programs accessible only to a select group of users, you can define /cgi-bin/private in the same way as you did /private.

Here, for example, is the definition that I added to srm.conf in order to protect /cgi-bin/private:

<Directory /home/httpd/cgi-bin/private>
  AuthType Basic
  AuthName TestRealmName
  AuthUserFile /tmp/authusers
  require valid-user
</Directory>

As you can see, the definition is identical to that for /private, except for the name of the directory.

In this case, we will be asked for a user name, password combination if we try to execute a CGI program in this directory, using either GET or POST. (Apache allows you to set a separate access privilege for each method, so you could allow all users to GET but a restricted group to POST and still others to PUT and DELETE.) Before the request will actually be sent to the CGI program in question, we will have to authenticate ourselves.
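Such per-method restrictions are declared with the <Limit> directive. The following variation on the definition above is only a sketch, but it would let anyone issue a GET to /cgi-bin/private while requiring a valid user name and password for POST, PUT and DELETE:

<Directory /home/httpd/cgi-bin/private>
  AuthType Basic
  AuthName TestRealmName
  AuthUserFile /tmp/authusers
  <Limit POST PUT DELETE>
    require valid-user
  </Limit>
</Directory>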

One of the nice benefits of protecting CGI directories is that all programs in that directory immediately have access to a new environment variable, REMOTE_USER, which contains the name of the user in question. This is available to CGI programs written in Perl and using CGI.pm via the remote_user method, but all programs can retrieve the value of the environment variable.
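For instance, a minimal CGI program along these lines (a sketch only, written in Perl with CGI.pm) might report the authenticated user name, falling back on the environment variable directly:

#!/usr/bin/perl -w
use strict;
use CGI;

# Retrieve the authenticated user name, if any
my $query = new CGI;
my $username = $query->remote_user || $ENV{REMOTE_USER} || "";

print $query->header("text/html");
if ($username) {
    print "<P>You are logged in as \"$username\".</P>\n";
}
else {
    print "<P>No value for REMOTE_USER is available.</P>\n";
}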

How can this be of use? Well, we know that the user name must be unique; no two users can share a user name. Thus, we can use the user name as a primary key (i.e., a unique index) into a table in a relational database containing more information about the user—his or her age, interests and last visit.

Indeed, over the last few months, this column has looked at a variety of techniques for keeping track of information about users, most often by setting an HTTP cookie on the user's computer and setting a primary key value in the cookie.

The advantage of this system is that the user must verify his or her identity before being allowed to access the program—meaning that by the time the CGI program is executed, we can be sure the user name exists, is associated with a real user, and that the person at the browser is that user (or at least has access to the user's password). HTTP cookies, by contrast, operate on a per-computer basis; if someone were to use my computer while I am not looking, they could retrieve information from any of the private sites that have set cookies in my browser.

Another advantage of using this form of identification rather than cookies is that it gives the user mobility. No longer is the user tied to a particular computer or browser. While users must sign in before being allowed to use the site, they can access the site from anywhere rather than just from their computer at work or home.

There are disadvantages, too—the main one is the inherent insecurity associated with the basic authentication scheme. And some users prefer not to be bothered with having to enter their user name and password each time they visit a site. Such users would rather the site recognize and remember their settings automatically.

Listing 1 is a short CGI program written in Perl that identifies the user name entered. If this program is placed in an unprotected directory, it will indicate that no value for REMOTE_USER is available. If run from within a protected directory, however, it will return the user name that was used to access that directory.

If you were to create a table in a relational database (such as MySQL), you could define the primary key to be a user name of no more than eight characters. The value of remote_user could then be used as a reliable index into the database.
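To give a sense of how this might be used, here is a sketch in Perl using the DBI module with the MySQL driver; the database name (“site”), the “users” table, its columns and the connection details are all hypothetical:

#!/usr/bin/perl -w
use strict;
use CGI;
use DBI;

my $query = new CGI;
my $username = $query->remote_user;

# Connect to a hypothetical MySQL database named "site"
my $dbh = DBI->connect("DBI:mysql:database=site", "dbuser", "dbpass",
                       {RaiseError => 1});

# Look up this visitor in a hypothetical "users" table keyed on username
my $sth = $dbh->prepare(
    "SELECT interests, last_visit FROM users WHERE username = ?");
$sth->execute($username);

print $query->header("text/html");
if (my $row = $sth->fetchrow_hashref)
{
    print "<P>Welcome back, $username! ";
    print "Your last visit was $row->{last_visit}.</P>\n";
}
else
{
    print "<P>Welcome, $username! This appears to be your first visit.</P>\n";
}

$dbh->disconnect;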

Protecting web sites is sure to be an increasingly important topic as the Web continues to mature. Apache is remarkably flexible when it comes to such security mechanisms. While I mentioned groups, there was not enough space to discuss additional options, such as restricting access by domain or IP address. See the Apache documentation for more information on this issue and the sidebar for additional sources.

While user name, password combinations are useful for restricting access to a web site, they can also be used to produce a unique key into a database. If you are thinking of creating a database to keep track of your users, you might want to consider using access controls to force users to log in.

Restricting access to directories on your web site is neither complicated nor difficult and lets you put sensitive or private materials on the Web without having to worry about someone discovering a secret URL.


Reuven M. Lerner is an Internet and Web consultant living in Haifa, Israel, who has been using the Web since early 1993. In his spare time, he cooks, reads and volunteers with educational projects in his community. You can reach him at reuven@netvision.net.il.
