Protecting Your Site with Access Controls
Now that we have a list of user names and passwords in the correct format, we can use that list to protect the directories on our server. Each directory can use a different file containing user names and passwords—so your “top-secret” directory can have a different list of users than your “secret” directory.
There are two ways to protect files on your system. One is to put a file, called .htaccess by default, in the directory you wish to protect. This gives you the flexibility to modify individual directories quickly and easily and to give responsibility for different directories to the people in charge of those directories—but it also removes a certain element of central control.
We will thus look at the second method, in which access restrictions are defined in srm.conf, one of the Apache configuration files. Placing the access restrictions in srm.conf means you will have centralized control of access to your server, but you will have to restart the server each time you make changes.
Protected directories are declared in srm.conf within <Directory> and </Directory> statements with a relatively straightforward syntax. For instance, I added the following lines in this file to protect directories used in this article:
<Directory /home/httpd/html/private>
AuthType Basic
AuthName TestRealmName
AuthUserFile /tmp/authusers
require valid-user
</Directory>
The first and last lines confine these declarations to /home/httpd/html/private, the protected directory on my server. Someone requesting a file within /home/httpd/html (the root directory on my web server) can do so without having to enter a user name or password. Someone trying to retrieve a file in /home/httpd/html/private (known as /private to the outside world), or in any subdirectory of /private, will have to enter a user name and password.
The user name, password pair is passed using the “basic” authentication scheme that we saw earlier, in which the user name and password are encoded using Base64 and sent in an HTTP header along with the request. Until browsers begin to support the “digest” method (or even more secure methods), all protected directories should declare the AuthType to be “Basic”.
AuthName is a way of identifying this directory to the outside world. You might want to call the directory something meaningful, such as “Joe's private directory”, or “FYI”. You might use AuthName to distinguish between different protected sections of your web server, such as “private area” and “staff area”. AuthName is generally displayed in the dialog box into which a user can enter her user name and password.
Next, we indicate which password file should be used for this directory. As mentioned earlier, each directory can use a separate password file, so it is important to specify which one you wish to use. If you expect to use more than a few password files on your system, you might want to investigate the use of groups, which allow you to grant privileges to different subsets of users in a single password file. (Users can be placed in groups, which we will not address here, but which allow you to associate each user in the password file with one or more groups).
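A group-based setup might be sketched as follows (the directory, group file and group name here are hypothetical); the group file simply maps a group name to a list of user names that appear in the password file:

```
# In srm.conf (directory and group name invented for this sketch):
<Directory /home/httpd/html/staff>
AuthType Basic
AuthName StaffArea
AuthUserFile /tmp/authusers
AuthGroupFile /tmp/authgroups
require group staff
</Directory>
```

The group file named in AuthGroupFile would then contain a line such as “staff: reuven reena”.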
Finally, we indicate that we will allow only valid users, meaning only those whose user names and passwords are in the password file named in AuthUserFile. You could also specify individual users who would be allowed into the site, such as:
require user reuven reena
Once you have placed this information in your server's srm.conf file, you need to tell the server to reread its configuration file. You can do this by shutting the server down and then restarting it or by sending it a HUP signal, as follows:
killall -v -1 httpd

This command sends a HUP signal (aka signal #1) to all instances of httpd currently running. Remember that Apache normally runs a number of servers simultaneously, so trying to identify individual processes and use the standard kill command is probably not a good way to go about it.
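If your system lacks killall, you can get the same effect via the process ID that Apache records at startup; the PID file path below is a common default but is distribution-dependent:

```shell
# Confirm that signal 1 is indeed SIGHUP:
kill -l 1
# prints: HUP
# Apache writes its parent process ID to a file at startup, so you can
# signal just the parent, which re-reads the configuration:
#   kill -HUP `cat /var/run/httpd.pid`
```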
Once you have restarted the server, protected directories are accessible only to users whose user name and password appear in the associated password file. If you want to test the protection mechanism, the best way might be to use TELNET (as described above) to impersonate a web browser, since this sidesteps the browser's cache of passwords.
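Such a TELNET session might look like the following sketch (the host, path and credentials are illustrative only; the string after “Basic” is the Base64-encoded user name:password pair, and a blank line ends the request):

```
telnet www.example.com 80
GET /private/index.html HTTP/1.0
Authorization: Basic cmV1dmVuOnNlY3JldQ==

```

If the Authorization header is omitted or wrong, the server responds with a “401 Unauthorized” status rather than the document.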
Just as you can protect directories containing HTML files and pictures, you can also protect directories containing CGI programs. For instance, if you want to make a selected number of CGI programs accessible only to a select number of users, you can define /cgi-bin/private in the same way as you did /private.
Here, for example, is the definition that I added to srm.conf in order to protect /cgi-bin/private:
<Directory /home/httpd/cgi-bin/private>
AuthType Basic
AuthName TestRealmName
AuthUserFile /tmp/authusers
require valid-user
</Directory>
As you can see, the definition is identical to that for /private, except for the name of the directory.
In this case, we will be asked for a user name, password combination if we try to execute a CGI program in this directory, whether via GET or POST. (Apache allows you to set a separate access privilege for each method, so you could allow all users to GET, restrict POST to a smaller group, and reserve PUT and DELETE for still others.) Before the request is actually sent to the CGI program in question, we will have to authenticate ourselves.
One of the nice benefits of protecting CGI directories is that all programs in that directory immediately have access to a new environment variable, REMOTE_USER, which contains the name of the user in question. This is available to CGI programs written in Perl and using CGI.pm via the remote_user method, but all programs can retrieve the value of the environment variable.
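As a minimal sketch (in shell rather than Perl, to keep it self-contained), a CGI program in the protected directory could report the authenticated user like this:

```shell
#!/bin/sh
# Minimal CGI sketch: report the authenticated user.
# Apache sets REMOTE_USER only for requests that passed authentication,
# so in an unprotected directory the variable is empty.
echo "Content-type: text/plain"
echo ""
echo "Authenticated user: ${REMOTE_USER:-none}"
```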
How can this be of use? Well, we know that the user name must be unique; no two users can share a user name. Thus, we can use the user name as a primary key (i.e., a unique index) into a table in a relational database containing more information about the user—his or her age, interests and last visit.
Indeed, over the last few months, this column has looked at a variety of techniques for keeping track of information about users, most often by setting an HTTP cookie on the user's computer and setting a primary key value in the cookie.
The advantage of this system is that the user must verify his or her identity before being allowed to access the program—meaning that by the time the CGI program is executed, we can be sure that the user name exists, is associated with a real user and that this user represents that person (or has access to the user's password). HTTP cookies operate on a per-computer basis; if someone were to use my computer while I am not looking, they could retrieve information from all of the private sites from which I have retrieved cookies.
Another advantage of using this form of identification rather than cookies is that it gives the user mobility. No longer is the user tied to a particular computer or browser. While users must sign in before being allowed to use the site, they can access the site from anywhere rather than just from their computer at work or home.
There are disadvantages, too—the main one is the inherent insecurity associated with the basic authentication scheme. And some users prefer not to be bothered with having to enter their user name and password each time they visit a site. Such users would rather the site recognize and remember their settings automatically.
Listing 1 is a short CGI program written in Perl that identifies the user name entered. If this program is placed in an unprotected directory, it will indicate that no value for REMOTE_USER is available. If run from within a protected directory, however, it will return the user name that was used to access that directory.
If you were to create a table in a relational database (such as MySQL), you could define the primary key to be a user name of no more than eight characters. The value of remote_user could then be used as a reliable index into the database.
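In MySQL, such a table might be sketched as follows (the table and column names are invented for this example; only the primary key is dictated by the scheme described above):

```
CREATE TABLE users (
    username   CHAR(8) NOT NULL PRIMARY KEY,  -- matches REMOTE_USER
    age        INT,
    interests  VARCHAR(255),
    last_visit DATETIME
);
```

Because Apache guarantees that REMOTE_USER corresponds to an authenticated entry in the password file, lookups keyed on username cannot be spoofed the way a cookie value can.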
Protecting web sites is sure to be an increasingly important topic as the Web continues to mature. Apache is remarkably flexible when it comes to such security mechanisms. While I mentioned groups, there was not enough space to discuss additional options, such as restricting access by domain or IP address. See the Apache documentation for more information on this issue and the sidebar for additional sources.
While user name, password combinations are useful for restricting access to a web site, they can also be used to produce a unique key into a database. If you are thinking of creating a database to keep track of your users, you might want to consider using access controls to force users to log in.
Restricting access to directories on your web site is neither complicated nor difficult and lets you put sensitive or private materials on the Web without having to worry about someone discovering a secret URL.