The Quick Road to an Intranet Web Server
Apache has many wonderful configuration options available. To someone who has never managed a web server, the list of options might seem quite daunting. However, it is possible to employ a simple cookbook method to get your web server up and running in short order. The following values will produce a working Intranet server which should perform marvelously in most organizations. Of course, if you have special security requirements, a review of the Apache documentation is quite helpful.
Any Linux distribution with an easy installation of Apache will likely have a reasonable set of parameters already enabled. Even if your web server is live seconds after installation, you may want to review the configuration files just to find out what features have been enabled and disabled by default.
The Apache configuration generally has four files of note. In Red Hat, these are found in /etc/httpd/conf/. In Debian, they are located in /etc/apache/. If you built Apache from the sources as described above, they should be in /usr/local/etc/apache/conf/. The four files are access.conf, httpd.conf, srm.conf and mime.types.
If the files are not found in your kit, you should be able to locate the same group of files with names ending in .conf-dist. Simply copy each of these files to the appropriate location (e.g., /etc/httpd/conf/ if you are using Red Hat), dropping the -dist suffix so that each name ends in .conf, and edit the files so that they contain the parameters described below. Note that the Apache-supplied configuration files contain useful hints not shown here for the sake of space, as well as many additional parameters not discussed here. So, it is best to edit the files to contain the following parameters, rather than attempting to construct new configuration files from scratch.
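The copy step can be scripted. This sketch uses a sandbox directory under /tmp as a stand-in; on a Red Hat system the real target would be /etc/httpd/conf/ (or /etc/apache/ on Debian):

```shell
# Sandbox stand-in for the real configuration directory
# (/etc/httpd/conf on Red Hat, /etc/apache on Debian).
conf=/tmp/httpd-conf
mkdir -p "$conf"
touch "$conf/access.conf-dist" "$conf/httpd.conf-dist" "$conf/srm.conf-dist"

# Copy each distributed template into place under its live .conf name:
for f in access.conf httpd.conf srm.conf; do
    cp "$conf/$f-dist" "$conf/$f"
done
```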
Of the four files, mime.types is the least likely to require modifications. It is a table that associates file extensions with the MIME types the server reports. For example, the table will say that a file ending with .gz should generate a MIME header of application/x-gzip.
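A few representative lines from mime.types illustrate the format: a MIME type followed by the extension (or extensions) mapped to it:

```
application/x-gzip    gz
text/html             html htm
image/gif             gif
```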
The most likely candidate for parameter adjustment is the access configuration definition. In access.conf, try using the following values:
<Directory /home/httpd/html>
Options Indexes Includes ExecCGI FollowSymLinks
AllowOverride None
order allow,deny
allow from all
</Directory>
In this example taken from a modified Red Hat file, this entry says that the main HTML files for this server will be stored in /home/httpd/html. The options include:
Indexes: if the user specifies a URL that points to a directory name rather than a file name and that directory does not contain an index file (such as index.html), Apache will display a list of the files contained in the specified directory. If you don't want this behavior, simply omit the “Indexes” keyword from the “Options” line.
Includes: this enables server-side includes (SSI), allowing documents to embed directives that pull in other files or command output when the page is served.
ExecCGI: if the URL specified is actually a CGI script, this will allow that script to execute. If you are not interested in CGI scripts, don't include the keyword.
FollowSymLinks: Let's say that I've created a symbolic link to my CD drive in this directory. This keyword will instruct Apache to allow access to that CD as if it were a subdirectory of this directory. This facility can be quite handy if you wish to serve up the contents of a CD or allow access to the HOWTOs which normally reside in a tree such as /usr/doc. One quick symbolic link, and you can allow Apache to serve those files without moving them or creating individual symbolic links. But remember to create symbolic links with care, or else you might find that your web site is serving up documents you never intended to be universally available.
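The symbolic-link trick looks like this in practice. The sketch below uses stand-in directories under /tmp; on a real server the link target would be the HOWTO tree (e.g., /usr/doc/HOWTO) and the link itself would live under your DocumentRoot:

```shell
# Stand-ins for /usr/doc/HOWTO and the DocumentRoot:
mkdir -p /tmp/usr-doc/HOWTO /tmp/docroot
echo "Sample HOWTO" > /tmp/usr-doc/HOWTO/Sample-HOWTO

# One symbolic link exposes the whole tree to Apache:
ln -sf /tmp/usr-doc/HOWTO /tmp/docroot/howto

# With FollowSymLinks enabled, the file is now reachable via the web root:
cat /tmp/docroot/howto/Sample-HOWTO
```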
The CGI directory entry will generally look more minimal, like:
<Directory /home/httpd/cgi-bin>
AllowOverride None
Options None
</Directory>
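As a quick sanity check of the CGI setup, here is a minimal CGI script (the name hello.cgi is hypothetical; on a real server it would live in /home/httpd/cgi-bin and be marked executable). A CGI program must print a Content-type header, then a blank line, then the body:

```shell
# Create a minimal CGI script in /tmp for testing; on a real server,
# place it in the ScriptAlias directory (e.g., /home/httpd/cgi-bin).
cat > /tmp/hello.cgi <<'EOF'
#!/bin/sh
# CGI output: a header, a mandatory blank line, then the body.
echo "Content-type: text/plain"
echo
echo "Hello from Apache CGI"
EOF
chmod +x /tmp/hello.cgi

# Run it by hand to see the output Apache would relay to the browser:
/tmp/hello.cgi
```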
The parameters in httpd.conf that define the operation of the Apache daemon itself are as follows:
HostnameLookups on
If HostnameLookups is “on”, the server will use DNS to try to determine the user's host name for logging purposes.
User nobody
Group nobody
User and Group determine the access privileges of the remote user. The server will act as if it were a job created by the specified user and group (in this case, “nobody”).
ServerAdmin root@localhost
ServerAdmin sets the user name and host to receive mail messages that might be generated by the daemon.
ServerRoot /etc/httpd
ServerRoot sets the base directory for configuration and log files.
ErrorLog logs/error_log
TransferLog logs/access_log
RefererLog logs/referer_log
AgentLog logs/agent_log
These parameters specify the name and location of various log files. The directories specified are relative to the ServerRoot directory. Once you've run Apache for a while, examine these log files. In them, you'll find information about which pages were accessed when and by whom. You'll even find information about the page that called your page and the type of browser the user was employing to look at your site.
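Once the logs have accumulated some entries, even a one-line awk script can summarize them. This sketch counts hits per client address against a stand-in access_log; on a live server you would point awk at logs/access_log under ServerRoot:

```shell
# Build a small stand-in log in Common Log Format (field 1 is the client):
cat > /tmp/access_log <<'EOF'
10.0.0.5 - - [22/May/1999:10:00:01] "GET /index.html HTTP/1.0" 200 1043
10.0.0.5 - - [22/May/1999:10:00:02] "GET /icons/logo.gif HTTP/1.0" 200 512
10.0.0.9 - - [22/May/1999:10:00:07] "GET /index.html HTTP/1.0" 200 1043
EOF

# Count hits per client address:
awk '{count[$1]++} END {for (host in count) print host, count[host]}' /tmp/access_log
```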
MinSpareServers 5
MaxSpareServers 10
StartServers 5
MaxClients 150
These parameters deal with the fact that Apache generates child processes before they are needed in order to deal with any sudden increase in incoming traffic. MinSpareServers and MaxSpareServers specify the minimum and maximum number of idle children which should exist at any point in time. StartServers sets the number of children launched when the daemon starts, and MaxClients caps the total number of simultaneous children. Using the command ps ax will reveal the multiple children which the Apache daemon is currently using.
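You can count those children from the shell. The daemon may be named "apache" rather than "httpd" on some distributions, and the count will simply be zero if the server is not running:

```shell
# Count the httpd processes currently running (parent plus children).
# The bracketed pattern keeps grep from matching its own command line;
# "|| true" keeps the pipeline from failing when no httpd is running.
n=$(ps ax | grep -c '[h]ttpd' || true)
echo "Apache processes: $n"
```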
The file srm.conf contains many different items. It deals largely with the name space of the server as well as how the server responds to requests. Of special interest are the following lines:
DocumentRoot /home/httpd/html
Alias /icons/ /home/httpd/icons/
ScriptAlias /cgi-bin/ /home/httpd/cgi-bin/
UserDir public_html
DocumentRoot specifies the top of the directory tree that contains the pages to be served. Alias allows the specified directory to be accessed by a pseudonym, even though it is not under the DocumentRoot tree. ScriptAlias is like Alias except that the directory will contain CGI scripts. UserDir specifies the user's subdirectory to look in for any URL that uses a /~username/ specification. For example, if the user's default directory is normally /home/username/, then the user's default HTML directory will be /home/username/public_html/.
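The UserDir translation can be traced by hand. This sketch mimics the mapping for a hypothetical user named alice, with UserDir set to public_html as above:

```shell
# Hypothetical request for a per-user page:
user=alice
request="/~${user}/index.html"

# Apache replaces the /~user/ prefix with the user's home directory
# plus the UserDir setting (public_html here):
mapped="/home/${user}/public_html/index.html"
echo "$request -> $mapped"
```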