Advanced “New” Labels
Last month, we looked at those pesky “new” labels that webmasters like to put on their sites. The intention is good: to point us toward documents we are unlikely to have seen before. In practice, the “new” labels are artificial, telling us when the document was last published rather than whether it is actually new to us.
The techniques we explored last month—server-side includes, CGI programs, and templates—were interesting, but inefficient and slow. This month, we will look at ways to speed up our performance by using mod_perl, the Perl module for the Apache server.
We have discussed mod_perl on previous occasions in this column, but it is worth giving a quick introduction for those of you who may have missed it. The Apache server is built of modules, each of which handles a different part of the software's functionality. One of the advantages of this architecture is that it allows webmasters to customize their copy of Apache, including or excluding modules as necessary. It also means programmers can add functionality to Apache by writing new modules.
One of the most popular modules is mod_perl, which puts a copy of the Perl language inside the Apache server. This provides functionality on a number of levels, including the ability to write configuration directives in Perl, or to make directives conditional on whether certain Perl code executes. More significantly, it allows us to write Perl modules that can modify Apache's behavior.
When I say “behavior”, I mean both the behavior users see, displaying documents and responding to HTTP requests, and that which takes place behind the scenes, ranging from the way authentication takes place to the way logging is done.
Each invocation of a CGI program requires a new process, as well as start-up time. By contrast, mod_perl turns your code into a subroutine of the Apache executable. Your code is loaded and compiled once, then saved for future invocations.
When we first think about what happens to an HTTP request when it is submitted to Apache, it seems relatively simple. The request is read by Apache, passed to the correct module or subroutine and returned to the user's browser in an HTTP response. In fact, each request must travel through many (over a dozen) different “handlers” before a response is generated and sent. mod_perl allows us to modify and enhance any or all of these handlers by attaching a Perl module to them. The handler most often modified is the content handler, configured with the PerlHandler directive. Other, more specific handlers have other names, such as PerlTransHandler (for URL-to-filename translation) and PerlLogHandler (for logging).
This month, we will look at a number of PerlHandlers that will make it possible to create truly useful “new” labels for our web sites.
The first PerlHandler we will define is rather simple: it puts a “new” label next to any link on a page. This is not a particularly difficult task or a good use of mod_perl. However, it does gently ease us into writing a Perl module for mod_perl, and it will form the basis for future versions we will write.
Our module begins much the same as any other module, declaring its own name space (Apache::TagNew, in this case), then importing several symbolic constants from the Apache::Constants package. The module defines a single subroutine, called “handler”. This is the conventional way to define a handler under mod_perl; that is, create a module with a “handler” subroutine, then tell Apache to use that module as a handler for a particular directory.
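Listing 1 is not reproduced here, but based on the description above, the skeleton of such a module might look like the following sketch. Only the package name (Apache::TagNew), the “handler” subroutine and the imported constants come from the text; everything else is illustrative:

```perl
package Apache::TagNew;

use strict;

# Import the symbolic return codes our handler will use
use Apache::Constants qw(OK DECLINED NOT_FOUND);

# Apache invokes this subroutine for each request in the configured
# directory, passing the Apache request object as its first argument.
sub handler {
    my $r = shift;

    # ... examine $r and produce a response ...

    return OK;
}

1;    # a Perl module must end by returning a true value
```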
We instruct Apache to invoke our handler in the configuration file httpd.conf. For example, my copy of httpd.conf says the following:
PerlModule Apache::TagNew
<Directory /usr/local/apache/share/htdocs/tag>
    SetHandler perl-script
    PerlHandler Apache::TagNew
</Directory>
The PerlModule directive tells Apache to load the Apache::TagNew module. The <Directory> section tells Apache that the /tag subdirectory of my HTML content tree should be treated specially, using the handler subroutine of Apache::TagNew instead of the default content handler. Once we activate our module by restarting our server (or by sending it a HUP signal), any file in the /tag directory will be handled by Apache::TagNew, rather than Apache's default handler.
The first thing handler must do is retrieve the Apache request object, traditionally called $r. This object is the key to everything in mod_perl, since it allows us to retrieve information about the HTTP request, the environment, and the server on which the program is running. We also use $r to send data back to the user's browser.
Our method is expected to return one of the symbols we imported from Apache::Constants. Returning OK means we successfully handled the query, data has been returned to the user's browser, and Apache should move to its next stage of handling the request. If we return DECLINED, Apache assumes our module did not handle the request and it should find some other handler willing to do the job. There are a variety of other symbols we can return, including NOT_FOUND, which indicates that the file was not found on our server.
In Apache::TagNew (see Listing 1), we normally return OK. We return NOT_FOUND if an error occurs when opening the file, and DECLINED if the file does not have a MIME type of “text/html”. Hyperlinks are going to appear only in HTML-formatted text files, so we can save everyone a bit of time and energy by letting another handler take care of other file types.
The rest of the handler works by reading the contents of the file, then replacing them with our new and improved version. We append a “new” label after every </a>, which comes after each hyperlink. In this way, every hyperlink is tagged as new.
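The substitution at the heart of the handler can be demonstrated outside of Apache. The following sketch uses a hypothetical HTML fragment and a plain <b>NEW!</b> label; Listing 1 may format the label differently:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hypothetical page fragment containing two hyperlinks
my $content = '<a href="/one.html">One</a> and <a href="/two.html">Two</a>';

# Append a "new" label after every closing </a> tag, case-insensitively
$content =~ s{</a>}{</a> <b>NEW!</b>}gi;

print "$content\n";
```

In the real handler, $content would hold the contents of the requested file, and the modified text would be sent to the user's browser via the request object rather than printed to standard output.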