Compressing Web Content with mod_gzip and mod_deflate
Reducing costs is a key consideration for every IT budget. One of the items looked at most closely is the cost of a company's bandwidth. Using content compression on a Web site is one way to reduce both bandwidth needs and cost. With that in mind, this article examines some of the compression modules available for Apache, specifically, mod_gzip for Apache 1.3.x and 2.0.x and mod_deflate for Apache 2.0.x.
Most compression algorithms, when applied to a plain-text file, can reduce its size by 70% or more, depending on the content of the file. The difference between standard and maximum compression levels is small, especially when you consider the extra CPU time needed for the additional compression passes. This matters most when compressing Web content dynamically, because the work is repeated for every request. Most software content compression techniques use a compression level of 6 (out of 9 levels) to conserve CPU cycles; the difference in file size between level 6 and level 9 usually is too small to be worth the extra processing time.
For files identified as text/.* MIME types, compression can be applied to the file prior to placing it on the wire. This simultaneously reduces the number of bytes transferred and improves performance. Testing also has shown that Microsoft Office, StarOffice/OpenOffice and PostScript files can be GZIP-encoded for transport by the compression modules.
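As a rough sketch of how this MIME-type targeting might look with mod_gzip on Apache 1.3.x, the following directives compress text/* responses while leaving already-compressed image formats alone. The patterns shown are illustrative assumptions for the example, not settings taken from the tests in this article:

# mod_gzip must already be loaded into Apache 1.3.x
mod_gzip_on Yes
# Compress anything served with a text/* MIME type
mod_gzip_item_include mime ^text/.*
# Skip images, which are already compressed
mod_gzip_item_exclude mime ^image/.*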
Prior to sending a compressed file to a client, the server must ensure that the client receiving the data understands the compressed format and can render it correctly. Browsers that understand compressed content send a variation of the following client request header:
Accept-Encoding: gzip, deflate
Current major browsers include some variation of this header with every request they send. If the server sees the header and chooses to provide compressed content, it should respond with the server response header:
Content-Encoding: gzip
This header tells the receiving browser to decompress the content and parse it as it normally would. Alternatively, the content may be passed to the appropriate helper application, based on the value of the Content-Type header.
The file size benefits of compressing content can be seen easily by looking at a couple of examples, one an HTML file (Table 1) and the other a PostScript file (Table 2). Performance improvements are examined later in this article.
mod_deflate for Apache versions 2.0.44 and earlier comes with the compression ratio set for best speed, not best compression. This configuration can be modified using the tips found at www.webcompression.org/mod_deflate-hack.php. Starting with Apache 2.0.45, the compression level can be set with the DeflateCompressionLevel configuration directive.
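As an illustrative mod_deflate configuration for Apache 2.0.45 or later, the following sketch enables compression for common text types and sets the level explicitly. The MIME types and the level shown are assumptions chosen for the example, not the settings used in the tests below:

# mod_deflate must be loaded (the module path may differ by distribution)
LoadModule deflate_module modules/mod_deflate.so
# Compress common text-based content types
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
# Available since Apache 2.0.45; level 6 costs far less CPU than level 9
DeflateCompressionLevel 6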
Table 1. /compress/homepage2.html
| Configuration                                        | Transferred size | Relative to original |
| No compression                                       | 56,380 bytes     | n/a                  |
| Apache 1.3.x/mod_gzip                                | 16,333 bytes     | 29%                  |
| Apache 2.0.x/mod_deflate (default, best speed)       | 19,898 bytes     | 35%                  |
| Apache 2.0.x/mod_deflate (set for best compression)  | 16,337 bytes     | 29%                  |