Caching the Web, Part 2
If your cache is to be part of a cache mesh, or your proxy server is to be connected to another proxy that will be its parent, you must use the cache_peer directive (called cache_host in older Squid releases). You must include one line for each of your neighbors, specifying the following parameters:
hostname is the name of your neighbor.
type is one of parent or sibling.
http_port is the neighbor's port from which to fetch objects.
icp_port is the port to which ICP queries are sent. Use a value of 0 if your neighbor does not run ICP, or 7 if your neighbor runs the UDP echo service; querying the echo port helps Squid detect whether the host is alive.
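Putting these parameters together, the general form of the directive is (cache_peer from Squid 2 onward; older releases call it cache_host):

```
cache_peer hostname type http_port icp_port [options]
```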
If you have a stand-alone cache, you should not include any of these directives. If you have one parent that runs its HTTP port on 3128 and its ICP port on 3130, the line to include in the squid.conf file is:
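As a sketch, assuming a hypothetical parent named parent.example.com:

```
cache_peer parent.example.com parent 3128 3130
```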
With the cache_peer_domain directive, you can limit which neighbors are queried for specific domains. For example:
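A sketch with hypothetical neighbor hostnames (both would also need their own cache_peer lines):

```
cache_peer_domain cache1.example.com .com .edu
cache_peer_domain cache2.example.com .de .fr .uk
```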
will query the first cache only for the .COM and .EDU domains, and the second for some of the European domains.
If you have only one parent cache, the overhead of the ICP protocol is unnecessary. Since you will fetch all objects (HITs and MISSes) from the parent anyway, you can use the no_query option of the cache_peer directive to send only HTTP requests to that cache.
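For example (parent.example.com is a hypothetical hostname; note that an ICP port of 0 is given, and that recent Squid releases spell the option no-query):

```
cache_peer parent.example.com parent 3128 0 no_query
```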
Also, there are some domains you will always want to fetch directly rather than from your neighbors. Your own domain is a good example. Fetching objects belonging to your local web servers from a faraway cache is not efficient. In this case, use the always_direct acl command. For example, in our organization we use:
acl intranet dstdomain mec.es
always_direct allow intranet
to avoid getting our own objects from the national cache server.
Squid includes a simple, web-based interface called cachemgr.cgi to monitor the cache performance and provide useful statistics, such as:
- The amount of memory being used and how it is distributed
- The number of file descriptors in use
- The contents of the distinct caches it maintains (objects, DNS lookups, etc.)
- Traffic statistics for each client and neighbor
- The “Utilization” page, where you can check the percentage of HITs your cache is registering (and thus the bandwidth you are saving)
Be sure to copy the cachemgr.cgi program installed in /usr/local/squid/bin (or wherever you chose) to your standard CGI directory, and point your browser to http://your.cache.host/cgi-bin/cachemgr.cgi. There, type your cache host name, usually “localhost” or the name of your system, enter the port your cache is running on, usually 3128, and check all the options.
A proxy-cache server is a necessary service for almost any organization connected to the Internet. In this article, we have tried to show the whys and hows of implementing this technology, along with a brief tutorial on Squid, the most advanced and powerful tool for this purpose. Don't forget to read all the comments in the example configuration file. They are complete and useful, and they show many features not mentioned in this article.
Perhaps in a few years, with the growth of PUSH technology and the use of dynamic content on the Web, caching won't be a solution to the bandwidth crisis. Today, it's the best we have.
One problem proxy caches don't solve is making certain your users configure their browsers to use the caches. Users can always choose to bypass your proxy server by not configuring their browsers. Some organizations have chosen to block port 80 in their routers except for the system running the proxy-cache server. It's a radical solution, but very effective.
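On a Linux router, that port-80 block can be sketched with iptables (the addresses are hypothetical; 192.168.1.10 stands for the proxy-cache host):

```shell
# Let the proxy host reach port 80 directly; drop everyone else,
# forcing clients to go through the cache.
iptables -A FORWARD -p tcp --dport 80 -s 192.168.1.10 -j ACCEPT
iptables -A FORWARD -p tcp --dport 80 -j DROP
```

Rule order matters here: the ACCEPT for the proxy host must precede the blanket DROP.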
Another thing you can do to improve the speed of your users' browsers is pre-fetching the most accessed web sites from your cache. Recursive web-fetching tools that support proxy connections, such as url_get or webcopy, can do this task in off-peak hours. Launching one of these retrieval tools with its output discarded updates the cache with fresh objects.
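A sketch of the same idea using wget (a substitute for the tools named above), assuming the cache runs on localhost:3128 and www.example.com stands for a popular site:

```shell
# Prime the cache through Squid during off-peak hours.
# --delete-after discards the local copies; the cache keeps the objects.
export http_proxy=http://localhost:3128/
wget --quiet --recursive --level=1 --delete-after http://www.example.com/
```

Run from cron at night, this keeps popular objects fresh before users arrive in the morning.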