Caching the Web, Part 1
The web is everywhere. Everyone uses it. Everyone talks about it. But in this less-than-perfect world, you know there are problems. Bandwidth is a problem. Web document latency (the time a document takes to arrive at your browser once its URL is requested) is a problem. As more of your available bandwidth is consumed, the latency of documents retrieved from the Internet increases. Bandwidth is expensive, perhaps the most expensive element of an Internet connection.
Although the web is growing fast, the same documents are requested and the same web sites are visited repeatedly. We can take advantage of this to avoid downloading redundant objects. You would be surprised to learn how many of your users read the NBA.COM web pages, or how many times the GIFs from AltaVista cross your line.
Even if you know nothing about web caching, you are probably using it already in your web browser. Most common browsers apply this technique to the documents and objects you retrieve from the Web, keeping a copy of recent documents in memory or on disk. Each time you click the “back” button or revisit a page, that page is loaded from the local copy and does not need to be retrieved again. This is the first level of caching, and the technique can be extended to the entire web.
The basic idea behind caching is to store the documents retrieved by one user in a common location, and thus avoid retrieving the same document for a second user from its source. Instead, the second user gets the document from the common place. This is very important when you deal with organizations in Europe, where most of the inbound traffic comes from the other side of the Atlantic, frequently across slow links.
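The idea can be sketched in a few lines of code. This is a minimal model, not a real proxy: `fetch_from_origin` is a hypothetical stand-in for an actual HTTP retrieval, and the class simply counts hits and misses.

```python
# Minimal sketch of a shared document cache: the first request for a
# URL fetches it from the origin server; later requests by any user
# are served from the common store.
# fetch_from_origin is a stand-in for a real HTTP retrieval.

def fetch_from_origin(url):
    return "<html>body of %s</html>" % url

class SharedCache:
    def __init__(self):
        self.store = {}    # URL -> document body
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1             # served from the common store
            return self.store[url]
        self.misses += 1               # first request: go to the source
        doc = fetch_from_origin(url)
        self.store[url] = doc
        return doc

cache = SharedCache()
cache.get("http://www.nba.com/")       # user 1: miss, fetched from origin
cache.get("http://www.nba.com/")       # user 2: hit, served locally
print(cache.hits, cache.misses)
```

The second user's request never crosses the Atlantic link; only the first retrieval does.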
The main benefit of this approach is that your users' browsing becomes collaborative: a significant fraction of the documents they request is served from the cache almost immediately. In a medium-sized organization (between 50 and 100 users), you can serve up to 60% of URL requests from the local cache.
The difference between a browser cache and a proxy-cache server is that the browser cache works for only one user and is located in the final user workstation, while the proxy-cache server is a program that acts on behalf of a number of web browser clients, allowing one client to read documents requested by others earlier. This proxy-cache server is located in a common server that usually lies between the local network and the Internet. All browsers request documents from the proxy server, which retrieves the documents and returns them to the browsers. It's the second level of caching in an organization. Figure 1 shows this type of network configuration.
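Command-line web clients on the local network can be pointed at such a proxy-cache through the conventional proxy environment variables (the host name and port below are assumptions; port 3128 is a common default for proxy-cache servers):

```shell
# Direct HTTP and FTP requests through the shared proxy-cache.
# proxy.example.com:3128 is a hypothetical server on your LAN.
export http_proxy="http://proxy.example.com:3128/"
export ftp_proxy="http://proxy.example.com:3128/"
```

Graphical browsers offer the same setting in their network-preferences dialog, so every request goes through the proxy server rather than directly to the origin site.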
A proxy-cache is not just a solution to the bandwidth crisis; it is also desirable when a network firewall is needed to guarantee the security of your organization. In this case, the proxy-cache sits on a computer accessible from all local browsers, but isolates them from the Internet at the same time. This computer must have two network interfaces attached to the internal and external networks and must be the only computer reachable from the Internet. Figure 2 illustrates such a configuration. The proxy-cache server must be accessible only by internal systems to ensure that no one on the Internet can access your internal documents by requesting them from the proxy-cache. I will discuss access control to the proxy-cache later in this article.
One step beyond this approach is the concept of a cache hierarchy, where two or more proxy-cache servers cooperate by serving documents to each other. A proxy-cache can play two different roles in a hierarchy, depending on network topology, ISP policies and system resources. A neighbor (or sibling) cache is one that serves only documents it already has. A parent cache can fetch documents from another cache higher in the hierarchy or from the source, depending on whether it has parent or sibling caches of its own. A parent cache should be asked only when there is no opportunity to get the document from a cache at the same level.
Choosing a good cache topology is very important to avoid generating more network traffic than you would without web caching. An organization can choose to have several sibling caches in its departmental networks and a parent cache close to the network link to the Internet. This parent cache can be configured to request documents from another parent cache in the upstream ISP, if it has one (most do). Agreements can be made between organizations and ISPs to build sibling or parent caches to reduce the traffic load on their links, or to route web traffic through a different path than regular IP traffic. Web caching can be considered an application-level routing mechanism, with ICP (the Internet Cache Protocol) as its main inter-cache protocol. Figure 3 is an example of how an organization can implement multi-level web caching.
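In Squid, the most widely used free proxy-cache, such a hierarchy is declared with `cache_peer` lines. The fragment below is a hedged sketch of the topology just described; all host names are hypothetical, and 3128/3130 are the customary HTTP and ICP ports:

```shell
# Hypothetical squid.conf fragment for a multi-level hierarchy:
# ask the departmental siblings first (via ICP), and fall back to
# the upstream ISP's parent cache when no sibling has the object.
#            host name                 role     HTTP  ICP
cache_peer   cache-dept1.example.org   sibling  3128  3130
cache_peer   cache-dept2.example.org   sibling  3128  3130
cache_peer   cache.isp.example.net    parent   3128  3130
```

A sibling is only ever asked for objects it already holds, while the parent may go to the origin server on your behalf, matching the roles described above.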