Deploying the Squid proxy server on Linux
To provide Internet access for users across SAS Institute Europe, Middle East and Africa (EMEA), a number of proxy servers have been installed, both at the country-office level and centrally at SAS European Headquarters in Heidelberg, Germany.
These servers run the Squid proxy server software, which is available under the GNU General Public License. In brief, Squid caches and/or forwards requests for Internet objects such as the data available via the HTTP, FTP and Gopher protocols. Web browsers can then use the local Squid server as a proxy HTTP server, reducing access time as well as bandwidth consumption. Squid keeps these objects in RAM or on local disk. Squid servers can be arranged in hierarchies, allowing central servers to build large caches of data available to servers lower in the hierarchy.
Squid has been in use for some time around SAS EMEA and is performing very well; the software is extremely stable and is delivering seamless access to the Internet for connected clients.
The original proxy servers were installed on HP workstations running release 10.20 of HP-UX and Squid version 2.1. This ran on a mix of hardware, but typically HP9000/720 workstations with 64MB of memory and about 4GB of disk. This configuration has become difficult to support: the hardware has reached an age where failures are common, and increased use of the Internet, coupled with growth in the offices, has left the configuration under-powered. Our main problem of late has been disk-space management; heavier access patterns have left our 100MB log areas looking undersized and our 2GB cache directories rather small.
As a result, we began researching some alternatives in order to maintain the service. Since we were happy with the Squid software itself, and we already had a good understanding of the configurations, we decided to continue using Squid but to review the hardware base.
Since Squid is an open-source project and well supported under Linux, it seemed a good idea to explore a Linux-based solution using a standard SAS EMEA Intel PC. This configuration is a Dell desktop PC with 256MB of RAM, a 500MHz Intel Pentium processor and an internal 20GB IDE disk. As Dell has a relationship with Red Hat, it made sense to use their distribution. Also, SAS has recently released versions of the SAS product in partnership with Red Hat.
The original architecture in SAS EMEA used three central parent Squid caches with direct access to the Internet and child Squid servers in many of the country offices. Some of the smaller countries' operations still connect to the central headquarters caches, and we felt that less expensive hardware would give us the opportunity to install proxies in those offices. Further, in many countries the SAS presence is split among several offices connected via WAN links; again, the less expensive hardware lets us install proxies in these locations. These deployments should improve response times for web traffic and hopefully reduce the overhead on our WAN links.
Finally, we had some reservations about the resilience and availability of the original infrastructure, and we felt that with revised client and server configurations we could improve the level of service for our internal customers.
Our new architecture is not much altered in principle; we still have three central servers, but they now run Linux. We are deploying more child proxies, and we require a three-level hierarchy in some offices. For example, some countries have satellite offices that only connect to the SAS Intranet via a single WAN link to the country headquarters; in these cases we will install proxies at the satellite office with a preferred parent cache in the country headquarters rather than European headquarters.
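To give a flavour of how such a child cache is configured, the fragment below shows the relevant squid.conf directives for a satellite-office proxy. This is a minimal sketch: the hostname and ports are examples rather than our actual values, and the options would be tuned per office; only the cache_peer and never_direct directives themselves are standard Squid configuration.

    # squid.conf fragment for a satellite-office proxy (hostname is an example)
    # Use the country-headquarters cache as the parent for all requests
    cache_peer proxy-hq.example.com parent 3128 0 no-query default
    # The satellite office has no direct route to the Internet, so never go direct
    never_direct allow all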
A new addition to our architecture has been the Trend InterScan VirusWall product for HTTP virus-scanning. We have installed three virus-scanning systems, also running Red Hat Linux; these systems are positioned behind the current Squid parent caches, providing a virus-scanning layer between the Squid cache hierarchy and the external Internet. Since the virus scanners are pass-through in nature, we simply configure our top-level Squid servers to "round-robin" between them.
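The round-robin arrangement is again expressed with cache_peer lines, this time on the top-level Squid servers. The sketch below assumes hypothetical hostnames and a hypothetical listening port for the virus-scanning proxies; the round-robin option itself is standard Squid peer selection.

    # squid.conf fragment on a top-level parent cache (hostnames/port are examples)
    # Treat each virus-scanning gateway as a parent and share requests between them
    cache_peer viruswall1.example.com parent 8080 0 no-query round-robin
    cache_peer viruswall2.example.com parent 8080 0 no-query round-robin
    cache_peer viruswall3.example.com parent 8080 0 no-query round-robin
    # All outbound requests must pass through a virus scanner, never directly
    never_direct allow all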
The original HP-UX servers were installed by duplicating a disk image from a known configuration. This was a totally unsatisfactory method for several reasons, not least that it was difficult to maintain the image with patches or Squid version updates.
Our goal was a scripted and automated installation that could be performed quickly by local office staff. We have been pleased with the implementation of this concept, and it has some useful benefits with regard to recovery and configuration management (see below).
We produced a KickStart configuration to build machines for us. KickStart is a tool from Red Hat to automate system installations. Basically, we can tell the installer how to partition the disk and which packages (RPMs) to install, and we can include some local configuration steps as shell commands. Our KickStart configuration is placed on a floppy disk along with the normal Linux boot utilities, and we instruct KickStart to perform the installation from CD.
This means that for a new proxy server we can arrange shipment of a PC that looks similar to our expected hardware configuration and ship a CD and floppy disk for the remote office to complete the configuration.
The installation process has been automated with three exceptions: users are prompted for the hostname, IP information and the keyboard type (some of our offices use different keyboards for the local language). The KickStart hard-codes all other choices; for example, the installation language is always English, the choice of packages is always the same and the disk is always partitioned in the same way.
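The sketch below illustrates the sort of KickStart file we use. The directive names are standard KickStart, but the partition sizes, package list and time zone shown here are illustrative rather than our production values; the keyboard and network directives are deliberately left out so that the installer prompts for them.

    # ks.cfg sketch -- values are illustrative, not our production configuration
    # Installation language is always English
    lang en_US
    install
    # Install packages from the local CD-ROM
    cdrom
    # keyboard and network directives are omitted on purpose:
    # the installer prompts local staff for keyboard type, hostname and IP
    zerombr yes
    # The disk is always partitioned the same way
    clearpart --all
    part swap --size 256
    part / --size 1024 --grow
    rootpw changeme
    timezone Europe/Berlin
    %packages
    @ Base
    squid
    %post
    # local configuration steps run here, e.g. installing our standard squid.conf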
The basic installation, from placing the disk in the drive until the reboot with a freshly installed OS, takes under ten minutes. This is much quicker than we could manage by hand and a huge decrease from the time it would take to perform an HP-UX installation. This obviously has some implications for our backup and recovery procedures (see below).