Paranoid Penguin - Building a Secure Squid Web Proxy, Part III
As I mentioned previously, one of Squid's most handy capabilities is its ability to authenticate proxy users by means of a variety of external helper mechanisms. One of the simplest and probably most commonly used helper applications is ncsa_auth, a simple user name/password scheme that uses a flat file consisting of rows of user name/password hash pairs. The HOWTO by Vivek Gite and, to a lesser extent, the Squid User's Guide, explain how to set this up (see Resources).
Briefly, you'll add something like this to /etc/squid/squid.conf:
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squidpasswd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server at Wiremonkeys.org
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
And, in the ACL section:
acl ncsa_auth_users proxy_auth REQUIRED
http_access allow ncsa_auth_users
The block of auth_param tags specifies settings for a “basic” authentication mechanism:
program is the helper executable ncsa_auth, using the file /etc/squid/squidpasswd as the user name/password hash list (created previously).
children, the number of concurrent authentication processes, is five.
realm, part of the string that greets users, is “Squid proxy-caching Web server at Wiremonkeys.org”.
credentialsttl, the time after authentication that a successfully authenticated client may go before being re-authenticated, is two hours.
casesensitive, which determines whether user names are case-sensitive, is off.
In the ACL section, we defined an ACL called ncsa_auth_users that says the proxy_auth mechanism (as defined in the auth_param section) should be used to authenticate specified users. Actually, in this case, instead of a list of user names to authenticate, we've got the wild card REQUIRED, which expands to “all valid users”. The net effect of this ACL and its subsequent http_access statement is that only successfully authenticated users may use the proxy.
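Creating the password file itself might look something like the following sketch. The file location, user name and password here are examples only; htpasswd from the Apache utilities is the more common tool for this job, and some older ncsa_auth builds accept only classic DES-crypt hashes (htpasswd -d), so check what your Squid version supports:

```shell
# Sketch: add a user to the ncsa_auth password file.
# Path, user name and password below are examples only.
PASSFILE=/tmp/squidpasswd        # in production: /etc/squid/squidpasswd

# openssl's -apr1 flag emits an Apache-style MD5 hash; recent Squid
# versions accept these (older builds may want DES-crypt instead)
hash=$(openssl passwd -apr1 's3cret')
printf 'alice:%s\n' "$hash" >> "$PASSFILE"
chmod 640 "$PASSFILE"

# You then can test the helper directly (it prints OK or ERR):
#   echo "alice s3cret" | /usr/lib/squid/ncsa_auth "$PASSFILE"
```

Remember to make the file readable by Squid's effective user (proxy, by default) but not by the world.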
The main advantages of the NCSA mechanism are its simplicity and its reasonable amount of security (passwords are stored on the server only as hashes, never in clear text—although note that basic authentication transmits credentials merely base64-encoded, so they still should be protected in transit). Its disadvantage is scalability, because it requires you to maintain a dedicated user name/password list. Besides the administrative overhead in this, it adds yet another user name/password pair your users are expected to remember and protect, which is always an exercise with diminishing returns (the greater the number of credentials users have, the less likely they'll avoid risky behaviors like writing them down, choosing easy-to-guess passwords and so forth).
Therefore, you're much better off using existing user credentials on an external LDAP server (via the ldap_auth helper), on an NT Domain or Active Directory server (via the msnt_auth helper), or in the local Pluggable Authentication Modules (PAM) facility (via the pam_auth helper). See Resources for tutorials on how to set up Squid with these three helpers.
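For instance, switching from the flat file to an LDAP directory might look like the following squid.conf line. The helper path, flags, base DN and server name here are all examples; the helper's name and option set vary with your Squid version (on Squid 2.x, the binary typically is called squid_ldap_auth):

```
# Authenticate against an existing LDAP directory instead of a flat file
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example,dc=org" -f "uid=%s" ldap.example.org

# Or, authenticate against the local PAM stack:
# auth_param basic program /usr/lib/squid/pam_auth
```

The other auth_param lines and the proxy_auth ACL stay exactly as before; only the helper program changes.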
Note that Squid's helper programs are located conventionally under /usr/lib/squid. Checking this directory is a quick way to see which helpers are installed on your system, although some Linux distributions may use a different location.
Access Control Lists really are Squid's first line of defense—that is, Squid's primary mechanism for protecting your network, your users and the Squid server itself. There are a couple other things worth mentioning, however.
First, there's the matter of system privileges. Squid must run as root, at least while starting up, so that, among other things, it can bind to privileged TCP ports such as 80 or 443 (although by default it uses the nonprivileged port 3128). Like other mainstream server applications, however, Squid's child processes—the ones with which the outside world actually interacts—are run with lower privileges. This helps minimize the damage a compromised or hijacked Squid process can do.
By default, Squid uses the user proxy and group proxy for nonprivileged operations. If you want to change these values for effective UID and GID, they're controlled by squid.conf's cache_effective_user and cache_effective_group tags, respectively.
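For example, the defaults amount to the following two squid.conf lines (the account names are illustrative—whatever values you use must correspond to a real, preferably dedicated, nonprivileged account on your system):

```
cache_effective_user proxy
cache_effective_group proxy
```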
Squid usually keeps its parent process running as root, in case it needs to perform some privileged action after startup. Also, by default, Squid does not run in a chroot jail. To make Squid run chrooted, which also will cause it to kill the privileged parent process after startup (that is, also will cause it to run completely unprivileged after startup), you can set squid.conf's chroot tag to the path of a previously created Squid chroot jail.
If you're new to this concept, chrooting something (changing its root) confines it to a subset of your filesystem, with the effect that if the service is somehow hacked (for example, via some sort of buffer overflow), the attacker's processes and activities will be confined to an unprivileged “padded cell” environment. It's a useful hedge against losing the patch rat race.
Chrooting and running with nonroot privileges go hand in hand. If a process runs as root, it can trivially break out of the chroot jail. Conversely, if a nonprivileged process nonetheless has access to other (even nonprivileged) parts of your filesystem, it still may be abused in unintended and unwanted ways.
Somewhat to my surprise, there doesn't seem to be any how-to for creating a Squid chroot jail on the Internet. The world could really use one—maybe I'll tackle this myself at some point. In the meantime, see Resources for some mailing-list posts that may help. Suffice it to say for now that as with any other chroot jail, Squid's must contain not only its own working directories, but also copies of system files like /etc/nsswitch.conf and shared libraries it uses.
Common Squid practice is to forgo the chroot experience and to settle for running Squid partially unprivileged per its default settings. If, however, you want to run a truly hardened Squid server, it's probably worth the effort to figure out how to build and use a Squid chroot jail.
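The core of such a jail-building procedure might be sketched as follows. All paths here are assumptions that will vary by distribution; the demonstration at the end copies /bin/sh only because it exists on any Linux system—for a real Squid jail, you would run the commented commands as root:

```shell
#!/bin/sh
# Sketch: copy a binary, plus the shared libraries it links against,
# into a chroot jail. All paths are examples.

copy_with_libs() {    # usage: copy_with_libs <binary> <jail-root>
    bin=$1; jail=$2
    mkdir -p "$jail$(dirname "$bin")"
    cp "$bin" "$jail$bin"
    # ldd lists the shared objects the binary needs; pick out the
    # absolute paths and mirror each one inside the jail
    for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
        mkdir -p "$jail$(dirname "$lib")"
        cp "$lib" "$jail$lib"
    done
}

# For a real Squid jail, you would run something like (as root):
#   copy_with_libs /usr/sbin/squid /var/squid-jail
#   copy_with_libs /usr/lib/squid/ncsa_auth /var/squid-jail
#   mkdir -p /var/squid-jail/etc /var/squid-jail/var/log/squid
#   cp /etc/nsswitch.conf /etc/resolv.conf /var/squid-jail/etc/
# and then point squid.conf's chroot tag at /var/squid-jail.

# Harmless demonstration using a binary every system has:
copy_with_libs /bin/sh /tmp/jail-demo
```

This is only a starting point—Squid also needs its cache and log directories inside the jail, owned by its effective user, and you'll likely discover further missing files by watching cache.log on first startup.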