Server Hardening

Every distribution has its tools for managing a firewall, and others are available in most package managers. I don't bother with them, as iptables (once you gain some familiarity with it) is fairly easy to understand and use, and it is the same on all systems. Like vi, you can expect its presence everywhere, so it pays to be able to use it. A basic firewall looks something like this:


# make sure forwarding is off and clear everything
# also turn off IPv6 if you don't need it
sysctl net.ipv6.conf.all.disable_ipv6=1
sysctl net.ipv4.ip_forward=0
iptables --flush
iptables -t nat --flush
iptables -t mangle --flush
iptables --delete-chain
iptables -t nat --delete-chain
iptables -t mangle --delete-chain


# make the default policy drop everything
iptables --policy INPUT DROP
iptables --policy OUTPUT ACCEPT
iptables --policy FORWARD DROP


# allow all on loopback
iptables -A INPUT -i lo -j ACCEPT

# allow established and related
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#allow ssh
iptables -A INPUT -m tcp -p tcp --dport 22 -j ACCEPT

You can get fancy, wrap this in a script, drop a file in /etc/rc.d, link it to the runlevels in /etc/rcX.d, and have it start right after networking, or it might be sufficient for your purposes to run it straight out of /etc/rc.local. Then you modify this file as requirements change. For instance, to allow ssh, http and https traffic, you can switch the last line above to this one:


iptables -A INPUT -p tcp -m state --state NEW \
    -m multiport --dports ssh,http,https -j ACCEPT
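
If you do go the script route, a minimal sketch looks like this (the path and file name are just examples; adjust for your system):


# save the rules above as /usr/local/sbin/firewall.sh (example path),
# starting with a #!/bin/sh line, then make it executable
chmod 700 /usr/local/sbin/firewall.sh

# have rc.local run it at boot, after networking is up
# (make sure the call lands before any final 'exit 0' in rc.local)
echo '/usr/local/sbin/firewall.sh' >> /etc/rc.local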

More specific rules are better. Let's say what you've built is an intranet server, and you know where your traffic will be coming from and on what interface. You instead could add something like this to the bottom of your iptables script:


iptables -A INPUT -i eth0 -s 192.168.1.0/24 -p tcp \
    -m state --state NEW -m multiport --dports http,https -j ACCEPT

There are a couple of things to consider in this example that you might need to tweak. For one, this allows all outbound traffic initiated from the server. Depending on your needs and paranoia level, you may not wish to do so. Setting outbound traffic to default deny will significantly complicate maintenance for things like security updates, so weigh that complication against your level of concern about rootkits communicating outbound to phone home. Should you go with default deny for outbound, iptables is an extremely powerful and flexible tool: you can control outbound communications based on parameters like process name and owning user ID, rate-limit connections, and almost anything else you can think of, so if you have the time to experiment, you can control your network traffic with a very high degree of granularity.
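
To give a flavor of that flexibility, here is a hedged sketch of what a default-deny outbound policy with a few carve-outs might look like; the ports, UID and rate below are illustrative choices, not recommendations:


# let packets belonging to established connections flow out
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow DNS lookups and package downloads only when initiated by root (UID 0)
iptables -A OUTPUT -p udp --dport 53 -m owner --uid-owner 0 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports http,https \
    -m owner --uid-owner 0 -j ACCEPT

# rate-limit new outbound SMTP so a compromised app can't spray mail
iptables -A OUTPUT -p tcp --dport 25 -m state --state NEW \
    -m limit --limit 3/minute -j ACCEPT

# everything else outbound is dropped
iptables --policy OUTPUT DROP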

Second, I'm setting the default to DROP instead of REJECT. DROP is a bit of security by obscurity. It can discourage a script kiddie whose port scan takes too long, but since you have commonly scanned ports open, it will not deter a determined attacker, and it might complicate your own troubleshooting, because you have to wait for the client-side timeout whenever you've blocked a port in iptables, either on purpose or by accident. Also, as I've detailed in a previous article in Linux Journal (http://www.linuxjournal.com/content/back-dead-simple-bash-complex-ddos), TCP-level rejects are very useful in high-traffic situations to clear out the resources used to track connections statefully on the server and on network gear farther out. Your mileage may vary.

Finally, your distribution's minimal install might not have sysctl installed or on by default. You'll need that, so make sure it is on and works. It makes inspecting and changing system values much easier, as most versions support tab auto-completion. You also might need to include full paths to the binaries (usually /sbin/iptables and /sbin/sysctl), depending on the base path variable of your particular system.
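
For example, to check that the binary is where you expect, and to set and persist a value using full paths (the config file location varies a bit by distribution):


# confirm the binary is present
ls -l /sbin/sysctl

# inspect and set a value using the full path
/sbin/sysctl net.ipv4.ip_forward
/sbin/sysctl -w net.ipv4.ip_forward=0

# persist the setting across reboots
echo 'net.ipv4.ip_forward = 0' >> /etc/sysctl.conf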

All of the above probably should be finished within a few minutes of bringing up the server. I recommend not opening the ports for your application until after you've installed and configured the applications you are running on the server. So at the point when you have a new minimal server with only SSH open, you should apply all updates using your distribution's method. Decide now whether you want to do this manually on a schedule or set updates to automatic; your distribution probably has a mechanism for the latter, and if not, a script dropped in cron.daily will do the trick. Sometimes updates break things, so evaluate carefully. Whether you automate updates or not, critical flaws that sometimes require manual configuration changes are being uncovered frequently right now, so you also need to monitor the appropriate lists and sites for critical security updates to your stack and apply them as necessary.
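
As a sketch of the cron.daily approach (this assumes an apt-based system; substitute your distribution's package manager as needed):


#!/bin/sh
# /etc/cron.daily/auto-update -- example name
# refresh package lists and quietly apply any pending updates
export DEBIAN_FRONTEND=noninteractive
apt-get -qq update
apt-get -qq -y upgrade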

Once you've dealt with updates, you can move on and continue to evaluate your server against the two security principles of 1) minimal attack surface and 2) secure everything that must be exposed. At this point, you are pretty solid on point one. On point two, there is more you can yet do.

The concept of hurdles requires that you not allow root to log in remotely. Gaining root should be at least a two-part process. This is easy enough; you simply set this line in /etc/ssh/sshd_config:


PermitRootLogin no

For that matter, root should not be able to log in directly at all. The account should have no password and should be accessible only via sudo—another hurdle to clear.
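
One way to set that up (the user name is a placeholder, and the admin group is "sudo" on Debian-style systems or "wheel" on Red Hat-style ones):


# lock root's password so nobody can log in to the account directly
passwd -l root

# give an administrative user sudo rights instead
usermod -aG sudo youradminuser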

If a user doesn't need to have remote login, don't allow it, or better said, allow only users that you know need remote access. This satisfies both principles. Use the AllowUsers and AllowGroups settings in /etc/ssh/sshd_config to make sure you are allowing only the necessary users.
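
For example (the names are placeholders), the relevant lines in /etc/ssh/sshd_config look like this; reload sshd after editing:


AllowUsers alice bob

# or manage it by group membership instead
AllowGroups sshusers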

You can set a password policy on your server to require a complex password for any and all users, but I believe it is generally a better idea to bypass crackable passwords altogether and use key-only login, and have the key require a complex passphrase. This raises the bar for cracking into your system, as it is virtually impossible to brute force an RSA key. The key could be physically stolen from your client system, which is why you need the complex passphrase. Without getting into a discussion of length or strength of key or passphrase, one way to create it is like this:


ssh-keygen -t rsa

Then when prompted, enter and re-enter the desired passphrase. Copy the public portion (id_rsa.pub or similar) into a file in the user's home directory called ~/.ssh/authorized_keys, and then in a new terminal window, try logging in, and troubleshoot as necessary. I store the key and the passphrase in a secure data vault provided by Personal, Inc. (https://personal.com), and this will allow me, even if away from home and away from my normal systems, to install the key and have the passphrase to unlock it, in case an emergency arises. (Disclaimer: Personal is the startup I work with currently.)
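
One way to get the key in place (ssh-copy-id does this in a single step where it's available; the user and host names are placeholders):


# from the client, append the public key to the server-side file
ssh-copy-id user@yourserver

# or do it by hand, making sure the permissions stay tight
cat ~/.ssh/id_rsa.pub | ssh user@yourserver \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh &&
     cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'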

Once it works, change this line in /etc/ssh/sshd_config:


PasswordAuthentication no

Now you can log in only with the key. I still recommend keeping a complex password for the users, so that when you sudo, you have that layer of protection as well. Now to take complete control of your server, an attacker needs your private key, your passphrase and your password on the server—hurdle after hurdle. In fact, in my company, we also use multi-factor authentication in addition to these other methods, so you must have the key, the passphrase, the pre-secured device that will receive the notification of the login request and the user's password. That is a pretty steep hill to climb.

Encryption is a big part of keeping your server secure—encrypt everything that matters to you. Always be aware of how data, particularly authentication data, is stored and transmitted. Needless to say, you never should allow login or connections over an unencrypted channel like FTP, Telnet, rsh or other legacy protocols. These are huge no-nos that completely undo all the hard work you've put into securing your server. Anyone who can gain access to a switch nearby and perform reverse arp poisoning to mirror your traffic will own your servers. Always use sftp or scp for file transfers and ssh for secure shell access. Use https for logins to your applications, and never store passwords, only hashes.

Even with strong encryption in use, in the recent past, many flaws have been found in widely used programs and protocols—get used to turning ciphers on and off in both OpenSSH and OpenSSL. I'm not covering Web servers here, but the lines of interest you would put in your /etc/ssh/sshd_config file would look something like this:


Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128
MACs hmac-sha1,umac-64@openssh.com,hmac-ripemd160

Then you can add or remove as necessary. See man sshd_config for all the details.
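
After editing and reloading sshd, one way to confirm which ciphers and MACs the daemon will actually offer (sshd -T needs root, a valid config and a reasonably recent OpenSSH):


/usr/sbin/sshd -T | grep -Ei '^(ciphers|macs)'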

Depending on your level of paranoia and the purpose of your server, you might be tempted to stop here. I wouldn't. Get used to installing, using and tuning a few more security essentials, because these last few steps will make you exponentially more secure. I'm well into principle two now (secure everything that must be exposed), and I'm bordering on the third principle: assume that every measure will be defeated. There is definitely a point of diminishing returns with the third principle, where the change to the risk does not justify the additional time and effort, but where that point falls is something you and your organization have to decide.

The fact of the matter is that even though you've locked down your authentication, there still exists the chance, however small, that a configuration mistake or an update will change or break your config, that an attacker will find a way into your system by blind luck, or even that the system came with a backdoor. There are a few things you can do that will further protect you from those risks.

Speaking of backdoors, everything from phones to the firmware of hard drives has backdoors pre-installed. Lenovo has been caught no less than three times pre-installing rootkits, and Sony rooted customer systems in a misguided attempt at DRM. A programming mistake in OpenSSL left a hole open that the NSA has been exploiting to defeat encryption for at least a decade without informing the community, and this was apparently only one of several. In the late 2000s, someone anonymously attempted to insert a two-line programming error into the Linux kernel that would cause a remote root exploit under certain conditions. So suffice it to say, I personally do not trust anything sourced from the NSA, and I turn SELinux off because I'm a fan of warrants and the fourth amendment. The instructions are generally available, but usually all you need to do is make this change to /etc/selinux/config:


# comment out the old value, set it to disabled, then restart the system
#SELINUX=enforcing
SELINUX=disabled

In the spirit of turning off and blocking what isn't needed, since most of the malicious traffic on the Internet comes from just a few sources, why do you need to give them a shot at cracking your servers? I run a short script that collects various blacklists of exploited servers in botnets, Chinese and Russian CIDR ranges and so on, and creates a blocklist from them, updating once a day. Back in the day, you couldn't do this, because iptables would get bogged down matching more than a few thousand rules, so having a rule for every malicious IP out there just wasn't feasible. With the maturity of the ipset project, now it is. ipset uses a binary search algorithm that adds only one step to the search each time the list doubles in size, so an arbitrarily large list can be searched efficiently for a match, although I believe the default table size tops out around 65k entries.
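
As a sketch of how the pieces fit together (the set name and addresses are examples; a real blocklist script would feed in thousands of entries):


# create a set that holds CIDR blocks, sized well past the default
ipset create blocklist hash:net maxelem 262144

# add offending ranges -- the daily script would populate this from blacklists
ipset add blocklist 203.0.113.0/24
ipset add blocklist 198.51.100.0/24

# a single iptables rule then matches against the entire set
iptables -I INPUT -m set --match-set blocklist src -j DROP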

______________________

-- I was cloud before cloud was cool. Not in the sense of being an amorphous collection of loosely related molecules with indeterminate borders -- or maybe I am. Holla @geek_king, http://twitter.com/geek_king