Simple Server Hardening, Part II

What to Avoid

Along with best practices, some security practices are best avoided. The first is security by obscurity. This means securing something merely by hiding it instead of hardening it. Obscurity should be avoided because it doesn't actually stop an attack; it just makes something harder to find and can give you a false sense of security.

A great example of this is the practice of moving your SSH port from the default (22) to something more obscure. Although moving SSH to port 60022 might lower the number of brute-force attempts in your logs, if you have a weak password, any halfway-decent attacker still will be able to find your SSH port with a port scan and service discovery and get in.
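To see why, consider what a standard scan turns up even against an obscure port (the hostname below is just a placeholder):

    # /etc/ssh/sshd_config -- sshd moved to an obscure port
    Port 60022

    # A full port scan with service detection still finds it:
    nmap -sV -p- server.example.com

The scan report will list 60022/tcp as open and identify the service as SSH, so the obscurity buys you very little.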

Port knocking (the practice of requiring a client to connect to a specific sequence of ports before the firewall allows it through; think of it like a combination lock made of ports) also falls into this category. Any router between the client and the server can see which ports the client knocks on, so the sequence isn't really a secret, yet it gives you a false sense of security that your service is firewalled off from attack. If you are that concerned about SSH brute-force attacks, just follow my hardening steps from the first part of this series in the October 2016 issue of LJ to eliminate them as a threat completely.
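As a rough sketch of the kind of hardening that makes brute force a non-issue (Part I walks through the full steps), disabling password logins in /etc/ssh/sshd_config leaves nothing to guess:

    # /etc/ssh/sshd_config -- require SSH keys, refuse passwords
    PasswordAuthentication no
    PermitRootLogin no

    # reload sshd to apply (the service may be named ssh or sshd,
    # depending on your distribution)
    sudo systemctl reload sshd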

Many of the other practices to avoid are essentially the opposites of the best practices. You should avoid complexity whenever possible, and avoid reliance on any individual security measure (they all end up having a security bug or failing eventually). In particular, when choosing network software, you should avoid software that doesn't support encrypted communication. In this day and age, I treat network software that doesn't support encryption as a sign that it's still a bit too immature for production use.

Where Sysadmin and Security Best Practices Collide

Earlier, I mentioned that general best practices and security best practices often are the same, and this first tip is a great example. Centralized configuration management tools like Puppet, Chef and SaltStack have long made it easier for systems administrators to deploy configuration files and other changes throughout their infrastructure. It turns out that configuration management also makes hardening simpler, because you can define your gold-standard, hardened configuration files and have them enforced throughout your environment with ease.
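As a minimal sketch in Puppet (the ssh module name, file paths and service name here are assumptions), a single file resource can push your hardened sshd_config to every node and restart the service whenever the file changes:

    # Minimal Puppet sketch; module name, paths and service name are assumptions
    file { '/etc/ssh/sshd_config':
      ensure => file,
      owner  => 'root',
      group  => 'root',
      mode   => '0600',
      source => 'puppet:///modules/ssh/sshd_config',
      notify => Service['sshd'],
    }

    service { 'sshd':
      ensure => running,
      enable => true,
    }

Chef and SaltStack have equivalent resources; the point is that the hardened file lives in one place and the tool keeps every server in line with it.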

For instance, if you use configuration management to control your web server configuration, you can define the set of approved, secure, modern TLS cipher suites and deploy them to all of your servers. If down the road one of those ciphers proves to be insecure, you can make the change in one place and know that it will go out to all relevant servers in your environment.
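With nginx, that policy boils down to a few directives you can template out from configuration management (the protocol and cipher values below are only illustrative; use whatever your current hardening guide recommends):

    # nginx TLS policy (values are illustrative, not a recommendation)
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';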

Another best practice with configuration management is checking your configuration management code into a source control system like git. This "configuration as code" approach has all kinds of benefits for systems administrators, including the ability to roll back mistakes and the benefit of peer review. From a security standpoint, it also provides a nice audit trail of all changes in your environment, especially if you make a point of changing your systems only through configuration management.
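The day-to-day workflow is plain git; the repository layout and file paths below are placeholders:

    # Inside your configuration management repository
    git log -p modules/ssh/files/sshd_config   # audit trail: who changed what, and when
    git revert <commit-id>                     # cleanly back out a bad change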

Along with configuration management, another DevOps tool that greatly aids security is orchestration, whether it's MCollective, Ansible or a simple SSH for loop. Orchestration tools make it easy to launch commands from a central location against particular hosts in your environment in a specific order, and they often are used to stage software updates. This ease of deploying software also provides a great security benefit, because staying up to date on security patches is so important.
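The SSH for loop is the simplest illustration of the idea; the host names and the command below are placeholders:

    # Run the same command on a set of hosts, in a fixed order
    for host in web1 web2 db1; do
        ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'
    done

Tools like MCollective and Ansible add parallelism, host discovery and reporting on top of this basic pattern.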

With an orchestration tool like MCollective, for instance, if you find out there's a new vulnerability in ImageMagick, you can get a report of the ImageMagick versions in your environment with one command, and with another command, you can update all of them. Regular security updates become simpler, which means you are more likely to stay on top of them. Even more involved security updates (like kernel updates that require a reboot) at least become more manageable, and you can use the orchestration software to tell for sure when all systems are patched.
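Here's what that workflow looks like using nothing fancier than an SSH loop on Debian-style hosts (hosts.txt is a placeholder list of servers); MCollective or Ansible would get you the same result with less typing:

    # Report the installed ImageMagick version on every host
    for host in $(cat hosts.txt); do
        echo "== $host =="
        ssh "$host" 'dpkg -l imagemagick | grep ^ii'
    done

    # Then push out just that one package update everywhere
    for host in $(cat hosts.txt); do
        ssh "$host" 'sudo apt-get update && sudo apt-get install -y --only-upgrade imagemagick'
    done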

Finally, set up some sort of centralized logging system. Although you can get really far with grep, it just doesn't scale when you have a large number of hosts generating a large number of logs. Centralized logging systems like Splunk and ELK (Elasticsearch, Logstash and Kibana) allow you to collect your logs in a central place, index them and then search through them quickly. This provides great benefits for general sysadmin troubleshooting, but from a security perspective, centralized logging also means attackers who compromise a system have a much harder time covering their tracks: they have to compromise the logging systems too. With all of your logs searchable in one place, you also get the ability to build queries that highlight (and, with many systems, graph) suspicious log events for review.
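As a small sketch of the collection side, forwarding every host's syslog stream to a central collector can be a one-line rsyslog rule, and once the logs are indexed, you can query them all from one place (the hostname and the query below are placeholders):

    # /etc/rsyslog.d/50-forward.conf -- copy all logs to the central
    # collector over TCP (@@ means TCP, a single @ means UDP)
    *.* @@loghost.example.com:514

    # Example: search an Elasticsearch/Logstash index for failed SSH
    # logins across every host
    curl -sG 'http://localhost:9200/logstash-*/_search' \
        --data-urlencode 'q=message:"Failed password"' \
        --data-urlencode 'size=20'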

______________________

Kyle Rankin is a senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin