Automating Security with GNU cfengine
Many years ago, I had a small revelation that I'm sure many of you have experienced yourselves. I realized that maintaining 10 systems requires a good bit more work than administering a single computer. But, it doesn't have to take that much more work--assuming the proper tools and methodologies are used.
When you want to make a change to a single system, you simply decide what to change and poke around until everything works properly. Three months later, you may not even remember what it was you did or why you did it. Does that matter? Not usually.
But when you have to make those changes to several systems, do you really want to perform manually the same task numerous times? If you had 10 systems but now have 11, will you remember to make all of the same changes to the new arrival? Maybe the new system is slightly different--or maybe none of your systems are the same. Wouldn't it be nice to know exactly what you did and why you did it?
This is where GNU cfengine, by Mark Burgess, comes into play. It allows you to effect changes effortlessly across any number of dissimilar systems. Perhaps even more important, it provides automatic documentation of exactly what you did, and you can add a few comments to explain why you did it. Each of your systems becomes a member of one or more classes, and changes are made on a per-class basis. If a new system arrives, it automatically acquires the changes previously made to other members of its class.
Once cfengine is installed (from www.cfengine.org) and running, making changes to your group of systems becomes almost as easy as changing a single system. This gives you more time to decide what to do and how to do it, something that remains the primary responsibility of an administrator to this day.
As with many programs, the most difficult part about using cfengine is getting started. For this reason, we are going to set up a basic environment, with one client and one server. You should perform these steps on a couple of test systems, if possible, to follow along with the examples. Later, you can expand this framework to build your own cfengine environment.
I believe you always should start a new adventure with the simplest practical configuration. In this case, the systems are assumed to each have a single, static IP address and the proper DNS entries (forward and reverse). The setup process can be summarized as follows:
1. Create master configuration files on the master server (cfservd.conf, update.conf and cfagent.conf).
2. Create public/private keys on each system and distribute appropriately.
3. Start cfservd on at least the server, and run cfexecd on all systems.
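To make steps 2 and 3 concrete, here is a sketch of the key exchange and daemon startup from the command line. The ppkeys path and the example IP addresses are assumptions; your install prefix and addressing will differ:

```
# Step 2: on every host (server and clients alike), generate this host's
# key pair. cfkey writes localhost.priv and localhost.pub under
# /var/cfengine/ppkeys (location may vary with your build):
cfkey

# Exchange public keys so the hosts trust one another. On the server,
# install each client's public key as root-<client-ip>.pub; on each
# client, install the server's public key as root-<server-ip>.pub.
# For example, with a client at 192.168.1.10 and a server at 192.168.1.1:
#   (on the server) cp client.pub /var/cfengine/ppkeys/root-192.168.1.10.pub
#   (on the client) cp server.pub /var/cfengine/ppkeys/root-192.168.1.1.pub

# Step 3: start the daemons.
cfservd    # on the server, to serve the master files to clients
cfexecd    # on all systems, to run cfagent on a schedule and log its output
```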
Let's start with the master server. It should have the directory /usr/local/var/cfengine/inputs containing the master set of configuration files. It must run the cfservd dæmon, and it must have a valid cfservd.conf configuration file, as follows:
control:
    domain       = ( mydomain.com )
    AllowUsers   = ( root )
    cfrunCommand = ( "/var/cfagent/bin/cfagent" )

admit:
    /usr/local/var/cfengine/inputs    *.mydomain.com
    /var/cfagent/bin/cfagent          *.mydomain.com
This simple configuration file does little more than allow all hosts from mydomain.com to download the master set of configuration files. It also allows remote systems to execute the cfagent command using cfrun, a useful feature you should explore once you have things up and running.
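As a taste of that feature, cfrun reads its list of target hosts from a cfrun.hosts file in the inputs directory and asks each host's cfservd to run cfagent on the spot. The hostnames below are examples, not part of the configuration above:

```
# List the hosts cfrun should contact, one per line:
cat > /usr/local/var/cfengine/inputs/cfrun.hosts <<EOF
client1.mydomain.com
client2.mydomain.com
EOF

# Trigger an immediate cfagent run on each listed host; arguments after
# "--" are passed through to the remote cfagent:
cfrun -- -v
```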
We now can move on to update.conf, which is processed and executed first when cfagent is run on any system. This file's primary function is to transfer the master configuration files to the local system. It must be kept simple and reliable, as any errors in this file have to be repaired manually on each system.
Listing 1. Sample update.conf
control:
    actionsequence = ( copy tidy )
    domain         = ( mydomain.com )
    workdir        = ( /var/cfengine )
    policyhost     = ( server.mydomain.com )
    master_cfinput = ( /usr/local/var/cfengine/inputs )
    cf_install_dir = ( /usr/local/sbin )

copy:
    $(master_cfinput)          dest=$(workdir)/inputs
                               r=inf
                               mode=644
                               type=binary
                               exclude=*.lst
                               exclude=*~
                               exclude=#*
                               server=$(policyhost)

    $(cf_install_dir)/cfagent  dest=$(workdir)/bin/cfagent
                               mode=755
                               type=checksum

    $(cf_install_dir)/cfservd  dest=$(workdir)/bin/cfservd
                               mode=755
                               type=checksum

    $(cf_install_dir)/cfexecd  dest=$(workdir)/bin/cfexecd
                               mode=755
                               type=checksum

tidy:
    $(workdir)/outputs pattern=* age=7
The file shown in Listing 1 should be all you need in most environments; simply replace server.mydomain.com with the hostname of your cfengine server. When executed, this update.conf file creates a local copy of the required cfengine binaries in /var/cfengine/bin (from /usr/local/sbin, which is assumed to be an NFS filesystem, or similar).
More importantly, the configuration files are copied from the server that runs cfservd to /var/cfengine/inputs on every system, including the server itself. The configuration files are compared bit-by-bit with the master file (type=binary), while the binaries are updated if their checksums don't match the master copies.
Finally, the tidy section removes output logfiles that are more than seven days old to keep your drive from filling up. The tidy section also can be used in the main cfagent.conf to clean up a variety of other files on your systems.
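As an illustration of that broader use (a sketch, not part of the configuration above), a tidy stanza in cfagent.conf could prune stale temporary files the same way:

```
tidy:
    # Remove files under /tmp that have gone untouched for two weeks,
    # recursing through all subdirectories:
    /tmp pattern=* age=14 recurse=inf
```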