Large-Scale Linux Configuration Management
Before the introduction of LCFG, we configured machines using a typical range of techniques, including vendor installs and disk copying (cloning), followed by a monolithic script that applied assorted “tweaks” for all the different configuration variations. This approach met virtually none of the requirements listed above and was a nightmare to manage.
The available alternatives ranged from large commercial systems (too expensive and probably too inflexible) to systems developed at individual sites for their own use (often not much of an improvement over our existing process). More recently, interesting tools such as COAS and the GNU cfengine (see Resources 5) have appeared, but we are still not aware of any comparable system which addresses quite the same set of requirements as LCFG.
Given limited development resources, we attempted to design an initial system as a number of independent subsystems, intending to use temporary implementations for those subsystems where we could leverage existing technology:
- Resource Repository: design a standard syntax for representing resources (individual configuration parameters). These would be stored in a central place where they could be analysed and processed as well as distributed to individual machines.
- Resource Compiler: preprocess the resources so that we could create configurations by inheritance and avoid specifying large numbers of low-level resources explicitly.
- Distribution Mechanism: distribute the master copy of the resources to clients on demand in a robust way.
- Component Framework: provide a framework which allows components to be easily written for configuring new subsystems and services, using the resources from the repository.
- Core Components: implement a number of core components, including basic OS installation and the standard daemons. We wanted some of these to act as exemplars, to make it as easy as possible for other people to create new components.
Items of configuration data are represented as key,value pairs, in a way similar to X resources. The key consists of three parts: the hostname, the component and the attribute. For example, the nameserver for the host wyrgly is configured by the DNS component:
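The resource line itself did not survive in this copy of the article. Under the hostname.component.attribute scheme just described, an entry of that shape would look something like the following (the component and attribute names, and the server address, are illustrative rather than taken from the original):

```
wyrgly.dns.server: 129.215.0.1
```

Here wyrgly is the hostname, dns the component and server the attribute; the value is what the DNS component eventually writes into resolv.conf.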
Notice that this specification is a rather abstract representation, not directly tied to the form in which the configuration is actually required by the machine, in this case, as a line in the resolv.conf file. This allows the same representation to be used for different platforms, and it permits high-level programs to analyse and generate the resources easily. The LCFG components on each machine are responsible for translating these resources into the appropriate form for the particular platform. COAS uses a similar representation for configuration parameters.
The resources are currently stored in simple text files, with one file per host. This collection of files forms the repository. We intend to provide a special-purpose language for specifying these resources; it would support inheritance, default configurations, validation and some concept of higher-level specifications. However, we are currently using a “temporary” solution based on the C preprocessor, followed by a short Perl script to preprocess the resources. The C preprocessor provides file inclusion and macros, which can be used for primitive inheritance. The Perl script allows inherited resources to be modified with regular expressions. Wild cards are also supported to provide default values.
In practice, most machines have very short resource files which simply inherit some standard templates. Machines can be cloned simply by copying these resource files. Often, a few resources are overridden to provide slight variations. For example:
#include <generic_client.h>
#include <linux.h>
#include <portable.h>
amd.localhome: paul
auth.users: paul
The name of the host is not needed in the resource keys, because it is derived from the name of the resource file.
Resources are currently distributed to clients using NIS (Sun's Network Information System). This is another “temporary” solution which is far from ideal; we hope to replace it in the near future.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
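A pipeline of the kind just described can be strung together in a few lines. The sketch below is purely illustrative (the directory layout, file names and the “ERROR” pattern are assumptions for the demonstration, not taken from the article):

```shell
# Hypothetical demonstration: build a small tree of log files, then
# chain find and grep to list only the logs containing a given entry.
dir=$(mktemp -d)
echo 'ERROR: disk full'  > "$dir/a.log"
echo 'INFO: all is well' > "$dir/b.log"

# find selects the .log files; grep -l prints the names of those
# whose contents match the pattern.
find "$dir" -name '*.log' -exec grep -l 'ERROR' {} +
```

In real use you would point find at /home (or wherever the logs live) instead of a temporary directory; the `-exec … {} +` form batches file names onto as few grep invocations as possible.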
Cron has traditionally been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t account for total cost of ownership, nor for the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide