Automate System Administration Tasks with Puppet
If you have more than one UNIX box in your care, you know how duplication happens. Every machine needs a common set of settings. Package upgrades need to be deployed. Certain packages need to be on every server.
You also want to make sure that any changes to your systems happen in a controlled manner. It's one thing to start off with two servers that are similarly configured; it's another thing to know they're the same a year later, especially if other people are involved.
Puppet is a system for automating system administration tasks (in the author's own words). In the Puppet world, you define a policy (called a manifest) that describes the end state of your systems, and the Puppet software takes care of making sure the system meets that end state. If a file changes, it is replaced with a pristine copy. If a required package is removed, it is re-installed.
It is important to draw a distinction between shell scripts that copy files between systems and a tool like Puppet. The latter abstracts the policy from the steps required to make a system conform. Puppet is smart enough to use apt-get to install a package on a Debian system and yum on a Fedora system. Puppet is smart enough to do nothing if the system already conforms to the policy.
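To make this concrete, here is a minimal sketch of what a manifest might look like. The resource names and the file server URL are examples only, not values from this article; the point is that the manifest states the desired end state, and Puppet picks the right mechanism (apt-get, yum, file copy) on each platform:

```puppet
# Example manifest sketch -- resource names and URLs are hypothetical.
# Ensure the sudo package is present, whatever the local package tool is.
package { "sudo":
    ensure => installed,
}

# Keep /etc/sudoers identical to the pristine copy on the puppetmaster.
file { "/etc/sudoers":
    owner  => "root",
    group  => "root",
    mode   => "0440",
    source => "puppet://server.example.com/files/sudoers",
}
```

If a client already has sudo installed and the file matches, Puppet does nothing; otherwise it installs the package or replaces the file.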
The Puppet system is split into two parts: a central server and the clients. The server runs a dæmon called puppetmaster. The clients run puppetd, which both connects to, and receives connections from, the puppetmaster. The manifest is written on the puppetmaster. If Puppet is used to manage the central server, it also runs the puppetd client.
The best way to begin with a configuration management system like Puppet is to start with a single client and a simple policy, and then roll it out to more clients and a more complex policy. To that end, start off by installing the Puppet software. Puppet is written in the Ruby scripting language, so you need to install that before you begin (Ruby is available as a package for most distributions).
Packages Are Good
Some people scoff at the idea of using a prebuilt binary package and prefer to build everything from source. That'll work, but it just doesn't scale. When you get further along with Puppet, you'll see how your manifest can manage packages with a single line. It's certainly possible to specify all the files you built, but then you're putting in a lot of needless effort.
You can (and should) build your own packages where needed. Packaging your own applications means you will build the software consistently, version after version, so that files will be in the same place and you won't accidentally drop features. Building your own packages also handles dependencies against other packages and keeps track of software versions.
In all likelihood, you will end up with your own package repository that holds your locally developed packages and any vendor packages that you've modified. You also will use Puppet to ensure that your clients are pointed at your repository.
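A manifest fragment for pointing clients at such a repository could look like the following sketch. The repository name, URL and path are placeholders, assuming a yum-based client:

```puppet
# Hypothetical example: point every yum-based client at an
# internal package repository. All names and URLs are placeholders.
file { "/etc/yum.repos.d/internal.repo":
    owner   => "root",
    group   => "root",
    mode    => "0644",
    content => "[internal]
name=Local package repository
baseurl=http://repo.example.com/fedora/\$releasever/
enabled=1
",
}
```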
Installing Puppet from a package also lets you manage the client's Puppet software through Puppet itself. Need to upgrade in order to get more features? Simply update your manifest.
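Rolling out a client upgrade might then be a one-line change in the manifest; the version number below is only an example:

```puppet
# Sketch: let Puppet manage its own client package.
# The version shown is an example -- bump it to roll out an upgrade,
# or use "latest" to track the repository.
package { "puppet":
    ensure => "0.24.8",
}
```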
If you choose to install from source, you need the facter and puppet tarballs from the author's site.
The facter tarball contains the Facter utility, which generates facts about the host system. Facts can be anything from the Linux distribution to whether the host is a virtual machine. The puppet tarball contains both puppetd and puppetmaster.
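Facts surface inside manifests as variables, which lets one policy adapt to different systems. A sketch, assuming the standard $operatingsystem fact and example package names:

```puppet
# Sketch: branch on a Facter fact inside a manifest.
# $operatingsystem is supplied by Facter on each client;
# the package names are illustrative.
case $operatingsystem {
    "Debian": { $apache = "apache2" }
    default:  { $apache = "httpd" }
}

package { $apache:
    ensure => installed,
}
```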
Untar the files (tar -xzf facter-latest.tgz and tar -xzf puppet-latest.tgz). Change to the newly created facter directory, and run ruby install.rb as root. Do the same in the puppet directory, which installs both the client and the server.
Finally, have puppetmasterd create the puppet user and group on the server, and give that user ownership of Puppet's working directory:

puppetmasterd --mkusers; chown puppet /var/puppet