Getting Started with Salt Stack: the Other Configuration Management System Built with Python
I was proudly wearing one of my Salt Stack shirts the other day when my daughter asked me, "What is Salt Stack?" I began by explaining the problem it solves. If you have multiple servers and want to do things to those servers, you need to log in to each one and perform those tasks one at a time. They could be fairly simple tasks, like restarting them or checking how long they have been running, or more complicated things, like installing software and then configuring that software based upon your own specific criteria. You also might want to add users and configure permissions for them.
What if you have ten or maybe even 100 servers though? Imagine logging in to each server individually, issuing the same commands on those 100 machines and then editing the configuration files on all 100 machines. What a pain! Just updating user password policies could take days, and introducing an error along the way would be quite likely. What if you could update all your servers at once just by typing one single command? The solution? Salt Stack!
Like my daughter, you may not have heard of Salt Stack (http://saltstack.org), but you might be familiar with Puppet (http://puppetlabs.com) and Chef (http://opscode.com). Salt is a similar tool, but it's written in Python, is relatively lightweight in terms of resources and requirements, and it's much easier to use (in my opinion). Salt uses the 0MQ (http://www.zeromq.org) communication layer, which makes it really fast. It also is entirely open source, licensed under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0), and boasts a vibrant and productive community.
There currently aren't any plans to release a crippled community version alongside a more feature-rich paid enterprise edition. With Salt, the version you get is the version everyone else gets, whether you've paid money or not. There are plans for an enterprise version, but it merely will be less bleeding-edge, will be subjected to more rigorous testing and quality assurance, and possibly will include training as well.
Tools like Salt, Puppet and Chef allow you to issue commands on multiple machines at once, and install and configure software too. Salt has two main aspects: configuration management and remote execution.
Salt Stack is a command-line tool. There isn't anything to click on with your mouse, and the feedback is presented as text returned on your screen. This is good. It keeps things lean, and most servers don't include a graphical user interface anyway. (Note: I use the terms Salt and Salt Stack interchangeably throughout this article. They mean the same thing in this context.)
In this article, I cover the two main tools included with Salt, although there isn't any clear delineation or any different way to interact with Salt depending on whether you want to work with configuration management or remote execution. The first is remote execution. This allows you to log in to a master machine and then execute commands on one or many other machines at once. With Salt, you simply type your command once on your master machine, and it executes on every machine, or even a targeted group of machines.
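To make that concrete, here is a sketch of what remote execution looks like from the master's command line. It assumes a running salt-master whose minions' keys have already been accepted; the 'web*' pattern is a hypothetical minion naming scheme used for illustration:

```shell
# All commands run on the master. test.ping and cmd.run are
# standard Salt execution modules.
salt '*' test.ping            # check that every minion responds
salt '*' cmd.run 'uptime'     # run an arbitrary shell command everywhere
salt 'web*' cmd.run 'whoami'  # target only minions whose ID matches web*
```

The first argument is a target pattern matched against minion IDs, so the same command scales from one machine to a thousand without changing anything but the pattern.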
Second, Salt is capable of storing configuration directives, and then instructing other machines to follow those directives by doing things like installing software, making configuration changes to the software, and then reporting back on the progress and success or failures of the installation.
Later, I demonstrate using Salt to install an additional package on one, or even 1,000 machines, and then configure that package by issuing just one command.
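As a rough preview, here is a minimal state file sketch, assuming the master's default file root of /srv/salt. The package name "vim" is purely illustrative and varies by distribution, and on older Salt releases the final command is state.sls rather than state.apply:

```shell
# Create a simple state on the master declaring that vim
# should be installed on any minion the state is applied to.
sudo mkdir -p /srv/salt
sudo tee /srv/salt/vim.sls > /dev/null <<'EOF'
vim:
  pkg.installed
EOF

# Apply it to every minion with a single command:
salt '*' state.apply vim      # on older releases: salt '*' state.sls vim
```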
Salt is a constantly evolving organism, so by the time you read this, some things may have changed. You always can find the most current documentation here: http://docs.saltstack.org/en/latest/index.html.
You do need a few prerequisites before installing Salt:
A Linux server.
sudo or root access to this server.
An Internet connection to this server.
Knowledge of your server's IP address (it can be a public or private address).
Even though Salt is designed to interact with multiple servers, for this tutorial, you actually can accomplish everything on one machine.
Use your package manager to install Salt, and follow the installation guide found in the Salt Docs for your particular distribution (http://docs.saltstack.org/en/latest/topics/installation/index.html). You'll also need sudo or root privileges to use Salt and install these packages.
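As an illustration only (the exact package names and commands depend on your distribution, so defer to the installation guide above), on a Debian- or Ubuntu-based system this might look like:

```shell
# On the machine that will act as the controller:
sudo apt-get install salt-master

# On each machine to be controlled (for this single-machine
# tutorial, install both packages on the same host):
sudo apt-get install salt-minion
```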
The relative benefits of using a package manager versus installing from source are a constant source of on-line and water-cooler debates. Depending on your distribution, you may have to install the packages from source instead of using your package manager.
If you'd like to install from source, you can find the latest Salt source files in the Salt Project's GitHub repository (https://github.com/saltstack/salt).
After following the instructions for installing both a salt-master and salt-minion, hopefully, everything went well and you didn't receive any errors. If things didn't work out quite right, support is generally available quickly from the Salt Stack mailing list (http://saltstack.org/learn/#tab-mailinglist) and the #salt IRC channel.
Configure Your Master and Minion(s)
The terms master and minion refer to the controller and the controlled. The master essentially is the central coordinator for all of the minions—similar to a client/server configuration where the master is the server, and the minion is the client.
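Here is a hedged sketch of that initial pairing, assuming default configuration paths and a master reachable at the hypothetical address 192.168.1.10:

```shell
# On each minion, tell it where the master lives, then restart
# the minion service so it picks up the change:
echo 'master: 192.168.1.10' | sudo tee -a /etc/salt/minion
sudo service salt-minion restart

# On the master, the minion's public key now shows up as pending;
# list the keys, then accept them:
sudo salt-key -L    # list accepted, rejected and pending keys
sudo salt-key -A    # accept all pending minion keys
```

Once a minion's key is accepted, the master can target that minion by its ID and communicate with it over an encrypted channel.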
Ben Hosmer is a DEVOP with RadiantBlue Technologies where he develops and maintains Drupal sites and administers various servers. He is an open-source advocate and helps spread the use of Linux and other open-source software within the US government.