Digging Through the DevOps Arsenal: Introducing Ansible

""

If you need to deploy hundreds of server or client nodes in parallel, whether on premises or in the cloud, and you need to configure each and every one of them, what do you do? How do you do it? Where do you even begin? Many configuration management frameworks exist to address most, if not all, of these questions and concerns. Ansible is one such framework.

You may have heard of Ansible already, but for those who haven't or don't know what it is, Ansible is a configuration management and provisioning tool. (I'll get to exactly what that means shortly.) It's very similar to other tools, such as Puppet, Chef and Salt.

Why use Ansible? Well, because it's simple to master. That's not to say the other tools are difficult, but Ansible is especially easy to pick up quickly, because it uses YAML to describe provisioning, configuration and deployment. Tasks are executed in the order you write them, and if Ansible hits a syntax error or a failing task, execution stops right there, which makes problems easier to pinpoint and debug.

Now, what's YAML? YAML (or YAML Ain't Markup Language) is a human-readable data-serialization language most commonly used for configuration files. You know how JSON is easier to read and write than XML? Well, YAML takes an even simpler approach than JSON. Here's an example of a typical YAML structure containing a list:


data:
    - var1:
        a: 1
        b: 2
    - var2:
        a: 1
        b: 2
        c: 3
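For comparison, here's an equivalent rendering of that same structure in JSON (my own translation, not from the YAML specification), which makes YAML's lighter syntax easy to appreciate:

```json
{
    "data": [
        { "var1": { "a": 1, "b": 2 } },
        { "var2": { "a": 1, "b": 2, "c": 3 } }
    ]
}
```

Note how every brace, bracket and quotation mark that JSON requires simply disappears in the YAML version, replaced by indentation.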

Now, let's swing back to Ansible. Ansible is an open-source automation platform freely available for Linux, macOS and BSD. Again, it's very simple to set up and use, without sacrificing any power. Ansible is designed to aid you in configuration management, application deployment and the automation of assorted tasks. It works great in the realm of IT orchestration, where you need to run specific tasks in sequence, creating a chain of events that must happen across several different servers or devices.

Here's a good example: say you have a group of web servers behind a load balancer, and you need to upgrade them. Taking them all down at once would mean an outage, so you upgrade one server at a time while the rest remain online to serve traffic. Ansible can handle such a complex task.

Ansible uses SSH to manage remote systems across the network, and those systems are required to have a local installation of not only SSH but also Python. That means you don't have to install and configure a client-server environment for Ansible.

Install Ansible

Although you can build the package from source (either from the public Git repository or from a tarball), most modern Linux distributions will have binary packages available in their local package repositories. You need to have Ansible installed on at least one machine (your control node). Remember, all that's required on the remote machines are SSH and Python.

To install on Red Hat or CentOS:


$ sudo yum install ansible

To install on Ubuntu:


$ sudo apt install ansible

Configure Your SSH Keys and Install Them on the Remote Hosts

Life will be much easier once you install the control node's public SSH key on each remote node as an authorized key. The purpose of this exercise is to let the control node reach every remote node without being prompted for a password at each login, which is what makes fully automated runs possible. This style of key-based authentication in SSH is also known as public key authentication.

Create an RSA key pair:


$ ssh-keygen -t rsa

For the sake of simplicity, let's accept the defaults for both the key's location and the passphrase. Press Enter at each prompt until you return to the shell.
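If you're scripting this step, ssh-keygen can also run non-interactively. The sketch below writes to a hypothetical /tmp path so it won't clobber your real keys; drop the -f and -N options to get the interactive defaults described above:

```shell
# Clean up any leftovers from a previous run of this demo.
rm -f /tmp/demo_key /tmp/demo_key.pub

# Generate a 4096-bit RSA key pair with an empty passphrase (-N "")
# at a throwaway location (-f); -q suppresses the usual chatter.
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/demo_key -q

# The private key and its .pub counterpart now exist side by side.
ls -l /tmp/demo_key /tmp/demo_key.pub
```

In a real deployment, you'd of course let the key land in ~/.ssh/id_rsa as the article describes.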

Once the SSH key pair has been created, copy the public key over to the remote server. In this exercise, do this from the control node to the remote node:


$ cat ~/.ssh/id_rsa.pub | ssh user@192.168.1.109 "cat >> ~/.ssh/authorized_keys"

Replace the user name and IP address as needed. You can make sure that everything works by SSHing to the remote node from your designated control node. If done correctly, you won't be prompted for a password, and you'll automatically log in to the shell of the remote machine.

Define the Remote Machines

Let's define which nodes are going to be the remote nodes from the control node. But before doing that, let's first relocate the default hosts configuration file:


$ sudo mv /etc/ansible/hosts /etc/ansible/hosts.orig

Create a new /etc/ansible/hosts file, and define a new group with a list of the IP addresses to be identified under that same group. In this case, let's define a group called web, and underneath it, let's have a single remote node, 192.168.1.109:


[web]
192.168.1.109

If you want to add more to this group, you would do so on a new line. For example:


[web]
192.168.1.109
192.168.1.110
192.168.1.111

If you want to test this on a local machine instead of two or more separate nodes, create a group called local, and add the localhost IP address:


[local]
127.0.0.1
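Groups can also coexist in a single inventory file, and Ansible accepts per-host variables on the same line as the host. As an illustrative sketch (the second and third web hosts are assumptions), the local entry can even skip SSH entirely via the built-in ansible_connection variable:

```
[web]
192.168.1.109
192.168.1.110

[local]
127.0.0.1 ansible_connection=local
```

With ansible_connection=local, commands aimed at that host run directly on the control node, with no SSH key setup required.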

Run Basic Tasks

Now that you've done all of this, you should be able to run tasks on the defined remote servers. But first, let's make sure that all is well. Remember, Ansible needs to be able to log in to the remote nodes via SSH without a password. If you haven't set that up already, refer to the SSH key section above. Run the following command:


$ ansible all -m ping

Your response should look something like this JSON output for all the nodes in all the groups defined in the /etc/ansible/hosts file:


192.168.1.109 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

If you want to run a command to all of your nodes under the group web, and you know that each node in that group is a Debian-based distribution, you would run the following:


$ ansible web -m shell -a 'cat /etc/debian_version'

Note: the -m option defines the module to be used. The first attempt used the ping module, and this example invokes the shell module to run a shell command.

The output of the above command will look similar to the following:


192.168.1.109 | CHANGED | rc=0 >>
buster/sid

Now let's say you need to run a command as a completely different, privileged user:


$ ansible web --become --become-user=root -m shell -a 'tail -n5 /var/log/syslog'

The --become option enables privilege escalation, and you can pair it with --become-user= to specify the desired user. The tail command above will output what you would typically expect:


192.168.1.109 | CHANGED | rc=0 >>
Jun 15 20:17:51 ubuntu-test systemd[1]: Started Session 9 of user petros.
Jun 15 20:17:52 ubuntu-test ansible-command: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=cat /etc/debian_version removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jun 15 20:25:12 ubuntu-test systemd[1]: Started Session 10 of user petros.
Jun 15 20:25:13 ubuntu-test ansible-command: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=tail -n5 /var/log/messages removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jun 15 20:25:34 ubuntu-test ansible-command: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=tail -n5 /var/log/syslog removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None

Create Playbooks

Using these basic functions, you easily can batch a few commands to various nodes across your network, but often you'll find yourself in need of running more than one or two shell commands. This is where Playbooks come into the picture. Playbooks run multiple tasks and provide more advanced functionality than your ad hoc commands.

Say you want to install a few packages when a remote node comes online. You'll need to create a YAML file to capture those actions. Using a text editor, create a file named package-install.yml with the following YAML structure:


---
- hosts: web
  tasks:
   - name: Install Make
     apt: pkg=make state=present update_cache=true
     become: yes
   - name: Install GCC
     apt: pkg=gcc state=present update_cache=true
     become: yes

You're essentially telling Ansible that you want to install both the Make and GCC packages (along with their dependencies) on all nodes in the group web. You're also telling Ansible that these two packages need to be installed as a privileged user, via the become: yes field.

Now it's time to kick off the Ansible Playbook. If you're not already executing as a privileged user, add the --ask-become-pass option, which prompts you for the privilege-escalation (by default, sudo) password used to execute the desired actions. This works only if all nodes in the group share the same user name and password:


$ ansible-playbook --ask-become-pass package-install.yml
BECOME password:

PLAY [web]
**************************************************************

TASK [Gathering Facts]
**************************************************************
ok: [192.168.1.109]

TASK [Install Make]
**************************************************************
 [WARNING]: Updating cache and auto-installing missing dependency: python-apt

changed: [192.168.1.109]

TASK [Install GCC]
**************************************************************
changed: [192.168.1.109]

PLAY RECAP
**************************************************************
192.168.1.109       : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now you should be starting to see some real power here: both Make and GCC have been installed to the nodes in the group.
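As the list of packages grows, repeating near-identical tasks gets tedious. One common refinement (a sketch of my own, not from the original playbook; the build_packages variable name is an assumption) is to hand the apt module a list via a variable, so all the packages install in a single task:

```yaml
---
- hosts: web
  become: yes
  vars:
    build_packages:
      - make
      - gcc
  tasks:
    - name: Install build tools
      apt:
        name: "{{ build_packages }}"
        state: present
        update_cache: true
```

Run it with the same ansible-playbook invocation shown above; adding another package later is then a one-line change to the vars list.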

Handlers

Ansible supports an event-handling system called handlers. A handler is much like a task and can accomplish pretty much anything a task can, but it runs only when another task notifies it. In other words, a handler takes action only when the event it's listening for fires.

Say your YAML file looks like the following:


---
- hosts: web
  tasks:
   - name: Install Apache
     apt: pkg=apache2 state=present update_cache=true
     become: yes
     notify:
      - Start Apache
  handlers:
   - name: Start Apache
     service: name=apache2 state=started

This instructs Ansible to run a task named "Install Apache", and once it completes, to notify a handler named "Start Apache", which starts the web service. It does so via the service module, which supports the typical start, stop, restart and reload commands. (I mentioned the concept of modules earlier; recall ping and shell.) The output of the above YAML structure should look something like this:


$ ansible-playbook --ask-become-pass package-install.yml
BECOME password:

PLAY [web]
**************************************************************

TASK [Gathering Facts]
**************************************************************
ok: [192.168.1.109]

TASK [Install Apache]
**************************************************************
changed: [192.168.1.109]

RUNNING HANDLER [Start Apache]
**************************************************************
ok: [192.168.1.109]

PLAY RECAP
**************************************************************
192.168.1.109      : ok=3    changed=1    unreachable=0    failed=0   skipped=0    rescued=0    ignored=0
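Handlers really shine when paired with configuration changes. In this sketch (my own example; the config file paths are placeholders), copying a new Apache configuration notifies a restart handler. Because a handler fires only when its notifying task reports a change, re-running the play with an unchanged file leaves the service untouched:

```yaml
---
- hosts: web
  tasks:
    - name: Deploy Apache config
      copy: src=files/apache2.conf dest=/etc/apache2/apache2.conf
      become: yes
      notify:
        - Restart Apache
  handlers:
    - name: Restart Apache
      service: name=apache2 state=restarted
      become: yes
```

This change-driven behavior is what keeps playbooks safe to run repeatedly.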

Summary

The examples here are quite small and limited. As you likely have guessed, you can add more tasks and notify more handlers from within a single YAML file; it doesn't need to be limited to just a few. It may take some time and trial and error to build up enough of a playbook library to handle every action in your automated environment. There is so much more that you can do with Ansible and so much more to cover. Although this guide provides a good foundation to get you started, I've barely scratched the surface of this extremely powerful configuration management framework.

Petros Koutoupis, LJ Editor at Large, is currently a senior performance software engineer at Cray for its Lustre High Performance File System division. He is also the creator and maintainer of the RapidDisk Project. Petros has worked in the data storage industry for well over a decade and has helped pioneer the many technologies unleashed in the wild today.
