In a previous tutorial, we discussed how to install the ANSIBLE software and learned some basic commands. In this guide, we will discuss ANSIBLE Playbooks, which are a way of creating automated scripts to configure client computers. We will assume that you have already configured an ANSIBLE Server/Control Node and a few Client Nodes.



CREATE A YAML FILE



The ANSIBLE playbook is a set of instructions that will be executed on a single host or a group of hosts. It is written as a YAML file with the file extension .yml.

YAML stands for YAML Ain't Markup Language.

YAML is human readable and easy to write, but you do have to be careful with syntax: an extra space or wrong indentation will cause problems.

A playbook is an ordered list of tasks, saved so you can run those tasks in that order repeatedly.

To create a Playbook file, use the command vi followed by the file name and the extension .yml

Ex: vi test.yml

This opens a YAML file called test.

In the example below we have 3 tasks:

1.) Install httpd - installs an Apache Server.

2.) Start the service - starts the httpd service.

3.) Create a file - creates a file called test5 in the /tmp directory.

Tags are also included in the Playbook.

If you have a large playbook, it may become useful to be able to run only a specific part of it rather than running everything in the playbook.

ANSIBLE supports a "tags:" attribute for this reason, as shown in the example below.
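Here is a minimal sketch of what such a playbook might look like. The target group application matches the inventory we build later in this guide, and the tag names (install, start, create) are illustrative assumptions, not part of the original example:

---
- name: Configure Apache and create a test file
  hosts: application            # group name from the inventory file
  become: yes                   # escalate privileges for package/service tasks
  tasks:
    - name: Installing httpd    # installs an Apache Server
      yum:
        name: httpd
        state: present
      tags: install

    - name: Start the service   # starts the httpd service
      service:
        name: httpd
        state: started
      tags: start

    - name: Create a test file  # creates a file called test5 in the /tmp directory
      file:
        path: /tmp/test5
        state: touch
      tags: create

With tags in place, you could run just one part of the playbook, for example ansible-playbook test.yml --tags install.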







Step 2:

Change Permissions.

For this example we will be using the root user, so run the command below to gain root access before generating the RSA Key Pair.

We could just as easily have used the ec2-user account:

$ sudo -i



Step 3:

Create An RSA Key Pair On The Control Node.

Before we can configure the Client Nodes and enable communication between the Control Node and the Client Nodes, we have to generate an RSA Key Pair.

We can create an RSA Key Pair by using the following command:

# ssh-keygen -t rsa





The ssh-keygen -t rsa command will generate the public/private key pair, followed by 3 prompts:

1.) What location to save the key to.

2.) An optional passphrase (for extra security).

3.) A second prompt to confirm the passphrase.
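An abbreviated session looks something like this; pressing Enter at each prompt accepts the default location and an empty passphrase:

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.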



Step 4:

Change To the SSH Directory

Use the cd ~/.ssh command below to change to the .ssh directory.

# cd ~/.ssh







The public key will be saved to /root/.ssh/id_rsa.pub





Step 5:

View The Contents Of The SSH Directory

Use the ls command below to view the contents of the .ssh directory.

# ls







After issuing the command, notice that there are 4 files in the directory:

1.) authorized_keys

2.) id_rsa

3.) id_rsa.pub

4.) known_hosts



Step 6:

Open The id_rsa.pub File

The id_rsa.pub file contains the public key we generated previously. Open the id_rsa.pub file by issuing the command below:

# vi id_rsa.pub





Step 7:

Copy The Key

Copy the entire key from the id_rsa.pub file, with no white space before or after the key, and save it to a file or the clipboard.

Save and close the file.
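As an alternative to opening the file in vi, you can simply print the key to the terminal and copy it from there:

# cat id_rsa.pub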



Step 8:

SSH Into The Client Node

Get the SSH connection string from the AWS console and SSH into the Client Node.
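The connection string will look something like the line below. The key file name and host name here are hypothetical placeholders; use the exact values from your own AWS console:

$ ssh -i "mykey.pem" ec2-user@ec2-3-17-167-81.us-east-2.compute.amazonaws.com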





Step 9:

Change Permissions And Update The Client Node

After logging into the Client Node, use $ sudo -i to gain root access, followed by # yum update -y to update the Client Node.



Step 10:

Change To The SSH Directory Of The Client Node

# cd ~/.ssh





Step 11:

List The Contents Of The SSH Directory

# ls





Step 12:

Edit The Authorized_Keys File

Open the authorized_keys file and paste in the contents of the id_rsa.pub file that you previously copied from the ANSIBLE Control Node.

At the end of the pasted key, change root@"ControlNodeName" to root@ipaddress, or if you are logged in as a different user such as ec2-user, change ec2-user@"ControlNodeName" to ec2-user@ipaddress.

The IP address will be the IP address of the Client Node you are currently logged into.
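The resulting line at the end of authorized_keys would look something like this (the key body is shortened here for illustration, and the IP address is one of the example addresses used later in this guide):

ssh-rsa AAAAB3NzaC1yc2E...rest of the key... root@3.17.167.81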

This is a one time configuration that you will perform for each Client Node you wish to manage.



Step 13:

Create An Inventory File

ANSIBLE has a default inventory file used to designate which servers it will be managing. It is located at /etc/ansible/hosts.

The inventory file can be configured in different formats, depending on which inventory plugins you have installed. In this example, the format for /etc/ansible/hosts is INI-like, which is one of the default formats for ANSIBLE. In the example below we have defined some groups, and in each group there are some servers to manage.

****Note:

Groups must be enclosed with brackets in the hosts file.

In the example below we have created 3 groups:
1.) [application]
2.) [databasegroup]
3.) [redhat8]
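A sketch of the corresponding entries in /etc/ansible/hosts, using the example IP addresses described below:

[application]
3.17.167.81

[databasegroup]
18.224.229.199

[redhat8]
3.17.167.81
18.224.229.199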


In the application group [application] we have 1 server, with an IP address of 3.17.167.81.

In the database group [databasegroup] we also have 1 server, with an IP of 18.224.229.199.

In the redhat 8 group [redhat8] we have 2 servers to manage, with the IP addresses 3.17.167.81 and 18.224.229.199.

Notice how the last group [redhat8] contains both of those IP addresses.

This is to illustrate that servers can be part of multiple groups.

For instance, a server could be both a web server and in a specific datacenter. For example, you could create groups that track:

What - An application or a microservice in Docker or Kubernetes.

For example: database servers, web servers, application servers etc.

Where - A specific datacenter or region.

For example: north, south, east, west.

When - Different stages of CI/CD, to avoid testing on production resources.

For example: dev, prod, test.





Having the hosts file makes testing and performing tasks on multiple Client Nodes easy.

****Note:

The hosts file is also called an inventory file.

You don't need to set up multiple servers or virtual machines. There is no need to install agents.

Once we have an inventory configured, we can start running tasks against the defined servers. ANSIBLE assumes you have SSH access to your servers, usually based on SSH keys. Make sure SSH is enabled in your EC2 security groups. ANSIBLE uses SSH, so the server it's on needs to be able to SSH into the inventory servers.

****Note:

Client Nodes can also be referred to as inventory servers.



Step 14:

Testing Connectivity

It's time to test the connectivity between the Control Node and the Client Nodes. ANSIBLE commands use modules to manage most of their tasks.

Modules can control system resources like services and packages, or handle executing system commands such as installing software or copying files.

The basic syntax of ANSIBLE commands is:
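In general form it looks like this, with the placeholders described below:

# ansible -m <module_name> -a "<arguments>" <group_name>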





In place of <group_name> you can use:

Any group name.

A single host name.

Multiple host names separated by a : (colon).

Multiple groups separated by a : (colon).

Multiple host names and multiple groups separated by a : (colon).

all, which is equivalent to all the groups and/or host names (everything).

****Note:

All of these should be located within the inventory file ("hosts").

The <arguments> portion is also optional.

Let's start with a simple command using the "ping" module, which is used to check the connectivity of hosts (servers, clients etc.):
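Following the syntax above:

# ansible -m ping application

A successful response resembles the following (abbreviated):

3.17.167.81 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}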





Where -m is the module flag, followed by the module name, "ping", followed by the group name "application", which resides in the inventory file.





In the previous example we only have a response from one Client Node.

This is because there is only one server listed under the group "application" within the inventory file called "hosts" in /etc/ansible.

Now let's try using the ping module with the "redhat8" group.
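Same module, different group:

# ansible -m ping redhat8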





Now notice how this time we are getting back successful responses from 2 different Client Nodes. That's because we had 2 different Client Nodes listed in the group "redhat8".

****Note:

If the group had 50 Client Nodes in it and 4 were unreachable, you would have gotten back 4 unsuccessful connection requests.



Step 15:

Yum Module

Now let's try another command, this time using the yum module to install an Apache Web Server on the Client Nodes in the application group.
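Using the same pattern as the ping example, but with the yum module and an argument string:

# ansible -m yum -a "name=httpd state=present" application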





Where -m is followed by our module name "yum".

-a represents an argument: "name=httpd state=present", followed by the group name called application.

name=httpd refers to the Apache Server package.

state=present will ensure that the package is present on all Client Nodes under the group application.





As you can see, the Apache Server was installed on all Client Nodes in the application group. Once again, since there is only one Client Node in the group, you only see it installed on one Client Node, with the IP address 3.17.167.81.

****Note:

When initially running the command, the output will appear in yellow with:

"changed": true,

followed by the results of the installation:

"results": [
    "Installed: httpd"
]

If the command is run again, the output will display in green with:

"changed": false,

"msg": "Nothing to do"





Step 16:

ANSIBLE-DOC

Ansible-doc displays information on modules installed in ANSIBLE libraries. It displays a brief listing of modules and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short "snippet" which can be pasted into a playbook.

Let's use the command ansible-doc -l

Where -l means:

List available plugins.
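As with the earlier commands, run it on the Control Node:

# ansible-doc -l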







There are a few thousand modules, so it could take you a while hitting Enter to either find the module you are looking for or to get to the end. To exit, press q.

Now let's try to find the yum module directly and also get its arguments displayed.
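Passing the module name directly to ansible-doc prints its full documentation, including the arguments:

# ansible-doc yum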







If we keep scrolling down, we can find the argument we called before when installing the Apache server.







That was a basic beginner's introduction to ANSIBLE Automation.

Look for an upcoming post where we will take a look at ANSIBLE Playbooks, which use scripting to automate tasks.