Ansible Snippets
- Bootstrapping
- Execution order
- Create a file without external template
- Creating inventory script
- Writing ansible modules in sh
- Writing Ansible Action Plugins
- Using Ansible playbooks through SSH bastion hosts/jump servers
- Report no changes when running a script
- Role File structure
- Disable gathering facts
- vars and defaults in roles
Bootstrapping
- Create a default config file:
ansible-config init --disabled > ansible.cfg
- Create a new Ansible role directory:
ansible-galaxy init --init-path roles _role-name_
This creates the directory role-name in the roles directory.
Execution order
- pre_tasks
- roles (in the order they are listed)
- tasks
- handlers (only if notified by a task and before post_tasks)
- post_tasks
Create a file without external template
Normally you can use the template module to send a Jinja2-templated file to the managed host. This requires that the Jinja2 template be stored in its own file. Sometimes I find it more convenient to place the template directly in the YAML file.
This can be achieved using the copy module and the content parameter. Example:
- name: Create a customized file
  copy:
    content: |
      Hello {{ who }},
      This file was created by Ansible.
    dest: /path/to/your/file.txt
  vars:
    who: "World"
Creating inventory script
When specifying an inventory in Ansible, you can point to an executable file, which will be executed to generate the inventory. The script must handle the following command line arguments:
- script --list
  Returns a JSON formatted inventory. Example:
  {
    "group001": {
      "hosts": ["host001", "host002"],
      "vars": { "var1": true },
      "children": ["group002"]
    },
    "group002": {
      "hosts": ["host003", "host004"],
      "vars": { "var2": 500 },
      "children": []
    },
    "_meta": {
      "hostvars": {
        "host001": { "var001": "value" },
        "host002": { "var002": "value" }
      }
    }
  }
  The output should be a JSON object containing all the groups to be managed as dictionary items. Each group's value should be either an object containing the list of hosts, any child groups, and optional group variables, or simply a list of hosts. Empty elements of a group can be omitted.
  An optional _meta block can be added to contain host-specific variables. If this is omitted, the --host command line argument is used to query host variables.
- script --host hostname
  Where hostname is a host from the --list output. The script should return either an empty JSON object, or a JSON dictionary containing variables specific to the host. Example:
  { "VAR001": "VALUE", "VAR002": "VALUE" }
Other arguments are allowed but ansible will not use them.
See Inventory scripts for more details.
You can use the ansible-inventory command to see what ansible will process for its inventory.
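Putting the protocol together, here is a minimal sketch of a static inventory script written in sh (the group, host, and variable names are placeholders, and the file name inventory.sh is made up for this example):

#!/bin/sh
# Minimal static inventory script (sketch). Make it executable and pass it
# to Ansible with -i.
case "$1" in
    --list)
        # Whole inventory, including a _meta block with host variables.
        cat <<'EOF'
{
  "group001": { "hosts": ["host001", "host002"], "vars": { "var1": true } },
  "_meta": { "hostvars": { "host001": { "var001": "value" } } }
}
EOF
        ;;
    --host)
        # Because _meta is provided above, this branch is normally not called;
        # return an empty JSON object to be safe.
        echo '{}'
        ;;
    *)
        echo '{}'
        ;;
esac

You can then inspect what Ansible sees with: ansible-inventory -i ./inventory.sh --list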
Writing ansible modules in sh
See Ansible Module architecture for more details.
This documents how to create old-style Ansible modules. These are less efficient but are easier to implement in shell script.
Ansible playbooks are meant to be declarative in nature, so for handling more complex tasks the recommendation is to write modules, which can then be used from a playbook. To create a module in shell script, you just need to create a file in your module path (ANSIBLE_LIBRARY). Input parameters are given in a file whose path is passed as "$1". This file is formatted so that it can be sourced:
. "$1"
The return code of the script is used to determine whether the module was successful or an error happened.
The output of the script must be in JSON format and should contain the following keys:
- changed : boolean indicating if changes were made.
- msg : optional informational message, particularly useful in an error condition.
- ansible_facts : dictionary containing facts that will be added to the playbook run. This is optional.
There are additional arguments sent to the script. These are internal Ansible arguments. Worth mentioning are:
- _ansible_no_log (boolean) : do not log output. Used to keep sensitive strings from logging.
- _ansible_debug (boolean) : debugging.
- _ansible_diff (boolean) : running in diff mode.
- _ansible_check_mode (boolean) : running in check mode.
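Putting the above together, here is a minimal sketch of an old-style module in sh. The "line" parameter and the target file /etc/motd are made up for illustration:

#!/bin/sh
# Minimal old-style module sketch: ensure a given line is present in /etc/motd.
# The "line" parameter name and the target file are illustrative only.

# Ansible writes the task parameters to a sourceable file and passes it as $1.
. "$1"

changed=false
if ! grep -qxF "$line" /etc/motd 2>/dev/null; then
    echo "$line" >> /etc/motd
    changed=true
fi

# Output must be a JSON object; the exit code signals success or failure.
printf '{"changed": %s, "msg": "motd checked"}\n' "$changed"
exit 0

Place the file in a directory listed in ANSIBLE_LIBRARY (or in a library/ directory next to the playbook) and call it like any other module.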
Check mode
If _ansible_check_mode is set to True, the user is running the playbook with the --check flag. Modules should only show that changes would be made, without making any actual changes.
More details here
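In the sh module sketch above, this means guarding the write behind the flag. Assuming, as described above, that the boolean arrives as True or False in the sourced arguments file, the write could be wrapped like this:

# Only change the system when not in check mode; still report what would change.
if ! grep -qxF "$line" /etc/motd 2>/dev/null; then
    if [ "$_ansible_check_mode" != "True" ]; then
        echo "$line" >> /etc/motd
    fi
    changed=true
fi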
Diff mode
If _ansible_diff is set to True, the user is running the playbook with the --diff flag. Modules that support this should add a key named diff with either:
- two keys, before and after, containing the contents before and after the change.
- a single prepared key, containing a list of textual descriptions of the changes to be made.
See article
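Continuing the sh sketch, the diff block can simply be included in the JSON output. The before_text and after_text variables below are placeholders for the old and new file contents, and would need to be JSON-escaped in a real module:

# Emit a diff block only when running with --diff.
if [ "$_ansible_diff" = "True" ]; then
    printf '{"changed": %s, "diff": {"before": "%s", "after": "%s"}}\n' \
        "$changed" "$before_text" "$after_text"
else
    printf '{"changed": %s, "msg": "motd checked"}\n' "$changed"
fi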
Writing Ansible Action Plugins
The modules described earlier run on the managed node. While they can return data to the control node, sometimes it is necessary to do some preparatory work on the control node before calling the module proper. This is done using Action Plugins.
Essentially, action plugins let you integrate local processing and local data with module functionality.
To create an action plugin, create a new class with ActionBase as the parent class:
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    pass
From there, call the original module using the _execute_module method. After successful execution of the module, you can modify the module return data.
module_return = self._execute_module(module_name='<NAME_OF_MODULE>',
                                     module_args=module_args,
                                     task_vars=task_vars, tmp=tmp)
A simple example template:
#!/usr/bin/python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase
from datetime import datetime


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        module_args = self._task.args.copy()
        module_return = self._execute_module(module_name='setup',
                                             module_args=module_args,
                                             task_vars=task_vars, tmp=tmp)
        result.update(module_return)
        return result
Transferring data from an ActionPlugin
Suppose you need to copy a large file from the control node to the managed node in an Action Plugin. To do this you need to declare in your class:
TRANSFERS_FILES = True
Next, in your action implementation, you can use this call to build a temporary file path:
tmp_src = self._connection._shell.join_path(self._connection._shell.tmpdir, 'archive.zip')
Then you can copy bytes:
self._transfer_data(tmp_src, bytes_object)
or copy a file:
self._transfer_file(local_file, tmp_src)
A complete example:
#!/usr/bin/python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.errors import AnsibleActionFail, AnsibleError
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):

    TRANSFERS_FILES = True

    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)

        # Get the arguments passed to the action plugin
        src = self._task.args.get('src', '')
        if len(src) == 0:
            raise AnsibleActionFail('Missing or empty src parameter', result=result)

        # Copy archive to remote/managed node:
        try:
            tmp_src = self._connection._shell.join_path(self._connection._shell.tmpdir, 'archive.zip')
            self._transfer_file(src, tmp_src)
        except AnsibleError as e:
            raise AnsibleActionFail(to_text(e))

        return result
Using Ansible playbooks through SSH bastion hosts/jump servers
There are two approaches:
Inventory vars
The first way to do it with Ansible is to describe how to connect through the proxy server in Ansible's inventory. This is helpful for a project that might be run from various workstations or servers without the same SSH configuration (the configuration is stored alongside the playbook, in the inventory).
Example Inventory:
[proxy]
bastion.example.com
[nodes]
private-server-1.example.com
private-server-2.example.com
private-server-3.example.com
[nodes:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -p 2222 -W %h:%p -q [email protected]"'
This sets up an SSH proxy through bastion.example.com on port 2222 (if using the default port, 22, you can drop the port argument). The -W argument tells SSH it can forward stdin and stdout through the host and port, effectively allowing Ansible to manage the node behind the bastion/jump server.
The important config line is ansible_ssh_common_args, which adds the relevant options to the ssh command that Ansible runs. Note that recent SSH versions can simply use the -J (ProxyJump) option instead:
ansible_ssh_common_args='-J [email protected]:2222'
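To verify that the proxying works, you can run a quick connectivity check from the control node (assuming the inventory above is saved as inventory.ini; the file name is only an example):

ansible nodes -i inventory.ini -m ping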
SSH config
The alternative, which would apply the proxy configuration to all SSH connections on a
given workstation, is to add the following configuration inside your ~/.ssh/config
file:
Host private-server-*.example.com
  ProxyJump user@bastion:2222
Ansible will automatically use whatever SSH options are defined in the user or global SSH config, so it should pick these settings up even if you don't modify your inventory.
This method is most helpful if you know your playbook will always be run from a server or workstation where the SSH config is present. Also, this applies to normal ssh invocations, so if you use ssh and related utilities directly, they will use the same configuration.
TCP Tunneling
These options assume that the bastion host has TCP tunneling/forwarding enabled. If your bastion host has this feature disabled, you can replace the ProxyJump directive with a ProxyCommand:
ProxyCommand ssh user@bastion nc %h %p
This replaces the -W option with the nc (netcat) command.
Source: https://www.jeffgeerling.com/blog/2022/using-ansible-playbook-ssh-bastion-jump-host
Report no changes when running a script
Running a script is always assumed to make changes.
Using scripts is not recommended because it is just as easy to convert the script into a proper Ansible module. Doing so also makes it possible to:
- Return ansible facts
- Report change status
- Support check and diff modes.
Regardless, this is an example:
tasks:
  - name: Exec sh command
    shell:
      cmd: "echo ''; exit 254;"
    register: result
    failed_when: result.rc != 0 and result.rc != 254
    changed_when: result.rc != 254
I have customized the command module and the script action plugin to simplify these three lines of code into a single line, so the previous example becomes:
tasks:
  - name: Exec sh command
    shell:
      cmd: "echo ''; exit 254;"
    no_change_rc: 254
Adding check_mode to a script
In addition, it is actually possible to support check_mode in a script. You need to:
- Pass the appropriate settings to your script to indicate that it is in check_mode. This can be done by adding a check for the ansible_check_mode variable in a Jinja2 template.
- Set up the task so that it will also execute in check_mode by adding check_mode: false to it.
- Example:
  - name: "Apply config to "
    command: |
      xop cfg --no-change-rc=127 {% if ansible_check_mode %}--dry-run{% endif %}
    register: res
    failed_when: res.rc != 0 and res.rc != 127
    changed_when: res.rc != 127
    check_mode: false
- In this example, the xop cfg command gets executed regardless of whether check_mode is on or off.
- The script is then passed the --dry-run option if ansible_check_mode is on, so the script will not make any actual changes to the system.
Role File structure
- roles
  - role name
    - tasks
      - main.yml : tasks file, can include smaller files if warranted
    - handlers
      - main.yml : handlers file
    - templates : files to use in the template resource
      - ntp.conf.j2 : templates end in .j2
    - files
      - bar.txt : files for use with the copy resource
      - foo.sh : script files for use with the script resource
    - vars
      - main.yml : variables associated with this role
    - defaults
      - main.yml : default lower priority variables for this role
    - meta
      - main.yml : role dependencies
    - library : roles can also include custom modules
    - module_utils : roles can also include custom module_utils
    - lookup_plugins : or other types of plugins, like lookup in this case
See: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_reuse_roles.html
Disable gathering facts
This is needed for managing hosts that do not have a Python interpreter by default, especially for scenarios where you need to bootstrap the Python interpreter.
There are different ways to do this.
- Disabling on a per-playbook basis. Example:
  # sample playbook
  - hosts: sites
    user: root
    tags:
      - configuration
    gather_facts: no
    tasks:
      - ....          # ... tasks to install python ...
      - ...
      # Optionally, call setup explicitly
      - setup
      # ... more tasks ...
- Using the environment variable ANSIBLE_GATHERING. This can be set to explicit. For example:
  ANSIBLE_GATHERING=explicit ansible-playbook playbook.yaml -i inventory.yaml
- From ansible.cfg:
  - Plays will gather facts by default, which contain information about the remote system.
  - Options:
    - smart - gather by default, but don't regather if already gathered
    - implicit - gather by default, turn off with gather_facts: False
    - explicit - do not gather by default, must say gather_facts: True
  - Example:
    gathering = explicit
vars and defaults in roles
In an Ansible role, there are two places where you can define variables: the defaults and vars directories. It was always confusing to me when to use one over the other.
defaults
- Contains default variables for the role.
- Variables in this directory have the lowest precedence, meaning they can be easily overridden by other variable sources (e.g., playbooks, command line).
- Use this for variables that can be set by users or other roles but need to be initialized with a default value.
- Example:
  # defaults/main.yml
  my_default_variable: "default_value"
vars
- Contains variables that are generally more static and not meant to be easily overridden.
- Variables in this directory have higher precedence than those in defaults (and than inventory variables), but lower than extra vars passed on the command line.
- Use this for essential role-specific variables that should rarely change.
- Example:
  # vars/main.yml
  my_static_variable: "static_value"
Best practices
- Use defaults/ for configurable options: place variables here if you expect them to be overridden by the user or other roles.
- Use vars/ for crucial settings: use this directory for variables that are critical to the role and not intended to be changed.
- Document variables: always document the purpose and acceptable values for your variables, especially in defaults/, since users might change them.
- Keep it simple: avoid overly complex variable hierarchies. Keep your role's variables understandable and maintainable.