Automating Server Setup with Ansible Playbooks
November 2025 · 6 min read

SSH-ing into servers and running commands one by one is fine for a single machine. When you manage a fleet of IoT backend nodes, it becomes unsustainable. Ansible lets you define your infrastructure as code — repeatable, version-controlled, and idempotent.
Why Ansible over Shell Scripts?
I started with Bash scripts. They worked — until they didn't. The problems:
- Not idempotent. Running the same script twice could break things (duplicate config entries, re-installed packages failing).
- Error handling was manual. Every command needed || exit 1, and it was easy to forget.
- No inventory management. Targeting different server groups meant maintaining multiple scripts.
Ansible solves all three. Modules are idempotent by default, error handling is built in, and the inventory system lets you target groups of hosts with one command.
Project Structure
ansible/
├── inventory/
│   ├── production.yml
│   └── staging.yml
├── roles/
│   ├── common/
│   │   └── tasks/main.yml
│   ├── docker/
│   │   └── tasks/main.yml
│   └── app-deploy/
│       ├── tasks/main.yml
│       └── templates/docker-compose.yml.j2
├── group_vars/
│   ├── all.yml
│   └── production.yml
├── playbooks/
│   ├── provision.yml
│   └── deploy.yml
└── ansible.cfg
Roles keep things modular. common handles base packages and security hardening. docker installs Docker and Compose. app-deploy pushes the application.
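The inventory files themselves aren't shown in the original setup; as a sketch, a YAML inventory matching this layout might look like the following. The hostnames and the ansible_user value are illustrative — the only name the playbooks below actually depend on is the backend group:

```yaml
# inventory/production.yml — illustrative sketch
all:
  children:
    backend:
      hosts:
        node1.example.com:
        node2.example.com:
      vars:
        ansible_user: deploy
```

With this in place, one command targets the whole group: ansible-playbook -i inventory/production.yml playbooks/provision.yml.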
The Provisioning Playbook
# playbooks/provision.yml
---
- name: Provision backend servers
  hosts: backend
  become: true
  roles:
    - common
    - docker
The common role handles the essentials:
# roles/common/tasks/main.yml
---
- name: Update apt cache
  apt:
    update_cache: true
    cache_valid_time: 3600

- name: Install base packages
  apt:
    name:
      - curl
      - git
      - ufw
      - fail2ban
      - unattended-upgrades
    state: present

- name: Configure UFW — allow SSH and HTTP(S)
  ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "22"
    - "80"
    - "443"

- name: Enable UFW
  ufw:
    state: enabled
    default: deny

- name: Disable root SSH login
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "^PermitRootLogin"
    line: "PermitRootLogin no"
  notify: restart sshd
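The last task notifies a restart sshd handler, which isn't shown. A minimal sketch of it, living in the role's handlers directory, could look like this — note that the service unit is sshd on most distributions but plain ssh on some Debian/Ubuntu versions, so check your target hosts:

```yaml
# roles/common/handlers/main.yml — minimal sketch
---
- name: restart sshd
  service:
    name: sshd   # may be "ssh" on some Debian/Ubuntu versions
    state: restarted
```

Handlers only fire when a notifying task actually reports a change, so sshd is restarted once per run at most — and only if the config really changed.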
Deploying the Application
The deploy playbook uses a Jinja2 template for docker-compose.yml, injecting environment-specific variables:
# roles/app-deploy/tasks/main.yml
---
- name: Create app directory
  file:
    path: /opt/app
    state: directory
    owner: deploy
    mode: "0755"

- name: Template docker-compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/app/docker-compose.yml
    owner: deploy
    mode: "0600"

- name: Pull latest images
  command: docker compose pull
  args:
    chdir: /opt/app

- name: Start services
  command: docker compose up -d --remove-orphans
  args:
    chdir: /opt/app
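The docker-compose.yml.j2 template itself isn't reproduced here; a minimal sketch might look like the following, assuming variables such as app_image, app_version, and app_port are defined in group_vars (all three names are illustrative, not from the original setup):

```yaml
# roles/app-deploy/templates/docker-compose.yml.j2 — illustrative sketch
services:
  app:
    image: "{{ app_image }}:{{ app_version | default('latest') }}"
    restart: unless-stopped
    ports:
      - "{{ app_port }}:8080"
```

Because the values are injected at deploy time, the same template serves both staging and production — only the group_vars differ.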
Managing Secrets with Ansible Vault
Secrets (database passwords, API keys) are encrypted with Ansible Vault:
# Encrypt a vars file
ansible-vault encrypt group_vars/production.yml

# Run a playbook with vault
ansible-playbook playbooks/deploy.yml --ask-vault-pass

# Or use a password file (for CI)
ansible-playbook playbooks/deploy.yml \
  --vault-password-file ~/.vault_pass
Vault-encrypted files can live in version control safely. The decryption key stays out of the repo.
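Besides encrypting whole files, Vault can encrypt single values with ansible-vault encrypt_string, which emits a !vault block you can paste into an otherwise-plaintext vars file. A sketch (the variable names are illustrative, and the ciphertext is elided):

```yaml
# group_vars/production.yml — mixing plaintext and vaulted values (sketch)
app_port: 8080
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...   # paste the output of: ansible-vault encrypt_string 'secret' --name db_password
```

This keeps non-secret variables readable in diffs while still protecting the sensitive ones.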
Things Worth Knowing
- Always use --check first. Dry-run mode shows what would change without changing anything. Invaluable before running against production.
- Tag your tasks. Adding tags: [deploy] lets you run just ansible-playbook deploy.yml --tags deploy instead of the whole playbook.
- Use ansible-lint. It catches common mistakes (deprecated modules, missing become, unused variables) before they reach a server.
- Keep roles small and focused. A role that does "everything" is just a glorified shell script. One role = one responsibility.
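The tagging tip above looks like this in practice — a sketch, reusing the app-deploy tasks from earlier:

```yaml
# Tagged tasks can be run in isolation with --tags (sketch)
- name: Template docker-compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/app/docker-compose.yml
  tags: [deploy]

- name: Start services
  command: docker compose up -d --remove-orphans
  args:
    chdir: /opt/app
  tags: [deploy]
```

Then ansible-playbook playbooks/deploy.yml --tags deploy runs only those tasks, skipping everything else in the play.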