Ansible Installation & EC2 Setup – My Practical Experience

 

After understanding Ansible modules and the overall architecture, the next thing I wanted to do was actually install Ansible and set up a working environment.
This is the part where theory meets hands-on practice — and honestly, this is where everything started making sense for me.

In this blog, I’ll share how I installed Ansible on a Linux EC2 instance and connected it with other EC2 servers to run automation.


Where I Installed Ansible

I installed Ansible on a Linux-based EC2 instance, which acted as my control node.
You can use:

  • Amazon Linux 2

  • Ubuntu

  • CentOS

All work fine.
I mostly used Amazon Linux 2 and Ubuntu in my practice.


🔧 Installing Ansible on Amazon Linux 2

These are the exact steps I followed:

1️⃣ Update the system

sudo yum update -y

2️⃣ Install Ansible using Amazon Extras

sudo amazon-linux-extras install ansible2 -y

3️⃣ Verify installation

ansible --version

Once I saw the version output, I knew Ansible was installed successfully.


🔧 Installing Ansible on Ubuntu (Alternative Method)

If you use Ubuntu as your controller:

sudo apt update -y
sudo apt install ansible -y
ansible --version

Both operating systems work smoothly.


Setting Up the Managed Nodes (Target Servers)

Once Ansible was installed on the controller, I launched additional EC2 instances to use as managed nodes.

To allow Ansible to connect:

  • I kept port 22 (SSH) open

  • I ensured the private key (.pem) file was on the controller machine

  • From my local machine, I copied the key to the controller using scp

Example:

scp -i mykey.pem mykey.pem ec2-user@<controller-public-ip>:/home/ec2-user/

After that, I set correct permissions:

chmod 400 mykey.pem
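One way to confirm the permissions actually took effect is to print the file's octal mode (SSH refuses to use a private key that other users can read):

```shell
# Lock the key down to owner read-only, then print its octal mode.
# stat -c '%a' is the GNU/Linux form of this check.
chmod 400 mykey.pem
stat -c '%a' mykey.pem    # prints 400 (owner read-only)
```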

Creating a Custom Inventory File

This is the file where I tell Ansible which servers to manage.

I created a file named inventory.ini:

[webservers]
web1 ansible_host=54.123.45.67 ansible_user=ec2-user ansible_ssh_private_key_file=mykey.pem
web2 ansible_host=54.123.55.21 ansible_user=ec2-user ansible_ssh_private_key_file=mykey.pem

What these mean:

  • ansible_host → EC2 public IP

  • ansible_user → default login user (ec2-user for Amazon Linux, ubuntu for Ubuntu)

  • ansible_ssh_private_key_file → key file path

This is where I finally understood how Ansible identifies and connects to servers.
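Since both hosts share the same user and key file, those variables can also be factored out into a group-level section instead of being repeated on every line. A sketch of the same inventory in that style (the IPs are the example placeholders from above):

```ini
[webservers]
web1 ansible_host=54.123.45.67
web2 ansible_host=54.123.55.21

[webservers:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=mykey.pem
```

Both forms behave the same; the group-vars form just scales better as you add hosts.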


Testing SSH Connectivity Using Ansible

Before running any automation, I tested connection using the ping module:

ansible webservers -i inventory.ini -m ping

A successful output looked like:

web1 | SUCCESS => {"changed": false, "ping": "pong"}
web2 | SUCCESS => {"changed": false, "ping": "pong"}

Seeing this “pong” output gave me confidence that everything was working correctly.
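Typing -i inventory.ini on every command gets repetitive. Ansible also reads an ansible.cfg from the current working directory, so a minimal config like this (assuming inventory.ini sits in the same directory) lets you drop the flag:

```ini
# ansible.cfg — picked up automatically from the working directory
[defaults]
inventory = inventory.ini
# Skip the interactive SSH fingerprint prompt on first connect
# (convenient for lab setups; weigh the security trade-off elsewhere)
host_key_checking = False
```

With this in place, the ping test becomes simply: ansible webservers -m ping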


📦 Final Setup Flow (My Practical Workflow)

✔ Ansible installed on EC2 controller

✔ Private key copied with correct permissions

✔ Inventory file created

✔ Ping test successful

After this setup, I was ready to:

  • run ad-hoc commands

  • write playbooks

  • deploy apps

  • automate server configuration

This was the point where my real Ansible learning started.


Quick Example After Setup

Here’s the first real command I ran:

ansible webservers -i inventory.ini -m apt -a "name=nginx state=present" --become

And within seconds, nginx was installed across all my servers — fully automated.
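Note that the apt module only works on Debian/Ubuntu targets. For Amazon Linux managed nodes (like the ec2-user hosts in the inventory above), the equivalent ad-hoc command would use the yum module instead — a sketch assuming the same inventory:

```shell
# Same nginx install, but for Amazon Linux targets, which use yum
# rather than apt. --become runs the task with sudo privileges.
# On Amazon Linux 2, nginx may first need enabling via amazon-linux-extras.
ansible webservers -i inventory.ini -m yum -a "name=nginx state=present" --become
```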

That felt like a true DevOps moment.


Final Thoughts

Setting up Ansible on a controller and connecting it to EC2 instances was a simple but powerful experience.
Once the SSH connection and inventory file were in place, everything else became very easy.

This setup is the foundation for:

  • ad-hoc commands

  • playbooks

  • inventory configurations

  • roles

  • automation workflows

In the next blog, I’ll write about:

👉 Ansible Ad-Hoc Commands
(the fastest way to run automation without creating playbooks)
