Configure Kubernetes Multi-Node Cluster Using Ansible

Pawan Kumar
5 min read · Feb 10, 2021

Hello guys! As we all know, Kubernetes and Ansible play a vital role in the industry these days: Kubernetes for managing container technologies like Docker, and Ansible for automating the configuration of any environment. This article is about configuring a Kubernetes (k8s) multi-node cluster using Ansible, so let’s start..

Pre-Requisites:

  1. Ansible installed on your system, which will act as the controller node.
  2. An AWS account and the AWS CLI installed on your system.
  3. The AWS CLI configured with your aws-access-key and aws-secret-key using the command aws configure, as shown below.
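For reference, a typical aws configure session looks like this (the key values and region below are placeholders, not real credentials):

    $ aws configure
    AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
    AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Default region name [None]: ap-south-1
    Default output format [None]: json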

Steps to configure:

  1. First we will launch 3 AWS EC2 instances of type Amazon Linux using an Ansible playbook.
  2. This playbook will also update the inventory, creating two host groups named master and slave.
  3. Finally, we will configure the Kubernetes cluster with a playbook that runs two roles: one for the Kubernetes master node and one for the slave (worker) nodes.

So let’s start…

Configuration:

Create a workspace that will contain..

▪️An inventory file named hosts.

▪️An Ansible configuration file named ansible.cfg.

▪️Two roles, one for the master node and one for the slave (worker) nodes, named k8s-master and k8s-slave respectively.

▪️Two playbooks, one for launching the AWS instances and one for running both roles, named aws_launch.yml and playbook.yml respectively.

You can see the same in the screenshot below.

[Screenshot: Workspace]

⚫️ Configure ansible.cfg as below..

Put your AWS private key in the /root directory.

[Screenshot: ansible.cfg]
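Since the screenshot isn’t reproduced here, this is a minimal sketch of what an ansible.cfg for this setup usually contains; the key file name mykey.pem is a placeholder for your own key:

    [defaults]
    inventory = hosts
    host_key_checking = False
    remote_user = ec2-user
    private_key_file = /root/mykey.pem

    [privilege_escalation]
    become = True
    become_method = sudo
    become_user = root
    become_ask_pass = False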

⚫️ Now create the two roles using the commands below..

ansible-galaxy role init k8s-master for k8s-master configuration.

ansible-galaxy role init k8s-slave for k8s-slave configuration.
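Each ansible-galaxy role init command scaffolds the standard role skeleton, which is the layout the next steps assume:

    k8s-master/
    ├── README.md
    ├── defaults/main.yml
    ├── files/
    ├── handlers/main.yml
    ├── meta/main.yml
    ├── tasks/main.yml
    ├── templates/
    ├── tests/
    └── vars/main.yml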

⚫️ For now, don’t write anything in the inventory file (hosts); it will be updated dynamically after the AWS instances are launched.

⚫️ Now let’s launch the AWS instances using an Ansible playbook named aws_launch.yml..

This playbook runs with localhost as its hosts; write the YAML file as shown below..

[Screenshots: aws_launch.yml]
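Since the playbook itself lives in the screenshots, here is a minimal sketch of the idea, assuming the classic ec2 module that was current at the time of writing; the key pair, AMI ID, region, and security group are placeholders you must replace with your own values:

    - hosts: localhost
      connection: local
      vars:
        key_name: mykey                    # placeholder: your EC2 key pair name
        instance_type: t2.micro
        image_id: ami-0e306788ff2473ccb    # placeholder: an Amazon Linux 2 AMI ID
        region: ap-south-1                 # placeholder: your region
        sg_name: k8s-sg                    # placeholder: your security group
      tasks:
        - name: Launch the master instance
          ec2:
            key_name: "{{ key_name }}"
            instance_type: "{{ instance_type }}"
            image: "{{ image_id }}"
            region: "{{ region }}"
            group: "{{ sg_name }}"
            wait: yes
            count: 1
            instance_tags:
              Name: k8s-master
          register: master_out

        - name: Launch the two slave instances
          ec2:
            key_name: "{{ key_name }}"
            instance_type: "{{ instance_type }}"
            image: "{{ image_id }}"
            region: "{{ region }}"
            group: "{{ sg_name }}"
            wait: yes
            count: 2
            instance_tags:
              Name: k8s-slave
          register: slave_out

        - name: Rewrite the inventory file with the new public IPs
          copy:
            dest: hosts
            content: |
              [master]
              {{ master_out.instances[0].public_ip }}

              [slave]
              {% for i in slave_out.instances %}
              {{ i.public_ip }}
              {% endfor %}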

Now run aws_launch.yml with the command..

ansible-playbook aws_launch.yml

[Screenshot: running aws_launch.yml]

Now, as you can see, 3 AWS instances have been launched in my AWS console..

[Screenshot: AWS console after running the playbook]

The playbook also updates the inventory file, as you can see below..

[Screenshot: Inventory file]
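After the run, the hosts file holds the two host groups with the fresh public IPs; the addresses below are illustrative:

    [master]
    13.233.xx.xx

    [slave]
    65.0.xx.xx
    52.66.xx.xx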

⚫️ Now let’s write the tasks for the k8s-master role. Go to the k8s-master directory.

Using command: cd k8s-master

Define a variable named cidr in vars/main.yml.
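So k8s-master/vars/main.yml contains just the pod network range that kubeadm and flannel will use:

    cidr: 17.1.0.0/16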

Download the flannel manifest and put it into the templates directory.

Flannel playbook:- Download link

After downloading this file, edit it and replace the part shown below:

[Screenshot: kube-flannel.yml]
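The edit in the screenshot most likely swaps the default pod network in the ConfigMap’s net-conf.json section for the role’s cidr variable, so Ansible’s template module can fill it in. A sketch of the edited excerpt, assuming a standard kube-flannel.yml:

    # excerpt of templates/kube-flannel.yml after the edit
      net-conf.json: |
        {
          "Network": "{{ cidr }}",
          "Backend": {
            "Type": "vxlan"
          }
        }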

Now go to the tasks directory and open main.yml to write the tasks:

[Screenshots: k8s-master tasks/main.yml]
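The tasks themselves are in the screenshots, so here is a condensed sketch of the standard kubeadm-based master setup they follow on Amazon Linux; treat it as an outline under those assumptions, not the author’s exact file:

    # k8s-master/tasks/main.yml (sketch)
    - name: Install docker
      package:
        name: docker
        state: present

    - name: Add the Kubernetes yum repository
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: no

    - name: Install kubeadm, kubelet, kubectl and iproute-tc
      package:
        name: [kubeadm, kubelet, kubectl, iproute-tc]
        state: present

    - name: Start and enable docker and kubelet
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      loop: [docker, kubelet]

    - name: Switch docker to the systemd cgroup driver (expected by kubelet)
      copy:
        dest: /etc/docker/daemon.json
        content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'

    - name: Restart docker to apply the cgroup driver
      service:
        name: docker
        state: restarted

    - name: Pull the control-plane images
      command: kubeadm config images pull

    - name: Initialise the cluster with our pod network CIDR
      command: kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

    - name: Set up kubeconfig for the root user
      shell: mkdir -p $HOME/.kube && cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

    - name: Copy the edited flannel manifest, filling in the cidr variable
      template:
        src: kube-flannel.yml
        dest: /root/kube-flannel.yml

    - name: Apply flannel
      command: kubectl apply -f /root/kube-flannel.yml

    - name: Generate the join command for the workers
      command: kubeadm token create --print-join-command
      register: join_cmd

    - name: Stash the join command where the slave play can read it
      add_host:
        name: join_holder
        join_command: "{{ join_cmd.stdout }}"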

⚫️ Now let’s write the tasks for the k8s-slave role. Go back to the main workspace and then into the k8s-slave directory.

Using commands: first cd .., then cd k8s-slave

Write main.yml under the tasks directory:

[Screenshots: k8s-slave tasks/main.yml]
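The slave tasks mirror the installation steps from the master role and then join the cluster; again a sketch, reusing the join command stashed by the master role above:

    # k8s-slave/tasks/main.yml (sketch)
    # The docker/kubeadm installation tasks are the same as in the master
    # role (install docker, add the Kubernetes repo, install kubeadm,
    # kubelet, kubectl and iproute-tc, enable the services), followed by:

    - name: Load the br_netfilter kernel module
      command: modprobe br_netfilter

    - name: Let iptables see bridged traffic
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: "1"
        state: present

    - name: Join the cluster using the command generated on the master
      command: "{{ hostvars['join_holder']['join_command'] }} --ignore-preflight-errors=all"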

⚫️ Now it’s time to write the final playbook, named playbook.yml, which will run both roles one after another.

So go back to the main workspace using the command cd ..

Create a playbook named playbook.yml and open it for editing:

[Screenshot: playbook.yml]
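Based on the description above, playbook.yml simply maps each host group to its role; a sketch:

    - hosts: master
      roles:
        - k8s-master

    - hosts: slave
      roles:
        - k8s-slave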

Now everything is in place; we just have to run the main playbook, playbook.yml,

using the command: ansible-playbook playbook.yml

[Screenshot: Running playbook.yml]

The final status after running the playbook can be seen below..

[Screenshot: Final status]

Now connect to the master node using SSH, then switch to the root user.

[Screenshot: master node]

Check the number of nodes present in the cluster using the command:

kubectl get nodes

So, as you can see below, there are 3 nodes, one of which is the master itself.

[Screenshot: Checking nodes]

Now let’s create a deployment and expose it to the public using the following command:

kubectl create deploy mydep --image=vimal13/apache-webserver-php

I’m using vimal13/apache-webserver-php as the image here.

Then expose it using the command:

kubectl expose deploy mydep --type=NodePort --port=80

Check the deployments and pods using the commands:

kubectl get deploy and kubectl get pods

Also check the service created for the deployment after exposing it, and note the port number:

kubectl get svc

[Screenshot: Running services]

Now, through any public IP from the cluster, we can see the page deployed in the pod:

Copy any one IP and, in a new browser tab, open that IP with the port number obtained from the kubectl get svc command.

[Screenshot: Web page served from the pod]

Now, if you check, the pod has been launched on the node whose private IP is 172.31.5.226,

which is the private IP of slave1 (worker node 1).

The pod is also accessible through any other IP of the cluster besides slave1’s. In the screenshot above you can see slave2’s public IP, 52.66.236.254; let’s try that IP with the same port number.

[Screenshot: Accessing the page via slave2’s public IP]

Github source code:- Source Code

Roles On Ansible Galaxy:-

▪️Master-role:- k8s-master

▪️Slave-role:- k8s-slave

▪️Aws-Role:- Aws Provision

Connect with me on LinkedIn for more updates.

Thank You! 😊
