Configure Kubernetes Multi-Node Cluster Using Ansible
Hello everyone! As we all know, Kubernetes and Ansible play a vital role in the industry these days: Kubernetes for managing container technology like Docker, and Ansible for automating the configuration of any environment. This article walks through configuring a Kubernetes (k8s) multi-node cluster using Ansible, so let's start.
Pre-Requisite:
- Ansible installed on your system, which will act as the controller node.
- An AWS account and the AWS CLI installed on your system.
- The AWS CLI configured with your aws-access-key and aws-secret-key using the command aws configure.
Steps to configure:
- First we will launch 3 AWS EC2 instances running Amazon Linux using an Ansible playbook.
- The same playbook will update the inventory with two host groups, named master and slave.
- Finally we will configure the Kubernetes cluster with a playbook that runs two roles: one for the Kubernetes master node and one for the slave (worker) nodes.
So let’s start…
Configuration:
Create a workspace that will contain:
▪️An inventory file named hosts.
▪️An Ansible configuration file named ansible.cfg.
▪️Two roles, one for the master node and one for the slave (worker) node, named k8s-master and k8s-slave respectively.
▪️Two playbooks, one for launching the AWS instances and one for running both roles, named aws_launch.yml and playbook.yml respectively.
You can see the same in the screenshot below.
⚫️ Configure ansible.cfg as shown below.
Put your AWS key in the /root directory.
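The exact ansible.cfg from the screenshot is not reproduced here, but a minimal sketch along these lines works; the workspace path and the key file name mykey.pem are assumptions, so adjust them to your setup.

[defaults]
# Path to the inventory file inside this workspace (assumption: adjust to your path)
inventory = /root/k8s-workspace/hosts
# Default SSH user for Amazon Linux AMIs
remote_user = ec2-user
# The AWS private key you placed in /root (file name is an assumption)
private_key_file = /root/mykey.pem
# Skip SSH fingerprint prompts for freshly launched instances
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root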
⚫️ Now create the two roles using the commands:
ansible-galaxy role init k8s-master for the k8s-master configuration.
ansible-galaxy role init k8s-slave for the k8s-slave configuration.
⚫️ For now, don't write anything in the inventory file hosts, as it will be updated dynamically after the AWS instances are launched.
⚫️ Now let's launch the AWS instances using the Ansible playbook aws_launch.yml.
This playbook runs with localhost as its hosts; write the YAML file as shown below.
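The original playbook is in the screenshot; as a hedged sketch, something like the following works using the amazon.aws.ec2_instance module. The region, AMI ID, key pair name, and security group are assumptions, and the inventory-update step may differ from the author's version.

- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    region: ap-south-1           # assumption: use your own region
    ami: ami-0e306788ff2473ccb   # assumption: an Amazon Linux 2 AMI ID valid in that region
    key: mykey                   # assumption: your existing EC2 key pair name
    sg: k8s-sg                   # assumption: a security group allowing SSH and cluster traffic
  tasks:
    - name: Launch the master instance
      amazon.aws.ec2_instance:
        name: k8s-master
        image_id: "{{ ami }}"
        instance_type: t2.micro
        key_name: "{{ key }}"
        security_group: "{{ sg }}"
        region: "{{ region }}"
        state: running
        wait: yes
      register: master_out

    - name: Launch the two slave instances
      amazon.aws.ec2_instance:
        name: "k8s-slave-{{ item }}"
        image_id: "{{ ami }}"
        instance_type: t2.micro
        key_name: "{{ key }}"
        security_group: "{{ sg }}"
        region: "{{ region }}"
        state: running
        wait: yes
      loop: [1, 2]
      register: slave_out

    - name: Write the master and slave groups into the hosts inventory
      copy:
        dest: ./hosts
        content: |
          [master]
          {{ master_out.instances[0].public_ip_address }}

          [slave]
          {% for result in slave_out.results %}
          {{ result.instances[0].public_ip_address }}
          {% endfor %}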
Now run aws_launch.yml with the command:
ansible-playbook aws_launch.yml
As you can see, the 3 AWS instances are launched in my AWS console.
This also updates the inventory file, as you can see below.
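After the play finishes, the hosts file contains the two groups with the public IPs of the new instances, along these lines (the IPs shown here are placeholders for whatever your run produces):

[master]
<master-public-ip>

[slave]
<slave1-public-ip>
<slave2-public-ip>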
⚫️ Now let's write the tasks for the k8s-master role. Go to the k8s-master directory
using the command: cd k8s-master
Define a variable named cidr in vars/main.yml
as cidr: 17.1.0.0/16
This CIDR will be used as the pod network for flannel.
Download the flannel manifest file and put it into the templates directory.
Flannel manifest:- Download link
After downloading the file, edit it and replace the part shown below:
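The piece to change is the net-conf.json block inside the kube-flannel ConfigMap; replace the default pod network with the cidr variable so the template module substitutes it, roughly like this:

  net-conf.json: |
    {
      "Network": "{{ cidr }}",
      "Backend": {
        "Type": "vxlan"
      }
    }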
Now go to the tasks directory and open main.yml to write the tasks:
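The full task list is in the screenshot; as a hedged sketch for Amazon Linux, the master role looks roughly like the following. The legacy Google yum repo URL, the preflight errors ignored for a t2.micro, and the cgroup-driver tweak are assumptions that may differ from the author's version; the join_command registered at the end is consumed later by the slave role.

# tasks file for k8s-master (sketch)
- name: Install docker
  package:
    name: docker
    state: present

- name: Start and enable the docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Add the Kubernetes yum repository (legacy Google repo used at the time)
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install kubeadm, kubelet and kubectl
  package:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present

- name: Enable the kubelet service
  service:
    name: kubelet
    enabled: yes

- name: Pull the control plane images
  command: kubeadm config images pull

- name: Switch docker to the systemd cgroup driver expected by kubeadm
  copy:
    dest: /etc/docker/daemon.json
    content: '{"exec-opts": ["native.cgroupdriver=systemd"]}'

- name: Restart docker to pick up the cgroup driver
  service:
    name: docker
    state: restarted

- name: Install iproute-tc (required by the kubeadm preflight checks)
  package:
    name: iproute-tc
    state: present

- name: Initialise the control plane with the pod network CIDR from vars
  command: kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU,Mem

- name: Set up the kubeconfig for the root user
  shell: mkdir -p /root/.kube && cp -f /etc/kubernetes/admin.conf /root/.kube/config

- name: Copy the edited flannel manifest from templates to the node
  template:
    src: kube-flannel.yml
    dest: /root/kube-flannel.yml

- name: Apply the flannel network add-on
  command: kubectl apply -f /root/kube-flannel.yml

- name: Generate the join command for the worker nodes
  command: kubeadm token create --print-join-command
  register: join_command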
⚫️ Now let's write the tasks for the k8s-slave role. Go back to the main workspace and then into the k8s-slave directory
using the commands: first cd .. then cd k8s-slave
Write the YAML in main.yml under the tasks directory, as shown below.
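Again as a hedged sketch rather than the author's exact file, the slave role installs the same packages and then joins the cluster with the join command registered on the master (pulled out of hostvars, which assumes the master play runs first and the group is named master as in the hosts file):

# tasks file for k8s-slave (sketch)
- name: Install docker
  package:
    name: docker
    state: present

- name: Start and enable the docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Add the Kubernetes yum repository (legacy Google repo used at the time)
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install kubeadm, kubelet and kubectl
  package:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present

- name: Enable the kubelet service
  service:
    name: kubelet
    enabled: yes

- name: Switch docker to the systemd cgroup driver
  copy:
    dest: /etc/docker/daemon.json
    content: '{"exec-opts": ["native.cgroupdriver=systemd"]}'

- name: Restart docker to pick up the cgroup driver
  service:
    name: docker
    state: restarted

- name: Install iproute-tc (required by the kubeadm preflight checks)
  package:
    name: iproute-tc
    state: present

- name: Allow bridged traffic to pass through iptables
  shell: echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

- name: Join this node to the cluster using the command generated on the master
  command: "{{ hostvars[groups['master'][0]]['join_command']['stdout'] }}"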
⚫️ Now it's time to write the final playbook, named playbook.yml, which will run both roles one after the other.
So go back to the main workspace using the command cd ..
Create a playbook named playbook.yml and open the file to edit.
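A minimal sketch of playbook.yml, assuming the group names master and slave from the generated hosts file, is simply two plays that apply one role each:

- hosts: master
  roles:
    - k8s-master

- hosts: slave
  roles:
    - k8s-slave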
Now everything is in place; just run the main playbook, playbook.yml,
using the command: ansible-playbook playbook.yml
The final status after running the playbook can be seen below.
Now connect to the master node using SSH, then switch to the root user.
Check the nodes present in this cluster using the command:
kubectl get nodes
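For example (the key file name and master IP below are placeholders, not values from the original write-up):

ssh -i mykey.pem ec2-user@<master-public-ip>   # log in to the master node
sudo su - root                                 # switch to the root user
kubectl get nodes                              # list all nodes in the cluster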
So as you can see below, there are 3 nodes, one of which is the master itself.
Now let's create a deployment and expose it publicly using the following commands:
kubectl create deploy mydep --image=vimal13/apache-webserver-php
I'm using vimal13/apache-webserver-php as the image here.
Then expose it using the command:
kubectl expose deploy mydep --type=NodePort --port=80
Check the deployments and pods using the commands:
kubectl get deploy
kubectl get pods
Also check the service created for the deployment after exposing it, and note the node port number:
kubectl get svc
Now, using any public IP from the cluster, we can see the page deployed in the pod.
Copy any one IP and, in a new browser tab, open that IP with the port number obtained from the kubectl get svc command.
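From the command line the same check looks like this, with the node port taken from the kubectl get svc output:

curl http://<any-node-public-ip>:<node-port>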
Now if you check, the pod is launched on one of the nodes, whose private IP is 172.31.5.226,
which is the private IP of slave1 (worker node 1).
When we try to access the pod using any other IP in the cluster, not just slave1's, it is accessible too, because a NodePort service opens the same port on every node and kube-proxy routes the traffic to the pod. As you can see in the screenshot above, slave2's public IP is 52.66.236.254; let's try with this IP and the same port number.
GitHub source code:- Source Code
Roles on Ansible Galaxy:-
▪️Master role:- k8s-master
▪️Slave role:- k8s-slave
▪️AWS role:- Aws Provision
Connect with me on LinkedIn for more updates.