Kubernetes and Its Use Cases

Pawan Kumar
7 min read · Dec 25, 2020

What is Kubernetes?

Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management.

It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a “Dockershim”; however, the shim has since been deprecated in favor of directly interfacing with containerd or another CRI-compliant runtime.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

How Kubernetes Works!!

Kubernetes defines a set of building blocks (“primitives”), which collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory or custom metrics. Kubernetes is loosely coupled and extensible to meet different workloads. This extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as extensions and containers that run on Kubernetes. The platform exerts its control over compute and storage resources by defining resources as Objects, which can then be managed as such.
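
To make “defining resources as objects” a little more concrete, here is a minimal, hypothetical example: every Kubernetes object is declared with the same basic shape (apiVersion, kind, metadata, and usually a spec) and submitted to the API server, which stores and manages it from then on. The namespace name and label below are placeholders, not taken from the article.

```yaml
# Hypothetical Kubernetes object: a Namespace declared in the standard
# apiVersion / kind / metadata shape shared by all Kubernetes resources.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    team: platform
```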

Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.

Kubernetes Architecture

Control plane

The Kubernetes master is the main controlling unit of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each running as its own process, that can run either on a single master node or across multiple masters to support high-availability clusters. The various components of the Kubernetes control plane are as follows:

  • etcd: etcd is a persistent, lightweight, distributed, key-value data store developed by CoreOS that reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. The Kubernetes API Server uses etcd’s watch API to monitor the cluster and roll out critical configuration changes or simply restore any divergences of the state of the cluster back to what was declared by the deployer. As an example, if the deployer specified that three instances of a particular pod need to be running, this fact is stored in etcd. If it is found that only two instances are running, this delta will be detected by comparison with etcd data, and Kubernetes will use this to schedule the creation of an additional instance of that pod.
  • API server: The API server is a key component and serves the Kubernetes API using JSON over HTTP, which provides both the internal and external interface to Kubernetes. The API server processes and validates REST requests and updates the state of API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes.
  • Scheduler: The scheduler is the pluggable component that selects which node an unscheduled pod (the basic entity managed by the scheduler) runs on, based on resource availability. The scheduler tracks resource use on each node to ensure that workload is not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints and policy directives such as quality-of-service, affinity/anti-affinity requirements, data locality, and so on. In essence, the scheduler’s role is to match resource “supply” to workload “demand”; a sketch of a pod spec that declares such constraints appears after this list.
  • Controller manager: The controller manager is a process that manages a set of core Kubernetes controllers. One kind of controller is a Replication Controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. It also handles creating replacement pods if the underlying node fails. Other controllers that are part of the core Kubernetes system include a DaemonSet Controller for running exactly one pod on every machine (or some subset of machines), and a Job Controller for running pods that run to completion, e.g. as part of a batch job. The set of pods that a controller manages is determined by label selectors that are part of the controller’s definition.
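
As a loose sketch of the constraints the scheduler weighs, the hypothetical pod spec below declares explicit CPU and memory requests and limits plus a node-affinity rule; the scheduler will only place the pod on a node with enough unreserved capacity and a matching disktype=ssd label. All names and values here are illustrative assumptions, not taken from a real cluster.

```yaml
# Hypothetical pod spec illustrating scheduler inputs: resource requests/limits
# plus a node-affinity constraint requiring nodes labelled disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: web
      image: nginx:1.19
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```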

Nodes

A Node, also known as a Worker or a Minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime such as Docker, as well as the components described below, which communicate with the primary to configure networking for these containers.

  • Kubelet: Kubelet is responsible for the running state of each node, ensuring that all containers on the node are healthy. It takes care of starting, stopping, and maintaining application containers organized into pods as directed by the control plane.

The kubelet monitors the state of a pod, and if a pod is not in the desired state, it is re-deployed to the same node. Node status is relayed to the primary every few seconds via heartbeat messages. Once the primary detects a node failure, the Replication Controller observes this state change and launches pods on other healthy nodes.

  • Kube-proxy: Kube-proxy is responsible for routing traffic to the appropriate container based on the IP address and port number of the incoming request. It supports the Service abstraction along with other networking operations; a sketch of a Service manifest appears after this list.
  • Container runtime: A container resides inside a pod. The container is the lowest level of a micro-service, which holds the running application, libraries, and their dependencies. Containers can be exposed to the world through an external IP address. Kubernetes has supported Docker containers since its first version.
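
To make the Service abstraction that kube-proxy supports more concrete, here is a minimal, hypothetical Service manifest. Traffic sent to the Service’s cluster IP on port 80 is routed by kube-proxy to port 8080 of any pod carrying the app=web label; the name, label, and ports are illustrative placeholders.

```yaml
# Hypothetical Service: kube-proxy routes traffic arriving at the Service's
# cluster IP on port 80 to port 8080 of pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```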

Pods

The basic scheduling unit in Kubernetes is a pod. A pod is a grouping of containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the same node. Each pod in Kubernetes is assigned a unique IP address within the cluster, which allows applications to use ports without the risk of conflict.
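
As a rough illustration, the hypothetical pod below co-locates two containers on the same node. They share the pod’s single cluster IP and network namespace, so they can reach each other over localhost, and each can bind its own port without conflicting with containers in other pods. Image names and the sidecar command are placeholders.

```yaml
# Hypothetical pod with two co-located containers sharing one IP and
# network namespace; they are always scheduled onto the same node together.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:1.19          # main application container
    - name: log-shipper
      image: busybox:1.32
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder sidecar process
```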

ReplicaSets

A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
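
For illustration, a minimal ReplicaSet manifest might look like the sketch below. The label selector defines which pods the ReplicaSet owns, and the controller manager keeps creating or deleting pods until exactly the declared number of matching pods is running; this is the same reconciliation loop described in the etcd example above. The names and image are assumptions for the example.

```yaml
# Hypothetical ReplicaSet: keeps three pods matching app=web running at all times.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
```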

Kubernetes Use Cases

1. Tinder’s Move to Kubernetes

Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do?

Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale totaling 1,000 nodes, 15,000 pods, and 48,000 running containers.

Was that easy? No way. However, they had to do it to keep business operations running smoothly going forward. One of their engineering leaders said, “As we onboarded more and more services to Kubernetes, we found ourselves running a DNS service that was answering 250,000 requests per second.” Tinder’s entire engineering organization now has knowledge and experience on how to containerize and deploy their applications on Kubernetes.

2. Pinterest’s Kubernetes Story

With over 250 million monthly active users and serving over 10 billion recommendations every single day, the engineers at Pinterest knew these numbers were only going to grow, and they began to feel the pain of scalability and performance issues.

Their initial strategy was to move their workload from EC2 instances to Docker containers; they first moved their services to Docker to free up engineering time spent on Puppet and to have an immutable infrastructure.

The next strategy was to move to Kubernetes. Now they can take ideas from ideation to production in a matter of minutes, whereas earlier it used to take hours or even days. By adopting Kubernetes they have cut a great deal of overhead cost and removed a lot of manual work, without engineers having to worry about the underlying infrastructure.

3. Pokemon Go’s Kubernetes Story

How was Pokemon Go able to scale so efficiently and become so successful? The answer is Kubernetes. Pokemon Go was developed and published by Niantic Inc., and grew to 500+ million downloads and 20+ million daily active users.

Pokemon Go engineers never thought their user base would increase exponentially to surpass expectations within a short time. They were not ready for it, and the servers couldn’t handle this much traffic.

Pokemon Go also faced a severe challenge when it came to vertical and horizontal scaling because of the real-time activity by millions of users worldwide. Niantic was not prepared for this.

The solution was in the magic of containers. The application logic for the game ran on Google Container Engine (GKE), powered by the open source Kubernetes project. Niantic chose GKE for its ability to orchestrate their container cluster at planetary scale, freeing its team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players, continuously adapting and improving. This gave them more time to concentrate on building the game’s application logic and new features rather than worrying about the scaling part.

Thank you 😊
