Towards Orchestration of Docker Containers: Beginning Kubernetes

Rushi Trivedi
5 min read · Sep 28, 2020

Hello readers! We have already started our journey of DevOps, beginning with Continuous Integration, Continuous Delivery and Continuous Deployment, then Jenkins, which implements CI/CD, and ending at Docker, a containerization platform.

In this blog I will put most of the focus on Kubernetes, but before going directly to Kubernetes, let me discuss the basic unit of Kubernetes, which is nothing but… the container (Docker).

CONTAINER (DOCKER)

Before we try to understand Kubernetes, let us spend a little time clarifying what a container is and why containers are so popular. After all, there is no point in talking about a container orchestrator (Kubernetes) without knowing what a container is.

A container is… mmmm, exactly what it sounds like: a container which holds all the things (content) which you put into it.

Content like our application code, libraries and other dependencies (everything except the kernel, which containers share with the host). The fundamental concept here is Isolation, yes, you heard it right, Isolation. A container isolates all our content from the rest of the system for better control.

In general, three types of isolation are provided by a container engine such as Docker:

  • Process/Network Isolation: Workspace.
  • CPU/Memory Isolation: Resource.
  • File System Isolation.

VMs are very similar to containers; just think of containers as VMs on a diet… ha ha ha… I mean, containers are lean, fast (quick startup) and small.

For more details, visit my own blog on containers: https://rushimtechcse.blogspot.com/2020/07/getting-started-with-docker.html

Now that we all know what a container is, let us move towards orchestration of containers, i.e. towards Kubernetes.

Heading towards Kubernetes: When to use?

Everything is going well in the developer's life, with Docker in hand and the application in Docker's hands, so why do we need another piece of technology, i.e. Kubernetes, a container orchestrator?

n….numbers of containers

We developers need it when we get into the state where there are too many containers to manage.

Here I am listing some questions and answers which might be enough to clear our dilemma about why to use Kubernetes.

Q: Where are my front-end containers, and how many of them am I running?
A: Hard to tell. Use a container orchestrator.

Q: How will I make my front-end containers talk to newly created back-end containers?
A: Hardcode the IPs. Or, use a container orchestrator.

Q: How will I do rolling upgrades?
A: By manually hand-holding each step. Or, use a container orchestrator.

Going through the above questions, we can conclude: if there is a requirement to manage, control, and enable communication between n numbers of containers, use a container orchestrator.
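Taking the rolling-upgrade question as an example: in Kubernetes you describe the desired state declaratively and the orchestrator swaps pods out gradually for you. Below is a minimal sketch of a Deployment manifest (the names `frontend`, `web` and the image tag are illustrative, not from this article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # at most one pod down at a time during the upgrade
      maxSurge: 1           # at most one extra pod created during the upgrade
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: nginx:1.19   # bump this tag and re-apply to trigger a rolling upgrade
```

Change the image tag, run `kubectl apply -f deployment.yaml`, and Kubernetes performs the rolling upgrade automatically; no manual hand-holding at each step.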

Why Developers Prefer Kubernetes?

There are multiple orchestrators, like Docker Swarm, Mesos and Kubernetes. Most developers choose Kubernetes because:

Like Lego blocks, Kubernetes not only has the components needed to run a container orchestrator at large scale, but also the flexibility to swap different components in and out as per requirement. Kubernetes also provides custom schedulers, plug-ins and CRDs (Custom Resource Definitions) for new resource requirements.

Let us now quickly move towards the working architecture of Kubernetes…

Kubernetes Architecture

Like most distributed computing platforms, a Kubernetes cluster consists of at least one master and multiple compute nodes. The master is responsible for exposing the application program interface (API), scheduling the deployments and managing the overall cluster. Each node runs a container runtime, such as Docker or rkt, along with an agent that communicates with the master. The node also runs additional components for logging, monitoring, service discovery and optional add-ons. Nodes are the workhorses of a Kubernetes cluster. They expose compute, networking and storage resources to applications. Nodes can be virtual machines (VMs) running in a cloud or bare metal servers running within the data center.

A pod is a collection of one or more containers. The pod serves as Kubernetes’ core unit of management. Pods act as the logical boundary for containers sharing the same context and resources. The grouping mechanism of pods makes up for the differences between containerization and virtualization by making it possible to run multiple dependent processes together. At runtime, pods can be scaled by creating replica sets, which ensure that the deployment always runs the desired number of pods.
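A pod with two containers sharing the same context can be sketched as below (the pod name, label and images are illustrative assumptions, not from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
  labels:
    app: demo
spec:
  containers:
  - name: app               # main application container
    image: nginx:1.19
  - name: log-sidecar       # a dependent helper process running in the same pod
    image: busybox:1.32
    command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers share the pod's network namespace (same IP, can reach each other over localhost) and can share volumes, which is exactly the "same context and resources" boundary described above.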

Replica sets deliver the required scale and availability by maintaining a pre-defined set of pods at all times. A single pod or a replica set can be exposed to the internal or external consumers via services. Services enable the discovery of pods by associating a set of pods to a specific criterion. Pods are associated to services through key-value pairs called labels and selectors. Any new pod with labels that match the selector will automatically be discovered by the service. This architecture provides a flexible, loosely-coupled mechanism for service discovery.
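The label/selector mechanism can be sketched as follows: the Service below (names are illustrative) automatically discovers any pod carrying the label `app: demo`, whether it exists now or is created later:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo          # matches every pod labelled app=demo, present or future
  ports:
  - port: 80           # port the service exposes inside the cluster
    targetPort: 80     # port the pod's container listens on
  type: ClusterIP      # for internal consumers; NodePort/LoadBalancer expose it externally
```

This is why no IPs need to be hardcoded: consumers talk to the stable service name, and the service keeps track of the matching pods behind it.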

The definitions of Kubernetes objects, such as pods, replica sets and services, are submitted to the master. Based on the defined requirements and the availability of resources, the master schedules each pod on a specific node. The node pulls the images from the container image registry and coordinates with the local container runtime to launch the container.

etcd is an open source, distributed key-value database from CoreOS, which acts as the single source of truth (SSOT) for all components of the Kubernetes cluster. The master queries etcd to retrieve various parameters of the state of the nodes, pods and containers.

This architecture of Kubernetes makes it modular and scalable by creating an abstraction between the applications and the underlying infrastructure.

API server: a RESTful API server that exposes endpoints to operate the cluster. Almost all components in the master and worker nodes communicate with this server to perform their duties.

Scheduler: plays a vital role in deciding which payload will execute on which machine.

kubelet: the heart of the worker node. It communicates with the master node's API server and runs the containers scheduled for its node.

Key Design Principles

Briefing various Key design principles of Kubernetes…

  • Workload Scalability
  • High Availability
  • Security
  • Portability

Note: For installation process and various kubectl, kubeadm, kubelet commands visit official site https://kubernetes.io/docs/reference/setup-tools/

Read…Learn…Share………


Rushi Trivedi

Full Stack Developer || Application and Software Developer || DevOps Engineer || Ex-Oracle || M.Tech. CSE