
What is Kubernetes - Learn Kubernetes from Basics

This tutorial post will help you understand what Kubernetes is, starting from the basics.

By the end of this post, you will understand what Kubernetes is, its advantages, its architecture and its main components.

Let's get started.

You can also watch this tutorial as a video on our YouTube Channel - What is Kubernetes - Learn Kubernetes from Basics

What is Kubernetes?

Kubernetes is an open-source orchestration engine and platform for managing containerized application workloads and services that facilitates both declarative configuration and automation. Kubernetes is also commonly referred to as K8s.

Kubernetes is not a replacement for Docker, but it can be considered a replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more work to deploy.

Advantages of Kubernetes

Kubernetes can speed up the development process by making deployments and updates (rolling updates) easy and automated, and by managing our apps and services with almost zero downtime. It also provides self-healing: Kubernetes can detect and restart services when a process crashes inside a container.

Kubernetes was originally developed by Google; it has been open source since its launch and is managed by a large community of contributors.

It makes better use of hardware by maximizing the resources available to run your enterprise apps, and it can scale containerized applications and their resources on the fly. Any developer with basic Docker knowledge can package up applications and deploy them on Kubernetes.

Kubernetes Architecture


Kubernetes Components

Web UI (Dashboard)

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster itself along with its attendant resources.


Kubectl

Kubectl is the command-line configuration tool (CLI) for Kubernetes, used to interact with the master node of the cluster. Kubectl reads a config file called kubeconfig, which contains the server address and the authentication information needed to access the API Server.
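As a sketch of typical usage (assuming kubectl is installed and the kubeconfig file is in its default location, `~/.kube/config`):

```shell
# Show the cluster/server and user entries kubectl will use
kubectl config view --minify

# Talk to the API Server: list the nodes of the cluster
kubectl get nodes

# Point kubectl at a different kubeconfig file if needed
# (the path below is only a placeholder)
KUBECONFIG=/path/to/other/kubeconfig kubectl get nodes
```

These commands only read cluster state, so they are safe to try on any cluster you have access to.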

Kubernetes Master

Kubernetes Master is the main node, responsible for managing the entire Kubernetes cluster. It handles the orchestration of the worker nodes. It has three main components that take care of communication, scheduling and controllers.

  1. API Server - The Kube API Server exposes the Kubernetes API; it is the frontend of the Kubernetes control plane.

  2. Scheduler - The Scheduler watches for newly created pods and assigns them to run on specific nodes.

  3. Kube-Controller-Manager - The Controller Manager runs the controllers, background processes that handle different routine tasks in the Kubernetes cluster.

Some of the controllers are,

  1. Node controller - It is responsible for noticing and responding when nodes go down.

  2. Replication controller - It maintains the number of pods, controlling how many identical copies of a pod should be running somewhere on the cluster.

  3. Endpoints controller - It joins services and pods together.

  4. Service Account and Token controllers - They handle access management.

  5. ReplicaSet controller - It ensures the specified number of pod replicas are running at all times.

  6. Deployment controller - It provides declarative updates for pods and ReplicaSets.

  7. DaemonSet controller - It ensures all nodes run a copy of specific pods.

  8. Job controller - It is the supervisor process for pods carrying out batch jobs.

  9. Services - They allow communication between pods and other components.


etcd

etcd is a simple distributed key-value store. Kubernetes uses etcd as its database to store all cluster data, such as job scheduling information, pod details and state information.
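As a sketch of how this looks in practice, Kubernetes stores its objects under the `/registry` prefix in etcd. Assuming the etcdctl v3 client and a reachable etcd endpoint (the address below is a placeholder; production clusters normally require TLS client certificates as well):

```shell
# List every key Kubernetes currently holds in etcd
# (pods, services, secrets and so on all live under /registry)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  get /registry --prefix --keys-only
```

You would rarely query etcd directly — kubectl and the API Server are the supported interface — but it is useful for understanding where cluster state actually lives.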

Worker Nodes

Worker nodes are the nodes where the applications actually run in a Kubernetes cluster; they are also known as minions. Each worker node is controlled by the master node through the kubelet process.
A container platform must be running on each worker node, and it works together with kubelet to run the containers. This is why we use the Docker engine, which takes care of managing images and containers. Other container platforms, such as CoreOS rkt (Rocket), can also be used.

Requirements of Worker Nodes:
1. kubelet must be running
2. Docker container platform
3. kube-proxy must be running
4. supervisord
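The requirements above can be checked directly on a worker node. A quick sketch, assuming a systemd-based Linux node with Docker as the container platform:

```shell
# Quick health check of the worker-node components listed above
systemctl is-active kubelet      # 1. kubelet must be running
systemctl is-active docker       # 2. container platform
pgrep -a kube-proxy              # 3. kube-proxy process is present
supervisord --version            # 4. supervisord is installed
```

How each component is actually supervised varies between installation methods, so treat these commands as a starting point rather than a definitive check.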


Kubelet

Kubelet is the primary node agent that runs on each node. It reads the container manifests and ensures that the described containers are running and healthy.


Kube-proxy

Kube-proxy is a process that provides a network proxy and load balancer for the services on a single worker node. It performs network routing for TCP and UDP packets and handles connection forwarding. Worker nodes can be exposed to the internet via kube-proxy.


Pods

  • A Pod is a group of one or more containers deployed to a single node.

  • Containers in a pod share an IP Address, hostname and other resources.

  • Containers within the same pod have access to shared volumes

  • Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.

  • With Horizontal Pod Autoscaling, Pods of a Deployment can be automatically started and halted based on CPU usage.

  • Each Pod has its unique IP Address within the cluster

  • Any data saved inside a Pod will disappear without persistent storage
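The properties above can be seen in a minimal Pod manifest (a sketch — the name, labels and image are arbitrary examples):

```yaml
# A minimal Pod: one container, scheduled onto a single node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # example name
  labels:
    app: nginx           # labels let Services and controllers select this Pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # example image tag
    ports:
    - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` creates the Pod. Because nothing manages a bare Pod, it is not recreated if its node fails — which is why Pods are normally created through Deployments instead.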


Deployments

  • A Deployment is a blueprint for the Pods to be created.

  • It handles updates of its respective Pods.

  • A Deployment creates Pods from the template in its spec.

  • Its goal is to keep the Pods running and to update them (with rolling updates) in a controlled way.

  • Pod resource usage can be specified in the Deployment.

  • A Deployment can scale up the number of Pod replicas.
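These points can be sketched as a minimal Deployment manifest (names, labels and image are illustrative examples):

```yaml
# A Deployment that keeps three replicas of the Pod template running
# and rolls out updates gradually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                # desired number of identical Pods
  selector:
    matchLabels:
      app: nginx             # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        resources:
          requests:          # per-Pod resource usage, as noted above
            cpu: 100m
            memory: 64Mi
```

Changing the image and re-applying the manifest triggers a rolling update, and `kubectl scale deployment nginx-deployment --replicas=5` scales it up on the fly.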


Services

A Service is responsible for making our Pods discoverable inside the network or exposing them to the internet. A Service identifies Pods by their labels, using a label selector.

Types of services available:

1. ClusterIP

The deployment is only visible inside the cluster
The deployment gets an internal ClusterIP assigned to it
Traffic is load balanced between the Pods of the deployment

2. NodePort

The deployment is visible inside the cluster
The Service is bound to a fixed port on every Node
Each Node will proxy that port to your Service
The service is available at http(s)://<NodeIP>:<NodePort>/
Traffic is load balanced between the Pods of the deployment

3. Load Balancer

The deployment gets a Public IP address assigned
The service is available at http(s)://<LoadBalancerIP>:<80|443>/
Traffic is load balanced between the Pods of the deployment
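A Service manifest tying this together might look like the following sketch (names and ports are illustrative; changing `type` to `ClusterIP` or `LoadBalancer` switches between the variants above):

```yaml
# Exposes Pods labelled app=nginx on a fixed port of every node.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort           # ClusterIP is the default; LoadBalancer needs cloud support
  selector:
    app: nginx             # the label selector that picks the target Pods
  ports:
  - port: 80               # the Service's own port inside the cluster
    targetPort: 80         # the containerPort on the Pods
    nodePort: 30080        # optional; must fall in 30000-32767 by default
```

With this applied, the Pods are reachable inside the cluster at the Service's ClusterIP on port 80, and from outside at any node's IP on port 30080.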

Hope you now have an idea of what Kubernetes is and how it is put together. In the next post, we show you How to Install & Configure a Kubernetes Cluster with Docker on Linux. Going forward we will play more with the Kubernetes cluster.

Keep practicing and have fun. Leave your comments if any.

Support Us: Share with your friends and groups.

Stay connected with us on social networking sites, Thank you.
