
This post will show you how to add additional worker nodes to an existing Kubernetes cluster without disturbing the running application pods.

If you are new to Kubernetes and want to learn it from the basics, refer to the links below. You can also check out all our tutorial videos on YouTube for free.

What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods
Kubernetes Volumes Explained with Examples
Kubernetes Persistent Volumes and Claims Explained

How to Add a New Worker Node to an Existing Kubernetes Cluster

Prerequisites for the new worker node:
1. A new node with at least 2 CPUs and 4 GB of memory. The operating system should be installed and ready for the setup; I used Ubuntu 16.04 LTS (64-bit).
2. Make sure the Kubernetes master and the new worker node are reachable from each other.
3. Kubernetes does not support swap, so disable swap on the new node using the command below. To make the change permanent, also comment out the swap entry in the /etc/fstab file.
sudo swapoff -a
4. Internet access must be available on the new node, because the required packages will be downloaded from the official repository.
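The swap change from step 3 can be sketched as below. For safety the sed edit is demonstrated against a sample file (fstab.sample is purely an illustration); on the real node you would point it at /etc/fstab with sudo, after running `sudo swapoff -a`.

```shell
# Sketch: comment out any swap entry so the change survives a reboot.
# Demonstrated on a sample file; on the node itself, target /etc/fstab with sudo.
printf '%s\n' 'UUID=1234-abcd / ext4 defaults 0 1' \
              '/dev/sda2 none swap sw 0 0' > fstab.sample

# Prefix a '#' to every uncommented line that mentions swap.
sed -i '/\bswap\b/ s/^[^#]/#&/' fstab.sample
cat fstab.sample
```

The non-swap filesystem lines are left untouched; only the swap entry ends up commented.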

Get the Join Token from the Kubernetes Master Node
Log in to the Kubernetes master node and list the join tokens as below.
user1@kubernetes-master:~$ kubeadm token list
If no join token is available, generate a new one using the kubeadm command.
user1@kubernetes-master:~$ kubeadm token create --print-join-command
W0331 13:10:57.055398   12062 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.2.1:6443 --token jnnu6g.wqbmmc1l2xtdf40t     --discovery-token-ca-cert-hash sha256:ce4c91f6f5442c8c8519cacd4673864f3ce5e466435a6f6ac9e877d1c831f6dc
user1@kubernetes-master:~$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION  EXTRA GROUPS
jnnu6g.wqbmmc1l2xtdf40t   23h         2020-04-01T13:10:57+05:30   authentication,signing   <none>       system:bootstrappers:kubeadm:default-node-token
Copy the kubeadm join command printed above (including the token and the discovery hash) and keep it aside for the worker nodes.
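If you ever need to reconstruct the --discovery-token-ca-cert-hash value yourself, it is just a SHA-256 digest of the cluster CA's public key in DER form. The sketch below creates a throwaway CA certificate purely for illustration; on a real control plane you would point openssl at /etc/kubernetes/pki/ca.crt instead.

```shell
# Generate a throwaway CA cert purely for illustration; on the master,
# use /etc/kubernetes/pki/ca.crt instead of ca-demo.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout ca-demo.key -out ca-demo.crt 2>/dev/null

# The discovery hash is sha256 over the DER-encoded public key of the CA cert.
openssl x509 -pubkey -in ca-demo.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print "sha256:" $NF}'
```

The printed value has the same sha256:&lt;64 hex digits&gt; shape as the hash in the join command above.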
On each worker node, perform the steps below to join it to the cluster.

Update the Ubuntu repositories and install basic tools such as apt-transport-https and curl.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
Add the Kubernetes signing key on the new worker node.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes repository on the new worker node.
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Install the required packages below on the new worker node using the apt-get command.
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl docker.io
Enable and start the Docker service on the new worker node.
sudo systemctl enable --now docker
Use the join command you copied earlier to add the worker node.
user1@kubernetes-worker3:~$ sudo kubeadm join 192.168.2.1:6443 --token jnnu6g.wqbmmc1l2xtdf40t     --discovery-token-ca-cert-hash sha256:ce4c91f6f5442c8c8519cacd4673864f3ce5e466435a6f6ac9e877d1c831f6dc
W0331 14:00:32.775291    7193 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
user1@kubernetes-worker3:~$
Our new worker node has joined the cluster successfully, as the above output shows. Let's log in to the Kubernetes master and confirm the node list.
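About the IsDockerSystemdCheck warning in the join output above: the Kubernetes docs recommend switching Docker to the systemd cgroup driver. A minimal sketch of /etc/docker/daemon.json is shown below (treat it as an assumption to adapt to your node); restart Docker afterwards with `sudo systemctl restart docker`.

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```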

On Kubernetes Master:
Verify the list of nodes using the kubectl command.
user1@kubernetes-master:~$ kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kubernetes-master    Ready    master   71m   v1.18.0
kubernetes-worker1   Ready    <none>   69m   v1.18.0
kubernetes-worker2   Ready    <none>   69m   v1.18.0
kubernetes-worker3   Ready    <none>   59m   v1.18.0
That's it. The output above shows our newly added worker node.

Keep practicing and have fun. Leave your comments if any.

Also refer to the articles below and check out all tutorial videos on YouTube.

What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods
Kubernetes Volumes Explained with Examples
Kubernetes Persistent Volumes and Claims Explained

Support Us: Share with your friends and groups.

Stay connected with us on social networking sites, Thank you.
YouTube | Facebook | Twitter | Pinterest | Rss


In this post, I will explain Kubernetes Persistent Volumes and Claims with examples. At the end of this post, you will be able to understand:

1. What is a Kubernetes Persistent Volume?
2. What is a Kubernetes Persistent Volume Claim?
3. How is it different from other Kubernetes volume types?
4. How to create a Kubernetes Persistent Volume
5. How to create a Kubernetes Persistent Volume Claim
6. How to use a Kubernetes NFS Persistent Volume
In the previous posts, we have already explained the topics below. Refer to those links to understand this topic from the basics.

What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods
Kubernetes Volumes Explained with Examples

Let's get started.

You can also watch this entire tutorial video, with more examples, on our YouTube channel.
[youtube src="hAhoeg3RryY" height="315" width="560" /]

What is Kubernetes Persistent Volume?
As per the official Kubernetes documentation, a PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

In simple words, a Persistent Volume is a way to store our containers' data permanently, even after the pods are deleted.

What is a Persistent Volume Claim?
A PersistentVolumeClaim (PVC) is a user's request for some of that storage space.

How is it different from other Kubernetes volume types?
Let's say you have multiple pods running on different nodes and you used the hostPath volume type. Data written by pod 1 running on worker node 1 resides only on worker node 1 and cannot be accessed by pod 2 running on worker node 2; similarly, pod 1 cannot access data written by pod 2 on worker node 2. That is because hostPath writes data only to a local directory on the node; it is not a shared volume.
Another example: say you have one pod running on worker node 1, and the pod has written some data to that node.
But if, for some reason, your pod is rescheduled to run on worker node 2, what about the data written on worker node 1? The pod now runs on worker node 2, but its data is not available there, since the data written by pod 1 exists only on worker node 1.
So we need a shared volume that is accessible from all worker nodes whenever pods need it. This is where Persistent Volumes and Persistent Volume Claims come in at the Kubernetes cluster level.

There is also a traditional method to share a volume across worker nodes at the operating-system level: mount the same volume via NFS, FC, or iSCSI on all worker nodes. That approach was discussed in the previous article.

Before I show you how to create a Persistent Volume and a Persistent Volume Claim, let me explain what actually happens with a Persistent Volume and how it works.
In a legacy infrastructure environment, when you need additional storage space for your server, you reach out to the storage administrator, who allocates space from a storage device to your server. Kubernetes works similarly: a Persistent Volume is a resource type through which storage space is made available to your cluster. Say you have a 10 GB Persistent Volume allocated to your Kubernetes cluster, backed by one of the Kubernetes volume types (iSCSI, FC, NFS, or a cloud provider). You can then claim some of that space for your pod using a Persistent Volume Claim; for example, request 5 GB from the Persistent Volume. When a suitable Persistent Volume is found, the claim is bound to it, and you can use that claim in your deployment.

Let's see how to create and use a Persistent Volume.

1. Create a YAML file for the Persistent Volume to get storage space for our Kubernetes cluster.
2. Create a YAML file to claim the space using a Persistent Volume Claim, as per our requirement.
3. Reference the Persistent Volume Claim in your pod deployment file.

I already have a single pod with two containers running on worker node 1; the sample deployment file is given below. It doesn't have any volume specification yet. Let's see how to add a Persistent Volume and Claim to it.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebay-app
spec:
  selector:
    matchLabels:
        environment: dev
        app: ebay
  replicas: 1
  template:
    metadata:
      labels:
        environment: dev
        app: ebay
    spec:
      containers:
      - name: container1-nginx
        image: nginx
      - name: container2-tomcat
        image: tomcat
I have an NFS server at 192.168.1.7 that acts as storage and exports a volume named /nfsdata. The traditional way would be to mount the share on all worker nodes; instead, we will use this share through a Persistent Volume. So create a Persistent Volume YAML file.
#cat nfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebay-pv
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: ebaystorage
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /nfsdata
    server: 192.168.1.7
A Persistent Volume supports three reclaim policies (Retain, Delete, and Recycle) and several access modes (ReadWriteOnce, ReadOnlyMany, and ReadWriteMany). For more details, refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Let's apply the changes and verify.
user1@kubernetes-master:~/codes/pv$ kubectl apply -f nfs_pv.yaml
persistentvolume/ebay-pv created
user1@kubernetes-master:~/codes/pv$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ebay-pv   20Gi       RWO            Recycle          Available           ebaystorage             24s
The output above shows that the PV "ebay-pv" was created as expected and is available to be claimed.

Let's create the Persistent Volume Claim:
user1@kubernetes-master:~/codes/ebay$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: ebaystorage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20G
The claim is bound only when the cluster finds a Persistent Volume whose storageClassName and access mode match the ones specified in the claim file; if no Persistent Volume has that storageClassName and access mode, the claim stays pending and is not bound.
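A side note on the sizes: the claim asks for 20G (a decimal suffix) while the PV offers 20Gi (a binary suffix). The request is the smaller of the two, so the claim can still bind. The arithmetic, as a quick shell check:

```shell
# Kubernetes quantities: the G suffix is decimal (10^9 bytes),
# Gi is binary (2^30 bytes), so 20G is slightly smaller than 20Gi.
echo "20G  = $((20 * 1000 * 1000 * 1000)) bytes"
echo "20Gi = $((20 * 1024 * 1024 * 1024)) bytes"
```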

Let's apply this and verify it.
user1@kubernetes-master:~/codes/ebay$ kubectl apply -f pvc.yaml
persistentvolumeclaim/myclaim created
user1@kubernetes-master:~/codes/ebay$ kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    ebay-pv   20Gi       RWO            ebaystorage    15s
So our claim was validated and bound to the volume.

Now we can use this claim in our pods. Edit your deployment file as below to define the volume specification. I will use this volume only for the first container.
user1@kubernetes-master:~/codes/ebay$ cat httpd-basic-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebay-app
spec:
  selector:
    matchLabels:
        environment: dev
        app: ebay
  replicas: 1
  template:
    metadata:
      labels:
        environment: dev
        app: ebay
    spec:
      volumes:
      - name: myvolume
        persistentVolumeClaim:
          claimName: myclaim

      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: myvolume
          mountPath: "/tmp/persistent"

      - name: container2-tomcat
        image: tomcat

Just apply the changes.
user1@kubernetes-master:~/codes/ebay$ kubectl apply -f httpd-basic-deployment.yaml
deployment.apps/ebay-app configured
Use "describe" option to find the volume parameters and confirm the claim is successful. it should looks like this.
user1@kubernetes-master:~/codes/ebay$ kubectl describe pods ebay-app
........trimmed some content......
Volumes:
  myvolume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim

    ReadOnly:   false
  default-token-2tqkb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2tqkb
    Optional:    false
........trimmed some content......
That's it, we have successfully created a Persistent Volume and a Persistent Volume Claim. Now, when your pod is rescheduled to another worker node, your data will still be available.

You can also watch this entire tutorial video, with more examples, on our YouTube channel.
[youtube src="hAhoeg3RryY" height="315" width="560" /]

Keep practicing and have fun. Leave your comments if any.

Also refer other articles,

What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods
Kubernetes Volumes Explained with Examples



In this article, I will explain Kubernetes Volumes and their usage with examples. At the end of this article, you will be able to understand:

1. What are Kubernetes Volumes?
2. What are the types of Kubernetes Volumes?
3. How to use Kubernetes volumes in pods and containers
4. How to assign a single volume to a specific container in a pod
5. How to share the same volume across all containers within a pod
6. How to assign a dedicated volume to each container in a pod
7. How to assign a shared volume across all pods running on different worker nodes

In the previous posts, we have already explained the topics below. Refer to those links to understand this topic from the basics.

What is Container, What is Docker on Container - Get Started
What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods

Let's get started.

What are Kubernetes Volumes?
Kubernetes Volumes are used to store data that should be accessible across the containers running in a pod, based on the requirement.

What are the types of Kubernetes Volumes?
Kubernetes supports many storage types, which are determined by how the storage is created and assigned to pods.

Local node types - emptyDir, hostPath, local
File sharing types - nfs
Storage types - fc, iscsi
Special purpose types - secret, gitRepo
Cloud provider types - vsphereVolume, cinder, awsElasticBlockStore, azureDisk, gcePersistentDisk
Distributed filesystem types - glusterfs, cephfs
Special types - PersistentVolume, PersistentVolumeClaim

Note:
1. emptyDir - a storage type whose data lives only as long as the pod runs; the data is erased when the pod is deleted, so it is not a persistent type.
2. hostPath, local, fc, and similar types are persistent, but the volume is not available across nodes; it exists only on the local node. So you may need to set up a shared volume mounted on all the nodes through traditional storage.
3. PersistentVolume-type volumes can be made accessible across all the nodes.
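As a concrete illustration of the first note, here is a minimal sketch of a pod using an emptyDir volume (the pod name is hypothetical; the container mirrors the examples below):

```yaml
# Sketch: an emptyDir volume; its contents live only as long as the pod does.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod            # hypothetical name
spec:
  volumes:
  - name: scratch
    emptyDir: {}
  containers:
  - name: container1-nginx
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch  # erased when the pod is deleted
```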

You can also watch this entire tutorial video on our YouTube channel.
[youtube src="AXi2oENUJHo" height="315" width="560" /]

How to use Kubernetes volumes in pods and containers
Use the "volumes" option, with a name and a type, in the pod spec of a deployment file, and use "volumeMounts" with a mountPath in each container that should mount the volume. The name in volumeMounts must exactly match the volume name declared in the spec; if not, you will end up with an error.

Example:
    spec:
      volumes:
      - name: volume
        hostPath:
         path: /mnt/data
      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: volume
          mountPath: "/var/nginx-data"
      - name: container2-tomcat
        image: tomcat
The example above says that the volume named "volume", defined in the "spec" section with path "/mnt/data", is available to the entire pod. It is mounted only in the container "container1-nginx", because only that container requests it, at path "/var/nginx-data", via the "volumeMounts" option.

How to assign a single volume to a specific container in a pod
To use a volume in only one specific container of a pod, declare the volumeMounts option only in that container; that container alone will use the volume specified in the spec.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebay-app
spec:
  selector:
    matchLabels:
        environment: dev
        app: ebay
  replicas: 1
  template:
    metadata:
      labels:
        environment: dev
        app: ebay
    spec:
      volumes:
      - name: volume
        hostPath:
         path: /mnt/data
      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: volume
          mountPath: "/var/nginx-data"
      - name: container2-tomcat
        image: tomcat

So we have referenced the volume named "volume" from the spec and mapped it to the container "container1-nginx", which mounts it under "/var/nginx-data". The volume is available only to the first container "container1-nginx", not to the second container "container2-tomcat". This is how we assign a single volume to a specific container in a pod.

How to share the same volume across all containers within a pod
To share the same volume across all containers running in a pod, use the volumeMounts option in every container.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebay-app
spec:
  selector:
    matchLabels:
        environment: dev
        app: ebay
  replicas: 1
  template:
    metadata:
      labels:
        environment: dev
        app: ebay
    spec:
      volumes:
      - name: volume
        hostPath:
         path: /mnt/data
      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: volume
          mountPath: "/var/nginx-data"
      - name: container2-tomcat
        image: tomcat
        volumeMounts:
        - name: volume
          mountPath: "/var/tomcat-data"
This time we used the volumeMounts option in both containers, with different paths. As per this definition, the same volume "volume" is mounted in both containers: at "/var/nginx-data" in container1-nginx and at "/var/tomcat-data" in container2-tomcat.

How to assign a dedicated volume to each container in a pod
To assign a dedicated volume to each container running in a pod, declare multiple volumes and use the volumeMounts option in each container accordingly, as in the example below.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebay-app
spec:
  selector:
    matchLabels:
        environment: dev
        app: ebay
  replicas: 1
  template:
    metadata:
      labels:
        environment: dev
        app: ebay
    spec:
      volumes:
      - name: volume1
        hostPath:
         path: /mnt/data1
      - name: volume2
        hostPath:
         path: /mnt/data2
      containers:
      - name: container1-nginx
        image: nginx
        volumeMounts:
        - name: volume1
          mountPath: "/var/nginx-data"

      - name: container2-tomcat
        image: tomcat
        volumeMounts:
        - name: volume2
          mountPath: "/var/tomcat-data"
As per the example above, volume1 is used by the first container "container1-nginx" and volume2 by the second container "container2-tomcat". This is how we assign dedicated volumes to each container running in a pod.

How to assign a shared volume across all pods running on different worker nodes
Why do we need this setup? So far we have used volumes on a single pod running on one worker node, so the data is not available when the pod is rescheduled to another node, because hostPath is a local directory. If you want the data to be available on all worker nodes, we need a shared-volume concept. We can use the special types PersistentVolume and PersistentVolumeClaim, or the traditional approach of mounting a shared volume from storage and using that mounted path in the deployment file. You can check out this video for the traditional approach; PersistentVolume and PersistentVolumeClaim are explained in the next article.
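If you go with the traditional approach, every worker node must mount the same export so that pods see identical data wherever they run. A sketch of an /etc/fstab entry for an NFS export (the server address and paths here are assumptions):

```
# /etc/fstab entry (sketch) on every worker node; pods then use
# /mnt/shared as a hostPath and see the same data everywhere.
192.168.1.7:/nfsdata  /mnt/shared  nfs  defaults  0  0
```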

You can also watch this entire tutorial video on our YouTube channel.
[youtube src="AXi2oENUJHo" height="315" width="560" /]

Keep practicing and have fun. Leave your comments if any.

Also refer other articles,

What is Kubernetes - Learn Kubernetes from Basics
How to Install Kubernetes on Linux (RedHat / CentOS)
How to Install Kubernetes On Ubuntu 16.04 LTS
How to Create Kubernetes Deployment, Services & Pods Using Kubectl
How to Create Kubernetes YAML for Deployment, Service & Pods
What is Docker - Get Started from Basics - Docker Tutorial
What is Container, What is Docker on Container - Get Started
How to Install Docker on CentOS 7 / RHEL 7
Docker Images Explained with Examples - Docker Tutorial
How to Run Docker Containers - Explained with Examples

