In previous posts, we have already covered the topics below related to Kubernetes. Refer to those links to understand this topic from the basics.
What is Kubernetes - Learn Kubernetes from Basics
Install & Configure Kubernetes Cluster with Docker on Linux
Create Kubernetes Deployment, Services & Pods Using Kubectl
Create Kubernetes YAML for Deployment, Service & Pods
What is Docker - Get Started from Basics - Docker Tutorial
What is Container, What is Docker on Container - Get Started
How to Install Docker on CentOS 7 / RHEL 7
Let's get started.
How to Install Kubernetes Cluster on Ubuntu 20.04 LTS
Our Lab Setup:
Prerequisites:
1. A minimum of 2 CPUs and 4 GB of memory is required on each node.
2. Make an entry for each host in the /etc/hosts file for name resolution on all Kubernetes nodes as shown below, or configure the names on your DNS server if you have one.
user1@kubernetes-master:~$ cat /etc/hosts
192.168.2.11 kubernetes-master.learnitguide.net kubernetes-master
192.168.2.12 kubernetes-worker1.learnitguide.net kubernetes-worker1
192.168.2.13 kubernetes-worker2.learnitguide.net kubernetes-worker2
3. Make sure the Kubernetes master and worker nodes are reachable from each other.
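For example, a quick reachability check from the master node using the hostnames defined above:
ping -c 2 kubernetes-worker1
ping -c 2 kubernetes-worker2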
4. Kubernetes does not support swap. Disable swap on all nodes using the command below, and comment out the swap entry in the /etc/fstab file to make the change permanent.
sudo swapoff -a
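For example, one possible way to comment out the swap entry in /etc/fstab in a single step; this sed pattern assumes the entry contains the word "swap" surrounded by spaces, and it keeps a backup copy of the file, so double-check the result afterwards:
sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab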
5. Internet access must be available on all nodes, because the packages required for the Kubernetes cluster are downloaded from the official repository.
Steps involved in installing a Kubernetes cluster on Ubuntu:
On All Nodes:
1. Enable Kubernetes repository on master and all worker nodes
2. Install all required packages on master and all worker nodes
3. Start and Enable docker service on master and all worker nodes
On Master Node:
4. Initializing and setting up the kubernetes cluster only on Master node
5. Copy /etc/kubernetes/admin.conf and Change Ownership only on Master node
6. Install Network add-on to enable the communication between the pods only on Master node
On Worker Nodes:
7. Join all worker nodes with kubernetes master node
Before we begin, we must update the Ubuntu repositories and install basic tools such as apt-transport-https and curl.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
Once completed, move on to the next step.
1. Enable Kubernetes repository on master and all worker nodes
Add the Kubernetes signing key on all nodes.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes repository on all nodes.
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
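Optionally, you can check which package versions the new repository offers before installing, since the next step pins a specific version:
sudo apt-get update
apt-cache madison kubeadm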
2. Install all required packages on master and all worker nodes
Use the apt-get command to install the kubelet, kubeadm, kubectl and docker.io packages.
sudo apt-get update && sudo apt-get install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00 docker.io
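Optionally, hold these packages at the installed version so that a routine apt upgrade does not move them to a newer release unexpectedly:
sudo apt-mark hold kubelet kubeadm kubectl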
3. Start and Enable docker service on master and all worker nodes
sudo systemctl start docker && sudo systemctl enable docker
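Optional but recommended: the kubeadm preflight check in the next step warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. As a sketch, you can switch Docker to the systemd cgroup driver on all nodes before initializing the cluster:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker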
4. Initializing and setting up the kubernetes cluster only on Master node
Use "kubeadm" command to initialize the kubernetes cluster along with "apiserver-advertise-address" and "--pod-network-cidr" options. It is used to specify the IP address for kubernetes cluster communication and range of networks for the pods.
user1@kubernetes-master:~$ sudo kubeadm init --apiserver-advertise-address 192.168.2.11 --pod-network-cidr=172.16.0.0/16
Output:
Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.2.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.2.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0113 09:25:59.504686 2708 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0113 09:25:59.506027 2708 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3vnj3b.vuyqixnigpj9rvfc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.11:6443 --token 3vnj3b.vuyqixnigpj9rvfc \
    --discovery-token-ca-cert-hash sha256:ee21beeb285a694c88df5defacee97ab50aad3024d9bfab4f9125287899976cc
Kubernetes cluster initialization is complete. Copy the "kubeadm join" command from the end of the "kubeadm init" output and store it somewhere safe; it is required when joining the worker nodes.
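If you lose the join command, or the token expires (bootstrap tokens are valid for 24 hours by default), you can print a fresh join command on the master node at any time:
sudo kubeadm token create --print-join-command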
5. Copy /etc/kubernetes/admin.conf and Change Ownership only on Master node
Once the Kubernetes cluster is initialized, copy "/etc/kubernetes/admin.conf" into your home directory and change its ownership. You can see the same instructions in the output of the "kubeadm init" command.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
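At this point you can verify that kubectl can reach the cluster as the regular user:
kubectl cluster-info
kubectl get pods -n kube-system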
6. Install Network add-on to enable communication between the pods only on Master node
Many network add-ons are available to enable pod-to-pod communication, each with different functionality. Here I have used the flannel network provider. Flannel is an overlay network provider that can be used with Kubernetes. More add-ons are listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/.
user1@kubernetes-master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
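You can watch the flannel and CoreDNS pods come up in the kube-system namespace before checking the node status:
kubectl get pods -n kube-system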
Use "kubectl get nodes" command to ensure the kubernetes master node status is ready. Wait for few minutes until the status of the kubernetes master turn ready state.
user1@kubernetes-master:~$ kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
kubernetes-master   Ready    master   4m45s   v1.20.0
7. Join all worker nodes with kubernetes master node
Now, log in to each worker node and run the join command that you copied earlier to join the worker nodes to the Kubernetes master node, as shown below.
sudo kubeadm join 192.168.2.11:6443 --token 3vnj3b.vuyqixnigpj9rvfc --discovery-token-ca-cert-hash sha256:ee21beeb285a694c88df5defacee97ab50aad3024d9bfab4f9125287899976cc
Output:
W0113 09:31:57.468547 10744 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.20" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Once the worker nodes have joined the Kubernetes master, verify the list of nodes in the Kubernetes cluster. It can take a few minutes for the new nodes to reach the Ready state.
user1@kubernetes-master:~$ kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kubernetes-master    Ready    master   7m16s   v1.20.0
kubernetes-worker1   Ready    <none>   91s     v1.20.0
kubernetes-worker2   Ready    <none>   74s     v1.20.0
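The worker nodes show "<none>" under ROLES, which is normal with kubeadm. If you prefer them to display a role, you can optionally add a label yourself; the label key below is a common convention, not something kubeadm requires:
kubectl label node kubernetes-worker1 node-role.kubernetes.io/worker=worker
kubectl label node kubernetes-worker2 node-role.kubernetes.io/worker=worker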
That's it. We have successfully configured the Kubernetes cluster, and hopefully you now have a good idea of how to install a Kubernetes cluster on Ubuntu. Our Kubernetes master and worker nodes are ready to deploy applications. To learn how to deploy applications on Kubernetes, go through the links below.
Create Kubernetes Deployment, Services & Pods Using Kubectl
Create Kubernetes YAML for Deployment, Service & Pods
Keep practicing and have fun. Leave your comments if any.
Support Us: Share with your friends and groups.
Stay connected with us on social networking sites. Thank you.