Kubernetes RBAC Explained with Hands-On Demo

In this post, I will explain what RBAC is in Kubernetes and help you understand the concept in a simple way with hands-on examples.

So, if you want to know how to give multiple users access to your Kubernetes cluster with different permissions, this tutorial will definitely help you.

For example:

  1. Admin access for some users
  2. Read-only access for some users in specific namespaces
  3. Full access for some users in specific namespaces

We will start by understanding what RBAC is and why we need it.

Topics Covered

  1. What is RBAC and why do we need it?
  2. How Kubernetes users are authenticated and authorized
  3. Types of RBAC components:
    ▸ Role
    ▸ ClusterRole
    ▸ RoleBinding
    ▸ ClusterRoleBinding
  4. Creating admin and read-only users
  5. Mapping them using aws-auth
  6. And finally, a demo with examples using different AWS profiles:
    🔹 user1-ro – Read-only access to a single namespace
    🔹 user2-full – Full access to a single namespace
    🔹 user3-admin – Full access to the entire cluster

For this demo I use AWS EKS, so authentication is handled by AWS IAM. The steps may vary if you are using another cloud provider such as Google Cloud.

You can also watch this tutorial demo on our YouTube channel.

What is RBAC?

RBAC stands for Role-Based Access Control. It is a method to define who can do what on your Kubernetes cluster.

To do this, RBAC uses four main components:

  1. Role
  2. ClusterRole
  3. RoleBinding
  4. ClusterRoleBinding

Each component plays a key role in defining and controlling access.

Authentication & Authorization

How Kubernetes RBAC works

Kubernetes always checks two things when a request is made:

  1. Authentication – Who is making the request?
    In EKS, the identity is verified using AWS IAM.

  2. Authorization – What is that user allowed to do?
    This is controlled using RBAC.

For example, if a user tries to list pods, Kubernetes checks: is this user authenticated? Then, is the user allowed to list pods?
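You can check the authorization step yourself with `kubectl auth can-i` (the resource, namespace, and user names below are just examples):

```shell
# Ask the API server whether the current identity may perform an action.
kubectl auth can-i list pods --namespace dev

# A cluster admin can also check on behalf of another user:
kubectl auth can-i list pods --namespace dev --as user1-ro
```

The command prints `yes` or `no`, which makes it a quick way to verify your RBAC rules before handing credentials to users.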

Creating IAM Users

So I have already created three IAM users for this demo:

  1. user1-ro
  2. user2-full
  3. user3-admin

I’ve attached the AdministratorAccess IAM policy to them just for the sake of the demo.

Each user has their own access key ID and secret, which is configured using AWS CLI on my system.
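Configuring a named AWS CLI profile for each user looks like this (the profile names are my choice for this demo; your access keys will differ):

```shell
# Create one named profile per IAM user; you will be prompted for the
# access key ID, secret access key, default region, and output format.
aws configure --profile user1-ro
aws configure --profile user2-full
aws configure --profile user3-admin

# Verify which IAM identity a profile resolves to:
aws sts get-caller-identity --profile user1-ro
```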

Updating aws-auth ConfigMap

To map AWS IAM users to the Kubernetes cluster, we use the aws-auth ConfigMap in the kube-system namespace.

kubectl -n kube-system get cm aws-auth -o yaml

Export this aws-auth ConfigMap to a file, which makes it easier to manage access:

kubectl -n kube-system get cm aws-auth -o yaml > aws-auth.yaml

You can verify the aws-auth.yaml file using the cat command:

skuma@server1:~/kubernetes/auth$ cat aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::591874054217:role/eksctl-demo-cluster-nodegroup-demo-NodeInstanceRole-cW2EtCUZvNc7
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

API Resources

Before creating actual roles and bindings, we must know the list of resources we can use in RBAC for access:

kubectl api-resources

This lists every Kubernetes API resource that RBAC can control, including whether each resource is namespaced or cluster-scoped.

In RBAC rules, always use the full plural resource names such as services, pods, deployments, and configmaps. Short names like svc, po, and deploy are kubectl shortcuts and will not match in Role rules.
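To see only the names in the form RBAC rules expect, you can filter the resource list (standard kubectl flags):

```shell
# List only namespaced resources, printing just their plural names.
# Non-core resources are printed as name.group (e.g. deployments.apps),
# which tells you the apiGroup to use in your rules.
kubectl api-resources --namespaced=true -o name
```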

Create Roles & ClusterRole

Now let’s create the role and cluster role.

1. Read-only Role for dev namespace

$ cat role-dev-readonly.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-readonly
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods","services","configmaps"]
  verbs: ["get", "watch", "list"]
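The same Role can also be generated imperatively; this is just an equivalent sketch of the YAML above using kubectl's built-in generator:

```shell
# Generate the read-only Role without writing YAML by hand.
# --dry-run=client prints the manifest instead of creating it;
# drop that flag to create the Role directly.
kubectl create role dev-readonly \
  --verb=get,watch,list \
  --resource=pods,services,configmaps \
  --namespace=dev \
  --dry-run=client -o yaml
```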

2. Full-access Role for dev namespace

$ cat role-dev-fullaccess.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-fullaccess
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]

3. Cluster Admin Role

$ cat clusterrole-cluster-admin-full.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin-full
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]

Now that we have created the Role and ClusterRole with the required resources and verbs, we need to create a binding for each of them.

RoleBinding and ClusterRoleBinding

RoleBinding read-only access for namespace "dev":

$ cat rolebinding-dev-readonly.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-dev-readonly
  namespace: dev
subjects:
- kind: Group
  name: dev-readonly
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-readonly
  apiGroup: rbac.authorization.k8s.io

RoleBinding Full-access for namespace "dev":

$ cat rolebinding-dev-fullaccess.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-dev-fullaccess
  namespace: dev
subjects:
- kind: Group
  name: dev-fullaccess
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-fullaccess
  apiGroup: rbac.authorization.k8s.io

ClusterRoleBinding Admin access for entire cluster:

$ cat rolebinding-cluster-admin-full.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rolebinding-cluster-admin-full
subjects:
- kind: Group
  name: cluster-admin-full
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin-full
  apiGroup: rbac.authorization.k8s.io
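
With all the manifests ready, they can be applied like this (file names match the cat commands above):

```shell
# Create the roles and bindings in the cluster.
kubectl apply -f role-dev-readonly.yaml
kubectl apply -f role-dev-fullaccess.yaml
kubectl apply -f clusterrole-cluster-admin-full.yaml
kubectl apply -f rolebinding-dev-readonly.yaml
kubectl apply -f rolebinding-dev-fullaccess.yaml
kubectl apply -f rolebinding-cluster-admin-full.yaml

# Verify what was created.
kubectl get roles,rolebindings -n dev
kubectl get clusterrolebindings rolebinding-cluster-admin-full
```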

Map users in aws-auth configmap

Open the aws-auth.yaml file in an editor and add all your users with the respective groups that we created in the RoleBindings, as shown below.

$ cat aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::591874054217:role/eksctl-demo-cluster-nodegroup-demo-NodeInstanceRole-cW2EtCUZvNc7
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - groups:
      - dev-readonly
      userarn: arn:aws:iam::591874054217:user/user1-ro
      username: user1-ro
    - groups:
      - dev-fullaccess
      userarn: arn:aws:iam::591874054217:user/user2-full
      username: user2-full
    - groups:
      - cluster-admin-full
      userarn: arn:aws:iam::591874054217:user/user3-admin
      username: user3-admin
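Apply the updated ConfigMap back to the cluster. Be careful here: a broken aws-auth ConfigMap can lock everyone out of the cluster.

```shell
# Replace the aws-auth ConfigMap with the edited version.
kubectl apply -f aws-auth.yaml

# Confirm the user mappings were saved.
kubectl -n kube-system get cm aws-auth -o yaml
```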

Testing Access with AWS CLI Profiles

Ask each user to update the kubeconfig on their own system; only then can they access your Kubernetes cluster:

aws eks update-kubeconfig --region us-east-1 --name demo-cluster

Once the kubeconfig file is in place, test the access.

Switch to each user and run different commands to check whether the appropriate permissions are set:
kubectl get pods -n dev        # ✅ for user1-ro
kubectl delete pod podname     # ❌ for user1-ro
kubectl get ns                 # ❌ for user1-ro
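
One way to switch identities is per-command via the AWS_PROFILE variable, assuming the profile names match the IAM users as configured earlier and the kubeconfig's exec plugin picks up ambient AWS credentials:

```shell
# Read-only user: listing pods in dev should succeed.
AWS_PROFILE=user1-ro kubectl get pods -n dev

# Read-only user: deleting a pod should be forbidden.
# ("mypod" is a placeholder pod name.)
AWS_PROFILE=user1-ro kubectl delete pod mypod -n dev

# Full-access user: can delete inside dev, but only inside dev.
AWS_PROFILE=user2-full kubectl delete pod mypod -n dev

# Cluster admin: can list cluster-scoped resources like namespaces.
AWS_PROFILE=user3-admin kubectl get ns
```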

Finally:

  1. user1-ro can only view resources in the dev namespace.
  2. user2-full can manage everything in the dev namespace but cannot touch prod or cluster-wide objects.
  3. user3-admin has full access across the entire cluster.

That’s how we control access in Kubernetes using RBAC. I hope you understood the concept.
