In this post, I will explain what RBAC is in Kubernetes and help you understand the concept in a very simple way with hands-on examples.
So, if you want to know how to give multiple users access to your Kubernetes cluster with different permissions, this tutorial will definitely help you.
For example:
- Admin access for some users
- Read-only access for some users in specific namespaces
- Full access for some users in specific namespaces
We will start by understanding what RBAC is and why we need it.
Topics Covered
- What is RBAC and why do we need it?
- How Kubernetes users are authenticated and authorized
- Types of RBAC components:
- ▸ Role
- ▸ ClusterRole
- ▸ RoleBinding
- ▸ ClusterRoleBinding
- Creating admin and read-only users
- Mapping them using aws-auth
- And finally, a demo with examples using different AWS profiles:
- 🔹 user1-ro – Read-only access to a single namespace
- 🔹 user2-full – Full access to a single namespace
- 🔹 user3-admin – Full access to the entire cluster
Since this demo is on Kubernetes, I use AWS EKS. The steps may vary if you are using other cloud providers like Google Cloud, but here I’ll be using AWS IAM for authentication.
What is RBAC?
RBAC stands for Role-Based Access Control. It is a method to define who can do what on your Kubernetes cluster.
To do this, RBAC uses four main components:
- Role
- ClusterRole
- RoleBinding
- ClusterRoleBinding
Each component plays a key role in defining and controlling access.
Authentication & Authorization
Kubernetes always checks two things when a request is made:
- Authentication – Who is making the request? In EKS, the identity is verified using AWS IAM.
- Authorization – What is that user allowed to do? This is controlled using RBAC.
For example, if a user tries to list pods, Kubernetes checks: is this user authenticated? Then, is the user allowed to list pods?
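You can answer the authorization part of that question yourself with kubectl auth can-i. A quick sketch (the dev namespace here is simply the one used later in this demo):
kubectl auth can-i list pods -n dev   # can the current identity list pods in dev?
kubectl auth can-i '*' '*'            # does the current identity have full cluster access?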
Creating IAM Users
So I have already created three IAM users for this demo:
- user1-ro
- user2-full
- user3-admin
I’ve attached the AdministratorAccess policy to them just for the sake of the demo.
Each user has their own access key ID and secret access key, which are configured as AWS CLI profiles on my system.
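For reference, setting up one of these profiles looks roughly like this; the profile name is simply my choice for the demo, and you will be prompted for that user's access key ID and secret access key:
aws configure --profile user1-ro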
Updating aws-auth ConfigMap
To map AWS IAM users to the Kubernetes cluster, we have to use the aws-auth ConfigMap.
kubectl -n kube-system get cm aws-auth -o yaml
Export this aws-auth ConfigMap to a file, which makes it easier to manage access:
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth.yaml
You can verify the aws-auth.yaml file using the cat command:
skuma@server1:~/kubernetes/auth$ cat aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::591874054217:role/eksctl-demo-cluster-nodegroup-demo-NodeInstanceRole-cW2EtCUZvNc7
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
API Resources
Before creating the actual roles and bindings, we need to know which resources we can reference in RBAC rules:
kubectl api-resources
You will see a list of all the Kubernetes resources that can be managed with RBAC, including whether each resource is namespaced or not.
In the RBAC rules, use full resource names like services, pods, deployments, and configmaps. Don’t use short names like svc, po, or deploy.
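Since a Role can only grant access to namespaced resources, it can help to filter this output; a small sketch:
kubectl api-resources --namespaced=true    # resources a Role/RoleBinding can target
kubectl api-resources --namespaced=false   # cluster-scoped resources, which need a ClusterRole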
Create Roles & ClusterRole
Now let’s create the Roles and the ClusterRole.
1. Read-only Role for the dev namespace
$ cat role-dev-readonly.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-readonly
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods","services","configmaps"]
  verbs: ["get", "watch", "list"]
2. Full-access Role for the dev namespace
$ cat role-dev-fullaccess.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-fullaccess
rules:
- apiGroups: ["*"] # "*" matches every API group, not just the core group
  resources: ["*"]
  verbs: ["*"]
3. Cluster Admin Role
$ cat clusterrole-cluster-admin-full.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin-full
rules:
- apiGroups: ["*"] # "*" matches every API group, not just the core group
  resources: ["*"]
  verbs: ["*"]
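Apply the three manifests and confirm the objects exist; a minimal sketch, assuming the dev namespace has already been created:
kubectl apply -f role-dev-readonly.yaml
kubectl apply -f role-dev-fullaccess.yaml
kubectl apply -f clusterrole-cluster-admin-full.yaml
kubectl get role -n dev
kubectl get clusterrole cluster-admin-full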
RoleBinding and ClusterRoleBinding
RoleBinding for read-only access in the "dev" namespace:
$ cat rolebinding-dev-readonly.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-dev-readonly
  namespace: dev
subjects:
- kind: Group
  name: dev-readonly
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-readonly
  apiGroup: rbac.authorization.k8s.io
RoleBinding for full access in the "dev" namespace:
$ cat rolebinding-dev-fullaccess.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebinding-dev-fullaccess
  namespace: dev
subjects:
- kind: Group
  name: dev-fullaccess
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-fullaccess
  apiGroup: rbac.authorization.k8s.io
ClusterRoleBinding for admin access to the entire cluster:
$ cat rolebinding-cluster-admin-full.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rolebinding-cluster-admin-full
subjects:
- kind: Group
  name: cluster-admin-full
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin-full
  apiGroup: rbac.authorization.k8s.io
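As with the Roles, apply the bindings and confirm they were created; a quick sketch:
kubectl apply -f rolebinding-dev-readonly.yaml
kubectl apply -f rolebinding-dev-fullaccess.yaml
kubectl apply -f rolebinding-cluster-admin-full.yaml
kubectl get rolebinding -n dev
kubectl get clusterrolebinding rolebinding-cluster-admin-full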
Map Users in aws-auth ConfigMap
$ cat aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::591874054217:role/eksctl-demo-cluster-nodegroup-demo-NodeInstanceRole-cW2EtCUZvNc7
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - groups:
        - dev-readonly
      userarn: arn:aws:iam::591874054217:user/user1-ro
      username: user1-ro
    - groups:
        - dev-fullaccess
      userarn: arn:aws:iam::591874054217:user/user2-full
      username: user2-full
    - groups:
        - cluster-admin-full
      userarn: arn:aws:iam::591874054217:user/user3-admin
      username: user3-admin
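Apply the edited file back to the cluster and double-check the result. Be careful not to drop the existing mapRoles entry for the node role, or the worker nodes will lose access to the cluster:
kubectl apply -f aws-auth.yaml
kubectl -n kube-system get cm aws-auth -o yaml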
Testing Access with AWS CLI Profiles
Ask each user to update the kubeconfig on their system; only then can they manage your Kubernetes cluster.
aws eks update-kubeconfig --region us-east-1 --name demo-cluster
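Because the generated kubeconfig authenticates through the aws eks get-token exec plugin, kubectl uses whatever AWS identity is active when it runs. A minimal sketch for testing all three users from a single machine, assuming the kubeconfig above was created without the --profile option and the CLI profiles are named after the IAM users:
AWS_PROFILE=user1-ro kubectl get pods -n dev                    # allowed: read-only in dev
AWS_PROFILE=user2-full kubectl run nginx --image=nginx -n dev   # allowed: full access in dev
AWS_PROFILE=user3-admin kubectl get nodes                       # allowed: cluster-wide admin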
kubectl get pods -n dev      # ✅ for user1-ro
kubectl delete pod podname   # ❌ for user1-ro
kubectl get ns               # ❌ for user1-ro
- user1-ro can only view resources in the dev namespace.
- user2-full can manage everything in the dev namespace but cannot touch prod or cluster-wide objects.
- user3-admin has full access across the entire cluster.
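To see at a glance what each identity can do, kubectl auth can-i --list prints the effective permissions; a sketch using the same profiles as above:
AWS_PROFILE=user1-ro kubectl auth can-i --list -n dev
AWS_PROFILE=user3-admin kubectl auth can-i --list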