In this guide, I'm going to walk you through how to create a Kubernetes cluster on AWS (EKS) using Terraform modules. And it's not just about the Kubernetes cluster: we will also create the VPC required for it. This is going to be a complete Kubernetes environment, and by the end, you'll have a strong understanding of both the VPC and AWS EKS setup.
Finally, we'll deploy a simple test application like Apache to verify that the cluster is running and accessible.
You can also watch this tutorial demo on our YouTube channel.
AWS EKS + VPC Architecture
This is the architecture we are going to create:
First, we need to create a VPC with two public subnets and two private subnets.
Since this is a non-production environment, we’ll deploy the Kubernetes cluster on public subnets.
If you're just getting started with Kubernetes on AWS, this is the best way to begin. In a follow-up, I’ll show you how to deploy on private subnets for a full production setup.
I assume you have already installed the required tools:
- Terraform – https://developer.hashicorp.com/terraform/install
- AWS CLI – https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- kubectl – https://kubernetes.io/docs/tasks/tools
- Run aws configure to set your AWS access and secret keys (see the quick check below).
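To confirm the CLI is actually authenticated before running Terraform, you can run this optional quick check; it saves a failed apply later:

aws sts get-caller-identity   # prints the account and IAM identity your CLI is using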
I’ve created a directory called eks-terraform-demo. Inside it, I’ve already defined the necessary Terraform files and blocks.
Let’s go through them one by one.
provider.tf
In the provider.tf file, I’ve defined the AWS provider, and the region is set to us-east-1.
provider "aws" {
region = "us-east-1"
}
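This is all the provider configuration we need. If you want reproducible runs, you can also pin the Terraform and provider versions; here is a minimal sketch (the version constraints are my own suggestion, not part of the original setup):

terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # stay on the 5.x provider series used with these modules
    }
  }
}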
Creating VPC with Terraform Module:
vpc.tf
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "flipkart-vpc-nonprod"
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b"]
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
map_public_ip_on_launch = true
enable_nat_gateway = false
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
tags = {
Terraform = "true"
}
}
In vpc.tf, I've used the official Terraform VPC module. You can find it by searching the Terraform Registry for terraform-aws-modules; there you'll find a module for VPC and another for EKS.
I've copied the VPC example and made some modifications based on the requirements for integrating the VPC with the AWS EKS module.
I defined two public subnets and two private subnets across us-east-1a and us-east-1b for better high availability.
I also disabled the NAT gateway to save cost, since we’re not deploying anything on the private subnets for now.
I faced issues using the latest VPC module version (v6.x) with the EKS module, so I pinned version 5.x instead, which worked perfectly.
The VPC is named flipkart-vpc-nonprod, just as an example.
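One related note for the production follow-up: when you move workloads to the private subnets, those subnets need their own role tag so Kubernetes can place internal load balancers there. This is a sketch of the extra argument you would add inside the same module "vpc" block (my own addition, mirroring the public subnet tag above):

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }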
Creating AWS EKS with Terraform Module
eks.tf
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "flipkart-eks-nonprod"
cluster_version = "1.31"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.public_subnets
cluster_endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
eks_managed_node_groups = {
default = {
instance_types = ["t3.medium"]
min_size = 1
max_size = 2
desired_size = 1
}
}
tags = {
Environment = "nonprod"
}
}
In eks.tf, I used the EKS managed node group example from the Terraform registry. I prefer managed node groups because AWS handles the node lifecycle (provisioning, updates, and draining) for you.
Here are some key details:
- Cluster name: flipkart-eks-nonprod
- VPC ID is passed from the VPC module
- Subnets used are the public subnets only
- cluster_endpoint_public_access = true to allow access from outside, like your laptop (see the note after this list)
- Managed node group with:
  - Instance type: t3.medium
  - Desired capacity: 1
  - Min size: 1
  - Max size: 2
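Since the endpoint is public, it's worth knowing that the module can also restrict which source IPs are allowed to reach it. A minimal sketch, assuming you know your own public IP (the CIDR below is a placeholder, not a real value):

module "eks" {
  # ...same arguments as above...

  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.10/32"]  # placeholder: replace with your own IP
}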
Initialize and Apply Terraform Configuration:
Once all the files are ready:
terraform init
terraform apply -auto-approve
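If you'd rather review the changes before creating anything, the usual sequence works here too:

terraform fmt        # normalize formatting
terraform validate   # catch syntax errors early
terraform plan       # preview the resources that will be created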
After a successful apply, I check the output values for cluster_name and cluster_endpoint, because I've added the output blocks below to print those values to the terminal.
outputs.tf
output "cluster_name" {
value = module.eks.cluster_name
}
output "cluster_endpoint" {
value = module.eks.cluster_endpoint
}
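A small convenience I sometimes add here (the output name configure_kubectl is my own choice, not part of the original files): an output that prints the exact kubeconfig command for the next step.

output "configure_kubectl" {
  description = "Command to update the local kubeconfig for this cluster"
  value       = "aws eks update-kubeconfig --region us-east-1 --name ${module.eks.cluster_name}"
}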
Update KubeConfig
First, we need the kubeconfig locally:
aws eks update-kubeconfig --region us-east-1 --name flipkart-eks-nonprod

Then run:
kubectl get nodes

If you see the node listed, your setup is working.
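A couple of optional checks if you want more detail than the plain node list:

kubectl get nodes -o wide   # adds internal/external IPs, OS image, and runtime details
kubectl cluster-info        # confirms the API server endpoint kubectl is talking to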
Deploying Test Application
We now test by deploying a simple Apache (httpd) pod directly:
kubectl run apache --image=httpd --port=80
Check the pod:
kubectl get pods

Expose it via a LoadBalancer service:
kubectl expose pod apache --type=LoadBalancer --port=80

Now check:
kubectl get svc

You'll see a LoadBalancer service created with an external address (an AWS ELB DNS name).
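The external address can take a minute or two to appear while AWS provisions the load balancer. If you want to grab just the hostname, this also works:

kubectl get svc apache -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'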
Allowing Access (Temporary for Testing)
By default, port 80 won't be open in the security group. So:

- Go to EKS → Networking → Security Groups
- Open the additional node group security group
- Add an inbound rule for HTTP (port 80) from anywhere (0.0.0.0/0)
- Save the rule
Now access the Load Balancer URL from your browser using HTTP (not HTTPS). If it redirects to HTTPS, remove the "s" and try again.
You should see the Apache default page.
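You can also check from the terminal; this one-liner assumes the Service from the previous step is named apache:

curl -I "http://$(kubectl get svc apache -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"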
This setup is perfect for learning and testing purposes, and it will give you a better understanding of how Terraform integrates with AWS services like EKS and VPC.
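One last tip: since this is a test environment, remember to tear it down when you're done so you aren't billed for idle resources. Delete the Service first, because the ELB it created is not managed by Terraform and can block the VPC from being destroyed:

kubectl delete svc apache         # Kubernetes removes the ELB it provisioned
kubectl delete pod apache
terraform destroy -auto-approve   # then tear down the cluster and VPC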
In the next guide, I’ll show you how to set up a production-ready cluster using private subnets, proper NAT gateways, and restricted access.