How To Deploy EKS Cluster on AWS using Terraform

Elastic Kubernetes Service (EKS) is a managed Kubernetes service on Amazon Web Services (AWS). With EKS you can run Cloud Native Computing Foundation (CNCF) applications on AWS without managing the Kubernetes control plane yourself.

Terraform, on the other hand, is an Infrastructure as Code (IaC) tool used to automate deployments in cloud environments. You can use Terraform to manage the entire lifecycle of your cloud infrastructure, from applications and containers to virtual machines.

Terraform makes deploying an EKS cluster on AWS straightforward compared to other deployment methods such as CloudFormation or configuring the cluster manually in the AWS console.

In this guide, we cover how to set up a fully configured EKS cluster on AWS using Terraform. Follow the steps below to deploy your EKS cluster in a few minutes.

How To Deploy EKS Cluster Using Terraform

Before we can deploy our EKS cluster on AWS, make sure you have the following requirements in place:

  1. AWS account
  2. AWS user account with IAM SystemAdministrator permissions.
  3. AWS CLI
  4. Kubectl

Step 1. Setup AWS Account

AWS provides a Free Tier for 12 months that covers some basic services. Sign up for an AWS account and provide your billing details. Note, however, that EKS is not included in the Free Tier, so you will be charged for this service.

Step 2. Create AWS Policy

After creating your account, proceed to the AWS web console. Before creating a user with IAM permissions for the EKS cluster, we first need an IAM policy that grants the required permissions to that user.

On the AWS console, go to IAM > Policies > Create Policy. Choose the JSON option, then add the policy document below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:AttachInstances",
                "autoscaling:CreateAutoScalingGroup",
                "autoscaling:CreateLaunchConfiguration",
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:DeleteLaunchConfiguration",
                "autoscaling:DeleteTags",
                "autoscaling:Describe*",
                "autoscaling:DetachInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:SuspendProcesses",
                "ec2:AllocateAddress",
                "ec2:AssignPrivateIpAddresses",
                "ec2:Associate*",
                "ec2:AttachInternetGateway",
                "ec2:AttachNetworkInterface",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateDefaultSubnet",
                "ec2:CreateDhcpOptions",
                "ec2:CreateEgressOnlyInternetGateway",
                "ec2:CreateInternetGateway",
                "ec2:CreateNatGateway",
                "ec2:CreateNetworkInterface",
                "ec2:CreateRoute",
                "ec2:CreateRouteTable",
                "ec2:CreateSecurityGroup",
                "ec2:CreateSubnet",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateVpc",
                "ec2:CreateVpcEndpoint",
                "ec2:DeleteDhcpOptions",
                "ec2:DeleteEgressOnlyInternetGateway",
                "ec2:DeleteInternetGateway",
                "ec2:DeleteNatGateway",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteRoute",
                "ec2:DeleteRouteTable",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteSubnet",
                "ec2:DeleteTags",
                "ec2:DeleteVolume",
                "ec2:DeleteVpc",
                "ec2:DeleteVpnGateway",
                "ec2:Describe*",
                "ec2:DetachInternetGateway",
                "ec2:DetachNetworkInterface",
                "ec2:DetachVolume",
                "ec2:Disassociate*",
                "ec2:ModifySubnetAttribute",
                "ec2:ModifyVpcAttribute",
                "ec2:ModifyVpcEndpoint",
                "ec2:ReleaseAddress",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:UpdateSecurityGroupRuleDescriptionsEgress",
                "ec2:UpdateSecurityGroupRuleDescriptionsIngress",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateLaunchTemplateVersion",
                "ec2:DeleteLaunchTemplate",
                "ec2:DeleteLaunchTemplateVersions",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:GetLaunchTemplateData",
                "ec2:ModifyLaunchTemplate",
                "ec2:RunInstances",
                "eks:CreateCluster",
                "eks:DeleteCluster",
                "eks:DescribeCluster",
                "eks:ListClusters",
                "eks:UpdateClusterConfig",
                "eks:UpdateClusterVersion",
                "eks:DescribeUpdate",
                "eks:TagResource",
                "eks:UntagResource",
                "eks:ListTagsForResource",
                "eks:CreateFargateProfile",
                "eks:DeleteFargateProfile",
                "eks:DescribeFargateProfile",
                "eks:ListFargateProfiles",
                "eks:CreateNodegroup",
                "eks:DeleteNodegroup",
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:UpdateNodegroupConfig",
                "eks:UpdateNodegroupVersion",
                "iam:AddRoleToInstanceProfile",
                "iam:AttachRolePolicy",
                "iam:CreateInstanceProfile",
                "iam:CreateOpenIDConnectProvider",
                "iam:CreateServiceLinkedRole",
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteOpenIDConnectProvider",
                "iam:DeletePolicy",
                "iam:DeletePolicyVersion",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DeleteServiceLinkedRole",
                "iam:DetachRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetOpenIDConnectProvider",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:List*",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:TagOpenIDConnectProvider",
                "iam:TagRole",
                "iam:UntagRole",
                "iam:UpdateAssumeRolePolicy",
                "logs:CreateLogGroup",
                "logs:DescribeLogGroups",
                "logs:DeleteLogGroup",
                "logs:ListTagsLogGroup",
                "logs:PutRetentionPolicy",
                "kms:CreateAlias",
                "kms:CreateGrant",
                "kms:CreateKey",
                "kms:DeleteAlias",
                "kms:DescribeKey",
                "kms:GetKeyPolicy",
                "kms:GetKeyRotationStatus",
                "kms:ListAliases",
                "kms:ListResourceTags",
                "kms:ScheduleKeyDeletion"
            ],
            "Resource": "*"
        }
    ]
}

The above IAM policy grants the user the following permissions:

  1. EC2 permissions for instance creation
  2. EC2 Auto Scaling permissions for worker node autoscaling
  3. EKS cluster privileges
  4. CloudWatch privileges for logging
  5. IAM and KMS privileges to create the policies, roles, and keys used by the EKS cluster.

Make sure the policy has no syntax errors before you proceed.

Click “Next” and head over to the “Review policy” section. Provide a name for the policy, then review and confirm the permissions it grants.
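
If you prefer the command line, the same policy can be created with the AWS CLI. The sketch below assumes you have saved the JSON document above as eks-admin-policy.json; the file name and policy name are placeholders you can change:

# File name and policy name are placeholders - adjust to your setup
aws iam create-policy \
  --policy-name eks-admin-policy \
  --policy-document file://eks-admin-policy.json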

Step 3. Create AWS User

To add the user, navigate to IAM > Users > Add users.

Provide a username, then check the “Programmatic access” and “AWS Management Console access” options. You can use either a custom password or an autogenerated one. It is advisable to require the user to reset the password at first login.

Proceed to the Permissions tab by pressing the “Next: Permissions” button. Here, attach the policy we created in the previous step to the user, and add administrator rights as well if you wish. Select “Attach existing policies directly”, then search for the policy created earlier.

Proceed to the next tab where you will review the user details and permissions before you can finish the user setup.

Click “Create user” to create the user. Download the .csv file containing the user’s Access Key ID and Secret Access Key; we will use these credentials in the next step.

The login URL for the user will also be shown on this page.
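
The console steps above can also be scripted with the AWS CLI. A rough equivalent, assuming the user name eks-admin and the policy created earlier (replace the account ID and policy name with your own):

aws iam create-user --user-name eks-admin
aws iam attach-user-policy --user-name eks-admin \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/eks-admin-policy
aws iam create-access-key --user-name eks-admin

The create-access-key call prints the Access Key ID and Secret Access Key used in the next step.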

Step 4. Configure AWS CLI

AWS CLI is the binary that allows users to manage their AWS cloud infrastructure through the command line.

Download the latest version of AWS CLI version 2.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Extract the downloaded package:

unzip awscliv2.zip

Install AWS CLI with sudo privileges.

sudo ./aws/install

Check and confirm that you have installed AWS CLI

$ aws --version
aws-cli/2.7.29 Python/3.9.11 Linux/4.18.0-372.9.1.el8.x86_64 exe/x86_64.rocky.8 prompt/off

Configure the user credentials and set the default region for the AWS CLI:

$ aws configure
AWS Access Key ID [None]: <your-AWS-USER-ACCESS-ID>
AWS Secret Access Key [None]: <YOUR-SECRET-KEY>
Default region name [None]: eu-west-1
Default output format [None]: json

The Access Key ID and Secret Access Key are available in the .csv file we downloaded in the previous step. Log in to the AWS web console and obtain the region code for the region where you wish to deploy your cluster.
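
Under the hood, aws configure writes these values to two plain-text files in your home directory, which is handy to know if you work with multiple accounts. They look roughly like this (values shortened):

# ~/.aws/credentials
[default]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................

# ~/.aws/config
[default]
region = eu-west-1
output = json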

To verify if your credentials have been successfully set, run the command below:

aws sts get-caller-identity

You should get an output similar to this below:

{
    "UserId": "AIKAYMPUUSVTMRUXXXXX",
    "Account": "57657XXXXXX",
    "Arn": "arn:aws:iam::57657XXXXXX:user/eks-admin"
}

Step 5. Install Kubectl tool on your bastion

Kubectl is the Kubernetes command-line tool that allows you to manage your Kubernetes cluster from the command line.

Download the latest release of Kubectl:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Install Kubectl on Linux

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Check and confirm the version of the installed Kubectl

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7

Check out the installation guides for other operating systems on the Kubernetes website.

Step 6. Install Terraform Automation tool

To install Terraform on your Linux server, follow the quick steps below.

Query the GitHub API for the latest Terraform release, then download the matching archive from the HashiCorp releases server:

TER_VER=`curl -s https://api.github.com/repos/hashicorp/terraform/releases/latest | grep tag_name | cut -d: -f2 | tr -d \"\,\v | awk '{$1=$1};1'`
wget https://releases.hashicorp.com/terraform/${TER_VER}/terraform_${TER_VER}_linux_amd64.zip

Extract the downloaded file.

$ unzip terraform_${TER_VER}_linux_amd64.zip
Archive:  terraform_xxx_linux_amd64.zip
 inflating: terraform

Move the binary file to /usr/local/bin directory

sudo mv terraform /usr/local/bin/

Check and confirm the version of Terraform you have just installed

$ terraform -v
Terraform v1.2.8
on linux_amd64
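
Optionally, enable shell tab completion for Terraform (supported for bash and zsh):

terraform -install-autocomplete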

Step 7. Setup Terraform Workspace

Let’s now set up the Terraform workspace. This will be the Terraform working directory for the deployment. We will create the files listed below which will contain the necessary modules and resources for the EKS cluster deployment.

  1. cluster.tf
  2. security-groups.tf
  3. versions.tf
  4. kubernetes.tf
  5. vpc.tf
  6. outputs.tf

Create Terraform Working Directory

Create a directory that shall be used as your working directory.

mkdir -p ~/terraform-deployments && cd ~/terraform-deployments

Configure the EKS Cluster Resources

Create the cluster.tf file and add the content below. This file contains details such as:

  • The EKS cluster version
  • The EKS Terraform module
  • Details about the worker nodes (instance type, root volume type, and the security groups attached to them)
$ vim cluster.tf
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.20"
  subnets         = module.vpc.private_subnets

  tags = {
    Environment = "development"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }


  vpc_id = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

cluster_endpoint_private_access = "true"
  cluster_endpoint_public_access  = "true"

  write_kubeconfig      = true
  manage_aws_auth       = true

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
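
Note that the module block above does not pin a version, so terraform init will pull whatever is latest. The init output later in this guide uses version 17.x of terraform-aws-modules/eks/aws; version 18 of that module removed the worker_groups interface used here, so it is safer to pin the major version by adding a version argument to the existing module block (suggested addition, adjust the constraint as needed):

  # inside module "eks" in cluster.tf
  version = "~> 17.0"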

Configure VPC Resources

Create the vpc.tf file to provision the VPC resources such as:

  1. The default AWS region
  2. The EKS cluster name
  3. The VPC availability zones
  4. The VPC subnets
  5. The Terraform VPC module
$ vim vpc.tf
variable "region" {
  default     = "eu-west-1"
  description = "AWS region"
}

provider "aws" {
  region = "eu-west-1"
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = "my-eks-cluster"
}


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name                 = "my-eks-cluster-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.11.0/24", "10.0.22.0/24", "10.0.33.0/24"]
  public_subnets       = ["10.0.44.0/24", "10.0.55.0/24", "10.0.66.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}

In the above, take note of the region, the cluster name, and the VPC subnets. Adjust these values to fit your environment.
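
The versions.tf file created later pins the hashicorp/random provider, which nothing above uses yet. If you want each deployment to get a unique cluster name, one option (a sketch, similar to HashiCorp's own EKS example) is to replace the cluster_name local with one that appends a random suffix:

resource "random_string" "suffix" {
  length  = 8
  special = false
}

locals {
  cluster_name = "my-eks-cluster-${random_string.suffix.result}"
}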

Configure Security Groups

Create security groups to define the rules applied inside the VPC. In the configuration below, we allow inbound TCP traffic on port 22 (SSH) from the listed CIDR ranges so that the worker nodes can be managed.

$ vim security-groups.tf
resource "aws_security_group" "worker_group_mgmt_one" {
  name_prefix = "worker_group_mgmt_one"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
    ]
  }
}

resource "aws_security_group" "worker_group_mgmt_two" {
  name_prefix = "worker_group_mgmt_two"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "192.168.0.0/16",
    ]
  }
}

resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
}
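
The management CIDR ranges above are hard-coded. If you would rather pass them in per environment, a small variable works; the variable name below is my own choice:

variable "mgmt_ssh_cidrs" {
  description = "CIDR ranges allowed to reach worker nodes over SSH"
  type        = list(string)
  default     = ["10.0.0.0/8"]
}

You can then reference it in the security group rules with cidr_blocks = var.mgmt_ssh_cidrs.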

Configure Terraform Providers

Configure Terraform providers and their required versions as shown below:

$ vim versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.20.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "3.0.0"
    }

    local = {
      source  = "hashicorp/local"
      version = "2.0.0"
    }

    null = {
      source  = "hashicorp/null"
      version = "3.0.0"
    }

    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }

  required_version = "> 0.14"
}

Add the Kubernetes provider:

$ vim kubernetes.tf
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
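
The token returned by aws_eks_cluster_auth is short-lived (about 15 minutes), which can be a problem during long applies. An alternative supported by the Kubernetes provider v2 and later is exec-based authentication, which calls the AWS CLI on demand; a sketch, assuming the AWS CLI is installed on the machine running Terraform:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}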

Configure Terraform Outputs

We need to configure the outputs so that we can get the required information when the deployment is complete. This can be done in the outputs.tf file as shown below:

$ vim outputs.tf
output "cluster_id" {
  description = "EKS cluster ID."
  value       = module.eks.cluster_id
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "kubectl_config" {
  description = "kubectl config as generated by the module."
  value       = module.eks.kubeconfig
}

output "config_map_aws_auth" {
  description = "A kubernetes configuration to authenticate to this EKS cluster."
  value       = module.eks.config_map_aws_auth
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = local.cluster_name
}

Step 8. Initialize Terraform Workspace

With the Terraform workspace set up, we can now initialize Terraform to download the required modules and providers:

$ terraform init

The workspace initialization should take some moments before it’s done.

$ terraform init
Upgrading modules...
Downloading terraform-aws-modules/eks/aws 17.1.0 for eks...
- eks in .terraform/modules/eks
- eks.fargate in .terraform/modules/eks/modules/fargate
- eks.node_groups in .terraform/modules/eks/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.66.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching ">= 1.11.1, >= 2.0.1"...
- Finding hashicorp/aws versions matching ">= 2.68.0, >= 3.20.0, >= 3.40.0, >= 3.43.0"...
- Finding hashicorp/random versions matching "3.0.0"...
- Finding hashicorp/local versions matching ">= 1.4.0, 2.0.0"...
- Finding hashicorp/null versions matching "3.0.0"...
- Finding hashicorp/template versions matching "2.2.0"...
- Finding terraform-aws-modules/http versions matching ">= 2.4.1"...
- Finding latest version of hashicorp/cloudinit...
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing terraform-aws-modules/http v2.4.1...
- Installed terraform-aws-modules/http v2.4.1 (self-signed, key ID B2C1C0641B6B0EB7)
- Installing hashicorp/cloudinit v2.2.0...
- Installed hashicorp/cloudinit v2.2.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.3.2...
- Installed hashicorp/kubernetes v2.3.2 (signed by HashiCorp)
- Installing hashicorp/aws v3.51.0...
- Installed hashicorp/aws v3.51.0 (signed by HashiCorp)
- Installing hashicorp/random v3.0.0...
- Installed hashicorp/random v3.0.0 (signed by HashiCorp)
- Installing hashicorp/local v2.0.0...
- Installed hashicorp/local v2.0.0 (signed by HashiCorp)
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Since the workspace has been initialized successfully, we can now proceed with the deployment.

Terraform Plan

Run the terraform plan command to review the resources that will be provisioned when we apply the configuration.

$ terraform plan
....
Plan: 50 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_endpoint          = (known after apply)
  + cluster_id                = (known after apply)
  + cluster_name              = "my-eks-cluster"
  + cluster_security_group_id = (known after apply)
  + config_map_aws_auth       = [
      + {
          + binary_data = null
          + data        = (known after apply)
          + id          = (known after apply)
          + metadata    = [
              + {
                  + annotations      = null
                  + generate_name    = null
                  + generation       = (known after apply)
                  + labels           = {
                      + "app.kubernetes.io/managed-by" = "Terraform"
                      + "terraform.io/module"          = "terraform-aws-modules.eks.aws"
                    }
                  + name             = "aws-auth"
                  + namespace        = "kube-system"
                  + resource_version = (known after apply)
                  + uid              = (known after apply)
                },
            ]
        },
    ]
  + kubectl_config            = (known after apply)
  + region                    = "eu-west-1"

Terraform plans to add 50 resources to our environment; each change is described in the output above.
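
If you want the apply step to execute exactly what you just reviewed, you can save the plan to a file and apply that file instead (an optional but common workflow):

terraform plan -out=eks.tfplan
terraform apply eks.tfplan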

Step 9. Provision EKS Cluster using Terraform

Proceed to provision the EKS cluster using Terraform with the command below:

$ terraform apply

You will be asked to confirm that Terraform should proceed with the deployment. Type ‘yes’.

$ terraform apply
.....
Plan: 50 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_endpoint          = (known after apply)
  + cluster_id                = (known after apply)
  + cluster_name              = "my-eks-cluster"
  + cluster_security_group_id = (known after apply)
  + config_map_aws_auth       = [
      + {
          + binary_data = null
          + data        = (known after apply)
          + id          = (known after apply)
          + metadata    = [
              + {
                  + annotations      = null
                  + generate_name    = null
                  + generation       = (known after apply)
                  + labels           = {
                      + "app.kubernetes.io/managed-by" = "Terraform"
                      + "terraform.io/module"          = "terraform-aws-modules.eks.aws"
                    }
                  + name             = "aws-auth"
                  + namespace        = "kube-system"
                  + resource_version = (known after apply)
                  + uid              = (known after apply)
                },
            ]
        },
    ]
  + kubectl_config            = (known after apply)
  + region                    = "eu-west-1"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

The above initializes the provisioning of the EKS cluster on AWS. The deployment process takes several minutes to complete.

After a successful deployment, you should get an output similar to the one below:

Apply complete! Resources: 50 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = "https://15B63349104862B5FA303A1F430XXXX.gr7.eu-west-1.eks.amazonaws.com"
cluster_id = "my-eks-cluster"
cluster_name = "my-eks-cluster"
cluster_security_group_id = "sg-00dbf022c27xxxdxx"
config_map_aws_auth = [
  {
    "binary_data" = tomap(null) /* of string */
    "data" = tomap({
      "mapAccounts" = <<-EOT
      []
      
      EOT
      "mapRoles" = <<-EOT
      - "groups":
        - "system:bootstrappers"
        - "system:nodes"
        "rolearn": "arn:aws:iam::57657xxxxx:role/my-eks-cluster2021072919190220210000000c"
        "username": "system:node:{{EC2PrivateDNSName}}"
      
      EOT
      "mapUsers" = <<-EOT
      []

Open the AWS web console to confirm that the cluster has been deployed successfully. Navigate to Elastic Kubernetes Service > Amazon EKS > Clusters.

You should see the cluster listed with the name you specified in the vpc.tf file; in my case, it is “my-eks-cluster”. You can also check the resources that have been deployed in the cluster and their state.

To manage our EKS cluster from the CLI, we need to configure the kubectl context by updating the kubeconfig as shown below:

aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

The above command writes the EKS cluster credentials into your kubeconfig, and you can now manage your Kubernetes cluster using kubectl.
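
To confirm that kubectl is now pointing at the new cluster, check the active context and the cluster endpoint:

kubectl config current-context
kubectl cluster-info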

Run some basic Kubernetes commands from your terminal to confirm the status of the cluster:

$ kubectl get nodes -o wide
$ kubectl get pods --all-namespaces

The output should show the worker nodes in the Ready state and the system pods running, which confirms that the cluster is healthy.

Destroy EKS Cluster on AWS using Terraform

You can also destroy your stack using Terraform with the command below:

$ terraform destroy

Terraform will ask you to confirm that you really want to destroy the entire stack, as the process is irreversible.

....
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

Conclusion

At this point, you have successfully deployed an EKS cluster on AWS using Terraform, and you know how to tear it down again when it is no longer needed. All in all, setting up EKS, the managed Kubernetes service on AWS, is much easier when you deploy the cluster with Terraform.

I hope this guide was detailed enough. Please feel free to reach out if you encounter issues while following it. Cheers!
