Install Kubernetes on AWS using KOPS

In this article we will look at how to install Kubernetes on AWS using KOPS. We will build a five node cluster with two masters and three worker nodes (etcd runs on the master nodes). I have used Ubuntu to do the installation (Ubuntu installed in VirtualBox on a Windows machine). If you download VirtualBox and install Ubuntu 18.10 on it, the steps should work.

Steps to install Kubernetes on AWS using KOPS

1. Install KOPS client tool

Install KOPS by following the instructions from the KOPS releases page on GitHub

curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

2. Install kubectl client tool

We install the kubectl client tool using the method specified on the same page as above

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

3. Install AWS CLI Tool

We will install the AWS client using the recommended method, which is to use the pip tool. Install the pip tool if you haven't done that already, then install awscli using pip. You will then need to add the awscli install location to your PATH.

sudo apt install python-pip
pip install awscli
export PATH=$PATH:~/.local/bin

4. Configure AWS CLI Tool

Once the AWS client tool is installed we need to configure it and add the access key ID and the secret access key with which we will connect to AWS. To get the access key ID and the secret key,
log in to your console -> IAM -> Users -> (select user) -> Security Credentials tab -> scroll down -> Create Access Key.

Copy the Access Key ID and Secret Access Key. To configure the keys in the client, type in

aws configure
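aws configure prompts for the access key ID, secret access key, default region name, and output format, and saves them under ~/.aws/. The resulting ~/.aws/credentials file looks roughly like this (the values shown here are placeholders, not real keys):

```
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = <your secret access key>
```

The region and output format go into a sibling ~/.aws/config file.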

Creating a user

For this demo we will create a new user called kops_user and grant it the following permissions: AmazonEC2FullAccess, AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess, AmazonVPCFullAccess
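Creating the user and attaching the policies can also be scripted with the AWS CLI. The sketch below is hypothetical: it only prints the commands (using a small helper to build each managed-policy ARN) so you can inspect them first; drop the echo to run them for real.

```shell
# Hypothetical sketch: print the commands that create kops_user and
# attach the five AWS-managed policies. Drop `echo` to actually run them.
policy_arn() {
  # AWS-managed policies live under the shared `aws` account namespace.
  printf 'arn:aws:iam::aws:policy/%s' "$1"
}

echo aws iam create-user --user-name kops_user
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
              AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  echo aws iam attach-user-policy --user-name kops_user \
    --policy-arn "$(policy_arn "$policy")"
done
```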

Configuring a DNS for Kubernetes on AWS

The next step is to configure a DNS for Kubernetes. We will host Kubernetes on a subdomain of an existing domain that we own on AWS. In this article kub.example.com stands in for that subdomain; substitute your own domain throughout. We first create a hosted zone for the subdomain

ID=$(uuidgen) && aws route53 create-hosted-zone --name kub.example.com --caller-reference $ID | jq .DelegationSet.NameServers

Get the parent hosted zone id

aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'
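The ID comes back JSON-quoted and prefixed, e.g. "/hostedzone/Z1ZH5XPL8YR04", while some commands want just the bare zone ID. A small helper (hypothetical; jq -r plus sed would work equally well) strips the decoration:

```shell
# Strip the surrounding quotes and the /hostedzone/ prefix from the
# ID returned by `aws route53 list-hosted-zones | jq ...`.
strip_zone_id() {
  printf '%s' "$1" | sed -e 's/"//g' -e 's|^/hostedzone/||'
}

PARENT_ID=$(strip_zone_id '"/hostedzone/Z1ZH5XPL8YR04"')
echo "$PARENT_ID"   # prints Z1ZH5XPL8YR04
```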

Create entries for the subdomain. Save the following as kub.dns.json, replacing the Name with your subdomain and the four Values with the name servers returned by the create-hosted-zone command above (the ns-* values below are placeholders):

{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "kub.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-01.com" },
          { "Value": "ns-2.awsdns-02.net" },
          { "Value": "ns-3.awsdns-03.org" },
          { "Value": "ns-4.awsdns-04.co.uk" }
        ]
      }
    }
  ]
}

Apply the change to the parent domain

aws route53 change-resource-record-sets --hosted-zone-id /hostedzone/Z1ZH5XPL8YR04 --change-batch file://kub.dns.json

Storing Cluster State

We need to create an S3 bucket to store the state of the cluster. We call the bucket kub-findaddress-state-store

aws s3api create-bucket  --bucket kub-findaddress-state-store --region us-east-1
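One region-specific wrinkle worth knowing: us-east-1 is the only region where create-bucket takes no LocationConstraint; every other region requires one. A hypothetical helper that prints the right command for a given region (drop the echo to run it):

```shell
# Print the create-bucket command appropriate for the region.
# Only us-east-1 omits the LocationConstraint; all other regions need it.
create_state_bucket() {
  local bucket=$1 region=$2
  if [ "$region" = "us-east-1" ]; then
    echo aws s3api create-bucket --bucket "$bucket" --region "$region"
  else
    echo aws s3api create-bucket --bucket "$bucket" --region "$region" \
      --create-bucket-configuration "LocationConstraint=$region"
  fi
}

create_state_bucket kub-findaddress-state-store us-east-1
```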

As per the kops recommendation, we enable versioning on the bucket

aws s3api put-bucket-versioning --bucket kub-findaddress-state-store  --versioning-configuration Status=Enabled

Create the Kubernetes Cluster

We then use kops to create the Kubernetes cluster. The step below creates a definition for the cluster; $NAME is the cluster name, which matches the subdomain we configured (again, kub.example.com is a placeholder for your own domain)

export KOPS_STATE_STORE=s3://kub-findaddress-state-store
export NAME=kub.example.com
kops create cluster --zones ap-southeast-2a $NAME

You can check the changes that kops will make by running kops update cluster $NAME without any flags. If you are happy with the changes, you can launch the cluster using

kops update cluster $NAME --yes

This will create all the required resources. It might take a while for the cluster to come up. You can check the status of the cluster with

kops validate cluster
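Validation fails until all the nodes have joined, so it is convenient to poll. A small generic retry helper (hypothetical; any polling loop works) reruns a command until it succeeds:

```shell
# Rerun a command until it exits 0, sleeping between attempts.
retry() {
  local delay=$1; shift
  until "$@"; do
    echo "not ready yet, retrying in ${delay}s" >&2
    sleep "$delay"
  done
}

# Usage (assumes kops is installed and KOPS_STATE_STORE/$NAME are set):
#   retry 30 kops validate cluster $NAME
```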

When the cluster has successfully started you can connect to it using the kubectl client. kops writes the kubeconfig for the kubectl client, so if you type in

kubectl get nodes

It will connect to the cluster and list the nodes. You can also connect to a master over SSH using the SSH key.

The Kubernetes cluster is now ready for use. The number of master and worker nodes can be managed by changing the Auto Scaling groups. Each master stores its etcd data in an EBS volume. The cluster does not back up etcd by default; we will cover how to do that in a later article.
