Part 2 – Deploy kubernetes on AWS with KOPS

Introduction

KOPS is an open-source tool written in Go that helps us deploy and maintain our Kubernetes cluster. The source code is available on GitHub (https://github.com/kubernetes/kops) and the tool is also covered in the official Kubernetes documentation.

It really started in May 2016, with a first beta release on the 8th of September 2016. So it is a very young tool, but one with a good community and support (Slack channel).

Pre-Requisite

To test KOPS, all you will need is:

  • An AWS account
  • An IAM user allowed to create resources and use the S3 bucket

The first step is to install KOPS. On a Mac, simply use Homebrew:

brew update && brew install kops

At the time of writing, my KOPS version is 1.4.3.

Because I am using different AWS accounts, I like to use the direnv tool (https://direnv.net/) to switch between them.
Therefore, create a .envrc file with the following environment variables:

export AWS_DEFAULT_REGION=eu-west-1
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxx

To use your .envrc file, run:

direnv allow .

To check that everything works, you can run:

aws ec2 describe-instances

Now we need to create an S3 bucket. This is where KOPS will store all our configuration files and keys for the Kubernetes deployment.
To create the bucket, then list your buckets to check it exists, run:

aws s3api create-bucket --bucket techful-kops-cluster01

aws s3 ls

Note that outside of us-east-1 you may need to add --create-bucket-configuration LocationConstraint=<your-region> to the create-bucket call.

Add the bucket to your .envrc:

echo "export KOPS_STATE_STORE=s3://techful-kops-cluster01" >> .envrc
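Since this bucket holds the entire cluster state, it is also a good idea (though optional) to enable S3 versioning on it, so that previous versions of the cluster configuration can be recovered. A small sketch, assuming the same bucket name as above:

```shell
# Enable versioning on the KOPS state store bucket so earlier
# versions of the cluster configuration can be restored if needed.
aws s3api put-bucket-versioning \
  --bucket techful-kops-cluster01 \
  --versioning-configuration Status=Enabled

# Confirm that versioning is now enabled on the bucket.
aws s3api get-bucket-versioning --bucket techful-kops-cluster01
```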

Deploy the cluster

All right, we are now ready to test KOPS.
My AWS account has a default configuration (1 VPC, 3 subnets, 1 Internet Gateway, no NAT Gateway) with a public domain: techfulab.co.uk.

In this tutorial we will use KOPS to deploy all of the infrastructure (VPC, subnets, etc.), so your current configuration does not really matter.
However, you could also deploy your cluster in a different configuration (public/private subnets, internal/public DNS zone, etc.) with additional options (see annex).

List the content of the bucket:

aws s3 ls techful-kops-cluster01/kops-cluster01.techfulab.co.uk/

If required, delete the content of the bucket:

aws s3 rm s3://techful-kops-cluster01 --recursive

You can build a very simple cluster, with 1 master and 1 node, using the following command:

kops create cluster --zones=eu-west-1a --master-zones=eu-west-1a --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=1 --node-size=t2.small kops-cluster01.techfulab.co.uk

Or a highly available cluster with 3 masters and 3 nodes:

kops create cluster --zones=eu-west-1a,eu-west-1b,eu-west-1c --master-zones=eu-west-1a,eu-west-1b,eu-west-1c --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=3 --node-size=t2.small kops-cluster01.techfulab.co.uk
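The HA command above packs a lot of flags into one line. The same command, split over several lines with the key flags annotated, may be easier to read:

```shell
# Same HA command as above, split for readability.
# --zones:        availability zones for the worker nodes
# --master-zones: availability zones for the masters (must be an odd
#                 number, so etcd can keep a quorum)
# --dns-zone:     the Route53 hosted zone used for the cluster records
kops create cluster \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --dns-zone=techfulab.co.uk \
  --master-size=t2.small \
  --node-count=3 \
  --node-size=t2.small \
  kops-cluster01.techfulab.co.uk
```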

These commands will create the configuration files in the S3 bucket, in a folder named after the cluster.

At this point the cluster is not deployed yet; KOPS has only created the config files.
Here is the list of commands you can run:

- List clusters with: kops get cluster

- Edit this cluster with: kops edit cluster kops-cluster01.techfulab.co.uk
- Edit your node instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk nodes
- Edit your master instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

- View the changes to apply: kops update cluster kops-cluster01.techfulab.co.uk

And finally, to deploy your cluster, run the following command:

kops update cluster kops-cluster01.techfulab.co.uk --yes

KOPS will deploy many resources: VPC, subnets, route tables, security groups, IAM roles, Route53 records, etc.

Once your cluster is deployed, KOPS generates the kubeconfig, so you will be able to interact with the cluster using kubectl.
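Bootstrapping takes a few minutes. One way to wait for the cluster (a small sketch, not a KOPS feature) is to poll kubectl until the expected number of nodes report Ready; here we assume the simple 1 master + 1 node setup, so 2 nodes in total:

```shell
# Poll until both nodes (1 master + 1 worker) report a Ready status.
while [ "$(kubectl get nodes 2>/dev/null | grep -c ' Ready')" -lt 2 ]; do
  echo "Waiting for nodes to become Ready..."
  sleep 10
done

# Show the final node list once the cluster is up.
kubectl get nodes
```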

To see your kubectl config:

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all cluster configured in the config file : kubectl config get-clusters
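If you manage several clusters from the same machine, you can switch between them with kubectl config use-context. For example, to point kubectl at the cluster we just created (KOPS names the context after the cluster):

```shell
# Make kubectl target the KOPS-created cluster.
kubectl config use-context kops-cluster01.techfulab.co.uk

# Confirm the switch.
kubectl config current-context
```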

To interact with your Kubernetes cluster:

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

This last command should show you something like:

⇒ kubectl get pods --namespace kube-system
NAME                                                                  READY     STATUS    RESTARTS   AGE
dns-controller-2150731133-3higs                                       1/1       Running   0          28m
etcd-server-events-ip-172-20-61-150.eu-west-1.compute.internal        1/1       Running   0          28m
etcd-server-ip-172-20-61-150.eu-west-1.compute.internal               1/1       Running   0          28m
kube-apiserver-ip-172-20-61-150.eu-west-1.compute.internal            1/1       Running   3          28m
kube-controller-manager-ip-172-20-61-150.eu-west-1.compute.internal   1/1       Running   0          28m
kube-dns-v20-3531996453-04yiw                                         3/3       Running   0          28m
kube-dns-v20-3531996453-uxl3m                                         3/3       Running   0          28m
kube-proxy-ip-172-20-42-253.eu-west-1.compute.internal                1/1       Running   0          27m
kube-proxy-ip-172-20-61-150.eu-west-1.compute.internal                1/1       Running   0          28m
kube-scheduler-ip-172-20-61-150.eu-west-1.compute.internal            1/1       Running   0          28m

These are all the pods that Kubernetes runs for its own internal operations.

A good next step is to install the Kubernetes dashboard, which gives you a graphical tool to view and configure the cluster. To install the dashboard, run the following command:

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Once it's done, if you re-run the "get pods" command, you should see a new system pod:

kubernetes-dashboard-3203831700-scmox

Now, to access the dashboard, run the following command:

kubectl proxy
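kubectl proxy serves the Kubernetes API on localhost port 8001, and with the dashboard add-on of that era the UI should then be reachable under /ui (a redirect provided by the dashboard service). For example:

```shell
# Start the proxy in the background on the default port.
kubectl proxy --port=8001 &

# The dashboard should then be reachable at http://localhost:8001/ui
open http://localhost:8001/ui   # "open" is macOS-specific; otherwise browse to the URL
```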

Update your cluster

Once your cluster is deployed, you will still be able to manage its lifecycle using KOPS.

You can modify three things:

The cluster configuration:

kops edit cluster kops-cluster01.techfulab.co.uk

The node configuration:

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

The master configuration:

kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

Here, for example, we will update the number of nodes (from 1 to 2):

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

Update the following:
 maxSize: 2
 minSize: 2
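For context, these fields live in the instance group spec that kops edit ig opens in your editor. The relevant part looks roughly like this (the surrounding values here are illustrative, matching the t2.small single-zone setup from this tutorial):

```yaml
spec:
  machineType: t2.small   # instance size for the nodes
  maxSize: 2              # upper bound of the node Auto Scaling Group
  minSize: 2              # lower bound of the node Auto Scaling Group
  zones:
  - eu-west-1a
```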

Once your config file is saved, you can see the changes that would be applied by a cluster update using the following command:

kops update cluster kops-cluster01.techfulab.co.uk

Then, to apply the changes, simply run:

kops update cluster kops-cluster01.techfulab.co.uk --yes

KOPS will deploy a new node and update all the necessary configuration.

Destroy the cluster

When you are done, you can destroy the cluster with KOPS as well. The first command previews the resources that would be deleted; adding --yes actually deletes them:

kops delete cluster kops-cluster01.techfulab.co.uk
kops delete cluster kops-cluster01.techfulab.co.uk --yes

Conclusion

KOPS makes Kubernetes deployment and updates on AWS very easy. However, some advanced features are not yet supported (such as CNI networking plugins or CoreOS), which makes KOPS difficult to use in those situations.

Annex

Usage:
  kops create cluster [flags]

Flags:
  --admin-access string         Restrict access to admin endpoints (SSH, HTTPS) to this CIDR. If not set, access will not be restricted by IP.
  --associate-public-ip         Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'. (default true)
  --channel string              Channel for default versions and configuration to use (default "stable")
  --cloud string                Cloud provider to use - gce, aws
  --dns-zone string             DNS hosted zone to use (defaults to last two components of cluster name)
  --image string                Image to use
  --kubernetes-version string   Version of kubernetes to run (defaults to version in channel)
  --master-size string          Set instance size for masters
  --master-zones string         Zones in which to run masters (must be an odd number)
  --model string                Models to apply (separate multiple models with commas) (default "config,proto,cloudup")
  --network-cidr string         Set to override the default network CIDR
  --networking string           Networking mode to use. kubenet (default), classic, external. (default "kubenet")
  --node-count int              Set the number of nodes
  --node-size string            Set instance size for nodes
  --out string                  Path to write any local output
  --project string              Project to use (must be set on GCE)
  --ssh-public-key string       SSH public key to use (default "~/.ssh/id_rsa.pub")
  --target string               Target - direct, terraform (default "direct")
  --vpc string                  Set to use a shared VPC
  --yes                         Specify --yes to immediately create the cluster
  --zones string                Zones in which to run the cluster