Code, Deploy and manage your first App on Kubernetes

Introduction

Creating a k8s cluster is a nice thing, but it's better to actually use it 🙂
Here is a simple tutorial where I demonstrate how to quickly code, containerise, deploy and test an app.

Code the application

For this example we'll use a simple HTTP server written in Python.

#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):

    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("Hello World - This is http python node test app v1 !")
        return

try:
    # Create a web server and define the handler to manage
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER

    # Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()

Source : http://www.acmesystems.it/python_httpd

Test your app

Now that we have our application, we can test it locally :

python httpython.py
Started httpserver on port 8080

Result

curl http://localhost:8080
Hello World - This is http python node test app v1 !%

It works !

Create the container

Now, in order to run our application on a Kubernetes cluster, we first need to package our code in a container image.

Create the Dockerfile

To build the container image, we describe it in a Dockerfile :

 

FROM python:2-onbuild
EXPOSE 8080
COPY httpython.py .
ENTRYPOINT [ "python", "./httpython.py" ]

FROM : specifies the source image name (https://store.docker.com)
EXPOSE : makes the container listen on the specified port
COPY : copies a file to a destination (COPY src dst)
ENTRYPOINT : specifies the executable to run when you start the container

Create a requirements file

The python:2-onbuild base image expects a requirements.txt in the build context (it runs pip install on it), so create an empty one :

touch requirements.txt

Build your docker container

docker build -t gcr.io/techfulab-testk8s-01/hello-py-node:v1 .

-t, --tag value : Name and optionally a tag in the 'name:tag' format (default [])

The image name breaks down as follows :

  • gcr.io = Google Container Registry
  • techfulab-testk8s-01 = name of your project on GCP
  • hello-py-node = name of the image
  • v1 = version number/id

At the moment the image is only built locally, but this naming will help us later.

Test locally

docker run -d -p 8080:8080 --name hello_tutorial gcr.io/techfulab-testk8s-01/hello-py-node:v1
⇒ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e483b5998d71 gcr.io/techfulab-testk8s-01/hello-py-node:v1 "python ./httpython.p" 5 seconds ago Up 3 seconds 0.0.0.0:8080->8080/tcp hello_tutorial

Test with :

curl http://localhost:8080
Hello World - This is http python node test app v1 !%

It works !

Push to Google Container Registry

gcloud docker -- push gcr.io/techfulab-testk8s-01/hello-py-node:v1
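To double-check that the image landed in the registry, newer gcloud SDKs can list it (the container images command group is an assumption about your SDK version) :

gcloud container images list --repository=gcr.io/techfulab-testk8s-01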

Deploy your application on Kubernetes

Now that our application is containerised and pushed to Google Container Registry, we can deploy it to Kubernetes.

Create namespace

kubectl create namespace "team-01"

Change your config to use this namespace (rather than putting --namespace team-01 at the end of every command) :

kubectl config get-contexts
kubectl config set-context gke_techfulab-testk8s-01_europe-west1-d_techful-kops-cluster01 --namespace team-01
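You can confirm the namespace attached to the current context (--minify limits the output to the active context) :

kubectl config view --minify | grep namespace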

Deploy the application :

kubectl run hello-py-app --image=gcr.io/techfulab-testk8s-01/hello-py-node:v1 --port=8080 --replicas=2 --namespace team-01

Check status :

watch -n 1 kubectl get deployments
watch -n 1 kubectl get pods
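If a pod stays stuck in a non-Running state, describing the deployment and its pods usually shows why. This is a generic debugging step; I'm assuming the run=hello-py-app label, which is the one kubectl run sets by default :

kubectl describe deployment hello-py-app
kubectl describe pods -l run=hello-py-app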

Expose it :

kubectl expose deployment hello-py-app --type="LoadBalancer"

Check status :

watch -n 1 kubectl get services

Retrieve the public IP of the service and test it :

watch -n 1 curl http://$PUBLIC_IP:8080
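Rather than copying the external IP by hand, you can extract it with jsonpath once the LoadBalancer has finished provisioning :

PUBLIC_IP=$(kubectl get service hello-py-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$PUBLIC_IP:8080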

Modify your deployment – Change the number of replicas

kubectl scale deployment hello-py-app --replicas=4

Modify your deployment – Edit the deployment file

kubectl edit deployment/hello-py-app

Modify your deployment – Update the version of your app

Edit the local code (httpython.py) and change the message to display

...
# Send the html message
self.wfile.write("Hello World - This is http python node test app v2 !")
return
...

Build a new version of the container

docker build -t gcr.io/techfulab-testk8s-01/hello-py-node:v2 .

Test it locally
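If the v1 container from the earlier local test is still around, its name and port will clash with this run, so remove it first (assuming you kept the hello_tutorial container from before) :

docker rm -f hello_tutorial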

docker run -d -p 8080:8080 --name hello_tutorial gcr.io/techfulab-testk8s-01/hello-py-node:v2
curl http://localhost:8080


Hello World - This is http python node test app v2 !%

Push to Google Container Registry

gcloud docker -- push gcr.io/techfulab-testk8s-01/hello-py-node:v2

Update the version deployed

kubectl set image deployment/hello-py-app hello-py-app=gcr.io/techfulab-testk8s-01/hello-py-node:v2
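You can follow the rolling update while it happens :

kubectl rollout status deployment/hello-py-app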

Rollback

kubectl set image deployment/hello-py-app hello-py-app=gcr.io/techfulab-testk8s-01/hello-py-node:v1
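Alternatively, the deployment keeps a revision history, so you can roll back without naming the image explicitly :

kubectl rollout history deployment/hello-py-app
kubectl rollout undo deployment/hello-py-app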

Delete your application

kubectl delete service,deployment hello-py-app

Documentation/sources :

https://kubernetes.io/docs/hellonode/#create-a-docker-container-image
https://kubernetes.io/docs/user-guide/kubectl/kubectl_run/

Part 3 – Deploy kubernetes on GCP with Google Container Engine (aka GKE)

Introduction

Since Kubernetes was created by Google it seems natural to run it on GCP.
And indeed, running a K8S cluster in GCP is very straightforward !
To avoid confusion between terms :

  • GCP : Google Cloud Platform
  • GCE : Google Compute Engine
  • GKE : Google Container Engine

You can go through the console, but here we will use the gcloud SDK.

Pre-requisites

  • A GCP Account
  • Install gcloud SDK

First you’ll need to configure the CLI to access your gcloud account and project. To do so, use the following command :

gcloud init

Follow the gcloud steps to configure your account. Once it’s done you can test your configuration by listing your running instances :

gcloud compute instances list

And that’s it !

Deploy the cluster

Since Kubernetes is natively supported by GCP, we don’t need any other tools. gcloud container will deploy a pure K8S cluster on top of Google Compute Engine (virtual machines/instances) and let us manage it with the native kubectl tool.

To Deploy a simple cluster run the following :

gcloud container clusters create "techful-kops-cluster01" --zone "europe-west1-d" --machine-type "custom-1-1024" --image-type "GCI" --disk-size "100"  --num-nodes "2" --network "default" --enable-cloud-logging --no-enable-cloud-monitoring

The options are self-explanatory. However, note that you could use several zones with :

--additional-zones "europe-west1-b"

The only two images available are :

  • GCI
  • Container VM

More info on images here

Once the cluster is running you’ll be able to see the instances launched by GKE

gcloud compute instances list

NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-techful-kops-cluster-default-pool-86c719c1-8135 europe-west1-d custom (1 vCPU, 1.00 GiB) 10.132.0.2 35.187.20.58 RUNNING
gke-techful-kops-cluster-default-pool-86c719c1-zcln europe-west1-d custom (1 vCPU, 1.00 GiB) 10.132.0.3 35.187.31.62 RUNNING

Again here, gcloud exports all the necessary information to your kubectl config :

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all clusters configured in the config file : kubectl config get-clusters
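If you need the kubectl credentials on another machine (cluster create only configures the machine it ran on), gcloud can fetch them explicitly :

gcloud container clusters get-credentials techful-kops-cluster01 --zone europe-west1-d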

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

As you can see, on GKE the dashboard is installed by default :

⇒ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
fluentd-cloud-logging-gke-techful-kops-cluster-default-pool-8f6ae37d-krc7 1/1 Running 0 2m
fluentd-cloud-logging-gke-techful-kops-cluster-default-pool-8f6ae37d-tv05 1/1 Running 0 22m
heapster-v1.2.0-1912761325-zt3j9 2/2 Running 0 7m
kube-dns-v20-0aihy 3/3 Running 0 7m
kube-dns-v20-xu3ci 3/3 Running 0 21m
kube-proxy-gke-techful-kops-cluster-default-pool-8f6ae37d-krc7 1/1 Running 0 2m
kube-proxy-gke-techful-kops-cluster-default-pool-8f6ae37d-tv05 1/1 Running 0 22m
kubernetes-dashboard-v1.4.0-04b55 1/1 Running 0 7m
l7-default-backend-v1.0-3asdt 1/1 Running 0 21m

To access the dashboard simply run the proxy command

kubectl proxy

and browse to :

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

Resize cluster

If you want to change the number of nodes in your cluster, use the resize command :

gcloud container clusters resize "techful-kops-cluster01" --size=3 --zone "europe-west1-d"

The fun thing is that you can resize your cluster to 0 nodes. It will kill the machines but keep your cluster configuration, so that you can relaunch it as-is later.
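For example, to park the cluster without losing its definition :

gcloud container clusters resize "techful-kops-cluster01" --size=0 --zone "europe-west1-d"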

Documentation on the command here

Delete the cluster

Finally, to entirely delete your cluster run the following :

gcloud container clusters delete "techful-kops-cluster01" --zone "europe-west1-d"

Conclusion

Running a k8s cluster on GCP with GKE is very easy. However, you won’t have the same level of configuration that you get with more custom solutions (such as KOPS or even a manual install), but you will benefit from the stability of the platform, automatic updates and so on.

Part 2 – Deploy kubernetes on AWS with KOPS

Introduction

KOPS is an open-source tool written in Go that helps us deploy and maintain our Kubernetes cluster. The source code is available here and the official k8s doc here.

It really started in May 2016, with a first beta release on the 8th of September 2016. So it’s a very young tool, but with a good community and support (slack channel).

Pre-Requisite

To test KOPS, all you will need is :

  • An AWS account
  • An IAM user allowed to create resources and use the S3 bucket

The first step is to install KOPS. On a Mac, simply use brew :

brew update && brew install kops

At the time I am writing this, my kops version is 1.4.3.

Because I am using different AWS accounts, I like to use the direnv tool to switch between them (https://direnv.net/).
Therefore, create a .envrc file with the following environment variables :

export AWS_DEFAULT_REGION=eu-west-1
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxx

To use your .envrc file run :

direnv allow .

To test that everything works, you can run :

aws ec2 describe-instances

Now we need to create an S3 bucket; this is where KOPS will store all our configuration files and keys for the Kubernetes deployment.
To create the bucket run :

aws s3api create-bucket --bucket techful-kops-cluster01
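Note that with a default region other than us-east-1, S3 may reject this call unless you pass the location explicitly. A variant of the same command (whether you need it depends on your CLI version) :

aws s3api create-bucket --bucket techful-kops-cluster01 --create-bucket-configuration LocationConstraint=eu-west-1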

aws s3 ls

Add the bucket to your .envrc

echo "export KOPS_STATE_STORE=s3://techful-kops-cluster01" >> .envrc

Deploy the cluster

All right, we are now ready to test KOPS.
My AWS account has a default configuration (1 VPC, 3 subnets, 1 internet gateway, no NAT gateway) with a public domain : techfulab.co.uk.

In this tutorial we will use KOPS to deploy all the infrastructure (VPC, subnets, etc.), so your current config does not really matter.
However, you could also deploy your cluster in different configurations (public/private subnets, internal/public DNS zones, etc.) with additional options (see the annex).

List the content of the bucket

aws s3 ls techful-kops-cluster01/kops-cluster01.techfulab.co.uk/

If required – Delete the content of the bucket

aws s3 rm s3://techful-kops-cluster01 --recursive

You can build a very simple 1-master, 1-node cluster with the following command :

kops create cluster --zones=eu-west-1a --master-zones=eu-west-1a --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=1 --node-size=t2.small kops-cluster01.techfulab.co.uk

Or a highly available cluster with 3 masters and 3 nodes :

kops create cluster --zones=eu-west-1a,eu-west-1b,eu-west-1c --master-zones=eu-west-1a,eu-west-1b,eu-west-1c --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=3 --node-size=t2.small kops-cluster01.techfulab.co.uk

These commands will create the configuration files in the S3 bucket, in a folder named after the cluster.

At this point the cluster is not deployed; kops has only created the config files.
Here is the list of commands you can run :

- List clusters with: kops get cluster

- Edit this cluster with: kops edit cluster kops-cluster01.techfulab.co.uk
- Edit your node instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk nodes
- Edit your master instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

- View the changes to apply : kops update cluster kops-cluster01.techfulab.co.uk

And finally, to deploy your cluster, run the following command :

kops update cluster kops-cluster01.techfulab.co.uk --yes

Many resources will be deployed by KOPS : VPC, subnets, route tables, security groups, IAM roles, Route53 records, etc.

Once your cluster is deployed, KOPS generates the kubeconfig, so you will be able to interact with the cluster using kubectl.
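Before going further you can check that the cluster has actually converged; recent kops releases ship a validate command (check that your version has it) :

kops validate cluster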

To see your kubectl config

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all clusters configured in the config file : kubectl config get-clusters

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

This last command should show you :

⇒ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
dns-controller-2150731133-3higs 1/1 Running 0 28m
etcd-server-events-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
etcd-server-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-apiserver-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 3 28m
kube-controller-manager-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-dns-v20-3531996453-04yiw 3/3 Running 0 28m
kube-dns-v20-3531996453-uxl3m 3/3 Running 0 28m
kube-proxy-ip-172-20-42-253.eu-west-1.compute.internal 1/1 Running 0 27m
kube-proxy-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-scheduler-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m

These are all the pods that Kubernetes runs for its own internal operations.

A good next step is to install the Kubernetes dashboard, a graphical tool to view and configure the cluster. To install the dashboard, run the following command :

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Once it’s done, if you re-run the “get pods” command you should see a new system pod :

kubernetes-dashboard-3203831700-scmox

Now to access the dashboard run the following command

kubectl proxy

Update your cluster

Once your cluster is deployed, you will still be able to manage its lifecycle using KOPS.

You can modify three things :

The cluster configuration :

kops edit cluster kops-cluster01.techfulab.co.uk

The nodes configuration :

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

The master configuration :

kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

And to view the changes to apply :

kops update cluster kops-cluster01.techfulab.co.uk

Here, for example, we will update the number of nodes (from 1 to 2) :

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

Update the following :
 maxSize: 2
 minSize: 2

Once your config file is saved, you can see the changes that would be applied by a cluster update using the following command :

kops update cluster kops-cluster01.techfulab.co.uk

Then to apply the changes simply run :

kops update cluster kops-cluster01.techfulab.co.uk --yes

KOPS will deploy a new node and update all necessary configuration.

Destroy the cluster

The first command previews the resources that would be deleted; adding --yes actually deletes them :

kops delete cluster kops-cluster01.techfulab.co.uk
kops delete cluster kops-cluster01.techfulab.co.uk --yes

Conclusion

KOPS makes Kubernetes deployment and updates on AWS very easy. However, some advanced functionalities (like CNI networking or CoreOS) are not yet supported, making it difficult to use in those situations.

Annex

Usage:
kops create cluster [flags]

Flags:
--admin-access string Restrict access to admin endpoints (SSH, HTTPS) to this CIDR. If not set, access will not be restricted by IP.
--associate-public-ip Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'. (default true)
--channel string Channel for default versions and configuration to use (default "stable")
--cloud string Cloud provider to use - gce, aws
--dns-zone string DNS hosted zone to use (defaults to last two components of cluster name)
--image string Image to use
--kubernetes-version string Version of kubernetes to run (defaults to version in channel)
--master-size string Set instance size for masters
--master-zones string Zones in which to run masters (must be an odd number)
--model string Models to apply (separate multiple models with commas) (default "config,proto,cloudup")
--network-cidr string Set to override the default network CIDR
--networking string Networking mode to use. kubenet (default), classic, external. (default "kubenet")
--node-count int Set the number of nodes
--node-size string Set instance size for nodes
--out string Path to write any local output
--project string Project to use (must be set on GCE)
--ssh-public-key string SSH public key to use (default "~/.ssh/id_rsa.pub")
--target string Target - direct, terraform (default "direct")
--vpc string Set to use a shared VPC
--yes Specify --yes to immediately create the cluster
--zones string Zones in which to run the cluster

Part 1 – Test Kubernetes with minikube

Introduction

The first part of this series of articles starts with the easiest way to test K8S. Minikube is a tool that deploys Kubernetes locally on your machine, inside a virtual machine running on VirtualBox.

Pre-requisite

  • VirtualBox
  • A Linux or Mac OS system

First we will have to download the minikube binary :

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Deploy

As I said before, minikube is very easy; all you need to do to deploy your cluster is run :

⇒ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

It will take a couple of minutes to get ready. You can see in your VirtualBox console that a minikube machine has been created. You can track the deployment of your k8s cluster with the following command :

watch kubectl get pods --namespace kube-system

It will show you the deployment status of the different pods necessary for k8s to run. Once it shows the following, your cluster is ready :

Every 2.0s: kubectl get pods --namespace kube-system HOLBMAC0404: Tue Jan 24 15:04:39 2017

NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 21m
kube-dns-v20-bsdcc 3/3 Running 0 16m
kubernetes-dashboard-7zm9h 1/1 Running 0 17m

As you can see, by default minikube runs the DNS and Dashboard services.
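You can also check the VM and cluster state with minikube’s own status command :

minikube status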

You can see your kubectl config :

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all clusters configured in the config file : kubectl config get-clusters

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

To access the dashboard simply run the proxy command

kubectl proxy

and browse to :

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

Or you could simply run the following :

 minikube dashboard
Opening kubernetes dashboard in default browser...

Stop/delete your cluster

If you want to reuse your cluster later :

minikube stop

Otherwise delete your cluster (and the virtual machine)

minikube delete

Conclusion

Minikube is a very easy and efficient tool to test k8s locally, and is also great for a local test/dev environment. However, because it runs locally, it can be very slow with complex applications and impact your own computer’s performance.

3 ways to deploy a kubernetes POC

Kubernetes (aka k8s) is usually described as a container manager, meaning that it is used to run and delete containers on top of any infrastructure (physical, virtual, cloud).
Described like this it seems pretty simple and boring : “Yeah, another management layer …”
But when you look at how it works, why it is used, and what new functionalities and concepts it brings, things definitely get more interesting.

I’ve seen virtualisation change the way we use physical hardware, I’ve seen the cloud change how we think about infrastructure, and I believe containerisation as a global concept will change how we run applications.

It’s the missing part and the logical evolution of IT. Kubernetes (or others) brings what is missing to the cloud revolution : an abstraction layer on top of any infrastructure that will help organisations treat physical and/or cloud providers as one global resource.

Kubernetes is a very new technology : it was released as stable version 1.0 on July 21, 2015. It’s today at version 1.4, with a very active community and a lot of enthusiasm from IT pros.
To understand how things work, I usually like to get hands-on and play with the solution to understand the different functionalities and concepts.
In this series of articles we will see three different ways to test k8s, locally, on AWS and GCP.

Part 1 – Test kubernetes locally with Minikube
Part 2 – Deploy a kubernetes POC on AWS with KOPS
Part 3 – Deploy a kubernetes POC on Google Cloud Platform with Google Container Engine

Extend corporate local network to Azure Cloud

Cloud is more and more common in companies’ strategies. However, having a cloud completely isolated from your corporate network can be frustrating. Connecting your cloud tenant to your local network lets you use your cloud environment much more easily and efficiently. Your cloud can really become your test/dev environment, heavy-workload platform or disaster recovery solution. Connecting it to your network will really benefit your IT.

In this article I’ll show you how to extend your corporate local network to Microsoft Azure cloud infrastructure.

To do so we’ll need a gateway. It can be dedicated hardware (see the compatibility list here : http://msdn.microsoft.com/en-us/library/azure/jj156075.aspx#bkmk_VPNDevice) or a server with the RRAS (Routing and Remote Access Service) role.

In this tutorial we will use the second option (RRAS server).

What we need :

  • An Azure account
  • A corporate network with active directory/DNS server
  • A brand new server for RRAS role
  • Know your public IP
  • Access to your router or to your DHCP server configuration
  • And finally a coffee machine 🙂

 

Here is the network architecture we will build.

AzureCo01

Create an affinity group

Creating an affinity group allows us to place all our networks and virtual machines in the same logical place.

AzureCo02

Name your affinity group and select your region

AzureCo03

Configure networks

Then we will define our local network. The local network refers to the corporate infrastructure.

AzureCo04

Enter your corporate public IP

AzureCo05

Enter your corporate network address and CIDR

AzureCo06

The next step is to define the virtual network.

This virtual network will be the Azure network.

Go to networks and create it

AzureCo07

Name your Azure network. In the screenshot below I named it “AzurePublicIP”, which is not a great idea because the logical network is composed of the public IP + local subnet. For better comprehension, it would be better to name it “AzureNetwork”.

AzureCo08

Create the desired subnet

AzureCo09

Then click on add gateway subnet

AzureCo10

Enter your corporate DNS and select your corporate Network

AzureCo11

 

Connect to Azure

Wait for the virtual network creation. Then click on the network name to access the dashboard

AzureCo12

Create the gateway

AzureCo13

Wait for the creation; it can take a couple of minutes.

AzureCo14

When it’s done, download the VPN agent script.

AzureCo15

Copy the script to your RRAS server (here it’s my 192.168.1.191).

Change your execution policy and rename the script to *.ps1

AzureCo16

Execute the script.

It will automatically install all pre-requisite roles and configure the desired connection

AzureCo17

When it’s done, open the “Routing and Remote Access” console and click Connect on the connection to the Azure network.

AzureCo18

It will dial, and you’ll see the connection UP in the Azure portal.

AzureCo19

Create an Azure instance

Now that we have our local network connected to our Azure tenant, let’s test it by creating a virtual machine in the cloud.

AzureCo20

Select from gallery to be able to configure the proper subnet

AzureCo21

AzureCo22 AzureCo23

Here I chose the Basic tier.

The Standard tier is a little more expensive but offers better performance.

Standard :

The Standard tier of compute instances provides an optimal set of compute, memory and IO resources for running a wide array of applications. These instances include both auto-scaling and load balancing capabilities.

See price calculator :

http://azure.microsoft.com/en-us/pricing/calculator/?scenario=virtual-machines

AzureCo24

Select the AzureNetwork to have your instance connected to the 192.168.2.0 subnet

AzureCo25

AzureCo26

Wait for the virtual machine to be ready.

AzureCo27

Click on Connect, then save or directly open the RDP file.

AzureCo28

Use the credentials configured during the deployment

AzureCo29

Update the firewall security rules to allow ping responses :

Import-Module NetSecurity
Set-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" -Enabled True

AzureCo30

To reach this subnet from your local network, you’ll have to add a route to 192.168.2.0/24.

You have several ways to do so :

  • Directly on your router
  • Pushing the route using GPO
  • Pushing the route using DHCP

AzureCo31

  • Adding the route manually

AzureCo32
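For the manual option on a Windows client, the command looks like this (a sketch : 192.168.1.191 is assumed to be the RRAS server acting as gateway, and -p makes the route persistent) :

route -p add 192.168.2.0 mask 255.255.255.0 192.168.1.191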

So now we are able to contact our corporate servers from Azure, and vice versa.

AzureCo33

We are also able to add the instance to the domain, to manage it like a traditional server.

AzureCo34

It’s done. Our Azure tenant is now completely reachable from the corporate network. It can easily be used for workloads, tests or a disaster recovery site.

Cost

The VM is €0.056/hr and the virtual router is €0.03/hr.
On average a month is 730.5 hours, so the cost should be : 730.5 x (0.056 + 0.03) = €62.82/month.

Then you’ll also have to pay for outbound data, but 5 GB are included each month.

 

I hope it’ll help you.

Sources :

http://msdn.microsoft.com/en-us/library/dn636917.aspx