Code, Deploy and manage your first App on Kubernetes

Introduction

Creating a k8s cluster is a nice thing but it’s better to use it 🙂
Here is a simple tutorial where I demonstrate how to quickly code, containerise, deploy and test an app.

Code the application

For this example we’ll use a simple http server written in Python.

#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):

    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("Hello World - This is http python node test app v1 !")
        return

try:
    # Create a web server and define the handler to manage incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER

    # Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()

Source : http://www.acmesystems.it/python_httpd

Test your app

Now that we have our application we can test it locally

python httpython.py
Started httpserver on port 8080

Result

curl http://localhost:8080
Hello World - This is http python node test app v1 !%

It works !

Create the container

Now, in order to run our application on a Kubernetes cluster, we first need to build a container image that contains our code.

Create the Dockerfile

To create the container image we define it in a Dockerfile:

 

FROM python:2-onbuild
EXPOSE 8080
COPY httpython.py .
ENTRYPOINT [ "python", "./httpython.py" ]

FROM : specify the base image. https://store.docker.com
EXPOSE : document the port the container listens on
COPY : copy a file to a destination (copy src dst)
ENTRYPOINT : specify the executable to run when you start the container

Create a requirements file (the python:2-onbuild base image expects one, even if it is empty):

touch requirements.txt

Build your docker container

docker build -t gcr.io/techfulab-testk8s-01/hello-py-node:v1 .

-t, --tag value : name and optionally a tag in the 'name:tag' format (default [])

Name your image as follows:

  • gcr.io : the Google Container Registry host
  • techfulab-testk8s-01 : the name of your project on GCP
  • hello-py-node : the name of the image
  • v1 : the version tag

At the moment the image is only built locally, but this naming will let us push it to the registry later.
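
As a side note, if you had already built the image under a plain local name, re-tagging it for GCR is a single command (a sketch, assuming the same project and image names as above):

docker tag hello-py-node:v1 gcr.io/techfulab-testk8s-01/hello-py-node:v1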

Test locally

docker run -d -p 8080:8080 --name hello_tutorial gcr.io/techfulab-testk8s-01/hello-py-node:v1
⇒ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e483b5998d71 gcr.io/techfulab-testk8s-01/hello-py-node:v1 "python ./httpython.p" 5 seconds ago Up 3 seconds 0.0.0.0:8080->8080/tcp hello_tutorial

Test with :

curl http://localhost:8080
Hello World - This is http python node test app v1 !%

It works !

Push to Google Container Registry

gcloud docker -- push gcr.io/techfulab-testk8s-01/hello-py-node:v1
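
If you want to check that the push succeeded, recent versions of the gcloud SDK can list the tags stored in the registry (this sub-command may not exist in older SDKs, so treat it as optional):

gcloud container images list-tags gcr.io/techfulab-testk8s-01/hello-py-node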

Deploy your application on Kubernetes

Now that we have our application containerised and pushed to Google Container Registry, we can deploy it to Kubernetes.

Create namespace

kubectl create namespace "team-01"

Change your config to use this namespace (rather than putting --namespace team-01 at the end of every command):

kubectl config get-contexts
kubectl config set-context gke_techfulab-testk8s-01_europe-west1-d_techful-kops-cluster01 --namespace team-01

Deploy the application :

kubectl run hello-py-app --image=gcr.io/techfulab-testk8s-01/hello-py-node:v1 --port=8080 --replicas=2 --namespace team-01
Check status :
watch -n 1 kubectl get deployments
watch -n 1 kubectl get pods

Expose it :

kubectl expose deployment hello-py-app --type="LoadBalancer"

Check status :

watch -n 1 kubectl get services

Retrieve the public IP of the service and test it:

watch -n 1 curl http://$PUBLIC_IP:8080
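
If you don't want to copy the public IP by hand, one way to grab it (a sketch, assuming the service is named hello-py-app as created above) is:

PUBLIC_IP=$(kubectl get service hello-py-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
watch -n 1 curl http://$PUBLIC_IP:8080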

Modify your deployment – Change the number of replica

kubectl scale deployment hello-py-app --replicas=4

Modify your deployment – Edit the deployment file

kubectl edit deployment/hello-py-app

Modify your deployment – Update the version of your app

Edit the local code (httpython.py) and change the message to display

...
        # Send the html message
        self.wfile.write("Hello World - This is http python node test app v2 !")
        return
...

Build a new version of the container

docker build -t gcr.io/techfulab-testk8s-01/hello-py-node:v2 .

Test it locally

docker run -d -p 8080:8080 --name hello_tutorial gcr.io/techfulab-testk8s-01/hello-py-node:v2
curl http://localhost:8080


Hello World - This is http python node test app v2 !%

Push to Google Container Registry

gcloud docker -- push gcr.io/techfulab-testk8s-01/hello-py-node:v2

Update the version deployed

kubectl set image deployment/hello-py-app hello-py-app=gcr.io/techfulab-testk8s-01/hello-py-node:v2

(The container is named hello-py-app, after the deployment created by kubectl run.)
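
You can watch the rolling update progress with the standard rollout command (optional, but handy):

kubectl rollout status deployment/hello-py-app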

Rollback

kubectl set image deployment/hello-py-app hello-py-app=gcr.io/techfulab-testk8s-01/hello-py-node:v1
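
As an alternative to setting the image back explicitly, kubectl also keeps a rollout history, so the following should achieve the same result (a sketch using the standard rollout sub-commands):

kubectl rollout history deployment/hello-py-app
kubectl rollout undo deployment/hello-py-app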

Delete your application

kubectl delete service,deployment hello-py-app

Documentation/sources :

https://kubernetes.io/docs/hellonode/#create-a-docker-container-image
https://kubernetes.io/docs/user-guide/kubectl/kubectl_run/

Docker snippet

List old docker containers :

 docker ps --no-trunc -aq

Delete them :

docker rm `docker ps --no-trunc -aq`
Delete an image :

docker rmi climz/apache

or with the ID :

docker rmi f8bfe5f3d6e8

Run docker with a tty :

docker run -i -t ubuntu /bin/bash

Run a docker container interactively with a port redirect :

docker run -i -p 8000:80 -t djangosrv/latest /bin/bash

Create an image from a running container :

docker commit -m "Create nginx, uwsgi, supervisord server" -a "Julien" 3d1645041d69 climz/djangosrv:v1

Dockerfile example

# This is a comment
FROM climz/djangosrv:v2
MAINTAINER Julien
RUN apt-get update && apt-get install
ADD supervisord.conf /home/bada/mysite/supervisord.conf
CMD ["/usr/bin/supervisord"]
EXPOSE 8000

Build an image from a Dockerfile :

docker build -t climz/djangosrv:v2 .

Run docker container as daemon :

docker run -P -d climz/djangosrv:v3
where -P publishes all exposed ports and -d runs the container as a daemon.

Connect to running docker

bash-4.4$ docker exec -it e38de44945ab /bin/bash

Part 3 – Deploy kubernetes on GCP with Google Container Engine (aka GKE)

Introduction

Since Kubernetes was created by Google it seems natural to run it on GCP.
And indeed, running a K8S cluster in GCP is very straightforward !
To avoid confusion between terms

  • GCP : Google Cloud Platform
  • GCE : Google Compute Engine
  • GKE : Google Container Engine

You can go through the console, but here we will use the gcloud SDK.

Pre-requisites

  • A GCP Account
  • Install gcloud SDK

First you'll need to configure the CLI to access your Google account and project. To do so, use the following command :

gcloud init

Follow the gcloud steps to configure your account. Once it’s done you can test your configuration by listing your running instances :

gcloud compute instances list

And that’s it !

Deploy the cluster

Since Kubernetes is natively supported on GCP, we don't need any other tools. gcloud container will deploy a pure K8S cluster on top of Google Compute Engine instances and let us manage it with the native kubectl tool.

To Deploy a simple cluster run the following :

gcloud container clusters create "techful-kops-cluster01" --zone "europe-west1-d" --machine-type "custom-1-1024" --image-type "GCI" --disk-size "100"  --num-nodes "2" --network "default" --enable-cloud-logging --no-enable-cloud-monitoring

The options are self-explanatory. Note, however, that you could spread the cluster across several zones with :

--additional-zones "europe-west1-b"

The only two images available are :

  • GCI
  • Container VM

More info on images here

Once the cluster is running you’ll be able to see the instances launched by GKE

gcloud compute instances list

NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-techful-kops-cluster-default-pool-86c719c1-8135 europe-west1-d custom (1 vCPU, 1.00 GiB) 10.132.0.2 35.187.20.58 RUNNING
gke-techful-kops-cluster-default-pool-86c719c1-zcln europe-west1-d custom (1 vCPU, 1.00 GiB) 10.132.0.3 35.187.31.62 RUNNING

Again, gcloud exports all the necessary information to your kubectl config :

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all cluster configured in the config file : kubectl config get-clusters
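
If kubectl was not configured automatically (for example on another machine), you can fetch the cluster credentials explicitly with gcloud:

gcloud container clusters get-credentials techful-kops-cluster01 --zone europe-west1-d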

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

As you can see, on GCP the dashboard is installed by default:

⇒ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
fluentd-cloud-logging-gke-techful-kops-cluster-default-pool-8f6ae37d-krc7 1/1 Running 0 2m
fluentd-cloud-logging-gke-techful-kops-cluster-default-pool-8f6ae37d-tv05 1/1 Running 0 22m
heapster-v1.2.0-1912761325-zt3j9 2/2 Running 0 7m
kube-dns-v20-0aihy 3/3 Running 0 7m
kube-dns-v20-xu3ci 3/3 Running 0 21m
kube-proxy-gke-techful-kops-cluster-default-pool-8f6ae37d-krc7 1/1 Running 0 2m
kube-proxy-gke-techful-kops-cluster-default-pool-8f6ae37d-tv05 1/1 Running 0 22m
kubernetes-dashboard-v1.4.0-04b55 1/1 Running 0 7m
l7-default-backend-v1.0-3asdt 1/1 Running 0 21m

To access the dashboard simply run the proxy command

kubectl proxy

and browse to :

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

Resize cluster

If you want to change the number of nodes in your cluster, use the resize command :

gcloud container clusters resize "techful-kops-cluster01" --size=3 --zone "europe-west1-d"

The fun thing is that you can resize your cluster to 0 nodes. It will kill the machines but keep your cluster configuration, so that you can relaunch it as-is later.
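
For example, scaling the same cluster down to zero would look like this:

gcloud container clusters resize "techful-kops-cluster01" --size=0 --zone "europe-west1-d"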

Documentation on the command here

Delete the cluster

Finally, to entirely delete your cluster run the following :

gcloud container clusters delete "techful-kops-cluster01" --zone "europe-west1-d"

Conclusion

Running a k8s cluster on GCP with GKE is very easy; however, you won't have the same level of configuration that you get with more custom solutions (such as KOPS or even a manual install). On the other hand, you will benefit from the stability of the platform, automatic updates and so on.

Part 2 – Deploy kubernetes on AWS with KOPS

Introduction

KOPS is an open-source tool written in Go that helps us deploy and maintain our Kubernetes cluster. The source code is available here and the official k8s doc here.

It really started in May 2016, with a first beta release on the 8th of September 2016. So it's a very young tool, but with a good community and support (Slack channel).

Pre-Requisite

To test KOPS all you will need :

  • An AWS account
  • An IAM user allowed to create resources and use the S3 bucket

The first step is to install KOPS. On a Mac, simply use brew :

brew update && brew install kops

At the time I am writing this, my kops version is 1.4.3.

Because I am using different AWS accounts, I like to use the direnv tool to switch between them (https://direnv.net/).
Therefore, create a .envrc file with the following environment variables:

export AWS_DEFAULT_REGION=eu-west-1
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxx

To use your .envrc file run :

direnv allow .

To test everything works you can run :

aws ec2 describe-instances

Now we need to create an S3 bucket; this is where KOPS will store all the configuration files and keys for the Kubernetes deployment.
To create the bucket, run :

aws s3api create-bucket --bucket techful-kops-cluster01

aws s3 ls
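
Note: depending on your region, the create-bucket call may need an explicit location constraint, and kops recommends enabling versioning on the state store bucket. A sketch, assuming eu-west-1:

aws s3api create-bucket --bucket techful-kops-cluster01 --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api put-bucket-versioning --bucket techful-kops-cluster01 --versioning-configuration Status=Enabled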

Add the bucket to your .envrc

echo "export KOPS_STATE_STORE=s3://techful-kops-cluster01" >> .envrc

Deploy the cluster

All right, we are now ready to test KOPS.
My AWS account has a default configuration (1 VPC, 3 subnets, 1 internet gateway, no NAT gateway) and a public domain : techfulab.co.uk.

In this tutorial we will use KOPS to deploy all the infrastructure (VPC, subnets, etc.), so your current config does not really matter.
However, you could also deploy your cluster in different configurations (public/private subnets, internal/public DNS zone, etc.) with additional options (see annex).

List the content of the bucket

aws s3 ls techful-kops-cluster01/kops-cluster01.techfulab.co.uk/

If required – Delete the content of the bucket

aws s3 rm s3://techful-kops-cluster01 --recursive

You can build a very simple, 1 master, 1 node cluster with the following command

kops create cluster --zones=eu-west-1a --master-zones=eu-west-1a --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=1 --node-size=t2.small kops-cluster01.techfulab.co.uk

Or a highly available cluster with 3 masters and 3 nodes :

kops create cluster --zones=eu-west-1a,eu-west-1b,eu-west-1c --master-zones=eu-west-1a,eu-west-1b,eu-west-1c --dns-zone=techfulab.co.uk --master-size=t2.small --node-count=3 --node-size=t2.small kops-cluster01.techfulab.co.uk

These commands will create the configuration files in the S3 bucket, in a folder named after the cluster.

At this point the cluster is not deployed; kops has only created the config files.
Here is the list of commands you can run:

- List clusters with: kops get cluster

- Edit this cluster with: kops edit cluster kops-cluster01.techfulab.co.uk
- Edit your node instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk nodes
- Edit your master instance group: kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

- View the changes to apply : kops update cluster kops-cluster01.techfulab.co.uk

And finally to deploy your cluster run the following command

kops update cluster kops-cluster01.techfulab.co.uk --yes

Many resources will be deployed by KOPS : VPC, Subnets, route tables, SG, IAM Roles, Route53 records etc.

Once your cluster is deployed, KOPS generates the kubeconfig, so you will be able to interact with the cluster using kubectl.

To see your kubectl config

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all cluster configured in the config file : kubectl config get-clusters

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

This last command should show you :

⇒ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
dns-controller-2150731133-3higs 1/1 Running 0 28m
etcd-server-events-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
etcd-server-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-apiserver-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 3 28m
kube-controller-manager-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-dns-v20-3531996453-04yiw 3/3 Running 0 28m
kube-dns-v20-3531996453-uxl3m 3/3 Running 0 28m
kube-proxy-ip-172-20-42-253.eu-west-1.compute.internal 1/1 Running 0 27m
kube-proxy-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m
kube-scheduler-ip-172-20-61-150.eu-west-1.compute.internal 1/1 Running 0 28m

These are all the pods that Kubernetes runs for its own internal operation.

A good next step is to install the Kubernetes dashboard to get a graphical tool to view and configure the cluster. To install the dashboard, run the following command :

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Once it's done, if you re-run the "get pods" command you should see a new system pod :

kubernetes-dashboard-3203831700-scmox

Now to access the dashboard run the following command

kubectl proxy
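
and browse to :

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/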

Update your cluster

Once your cluster is deployed, you will still be able to manage its lifecycle using KOPS.

You can modify three things :

The cluster configuration :

kops edit cluster kops-cluster01.techfulab.co.uk

The nodes configuration

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

The master configuration

kops edit ig --name=kops-cluster01.techfulab.co.uk master-eu-west-1a

Here, for example, we will update the number of nodes (from 1 to 2):

kops edit ig --name=kops-cluster01.techfulab.co.uk nodes

Update the following :
 maxSize: 2
 minSize: 2

Once your config file is saved, you can see the changes that would be applied by a cluster update using the following command :

kops update cluster kops-cluster01.techfulab.co.uk

Then to apply the changes simply run :

kops update cluster kops-cluster01.techfulab.co.uk --yes

KOPS will deploy a new node and update all necessary configuration.

Destroy the cluster

kops delete cluster kops-cluster01.techfulab.co.uk

The first command only previews the resources that would be deleted; add --yes to actually destroy them:

kops delete cluster kops-cluster01.techfulab.co.uk --yes

Conclusion

KOPS makes Kubernetes deployment and updates on AWS very easy. However, some advanced functionalities are not yet supported (like CNI networking or CoreOS), making it difficult to use in such situations.

Annex

Usage:
kops create cluster [flags]

Flags:
--admin-access string Restrict access to admin endpoints (SSH, HTTPS) to this CIDR. If not set, access will not be restricted by IP.
--associate-public-ip Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'. (default true)
--channel string Channel for default versions and configuration to use (default "stable")
--cloud string Cloud provider to use - gce, aws
--dns-zone string DNS hosted zone to use (defaults to last two components of cluster name)
--image string Image to use
--kubernetes-version string Version of kubernetes to run (defaults to version in channel)
--master-size string Set instance size for masters
--master-zones string Zones in which to run masters (must be an odd number)
--model string Models to apply (separate multiple models with commas) (default "config,proto,cloudup")
--network-cidr string Set to override the default network CIDR
--networking string Networking mode to use. kubenet (default), classic, external. (default "kubenet")
--node-count int Set the number of nodes
--node-size string Set instance size for nodes
--out string Path to write any local output
--project string Project to use (must be set on GCE)
--ssh-public-key string SSH public key to use (default "~/.ssh/id_rsa.pub")
--target string Target - direct, terraform (default "direct")
--vpc string Set to use a shared VPC
--yes Specify --yes to immediately create the cluster
--zones string Zones in which to run the cluster

Part 1 – Test Kubernetes with minikube

Introduction

The first part of this series of articles starts with the easiest way to test K8S. Minikube is a tool that deploys Kubernetes locally on your machine, in a virtual machine running on VirtualBox.

Pre-requisite

  • VirtualBox
  • A Linux or Mac OS system

First, we have to download the minikube binary (here the macOS build) and install it:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
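
Note that this URL is the macOS (darwin) build; on Linux the equivalent would be the linux-amd64 binary from the same release bucket:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/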

Deploy

As I said before, minikube is very easy; the only thing you need to do to deploy your cluster is run :

⇒ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

It will take a couple of minutes to get ready. You can see in your VirtualBox console that a minikube machine has been created. You can track the deployment of your k8s cluster with the following command :

watch kubectl get pods --namespace kube-system

It will show you the deployment status of the different pods necessary for k8s to run. Once it shows the following, your cluster is ready :

Every 2.0s: kubectl get pods --namespace kube-system HOLBMAC0404: Tue Jan 24 15:04:39 2017

NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 21m
kube-dns-v20-bsdcc 3/3 Running 0 16m
kubernetes-dashboard-7zm9h 1/1 Running 0 17m

As you can see, by default minikube runs the DNS and Dashboard services.

You can see your kubectl config :

- See the config file : cat ${HOME}/.kube/config

- See the different contexts configured : kubectl config get-contexts
- See current context : kubectl config current-context

- See all cluster configured in the config file : kubectl config get-clusters

To interact with your kubernetes cluster :

- See your nodes : kubectl get nodes

- See the available namespaces : kubectl get namespaces

- See the pods : kubectl get pods --namespace kube-system

To access the dashboard simply run the proxy command

kubectl proxy

and browse to :

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

Or you could simply run the following :

 minikube dashboard
Opening kubernetes dashboard in default browser...

Stop/delete your cluster

If you want to reuse your cluster later :

minikube stop

Otherwise delete your cluster (and the virtual machine)

minikube delete

Conclusion

Minikube is a very easy and efficient tool to test k8s locally and is also great for a local test/dev environment. However, because it runs locally, it can be very slow with complex applications and impact your own computer's performance.

3 ways to deploy a kubernetes POC

Kubernetes (aka k8s) is usually described as a container manager, meaning that it is used to run and delete containers on top of any infrastructure (physical, virtual, cloud).
Described like this, it seems pretty simple and boring : "Yeah, another management layer …"
But when you look at how it works, why it is used, and what new functionalities and concepts it brings, things definitely get more interesting.

I've seen virtualisation change the way we use physical hardware, I've seen the cloud change how we think about infrastructure, and I believe containerisation as a global concept will change how we run applications.

It's the missing piece and the logical evolution of IT. Kubernetes (and others) brings what is missing to the cloud revolution: an abstraction layer on top of any infrastructure that helps organisations gather physical and/or cloud providers into one global resource.

Kubernetes is a very new technology; the stable version 1.0 was released on July 21, 2015. It is now at version 1.4, with a very active community and a lot of enthusiasm from IT pros.
To understand how things work, I usually like to get hands-on and play with the solution to explore the different functionalities and concepts.
In this series of articles we will see three different ways to test k8s: locally, on AWS and on GCP.

Part 1 – Test kubernetes locally with Minikube
Part 2 – Deploy a kubernetes POC on AWS with KOPS
Part 3 – Deploy a kubernetes POC on Google Cloud Platform with Google Container Engine

Extend corporate local network to Azure Cloud

Cloud is more and more common in companies' strategies. However, having a cloud completely isolated from your corporate network can be frustrating. Connecting your cloud tenant to your local network will let you use your cloud environment much more easily and efficiently. Your cloud can really become your test/dev environment, heavy-workload platform or disaster recovery solution. Connecting it to your network will really benefit your IT.

In this article I’ll show you how to extend your corporate local network to Microsoft Azure cloud infrastructure.

To do so we'll need a gateway; it can be dedicated hardware (see the compatibility list here : http://msdn.microsoft.com/en-us/library/azure/jj156075.aspx#bkmk_VPNDevice) or a server with the RRAS (Routing and Remote Access Service) role.

In this tutorial we will use the second option (RRAS server).

What we need :

  • An Azure account
  • A corporate network with active directory/DNS server
  • A brand new server for RRAS role
  • Know your public IP
  • Access to your router or to your DHCP server configuration
  • And finally a coffee machine 🙂

 

Here is the network architecture we will build.

AzureCo01

Create an affinity group

Creating an affinity group allows us to place all our networks and virtual machines in the same logical place.

AzureCo02

Name your affinity group and select your region

AzureCo03

Configure networks

Then we will define our local network. The local network refers to the corporate infrastructure.

AzureCo04

Enter your corporate public IP

AzureCo05

Enter your corporate network address and CIDR

AzureCo06

The next step is to define the virtual network.

This virtual network will be the Azure network.

Go to networks and create it

AzureCo07

Name your Azure network. In the screenshot below I named it "AzurePublicIP", which is not a great idea because the logical network is composed of the public IP + local subnet. For better comprehension it would be better to name it "AzureNetwork".

AzureCo08

Create the desired subnet

AzureCo09

Then click on add gateway subnet

AzureCo10

Enter your corporate DNS and select your corporate Network

AzureCo11

 

Connect to azure

Wait for the virtual network creation. Then click on the network name to access the dashboard

AzureCo12

Create the gateway

AzureCo13

Wait for the creation; it could take a couple of minutes.

AzureCo14

When it’s done download the VPN agent script

AzureCo15

Copy the script to your RRAS server (here it's 192.168.1.191).

Change the execution policy and rename the script to *.ps1.

AzureCo16

Execute the script.

It will automatically install all pre-requisite roles and configure the desired connection

AzureCo17

When it's done, open the "Routing and Remote Access" console and click Connect on the connection to the Azure network.

AzureCo18

It will dial, and you'll see the connection UP in the Azure portal.

AzureCo19

Create an Azure instance

Now that we have our local network connected to our Azure tenant, let's test it by creating a virtual machine in the cloud.

AzureCo20

Select from gallery to be able to configure the proper subnet

AzureCo21

AzureCo22 AzureCo23

Here I chose the Basic tier.

The Standard tier is a little more expensive but better for performance.

Standard :

The Standard tier of compute instances provides an optimal set of compute, memory and IO resources for running a wide array of applications. These instances include both auto-scaling and load balancing capabilities.

See price calculator :

http://azure.microsoft.com/en-us/pricing/calculator/?scenario=virtual-machines

AzureCo24

Select the AzureNetwork to have your instance connected to the 192.168.2.0 subnet

AzureCo25

AzureCo26

Wait for the virtual machine to be ready.

AzureCo27

Click on Connect, then save or directly open the RDP file.

AzureCo28

Use the credentials configured during the deployment

AzureCo29

Update the firewall security rules to allow ping responses:

Import-Module NetSecurity
Set-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" -Enabled True

AzureCo30

To reach this subnet from your local network you’ll have to add a route to 192.168.2.0/24.

You have several ways to do so:

  • Directly on your router
  • Pushing the route using GPO
  • Pushing the route using DHCP

AzureCo31

  • Adding the route manually

AzureCo32
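
For reference, the manual route on a Windows machine would look something like this (a sketch, assuming the RRAS server 192.168.1.191 is the next hop towards the Azure subnet; -p makes the route persistent):

route -p add 192.168.2.0 mask 255.255.255.0 192.168.1.191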

So now, from Azure, we are able to contact our server on the corporate network, and vice versa.

AzureCo33

We are also able to join the instance to the domain and manage it as a traditional server.

AzureCo34

It's done. Our Azure tenant is now completely reachable from the corporate network. It can easily be used for workloads, tests or as a disaster recovery site.

Cost

The VM is €0.056/hr and the virtual router is €0.03/hr.
On average a month is 730.5 hours, so the cost should be : 730.5 × (0.056 + 0.03) ≈ €62.82 / month

You'll also have to pay for outbound data, but 5 GB are included each month.

 

I hope it’ll help you.

Sources :

http://msdn.microsoft.com/en-us/library/dn636917.aspx

Active Directory 2012, Group Policy Management Tips

Replication Status

Microsoft released a great feature here especially for people working in an international infrastructure with unreliable and low bandwidth links.

In this kind of context it often happens that you modify a GPO that has not yet been replicated between DCs.

You are now able to see the replication status and set a baseline DC.

To set your Domain Controller baseline, click change:



Select your reference DC :



And generate the report :



You now see the GPO replication status between your domain controllers.



If you need to check the replication status of a single GPO, select it under the "Group Policy Objects" folder (not the linked GPO).

You see the replication status and where the GPO is not yet replicated.



Click on GPO Version to see the detailed status as below:



GPO Update

Who has never said to a user "Please open a command prompt and run GPUPDATE /force" or "Please log off and log on again"?

In order to avoid this curious situation, Microsoft finally gives us a tool: in the Group Policy Management Console you can now right-click an OU and select "Group Policy Update…".






It'll then create two scheduled tasks to update the computer and user policies. The triggers are spread within a 10-minute range.



If you don't want to apply the Group Policy update to all users and computers under the OU, you'll need to run a PowerShell script (a sketch is shown below) :
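
Here is a minimal sketch of what such a script could look like, assuming the ActiveDirectory and GroupPolicy modules are available and using a hypothetical OU path:

# Push a GPUpdate only to the computers of a given OU (hypothetical OU path)
Import-Module ActiveDirectory
Import-Module GroupPolicy
Get-ADComputer -Filter * -SearchBase "OU=Sales,DC=contoso,DC=com" |
    ForEach-Object { Invoke-GPUpdate -Computer $_.Name -Force -RandomDelayInMinutes 0 }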



Invoke-Gpupdate documentation :

http://technet.microsoft.com/en-us/library/hh967455.aspx

Different RSOP :



In the Resultant Set of Policy you now get a different presentation, showing in particular the processing time and event logs of the different components:



See more here:

http://channel9.msdn.com/Shows/Edge/EdgeShow-46-Whats-up-with-GPOs-in-Windows-Server-2012?format=html5

Enjoy,

Julien

Hyper-v 2012 Replica: Configure and test scenarios

Introduction

In this article I'll try to show you, in a very simple way, how to implement and test the new Replica feature included in Windows 2012 Hyper-V.
Replica allows you to build a disaster recovery strategy with a feature built into Hyper-V, at no additional license cost!

Configuration

My lab infrastructure is pretty basic:

  • Two Hyper-v hosts in a cluster linked to a NAS via iSCSI and using Clustered Shared Volume
  • One hyper-v core with local storage
  • For test purpose all hyper-v are running on the same network


The first step is to create a Replica Broker on the cluster:

Then the Replica Broker should start.

In my case I encountered the following error: “Cluster network name resource failed to create its associated computer object in domain …”

And the replica service couldn’t start.

The fact is that when you create a Replica Broker, the cluster tries to create the Replica object in its Active Directory container.
This error shows that the cluster object does not have enough rights to do it!
To correct the issue, open "AD Users and Computers" with the Advanced Features view enabled and locate the OU where your cluster object is located.



Right click on your OU and view the advanced security parameters

Add to your cluster object the necessary rights to create and delete computer objects.

Apply the changes and start the Replica Broker.

It should be better:

Now we can configure the replication.
Select your replica server (here HYPERV03), open “Hyper-V Settings” and modify the replica configuration as shown below:

Authorize your Replica broker to replicate with the server

Now we also need to configure the Replica Broker. To do so, open "Replication settings":

Modify as shown below:

Now our infrastructure is configured to run replication.

Select the desired virtual machine (here it's a test VM) and activate replication:

Select the replica server:

Configure the replication history and the snapshot recurrence:

The replication will begin and you can see the status on the VM summary:

Now that the replication is finished we can proceed to several tests.

1 – Test Failover (TFO)

A TFO allows you to test the global replication mechanism in a controlled environment, without any impact on the current replication or on production.

To do so go on your replica server (here HYPERV03), select the replica VM and select “Test Failover”

Select the appropriate Recovery point:

After validating the Test Failover, a VM named "VM NAME – Test" will be created.
By default the VM has no virtual switch connected. Verify this setting and configure it if you want to test with another VM.
Be careful: the master VM is still running in your production network, so for test purposes isolate this replica VM.

You can therefore start your VM and test if everything works as you need.

You can also configure a specific IP address for this replica VM. Thus, when your replica VM starts on the replica server, the new IP configuration will be applied. This lets you adapt your VM to your DR IP plan.

After your tests shut down the VM and stop the test failover from the replica VM.

The replica test VM will be deleted.

2 – Planned Failover (PFO)

The Planned Failover allows you to move the master VM from your main site to the DR site. This can be very helpful in case of a planned outage, a natural disaster risk or anything that can cause a failure of your main site and which can be anticipated.

Open a console view on your test machine and on the desktop create a text file named Replica.txt, write test01 in it to mark the initial demo step.

Then shut down the VM and select “Planned failover”

From the replica VM

Note: If you do the "Planned Failover" from Hyper-V Manager instead of Failover Cluster Manager, you won't have to select the Failover operation on the replica VM.

Now your VM should be running on your replica server which is located on your DR site. The VM is now the master VM and the VM which is still in your cluster is the replica VM. The replication direction has been reversed.
Connect and check the replica VM. The file should be present with “test01”.

To follow the next step write “test02” and save the file.

When your main site becomes safe again, you'll want to move your VM back from the replica site to the main site.

To do so shutdown the VM on Hyperv03.

Run a “Planned failover”

The virtual machine starts on the master node (cluster).
Connect to it and check the replica file. You'll see that the modifications you made while the VM was running in your DR site are present.

3 – Unplanned failover (UFO)

An unplanned failover is when your primary site goes down because of a power outage, a natural disaster or anything that could happen in your main site and which could not have been planned.
To follow the demo steps, open a console on your test VM (the same one you used during the previous steps) and add "test03":

In my demo I simulated a failure by deactivating all networks on Hyperv01 and Hyperv02.
As you can see in the following screenshot, both nodes are off and my cluster is down.
In the DR site you'll have to connect to your replica server and activate the Failover feature of the virtual machine.

Select the appropriate snapshot.

The VM start:

Connect to the VM and open the Replica file. As you can see, "test03" is not present. Indeed, the last VM snapshot did not contain the modification. Thus we lost the data between the last snapshot and the main site outage. As I configured the snapshot recurrence to 1 hour, I only lose 1 hour of production data, but this data loss depends on your replica configuration.

If the latest recovery point is not what you need, you can revert to an older point (N-1, N-2, N-3, etc.), depending on how many snapshots you selected during the initial configuration. To do so, select "Cancel Failover", re-do a "Failover" and select another recovery point.

To continue in our test process write test04 in the replica file.

Use your VM as normal.

When the main site comes back, both virtual machines will be running, so turn the main-site VM off.
Merge all snapshots on your active VM (on the replica server).

To move the master VM back to the main site, you'll first need to "Remove Replication" on the VM located on your main site.

Then “Reverse replication” from the replica VM (on HYPERV03).

Specify the replica broker and configure all replication:

The VM will begin to send replication to the VM in the cluster:

Once the replication is over, to move back the master role to the VM located on the cluster run a planned failover.

So shut down the VM on Hyperv03 and select "Planned Failover":

The virtual machine located on the cluster boots.

You lose the data between the outage and the recovery, but you retrieve the data created during the outage on the VM located in the DR site.

Check the replication health:

Comments

1 – Each time I run a TFO, a new test VM is created, and when you stop the TFO the VM is deleted.
BUT when you look in your VMM 2012 console the VMs are still present:

You'll have to delete them manually:

2 – I HIGHLY regret that one of the greatest and smartest features of Hyper-V 2012 is not included in the VMM console!!!

Conclusion

Hyper-V Replica is a great and easy-to-use feature. It allows you to build a disaster recovery solution at no additional cost. The TFO lets you regularly test your disaster recovery solution without any impact on your infrastructure.
The PFO lets you be proactive about any risky intervention or external activity in your primary datacenter.
Then, in the most undesired case, the UFO gives you the ability to restart your production infrastructure in your disaster recovery environment very quickly and with minimal data loss.
However, I regret that the configuration and operations take place in three different consoles: VMM, Failover Cluster Manager and Hyper-V Manager.

 

Sources

http://technet.microsoft.com/en-us/library/jj134172.aspxg
http://blogs.technet.com/b/virtualization/archive/2012/07/26/types-of-failover-operations-in-hyper-v-replica.aspx
http://amaugard.wordpress.com/2012/08/30/hyper-v-r3-et-la-replication/
http://flemmingriis.com/?p=854

What’s new in Active Directory 2012

More than a month after the official release of Windows 2012, I wanted to give an overview of the most-asked question you will hear in the coming years: "What's new in AD 2012?"

Recycle Bin GUI : remember this big improvement in AD 2008 R2? That was great! The only problem was that you had to activate it via a PowerShell command and use Ldp.exe to use it! WHY?
Years later, Microsoft finally built a GUI!

To activate it, go into the AD Administrative Center and:

Enable Recycle Bin

Try to create a user, delete it, select the "Tree view", go to Deleted Objects and see what's in there!

Restore AD object

 

Fine-Grained Password Policy GUI : another Win2008R2 "revolution"! But once again it saw low adoption because you had to use a "not so familiar" tool called ADSI Edit!

So now, still in the AD Administrative Center, select the tree view, go under System / Password Settings Container, right-click and create a new Password Settings object:

Password policy 1

Complete all the required fields and, if needed, apply it to groups or users!

Password policy 2

Easy !

Dynamic access control : This brand new feature allows you to create access rules based on user and/or device claims.
For example: allow access to a shared folder named "Finance" only to people whose Department field in AD is set to Finance.
I’ll write an article on this !

But basically all is done here :

Dynamic Access Control 1

 

Windows PowerShell History Viewer : this great feature allows you to see which PowerShell command is generated when you perform an action in the GUI.
Basically it can really help you learn the commands and build your own scripts.

Dynamic Access Control 2

 

Windows PowerShell Cmdlets for Active Directory Replication and Topology : this is a new set of cmdlets; everything is here :
http://technet.microsoft.com/en-us/library/jj574083.aspx

Active Directory-Based Activation (ADBA) : this replaces the KMS server. All computers that have a GVLK licence are automatically activated when they join the domain. The role is domain-based and so hosted by every domain controller. But it only works with Windows 8 and Windows 2012.

Flexible Authentication Secure Tunneling (FAST) : also named Kerberos Armoring and defined in RFC 6113. It provides a secure channel between the client and the KDC.
This is required if you want to use claims within Dynamic Access Control.

More : http://blogs.dirteam.com/blogs/sanderberkouwer/archive/2012/09/05/new-features-in-active-directory-domain-services-in-windows-server-2012-part-11-kerberos-armoring-fast.aspx

Virtualisation-safe technology : Active Directory is now fully compatible with virtualisation (at least with Hyper-V). It allows you to clone and copy a DC to deploy it easily and quickly. You can also now safely create snapshots of your domain controllers. The technology behind these features is the VM Generation ID. Of course it's implemented in Hyper-V, and Microsoft also provided the API to VMware and Citrix.

Easy deployment : Microsoft simply killed off ADPREP and Dcpromo! Everything is now integrated into a wizard. You can therefore deploy a new Windows 2012 DC and it will prepare the forest or domain automatically. Awesome!

Off-premises domain join : using DirectAccess, you can now join your computer to the domain over the internet.

Kerberos Constrained Delegation across domains : KCD permits interaction between multi-tier servers using service accounts on behalf of users. It was previously limited to a single domain and is now extended across domains.

GMSAs – Group Managed Service Accounts : MSAs were introduced with Windows 2008 R2 and allow admins to create service accounts whose passwords reset automatically, like computer account passwords. In Windows 2012, GMSAs extend these accounts to clustered or load-balanced services.

Other technical changes :

  • Install From Media : defrag is still the default but no longer mandatory (the option is specified on the command line)
  • Dcpromo retry bug fixed
  • RID improvements (from 1 to 2 billion, event warnings on consumption)

 

Source :

http://technet.microsoft.com/fr-fr/library/hh831477.aspx
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/SIA312