Code, Deploy and manage your first App on Kubernetes


Creating a k8s cluster is a nice thing, but it’s even better to actually use it 🙂
Here is a simple tutorial where I demonstrate how to quickly code and test an app.

Code the application

For this example, we’ll use a simple HTTP server written in Python.

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 8080

# This class handles any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):

    # Handler for the GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("Hello World - This is http python node test app v1 !")

try:
    # Create a web server and define the handler to manage the
    # incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER

    # Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
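Note that the code above targets Python 2: the BaseHTTPServer module and the print statement no longer exist in Python 3. If you are on Python 3, the equivalent server lives in http.server. Here is a minimal, self-contained sketch (it binds to port 0 so the OS picks a free port, then queries itself to check the response):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class MyHandler(BaseHTTPRequestHandler):
    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        # In Python 3 the response body must be bytes
        self.wfile.write(b"Hello World - This is http python node test app v1 !")

# Port 0: let the OS pick a free port, then read it back
server = HTTPServer(('', 0), MyHandler)
port = server.server_port

# Serve in a background thread so we can query the server from the same script
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen('http://localhost:%d/' % port).read()
print(body.decode())
server.shutdown()
```

The same check works from a browser or with curl against the printed port.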



3 ways to deploy a kubernetes POC

Kubernetes (aka k8s) is usually described as a container manager, meaning that it is used to run and delete containers on top of any infrastructure (physical, virtual, cloud).
Described like this, it seems pretty simple and boring: “Yeah, another management layer…”
But when you look at how it works, why it is used, and what new functionalities and concepts it brings, things definitely get more interesting.

I’ve seen virtualisation change the way we use physical hardware, I’ve seen the cloud change how we think about infrastructure, and I believe containerisation as a global concept will change how we run applications.

It’s the missing piece and the logical evolution of IT. Kubernetes (and others) brings what was missing to the cloud revolution: an abstraction layer on top of any infrastructure that helps organisations gather physical and/or cloud providers into one global resource.

Kubernetes is a very young technology: its first stable version, 1.0, was released on July 21, 2015. It is today at version 1.4, with a very active community and a lot of enthusiasm from IT pros.
To understand how things work, I usually like to get my hands on the solution and play with it to explore the different functionalities and concepts.
In this series of articles we will see three different ways to test k8s: locally, on AWS and on GCP.

Part 1 – Test kubernetes locally with Minikube
Part 2 – Deploy a kubernetes POC on AWS with KOPS
Part 3 – Deploy a kubernetes POC on Google Cloud Platform with Google Container Engine

Extend corporate local network to Azure Cloud

The cloud is more and more common in companies’ strategies. However, having a cloud completely isolated from your corporate network can be frustrating. Connecting your cloud tenant to your local network will let you use your cloud environment much more easily and efficiently. Your cloud can really become your test/dev environment, heavy-workload platform or disaster recovery solution. Connecting it to your network will really benefit your IT.

In this article, I’ll show you how to extend your corporate local network to Microsoft Azure cloud infrastructure.

To do so, we’ll need a gateway: it can be dedicated hardware (see the compatibility list) or a server with the RRAS (Routing and Remote Access Service) role.


Active Directory 2012, Group Policy Management Tips

Replication Status

Microsoft released a great feature here, especially for people working in an international infrastructure with unreliable, low-bandwidth links.

In this kind of context, it often happens that you modify a GPO that has not yet been replicated between DCs.

You are now able to see the replication status and set a baseline DC.

To set your domain controller baseline, click “Change”:

Select your reference DC:

And generate the report:

You can now see the GPO replication status between your domain controllers.

If you need to check the replication status of a single GPO, select it under the “Group Policy Objects” folder (not the linked GPO).

You see the replication status and where the GPO is not yet replicated.

Click on “GPO Version” to see the detailed status, as below:

GPO Update

Who has never said to a user “Please open a command prompt and run GPUPDATE /force” or “Please log off and log on”?

To avoid this curious situation, Microsoft finally gives us a tool.

It will then create two scheduled tasks to update the computer and user policies. The triggers are set within a 10-minute random window.

If you don’t want to apply the Group Policy update to all users and computers under the OU, you’ll need to run a PowerShell script:

Invoke-GPUpdate documentation:

A different RSOP:

In the Resultant Set of Policy you get a different presentation, showing in particular the processing time and the event log of the different components:

See more here:



Hyper-V 2012 Replica: Configure and test scenarios


In this article I’ll try to show you, in a very simple way, how to implement and test the new Replica feature included in Windows Server 2012 Hyper-V.
Replica allows you to build a disaster recovery strategy with a feature built into Hyper-V, at no additional license cost!


My lab infrastructure is pretty basic:

  • Two Hyper-V hosts in a cluster, linked to a NAS via iSCSI and using Cluster Shared Volumes
  • One Hyper-V core host with local storage
  • For test purposes, all Hyper-V hosts are running on the same network

The first step is to create a Replica Broker in the cluster:

Then the Replica Broker should start.

In my case I encountered the following error: “Cluster network name resource failed to create its associated computer object in domain …”

And the replica service couldn’t start.

When you create a Replica Broker, the cluster tries to create the Replica object in its Active Directory container.
This error shows that the cluster object does not have enough rights to do it!
To correct the issue, open “Active Directory Users and Computers” with the Advanced Features view enabled and locate the OU where your cluster object resides.

Right-click on your OU and view the advanced security parameters.

Grant your cluster object the rights needed to create and delete computer objects.

Apply the change and start the Replica Broker.

It should be better:

Now we can configure the replication.
Select your replica server (here HYPERV03), open “Hyper-V Settings” and modify the replica configuration as shown below:

Authorize your Replica broker to replicate with the server

Now we also need to configure the Replica Broker. To do so, open “Replication settings”:

Modify as shown below:

Now our infrastructure is configured to run replication.

Select the desired virtual machine (here it’s a test VM) and activate replication:

Select the replica server:

Configure the replication history and the snapshot recurrence:

The replication will begin and you can see the status on the VM summary:

Now that the replication is finished we can proceed to several tests.

1 – Test Failover (TFO)

A TFO allows you to test the global replication mechanism in a controlled environment, without any impact on the current replication or on production.

To do so, go to your replica server (here HYPERV03), select the replica VM and select “Test Failover”.

Select the appropriate Recovery point:

After validating the Test Failover, a VM named “VM NAME – Test” will be created.
By default this VM has no virtual switch connected. Verify this setting and configure it if you want to test with another VM.
Be careful: the master VM is still running in your production network, so isolate this replica VM for your tests.

You can then start your VM and check that everything works as expected.

You can also configure a specific IP address for this replica VM. When the replica VM starts on the replica server, this new IP configuration will be applied, letting you adapt the VM to your DR IP plan.

After your tests, shut down the VM and stop the test failover from the replica VM.

The replica test VM will be deleted.

2 – Planned Failover (PFO)

The Planned Failover allows you to move the master VM from your main site to the DR site. This can be very helpful in case of a planned outage, a natural disaster risk or anything that could cause a failure of your main site and can be anticipated.

Open a console on your test machine and, on the desktop, create a text file named Replica.txt; write “test01” in it to mark the initial demo step.

Then shut down the VM and select “Planned failover”

From the replica VM

Note: if you do the “Planned failover” from Hyper-V Manager instead of Failover Cluster Manager, you won’t have to select the Failover operation on the replica VM.

Now your VM should be running on your replica server, which is located on your DR site. That VM is now the master VM, and the VM still in your cluster is the replica VM: the replication direction has been reversed.
Connect to the replica VM and check it. The file should be present and contain “test01”.

To follow the next step, write “test02” and save the file.

When your main site becomes safe again, you’ll want to move your VM back from the replica site to the main site.

To do so, shut down the VM on HYPERV03.

Run a “Planned failover”.

The virtual machine starts on the master node (cluster).
Connect to it and check the replica file. You’ll see that the modifications you made while the VM was running in your DR site are present.

3 – Unplanned failover (UFO)

An unplanned failover is what happens when your primary site goes down because of a power outage, a natural disaster or anything that could happen to your main site and could not have been planned.
To follow the demo steps, open a console on your test VM (the same one you used during the previous steps) and add “test03”:

In my demo I simulated a failure by deactivating all networks on HYPERV01 and HYPERV02.
As you can see in the following screenshot, both nodes are off and my cluster is down.
In the DR site, connect to your replica server and activate the Failover feature of the virtual machine.

Select the appropriate snapshot.

The VM starts:

Connect to the VM and open the replica file. As you can see, “test03” is not present. Indeed, the last VM snapshot did not contain this modification, so we lost the data written between the last snapshot and the main site outage. As I configured the snapshot recurrence to 1 hour, I’ll lose at most 1 hour of production data, but this data loss depends on your replica configuration.
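The worst-case data loss (your recovery point objective) is bounded by the snapshot recurrence: everything written since the last recovery point is lost. A toy Python sketch of that reasoning, with assumed timestamps:

```python
from datetime import datetime, timedelta

recurrence = timedelta(hours=1)               # the snapshot interval configured above
last_snapshot = datetime(2013, 5, 1, 14, 0)   # assumed time of the last recovery point
outage = datetime(2013, 5, 1, 14, 55)         # assumed time of the main site outage

# Everything written between the last recovery point and the outage is lost
data_loss = outage - last_snapshot
print(data_loss)                              # 0:55:00, just under the 1 h worst case

# The loss can never exceed one recurrence interval
assert data_loss <= recurrence
```

Shortening the recurrence therefore reduces the worst-case loss, at the price of more replication traffic.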

If the latest recovery point is not what you need, you can revert to an older point (N-1, N-2, N-3, etc., depending on how many snapshots you selected during the initial configuration). To do so, select “Cancel Failover”, re-do a “Failover” and select another recovery point.

To continue our test process, write “test04” in the replica file.

Use your VM as normal.

When the main site comes back, both virtual machines will be running, so turn the main site VM off.
Merge all snapshots on your active VM (on the replica server).

To move the master VM back to the main site, you’ll first need to “Remove Replication” on the VM located on your main site.

Then “Reverse replication” from the replica VM (on HYPERV03).

Specify the Replica Broker and configure the replication:

The VM will begin replicating to the VM in the cluster:

Once the replication is over, run a planned failover to move the master role back to the VM located on the cluster.

So shut down the VM on HYPERV03 and select “Planned Failover”:

The virtual machine located on the cluster boots.

You lose the data written between the outage and the recovery, but you retrieve the data created during the outage on the VM located on the DR site.

Check the replication health:


1 – Each time I run a TFO, a new test VM is created; when I stop the TFO, the VM is deleted.
BUT when you look in your VMM 2012 console, the VMs are still listed:

You’ll have to delete them manually:

2 – I HIGHLY regret that one of the greatest and smartest features of Hyper-V 2012 is not included in the VMM console!!!


Hyper-V Replica is a great and easy-to-use feature. It allows you to build a disaster recovery solution at no additional cost. The TFO allows you to test your disaster recovery solution regularly without any impact on your infrastructure.
The PFO allows you to be proactive about any risky intervention or external activity in your primary datacenter.
Finally, in the most undesired case, the UFO gives you the ability to restart your production infrastructure in your disaster recovery environment very quickly and with little data loss.
However, I regret that the configuration and operations are spread across three different consoles: VMM, Failover Cluster Manager and Hyper-V Manager.