Tuesday 24 May 2016

Hyper-V Failover Clustering in Windows Server 2012 R2 - Overview

Hyper-V can be deployed in conjunction with the Failover Clustering feature in Windows Server 2012 R2 to achieve a highly available and resilient virtualization platform. Hyper-V Failover Clusters allow VMs to be moved between Hyper-V hosts within a cluster, which protects against host failure and also provides flexibility when patching and updating the underlying host operating systems.

In the event of a node (Hyper-V host) failure, the Cluster service will automatically fail over the VMs running on that node to another node in the Failover Cluster. It should be noted that this does not provide instant failover; there is a short delay between a host failing and its VMs being restarted on another node.
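
Alongside automatic failover, clustered VMs can also be moved between nodes on demand. As a minimal sketch (assuming the FailoverClusters PowerShell module is installed and a clustered VM role named "VM01" already exists; the VM and node names here are placeholders, not values from this article):

    # Sketch only - VM role and node names are example values.
    Import-Module FailoverClusters

    # Live migrate the clustered VM role to another node while it keeps running
    Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE02" -MigrationType Live

    # Quick migration (save/restore) is an alternative, but briefly pauses the VM
    # Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE02" -MigrationType Quick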

The following are some key points to consider if you are evaluating Hyper-V Failover Clusters for your infrastructure:
  • Cluster node-to-node communications should be on a separate network segment.
  • Hyper-V failover cluster hardware must pass all cluster validation checks for compatibility (see the validation and cluster-creation sketch after this list).
  • Hyper-V clusters are limited to 64 nodes and 8,000 VMs per cluster.
  • Network interface card settings should be identical across hosts and switches.
  • DNS must be configured on each cluster node.
  • Private cluster networks should not be routable.
  • Private cluster networks should use unique, non-overlapping address ranges.
  • Hyper-V hosts should be in the same Active Directory domain.
  • A single Hyper-V host can run up to 1,024 VMs concurrently.
  • Shared storage is required for a failover cluster; SMB 3.0 shares can be used.
  • Virtual switches on Hyper-V hosts must be named identically across clustered nodes.
  • Cluster Shared Volumes (CSVs) are required for clustered VMs.
  • The Hyper-V Integration Services are required on clustered VMs.
  • Hyper-V QoS can be used to reduce the number of physical NICs required for different traffic types (a converged networking sketch follows below).
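
To illustrate the validation and shared-storage points above, the following PowerShell sketch builds a simple two-node cluster. The node names, cluster name, IP address and disk name are placeholder assumptions, not values from this article, so adjust them for your environment:

    # Sketch only - node names, cluster name, IP and disk name are example values.
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools

    # Run the validation report; the hardware must pass for a supported configuration
    Test-Cluster -Node "HV-NODE01", "HV-NODE02"

    # Create the cluster with a static management IP address
    New-Cluster -Name "HVCLUSTER01" -Node "HV-NODE01", "HV-NODE02" -StaticAddress 10.0.0.50

    # Convert an available cluster disk into a Cluster Shared Volume for VM storage
    Add-ClusterSharedVolume -Name "Cluster Disk 1"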
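
The Hyper-V QoS point can be illustrated with a converged networking setup, where one virtual switch carries management, live migration and cluster traffic and each traffic type is given a minimum bandwidth weight instead of its own physical NIC. This is only a sketch; the switch name, adapter names and weights are assumptions:

    # Sketch only - switch name, adapter names and weights are example values.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC-Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $true

    # Add management OS virtual NICs for each traffic type
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

    # Assign relative minimum bandwidth weights rather than dedicating physical NICs
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10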

You are probably comparing Hyper-V to VMware vSphere; which product is best is a matter of opinion, and I believe vSphere is the more mature product at this point in time. However, in Windows Server 2012 R2 many of the most commonly used features are available in both. For example, at one stage VMware's Distributed Resource Scheduler, which automatically load balances VM workloads across clustered ESXi hosts, had no Hyper-V equivalent. Now, with Hyper-V 2012 R2, Virtual Machine Manager (VMM) and Live Migration, there is a very similar capability that boasts "no noticeable" downtime while migrating.