Friday, 27 May 2016

Base Configuration of an F5 Big-IP Device with the LTM Module

You can get a trial of the F5 Big-IP device from the F5 website; it provides a full Big-IP device (with the LTM module) for up to 90 days from the date of activation. I am looking to load balance Exchange 2016 and VMware View through a pair of Big-IPs with the LTM module, so I thought I would spin it up and get it working in my lab prior to doing it for real. The Big-IP is much like the Citrix NetScaler in the sense that it is an Application Delivery Controller, so many of the concepts are the same. I am using VMware Workstation; the virtual appliance is available for VMware, Citrix and Hyper-V. I have found the OVA hardware spec is a little low and the box seems to be pretty slow if you leave it at 2 GB of memory, so I would recommend upping it to 4 GB.
Import the OVA into either vSphere or VMware Workstation.
Configure the Network Interfaces
·        vmnet0 - bridged (mgmt)
·        vmnet1 - host only (int)
·        vmnet2 - host only (ext)
·        vmnet3 - host only (ha)
Your network topology could of course be different; I am choosing to build a topology close to the one I will be deploying the devices into in production. Obviously, if you are deploying physical appliances, instead of putting interfaces onto logical networks in VMware you would put each of your interfaces into the corresponding VLANs.
The default credentials for the vAppliance are root/default.

Type "config" at the initial prompt to launch the initial configuration utility. You can use native Linux commands such as ifconfig to set the mgmt address etc, but this way is easier.

In VMware I have found that the F5 sometimes screws up the order of the vNICs that are attached to the VM. Therefore I would recommend attaching a single vNIC to the appliance, configuring your management address, then attaching the other vNICs once you know which interface the F5 is interpreting as its management interface.

If you are having issues with the order of your network interfaces, use netstat -i to display all the physical interfaces the F5 has. In F5 TMOS the management and Self IPs are much like the NSIP on the NetScaler platform.
The web interface has a different set of credentials out of the box: admin/admin.

Before you can do anything you have to license the F5. Under the Setup Utility, click Next.

Select the method of activation that suits you best. My F5 did not have a route to the Internet at this stage, so I opted for the manual method. The Registration Key is the code F5 provides to you in an e-mail at the time you download the trial.

After a minute or so the verification will complete and you are free to start configuring some of the F5 features. The trial license comes with the Local Traffic Manager (LTM) and Application Visibility and Reporting modules.
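If your workstation can reach the management IP, you can also sanity-check which modules are provisioned over the iControl REST API rather than clicking through the GUI. This is only a sketch: it assumes iControl REST is available on the trial image, that 192.168.1.245 is a placeholder for your management address, and that you are on PowerShell 7 or later (for -SkipCertificateCheck, as the appliance presents a self-signed certificate).

    # Placeholder management address and the default admin account
    $cred = Get-Credential -UserName 'admin' -Message 'BIG-IP admin credentials'

    # List the provisioning state of each module (LTM, AVR, etc.)
    Invoke-RestMethod -Uri 'https://192.168.1.245/mgmt/tm/sys/provision' `
                      -Credential $cred -Authentication Basic -SkipCertificateCheck |
        Select-Object -ExpandProperty items |
        Select-Object name, level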

Problem
After bouncing my F5 a couple of times I started getting this message when I logged in via the web interface: "The Big-IP system has encountered a configuration problem that may prevent the configuration utility from functioning properly". I never managed to work out why, other than that the F5 had been shut down incorrectly, so I just re-provisioned another F5 device.


Thursday, 26 May 2016

Configuring Active Directory Activation with KMS

Active Directory Activation makes it possible to store activation objects within Active Directory, simplifying the management of licenses within a Windows-based infrastructure. In some environments Key Management Service (KMS) will probably still be required, as AD Activation is only supported on Windows 8/Server 2012 and above at this time. Traditionally KMS has been a nightmare service to manage because it is administered entirely from the CLI; it looks like Microsoft is trying to move away from this, with AD Activation as the replacement.
Please note - KMS and Active Directory Activation CAN run simultaneously.
If you have a mixed environment with KMS & AD Activation, the activation order is:
·        Active Directory-based Activation
·        KMS Activation
·        MAK Activation
There are not many requirements for Active Directory Activation; the only one is that you have at least one Domain Controller in your domain running Windows Server 2012 or later, which ensures the domain has the 2012 schema extensions. This should not be confused with the domain/forest functional levels.
Domain Controllers running older versions of Windows Server can still participate in AD Activation, as the activation objects are replicated throughout the forest.
Install the Volume Activation Services (VAS) role from Server Manager.
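If you prefer PowerShell to Server Manager, the same role can be added with Install-WindowsFeature. A quick sketch; I am assuming the role name is VolumeActivation (check with Get-WindowsFeature *activation* if it differs on your build).

    # Install the Volume Activation Services role plus the management tools
    Install-WindowsFeature -Name VolumeActivation -IncludeManagementTools

    # Confirm it installed successfully
    Get-WindowsFeature -Name VolumeActivation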

Once installed, click VA Services, right-click and select Volume Activation Tools.

Click Next on the Introduction to Volume Activation Services wizard pane.

Click Active Directory-Based Activation and click Next.

Paste your KMS host key into the Install your KMS Host Key dialog box. If you have to install multiple KMS keys, which you probably will as there are separate keys for Windows 8, Windows Server etc., you must run through this part multiple times. Each time you do, the wizard writes an activation object into Active Directory.
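As an alternative to re-running the wizard for each key, slmgr.vbs can create the Active Directory activation objects directly. A sketch only, with a placeholder key and object name:

    # Create an AD-based activation object from a KMS host key (placeholder key shown);
    # the optional last argument names the activation object created in AD
    cscript.exe C:\Windows\System32\slmgr.vbs /ad-activation-online XXXXX-XXXXX-XXXXX-XXXXX-XXXXX "Windows Server 2012 R2"

    # With no Internet access, /ad-activation-get-iid and /ad-activation-apply-cid
    # support the phone-based activation flow instead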

The KMS Keys have to be validated by either phone or over the Internet. 

Once the wizard has completed you can verify the result using ADSI Edit: open a connection to the Configuration container.

Expand CN=Configuration/CN=Services/CN=Microsoft SPP/CN=Activation Objects; if you are running AD-based Activation for the first time this container should now exist. If you have added multiple KMS keys there will be multiple entries under CN=Activation Objects.
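The same check can be done from PowerShell with the ActiveDirectory module; this sketch simply builds the path to the container described above from the configuration naming context.

    Import-Module ActiveDirectory

    # Build the path to CN=Activation Objects in the configuration partition
    $configNC = (Get-ADRootDSE).configurationNamingContext
    $aoPath   = "CN=Activation Objects,CN=Microsoft SPP,CN=Services,$configNC"

    # One object should exist per KMS host key installed through the wizard
    Get-ADObject -SearchBase $aoPath -SearchScope OneLevel -Filter * |
        Select-Object Name, DistinguishedName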

The Software Protection service on Windows looks for AD-based Activation objects. To check whether your devices are being activated by AD Activation, restart them and then run slmgr /dli.
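For example, on a domain-joined Windows 8/Server 2012 or later client, something like the following should confirm how it is being activated (the exact output wording varies by SKU):

    # Display basic licence information for the client
    cscript.exe C:\Windows\System32\slmgr.vbs /dli

    # List the activation objects held in Active Directory
    cscript.exe C:\Windows\System32\slmgr.vbs /ao-list

    # Force an activation attempt if the client has not yet activated
    cscript.exe C:\Windows\System32\slmgr.vbs /ato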

Wednesday, 25 May 2016

Windows Server 2012 R2 Hyper-V Replica – Overview and Configuration

I am having to re-certify my MCSE Server Infrastructure in the next couple of weeks, so I really need to skill up on Hyper-V and understand what it can do in 2012 R2. I tend to always favour VMware in production deployments.

Overview of Hyper-V Replica on Windows Server 2012 R2

Hyper-V Replica is a technology that allows virtual machines (VMs) to be asynchronously replicated between Hyper-V hosts. Hyper-V Replica was introduced in Windows Server 2012.


Hyper-V Replica Key Points:
·        Asynchronous replication between Hyper-V hosts.
·        Supported in Windows Server 2012 and above.
·        Replication is achieved over ordinary IP-based networks.
·        Hosts can be geographically separated.
·        Supported alongside Hyper-V clustering, and without it.
·        No shared storage is required for Hyper-V Replica.
·        Hyper-V hosts do not have to be domain joined.
·        Hyper-V hosts can be domain joined, and reside in different forests.
·        Authentication between hosts can be done with certificates or Kerberos.
·        Hyper-V Replica can be used in conjunction with Azure Site Recovery (ASR).
·        Replication cycles can be configured at 30 seconds, 5 minutes or 15 minutes.
·        Hyper-V Replica replication can be scoped using Trust Groups for multi-tenant environments.

Hyper-V Replica seems to be aimed at the disaster recovery space, for failover at site level to a DR data centre. In some smaller organizations it can also be used as failover between two Hyper-V hosts when expensive SANs etc. are not available.

However, if you plan to use Hyper-V Replica as "cheap" failover, please bear in mind it's not automatic. A manual step is required in order to cut over to the replica host. You can, however, perform test failover scenarios so that you can effectively test your DR solution.

A log file is maintained by the Hyper-V Replica service to ensure the latest changes are played back to the replica VMs; this is done based on the replication cycle time you configure.

Configuring Hyper-V Replica on Windows Server 2012 R2

From the source Hyper-V host click Enable Replication from the Actions menu. Click Next on the introduction page.


Click Browse and find the destination Hyper-V host you are going to enable replication to. Click Next.


If you have not configured Hyper-V Replica on the destination server you will receive the error "The specified Replica server is not configured to receive replication from this server." and the option to Configure Server… will appear.


This will display the Hyper-V Settings for the destination host. Click Enable this computer as a Replica Server and then tick Use Kerberos (HTTP); I am going to cover the integration of certificates in a separate post.


Scroll down the window, click Allow replication from the specified servers and click Add; from here enter the name of the source Hyper-V host. You will also be prompted to create a new Trust Group. A Trust Group is a form of authorization which contains the servers that are allowed to replicate between each other.
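The same replica server settings can be applied with PowerShell on the destination host. This is only a sketch; the server name, storage path and Trust Group name are placeholders.

    # Run on the destination (replica) Hyper-V host:
    # enable it as a Replica server using Kerberos (HTTP) on TCP 80
    Set-VMReplicationServer -ReplicationEnabled $true `
                            -AllowedAuthenticationType Kerberos `
                            -KerberosAuthenticationPort 80 `
                            -ReplicationAllowedFromAnyServer $false

    # Only allow replication from the named source host, placing it in a Trust Group
    New-VMReplicationAuthorizationEntry -AllowedServerName 'HV01.lab.local' `
                                        -ReplicaStorageLocation 'D:\Hyper-V\Replica' `
                                        -TrustGroup 'LabTrustGroup'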


As Kerberos (HTTP) uses TCP port 80, you must ensure that there is an incoming firewall rule, bound to the domain profile, on both the source and destination servers.
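Hyper-V ships with pre-defined inbound rules for Replica, so in most cases it is just a case of enabling the HTTP listener rule on both hosts. The display name below is what it appears as on 2012 R2; check with Get-NetFirewallRule if it differs on your build.

    # Enable the built-in inbound rule for Hyper-V Replica over HTTP (TCP 80)
    Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'

    # There is an equivalent HTTPS listener rule for certificate-based replication
    Get-NetFirewallRule -DisplayName 'Hyper-V Replica*' |
        Format-Table DisplayName, Enabled, Profile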



One important thing to remember on the Replication VHDs page is that if you have Hyper-V Replica configured on a VM and you add additional disks to it at a later stage, Hyper-V does not detect this and it will not automatically start replicating the new disks. You must reconfigure the replication to include the new disks.


The frequency you select will depend upon your RPO/RTO requirements. In most cases 30 seconds may be a little excessive, as it will generate a very high amount of network traffic. By default, replication traffic is sent over the management interface.


Hyper-V Replica can be configured to store multiple recovery points; this allows you to keep multiple versions of the server, giving you more options when rolling back.
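Everything from the replica server choice through to the frequency and the number of recovery points can also be set in one go from the source host with Enable-VMReplication; the VM and server names below are placeholders.

    # Run on the source Hyper-V host: replicate every 5 minutes (300 seconds)
    # and keep 4 additional hourly recovery points on the replica
    Enable-VMReplication -VMName 'DC01' `
                         -ReplicaServerName 'HV02.lab.local' `
                         -ReplicaServerPort 80 `
                         -AuthenticationType Kerberos `
                         -ReplicationFrequencySec 300 `
                         -RecoveryHistory 4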


Initial Replication can then be configured; it's worth knowing that if you have a large number of VMs with large data sets you can stage the data onto an external disk.
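The initial replication method can also be driven from PowerShell; a sketch with a placeholder VM name and path.

    # Start the initial replication over the network, scheduled out of hours
    # so it does not saturate the link (pick only one initial replication method)
    Start-VMInitialReplication -VMName 'DC01' -InitialReplicationStartTime (Get-Date).Date.AddHours(22)

    # For very large VMs you could instead export the initial copy to removable media with
    #   Start-VMInitialReplication -VMName 'DC01' -DestinationPath 'E:\InitialCopy'
    # and then import it on the replica host with Import-VMInitialReplication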


You can monitor the initial replication status of the VM.


Once it’s completed you can view the status of the replication from the destination server, under the Replication tab.


Replication health can also be checked from the source server if you right-click the VM and select Replication > Replication Health.
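Both of these views have PowerShell equivalents, which are handy if you are checking a number of VMs; the VM name is a placeholder.

    # Replication state, mode (Primary/Replica) and health for every VM on the host
    Get-VMReplication

    # More detailed statistics for a single VM: average size, pending replication, errors, etc.
    Measure-VMReplication -VMName 'DC01' | Format-List *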


You can perform planned failovers to the replica VM if you require; for example, you may have to take the source Hyper-V host offline for patching. Please note that your failover will not work if you do not have the Virtual Networks configured correctly on both sides. You must have a Hyper-V switch configured on both Hyper-V hosts with EXACTLY the same name, and from a networking perspective this switch must be uplinked to the correct physical network segment to allow the VM to continue working properly.


A VM must be powered off to perform a planned failover; once it's off you should be able to fail over the VM instance.
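The planned failover can also be scripted; a sketch of the usual sequence with placeholder names (the -Prepare step runs on the source host and the rest on the replica host).

    # On the source host: shut the VM down cleanly, then send any un-replicated changes
    Stop-VM -Name 'DC01'
    Start-VMFailover -VMName 'DC01' -Prepare

    # On the replica host: fail over, optionally reverse the replication direction,
    # then start the VM
    Start-VMFailover -VMName 'DC01'
    Set-VMReplication -VMName 'DC01' -Reverse
    Start-VM -Name 'DC01'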

Tuesday, 24 May 2016

Hyper-V Failover Clustering in Windows Server 2012 R2 - Overview

Hyper-V can be deployed in conjunction with the Failover Clustering feature in Windows Server 2012 R2 to achieve a highly available and resilient virtualization platform. Hyper-V Failover Clusters allow the movement of VMs to neighbouring Hyper-V hosts within a cluster; this protects against host failure and also provides flexibility when performing updates and patches of the underlying host operating systems.

In the event of a node (Hyper-V host) failure, the Failover Cluster service will automatically fail over, or "cut over", the VMs running on that node to another node in the Failover Cluster. It should be noted that this does not provide instant failover; there is a short period of latency between a host failing and the VMs being started on another node.

The following are some key points relating to Hyper-V Failover Clusters if you are considering them for your infrastructure:
·        Cluster node to node communications should be on a separate network segment.
·        Hyper-V failover cluster hardware must pass all validation checks for compatibility (see the PowerShell sketch after this list).
·        Hyper-V clusters are limited to 64 hosts and 8,000 VMs.
·        Network interface card settings should be identical across hosts and switches.
·        DNS must be configured on each cluster host.
·        Private cluster networks should not be routable.
·        Private cluster networks should use unique network segment ranges.
·        Hyper-V hosts should be in the same Active Directory domain.
·        A single Hyper-V host can run up to 1,024 VMs concurrently.
·        Shared storage is required for a failover cluster; SMB shares can be used.
·        Virtual Networks on Hyper-V hosts must be named exactly the same on clustered nodes.
·        Cluster Shared Volumes (CSVs) are typically used for clustered VM storage.
·        The Hyper-V Integration Services are required on clustered VMs.
·        Hyper-V QoS can be used to reduce the number of physical NICs required for different traffic types.
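If you do go down this route, the validation and creation steps map onto the FailoverClusters PowerShell module. A sketch only; the node names, cluster name, IP address and disk name are all placeholders.

    # Validate the candidate nodes first - the cluster is only supported if this passes
    Test-Cluster -Node 'HV01', 'HV02'

    # Create the cluster with a static management address
    New-Cluster -Name 'HVCLUSTER01' -Node 'HV01', 'HV02' -StaticAddress '10.0.0.50'

    # Convert a clustered disk into a Cluster Shared Volume for the VM storage
    Add-ClusterSharedVolume -Name 'Cluster Disk 1'

    # Make an existing VM highly available in the cluster
    Add-ClusterVirtualMachineRole -VMName 'DC01'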

You are probably comparing Hyper-V to VMware vSphere; which product is best is a matter of opinion. I believe vSphere is the more mature product at this point in time, although in Windows Server 2012 R2 many of the most commonly used features are available in both. For example, at one stage VMware's Distributed Resource Scheduler, which automatically load balances VM workloads across clustered ESXi hosts, was not something Hyper-V could do. Now, with Hyper-V 2012 R2, VMM and Live Migration, there is a very similar service available that boasts "no noticeable" downtime whilst migrating.