Tuesday, 23 January 2018

Lessons learned from migrating legacy Windows Server 2003 VMs to Azure

Although Windows Server 2003 has long since fallen out of official Microsoft support, and everyone should really be off the platform, some legacy applications still used in production rely on these old operating systems.

To my knowledge, Microsoft has never supported Server 2003 on Azure, and they pulled support for Server 2008 a good few years ago. It is, however, possible to run “unsupported” versions of Windows inside Azure IaaS VMs, provided you don’t complain to Microsoft if they don’t work correctly. As Azure runs a flavour of Hyper-V, generally any specialised VHD you upload will boot. You can prove this using boot diagnostics screenshots; the most frustrating part is often being unable to access the VM once you know it has booted.

This can be the case even if you follow the official guidance on preparing VHDs for Azure.

The above article walks you through ensuring the VM is configured correctly to support incoming RDP connections and so on. It is, however, all done in PowerShell, and most of the commands don’t run successfully on Server 2003, so you have to decipher the intent and perform most of the changes manually using either the registry or CMD.

If you are going to run Server 2003 on Azure, you must ensure the Hyper-V Integration Services are installed inside the VM. One annoyance, if you are coming from VMware, is that the Integration Services will fail to install if the setup does not detect Hyper-V. When I’m moving VMware VMs I usually have a Hyper-V host available for testing; you can now run nested Hyper-V on Server 2016 in Azure if you use certain instance types.

Full details on nested Hyper-V on Azure
It’s very simple to get nested Hyper-V going: deploy a Server 2016 instance using one of the instance types that support nested virtualisation, then install the Hyper-V role as normal. After a few reboots you will have a nested Hyper-V server.
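As a sketch, enabling the role from an elevated PowerShell prompt inside the Server 2016 VM looks like this (the instance size is my own example — it just needs to be one of the sizes that expose virtualisation extensions, such as the v3 series):

```powershell
# Run inside the Azure VM (e.g. a Standard_D4s_v3, which supports nested virtualisation).
# Installs the Hyper-V role plus management tools, then reboots to complete the install.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```

After the reboots, Hyper-V Manager and the Hyper-V PowerShell module are available on the VM as they would be on a physical host.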

As tools such as ASR are out of the question when you are dealing with unsupported operating systems, I use the following method:

  1. Ensure you have the local admin password available.
  2. Prepare the VM using the checklist provided by Microsoft.
  3. Install the Azure VM Agent/Hyper-V Integration Services (2008 R2 onwards).
  4. Use Disk2vhd to convert the VM to VHDs (not VHDX).
  5. Use Hyper-V to convert the dynamic VHDs to fixed-size VHDs.
  6. Move the VHDs to Azure Storage using Storage Explorer.
  7. Use nested Hyper-V to test that the converted VMs boot to Windows.
  8. Use predefined ARM templates to deploy a specialised VM from a VHD.

A few notes on the nested Hyper-V step:

  - Even if you are coming from on-premises you can use nested Hyper-V in Azure; I usually make use of the high bandwidth available there. If you have uploaded VHDs to Azure Storage it is possible to download them into your nested Hyper-V instance (at roughly 100 Mbps), and upload speed is quick as well.
  - Only the newest versions of VMware support nested Hyper-V, so the Azure route is a good option if you cannot get a Hyper-V host up and running, or if you do not have enough on-premises capacity to deploy one.
  - When you run a VM inside nested Hyper-V you can install the Hyper-V Integration Services. In my experience these are the minimum requirement for a VM to be reachable inside Azure (the NIC drivers are needed). Even if your VM does not support the Azure VM Agent, you can get it connected if the Integration Services are installed.
  - Server 2016 does not provide the Integration Services ISO in the usual place (C:\Windows\System32\vmguest.iso), but you can take the ISO from a Server 2012 R2 host.
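Steps 5 and 8 above can be sketched in PowerShell. This is a hedged outline, not a finished script: all the names (resource group, storage account, paths, disk, NIC and VM names) are placeholders of my own, and it assumes the AzureRM module of the time plus networking that already exists:

```powershell
# Step 5: on a Hyper-V host, convert the dynamic Disk2vhd output to a fixed-size VHD
Convert-VHD -Path "D:\Export\legacy-os.vhd" -DestinationPath "D:\Export\legacy-os-fixed.vhd" -VHDType Fixed

# Step 8: build a VM from the specialised OS VHD already uploaded to Azure Storage
$rgName   = "legacy-poc-rg"
$osVhdUri = "https://legacypocsa.blob.core.windows.net/vhds/legacy-os-fixed.vhd"

$vmConfig = New-AzureRmVMConfig -VMName "legacy-vm01" -VMSize "Standard_A2"

# Attach the uploaded VHD as an existing (specialised) OS disk rather than creating from an image
$vmConfig = Set-AzureRmVMOSDisk -VM $vmConfig -Name "legacy-vm01-osdisk" `
    -VhdUri $osVhdUri -CreateOption Attach -Windows

# Attach a pre-created NIC by its resource ID
$nic = Get-AzureRmNetworkInterface -ResourceGroupName $rgName -Name "legacy-vm01-nic"
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

New-AzureRmVM -ResourceGroupName $rgName -Location "westeurope" -VM $vmConfig
```

The key detail is `-CreateOption Attach`, which tells Azure the VHD is a specialised disk to boot as-is, not a generalised image to provision from.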

Monday, 22 January 2018

Azure VM attaching data disk fails with "Provisioning failed. The specified cookie value in VHD footer indicates that disk, with blob, is not a supported VHD. Disk is expected to have cookie value 'conectix'. Invalid VHD"

When you try to attach an unmanaged Azure VM disk, it fails with "Provisioning failed. The specified cookie value in VHD footer indicates that disk, with blob, is not a supported VHD. Disk is expected to have cookie value 'conectix'. Invalid VHD".

This error occurs when the VHD is corrupted during upload to Azure Storage. To resolve the issue, delete the corrupt copy of the VHD and retry the upload using Storage Explorer or the Azure Portal.
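The "conectix" string the error refers to is the eight-byte magic cookie at the start of the 512-byte footer that ends every VHD file. As a quick local sanity check before re-uploading, you can read the last 512 bytes of the file and verify the cookie yourself — a minimal sketch of my own, not an official tool:

```python
import sys

def vhd_cookie_ok(path):
    """Return True if the file ends with a VHD footer whose cookie is b'conectix'."""
    with open(path, "rb") as f:
        f.seek(0, 2)                      # jump to the end of the file to find its size
        size = f.tell()
        if size < 512:                    # too small to contain a 512-byte VHD footer
            return False
        f.seek(size - 512)                # the footer is the last 512 bytes of the file
        footer = f.read(512)
    return footer[:8] == b"conectix"      # the cookie is the first 8 bytes of the footer

if __name__ == "__main__" and len(sys.argv) > 1:
    ok = vhd_cookie_ok(sys.argv[1])
    print("footer OK" if ok else "footer corrupt - delete the blob and re-upload")
```

If the cookie is wrong locally, the file was damaged before the upload; if it is fine locally but the attach still fails, the corruption happened in transit and a fresh upload should fix it.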

Wednesday, 17 January 2018

How to convert Azure "unmanaged" operating system disks to Managed Disks using PowerShell

It is very easy to convert an "unmanaged" disk to a Managed Disk inside Azure, provided the source VM is standalone and not part of an Availability Set. I recently had to do this after I had "lifted and shifted" a handful of VMs to Azure for a POC. The idea was that I would create new VMs from specialised VHDs uploaded to Azure Storage, so I did some research into the process for moving migrated VMs to Managed Disks.

The following PowerShell can be used to convert a VM's unmanaged disks to Managed Disks. Please note that this will automatically convert all data disks attached to the VM as well as the OS disk.

$rgName = "myResourceGroup"
$vmName = "myVM"

# The VM must be deallocated before its disks can be converted
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force

# Converts the OS disk and all attached data disks to Managed Disks
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vmName

Start-AzureRmVM -ResourceGroupName $rgName -Name $vmName

The process takes some time to complete, mainly because Azure copies the VHD out of the Storage Account; even so, my 45GB VM was converted in less than 5 minutes. Once the conversion is complete the VM will start, and the lease on the original VHD is released.

When you convert a VHD to a Managed Disk, the Managed Disk is automatically given the same name as the VM; the name of the source VHD in the storage account is disregarded.
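For completeness, VMs that are part of an Availability Set need the set itself converted to a managed ("aligned") set before their disks can be converted. A sketch assuming the AzureRM module of the time; the resource group and set names are placeholders:

```powershell
$rgName    = "myResourceGroup"
$avSetName = "myAvailabilitySet"

# Convert the availability set to an "aligned" (managed) set first
$avSet = Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $avSetName
Update-AzureRmAvailabilitySet -AvailabilitySet $avSet -Sku Aligned

# Then deallocate and convert each member VM, as for a standalone VM
foreach ($vmRef in $avSet.VirtualMachinesReferences) {
    $vm = Get-AzureRmVM -ResourceGroupName $rgName | Where-Object { $_.Id -eq $vmRef.Id }
    Stop-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force
    ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name
}
```

An aligned set places each VM's disks on separate storage fault domains, which is why the set must be converted before its members.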

Wednesday, 10 January 2018

Converting VMware VMs to Hyper-V VHDs for an Azure "Lift and Shift" using Virtual Machine Converter 3.1 with ARM and a CSP Subscription

In short: you cannot use the Microsoft Virtual Machine Converter tool to target Azure Resource Manager resources, or CSP subscriptions in general.

I was recently tasked with helping a customer run a proof of concept with a production application in Azure. The application itself consists of four servers and is "off the shelf", but has years of custom development on it. The project was straightforward: "lift and shift" the four VMs to Azure and see if they run. I went through the usual planning stages of using the MAP Toolkit to assess compatibility, VM sizes and so on. My original plan was to use Azure Site Recovery to replicate the VMs to Azure for testing; however, this was a short-lived idea, as two of the four VMs were running unsupported versions of Windows. After this was communicated to the customer, they decided to continue with the project. As it was only a proof of concept, they accepted the risk that two of the four VMs might never run in Azure.

Due to the number of undocumented changes to the "out of the box" application, it was considered a risk worth taking to try the unsupported operating systems. I did have in mind that we could do a dirty upgrade of the unsupported servers to bring them up to Server 2008 R2, but who likes doing that? The application is to be retired in due course, but a number of things must be done ahead of GDPR, so we continued and hoped for the best.

This particular customer has chosen to procure their Azure services through the Cloud Solution Provider (CSP) model, which is very common these days. As Azure Site Recovery was out of the question in this case, my plan was to do an offline "cold" conversion of the VMware VMs using the Microsoft Virtual Machine Converter tool. Although this was possible in theory, I hit a snag I could not (easily) get around. When you launch the Virtual Machine Converter and select the "Migrate to Microsoft Azure" option, the first information it asks for is your Subscription ID and your Subscription Certificate Thumbprint. Getting the Subscription ID from the Azure Portal was no problem; the trouble started when I tried to access the management certificates associated with that CSP subscription. As you can see below, it states that I have insufficient permissions to view or manage any management certificates, and that the Co-Administrator role is required to manage certificates for the subscription. If you are well versed in CSP (I am not), you will know that the Co-Administrator role does not exist for CSP subscriptions.

I proved this by trying to grant the Co-Administrator role to myself; the option does not even appear as it does on subscription types other than CSP. I did some research and came across this article,

which states that management certificates should be accessed through the ASM portal. As this was January 2018, the old ASM portal had been decommissioned, so it looked like I was trapped in a period where the Virtual Machine Converter tool was simply not going to be suitable. I did consider doing the conversion targeting another, non-CSP subscription, but that was not very clean. It turns out the Virtual Machine Converter tool is still entirely based on ASM; even if you get past the certificate validation, it only displays "classic" Storage Accounts.

To get around this problem and move the VMs to Azure, I used the following steps.

Friday, 1 December 2017

Configuring Extranet Lockout Protection in ADFS 2016

Extranet lockout protection is used to protect your internet-facing ADFS from password brute-force attacks. It works much like an account lockout policy in Active Directory: you set a failed password attempt threshold in conjunction with an observation window during which the user in question cannot be authenticated. With extranet lockout enabled, even if the user attempts to log in to the sign-in page with valid credentials after the threshold has been met and before the window has expired, they will not be granted access.

ADFS extranet lockout works separately from Active Directory account lockout; enabling it will not lock out on-premises user accounts when a brute-force attempt is made against the ADFS sign-in page.

Some comparisons should be made between the AD account lockout policy and extranet lockout:

  - The lockout threshold in ADFS should be less than the lockout threshold in AD.
  - The observation window in ADFS should be greater than the lockout observation window in AD.

Extranet lockout protection is not enabled by default on Server 2012 R2 or Server 2016.

View the current defaults using Get-AdfsProperties.
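To cut the output down to just the lockout-related settings, you can filter on the property name prefix (a convenience of my own, not required):

```powershell
# Show only the extranet lockout related properties of the federation service
Get-AdfsProperties | Select-Object Extranet*
```

Run this on the primary ADFS server; the lockout settings are farm-wide, so they only need to be read and set in one place.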

Enable extranet lockout with a threshold of 3 failed authentication attempts and an observation window of 15 minutes:

Set-AdfsProperties -EnableExtranetLockout $true -ExtranetLockoutThreshold 3 -ExtranetObservationWindow (new-timespan -Minutes 15)

Run the command Get-AdfsProperties again to ensure the change has been applied.

It’s worth noting that it takes some time for the warning to disappear from the Azure AD Connect page inside the Azure Portal. It does not seem to be instant and nothing I did seemed to force a re-evaluation.

Further reading https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-lockout-protection
