Thursday, 9 November 2017

Customise IKE/IPsec Policy for Azure Virtual Network Gateway Connection Object

Microsoft has given us the ability to define custom IKE/IPsec policies for use with the native Virtual Network Gateway. In the past, the on-premises peer device had to support the fairly limited protocol suites that were statically set on the Virtual Network Gateway.

It should be noted that custom IKE/IPsec policies are set at the connection object level and not on the Virtual Network Gateway itself; this offers greater flexibility when you have multi-site VPN connections terminated on a single Virtual Network Gateway. This example covers how to create a custom policy for an S2S VPN connection. It is also possible to apply custom policies to vNet-to-vNet connections, although this may be less common now that vNet Peering has become generally available.

Please note that in this example the Virtual Network Gateway along with the Local Network Gateway has already been defined in my subscription. The Local Network Gateway is the resource that contains the remote VPN peer address etc.

This command creates the new policy and stores it in a variable called $policy, which will be referenced when we create the new connection object. Note that the PFS group below is set to ECP384 to match the DH group; choose whichever value your on-premises device supports.

$policy = New-AzureRmIpsecPolicy -DhGroup ECP384 -IkeEncryption AES256 -IkeIntegrity SHA384 -IpsecEncryption GCMAES256 -IpsecIntegrity GCMAES256 -PfsGroup ECP384

This command retrieves the Virtual Network Gateway you want to create the connection object on and stores it in a variable called $gateway.

$gateway = Get-AzureRmVirtualNetworkGateway -Name "rbVNG01" -ResourceGroupName "rbRG1"

This command stores the Local Network Gateway object in the variable $remote.

$remote = Get-AzureRmLocalNetworkGateway -Name "TW-2-AZ" -ResourceGroupName "rbRG1"

This command creates the connection object and applies the newly created policy.

New-AzureRmVirtualNetworkGatewayConnection -Name "TW-2-AZ" -ResourceGroupName "rbRG1" -VirtualNetworkGateway1 $gateway -LocalNetworkGateway2 $remote -Location "UK South" -ConnectionType IPsec -IpsecPolicies $policy -SharedKey "VerySecretCode"
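Once the connection has been created, it is worth confirming that the custom policy was actually applied. A quick sketch, reusing the names from the commands above:

```powershell
# Retrieve the connection object and inspect the applied IPsec policy and status
$conn = Get-AzureRmVirtualNetworkGatewayConnection -Name "TW-2-AZ" -ResourceGroupName "rbRG1"
$conn.IpsecPolicies
$conn.ConnectionStatus
```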

Please review this article for full details on the supported protocol suites etc.

Tuesday, 5 September 2017

Modifying the primary Network Interface on an Azure IaaS VM using PowerShell

When you have a multi-NIC Azure IaaS VM, one of the NICs must be marked as the "primary" NIC. This can only be configured using PowerShell.

The Azure IaaS VM instance must be in the stopped state before the NICs can be modified.

Stop-AzureRmVM -Name "myNVA" -ResourceGroupName "myNVA"

Set a variable for the Azure IaaS VM you want to interrogate

$vm = Get-AzureRmVm -Name "myNVA" -ResourceGroupName "myNVA"

Display all the network interfaces attached to that Azure IaaS VM

$vm.NetworkProfile.NetworkInterfaces

Set which Network Interface is the primary using the following commands; the NIC you want as primary should be set to $true

$vm.NetworkProfile.NetworkInterfaces[0].Primary = $false
$vm.NetworkProfile.NetworkInterfaces[1].Primary = $true

Update-AzureRmVM -VM $vm -ResourceGroupName "myNVA"
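Before starting the VM back up, you can re-read the VM object to confirm the Primary flags were saved, for example:

```powershell
# Re-fetch the VM and list each NIC with its Primary flag
$vm = Get-AzureRmVm -Name "myNVA" -ResourceGroupName "myNVA"
$vm.NetworkProfile.NetworkInterfaces | Select-Object Primary, Id
```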

Instructs Azure to start up the VM
Start-AzureRmVM -ResourceGroupName "myNVA" -Name "myNVA"


The commands must be run in this order

Stop-AzureRmVM -Name "GBAS1SER04" -ResourceGroupName "GBAS1-RG-FS-01"

$vm = Get-AzureRmVm -Name "GBAS1SER04" -ResourceGroupName "GBAS1-RG-FS-01"
$nicId = (Get-AzureRmNetworkInterface -ResourceGroupName "GBAS1-RG-FS-01" -Name "GBAS1SER04-NI-01").Id

Add-AzureRmVMNetworkInterface -VM $vm -Id $nicId

$vm.NetworkProfile.NetworkInterfaces[0].Primary = $false
$vm.NetworkProfile.NetworkInterfaces[1].Primary = $true

Update-AzureRmVM -ResourceGroupName "GBAS1-RG-FS-01" -VM $vm

If you do not set the Primary flags on the network interfaces before running Update-AzureRmVM, the script will fail.

Making changes to Azure IaaS VM NICs in Azure Resource Manager using PowerShell

Stop-AzureRmVM -Name "myNVA" -ResourceGroupName "myNVA"

$vm = Get-AzureRmVm -Name "myNVA" -ResourceGroupName "myNVA"
$nicId = (Get-AzureRmNetworkInterface -ResourceGroupName "myNVA" -Name "myNVA-eth0").Id

Remove-AzureRmVMNetworkInterface -VM $vm -NetworkInterfaceIDs $nicId

Update-AzureRmVm -ResourceGroupName "myNVA" -VM $vm

Friday, 2 June 2017

Azure Site Recovery (ASR): a few lessons learned

A few useful things to keep in mind while utilising Azure Site Recovery from Hyper-V without VMM.

Azure Site Recovery replica traffic cannot be routed over a dedicated DR link

At the time of writing, you cannot route Hyper-V replica traffic over a dedicated network. By default, the management network is used for all ASR/Hyper-V replica traffic. Some clients I have worked with maintain a second Internet line which is usually dedicated for "DR" traffic.

Hyper-V hosts lose connectivity to Azure when the agent gets out of date

An agent must be installed on the Hyper-V host servers in order to protect VMs with ASR. The agent is constantly updated by Microsoft, and I have found that hosts lose connectivity if you are using an out-of-date agent. Generally, a simple reinstall and re-register resolves the issue.

Hyper-V VM disks cannot be converted to “protected disks” once the VM is protected

When protecting an on-premises VM using Azure Site Recovery, be sure to select all the required Virtual Hard Disk (VHD) files that belong to that VM. Once a VM is in the "protected" state (after the initial seed) you cannot protect additional disks that belong to that VM.

Add 25-50% extra bandwidth than the Azure Site Recovery Capacity Planner suggests

Microsoft has provided the following tool to help calculate the amount of bandwidth required to protect a given number of VMs using Azure Site Recovery.

In my experience, always add 25-50% to the suggested amount of bandwidth. Most of the Azure Site Recovery projects I have done recently are for small to medium sized businesses in the Caribbean, so this may vary depending on what information you put into the capacity calculator.

Microsoft does not support Import/Export for Azure Site Recovery seeding

Microsoft does not offer the Import/Export service for Azure Site Recovery protected workloads; at the time of writing this post the only way to perform the initial "seed" of your protected VMs is to do it over the WAN. Azure Site Recovery is based on Hyper-V Replica, which shipped in Windows Server 2012, and replica traffic is optimised to go over the WAN.

In the Caribbean, Internet bandwidth is limited and very expensive, so customers are likely to look at solutions other than Azure Site Recovery to protect their infrastructure. It would not be practical to replicate a large number of VMs to Azure using a typical Cayman Islands business grade DIA connection, which might offer bandwidth in the region of 20-50 Mbps.

Microsoft needs to introduce the ability to ship an encrypted hard disk to their facility for offline data ingestion, otherwise uptake in the Caribbean regions will be limited.

Extra Disk space is required on the source Hyper-V server to perform the ASR seed

ASR effectively takes a snapshot of the protected VM while it syncs it to Azure. Disk space is required on the source Hyper-V infrastructure in order for this to complete successfully.
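Before starting the initial sync it is worth checking free space on the source host. A minimal sketch, assuming Server 2012 or later with the Storage module available:

```powershell
# List free space per volume in GB on the source Hyper-V host
Get-Volume | Select-Object DriveLetter, FileSystemLabel,
    @{Name='FreeGB'; Expression={[math]::Round($_.SizeRemaining / 1GB, 1)}}
```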

Always move the page file to a dedicated disk, and ensure this disk is not protected with ASR


Fixed size VHDs work the best

Although ASR fully supports Microsoft's latest virtual hard disk format (VHDX), Azure natively only supports VHDs at this time. VMs which have VHDX disks are automatically converted to VHD files on the fly. In my experience, the success rate is better if you use VHD files on the source VMs; in addition to this I prefer to use fixed size disks when possible. I have seen some issues when protecting differencing disks, and the errors are mostly related to the AVHDX which is created during the initial sync.
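If you want to convert a source disk to a fixed-size VHD ahead of protection, the Hyper-V module's Convert-VHD cmdlet can do it. The paths below are hypothetical, and the VM must be offline while its disk is converted:

```powershell
# Convert a VHDX to a fixed-size VHD (creates a new file; the original is left in place)
Convert-VHD -Path 'D:\VMs\Server01.vhdx' -DestinationPath 'D:\VMs\Server01.vhd' -VHDType Fixed
```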

Friday, 17 February 2017

Tools to Analyze, Repair and Migrate Data to SharePoint Online

SharePoint Online is becoming a popular document management/file storage solution for many organizations using Office 365 for collaboration. Although SharePoint Online does have more limitations than standard file shares, it meets the requirements for most customers.

One of the main limitations with SharePoint is the characters it does not support in file and folder names. A full list of unsupported characters can be found here.

The most common one I see is the use of the # character in purchase orders etc. The logic is simple: if your data has unsupported characters in the name, the data cannot be uploaded to SharePoint.
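If you just want a quick pre-check before running the tool, something like the following PowerShell sketch will flag offending names. The character set and path below are illustrative; check Microsoft's current list of unsupported characters before relying on it:

```powershell
# Flag file and folder names containing characters SharePoint Online rejects
$invalid = [char[]]'"*:<>?/\|#%'
Get-ChildItem -Path 'D:\DataCopy' -Recurse |
    Where-Object { $_.Name.IndexOfAny($invalid) -ge 0 } |
    Select-Object FullName
```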

Luckily, Microsoft has provided a tool to analyze the data before you make the jump to SharePoint. The same tool can also be used to "fix" any issues found in the files and folders.

The following link provides a copy of this tool; I have been unable to find the location I originally downloaded it from.

1 - Analyze 

The following steps will show you how to analyze your data and then fix any issues if you want. Launch the executable and select Local Folders. I would always suggest doing this on a replica copy of your production data. Also keep in mind that you are changing file and folder names, so any static mappings to this data will break.

Browse to the secondary copy of your production data and click Next.

You get another chance to back out before it actually changes the data, so click Yes here.

Once it has finished analyzing the data set it will generate a file called "UnsupportedCharsOnOneDrive" on the Desktop of the user running the tool. This is a text file that gives you a list of any problematic files or folders the tool has discovered.

You should review the file and if you are happy with the changes you can proceed which will instruct the tool to actually change the data and remove any unsupported characters.

2 - Repair the Data Set

Click Yes if you want the tool to fix your data.

When you try to upload a file or folder to SharePoint Online which contains an unsupported character you will get an error like this one "File names can't contain the following characters:"

If you click Yes at the previous step the tool will batch through the data and repair it. Once it's completed it will confirm.

Now if you try to move the same file to SharePoint Online it will complete successfully. 

3 - Migrating Data to SharePoint Online

Another tool which I have found useful for migrating data to SharePoint Online is SPZilla, which is an open source tool available from here

It is inspired by the FTP client FileZilla and works extremely well for moving large quantities of data to a SharePoint Online site.

Monday, 13 February 2017

Enable nested virtualization on Windows 10 Enterprise

It is now possible (and supported) to run nested hypervisors such as Hyper-V and VMware ESXi inside a VM. This is enabled on a per-VM basis using the following command; please note that the VM must be in the powered-off state before this command will work.

Set-VMProcessor -VMName DcLabServer01 -ExposeVirtualizationExtensions $true

If you try to install the Hyper-V role on a VM, the installation will fail unless this has been configured on the host. Nested virtualization is supported on Windows 10 as well as Server 2016.
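You can confirm the setting took effect with Get-VMProcessor, for example:

```powershell
# Check whether virtualization extensions are exposed to the VM
Get-VMProcessor -VMName DcLabServer01 | Select-Object VMName, ExposeVirtualizationExtensions
```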

Deploying Nano Server on Server 2016 as Hyper-V Hosts and Domain Join Nano Server with djoin.exe

The new server deployment option Nano Server can be used for Hyper-V hosts (along with other roles); it feels like Microsoft's attempt to get Hyper-V closer to an appliance than to a full Windows Server. The Nano Server Recovery Console feels a bit like the DCUI in VMware ESXi. I was recently playing with Nano Server with the intent of deploying a two-node Hyper-V Cluster.

The Nano Server Image Builder (which requires the Windows 10 ADK) is very effective at creating Nano Server images, and it also generates the raw code it uses to do so. The code below creates a new Nano Server VHDX with Containers, Hyper-V and Shielded VM support. The Image Builder makes generating these scripts remarkably easy.

New-NanoServerImage -MediaPath 'F:\' -Edition 'Datacenter' -DeploymentType Guest -TargetPath 'E:\Virtual Machines\Virtual Machines\NanoServer.vhdx' -MaxSize 8589934592 -SetupUI ('NanoServer.Containers', 'NanoServer.Compute', 'NanoServer.ShieldedVM') -ComputerName 'nanoserverlab01' -SetupCompleteCommand ('tzutil.exe /s "SA Pacific Standard Time"') -LogPath 'C:\Users\ryan_betts.MEGATRON\AppData\Local\Temp\NanoServerImageBuilder\Logs\2017-02-13 15-47'

Nano Server has almost no local logon capabilities for configuring Hyper-V etc., so everything should be done remotely. This is easy if the Nano Server hosts are joined to the AD domain. The Nano Server Image Builder offers you the ability to pre-join Nano Server instances to an Active Directory domain. For this to work, you must first pre-create the computer object in AD using djoin.exe.

The following command can be used to do this; it will only work if the server running the Nano Server Image Builder is joined to the destination domain. I am sure you could get around this with djoin blobs etc.

Djoin.exe /provision /domain <domain FQDN> /machine nanoserver02 /savefile offline.txt

On the Destination Machine Information page, the Nano Server computer name must match the pre-created computer object name for the process to work.

Select the Domain Join option, this is the part that will fail if you are not building the Nano Server VHD from a device that is joined to the destination domain.

Once you have created a new VM and attached the Nano Server VHD, it should be pre-joined to the domain and should let you log in using Domain Admin credentials.

Monday, 6 February 2017

Finding Office 365 Tenant IDs the easy way

There are multiple ways you can find an Office 365 Tenant ID, however I have found the easiest way is to use the following commands:

Connect-MsolService
Get-MsolAccountSku

The output from the Get-MsolAccountSku command displays the Tenant ID in its text value (the tenant name is the prefix of each AccountSkuId). Many of the guides online state the method to find the alphanumeric value, but I have not found any commands that will not accept the text value.

How to auto populate email address for ADFS authentication to Office 365

When ADFS is deployed in conjunction with Office 365 it prevents users from having to re-enter their corporate passwords in order to access web-based Office 365 services. However, the user is still required to input their email address. When the email address is entered, the Office portal redirects the user to the ADFS URL which was configured as part of the federation between ADFS and Office 365. In an ideal world it would be possible to auto-populate this email address field to improve the user experience further.

One of the easiest ways to achieve this, with no further configuration to ADFS or the user's browser, is to use a customized URL to access Office 365 services. This basically appends the customer's domain to the URL.

It is also possible to code this behavior into ADFS itself; the following blog post includes a snippet of C# code that sets this. None of the content in that post is produced or owned by me.

For this environment, there are only 25 named users, therefore the customized URL is good enough. However, it is good to know it is possible to set this statically if required.

Windows Server 2016 - ADFS 4.0 "An error occurred. The resource you are trying to access is not available. Contact your administrator for more information"

When I set up ADFS I usually browse to the sign-on URL in order to test that ADFS is operational. Although it is not a conclusive test, it does give me some level of assurance that ADFS is functioning if this web page responds with the sign-in page. However, I set up a new ADFS 4.0 infrastructure on Windows Server 2016 and when I browsed to the URL I received the following error.

"An error occurred. The resource you are trying to access is not available. Contact your administrator for more information."

It turns out the IdpInitiatedSignOnPage property is disabled by default in ADFS 4.0. To enable it, use the following PowerShell command on your ADFS servers.

Set-AdfsProperties -EnableIdpInitiatedSignOnPage $True
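You can verify the current state of the property before and after the change, for example:

```powershell
# Check whether the IdP-initiated sign-on page is enabled
Get-AdfsProperties | Select-Object EnableIdpInitiatedSignOnPage
```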

Thursday, 2 February 2017

Upgrade ADFS 3.0 to 4.0 with Windows Server 2016 using PowerShell

Windows Server 2016 introduces the ability to perform an in-place upgrade of Active Directory Federation Services (ADFS). In previous versions of Federation Services, you were required to "rip and replace" the deployment with a new set of servers, and you would then have to export/import your configuration data.

It should be noted that an OS level upgrade from Windows Server 2012 R2 to 2016 will not automatically upgrade the ADFS farm.

         Windows Server 2012 R2 – ADFS 3.0
         Windows Server 2016 – ADFS 4.0 
Due to the nature of what ADFS does this was problematic for organizations; if your organization was using Federation Services for Office 365 authentication, an upgrade would result in downtime to the service. ADFS 4.0 introduces the concept of a "farm behavior level", which is similar to how domain functional levels work in Active Directory. When you have ADFS 3.0 and 4.0 in the same farm this is considered a "mixed" farm. The features available across the server farm will be constrained to ADFS 3.0 if you are running in mixed mode. From the research I have done there are no reasons to retain a mixed-mode farm; ADFS 4.0 is 100% backwards compatible with 3.0. A mixed-mode farm should exist only for the period in which you introduce your new servers.

Step 1 – Upgrade Active Directory Schema

You must first upgrade the Active Directory schema before you can introduce ADFS 4.0 servers into the environment. This upgrade is non-intrusive; however, it is recommended you ensure a healthy backup of your Active Directory is available in case anything goes wrong. The upgrade files are on the Server 2016 ISO at \support\adprep. You must perform the upgrade with an account that has Schema Admin and Enterprise Admin permissions, and the changes should be made on the Domain Controller that hosts the Schema Master FSMO role.

The following command can be used to determine your FSMO role holders

netdom query fsmo
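If you prefer PowerShell, the same information is available from the ActiveDirectory module; a quick sketch:

```powershell
# Forest-level FSMO roles (the Schema Master is the one that matters here)
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster

# Domain-level FSMO roles
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
```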

The forest must be prepared first using the following command

adprep /forestprep

I do not think the /domainprep command is required if you only have a single domain

adprep /domainprep

Step 2 – Prepare the Windows Server 2016 ADFS Server

The command installs ADFS on the target server

Install-WindowsFeature Adfs-Federation -IncludeManagementTools

Step 3 – Introduce Server 2016 (ADFS 4.0) into the existing server farm

Set the following variable which is used to store the ADFS service account credentials, these should be entered in the format DOMAIN\username

$creds = Get-Credential

The following command joins the new server to the server farm hosted on the primary server, please note you do not need the -OverwriteConfiguration switch if you are running this for the first time

Add-AdfsFarmNode -ServiceAccountCredential $creds -PrimaryComputerName primaryserver.domain.local -CertificateThumbprint "DB84EE68879B8xxxxxxxx" -OverwriteConfiguration

Once this has completed, run the following command on the new server; this reconfigures the farm and makes the new server the primary

Set-AdfsSyncProperties -Role PrimaryComputer

The following command must be run on all other ADFS servers

Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName "oldfsservers"

If you try to launch the ADFS Management console from the new Server 2016 instance it should open as the primary. 
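You can confirm each server's role with Get-AdfsSyncProperties, for example:

```powershell
# Shows whether this node is the PrimaryComputer or a SecondaryComputer
Get-AdfsSyncProperties
```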

The following command will show the ADFS servers in the farm along with the farm behavior level

Get-AdfsFarmInformation

The CurrentFarmBehavior outlines what Farm Behavior Level the ADFS farm is operating at. Once the Server 2012 R2 instances are decommissioned and the farm level is raised this will be set to “3”.

Step 4 – Decommission Server 2012 R2 and ADFS 3.0 from the farm

In my environment, the requirement was to upgrade the entire server farm to ADFS 4.0, so before I could do this all the old ADFS 3.0 instances had to be removed.

The following command has been deprecated in Windows Server 2012 R2.

Remove-AdfsFarmNode

The recommendation from Microsoft is to uninstall ADFS from the server to remove it from an ADFS server farm.

Step 5 – Raise ADFS Farm Behavior Level

It is advisable to run this command first to check you are in a position to upgrade the farm level

Test-AdfsFarmBehaviorLevelRaise

From the primary server, run the following command to raise the ADFS server farm level

Invoke-AdfsFarmBehaviorLevelRaise

The Enterprise Key Admins group apparently does not exist until a Server 2016 Domain Controller is introduced to the Active Directory, so the warning about it can be ignored for now. I have yet to find any official documentation to support this.

Wednesday, 1 February 2017

Resetting an Azure Gateway using PowerShell

I was recently troubleshooting an Azure VPN problem for a customer; an S2S VPN to a Cisco ASA 5505 firewall that had been working correctly for months would no longer establish.

The following commands can be used to reset an Azure Gateway. A Gateway is basically two instances (RRAS running in Active/Passive, I believe) that are managed by Microsoft. If you run this command, Azure will fail over and reboot the active node, and the passive node will take over.

$gateway = Get-AzureRmVirtualNetworkGateway -Name "cloud_gw" -ResourceGroupName "rname"

Reset-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gateway
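After the reset you can re-fetch the gateway to check it has come back cleanly, reusing the names from above:

```powershell
# ProvisioningState should return to "Succeeded" once the failover completes
(Get-AzureRmVirtualNetworkGateway -Name "cloud_gw" -ResourceGroupName "rname").ProvisioningState
```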

Tuesday, 31 January 2017

Connecting Server 2016 RDS Connection Broker to Azure Database fails with "the database specified in the database connection string is not available from the RD Connection Broker..."

When you try to configure a Windows Server 2016 Remote Desktop Services (RDS) Connection Broker to use an Azure PaaS database instance you get the following error;

"The database specified in the database connection string is not available from the RD Connection Broker server. Ensure that the database server is available on the network, the database exists and is empty (no scheme present), the Database server native client is installed on the RD Connection broker server, and the RD connection broker has write permissions to the database."

The error was caused by the firewall configuration on the Azure side. To alter the settings you must click the Firewall tab of the logical SQL Server in Azure, then enable the setting Allow access to Azure Services.

In addition to this, click the Add Client IP option, which will automatically populate an inbound firewall rule to allow incoming connections from the IP address you are currently accessing the portal from.

Click to Save the changes.
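The same firewall changes can be scripted; a sketch with hypothetical server and resource group names. The 0.0.0.0 to 0.0.0.0 range is the special rule that corresponds to the Allow access to Azure Services setting:

```powershell
# Equivalent of enabling "Allow access to Azure Services" in the portal
New-AzureRmSqlServerFirewallRule -ResourceGroupName "myRG" -ServerName "mysqlserver" `
    -FirewallRuleName "AllowAzureServices" -StartIpAddress "0.0.0.0" -EndIpAddress "0.0.0.0"

# Equivalent of "Add Client IP" (substitute your own public IP address)
New-AzureRmSqlServerFirewallRule -ResourceGroupName "myRG" -ServerName "mysqlserver" `
    -FirewallRuleName "ClientIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"
```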

When you try to continue with the RDS installation the wizard will get past the error message. 
