Friday, 2 June 2017

Azure Site Recovery (ASR): A few lessons learned

A few useful things to keep in mind while utilising Azure Site Recovery from Hyper-V without VMM.


Azure Site Recovery replica traffic cannot be routed over a dedicated DR link

At the time of writing, you cannot route Hyper-V replica traffic over a dedicated network. By default, the management network is used for all ASR/Hyper-V replica traffic. Some clients I have worked with maintain a second Internet line which is usually dedicated to "DR" traffic. 

Hyper-V hosts lose connectivity to Azure when the agent gets out of date

An agent must be installed on the Hyper-V host servers in order to protect VMs with ASR. The agent is updated frequently by Microsoft, and I have found that hosts lose connectivity if you are using an out of date agent. Generally, a simple reinstall and re-register resolves the issue.

Hyper-V VM disks cannot be converted to “protected disks” once the VM is protected

When protecting an on-premises VM using Azure Site Recovery, be sure to select all the required Virtual Hard Disk (VHD) files that belong to that VM at the outset. Once a VM is in the "protected" state (after the initial seed), you cannot protect additional disks that belong to that VM.

Add 25-30% more bandwidth than the Azure Site Recovery Capacity Planner suggests

Microsoft has provided the following tool to help calculate the amount of bandwidth required to protect a given number of VMs using Azure Site Recovery.


In my experience, always add 25-30% to the suggested amount of bandwidth. Most of the Azure Site Recovery projects I have done recently are for small to medium sized businesses in the Caribbean, so this may vary depending on what information you put into the capacity calculator.

Microsoft does not support Import/Export for Azure Site Recovery seeding

Microsoft does not offer the Import/Export service for Azure Site Recovery protected workloads. At the time of writing this post, the only way to perform the initial "seed" of your protected VMs is to do it over the WAN. Azure Site Recovery is based on Hyper-V Replica, which shipped in Windows Server 2012, and replica traffic is optimised to go over the WAN.

In the Caribbean, Internet bandwidth is limited and very expensive, so customers are going to look at solutions other than Azure Site Recovery to protect their infrastructure. It would not be practical to replicate a large number of VMs to Azure using a typical Cayman Islands business-grade DIA connection, which might offer bandwidth in the region of 20-50 Mbps.

Microsoft needs to introduce the ability to ship an encrypted hard disk to their facility for offline data ingestion, otherwise uptake in the Caribbean region will be limited.

Extra Disk space is required on the source Hyper-V server to perform the ASR seed

ASR effectively takes a snapshot of the protected VM while syncing it to Azure. Disk space is therefore required on the source Hyper-V infrastructure for this to complete successfully.

Always move the page file to a dedicated disk, and ensure this disk is not protected with ASR

Self-explanatory.

Fixed size VHDs work the best

Although ASR fully supports Microsoft's latest virtual hard disk format (VHDX), Azure natively only supports VHDs at this time. VMs which have VHDX disks are automatically converted to VHD files on the fly. In my experience, the success rate is better if you use VHD files on the source VMs; in addition, I prefer to use fixed-size disks when possible. I have seen some issues when protecting differencing disks, with errors mostly related to the AVHDX file that is created during the initial sync. 
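
If you want to convert a source disk to a fixed-size VHD before enabling protection, the Hyper-V Convert-VHD cmdlet will do it while the VM is powered off. A quick sketch, where the paths are placeholders for your own:

Convert-VHD -Path "D:\VMs\server01.vhdx" -DestinationPath "D:\VMs\server01.vhd" -VHDType Fixed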

Friday, 17 February 2017

Tools to Analyze, Repair and Migrate Data to SharePoint Online

SharePoint Online is becoming a popular document management/file storage solution for many organizations using Office 365 for collaboration. Although SharePoint Online does have more limitations than standard file shares, it meets the requirements of most customers.

One of the main limitations with SharePoint is the characters it does not support in file and folder names. A full list of unsupported characters can be found here:


The most common one I see is the use of the # character in purchase orders and the like. The logic is simple: if your data has unsupported characters in the name, it cannot be uploaded to SharePoint. 
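
As a rough illustration of that logic (this is not the Microsoft tool mentioned below), a short PowerShell sketch along these lines can flag offending names. The character set is based on the restrictions documented at the time of writing, and the path is a placeholder:

# Flag files and folders whose names contain characters SharePoint Online rejects
$invalidChars = '[\\/:*?"<>|#%]'
Get-ChildItem -Path "D:\DataCopy" -Recurse | Where-Object { $_.Name -match $invalidChars } | Select-Object -ExpandProperty FullName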

Luckily, Microsoft has provided a tool to analyze the data before you make the jump to SharePoint. The same tool can also be used to "fix" any issues found in the files and folders.  

The following link provides a copy of this tool; I have been unable to find the original source from which I downloaded it.


1 - Analyze 

The following steps will show you how to analyze your data and then fix any issues if you want. Launch the executable and select Local Folders. I would always suggest doing this on a replica copy of your production data. Also keep in mind that you are changing file and folder names, so any static mappings to this data will break. 


Browse to the secondary copy of your production data and click Next.


You get another chance to back out before it actually changes any data, so click Yes.


Once it has completed analyzing the data set, it will generate a file called "UnsupportedCharsOnOneDrive" on the Desktop of the user running the tool. This is a text file that lists any problematic files or folders the tool has discovered. 

You should review the file, and if you are happy with the changes you can proceed, which will instruct the tool to actually change the data and remove any unsupported characters.


2 - Repair the Data Set

Click Yes if you want the tool to fix your data.


When you try to upload a file or folder to SharePoint Online which contains an unsupported character, you will get an error like this one: "File names can't contain the following characters:"


If you clicked Yes at the previous step, the tool will batch through the data and repair it. Once it has completed, it will confirm.


Now if you try to move the same file to SharePoint Online it will complete successfully. 


3 - Migrating Data to SharePoint Online

Another tool which I have found useful for migrating data to SharePoint Online is SPZilla, an open source tool available from https://spfilezilla.codeplex.com/

It is inspired by the FTP client FileZilla and works extremely well for moving large quantities of data to a SharePoint Online site. 


Monday, 13 February 2017

Enable nested virtualization on Windows 10 Enterprise

It is now possible (and supported) to run nested virtualization, i.e. hypervisors such as Hyper-V and VMware ESXi inside a VM. This is enabled on a per-VM basis using the following command; please note that the VM must be powered off before this command will work.

Set-VMProcessor -VMName DcLabServer01 -ExposeVirtualizationExtensions $true

When you try to install Hyper-V inside a VM, the role will fail to install unless this has been configured on the host. It is supported on Windows 10 as well as Server 2016.
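
For reference, the full sequence looks something like this, assuming a VM named DcLabServer01. Nested virtualization also requires dynamic memory to be disabled, and MAC address spoofing is needed if the nested guests require network access:

Stop-VM -Name DcLabServer01
Set-VMProcessor -VMName DcLabServer01 -ExposeVirtualizationExtensions $true
Set-VMMemory -VMName DcLabServer01 -DynamicMemoryEnabled $false
Get-VMNetworkAdapter -VMName DcLabServer01 | Set-VMNetworkAdapter -MacAddressSpoofing On
Start-VM -Name DcLabServer01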

Deploying Nano Server on Server 2016 as a Hyper-V Host and Domain Joining Nano Server with djoin.exe

The new server deployment option Nano Server can be used for Hyper-V hosts (along with other roles); it feels like Microsoft's attempt at getting Hyper-V closer to an appliance than a full Windows Server. The Nano Server Recovery Console feels a bit like the DCUI in VMware ESXi. I was recently playing with Nano Server with the intent of deploying a two-node Hyper-V cluster using it.

The Nano Server Image Builder (which requires the Windows 10 ADK) is very effective at creating Nano Server images, and it also generates the raw code it runs in order to do so. The following code creates a new Nano Server VHDX with Containers, Hyper-V and Shielded VM support. The Image Builder makes generating these scripts remarkably easy.

New-NanoServerImage -MediaPath 'F:\' -Edition 'Datacenter' -DeploymentType Guest -TargetPath 'E:\Virtual Machines\Virtual Machines\NanoServer.vhdx' -MaxSize 8589934592 -SetupUI ('NanoServer.Containers', 'NanoServer.Compute', 'NanoServer.ShieldedVM') -ComputerName 'nanoserverlab01' -SetupCompleteCommand ('tzutil.exe /s "SA Pacific Standard Time"') -LogPath 'C:\Users\ryan_betts.MEGATRON\AppData\Local\Temp\NanoServerImageBuilder\Logs\2017-02-13 15-47'

Nano Server has almost no local logon capabilities for configuring Hyper-V and so on, so everything should be done remotely. This is easy if the Nano Server hosts are joined to the AD domain. The Nano Server Image Builder offers you the ability to pre-join Nano Server instances to an Active Directory domain. For this to work, you must first pre-create the computer object in AD using djoin.exe.

The following command can be used to do this. It will only work if the server running the Nano Server Image Builder is joined to the destination domain, although I am sure you could get around this with djoin blobs and the like. 

Djoin.exe /provision /domain ryanbetts.co.uk /machine nanoserver02 /savefile offline.txt
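
For reference, the blob saved to offline.txt would be applied on a target machine with the standard offline domain join command, something along these lines:

Djoin.exe /requestODJ /loadfile offline.txt /windowspath %SystemRoot% /localos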

On the Destination Machine Information page, the Nano Server computer name must match the pre-created computer object name for the process to work.


Select the Domain Join option; this is the part that will fail if you are not building the Nano Server VHD from a device that is joined to the destination domain.


Once you have created a new VM and attached the Nano Server VHD, it should be pre-joined to the domain and should let you log in using Domain Admin credentials.



Monday, 6 February 2017

Finding Office 365 Tenant IDs the easy way

There are multiple ways you can find an Office 365 Tenant ID; however, I have found the easiest way is to use the following commands:

Connect-MsolService
Get-MsolAccountSku

The output from the Get-MsolAccountSku command displays the Tenant ID in its text value. Many of the guides online describe how to find the alphanumeric value instead, but I have not found any commands that will not accept the text value. 
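
For illustration, the output looks something like this for a hypothetical tenant called contoso; the part of AccountSkuId before the colon is the Tenant ID text value:

AccountSkuId           ActiveUnits WarningUnits ConsumedUnits
------------           ----------- ------------ -------------
contoso:ENTERPRISEPACK 25          0            23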

How to auto-populate the email address for ADFS authentication to Office 365

When ADFS is deployed in conjunction with Office 365, it prevents users from having to re-enter their corporate passwords in order to access web-based Office 365 services. However, the user is still required to input their email address. When the email address is entered, the Office portal redirects the user to the ADFS URL which was configured as part of the federation between ADFS and Office 365. In an ideal world it would be possible to auto-populate this email address field to improve the user experience further.


One of the easiest ways to achieve this, with no further configuration to ADFS or the user's browser, is to use a customized URL to access Office 365 services. This basically appends the customer's domain to the URL. 

https://login.microsoftonline.com/login.srf?wa=wsignin1.0&whr=domain.com

It is also possible to code this behavior into ADFS itself; the following blog post includes a snippet of C# code that sets this. None of the content in that post is produced or owned by me.

https://janikvonrotz.ch/2013/10/18/adfs-login-customization/

For this environment, there are only 25 named users, so the customized URL is good enough. However, it is good to know it is possible to set this statically if required.

Windows Server 2016 - ADFS 4.0 "An error occurred. The resource you are trying to access is not available. Contact your administrator for more information"

When I set up ADFS, I usually browse to the following URL in order to test that ADFS is operational: https://fqdn.domain.com/adfs/ls/IdpInitiatedSignon.aspx
 
Although it is not a conclusive test, it does give me some level of assurance that ADFS is functioning if this web page responds with the sign-in page. However, I set up a new ADFS 4.0 infrastructure on Windows Server 2016, and when I browsed to the URL I received the following error. 

"An error occurred. The resource you are trying to access is not available. Contact your administrator for more information."


It turns out the IdpInitiatedSignOnPage property is disabled by default in ADFS 4.0. To enable it, use the following PowerShell command on your ADFS servers. 

Set-AdfsProperties -EnableIdpInitiatedSignOnPage $True
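
To confirm the change has taken effect, you can check the property afterwards:

Get-AdfsProperties | Select-Object EnableIdpInitiatedSignonPage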


Thursday, 2 February 2017

Upgrade ADFS 3.0 to 4.0 with Windows Server 2016 using PowerShell

Windows Server 2016 introduces the ability to perform an in-place upgrade of Active Directory Federation Services (ADFS). In previous versions of Federation Services, you were required to "rip and replace" the deployment with a new set of servers, and you would then have to export/import your configuration data.

It should be noted that an OS level upgrade from Windows Server 2012 R2 to 2016 will not automatically upgrade the ADFS farm.

Windows Server 2012 R2 – ADFS 3.0
Windows Server 2016 – ADFS 4.0
Due to the nature of what ADFS does, this was problematic for organizations; if your organization was using Federation Services for Office 365 authentication, an upgrade would result in downtime to the service. ADFS 4.0 introduces the concept of a "farm behavior level", which is similar to how domain functional levels work in Active Directory. When you have ADFS 3.0 and 4.0 in the same farm, this is considered a "mixed" farm. The features available across the server farm will be constrained to ADFS 3.0 while you are running in mixed mode. From the research I have done there is no reason to retain a mixed-mode farm; ADFS 4.0 is fully backwards compatible with 3.0. A mixed-mode farm should only exist for the period in which you introduce your new servers.

Step 1 – Upgrade Active Directory Schema

You must first upgrade the Active Directory schema before you can introduce ADFS 4.0 servers into the environment. This upgrade is non-intrusive; however, it is recommended you ensure a healthy backup of your Active Directory is available in case it goes wrong. The upgrade tooling is on the Server 2016 ISO at \support\adprep, and you must perform the upgrade with an account that has Schema Admin and Enterprise Admin permissions. In addition, the changes should be made on the Domain Controller that hosts the Schema Master FSMO role.

The following command can be used to determine your FSMO role holders

netdom query fsmo
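
Alternatively, assuming the Active Directory PowerShell module is available, the same information can be pulled with

Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster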


The forest must be prepared first using the following command

adprep /forestprep


I do not think the /domainprep command is required if you only have a single domain

adprep /domainprep


Step 2 – Prepare the Windows Server 2016 ADFS Server

The following command installs ADFS on the target server

Install-WindowsFeature Adfs-Federation -IncludeManagementTools

Step 3 – Introduce Server 2016 (ADFS 4.0) into the existing server farm

Set the following variable, which is used to store the ADFS service account credentials; these should be entered in the format DOMAIN\username

$creds = Get-Credential

The following command joins the new server to the server farm hosted on the primary server. Please note you do not need the -OverwriteConfiguration switch if you are running this for the first time.

Add-AdfsFarmNode -ServiceAccountCredential $creds -PrimaryComputerName primaryserver.domain.local -CertificateThumbprint "DB84EE68879B8xxxxxxxx" -OverwriteConfiguration

Once this has completed, run the following command on the new server; this reconfigures the farm and makes the new server the primary.

Set-AdfsSyncProperties -Role PrimaryComputer

The following command must then be run on all the other ADFS servers, pointing them at the new primary (replace the placeholder with the FQDN of the new primary server)

Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName "newprimaryserver.domain.local"


If you try to launch the ADFS Management console from the new Server 2016 instance it should open as the primary. 


The following command will show only the ADFS 4.0 servers in the farm

Get-AdfsFarmInformation

The CurrentFarmBehavior property shows the Farm Behavior Level at which the ADFS farm is operating. Once the Server 2012 R2 instances are decommissioned and the farm level is raised, this will be set to "3".


Step 4 – Decommission Server 2012 R2 and ADFS 3.0 from the farm

In my environment, the requirement was to upgrade the entire server farm to ADFS 4.0, so before I could do this all of the old ADFS 3.0 instances had to be removed.

The following command has been deprecated in Windows Server 2012 R2.

Remove-AdfsFarmNode

The recommendation from Microsoft is to uninstall ADFS from the server to remove it from an ADFS server farm.


Step 5 – Raise ADFS Farm Behavior Level

It is advisable to run this command first to check you are in a position to upgrade the farm level

Test-AdfsFarmBehaviorLevelRaise

From the primary server run the following command to raise the ADFS server farm level

Invoke-AdfsFarmBehaviorLevelRaise

The Enterprise Key Admins group apparently does not exist until a Server 2016 Domain Controller is introduced to the Active Directory, so it can be ignored for now. I have yet to find any official documentation to support this. 


Wednesday, 1 February 2017

Resetting an Azure Gateway using PowerShell

I was recently troubleshooting an Azure VPN problem for a customer: an S2S VPN that had been working correctly for months would no longer establish with a Cisco ASA 5505 firewall.

The following commands can be used to reset an Azure Gateway. A gateway is basically two instances (RRAS running in Active/Passive, I believe) that are managed by Microsoft. If you run this command, Azure will fail over the active node and reboot it; the passive node will take over. 

$gateway = Get-AzureRmVirtualNetworkGateway -Name "cloud_gw" -ResourceGroupName "rname"

Reset-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gateway
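
If you want to confirm the state of the gateway from PowerShell afterwards (using the same placeholder names as above), something like this should do it:

Get-AzureRmVirtualNetworkGateway -Name "cloud_gw" -ResourceGroupName "rname" | Select-Object Name, ProvisioningState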

Tuesday, 31 January 2017

Connecting Server 2016 RDS Connection Broker to Azure Database fails with "the database specified in the database connection string is not available from the RD Connection Broker..."

When you try to configure a Windows Server 2016 Remote Desktop Services (RDS) Connection Broker to use an Azure PaaS database instance, you get the following error:

"The database specified in the database connection string is not available from the RD Connection Broker server. Ensure that the database server is available on the network, the database exists and is empty (no scheme present), the Database server native client is installed on the RD Connection broker server, and the RD connection broker has write permissions to the database."


The error was caused by the firewall configuration on the Azure side. To alter the settings, click the Firewall tab of the logical SQL Server in Azure, then enable the setting Allow access to Azure services.


In addition to this, click the Add client IP option, which will automatically populate an inbound firewall rule to allow incoming connections from the IP address you are currently accessing the portal from.


Click to Save the changes.


When you try to continue with the RDS installation the wizard will get past the error message. 


Windows Server 2016 RDS Connection Broker HA with Azure PaaS Databases using PowerShell

One of the most welcome features in Windows Server 2016 on the topic of Remote Desktop Services is the ability to store the RD Connection Broker state database in an Azure PaaS database instance. In previous versions of RDS, the only method to achieve high availability for the RD Connection Broker was to implement a shared SQL database using AlwaysOn Availability Groups or a similar HA technique inside SQL Server.



Connect to your Azure ARM account

Add-AzureRmAccount

Define the variables and create a new Resource Group

$resourceGroup = "rds2016"
$resourceGroupLocation = "West Europe"

New-AzureRmResourceGroup -Name $resourceGroup -Location $resourceGroupLocation

Define the variables for the SQL Server

$serverName = "rds2016demo"
$serverVersion = "12.0"
$serverLocation = $resourceGroupLocation
$serverResourceGroupName = $resourceGroup

$serverAdmin = "IT"
$serverAdminPassword = "pshere"
$securePassword = ConvertTo-SecureString -String $serverAdminPassword -AsPlainText -Force
$serverCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $serverAdmin, $securePassword

Create the new logical SQL Server using defined variables

New-AzureRmSqlServer -ResourceGroupName $resourceGroup -ServerName $serverName -Location $serverLocation -ServerVersion $serverVersion -SqlAdministratorCredentials $serverCreds

Define the variables for the SQL database

$DatabaseName = "rdsdeployment"
$DatabaseEdition = "Basic"
$DatabaseServiceLevel = "Basic"

Create the new database using defined variables

$AzureDatabase = New-AzureRmSqlDatabase -DatabaseName $DatabaseName -ServerName $serverName -ResourceGroupName $resourceGroup -Edition $DatabaseEdition -RequestedServiceObjectiveName $DatabaseServiceLevel
$AzureDatabase


I used the portal to check that the resources had been created properly before I started configuring the Remote Desktop Connection Brokers. 
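
The same check can be done from PowerShell if you prefer, using the variables defined above:

Get-AzureRmSqlDatabase -ServerName $serverName -ResourceGroupName $resourceGroup | Select-Object DatabaseName, Status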


Now that the Azure PaaS database has been created, we can configure our RD Connection Brokers to use it as the state database. However, you must first create some firewall rules on the Azure side to allow communication to your cloud SQL instance. Click the Firewall tab, enable Allow access to Azure services, and click Add client IP.


Commit the changes by clicking Save.
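
If you prefer to script these firewall changes, a rough equivalent with the AzureRM cmdlets would be the following; the client IP is an example value you should replace with your own public address:

New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourceGroup -ServerName $serverName -AllowAllAzureIPs
New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourceGroup -ServerName $serverName -FirewallRuleName "ClientIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"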

I have configured my deployment with two multi-role RDS servers, all the roles with the exception of the RD Connection Broker have already been made highly available.


From the Deployment Overview page, right click on the RD Connection Broker and select Configure High Availability.


Select Shared Database Server and click Next.


From your Azure PaaS database click on the Show database connection strings option.


Click the ODBC (Including Node.js) tab and copy the entire connection string. 


You then have to download and install the ODBC Driver 13 for SQL Server; you can grab a copy from https://www.microsoft.com/en-us/download/details.aspx?id=53339

Once this has been done, return to the RDS configuration screen and enter the FQDN of the RDS cluster; I configured DNS Round Robin ahead of time for this deployment. Please note you could also use a hardware application delivery controller; this would be the recommended approach, as DNS RR does not offer any kind of "failover". I explain some of the differences in this blog post.


You must copy the entire connection string, but please remember to change the password field. 
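
For reference, the same configuration can be applied with PowerShell instead of the wizard using the RemoteDesktop module. This is a sketch in which the broker FQDN and client access name are placeholders, and the connection string is built from the example values used in this post, with the password to be substituted:

Set-RDConnectionBrokerHighAvailability -ConnectionBroker "rdsbroker01.domain.local" -ClientAccessName "rds.domain.local" -DatabaseConnectionString "DRIVER={ODBC Driver 13 for SQL Server};SERVER=tcp:rds2016demo.database.windows.net,1433;DATABASE=rdsdeployment;UID=IT;PWD=<password>"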


