Saturday 30 May 2015

Integrating Visual Studio 2013 with Microsoft Azure Websites

There are a number of ways to interface with Azure Websites, including WebMatrix, PowerShell, the X-Plat CLI and even Visual Studio. This is handy as Visual Studio can be directly integrated with your Azure subscription, so that you can publish your web applications and sites seamlessly from the Visual Studio IDE.
You must first install the Microsoft Azure Tools for Visual Studio. Please note that you also need to have the Web Developer Tools feature installed in Visual Studio.

Just to test this integration I simply created a new application from one of the templates in the wizard. Click Tools and then Connect to Microsoft Azure Subscription.

You will then be prompted for your Azure subscription details; enter them and you should be able to expand Azure in the Server Explorer. If you have already created an Azure Website, expand Websites, right click on the site and select Download Publish Profile.

From the Solution Explorer window you can right click on your project and select Publish. If you have not already built the site, this option will be grayed out.

Click Import.

Browse to the Publish Settings file you downloaded in the earlier step.

Review the settings and click Publish.
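Once the publish profile has been imported, the same deployment can also be scripted with MSBuild from a PowerShell prompt that has msbuild on the path, which is handy for repeat deployments. This is only a rough sketch; the project file, profile name and password below are placeholders for the values in your own publish profile.

    # Hypothetical project file and publish profile name - substitute your own
    msbuild .\MyWebApp.csproj /p:DeployOnBuild=true /p:PublishProfile=mysite `
        /p:Password="<deployment password from the .PublishSettings file>" /p:VisualStudioVersion=12.0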


Microsoft Azure Websites Part 1: Overview of Azure Websites

Azure Websites
Azure Websites are fully managed containers for running web applications and sites. Azure Websites are not Azure-hosted Windows Servers with the Internet Information Services (IIS) role installed; Azure Websites are Platform-as-a-Service (PaaS), so Microsoft manages all of the underlying infrastructure that serves your websites and applications.
Azure Websites can be created using the Azure PowerShell SDK, the Azure Management Portal or the X-Plat CLI tools. When you create a new Azure Website you must specify the following (a short PowerShell sketch follows the list below):
  • A unique URL for the website/application ending in .azurewebsites.net
  • An App Service plan
  • A pricing tier
  • A resource group
  • A subscription to link it to
  • A location for the website to reside
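For example, a new site can be created with a couple of lines of Azure PowerShell. This is just a minimal sketch; the site name mytestsite01 and the West Europe location are hypothetical values, and the portal wizard shown in Figure 1 exposes more of the options listed above.

    # Authenticate the PowerShell session against your subscription
    Add-AzureAccount
    # Create the site - the name becomes the <name>.azurewebsites.net URL
    New-AzureWebsite -Name "mytestsite01" -Location "West Europe"
    # Confirm the site exists
    Get-AzureWebsite -Name "mytestsite01"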
Figure 1: Summary of new Azure Website
Azure Website Pricing and Resource Tiers
Azure Websites are available in four different tiers, each of which offers a different amount of resources and consumable limits to the application or website being hosted.
The tiers are Free, Shared, Basic and Standard. Microsoft generously offer an entirely free Azure Website tier which can be used to host simple websites and applications, although this has some limitations, such as a maximum daily outbound bandwidth limit of 165 MB. As the tiers increase so does the price, but with this your Azure Websites are given more resources on the underlying Azure infrastructure.
The Free and Shared tiers both run on shared VM instances under the covers, whereas the Basic and Standard tiers run on dedicated instances for the applications or websites being hosted. It should be noted that tenants sharing a VM instance to host their web applications are isolated from one another and do not interfere with each other.
Furthermore, the Free and Shared tiers only come in a single instance size, whereas the Basic and Standard tiers come in three instance sizes each (B1, B2 and B3, and S1, S2 and S3), with the resources available increasing at each size.
Figure 2: Overview of the Resources Available for S1 and B1
Enhancements such as built-in load balancing are also included with the Basic and Standard tiers, whereas this is not available in the Free or Shared tiers.

Figure 3: A Comparison between Basic and Standard Tier Instance Types
Data transfer pricing for Azure is fairly standard across the board; inbound data transfers are not chargeable, but egress traffic from the Azure data centers is charged at a per GB/TB rate, with the first 5 GB outbound free across all zones.
You can read more on this at the following page, as it's likely to change: http://azure.microsoft.com/en-gb/pricing/details/data-transfers/
Azure Website Quotas, Limits and Constraints
Most of the services that are available in Azure have two limitations: a default limit and an ultimate (or hard) limit. The default limit is the soft limit which Microsoft has set on a service; this governs the amount of resources that are available to a given "stock" service. This default limit can normally be increased if you approach Microsoft with a valid reason and use case as to why you want it increased. The ultimate limit, on the other hand, is the maximum limit that is available to a given resource; Microsoft cannot expand this limit.

Friday 29 May 2015

VMware VCP-NV Network Virtualization (NSX) Exam Experience with some Tips!

I hope this post is not in breach of the VMware exam NDA; having read it over a few times I don't think it is. If you disagree please let me know.

VMware NSX has been on my radar to get into the lab and play with since about September last year. After getting some visibility of my potential next few projects with SystemsUp (and VMware PSO), I thought I had better get it up and running ASAP.

What better way to prove you have understood an entirely new product than taking the certification exam? So I decided to have a go at the VCP-NV credential.

This post outlines some of my thoughts around the exam and, perhaps most importantly for most people, what I used to pass it.

  1. At least Cisco CCNA Routing and Switching level knowledge is required (in my opinion)
  2. Understand Layer 2 - MAC addresses, VLANs, broadcast domains etc.
  3. Understand Layer 3 - routing, subnet masks, OSPF
  4. Understand the leaf/spine architecture and how it's better than the traditional 3-tier design
  5. Understand link-state routing protocols such as OSPF and IS-IS
  6. Understand what VXLAN is and how it's used
  7. Understand VPNs and basic cryptography (Diffie-Hellman, SHA etc.)
You will find this exam very difficult if you do not spin NSX up in a lab, or at least do the free VMware Hands on Labs. I did a mixture of both.

I used the following snippets of training material to study for the VCP-NV exam.
  1. VMware NSX Documentation (https://www.vmware.com/support/pubs/nsx_pubs.html)
  2. VMware NSX for vSphere Network Virtualization Guide (Google it)
  3. VMware NSX for vSphere on Cisco UCS with Nexus 7000 Design Guide (Google it)
  4. VMware NSX Install, Configure and Manage Courseware
  5. Pluralsight VMware NSX Courses (Intro & Adv)
  6. CBT Nuggets "ICND1" OSPF Videos
  7. CBT Nuggets "CISSP" Cryptography Videos
  8. INE VXLAN Videos (under the CCIE Data Center Courseware)
  9. VMware Hands on Labs
If you do want to get it up and working in its entirety in your lab, you will need some powerful hardware and probably some decent Cisco switches to get VXLAN etc all working. I only got so far in my small lab and took to using the VMware Hands on Labs.

Wednesday 20 May 2015

Active Directory Certificate Services: Extending the CRL Validity Period - The Revocation Function was Unable to Check Revocation Because the Revocation Server was Offline 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE)

This is my second post on a very similar error; a workaround to the same problem can be found here: http://blog.ryanbetts.co.uk/2015/02/ad-cs-revocation-function-was-unable-to.html, although the solution in this post properly fixes the problem.

I have come across this issue on a number of occasions and it is down to the CRL installed on the Online Issuing CA having expired. In the environment where I hit this today, there is an Offline Root CA and an Online Issuing CA; the Offline CA issues the CRL that the Online CA relies on. By default AD CS sets the CRL validity period to 1 week, which in most places is not ideal, as an Administrator has to manually copy the new CRL between the Offline and Online CAs once a week.
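If you want to confirm what the CA is currently configured to do before changing anything, certutil can read the relevant values from an elevated prompt on the CA. This is just a read-only check and nothing is modified.

    # Show the currently configured CRL publication interval
    certutil -getreg CA\CRLPeriodUnits
    certutil -getreg CA\CRLPeriod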
Your Online CA is in a stopped state, and when you try to manually start the AD CS service you are faced with "The Revocation Function was Unable to Check Revocation Because the Revocation Server was Offline 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE)".

This is because the CRL used to confirm that your Root Certificate has not been revoked has expired; this CRL is issued by the Offline CA. If you open the CRL file itself you will notice it has an Effective Date and a Next Update date. The image below was actually valid on the day I published this post, but if you get the "The Revocation Function was Unable to Check Revocation Because the Revocation Server was Offline 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE)" error, the chances are the date in the Next Update field has already passed.
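You can also read those fields from the command line rather than opening the file in the GUI. A quick sketch, assuming the CRL has been copied to C:\Temp (a hypothetical path and file name):

    # Dump the CRL and check the ThisUpdate (Effective Date) and NextUpdate fields
    certutil -dump "C:\Temp\RootCA.crl"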

On the Issuing CA there are actually two CRLs. The first is the CRL for the Root Certificate, which will only need to be regenerated if the Root CA is compromised or the CRL expires. The second CRL is the one managed and automatically updated by the Online CA; this holds the list of certificates revoked by the Online CA. You do not need to alter the second one unless you want to.
On the Offline CA, open the Certification Authority console, right click Revoked Certificates and select Properties.

As my Root Certificate is valid for two years I have changed the CRL Publication Interval to 2 Years.

Right click on Revoked Certificates, select All Tasks, then Publish.

Click New CRL and then OK.
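For reference, the same interval change and publish can be scripted with certutil on the Offline CA. This sketch assumes a two-year interval, so adjust the values to suit your own Root Certificate lifetime.

    # Set the CRL publication interval to 2 years
    certutil -setreg CA\CRLPeriodUnits 2
    certutil -setreg CA\CRLPeriod "Years"
    # Restart the CA service so the new values take effect
    Restart-Service certsvc
    # Publish a new CRL with the extended Next Update date
    certutil -crl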

Now if you open the CRL file from the Offline CA you will see the Next Update is two years from the date of issue.

Now simply copy and replace the CRL on the Issuing CA and AD CS should start without issue.
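If you would rather script that last step, something along these lines works; the source path and CRL file name are placeholders for your own environment, and the -dspublish line is only needed if your CDP extension includes an LDAP location.

    # Copy the new Root CA CRL into the Issuing CA's CertEnroll folder (hypothetical names)
    Copy-Item "C:\Temp\RootCA.crl" "C:\Windows\System32\CertSrv\CertEnroll\RootCA.crl" -Force
    # Optionally publish the CRL to Active Directory as well
    certutil -dspublish -f "C:\Windows\System32\CertSrv\CertEnroll\RootCA.crl"
    # Start the AD CS service on the Issuing CA
    Start-Service certsvc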

Thursday 14 May 2015

VMware NSX Manager Integration with vSphere vCenter 6.0 "NSX Management Service operation failed. (Initialization of STS Clients failed. Root Cause: The SSL certificate of STS service cannot be verified)"

When you try to configure the integration between the VMware NSX Manager and the vCenter Lookup Service you get the following error: Initialization of STS Clients failed. Root Cause: The SSL certificate of STS service cannot be verified.

This is more of a workaround than anything else. If you go back a step and return to the pane where you configure the Lookup Service, change the port to 443, click OK and accept the certificate warning, it then works correctly.

I always thought that the VMware Lookup Service operated over port 7444, not the typical HTTPS port of 443. The following VMware article supports this up to vSphere version 5.5, although it does not seem to have been updated for vSphere 6. It would appear the port for the Lookup Service is now 443.

Using port 443, the NSX Manager integrates with the Lookup Service without a problem.
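If you want to sanity check which port the Lookup Service is actually answering on, a quick test from PowerShell will do it. The vCenter FQDN below is a hypothetical value, and this only proves the port is listening rather than validating the service itself.

    # Hypothetical vCenter FQDN - substitute your own
    Test-NetConnection -ComputerName "vcenter.lab.local" -Port 443
    Test-NetConnection -ComputerName "vcenter.lab.local" -Port 7444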