
Friday, 15 May 2020

Configuring Azure Firewall with a contiguous public IP Range

Azure Firewall is a managed network security service available to customers in Azure. I'm not going to cover the details of the architecture or the basics of deploying Azure Firewall in this article.

Many enterprise customers are adopting Azure Firewall to help control and manage the traffic flow for their services within Azure and in hybrid locations across their WAN. A common question from customers deploying edge-facing services is around public IP space, and how this differs in the cloud compared to on-premises with a traditional ISP.

Any customer with an active Azure subscription can allocate and assign public addresses to their services from the portal. Many services, such as Azure Virtual Machines, are provisioned with a public address as part of the automated deployment process.

Enterprise customers are usually looking for a little more control. This is where Public IP Prefixes come into the picture. It is possible for a customer to define a CIDR block of public addresses directly in their subscription, to use at their disposal. This is done by creating a new Public IP Prefix, as shown below.

As you will see below, Public IP Prefixes can be provisioned with /31, /30, /29 or /28 CIDR blocks, giving a contiguous range of 2, 4, 8 or 16 public addresses. It is possible to bind one of these Public IP Prefixes to your Azure Firewall to ensure its public address range is contiguous.
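
If you prefer to script this, a Public IP Prefix can also be created with the Azure CLI. This is a minimal sketch; the resource group and prefix names are placeholders of my own:

# create a /28 Public IP Prefix (16 contiguous public addresses)
az network public-ip prefix create \
    --resource-group rg-edge \
    --name fw-prefix \
    --length 28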



Once you have your Public IP Prefix created, you must then use the Add IP Address option from the resource. This creates an actual usable address within the prefix range, which can in turn be associated with Azure Firewall.


When you provision a new address you must give it a name and a resolvable DNS label.
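
The CLI equivalent, reusing the hypothetical names from the sketch above (a Standard SKU address is required for Azure Firewall):

# carve a usable address out of the prefix, with a DNS label
az network public-ip create \
    --resource-group rg-edge \
    --name fw-pip-1 \
    --public-ip-prefix fw-prefix \
    --sku Standard \
    --dns-name fw-pip-1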


Now head over to Azure Firewall, go to the Public IP Configuration section and click Add a Public IP Configuration. This will guide you through binding the new public address to your Azure Firewall. It's worth noting that you cannot provision an Azure Firewall with a Public IP Prefix directly; you must first create the Azure Firewall with its default of one random public address, then retrospectively configure the prefix as we are doing here.
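
The same binding can be scripted; a rough sketch with the Azure CLI (the firewall commands live in the azure-firewall extension, and the firewall and vNet names here are placeholders):

# requires: az extension add --name azure-firewall
az network firewall ip-config create \
    --resource-group rg-edge \
    --firewall-name fw-hub \
    --name fw-ipconfig-2 \
    --public-ip-address fw-pip-1 \
    --vnet-name vnet-hub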


From the Add Public IP Configuration window within Azure Firewall, you will notice from the drop-down that the public addresses you provisioned as part of the prefix block are now available to be bound to the outside of the Azure Firewall.


How to install and configure SSH on Ubuntu Server 19.04

If you elect not to install OpenSSH at the installation stage of Ubuntu Server, you must install and configure it once the server is deployed. The following commands can be used to achieve this:

sudo apt-get install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
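
Once installed, it's worth confirming the daemon is actually listening before you log out of the console. A quick check (the ufw rule is only needed if the firewall is enabled, and the user/host values are placeholders):

sudo systemctl status ssh      # should report "active (running)"
sudo ufw allow OpenSSH         # permit TCP/22 if ufw is enabled
ssh youruser@your-server-ip    # test the connection from another machine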

Wednesday, 30 January 2019

Configure NLB Nodes for WAP (non domain joined)

You might run into some node-level trust issues if you are trying to configure an NLB cluster for the Web Application Proxy role. 


Microsoft's best practice states that any servers running the Web Application Proxy role should reside in a DMZ network and not be domain joined. This brings its own set of issues, as the nodes don't automatically trust each other.

Gone are the days of creating two local administrator accounts with the same password on two non-domain-joined hosts and praying that authentication requests "pass through". We are still going to do this, but a few other steps must be completed for it to work.

If you are configuring an NLB cluster on non-domain-joined nodes, you will probably be faced with "Access Denied" when you attempt to add the second host to the existing cluster, even if you have matching local administrator credentials on both machines. I'm led to believe this is because later versions of Windows inspect the local SIDs of user accounts instead of the username string.

To resolve this, do the following:
  • Create a new DWORD entry named LocalAccountTokenFilterPolicy in the registry of both nodes; this disables the UAC remote restrictions for local accounts. The registry path is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System and, for clarity, the new entry should be a DWORD with a decimal value of 1 (see the command after this list).
  • Configure (from NLB Manager) Options > Credentials on both servers with the local admin account that has been created on each of the servers.
  • Configure the NLB cluster using node IPs and not DNS names (even if you have DNS names configured with the hosts file, I've found IPs seem to work better in a non-domain-joined NLB cluster).
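
For reference, the registry entry from the first bullet can also be created from an elevated command prompt on each node, using the standard reg.exe syntax:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f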

Tuesday, 12 January 2016

Configuring Multi-Site VPN for Azure vNets

It is possible in Microsoft Azure to configure your vNet to terminate multiple IPsec VPN connections, enabling you to create mesh-like connectivity between your on-premises sites and Azure.


In my example I have two on-premises networks, Network_A and Network_B. It is very important, if you have two networks sharing a vNet gateway, that they do not have overlapping address spaces.
            Network_A 10.25.0.0/16 (10.25.0.1 – 10.25.255.254)
            Network_B 10.22.0.0/16 (10.22.0.1 – 10.22.255.254)
The overlap check must also include the Gateway subnet, as this is a network in itself.
One thing to note if you are provisioning your vNet for the first time: for a multi-site VPN to be possible, your Azure gateway must be configured to use Dynamic Routing. If, like me, you are re-configuring an existing gateway, you can delete it and recreate it as a Dynamic Routing gateway. You will need to re-run your VPN script on your firewall afterwards, though, as the new Azure gateway will be assigned a new IP.
Dynamic Routing is also known as route-based in the context of networking. Before you start your configuration, confirm that the VPN devices at each of your sites support route-based VPNs. Surprisingly, devices such as Cisco ASAs and Palo Alto PAN-OS devices are not compatible; they only support policy-based VPNs.

Confirm you want to create a new Azure gateway. It takes anywhere between 10 and 30 minutes to provision a new Azure gateway, so be patient.
Before you export your network configuration to XML, you must first define the additional local networks that are in use at your other on-premises sites.

Enter a descriptive name for the new network; if you forget to do this before you export the network configuration to XML, you will get an XML validation error when you do the re-import.

Define the Address Space being used within the site. It is best to ensure your Address Spaces do not overlap; this is an easy mistake to make if you enter a "catch-all" mask to encapsulate an entire Class A, B or C subnet. A workaround for overlaps, if you do use general network ranges, is to manually enter the exact network ranges used in each of the on-premises networks.

At this point you should have all the on-premises networks defined under the Virtual Network.
The next step is to export the network configuration to XML. This can be done using the GUI.


You then have to edit the XML file to include the additional Local Networks that have been defined within your vNet.
If you export the network configuration before you define the Local Networks, they will not be present as <LocalNetworkSites> and you will receive a general XML validation error.
In my example below, I have two Local Networks, "Network_A" and "Network_B". The <Gateway> section of the XML should be set as shown (the variable part is the local network name).
<Gateway>
  <ConnectionsToLocalNetwork>
    <LocalNetworkSiteRef name="Network_A"><Connection type="IPsec" /></LocalNetworkSiteRef>
    <LocalNetworkSiteRef name="Network_B"><Connection type="IPsec" /></LocalNetworkSiteRef>
  </ConnectionsToLocalNetwork>
</Gateway>
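
For context, the <LocalNetworkSiteRef> entries above point at Local Network definitions that live elsewhere in the same file. A trimmed sketch of what one site looks like in the classic netcfg schema (the VPN gateway address here is a placeholder):

<LocalNetworkSites>
  <LocalNetworkSite name="Network_A">
    <AddressSpace>
      <AddressPrefix>10.25.0.0/16</AddressPrefix>
    </AddressSpace>
    <!-- placeholder public IP of the on-premises VPN device -->
    <VPNGatewayAddress>203.0.113.10</VPNGatewayAddress>
  </LocalNetworkSite>
</LocalNetworkSites>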

Save the XML file and return to the Azure Portal. Use the New wizard and select Network Services, Virtual Network and Import Configuration.

Browse for the edited XML file and click Next. The XML is validated before the import can complete; if Azure detects a syntax error, it will state exactly which line is causing the problem.
Once the import is complete, the next step is to import the configuration script automatically generated by Azure into each of your VPN devices. It is important to read and understand the configuration script, as Azure makes some assumptions.
For example, the default configuration script creates an Internet Key Exchange (IKE) policy numbered 10; if you have already used this number for another VPN tunnel, you could cause a conflict on your firewall, which could cause an outage.

Friday, 8 January 2016

Multi-Site VPNs with Microsoft Azure and Hardware VPN Concentrators (Cisco ASAs)

Although it’s possible to terminate multiple VPNs (multi-site) into a single Azure vNet, there are some limitations around the VPN hardware you use to do this. Microsoft publishes a document outlining all of the VPN devices supported to work with Azure.


If your project requires multi-site VPN, the important column to review is Route-based: in short, if you want to terminate multiple VPNs into a single vNet, your VPN device must support route-based VPNs.

This is somewhat confusing, as Route-based also means Dynamic Routing:

·         Static Routing = Policy-based

·         Dynamic Routing = Route-based

The difference between the two: a policy-based VPN encapsulates and encrypts traffic and then forwards it out of a specific interface according to an access control list, whereas a route-based VPN forms a dedicated tunnel with a neighbouring VPN device and forwards all of the traffic across that tunnel.

The Microsoft documentation for creating a multi-site VPN states that the Azure vNet gateway must be created as a Dynamic Routing gateway, or in other words a Route-based gateway.


This is to allow multiple VPN connections to be terminated into the vNet. If you check the supported-devices list above, the Cisco ASA, for example, does not support route-based VPNs, so this will not work. In reality, if your vNet is configured to use Dynamic Routing and you try to connect it to a Cisco ASA, it simply does not work.

In short, this means that if you have two sites with Cisco ASAs you cannot create a multi-site VPN to Azure. Only a small amount of hardware is supported by Microsoft for such a topology.

Unless you are running one of the following, you basically can’t do multi-site VPN to Azure (these are the only supported devices for multi-site VPN):

·         Checkpoint Security Gateway

·         Cisco ISR, ASR

·         Dell SonicWALL

·         Fortinet

·         Juniper SRX, J-Series, ISG, SSG

·         Windows RRAS

The “workaround” that doesn’t work

There are a number of forums on the Internet describing this exact problem with ASAs, Palo Altos and the like, and some people claim the ingenious workaround is to:

·         Create two separate vNets (one for each site you want a VPN from)

·         Create a VPN from each of the sites, terminating into its own vNet

·         Create a vNet-to-vNet VPN between the two separate vNets

I must admit, before I did some research on this, I thought this could be an option; but again, if you review the Azure documentation you will notice that for any vNet-to-vNet VPN you must also create your gateway using Dynamic Routing.
 
 
 
The bottom line
 
In summary, as of January 2016, if you are running a Brocade, Citrix, Palo Alto, WatchGuard, F5, Barracuda or Cisco ASA firewall, you cannot create a multi-site VPN to an Azure vNet.

Wednesday, 29 April 2015

Citrix NetScaler High Availability Configuration "Remote Node x.x.x.x PARTIAL_FAIL" & "Remote Node x.x.x.x COMPLETE_FAIL" HA Relationship Fails to Form

You have two identical NetScaler MPX 5560s, both with management interfaces (NSIPs) on the 0/1 interface of each appliance, configured on VLAN 8. When you try to configure High Availability between the two appliances, you receive "Node State - Not Up" on the secondary appliance.


Firstly, I clicked on the Dashboard to see if the System Log held any clues as to why the HA relationship was failing to form correctly; the System Log was stamped with "remote node x.x.x.x PARTIAL_FAIL" and "remote node x.x.x.x COMPLETE_FAIL".


The next step was to take a closer look at the /var/nslog/newnslog log file which can be accessed through the Web GUI from Configuration>Diagnostics>View Events.


Not much more information was listed in this log, but my attention soon drifted towards the number of interfaces that were reporting "No heartbeats since HA startup".


More of the same referencing the "PARTIAL_FAIL" error message that was present in the System Log.


The first thing I tried was to disable all of the NetScaler's interfaces, with the exception of the 0/1 interface, which carried the NSIP address on each appliance. (You will receive an error if you try to disable the Loopback interface.)
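
If you prefer the CLI, the same thing can be done over SSH; a sketch, with interface IDs that will obviously vary per appliance:

disable interface 1/1
disable interface 1/2
disable interface 1/3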


Once all of the interfaces were in the "disabled" state, I ran the High Availability wizard again and this time it worked, so the issue was obviously around a setting on the enabled interfaces.


For good practice I only re-enabled the two additional interfaces that were actually in use on the appliances, which were 1/1 and 1/2.


I left most of the settings at their defaults, with the exception of "HA Monitoring", which I changed to the OFF state. This appeared to fix the original issue.
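
The CLI equivalent, assuming the same two interfaces:

enable interface 1/1
enable interface 1/2
set interface 1/1 -haMonitor OFF
set interface 1/2 -haMonitor OFF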


The next stage was to re-configure the NetScaler High Availability pair; to do this, click System > High Availability. In my deployment I want the first appliance to always remain the Primary node unless a failure occurs, so I highlighted the appliance and clicked Edit.


From the Configure HA Node menu, the High Availability Status can be configured for each of the NetScaler nodes; I set the first appliance to STAY PRIMARY.


And the second appliance to STAY SECONDARY.


Once I had completed all of the above configuration, I saved the NetScaler running config. I then ran through the HA pairing wizard again, which completed successfully.
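
For reference, the node-state changes and the save can also be done from the NetScaler CLI; run the first command on the intended primary and the second on the intended secondary:

set ha node -haStatus STAYPRIMARY
set ha node -haStatus STAYSECONDARY
save ns config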


Thursday, 19 March 2015

Azure Networks: Configuring Address Spaces, Subnets and Azure VMs with Azure Networks

Azure Networks are network services that can be designed and configured to support complex network infrastructure, supporting both on-premises and cloud-based workloads. I have found certain Azure Virtual Network configurations easier to do from the new Azure Portal, which is still in Beta/Preview mode; you can use it to configure your subscription at http://portal.azure.com
Designing Azure Networks is much like designing and provisioning traditional networks. At a high level, the following steps are required to define Azure Networks:
·       Create Virtual Network (which is a Network Service)
·       Assign Address Space and Subnets to Virtual Network
·       Configure Additional Subnets for the Virtual Network
All the traditional principles of TCP/IP address design apply when designing Azure Virtual Networks. An Address Space is usually defined as a CIDR block from the RFC 1918 private ranges, which are:
·       10.0.0.0/8
·       172.16.0.0/12
·       192.168.0.0/16
You can then define Subnets within that Address Space to separate network traffic, as you would in a traditional network. An example of how I have configured this Azure Virtual Network is:
·       192.168.1.0/24 (Servers)
·       192.168.2.0/24 (DMZ)
·       192.168.3.0/24 (On-Prem)
The subnet mask (written in CIDR notation above) can be altered to make your subnets bigger or smaller. You should design your networks out on paper if you are going to change the subnet masks and move to a classless network design, to ensure your logical networks stay contiguous.
From the Azure Portal, click New, then Networking, then Virtual Network. You must populate the Name field with something unique. Click Address Space, and then input your Address Space in the Address Space column. At this point in the Azure Virtual Network creation you only define a single subnet; in my example below I have called the Subnet "Servers" and it uses the subnet 192.168.1.0/24. Click OK to create the Virtual Network.
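
As an aside, the same Virtual Network can be stood up with today's Azure CLI in one command; a sketch using my example values, assuming an address space of 192.168.0.0/16 (which covers the three subnets above) and a placeholder resource group:

az network vnet create \
    --resource-group rg-test \
    --name vNetwork-Test \
    --address-prefixes 192.168.0.0/16 \
    --subnet-name Servers \
    --subnet-prefixes 192.168.1.0/24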


Return to the Azure Portal home screen, and the newly created Azure Virtual Network will appear as a tile; click on it (mine is called "vNetwork-Test").

To add additional subnets to the Azure Virtual Network click Subnets.

Click the Add Subnet button.

Populate the Name field; this is a unique and descriptive name for the Subnet. Also set the Address Space you want this subnet to reside in (this example Azure Virtual Network only has one), and enter a CIDR block. I used 192.168.1.0/24 for my Servers subnet, so 192.168.2.0/24 is the next available logical piece of network space. My recommendation is to design these so that overlaps are avoided and classless networking is kept to a minimum.
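
The CLI equivalent for adding the DMZ subnet, continuing the sketch above:

az network vnet subnet create \
    --resource-group rg-test \
    --vnet-name vNetwork-Test \
    --name DMZ \
    --address-prefixes 192.168.2.0/24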

You can then add any of the subnets you require for your Azure subscription.

The next part is creating a new Azure Virtual Machine and adding it to the correct Virtual Network and Subnet. This is something I believe you cannot do from the current Management Portal (please correct me if you can). From the Azure Portal, click New, Compute, then select the Operating System, name the VM, and give it a username and password. Click on Optional Configuration.

From Optional Configuration click Network, then select your Azure Virtual Network and the Subnet you want the VM to be in, and click OK. It's worth mentioning that once a VM has been configured inside a Virtual Network it cannot be moved to another network; the only way to do it is to delete the Azure VM and retain the disks. You can then create a new VM from the old disks and connect it to a different network. I would imagine this will change at some point as Azure matures.

From the same Network pane you can also configure the VM with a static TCP/IP address within the same logical subnet that you are connecting the VM to.

An Azure Network also has DNS settings; if you do not configure it to use a DNS server, it will automatically point to one of the Azure-hosted DNS servers. You can of course configure this to be your own DNS server(s) or even a global DNS provider.

Tuesday, 10 February 2015

Configuring Citrix NetScaler v10.5 VPX High Availability to Load Balance HTTP Traffic

Citrix NetScaler is an Application Delivery Controller (ADC) by Citrix Systems. NetScaler is a widely deployed appliance that is available in three forms: the MPX (physical appliance), the VPX (virtual appliance) and the SDX, a physical appliance running XenServer that can host multiple virtual instances of NetScaler. I am using NetScaler to load balance ordinary HTTP traffic between two Windows Server 2008 R2 servers with the IIS 7.5 role installed.
The topology being adopted is the “two-armed mode, multi-subnet” model, as shown below; this is a Citrix-recommended design when deploying NetScaler.
You can download a trial of the Citrix NetScaler 10.5 VPX from Citrix. It is available for XenServer, Hyper-V and VMware vSphere. In this example I am using vSphere; the vSphere version of the VPX comes as an OVF file that should be imported into vSphere. This can be done from the local machine you are running the vSphere Console from, so there is no need to upload the OVF to a vSphere datastore.
In Citrix NetScaler there is a significant difference between Clustering and High Availability; for one, Clustering requires a special "clustering" license, whereas traditional High Availability is included in all of the NetScaler editions.
In my example I am configuring two NetScaler VPXs in an HA pair. The following facts should be noted about HA and Citrix NetScaler:
·       Setup in Pairs (max 2 nodes)
·       Primary Node owns the VIP, SNIP (only one per pair)
·       Heartbeat every 200ms over UDP/3003 (3 second threshold for failover to initiate)
·       Node sync uses TCP 3010 and 3008; file sync uses TCP 22
·       Configuration changes made on the primary are replicated over TCP 3011 and 3009
As this is only a test environment I have created two new vSphere Standard Switches, with no adapter uplinks connected. The External vSwitch represents a DMZ, and the Internal my local area network.

My TCP/IP configuration(s) are as follows;
  • RB_Test_Internal (LAN Subnet) – 192.168.0.0/24
  • RB_Test_External (DMZ Subnet) – 172.16.0.0/24
  • NS01 (NSIP) is 192.168.0.20/24
  • NS02 (NSIP) is 192.168.0.21/24
  • HA Pair (SNIP) is 192.168.0.23/24
  • Web Server 1 is 192.168.0.50/24
  • Web Server 2 is 192.168.0.51/24
  • NS HA Pair VIP is 172.16.0.100/24
If you have reviewed the Citrix eDocs on NetScaler, the physical topology and logical subnet configuration I am using in this example is referred to as a “two-armed, multi-subnet” deployment.
In a production environment you would probably have several dedicated uplinks from each of these vSwitches to provide connectivity to the physical networks. These uplinks would be either access ports or trunk ports, depending on whether you are doing EST or VGT VLAN tagging.
Once the OVF appliance is imported, open a console connection to the VPX to set the initial configuration. The address set at this stage is the one that will be used to manage the NetScaler VPX from your web browser.

Once the initial management IP is set, you can use a browser to connect to the NetScaler. I would suggest using Google Chrome, as it seems to have the fewest issues with Java when you are making administration changes.

When you log in, the first screen you are presented with has four options; the NetScaler IP (NSIP) should already be configured, with a green tick indicating this.

The next part to configure is the Subnet IP Address (SNIP); this is the address used to communicate with the servers on the back end. Click on the Subnet IP Address option to begin configuring it.

The SNIP address should be on the same subnet and VLAN as the internal servers you are trying to load balance. The wizard also provides a simplified breakdown of how the SNIP is used to communicate with the back-end servers.

Step 3 is to configure a hostname for the device along with a DNS server; call this whatever you want and point it to your local DNS server, which will typically be a Domain Controller. You should also remember to manually create an (A) record for the NetScaler pointing to the correct IP in your DNS Forward Lookup Zones. This is easily forgotten, as Microsoft devices use Dynamic DNS to do this automatically.

You will be prompted to restart the VPX appliance once you click Done. Step 4 is where you configure the license for the VPX appliance; you can get a 90-day trial from Citrix that should be ample for testing. The following blog post http://blog.ryanbetts.co.uk/2014/09/downloading-licensing-and-basic.html covers licensing the NetScaler VPX in detail.

Once the reboot has completed you should be able to log back into the VPX, and you will be taken to the Configuration window. To ensure your license file has been imported correctly, click on Licenses; the trial license should allow Load Balancing, Content Switching and SSL Offloading.

The next step is to configure High Availability between the two NetScaler VPXs. To do this click System, High Availability; from there you should see the first node in the UP state. Click the Add button.

You should now enter the NSIP of the secondary node into the Node IP field. The username and password to log in to the NetScaler should be the same on both devices; I have left these at their default of nsroot/nsroot.

When you click Create the Netscaler will prompt you to restart the running configuration and reboot the device.

Once the restart has completed, you should see both nodes under the High Availability section. As heartbeats should be flowing between the devices, the first NetScaler VPX should still be operating as the Primary.
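
If you prefer the CLI, the pairing is a single command on each appliance. A sketch using my example NSIPs; this is what you would run on NS01, pointing at NS02's NSIP (on NS02 you would run the mirror image, pointing at 192.168.0.20):

add ha node 1 192.168.0.21
save ns config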

The Actions menu can be used to show details, Force Synchronization and Force Failover between the two devices. 

The next step is to define the Services (the servers that you want to load balance between). To do this expand Traffic Management, then Load Balancing, and click on Services. Click Add to launch the wizard.


Configure the settings in line with your environment; I have two web servers (192.168.0.50 & .51) inside the local area network. You must create a Service for each of these servers.
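
For reference, the same two Services collapse to two NetScaler CLI commands (the service names here are arbitrary ones of my own):

add service svc-iis1 192.168.0.50 HTTP 80
add service svc-iis2 192.168.0.51 HTTP 80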

The servers are still offline for me at the moment, therefore they appear as DOWN. This will change automatically once the NetScaler can communicate with them from the SNIP over ICMP.


Once I brought the servers online and there was connectivity between the NetScaler and the web servers, the State changed to UP and the lights went green. This would be a good time to save the running configuration.

Also from the Load Balancing menu, click on Virtual Servers. A Virtual Server is the NetScaler entity that external clients use to access applications hosted on the servers; it is represented by a hostname, Virtual IP (VIP), port and protocol. Click Add to begin creating a new Virtual Server.

The name of the NetScaler Virtual Server is only locally relevant, so it does not make much difference what you call it. I have configured my Virtual Server with the IP address 172.16.0.100, which is in the subnet in use on the DMZ side of the network. The NetScaler VPXs have two NICs, one on each side of the two networks, LAN and DMZ.

Once you click OK, you will be prompted to enable the "LB" feature; click Yes to this.


After this completes you will see, under Services and Service Groups, "No Virtual Server Service Bindings"; click on the arrow to begin configuring this.

This is where we bind the Services (the servers to be load balanced) to the NetScaler Virtual Server; click the plus button to open the console.

Select both of the Services that you created in the previous steps; in my example I have named both of my web server Services "iisx" (where x is the server number). Click OK once this has been done.


Click Bind.


Click Done.
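
For completeness, the Virtual Server creation and the bindings from the last few screens can be done in three NetScaler CLI commands; a sketch reusing my example VIP and the hypothetical service names from earlier:

add lb vserver vs-web HTTP 172.16.0.100 80
bind lb vserver vs-web svc-iis1
bind lb vserver vs-web svc-iis2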


You must now click on the Method button under the Advanced menu; this will expand the configuration screen and allow you to choose a load balancing method. NetScaler supports a number of different load balancing algorithms, the most common ones being:
  • LEASTCONNECTION (Which service currently has the fewest client connections. This is the default load balancing algorithm.)
  • ROUNDROBIN (Which service is at the top of a list of services. After that service is selected for a connection, it moves to the bottom of the list.)
  • LEASTRESPONSETIME (Which load balanced server currently has the quickest response time.)
  • URLHASH (A hash of the destination URL.)
A full list of the supported algorithms can be found at the following Citrix eDocs article http://support.citrix.com/proddocs/topic/netscaler-load-balancing-93/ns-lb-customizing-lbalgorithms-wrapper-con.html


I am going to configure LEASTCONNECTION at this stage; once done, click OK. You should review the eDocs page to determine which algorithm suits your needs best.
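
The CLI equivalent for the same change, using the hypothetical vserver name from earlier:

set lb vserver vs-web -lbMethod LEASTCONNECTION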


The Virtual Server still appears as “DOWN”; it will come online when the configuration is applied and saved. Click Done.

Once a refresh has occurred, click the Save icon.

Click Yes to confirm.

Now if you browse to the external VIP address, you should be connected to one of the web servers; I changed the default IIS landing page to ensure it was working correctly.