Wednesday 29 April 2015

Citrix NetScaler High Availability Configuration "Remote Node x.x.x.x PARTIAL_FAIL" & "Remote Node x.x.x.x COMPLETE_FAIL" HA Relationship Fails to Form

You have two identical NetScaler MPX 5560s, each with a management interface (NSIP) on the 0/1 interface, configured on VLAN 8. When you try to configure High Availability between the two appliances, you receive "Node State - Not Up" on the secondary appliance.

Firstly I clicked on the Dashboard to see from the System Log if there were any clues as to why the HA relationship was failing to form correctly. The System Log was stamped with "remote node x.x.x.x PARTIAL_FAIL" and "remote node x.x.x.x COMPLETE_FAIL".

The next step was to take a closer look at the /var/nslog/newnslog log file which can be accessed through the Web GUI from Configuration>Diagnostics>View Events.

Not much more information was listed in this log but my attention soon drifted towards the number of interfaces that were reporting "No heartbeats since HA startup".

More of the same referencing the "PARTIAL_FAIL" error message that was present in the System Log.

The first thing I tried was to disable all of the NetScaler's interfaces, with the exception of the 0/1 interface carrying the NSIP address on each appliance. You will receive an error if you try to disable the Loopback interface.

Once all of the interfaces were in the "disabled" state, I ran the High Availability wizard again and this time it worked. It was obvious the issue was caused by a setting on the enabled interfaces.

For good practice I then re-enabled only the two additional interfaces that were actually in use on the appliances, which were 1/1 and 1/2.

I left most of the settings at defaults with the exception of "HA Monitoring", which I changed to the OFF state. This appeared to fix the original issue.
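For reference, the same interface changes can be made from the NetScaler CLI; the interface IDs below are the ones from my appliances and will differ in your environment.

> disable interface 1/3
> set interface 1/1 -haMonitor OFF
> show interface 1/1

The show command lets you confirm that HA MON is reported as OFF for the interface before re-running the wizard.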

The next stage was to re-configure the NetScaler High Availability pair; to do this click System>High Availability. In my deployment I want the first appliance to always remain the Primary node unless a failure occurs, so I highlighted the appliance and clicked Edit.

From the Configure HA Node menu, the High Availability Status can be configured for each of the NetScaler nodes. I set the first appliance to STAY PRIMARY.

And the second appliance to STAY SECONDARY.

Once I had completed all of the above configuration I saved the NetScaler running config. I then ran through the HA pairing wizard again, which completed successfully.
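For anyone who prefers the CLI, the equivalent HA configuration looks like the following; the node IP is only an example, substitute the NSIP of the peer appliance.

> add HA node 1 10.0.8.2 (example address - the NSIP of the peer appliance)
> set HA node -haStatus STAYPRIMARY (run on the first appliance)
> set HA node -haStatus STAYSECONDARY (run on the second appliance)
> save ns config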

Citrix NetScaler Firmware Upgrade - 10.1 Build 127.11 to 10.5 Build 56.15 with the Web GUI

Upgrading the firmware of a Citrix NetScaler is a very easy thing to do; I have documented the steps for anyone who has never done it before, or has limited exposure to the NetScaler platform. Please do not contact me asking for the NetScaler firmware, as I cannot provide this for you. Log in to your Citrix account for the latest firmware.

The easiest way to do it is to download the new firmware to your local device; the wizard then allows you to upload it to the appliance. Alternatively, the firmware can be sourced from the flash of the device.

Open the Web GUI, and login. From the Configuration screen, click Upgrade Wizard...

Click Next, on the Introduction screen.

Select Local Computer, and click Browse, then search for the firmware file that you have downloaded from Citrix.

If you have a license file currently configured on your appliance it should show here. I am actually not sure whether the wizard lets you upgrade the firmware if you don't have a license file imported.

I prefer to disable the Automatically move file to create space in flash option, as this will basically delete the old firmware. For safety I would rather have that firmware image available in case the upgrade does not work for some reason, or it needs to be rolled back. Select Reboot after successful installation and click Next.

Click Finish.

The wizard will upload the new firmware image from your local device to the NetScaler appliance.
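As a side note, the same upgrade can be performed manually from the appliance shell if you prefer not to use the wizard; the build file name and directory below are examples, and will match whatever you downloaded from Citrix.

> shell
root@ns# mkdir /var/nsinstall/10.5build56.15
root@ns# cd /var/nsinstall/10.5build56.15
root@ns# tar -zxvf /var/tmp/build-10.5-56.15_nc.tgz
root@ns# ./installns

The installns script will prompt you to reboot when it completes.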

You will be prompted whether you want to enable Citrix Call Home, which allows the appliance to report issues directly to Citrix; it is entirely up to you whether you enable this.

Once the device (or vAppliance) reboots, the Web GUI interface changes to darker colours.

You should note that if you are going to configure services such as High Availability, both NetScalers have to be exactly the same. This includes the platform, so you cannot mix VPXs with MPXs for example; the firmware and the network interfaces used also have to match on neighboring devices.

Tuesday 28 April 2015

Citrix NetScaler 10.1 MPX, throws the following error when you try to run certain operations from the Web GUI "Cannot load Applet, Java Applet could not be loaded Details Possible reasons: JRE(Java Runtime Environment) not installed. JRE is installed but not running."

Disclaimer: Many people have blogged about this issue with NetScaler and Java Runtime, the reason I have done it again is because although there are blogs documenting similar fixes, none of these fixes resolved the problem for me alone. Therefore I have consolidated a single post, documenting all of the steps it took me to fix the problem in my environment. I have referenced the articles I used below to give the original contributors the credit. 

The NetScaler throws this error when you try to run certain operations from the Web GUI. I came across the issue trying to upgrade the firmware and enable HA on two physical appliances.

In this environment I have the latest Java Runtime installed (April 2015) which is version 8 update 45. Open Control Panel, and then open the Java control panel. Click on the General Tab, and ensure the Keep Temporary Files on my Computer is disabled.

Then click Security, then Edit Site List..., and add an entry for your problem NetScaler device here.
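The site list entries need to include the protocol; for a NetScaler with an NSIP of 192.168.8.10 (an example address) you would add both of the following, depending on whether you access the Web GUI over HTTP or HTTPS.

https://192.168.8.10
http://192.168.8.10

Java will warn you when adding a plain http entry, but it will accept it.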

Click on the Advanced tab, set Mixed Code (sandboxed vs. trusted) security verification to Disable verification (not recommended), and set Perform signed code certificate revocation checks on to Do not check (not recommended).


VMware vSphere 6.0 Console "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period" & "Unable to connect to the MKS: Internal error"

When you try to console to a server on your vSphere 6 platform using the traditional client, it throws the following error "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period" and you cannot make a console session. 

The first place I checked for details was the VMware Client log files, which can be found at %temp% and then the vmware-%username% folder on the computer you are getting the pipe error on. I opened the log file with the latest date stamp and noticed the following line immediately "HostDeviceInfo: Failed to enumerate host parallel ports via the registry. Could not open device map parallel port registry key."

My first thought was to remove any devices that were not required by the VM, so I down powered the VM and edited the settings. I happened to be getting the error on my vCenter Server, therefore I had to open a new vSphere Client session to the host running the VM. My first step was to remove the CD/DVD drive, which had an ISO attached to it.

From there I then started the VM again, and tried to open a Console session and the error "Unable to connect to the MKS: Could not connect to pipe \\.\pipe\vmware-authdpipe within retry period" had disappeared.

I had another similar issue, but the error message was "Unable to connect to the MKS: Internal error". The problem here was DNS related: I was trying to use the vSphere console across a Cisco VPN connection, which in theory should be fine, but due to a number of changes on the client site, the VPN client IP pool was still assigning clients the old DNS server details. Once I hard coded the vCenter and ESXi host details into the admin workstation's hosts file, it all started working.
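For reference, the hosts file on a Windows admin workstation lives at C:\Windows\System32\drivers\etc\hosts, and the entries I added looked similar to the following (the names and addresses here are examples only).

192.168.10.20    vcenter.lab.local
192.168.10.21    esxi01.lab.local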

That being said, one machine still kept giving me issues. The fix for that one was to down power it and remove it from the vCenter Inventory, then re-add it using the VMX file. It then worked correctly.

Tuesday 21 April 2015

NetApp ONTAP 8.x Converting 7-Mode to Clustered-Mode

You have a new NetApp FAS2252 with two controllers and a single disk tray. The intended design for the FAS2252 is to have ONTAP 8.3 configured in Clustered-Mode to provide Active/Active failover.

The first controller boots without issue into ONTAP Clustered-Mode, and you have created the cluster using the cluster setup wizard from the console. You then try to configure and join the second controller to the newly created cluster.

You notice the second controller continually boots into ONTAP 7-Mode, which does not allow you to add it to an ONTAP cluster. 

Before I did this on the physical FAS2252, I downloaded the ONTAP 7-Mode Simulator from the NetApp website. You can import it into VMware Player, Workstation or Fusion.

When you extract it there is a VMware VMX file inside the folder; simply right click and open this with the VMware product you are using. The appliance will load, and you must get to the LOADER prompt to change the ONTAP boot system to Clustered-Mode. To do this hit any key at the prompt beforehand; this breaks out of the boot sequence. On the physical appliance the prompt was slightly different, and you are instructed to push CTRL + C.

Now from the LOADER prompt, type the following commands.

LOADER> set-defaults (it is set defaults in earlier versions of code)
LOADER> setenv bootarg.init.boot_clustered true
LOADER> setenv bootarg.bsdportname e0M (may be different for you)
LOADER> boot_ontap

The last boot_ontap command instructs the appliance to boot the ONTAP OS, as the boot variable has just been set to ONTAP Clustered-Mode it should appear like below.

When ONTAP begins to load push CTRL + C to get to the boot menu, select option 4 Clean Configuration and Initialize all Disks. Each of the controllers will require at least 3 disks to be assigned to them for step 4 to work. If all of the disks in the shelf are assigned to a single controller step 4 will fail. To reassign disks you can use maintenance mode.
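To reassign disks from maintenance mode, select option 5 Maintenance mode boot from the same boot menu, then use commands along the lines of the following; the disk name and system ID here are examples and will differ on your filer.

*> disk show -a (lists all disks and their current owners)
*> disk remove_ownership 0b.00.10
*> disk assign 0b.00.10 -s 1234567890 (assigns the disk to the controller with that system ID)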

At the Zero disks, reset config and install a new file system?: Y

And This will erase all the data on the disk, are you sure?: Y

Bear in mind that if your filer has any data on it this step will destroy it; as I was only testing using the simulator I just continued.

Once the wipeconfig has completed and the filer reboots, you will be prompted with the cluster setup wizard, which I will cover in my next post.