
Thursday, 9 July 2015

NetApp Clustered ONTAP 8.x Cluster/High Availability "Node NetAppNode has some LIF's that are not in their home node" Orphaned LIF Interface

I was recently checking over the configuration of a NetApp FAS2252 to ensure it was installed correctly before it gets handed over to a customer. In doing this I checked the High Availability section, found under Cluster and then the NetApp cluster name.
There was a yellow warning box above the NetApp HA diagram that stated "Node NetAppNode has some LIF's that are not in their home node". Although I was confident this would not affect the failover itself, I wanted to get rid of the warning.
Connectivity to SVMs is provided through logical interfaces (LIFs). A LIF has an IP address or World Wide Port Name used by a client or host to connect to an SVM, and is hosted on a physical port. An SVM can have LIFs on any cluster node, and clients and hosts can access data regardless of the physical location of the data in the cluster; the cluster uses its interconnect to route traffic to the appropriate location regardless of where the request arrives. LIFs virtualize IP addresses or WWPNs, rather than permanently mapping IP addresses and WWPNs to NIC and HBA ports. Each SVM requires its own dedicated set of LIFs.
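To illustrate how a LIF is tied to an SVM and a home node/port, a data LIF can be created with the network interface create command. The SVM name, node, port and addresses below are hypothetical placeholders, so substitute your own values:

network interface create -vserver svm1 -lif svm1_data1 -role data -data-protocol nfs -home-node NetAppNode-01 -home-port e0c -address 192.168.1.50 -netmask 255.255.255.0

The -home-node and -home-port values are what the LIF is compared against when ONTAP decides whether it is "home" or has failed over elsewhere.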


Open up an SSH session to the SVM mgmt IP, and use the following command:
network interface show

This will display all of the LIF interfaces assigned to the SVMs on your NetApp. I only have one SVM, and as you can see it has three LIF interfaces. If you look above the SVM line (circled in purple) you will see one of the LIF interfaces is orphaned. This is what was causing the warning.
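If you have a lot of LIFs, you can also filter the output directly for interfaces that are not currently on their home node/port (assuming the clustered ONTAP 8.x CLI, which supports the -is-home field):

network interface show -is-home false

Any LIF listed by this command is a candidate for the revert below.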

The following command can be used to re-home the orphaned LIF interface.
network interface revert -vserver vservername -lif lifname
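If more than one LIF has migrated away from its home port, you can revert them all for an SVM in one go using a wildcard for the LIF name (vservername is a placeholder for your SVM name):

network interface revert -vserver vservername -lif *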


If you return to the GUI interface you will notice the warning has disappeared.

Tuesday, 21 April 2015

NetApp ONTAP 8.x Converting 7-Mode to Clustered-Mode

You have a new NetApp FAS2252 with two controllers and a single disk tray. The intended design for the FAS2252 is to have ONTAP 8.3 configured in Clustered-Mode to provide Active/Active failover.

The first controller boots without issue into ONTAP Clustered-Mode, and you have created the cluster using the cluster setup wizard from the console. You then try to configure and join the second controller to the newly created cluster.

You notice the second controller continually boots into ONTAP 7-Mode, which does not allow you to add it to an ONTAP cluster. 


Before I did this on the physical FAS2252, I downloaded the ONTAP 7-Mode Simulator from the NetApp website. You can import it into VMware Player, Workstation or Fusion.

When you extract it there is a VMware VMX file inside the folder; simply right-click and open this with the VMware product you are using. The appliance will load, and you must get to the LOADER prompt to change the ONTAP boot mode to Clustered-Mode. To do this press any key at the prompt, which breaks out of the boot sequence. On the physical appliance the prompt was slightly different and you are instructed to push CTRL + C.


Now from the LOADER prompt, type the following commands.

LOADER> set-defaults (it is set defaults in earlier versions of code)
LOADER> setenv bootarg.init.boot_clustered true
LOADER> setenv bootarg.bsdportname e0M (may be different for you)
LOADER> boot_ontap
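If you want to double-check the variable took effect before booting, the LOADER printenv command lists the current environment variables (the exact output format may vary between firmware versions); look for bootarg.init.boot_clustered set to true:

LOADER> printenv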

The last command, boot_ontap, instructs the appliance to boot the ONTAP OS; as the boot variable has just been set to Clustered-Mode, it should appear like below.


When ONTAP begins to load, push CTRL + C to get to the boot menu and select option 4, Clean Configuration and Initialize all Disks. Each of the controllers will require at least 3 disks to be assigned to it for option 4 to work; if all of the disks in the shelf are assigned to a single controller, option 4 will fail. To reassign disks you can use maintenance mode.
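For reference, reassigning a disk from maintenance mode looks roughly like the following; the disk name and system ID here are hypothetical, and you can find the real values in the output of disk show:

*> disk show -a
*> disk assign 0b.00.5 -s 1573998324

disk show -a lists all disks with their current owners, and disk assign hands the named disk to the controller with the given system ID. If a disk is already owned by the other controller, you may need to remove its ownership from that node first.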


At the "Zero disks, reset config and install a new file system?" prompt, answer Y.

At the "This will erase all the data on the disk, are you sure?" prompt, answer Y.

Bear in mind that if your filer has any data on it this step will destroy it; as I was only testing using the simulator, I just continued.


Once the wipeconfig has completed and the filer reboots, you will be prompted with the cluster setup wizard, which I will cover in my next post.