Calling lopoetve - RDM, NPIV, and Virtual WWNs for VMs

sabregen

Fully [H]
Joined
Jun 6, 2005
Messages
19,501
I have many FC toys here in house, and we're about to get some new demo gear from QLogic so that we can quantify the abilities that we will have if we move to newer switches that support features we don't currently have (like NPIV, in particular). So, I have some questions before the new toys get here.

What is required, and how do you set up an RDM LUN for a VM? I know that there's still a VMDK file that points to the LUN (so it's just a pointer to the disk, and is very small), but I can't seem to get one set up. I have plenty of spare disks that I could use (new 15K 300GB FC drives with no file system on them) to test this out.

Also, with regards to giving a VM a virtual WWN for use in switch zoning, and access inside of the VM to LUNs, do I need NPIV support from the HBA in the machine, through the switch, from the switch to the RAID controller, and from the RAID controller to the disk shelves? I am under the impression (perhaps incorrectly) that NPIV must be supported by all devices in the SAN chain to enable virtual WWNs in a VM. Is this correct?
 
ya gotta give me time to check the forums :-p I was testing stuff on an as-yet-unreleased product all day today and yesterday :)

I'll respond as much as possible - have to run to something, so I may have to get it later.
 
I have many FC toys here in house, and we're about to get some new demo gear from QLogic so that we can quantify the abilities that we will have if we move to newer switches that support features we don't currently have (like NPIV, in particular). So, I have some questions before the new toys get here.

What is required, and how do you set up an RDM LUN for a VM? I know that there's still a VMDK file that points to the LUN (so it's just a pointer to the disk, and is very small), but I can't seem to get one set up. I have plenty of spare disks that I could use (new 15K 300GB FC drives with no file system on them) to test this out.
All you need is a spare LUN. You will have either an RDM or RDMP .vmdk file, which is nothing more than a descriptor for the LUN, giving pseudo "cylinder/head/block" information for the "drive". This file can be copied, deleted, etc. - it's just text, although an ls will show it at the full size of the RDM. Note: cold migration, storage migration, and cloning will convert the RDM to a regular .vmdk - it's a bug/feature. Present the LUN, rescan a few times, then go to Add Storage - if the LUN shows up there, it should let you add it to a VM as an RDM. If not, I'll walk you through it from the command line. The GUI is a bit picky - also try connecting directly to the hosts instead of through VC, since sometimes VC doesn't think it's a valid LUN.
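If you do end up at the command line, it looks roughly like this - note the device path, datastore, and file names below are made-up examples from a hypothetical setup, so substitute your own (and double-check the syntax against your ESX build):

```
# List the storage devices the host sees after rescanning
esxcfg-vmhbadevs

# Virtual-compatibility RDM (the "RDM" descriptor):
vmkfstools -r /vmfs/devices/disks/vmhba1:0:5:0 \
    /vmfs/volumes/myvmfs/myvm/myvm-rdm.vmdk

# Physical-compatibility / passthrough RDM (the "RDMP" descriptor):
vmkfstools -z /vmfs/devices/disks/vmhba1:0:5:0 \
    /vmfs/volumes/myvmfs/myvm/myvm-rdmp.vmdk
```

If you cat the resulting descriptor you'll see it's just that text file I mentioned - a createType line (vmfsRawDeviceMap for virtual, vmfsPassthroughRawDeviceMap for passthrough) plus the pseudo-geometry entries. Then point the VM at the descriptor as an existing disk.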
Also, with regards to giving a VM a virtual WWN for use in switch zoning, and access inside of the VM to LUNs, do I need NPIV support from the HBA in the machine, through the switch, from the switch to the RAID controller, and from the RAID controller to the disk shelves? I am under the impression (perhaps incorrectly) that NPIV must be supported by all devices in the SAN chain to enable virtual WWNs in a VM. Is this correct?

Yes @ switch and HBA. The SAN doesn't care - it just thinks it's a separate HBA. You will have to manually register the WWN - the switches/SAN will ~not~ pick it up automatically. Fair warning, though: right now NPIV is a phase-1 proof-of-concept feature. It has almost ~zero~ practical use other than monitoring bandwidth at the VM level. You MUST have the RDM presented to the ESX host the entire time - we fail back to the RDM from the NPIV LUN when needed. It'll get more useful in the future; it's still really new.
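Under the covers, the virtual WWNs end up as entries in the VM's .vmx file - something along these lines (key names are from memory of the 3.5-era docs, so treat them as an assumption and verify against your build; the WWN values here are obviously made up):

```
# Illustrative only - assigned via the VI Client when you enable NPIV on the VM
wwn.node = "28:xx:xx:xx:xx:xx:xx:01"
wwn.port = "28:xx:xx:xx:xx:xx:xx:02"
```

Those are the WWNs you'd then zone and register on the fabric, same as a physical HBA.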
 
All you need is a spare LUN. You will have either an RDM or RDMP .vmdk file, which is nothing more than a descriptor for the LUN, giving pseudo "cylinder/head/block" information for the "drive". This file can be copied, deleted, etc. - it's just text, although an ls will show it at the full size of the RDM. Note: cold migration, storage migration, and cloning will convert the RDM to a regular .vmdk - it's a bug/feature. Present the LUN, rescan a few times, then go to Add Storage - if the LUN shows up there, it should let you add it to a VM as an RDM. If not, I'll walk you through it from the command line. The GUI is a bit picky - also try connecting directly to the hosts instead of through VC, since sometimes VC doesn't think it's a valid LUN.

So if I create a new LUN on the SAN controller, you're suggesting that I use the VI Client to connect directly to a host, instead of to my VirtualCenter Server VM, correct? From there I can add it as a raw LUN inside a virtual machine, using the Add Device portion of the VM's Edit Settings menu? And, if I get all of that working, a powered-off (cold) migration will convert the RDM to a .vmdk on one of the VMFS stores? That's kind of fucked, especially in instances where the application requires an RDM (yeah, I said it - I can think of several that "require" this configuration for the support team to even talk to you).
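Given that conversion gotcha, one way to sanity-check after a migration is to ask vmkfstools what the descriptor actually points at - a quick sketch, with the path as a placeholder (verify the option against your ESX version):

```
# Query the disk descriptor - an RDM should report the mapped raw device,
# while a silently-converted disk reports a plain VMFS virtual disk
vmkfstools -q /vmfs/volumes/myvmfs/myvm/myvm-rdm.vmdk
```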

Yes @ switch and HBA. The SAN doesn't care - it just thinks it's a separate HBA. You will have to manually register the WWN - the switches/SAN will ~not~ pick it up automatically. Fair warning, though: right now NPIV is a phase-1 proof-of-concept feature. It has almost ~zero~ practical use other than monitoring bandwidth at the VM level. You MUST have the RDM presented to the ESX host the entire time - we fail back to the RDM from the NPIV LUN when needed. It'll get more useful in the future; it's still really new.

Switch & HBA is what I thought. With the new switches, I'll at least be able to give the PoC a shot and add a virtual WWN to a VM. When you say "manually register," you're talking about manually adding the VM's WWPN to the zoning config on the switch, correct? Zero practical use, maybe - I can think of a few things to test out with it.
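For what it's worth, on a Brocade switch that manual registration would look something like the sketch below - alias/zone/config names and WWPNs are made up, and the exact syntax can differ across FOS versions and other switch vendors:

```
# Alias the VM's virtual WWPN and the storage port (example values)
alicreate "vm1_npiv", "28:00:00:1b:32:aa:bb:01"
alicreate "array_spa", "50:06:01:60:41:e0:11:22"

# Zone them together and activate the running config
zonecreate "vm1_npiv_zone", "vm1_npiv; array_spa"
cfgadd "prod_cfg", "vm1_npiv_zone"
cfgsave
cfgenable "prod_cfg"
```

Depending on the array, you may also have to manually register the virtual WWPN as a host initiator on the storage side before presenting the LUN to it.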

Thanks for the info, and the clarification. Know anyone that can get the VCDX course fees waived for me? ;)
 