16 Aug 2011 by rayheffer
This section on storage continues from section 1.2 in the blueprint (Manage Storage Capacity in a vSphere Environment), which at the time of writing these study notes I haven’t completed yet. I felt that managing multipathing and PSA plugins deserves more attention, at least for me anyway. This is very command-line heavy, but remember that documentation is provided during the VCAP-DCA exam (see key materials below), and the fact that you can use command-line help makes this a little less scary. Just try to remember what you need to achieve and have a good idea of which commands are used!
The most common third-party PSA plug-in is EMC PowerPath, which is what I’ll base this on. You can use the vMA (which I prefer) or the vCLI.
First, query the host to see which bundles are already installed:
# vihostupdate --query --server <hostname>
Install the PowerPath plugin:
# vihostupdate --server <hostname> --install --bundle EMCPower.VMWARE.5.4.SP2.b298.zip
The knowledge required according to the VCAP-DCA blueprint on this topic is to understand Pluggable Storage Architecture (PSA). The following sections will detail how to use the CLI, so grab a coffee as we dive into PSA!
As part of the VMkernel, the PSA provides multipathing using either the Native Multipathing Plugin (NMP) or a third-party multipathing plugin (MPP) such as EMC PowerPath. The NMP uses two types of sub-plugin that are key to this topic: SATPs (Storage Array Type Plugins) and PSPs (Path Selection Plugins). If you haven’t seen it already, there is a well-used diagram of these by VMware, which you’ll find on page 24 of the Fibre Channel SAN Configuration Guide:
The SATP (Storage Array Type Plugin) is responsible for monitoring the health of each path, reporting changes to the state of each path, and activating a passive path if required (on active-passive arrays). Ed Grigson of vExperienced said the SATP is like the traffic cop, which is a good analogy.
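You can see which SATPs are loaded, and how arrays are matched to them, with the vSphere 4.x esxcli syntax used throughout these notes. The commands need to run against an ESX/ESXi host (adding --server <hostname> from the vMA), so this sketch just assembles and prints them as a dry run:

```shell
# Dry-run sketch: vSphere 4.x esxcli commands for inspecting SATPs.
# These must run on a host (or via vMA with --server), so we only print them here.
LIST_SATPS="esxcli nmp satp list"        # each SATP and its default PSP
LIST_RULES="esxcli nmp satp listrules"   # claim rules mapping storage arrays to SATPs
printf '%s\n' "${LIST_SATPS}" "${LIST_RULES}"
```

Running the first command on a host shows each SATP alongside the default PSP it will hand devices to, which is useful context before changing a path selection policy.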
The PSPs (Path Selection Plugins) are responsible for choosing which path to use for I/O requests. The following PSPs are available: VMW_PSP_FIXED (Fixed), VMW_PSP_MRU (Most Recently Used) and VMW_PSP_RR (Round Robin).
The VCAP-DCA blueprint for section 1.3 only mentions use of esxcli, so this will be the focus of these study notes. Remember that when using esxcli with the vMA, you’ll need to add --server <hostname>.
Note: esxcli nmp is the relevant namespace for this section of the VCAP-DCA blueprint.
esxcli nmp is straightforward enough; just type it on its own and it’ll present you with a list of available objects (see screenshot):
As this section of the blueprint is multipathing and PSA plugins, we’ll get started with listing the paths available on the ESX/ESXi host:
# esxcli nmp path list | less (can pipe to more, but I prefer less :)
You can also list the devices controlled by the nmp plugin:
# esxcli nmp device list | more
First, list the path selection plugins available:
# esxcli nmp psp list
Next, we’ll list the available devices so we can choose which one to set the multipath policy against. Tip: If you know the naa ID of the device, you can pipe this command to grep (see screenshot).
# esxcli nmp device list | grep <naa ID>
Next, we’ll set the multipathing policy for a given device.
# esxcli nmp device setpolicy --device naa.<ID> --psp VMW_PSP_<policy> (e.g. VMW_PSP_RR)
So you can see that with three commands we can list the PSP plugins, list the devices (using grep to filter to a specific device), and then set the policy to another PSP.
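Put together, the three steps above can be scripted from the vMA. This is a dry-run sketch (it only prints the commands, the naa ID is a placeholder for one taken from your own device list, and Round Robin is just an example policy):

```shell
# Dry-run sketch of the three-step policy change (vSphere 4.x esxcli syntax).
# DEVICE is a placeholder -- substitute the real naa ID from 'esxcli nmp device list'.
DEVICE="naa.<ID>"
STEP1="esxcli nmp psp list"                              # 1) list the available PSPs
STEP2="esxcli nmp device list | grep ${DEVICE}"          # 2) filter to the target device
STEP3="esxcli nmp device setpolicy --device ${DEVICE} --psp VMW_PSP_RR"  # 3) set the policy
printf '%s\n' "${STEP1}" "${STEP2}" "${STEP3}"
```

From the vMA you’d append --server <hostname> to each esxcli command; on a host’s console they run as shown.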
Tip: If you get stuck in the exam, remember this is detailed in the vSphere Command Line Reference (now the vSphere Command-Line Interface Installation and Scripting Guide).
To configure port binding for software iSCSI you’ll need to create two VMkernel ports and bind them to separate NICs. There are two ways of doing this, which you’ll see on page 38 of the iSCSI SAN Configuration Guide: a single vSwitch with two VMkernel ports and two physical NICs, or a separate vSwitch for each VMkernel port, each with its own NIC. It really doesn’t matter which you choose, as the next step will be to bind the iSCSI port on each NIC to the iSCSI adapter. Once you’ve done this, configure your software iSCSI adapter as usual. If you are not sure how to do this then refer to the iSCSI SAN Configuration Guide. Once your iSCSI LUN is presented we’ll do the rest from the command line.
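For reference, the single-vSwitch layout can be built with the esxcfg-* commands. This is a sketch only: the vSwitch name, vmnics and IP addresses are all examples, and it’s printed as a dry run since the commands need a host to run against:

```shell
# Dry-run sketch: one vSwitch, two VMkernel ports, two NICs (names and IPs are examples).
VSWITCH="vSwitch1"
CMDS="esxcfg-vswitch -a ${VSWITCH}
esxcfg-vswitch -L vmnic1 ${VSWITCH}
esxcfg-vswitch -L vmnic2 ${VSWITCH}
esxcfg-vswitch -A iSCSI1 ${VSWITCH}
esxcfg-vswitch -A iSCSI2 ${VSWITCH}
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2"
echo "${CMDS}"
```

Remember that for port binding each VMkernel port group must end up with a single active vmnic (override the failover order in the vSphere Client so iSCSI1 uses vmnic1 only, iSCSI2 uses vmnic2 only); otherwise the bind in the next steps will fail.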
1) Check which HBA your software iSCSI adapter is using from the vSphere Client:
2) From the command line, run the esxcli swiscsi command to confirm no NICs are already bound to the adapter.
# esxcli swiscsi nic list -d <vmhba> (E.g. esxcli swiscsi nic list -d vmhba35)
3) Next, we’ll add one of our VMkernel ports (e.g. vmk1) to the software iSCSI adapter (e.g. vmhba35).
# esxcli swiscsi nic add -n vmk1 -d vmhba35
4) Do the same for the second VMkernel port.
# esxcli swiscsi nic add -n vmk2 -d vmhba35
5) Next, run the esxcli swiscsi command again to check the port bindings:
# esxcli swiscsi nic list -d vmhba35
6) Rescan the vmhba (E.g. vmhba35).
# esxcfg-rescan vmhba35
That’s it, both VMkernel ports are now bound to the software iSCSI adapter.
TIP: Use esxtop/resxtop to see traffic flowing through these VMkernel ports. In the following screenshot, I’ve started a Storage vMotion to an iSCSI datastore and you can see it is using vmk1. Normally you would see two separate NICs under the TEAM-PNIC column, but I just wanted to show you that you can see the traffic flowing in esxtop. Also, if you have two vmnics assigned and one fails, the TEAM-PNIC column will show which one has failed.