1.3 VCAP-DCA Study Guide - Configure and Manage Complex Multipathing & PSA Plug-ins

Posted on 16 Aug 2011 by Ray Heffer

This section on storage continues from section 1.2 of the blueprint (Manage Storage Capacity in a vSphere Environment), which at the time of writing these study notes I haven’t completed yet. I felt that managing multipathing and PSA plug-ins deserved more attention, at least for me anyway. This is very command-line heavy, but remembering that documentation is provided during the VCAP-DCA exam (see key materials below), and that you can use command-line help, makes this a little less scary. Just try to remember what you need to achieve and have a good idea of which commands are used!

Knowledge Required

  • Explain the Pluggable Storage Architecture (PSA) layout

Key Focus Areas

  • Install and Configure PSA plug-ins
  • Understand different multipathing policy functionalities
  • Perform command line configuration of multipathing options
  • Change a multipath policy
  • Configure Software iSCSI port binding

Key Materials (VMware PDFs & KB articles)

Install and Configure PSA plug-ins

The most common PSA plug-in is EMC PowerPath, which is what I’ll base this on. You can use the vMA (which I prefer) or the vCLI.

  1. Download EMC PowerPath for VMware from https://powerlink.emc.com (you will need to register for an account).
  2. Extract the .zip file and copy EMCPower.VMWARE.5.4.SP2.b298.zip from the folder to your vMA (use WinSCP).
  3. Place the host into maintenance mode.
  4. Make sure that there is no previous installation of PowerPath/VE:

# vihostupdate --query --server <hostname>

  5. Install the PowerPath plugin:

# vihostupdate --server <hostname> --install --bundle EMCPower.VMWARE.5.4.SP2.b298.zip

  6. Reboot the server.
  7. If you run the vihostupdate query again, you should see that PowerPath is installed.
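
If you prefer to handle the maintenance mode and reboot steps from the vMA as well, vicfg-hostops can do both. A minimal sketch, assuming the vSphere CLI/vMA is in use and <hostname> is a placeholder for your host:

# vicfg-hostops --server <hostname> --operation enter (place the host into maintenance mode)
# vicfg-hostops --server <hostname> --operation reboot
# vihostupdate --query --server <hostname> (verify the PowerPath bulletin appears after the reboot)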

Understand different multipathing policy functionalities

The knowledge required for this topic, according to the VCAP-DCA blueprint, is to understand the Pluggable Storage Architecture (PSA). The following sections will detail how to use the CLI, so grab a coffee as we dive into PSA!

As part of the VMkernel, the PSA provides multipathing using either the Native Multipathing Plugin (NMP) or a third-party multipathing plugin (MPP) such as EMC PowerPath. The NMP in turn uses two types of sub-plugin that are key to this topic: SATPs (Storage Array Type Plugins) and PSPs (Path Selection Plugins). If you haven’t seen it already, there is a well-used diagram of these by VMware, which you’ll find on page 24 of the Fibre Channel SAN Configuration Guide.
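
One quick way to see the PSA layout on a real host is to list the claim rules, which show which multipathing plugin (NMP, MASK_PATH, or a third-party MPP such as PowerPath) claims which paths. A minimal example using the esxcli corestorage namespace mentioned below:

# esxcli corestorage claimrule list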

The SATPs (Storage Array Type Plugins) are responsible for monitoring the health of each path, reporting changes in path state, and activating a passive path if required (on active-passive arrays). Ed Grigson of vExperienced said the SATP is like the traffic cop, which is a good analogy.
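
You can see which SATPs are loaded on a host, along with the default PSP each one uses, with a single command:

# esxcli nmp satp list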

The PSPs (Path Selection Plugins) are responsible for choosing which path to use for I/O requests. The following PSPs are available:

  • Most Recently Used (VMW_PSP_MRU) – Uses the path most recently used for a given device; if that path fails, an alternative is selected, and the policy does not revert back when the original path returns.
  • Fixed (VMW_PSP_FIXED) – Uses a preferred path unless it’s down, in which case it selects an alternative path at random. If a preferred path isn’t configured, it will use the first path discovered at boot time. * See my note on path thrashing with this policy.
  • Round Robin (VMW_PSP_RR) – Enables load balancing by rotating I/O across the available paths (see the tuning sketch after this list).
  • VMW_PSP_FIXED_AP – Fixed functionality for active-passive and ALUA arrays.
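
For Round Robin, the number of I/O operations sent down each path before rotating can be tuned per device. Treat this as a sketch only: naa.<ID> is a placeholder, and check your array vendor’s guidance before changing the IOPS value.

# esxcli nmp roundrobin getconfig --device naa.<ID>
# esxcli nmp roundrobin setconfig --device naa.<ID> --iops 1 --type iops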

Perform command line configuration of multipathing options

The VCAP-DCA blueprint for section 1.3 only mentions use of esxcli, so this will be the focus of these study notes. Remember that when using esxcli from the vMA, you’ll need to add --server <hostname>, otherwise it will target the localhost (even if you’ve done vifptarget -s); see the example after the namespace list below. I’ve also completed a section on esxcli corestorage which you’ll find in part two of my Implement and Manage Storage guide. Here are the available esxcli namespaces:

  • corestorage (covered here)
  • network
  • nmp *
  • swiscsi (detailed in the next section, see below)
  • vaai
  • vms

Note: esxcli nmp is the namespace relevant to this section of the VCAP-DCA blueprint.
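
As a reminder of the --server point above, here’s a minimal example of running esxcli from the vMA against a remote host (<hostname> and the username are placeholders for your environment):

# esxcli --server <hostname> --username root nmp device list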

Using esxcli nmp is straightforward enough; just type it alone and it’ll present you with a list of available objects (see screenshot).

As this section of the blueprint is about multipathing and PSA plugins, we’ll get started by listing the paths available on the ESX/ESXi host:

# esxcli nmp path list | less (can pipe to more, but I prefer less :)

You can also list the devices controlled by the nmp plugin:

# esxcli nmp device list | more

Setting the PSA Multipathing Policy with esxcli

First, list the path selection plugins available:

# esxcli nmp psp list

Next, we’ll list the available devices so we can choose which one to set the multipath policy against. Tip: If you know the naa ID of the device, you can pipe this command to grep (see screenshot).

# esxcli nmp device list | grep <naa ID>

Next, we’ll set the multipathing policy for a given device.

# esxcli nmp device setpolicy --device naa.<ID> --psp VMW_PSP_???

So you can see that with three commands we can list the PSP plugins, list the devices (using grep to filter to a specific device), and then set the policy for a device to another PSP.
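
To confirm the change took effect, one approach is to filter the device list again and check the Path Selection Policy line for that device (the exact output layout can vary between builds, so treat this as a sketch):

# esxcli nmp device list | grep -A5 naa.<ID>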

Tip: If you get stuck in the exam, remember this is detailed in the vSphere Command-Line Reference (now the vSphere Command-Line Interface Installation and Scripting Guide).

Configure Software iSCSI Port Binding

To configure port binding for software iSCSI you’ll need to create two VMkernel ports and bind them to separate NICs. There are two ways of doing this, which you’ll see on page 38 of the iSCSI SAN Configuration Guide: a single vSwitch with two VMkernel ports and two physical NICs, or a separate vSwitch for each VMkernel port, each with its own NIC. It really doesn’t matter which you choose, as the next step will be to bind each VMkernel port to the software iSCSI adapter. Once you’ve done this, configure your software iSCSI adapter as usual. If you are not sure how to do this, refer to the iSCSI SAN Configuration Guide. Once your iSCSI LUN is presented, we’ll do the rest from the command line.
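
If you’d rather create the VMkernel ports from the command line than the vSphere Client, something like the following works for the two-vSwitch design. This is only a sketch: the vSwitch names, port groups, vmnics and IP addresses are placeholders for your own environment (from the vMA use the vicfg- equivalents of these esxcfg- commands).

# esxcfg-vswitch -a vSwitch2
# esxcfg-vswitch -L vmnic2 vSwitch2
# esxcfg-vswitch -A iSCSI-1 vSwitch2
# esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI-1
# esxcfg-vswitch -a vSwitch3
# esxcfg-vswitch -L vmnic3 vSwitch3
# esxcfg-vswitch -A iSCSI-2 vSwitch3
# esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI-2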

1) Check which HBA your software iSCSI adapter is using from the vSphere Client:

2) From the command line, run the esxcli swiscsi command to confirm that no NICs are already bound to the adapter.

# esxcli swiscsi nic list -d <vmhba> (E.g. esxcli swiscsi nic list -d vmhba35)

3) Next, we’ll add one of our VMkernel ports (e.g. vmk1) to the software iSCSI adapter (e.g. vmhba35).

# esxcli swiscsi nic add -n vmk1 -d vmhba35

4) Do the same for the second VMkernel port.

# esxcli swiscsi nic add -n vmk2 -d vmhba35

5) Next, run the esxcli swiscsi command again to check the port bindings:

# esxcli swiscsi nic list -d vmhba35

6) Rescan the vmhba (E.g. vmhba35).

# esxcfg-rescan vmhba35

That’s it, both VMkernel ports are now bound to the software iSCSI adapter.
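
If you ever need to undo a binding (for example to move a VMkernel port to a different adapter), there is a corresponding remove option; as always, check the command help before relying on this sketch:

# esxcli swiscsi nic remove -n vmk1 -d vmhba35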

TIP: Use esxtop/resxtop to see traffic flowing through these VMkernel ports. In the following screenshot, I’ve started a Storage vMotion to an iSCSI datastore and you can see it is using vmk1. Normally you would see two separate NICs under the TEAM-PNIC column, but I just wanted to show you that you can see the traffic flowing in esxtop. Also, if you have two vmnics assigned and one fails, the TEAM-PNIC column will show which one has failed.
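
From the vMA this is just resxtop pointed at the host; once it’s running, press n to switch to the network view, where the VMkernel ports and the TEAM-PNIC column are displayed (<hostname> is a placeholder):

# resxtop --server <hostname>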