Posted on 04 Jul 2011 by Ray Heffer
This is the very first subject on the VCAP-DCA blueprint, and I intend to focus these study notes on what you need to know with essential learning points. Throughout my study notes I have made a few assumptions about the reader. You will:
With that in mind, I recommend that rather than following the exam blueprint in order, you try to focus on the topics you find the hardest. If I’ve not included notes on some topics (RAID for example) it is because there is already a wealth of information available. This way, your VCAP-DCA study can be focused on key learning points that target gaps in your knowledge or areas of weakness. Also bear in mind that at the time of writing I haven’t taken the VCAP-DCA yet, but as a former virtual infrastructure team lead and admin with recent knowledge in the field, I hope my notes help not only myself but others to pass the certification too.
VMware DirectPath allows a virtual machine to access hardware adapters in the ESX/ESXi host directly, bypassing the virtualisation layer. You MUST have a CPU and chipset that support this, either Intel VT-d or AMD IOMMU. I’ve based my study notes on my home lab which doesn’t support this, so the screenshot below shows the DirectPath I/O Configuration screen with the message ‘Host does not support passthrough configuration’. A few key points to remember are:
I recommend you read key materials 1 and 2 in the list above, which are a short five-page VMware PDF on DirectPath I/O and a VMware KB article on configuring VMDirectPath I/O pass-through devices on an ESX host.
1) To enable VMware DirectPath I/O using the vSphere Client, select the host, go to the Configuration tab, and click Advanced Settings (under Hardware).
2) Click on ‘Configure Passthrough’ and select the device from the list.
You should then see the selected device listed, but it will not be passed through until the host is restarted.
3) To assign the device to a virtual machine (which must be powered off), go to Edit Settings, click Add, select PCI Device, and click Next.
4) You will see a list of devices available. Select the device, click Next and Finish.
The virtual machine operating system should then detect the hardware as it would on a physical machine.
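Behind the scenes, if memory serves, the PCI device assignment ends up as pciPassthru entries in the virtual machine’s .vmx file. The values below are purely illustrative (the device, vendor and system IDs are made up), so expect different values on your own hardware:

```
# Illustrative .vmx entries for a passed-through PCI device
# (all IDs below are made-up example values)
pciPassthru0.present = "TRUE"
pciPassthru0.deviceId = "0x150e"
pciPassthru0.vendorId = "0x8086"
pciPassthru0.systemId = "4e2a1b3c-0000-0000-0000-001b21abcdef"
pciPassthru0.id = "0b:00.0"
```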
The following features are NOT supported with DirectPath I/O:
NPIV allows a virtual machine to have its own WWN on the Fibre Channel SAN using a virtual HBA port. It uses an RDM (Raw Device Mapping) to map the LUN to the virtual machine.
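As a side note, NPIV is enabled per virtual machine (Edit Settings > Options > Fibre Channel NPIV), and if I recall correctly the generated virtual WWNs are stored in the .vmx file along these lines. The WWN values here are made-up examples, and vCenter may generate several port WWNs:

```
# Illustrative .vmx entries for an NPIV-enabled virtual machine
# (WWN values are made-up examples)
wwn.node = "28fa000c29000001"
wwn.port = "28fa000c29000002"
```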
Read key material number 3 (listed above) by Brocade as it has an excellent overview of NPIV and how to configure it. Here is an overview:
Now we will assign a Raw Device Mapping (RDM) disk to the virtual machine.
Firstly, read key materials 4, 5 and 6; they are excellent resources and must be understood before attempting the VCAP-DCA. Don’t forget that this isn’t a multiple choice exam, it’s a lab based exam. It’s important to focus your study on best practices as much as the hard facts. For example, ‘Most Recently Used’ is the default multipathing policy for an active / passive array, and Fixed is recommended for active / active. But read the documentation in the key materials and understand why this is so.
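You can also check and change the path selection policy from the command line. The syntax below is the vSphere 4.x esxcli nmp namespace (it moved under esxcli storage nmp in vSphere 5), and the naa identifier is a made-up example, so substitute your own device ID:

```
# List each device with its current Path Selection Policy (PSP)
esxcli nmp device list

# Set the policy for a single LUN (example naa identifier - use your own)
esxcli nmp device setpolicy --device naa.60060160a0b12345 --psp VMW_PSP_MRU
```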
You will read that using Fixed on an active / passive array will cause LUN path thrashing, but what does this mean? Well, my background is with EMC Clariion storage, which contains two SPs (Storage Processors), and each LUN will have a preferred SP:
LUN 1 – SP A
LUN 2 – SP A
LUN 3 – SP B
LUN 4 – SP B
So whilst both storage processors are indeed active, you can’t access all LUNs through a single SP unless it is the preferred storage processor for all of the LUNs, which will only be the case if you manually change the preference for each LUN or a failure occurs in the SAN fabric. I like to use the term ‘LUN path thrashing’ rather than ‘path thrashing’ for that reason, as each LUN will (or may) have a preferred storage processor (A or B).
LUN path thrashing occurs when the active path for a given LUN repeatedly switches from one storage processor to the other. This is where this subject really matters, especially with regard to the VCAP-DCA exam, as we’re about to learn a good use for esxcfg-mpath!
esxcfg-mpath -l will list the paths, and in the example above where two LUNs are owned by SP A and the other two by SP B, you will see that for each LUN two paths are ‘on’ (via the owning storage processor) and two paths are ‘standby’. Now consider a scenario with two hosts (server A and server B), where server A’s preferred path to LUN 1 is via SP A and server B’s preferred path to LUN 1 is via SP B. Server A causes SP A to become the owning controller for LUN 1, then server B causes SP B to take ownership back, and the LUN bounces between the two storage processors. This is LUN path thrashing.
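As a quick sketch, these are the commands I would run from the service console (or vMA) to inspect the paths and their states:

```
# Long listing of every path, including its state (active / standby / dead)
esxcfg-mpath -l

# Brief one-entry-per-path summary
esxcfg-mpath -b
```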
Whilst we’re on the subject, ALUA (Asymmetric Logical Unit Access) will allow a host to reach a LUN via either storage processor (it will appear as active / active to the host) because it routes the I/O internally to the storage processor that owns the LUN. Incidentally if you want to learn more about ALUA and EMC Clariion storage then this article by Bas Raayman is highly recommended.
If you are familiar with Linux then you already know about the cp and mv commands to move or copy files. VMware KB article 900 states the following:
To prevent performance and data management related issues on ESX, avoid the use of scp, cp, or mv for storage operations…
I won’t focus too much on this, but I’ve included two useful resources in the key materials section at the beginning of this post (18, 19).
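For completeness, the supported way to copy or clone a virtual disk is vmkfstools rather than cp. A minimal sketch, with made-up datastore and VM paths:

```
# Clone a virtual disk instead of using cp (paths are made-up examples)
vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk /vmfs/volumes/datastore2/vm1/vm1.vmdk

# Optionally specify the destination disk format, e.g. thin provisioned
vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk -d thin /vmfs/volumes/datastore2/vm1/vm1-thin.vmdk
```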
The first thing to understand is why you would use RDMs in the first place, and I’ve included a link (10) in the key materials section above to Performance Characterization of VMFS and RDM Using a SAN. This PDF from VMware was based on ESX 3.5, but the technology remains the same and it contains a performance study on using Raw Device Mappings. Also read page 135 of the ESX Configuration Guide (12), which has a good section on Raw Device Mapping. Frank Brix Pedersen (vFrank) has a blog post where he conducted some tests on physical and virtual RDMs measured with Iometer, and the result still stands: the performance difference is really very small. So small, in fact, that I would count out performance as a deciding factor.
There are some use cases for RDMs versus VMFS storage, one of which I have first-hand experience with, and that is P2V. In the past, when migrating physical servers to virtual machines, I have encountered some cases for using an RDM, mainly on file servers. A typical example is where you migrate only the system disks of a physical server but present the larger file storage LUNs as Raw Device Mappings to the virtual machine. This saves a considerable amount of time during the P2V process as you do not need to move all of the data to a new VMDK. In my case it was nearly 1 TB in size, and an RDM was the obvious choice. Another use case for RDM is NPIV, mentioned above.
Remember that you have two compatibility modes when using an RDM: virtual and physical. A physical-mode RDM is excluded from snapshots and allows the virtual machine OS to access the disk almost directly (SCSI commands are passed through), whereas virtual mode virtualises the mapped device and enables the use of snapshots.
Here are the steps to add an RDM disk to a virtual machine (the screenshots are from vSphere 5, but the process is the same):
Select your target LUN, click Next.
Choose the datastore on which to store the mapping, click Next. (This is a VMDK proxy file and will not contain the data.)
Choose the compatibility mode (Physical or Virtual), click Next.
Configure the advanced options if required, click Next then Finish.
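The same thing can be done from the command line with vmkfstools, which also makes the virtual versus physical compatibility choice explicit. The device name and paths below are made-up examples:

```
# Virtual compatibility mode RDM (snapshots allowed)
vmkfstools -r /vmfs/devices/disks/naa.60060160a0b12345 /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk

# Physical compatibility mode RDM (SCSI commands passed through, no snapshots)
vmkfstools -z /vmfs/devices/disks/naa.60060160a0b12345 /vmfs/volumes/datastore1/vm1/vm1-rdmp.vmdk
```

Once created, the mapping file is simply added to the virtual machine as an existing disk.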
Tagged with: vmware certification