4.3 VCAP-DCA Study Guide - Configure a vSphere Environment to support MSCS Clustering

Posted on 12 Aug 2011 by Ray Heffer

I remember when I first started using Microsoft Cluster Server with SQL 2000 and Exchange 2003, and I had plenty of experiences (good and bad), especially the time I lost the quorum disk when I was due to go on holiday the next day! When I saw this topic on the VCAP-DCA blueprint I thought ‘oh’. Funnily enough, whilst I have had plenty of experience with physical clusters, I’ve never had to implement clustering in a vSphere environment. Due to certain complexities of MSCS, mis-configuration (or mishaps like my lost quorum disk) can cause unwanted downtime, and it can be a headache in itself. I’m not sure to what level the VCAP-DCA exam will require us to configure MSCS, but I am confident that it’s just configuring the vSphere environment, and not the other things you would normally have to do (SAN zoning, shared quorum disks, etc.).

The first and only document I will use for this section on MSCS is Setup for Failover Clustering and Microsoft Cluster Service, and it’s only 36 pages, so don’t worry, you won’t be spending the next three weeks on MSCS alone! In fact, the VCAP-DCA blueprint lists each topic in the same order as this document, so it’s a safe bet!

Knowledge Required

  • Identify MSCS clustering solution requirements
  • Identify the three supported MSCS configurations

Key Focus Areas

  • Configure Virtual Machine hardware to support cluster type and guest OS
  • Configure a MSCS cluster on a single ESX/ESXi Host
  • Configure a MSCS cluster across ESX/ESXi Hosts
  • Configure standby host clustering

Key Materials (VMware PDFs & KB articles)

  • Setup for Failover Clustering and Microsoft Cluster Service

Configure Virtual Machine hardware to support cluster type and guest OS

Features such as vMotion are not supported, nor is running the virtual machines in an HA/DRS cluster (prior to vSphere 4.1). Here are the key points for configuring a virtual machine in a Microsoft cluster setup:

  • Fibre Channel storage only (no NFS or iSCSI); local storage can be used for ‘cluster-in-a-box’
  • LSI Logic Parallel SCSI adapter (LSI Logic SAS for Windows Server 2008)
  • Thick provisioned disks only (eagerly zeroed)
  • VM hardware version 7

Regardless of virtual or physical hardware, each cluster node must have at least two NICs: one for the public network (LAN) and the other for the cluster heartbeat (private network).
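
If you prefer to script this, here’s a minimal sketch using pyvmomi (VMware’s Python SDK; any vSphere API client would do) that adds the second (heartbeat) vNIC to a node. The vCenter address, credentials, VM name and the ‘MSCS-Heartbeat’ port group are all hypothetical placeholders:

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

# Hypothetical vCenter and VM names - adjust for your environment
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
node1 = content.searchIndex.FindByDnsName(None, 'mscs-node1', True)

# Second NIC for the private cluster heartbeat, attached to an
# internal-only port group (hypothetical name 'MSCS-Heartbeat')
nic_spec = vim.vm.device.VirtualDeviceSpec()
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic = vim.vm.device.VirtualE1000()
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName='MSCS-Heartbeat')
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, connected=True)
nic_spec.device = nic

node1.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))
```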

Limitations

  • iSCSI, NFS and FCoE are not supported
  • Mixed versions of ESX/ESXi not supported
  • Fault Tolerance (FT) not supported
  • vMotion not supported
  • N-Port ID Virtualisation (NPIV) not supported
  • NMP with Round Robin not supported
  • Must have VM hardware version 7
  • HA/DRS clusters are only supported on vSphere 4.1 and above

Configure a MSCS cluster on a single ESX/ESXi Host

Here are the steps required:

  1. Create the first node (VM) with two vNICs, configured for the required guest OS.
  2. Connect one NIC to the public network and the other to the private network for the cluster heartbeat (this could be an internal-only vSwitch).
  3. Install Windows Server on the first node.
  4. Clone the first node (or create a template) to create a second node.
  5. During guest operating system customisation, make sure you select Generate New Security ID (SID).
  6. Finish cloning the second node (a scripted sketch of the clone follows this list).
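
The clone itself could be scripted too. A sketch along the same lines, assuming the connection from the earlier snippet and a saved guest customisation spec (the name ‘Win2008-Cluster’ is hypothetical) whose changeSID option takes care of the new SID:

```python
from pyVmomi import vim

# Continues from the earlier sketch: 'si', 'content' and 'node1' in scope.
# 'Win2008-Cluster' is a hypothetical saved guest customisation spec.
spec_item = content.customizationSpecManager.GetCustomizationSpec(
    name='Win2008-Cluster')
spec_item.spec.options.changeSID = True   # 'Generate New Security ID (SID)'

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(),       # same host/datastore: cluster-in-a-box
    customization=spec_item.spec,
    powerOn=False)

# Clone node 1 into the same VM folder to create node 2
node1.CloneVM_Task(folder=node1.parent, name='mscs-node2', spec=clone_spec)
```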

Create a Quorum Disk

  1. Select the first node and add a new hard disk (1GB should be enough), making sure to select ‘Support clustering features such as Fault Tolerance’.
  2. Select a location for the quorum disk, shared storage or local. If using shared storage, the second node must point to the exact same location.
  3. Select a new virtual device node (e.g. SCSI 1:0).
  4. Finish the wizard.
  5. Edit the SCSI Controller type and make sure it’s set to LSI Logic Parallel (Windows Server 2003) or LSI Logic SAS (Windows Server 2008), and set SCSI Bus Sharing to Virtual (a scripted sketch follows this list).
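
For reference, here’s roughly what those wizard steps amount to via the API; a sketch, again assuming the earlier connection, that adds a SCSI controller with Virtual bus sharing and a 1GB eagerly zeroed quorum disk at SCSI 1:0 (eagerlyScrub is what the ‘Support clustering features such as Fault Tolerance’ checkbox sets):

```python
from pyVmomi import vim

# Continues from the earlier sketch: 'node1' is in scope.
# New SCSI controller on bus 1 with Virtual bus sharing (cluster-in-a-box);
# use vim.vm.device.VirtualLsiLogicController for Windows Server 2003
ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl = vim.vm.device.VirtualLsiLogicSASController()
ctrl.key = -101                                       # temporary negative key
ctrl.busNumber = 1
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.virtualSharing
ctrl_spec.device = ctrl

# 1GB quorum disk at SCSI 1:0, eagerly zeroed thick
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk = vim.vm.device.VirtualDisk()
disk.controllerKey = -101                             # attach to the new controller
disk.unitNumber = 0
disk.capacityInKB = 1024 * 1024
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode='persistent', thinProvisioned=False, eagerlyScrub=True)
disk_spec.device = disk

node1.ReconfigVM_Task(
    spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))
```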

Adding the Quorum Disk to the Second Cluster Node

  1. Go to the virtual machine settings of the second node and add a hard disk.
  2. Select ‘Use an existing virtual disk’.
  3. Select the same virtual device node as on the first virtual machine (e.g. SCSI 1:0).
  4. Browse to the location of the quorum disk (a scripted sketch follows this list).
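
And a matching sketch for the second node. Note there’s no file creation this time; we simply attach the existing quorum disk at the same virtual device node (the datastore path is a hypothetical example):

```python
from pyVmomi import vim

# Continues from the earlier sketches; look up node 2 the same way
node2 = content.searchIndex.FindByDnsName(None, 'mscs-node2', True)

# The shared SCSI controller (bus 1) must already exist on node 2
ctrl = next(d for d in node2.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController)
            and d.busNumber == 1)

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
# No fileOperation: we attach the existing quorum disk rather than create one
disk = vim.vm.device.VirtualDisk()
disk.controllerKey = ctrl.key
disk.unitNumber = 0                        # must match node 1: SCSI 1:0
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    fileName='[SharedDS] mscs-node1/quorum.vmdk',   # hypothetical path
    diskMode='persistent')
disk_spec.device = disk

node2.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```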

Configure a MSCS cluster across ESX/ESXi Hosts

This is very similar to the single box solution so I won’t detail every step. Each node resides on a separate ESX/ESXi host. Take note of the following differences:

  • Create the quorum disk as an RDM in physical compatibility mode.
  • The ESX/ESXi host must have at least three physical NICs: two for the MSCS cluster (one public and one private heartbeat) and one for the ESX Service Console (or ESXi VMkernel Management network).
  • SCSI Bus Sharing must be set to Physical (not virtual, because we’re clustering between two physical ESX/ESXi hosts).

Steps

  1. Create the first virtual machine (first node) as before, but don’t add the shared quorum disk yet.
  2. Clone the first node to the second node, and place it on the second ESX/ESXi host.
  3. Generate a new Security ID (SID) as before.
  4. Add the quorum RDM to the first node (physical compatibility mode) on a new virtual SCSI device (e.g. SCSI 1:0).
  5. Set SCSI bus sharing to Physical (steps 4 and 5 are sketched after this list).
  6. Add the quorum RDM to the second node, same process.
  7. Optionally add any other shared storage to each node.
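
Here’s how steps 4 and 5 might look scripted. The naa identifier is a hypothetical placeholder for your quorum LUN, and the controller now uses Physical bus sharing; an RDM’s capacity comes from the mapped LUN itself, so there’s no capacityInKB to set:

```python
from pyVmomi import vim

# Continues from the earlier sketches: 'node1' is in scope.
# SCSI controller on bus 1 with *Physical* bus sharing (cluster-across-boxes)
ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl = vim.vm.device.VirtualLsiLogicSASController()
ctrl.key = -201
ctrl.busNumber = 1
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
ctrl_spec.device = ctrl

# RDM in physical compatibility mode; fileOperation 'create' builds the
# mapping file, while the data stays on the LUN (hypothetical naa ID below)
rdm_spec = vim.vm.device.VirtualDeviceSpec()
rdm_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
rdm_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
rdm = vim.vm.device.VirtualDisk()
rdm.controllerKey = -201
rdm.unitNumber = 0
rdm.backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName='/vmfs/devices/disks/naa.60060160a0b01234',
    compatibilityMode='physicalMode')
rdm_spec.device = rdm

node1.ReconfigVM_Task(
    spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, rdm_spec]))
```

Run the same reconfiguration (minus the fileOperation, pointing at the existing mapping file) against the second node on the other host.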

Configure standby host clustering

With this configuration, the first node is a physical machine and the second is virtual. Here are the differences you need to be aware of:

  • Create the physical server first and attach the storage before creating the second (virtual) node.
  • Create the second (virtual) node and add, in physical compatibility mode, the RDM that is presented to the first (physical) server.
  • Configure the virtual adapters for the second node with the private heartbeat and public network (LAN).
  • Set SCSI bus sharing to Physical.