For my first post of 2013, I have decided to dive straight into sizing for VMware View 5.1. If you are planning a VMware View implementation, then at some stage you will need to look at sizing and calculate factors like how many desktops per View desktop pool, in addition to network configuration and storage considerations. The purpose of this article is to discuss sizing and configuration maximums for VMware View 5.1. Since VMware ESX 3.x, VMware has published a configuration maximums document for each version of vSphere, detailing the supported maximums for networking, compute, storage, vCenter, hosts, and even vCloud Director. Because there is no single ‘configuration maximums’ document for VMware View 5.1, I have included reference documents and material at the bottom of this article.
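Before getting into pool and infrastructure maximums, the first number most designs need is desktops per host. Here's a minimal Python sketch of that arithmetic; every input value is a placeholder I've picked for illustration, not a View 5.1 maximum or a recommendation, so substitute figures from your own desktop assessment.

```python
# Rough desktops-per-host estimate: a minimal sketch with placeholder numbers,
# not official View 5.1 maximums. Swap in figures from your own assessment.

host_ram_gb = 192          # physical RAM per ESXi host (assumed)
host_cores = 16            # physical cores per host (assumed)
desktop_ram_gb = 2         # RAM allocated per virtual desktop (assumed)
desktop_vcpus = 1          # vCPUs per virtual desktop (assumed)
vcpus_per_core = 8         # vCPU-to-core consolidation ratio you are comfortable with
memory_headroom = 0.9      # keep ~10% of RAM back for hypervisor and VM overhead

by_ram = int((host_ram_gb * memory_headroom) // desktop_ram_gb)
by_cpu = int((host_cores * vcpus_per_core) // desktop_vcpus)

desktops_per_host = min(by_ram, by_cpu)
print(f"RAM-bound: {by_ram}, CPU-bound: {by_cpu}, plan for: {desktops_per_host} desktops/host")
```

Whichever resource gives the lower figure is your constraint; with these example numbers the host is RAM-bound, which is typical for VDI workloads.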
VMware KB 1027217 details the ports required between all the components in VMware View 5.0, but I noticed there weren't any up-to-date diagrams illustrating this, so I’ve attached the View 5 Ports here. This won’t need much explanation, but there are a few key points to highlight (plus a quick port-check sketch after the list):
1. The replica View Connection Server detailed here is not a ‘slimmed down’ Connection Server, as both accept connections from the View Client and can tunnel connections. I’ve simply removed the PCoIP Gateway and HTTP(S) Secure Tunnel to keep the diagram tidy.
2. The JMS (Java Messaging Service) communication between the View Connection Server and desktop VM (View Agent) is very important and requires that the View Connection Servers be on the same low-latency LAN as the desktop VMs. This communication can also be encrypted by enabling ‘Message Security Mode’.
3. When using RDP from the Windows View Client, notice that the RDP session is established locally (to 127.0.0.1) against the View Client, which in turn connects to the desktop VM.
4. If using the Security Server as a PCoIP gateway or as a secure tunnel for RDP, the connection is established between the View Client and the Security Server, and then between the Security Server and the desktop VM (View Agent). In this configuration, the View Client does not connect to the desktop VM directly via RDP or PCoIP.
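If you want to sanity-check these paths from a client or jump box, here's a minimal Python sketch using only the standard library. The hostnames are hypothetical, and the port list reflects the commonly quoted View 5 TCP ports (HTTPS 443, JMS 4001, PCoIP 4172, RDP 3389); confirm the full list against KB 1027217, and note this only tests TCP, so the UDP side of PCoIP isn't covered.

```python
import socket

# Hypothetical hosts and commonly quoted View 5 TCP ports; verify against KB 1027217.
checks = [
    ("view-connection01.lab.local", 443),   # HTTPS from View Client to Connection Server
    ("view-connection01.lab.local", 4001),  # JMS from View Agent to Connection Server
    ("view-security01.lab.local", 4172),    # PCoIP (TCP side) via the Security Server
    ("win7-desktop01.lab.local", 3389),     # RDP direct to the desktop VM
]

for host, port in checks:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} NOT reachable ({exc})")
```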
Installing the Cisco Nexus 1000V distributed virtual switch is not that difficult once you have learned some new concepts. Before I jump straight into installing the Nexus 1000V, let’s run through the vSphere networking options and some of the reasons you’d want to implement the Nexus 1000V.
vSS (vSphere Standard Switch)
Often referred to as vSwitch0, the standard vSwitch is the default virtual switch vSphere offers you, and it provides essential networking features for virtualising your environment. Some of these features include 802.1Q VLAN tagging, egress traffic shaping, basic security, and NIC teaming. However, the vSS, or standard vSwitch, is an individual virtual switch on each ESX/ESXi host and has to be configured host by host. Most large environments rule this out as they need to maintain a consistent configuration across all of their ESX/ESXi hosts. Of course, VMware Host Profiles go some way to achieving this, but they still lack the features found in distributed switches.
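To see why per-host configuration becomes a drift problem at scale, here's a rough pyVmomi sketch that dumps each host's standard vSwitches and port groups so you can compare them side by side. The vCenter address and credentials are hypothetical, and the unverified SSL context is lab-only shorthand.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter and credentials; lab-only SSL handling.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                    [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        # Standard vSwitches are defined per host, so each host can drift independently.
        for vss in host.config.network.vswitch:
            print(f"  {vss.name}: uplinks={list(vss.pnic)}, ports={vss.numPorts}")
        for pg in host.config.network.portgroup:
            print(f"    portgroup {pg.spec.name} (VLAN {pg.spec.vlanId}) on {pg.spec.vswitchName}")
    hosts.Destroy()
finally:
    Disconnect(si)
```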
vDS (vSphere Distributed Switch)
So the vDS, also known as the DVS (Distributed Virtual Switch), provides a single virtual switch that spans all of the hosts in the cluster, which makes configuring multiple hosts in the virtual datacenter far easier to manage. The features available with the vDS include 802.1Q VLAN tagging as before, but also ingress/egress traffic shaping, PVLANs (Private VLANs), and Network vMotion. The key benefit of a distributed virtual switch is that you only have to manage a single switch.
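The contrast with the sketch above is that the distributed switch is a single vCenter-level object rather than one object per host. A short pyVmomi sketch against the same hypothetical vCenter shows each dvSwitch and the hosts it spans:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Same hypothetical vCenter and lab-only SSL handling as the previous sketch.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    dvs_view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.DistributedVirtualSwitch], True)
    for dvs in dvs_view.view:
        # One vCenter-level switch object shared by every member host.
        members = [h.name for h in dvs.summary.hostMember]
        portgroups = [pg.name for pg in dvs.portgroup]
        print(f"{dvs.name}: spans {len(members)} hosts, portgroups={portgroups}")
    dvs_view.Destroy()
finally:
    Disconnect(si)
```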
Cisco Nexus 1000V
In terms of features and manageability, the Nexus 1000V is over and above the vDS: it will be immediately familiar to those with existing Cisco skills, and it adds a heap of features that the vDS can’t offer, for example QoS tagging, LACP, and ACLs (Access Control Lists). Recently I have come across two Cisco UCS implementations which require the Nexus 1000V to support PVLANs in their particular configuration (due to the Fabric Interconnects using End-Host Mode). There are many reasons one would choose to implement the Cisco Nexus 1000V; let’s call it N1KV for short.
Whilst working on a Vblock 300 implementation a few weeks ago, I had an interesting conversation with one of the network architects at VCE about best practices surrounding 10Gb and 1Gb networking. Traditionally with 1Gb networking it is best practice to separate traffic on your ESX/ESXi hosts with vSwitches (or dvPortGroups) dedicated to each type of traffic (vMotion, Management, Storage, production networking), and designs will typically contain 6 to 8 NICs per host. With the introduction of 10Gb networking, I’ve noticed that some implementations have neglected some important design considerations around the use of 10Gb networking. Let’s say we present 4 x 10Gb NICs to each host (these are vNICs in the Cisco UCS world), or we present 6 x 1Gb NICs using the traditional method of separating the traffic into various dvPortGroups. Which is best? Can we get away with just 2 x 10Gb NICs or do we need more? The key consideration here isn’t how many NICs (or vNICs) are presented to each host, but rather how much network bandwidth is available to each traffic type (i.e. vMotion, FT logging, VM traffic) and, critically, how we control it.
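One way to reason about "how much bandwidth does each traffic type get" is share-based allocation, which is the spirit of Network I/O Control on the vDS. Here's a minimal Python sketch of that arithmetic; the share values are made up for illustration and are not VMware's NIOC defaults, and the model is deliberately simplified (it treats the team as one pipe).

```python
# Share-based bandwidth allocation under full contention: a simplified sketch
# with made-up share values, not VMware's NIOC defaults.

uplink_gbps = 2 * 10  # e.g. 2 x 10GbE uplinks per host

shares = {            # illustrative values only
    "management":  10,
    "vmotion":     25,
    "ft_logging":  25,
    "ip_storage":  50,
    "vm_traffic": 100,
}

total = sum(shares.values())
for traffic, share in shares.items():
    worst_case = uplink_gbps * share / total
    print(f"{traffic:11s} share {share:3d} -> at least {worst_case:.1f} Gbps under contention")
```

The useful property is that when the links are idle any traffic type can burst, but under contention each type is still guaranteed its proportion, which is what the 6-to-8-NIC physical separation was really buying you in the 1Gb world.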
Networking is a critical component of any virtual infrastructure, and it’s often the management networks that are overlooked. Back in the days before virtualisation, management networks were considered less important than production, and that wasn’t too much of a big deal as they only provided console access (e.g. iLO, DRAC), SNMP monitoring, web interfaces, and so on; they just didn’t impact production. I have noticed that this mindset has crept into some vSphere designs, where management interfaces lack any form of redundancy. Why is this so important? Well, for starters, HA relies on the management network for its heartbeats, and vMotion and Fault Tolerance (FT) logging also run over VMkernel networking, often sharing the same uplinks. In fact, if your management interface has no redundancy you’ll get a warning, as described in VMware KB 1004700, and losing that network can isolate a host and break these features. In addition, you may be faced with mixed ESX and ESXi environments where the management networks differ: management on ESX uses the Service Console network, whereas ESXi uses a VMkernel network called Management Network. Duncan Epping of Yellow Bricks has an excellent article on VMware HA here; also check out Frank Denneman and Duncan Epping’s HA and DRS Technical Deepdive book, which is an excellent read and one I recommend especially if you’re studying for the VCAP-DCA.
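If you want a quick way to spot hosts in the state KB 1004700 warns about, here's a rough pyVmomi sketch against the same hypothetical vCenter as earlier. It assumes the default ESXi portgroup name ‘Management Network’ and simply counts the uplinks on that portgroup's parent vSwitch, so treat it as a starting point rather than a definitive health check.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter; assumes the default ESXi "Management Network" portgroup name.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                    [vim.HostSystem], True)
    for host in hosts.view:
        net = host.config.network
        for pg in net.portgroup:
            if pg.spec.name != "Management Network":
                continue
            # Count uplinks on the parent vSwitch; a single vmnic means no redundancy.
            parent = next(v for v in net.vswitch if v.name == pg.spec.vswitchName)
            uplinks = list(parent.pnic)
            status = "OK" if len(uplinks) >= 2 else "NO REDUNDANCY"
            print(f"{host.name}: {len(uplinks)} uplink(s) on {parent.name} -> {status}")
    hosts.Destroy()
finally:
    Disconnect(si)
```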