I must admit that I had very little time to prepare for this exam, but that said I do have a pretty good home lab environment with two good-sized ESXi hosts, iSCSI storage, VLANs and most of the features deployed that are part of the VCAP5-DCA exam blueprint. Having done the VCAP4-DCA last year I expected much of the same this time around, but I was mistaken. Sure, many of the blueprint topics share common ground, but this exam tests your experience with vSphere 5 and you are expected to perform many of the tasks with your eyes shut. Well, not literally, but much of it needs to be second nature to you. [Read more…] about VCAP5-DCA Exam Passed, Experience and Thoughts
This year I’ve been working on a VMware design for a large enterprise customer, and had various conversations with the solutions team on everything from storage sizing to networking (that was one day!). This prompted me to address one topic that I feel deserves more attention: the Cisco Nexus 1000V. If you are new to the 1000V virtual switch, then you might want to read the guide I published back in April 2012 on How to Deploy the Cisco Nexus 1000V. For now, grab a coffee and let’s begin with load-balancing policies… [Read more…] about Cisco Nexus 1000V NIC Teaming and Load Balancing
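If you’d like a feel for what’s at stake before reading the full post: with a MAC-based teaming policy, every frame from a given VM hashes to the same physical uplink, so a single VM can never use more than one uplink’s worth of bandwidth. Here is a minimal Python sketch of that idea; the hash function and uplink names below are illustrative, not Cisco’s actual algorithm.

```python
# Illustrative sketch of source-MAC-based load balancing: frames from the
# same MAC always hash to the same uplink, pinning each VM to one NIC.
import zlib

UPLINKS = ['Eth3/1', 'Eth3/2']  # placeholder uplink names

def pick_uplink(src_mac):
    # Deterministic hash of the source MAC, modulo the number of uplinks.
    return UPLINKS[zlib.crc32(src_mac.encode()) % len(UPLINKS)]

for mac in ('00:50:56:aa:01:01', '00:50:56:aa:01:02', '00:50:56:aa:01:03'):
    print(mac, '->', pick_uplink(mac))
```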
VMware View has offered the ability to serve your desktops as linked clones since View 3.0 with View Composer, but with View 5.1 I still get asked many questions about how linked clones work, how snapshots are involved, what delta files are, and what other files make up each linked clone virtual desktop. You are probably already familiar with VMDKs (virtual machine disks) and snapshots, but the process View Composer uses to create linked clones may still be a bit of a mystery to you. Since the addition of View Storage Accelerator (VSA) in View 5.1 there are also some additional files that are created. This article will describe the files used by linked clones. [Read more…] about Understanding VMware View 5.1 Linked Clones
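If you want to poke at these files yourself, here is a minimal pyVmomi sketch that prints a VM’s file layout; the vCenter address, credentials and the desktop name ‘linkedclone-01’ are placeholders. The replica-backed delta disks, the internal disk and, where VSA is enabled, the digest files all show up in this listing.

```python
# Minimal pyVmomi sketch: print the files backing a (linked-clone) VM.
# Host, credentials and VM name are placeholders - substitute your own.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.lab.local', user='administrator',
                  pwd='password', sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'linkedclone-01')
    # File types include diskDescriptor, diskExtent, snapshotData and,
    # with View Storage Accelerator, digestDescriptor / digestExtent.
    for f in vm.layoutEx.file:
        print('%-20s %12d  %s' % (f.type, f.size, f.name))
finally:
    Disconnect(si)
```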
VMware KB 1027217 details the ports required between all the components in VMware View 5.0, but I noticed there were not any up-to-date diagrams illustrating this, so I’ve attached the View 5 ports diagram here. This won’t need much explanation, but there are a few key points to highlight (a quick port-check sketch follows the list):
1. The replica View Connection Server detailed here is not a ‘slimmed down’ Connection Server, as both accept connections from the View Client and can tunnel connections. I’ve simply removed the PCoIP Gateway and HTTP(S) Secure Tunnel to keep the diagram tidy.
2. The JMS (Java Message Service) communication between the View Connection Server and the desktop VM (View Agent) is very important and requires that the View Connection Servers be on the same low-latency LAN as the desktop VMs. This traffic can also be encrypted by enabling ‘Message Security Mode’.
3. When using RDP from the Windows View Client, notice that the RDP session is established locally (against 127.0.0.1), and the View Client then carries it over its connection to the desktop VM.
4. If using the Security Server as a PCoIP gateway or secure tunnelling for RDP, the connection is established between the View Client and the Security Server, and then between the Security Server and the desktop VM (View Agent). In this configuration, the View Client does not connect to the desktop VM directly via RDP or PCoIP.
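As mentioned above, here is a rough Python sketch for testing reachability of some of these ports from a client machine. The hostnames are placeholders and the port list is only a small subset of KB 1027217; note that PCoIP also runs over UDP 4172, which a plain TCP connect cannot verify.

```python
# Hedged connectivity check for a handful of View 5 TCP ports.
# Hostnames below are placeholders - substitute your own servers.
import socket

CHECKS = [
    ('view-cs.lab.local',   443,   'HTTPS to Connection Server'),
    ('view-cs.lab.local',   4001,  'JMS (View Agent <-> Connection Server)'),
    ('desktop01.lab.local', 3389,  'RDP to desktop VM'),
    ('desktop01.lab.local', 4172,  'PCoIP session setup (TCP side only)'),
    ('desktop01.lab.local', 32111, 'USB redirection'),
]

for host, port, desc in CHECKS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((host, port))
        print('OPEN    %s:%-5d %s' % (host, port, desc))
    except (socket.timeout, OSError) as err:
        print('BLOCKED %s:%-5d %s (%s)' % (host, port, desc, err))
    finally:
        s.close()
```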
Whilst working on a Vblock 300 implementation a few weeks ago I had an interesting conversation with one of the network architects at VCE about best practices for 10Gb and 1Gb networking. Traditionally, with 1Gb networking it is best practice to separate traffic on your ESX/ESXi hosts with vSwitches (or dvPortGroups) dedicated to each type of traffic (vMotion, management, storage, production networking), and designs will typically contain 6 to 8 NICs per host. With the introduction of 10Gb networking, I’ve noticed that some implementations have neglected some important design considerations. Let’s say we present 4 x 10Gb NICs to each host (these are vNICs in the Cisco UCS world), or we present 6 x 1Gb NICs and separate the traffic into various dvPortGroups in the traditional way. Which is best? Can we get away with just 2 x 10Gb NICs, or do we need more? The key consideration here isn’t how many NICs (or vNICs) are presented to each host, but rather how much network bandwidth is available to each traffic type (i.e. vMotion, FT Logging, VM traffic) and, critically, how we control it. [Read more…] about Designing vSphere for 10Gb converged networking, with Cisco UCS, Nexus 1000V and NetIOC
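To put some numbers behind the ‘how we control it’ point, here is a back-of-the-envelope Python sketch of how NetIOC shares would carve up a saturated 10Gb uplink. The share values are illustrative design inputs, not VMware defaults.

```python
# Back-of-the-envelope NetIOC maths: under contention, each traffic type
# gets bandwidth in proportion to its shares. Values are illustrative.
UPLINK_GBPS = 10  # one physical 10Gb NIC

shares = {
    'Management': 10,
    'vMotion':    25,
    'FT Logging': 25,
    'iSCSI':      50,
    'VM traffic': 100,
}

total = sum(shares.values())
print('Worst-case bandwidth per traffic type on one saturated 10Gb uplink:')
for name, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    gbps = UPLINK_GBPS * s / total
    print('  %-12s %3d shares -> %.2f Gbps' % (name, s, gbps))
```

Under contention each traffic type is guaranteed its proportional slice, and when the uplink is idle any traffic type can burst towards the full 10Gb.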