If you are looking to deploy multiple ESX/ESXi servers then there are plenty of methods and tools available, some more complex than others. There are vendor-specific deployment products such as the HP Rapid Deployment Pack (RDP), which uses Altiris, and free deployment tools such as the ESX Deployment Appliance (EDA) and the Ultimate Deployment Appliance (UDA). UDA is my favourite tool for the job as it offers great flexibility, such as the use of subtemplates (discussed later), so it forms the basis of this article. It was created by Carl Thijssen and, thanks to Mike Laverick of RTFM, it also supports ESX/ESXi deployments; the latest build supports ESX/ESXi 4.1.
In this article I detail the steps required to configure your vMA as a Syslog server, and configure your ESX/ESXi hosts to send logging information to the vMA. Logging is often overlooked, but when managing multiple hosts it is far easier to send your logs to a Syslog server. I’m studying for the VCAP-DCA exam, and using vicfg-syslog is a requirement of the exam (Section 6.1) and the vMA is also essential to understand (Section 8.1). I hope my notes help you as they have helped me.
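As a taster of what the configuration involves, the commands below run from the vMA and use vicfg-syslog to check and then set a host's remote syslog target. The hostname, IP address, and credentials are made-up examples; substitute your own.

```shell
# Show the current syslog configuration of a host (example hostname):
vicfg-syslog --server esx01.lab.local --username root --show

# Point the host at the vMA (example IP) as its remote syslog server:
vicfg-syslog --server esx01.lab.local --username root \
  --setserver 192.168.1.20 --setport 514
```

Port 514/UDP is the standard syslog port, so remember to allow it through any firewall sitting between your hosts and the vMA.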
VMware have released vSphere 4.1 Update 1, which adds support for additional operating systems (RHEL 6, RHEL 5.6, SLES 11 SP1 for VMware, Ubuntu 10.10, and Solaris 10 Update 9). ESX/ESXi 4.1 Update 1 now supports 160 logical processors. Looking at the number of patches for ESX and ESXi below, I wonder whether this will be the last release of ESX in favour of ESXi.
VMware HA (High Availability) admission control is something I wanted to understand better, so I started making notes gathered from various sources on the subject, and in particular on the way slot sizes are calculated. Duncan Epping’s Yellow Bricks site already covers HA very well and I bow down to his knowledge on the subject; it is well worth checking out. I would also strongly recommend VMware vSphere 4.1 HA and DRS Technical Deepdive by Duncan Epping and Frank Denneman, which I purchased from Comcol.nl; they shipped it to me in the UK in just two days.
That said, I thought I would share my own views and the notes I have taken on the subject. The vSphere Availability guide states: “A slot is a logical representation of memory and CPU resources. By default, it is sized to satisfy the requirements for any powered-on virtual machine in the cluster.” In simple terms, a slot can be consumed by a single virtual machine, but a virtual machine may consume more than one slot.
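The default calculation can be sketched in a few lines of shell: the slot's CPU size is the largest CPU reservation among powered-on VMs, and the memory size is the largest memory reservation (plus overhead, which I ignore here for simplicity). The VM reservations and host capacities below are made-up numbers purely for illustration:

```shell
# Each pair is a powered-on VM's "CPU reservation (MHz):memory reservation (MB)".
# The slot size takes the largest value seen in each dimension.
cpu_slot=0
mem_slot=0
for pair in "500:1024" "2000:2048" "256:512"; do
  cpu=${pair%%:*}
  mem=${pair##*:}
  [ "$cpu" -gt "$cpu_slot" ] && cpu_slot=$cpu
  [ "$mem" -gt "$mem_slot" ] && mem_slot=$mem
done

# A host's slot count is the more constrained of CPU slots and memory slots.
host_cpu=11200   # MHz available for VMs on one host (example)
host_mem=24576   # MB available on the same host (example)
cpu_slots=$((host_cpu / cpu_slot))
mem_slots=$((host_mem / mem_slot))
if [ "$cpu_slots" -lt "$mem_slots" ]; then slots=$cpu_slots; else slots=$mem_slots; fi

echo "Slot size ${cpu_slot}MHz/${mem_slot}MB gives $slots slots on this host"
```

Because the largest reservation wins, a single VM with a big CPU or memory reservation shrinks the slot count for the whole cluster, which is exactly why slot sizes catch people out.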
If you are involved in DR for your organisation’s IT infrastructure and are replicating virtual machine VMFS datastores, then you may be familiar with DisallowSnapshotLUN in ESX 3.x. Let’s start with some background on what these advanced settings are and why they exist.
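For context, on an ESX 3.x host these advanced settings are read and changed with esxcfg-advcfg from the service console. A minimal sketch; verify the current values in your own environment before changing anything:

```shell
# Query the current value of the advanced setting:
esxcfg-advcfg -g /LVM/DisallowSnapshotLun

# Setting it to 0 allows VMFS volumes on snapshot/replicated LUNs
# to be mounted without resignaturing:
esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLun
```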
Virtualization changed the landscape for disaster recovery some time ago, and most businesses have since embraced SAN storage replication for DR (see my other post). This is old news now, but unless your SAN vendor integrates with something like VMware Site Recovery Manager (SRM), you will have a number of manual tasks in your DR recovery process.
The third part of this series continues with the vSphere build on my whitebox server, the Asus Rampage II Extreme with Intel Core i7 2.8GHz and a 120GB SSD. Following on from the video in part 2 where we installed ESXi onto the USB drive, we are now ready to access the physical ESXi host and start creating some virtual machines. Since this is a home vSphere lab environment, accessing the lab from anywhere (not just at home) is a major advantage for me, so I’ll be taking you through the steps to create a Microsoft Windows Server 2008 R2 virtual machine with an RD Gateway (Remote Desktop Gateway). We will also need shared storage in order to use vMotion, so I will also guide you through the setup of an OpenFiler iSCSI virtual SAN.
Way back when VMware VI3 was released in 2006 (doesn’t time fly!), I built a home-brew lab server for ESX 3.0 and used it partly to study for my VCP exam. That particular machine is now my home theatre PC (HTPC) as it wouldn’t stand a chance of running VMware vSphere, so here is my mission to build a whitebox VMware vSphere lab server. I must also give credit to Simon Seagrave and Simon Gallagher for their vSphere lab server articles, which inspired me to do something about it and build a vSphere lab at home. Both have lots of great articles on building a vSphere lab, and I urge you to visit their sites.
Snapshots are a fantastic way of providing a quick and reliable method of rolling back the state of a virtual machine, should something go astray following a patch or update. VMware VCB also uses virtual machine snapshots to quiesce the VM prior to taking the backup data.
However, in larger environments where there may be tens or hundreds of VMware ESX servers, snapshots can also be a pain in the backside if there is no control over who is using them. Why? Because snapshots work by creating a delta VMDK that records changed blocks, a process called copy-on-write (COW). Over time the delta VMDK file will grow, and depending on the level of I/O within the VM it will grow faster on some virtual machines than on others.
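One way to keep an eye on this is to list the delta disks sitting on your datastores. A minimal sketch, assuming datastores are mounted under the usual /vmfs/volumes path (pass another path as the first argument to check elsewhere):

```shell
# List snapshot delta disks beneath a datastore root. Each active snapshot
# leaves a *-delta.vmdk file that grows as blocks change in the guest.
list_snapshot_deltas() {
  find "$1" -name '*-delta.vmdk' 2>/dev/null
}

# Default to the standard ESX mount point; don't fail if it's absent.
list_snapshot_deltas "${1:-/vmfs/volumes}" || true
```

Feeding the output through `ls -lh` or `du -h` shows how large each delta has grown, which quickly highlights forgotten snapshots.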
The danger only presents itself if the datastore where the VMDK resides reaches its capacity. When this happens, virtual machines that are not thin-provisioned should continue to run without problems, but consider these situations:
1) You have other virtual machines in the same datastore using snapshots.
2) You have one or more virtual machines on thin-provisioned disks.
3) You have powered off virtual machines, that need to be powered on.
In all of the above scenarios, if the datastore is full then the affected virtual machines will be suspended (paused). Virtual machines with thick-provisioned disks and no snapshots will continue to operate, as the VMDK already has its full allocation of storage space. Virtual machines that are powered off and need to be powered back on will fail to start, as there won’t be enough disk space to create the virtual swap file.
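A simple guard is to check datastore usage before it gets anywhere near full. A rough sketch using standard df output; the datastore mount point and the 90% threshold are example values to adjust for your environment:

```shell
# Warn when a mounted datastore crosses a usage threshold (percent).
check_datastore() {
  path="$1"
  limit="$2"
  # Capacity column of POSIX df output, with the trailing % stripped.
  used=$(df -P "$path" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used" -ge "$limit" ]; then
    echo "WARNING: $path is ${used}% full (threshold ${limit}%)"
    return 1
  fi
  echo "OK: $path is ${used}% full"
}

# Example invocation; replace / with your datastore mount point,
# e.g. /vmfs/volumes/datastore1. The || true keeps the sketch from
# aborting a calling script on a breach.
check_datastore "${1:-/}" "${2:-90}" || true
```

The non-zero return code on a breach makes this easy to wire into a cron job or monitoring check, so you hear about a filling datastore before the VMs pause.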