Way back when VMware VI3 was released in 2006 (doesn’t time fly!), I built a home-brew lab server for ESX 3.0 and used it partly to study for my VCP exam. That particular machine is now my home theatre PC (HTPC), as it wouldn’t stand a chance of running VMware vSphere, so here is my mission to build a whitebox VMware vSphere lab server. I must also give credit to Simon Seagrave and Simon Gallagher for their vSphere lab server articles, which have inspired me to do something about it and build a vSphere lab at home. They have lots of great articles on building a vSphere lab, and I urge you to visit their sites.
Reasons for a home lab?
Throughout my entire IT career I have used equipment at home to help study for certifications and enhance my technical knowledge. As an IT professional you’ll not only need a lab to study for your certifications, but you’ll also need a test-bed to experiment and try things for the first time. There have been occasions as a technical consultant when a fairly niche customer requirement has presented itself, one that calls for very specialist skills. Having the flexibility to connect remotely to my home lab and test a procedure or new technology gives me the confidence to carry out that piece of work in front of the customer, even if it’s the first time I’ve done it. We all have to start somewhere, and there is a first time for everything… better that it’s in a test environment than a live production system though!
Ever since I first started in IT back in 1997, I have always had a home server, lab, test-bed or whatever you want to call it. I remember installing NT4 on a home built server in 1997, promoting it to a PDC and then configuring WINS and DHCP. I even had a couple of Cisco 2500 routers so I could emulate a serial connection between two subnets, and at the time I felt that I was at the cutting edge. For the past 5 years, VMware has made it a whole lot easier to create test and development platforms. VMware Workstation is the saviour of many developers and server administrators as it allows them to very easily create virtual servers for testing, experimenting or learning, whether it’s on their laptop or home PC.
The most important factor of having a home lab, for me personally, has been the freedom to access a broad range of server technologies with which I can do whatever I like, from build to destruction.
I desperately wanted to avoid buying a branded server from HP or Dell, because getting anything near the specification of my whitebox would cost a fortune. Not only that, but the ML115 G5 is becoming very hard to get hold of unless you find a bargain on eBay. The later ML115 G6 is reported to work, but its B110i storage controller won’t. I also find that this type of server can be very noisy, and my wife doesn’t understand the novelty of these things when they’re powered on in the spare bedroom!
Officially you would need to buy a server that is listed on the VMware HCL, but thankfully VM-Help.com have put together a whitebox HCL, which includes the Asus Rampage II motherboard. My intention is also to build a system that I can put to other uses in the future, such as a gaming PC, should later versions of vSphere not be compatible for any reason.
So for me the decision is clear… my own whitebox vSphere server!
My VMware vSphere ‘Whitebox’ Server Kit List
MOTHERBOARD: Asus Rampage II Extreme Intel X58 1366
CPU: Intel i7 930 2.8GHz Socket 1366 8MB L3 Cache
RAM: 12GB OCZ Gold Low Voltage (6x2GB) DDR3 PC3-16000C10 2000MHz Triple Channel
NETWORK: Intel Pro/1000 MT Dual port 1GbE (the two onboard GbE NICs don’t work with vSphere)
SSD: OCZ 120GB Vertex 2E SSD 2.5″ SATA-II Read = 285MB/s, Write = 275MB/s 50,000 IOPS
GRAPHICS: Asus HD 4350 512MB DDR2 DVI VGA HDMI Out PCI-E Low Profile Graphics Card
PSU: Antec TruePower New 650W Modular PSU
OTHER: LiteOn IHAS124-19 24x DVD±RW DL & RAM SATA Optical Drive
CASE: Antec Plusview 1000AMG
Network Switch: Linksys SLM2008
Thanks to Techhead for pointing this out. This is ideal for my home lab environment as it has 8 x 10/100/1000 ports and supports 802.1Q VLAN segmentation and jumbo frames (9KB). It can even be powered via PoE (Power over Ethernet) on the first port, although I doubt I’ll use that. It is an awesome switch and pretty cheap at around £70.
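To actually make use of the switch’s 802.1Q and jumbo frame support, the ESXi side needs matching configuration. A rough sketch using the ESX/ESXi 4.x `esxcfg-*` commands (the port group name, VLAN ID and vmkernel IP below are just illustrative placeholders for my lab):

```shell
# Create a port group on vSwitch0 and tag it with VLAN 10
# (the physical switch port must be set to trunk/tagged mode for VLAN 10)
esxcfg-vswitch -A "VM Network VLAN10" vSwitch0
esxcfg-vswitch -v 10 -p "VM Network VLAN10" vSwitch0

# Raise the vSwitch MTU to 9000 to allow jumbo frames end-to-end
esxcfg-vswitch -m 9000 vSwitch0

# List vSwitches/port groups to confirm the VLAN tag and MTU took effect
esxcfg-vswitch -l
```

Jumbo frames only help if every hop (vSwitch, physical switch and any vmkernel interface) is configured for the larger MTU, so it’s worth verifying with `esxcfg-vswitch -l` after each change.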
Total Cost: £970 (inc VAT and Postage)*
*Best price I could find for all components listed as of December 2010
To help understand what features my setup will support, I’ve put together this table…
Features & Hardware Requirements
|Feature|Hardware Requirement|Meets Requirement?|
|---|---|---|
|DPM|Wake-on-LAN, Server IPMI or HP iLO|Yes|
|Dynamic Voltage & Frequency Scaling|Intel Enhanced SpeedStep / AMD Enhanced PowerNow!|Yes|
|Fault Tolerance|Intel Core 2, Core i7 / AMD 3rd Generation Opteron family|Yes*|
|VMotion|Similar CPU family (not cross-vendor)|Yes|
|Enhanced VMotion Capability|Intel FlexMigration / AMD-V Extended Migration|Yes|
* Fault Tolerance would need more than one ESX host in an HA cluster, and a processor that supports virtualisation (VT); however, the virtual ESX hosts will not expose VT to their guests, so Fault Tolerance cannot be used.
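The Wake-on-LAN requirement for DPM is easy to test from any machine on the LAN: a WoL “magic packet” is simply six 0xFF bytes followed by the target NIC’s MAC address repeated 16 times, broadcast over UDP. A minimal sketch (the MAC address in the usage comment is just a placeholder, not my NIC’s real address):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on UDP port 9 (the discard port)."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example: send_wol("00:1b:21:aa:bb:cc")
```

Useful for checking that the BIOS and NIC actually honour WoL before relying on DPM to power the host back on.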
So here is the case I’m using: an Antec Plusview 1000AMG, which is larger than I would normally use, but as it was just sitting in my garage I decided to put it to good use. It will sit quite happily in my spare room under the desk, and the only fans I intend to use in this case are the CPU fan and, obviously, the built-in PSU fan. Other than those two, the graphics card and motherboard are silent, which means that my wife won’t get too upset!
In part 2 I’ll show you the hardware build, configuration and then the installation of ESXi 4.1 and vCenter.
Update: I have had issues with the Asus Rampage II Extreme detecting 8GB instead of 12GB memory, see this link for further information.