17 Oct 2016 by rayheffer
This has been an exciting time for the IT industry. At VMworld US 2016 (August 29th 2016) we had the announcement of VMware Cloud Foundation becoming an integral part of IBM SoftLayer, followed by the news of the strategic partnership between VMware and Amazon Web Services (AWS) on October 13th 2016. VMware Cloud Foundation is a shift in cloud infrastructure that enables the Software Defined Data Center (SDDC). This is significant because what we know as the SDDC, with technologies such as VMware Horizon, NSX, and Virtual SAN, can now be consumed and offered by service providers in a unique way.
At the core are SDDC Manager and lifecycle management (LCM), which allow fully automated deployment, configuration, patching, and upgrades. But what does the architecture look like behind VMware Cloud Foundation? Let’s take a closer look.
|Acronym|Meaning|
|---|---|
|VRM|Virtual Resource Manager (formerly EVO Rack Manager)|
|PRM|Physical Resource Manager|
|LRM|Logical Resource Manager|
|VIA|Virtual Imaging Appliance|
|OHMS|Open Hardware Management System|
|ISVM|Infrastructure Virtual Machine (Cassandra Database)|
I’ll discuss the physical architecture in more detail shortly, so let’s start by covering the management workload domain. Each rack contains a management cluster, managed by vCenter. The management cluster contains all of the management infrastructure virtual machines, such as the SDDC Manager components (ISVM, LCM, VRM), Platform Services Controllers (PSC), vRealize Operations, vRealize Log Insight, and NSX, to name a few. If a VDI workload domain is deployed (more on that in a moment), then View Composer, Security Servers, and Connection Servers are also deployed here.
In addition to the management workload domain, you can deploy VI (Virtual Infrastructure) and VDI (Virtual Desktop Infrastructure) workload domains. Using the VDI workload domain as an example (see screenshot), you can choose whether to simply reserve VDI resources, allowing you to create desktop pools as you see fit, or deploy full or linked clone desktops as part of the workload domain creation process. The desktop deployment type determines whether SDDC Manager will also deploy Security Servers for external access, or whether desktops will be accessed over the LAN.
By following the remaining steps of the workload domain creation process, other options become available, such as compute sizing, network configuration, and joining the VDI workload domain to an existing or new (internal) Active Directory domain. VMware App Volumes can also be deployed as part of the VDI workload domain.
A virtual infrastructure (VI) domain is just that. It’s a cluster of up to 64 hosts (refer to vSphere maximums), and a vCenter Server virtual appliance (vCSA).
One of the real benefits of VMware Cloud Foundation is the in-built lifecycle management. Using SDDC Manager, administrators can carry out patching and upgrades when they are available. The concept of workload domains also allows for each one to be upgraded separately. For instance, a VDI workload domain can be upgraded independently of a Virtual Infrastructure (VI) workload domain. Patching and upgrades can be carried out on ESXi hosts, NSX, Virtual SAN, and the OHMS.
There are three ways to deploy VMware Cloud Foundation.
If you decide to go down the path of using VSAN Ready Nodes (option 2), then be sure to check the VMware Compatibility Guide, as it contains details of all supported components.
Let’s take a look at what VMware Cloud Foundation looks like from a physical perspective. A VCF instance starts with a single rack, and scales up to 8 racks, each containing up to 32 hosts per rack (recently increased from 24). This gives us a total of 256 hosts per VCF instance. Each rack contains two top-of-rack (ToR) switches and a management switch, and racks 2-8 are connected to spine switches (not shown here) in the first or second rack.
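As a quick sanity check on those numbers, here is a minimal sketch of the scaling arithmetic described above; the constants come straight from the paragraph, while the function name is my own:

```python
# Maximums described above: up to 8 racks per VCF instance,
# and up to 32 hosts per rack (recently increased from 24).
MAX_RACKS = 8
HOSTS_PER_RACK = 32

def max_hosts(racks: int) -> int:
    """Total ESXi hosts in a VCF instance with the given rack count."""
    if not 1 <= racks <= MAX_RACKS:
        raise ValueError("a VCF instance spans 1 to 8 racks")
    return racks * HOSTS_PER_RACK

print(max_hosts(8))  # 256 hosts at full scale
```

A single-rack starting point (`max_hosts(1)`) gives 32 hosts, scaling linearly as racks are added.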
[Image: VMware Cloud Foundation - Compute Resources]

A key point to remember is that physical compute, storage, and network infrastructure become part of a single shared pool of resources. This allows customers to deploy workload domains, consuming the available resources as needed.
Storage is provided by Virtual SAN, in either hybrid (SSD and HDD) or all-flash (all SSD) deployments. VSAN is configured with two disk groups per host, each containing one caching disk (SSD) and up to seven capacity disks. In all-flash deployments these are all SSDs, although there is still a dedicated caching device for writes.
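To put rough numbers on that disk-group layout, here is a sketch of the raw capacity calculation. The disk size is a placeholder of my choosing, and this deliberately ignores the caching devices and VSAN storage policies (e.g. failures to tolerate), which reduce usable capacity:

```python
# Per the layout above: two disk groups per host, each with one
# caching device (not counted toward capacity) and up to seven
# capacity disks. The disk size is an illustrative placeholder.
DISK_GROUPS_PER_HOST = 2
CAPACITY_DISKS_PER_GROUP = 7    # maximum per disk group
CAPACITY_DISK_TB = 1.92         # hypothetical capacity SSD size in TB

def raw_capacity_tb(hosts: int) -> float:
    """Raw (pre-policy) Virtual SAN capacity for a cluster, in TB."""
    per_host = DISK_GROUPS_PER_HOST * CAPACITY_DISKS_PER_GROUP * CAPACITY_DISK_TB
    return hosts * per_host

print(raw_capacity_tb(4))  # a 4-host cluster -> 107.52 TB raw
```

With those assumptions, each host contributes 26.88 TB of raw capacity before any storage policy overhead is applied.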
There is no fibre channel storage available to VCF, but IP-attached storage such as NFS or iSCSI can be accessed over the data center network. Virtual SAN 6.2 also provides the option of stretched clusters when VCF is deployed across two data centers.
VMware NSX is a key component of Cloud Foundation and is deployed as part of the VCF deployment process. When workload domains are created, the workload vCenter and the NSX Manager for that domain are deployed in the management workload domain, while the NSX Controllers reside in the workload domain being deployed. Another option is to provide a separate workload domain for the NSX Edge cluster in the first rack, or even outside of VCF.
I’d strongly recommend reading the NSX Network Virtualization Design Guide to understand NSX concepts in more detail.
Each rack contains a pair of top-of-rack (ToR) switches, and each host is connected to the ToRs via dual 10GbE links (one to each switch). There is an out-of-band management VLAN, and each host’s 1GbE management port (e.g. IPMI) is connected to the rack’s management switch. When additional racks are deployed, the ToR switches are connected to a pair of 40GbE spine switches, which are typically installed in either rack 1 or 2.
Running on each management switch is the OHMS, which was recently made open source. This is a Java-based software agent that manages physical hardware across the racks. SDDC Manager communicates with OHMS to configure switches and hosts (via Cisco APIs, CIMC, Dell, etc.). VMware has developed plugins for Arista and Cisco, but now that OHMS is open source, vendors can write their own plugins for other hardware platforms.
As you can see in the diagram to the left, OHMS provides northbound APIs which can be accessed using a REST client (e.g. Postman). In addition, OHMS performs rack inventory discovery of both server and switch physical components, and this is then provided as a JSON file (hms-inventory.json).
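To give a feel for consuming that discovery output, here is a minimal sketch that parses a rack inventory. Note that the JSON field names below are illustrative assumptions of my own, not the actual hms-inventory.json schema:

```python
import json

# Illustrative inventory shaped like a rack discovery result.
# The field names here are assumptions, not the real
# hms-inventory.json schema produced by OHMS.
sample = """
{
  "rack": "rack-1",
  "servers": [
    {"id": "N1", "mgmt_ip": "192.168.100.101"},
    {"id": "N2", "mgmt_ip": "192.168.100.102"}
  ],
  "switches": [
    {"id": "tor-1", "role": "TOR"},
    {"id": "tor-2", "role": "TOR"},
    {"id": "mgmt-1", "role": "MANAGEMENT"}
  ]
}
"""

inventory = json.loads(sample)
print(f"{inventory['rack']}: {len(inventory['servers'])} servers, "
      f"{len(inventory['switches'])} switches")
```

In practice you would fetch this from the OHMS northbound REST API rather than a local string, but the parsing step is the same.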
The VIA is a virtual appliance, used by system integrators or administrators deploying VMware Cloud Foundation, to image physical racks with the SDDC Manager software. Once the VCF physical infrastructure is fully cabled and ready, a single management host is used to install the VIA from an OVA template. The VIA VM runs a number of services including TFTP, DHCP, PXE, Tomcat, and the local repository for the software bundles.
Each bundle, which can be downloaded from the VMware website under Downloads, is approximately 16GB and contains the entire VCF SDDC stack. Included in the bundle ISO are (not all are listed here):
Hopefully this gives you a bit more insight into what VMware Cloud Foundation 2.0 is and how it can be deployed. One of the key advantages is not only the automated deployment of the infrastructure, but the entire lifecycle management. If you follow the lead of giants like IBM and AWS, you can see how VMware Cloud Foundation can reduce the complexity of deploying the SDDC stack. Architecturally, VI workload domains operate in the same way, with the benefits of HA, DRS, Virtual SAN, and NSX, just to name a few. Take that workload domain and do what you will with it; deploy vRealize Automation, for example.