2.1 VCAP-DCA Study Guide - Implement and Manage Complex Virtual Networks

20 Sep 2011 by rayheffer

Networking is a critical component of any virtual infrastructure, and it's often the management networks that are overlooked. Back in the day before virtualisation, management networks were considered less important than production, and that wasn't a big deal: they only provided console access (e.g. iLO, DRAC), SNMP monitoring, web interfaces and so on, so an outage just didn't impact production. I have noticed that this mindset has crept into some vSphere designs, where management interfaces lack any form of redundancy. Why is this so important? Well for starters, ESXi uses the Management network for vMotion, Fault Tolerance (FT) and HA. In fact, if your management interface has no redundancy you'll get a warning, as described in VMware KB 1004700, and without redundancy on your management networks these features will not work. In addition, you may be faced with mixed ESX and ESXi environments where the management networks differ: management on ESX uses the Service Console network, while ESXi uses a vmKernel network called Management Network. Duncan Epping of Yellow Bricks has an excellent article on VMware HA, and Frank Denneman and Duncan Epping's HA and DRS Technical Deepdive book is an excellent read which I recommend, especially if you're studying for the VCAP-DCA.

Knowledge Required

  • Identify common virtual switch configurations

Key Focus Areas

  • Determine use cases for and apply IPv6
  • Configure NetQueue
  • Configure SNMP
  • Determine use cases for and apply VMware DirectPath I/O (this is covered in Implement and Manage Storage, part 1)
  • Migrate a vSS network to a Hybrid or Full vDS solution – Coming soon
  • Configure vSS and vDS settings using command line tools – Coming soon
  • Analyse command line output to identify vSS and vDS configuration details – Coming soon

Key Materials (VMware PDFs & KB articles)

  1. vSphere Command-Line Interface Installation and Scripting Guide
  2. VMware vNetwork Distributed Switch Migration and Configuration
  3. ESX Configuration Guide
  4. ESXi Configuration Guide
  5. IPv6 Support in vSphere by Eric Siebert
  6. VMworld 2009: vSphere Networking Deepdive

Determine use cases for and apply IPv6

So what is the use case for IPv6? I can think of only two scenarios: one is that you're insane and have actually run out of IPv4 addresses; the other is that company infrastructure policy requires all systems to have IPv6 enabled, which isn't that unreasonable, eh? Good job it's easy to configure ;)

Remember that even when IPv6 is enabled on your ESX/ESXi host, your network infrastructure must support it. This includes your DNS and DHCP servers, routers, switches, etc.
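As an aside (not part of the original steps), Python's standard ipaddress module is a handy way to sanity-check IPv6 notation while you're getting familiar with it; the addresses below are purely illustrative:

```python
import ipaddress

# An illustrative IPv6 address (link-local addresses fall in fe80::/10)
addr = ipaddress.ip_address("fe80::250:56ff:fe89:1")
print(addr.version)        # 6
print(addr.is_link_local)  # True

# Compressed vs. full form of the same address
full = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(full.compressed)     # 2001:db8::1

# IPv4 for comparison
v4 = ipaddress.ip_address("192.168.4.40")
print(v4.version)          # 4
```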

1) Using the vSphere Client, select your ESX/ESXi host and go to the configuration tab, select Networking and click Properties.

2) Check ‘Enable IPv6 support on this host system’, click OK and reboot your host.

3) Have a coffee and explain to your colleagues how IPv6 works :)

You can also enable IPv6 from the DCUI after setup completes: press F2 > Configure Management Network > IPv6 Configuration, then enable IPv6.

Alternatively you can enable IPv6 from the command line. Follow these steps:

1) Enable IPv6 for the vmKernel

# esxcfg-vmknic -6 true

2) Enable IPv6 for the Service Console (ESX only)

# esxcfg-vswif -6 true

3) Reboot the host for the changes to take effect.

You can tell when IPv6 is enabled because the DCUI screen shows the management URL in both IPv4 and IPv6 format.

You will also see IPv6 settings in the Management Network properties.
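Incidentally, the IPv6 form of that management URL wraps the address in square brackets, since bare colons would otherwise be read as a port separator. A minimal sketch of the two URL forms (the mgmt_url helper and the addresses are my own illustration, not anything vSphere ships):

```python
import ipaddress

def mgmt_url(host):
    """Build an https management URL, bracketing IPv6 literals."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return f"https://{host}/"              # plain hostname
    if ip.version == 6:
        return f"https://[{ip.compressed}]/"   # IPv6 literal needs brackets
    return f"https://{ip}/"                    # IPv4 literal

print(mgmt_url("192.168.4.40"))           # https://192.168.4.40/
print(mgmt_url("fe80::250:56ff:fe89:1"))  # https://[fe80::250:56ff:fe89:1]/
```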

Configure NetQueue

When the physical NIC on the ESX/ESXi host sends or receives packets (TX/RX), they enter a single TX or RX queue which the vmKernel schedules on a single CPU. Without NetQueue, throughput on 10GbE NICs isn't optimal because the single TX or RX queue becomes a bottleneck. NetQueue allows multiple TX/RX queues which are scheduled across multiple CPUs. A single VM that places heavy demand on bandwidth will still have its queue processed on a single CPU, but other VMs have their own queues processed by different CPUs. The result of using NetQueue is that 10GbE interfaces are able to maximise the available throughput.
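As a toy illustration of the idea (this is nothing like the real VMkernel scheduler, just the queueing concept), here packets are hashed by destination VM onto one of several receive queues, each drained by its own worker, so one busy VM stays on one queue while traffic for other VMs is processed in parallel:

```python
import queue
import threading

NUM_QUEUES = 4  # NetQueue-style: one RX queue per worker ("CPU")
rx_queues = [queue.Queue() for _ in range(NUM_QUEUES)]
processed = [0] * NUM_QUEUES

def worker(idx):
    # Each worker drains only its own queue, like a queue pinned to one CPU.
    while True:
        pkt = rx_queues[idx].get()
        if pkt is None:  # sentinel: stop
            break
        processed[idx] += 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_QUEUES)]
for t in threads:
    t.start()

# Hash each "packet" to a queue by destination VM, so packets for a given
# VM always land on the same queue.
packets = [("vm%d" % (n % 8), n) for n in range(1000)]
for dst, payload in packets:
    rx_queues[hash(dst) % NUM_QUEUES].put((dst, payload))

for q in rx_queues:
    q.put(None)
for t in threads:
    t.join()

print(sum(processed))  # 1000
```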

Note: NetQueue is enabled by default if your hardware supports it. Your physical NIC must support NetQueue, and your system must support MSI-X.

Advantages:

  • Lower CPU utilisation
  • Higher network throughput, especially on 10GbE interfaces
  • Better scaling and load balancing across CPUs

Requirements:

  • NIC hardware must support NetQueue
  • System hardware must support MSI-X

You can enable or disable NetQueue in the Advanced Settings of your host.

It can also be enabled or disabled with the esxcfg-advcfg command (1 to enable, 0 to disable):

# esxcfg-advcfg -k 1 netNetqueueEnabled

Configure SNMP

You can configure an ESX/ESXi host with SNMP in minutes, it's that easy. If you want to test this in your home lab you'll need an SNMP receiver, typically a monitoring tool, or you can download an evaluation of Kiwi Syslog, which has an SNMP receiver service you can use instead. In Kiwi Syslog, go to File > Setup > Inputs > SNMP, tick 'Listen for SNMP traps' and click OK.

Using the vMA, run the following commands to point the ESX/ESXi host at an SNMP server and set the community string, enable SNMP (it's disabled by default), and then send a test trap. Kiwi Syslog will show the traps it receives so you can see it working.

# vicfg-snmp --server core-esx.home.lab --username root --password ******* -t 192.168.4.40/public
# vicfg-snmp --server core-esx.home.lab --enable
# vicfg-snmp --server core-esx.home.lab --username root --password ******* --test
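If you don't have Kiwi Syslog to hand, a trap is just a UDP datagram sent to port 162, so a bare socket listener is enough to prove that traps are arriving (decoding the ASN.1 payload is a job for a real receiver). The port number and the loopback self-test below are my own lab assumptions, not part of the guide:

```python
import socket
import threading

# SNMP traps are plain UDP datagrams to port 162; we use an unprivileged
# stand-in port so this runs without root.
PORT = 11620

def wait_for_trap(sock):
    """Wait for a single datagram on an already-bound socket."""
    try:
        data, peer = sock.recvfrom(4096)
        # A real receiver would ASN.1-decode the payload; we only prove arrival.
        print(f"Received {len(data)} bytes from {peer[0]}")
        return data
    except socket.timeout:
        print("No trap received")
        return None

listener_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener_sock.bind(("0.0.0.0", PORT))
listener_sock.settimeout(5)

result = {}
listener = threading.Thread(target=lambda: result.update(data=wait_for_trap(listener_sock)))
listener.start()

# Loopback self-test: in the lab you would instead fire vicfg-snmp --test
# from the vMA and watch this listener catch the real trap.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"dummy-trap", ("127.0.0.1", PORT))
listener.join()
listener_sock.close()
sender.close()
```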
