Cisco Nexus 1000V NIC Teaming and Load Balancing
Posted on 18 Feb 2013 by Ray Heffer
This year I’ve been working on a VMware design for a large enterprise customer, and had various conversations with the solutions team on everything from storage sizing to networking (that was one day!). This prompted me to address one topic that I feel deserves more attention, and that is the Cisco Nexus 1000V. If you are new to the 1000V virtual switch, then you might want to read the guide I published back in April 2012 on How to Deploy the Cisco Nexus 1000V. For now, grab a coffee and let’s begin with load-balancing policies…
If you follow VMware networking best practices then you may already be familiar with the load-balancing policies available when configuring your virtual switches for NIC teaming, as well as the failover order of your network adapters (vmnics). First, a bit of history on where these best practices came from. Back in the day when 10GbE networking was a vision and out of reach for most of us (and it still is for some!), our vSphere ESXi hosts had multiple 1Gb NICs, typically six or more. Traffic was separated by assigning different physical network adapters to port groups, keeping vMotion, Management, VM traffic and NFS or iSCSI apart. To make NIC assignments more efficient, one NIC could be made ‘active’ and the other ‘standby’ for one port group, with that order reversed for another. This stopped standby NICs from sitting idle and maximised throughput for your hosts.
So do we have the same options available to us when configuring the Nexus 1000V? Well, yes and no. In fact there is a whole lot more we can configure. There are certainly load balancing policies, but there are differences. For example, you don’t specify which network adapters are ‘active’ and which are ‘standby’. Let’s start by comparing what is available on a VMware virtual network switch with what the Cisco Nexus 1000V offers.
Starting with the vNetwork Standard Switch (vSS) we have four load balancing options available to us.
- Route based on the originating virtual port ID
- Route based on IP hash
- Route based on source MAC hash
- Use explicit failover order
The dvSwitch (or vDS) gives us some additional features (apart from the obvious fact it’s distributed!), and one more load balancing policy.
- Route based on originating virtual port
- Route based on IP hash
- Route based on source MAC hash
- Route based on physical NIC load (Load-Based Teaming)
- Use explicit failover order
So you can assign the failover order of your network adapters using ‘active’, ‘standby’ or ‘unused’, which, as I mentioned previously, is used to efficiently separate types of network traffic (vMotion, management, VM traffic) and distribute traffic across your network adapters.
In addition to the failover order, the vDS can perform both ingress and egress traffic shaping and allows you to configure network resource pools. This is an important distinction between the standard vSwitch (vSS) and dvSwitch (vDS).
Cisco Nexus 1000V
The Nexus 1000V load balances across all physical network adapters in a port-channel. If you’re new to Cisco networking then port-channels may be unfamiliar to you, but essentially a port-channel is a grouping of the physical NICs (ports) on the ESXi host that are tied together (see the diagram below). What you know as port groups are configured as port-profiles on the 1000V. It’s the port-channels, however, that contain the ESXi host’s physical network adapters and provide redundancy and load balancing.
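To make that relationship concrete, here is a minimal sketch of an Ethernet (uplink) port-profile on the VSM. The profile name and VLAN range are placeholders, and the channel-group mode shown (mode on) is just one of the options discussed later; when vmnics are added to the switch under this profile, the 1000V builds the port-channel for you.

! hypothetical uplink port-profile: name and VLANs are examples only
port-profile type ethernet UPLINK-EXAMPLE
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  ! port-channel is created automatically as vmnics are assigned
  channel-group auto mode on
  no shutdown
  state enabled

Once vmnics are assigned to this uplink on the hosts, show port-channel summary on the VSM should list the resulting port-channel and its member ports.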
There are 17 load balancing policies available on the Nexus 1000V, configured on the VSM with the port-channel load-balance command. Here are the 17 options (the default is source-mac):
port-channel load-balance ethernet {dest-ip-port | dest-ip-port-vlan | destination-ip-vlan | destination-mac |
destination-port | source-dest-ip-port | source-dest-ip-port-vlan | source-dest-ip-vlan | source-dest-mac |
source-dest-port | source-ip-port | source-ip-port-vlan | source-ip-vlan | source-mac | source-port |
source-virtual-port-id | vlan-only}
The default load balancing configuration on the Nexus 1000V is source-based load balancing (source-mac), which uses only a single link at a time for any given source. In other words, it hashes the source MAC address and sends that traffic to one physical adapter in the port-channel. So traffic from a single virtual machine, or a vMotion, would use one link, another virtual machine would use another link, and so on. If a link fails then the traffic is moved to another NIC (port) in the port-channel, so redundancy is maintained.
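As a quick sketch, changing the hashing method is a single global command on the VSM (the hostname is made up and source-dest-ip-port is just one example of a flow-based hash). Bear in mind that flow-based hashes expect a proper port-channel on the upstream switches, which is exactly what the next section covers.

n1000v# configure terminal
n1000v(config)# port-channel load-balance ethernet source-dest-ip-port
n1000v(config)# show port-channel load-balance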
This next bit is often overlooked and I nearly gave it a heading of its own: the upstream switch. That is the switch (or, better still, pair of switches) that your ESXi host connects to. The features and type of upstream switch determine how you configure your port channels. The upstream switches may not even support port channels, but if that’s the case all is not lost, as the Nexus 1000V can be configured to use vPC-HM with MAC pinning; in plain English, that’s virtual port-channels in Host Mode using source MAC addresses. If your switches do support port channels then you could use LACP. All of this might sound a little scary at first, but Cisco have done a fantastic job of explaining port channel configuration in their guide on port channels.
Here are the common choices for configuring port channels towards your upstream switches:
- LACP (Link Aggregation Control Protocol)
- vPC-HM (Virtual Port Channel Host Mode)
- vPC-HM with MAC Pinning
LACP
If you choose a flow-based load balancing option then LACP is required. This makes the best use of ALL links in the port-channel, as traffic can be spread across multiple links at once; several simultaneous flows (multiple vMotion operations, for example) can be distributed over different links. This is my preferred option when it’s available and the network infrastructure supports it.
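As a rough sketch (placeholder profile name again), the LACP variant only changes the channel-group line in the uplink port-profile shown earlier, and the LACP feature needs to be enabled on the VSM. The upstream switch ports must be configured as a matching LACP port channel.

feature lacp
port-profile type ethernet UPLINK-LACP
  ! negotiate an LACP port channel with the upstream switch
  channel-group auto mode active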
vPC-Host Mode
This is used if your upstream switches do not support port-channels. It works by dividing the port-channel into sub-groups (one for each upstream switch). This can be done manually or using CDP (Cisco Discovery Protocol), provided your switches support it.
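A minimal sketch of the CDP variant, again with a placeholder profile name, assuming the upstream switches advertise CDP so the sub-groups can be learned automatically:

port-profile type ethernet UPLINK-VPC-HM
  ! sub-groups are built from CDP neighbour information
  channel-group auto mode on sub-group cdp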
vPC-HM with MAC Pinning
As with the previous option, this is used if your upstream switches do not support port-channels, but it uses MAC pinning (not CDP), which is the preferred method. The MAC address of a virtual machine is used and sub-groups are automatically configured as shown in Figure 1 above.
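And the MAC pinning variant (placeholder profile name once more), which needs nothing special on the upstream switches; each vmnic becomes its own sub-group and virtual machines are pinned to a vmnic based on their source MAC address.

port-profile type ethernet UPLINK-MAC-PIN
  ! each vmnic becomes a sub-group; VMs are pinned by source MAC
  channel-group auto mode on mac-pinning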
Summary
If you’ve deployed the Cisco Nexus 1000V and not even thought about port channels then hopefully you have enough information now to make sure you are getting the most out of your networking configuration. If you have simply deployed the Nexus 1000V using default settings then have a look at the configuration and you’ll see that it’s using the default load balancing option (source-mac). There is nothing wrong with that, but hopefully now you are aware of the other options available to you. Check what your upstream switch(es) are, and whether they support port channels, CDP or neither. If you are using Cisco UCS or HP Flex-10 then you’ll probably find there is already a recommended configuration available anyway. For example, Cisco UCS must use vPC-HM with MAC pinning as the fabric interconnects use End Host mode. If you can’t find a recommended configuration for your switches then MAC pinning is often the preferred option.
So can you configure Active / Standby links for the Nexus 1000V port profiles? Nope, not really. But you now know there is much more to it than making NICs active or standby; we’ve entered the networking world of port channels!
Tagged with: vmware networking