VMware ESX 5 – Arista LACP guide

 
 

LACP Overview

Link aggregation is a method for combining multiple Ethernet links into a single logical link, with the goal of increasing both bandwidth and availability. By aggregating multiple links within a link aggregation group (LAG), customers improve performance through increased bandwidth and ensure path redundancy in the event of a single link failure. These benefits have made link aggregation a popular deployment standard in today’s data centers. Link aggregation can be implemented in two ways: static and dynamic. With static LAG, each port-channel is created manually, with no automated mechanism to validate the port-channel configuration on the other switch. This can result in misconfigurations and faulty links. Alternatively, with dynamic LAG, devices use IEEE 802.3ad Link Aggregation Control Protocol (LACP) to validate the configuration between the devices forming the LAG.
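To illustrate the difference on an Arista switch, the same member ports can be bonded statically or dynamically; the interface and port-channel numbers below are purely illustrative.

Static LAG (forced on, no validation of the peer’s configuration):

Arista(config)#interface Ethernet1-2
Arista(config-if-Et1-2)#channel-group 10 mode on

Dynamic LAG (LACP negotiates and validates with the peer):

Arista(config)#interface Ethernet1-2
Arista(config-if-Et1-2)#channel-group 10 mode active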

LACP based NIC teaming from ESX 5.1 to Arista switches

Virtualization is a key requirement in many data center designs today. A virtualized infrastructure usually means a combination of hypervisor-based soft switches directly connected to physical network switches. For redundancy and high availability, link aggregation between the virtual switches and the physical network infrastructure is a common deployment. This implementation guide walks through all the steps needed to configure dynamic LACP link-bonding between VMware vSphere distributed switches (VDS) and Arista switches. The guide is applicable to all supported Arista platforms and EOS software versions. LACP support for the VDS was introduced in VMware ESX 5.1. Prior to this, a static approach to NIC teaming between the virtual and physical setups was used: the ESX-host-facing port-channel is forced on (non-negotiated) on the switch. This inherently voids the advantages of protocol-negotiated link-bonding, such as dynamic link-failure and configuration-mismatch detection. The goal here is to provide step-by-step instructions for the configuration and implementation of LACP based link aggregation using ESX 5.1 and Arista switches. This document covers the minimal configuration needed on the vSphere 5.1 web client as well as on any Arista platform.

Configuration

Logging into the vSphere web-client

To create a VDS on vCenter for the host in question, follow the steps listed below:

  1. Add new VDS on vCenter under Inventory -> Networking tab, select VDS version 5.1
  2. Name and select required uplink ports
  3. Select the host & physical adaptors
  4. Select “Finish”

Additionally, it is recommended to change the protocol discovery setting to LLDP (if the EOS software version is 4.11 or above). This can be done via Manage vSphere Distributed Virtual Switch -> Advanced -> Discovery Protocol Type -> Link Layer Discovery Protocol, as shown in Fig. 1. Enabling LLDP additionally helps in mapping the virtual to the physical setup, allowing for greater visibility and ease of troubleshooting and monitoring.


Figure 1: LLDP selection on VDS
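Once LLDP is enabled on the VDS, the adjacency can be verified from the switch side (LLDP is on by default in EOS). The host and port names in this abbreviated sample output are illustrative:

Arista#show lldp neighbors
Port       Neighbor Device ID            Neighbor Port ID    TTL
Et50       esx-host-01                   vmnic0              120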

VMware allows enabling LACP based NIC teaming through the vSphere web client only. On the web client, click the vCenter icon in the left navigator on the home page, browse to Networking, and select the existing VDS as shown in Fig. 2.


Figure 2: VDS management screen on vSphere web-client

Navigate to the Related Objects -> Uplink Port Groups tab and edit the settings. As shown in Fig. 3, select LACP and change Status to Enabled. This enables LACP on the VDS. The Active mode setting initiates LACP negotiation, while the Passive selection means the VDS simply listens for LACP packets; with that choice, be sure LACP is set to active on the other end of the port-channel. To avoid an accidental dual-passive misconfiguration, Arista recommends configuring the LACP mode to Active on the VDS.


Figure 3: Edit LACP settings page for uplink group of VDS
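With Active selected on the VDS and the switch ports configured for LACP active (as covered in the configuration section below), the negotiation can be confirmed from the switch:

Arista#show lacp neighbor

Partner entries in the output confirm that LACP PDUs are being exchanged with the ESX host; an empty partner table typically points to a dual-passive mismatch or a peer that is not running LACP.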

After finishing this step, navigate to the Distributed Port Groups tab on the left and edit the settings. As shown in Fig. 4 below, select the Teaming and failover option and choose Route based on IP hash as the load balancing algorithm, per the recommendation in the VMware vSphere ESXi 5.1 Networking Guide. This option uses only the source and destination IP headers in each packet to calculate the hash used to pick a physical link from the aggregated set. Be sure Network failure detection is set to Link status only. It relies solely on the link status reported by the physical adaptor, so it detects failures such as cable pulls and physical switch power failures, but not configuration errors further upstream. The Beacon probing option is not supported by VMware in combination with Route based on IP hash, hence setting Link status only as the network failure detection mechanism is required.


Figure 4: Edit Teaming and failover settings page on VDS port group
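For intuition on how Route based on IP hash distributes traffic, a commonly cited approximation of the computation (the exact hash is internal to ESX) is an XOR of the two IP addresses modulo the number of active uplinks:

uplink_index = (src_ip XOR dst_ip) mod num_uplinks

All packets between a given source/destination pair therefore take the same physical link, which is why a single flow can never exceed one link’s bandwidth regardless of how many links are aggregated.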

Physical implementation options

The two most commonly deployed customer designs, as illustrated in Fig. 5, have the ESX servers link-bonded either to a single top-of-rack (TOR) Arista switch or to a pair of Arista switches configured in MLAG mode. The ESX server’s vSwitch is agnostic of the multi-chassis configuration, so the LACP setup described in the steps above remains the same. Finally, because vSwitches are slim pieces of forwarding code, they do not learn devices the way physical switches do, implement no STP, and cannot create loops (they forward in split-horizon fashion). Therefore, there are no STP configuration requirements.


Figure 5: Common virtual to physical infrastructure deployments

A majority of the network designs seen today use 10G bonded links for the data uplinks and 1G port speeds for VM management and iLO. With increasing east-west traffic patterns, tying an extra 1G link into a port-channel does not fix problems for flows greater than 1G, nor hashing collisions for flows to the same server. The following section lists the configuration needed on the Arista switches in both designs.

Sample Configuration and Show Commands

The configuration needed on the Arista switch is straightforward in either design: configure a port-channel interface and map the physical interfaces to the port-channel in LACP active mode. For a non-MLAG setup, refer to the example below, where the two physical interfaces connected to the ESX host, Ethernet 50 and 51, are mapped to Port-Channel 50 in LACP active mode. On the Arista switch:

Arista(config)#interface Port-Channel50
Arista(config-if-Po50)#description connected_to_esx
Arista(config-if-Po50)#interface Ethernet50-51
Arista(config-if-Et50-51)#channel-group 50 mode active

Verify the status of the port-channel using the following:

Arista#show port-channel detailed
Port Channel Port-Channel50:
Active Ports:
Port            Time became active      Protocol      Mode
--------------- ---------------------- -------------- ------
Ethernet50      16:46:00                LACP          Active
Ethernet51      16:46:00                LACP          Active

In the second design, the ESX host is multi-chassis lagged to interface Ethernet 50 on both Arista switches (configured as an MLAG pair). The only extra configuration needed on Port-Channel 50 on each switch is the MLAG ID, which must be identical on both peers in the MLAG pair. Please refer to the Arista user manual or EOS Central for MLAG configuration guidance.

On peer 1:
Arista-1(config)#interface Port-Channel50
Arista-1(config-if-Po50)#description connected_to_esx
Arista-1(config-if-Po50)#mlag 50
Arista-1(config-if-Po50)#interface Ethernet50
Arista-1(config-if-Et50)#channel-group 50 mode active

On peer 2:
Arista-2(config)#interface Port-Channel50
Arista-2(config-if-Po50)#description connected_to_esx
Arista-2(config-if-Po50)#mlag 50
Arista-2(config-if-Po50)#interface Ethernet50
Arista-2(config-if-Et50)#channel-group 50 mode active

Verify the status of the port-channel using the following:

Arista-1#show port-channel detailed
Port Channel Port-Channel50:
Active Ports:
Port            Time became active      Protocol      Mode
--------------- ---------------------- -------------- ------
Ethernet50      16:46:00                LACP          Active
PeerEthernet50  16:46:00                LACP          Active
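In the MLAG design it is also worth confirming that the MLAG peering itself is healthy. The output below is abbreviated and the values (domain, peer address, peer link) illustrative:

Arista-1#show mlag
MLAG Configuration:
domain-id          :    mlag-domain
local-interface    :       Vlan4094
peer-address       :       10.0.0.2
peer-link          : Port-Channel10

MLAG Status:
state              :         active
negotiation status :      connected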

VMtracer with ESX vSphere 5.1

Please refer to the Arista user manual or EOS Central for information on the VMtracer feature from Arista. VMtracer uses discovery protocol packets sent out of VMtracer-enabled switch interfaces to map out directly connected ESX hosts. CDP is enabled by default when the feature is turned on. If the VDS in vCenter is set up to listen to LLDP, ensure that ‘lldp transmit’ is configured on the Arista switch interface for interoperability. Prior to EOS software version 4.11, the default discovery protocol for the VMtracer feature was CDP, because of the lack of LLDP support on the VMware VDS prior to ESX 5.X. To retain VMtracer functionality when combining a pre-4.11 EOS version on the Arista switch with a 5.X VMware ESX version, set the discovery protocol for the VDS in vCenter to CDP.
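As a minimal sketch (the session name, vCenter URL, credentials, and interface below are all illustrative, and exact syntax may vary by EOS release; consult the EOS manual), enabling VMtracer involves defining a session pointing at vCenter and enabling it on the host-facing interfaces:

Arista(config)#vmtracer session vcenter1
Arista(config-vmtracer-vcenter1)#url https://vcenter.example.com/sdk
Arista(config-vmtracer-vcenter1)#username vmtracer-user
Arista(config-vmtracer-vcenter1)#password secret
Arista(config)#interface Ethernet50
Arista(config-if-Et50)#vmtracer vmware-esx
Arista(config-if-Et50)#lldp transmit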
