• Arista 7280QR-C36 Load Balancing Optimization for Dual Homed Systems and Networks



The Arista DCS-7280QR-C36 switch is a purpose-built, flexible, fixed-configuration 1RU system capable of supporting a wide range of interface choices. It is designed for the highest-performance environments such as IP Storage, Content Delivery Networks, Data Center Interconnect and IP Peering. The 7280QR-C36 is optimized for environments with dual-connected nodes, such as storage, and for spine applications with dual-homed leaf switches. This technical application note describes the optimized internal load-balancing mechanism used within the switch and how network architects can best deploy the system to maximize overall performance.

The internal architecture of the DCS-7280QR-C36 switch is shown in Figure 1. The DCS-7280QR-C36 is built around two high-performance packet processors, each providing up to 720 Mpps of packet processing and 1.08 Tbps of total switching capacity. Each packet processor services up to twelve 40 Gbps ports and six 100 Gbps-capable ports, for a system total of 24 40GbE and 12 100GbE ports, or up to 36 ports of 40GbE. The two packet processors are connected by a high-capacity internal link providing 400 Gbps of connectivity between them.


Figure 1. Internal architecture of DCS-7280QR-C36 switch

Optimized Internal Load-balancing

The regular behavior in Arista switches is to load-balance traffic flows evenly over all possible ECMP or LAG members to reach a destination. On the DCS-7280QR-C36 switch, a maximum of 400 Gbps is available for traffic between the two packet processors. To reduce the potential for oversubscription on this connection, the default load-balancing behavior has been optimized to maximize total system performance and reduce traffic between the packet processors. As shown in Figure 2, the DCS-7280QR-C36 switch load-balances traffic flows over the set of ECMP or LAG members serviced by the same packet processor on which the flow was received, rather than considering links on the adjacent packet processor as available paths.
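This prefer-local selection can be illustrated with a short Python sketch. Everything here is illustrative: the interface names, the flow tuple, and the CRC32 stand-in for the switch's real hardware hash are all assumptions, not Arista's actual algorithm.

```python
import zlib

def select_member(flow, members_by_pp, ingress_pp):
    """Pick an ECMP/LAG member for a flow, preferring members serviced
    by the ingress packet processor; fall back to members on the other
    packet processor only if the local member set is empty."""
    local = members_by_pp.get(ingress_pp) or []
    candidates = local if local else [
        m for members in members_by_pp.values() for m in members
    ]
    # A deterministic hash of the flow tuple stands in for the
    # hardware hash that keeps a flow pinned to one member.
    h = zlib.crc32(repr(flow).encode())
    return candidates[h % len(candidates)]

# Hypothetical member sets: two uplinks on each packet processor.
members = {0: ["Et25", "Et26"], 1: ["Et31", "Et32"]}
flow = ("10.0.0.1", "10.0.1.1", 6, 12345, 80)

# A flow ingressing on packet processor 0 only uses Et25/Et26,
# so it never crosses the internal link for ECMP forwarding.
assert select_member(flow, members, 0) in members[0]
assert select_member(flow, members, 1) in members[1]
```

The fallback branch mirrors the intent of the mechanism: the cross-processor link is used only when no local member can reach the destination.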


Figure 2. Default load-balancing mechanism of DCS-7280QR-C36 switch

This behavior allows the internal connection between packet processors to be used exclusively for traffic flows that are ingressing on one packet processor and egressing on the other (non-ECMP / non-LAG traffic flows). Understanding the sets of ports that are serviced by each packet processor allows customers to develop deployment best practices for both aggregation and leaf use cases. Ports are arranged as follows, and as shown in Figure 2.

  • Ports 1-12 (40G) and 25-30 (40/100G) are on the first packet processor
  • Ports 13-24 (40G) and 31-36 (40/100G) are on the second packet processor
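The port-to-processor mapping above is simple enough to capture in a small helper, useful when planning cabling. This is a sketch based only on the mapping stated above; the 0/1 numbering is an arbitrary label for the first and second packet processor.

```python
def packet_processor(port):
    """Return the packet processor (0 = first, 1 = second) servicing a
    DCS-7280QR-C36 front-panel port, per the documented mapping:
    ports 1-12 and 25-30 -> first; ports 13-24 and 31-36 -> second."""
    if not 1 <= port <= 36:
        raise ValueError("front-panel ports are numbered 1-36")
    if port <= 12 or 25 <= port <= 30:
        return 0
    return 1

# Port 12 and port 25 share the first packet processor;
# port 13 and port 31 share the second.
assert packet_processor(12) == packet_processor(25) == 0
assert packet_processor(13) == packet_processor(31) == 1
```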

Best practice recommendations:

In topologies where the DCS-7280QR-C36 switch is used as a spine or aggregation layer, it is recommended that links from leaf switches or dual-homed appliances be connected to front-panel ports serviced by both packet processors, as shown in Figure 3. With the modified load-balancing behavior, the spine switch will load-balance traffic out of interfaces serviced by the same packet processor on which it arrived, reducing traffic across the inter-packet-processor link.


Figure 3. DCS-7280QR-C36 switch used as a spine

When a DCS-7280QR-C36 switch is used as a top-of-rack leaf, it is recommended that uplinks to the spine be made from sets of interfaces spread between the two packet processors, so that traffic arriving on either packet processor can egress from interfaces serviced by that same packet processor.
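A quick sanity check on a planned uplink set follows from the port mapping stated earlier. This sketch (function names are illustrative) confirms that each packet processor is left with at least one locally serviced uplink, so prefer-local forwarding never needs the internal link for spine-bound ECMP traffic.

```python
def packet_processor(port):
    """Per the documented mapping: ports 1-12 and 25-30 are on the
    first packet processor (0); 13-24 and 31-36 on the second (1)."""
    return 0 if port <= 12 or 25 <= port <= 30 else 1

def uplinks_span_both_pps(uplink_ports):
    """True if the uplink set gives each packet processor at least one
    local member, the recommended split for a leaf deployment."""
    return {packet_processor(p) for p in uplink_ports} == {0, 1}

# Ports 25 and 31 sit on different packet processors: a good split.
assert uplinks_span_both_pps([25, 31])
# Ports 25 and 26 are both on the first packet processor: a bad split,
# since flows ingressing on the second processor would have no local uplink.
assert not uplinks_span_both_pps([25, 26])
```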


Figure 4. DCS-7280QR-C36 switch used as a leaf

Changing load-balancing mode on DCS-7280QR-C36

To give customers maximum control over system operation, the default load-balancing mode on the DCS-7280QR-C36 switch can be changed so that traffic flows are forwarded across ECMP/LAG members belonging to both packet processors. For ECMP, the mode is controlled with the following command:

   [no] ip load-sharing prefer local

The current ECMP mode and the resulting distribution across the two packet processors (Faps) can be verified with:

   show ip load-sharing

   ECMP prefer local ASIC for forwarding : enabled

   show ip load-sharing ecmp

   Number of ECMP routes : 1234
   ECMP routes approximate load sharing across Faps:
      Fap0: 40%
      Fap1: 60%

For LAG, the equivalent setting is configured within a load-balance profile:

   [ { no | default } ] load-balance sand profile <name>
      [ { no | default } ] prefer local

and verified with:

   show load-balance profile <name>
   show port-channel load-balance jericho fields

Note that the default behavior is recommended; network operators should exercise caution when modifying this mode, as doing so can raise traffic levels over the internal connection between the two packet processors and may result in internal packet loss.


The DCS-7280QR-C36 is optimized to deliver a compact, high-performance system with 36 QSFP ports supporting 40G and 100G and a wide range of interface configurations and speeds. This makes it ideal both for a hyper-converged architecture and for the spine of a mixed 40/100G solution, with a deep-buffer VOQ architecture well suited to mixed traffic types and intensive workload patterns. Internal forwarding is optimized to ensure traffic is sent out a locally connected interface where possible, and the flexibility to change this behavior gives customers full control over the system if necessary. This article has explained the system architecture and, where required, the options for adjusting the load-balancing algorithm from its default.

