Posted on November 20, 2020 10:09 am
 |  Asked by Heeyeol Yu


I want to know why an iperf client on the Arista achieves only 600–900 Mbps even though the traffic path is made of 10 Gbps links.

I have this Arista router running EOS:

Arista Networks EOS shell

Arista DCS-7280SR-48C6-R
Hardware version: 01.02
Serial number: JPE16220052
System MAC address: 444c.a873.c563

Software image version: 4.17.3F
Architecture: i386
Internal build version: 4.17.3F-3965951.4173F
Internal build ID: 8c8dcf71-3cfb-49ab-b45d-4770d6266650

I want to run iperf in a VRF routing context, where a VLAN SVI sits on top of two 10G Ethernet links.

The traffic path is guest VM <-> ESXi <-> Arista router, with the 10G links connecting the ESXi host and the Arista router.

Running iperf between the guest VM and the Arista VRF bash shell shows 600 Mbps; after setting MTU 9000 on the ESXi/Arista links it reaches at most 990 Mbps. I run iperf on the Arista as the client.
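For reference, this is roughly how the test was invoked from the switch. The VRF name `red` and the server address `10.1.1.100` are placeholders, not values from my setup; EOS exposes each VRF as a Linux network namespace named `ns-<vrf>`:

```shell
# From the EOS CLI, drop into the underlying Linux shell:
bash
# Run the iperf client inside the VRF's network namespace
# ("red" and 10.1.1.100 are example values):
sudo ip netns exec ns-red iperf -c 10.1.1.100 -t 30 -i 5
```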

Question 1: Under the Arista product warranty, is iperf run from the VRF bash shell guaranteed to reach the full 10G link rate, as long as the guest VM/ESXi side can sustain 10G throughput?

Question 2: Is there another command to see the traffic rate on an Arista interface?

Question 3: Can I install other performance tools such as netperf? If so, how?



Answered on November 20, 2020 10:24 am


The Arista switch is capable of forwarding traffic at line rate in the data plane. However, when running iperf on the switch itself to generate traffic, bandwidth is limited by the communication channel between the CPU and the ASIC and by control-plane rate limiting.

You can install other software to generate traffic, but you would run into the same limitation. Ideally, throughput tests should use external traffic generators, with the switches in the data path, where they are capable of forwarding traffic at line rate.

Posted by Alexis Dacquay
Answered on November 20, 2020 11:01 am

I would like to clarify: there are two bottlenecks.


1) Control-plane protection (defaults to FE speed, 100 Mbps), which can be raised as high as you want, at the risk of overloading other EOS functions (EOS needs to communicate with the network processor).

2) CPU performance: the CPU is over-sized for pure networking, so the switch converges quickly and can do more (running scripts, etc.). However, it is still just a general-purpose CPU.

Such a CPU cannot generate 10 Gbps of traffic; maybe 1 Gbps at best. So the numbers you reported are in line with the CPU's maximum capacity.

High-end servers with very fast CPUs can generate 10 Gbps, but the switch CPU cannot.
Many testers and traffic generators (Ixia, Spirent) are FPGA-based: the traffic is generated in hardware, not by a CPU.

To generate more than 1 Gbps through a switch, you can build a loop, called a "snake": traffic enters and leaves the switch repeatedly through physical loops. You may disable MAC learning, which leads to flooding; at each loop the traffic lands in a different VLAN or VRF, and you send all the accumulated traffic out a trunk towards your destination.
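One hop of such a snake might look like the EOS config sketch below. The interface names, VLAN numbers, and the external loopback cable between Ethernet1 and Ethernet2 are all assumptions for illustration:

```shell
! One snake hop (interface/VLAN numbers are examples):
vlan 101-102
interface Ethernet1
   switchport access vlan 101   ! traffic arrives in VLAN 101
interface Ethernet2
   switchport access vlan 102   ! a physical cable loops Et1 back
                                ! into Et2, landing it in VLAN 102
! Repeat per loop, then carry the accumulated VLANs out a trunk:
interface Ethernet48
   switchport mode trunk
   switchport trunk allowed vlan 101-102
```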

## Q2
You can use "show interfaces counters ..." There are many options there: rates, per-bucket (packet-size) counters, etc.
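A few of the variants (the interface name is an example; exact sub-commands can vary by EOS release):

```shell
show interfaces counters rates            # in/out bps and pps per interface
show interfaces Ethernet1 counters bins   # counters by packet-size bucket
show interfaces counters errors           # error counters
```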

## Q3
Get a Linux package from a trusted CentOS repository, making sure it matches the kernel version and that its dependencies are satisfied.
Or simply put your binary in a VM or a container and load that into EOS.
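As a sketch of the RPM route: the package filename below is an example only; you would pick a netperf build matching the switch's kernel and its i386 architecture:

```shell
# From the EOS bash shell, install directly (example filename):
sudo rpm -ivh netperf-2.7.0-1.i686.rpm

# Or load it persistently as an EOS extension from the CLI:
copy flash:netperf-2.7.0-1.i686.rpm extension:
extension netperf-2.7.0-1.i686.rpm
```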
