vEOS Router Architecture

One of the benefits of the vEOS router is that it runs the same code as normal EOS, but in a virtual machine instead of on a hardware switch. Thanks to EOS's modular architecture, supporting a virtual machine instance is as simple as adding agents to the system to accommodate the virtual network drivers. In this post, I describe how the different virtual networking options work with vEOS, both on a server and in the cloud. I then explore some of the differences between vEOS router, vEOS-lab, and EOS on a hardware switch. For more information on the architecture of EOS in general, please refer to the EOS Whitepaper: https://www.arista.com/assets/data/pdf/EOSWhitepaper.pdf

vEOS router on a hypervisor

At its core, vEOS router has the same state-based database, Linux OS, and programmatic access as every other version of EOS. However, some changes are needed to support the features of a VPN/gateway router, as well as to optimize forwarding performance on a hypervisor rather than on a hardware switching platform. When running vEOS router as a VM, the user can choose virtio/vmxnet3 drivers, PCI passthrough, or SR-IOV to interact with the underlying network hardware.

virtio/vmxnet3

With virtio (or vmxnet3 on ESX), the server's network interface is abstracted and a virtual interface is presented to the VM. Although this provides the greatest flexibility and no hardware dependency at the VM level, it also has the lowest performance of the three options, since every packet must pass through a layer of software virtualization.
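
As a quick illustration (a hypothetical sketch, not part of vEOS itself; the interface name eth0 is an assumption), a Linux guest can check which paravirtual driver backs an interface by resolving the driver symlink in sysfs:

    import os

    def nic_driver(ifname):
        # /sys/class/net/<ifname>/device/driver is a symlink to the kernel
        # driver bound to the virtual NIC, e.g. virtio_net or vmxnet3
        link = "/sys/class/net/{}/device/driver".format(ifname)
        return os.path.basename(os.path.realpath(link))

    print(nic_driver("eth0"))  # e.g. "virtio_net" on KVM, "vmxnet3" on ESX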

PCI Passthrough

PCI passthrough removes the virtio bottleneck and allows the virtual machine to use the underlying NIC directly. This gives a large improvement in network performance, but it dedicates the physical NIC to a single VM; the interface is no longer usable by any other VM on the hypervisor. Intel VT-d or the AMD IOMMU (AMD-Vi) must be enabled on the server, but one benefit is that passthrough requires no special network hardware.
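
As a rough pre-flight check (an illustrative sketch using standard Linux sysfs paths, not an Arista tool), you can verify on the hypervisor that the IOMMU is actually active before attempting passthrough:

    import glob

    def iommu_enabled():
        # The kernel populates /sys/class/iommu/ with entries (e.g. dmar0)
        # when VT-d / AMD-Vi is active
        if glob.glob("/sys/class/iommu/*"):
            return True
        # Fall back to checking the kernel boot parameters
        with open("/proc/cmdline") as f:
            cmdline = f.read()
        return "intel_iommu=on" in cmdline or "amd_iommu=on" in cmdline

    print("IOMMU enabled:", iommu_enabled())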

SR-IOV

SR-IOV requires a specialized network interface card that supports two types of PCI functions: a Physical Function (PF) and Virtual Functions (VFs). The PF manages the SR-IOV capability on the NIC and is controlled by the hypervisor. The VFs present multiple virtual instances of the adapter, allowing several virtual machines to share the NIC while still maintaining high performance. One thing to consider with SR-IOV is that aggregate bandwidth through the NIC, including VM-to-VM traffic within the same server, is limited to the NIC's interface speed.
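
To make this concrete, here is a hypothetical sketch of how a hypervisor administrator might carve VFs out of an SR-IOV capable NIC using the standard Linux sysfs interface (the PF name enp3s0f0 and the VF count are assumptions; writing sysfs values requires root):

    PF = "enp3s0f0"  # hypothetical physical function name; adjust for your NIC
    base = "/sys/class/net/{}/device".format(PF)

    # Ask the NIC how many VFs it supports
    with open(base + "/sriov_totalvfs") as f:
        total = int(f.read())

    # Create up to four VFs, each of which can then be attached to a VM
    with open(base + "/sriov_numvfs", "w") as f:
        f.write(str(min(total, 4)))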

vEOS in the Public Cloud

Although many of the same concepts above still apply to vEOS in the cloud, there are some minor differences depending on the public cloud provider’s infrastructure.

vEOS router on AWS

When deploying vEOS router on AWS, vEOS takes advantage of AWS enhanced networking to provide higher throughput and lower latency for EC2 instances. This is done through the Elastic Network Adapter (ENA) driver, which can provide speeds up to 25 Gbps and is supported on all the recommended instance types for the vEOS router. For more information, refer to the AWS Enhanced Networking guide: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
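
For example, here is a minimal sketch using boto3 (the region and instance ID are placeholders) to confirm that an instance has ENA enhanced networking enabled:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # enaSupport is a standard EC2 instance attribute; a value of True means
    # the instance will use the ENA driver for enhanced networking
    resp = ec2.describe_instance_attribute(
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        Attribute="enaSupport",
    )
    print("ENA enabled:", resp["EnaSupport"]["Value"])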

vEOS router on Azure

As of right now, the vEOS router on Azure uses the Hyper-V network driver (much like the virtio scenario above), with support for Accelerated Networking coming once it is fully supported for Linux VMs in Azure. In the next section, we'll explore how vEOS router differs from the other flavors of EOS.

vEOS router vs EOS

The obvious differences when running the EOS codebase in a VM, as opposed to natively on a switch, are the abstraction of the hypervisor and the lack of networking ASICs. Memory and CPUs become virtual memory and vCPUs, just as in any other VM you may encounter. When it comes to forwarding packets, vEOS router can use virtio/vmxnet3, PCI passthrough, or SR-IOV to pass packets to or through the hypervisor; with the latter two options, high throughput and efficiency can be achieved for traffic crossing the vEOS router. One of the benefits of EOS and its state-sharing design is that the hardware ASIC forwarding agent can simply be swapped for a software forwarding agent while the rest of EOS remains intact.

vEOS router vs vEOS-lab

The differences here are subtle but important. The use case for vEOS-lab is to simulate as much of the functionality of EOS as possible (minus any hardware-dependent features), with little importance given to forwarding performance. This includes features such as layer 2 protocols that are not relevant in a pure layer 3 router. To accomplish this, vEOS-lab includes a software-based switch emulator that allows EOS to behave as if it were connected to a hardware switch, but at the cost of packet-processing performance. In other words, vEOS-lab is designed to be a lab tool, not to forward traffic in a production environment.

vEOS router, by contrast, is designed to forward production traffic at high speeds. It leverages technologies such as SR-IOV to get closer to the underlying NIC hardware, and it omits the layer 2 protocols and switching features that are not needed in a virtual router environment. It also supports NAT and tunneling protocols such as GRE and IPsec to provide the capabilities needed for a routing gateway device.

Conclusion

There are many options when it comes to providing network functionality in a virtual machine. With the vEOS router, Arista has taken advantage of many of the enhanced forwarding capabilities available both on on-premises hypervisors and in the cloud. By leveraging technologies such as SR-IOV, the Arista vEOS router performs as a high-performance virtual router in a wide variety of deployments, while still providing all the benefits of the same EOS that powers thousands of data centers.
