As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution. Redundancy

This feature extends the BGP Layer 3 VPN Import/Export and VRF Route Leaking functionality to the “default” VRF.

There are two different approaches to using IPv6 addresses for the VXLAN underlay. The first approach is to make use of

In a Service Provider network, a Provider Edge (PE) device learns VPN paths from remote PEs and uses the Route Target

This feature allows a Data Center (DC) operator to incrementally migrate their VXLAN network from IPv4 to IPv6

As described in the Multi VTEP MLAG TOI, singly connected hosts can lead to suboptimal peer link utilisation. By

RFC7432 defines the MAC/IP advertisement NLRI (route type 2) for exchanging reachability information for the MAC addresses of EVPN overlay end hosts. When an EVPN MAC/IP route contains more than one path to the same L2 destination, the EVPN MAC/IP best-path selection algorithm determines which of these paths should be considered the best path to that L2 destination.
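
As a rough illustration of such a selection, the sketch below (Python, with invented names) orders candidate paths by the MAC mobility sequence number and breaks ties with the numerically lowest originating PE address; this ordering is an assumption chosen for the example, not a description of the actual EOS implementation.

```python
from dataclasses import dataclass
from ipaddress import ip_address

@dataclass
class MacIpPath:
    """One candidate path learned from an EVPN type-2 (MAC/IP) route (illustrative)."""
    originator: str  # PE / VTEP address that advertised the route
    seq_num: int     # MAC mobility sequence number (0 when the community is absent)

def select_best_path(paths):
    # Simplified ordering: highest mobility sequence number first,
    # ties broken by the numerically lowest originator address.
    return min(paths, key=lambda p: (-p.seq_num, int(ip_address(p.originator))))

# The re-advertised path carrying a higher sequence number wins.
best = select_best_path([MacIpPath("10.0.0.1", 0), MacIpPath("10.0.0.2", 1)])
assert best.originator == "10.0.0.2"
```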

In the Centralized Anycast Gateway configuration, the Spines are configured with EVPN IRB and are used as the IP

In the Centralized Anycast Gateway configuration, the Spines are configured with EVPN IRB and are used as the IP

This feature enables support for Macro Segmentation Service (MSS) to insert security devices into the traffic path

Multihoming in EVPN allows a single customer edge (CE) to connect to multiple provider edges (PE or tunnel endpoint). In any multihoming EVPN instance (EVI), for each Ethernet segment a designated forwarder is elected using EVPN type 4 Ethernet Segment (ES) routes sent through BGP. In single-active mode, the designated forwarder (DF) is responsible for sending and receiving all traffic. In all-active mode, the DF is only used to determine whether broadcast, unknown unicast, and multicast (BUM) traffic should be forwarded to the Ethernet segment.
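
As a rough sketch of the default election procedure from RFC7432 section 8.5, the Python below orders the PEs that advertised an ES route for the same Ethernet segment by address and takes the PE at index (VLAN ID mod N) as the DF; it ignores alternative election algorithms and is illustrative only.

```python
from ipaddress import ip_address

def elect_designated_forwarder(pe_addresses, vlan_id):
    """Default DF election per RFC7432 section 8.5 (simplified).

    pe_addresses: originator addresses from the ES (type-4) routes received
    for one Ethernet segment, including the local PE's own address.
    """
    ordered = sorted(pe_addresses, key=lambda a: int(ip_address(a)))
    return ordered[vlan_id % len(ordered)]

# Example: three PEs attached to the same ES, electing the DF for VLAN 100.
print(elect_designated_forwarder(["10.0.0.3", "10.0.0.1", "10.0.0.2"], 100))
# -> "10.0.0.2" (index 100 % 3 == 1 in the ordered list)
```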

E-Tree is an L2 EVPN service (defined in RFC8317) in which each attachment circuit (AC) is assigned the role of Root or Leaf. Once roles are assigned, the following forwarding rules are enforced: traffic from a Root AC may be forwarded to any other AC, while traffic from a Leaf AC may only be forwarded to Root ACs, never to another Leaf AC.
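
The Root/Leaf constraint can be stated in one line; the Python check below is only an illustration of the RFC8317 forwarding rule, not a description of how the feature is implemented.

```python
def etree_forwarding_allowed(ingress_role, egress_role):
    """RFC8317 E-Tree constraint: Leaf ACs may not talk to other Leaf ACs."""
    return not (ingress_role == "leaf" and egress_role == "leaf")

assert etree_forwarding_allowed("root", "leaf")
assert etree_forwarding_allowed("leaf", "root")
assert not etree_forwarding_allowed("leaf", "leaf")
```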

Ethernet VPN (EVPN) is an extension of the BGP protocol introducing a new address family: L2VPN (address family

EVPN route advertisements carry an RD and RTs. RD (Route Distinguisher): prepended to the tenant’s IP prefix or MAC address to make it globally unique. RT (Route Target): a BGP extended community used to tag the EVPN route. The EVPN import policy uses the RT to select which routes from the global EVPN table are imported into the target tenant VRF.
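
Conceptually, a route is imported into a tenant VRF when at least one of its route targets matches an import RT configured for that VRF; the Python sketch below illustrates only that matching step, with all names invented for the example.

```python
def vrfs_importing_route(route_rts, vrf_import_rts):
    """Return the VRFs whose configured import route targets overlap the route's RTs."""
    return [vrf for vrf, import_rts in vrf_import_rts.items() if route_rts & import_rts]

# A route tagged with RT 65000:100 is imported only into the "blue" VRF.
print(vrfs_importing_route({"65000:100"},
                           {"blue": {"65000:100"}, "red": {"65000:200"}}))
# -> ["blue"]
```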

This feature adds control plane support for inter-subnet forwarding between EVPN and IPVPN networks. It also

In the traditional data center design, inter-subnet forwarding is provided by a centralized router: traffic traverses the network to a centralized routing node and back again to its final destination. In a large multi-tenant data center environment, this operational model can lead to inefficient use of bandwidth and sub-optimal forwarding.

This feature adds control plane support for inter-subnet forwarding between EVPN networks. This support is achieved

This feature is available when configuring Layer 2 EVPN or EVPN IRB. As described in RFC7432 section 15

“MLAG Domain Shared Router MAC” is a mechanism that introduces a new router MAC to be used for MLAG TOR

EVPN VXLAN 4.21.3F

EVPN MPLS VPWS (RFC 8214) provides the ability to forward customer traffic to/from a given attachment circuit (AC) without any MAC lookup/learning. The basic advantage of VPWS over an L2 EVPN is the reduced control plane signalling due to not exchanging MAC address information. In contrast to LDP pseudowires, EVPN MPLS VPWS uses BGP for signalling. Port-based and VLAN-based services are supported.

In network deployments where a border leaf or superspine acts as a PEG and sits in the transit path to other multicast VTEPs, the multicast stream will not pass through, because the border leaf decapsulates the packet even if it has no local receiver. Such a transit node is called a Bud Node. The device should be able to send decapsulated packets to any local receivers while also sending encapsulated packets on to other VTEPs.
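
The bud node behavior amounts to two independent decisions per packet, decapsulating for local receivers and replicating encapsulated copies downstream; the Python sketch below illustrates just that decision, with invented names.

```python
def bud_node_actions(has_local_receivers, downstream_vteps):
    """Replication decisions a bud node makes for one received multicast packet (illustrative)."""
    actions = []
    if has_local_receivers:
        actions.append("decapsulate and deliver to local receivers")
    for vtep in downstream_vteps:
        actions.append(f"forward an encapsulated copy toward {vtep}")
    return actions

# A transit border leaf with no local receivers must still forward the encapsulated stream.
print(bud_node_actions(False, ["10.0.0.7", "10.0.0.8"]))
```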

EVPN Multihoming defines a mechanism for multihoming PEs to quickly signal, to remote PEs, a failure in Ethernet Segment (ES) connectivity with the use of the Ethernet A-D per ES route.
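
The benefit of the per-ES route is that a single withdrawal invalidates the failed PE as a next hop for every MAC bound to that Ethernet segment, rather than withdrawing MACs one by one; the toy Python table below illustrates the idea and is not the actual data structure used by EOS.

```python
def withdraw_ethernet_segment(mac_table, failed_esi, failed_pe):
    """Remove one PE as a next hop for every MAC bound to the failed ESI (simplified)."""
    for entry in mac_table.values():
        if entry["esi"] == failed_esi and failed_pe in entry["nexthops"]:
            entry["nexthops"].remove(failed_pe)

table = {"00:1c:73:00:00:01": {"esi": "ESI-1", "nexthops": ["pe1", "pe2"]}}
withdraw_ethernet_segment(table, "ESI-1", "pe1")
print(table)  # traffic keeps flowing via "pe2" without waiting for per-MAC withdrawals
```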

Multihoming in EVPN allows a single customer edge (CE) to connect to multiple provider edges (PE or tunnel endpoint).

In EVPN, an overlay index is a field in type-5 IP Prefix routes that indicates that they should resolve indirectly rather than using resolution information contained in the type-5 route itself. Depending on the type of overlay index, this resolution information may come from type-1 auto discovery or type-2 MAC+IP routes. For this feature the gateway IP address field of the type-5 NLRI is used as the overlay index, which matches the target IPv4 / IPv6 address in the type-2 NLRI.
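
Conceptually, resolution is a lookup of the type-5 gateway IP among the IP addresses carried in received type-2 routes, borrowing their forwarding information; the Python sketch below shows only that lookup, with invented field names.

```python
def resolve_type5_route(gateway_ip, type2_routes):
    """Resolve a type-5 prefix via its gateway IP overlay index (simplified).

    type2_routes maps an advertised host IP to the forwarding information
    carried by the matching MAC+IP (type-2) route.
    """
    return type2_routes.get(gateway_ip)

type2 = {"192.0.2.1": {"vtep": "10.0.0.4", "router_mac": "00:1c:73:aa:bb:cc"}}
# A type-5 route whose gateway IP is 192.0.2.1 resolves via that type-2 route.
print(resolve_type5_route("192.0.2.1", type2))
```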

As described in the L3 EVPN VXLAN Configuration Guide, it is common practice to use Layer 3 EVPN to provide multi

Flexible cross-connect service is an extension of EVPN MPLS Virtual Private Wire Service (VPWS) (RFC 8214). It allows for multiplexing multiple attachment circuits across different Ethernet Segments and physical interfaces into a single EVPN VPWS service tunnel while still providing single-active and all-active multi-homing.
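
One way to picture a VLAN-aware flexible cross-connect is a single service tunnel whose frames are demultiplexed to the correct attachment circuit by their normalized VLAN ID; the Python sketch below shows only that table lookup, and all names and fields are assumptions made for the example.

```python
def demux_to_attachment_circuit(service_label, vlan_id, fxc_table):
    """Pick the attachment circuit for a frame arriving on a shared FXC tunnel,
    keyed by (service label, normalized VLAN ID) -- illustrative only."""
    return fxc_table.get((service_label, vlan_id))

fxc = {(1000, 10): "Ethernet1.10", (1000, 20): "Ethernet2.20"}
print(demux_to_attachment_circuit(1000, 20, fxc))  # -> "Ethernet2.20"
```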

In EOS 4.22.0F, EVPN VXLAN all-active multihoming L2 support is available. A customer edge (CE) device can connect to

Ethernet VPN (EVPN) networks normally require some measure of redundancy to reduce or eliminate the impact of outages and maintenance. RFC7432 describes four types of route to be exchanged through EVPN, with a built-in multihoming mechanism for redundancy. Prior to EOS 4.22.0F, MLAG was available as a redundancy option for EVPN with VXLAN, but not multihoming. EVPN multihoming is a multi-vendor standards-based redundancy solution that does not require a dedicated peer link and allows for more flexible configurations than MLAG, supporting peering on a per interface level rather than a per device level. It also supports a mass withdrawal mechanism to minimize traffic loss when a link goes down.

EVPN gateway support for all-active (A-A) multihoming adds a new redundancy model to our multi-domain EVPN solution introduced in [1]. This deployment model introduces the concept of a WAN Interconnect Ethernet Segment identifier (WAN I-ESI) which is shared by gateway nodes within the same domain (site) and set in MAC-IP routes that cross domain boundaries. The WAN I-ESI allows the gateway’s EVPN neighbors to form L2 and L3 overlay ECMP on routes re-exported by the gateway.

This feature enables support for an EVPN VXLAN control plane in conjunction with Arista’s OpenStack ML2 plugin for

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from an IPv6 host to

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6

In a traditional EVPN VXLAN centralized anycast gateway deployment, multiple L3 VTEPs serve the role of the

Typical Wi-Fi networks utilize a single, central Wireless LAN Controller (WLC) to act as a gateway between the

In an EVPN deployment with a VXLAN underlay, when an EVPN type 5 prefix is imported into an IP VRF, the IGP cost of the underlay

EVPN 4.25.2F

This solution allows delivery of IPv6 multicast traffic in an IP-VRF using IPv4 multicast in the underlay network. The protocol used to build multicast trees in the underlay network is PIM Sparse Mode.

Several customers have expressed interest in using IPv6 addresses for VXLAN underlay in their Data Centers (DC). Prior to 4.24.1F, EOS only supported IPv4 addresses for VXLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only.

As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution. For effective

This solution allows the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in a VLAN using

This solution optimizes the delivery of multicast to a VLAN over an Ethernet VPN (EVPN) network. Without this solution, IPv6 multicast traffic in a VLAN is flooded to all Provider Edge (PE) devices that contain the VLAN.

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data

EVPN VXLAN 4.26.1F

This feature extends the previously introduced multi-domain EVPN VXLAN feature to support interconnect with EVPN MPLS networks. The following diagram shows a multi-domain deployment with EVPN VXLAN in the data center and EVPN MPLS in the WAN. Note that this is the only supported deployment model, and that an EVPN MPLS network cannot peer with another EVPN MPLS network.

[L2 EVPN] and  [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast

This solution allows delivery of multicast traffic in an IP VRF using multicast in the underlay network. It builds on

The Multicast EVPN IRB solution allows for the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in L3VPNs using multicast in the underlay network. This document contains only the partial information that is new or different for the Multicast EVPN Multiple Underlay Groups solution.

[L2 EVPN] and [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in an L2VPN and L3VPNs, respectively, using multicast in the underlay network.

PIM External Gateways (PEGs) allow an EVPN overlay multicast network to interface with an external PIM domain. They can be used to interconnect two data centers using an external PIM domain in between them.

Private VLAN is a feature that segregates a regular VLAN broadcast domain while maintaining all ports in the same IP

Arista MLAG supports STP for Layer 2 loop detection. In fact, most customers enable STP in their MLAG(s) to ensure no