Layer 2 Data Center Interconnect – Reference Designs



VxLAN is a popular choice for extending Layer 2 both intra- and inter-DC using overlays. Arista offers multiple control plane choices for VxLAN: Static HER, CVX, and EVPN. In this article, two approaches to designing an L2 DCI over an L3 underlay are discussed. High-level technical details of each design approach are described first, followed by a comparison of the two options along with their typical use cases.

Design 1: Multi-domain Overlay

In this design, two overlay domains are identified:

  • DC Fabric domain: This is the VxLAN domain within the DC Layer 3 Leaf-Spine Fabric with Leafs acting as VTEPs.
  • DCI domain: This is the VxLAN domain across the DCI spanning multiple data centers with DCI Leafs acting as VTEPs.

The two overlay domains have independent VxLAN control planes, and dot1q trunking is used to stitch the data planes together. The Edge Leaf VTEPs mark the boundary of the DC Fabric VxLAN domain and connect to the DCI Leaf VTEP pair via 802.1Q trunks. So, the overlay traffic traversing between the Edge and DCI Leafs is carried as native 802.1Q-tagged frames without any VxLAN encapsulation. The replication domain for BUM traffic is localized within the DC, i.e., each VTEP within the DC Fabric VxLAN domain will only see the DC-local VTEPs in its flood list plus the local Edge VTEPs. This reduces the overall volume of BUM traffic traversing the DCI.
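As a hedged sketch of the dot1q handoff (interface numbers and VLAN IDs are illustrative, not from a validated design), the Edge Leaf side of the stitch might look like this in EOS:

```
! Edge Leaf: dot1q trunk toward a DCI Leaf
! Traffic on this link carries no VxLAN encapsulation
interface Ethernet49
   description to-DCI-Leaf-1
   switchport mode trunk
   switchport trunk allowed vlan 10,20
```

The DCI Leaf terminates the same trunk and maps the allowed VLANs into its own VxLAN (or MPLS) domain, which is what keeps the two overlay control planes independent.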

Below are the VxLAN control plane choices for the two domains:

  • DC Fabric domain: Static HER or CVX or EVPN
  • DCI domain: Static HER or EVPN

Within each DC Fabric domain, you can run any flavor of VxLAN overlay routing: direct, indirect or centralized. The DCI Leafs are strictly L2 with no overlay routing enabled i.e., no SVIs corresponding to the inter DC extended VLANs.
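For example, with direct (anycast gateway) routing in the DC Fabric domain, an inter-DC extended VLAN would carry an SVI with a shared virtual address on the Fabric Leafs, while the DCI Leafs remain purely bridged. A minimal sketch, with illustrative VLAN and addressing:

```
! DC Fabric Leaf: anycast gateway SVI for an inter-DC extended VLAN
ip virtual-router mac-address 00:1c:73:00:00:99
!
interface Vlan10
   ip address virtual 10.10.10.1/24
!
! DCI Leaf: VLAN 10 is bridged only; no SVI is configured for it
```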

In addition to VxLAN, MPLS is an alternate data plane option in the DCI domain.
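With Static HER as the DCI-domain control plane, each DCI Leaf's flood list is configured by hand and contains only the remote DCI Leaf VTEPs. A hedged sketch (loopback addresses and VNI numbering are illustrative):

```
! DCI Leaf: static head-end replication toward remote DCI Leaf VTEPs only
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 10 vni 10010
   vxlan flood vtep 192.0.2.21 192.0.2.22
```

Because the flood list is scoped to the DCI Leafs, BUM traffic from the Fabric domain is replicated across the DCI only once per remote DCI Leaf rather than once per remote Fabric VTEP.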

Design 2: Single-domain Overlay

In this design, there’s a single overlay domain spanning multiple DCs with a transparent DCI that offers only underlay IP routing. The Edge Leafs connect to the remote DC Edge Leafs via the DCI transport and have no overlay functions enabled. End-to-end VxLAN tunnels are created, i.e., tunnels originating on Leaf VTEPs in one DC terminate directly on the remote DC Leaf VTEPs. From an overlay perspective, this is a single BUM replication domain, i.e., for a VLAN stretched across DCs, each VTEP will see all the remote DC VTEPs in its flood list in addition to the local DC VTEPs.

With a single overlay domain, the following design choices are available:

2.1. End-to-End EVPN

In this design, the DC Fabric is built with EVPN as the VxLAN control plane. The Spines offer EVPN transit router functionality and reflect the EVPN routes. The Spine transit routers across DCs are logically meshed using multi-hop eBGP/EVPN peerings to interconnect the control planes of the two DC Fabric domains.
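A hedged sketch of the Spine-side peering toward a remote DC Spine transit router (ASNs, loopback addresses, and the multihop count are illustrative); keeping the next hop unchanged preserves the originating Leaf VTEP as the tunnel endpoint, so the VxLAN tunnels stay end to end:

```
! DC1 Spine: multi-hop eBGP/EVPN peering to a DC2 Spine transit router
router bgp 65001
   neighbor 192.0.2.101 remote-as 65002
   neighbor 192.0.2.101 update-source Loopback0
   neighbor 192.0.2.101 ebgp-multihop 5
   neighbor 192.0.2.101 send-community extended
   !
   address-family evpn
      neighbor 192.0.2.101 activate
      neighbor 192.0.2.101 next-hop-unchanged
```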

2.2. CVX + EVPN

In this design, the DC Fabric is built with CVX as the VxLAN control plane. In addition to offering VCS to the local VTEPs, the CVX nodes are also BGP/EVPN speakers and the control planes across DCs are stitched together using multi-hop eBGP/EVPN peerings between the CVX nodes. This design does not support VRFs in the overlay i.e., direct routing or asymmetric IRB is the only supported overlay routing option.
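On the VTEP side, registering a Leaf with CVX for VCS is a small amount of configuration; a hedged sketch with an illustrative CVX server address:

```
! Leaf VTEP registering with CVX for VxLAN Control Service (VCS)
management cvx
   server host 192.0.2.250
   no shutdown
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
   vxlan controller-client
```

The BGP/EVPN peerings that federate the DCs run between the CVX nodes themselves, so the Leaf VTEPs need no EVPN configuration in this design.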

Comparison of the two designs

Demarcation and flexibility

  • Multi-domain: Segmented approach with a dot1q handoff to provide clear demarcation between the DC Fabric and DCI domains. This design also offers (1) a choice of control planes in each domain (Static HER / CVX / EVPN) and (2) a choice of data plane encapsulation on the DCI (VxLAN / MPLS).
  • Single-domain: Single overlay domain with an extended VxLAN control plane across DCs.

Scale and complexity

  • Multi-domain: Appropriate design choice for multi-site DCI and large deployments, as it restricts VTEP/MAC/ARP scale within a DC. The design can potentially be perceived as complex, with additional devices and config knobs such as ARP reply relay.
  • Single-domain: Simple design appropriate for small and medium deployments where scale is not a constraint. Scale optimizations are possible with Symmetric IRB and selective ARP learning.

BUM replication

  • Multi-domain: The BUM replication domain is isolated between the DC Fabric and the DCI. This facilitates more efficient use of DCI bandwidth by reducing the volume of BUM traffic traversing the DCI links (DCI Leafs only see remote DCI Leafs in their flood lists).
  • Single-domain: One BUM replication domain with no control of the dynamic flood list. Each VTEP sees all the VTEPs in the remote DC(s) in its flood list in addition to the local VTEPs.

VNI administration

  • Multi-domain: Separate VNI administrative domains. From the DCI standpoint, only devices in the DCI domain (DCI Leafs) need a consistent VLAN-to-VNI mapping; the mappings within each DC Fabric are local to the DC and need not be consistent across the board.
  • Single-domain: Single VNI administrative domain across all DCs, which reduces the flexibility to translate VLAN/VNI.

Deployment fit

  • Multi-domain: dot1q separation offers more flexibility in integrating with existing brownfield deployments. Existing DCs can run any flavor of VxLAN control plane in the DC Fabric, or can even be a legacy L2LS-type deployment.
  • Single-domain: The End-to-End EVPN design works well for greenfield deployments where all DCs are built from scratch. The CVX + EVPN design facilitates federation of CVX-based DCs using BGP/EVPN and also offers an easier migration of CVX-based fabrics to EVPN.

Additional hardware

  • Multi-domain: Requires an additional pair of DCI Leaf VTEPs per DC.
  • Single-domain: No additional gear is needed.

