Migrating from legacy DC design to EVPN VXLAN Fabric

 
 

Introduction

This document is intended to provide a reference for the steps and sequence followed when:

(1) migrating a legacy 3-tier L2 network to EVPN based VXLAN environment using Leaf & Spine design

(2) migrating an L2 Leaf & Spine network with VXLAN using CVX as the control plane to EVPN based control plane

(3) migrating an L2 Leaf & Spine network with VXLAN using static VXLAN as the control plane to EVPN based control plane.

Scope

The key objective of this report is to migrate a Layer 2 datacenter to an EVPN-based VXLAN fabric using a Leaf & Spine (L3LS) design, covering various traffic types in the underlay and overlay networks.

VXLAN will be the Layer 2 overlay network in this L3LS topology. VXLAN bridging, centralized routing and distributed routing scenarios are covered in this report.

This document is an example of such migrations from L2, static VXLAN, and CVX to EVPN; it does not cover all use cases and migration strategies.

Migrating Legacy 3-tier L2 architecture to EVPN VXLAN Fabric using Leaf-Spine design

Key Considerations/Strategy for Migration

  1. Prior to migration, DL1/DL2 serve as the first-hop routers for the hosts on all the L2 racks and provide the routing function for the entire datacenter. The migration strategy should ensure that traffic between non-migrated L2 racks and EVPN+VXLAN migrated racks continues to be routed on DL1/DL2 during the migration phase. After all L2 racks are migrated to EVPN+VXLAN, the migrated Leafs will perform routing directly on the Leaf.
  2. Prior to migrating any given L2 rack, the compute hosts behind that rack will be moved to another rack to minimize application impact. After the L2 rack is migrated, the compute hosts will be moved back to their original rack.
  3. During & post migration phase, Leaf 3 and Leaf 4 will continue to be the demarcation between legacy L2 racks and EVPN+VXLAN migrated racks. Hence, they will continue to have L2 links with DL1/DL2 and also have L3 Leaf-Spine VXLAN path. 
  4. During the migration phase, routing between EVPN+VXLAN racks and non-migrated L2 racks will use the VXLAN (Symmetric IRB) path towards Core1/Core2 for south-to-north traffic, with the traffic routed at the FW. Return traffic will use the L3 links from Core1/Core2 towards DL1/DL2, and DL1/DL2 will then bridge the traffic to the non-migrated L2 rack.
  5. During & post migration phase, Inter-VLAN routing/Intra-VLAN bridging between EVPN+VXLAN migrated racks will use VXLAN(Asymmetric IRB) path for east-west traffic 
  6. Migration should eliminate the need for hosts to change their GW MAC by using the same VARP MAC on DL1/DL2 and on the migrated VXLAN VTEP Leafs (see the sketch after this list).
  7. After all L2 racks are migrated to EVPN+VXLAN, the routing function will be moved from DL1/DL2 to Leaf3-8 with minimal traffic impact. Each migrated Leaf will route inter-VLAN traffic in a non-default VRF (Asymmetric IRB); inter-VRF traffic will be VXLAN routed to Core1/Core2 using Symmetric IRB (Type 5), where the FW will route between VRFs.
    1. The Core1/Core2 BGP network statement for the host GW network will be removed from the default VRF and configured in the non-default VRF where the VXLAN SVIs are present, to force the traffic to take the VXLAN path instead of the legacy DL1/DL2 path.
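
A minimal EOS sketch of items 6 and 7 above, assuming an illustrative VARP MAC, AS number, and host GW prefix (all hypothetical):

   ! Same VARP MAC on DL1/DL2 and on every migrated VTEP Leaf, so hosts
   ! never need to re-learn their gateway MAC (MAC value is hypothetical)
   ip virtual-router mac-address 00:1c:73:00:00:99
   !
   ! Core1/Core2: move the host GW network from the default VRF into the
   ! non-default VRF that holds the VXLAN SVIs (AS/prefix illustrative)
   router bgp 65001
      no network 10.10.10.0/24
      vrf VRF1
         network 10.10.10.0/24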

Migration Procedure

This section covers, at a high level, how each of the migration steps is executed and presents the traffic loss/convergence numbers captured during the migration steps.

The pre-migration L2 running-configs of all the devices in this topology were captured and can be shared if needed.

Migration Steps

  1. Upgrade Spine-1/2 to EOS-4.20.1F (with Maintenance Mode)
    • Enable multi-agent BGP for EVPN support (a configuration sketch follows this list)
  2. Leaf-3/4 – Upgrade to EOS-4.20.1F using MLAG ISSU
    • bring down the secondary MLAG peer (running EOS-4.17.6M)
    • bring up the secondary MLAG peer (running EOS-4.20.1F)
    • bring down the primary MLAG peer (running EOS-4.17.6M)
    • bring up the primary MLAG peer (running EOS-4.20.1F)
  3. Core-1/2 7280Rs – Upgrade to EOS-4.20.1F using MLAG ISSU
    • Enable arBGP for EVPN support, the VXLAN TCAM profile, and MLAG peer MAC routing
    • bring down the secondary MLAG peer (running EOS-4.17.6M)
    • bring up the secondary MLAG peer (running EOS-4.20.1F)
    • bring down the primary MLAG peer (running EOS-4.17.6M)
    • bring up the primary MLAG peer (running EOS-4.20.1F)
  4. Enable EVPN+VRF+VXLAN+MLAG+Routing on Core-1/2 by enabling the VXLAN interface and mapping VLANs 10, 20 & 30 to VNIs 10, 20 & 30 respectively. NOTE: VXLAN routing will not be enabled for VLAN 30. Update the SVIs for VLAN 10 and 20 on Core1/2 with ‘ip address virtual’.
  5. Provision new L3 links to the Spine switches from Leaf 3/4. Create Loopback1, enable BGP (AS 65105) on these L3 links, and advertise only the Loopback0/1 IP addresses.
  6. Enable EVPN+VXLAN+MLAG on Leaf 3/4 by enabling the VXLAN interface and mapping VLAN 30 to VNI 130.
  7. vMotion vm-3 and vm-4 from srv-40 to srv-39. This frees up Leaf7/8, leaving no hosts behind them. Disable Ixia traffic to server-7/8 (behind Leaf7/8).
  8. Provision L3 links from Leaf-7/8 to the Spine switches. Create Loopback1, enable BGP on these L3 links, and advertise only the Loopback0/1 IP addresses.
  9. Enable EVPN+VRF+VXLAN+MLAG+Routing on Leaf-7/8 by enabling the VXLAN interface and mapping VLANs 10, 20 & 30 to VNIs 10, 20 & 30 respectively.
    • NOTE: VXLAN routing will not be enabled for VLAN 30. Update the SVIs for VLAN 10 and 20 on Leaf-7/8 with ‘ip address virtual’.
  10. vMotion vm-1, vm-2, vm-3 and vm-4 from srv-39 to srv-40. This frees up Leaf-5/6 (pts320/321), leaving no hosts behind them. Enable Ixia traffic to server-7/8 and disable Ixia traffic to server-5/6.
    • shut down Po31 on Leaf-5/6
  11. Shut down Po41 on Leaf-7/8
    • Remove SVIs 10 and 20 from DL1/DL2
    • Remove the network statements for SVI 10/20 from the default VRF on Core1/2, and add a network statement for SVI 10 in VRF1 and for SVI 20 in VRF2.
  12. Provision L3 links from Leaf-5/6 to the Spine switches. Create Loopback1, enable BGP on these L3 links, and advertise only the Loopback0/1 IP addresses.
  13. Enable VXLAN+MLAG+Routing on Leaf-5/6 (pts320/321) by enabling the VXLAN interface, mapping VLANs 10, 20 & 30 to VNIs 10, 20 & 30 respectively, and updating the SVIs for VLAN 10 & 20 with ‘ip address virtual’.
    • NOTE: VXLAN Routing is not going to be enabled for VLAN 30.
  14. vMotion vm-3 and vm-4 from srv-39 to srv-40.
    • Re-enable flows to/from server-5/6
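
The recurring configuration in the steps above follows this EOS pattern; the sketch below is illustrative only, with hypothetical interface numbers, addresses, and AS assignments:

   ! Steps 1/3: enable multi-agent BGP (arBGP) for EVPN; requires a reload
   service routing protocols model multi-agent
   !
   ! Steps 4/6/9/13: VXLAN interface with VLAN-to-VNI mappings
   interface Vxlan1
      vxlan source-interface Loopback1
      vxlan udp-port 4789
      vxlan vlan 10 vni 10
      vxlan vlan 20 vni 20
      vxlan vlan 30 vni 30
   !
   ! Anycast GW on the routed VLANs (VLAN 30 stays bridge-only)
   interface Vlan10
      vrf forwarding VRF1
      ip address virtual 10.10.10.1/24
   !
   ! Steps 5/8/12: underlay BGP on the new Leaf-Spine L3 links,
   ! advertising only the loopbacks (neighbor/prefixes illustrative)
   router bgp 65105
      router-id 10.0.0.3
      neighbor 192.168.103.0 remote-as 65001
      network 10.0.0.3/32
      network 10.0.1.3/32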

Result: We achieved a worst-case flow convergence of 1.5 seconds for this migration.

Migrating Static VXLAN to BGP EVPN VXLAN control plane

In the pre-migration design, routing was centralized for most of the VLANs: the Leafs performed only VXLAN bridging towards the Service Leafs, and routing was performed by the Border Gateway routers running VARP.
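
A minimal VARP sketch for such a centralized gateway on EOS, with hypothetical addresses and virtual MAC:

   ! Shared virtual MAC across both Border Gateway routers (hypothetical)
   ip virtual-router mac-address 00:1c:73:00:00:99
   !
   ! Centralized first-hop gateway for a bridged VLAN
   interface Vlan10
      ip address 10.10.10.2/24
      ip virtual-router address 10.10.10.1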

Please see the migration topology below, where we added new migration devices (Migration VTEPs) to join the legacy Static VXLAN island and the migrated EVPN island during the migration process. The objective is to perform a rack-by-rack migration without downtime and without moving applications/workloads off the rack being migrated.
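On the static side, island membership is controlled by the head-end replication flood list. A sketch on a legacy Static VTEP, using the Migration Static VTEP address from the procedure below and an illustrative existing peer:

   ! Add the Migration Static VTEP (100.1.1.2) to the flood list so BUM
   ! traffic reaches both islands during migration (peer IP illustrative)
   interface Vxlan1
      vxlan source-interface Loopback1
      vxlan vlan 10 vni 10
      vxlan flood vtep 100.1.1.2 100.1.1.10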

Key aspects of this migration approach

  1. Set high MAC aging and run GARPs on the Border routers (hs521/hs522); migrate the Compute Leafs first, followed by the Service Leafs (see the sketch after this list)
  2. Service Leafs migrated at the end 
  3. Service Leafs will send GARPs to all VTEPs, including EVPN VTEPs
  4. Migrated EVPN VTEPs will not learn GW MACs from GARPs (data-plane learning is disabled in the EVPN control plane)
  5. Migrated EVPN VTEPs also don’t flush the previously learnt MACs 
  6. Minimal routing traffic impact, as the GW MACs change only once
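
A sketch of the MAC-aging portion of item 1 on EOS; the value is illustrative (EOS defaults to 300 seconds):

   ! Raise MAC aging so learnt host/GW MACs survive the control-plane
   ! cutover without data-plane relearning (value is illustrative)
   mac address-table aging-time 1800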

Migration Procedure

  1. Bring up the first Migration Static VTEP (100.1.1.2) and connect it to the existing Spines
    • Add this Migration Static VTEP to every Static VTEP’s flood list
  2. Bring up the second migration device, the Migration EVPN VTEP (100.1.1.1), and connect it to the existing Spines
  3. Spines – enable multi-agent BGP for EVPN support (with Maintenance Mode)
    • NOTE: Perform a code upgrade, if required, using on-boot maintenance mode.
  4. Enable EVPN configs on Compute Leaf1 and the Spines
    • Add EVPN configs on the Service Leafs and Spines
    • Reload the MLAG secondary/primary of the Service Leafs (to change the BGP mode)
  5. Remove Compute#1 from the flood list of every Static VTEP and Service VTEP
    • Add the Migration Static VTEP to every Static VTEP’s flood list
    • Change the control plane of the Compute#1 Leaf to EVPN and remove its Static VXLAN flood list
  6. Enable EVPN configs on Compute Leaf2 and the Spines
    • Add EVPN configs on the Service Leafs and Spines
    • Reload the MLAG secondary/primary of the Service Leafs (to change the BGP mode)
    • Change to multi-agent BGP mode while the secondary/primary MLAG peer is within the MLAG reload timer
  7. Remove Compute#2’s static flood list, and remove Compute#2 from every Static VTEP’s flood list (only the Migration VTEP remains in the Static VXLAN flood list at this point)
    • Change the control plane of the Compute#2 Leafs to EVPN and remove the Static VXLAN flood list
  8. Enable EVPN sessions between the Spines and the Migration EVPN VTEP/Service Leafs added in Step 2.
    • Add EVPN configs on the Service Leafs, the Migration EVPN VTEP, and the Spines
    • Reload the MLAG secondary/primary of the Service Leafs (to change the BGP mode)
    • Change to multi-agent BGP mode while the secondary/primary MLAG peer is within the MLAG reload timer
  9. Remove the Static VXLAN flood list and change the control plane of the Service Leafs to EVPN (see the cutover sketch after this list)
    • Update/remove the static flood list
    • Disable GARPs on the GWs
    • Add the static VARP MAC on interface Po1 on the Service Leafs
    • Enable MAC VRFs with import-only route targets and ‘redistribute static’
    • RT export on psp108/109 (verify Step 4 is completed and all EVPN routes are programmed)
  10. Enable GARP on the Border Routers with a 1-second advertisement interval
    • Remove the static MAC entries on the Service Leafs
    • Remove ‘redistribute static’
    • Change GARP on the Border Routers back to the default 30-second advertisement interval
    • Power off the Migration VTEPs
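
The per-rack control-plane cutover (steps 5, 7, and 9) follows this EOS pattern; the sketch below uses hypothetical AS numbers, addresses, and route targets:

   ! Stop head-end replication: the leaf no longer floods to static VTEPs
   interface Vxlan1
      no vxlan flood vtep
   !
   ! EVPN overlay session towards a Spine (addresses/AS illustrative)
   router bgp 65002
      neighbor 10.0.0.11 remote-as 65000
      neighbor 10.0.0.11 update-source Loopback0
      neighbor 10.0.0.11 ebgp-multihop 3
      neighbor 10.0.0.11 send-community extended
      address-family evpn
         neighbor 10.0.0.11 activate
      !
      ! MAC VRF for VLAN 10: advertise locally learnt MACs as Type-2 routes
      vlan 10
         rd 10.0.0.21:10
         route-target both 10:10
         redistribute learned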