EVPN configuration – An eBGP EVPN over eBGP network design

 
 

Introduction

This document describes the operation and configuration of an eBGP design for the exchange of EVPN routes in a VXLAN overlay network, in conjunction with MLAG. A single eBGP session can be configured to advertise both the underlay routes (loopback IPs) and the overlay EVPN routes; however, to provide operational separation between the overlay and underlay networks, this document discusses an eBGP over eBGP design, where separate eBGP sessions are created for the underlay and overlay advertisements. Unless specifically noted, the configuration and guidance within the document are based on the platforms and EOS releases noted in the table below.

Platform                     Software Release
7050X Series                 EOS release 4.18.1
7050X2 Series                EOS release 4.18.1
7060X Series                 EOS release 4.18.1
7160 Series                  EOS release 4.18.1
7280SE/7280R/7500R/7050E     EOS release 4.18.1

Leaf spine underlay architecture

VXLAN provides the ability to decouple and abstract the logical topology from the physical underlay network by using MAC-in-IP encapsulation. The logical topologies created by the encapsulation, often referred to as the ‘Overlay Network’, enable the construction of multi-tenant layer 2 and layer 3 environments over a single shared physical infrastructure. Within the context of a data center deployment, the physical IP infrastructure or underlay network is typically constructed from a fully redundant layer 3 leaf-spine architecture, utilising eBGP for propagation of the underlay routes and ECMP for traffic load-balancing, as illustrated in the figure below.

Figure 1: eBGP design for the network underlay

In the eBGP underlay design, the leaf switches within each rack are deployed within the same AS (65001), while the spine switches are configured within a single AS (65000). To provide fast failover and avoid the need to modify any BGP timers, the BGP sessions are configured on the physical interfaces of both the leaf and spine switches. BFD can be configured on each BGP session; however, given the session is running on the physical link, failure of the link will result in sub-second failure of the eBGP session and withdrawal of its associated routes. BFD therefore only adds overhead to the design without providing any significant value.

Figure 2: eBGP sessions running on the physical interface of the network nodes

Running the eBGP sessions on the physical interfaces directly connecting the leaf and spine nodes also removes the need for an IGP in the underlay for the advertisement of the loopback addresses. In the four-spine topology illustrated in figure 2 above, each leaf switch will have four eBGP peerings, one to each of the spine switches.
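As noted above, BFD is optional in this design, since the eBGP sessions run directly on the point-to-point links. For reference, if BFD were still required it can be enabled per neighbor or per peer-group under the BGP process. The short sketch below is purely illustrative and assumes the SPINE peer-group defined in the underlay configuration later in this document:

router bgp 65001
  neighbor SPINE bfd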

MLAG and Logical VTEP

To provide resiliency within the rack, the leaf switches are deployed in an MLAG configuration. Arista’s Multi-Chassis LAG (MLAG) technology provides the ability to build a loop-free, active-active layer 2 topology. The technology operates by allowing two physical Arista switches to appear as a single logical switch (MLAG domain); third-party switches, servers or neighbouring Arista switches connect to the logical switch via a standard port-channel (static, passive or active), with the physical links of the port-channel split across the two switches of the MLAG domain. With this configuration, all links and switches within the topology are active and forwarding traffic; with no loops in the topology, the configuration of spanning-tree becomes optional.

Multi-Chassis LAG (MC-LAG)
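For illustration, a dual-homed server or switch is attached to the MLAG domain with an identically numbered port-channel configured on both MLAG peers, bound together with a matching MLAG ID. The interface, port-channel and VLAN numbers below are hypothetical and do not form part of the topology described in this document; the same configuration would be applied on both switches of the MLAG domain.

!
interface Ethernet10
  description To_Host-A
  channel-group 10 mode active
!
interface Port-Channel10
  description To_Host-A
  switchport access vlan 10
  mlag 10
!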

From a layer 3 perspective, the two switches of the MLAG domain act as independent layer 3 routers. Each switch therefore runs its own separate eBGP peering to each of the Spine switches. To ensure failover even in the event of a leaf switch losing all four links to the Spine, an iBGP session is also run across the peer-link of the MLAG, providing a backup path to the Spine.

eBGP peering for an MLAG domain in a layer 3 leaf-spine topology

To provide resiliency and active-active forwarding in the VXLAN overlay network, the switches within the MLAG domain act as a single logical VTEP sharing a loopback IP address, which would be the source IP for any VXLAN encapsulated traffic.

MLAG – logical VTEP

To provide IP connectivity to the logical VTEP in the underlay network, the MLAG peers advertise the shared loopback (2.2.2.1 in the figure above) into the underlay network via eBGP. With each physical switch eBGP peering with each of the four Spine switches, every remote VTEP will have four equal-cost routes to the logical VTEP, thereby providing ECMP load-balancing across the Spines to the logical VTEP.

The ECMP model load-balances traffic across all links and nodes in the network based on a hash of the 5-tuple of the routed packet, with the resultant hash deciding which particular link the packet is forwarded on. This provides an active-active forwarding model, where all links and nodes in the topology are active and forwarding traffic. The classic hashing model can however cause disruption to all flows when a link or node in the network fails, as all flows in the network are re-hashed to the remaining links rather than just the flows affected by the failed link. To avoid this level of disruption, Arista switches support resilient ECMP, where only the flows affected by the failed link are re-hashed to a new link; all other flows are unaffected by the failure and continue to be hashed to the same link as before the failure.
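Resilient ECMP is enabled on a per-prefix basis. As a rough sketch only (the exact syntax and the supported capacity and redundancy values vary by platform and EOS release, so they should be verified against the relevant configuration guide), the shared loopback of the remote logical VTEP could be protected as follows, with a capacity matching the four Spine next-hops:

ip hardware fib ecmp resilience 2.2.2.2/32 capacity 4 redundancy 2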

With both MLAG peers announcing the shared IP address of the logical VTEP, in the event of a switch failure within the MLAG domain there will be zero disruption to the forwarding plane. A switch failure in the MLAG domain will result in its physical links to the Spine being brought down, along with the BGP sessions and any corresponding routes learnt via those sessions, including the logical VTEP IP.

 

Remote VTEPs, however, will still learn the shared IP address via the remaining MLAG peer and continue to send traffic to the same logical IP address, which is load-balanced across the four ECMP routes learnt via the Spine. Due to the MLAG peer failure, each Spine switch will now have a single route to the logical IP, via the remaining leaf in the MLAG domain.

 

The network underlay configuration

The figure below outlines a typical layer 3 leaf-spine topology, with a four-switch Spine and two racks, each rack containing a pair of leaf switches deployed in an MLAG configuration. To provide the connectivity into the overlay network, each MLAG domain is configured as a logical VTEP (VTEP-1 and VTEP-2), each with a shared loopback IP. The following configuration outlines the network underlay configuration (eBGP, MLAG and VTEP) for the leaf switches in rack-1, and the eBGP underlay configuration for two of the Spine switches, Spine-1 and Spine-2.

Leaf MLAG configuration

The leaf switches are configured with a standard MLAG configuration, where a port-channel 1000 (Ethernet 49 and 50) is created on both switches for the MLAG peer link, and interface Vlan 4094 is configured for the MLAG peering session across the port-channel. A second interface, Vlan 4093, is created on the peer-link, which will be used for the iBGP session between the two switches. To ensure both VLANs (4094 and 4093) are not automatically created on any other trunk link of the switch unless specifically configured, both VLANs are defined within their own trunk group (MLAG and LEAF_PEER_L3).

Leaf-11

!

vlan 4094

  name MLAG_PEER

  trunk group MLAG

!

vlan 4093

  name LEAF_PEER_L3

  trunk group LEAF_PEER_L3

!

interface Vlan4094

  ip address 172.168.10.1/30

!

interface Vlan4093

  ip address 172.168.11.1/30

!

interface Ethernet49

  channel-group 1000 mode active

!

interface Ethernet50

  channel-group 1000 mode active

!

interface port-channel 1000

  description To Leaf-Peer

  switchport trunk allowed vlan 2-4094

  switchport mode trunk

  switchport trunk group LEAF_PEER_L3

  switchport trunk group MLAG

!

mlag configuration

  domain-id Rack-1

  local-interface Vlan4094

  peer-address 172.168.10.2

  peer-link port-channel 1000

!

Leaf-12

!

vlan 4094

  name MLAG_PEER

  trunk group MLAG

!

vlan 4093

  name LEAF_PEER_L3

  trunk group LEAF_PEER_L3

!

interface Vlan4094

  ip address 172.168.10.2/30

!

interface Vlan4093

  ip address 172.168.11.2/30

!

interface Ethernet49

  channel-group 1000 mode active

!

interface Ethernet50

  channel-group 1000 mode active

!

interface port-channel 1000

  description To Leaf-Peer

  switchport trunk allowed vlan 2-4094

  switchport mode trunk

  switchport trunk group LEAF_PEER_L3

  switchport trunk group MLAG

!

mlag configuration

  domain-id Rack-1

  local-interface Vlan4094

  peer-address 172.168.10.1

 peer-link port-channel 1000

!

Leaf logical VTEP configuration

To create a logical VTEP for the MLAG domain, a shared loopback IP address (2.2.2.1) is configured on both switches, and the VXLAN interface is created with the shared loopback as the source IP for the VTEP interface. No overlay networks are created on the VXLAN interface at this point. A separate, unique loopback IP address (Loopback0) is configured on each switch, which is used as the router-id and as the source for the overlay EVPN eBGP sessions.

Leaf-11

!

interface Loopback0

  ip address 1.1.1.11/32

!

interface Loopback1

  ip address 2.2.2.1/32

!

interface Vxlan1

  vxlan source-interface Loopback1

  vxlan udp-port 4789

!

Leaf-12

!

interface Loopback0

  ip address 1.1.1.12/32

!

interface Loopback1

  ip address 2.2.2.1/32

!

interface Vxlan1

  vxlan source-interface Loopback1

  vxlan udp-port 4789

!

eBGP underlay configuration

The EVPN BGP functionality is provided by a new BGP agent; therefore, before configuring any eBGP functionality for either the underlay or overlay networks on the leaf or spine switches, the new BGP agent needs to be enabled on each of the switches with the command below (note that changing the routing protocol model typically requires a reload of the switch to take effect):

CLI command to enable EVPN BGP agent on the Leaf and Spine nodes

service routing protocols model multi-agent

Leaf underlay eBGP configuration

Both leaf switches in the MLAG domain have four uplinks, one to each of the Spine switches. A point-to-point routed interface is created on each uplink, an eBGP neighbour is defined for each Spine, and a separate iBGP session is created across the MLAG peer link. To simplify the BGP configuration, separate peer-groups are created for the Spine neighbours and the MLAG peer. To provide IP connectivity to both the shared and dedicated loopbacks, a route-map is created announcing the loopback IP addresses into the underlay.

Leaf-11

!

interface Ethernet2

  description To_Spine-1

  no switchport

  ip address 172.168.1.2/30

!

interface Ethernet3

  description To_Spine-2

  no switchport

  ip address 172.168.1.6/30

!

interface Ethernet4

  description To_Spine-3

  no switchport

  ip address 172.168.1.10/30

!

interface Ethernet5

  description To_Spine-4

  no switchport

  ip address 172.168.1.14/30

!

ip prefix-list loopback

   seq 10 permit 1.1.1.11/32

   seq 11 permit 2.2.2.1/32

!

route-map loopback permit 10

  match ip address prefix-list loopback

!

router bgp 65001

  router-id 1.1.1.11

  maximum-paths 4

  neighbor LEAF_PEER peer-group

  neighbor LEAF_PEER remote-as 65001

  neighbor LEAF_PEER next-hop-self

  neighbor LEAF_PEER maximum-routes 12000

  neighbor SPINE peer-group  

  neighbor SPINE remote-as 65000

  neighbor SPINE route-map loopback out

  neighbor SPINE allowas-in 1

  neighbor SPINE send-community

  neighbor SPINE maximum-routes 20000

  neighbor 172.168.1.1 peer-group SPINE

  neighbor 172.168.1.5 peer-group SPINE

  neighbor 172.168.1.9 peer-group SPINE

  neighbor 172.168.1.13 peer-group SPINE

  neighbor 172.168.11.2 peer-group LEAF_PEER

  redistribute connected route-map loopback

  !

Leaf-12

!

interface Ethernet2

  description To_Spine-1

  no switchport

  ip address 172.168.2.2/30

!

interface Ethernet3

  description To_Spine-2

  no switchport

  ip address 172.168.2.6/30

!

interface Ethernet4

  description To_Spine-3

  no switchport

  ip address 172.168.2.10/30

!

interface Ethernet5

  description To_Spine-4

  no switchport

  ip address 172.168.2.14/30

!

ip prefix-list loopback

   seq 10 permit 1.1.1.12/32

   seq 11 permit 2.2.2.1/32

!

route-map loopback permit 10

  match ip address prefix-list loopback

!

router bgp 65001

  router-id 1.1.1.12

  maximum-paths 4

  neighbor LEAF_PEER peer-group

  neighbor LEAF_PEER remote-as 65001

  neighbor LEAF_PEER next-hop-self

  neighbor LEAF_PEER maximum-routes 12000

  neighbor SPINE peer-group  

  neighbor SPINE remote-as 65000

  neighbor SPINE route-map loopback out

  neighbor SPINE allowas-in 1

  neighbor SPINE send-community

  neighbor SPINE maximum-routes 20000

  neighbor 172.168.2.1 peer-group SPINE

  neighbor 172.168.2.5 peer-group SPINE

  neighbor 172.168.2.9 peer-group SPINE

  neighbor 172.168.2.13 peer-group SPINE

  neighbor 172.168.11.1 peer-group LEAF_PEER

  redistribute connected route-map loopback

  !

Spine underlay eBGP configuration

The configuration below outlines the eBGP configuration on two of the Spine switches, Spine-1 and Spine-2; a similar configuration would apply to the other Spine switches, Spine-3 and Spine-4.

Spine-1

!

interface Loopback0

  ip address 1.1.1.1/32

!

interface Ethernet2

  description To_Leaf-11

  no switchport

  ip address 172.168.1.1/30

!

interface Ethernet3

   description To_Leaf-12

  no switchport

  ip address 172.168.2.1/30

!

interface Ethernet4

   description To_Leaf-21

  no switchport

  ip address 172.168.3.1/30

!

interface Ethernet5

  description To_Leaf-22

  no switchport

  ip address 172.168.4.1/30

!

ip prefix-list loopback

   seq 10 permit 1.1.1.1/32

!

route-map loopback permit 10

  match ip address prefix-list loopback

!

Spine-2

!

interface Loopback0

  ip address 1.1.1.2/32

!

interface Ethernet2

  description To_Leaf-11

  no switchport

  ip address 172.168.1.5/30

!

interface Ethernet3

   description To_Leaf-12

  no switchport

  ip address 172.168.2.5/30

!

interface Ethernet4

   description To_Leaf-21

  no switchport

  ip address 172.168.3.5/30

!

interface Ethernet5

  description To_Leaf-22

  no switchport

  ip address 172.168.4.5/30

!

ip prefix-list loopback

   seq 10 permit 1.1.1.2/32

!

route-map loopback permit 10

  match ip address prefix-list loopback

!

 

Spine-1

!

router bgp 65000

  router-id 1.1.1.1

  maximum-paths 4

  neighbor LEAF peer-group

  neighbor LEAF remote-as 65001

  neighbor LEAF maximum-routes 20000

  neighbor 172.168.1.2 peer-group LEAF

  neighbor 172.168.2.2 peer-group LEAF

  neighbor 172.168.3.2 peer-group LEAF

  neighbor 172.168.4.2 peer-group LEAF

  redistribute connected route-map loopback

!

Spine-2

!

router bgp 65000

  router-id 1.1.1.2

  maximum-paths 4

  neighbor LEAF peer-group

  neighbor LEAF remote-as 65001

  neighbor LEAF maximum-routes 20000

  neighbor 172.168.1.6 peer-group LEAF

  neighbor 172.168.2.6 peer-group LEAF

  neighbor 172.168.3.6 peer-group LEAF

  neighbor 172.168.4.6 peer-group LEAF

  redistribute connected route-map loopback

!

eBGP underlay routing table

With the eBGP configuration on the Spine and leaf switches complete, each leaf switch will have five BGP neighbors: one eBGP session to each of the four Spine switches and a single iBGP session to its MLAG peer.

Leaf-11

Leaf-11(config)#show ip bgp summary

BGP summary information for VRF default

Router identifier 1.1.1.11, local AS number 65001

Neighbor Status Codes: m – Under maintenance

 Neighbor         V  AS           MsgRcvd   MsgSent  InQ OutQ  Up/Down State  PfxRcd PfxAcc

 172.168.1.1      4  65000          96151     96177    0    0   56d18h Estab  5      5

 172.168.1.5      4  65000          96172     96209    0    0   56d18h Estab  5      5

 172.168.1.9      4  65000          96178     96166    0    0   56d18h Estab  5      5

 172.168.1.13     4  65000          96234     96203    0    0   56d18h Estab  5      5

 172.168.11.2     4  65001          96214     96243    0    0    6d13h Estab  9      6

Leaf-11(config)#

With the four eBGP sessions and the iBGP session up, below is the resultant routing table for Leaf-11. With ECMP enabled on the leaf switches, Leaf-11 is now learning four paths to the physical loopbacks of Leaf-21 and Leaf-22 (via each of the Spine switches) and four paths to the shared loopback (2.2.2.2/32) of the logical VTEP-2 in rack-2. The leaf switch is also learning the physical loopback of each of the Spine switches, which will be used for the eBGP EVPN sessions of the overlay network later in this document.

Leaf-11

Leaf-11(config)#show ip route

VRF name: default  

Codes: C – connected, S – static, K – kernel,

      O – OSPF, IA – OSPF inter area, E1 – OSPF external type 1,

      E2 – OSPF external type 2, N1 – OSPF NSSA external type 1,

      N2 – OSPF NSSA external type2, B I – iBGP, B E – eBGP,

      R – RIP, I L1 – IS-IS level 1, I L2 – IS-IS level 2,

      O3 – OSPFv3, A B – BGP Aggregate, A O – OSPF Summary,

      NG – Nexthop Group Static Route, V – VXLAN Control Service

Gateway of last resort:

S      0.0.0.0/0 [1/0] via 192.168.1.254, Management1

B E   1.1.1.1/32 [200/0] via 172.168.1.1, Ethernet2   —> Spine-1

B E   1.1.1.2/32 [200/0] via 172.168.1.5, Ethernet3   —> Spine-2

B E   1.1.1.3/32 [200/0] via 172.168.1.9, Ethernet4   —> Spine-3

B E   1.1.1.4/32 [200/0] via 172.168.1.13,Ethernet5   —> Spine-4

C     1.1.1.11/32 is directly connected, Loopback0

B I   1.1.1.12/32 [200/0] via 172.168.11.2, Vlan4093 → iBGP to MLAG peer (Leaf-12)

B E   1.1.1.21/32 [200/0] via 172.168.1.1, Ethernet2 → ECMP to Leaf-21

                          via 172.168.1.5, Ethernet3

                          via 172.168.1.9, Ethernet4

                          via 172.168.1.13,Ethernet5

B E    1.1.1.22/32 [200/0] via 172.168.1.1,Ethernet2 → ECMP to Leaf-22

                           via 172.168.1.5,Ethernet3

                           via 172.168.1.9, Ethernet4

                           via 172.168.1.13, Ethernet5

C      2.2.2.1/32 is directly connected, Loopback1

B E    2.2.2.2/32 [200/0] via 172.168.1.1, Ethernet2 → ECMP to VTEP-2

                          via 172.168.1.5, Ethernet3

                          via 172.168.1.9, Ethernet4

                          via 172.168.1.13,Ethernet5

---------------------- Output truncated ----------------------

 

Spine-1

Spine-1(config-router-bgp)#show ip route

VRF name: default

Codes: C – connected, S – static, K – kernel,

      O – OSPF, IA – OSPF inter area, E1 – OSPF external type 1,

      E2 – OSPF external type 2, N1 – OSPF NSSA external type 1,

      N2 – OSPF NSSA external type2, B I – iBGP, B E – eBGP,

      R – RIP, I L1 – IS-IS level 1, I L2 – IS-IS level 2,

      O3 – OSPFv3, A B – BGP Aggregate, A O – OSPF Summary,

      NG – Nexthop Group Static Route, V – VXLAN Control Service

Gateway of last resort:

S      0.0.0.0/0 [1/0] via 192.168.1.254, Management1

C      1.1.1.1/32 is directly connected, Loopback0

B E    1.1.1.11/32 [20/0] via 172.168.1.2, Ethernet2

B E    1.1.1.12/32 [20/0] via 172.168.2.2, Ethernet3

B E    1.1.1.21/32 [20/0] via 172.168.3.2, Ethernet4

B E    1.1.1.22/32 [20/0] via 172.168.4.2, Ethernet5

B E    2.2.2.1/32 [20/0] via 172.168.1.2, Ethernet2

                         via 172.168.2.2, Ethernet3

B E    2.2.2.2/32 [20/0] via 172.168.3.2, Ethernet4

                         via 172.168.4.2, Ethernet5

C      172.168.1.0/30 is directly connected, Ethernet2

C      172.168.2.0/30 is directly connected, Ethernet3

C      172.168.3.0/30 is directly connected, Ethernet4

C      172.168.4.0/30 is directly connected, Ethernet5

---------------------- Output truncated ----------------------

 

The EVPN eBGP overlay configuration

With the logical VTEP IP addresses announced in the underlay and learnt by each of the leaf switches, along with the physical loopback IPs of each switch (Leaf and Spine), the overlay eBGP EVPN sessions are created on the physical loopbacks between the leaf and spine nodes. This overlay multi-hop eBGP EVPN design is highlighted in the diagram below.

The configurations below detail the EVPN eBGP configuration on the leaf nodes, Leaf-11 and Leaf-12, and the Spine nodes, Spine-1 and Spine-2. The detail does NOT include the configuration of any overlay networks to provide Layer 2 or 3 VPNs across the underlay; the configuration of the Layer 2 and 3 VPNs is covered in separate EOS Central articles.

To provide resiliency, the leaf nodes have separate eBGP EVPN sessions with each of the four Spine switches. This ensures consistent configuration and behavior across the Spine layer; regardless of which Spine fails or is placed in maintenance, the remaining Spine switches will provide the same functionality.

The EVPN session could be run on the physical interfaces connecting to the Spine, in a similar manner to the underlay; however, to maintain operational separation between the overlay and underlay networks, the EVPN eBGP session is run on the loopback IPs. To simplify the configuration, a peer-group (SPINE_EVPN) is created for the four Spine neighbours, and as the session is run on the loopback IP of both switches, multi-hop eBGP is used. The EVPN routes (Type-2 and Type-5) use a number of BGP extended communities; advertisement of extended communities therefore needs to be allowed on the peer-group.

Leaf-11

!

router bgp 65001

  router-id 1.1.1.11

  maximum-paths 4

 neighbor SPINE_EVPN peer-group

  neighbor SPINE_EVPN remote-as 65000

  neighbor SPINE_EVPN update-source Loopback0

  neighbor SPINE_EVPN ebgp-multihop 5

  neighbor SPINE_EVPN send-community extended

  neighbor SPINE_EVPN maximum-routes 12000

  neighbor 1.1.1.1 peer-group SPINE_EVPN

  neighbor 1.1.1.2 peer-group SPINE_EVPN

  neighbor 1.1.1.3 peer-group SPINE_EVPN

  neighbor 1.1.1.4 peer-group SPINE_EVPN

  !

  address-family evpn

     neighbor SPINE_EVPN activate

  !

  address-family ipv4

     no neighbor SPINE_EVPN activate

  !

  address-family ipv6

     no neighbor SPINE_EVPN activate

  !

Leaf-12

!

router bgp 65001

  router-id 1.1.1.12

  maximum-paths 4

  neighbor SPINE_EVPN peer-group

  neighbor SPINE_EVPN remote-as 65000

  neighbor SPINE_EVPN update-source Loopback0

  neighbor SPINE_EVPN ebgp-multihop 5

  neighbor SPINE_EVPN send-community extended

  neighbor SPINE_EVPN maximum-routes 12000

  neighbor 1.1.1.1 peer-group SPINE_EVPN

  neighbor 1.1.1.2 peer-group SPINE_EVPN

  neighbor 1.1.1.3 peer-group SPINE_EVPN

  neighbor 1.1.1.4 peer-group SPINE_EVPN

  !

  address-family evpn

     neighbor SPINE_EVPN activate

  !

  address-family ipv4

     no neighbor SPINE_EVPN activate

  !

  address-family ipv6

     no neighbor SPINE_EVPN activate

  !

To avoid the need for a full eBGP mesh between all the leaf nodes in the design, the Spine switches act as EVPN transit nodes: the Spine peers with each EVPN leaf node and re-advertises the EVPN routes with the next-hop unchanged (the originating VTEP IP is retained) and any extended communities intact. The configuration of Spine-1 and Spine-2, allowing this EVPN transit behavior, is shown below:

Spine-1

!

 router bgp 65000

  router-id 1.1.1.1

  distance bgp 20 200 200

  maximum-paths 4

  neighbor LEAF_EVPN peer-group

  neighbor LEAF_EVPN remote-as 65001

  neighbor LEAF_EVPN next-hop-unchanged

  neighbor LEAF_EVPN update-source Loopback0

  neighbor LEAF_EVPN ebgp-multihop 5

  neighbor LEAF_EVPN send-community extended

  neighbor LEAF_EVPN maximum-routes 12000

  neighbor 1.1.1.11 peer-group LEAF_EVPN

  neighbor 1.1.1.12 peer-group LEAF_EVPN

  neighbor 1.1.1.21 peer-group LEAF_EVPN

  neighbor 1.1.1.22 peer-group LEAF_EVPN

  !

  address-family evpn

     neighbor LEAF_EVPN activate

  !

  address-family ipv4

     no neighbor LEAF_EVPN activate

  !

  address-family ipv6

     no neighbor LEAF_EVPN activate

!

Spine-2

!

 router bgp 65000

  router-id 1.1.1.2

  distance bgp 20 200 200

  maximum-paths 4

  neighbor LEAF_EVPN peer-group

  neighbor LEAF_EVPN remote-as 65001

  neighbor LEAF_EVPN next-hop-unchanged

  neighbor LEAF_EVPN update-source Loopback0

  neighbor LEAF_EVPN ebgp-multihop 5

  neighbor LEAF_EVPN send-community extended

  neighbor LEAF_EVPN maximum-routes 12000

  neighbor 1.1.1.11 peer-group LEAF_EVPN

  neighbor 1.1.1.12 peer-group LEAF_EVPN

  neighbor 1.1.1.21 peer-group LEAF_EVPN

  neighbor 1.1.1.22 peer-group LEAF_EVPN

  !

  address-family evpn

     neighbor LEAF_EVPN activate

  !

  address-family ipv4

     no neighbor LEAF_EVPN activate

  !

  address-family ipv6

     no neighbor LEAF_EVPN activate

!

With the Spine and leaf EVPN sessions configured, each leaf switch will have four BGP EVPN sessions, one to each Spine as illustrated below:

Leaf-11

Leaf-11(config)#show bgp evpn summary

BGP summary information for VRF default

Router identifier 1.1.1.11, local AS number 65001

Neighbor Status Codes: m – Under maintenance

 Neighbor         V  AS           MsgRcvd   MsgSent  InQ OutQ  Up/Down State  PfxRcd PfxAcc

 1.1.1.1          4  65000         102830    102820    0    0   60d14h Estab  17     17

 1.1.1.2          4  65000         102765    102851    0    0   60d14h Estab  17     17

 1.1.1.3          4  65000         102829    102872    0    0   60d14h Estab  17     17

 1.1.1.4          4  65000         102829    102829    0    0   60d14h Estab  17     17

Leaf-11(config)#

Leaf-11(config-router-bgp)#show bgp neighbors 1.1.1.1

BGP neighbor is 1.1.1.1, remote AS 65000, external link

 BGP version 4, remote router ID 1.1.1.1, VRF default

 Inherits configuration from and member of peer-group SPINE_EVPN

 Last read 00:00:40, last write 00:00:06

 Hold time is 180, keepalive interval is 60 seconds

 Configured hold time is 180, keepalive interval is 60 seconds

 Hold timer is active, time left: 00:02:19

 Keepalive timer is active, time left: 00:00:48

 Connect timer is inactive

 Idle-restart timer is inactive

 BGP state is Established, up for 60d15h

 Number of transitions to established: 1

 Last state was OpenConfirm

 Last event was KeepAlive

 Neighbor Capabilities:

   Multiprotocol L2VPN EVPN: advertised and received and negotiated

   Four Octet ASN: advertised and received

   Send End-of-RIB messages: advertised and received and negotiated

   Additional-paths recv capability:

     L2VPN EVPN: advertised

   Additional-paths send capability:

     L2VPN EVPN: received

   Graceful Restart advertised:

     Restart-time is 300

     Restarting: yes

   Graceful Restart received:

     Restart-time is 300

     Restarting: yes

 Restart timer is inactive

 End of rib timer is inactive

 Message Statistics:

                        Sent      Rcvd

   Opens:                  1         1

   Updates:              131       140

   Keepalives:        102691    102691

   Notifications:          0         0

   Route-Refresh:          0         0

   Total messages:    102823    102832

Prefix Statistics:

                        Sent      Rcvd

   IPv4 Unicast:           –         0

   IPv6 Unicast:           –         0

 Configured maximum total number of routes is 12000

 Inbound updates dropped by reason:

   AS path loop detection: 0

   Malformed MPBGP routes: 0

   Originator ID matches local router ID: 0

   Nexthop matches local IP address: 0

Local AS is 65001, local router ID 1.1.1.11

TTL is 5, external peer can be 5 hops away

Local TCP address is 1.1.1.11, local port is 179

Remote TCP address is 1.1.1.1, remote port is 34490

Each Spine switch will have a BGP EVPN session to each of the leaf switches in the network, four in this case: Leaf-11 (1.1.1.11), Leaf-12 (1.1.1.12), Leaf-21 (1.1.1.21) and Leaf-22 (1.1.1.22).

Spine-1

Spine-1(config-router-bgp)#show bgp evpn summary

BGP summary information for VRF default

Router identifier 1.1.1.1, local AS number 65000

Neighbor Status Codes: m – Under maintenance

 Neighbor         V  AS           MsgRcvd   MsgSent  InQ OutQ  Up/Down State  PfxRcd PfxAcc

 1.1.1.11         4  65001         102825    102835    0    0   60d15h Estab  19     10

 1.1.1.12         4  65001         102835    102815    0    0   60d15h Estab  17     8

 1.1.1.21         4  65001         102857    102793    0    0    6d14h Estab  27     9

 1.1.1.22         4  65001         102813    102857    0    0   60d15h Estab  0      0

Spine-1(config-router-bgp)#

Spine-1(config-router-bgp)#show bgp neighbors 1.1.1.11

BGP neighbor is 1.1.1.11, remote AS 65001, external link

 BGP version 4, remote router ID 1.1.1.11, VRF default

 Inherits configuration from and member of peer-group LEAF_EVPN

 Last read 00:00:10, last write 00:00:12

 Hold time is 180, keepalive interval is 60 seconds

 Configured hold time is 180, keepalive interval is 60 seconds

 Hold timer is active, time left: 00:01:51

 Keepalive timer is active, time left: 00:00:36

 Connect timer is inactive

 Idle-restart timer is inactive

 BGP state is Established, up for 60d15h

 Number of transitions to established: 1

 Last state was OpenConfirm

 Last event was KeepAlive

 Neighbor Capabilities:

   Multiprotocol L2VPN EVPN: advertised and received and negotiated

   Four Octet ASN: advertised and received

   Send End-of-RIB messages: advertised and received and negotiated

   Additional-paths recv capability:

     L2VPN EVPN: advertised

   Additional-paths send capability:

     L2VPN EVPN: received

   Graceful Restart advertised:

     Restart-time is 300

     Restarting: yes

   Graceful Restart received:

     Restart-time is 300

     Restarting: yes

 Restart timer is inactive

 End of rib timer is inactive

 Message Statistics:

                        Sent      Rcvd

   Opens:                  1         1

   Updates:              140       131

   Keepalives:        102695    102694

   Notifications:          0         0

   Route-Refresh:          0         0

   Total messages:    102836    102826

Prefix Statistics:

                        Sent      Rcvd

   IPv4 Unicast:           –         0

   IPv6 Unicast:           –         0

 Configured maximum total number of routes is 12000

 Inbound updates dropped by reason:

   AS path loop detection: 82

   Malformed MPBGP routes: 0

   Originator ID matches local router ID: 0

   Nexthop matches local IP address: 0

Local AS is 65000, local router ID 1.1.1.1

TTL is 5, external peer can be 5 hops away

Local TCP address is 1.1.1.1, local port is 34490

Remote TCP address is 1.1.1.11, remote port is 179

 

 

 

 
