
Deploying Arista switches using CloudVision Portal (CVP)

Introduction

CloudVision Portal (CVP) is an automation and orchestration tool for managing and deploying switch configuration across an entire IP-based data center network. CVP uses a container hierarchy to organize devices into logical groups and splits device configuration into ‘configlets’, which can be applied at varying levels of the hierarchy to provide inheritance and eliminate duplicated effort when developing device configuration. This approach reduces human error through inheritance: operators can focus on device-specific configuration, knowing that general configuration, such as AAA, domain name and DNS settings, and NTP, will be applied by inheritance from parent containers. As an example, this document shows how to apply configuration at the top level and see its effect all the way down to each individual switch in the example Layer-3 Leaf Spine (L3LS) topology.

About this document

This document is intended as a hands-on guide to working with CloudVision Portal (CVP) to provision, configure and manage Arista switches. The examples are built from the Arista Standard VXLAN Validated Design and provide a concise yet comprehensive layer-3 leaf-spine configuration, including MLAG with VXLAN and VARP. This document is also intended as a companion to the Layer 3 Leaf & Spine Design and Deployment Guide. The material is derived from the CVP 2015.1.2 release and does not cover newer features introduced in CVP 2016.1 or later.

Scope

This document explores and provides examples for configuring switches through CVP in a greenfield environment, which carries a few assumptions. First, all switches are assumed to be reset to factory settings and to start in Zero Touch Provisioning (ZTP) mode; this is the out-of-the-box default behavior of an Arista switch with no startup-config stored on it. To return an Arista switch to ZTP mode, erase the startup-config and reboot; do this only on switches that are not currently in production. Second, it is assumed that you have deployed CVP according to the CloudVision Configuration Guide; please consult that guide and the installation video, as the deployment of CVP is not covered here. Third, we will not cover the use or setup of CloudVision Exchange (CVX); its deployment, use and management are out of scope, although it is worth noting that CVX runs EOS just like Arista switches and can therefore be managed by CVP, and a CVX configuration is included in the Appendix. Last, we will provide links to supporting documentation but will not discuss the rationale, merits or details of any network topology or design; that said, the configuration examples were taken from a live network environment based on the Arista Universal Cloud Network (UCN) L3LS design. The reader is assumed to have strong familiarity with the EOS Command-Line Interface (CLI) or a similar network operating system; knowledge of how to configure a switch is assumed, and detailed explanation of the commands and examples is out of scope. For an introduction to managing Arista EOS devices from the traditional CLI point of view, see the primer on EOS Central.
Readers can also review the User Manual and EOS Transfer of Information (TOI) documents for detailed command explanations.
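The reset described above is a two-command operation at the EOS CLI; run it only on a switch that is out of production:

```
! Erase the stored startup-config (you will be prompted to confirm),
! then reboot. With no startup-config present, the switch boots
! into ZTP mode.
switch# write erase
switch# reload now
```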

 

The environment

Throughout the rest of this document we will use an environment made up of 10 EOS switch devices: a 4-way layer-3 leaf-spine network with 3 pairs of MLAG leaf switches. Each point-to-point link between leaf and spine uses a /31 address space. We will use CVP’s inheritance model to build up the L3 IP leaf-spine network. First, we will use CVP’s Zero Touch Provisioning (ZTP) to collect the new switches in what is called the Undefined container. Then we will create our basic containers, representing a data center, a pod, the spine nodes and the leaf nodes. Next we will apply some basic configuration to the top-level container to ensure reachability via eAPI and to configure our basic authentication settings. Once that is complete we can move the devices into their respective containers and apply the device-specific management configuration, including the all-important management IP address. From there we will add device-specific configuration to the switches themselves via one or more CVP configlets. Please review the topology diagram below; we will use CVP to build a network based on this simple L3 leaf-spine design, with BGP as the routing protocol and VXLAN Tunnel End Points (VTEPs) deployed on the leaf switch pairs. The drawing depicts a CVP instance as the main method for modifying the configuration of the entire IP network; to support the drawing, the important configuration items are also presented in accompanying tables.

4-way ECMP L3LS lab topology

Terminology


Term Definition
Container A CVP object that can contain zero or more devices or other containers. With the exception of the Tenant container, every container has a single parent. Containers can be assigned EOS images and configlets.
Device In CVP, a device is a switch or VM running Arista EOS; example devices include a 7050SX-64 switch, a CVX vEOS VM or a vEOS-lab VM. Devices cannot contain other devices or containers.
Configlet A section or subset of the entire configuration. The configuration is broken into these units to be applied to containers and devices.
Leaf Spine A simplified two-tier network that provides sufficient, scalable east-west bandwidth with low oversubscription between tiers.
L3LS (Layer 3 Leaf Spine) A leaf-spine network that uses routed point-to-point links between the leaf and spine switches, with Equal Cost Multi-Path (ECMP) provided by BGP, IS-IS or OSPF.
MA1 The Management 1 interface; each switch has a dedicated management interface.
MLAG Multi-Chassis Link Aggregation, a method for full bisectional bandwidth without STP-blocked links.
VXLAN Virtual Extensible Local Area Network; see RFC 7348 and VXLAN Bridging with MLAG on EOS Central for more details.
VNI VXLAN Network Identifier, a unique 24-bit segment ID used in VXLAN.

 

Configuration prerequisites

It is assumed that management of the Arista switches will be done using the built-in Management 1 interface of each switch, a dedicated out-of-band management interface. Using an in-band method for management with CVP is not recommended and is out of scope for this guide. The first step is to record the Management 1 interface MAC addresses of the switches we wish to deploy. Each switch carries this MAC address on a barcode on the chassis, and it should be recorded when the switch is installed in the rack. An IP address and a hostname can then be associated with the device and recorded in a table such as the one provided below. No serial numbers are shown in the example environment, but a column has been provided in the table for re-use. Note that we have also recorded the allocated Management 1 IP address, the Loopback0 IP address (used for the BGP routing protocol) and the Loopback2 IP address (used for VXLAN on the leaf switches in the example configlets provided in the Appendix). A detailed explanation of VXLAN is beyond the scope of this document; for more information, see the Arista resources on VXLAN.

Table 1: Switch Management & Loopbacks
Switch Role MA1 MAC Address MA1 IP Addr SN Lo 0 IP Addr (BGP) Lo2 IP Addr (vxlan VTI)
EOS-10 Spine-1 0050.5672.3960 10.0.5.10/24 NA 10.254.254.10/32 NA
EOS-11 Spine-2 0050.56b4.43a6 10.0.5.11/24 NA 10.254.254.11/32 NA
EOS-12 Spine-3 0050.562a.5fd4 10.0.5.12/24 NA 10.254.254.12/32 NA
EOS-13 Spine-4 0050.566c.4b18 10.0.5.13/24 NA 10.254.254.13/32 NA
EOS-14 Leaf 1-1 0050.5628.f76a 10.0.5.14/24 NA 10.254.254.14/32 10.253.14.15/32
EOS-15 Leaf 1-2 0050.56f9.3793 10.0.5.15/24 NA 10.254.254.15/32 10.253.14.15/32
EOS-16 Leaf 2-1 0050.565d.deb6 10.0.5.16/24 NA 10.254.254.16/32 10.253.16.19/32
EOS-19 Leaf 2-2 0050.5640.840c 10.0.5.19/24 NA 10.254.254.19/32 10.253.16.19/32
EOS-20 Leaf 3-1 0050.56e7.b527 10.0.5.20/24 NA 10.254.254.20/32 10.253.20.21/32
EOS-21 Leaf 3-2 0050.566b.4683 10.0.5.21/24 NA 10.254.254.21/32 10.253.20.21/32

For this example environment we have also recorded all of the point-to-point links for the interfaces on the leaf and spine switches that will connect to each other. We will reference this table throughout the document as we build the configuration to include these elements.

Table 2: Switch L2 & L3 parameters
Switch Eth1 Eth2 Eth3 Eth4 Eth5 Eth6 Eth7 Eth8
EOS-10 10.10.11.1/31 10.10.11.11/31 10.10.12.1/31 10.10.12.11/31 10.10.13.1/31 10.10.13.11/31 NA NA
EOS-11 10.10.11.3/31 10.10.11.13/31 10.10.12.3/31 10.10.12.13/31 10.10.13.3/31 10.10.13.13/31 NA NA
EOS-12 10.10.11.5/31 10.10.11.15/31 10.10.12.5/31 10.10.12.15/31 10.10.13.5/31 10.10.13.15/31 NA NA
EOS-13 10.10.11.7/31 10.10.11.17/31 10.10.12.7/31 10.10.12.17/31 10.10.13.7/31 10.10.13.17/31 NA NA
EOS-14 10.10.11.0/31 10.10.11.2/31 10.10.11.4/31 10.10.11.6/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
EOS-15 10.10.11.10/31 10.10.11.12/31 10.10.11.14/31 10.10.11.16/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
EOS-16 10.10.12.0/31 10.10.12.2/31 10.10.12.4/31 10.10.12.6/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
EOS-19 10.10.12.10/31 10.10.12.12/31 10.10.12.14/31 10.10.12.16/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
EOS-20 10.10.13.0/31 10.10.13.2/31 10.10.13.4/31 10.10.13.6/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
EOS-21 10.10.13.10/31 10.10.13.12/31 10.10.13.14/31 10.10.13.16/31 po1 hv 1 trunk po2 hv 2 trunk po999 peer-link po999 peer-link
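The po999 peer-link entries in Table 2 imply an MLAG peering on each leaf pair. A minimal sketch of what that looks like in EOS follows the conventional Arista MLAG pattern; the domain-id, peer VLAN SVI addresses and trunk group name here are illustrative assumptions, and the working configlets are in the Appendix:

```
vlan 4094
   trunk group mlagpeer
!
no spanning-tree vlan 4094
!
interface Port-Channel999
   switchport mode trunk
   switchport trunk group mlagpeer
!
interface Vlan4094
   ! illustrative peer addressing; the partner switch uses 172.16.0.1/31
   ip address 172.16.0.0/31
!
mlag configuration
   domain-id pod1-leaf1
   local-interface Vlan4094
   peer-address 172.16.0.1
   peer-link Port-Channel999
```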

The example configuration also includes additional items that can be changed or removed based on applicability to your environment.

Table 3: Additional parameters
Component Value
name-server 10.0.5.1
domain-name lan
default route 10.0.5.1
CVX host 10.0.3.208

Initial CVP container setup

In order to configure our ten switches with CVP we must first set up the CVP hierarchy. In this example we will create a number of containers as well as the initial configlets, and apply the configlets at the top of the hierarchy. How you build this hierarchy is an organizational decision and should be based on the naming conventions and best practices established within your organization. As a simple example, we will use a hierarchy of Tenant, data center (DC1), pod (Pod1), Leaf-Nodes and Spine-Nodes. The Tenant container is the top-level default container and already exists in CVP. Any configlet applied at the top level is inherited by the nested containers and therefore by the devices within those containers.

Container creation

To create a new container, right-click on the parent container in which you want the new container to reside. Initially, right-click on the Tenant container, select the Container sub-menu and click Add. Name this new container DC1. Then create the Pod1 container by right-clicking on the DC1 container and repeating the previous steps. You can then create the Leaf-Nodes and Spine-Nodes containers under Pod1. See the two illustrations below.


Container > Add


Use Container menu to add DC1, POD1, Leaf-Nodes and Spine-Nodes containers or suitable hierarchy for your environment

Tip: The top-level container ‘Tenant’ can be renamed by double-clicking on the name and changing it to something that fits your organization and its naming conventions.

Initial CVP Configlet Creation

Download the configlets used in this article: cvp_configlets-02212016.tar. Before we boot up our switches and start to pull them into CVP management, we need to create two key configlets and apply them to the top-level container. These two configlets should be considered mandatory; they ensure two elements of reachability are in place: the eAPI management interface and authentication (AAA). CVP uses eAPI to communicate with the switches and must authenticate just like any user on the switch. You should modify the AAA configlet so that it follows your organization’s password policies and is unique to your organization. Each switch will eventually need three key pieces of configuration to be managed via CVP or the CLI: eAPI management must be turned on, AAA settings must be set, and the switch must have a management IP address. The first two items will be applied at the top-level Tenant container; the management IP address will be part of a device-specific configlet for each switch, applied to the switch itself in CVP. To create a configlet, start at the CVP home screen and click the tile labeled Configlets.


Home > Configlets

After clicking the Configlets tile, click the plus (+) sign at the top right of the interface. Name this configlet eAPI and add the following CLI commands.

management api http-commands
   no shutdown
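The two lines above are the minimum needed. If you want the configlet to be explicit about the transport, it can also pin the protocol (HTTPS is the EOS default); a sketch:

```
management api http-commands
   protocol https
   no shutdown
```

Once pushed, the service can be verified on a switch with ‘show management api http-commands’.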

Save the configlet when finished. Next, repeat these steps to create a configlet called AAA. We are creating a cvpadmin username; this is the account CVP will use throughout the rest of this document. Put the following CLI commands in the configlet.

username admin role network-admin secret 0 AristaRocks!
username cvpadmin role network-admin secret 0 CVPRocks!

Save the configlet when finished. Note: in practice it is important to use a password hash instead of the plaintext passwords shown above. You can generate a hash with the following command on a Linux machine (the one-liner uses Python 2 syntax and the crypt module).

$ python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print crypt.crypt('AristaRocks!', '\$6\$%s\$' % randomsalt)"
$6$eohiQvNl$2dj/Zwf3DUDVo9iMLU6zq4Ey93o5x22pCLJaHdTOjsTbvQJ452WLy4tIYHR14nwt47AiPrFRBJcFQ98PpcE2O1

$ python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print crypt.crypt('CVPRocks!', '\$6\$%s\$' % randomsalt)"
$6$oiHcrRja$ghv9SG9/nFZiqj1lnRgEqkkA4Y0ce86MRR2WuKWQ.RuSphsgLj1eq1hVVasKp0qAALA9xgf6UUnXczLWb6TQF0

If you use the commands above, you can paste the resulting lines into your AAA configlet and keep your plaintext passwords safe.

username admin role network-admin secret sha512 $6$eohiQvNl$2dj/Zwf3DUDVo9iMLU6zq4Ey93o5x22pCLJaHdTOjsTbvQJ452WLy4tIYHR14nwt47AiPrFRBJcFQ98PpcE2O1
username cvpadmin role network-admin secret sha512 $6$oiHcrRja$ghv9SG9/nFZiqj1lnRgEqkkA4Y0ce86MRR2WuKWQ.RuSphsgLj1eq1hVVasKp0qAALA9xgf6UUnXczLWb6TQF0
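If Python 2 is not available, modern OpenSSL can produce an equivalent SHA-512 crypt hash directly; a sketch, assuming OpenSSL 1.1.1 or later:

```shell
# Generate a SHA-512 crypt hash for the AAA configlet.
# The salt is fixed here only to make the example reproducible;
# omit -salt to let OpenSSL pick a random one.
openssl passwd -6 -salt examplesalt 'AristaRocks!'
```

The output begins with $6$ followed by the salt and the hash, which is the same format EOS expects after 'secret sha512'.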

Alternatively, you can copy the hash from an existing switch that uses the same password. Once these two configlets are created, we apply them to the top-level container, Tenant. By doing this, all devices under CVP management will have eAPI management turned on and will have these AAA settings. Note that your AAA configuration may be of any level of sophistication; this example is simple, but you could, for example, set up RADIUS or TACACS in this configlet as well. If your environment requires different authentication settings, for example in DC1 vs. DC2, you can apply the AAA configlet at a lower level such as the DC container. It is important that all devices have this configuration, so apply it at the highest level possible for your environment. To apply these configlets to the Tenant container, start from the CVP home screen, click the Network Provisioning tile, then right-click on the Tenant container, choose Manage and click Configlets as shown below.


Home > Network Provisioning: Manage configuration on Devices and Containers

 

Manage > Configlets: Right click on Tenant Container and choose Configlets

Place a check in the box for the AAA and eAPI configlets and click Update, as shown in the illustration below.

Configlets > Tenant: Select the AAA and eAPI Configlets by checking the boxes

High level configuration with configlets

In addition to the eAPI and AAA configlets, there are a number of other configlets we will create that apply at the container level and are inherited by all switches below that point in the hierarchy. Here is a list of these configlets and a brief description of what they contain. See Appendix A Table 1 for the complete list and contents of all configlets used.

Table 4: Container level configlets
Container Configlets Applied Purpose
DC1 CVX-Client-Config Setup CVX Management
  Domain-Config Name Servers, domain name
  SNMP-Config SNMP community and IP Address of server
  Switch-Defaults change from MST to Rapid-PVST, set timezone, turn on IP routing, see Appendices for more detail
  Aliases command aliases and multi-line aliases used to abbreviate longer complex commands

This document examines three specific examples in detail, but if you are using it to set up your own environment you should create all of these configlets using the content from Appendix A Table 1, editing it to match your network addressing and namespace. Below are the contents of the Domain-Config configlet.

ip name-server vrf default 10.0.5.1
ip domain-name lan


Domain-Config: An example of a Configlet that can be applied to the DC1 Container
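The Switch-Defaults entry in Table 4 can be sketched in the same way. This is a minimal illustrative version based on Table 4’s description (Rapid-PVST, timezone, IP routing); the timezone shown is an assumption, and the actual configlet is in Appendix A:

```
spanning-tree mode rapid-pvst
!
ip routing
!
! the timezone below is illustrative; use your own
clock timezone America/Los_Angeles
```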

The same procedure used for the eAPI and AAA configlets applies here. Navigate to the Network Provisioning tile on the home screen, right-click on the DC1 container, select Manage and click Configlets. Select each of the configlets you have created from Table 4 above and click Update when finished. Do not forget to click Save on the Network Provisioning page. Now, any container or device below DC1 will get all of these configuration statements. Once this is complete we have a couple more tasks before we can start booting up our switches and pulling them into CVP through the Zero Touch Provisioning (ZTP) capability Arista includes with EOS.

Images

Images in CVP are EOS network operating system images that you upload to the CVP server. Like configlets, images can be applied to containers or devices. The image is uploaded to the device during the zero-touch provisioning process; CVP also updates the boot variable and reloads the switch so it runs the EOS version you have specified. In our environment we will associate an image with the DC1 container so that any new device provisioned through CVP’s ZTP bootstrap process is upgraded, or downgraded, to this particular version of EOS. We could have applied the image at the top-level Tenant container, but we chose DC1 in this case.

It is possible, though probably not best practice, to use the same CVP server to manage both vEOS lab devices and physical EOS switches. As this discussion of images and containers shows, it is important that all vEOS instances are placed in their own top-level container and all physical EOS switches in theirs; you would not, for example, want a vEOS image applied to a new physical switch as part of the ZTP process. In this environment we have also placed our CVX instance into its own container, since some general configuration inherited by switches could be unnecessary or even harmful for CVX, which runs EOS but serves a different role. CVP users will want to plan, and possibly iterate on that plan once implemented, to build a container hierarchy best suited to leveraging the benefits of inheritance and automation. It is also a good idea to run thought experiments with colleagues to imagine which scenarios should be avoided in your environment. A vEOS lab with CVP is a way to iterate on this plan far from the production environment, and can be leveraged in many similar ways to reduce risk and improve the outcomes of change management activities.

In short, vEOS and CVP provide a robust lab environment that can be used for training, change management practice sessions, eAPI or EosSdk development and so on, and are a great companion to a physical EOS switch environment managed by CVP. The time invested in this effort pays off in reduced operational expense and in better qualified support engineers and cloud architects. Please note that the minimum EOS version supported by CVP is 4.14.8M. Conveniently, CVP has no problem upgrading a switch during ZTP to 4.14.8M or higher, regardless of what version the switch is running at provisioning time. Just remember that if you add a device running a version prior to 4.14.8M, you will have to upgrade the switch before CVP can manage it properly; this is a perfect opportunity to leverage automation to bring new switches to a specific CVP-supported version.
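Once a switch has been provisioned, the image CVP installed and the boot variable it set can be checked from the switch CLI, for example:

```
! Confirm the running EOS version and the configured boot image
switch# show version
switch# show boot-config
```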

Creating an Image Bundle in CVP

First, upload an image by choosing Images from the CVP home screen.


Home > Images

Click the plus (+) sign to add a new image bundle. Give the bundle a name; as an example we use the EOS file name without the extension, i.e., EOS-4.15.2F. After you have named the image bundle, click the folder icon on the far right to upload the EOS .swi file from your local machine. Once you select the local file, the image will begin to upload. When the upload completes, click Save to finish creating the image bundle.

Associating an image bundle with a container

Here are the steps to apply an image to the DC1 container in CVP. From the home screen select Network Provisioning as in previous examples. Right-click on the DC1 container, choose Manage and click Image Bundle. Check the box for the image you want to apply and then click Update. That’s it: the DC1 container now provides EOS image inheritance for all switches provisioned using CVP with ZTP, resulting in consistent deployment of your current EOS version on all new switches added to your environment.

Configlet Theory – or How many Configlets do I need?

Throughout the rest of this document our examples apply various configlets to the DC1 container and to the devices themselves. By separating the hierarchy at the pod level and separating leaf nodes from spine nodes, we can apply configuration differently depending on the pod number or the role of the switch; the ability to apply configuration at any of these containers can become useful throughout the lifecycle of the data center network. The higher in the hierarchy a configlet is applied, the more general and widely applicable it becomes; the eAPI and AAA configlets applied during initial container and configlet creation are already an example of this generalization. Each device we pull into CVP management will have at least one device-specific configlet. As we progress through our examples, each switch will have two device-specific configlets applied at the device level, identified as follows.

SWITCHNAME-mgmt

SWITCHNAME-config

It is not required to have more than one device-specific configlet applied at the device level; depending on your environment you may want several, or you may prefer to keep things as simple as possible with one per switch. Our rationale is to have one configlet that describes only the management configuration items for the switch: the SWITCHNAME-mgmt configlet contains strictly management definitions, for example the Management 1 IP address, a default route and the hostname. Tip: by keeping the SWITCHNAME-mgmt configlet purpose-built for management, the risk of breaking these settings is lowered, because management is separated from interface and routing configuration; if you don’t need to change management configuration, there is no need to edit this configlet. Two device-specific configlets help protect the management settings from accidental modification, and this is the recommended approach to device-specific configuration.

Provisioning switches with CVP

Now that we have covered the thought and rationale behind developing our configlets, both the generalized configlets applied to a container and the device-specific configlets applied to the switch itself, we can start provisioning switches. If your CVP server is already set up properly and your DHCP server is configured to hand out a CVP bootfile, you may already see switches in the Undefined container. This means your switches have powered up, downloaded the bootfile and executed it, bootstrapping the devices to CVP. This is the result of the ZTP process and the behavior of Arista switches booting into ZTP mode when no startup-config is found, as is the case with a brand-new switch from Arista Networks. Here is an example of what your DHCP configuration should look like to hand out the bootstrap to new switches. Please consult section 16.2 (pp. 147-148) of the CloudVision Configuration Guide for more detail on DHCP setup.

#
# switches managed by CVP: specify the CVP bootstrap file for ZTP
#
host EOS-10 {
  option dhcp-client-identifier 00:50:56:72:39:60;
  fixed-address 10.0.5.10;
  option bootfile-name "http://cvp01.lan/ztp/bootstrap";
}

Examples

← previous ↑ back to top

Example 1: Spine switch

In this example we will add EOS-10 to the Spine-Nodes container under DC1 > Pod1 in the CVP hierarchy. EOS-10 will inherit all of the configuration from DC1, and we will add the EOS-10-mgmt configlet, which contains the following (refer to the Appendix for the mgmt configlets of the other EOS devices in these examples).

hostname EOS-10
ip route 0.0.0.0/0 10.0.5.1
interface Management1
  ip address 10.0.5.10/24

Once we have created this configlet we can find the switch in the Undefined container. From Table 1 we know the MAC address of EOS-10 is 0050.5672.3960, and by hovering over the device in the Undefined container we can find a match. Next, right-click on the device, click Move, and select the Spine-Nodes container. Once you have moved the device to this container, immediately right-click it again, select Manage, then click Configlets, and check the box for the EOS-10-mgmt configlet. Then click Validate, which brings up a three-column view showing the Proposed Configuration, the Designed Configuration and the current running-configuration pulled via eAPI in real time. The Proposed Configuration is simply all of the configlets that will be applied to this device through inheritance and via device-specific configlet(s). The Designed Configuration is derived from the Proposed Configuration; it shows line numbers and has all the elements of an EOS startup-configuration. The Running Configuration is what is currently running on the switch. At the top of the Designed Configuration column you will see color-coded counters for Total Lines, New Lines, Mismatch Lines and To Reconcile. These counters let you quickly see how the designed configuration differs from the running-configuration, and give you the ability to override the running-configuration (push) or Reconcile (pull) the differences into the CVP-managed configuration if you choose. The sections between columns 2 and 3 are color-shaded following the same convention, so differences can be seen at a glance.

Since this is the initial configuration it is fine to click Save; the differences are expected and there is no need to use CVP’s Reconcile function. Tip: notice the many potentially useful logging commands that Arista enables while in ZTP mode. If we go back to the CVP home screen after clicking Save and then click the Tasks tile, we should see a new task in the task list. We can run this task by checking its checkbox and clicking the play (►) icon. The CVP server will now communicate with the switch over eAPI to make the configuration on the switch match the Designed Configuration, and will also apply the image assigned to the container, if applicable. You may want to watch this first run on the switch console, if you have console access, to better understand the ZTP process. If an image upgrade is performed, the switch will reload more than once before finally appearing in the Spine-Nodes container identified by its hostname, EOS-10. At that point you should be able to SSH to the switch and log in as normal to check for shell access. Remember, even though the switch is under CVP management, the CLI is still available and an integral part of the operational model; however, CVP is now the central source of truth for modeling and deploying the configuration. Any configuration changes made at the CLI and written will show up in CVP to be reconciled. If, for example, troubleshooting at the CLI results in a configuration change, it is best to add that change to the appropriate configlet and push it out via CVP. You can also reconcile the change in CVP, which creates a separate device-specific configlet; to avoid configlet sprawl, prefer incorporating such changes into an existing device-specific or general configlet rather than creating a new one with the Reconcile option.

We have now provisioned our first switch: EOS-10 has been moved to the Spine-Nodes container and has a very basic configuration; in fact, the only thing unique about it at this point is the attributes given to it by the EOS-10-mgmt configlet. The final step is to add all of the interface and routing configuration. In our case, the approach to finalizing the configuration is to put all remaining configuration into EOS-10-config. On spine switches this is primarily the leaf-to-spine IP addressing on the Ethernet interfaces and the BGP routing setup; on the leaf switches we also use MLAG, VARP and VXLAN VTEPs. While researching this document we were quite iterative in our approach: for the first leaf switch pair we added the interface configuration and pushed it out with CVP, then added the MLAG configuration and pushed that out, checking between iterations that the configuration was taking effect as expected by running commands like ‘show mlag’ and ‘show port-channel summary’ at the CLI to make sure the MLAG domain was up, and so on. An iterative approach lets you verify statements in configlets before replicating them to additional switch pairs or spine nodes. If you are following along and using this as a guide, you can leverage much of the work that has been done by replacing organization-specific configuration items with values correct for your environment. See the Appendix for the complete list of configlets used in the writing of this document. Below is EOS-10-config, the final device-specific configlet to apply directly to EOS-10.

interface Ethernet1
   mtu 9000
   no switchport
   ip address 10.10.11.1/31
!
interface Ethernet2
   mtu 9000
   no switchport
   ip address 10.10.11.11/31
!
interface Ethernet3
   mtu 9000
   no switchport
   ip address 10.10.12.1/31
!
interface Ethernet4
   mtu 9000
   no switchport
   ip address 10.10.12.11/31
!
interface Ethernet5
   mtu 9000
   no switchport
   ip address 10.10.13.1/31
!
interface Ethernet6
   mtu 9000
   no switchport
   ip address 10.10.13.11/31
!
interface Ethernet7
   shutdown
!
interface Ethernet8
   shutdown
!
interface Loopback0
   ip address 10.254.254.10/32
!
router bgp 65050
   router-id 10.254.254.10
   update wait-for-convergence
   update wait-install
   maximum-paths 16 ecmp 16
   neighbor pod1 peer-group
   neighbor pod1 remote-as 65001
   neighbor pod1 send-community 
   neighbor pod1 maximum-routes 12000
   neighbor pod2 peer-group
   neighbor pod2 remote-as 65002
   neighbor pod2 send-community 
   neighbor pod2 maximum-routes 12000
   neighbor pod3 peer-group
   neighbor pod3 remote-as 65003
   neighbor pod3 send-community
   neighbor pod3 maximum-routes 12000
   neighbor 10.10.11.0 peer-group pod1
   neighbor 10.10.11.10 peer-group pod1
   neighbor 10.10.12.0 peer-group pod2
   neighbor 10.10.12.10 peer-group pod2
   neighbor 10.10.13.0 peer-group pod3   
   neighbor 10.10.13.10 peer-group pod3
   redistribute connected
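The Ethernet addressing above follows a /31 point-to-point scheme (RFC 3021): each leaf-to-spine link consumes exactly two addresses, with the leaf on the even address and the spine on the odd one. A short sketch (the address block and link count are taken from the pod 1 example) enumerates the pairs:

```python
# Sketch: enumerate the /31 point-to-point pairs used on the leaf-to-spine
# links. In this design the leaf takes the even (network) address and the
# spine takes the odd address of each /31.
import ipaddress

def p2p_pairs(block, count):
    """Yield (leaf_ip, spine_ip) for the first `count` /31 subnets of block."""
    net = ipaddress.ip_network(block)
    for sub in list(net.subnets(new_prefix=31))[:count]:
        leaf = sub.network_address
        spine = sub.network_address + 1
        yield str(leaf), str(spine)

# Pod 1 links, matching the EOS-14 (leaf) and EOS-10..13 (spine) configlets
for leaf_ip, spine_ip in p2p_pairs("10.10.11.0/28", 4):
    print(f"leaf {leaf_ip}/31  <->  spine {spine_ip}/31")
```

The output matches EOS-14's Ethernet1-4 addresses (10.10.11.0, .2, .4, .6) and the spine neighbor addresses (.1, .3, .5, .7) in its BGP configuration.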

Once the EOS-10-config configlet is created, right-click on EOS-10, select Manage and click Configlets just as you did before. Complete the configuration by putting a check mark next to the EOS-10-config configlet. Once saved, you will have a task to run (►) in the Tasks tile of CVP. NOTE: The following two statements will not work on vEOS and are meant for physical switches. See the EOS 4.14.5F TOI pages 103 and 113 for details of these BGP convergence enhancements.

update wait-for-convergence
update wait-install

Example 2: Leaf switch

For the first leaf switch the process is nearly the same. Instead of moving the device to the Spine-Nodes container, we move it to the Leaf-Nodes container. As before, upon moving the leaf switch to the Leaf-Nodes container we also apply its management configlet; in this case the switch is EOS-14, so we apply the EOS-14-mgmt configlet. EOS-14 will then also inherit the configuration applied to DC1, since the Leaf-Nodes container is a member of DC1. Once EOS-14 has been successfully moved to the Leaf-Nodes container and its ZTP process has completed, it will be reachable via SSH thanks to the EOS-14-mgmt configlet, and it will be reachable by eAPI and have a AAA configuration due to inheritance of the configuration elements applied in the eAPI and AAA configlets. For the leaf switch we must apply the device-specific configlet EOS-14-config to provide the switch with its interface configuration, MLAG configuration and a number of other parameters. Here is the device-specific configlet EOS-14-config. Notice there are no VLAN or VXLAN configurations, aside from VLAN 4094, which is used for the MLAG peer link. We will use the power of CVP to apply the VXLAN interface, the VNI-to-VLAN mappings and the associated VLANs to all leaf switches in the topology.

ip virtual-router mac-address 00:1c:73:00:00:01
!
interface Port-Channel1
   mtu 9000
   switchport mode trunk
   mlag 1
!
interface Port-Channel2
   mtu 9000
   switchport mode trunk
   mlag 2
!
interface Port-Channel999
   mtu 9000
   switchport mode trunk
   switchport trunk group mlagpeer
!
interface Ethernet1
   mtu 9000
   no switchport
   ip address 10.10.11.0/31
!
interface Ethernet2
   mtu 9000
   no switchport
   ip address 10.10.11.2/31
!
interface Ethernet3
   mtu 9000
   no switchport
   ip address 10.10.11.4/31
!
interface Ethernet4
   mtu 9000
   no switchport
   ip address 10.10.11.6/31
!
interface Ethernet5
   mtu 9000
   channel-group 1 mode active
!
interface Ethernet6
   channel-group 2 mode active
!
interface Ethernet7
   channel-group 999 mode active
!
interface Ethernet8
   channel-group 999 mode active
!
interface Loopback0
   ip address 10.254.254.14/32
!
interface Loopback2
   ip address 10.253.14.15/32
!
router bgp 65001
   router-id 10.254.254.14
   update wait-install
   maximum-paths 4 ecmp 4
   neighbor spine peer-group
   neighbor spine remote-as 65050
   neighbor spine send-community
   neighbor spine maximum-routes 12000
   neighbor 10.10.11.1 peer-group spine
   neighbor 10.10.11.3 peer-group spine
   neighbor 10.10.11.5 peer-group spine
   neighbor 10.10.11.7 peer-group spine
   !! IBGP session to MLAG Peer EOS-15
   neighbor 10.10.1.2 remote-as 65001
   neighbor 10.10.1.2 next-hop-self
   neighbor 10.10.1.2 allowas-in 1
   neighbor 10.10.1.2 maximum-routes 12000
   redistribute connected 
! 
vlan 4094 
   trunk group mlagpeer 
! 
no spanning-tree vlan 4094 
! 
interface Vlan4094 
   ip address 10.10.1.1/30 
! 
mlag configuration 
   domain-id mlagL1 
   local-interface Vlan4094 
   peer-address 10.10.1.2 
   peer-link port-channel 999 
!
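Because the MLAG peer link always lives in a /30 on Vlan4094, the peer-address can be derived mechanically from the local address, which helps when templating the leaf pairs. A small sketch of that derivation:

```python
# Sketch: derive the MLAG peer-address from the local Vlan4094 address.
# A /30 has exactly two usable hosts, so the peer is simply "the other one".
import ipaddress

def mlag_peer(local_cidr):
    """Given the local Vlan4094 address (e.g. '10.10.1.1/30'), return the peer."""
    iface = ipaddress.ip_interface(local_cidr)
    a, b = list(iface.network.hosts())   # the two usable addresses in the /30
    return str(b if iface.ip == a else a)

print(mlag_peer("10.10.1.1/30"))  # EOS-14's peer (EOS-15) -> 10.10.1.2
```

Running this against the appendix configlets reproduces each pair's peer-address statement (10.10.1.1/.2, 10.10.1.5/.6, 10.10.1.9/.10).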

Example 3: VXLAN and VLAN configuration

The topology in this guide was built to show an example of a VXLAN Direct Routing configuration. It leverages CloudVision Exchange (CVX) as a Global Network Controller to collect and disseminate the VXLAN flood lists used for Head End Replication (HER) and the MAC reachability information for the overlay logical L2 segments. As this is an L3 leaf spine architecture, VXLAN tunnels are used to provide L2 adjacency between racks of compute resources. This design can be extended with third-party controller integrations, such as OpenStack's ML2 mechanism driver, NSX-v or NSX-MH, which allow much more dynamic allocation of VXLAN segments that are added and removed based on the needs of the orchestrated virtual computing environment. VXLAN Direct Routing is not the only way to build a network virtualization architecture; please review the related material on Arista Networks VXLAN solutions to determine whether VXLAN Direct Routing or Indirect Routing fits your architecture and network virtualization strategy. Since each TOR pair will be an MLAG VTEP and the default gateway for each of the logical L2 segments, the configuration will be identical for each leaf switch pair. This is a great opportunity to leverage the power of CVP to push the VLAN configuration to every leaf switch via the Leaf-Nodes container. Note: there are reasons why you may not want to deploy every single VLAN to every pair of TORs; modify appropriately for your environment. Create the following configlet, called Leaf-VXLAN-config. Its major components are the Vxlan1 interface configuration, where you specify the VXLAN source interface (Loopback2), the method by which VXLAN information is acquired and sent (controller-client), the UDP port to use for VXLAN (4789, the default) and the VLAN-to-VNI mappings. We also define and name the VLANs and define their SVI interfaces in this configlet.
Since each pair of TORs is its own distinct L2 space, you can re-use the same VLAN and SVI addresses; L2 is extended through VXLAN tunnels, so each rack is essentially its own L2 domain. Adjust this configlet to fit your requirements and VLAN/VNI name space, and apply it to the Leaf-Nodes container.

interface Vxlan1
   vxlan source-interface Loopback2
   vxlan controller-client
   vxlan udp-port 4789
   vxlan vlan 5 vni 5500
   vxlan vlan 6 vni 6600
   vxlan vlan 7 vni 7700
   vxlan vlan 8 vni 8800
!
vlan 5
   name 10.5.5.0/24
!
vlan 6
   name 10.6.6.0/24
!
vlan 7
   name 10.7.7.0/24
!
vlan 8
   name 10.8.8.0/24
!
interface Vlan5
   ip address virtual 10.5.5.1/24
!
interface Vlan6
   ip address virtual 10.6.6.1/24
!
interface Vlan7
   ip address virtual 10.7.7.1/24
!
interface Vlan8
   ip address virtual 10.8.8.1/24
!
end
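Because this configlet is repetitive, it can be generated from a single VLAN table rather than edited by hand. The sketch below rebuilds the configlet text from the example's values; it assumes /24 gateways ending in .1 when deriving the VLAN names (an assumption from this example only — adjust for your own VLAN/VNI name space).

```python
# Sketch: generate the Leaf-VXLAN-config text from a vlan -> (vni, gateway)
# table. Values are taken from the example configlet; the VLAN "name" is
# derived from the gateway, assuming a /24 whose gateway ends in .1.
vlans = {
    5: ("5500", "10.5.5.1/24"),
    6: ("6600", "10.6.6.1/24"),
    7: ("7700", "10.7.7.1/24"),
    8: ("8800", "10.8.8.1/24"),
}

def leaf_vxlan_configlet(vlans):
    lines = ["interface Vxlan1",
             "   vxlan source-interface Loopback2",
             "   vxlan controller-client",
             "   vxlan udp-port 4789"]
    lines += [f"   vxlan vlan {v} vni {vni}" for v, (vni, _) in vlans.items()]
    for v, (_, gw) in vlans.items():
        addr, plen = gw.split("/")
        net = addr.rsplit(".", 1)[0] + ".0/" + plen   # 10.5.5.1/24 -> 10.5.5.0/24
        lines += ["!", f"vlan {v}", f"   name {net}"]
    for v, (_, gw) in vlans.items():
        lines += ["!", f"interface Vlan{v}", f"   ip address virtual {gw}"]
    return "\n".join(lines + ["!", "end"])

print(leaf_vxlan_configlet(vlans))
```

As VLANs are added over time, regenerating the configlet from the table keeps every MLAG VTEP pair consistent, which matters because the VLAN-to-VNI mapping must agree across all VTEPs.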

Completing the build out of the network

So far we have set up the CVP inheritance model using containers. We have also uploaded an EOS image and, to leverage this inheritance, applied it to the DC1 container. Within DC1 reside three more containers: Pod1, Leaf-Nodes and Spine-Nodes. Additionally, two EOS instances have been built out, EOS-10 and EOS-14: one spine and one leaf. Using the same methods described previously and the list of configlets in the Appendix, you can build the remaining three spine switches and any number of MLAG leaf switch pairs with CVP. Each switch will show a number of configlets applied when you right-click the device, select Manage and click Configlets.

[Screenshot: configlets applied to a device]

Final switch configuration modeled by Configlets. Note: vEOS-10_INTF_Config was a previous name for the configlet EOS-10-config

Summary

At this point we have shown the main components of CVP. We created basic generalized configlets that can be applied at the Tenant level and to the DC1 container. We used ZTP with CVP to collect the new switches in the Undefined container, and we showed how to move switches from the Undefined container into one of the child containers, such as Tenant > Pod1 > Spine-Nodes. Finally, we showed how to apply the device-specific configlets and why they are important for managing the switch after the initial deployment is complete. If you replace the values in the provided tables and configlets with ones appropriate to your environment, you will have a near-complete L3 leaf spine configuration that also includes the MLAG+VXLAN leaf configuration. You can also modify and replicate these configlets as appropriate to add or remove components for your environment. The examples should suffice to provide solid configurations based on the Arista Networks Standard VXLAN Validated Design that adhere to the concepts of the L3LS UCN network architecture. If you plan to modify the leaf configuration heavily, it is a good idea to get one leaf or leaf pair set up correctly and deployed, and then copy those configlets for the remaining leaf switches; this avoids many manual edits to configlets after they are in place. Another recommendation is to deploy all VLANs and SVIs via a common configlet applied to the leaf container, which lets you consistently deploy VLANs to every leaf switch using a single configlet. Be sure to map each VLAN to a VNI for proper encapsulation across the L3 transport network.

Appendices

Appendix A – Configlets used in this document

General Configlets

Configlet Name Configlet Contents
AAA username admin role network-admin secret 5 $1$kSKd7XlD$9.2uSo4K39YmfilzqIpqj1 username cvpadmin role network-admin secret 5 $1$kSKd7XlD$9.2uSo4K39YmfilzqIpqj1
Aliases alias conint sh interface | I connected alias dump bash tcpdump -i %1 alias routeage bash echo 'show ip route' | cliribd alias senz show interface counter error | nz alias shmc show int | awk '/^[A-Z]/ { intf = $1 } /, address is/ { print intf, $6 }' alias snz show interface counter | nz alias spd show port-channel %1 detail all alias sqnz show interface counter queue | nz alias srnz show interface counter rate | nz alias shvxaddr show vxlan address-table alias rexmpp   10 conf t   20 management xmpp   30 shut   40 no shut   50 exit
CVX-Client-Config management cvx no shutdown !! Tip: CVX supports using a DNS A record with multiple IP Addresses vs. 3 entries in the config server host 10.0.3.209 server host 10.0.3.208 server host 10.0.3.210 source-interface management 1 end
Domain-Config ip name-server vrf default 10.0.5.1 ip domain-name lan
SNMP-Config snmp-server location minneapolis snmp-server community br0b0ts ro snmp-server user observium observium v2c
Switch-Defaults ntp server vrf default 129.6.15.28 ip routing spanning-tree mode rapid-pvst clock timezone US/Central end
Leaf-VXLAN-config interface Vxlan1 vxlan source-interface Loopback2 vxlan controller-client vxlan udp-port 4789 vxlan vlan 5 vni 5500 vxlan vlan 6 vni 6600 vxlan vlan 7 vni 7700 vxlan vlan 8 vni 8800 ! vlan 5 name 10.5.5.0/24 ! vlan 6 name 10.6.6.0/24 ! vlan 7 name 10.7.7.0/24 ! vlan 8 name 10.8.8.0/24 ! interface Vlan5 ip address virtual 10.5.5.1/24 ! interface Vlan6 ip address virtual 10.6.6.1/24 ! interface Vlan7 ip address virtual 10.7.7.1/24 ! interface Vlan8 ip address virtual 10.8.8.1/24 ! end

 

Device Specific Configlets

EOS-10-mgmt hostname EOS-10 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.10/24
EOS-11-mgmt hostname EOS-11 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.11/24

EOS-12-mgmt hostname EOS-12 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.12/24
EOS-13-mgmt hostname EOS-13 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.13/24
EOS-14-mgmt hostname EOS-14 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.14/24
EOS-15-mgmt hostname EOS-15 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.15/24
EOS-16-mgmt hostname EOS-16 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.16/24
EOS-19-mgmt hostname EOS-19 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.19/24
EOS-20-mgmt hostname EOS-20 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.20/24
EOS-21-mgmt hostname EOS-21 ip route 0.0.0.0/0 10.0.5.1 interface Management1   ip address 10.0.5.21/24
EOS-10-config interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.1/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.11/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.1/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.11/31 ! interface Ethernet5 mtu 9000 no switchport ip address 10.10.13.1/31 ! interface Ethernet6 mtu 9000 no switchport ip address 10.10.13.11/31 ! interface Ethernet7 shutdown ! interface Ethernet8 shutdown ! interface Loopback0 ip address 10.254.254.10/32 ! router bgp 65050 router-id 10.254.254.10 update wait-for-convergence update wait-install maximum-paths 16 ecmp 16 neighbor pod1 peer-group neighbor pod1 remote-as 65001 neighbor pod1 send-community neighbor pod1 maximum-routes 12000 neighbor pod2 peer-group neighbor pod2 remote-as 65002 neighbor pod2 send-community neighbor pod2 maximum-routes 12000 neighbor pod3 peer-group neighbor pod3 remote-as 65003 neighbor pod3 send-community neighbor pod3 maximum-routes 12000 neighbor 10.10.11.0 peer-group pod1 neighbor 10.10.11.10 peer-group pod1 neighbor 10.10.12.0 peer-group pod2 neighbor 10.10.12.10 peer-group pod2 neighbor 10.10.13.0 peer-group pod3 neighbor 10.10.13.10 peer-group pod3 redistribute connected
EOS-11-config interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.3/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.13/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.3/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.13/31 ! interface Ethernet5 mtu 9000 no switchport ip address 10.10.13.3/31 ! interface Ethernet6 mtu 9000 no switchport ip address 10.10.13.13/31 ! interface Ethernet7 shutdown ! interface Ethernet8 shutdown ! interface Loopback0 ip address 10.254.254.11/32 ! router bgp 65050 router-id 10.254.254.11 update wait-for-convergence update wait-install maximum-paths 16 ecmp 16 neighbor pod1 peer-group neighbor pod1 remote-as 65001 neighbor pod1 send-community neighbor pod1 maximum-routes 12000 neighbor pod2 peer-group neighbor pod2 remote-as 65002 neighbor pod2 send-community neighbor pod2 maximum-routes 12000 neighbor pod3 peer-group neighbor pod3 remote-as 65003 neighbor pod3 maximum-routes 12000 neighbor 10.10.11.2 peer-group pod1 neighbor 10.10.11.12 peer-group pod1 neighbor 10.10.12.2 peer-group pod2 neighbor 10.10.12.12 peer-group pod2 neighbor 10.10.13.2 peer-group pod3 neighbor 10.10.13.12 peer-group pod3 redistribute connected
EOS-12-config interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.5/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.15/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.5/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.15/31 ! interface Ethernet5 mtu 9000 no switchport ip address 10.10.13.5/31 ! interface Ethernet6 mtu 9000 no switchport ip address 10.10.13.15/31 ! interface Ethernet7 shutdown ! interface Ethernet8 shutdown ! interface Loopback0 ip address 10.254.254.12/32 ! router bgp 65050 router-id 10.254.254.12 update wait-for-convergence update wait-install maximum-paths 16 ecmp 16 neighbor pod1 peer-group neighbor pod1 remote-as 65001 neighbor pod1 send-community neighbor pod1 maximum-routes 12000 neighbor pod2 peer-group neighbor pod2 remote-as 65002 neighbor pod2 send-community neighbor pod2 maximum-routes 12000 neighbor pod3 peer-group neighbor pod3 remote-as 65003 neighbor pod3 maximum-routes 12000 neighbor 10.10.11.4 peer-group pod1 neighbor 10.10.11.14 peer-group pod1 neighbor 10.10.12.4 peer-group pod2 neighbor 10.10.12.14 peer-group pod2 neighbor 10.10.13.4 peer-group pod3 neighbor 10.10.13.14 peer-group pod3 redistribute connected
EOS-13-config interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.7/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.17/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.7/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.17/31 ! interface Ethernet5 mtu 9000 no switchport ip address 10.10.13.7/31 ! interface Ethernet6 mtu 9000 no switchport ip address 10.10.13.17/31 ! interface Ethernet7 shutdown ! interface Ethernet8 shutdown ! interface Loopback0 ip address 10.254.254.13/32 ! router bgp 65050 router-id 10.254.254.13 !! update wait-for-convergence !! update wait-install maximum-paths 16 ecmp 16 neighbor pod1 peer-group neighbor pod1 remote-as 65001 neighbor pod1 send-community neighbor pod1 maximum-routes 12000 neighbor pod2 peer-group neighbor pod2 remote-as 65002 neighbor pod2 send-community neighbor pod2 maximum-routes 12000 neighbor pod3 peer-group neighbor pod3 remote-as 65003 neighbor pod3 maximum-routes 12000 neighbor 10.10.11.6 peer-group pod1 neighbor 10.10.11.16 peer-group pod1 neighbor 10.10.12.6 peer-group pod2 neighbor 10.10.12.16 peer-group pod2 neighbor 10.10.13.6 peer-group pod3 neighbor 10.10.13.16 peer-group pod3 redistribute connected
EOS-14-config ip virtual-router mac-address 00:1c:73:00:00:01 ! interface Port-Channel1 mtu 9000 switchport access vlan 5 mlag 1 spanning-tree portfast ! interface Port-Channel2 mtu 9000 switchport mode trunk mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.0/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.2/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.11.4/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.11.6/31 ! interface Ethernet5 mtu 9000 channel-group 1 mode active ! interface Ethernet6 mtu 9000 channel-group 2 mode active ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.14/32 ! Interface Loopback2 ip address 10.253.14.15/32 !! ip address 10.253.14.150/32 secondary ! router bgp 65001 router-id 10.254.254.14 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.11.1 peer-group spine neighbor 10.10.11.3 peer-group spine neighbor 10.10.11.5 peer-group spine neighbor 10.10.11.7 peer-group spine !! IBGP session to MLAG Peer EOS-15 neighbor 10.10.1.2 remote-as 65001 neighbor 10.10.1.2 next-hop-self neighbor 10.10.1.2 allowas-in 1 neighbor 10.10.1.2 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.1/30 ! mlag configuration domain-id mlagL1 local-interface Vlan4094 peer-address 10.10.1.2 peer-link port-channel 999 !
EOS-15-config  ip virtual-router mac-address 00:1c:73:00:00:01 ! interface Port-Channel1 mtu 9000 switchport access vlan 5 mlag 1 spanning-tree portfast ! interface Port-Channel2 mtu 9000 switchport mode trunk mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.11.10/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.11.12/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.11.14/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.11.16/31 ! interface Ethernet5 mtu 9000 channel-group 1 mode active ! interface Ethernet6 mtu 9000 channel-group 2 mode active ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.15/32 ! Interface Loopback2 ip address 10.253.14.15/32 !! ip address 10.253.14.150/32 secondary ! router bgp 65001 router-id 10.254.254.15 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.11.11 peer-group spine neighbor 10.10.11.13 peer-group spine neighbor 10.10.11.15 peer-group spine neighbor 10.10.11.17 peer-group spine !! IBGP session to MLAG Peer EOS-14 neighbor 10.10.1.1 remote-as 65001 neighbor 10.10.1.1 next-hop-self neighbor 10.10.1.1 allowas-in 1 neighbor 10.10.1.1 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.2/30 ! mlag configuration domain-id mlagL1 local-interface Vlan4094 peer-address 10.10.1.1 peer-link port-channel 999 !
EOS-16-config ip virtual-router mac-address 00:1c:73:00:00:02 ! interface Port-Channel1 mtu 9000 switchport access vlan 5 mlag 1 spanning-tree portfast ! interface Port-Channel2 mtu 9000 switchport access vlan 6 mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.12.0/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.12.2/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.4/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.6/31 ! interface Ethernet5 mtu 9000 channel-group 1 mode active ! interface Ethernet6 mtu 9000 channel-group 2 mode active ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.16/32 ! Interface Loopback2 ip address 10.253.16.19/32 !! ip address 10.253.16.190/32 secondary ! router bgp 65002 router-id 10.254.254.16 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.12.1 peer-group spine neighbor 10.10.12.3 peer-group spine neighbor 10.10.12.5 peer-group spine neighbor 10.10.12.7 peer-group spine !! IBGP session to MLAG Peer EOS-19 neighbor 10.10.1.6 remote-as 65002 neighbor 10.10.1.6 next-hop-self neighbor 10.10.1.6 allowas-in 1 neighbor 10.10.1.6 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.5/30 ! mlag configuration domain-id mlagL2 local-interface Vlan4094 peer-address 10.10.1.6 peer-link port-channel 999 !
EOS-19-config ip virtual-router mac-address 00:1c:73:00:00:02 ! interface Port-Channel1 mtu 9000 switchport access vlan 5 mlag 1 spanning-tree portfast ! interface Port-Channel2 mtu 9000 switchport access vlan 6 mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.12.10/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.12.12/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.12.14/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.12.16/31 ! interface Ethernet5 mtu 9000 channel-group 1 mode active ! interface Ethernet6 mtu 9000 channel-group 2 mode active ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.19/32 ! Interface Loopback2 ip address 10.253.16.19/32 !! ip address 10.253.16.190/32 secondary ! router bgp 65002 router-id 10.254.254.19 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.12.11 peer-group spine neighbor 10.10.12.13 peer-group spine neighbor 10.10.12.15 peer-group spine neighbor 10.10.12.17 peer-group spine !! IBGP session to MLAG Peer EOS-16 neighbor 10.10.1.5 remote-as 65002 neighbor 10.10.1.5 next-hop-self neighbor 10.10.1.5 allowas-in 1 neighbor 10.10.1.5 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.6/30 ! mlag configuration domain-id mlagL2 local-interface Vlan4094 peer-address 10.10.1.5 peer-link port-channel 999 !
EOS-20-config ip virtual-router mac-address 00:1c:73:00:00:03 ! interface Port-Channel1 switchport mode trunk mlag 1 spanning-tree portfast ! interface Port-Channel2 switchport mode trunk mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.13.0/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.13.2/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.13.4/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.13.6/31 ! interface Ethernet5 channel-group 1 mode active shutdown ! interface Ethernet6 channel-group 2 mode active shutdown ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.20/32 ! Interface Loopback2 ip address 10.253.20.21/32 !! ip address 10.253.20.210/32 secondary ! router bgp 65003 router-id 10.254.254.20 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.13.1 peer-group spine neighbor 10.10.13.3 peer-group spine neighbor 10.10.13.5 peer-group spine neighbor 10.10.13.7 peer-group spine !! IBGP session to MLAG Peer EOS-21 neighbor 10.10.1.10 remote-as 65003 neighbor 10.10.1.10 next-hop-self neighbor 10.10.1.10 allowas-in 1 neighbor 10.10.1.10 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.9/30 ! mlag configuration domain-id mlagL3 local-interface Vlan4094 peer-address 10.10.1.10 peer-link port-channel 999 !
EOS-21-config ip virtual-router mac-address 00:1c:73:00:00:03 ! interface Port-Channel1 switchport mode trunk mlag 1 spanning-tree portfast ! interface Port-Channel2 switchport mode trunk mlag 2 spanning-tree portfast ! interface Port-Channel999 mtu 9000 switchport mode trunk switchport trunk group mlagpeer ! interface Ethernet1 mtu 9000 no switchport ip address 10.10.13.10/31 ! interface Ethernet2 mtu 9000 no switchport ip address 10.10.13.12/31 ! interface Ethernet3 mtu 9000 no switchport ip address 10.10.13.14/31 ! interface Ethernet4 mtu 9000 no switchport ip address 10.10.13.16/31 ! interface Ethernet5 channel-group 1 mode active shutdown ! interface Ethernet6 channel-group 2 mode active shutdown ! interface Ethernet7 channel-group 999 mode active ! interface Ethernet8 channel-group 999 mode active ! interface Loopback0 ip address 10.254.254.21/32 ! Interface Loopback2 ip address 10.253.20.21/32 !! ip address 10.253.20.210/32 secondary ! router bgp 65003 router-id 10.254.254.21 !! update wait-install maximum-paths 4 ecmp 4 neighbor spine peer-group neighbor spine remote-as 65050 neighbor spine send-community neighbor spine maximum-routes 12000 neighbor 10.10.13.11 peer-group spine neighbor 10.10.13.13 peer-group spine neighbor 10.10.13.15 peer-group spine neighbor 10.10.13.17 peer-group spine !! IBGP session to MLAG Peer EOS-21 neighbor 10.10.1.9 remote-as 65003 neighbor 10.10.1.9 next-hop-self neighbor 10.10.1.9 allowas-in 1 neighbor 10.10.1.9 maximum-routes 12000 redistribute connected ! vlan 4094 trunk group mlagpeer ! no spanning-tree vlan 4094 ! interface Vlan4094 ip address 10.10.1.10/30 ! mlag configuration domain-id mlagL3 local-interface Vlan4094 peer-address 10.10.1.9 peer-link port-channel 999 !
cvx01 alias xmpp-all xmpp session all-switches@conference.localhost alias xmpp-veos-all xmpp session veos-all-switches@conference.localhost ! cvx no shutdown peer host 10.0.3.209 peer host 10.0.3.210 service vxlan no shutdown ! transceiver qsfp default-mode 4x10G ! hostname cvx01 ip domain-name lan ip name-server vrf default 10.0.3.1 ! ntp server vrf default 129.6.15.28 ! spanning-tree mode mstp ! no aaa root ! clock timezone US/Central ! interface Management1 ip address 10.0.3.208/24 ! ip route 0.0.0.0/0 10.0.3.1 ! no ip routing vrf default ! management api http-commands no shutdown ! end
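The per-device management configlets above differ only in the hostname and the final octet of the Management1 address, so they can be generated rather than hand-edited. A sketch using this example's 10.0.5.0/24 management network and gateway:

```python
# Sketch: generate the per-device EOS-N-mgmt configlets. The gateway and
# management subnet are the values used in this guide's examples.
def mgmt_configlet(n, gw="10.0.5.1", prefix="10.0.5"):
    """Return the management configlet text for switch EOS-<n>."""
    return "\n".join([
        f"hostname EOS-{n}",
        f"ip route 0.0.0.0/0 {gw}",
        "interface Management1",
        f"   ip address {prefix}.{n}/24",
    ])

for n in (10, 11, 12, 13, 14, 15, 16, 19, 20, 21):
    print(f"--- EOS-{n}-mgmt ---")
    print(mgmt_configlet(n))
```

The generated text matches the appendix rows above; scripting these is one way to avoid the copy/paste errors that device-specific configlets invite.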

 

Appendix B – Subscript references

Arista Standard VXLAN Validated Design1 – L3 Leaf Spine topology with MLAG+VXLAN VTEPs at Top of Rack. A VXLAN indirect routing topology using VARP for FHRP.
VXLAN+MLAG2 – A VXLAN Tunnel Endpoint (VTEP) consisting of two switches in MLAG configuration and sharing a single VXLAN Tunnel Interface (VTI).
VARP3 – An L3 Anycast Gateway, standards-based approach to active-active First Hop Redundancy.

Appendix C – Importing the configlets into CVP with cvptool.py

Download the configlets used in this article: cvp_configlets-02232016.tar

To import the configlets used in this article into your CVP instance, use the following command (substitute your CVP host and password; note the double-dash option syntax):

/cvp/tools/cvptool.py --host <HOST> --user cvpadmin --password <PASSWORD> --action restore --tarFile cvp_configlets-02212016.tar.gz --objects Configlets

For reference, the tool's help output:

[root@cvp03 ~]# /cvp/tools/cvptool.py --help
usage: cvptool.py [-h] --host HOST --user USER --tarFile TARFILE
                  [--port PORT] [--ssl {True,False}] [--password PASSWORD]
                  [--action {backup,restore}] [--bundleName BUNDLENAME]
                  [--objects [{Configlets,ConfigletBuilders,ImageBundles,Devices,Images,Containers,Roles,swi} ...]]

CVP management tool

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           Hostname or IP address of cvp
  --user USER           Cvp user username
  --tarFile TARFILE     *tar.gz file to save/retrieve compressed Cvp state information
  --port PORT           Cvp web-server port number
  --ssl {True,False}    Https link
  --password PASSWORD   password corresponding to the username
  --action {backup,restore}
                        Type of action to be performed on Cvp instance
  --bundleName BUNDLENAME
                        Name of image bundle which will contain the image to be added
  --objects [{Configlets,ConfigletBuilders,ImageBundles,Devices,Images,Containers,Roles,swi} ...]
                        List of objects on which corresponding action is to be performed
