Datacenter Deployment Automated

Planning Methodology

There is a lot of talk about automation in the datacenter, and it does indeed save time, but a lot of effort still goes into planning. After all, failing to plan is planning to fail. I needed a way to start automating some of the planning and repetitive tasks involved in deploying the same blueprint across various sites.

One of the bigger tasks is the IP plan: making sure that the correct IPs get used in configurations, and that the same methodology gets applied across different sites.

Initially, I set out to use a very nice utility called RackTables, which has a built-in IPAM, but I still found it very time-consuming, so I set out on a mission to create a structured IPAM file in JSON format instead. I started playing around with extracting IPAM information into a JSON file so I could use the same file for all my automated configuration scripts. The reason I wanted a separate file, rather than extracting everything from the database directly, was decoupling: if one later decided to use a different IPAM, one would only have to figure out how to pull the info into the JSON file, instead of rewriting all the configuration scripts. Because of time constraints, I abandoned the extraction idea and simply automated the generation of the IP data into a JSON file.

I still want to go back to RackTables at some point, perhaps by automatically programming the JSON info into its database, as it has a very nice user interface and plenty more functionality for tracking useful information.

First, I needed to do some subnet planning, so I came up with a few questions to answer regarding the information I would like in the database. The list of questions grew over time as I started building configurations and realised I had additional variables that could change between sites.

First, I needed all the network management info, such as the syslog IP, AAA servers, SNMP servers, NTP servers, etc.

Next, I needed to know what subnet each site would use as a whole, with only the first two octets unique per site, e.g. 22.1 for site one and 22.2 for site two.

I then planned how I would like to divide the site subnet into smaller subnets for different use cases (a small code sketch of this carving follows the list), e.g.

  • 22.1.50.0/24 – management
  • 22.1.0.0/24 – Loopback0
  • 22.1.100.0/24 – Loopback100
  • 22.1.1.0/24 – uplinks between leaf pairs and Spine 1
  • 22.1.2.0/24 – uplinks between leaf pairs and Spine 2
  • 22.1.3.0/24 – uplinks between leaf pairs and Spine 3
  • 22.1.4.0/24 – uplinks between leaf pairs and Spine 4
  • 22.1.6.0/24 – MLAG links
  • 22.1.7.0/24 – iBGP links (the customer wanted to split iBGP and MLAG traffic onto different subnets)
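
To make the carving concrete, here is a minimal Python sketch of how such a plan can be generated per site using the standard ipaddress module. The function name and use-case keys are my own illustrative choices, not the attached scripts themselves; the only convention carried over from above is that the first two octets are the per-site part:

```python
import ipaddress

def site_subnet_plan(site_prefix: str) -> dict:
    """Build the per-use-case /24s for one site, e.g. site_prefix='22.1'."""
    plan = {
        "management":     f"{site_prefix}.50.0/24",
        "loopback0":      f"{site_prefix}.0.0/24",
        "loopback100":    f"{site_prefix}.100.0/24",
        "uplinks_spine1": f"{site_prefix}.1.0/24",
        "uplinks_spine2": f"{site_prefix}.2.0/24",
        "uplinks_spine3": f"{site_prefix}.3.0/24",
        "uplinks_spine4": f"{site_prefix}.4.0/24",
        "mlag":           f"{site_prefix}.6.0/24",
        "ibgp":           f"{site_prefix}.7.0/24",
    }
    # Validate every entry parses as a real network before it is used anywhere.
    for name, cidr in plan.items():
        ipaddress.ip_network(cidr)
    return plan

print(site_subnet_plan("22.1")["mlag"])   # 22.1.6.0/24 for site one
print(site_subnet_plan("22.2")["mlag"])   # 22.2.6.0/24 for site two
```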

I left enough room for growth within our specific implementation requirements. This layout is obviously tied to a specific blueprint, but it can be modified to meet different requirements, and the scripts can be revised to be more dynamic in nature.

Each of these smaller subnets then had to be divided again so that I could use them in configurations, e.g. all management and loopback addresses are /32s, all uplinks are /31s, etc.
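
This second level of carving is equally mechanical with the ipaddress module. A small sketch, again with hypothetical switch names:

```python
import ipaddress

loopback_net = ipaddress.ip_network("22.1.0.0/24")
uplink_net = ipaddress.ip_network("22.1.1.0/24")

# One /32 loopback per switch, allocated in order of hostname.
switches = ["spine1", "spine2", "leaf1", "leaf2"]
loopbacks = {name: f"{ip}/32" for name, ip in zip(switches, loopback_net.hosts())}

# The /24 reserved for Spine 1 uplinks splits cleanly into 128 /31 point-to-points.
uplink_31s = list(uplink_net.subnets(new_prefix=31))

print(loopbacks["leaf1"])   # 22.1.0.3/32
print(uplink_31s[0])        # 22.1.1.0/31
```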

Using some Python modules, I then constructed a JSON IPAM file with the following hierarchy:

use case > subnet > switch name > IP, e.g. Loopbacks > 22.1.0.0/24 > leaf1 : 22.1.0.1/32.
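
As a rough picture of the file itself, this is the kind of structure that hierarchy produces when built and dumped from Python. The keys, switch names, and addresses here are illustrative, not the exact attached file:

```python
import json

# Illustrative excerpt of the use case > subnet > switch > IP hierarchy.
ipam = {
    "Loopbacks": {
        "22.1.0.0/24": {
            "spine1": "22.1.0.1/32",
            "spine2": "22.1.0.2/32",
            "leaf1": "22.1.0.3/32",
        }
    },
    "Management": {
        "22.1.50.0/24": {"leaf1": "22.1.50.11/32"},
    },
    "Uplinks-Spine1": {
        "22.1.1.0/24": {"leaf1": "22.1.1.1/31"},
    },
}

with open("ipam_dc1.json", "w") as f:
    json.dump(ipam, f, indent=2)
```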

I had to use various lists, tuples, and dictionaries to get the right layout, and some sections needed a slightly different format than others to be useful, such as the uplinks, where a single spine switch name maps to multiple IP addresses. The rest was all arithmetic and pairing up the information I needed.
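
The uplinks are where that pairing arithmetic shows up: each leaf consumes one /31 towards each spine, so a single spine name ends up holding one address per leaf. A hypothetical sketch for Spine 1:

```python
import ipaddress

leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]
spine1_uplinks = ipaddress.ip_network("22.1.1.0/24").subnets(new_prefix=31)

uplinks = {"spine1": [], "leaf_side": {}}
for leaf, p2p in zip(leaves, spine1_uplinks):
    # Python 3.8+ treats both addresses of a /31 as usable hosts.
    spine_ip, leaf_ip = p2p.hosts()
    uplinks["spine1"].append(f"{spine_ip}/31")     # spine1 collects one IP per leaf
    uplinks["leaf_side"][leaf] = f"{leaf_ip}/31"

print(uplinks["spine1"])              # ['22.1.1.0/31', '22.1.1.2/31', ...]
print(uplinks["leaf_side"]["leaf2"])  # 22.1.1.3/31
```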

I also added a section to automatically create a dhcp.conf file so that I can easily create the DHCP reservations used for Zero Touch Provisioning.
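
Generating those reservations is mostly string templating over the management section of the IPAM data. A minimal sketch using ISC dhcpd host-block syntax, with made-up MAC addresses:

```python
# Hypothetical input: management IPs from the IPAM file plus known switch MACs.
mgmt = {"leaf1": "22.1.50.11", "leaf2": "22.1.50.12"}
macs = {"leaf1": "00:1c:73:aa:bb:01", "leaf2": "00:1c:73:aa:bb:02"}

HOST_BLOCK = """host {name} {{
  hardware ethernet {mac};
  fixed-address {ip};
  option host-name "{name}";
}}
"""

with open("dhcp.conf", "w") as f:
    for name, ip in sorted(mgmt.items()):
        f.write(HOST_BLOCK.format(name=name, mac=macs[name], ip=ip))
```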

Next, I created a script per switch type, e.g. one for server leaf, one for border leaf, one for spine, etc. The different switch types have different hardware and configuration requirements.

These type-specific scripts would then simply reference the JSON IPAM file and fill in all the switch-specific variables needed, such as the /31 IPs linking to Spines 1–4. I now had a way to very easily update a JSON IPAM file per site and generate configurations based on hostnames.
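
Conceptually, each type-specific script boils down to a lookup-and-fill step like the one below. This is a simplified sketch rather than the attached scripts; the template and IPAM key names are assumptions for illustration:

```python
import json

with open("ipam_dc1.json") as f:
    ipam = json.load(f)

LEAF_TEMPLATE = """hostname {hostname}
interface Loopback0
   ip address {loopback0}
interface Ethernet1
   description uplink-to-spine1
   ip address {spine1_link}
"""

def build_leaf_config(hostname: str) -> str:
    # Pull this switch's addresses out of the shared IPAM hierarchy.
    loopback0 = ipam["Loopbacks"]["22.1.0.0/24"][hostname]
    spine1_link = ipam["Uplinks-Spine1"]["22.1.1.0/24"][hostname]
    return LEAF_TEMPLATE.format(hostname=hostname,
                                loopback0=loopback0,
                                spine1_link=spine1_link)

print(build_leaf_config("leaf1"))
```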

Deployment Process

Now that I could generate the various switch configurations, I needed a way to deploy them to the switches and to easily apply changes across multiple switches when needed. I made use of Arista CloudVision Portal (CVP), which has built-in Zero Touch Provisioning functionality as well as workflow automation.

Arista switches are in ZTP mode out of the box. The switch boots up and gets a management address (a DHCP server is needed) as well as a hostname (a DNS server is needed), using the MAC address as the unique identifier. In CVP's GUI, the switch then appears in the Undefined container. Inside CVP, I then create an additional container for the site, e.g. DC1, and a container for each switch use case, e.g. Spine or Leaf. CVP has various ways of adding configlets (parts of the configuration) to a switch. The first is a static configlet, basically the same as a switch configuration text file but broken into sections, e.g. only the management config or only AAA. Such a configlet is applied to the site (DC1) container, as it is the same across all the switches at the site.

The second option is a configlet builder, which uses scripting and APIs to build the configlet. The script can reference a form that accepts input from the engineer, or it can reference external sources such as an external IPAM or my JSON IPAM file. As soon as the engineer moves the switch from the Undefined container into a specific container such as Leaf, CVP accesses the switch via eAPI to get its hostname and then builds the switch-specific configuration by matching that hostname against the variables in my IPAM file. This switch-specific configuration plus the higher-level container configuration is then merged into the proposed configuration for the switch. Additionally, one could apply a new firmware version at this point. The process includes configuration validation to make sure the configuration works on the specific model, and it also gives me a comparison of pre- and post-change configurations.
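
Stripped of the CVP-specific plumbing, the heart of such a builder is a small hostname lookup against the IPAM file. In a real configlet builder the hostname would come from CVP itself (via its builder library and eAPI) rather than as a function argument; this sketch only shows the matching step, with the same illustrative IPAM keys as above:

```python
import json

def build_configlet(hostname: str, ipam_path: str = "ipam_dc1.json") -> str:
    """Return the switch-specific configlet text for a given hostname."""
    with open(ipam_path) as f:
        ipam = json.load(f)

    loopback0 = ipam["Loopbacks"]["22.1.0.0/24"][hostname]
    mgmt = ipam["Management"]["22.1.50.0/24"][hostname]

    return "\n".join([
        f"hostname {hostname}",
        "interface Loopback0",
        f"   ip address {loopback0}",
        "interface Management1",
        f"   ip address {mgmt}",
    ])

print(build_configlet("leaf1"))
```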

Work in Progress

As with all automation and scripting, there is always much room for improvement, and going down the rabbit hole can prove time-consuming at first but well worth it in the end. By automating this process, an engineer can bring up a new site in minutes instead of hours, with the assurance that configurations and methodology will be consistent. Additionally, the engineer has a very easy way of making small configuration changes across multiple switches, such as AAA or syslog settings. CVP also has built-in change management functionality with integration into products such as ServiceNow, plus a myriad of other great workflow automation and telemetry features that I will leave for another discussion.

Attached to this post are all the scripts and output examples. Feel free to adapt them to your requirements and to share back with the community.

automation-scripts