Leveraging CVP Telemetry and ZTP in an Ansible Environment

This guide will discuss one of several options for integrating Arista’s network management tool, CloudVision Portal (CVP), into an Ansible environment.

Summary

In data center environments where Ansible is used for configuration management of all devices including networking equipment, the network operations team may want to leverage the telemetry and Zero Touch Provisioning (ZTP) functionality provided by the CloudVision Portal product. In this example, CVP will be used for ZTP, image upgrades, and telemetry while Ansible will be used to manage the switch configuration directly. Documentation for setting up ZTP can be found in the CloudVision configuration guide.

Implementation

This setup allows a user to leverage some functionality of CVP while continuing to do configuration management with Ansible. It is implemented by running a couple of scripts as cronjobs on the Ansible provisioning server (or another server with access to both the Ansible server and CVP). The server needs access to the management network where CVP and the Arista switches are connected. The CVP setup requires the user to create a couple of device-grouping containers on the CVP Network Provisioning page, along with some basic configlets. The automation will move new devices between these containers and apply the basic configlets as necessary. The end goal is a device fully under Ansible’s control, sitting in the final container (called the Provisioned container in this example).

Note: CVP compliance checks are not supported after the handoff to Ansible, because the device configuration is then the responsibility of Ansible instead of CVP. There is no need to reconcile configs within CVP; it is expected that the configuration within CVP will differ from the device configuration once Ansible takes control and applies its configurations.

Scripts and Config file

The scripts that automate this Ansible solution run as cronjobs and share a YAML-style configuration file. The base config file is shown below under “Config file: config.yml” and can be extended if needed.
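As an illustration, the two scripts might be scheduled with crontab entries like the following; the interval, interpreter, and paths here are hypothetical placeholders, not part of the published scripts:

# Hypothetical crontab entries on the Ansible provisioning server.
# Runs each script every 5 minutes; adjust paths and interval as needed.
*/5 * * * * /usr/bin/python /opt/cvp-ansible/initial_provision.py
*/5 * * * * /usr/bin/python /opt/cvp-ansible/ansible_handoff.py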

Script 1: Initial Provisioning script

(Found here)

Script 2: Ansible Handoff script

(Found here)

Config file: config.yml

Both scripts can use the same configuration file. The target_container field is the destination container of a device after execution of the Initial Provisioning script, and the provisioned_container field is the destination container of a device after execution of the Ansible Handoff script. An example is below:

---
cvp_host:
    - '<cvp node>'
cvp_user: <username>
cvp_pw: <password>
target_container: Ansible
provisioned_container: Provisioned
image: '<image name>'
gw: '<gateway>'
subnet_mask: <subnet mask>
ansible_path: '<path to ansible-playbook>'
playbook: <playbook yaml file>
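
As a minimal sketch, assuming the PyYAML package and a hypothetical path for the config file, the scripts could load these settings as follows:

# Minimal sketch: load the shared config.yml used by both cronjob scripts.
# Assumes PyYAML is installed; the file path is a hypothetical placeholder.
import yaml

with open('/opt/cvp-ansible/config.yml') as f:
    cfg = yaml.safe_load(f)

cvp_hosts = cfg['cvp_host']                 # list of CVP nodes
target = cfg['target_container']            # e.g. 'Ansible'
provisioned = cfg['provisioned_container']  # e.g. 'Provisioned'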

Example

For this example the three containers will be the existing Undefined container, a user-created Ansible container, and a user-created Provisioned container. The scripts running as cronjobs on the Ansible server need to know these two user-created container names and where each container sits in the automated process. In our example the order of operations moves a newly ZTP’d device from the Undefined container to the intermediary Ansible container, and finally to the Provisioned container, where it will remain and be managed by Ansible.

[Figure: container hierarchy on the CVP Network Provisioning page]

The Initial Provisioning script polls the Undefined container for new devices on a regular schedule set by the user (for example, every 5 minutes). All interaction between the cron scripts and CVP uses the Python cvprac library, which is available on GitHub and PyPI. When this script finds new devices in the Undefined container (devices that have already been bootstrapped by the ZTP process that is part of CVP), it sends the appropriate API calls to CVP to apply the necessary base configlets to the device, move it to the Ansible container, and optionally upgrade its EOS image. The base configlets applied during this process set up credentials that allow the Ansible server to access and configure the device. The final step for this script is to execute the tasks created for the required actions. In addition to creating the two containers mentioned above, the setup requires several configlets to be created in CVP before starting the auto-deployment cron scripts (these could be a single configlet or separate ones, but each of the features listed below must be applied to the device in some manner); a sketch of this provisioning step follows the list:

  • Management IP address
  • AAA config with CVP user
  • Enable eAPI
  • (Optional) Local Ansible user. This is not needed if TACACS+/RADIUS is used to authenticate the user that will be running the Ansible playbook.
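
The following is a minimal sketch of the Initial Provisioning step, assuming cvprac and the cfg dict loaded from config.yml above; the configlet name is a hypothetical placeholder, and a production script would add error handling:

# Minimal sketch of the Initial Provisioning flow using cvprac.
# Assumes the cfg dict loaded earlier; 'Ansible-Base' is a hypothetical
# configlet name, and the taskIds field may vary by CVP version.
from cvprac.cvp_client import CvpClient

clnt = CvpClient()
clnt.connect(cfg['cvp_host'], cfg['cvp_user'], cfg['cvp_pw'])

ansible_container = clnt.api.get_container_by_name(cfg['target_container'])
base_configlet = clnt.api.get_configlet_by_name('Ansible-Base')  # hypothetical

for device in clnt.api.get_devices_in_container('Undefined'):
    # Apply the base configlets, then move the device to the Ansible container.
    clnt.api.apply_configlets_to_device('Initial Provision', device, [base_configlet])
    resp = clnt.api.move_device_to_container('Initial Provision', device, ansible_container)
    # Execute the pending tasks created by the move (a real script would
    # also collect task IDs returned by the configlet call above).
    for task_id in (resp.get('data', {}).get('taskIds') or []):
        clnt.api.execute_task(task_id)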

[Figure: configlets applied to the Ansible container in CVP]

In the example above, the AAA, eAPI, and Ansible configlets are created within CVP and applied to the Ansible container as part of setup, before the scripts are running. The final device-specific configlet is created by the Initial Provisioning script, which applies the configlet directly to the device. The end state of the Initial Provisioning script is that all devices found in the Undefined container have been moved to the Ansible container with the proper configlets (and optionally image) applied. The next step of the automated device handoff to Ansible is for the Ansible Handoff script, running as a cronjob, to find the devices in the Ansible container and move them to the Provisioned container. After moving a device from the Ansible container to the Provisioned container, the script cancels the task associated with the move to prevent any configurations from being removed. Finally, the script adds the new device to the Ansible inventory and kicks off the Ansible playbook that provisions the device. If there is a failure during the Ansible provisioning, the script catches it and moves the device back to the Ansible container. The overview for the script responsible for running Ansible is as follows (a sketch of this flow appears after the list):

  • Check the Ansible container for devices
  • Move the device to the Provisioned container
  • Add the host to the Ansible inventory list
  • Call ansible-playbook for the new host
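
A minimal sketch of that flow, under the same assumptions as the previous sketch (cvprac and the cfg dict); the flat-file inventory path and the --limit invocation are illustrative choices, not part of the published scripts:

# Minimal sketch of the Ansible Handoff flow using cvprac.
# Assumes the cfg dict from config.yml; the inventory path and
# --limit usage are illustrative assumptions.
import subprocess
from cvprac.cvp_client import CvpClient

clnt = CvpClient()
clnt.connect(cfg['cvp_host'], cfg['cvp_user'], cfg['cvp_pw'])

ansible_cont = clnt.api.get_container_by_name(cfg['target_container'])
prov_cont = clnt.api.get_container_by_name(cfg['provisioned_container'])

for device in clnt.api.get_devices_in_container(cfg['target_container']):
    resp = clnt.api.move_device_to_container('Ansible Handoff', device, prov_cont)
    # Cancel the generated task so CVP never pushes (and strips) configuration.
    for task_id in (resp.get('data', {}).get('taskIds') or []):
        clnt.api.cancel_task(task_id)

    # Add the host to the Ansible inventory (hypothetical flat-file inventory).
    with open('/etc/ansible/hosts', 'a') as inv:
        inv.write(device['ipAddress'] + '\n')

    # Run the playbook; on failure, move the device back to the Ansible container.
    result = subprocess.run([cfg['ansible_path'], cfg['playbook'],
                             '--limit', device['ipAddress']])
    if result.returncode != 0:
        clnt.api.move_device_to_container('Ansible Handoff', device, ansible_cont)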

At this point CVP will no longer be managing the configuration of the device. It is important to note that the device’s configuration will no longer be in sync with the configuration in CVP, and that none of the tasks generated by the second script should be executed (the script should cancel them automatically); otherwise there is a risk that Ansible and/or CVP will lose access to the device. CVP’s role at this point is to manage EOS images and EOS device telemetry. Another CVP/Ansible integration example can be found here: Export Constrained CVP Functionality to Ansible