Sending Telemetry Data from TerminAttr to Multiple CVP instances

Overview

This article explores the ability of the CloudVision Telemetry agent (TerminAttr) to send data to more than one CloudVision Portal (CVP) instance, or to a CVP instance and a third-party application.

 

 

The configuration used in this lab was also used as part of the “Synchronising CloudVision Portal Configlets with Ansible” PoC lab, enabling both CloudVision instances to receive Telemetry data from all the switches. That article can be found here: https://eos.arista.com/synchronising-cloudvision-portal-configlets-with-ansible/

 

Introduction

The Proof of Concept lab created to demonstrate the ability to synchronise configlets across multiple instances of CloudVision utilised the support for streaming telemetry data to multiple CVP instances, which was introduced in TerminAttr version 1.7.1 and is supported on CVP instances running 2019.1.x or later. A high level diagram of the PoC lab is shown below:

 

 

Lab Setup

Each EOS device in the lab streaming Telemetry data to multiple CVP instances required the following TerminAttr configuration: 

 

daemon TerminAttr
   exec /usr/bin/TerminAttr -cvopt=DC1-London.addr=192.168.30.100:9910
      -cvopt=DC1-London.vrf=default -cvopt=DC1-London.auth=key,TelemetryK1
      -cvopt=DC2-Luton.addr=192.168.30.102:9910 -cvopt=DC2-Luton.vrf=default
      -cvopt=DC2-Luton.auth=key,TelemetryK2 -cvcompression=gzip
      -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata
      -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -taillogs -sflow
   no shutdown
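Once configured, the standard EOS daemon commands can be used to confirm that the agent has started, for example (a quick check; the exact output varies by EOS release):

   switch# show daemon TerminAttr

Successful streaming to each cluster can then be verified from the Devices view of each CVP instance, where the switch should appear as actively streaming.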

 

Most of the TerminAttr configuration is no different from the standard single CVP instance configuration. The options required for this solution are the per-cluster -cvopt options, which are explained below:

 

Option (either form may be used) and description:

-cvopt {name*}.addr={IP Address}
-cvopt={name*}.addr={IP Address}
   Single IP address or a comma-separated list of node IP addresses for a CVP cluster or third-party Telemetry consumer

-cvopt {name*}.auth=key,{KeyPhrase}
-cvopt={name*}.auth=key,{KeyPhrase}
   Authentication key used to connect to the CVP cluster or third-party Telemetry consumer

-cvopt {name*}.vrf={VRFname}
-cvopt={name*}.vrf={VRFname}
   Name of the VRF used to connect to the CVP cluster or third-party Telemetry consumer

*Each cluster needs a unique name. The cluster names can be set as required, for example, data centre or location names.
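As an illustration, the full set of options for one additional consumer might look like the following sketch (the cluster name, addresses, VRF, and key phrase here are hypothetical):

   -cvopt=DC3-Leeds.addr=10.0.0.10:9910,10.0.0.11:9910,10.0.0.12:9910
   -cvopt=DC3-Leeds.vrf=MGMT
   -cvopt=DC3-Leeds.auth=key,TelemetryK3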

 

For each CVP cluster that TerminAttr is required to send telemetry data to, the IP addresses, the authentication key, and the name of the VRF in which the cluster is reachable must be supplied. The other TerminAttr options can be copied from the Sys_TelemetryBuilder configlet builders included with CVP and are common to all clusters. The addr option can be set to either a single IP address for a CVP instance or a comma-separated list of the node IP addresses in a CVP cluster (for load balancing across nodes); examples of both are shown below:

 

Single Address:

   -cvopt=DC1-London.addr=10.81.110.104:9910
   -cvopt=DC1-London.auth=key,cvp

Address List:

   -cvopt=DC2-Luton.addr=10.83.12.71:9910,10.83.12.72:9910,10.83.12.73:9910 
   -cvopt=DC2-Luton.auth=key,arista

Prior to EOS 4.23.2F there was a limit on the number of arguments that could be passed to a daemon such as TerminAttr; this limit was 25 arguments. A line such as

   -cvopt DC1-London.addr=10.81.110.104:9910

counts as two arguments, whereas

   -cvopt=DC1-London.addr=10.81.110.104:9910

counts as one argument. Using the additional “=” reduces the argument count, allowing additional options to be included; this is what allowed the extra CVP instance to be configured without hitting the maximum number of arguments. To avoid any issues with argument counts, it is recommended that the “=” form is always used.
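As a worked example, the exec line in the Lab Setup section contains 11 arguments when written entirely in the “=” form (six -cvopt options plus -cvcompression, -smashexcludes, -ingestexclude, -taillogs, and -sflow). Writing the six -cvopt options with spaces instead would raise the count to 17, consuming far more of the 25-argument budget.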

 

Resource Considerations

The TerminAttr daemon used a small amount of switch resources in the form of CPU utilization, memory usage, and bandwidth. Using the data provided by CVP Telemetry, the increase in requirements for an additional CVP instance could be measured. The Telemetry daemon used approximately 1% – 2% of the available CPU cycles to produce the Telemetry data for one CVP instance.

 

Device CPU utilization for a single CVP instance

 

By adding a second CVP instance, the processing requirements increased proportionally, by a further 1% – 2%.

Device CPU utilization for dual CVP instances

 

This CPU and memory increase did not impact the performance of the switches in the lab, but on older systems with fewer system resources it may cause issues.

 

The Telemetry data consumed a small amount of the available network bandwidth; as a simple rule of thumb this was in the order of hundreds of Kb/s at steady state, regardless of device type, scale, etc. In the test network the steady-state streaming bandwidth was approximately 30–50 Kb/s of Telemetry data to a single CVP cluster, and this bandwidth increased proportionally with the number of CVP instances that the TerminAttr process was streaming to. Note that the lab used vEOS devices, so the bandwidth requirements were lower than the rule-of-thumb estimates.
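As a rough illustration using the lab figures, streaming to two CVP instances at 30–50 Kb/s each works out to approximately 60–100 Kb/s of steady-state Telemetry traffic per device. On physical hardware, applying the hundreds-of-Kb/s rule of thumb per instance, a dual-instance deployment should be budgeted for roughly double the single-instance figure.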

 

Summary

Telemetry data provided by TerminAttr, consolidated in CVP, is a very powerful and useful tool for operational support, capacity planning, and troubleshooting. 

Using the ability to stream Telemetry data to more than one CVP instance or third-party consumer, combined with the synchronisation of CloudVision Portal configlets with Ansible, provides a resilient solution for both CVP provisioning and Telemetry data. These combined configurations provide a multi-site, multi-CVP management and monitoring solution.
