CVX Deployment Recommendations for VxLAN Control Service

CVX (CloudVision eXchange) is an infrastructure for aggregating and sharing state across a network of physical switches running EOS. Services that run on top of this infrastructure provide network-wide visibility and coordination, making CVX a single pane of glass for network-wide visibility and orchestration of physical switches running EOS.

CVX provides the VxLAN Control Service (VCS), a mechanism by which hardware VTEPs share state with each other in order to establish VxLAN tunnels without the need for a multicast control plane or manual configuration of static flood-sets for Head End Replication. CVX is built on the same robust underlying architecture as Arista EOS and provides an interactive CLI; eAPI can be used (just as on EOS) for programmatic access.
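
On the CVX server side, VCS is enabled as a service under the cvx configuration mode. The following is a minimal sketch; exact syntax can vary slightly between EOS releases:

    cvx
      no shutdown
      service vxlan
        no shutdown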

The key point to remember about the communication between a CVX client (EOS device) and the CVX server is that CVX is used for the control plane only. Data plane traffic remains on the switches, which simply leverage the state information learned from CVX when making forwarding decisions. This article provides CVX deployment recommendations to build a robust and highly available CVX control plane for VxLAN.

  • For application-level resiliency in a production environment, it is recommended that CVX is deployed as a multi-node cluster. A cluster must consist of 2N+1 controllers (where N ≥ 1); a CVX-side cluster sketch follows the client configuration example below.
  • Each CVX instance within the cluster must be deployed on a separate physical compute node for hardware-level resiliency.
  • Each CVX VM instance must meet the minimum hardware and software requirements specified in the CloudVision eXchange Configuration Guide (see Resources below).
  • Ensure that the ‘vEOS-lab’ image is not used for provisioning CVX VMs; vEOS-lab.swi cannot sustain high throughput on front-panel ports and is intended for lab deployments only.
  • It is a best practice, and highly recommended, that the CVX version match the EOS version running on the switches.
  • CVX live vMotion is supported from EOS 4.21.1 onwards. If the hypervisor environment is set up for live vMotion, it must be disabled for CVX VMs running releases older than 4.21.1.
  • In a clustered CVX deployment, ensure that the VTEPs (CVX client switches) are configured to connect to all the CVX cluster members.
  • A unique Loopback IP must be set as the ‘source-interface’ for VCS on each VTEP. Do not use the VxLAN tunnel interface IP (the common Loopback IP shared by the Leaf pair as the logical VTEP) for VCS in an MLAG environment. In the configuration example below, Loopback0 is the VCS source-interface (control plane) and Loopback1 is the VxLAN tunnel interface (data plane).

    management cvx
      no shutdown
      server host <CVX-1_IP>
      server host <CVX-2_IP>
      server host <CVX-3_IP>
      source-interface Loopback0

    interface Vxlan1
      vxlan source-interface Loopback1
      vxlan controller-client
      vxlan udp-port 4789
      vxlan vlan 15 vni 1015
      vxlan vlan 20 vni 1020

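For reference, a minimal CVX-side cluster sketch is shown below. Each CVX node lists the other cluster members with peer host under the cvx configuration mode; the placeholders mirror the <CVX-x_IP> values used above, and exact syntax may vary by EOS release:

    ! on CVX-1; CVX-2 and CVX-3 each list the other two nodes as peers
    cvx
      no shutdown
      peer host <CVX-2_IP>
      peer host <CVX-3_IP>
      service vxlan
        no shutdown
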
  • If the existing management network does not support link-level redundancy, then using the Management interface, with “preserve mount enabled” configured on CVX, is recommended.
  • If the existing management network supports link-level redundancy and the device has enough Management interfaces, then using the Management interfaces, with “preserve mount enabled” on CVX, is recommended.
  • If the existing management network supports link-level redundancy and the device does not have enough Management interfaces, then using front-panel ports is recommended, in conjunction with a “reload-delay mlag” value that is greater than the “reload-delay non-mlag” value (see the sketch below). The connection to CVX must be over a non-MLAG interface; using a logical interface (Loopback or SVI) as described above is also recommended. The pair of “reload-delay” settings ensures that CVX reachability is established before forwarding resumes on the MLAG interfaces.
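
The relationship between the two reload-delay timers is sketched below; the values are hypothetical and should be tuned to the environment:

    mlag configuration
      ! non-MLAG (CVX-facing) interfaces are released first
      reload-delay non-mlag 300
      ! MLAG interfaces stay down longer, so CVX reachability is established first
      reload-delay mlag 360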

  • On each CVX instance, the L3 interface used for cluster connectivity to peer CVX nodes and for CVX client (VTEP) connectivity must always be part of the default VRF. Ma1 can still be placed in a management VRF (starting with EOS 4.20.5F) for general management purposes such as remote access, SNMP, and logging; see the sketch after this list. Note that the CVX vIP feature is not supported in a non-default VRF, i.e., when using this feature, both Ma0 and Ma1 will need to be in the default VRF.
  • The VLAN where CVX is hosted should not be part of the Overlay extended VLANs, as it is used to provide infrastructure for VCS in the Underlay.
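
As an illustration of the VRF placement described above, a hypothetical CVX-node sketch is shown below. The interface names, addressing, and the MGMT VRF name are examples only, and newer EOS releases use “vrf instance” / “vrf” in place of “vrf definition” / “vrf forwarding”:

    vrf definition MGMT
    !
    ! Ma1 placed in the management VRF for remote access, SNMP, logging, etc.
    interface Management1
      vrf forwarding MGMT
      ip address 192.0.2.11/24
    !
    ! cluster- and VTEP-facing interface remains in the default VRF
    interface Ethernet1
      no switchport
      ip address 10.10.10.11/24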

Resources

  1. CloudVision eXchange Configuration Guide
  2. VxLAN Control Service
  3. CVX High-Availability