CloudVision Appliance Deployment Recommendations (DCA-100-CV)

 
 

DCA-100-CV is end of sale as of March 2019; please refer to this link for an updated article covering the DCA-200-CV.

CloudVision Appliance Introduction

CloudVision Appliance (DCA-100-CV) is a physical appliance that runs a CentOS base image and hosts one instance each of CloudVision Portal (CVP) and CloudVision eXchange (CVX) using the KVM hypervisor. It comes with 4x1G NICs. The virtual NICs on the CVP and CVX VMs are mapped to the physical NICs 1-4 as follows:

• NIC-1 and NIC-2 carry management traffic for both the CVP and CVX VMs (see section 2.1).
• NIC-3 and NIC-4 carry CVP inter-node cluster traffic and CVX in-band VCS traffic (see section 2.2).

For additional details, refer to the CloudVision Appliance quick start guide here.

Deployment Recommendations

1. Ensure that you are running the latest version of the host image; this provides updated OS packages and security patches. The current version of the host image can be checked using the following command from the CVA CLI:

[root@cva-1 ~]# cat /cva/version.txt
2.0.0

On older host images, if the above file does not exist, use the following command instead:

[root@cv etc]# cat /etc/cva-version.txt 
1.0.1-130617-f1887cd374f758c0b7f869d8df232bf1

If you are running an older version, download the latest version from the software downloads page on www.arista.com. The procedure to update the host image can be found here.

2. Network Placement
The diagram below shows both CVP and CVX set up as three-node clusters using three CloudVision Appliances.

2.1 NIC-1 and NIC-2 are teamed with bond mode 1 (active/standby) by default and are used for the following (a verification sketch follows this list):

• iDRAC via NIC-1
• CVP Mgmt for CVP ←→ User traffic
• CVP Mgmt for CVP ←→ Managed Device traffic including Telemetry
• CVX Mgmt for CVX ←→ User traffic
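
The bond mode and the currently active NIC can be verified from the CVA host via the standard Linux bonding status files. A minimal sketch; bond0 as the name of the NIC-1/NIC-2 team and the sample output below are assumptions (list /proc/net/bonding/ to find the actual device names on your appliance):

[root@cva-1 ~]# ls /proc/net/bonding/
bond0  bond1
[root@cva-1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eno1
...

The Bonding Mode line should read fault-tolerance (active-backup), matching bond mode 1.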

2.2 NIC-3 and NIC-4 are teamed with bond mode 4 (LACP) by default and are used for the following (a switch-side verification sketch follows this list):

• CVP inter-node cluster traffic
• CVX in-band VxLAN Control Services
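
On the switch side, the LACP team can be verified from the Service Leafs using standard EOS show commands (a sketch of the checks; the interface and port-channel numbers assume the sample configs in section 4.2):

ServLeaf-1# show port-channel summary
ServLeaf-1# show lacp neighbor

show port-channel summary should list Ethernet1 as an active member of Port-Channel1 on both Service Leafs, and show lacp neighbor confirms that the CVA is exchanging LACPDUs on NIC-3/NIC-4.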

3. The above design incorporates the following leading practices:

3.1 CVP inter-node cluster traffic is isolated from the rest of the traffic. On each CVP VM:

• Cluster traffic uses eth1; the rest of the traffic, including management and telemetry, uses eth0.
• Routing is set up with the default gateway via eth0, which is used for user access and CVP ←→ Managed Device traffic.

Note: The cluster interface (eth1) and device interface (eth0) need to be chosen appropriately during the shell-based configuration of CVP; a routing-table sketch follows.
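
The resulting routing split can be confirmed from the CVP VM shell. A minimal sketch with illustrative addressing (10.100.100.0/24 for management and 10.200.200.0/24 for the cluster VLAN are assumptions, not values from this article):

[root@cvp-1 ~]# ip route show
default via 10.100.100.1 dev eth0
10.100.100.0/24 dev eth0 proto kernel scope link src 10.100.100.11
10.200.200.0/24 dev eth1 proto kernel scope link src 10.200.200.11

The only default route points out eth0, while the cluster subnet on eth1 remains a connected route, keeping inter-node traffic off the management path.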

3.2 CVX is hosted on the Service Leaf along with other services, offering in-band VCS. More details on in-band VCS design can be found here.

3.3 The same subnet and VLAN are used for both in-band VCS and CVP inter-node cluster traffic.

3.4 For OOB management, both CVP and CVX nodes are dual-homed to the Management Leaf MLAG pair.

4. Sample switch configs on the Service Leaf and Management Leaf for interfaces facing the CVA:

4.1 Management Leaf
MgmtLeaf-1
interface Ethernet1
  description CVA1 downlink NIC-1
  switchport access vlan 100
  spanning-tree portfast
  spanning-tree bpduguard enable
!
MgmtLeaf-2
interface Ethernet1
  description CVA1 downlink NIC-2
  switchport access vlan 100
  spanning-tree portfast
  spanning-tree bpduguard enable
!
Note: the interfaces on the Management Leafs facing the CVA are not in an MLAG port-channel, because NIC-1 and NIC-2 use active/standby bonding (mode 1) rather than LACP.

4.2 Service Leaf
ServLeaf-1
interface Ethernet1
  description CVA1 downlink NIC-3
  channel-group 1 mode active
!
interface Port-Channel1
  description CVA1 downlink
  switchport access vlan 200
  mlag 1
  spanning-tree portfast
  spanning-tree bpduguard enable
!
ServLeaf-2
interface Ethernet1
  description CVA1 downlink NIC-4
  channel-group 1 mode active
!
interface Port-Channel1
  description CVA1 downlink
  switchport access vlan 200
  mlag 1
  spanning-tree portfast
  spanning-tree bpduguard enable
!
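
After applying the above, MLAG and port-channel health can be checked on either Service Leaf with standard EOS show commands (a quick verification sketch):

ServLeaf-1# show mlag
ServLeaf-1# show mlag interfaces

On a healthy setup, show mlag reports the peer negotiation status as connected, and show mlag interfaces lists mlag 1 as active-full on both peers.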

Resources

1. CloudVision Appliance Quick Start Guide

2. CloudVision Configuration Guide

3. CVX Deployment Recommendations for VxLAN Control Service
