CloudVision Appliance Deployment Recommendations (DCA-200-CV)

 
 

CloudVision Appliance (CVA) Introduction

CloudVision Appliance (DCA-200-CV) is a physical appliance that runs a CentOS base image and hosts one instance each of CloudVision Portal (CVP) and CloudVision eXchange (CVX) using the KVM hypervisor. It comes with four 1G NICs and a separate 1G NIC for iDRAC. The virtual NICs on the CVP and CVX VMs are mapped to the physical NICs 1-4 as described under Network Placement below.

The CloudVision Appliance quick start guide can be found here.

Deployment Recommendations

1. Ensure that you are running the latest version of the host image; this provides updated OS packages and security patches. The current version of the host image can be checked using the following command from the CVA CLI:

[root@cv ~]# version
CVA Version: 2.1.3.1 

If you are running an older version, download the latest version from the software downloads page on www.arista.com.

The procedure to update the host image can be found here.

2. Network Placement

The diagram below shows both CVP and CVX set up as a three-node cluster using three CloudVision appliances.

2.1 NIC-1 and NIC-2 are teamed with bond mode 4 (LACP active/active) by default, and the bond is used for the following:

    • CVP Mgmt for CVP ←→ User traffic
    • CVP Mgmt for CVP ←→ Managed Device traffic including Telemetry
    • CVX Mgmt for CVX ←→ User traffic

2.2 NIC-3 and NIC-4 are teamed with bond mode 4 (LACP active/active) by default, and the bond is used for the following:

    • CVP inter-node Cluster traffic
    • CVX in-band VxLAN Control Services
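
The bond mode on the appliance can be confirmed from the CVA CLI via the Linux bonding driver's /proc interface. The bond device names below (bond0 for NIC-1/2, bond1 for NIC-3/4) are illustrative and may differ between host image versions:

[root@cv ~]# grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
[root@cv ~]# grep "Bonding Mode" /proc/net/bonding/bond1
Bonding Mode: IEEE 802.3ad Dynamic link aggregation

Mode 4 (802.3ad) is reported as shown; if a different mode appears, review the bond configuration before connecting the leaf switches.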

2.3 iDRAC interface is connected to OOB Mgmt Leaf and is used for remote server management.

3. The above design incorporates the following leading practices:

3.1 CVP inter-node cluster traffic is isolated from the rest of the traffic. On each CVP VM:

    • Cluster traffic uses eth1; the rest of the traffic, including management and telemetry, uses eth0.
    • Routing is set up with the default gateway via eth0, which is used for user access and CVP ←→ Managed Device traffic.

Note: The cluster interface (eth1) and the device interface (eth0) need to be chosen appropriately during the shell-based configuration of CVP.
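
As an illustration of the resulting routing on a CVP node (the addresses below are placeholders, not values from this document), only the eth0 subnet carries a default route, while the eth1 cluster subnet is directly connected with no gateway:

[root@cvp ~]# ip route
default via 10.0.100.1 dev eth0
10.0.100.0/24 dev eth0 proto kernel scope link src 10.0.100.11
10.0.200.0/24 dev eth1 proto kernel scope link src 10.0.200.11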

3.2 CVX is hosted on the Service Leaf along with other services, offering in-band VCS. More details on the in-band VCS design can be found here.

3.3 The same subnet and VLAN are used for both in-band VCS and CVP inter-node cluster traffic.

3.4 For OOB management, both CVP and CVX nodes are dual-homed to the Management Leaf MLAG pair.

4. Sample switch configs on Service Leaf and Management Leaf for interfaces facing the CVA:

4.1 Management Leaf

MgmtLeaf-1

interface Ethernet1
   description CVA1 downlink NIC-1
   channel-group 1 mode active
!
interface Port-Channel1
   description CVA1 downlink
   switchport access vlan 100
   mlag 1
   spanning-tree portfast
   spanning-tree bpduguard enable
!
interface Ethernet4
   description CVA1 downlink iDRAC
   switchport access vlan 300
   spanning-tree portfast
   spanning-tree bpduguard enable
!

MgmtLeaf-2

interface Ethernet1
   description CVA1 downlink NIC-2
   channel-group 1 mode active
!
interface Port-Channel1
   description CVA1 downlink
   switchport access vlan 100
   mlag 1
   spanning-tree portfast
   spanning-tree bpduguard enable
!
interface Ethernet48
   description CVA2 downlink iDRAC
   switchport access vlan 300
   spanning-tree portfast
   spanning-tree bpduguard enable
!

4.2 Service Leaf

ServLeaf-1

interface Ethernet1
   description CVA1 downlink NIC-3
   channel-group 1 mode active
!
interface Port-Channel1
   description CVA1 downlink
   switchport access vlan 200
   mlag 1
   spanning-tree portfast
   spanning-tree bpduguard enable
!

ServLeaf-2

interface Ethernet1
   description CVA1 downlink NIC-4
   channel-group 1 mode active
!
interface Port-Channel1
   description CVA1 downlink
   switchport access vlan 200
   mlag 1
   spanning-tree portfast
   spanning-tree bpduguard enable
!
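
After cabling, the LACP and MLAG state of the CVA-facing ports can be verified on each leaf with standard EOS show commands, for example:

ServLeaf-1# show port-channel 1
ServLeaf-1# show lacp neighbor
ServLeaf-1# show mlag interfaces detail

Port-Channel1 should list its member port as active, and the MLAG interface should show an active-full status on both peers.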

5. In cases where CVX is not used, all the NICs may be connected to the Management Leaf MLAG pair as shown below. Note that the inter-node cluster traffic still uses NIC-3 and NIC-4 and remains separate from the rest of the traffic. The switch ports connected to NIC-3 and NIC-4 can be placed in a non-routable VLAN (no default gateway).
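
A minimal sketch of that variant on the Management Leaf pair, following the style of the sample configs above (the VLAN ID and port numbers here are placeholders): the cluster VLAN is defined purely at layer 2, with no SVI and therefore no default gateway.

vlan 200
   name CVA-cluster
!
interface Ethernet2
   description CVA1 downlink NIC-3
   channel-group 2 mode active
!
interface Port-Channel2
   description CVA1 downlink cluster
   switchport access vlan 200
   mlag 2
   spanning-tree portfast
   spanning-tree bpduguard enable
!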

Resources

  1. CloudVision Appliance Quick Start Guide
  2. CloudVision Configuration Guide
  3. CVX Deployment Recommendations for VxLAN Control Service