Installing CloudVision eXchange (CVX) on Ubuntu / KVM

Introduction

This post gives step-by-step instructions for installing CVX on a KVM hypervisor running on Ubuntu Linux.

The CloudVision Configuration Guide provides excellent instructions on configuring CVX after the install process is complete. You can also browse to the guide via the Support > Product Documentation pages on arista.com.

Basic familiarity with Linux is needed in order to complete this task.

Installation Procedure

Refer to Section 1.1 of the CloudVision Configuration Guide for host system requirements.

Install Steps

  1. Download the Aboot and EOS software from https://www.arista.com/en/support/software-download. (CVX is really just an instance of EOS configured with the CVX server function enabled; Aboot is the boot loader.)
    Recommended software versions are the latest EOS vmdk and Aboot iso.
    Minimum versions: EOS-4.16.6M.vmdk and Aboot-veos-serial-8.0.0.iso (Aboot is located in the vEOS section of the software download pages).
    If possible, the client switch EOS version should match the CVX version. Earlier versions of switch EOS may not operate with newer versions of CVX.
  2. Acquire the superuser privileges required for some of the following steps.
    sudo su
  3. Confirm that KVM is running on the server by entering:

    virsh -c qemu:///system list

    The output should be:

    Id Name                 State
    -------------------------------
    $

    If not, refer to https://help.ubuntu.com/community/KVM/Installation for assistance.
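    If KVM is missing entirely, first confirm that the CPU exposes hardware virtualization; a quick check (standard Linux, nothing Arista-specific) is:

```shell
# Count CPU flags that indicate hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V). A count of 0 means this host
# cannot run KVM guests. "|| true" keeps the exit status clean
# when the count is 0.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```

    On Ubuntu, the kvm-ok tool from the cpu-checker package performs a similar check.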

  4. Convert the vmdk file to qcow2:

    qemu-img convert EOS-4.16.6M.vmdk -O qcow2 EOS.qcow2
  5. Use brctl to add bridges for the KVM VM to use (this step is not required if you already have two bridges defined in two different subnets). br1 and br2 can be any names that you choose.

    brctl addbr br1
    brctl addbr br2

    ifconfig can be used to identify the required Ethernet ports to be bridged. Add these ports to the bridges.
    Example:

    brctl addif br1 enx803f5d086eae

    Confirm that the bridges are operational; br1 and br2 should be listed under bridge name in the output:

    brctl show

    Next, in order to activate the bridges, enter:

    ifconfig br1 up
    ifconfig br2 up
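    Note that bridges added with brctl do not persist across reboots. As a sketch (assuming an Ubuntu release that uses ifupdown; the address, netmask, and port names below are placeholders to adapt to your subnets), a persistent definition in /etc/network/interfaces would look something like:

```
auto br1
iface br1 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports enx803f5d086eae
    bridge_stp off

auto br2
iface br2 inet manual
    bridge_ports eth1
    bridge_stp off
```

    The bridge_ports stanza requires the bridge-utils package, which also provides brctl.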
  6. Next, execute the generateXmlForKvm.py script to create an output XML file (cvx.xml in this example) to use when creating your CVX VM. generateXmlForKvm.py and kvmTemplate.xml are both required for this step. They are included in the CloudVision Portal (CVP) tarball for Ubuntu if you have the CVP software. Both files are also included at the end of this document for CVX implementations not using CVP.

    generateXmlForKvm.py requires a lot of input parameters, so I recommend typing the command into a scratch pad and editing it there before pasting it into a Linux terminal. NOTE: Run the command from the directory containing the generateXmlForKvm.py script. An example follows, but yours will vary depending on your server setup and naming choices:

    python generateXmlForKvm.py -n cvx --device-bridge br1 --cluster-bridge br2 -e /usr/bin/kvm \
    -i kvmTemplate.xml -c /home/myUserName/Downloads/Aboot-veos-serial-8.0.0.iso \
    -x /home/myUserName/Downloads/EOS.qcow2 -b 8192 -p 2 -t \
    -o cvx.xml

    Parameters
      -n cvx: VM name.
      --device-bridge br1: The name you gave the first bridge (br1 or anything else).
      --cluster-bridge br2: Cluster bridge, if clustering servers.
      -e /usr/bin/kvm: Ubuntu path to KVM (for RHEL KVM this is -e /usr/libexec/qemu-kvm).
      -i kvmTemplate.xml: Path to the XML input template.
      -c: Path to the Aboot file.
      -x: Path to the EOS.qcow2 file.
      -b 8192: 8 GB of RAM.
      -p 2: Number of CPU cores.
      -t: Boot from the CD-ROM image; indicates the file defined by -x is for CVX.
      -o: Output XML file, which virsh uses to define the KVM virtual machine.
      -k: VM ID number used by virsh. If not entered, a random number is assigned.
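    The script also accepts -r (dry run) and -d (debug), which together validate your parameters and print what the script would use without writing the output file. A sketch, guarded so it only runs where the script is actually present (same placeholder paths as the example above):

```shell
# Dry-run the XML generation: -r suppresses writing cvx.xml, -d echoes
# each parameter as the script parses it. Run from the script's directory.
if [ -f generateXmlForKvm.py ]; then
    python generateXmlForKvm.py -r -d -n cvx --device-bridge br1 \
        --cluster-bridge br2 -e /usr/bin/kvm -i kvmTemplate.xml \
        -c Aboot-veos-serial-8.0.0.iso -x EOS.qcow2 -b 8192 -p 2 -t -o cvx.xml
fi
```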

  7. After generateXmlForKvm.py executes successfully, run the following commands:
    virsh define cvx.xml
    virsh start cvx
    virsh console cvx
    
  8. If you want the cvx VM to start automatically whenever KVM (libvirtd) starts, enter:

    virsh autostart cvx

At this point, the VM running your version of EOS will boot up and you can begin to configure your network interfaces and start the CVX service. See the CloudVision Configuration Guide, Chapter 2 (CloudVision eXchange), for instructions on configuring the CVX server and switch CVX clients.
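As a preview of that configuration step, enabling CVX takes only a few lines of EOS CLI on each side. A minimal sketch follows (the server address 192.0.2.10 is a placeholder; consult the configuration guide for the authoritative commands and options):

```
! On the CVX server VM:
cvx
   no shutdown
!
! On each client switch:
management cvx
   server host 192.0.2.10
   no shutdown
```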

Python XML Creation Script – generateXmlForKvm.py

#!/bin/env python
# Copyright (c) 2015 Arista Networks, Inc. All rights reserved.
# Arista Networks, Inc. Confidential and Proprietary.
import os
import sys
import xml.etree.ElementTree as ET
import uuid
import argparse
import random

MIN_CPU = 8
MIN_RAM_MB = 16384

dryRun = False
debug = False
warningStrings = []
errorStrings = []

def printWarnings():
   i = 1
   for ws in warningStrings:
      print "WARNING[ %d ]: " %i + ws
      i += 1

def printErrors():
   i = 1
   for es in errorStrings:
      print "ERROR[ %d ]: " %i + es
      i = i + 1

def dprint( string ):
   if debug:
      print string

def findDiskType( diskPath ):
   ''' Validate if disks are of type Qcow2/qcow or raw only'''
   if not diskPath:
      return None

   diskType = diskPath.split( '.' )[ -1 ]
   if diskType not in [ 'raw', 'qcow', 'qcow2' ]:
      return None
   return diskType

class XmlParser( object ):
   '''
   Class to parse a template CVP XML for libvirt/virsh and update specific
   parameters in order to create a unique and deployable XML file for virsh
   to use.
   '''
   def __init__( self, inFile=None, outFile=None ):
      self.inFile = inFile
      self.outFile = outFile
      self.root = None
      self.tree = None
      if self.inFile and os.access( self.inFile, os.F_OK|os.R_OK|os.W_OK ):
         self.tree = ET.parse( inFile )
         self.root = self.tree.getroot()
      if self.root is None:
         print 'Could not parse invalid input XML template'
         sys.exit( 1 )

   def setDeviceBridge( self, deviceBr=None ):
      for child in self.root.iter( 'interface' ):
         for c in child.iter( 'source' ):
            if c.attrib[ 'bridge' ] == '@device_bridge_name@' and deviceBr:
               c.attrib[ 'bridge' ] = deviceBr

   def setClusterBridge( self, swBr=None ):
      for device in self.root.iter( 'devices' ):
         for child in device.iter( 'interface' ):
            for c in child.iter( 'source' ):
               if c.attrib[ 'bridge' ] == '@cluster_bridge_name@':
                  if swBr:
                     c.attrib[ 'bridge' ] = swBr
                  else:
                     device.remove( child )
                  return

   def setName( self, vmName='CVP_Appliance_XXX' ):
      self.root.find('name').text = vmName

   def setUuid( self ):
      self.root.find('uuid').text = str( uuid.uuid4() )

   def setCpuCount( self, cpuCount ):
      if int( cpuCount ) < MIN_CPU :
         warningStrings.append( "%s cpu cores may not suffice. We recommend "
                                "%d cpu cores for optimal performance." \
                                % ( cpuCount, MIN_CPU ) )
      self.root.find('vcpu').text = cpuCount

   def setRam( self, ramMb ):
      if int( ramMb ) < MIN_RAM_MB:
         warningStrings.append( '%s MB RAM may not suffice. '
                                'We recommend %d MB '
                                'for optimal performance.' % ( ramMb, MIN_RAM_MB ) )
      self.root.find('memory').text = ramMb
      self.root.find('currentMemory').text = ramMb

   def setVmId( self, identifier=100 ):
      self.root.attrib[ 'id' ] = str( identifier )

   def setEmulator( self, qemuPath='/usr/bin/qemu-kvm' ):
      for child in self.root.iter( 'devices' ):
         child.find('emulator').text = str( qemuPath )

   def getDisksInfo( self ):
      nameToInfo = {}
      for child in self.root.iter( 'devices' ):
         for disk in child.iter( 'disk' ):
            target = disk.find('target')
            dev = target.attrib[ 'dev' ] if target is not None else None
            src = disk.find('source')
            filep = src.attrib[ 'file' ] if src is not None else None
            drv = disk.find( 'driver')
            typ = drv.attrib[ 'type' ] if drv is not None else None

            nameToInfo[ dev ] = { 'path': filep, 'type': typ }

      # Make a map of dev to file path and type
      return nameToInfo

   def setDiskInfo( self, diskName = None, diskType='raw', diskPath = None ):
      if diskPath is None:
         # Remove disk entry from output
         for child in self.root.iter( 'devices' ):
            for disk in child.iter( 'disk' ):
               if disk.find( 'target' ).attrib[ 'dev' ] == diskName:
                  # Remove this section from output XML
                  child.remove( disk )
                  return
         return
      if diskType is None:
         errorStrings.append( "Disk image at %s is not supported. "
                              "Only qcow, qcow2 and raw formats are supported."
                              %diskPath )
      for child in self.root.iter( 'devices' ):
         for disk in child.iter( 'disk' ):
            if diskName == disk.find( 'target' ).attrib[ 'dev' ]:
               disk.find( 'source' ).attrib[ 'file' ] = diskPath
               disk.find( 'driver' ).attrib[ 'type' ] = diskType

   def setCdromBoot( self, cdromBoot ):
      if not cdromBoot:
         return
      for child in self.root.iter( 'os' ):
         child.find( 'boot' ).attrib[ 'dev' ] = 'cdrom'

   def setIso( self, diskPath = None ):
      for child in self.root.iter( 'devices' ):
         for disk in child.iter( 'disk' ):
            if disk.attrib[ 'device' ] == 'cdrom':
               if disk.find( 'source' ) is not None:
                  disk.find( 'source' ).attrib[ 'file' ] = diskPath
               else:
                  ET.SubElement( disk, 'source' , attrib={ 'file' : diskPath } )

   def writeXml( self ):
      self.tree.write( self.outFile )

# Main test area
def main():
   global debug
   global dryRun
   parser = argparse.ArgumentParser( description='Get your VM going!' )
   parser.add_argument( '-d', '--debug', action='store_true',
                        help='print debug messages to stdout' )
   parser.add_argument( '-r', '--dry-run', action='store_true',
                        help='Does not write any changes to a file' )
   parser.add_argument( '-n', '--vmname', help='Name to be given to the VM',
                        required=True )
   parser.add_argument( '--device-bridge', help='Name of device bridge '
                        'network which connects to VM port 1', required=True )
   parser.add_argument( '--cluster-bridge', help='Name of the cluster control '
                        'bridged network which connects to VM port 2', default=None )
   parser.add_argument( '-e', '--emulator', help='Fully qualified file '
                        'system path to qemu-kvm binary',
                        default='/usr/bin/qemu-kvm' )
   parser.add_argument( '-k', '--identifier', help='Unique ID for virsh '
                        'to use to identify the VM. '
                        'Uses a random value if left unspecified',
                        default=random.choice( xrange( 1, 1000 ) ) )
   parser.add_argument( '-i', '--input', help='Path to XML template file',
                        default='./kvmTemplate.xml' )
   parser.add_argument( '-o', '--output',
                        help='Path to XML output file', default='./cvp.xml' )
   parser.add_argument( '-c', '--cdrom',
                        help='Path to configuration ISO file for CVP and '
                        'Aboot-veos-serial.iso for CVX', default=None )
   parser.add_argument( '-x', '--disk1', help='Path to primary disk for CVP',
                        default='./cvp.qcow2', required=True )
   parser.add_argument( '-y', '--disk2', help='Path to the data disk for CVP' )
   parser.add_argument( '-b', '--memory', help='Memory in Mega Bytes (MB)',
                        default='16384' )
   parser.add_argument( '-p', '--cpu', help='Number of CPUs to use', default='8' )
   parser.add_argument( '-t', '--bootcdrom', action='store_true',
                        help='Boot from ISO/CDROM. Needed for CVX' )
   args = parser.parse_args()

   debug = args.debug
   dryRun = args.dry_run
   dprint( "Using %s as name of CVP VM" % args.vmname )
   dprint( "Using %s as device network connectivity" % args.device_bridge )
   dprint( "Using %s as switch access network" % args.cluster_bridge )
   dprint( "Using %s as emulator" % args.emulator )
   dprint( "Using %s as VM identifier" % args.identifier )
   dprint( "Using %s input template xml" % args.input )
   dprint( "Using %s output xml file" % args.output )
   dprint( "Using %s memory with %s cpu" % ( args.memory, args.cpu ) )

   xml = XmlParser( args.input, args.output )
   xml.setDiskInfo( diskName='hda', diskType=findDiskType( args.disk1 ),
                    diskPath=args.disk1 )
   xml.setDiskInfo( diskName='hdb', diskType=findDiskType( args.disk2 ),
                    diskPath=args.disk2 )

   xml.setDeviceBridge( deviceBr=args.device_bridge )
   xml.setClusterBridge( swBr=args.cluster_bridge )
   xml.setName( args.vmname )
   xml.setEmulator( args.emulator )
   xml.setUuid()
   xml.setVmId( identifier=args.identifier )
   if args.cdrom:
      xml.setIso( diskPath=args.cdrom )
   xml.setCdromBoot( args.bootcdrom )
   xml.setCpuCount( cpuCount=args.cpu )
   xml.setRam( ramMb=args.memory )
   printErrors()
   printWarnings()
   if errorStrings:
      print "FAILURE: Fix all the errors and try again"
   elif not dryRun:
      xml.writeXml()
      print 'SUCCESS: XML output is in %s' % args.output

if __name__ == "__main__":
   main()

Input XML Template – kvmTemplate.xml

<domain type='kvm' id='@random_id@'>
  <name>@cvp_appliance_name@</name>
  <uuid>@unique_uuid@</uuid>
  <memory unit='MiB'>16384</memory>
  <currentMemory unit='MiB'>16384</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>@Path_to_qemu_kvm_binary@</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/persist/var/lib/libvirt/images/cvp-disk1.qcow2'/>
      <target dev='hda' bus='ide'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/persist/var/lib/libvirt/images/cvp-disk2.qcow2'/>
      <target dev='hdb' bus='ide'/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file=''/>
      <target dev='hdc' bus='ide'/>
      <alias name='ide0-1-0'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb0'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb0'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb0'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <source bridge='@device_bridge_name@' />
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='@cluster_bridge_name@' />
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net1'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>