Deploying CloudVision Portal (CVP) on Proxmox VE

 
 

Introduction

Proxmox VE is an open-source server virtualization solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, high-availability clusters, storage, and networks with an integrated, easy-to-use web interface or via the CLI.

The purpose of this article is to assist in deploying Arista’s CloudVision Portal (CVP) on Proxmox VE. Running CVP on Proxmox VE offers an open-source, subscription-free option for those who may not be able to afford proper VMware licensing for lab/demo deployments, or who would simply like to take advantage of the rich, open-source feature set provided by Proxmox VE.

For more information about Proxmox VE, please visit https://www.proxmox.com/en/proxmox-ve

NOTE: Proxmox VE is not recommended for production CVP deployments. Any support for deploying or troubleshooting this environment will have to be provided by the Proxmox VE community.

Requirements

For this article, CVP 2020.1.1 will be installed on Proxmox VE 6.1-8 using the VMware OVA located on Arista’s Software Download Page (See below image).

NOTE: CVP can be installed on Proxmox VE using multiple methods. The following method takes the VMware OVA, converts it, and deploys it within Proxmox VE. An alternative is to set up a minimal CentOS VM, then download and run the CVP RPM installer. That method is particularly useful if you don’t have a host with a 1 TB disk as recommended in the deployment guide, and it gives you more control over the VM. For more information on this method, please see the EOS Central article CVP RPM Installer.

For CVP guest resource allocation, please reference the CloudVision Portal 2020.1.1 Release Notes, section “2020.1 Supported Scale”, also located on the Arista Software Download Page.

Preparing Images

Download the latest CVP .ova file.

Once downloaded, extract the contents of the .ova file (Note that an .ova file is simply a TAR archive that contains the various files associated with a VM such as its .ovf file and .vmdk disk images).

In this example, I used 7-Zip to extract the contents shown below.
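
Alternatively, since the .ova is just a TAR archive, it can be extracted from any Linux shell (including on the Proxmox VE host itself). As a sketch, assuming the downloaded file is named cvp-2020.1.1.ova (adjust to match the release you downloaded):

tar -xvf cvp-2020.1.1.ova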

After extracting the contents of the .ova file, SCP/SFTP cvp.ovf and its two .vmdk images to the Proxmox VE host (in this example, I put them in /root for simplicity). Note that cvp.mf is not needed and can be discarded.
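
If you extracted the archive on a separate workstation, scp is one way to copy the files over; the host address below is a placeholder, and the .vmdk filenames should match whatever was extracted from your .ova:

scp cvp.ovf cvp*.vmdk root@<proxmox-host>:/root/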

Deploying CloudVision Portal (CVP) on Proxmox VE

Once the required files have been transferred to the Proxmox VE host, issue the following command to convert the VMware configuration/images and create the virtual machine:

qm importovf 100 cvp.ovf local-lvm

Where 100 is the VM ID that will be assigned to the VM (every VM in Proxmox VE needs a unique ID), and local-lvm is the storage on the Proxmox VE host where the VM’s disks will be placed (this will be the correct target for most users unless network storage is being used).

Once this operation has completed, you should see the following within the Proxmox VE Web GUI.
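
The import can also be verified from the CLI by dumping the new VM’s configuration (assuming VM ID 100 as used above):

qm config 100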

Even though CVP has been added to Proxmox VE, additional configuration is required before the VM will boot successfully.

 

Let’s start with the “Hardware” tab.

In this tab,

  • Click on “Memory” and then click edit to change the guest memory allocation. Allocate memory based on the “2020.1 Supported Scale” recommendations within the release notes. Note that for this example I have allocated 65,536 MB (64 GB).
  • Click on “Processors” and then click edit to change the guest processor allocation. Allocate processor cores based on the “2020.1 Supported Scale” recommendations within the release notes, and for best performance ensure the CPU type is set to “host”. Note that for this example I have allocated 28 vCPUs.
  • Note: When using the “host” CPU option, be aware that if the CVP guest is moved to a machine with a different processor architecture, the guest may not boot. Keep the default (kvm64) CPU type, which is slower, if the guest will be moved between servers with unlike processor architectures.
  • Click on both the “scsi0” and “scsi1” hard disks and then click detach. Once both images have been detached, edit them and re-attach them as “VirtIO Block” devices with “Direct sync” cache.
  • Click add to add a network device. Connect the network device to the proper bridge (in this example, the default “vmbr0”) and use the VirtIO network interface, as it can deliver up to three times the throughput of an emulated Intel E1000 network card.
  • Once everything is configured, your hardware configuration should be similar to the below image. (For a CLI alternative, see the qm set sketch after this list.)
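
For those who prefer the CLI, the same hardware changes can be applied with qm set. The following is only a sketch, assuming VM ID 100, local-lvm storage, and the disk volume names created by the import (verify yours with qm config 100 before re-attaching):

qm set 100 --memory 65536
qm set 100 --cores 28 --cpu host
qm set 100 --delete scsi0
qm set 100 --delete scsi1
qm set 100 --virtio0 local-lvm:vm-100-disk-0,cache=directsync
qm set 100 --virtio1 local-lvm:vm-100-disk-1,cache=directsync
qm set 100 --net0 virtio,bridge=vmbr0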

Now let’s move on to the “Options” tab.

In this tab,

  • Change the name of the VM if preferred (optional).
  • Define if the guest should start at boot and define start/shutdown order (optional).
  • Click on “OS Type” and then click edit to change the OS Type to “Linux 5.x – 2.6 Kernel”.
  • Click “Boot Order” and then click edit to change Disk to “virtio0”.
  • Click “QEMU Guest Agent” and then click edit, checking the box to enable QEMU Guest Agent.
  • Once everything is configured, your options should be similar to the below image. (Again, CLI equivalents are sketched after this list.)
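
These options can likewise be set from the CLI; again, a sketch assuming VM ID 100 and the VM name “cvp” (the boot syntax below matches Proxmox VE 6.1):

qm set 100 --name cvp --onboot 1
qm set 100 --ostype l26
qm set 100 --boot c --bootdisk virtio0
qm set 100 --agent enabled=1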

Once this modified configuration has been applied, you should be able to successfully start the VM.
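
The VM can be started from the Web GUI or, equivalently, from the CLI:

qm start 100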

When you start the VM, click on the “Console” tab within Proxmox VE and you should be presented with the login screen after the guest boots. Continue provisioning CVP via shell-based configuration.

For assistance with configuring CVP via the shell, click on the following link.

https://www.arista.com/en/cg-cv/cv-shell-based-configuration#ww1169945

Known Limitations

  • Do not attempt to back up the virtual machine while it is running. Quiescing CVP’s filesystem corrupts it, leaving the guest unbootable and losing all data.
  • Do not enable Non-Uniform Memory Access (NUMA) under the VM’s processor configuration. Enabling it causes unpredictable guest behavior; the guest will either fail to complete shell-based configuration or become corrupt over time.