• Setting up EVE-NG, CloudVision Portal and vEOS

 
 

Introduction

EVE-NG is a client-less multivendor network emulation software that enables network and security professionals to build out network topologies and simulate networking environments.

Using EVE-NG, Arista vEOS and Arista CloudVision, it is possible to simulate a datacenter network from start to end: connect and provision the network, test scripting against CloudVision, and finally test your EVPN Spine-Leaf configuration.

This guide explains how to set up an EVE-NG environment (either using the OVF image provided on the EVE-NG site, or by doing a bare-metal install), add the vEOS and CloudVision images to EVE-NG, and connect the switches in a Leaf-Spine topology. The vEOS switches will use ZTP to register on CloudVision and prepare for the initial deployment. This guide also prepares the lab for manual provisioning of an EVPN deployment, or for using the Arista Validated Designs (AVD) available here – https://github.com/aristanetworks/netdevops-examples/tree/master/ansible/ztp-avd-cvp

More detailed installation guides can be found on the EVE-NG website:

https://www.eve-ng.net/

Deploy EVE-NG from an OVF file in VMware ESXi 6.7.0 Update 3

  • Start and log in to the VMware ESXi client, choose Create / Register VM and select “Deploy a virtual machine from an OVF or OVA file”

  • Enter a name for the virtual machine and select the EVE-NG OVF and VMDK files

  • Important deployment options: choose a datastore with at least 30GB of free space and set the provisioning type to thick. For the network mapping, select the network the VM will be managed from.

  • After the deployment, validate the memory and CPU settings: hardware virtualisation must be exposed to the guest OS and the VM needs at least 16GB of memory (as a best-practice minimum). Take note that each switch uses 2GB of RAM and CloudVision is defined with 22GB of RAM, so ensure you assign enough memory and CPU. A quick check for exposed virtualisation is sketched below.
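A quick way to confirm that hardware virtualisation really is exposed to the guest is to check from the EVE-NG VM itself once it has booted; this is a minimal sketch (kvm-ok comes from the Ubuntu cpu-checker package and may need to be installed first):

# Run on the EVE-NG VM
egrep -c '(vmx|svm)' /proc/cpuinfo    # should print 1 or higher
kvm-ok                                # expect "KVM acceleration can be used"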

Bare-metal install EVE-NG on Ubuntu

Note – Skip if deploying OVF file

  • Download the latest installation ISO from the EVE-NG Download site – https://www.eve-ng.net/index.php/download/
  • Follow the Ubuntu Server installation. Detailed Ubuntu Server installation instructions are not included in this guide.
  • Remember to select install EVE Bare:

  • After the Ubuntu Server installation is done, the server should come up with EVE-NG installed; a quick check is sketched below.
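A minimal sanity check, assuming the default web server setup, is to confirm that the EVE-NG web UI is being served locally:

# Run on the EVE-NG server
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
# 200 (or a 30x redirect) indicates the web UI is up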

EVE-NG Wizard for base configuration

After a bare-metal installation or after deploying the OVF file on VMware ESXi, a setup wizard is available. When Ubuntu (with EVE-NG installed) boots for the first time, the wizard auto-starts.

  • If you need to reconfigure using the EVE-NG wizard or if the network has changed, start up the EVE-NG server and log in on the CLI with the following credentials:
Username: ‘root’ 
Password:  ‘eve’ OR “your bare-metal installation password”
  • Force the wizard to start on the next (re)boot (if a configuration change is needed):
rm -f /opt/ovf/.configured
reboot
  • After running the wizard, log in as root and run the update and upgrade commands:
    • apt-get update
    • apt-get upgrade
  • If during the upgrade you are asked what to do with a modified configuration file, always keep the currently installed version (the default, N); a non-interactive alternative is sketched below.
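If you prefer to skip the interactive prompts entirely, the same choice (keep the locally installed configuration files) can be passed to dpkg up front; a minimal sketch:

apt-get update
apt-get -y -o Dpkg::Options::="--force-confold" upgrade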

Prepare EVE-NG for the use of vEOS-lab switch images

To get the latest vEOS lab-image, you need to create an account on the Arista website using your corporate email address:

https://www.arista.com/en/support/software-download

We require the vEOS-lab VMDK image; vEOS-lab-4.27.1F.vmdk is used in the examples below.

Note: for vEOS, the MTU size is currently limited to 9000 with EVE-NG.

Note: Older and newer versions should also work.

  • Upload the downloaded vEOS-lab image to a temporary folder on the EVE-NG server using, for example, FileZilla or WinSCP. Then log in as root over SSH and convert it:
cd /tmp 
/opt/qemu/bin/qemu-img convert -f vmdk -O qcow2 vEOS-lab-4.27.1F.vmdk hda.qcow2
  • Create a new folder in “/opt/unetlab/addons/qemu/” for the converted vEOS image and move it there (the folder name must start with “veos-” to be selectable in EVE-NG):
mkdir -p /opt/unetlab/addons/qemu/veos-4.27.1F 
mv hda.qcow2 /opt/unetlab/addons/qemu/veos-4.27.1F
  • Clean and fix permissions:
cd .. 
rm -rf /tmp/* 
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
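
Optionally, confirm that the image landed where EVE-NG expects it and that the conversion produced a valid qcow2 file (a minimal sketch using the paths from the steps above):

ls -l /opt/unetlab/addons/qemu/veos-4.27.1F/
/opt/qemu/bin/qemu-img info /opt/unetlab/addons/qemu/veos-4.27.1F/hda.qcow2
# "file format: qcow2" confirms the conversion succeeded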

Prepare EVE-NG for the use of CloudVision images

To use the CloudVision lab-image, please contact your local Arista SE/Sales. Alternatively, it is available for download on the same portal:

https://www.arista.com/en/support/software-download

  • cvp-2021.2.1-kvm.tgz (extract it to get the disk images – an example command on Linux is tar -xvf cvp-2021.2.1-kvm.tgz)
    • disk1.qcow2
    • disk2.qcow2

Note: It is recommended that CVP node VMs have 22GB of RAM and 16 vCPUs allocated for deployments.

Note: Older and newer versions should also work.

  • Every node type in EVE-NG has a template file that specifies node startup parameters such as node name, number of interfaces, which hypervisor supports the node, and the startup parameters used by the hypervisor.
  • When creating a new custom image (like CVP), it is usually easiest to create a template file by copying an existing template that is similar to the type of node you will create, and then modifying the copy. The following shows how to copy and modify an existing template file.
  • Newer versions of EVE-NG have templates that cater for different CPUs, and depending on which CPU (Intel or AMD) your EVE-NG server has, you will need to create and modify the templates in the matching directory. In the example below, we have an AMD CPU.
  • Before copying the images, we need to create a cvp template based on a copy of the linux template:
cd /opt/unetlab/html/templates/amd/
cp linux.yml cvp.yml
  • Change the following values in the new file (the integrated WinSCP editor can also be used):
    • vi /opt/unetlab/html/templates/amd/cvp.yml
---
type: qemu
description: CloudVisionPortal
name: CloudVisionPortal
cpulimit: 1
icon: CVP.png
cpu: 16
ram: 22528
ethernet: 2
console: vnc
shutdown: 1
qemu_arch: x86_64
qemu_nic: virtio-net-pci
qemu_options: -machine type=pc,accel=kvm -vga std -usbdevice tablet -boot order=dc
...

  • I have added a custom PNG file (CVP.png) for CVP – this can be uploaded to /opt/unetlab/html/images/icons (an example upload command is sketched after this list).

  • Now the template is ready and you can upload the CloudVision image files. Upload the extracted images to EVE-NG using, for example, FileZilla or WinSCP. Then log in as root over SSH, and rename and move them to the final folder (the folder name must start with “cvp-” to be visible in EVE-NG):
cd /tmp  
#Copy files to /tmp
mkdir -p /opt/unetlab/addons/qemu/cvp-2021.2.1
mv disk1.qcow2 /opt/unetlab/addons/qemu/cvp-2021.2.1/hda.qcow2 
mv disk2.qcow2 /opt/unetlab/addons/qemu/cvp-2021.2.1/hdb.qcow2
  • Clean and fix permissions
cd /tmp
rm *
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
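
The CVP.png icon referenced in the template is not part of a default EVE-NG install; if you have your own icon file, it can be copied from your workstation into the icons folder, for example (hypothetical source file and server address):

# Run from the machine holding CVP.png; replace <eve-ng-ip> with your EVE-NG server
scp CVP.png root@<eve-ng-ip>:/opt/unetlab/html/images/icons/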

Note: for our test setup (AMD Ryzen 3900X) we had to change the QEMU version for the Arista vEOS nodes to 2.12.0 – we also increased the default number of Ethernet interfaces:

vi /opt/unetlab/html/templates/amd/veos.yml
---
type: qemu
config_script: config_veos.py
description: Arista vEOS
name: vEOS
cpulimit: 1
icon: AristaSW.png
cpu: 1
ram: 2048
ethernet: 12
console: telnet
qemu_arch: x86_64
qemu_version: 2.12.0
qemu_options: -machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic -display none -no-user-config -rtc base=utc -boot order=d
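
Both templates above sit in the amd/ directory and the vEOS one pins qemu_version to 2.12.0. Before relying on them, it is worth confirming your host's CPU vendor and that the referenced QEMU build is actually installed; a minimal sketch, assuming the usual EVE-NG layout of versioned /opt/qemu-* directories and an intel/ counterpart to the amd/ template folder:

grep -m1 vendor_id /proc/cpuinfo   # AuthenticAMD -> templates/amd/, GenuineIntel -> templates/intel/
ls -d /opt/qemu*                   # a /opt/qemu-2.12.0 directory should exist for qemu_version: 2.12.0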

Test EVE-NG with the new Arista vEOS and CloudVision images

Before you start, you need the Windows/Mac/Linux client-side pack, which installs everything necessary for running the telnet, VNC, Wireshark and RDP applications used when building and working on labs in EVE-NG:

https://www.eve-ng.net/index.php/download/

Start up the EVE-NG user interface in your browser and log in:

  • Username: ‘admin’
  • Password:  ‘eve’

  • Create a new LAB and add new nodes

  • Successful installation should render the nodes similar to the outputs below:

 

Add nodes using ZTP to CloudVision

Note – In this example we will add the switches without using the Arista Validated Design (AVD) Ansible scripts.

To painlessly add switches to CVP, we use the ZTP function on the switches to grab an IP address from the DHCP server (which we will set up on the CVP server) and point the bootfile-name option at the CVP server. We will be following the IP scheme for the AVD located at https://github.com/arista-netdevops-community/ansible-avd-cloudvision-demo/blob/master/INSTALLATION.md

This allows you to break away from manually provisioning the lab and instead use the automated approach shown here – https://github.com/arista-netdevops-community/ansible-avd-cloudvision-demo

In short:

  • CVP Server
    • Cluster interface: eth0 / use your own IP address that you can reach
    • Device interface: eth1 / 10.255.0.1/24
  • Management Network: 10.255.0.0/24
    • DC1-SPINE1: 10.255.0.11/24
    • DC1-SPINE2: 10.255.0.12/24
    • DC1-LEAF1A: 10.255.0.13/24
    • DC1-LEAF1B: 10.255.0.14/24
    • DC1-LEAF2A: 10.255.0.15/24
    • DC1-LEAF2B: 10.255.0.16/24
    • DC1-L2LEAF1A: 10.255.0.17/24
    • DC1-L2LEAF2B: 10.255.0.18/24
  • Default Username & Password:
    • admin / arista123
    • cvpadmin / arista123

Note: In newer versions of CVP (2021.1 and onwards), there is a requirement for a /16 subnet for the internal Kubernetes cluster overlay network. Please ensure that this range is not being used anywhere in your network. The benchmarking range can be leveraged for this – 198.18.0.0/16

Starting with the CVP server:

Apply and wait for CVP to come up:

Log into CVP:

The default username and password is cvpadmin; during the initial setup this is changed to cvpadmin / arista123.

Now get the MAC addresses of the devices in the lab, and set up the DHCP server on CVP to assign the names and fixed addresses:

For example, I will grab the MAC addresses from the spines and add them to the DHCP server configuration:

  • Log into Spine 1
    • localhost#show int ma1 | i Hardware
      Hardware is Ethernet, address is 5000.0002.0000 (bia 5000.0002.0000)
  • Log into Spine 2
    • localhost#show int ma1 | i Hardware
      Hardware is Ethernet, address is 5000.0003.0000 (bia 5000.0003.0000)
  • On the CVP server
$ vi /etc/dhcp/dhcpd.conf

subnet 10.255.0.0 netmask 255.255.255.0 {
    range 10.255.0.200 10.255.0.250;
    option routers 10.255.0.1;
    option domain-name-servers 10.83.28.52, 10.83.29.222;
    option bootfile-name "http://10.255.0.1/ztp/bootstrap";
}

host DC1-SPINE1 {
    option host-name "DC1-SPINE1";
    hardware ethernet 50:00:00:02:00:00;
    fixed-address 10.255.0.11;
    option bootfile-name "http://10.255.0.1/ztp/bootstrap";
}

host DC1-SPINE2 {
    option host-name "DC1-SPINE2";
    hardware ethernet 50:00:00:03:00:00;
    fixed-address 10.255.0.12;
    option bootfile-name "http://10.255.0.1/ztp/bootstrap";
}
[...]

Then, restart your DHCP server:

$ service dhcpd restart
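
To confirm the daemon came back up and is handing out the reserved addresses, watch its status and log output on the CVP server (a minimal sketch; on this CentOS-based VM, DHCP messages normally end up in /var/log/messages):

$ systemctl status dhcpd
$ tail -f /var/log/messages | grep -i dhcp
# Expect DHCPDISCOVER / DHCPOFFER / DHCPACK lines as the switches run ZTP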

Once restarted, the switches will start with the ZTP process:

Dec 14 10:09:14 localhost ZeroTouch: %ZTP-6-DHCPv4_QUERY: Sending DHCPv4 request on [ Ethernet1, Ethernet2, Ethernet3, Ethernet4, Ethernet5, Ethernet6, Ethernet7, )
Dec 14 10:09:15 localhost ZeroTouch: %ZTP-6-DHCPv4_SUCCESS: DHCPv4 response received on Management1 [ Ip Address: 10.255.0.15/24/24; Hostname: DC1-LEAF2A; Nameserv)
Dec 14 10:09:20 DC1-LEAF2A ZeroTouch: %ZTP-6-CONFIG_DOWNLOAD: Attempting to download the startup-config from http://10.255.0.1/ztp/bootstrap
Dec 14 10:09:20 DC1-LEAF2A ZeroTouch: %ZTP-6-CONFIG_DOWNLOAD_SUCCESS: Successfully downloaded config script from http://10.255.0.1/ztp/bootstrap
Dec 14 10:09:20 DC1-LEAF2A ZeroTouch: %ZTP-6-EXEC_SCRIPT: Executing the downloaded config script
Dec 14 10:09:20 DC1-LEAF2A cvIps = 10.255.0.1
Dec 14 10:09:20 DC1-LEAF2A cvpNotifyIntvl = 60
Dec 14 10:09:20 DC1-LEAF2A configPollIntvl = 2
Dec 14 10:09:20 DC1-LEAF2A cvpUrl = https://10.255.0.1/cvpservice/services/ztp/config
Dec 14 10:09:20 DC1-LEAF2A cvpUser = cvptemp
Dec 14 10:09:20 DC1-LEAF2A cvAddr = 10.255.0.1:9910
Dec 14 10:09:20 DC1-LEAF2A cvAuth = token,/tmp/token
Dec 14 10:09:20 DC1-LEAF2A Removing temporary files
[...]

The switches should now register on CVP after a short while:
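
From the switch side, a device that has completed ZTP and is streaming to CloudVision will have a TerminAttr daemon configured and running (this matches the cvAddr/cvAuth options seen in the ZTP log above); a minimal check, assuming the hostname assigned via DHCP:

DC1-SPINE1#show running-config section TerminAttr
DC1-SPINE1#show daemon TerminAttr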

 

 


Original Author: Remi Batist. Network Architect at AXEZ ICT Solutions B.V.
Updated by Micholl Thom at Arista Networks – Last updated on 2021/12/14

 
