vEOS – Running EOS in a VM



EOS is released as a single image that supports all of our platforms. That same single image can even be run in a virtual machine! This article describes how to set up a virtual machine in order to test EOS functionality, or to develop and test your own extensions.

EOS in a VM

EOS running in a VM can be used to test almost all aspects of EOS, including:

  • Management Tools – CLI, SNMP, AAA, ZTP
  • L1 Connectivity – Link up/down (when connected to another EOS VM port), LLDP
  • L2 – VLANs, Port-channels, MLAG
  • L3 – Routed ports, Static routing, BGP, OSPF, VARP, VRRP
  • Extensibility – eAPI, EOS SDK, OpenFlow

The VM simulates a fixed system switch with the following hardware:

  • boot loader (IDE CD-ROM drive)
  • internal 2GB flash (IDE hard disk)
  • external USB port
  • 1 to 8GB of memory
  • Management1 interface
  • Up to 4 or 7 front-panel interfaces (depends on the hypervisor)

A Few Things to Note

2GB of memory per vEOS instance is recommended, though 1.5GB is sufficient for most testing.

Aboot-veos 8.0.0 or above is recommended. It is required when running EOS versions 4.17.0F and above.

Certain data-plane features can only be configured if front-panel interfaces exist, so be sure to configure your VM with at least 2 interfaces (1 management and 1 front-panel interface).

The simulated flash differs a bit from a new system from the factory in that it does not contain an empty startup-config file. This prevents ZTP from starting automatically at boot, which is what would normally happen on a new system. In order to test ZTP, you can delete the startup-config file and reboot.
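As a sketch, re-triggering ZTP from the EOS CLI looks something like the following (the exact flash path and reload syntax may vary by EOS version):

```
switch# delete flash:startup-config
switch# reload now
```

On the subsequent boot, the absence of a startup-config file causes ZTP to start automatically, just as it would on a new system from the factory.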

You will notice a significant delay the very first time you boot the VM. As part of the standard boot process, we copy the EOS swi image we’re booting to another location on flash if it has changed since the last time we booted. This can take some time on the virtual flash, but will not be necessary after the first boot.

The vEOS SWI image provided in the vEOS vmdk is currently a derivative of the official EOS release. It cannot be loaded onto real hardware as it has had its hardware support stripped out.

At this point, official EOS releases do not support running in an EOS VM with front-panel interfaces. A couple of minor changes were required to support front-panel interfaces. These changes will eventually make their way into an official release, at which point you will once again be able to run the exact same image on your switch as in your VM.

Starting the VM

vEOS is supported on QEMU/KVM, VirtualBox, VMware Workstation, VMware Fusion 4 and 5, VMware ESX, and Hyper-V. For the most flexible networking options, QEMU/KVM, VirtualBox, or VMware ESX are recommended.

Hypervisor-agnostic Configuration

The Aboot-veos iso must be set as a CD-ROM image on the IDE bus, and the EOS vmdk must be a hard drive image on the same IDE bus. The simulated hardware cannot contain a SATA controller or vEOS will fail to fully boot.

EOS is a Fedora-based Linux distribution with a 64-bit kernel. So when specifying the operating system type, use “Fedora” or “Linux”, and be sure to specify that it is 64-bit rather than 32-bit.

The e1000 network adapter is generally recommended for hypervisors that support it. The virtio network adapter is also fully supported, but requires EOS version 4.14.2 and Aboot-veos version 2.1.0 and above. Other alternative network adapters are specified below and are supported on a best effort basis.

Using VMware

vEOS can be run on a variety of VMware products – we support VMware Workstation, VMware Fusion, and ESX.

You can get up to 4 front-panel interfaces on VMware Workstation and Fusion, and many more on ESX. For all VMware hypervisors, the e1000 network adapter should be used for best results. Each VMware product provides a different level of support for configuring network connections between VMs.

Once you’ve downloaded the Aboot-veos bootloader iso and the EOS vmdk, you can set them up as the CD-ROM image and hard drive image on the IDE bus, create the desired number of network adapters, and you’re ready to go!

VMware ESX

See Running vEOS on ESXi 5.5.

VMware Fusion

See the VMware Fusion virtual networks tech tip.

VMware Workstation

VMware Workstation has some of the most flexible networking options.

Open up your “Virtual Network Editor” (Edit->Virtual Network Editor). Create a “host-only” vmnet interface for every virtual wire you want to use to connect two VM interfaces together. The subnet IP addresses do not really matter, but keep DHCP off.

Once you’ve created these interfaces, it is really important to make sure that your user has full permissions on the associated devices; otherwise VMware does not allow all packet types through the connection. On Linux this means making sure you have read and write permissions on all /dev/vmnet* devices. I’m not sure what, if anything, is necessary on Windows or Mac systems.
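On Linux, for example, the permissions can be granted along these lines (the device names assume VMware’s defaults, and the change is reset whenever the vmnet devices are recreated):

```
# Grant all users read/write access to the vmnet devices (run as root)
chmod a+rw /dev/vmnet*
```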

Once you’ve done this you can change the network connection type of each front-panel interface and associate each one with a different vmnet interface. You can then wire two VM interfaces together by associating them with the same vmnet interface.

It is also possible to use virtual LAN segments to connect the front-panel interfaces, but this has not been fully tested.

Using QEMU and KVM

vEOS is fully supported on QEMU/KVM with up to 7 front-panel interfaces. Running EOS in a virtual machine on top of KVM requires that the qemu-kvm package is installed. Once you’ve downloaded the boot loader iso and EOS vmdk from the downloads section, you can start the VM with 4 front-panel ports using:

qemu-kvm -nographic -vga std -cdrom Aboot-veos-serial-8.0.0.iso -boot d \
  -hda vEOS-lab-4.14.5F.vmdk -usb -m 1024 \
  -net nic,macaddr=52:54:00:01:02:03,model=e1000 \
  -net nic,macaddr=52:54:00:01:02:04,model=e1000 \
  -net nic,macaddr=52:54:00:01:02:05,model=e1000 \
  -net nic,macaddr=52:54:00:01:02:06,model=e1000 \
  -net nic,macaddr=52:54:00:01:02:07,model=e1000

The e1000 or virtio network adapters should be used for best results.

In order to connect to the VM over the various interfaces, you’ll need to change the options passed to -net based on how you want to set up your virtual network.
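As one sketch of this, two vEOS VMs can be wired together by putting the corresponding interface of each VM on its own VLAN and pointing a pair of -net socket options at the same TCP port (the port number 10001 here is an arbitrary choice):

```
# On the first VM: put one nic on vlan 1 and listen on a local socket
-net nic,vlan=1,macaddr=52:54:00:01:02:04,model=e1000 -net socket,vlan=1,listen=:10001

# On the second VM: connect the matching nic's vlan to the same socket
-net nic,vlan=1,macaddr=52:54:00:02:02:04,model=e1000 -net socket,vlan=1,connect=127.0.0.1:10001
```

Traffic sent out the first VM’s interface then arrives on the second VM’s interface, simulating a directly connected cable between the two ports.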

Using Hyper-V

vEOS has beta support for Hyper-V with up to 4 front-panel interfaces. The Aboot-veos iso must be set as a CD-ROM image on the IDE bus, and the EOS vmdk must be a hard drive image on the same IDE bus. Each interface must be created as a legacy network adapter (which simulates a DEC 21140).

Using VirtualBox

vEOS has support for VirtualBox with up to 7 front-panel interfaces. The EOS vmdk must be a hard drive image on the IDE bus, and the Aboot-veos iso must be set as a CD-ROM image on the same IDE bus. For best results, the virtio network adapter should be used, though it requires EOS version 4.14.2 or above and Aboot version 2.1.0 or above. Alternatively, the PCnet-FAST III (Am79C973) network adapter can be used. Note that neither of these is the default network adapter.

VirtualBox supports a wide range of networking options for connecting VMs. Attaching interface adapters to a named Internal Network is an easy way to connect VM interfaces to each other. Note that you must set Promiscuous Mode for each interface adapter to at least “Allow VMs” for traffic to flow properly between VMs.
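For example, the setup above can be scripted with VBoxManage (the VM names vEOS1/vEOS2 and the internal network name link1 are placeholders for your own):

```
# Attach adapter 2 of each VM to the same internal network,
# using the virtio adapter and allowing promiscuous traffic between VMs
VBoxManage modifyvm vEOS1 --nic2 intnet --intnet2 link1 \
  --nictype2 virtio --nicpromisc2 allow-vms
VBoxManage modifyvm vEOS2 --nic2 intnet --intnet2 link1 \
  --nictype2 virtio --nicpromisc2 allow-vms
```

Because both adapters sit on the internal network link1, the two VMs’ second interfaces behave as if directly cabled together.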

For more information about running vEOS on VirtualBox, check out vEOS and VirtualBox and Building a Virtual Lab with Arista vEOS and VirtualBox.

Download vEOS

All that is required to get a copy of vEOS is a guest user registration and acceptance of the end user license agreement; visit our software download page to get started. The vEOS directory contains the latest vEOS-lab vmdk and Aboot iso images.
