Posted on December 5, 2020 10:18 am
 |  Asked by Phuoc Hoang

Hi guys,

I built an OpenStack testbed with an Arista router as the TOR. I have tried the ML2 features, but some facets are still unclear to me. Here are my concerns; I would like to know more detail about them.

1. ML2 Hierarchical Port Binding

As I understand it, this feature is used to offload VXLAN encap/decap to the physical TOR router rather than doing it in software (such as OVS) on the compute node.
However, in my testbed I only have one Arista TOR router, connected to compute node A. I have another compute node B without any TOR connected to it. Does ML2 HPB support creating a VXLAN network to connect VMs on compute nodes A and B, such that traffic is VXLAN-encapsulated on the TOR and decapsulated by OVS on compute node B?

2. About the ML2 HPB configuration above
In the OpenStack Deployment Guide, the value of bridge_mappings is not clear to me.
If compute node A connects to TOR A and compute node B connects to TOR B, what is the value of bridge_mappings on:
– the neutron server node
– the compute nodes

3. Arista L3 Service
Using this feature, I configure service_plugins = arista_l3. When I create a router to connect some virtual networks, I see a VLAN interface configured on my Arista router. Is this feature only supported on VLAN networks? I ask because with VXLAN networks, routing still happens on the network node.

Thank you so much.

Posted by Mitchell Jameson
Answered on December 7, 2020 9:56 pm

Hi Phuoc,

First, HPB is not the only mechanism that supports encap/decap offload to an Arista switch. This can also be achieved by simply provisioning VLAN networks in OpenStack after configuring a VLAN-to-VNI mapping for the OpenStack region on CVX. (See 'Automating VLAN to VNI mapping' in the Arista OpenStack deployment guide.) In many circumstances, this meets users' needs.
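To illustrate the VLAN-network approach, here is a hedged sketch of what provisioning such a network might look like with the standard OpenStack client. The physical network name ("datacentre") and the VLAN segment ID (100) are placeholders that would need to match your deployment and the VLAN-to-VNI mapping configured on CVX.

```shell
# Sketch only: create a VLAN provider network whose segment ID (100)
# would be stretched over VXLAN by the VLAN-to-VNI mapping on CVX.
# "datacentre" is a placeholder physnet name for your deployment.
openstack network create \
  --provider-network-type vlan \
  --provider-physical-network datacentre \
  --provider-segment 100 \
  tenant-net-100
```

With this in place, no HPB-specific configuration is required on the compute nodes; the OVS agent handles the network as an ordinary VLAN while the fabric carries it over VXLAN.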

Where HPB becomes most useful is in scaling beyond 4K tenant networks, or in multi-vendor deployments where all TORs support a shared control plane (e.g., EVPN).

On to your questions:

  1. Yes, HPB does technically support this type of deployment. However, in practice, we have not tested any control plane that supports MAC reachability distribution between Arista TOR VTEPs and OVS VTEPs. For this reason, we recommend using either a purely TOR VXLAN fabric or a purely software/OVS VXLAN fabric.
  2. On the neutron server node it should be <connected TOR>:<OVS bridge on neutron server>; on compute node A it should be TORA:<OVS bridge on compute node A>; and on compute node B it should be TORB:<OVS bridge on compute node B>. This configuration, along with the rest of the HPB configuration, is detailed in the "ML2 Hierarchical Port Binding" section of the aforementioned Arista OpenStack deployment guide.
  3. Yes, the Arista L3 plugin is primarily intended for use with VLAN networks. If you wish to use it in conjunction with VXLAN, VLAN to VNI mappings must be manually configured on the Arista HW router. This configuration is not supported with HPB as the VLAN to VNI mappings are not consistent throughout the VXLAN fabric.
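As a concrete illustration of point 2, the bridge_mappings setting described above might look like the following in the OVS agent configuration on each node. This is a sketch only: the bridge name (br-vlan) and the physnet names (TORA, TORB) are placeholders and must match the physical_network names used when creating networks.

```ini
# Hypothetical [ovs] section of the OVS agent config
# (e.g. openvswitch_agent.ini) on compute node A, which is
# attached to TOR A. Names here are illustrative placeholders.
[ovs]
bridge_mappings = TORA:br-vlan

# On compute node B (attached to TOR B) the equivalent would be:
#   bridge_mappings = TORB:br-vlan
# and on the neutron server node:
#   bridge_mappings = <connected TOR>:<OVS bridge on the server node>
```

The key point is that each node maps its locally connected TOR's physnet name to its own local OVS bridge, so the mapping value differs per node.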

Hope that helps,

