Posted on March 30, 2016 12:27 pm
 |  Asked by Nicholas Sheridan
 |  3650 views
RESOLVED

Hi forum,

Considering the design elements of an L3LS, the separation of functions into dedicated leaves (such as Services, Compute, Storage, Border), and given Cisco's VDC and its logical separation (or rather, the broadly accepted/marketed view that secure and less secure networks can be collapsed onto the same physical device): are people doing the same with L3LS deployments, using VLANs and VXLAN? Or are they interconnecting two L3LS 'instances' through border leaves with a secure gateway?

Cloud datacentres seem to make no distinction between what a customer defines as secure and not secure; generally they suggest they offer 'adequate' logical division to keep the two 'things' separate. Enterprises like to de-risk this by contracting the risk out to the SPI provider, where possible.

The temptation to collapse all networks onto a single L3LS 'instance' for an enterprise, use VLANs and VXLAN to divide them, and make a policy statement of 'everything is a DMZ' is overwhelming from a cost point of view, and I'd be lying if I said I didn't find the idea of a single, universal, scalable approach (one L3LS 'instance' for all networks) attractive… perhaps with a 'campus leaf', for example?

Can anyone give me a steer on this or point me to further reading material?

Many thanks

Posted by Alexis Dacquay
Answered on March 30, 2016 5:03 pm

Hi Nicholas,

This is an interesting topic.

 

are people doing the same with L3LS deployments, using VLANs and VXLAN?

By definition, a Layer3 Leaf-Spine design only has Layer3 between the Leaf and Spine nodes; VLANs are isolated within each Leaf, but they do provide the Layer2 isolation most people need. Layer2 isolation exists at the physical switch level, but also on virtual switches.
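As a minimal sketch of what that means on a leaf (hypothetical interfaces, addresses and ASNs, not a definitive design), the uplinks towards the spines are plain routed point-to-point links with an eBGP session each; any VLAN exists only southbound of the leaf:

   ! Hypothetical leaf underlay: routed uplinks only, one eBGP session per spine
   interface Ethernet1
      description to-spine1
      no switchport
      ip address 10.1.1.1/31
   !
   interface Ethernet2
      description to-spine2
      no switchport
      ip address 10.1.1.3/31
   !
   router bgp 65101
      router-id 1.1.1.1
      neighbor 10.1.1.0 remote-as 65001
      neighbor 10.1.1.2 remote-as 65001
      network 1.1.1.1/32

No Layer2 ever crosses those routed links, which is where the isolation property comes from.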

Is Layer2 isolation enough? That is a valid question that companies have to answer with security, legal, risk and cost considerations in mind. Can traffic jump between vSwitches, VLANs, VNIs, MPLS labels, VRFs? Normally not, per networking fundamentals, but how far can you trust a vendor's implementation and a user/admin's design and setup? Or is the biggest risk not traffic jumping between isolated segments, but rather the devices/services suffering unauthorized access? From my personal perspective, VLANs, an L3LS underlay and VXLAN make security neither worse nor better; it is the same. But virtualisation in general brings challenges to classical security models.

A reminder that sometimes confuses security bodies: a VRF has nothing to do with Layer2-only segments (VLANs or VNIs).

Network virtualisation (VXLAN on an L3LS underlay) provides L2 connectivity anywhere, but also preserves the isolation of each L2 domain, both from other L2 domains and from the L3 underlay.

There is therefore no less security and isolation than classical VLANs and MPLS offer.
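As a minimal sketch on a VTEP (assumed loopback, VLAN, VNI and remote-VTEP values), the decoupling is visible in the configuration itself: the tenant sees only its VLAN, while the underlay sees only VXLAN-in-UDP packets between loopbacks:

   ! Hypothetical VTEP snippet: VLAN 10 rides VNI 10010 across the L3 underlay
   interface Loopback1
      ip address 2.2.2.2/32
   !
   interface Vxlan1
      vxlan source-interface Loopback1
      vxlan udp-port 4789
      vxlan vlan 10 vni 10010
      vxlan flood vtep 3.3.3.3 4.4.4.4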

Network virtualisation and an L3LS design exist to satisfy modern workloads that cannot be served by traditional network and service designs, which include central bottlenecks such as routers (DGW etc.), firewalls, load-balancers and so on. You cannot have a 100Tbps firewall sitting between all the flows of a Cloud, so services have to adapt to the new workloads. Some use farms of hardware service appliances, some use virtualised distributed models, some mix the models (private vs public services), and some choose the legacy model (a centralised single service pair) because their scale and traffic volumes are low.

 

’everything is a DMZ’

I am not sure what you mean here. The underlay network and the overlay networks are isolated and transparent to each other; they are decoupled.

If in an existing environment you could say 'every VLAN is a DMZ', then yes, maybe every VNI should be a 'DMZ'.

If on the other hand your network had some VLANs for a DMZ, some for Internet and some for internal traffic, then the exact same would apply to VNIs. If you don't trust VLANs, then you might not want to trust VNIs.

Public clouds normally do not allow tenants to communicate with each other, and have strict DMZs or dedicated virtual services. I do not think any network can justifiably be 100% DMZ, or 'DMZ' loses its meaning; it is just a term for dedicated, untrusted public-facing network segments.

 

Or are they interconnecting two L3LS 'instances' through border leaves with a secure gateway?

That might be fine for classical workloads, but maybe not for virtualisation, clouds, IP storage, Big Data or HPC clusters. What if your two 'instances' need to exchange 1Tbps of traffic? What 'secure gateway' would satisfy that? Or maybe the two 'instances' do not speak much to each other? What if they later need to? Maybe a different, more scalable security model is needed.

 

I'd be lying if I said I didn't find the idea of a single, universal, scalable approach (one L3LS 'instance' for all networks) attractive… perhaps with a 'campus leaf', for example?

Sorry, but I am not clear about what you found or did not find to be a suitable design, or what 'campus leaf' relates to.

The goal of a universal infrastructure is indeed to have a single L3LS network satisfying any workload, whether physical or virtual, low- or high-performance, providing consistent and scalable connectivity.

You simply cannot put security on every link, nor can you funnel everything through tiny central resources. The security services, like any other load-balancing, WAN or routing services, must be able to scale with the need. Farms of physical appliances can satisfy this, as can a distributed virtual/physical model.

For example, you may have a services leaf pair, or a dedicated security leaf pair, or simply distribute virtual/physical security services to every pod/rack. A farm of appliances could be 20x firewalls, all providing well-understood security to traffic from VLANs, with the network virtualisation provided by hardware VTEPs (Arista switches, for example) that can be driven by the orchestrator/controller of your choice (VMware NSX, OpenStack, etc.).
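As a rough sketch of the farm idea (entirely made-up next-hop addresses), a services leaf can spread flows across several firewalls with equal-cost static routes, so security capacity scales by adding appliances rather than by buying one giant box:

   ! Hypothetical services-leaf snippet: ECMP default route across a firewall farm
   ip route 0.0.0.0/0 198.51.100.11
   ip route 0.0.0.0/0 198.51.100.12
   ip route 0.0.0.0/0 198.51.100.13

Keeping flows symmetric through the stateful appliances then becomes its own design exercise.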

 

Please let me know if there was any misunderstanding in your question, or any disagreement in my responses.

 

Another model, if you want to go beyond traditional Layer2 segmentation, is Macro-Segmentation Service (MSS):

https://www.arista.com/assets/data/pdf/MSS_SolutionBrief.pdf

 

Here are two design guides that may be helpful, if you have not read them yet:

 

Arista Universal Cloud Network Design Guide

https://www.arista.com/custom_data/downloads/?f=/support/download/DesignGuides/Arista-Universal-Cloud-Network-Design.pdf

 

Layer 3 Leaf & Spine Design and Deployment Guide

https://www.arista.com/custom_data/downloads/?f=/support/download/DesignGuides/Arista_L3LS_Design_Deployment_Guide.pdf

 

Best regards,

Alexis

Posted by Nicholas Sheridan
Answered on March 30, 2016 6:54 pm

Alexis,

 

Sterling response – I'll read the docs first before responding further.

Posted by Nicholas Sheridan
Answered on April 2, 2016 7:34 am

Surely for this approach to be successful there must be a need to divide traffic across the spine into 'discrete VPNs' with separate VNIs within the VXLAN configuration, just as, for example, a carrier divides VPNs across an MPLS core? Just as tenants would be separated across a single leaf or MLAG pair by use of VLANs?
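To make the comparison concrete (entirely made-up VLAN and VNI numbers on my part), I am picturing each tenant holding a disjoint VLAN-to-VNI mapping on the VTEPs, much as each MPLS VPN holds its own labels:

   interface Vxlan1
      vxlan source-interface Loopback1
      ! tenant A
      vxlan vlan 100 vni 50100
      ! tenant B
      vxlan vlan 200 vni 50200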

Whilst I have no doubt that this is a working and proven method (despite any misunderstanding that I currently have), I have some reservations about the margin of error that an administrative mistake could produce. I do recognise that this is not necessarily limited to this kind of deployment (again, apologies for any misunderstanding).

If the comparison above is valid, then would it be fair for me to think that the leaf-spine interconnects should be 'off limits' for end-host visibility, just as within an MPLS VPN the carrier core address space is off limits, even though it is used as transport for multiple tenants?

I will provide a diagram to illustrate what I mean later this weekend, time permitting.

Many thanks.

You are correct about the isolation: the underlay Layer3 network (the Layer3 links between Leaf and Spine) is not aware of the L2 tenant traffic encapsulated inside Layer3 VXLAN, and the Layer2 tenants are not aware of crossing a Layer3 network.
The routing information cannot leak, as it is encapsulated; the two planes are dissociated: underlay / overlay.
As you say, the underlay ('leaf spine interconnects') is 'off limits' for end-host visibility.

At some point the tenant traffic might need to be routed (to the Internet or to other tenants), but this is the same model as today: routing can take place on firewalls, routers, etc. But certainly, no provider would ever allow tenant routing to mix with their core routing/addresses.

For deeper implementation details…
Any Cloud/hosting provider would not want the underlay routing to merge with the overlay/tenant routing. However, some enterprise networks might not care so much about that strong isolation, or might run routing at the VTEP level (VXLAN routing in addition to VLAN bridging), with VRF isolation. Tenant overlay vs underlay routing isolation can be extended with VXLAN L3VPN. Note that a typical Top-of-Rack switch VTEP does not have the VRF scaling of some large PE routers/firewalls, so different designs might be considered depending on whether the use case is an enterprise network, a private Cloud, or a public Cloud/hosting.
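A minimal sketch of that VTEP-level routing (hypothetical tenant name and addresses, using the EOS VRF syntax of that era): the tenant's SVI is routed inside its own VRF, while the underlay routing table stays untouched:

   ! Hypothetical snippet: tenant SVI routed in a dedicated VRF on the VTEP
   vrf definition TENANT-A
   !
   ip routing vrf TENANT-A
   !
   interface Vlan10
      vrf forwarding TENANT-A
      ip address 192.0.2.1/24
   !
   interface Vxlan1
      vxlan source-interface Loopback1
      vxlan vlan 10 vni 10010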

For more information about routing on the VTEP:
https://eos.arista.com/vxlan-routing-with-mlag/

(Alexis Dacquay at April 5, 2016 12:37 am)
Posted by Nicholas Sheridan
Answered on April 9, 2016 6:46 pm

Another excellent and comprehensive response, many thanks indeed!
