If you are considering how best to do Layer 2 stretch for virtual machine mobility, you might be considering Overlay Transport Virtualization (OTV). OTV was designed by Cisco to offer L2 stretch with what they billed as an easy-to-deploy protocol, and it is available only on the Nexus switching product line, which doesn't support MPLS/VPLS. Until recently MPLS/VPLS was Juniper's recommended technology for network segmentation and Layer 2 stretch, and Cisco also offers it on the ASR routers. We've recently announced E-VPN, which is MPLS based and brings all of the benefits of VPLS and then some. Cisco has announced E-VPN on the ASR router as well. Now that E-VPN is available, it's time to reconsider your best option. Let's take a look at why OTV isn't the best choice for VM mobility and why E-VPN is.
Why OTV was Invented
OTV has featured in Cisco's news announcements, been highlighted at Cisco Live and appeared in several Cisco blogs. It's something I've been meaning to cover along with my blogs on LISP and VXLAN, as these all get discussed together as parts of a complete solution for live VM migration. Cisco first announced OTV on Feb. 8, 2010. Overlay Transport Virtualization is a Cisco proprietary protocol that provides Layer 2 extension, over IP, to interconnect remote data centers. Cisco claims that OTV is a simpler technology than MPLS/VPLS, which is a standards-based and proven technology for network segmentation and Layer 2 extension. They said that OTV can be provisioned within minutes, using only four commands, and that it provides increased scalability (though without independent studies we can't verify that claim). OTV was offered only on the Nexus 7000, which doesn't offer MPLS/VPLS. With OTV, Cisco pushed yet another proprietary protocol that is not as well proven as standards-based MPLS/VPLS or the newer E-VPN. Cisco supports VPLS on the ASR routers, so it is curious that they put OTV on the Nexus, which doesn't sit at the right place in the network to do L2 stretch. The Cisco ASR, like the Juniper MX, is meant to do L2 stretch at the data center edge, not in the data center core where the Nexus switches sit.
Core vs. Edge Layer 2 Stretch
The Nexus switches sit in the data center core, which is the wrong place in the network for L2 stretch. The optimum place is the data center edge, where the Juniper MX routers sit.
It seems that OTV was a quick-to-market implementation that gave Cisco a way to hook the Nexus into the data center interconnect architecture, using the lure of easy virtual machine migration to promote the Nexus products. Prominent network engineer and blogger Ivan P. discovered that OTV is nothing more than the very familiar EoMPLSoGREoIP. See, VxLAN and OTV, I've been suckered.
OTV Top Concerns:
1. Proprietary solution: It requires all WAN devices in the OTV domain to be Cisco devices. Cisco has proposed OTV to the IETF, but it is unclear whether OTV will become an industry standard.
2. Does not support traffic engineering: MPLS-based solutions support MPLS traffic engineering, which allows optimal use of WAN bandwidth.
3. Does not offer a high level of resiliency: In case of a WAN link failure, OTV over IP takes seconds to converge; with MPLS FRR, convergence takes less than 50ms (see the sketch after this list).
4. Does not offer the same level of reliability as E-VPN: OTV traffic is carried over the untrusted WAN using IP, which is not as reliable as MPLS/E-VPN.
5. OTV on the Nexus switches sits at the wrong place in the network for L2 stretch: the data center core. The optimum place is the data center edge, where the Juniper MX routers sit.
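To put rough numbers on the convergence point in item 3, here is a back-of-the-envelope sketch. The 10 Gbps link speed and the three-second IP reconvergence time are assumptions chosen purely for illustration; only the sub-50ms MPLS FRR figure comes from the list above.

```python
# Back-of-the-envelope only: the link speed and the IP reconvergence time
# below are assumed values for illustration, not measurements.

LINK_GBPS = 10             # assumed DCI link speed
IP_CONVERGENCE_S = 3.0     # assumed "seconds to converge" for OTV over IP
FRR_CONVERGENCE_S = 0.050  # MPLS FRR target of under 50 ms

def traffic_lost_gbits(link_gbps: float, outage_s: float) -> float:
    """Traffic black-holed while the path reconverges, in gigabits."""
    return link_gbps * outage_s

print(f"IP reconvergence: {traffic_lost_gbits(LINK_GBPS, IP_CONVERGENCE_S):.1f} Gb lost")
print(f"MPLS FRR:         {traffic_lost_gbits(LINK_GBPS, FRR_CONVERGENCE_S):.1f} Gb lost")
```

Under these assumed numbers, a multi-second reconvergence black-holes tens of gigabits of traffic that a sub-50ms FRR repair would largely avoid.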
Issues with How OTV Works
OTV is an IP-based protocol that does not natively support traffic engineering the way MPLS-based solutions do. OTV will also likely run into serious scalability problems as the number of sites grows, because transporting unicast traffic over a unicast transport requires adjacency servers. The adjacency servers are only facilitators for establishing adjacencies between the edge devices, which means the edge devices must still establish n*(n-1)/2 adjacencies among themselves. MPLS/VPLS solutions avoid this scaling problem with BGP-based signaling and route reflectors (RRs): the RR, unlike an adjacency server, is not a mere facilitator but maintains the peerings itself, so each edge device peers only with the RR and the n*(n-1)/2 problem disappears.
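To make the scaling argument concrete, the short sketch below compares the n*(n-1)/2 full-mesh adjacency count against the number of BGP sessions needed when each edge device peers only with a route reflector. It is illustrative arithmetic only, not a model of either protocol.

```python
def full_mesh_adjacencies(n: int) -> int:
    """Full mesh: every edge device maintains an adjacency with every other one."""
    return n * (n - 1) // 2

def route_reflector_sessions(n: int, reflectors: int = 1) -> int:
    """BGP route reflection: each edge device peers only with the reflector(s)."""
    return n * reflectors

for sites in (4, 8, 16, 32, 64):
    print(f"{sites:3d} sites: full mesh = {full_mesh_adjacencies(sites):4d} adjacencies, "
          f"single RR = {route_reflector_sessions(sites):3d} sessions")
```

At 64 sites the full mesh is already over 2,000 adjacencies, while a single route reflector keeps the session count at 64.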
Cisco instead recommends transporting unicast traffic over a multicast transport, a design that introduces significant complexity. OTV uses a proprietary form of P2MP multicast to keep multicast packets from reaching unintended destinations. OTV must also be configured to flood unknown unicast addresses in scenarios where MAC addresses are not advertised, such as Microsoft clusters. Finally, OTV floods the initial ARP packets and snoops the responses to build its ARP proxy cache, so it is not free of ARP broadcasts either.
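The ARP-proxy behavior described above boils down to a snoop-and-cache loop. The sketch below is a simplified illustration of that idea and is not OTV code: the first ARP request for a host is flooded across the overlay, replies are snooped into a cache, and later requests are answered locally.

```python
# Simplified model of an ARP proxy cache at an L2-extension edge device.
# The class and behavior are illustrative only, not an OTV implementation.

class ArpProxyCache:
    def __init__(self):
        self.cache = {}  # IP address -> MAC address learned by snooping replies

    def handle_arp_request(self, target_ip: str) -> str:
        """Answer from the cache if possible; otherwise flood across the overlay."""
        if target_ip in self.cache:
            return f"reply locally: {target_ip} is-at {self.cache[target_ip]}"
        return f"flood ARP request for {target_ip} to remote sites"

    def snoop_arp_reply(self, sender_ip: str, sender_mac: str) -> None:
        """Populate the cache from ARP replies seen crossing the overlay."""
        self.cache[sender_ip] = sender_mac

proxy = ArpProxyCache()
print(proxy.handle_arp_request("10.0.0.5"))   # first request is flooded
proxy.snoop_arp_reply("10.0.0.5", "00:11:22:33:44:55")
print(proxy.handle_arp_request("10.0.0.5"))   # later requests answered locally
```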
All the benefits OTV claims to provide, such as a single protocol and auto-discovery, come standard with standards-based MPLS, which adds further benefits such as resiliency and efficient bandwidth management. Moreover, despite the "four commands" claim, OTV requires significant configuration at each site: designated routers such as Authoritative Edge Devices, plus join interfaces, adjacency servers, a control group, data groups, extend VLANs, a dedicated control VLAN, and so on.
The Caveats with FabricPath
OTV and FabricPath don't work together cleanly. These notes are from Cisco's documentation:
“Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets. Because of this restriction, terminating FabricPath and reverting to Classical Ethernet where the OTV VDC resides is necessary.”
“In addition, when running FabricPath in your network, Cisco recommends that you use the spanning-tree domain <id> command on all devices that are participating in these VLANs to speed up convergence times.”
Source: Overlay Transport Virtualization Best Practices Guide.
Why E-VPN is the Right Choice for L2 Extension
Customers who want to do live VM migration from data center to data center realize the value of a good L2 stretch implementation. With OTV, Cisco sought to address this need, but not with what we think is the optimal technology. Juniper has supported VM migration with VPLS in the past and now supports it with E-VPN. Juniper has always backed standards-based MPLS technology for the enterprise data center.
MPLS/E-VPN is an industry-standard encapsulation. It is a multi-vendor solution rather than a single-vendor, single-platform one, and it is an open solution that allows multiple vendors to compete. It is connection oriented instead of connectionless, and it provides fault detection, notification and management for predictable service quality, including latency. It provides resilience with active/active multi-homing, fast convergence, improved administrative and policy control, and an easier migration and interworking path to enable new services.
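Underneath those benefits, E-VPN's key mechanism is that MAC reachability is advertised in the BGP control plane and forwarded over the MPLS data plane, rather than being learned by data-plane flooding. The sketch below is a simplified, hypothetical model of that idea; the field names are illustrative and do not reflect the actual BGP route encoding.

```python
# Simplified model of control-plane MAC learning, in the spirit of E-VPN's
# MAC advertisement routes. Field names and values are illustrative only.

from dataclasses import dataclass

@dataclass
class MacAdvertisement:
    mac: str        # MAC address learned locally by the advertising PE
    next_hop: str   # advertising PE's loopback (the MPLS tunnel endpoint)
    label: int      # MPLS service label for this EVPN instance

class EvpnMacTable:
    """Per-instance MAC table populated from BGP rather than data-plane flooding."""

    def __init__(self):
        self.routes = {}

    def install(self, adv: MacAdvertisement) -> None:
        self.routes[adv.mac] = adv

    def forward(self, dst_mac: str) -> str:
        adv = self.routes.get(dst_mac)
        if adv is None:
            return f"{dst_mac} unknown: handle per the instance's flooding rules"
        return f"send toward PE {adv.next_hop} with label {adv.label}"

table = EvpnMacTable()
table.install(MacAdvertisement("00:11:22:33:44:55", "192.0.2.1", 300001))
print(table.forward("00:11:22:33:44:55"))
```

Because remote PEs learn a host's MAC from a BGP advertisement rather than from flooded traffic, a migrated VM becomes reachable as soon as the new advertisement propagates, which is exactly what live VM mobility needs.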
What Cisco Says About EVPN
“To support service providers, Cisco is working with other network vendors to standardize a resilient and massively scalable solution using Ethernet VPN, which will extend Layer 2 traffic over MPLS.
Cisco introduced MAC routing to the L2VPN space in 2009. E-VPN takes the VPN principles introduced and matured by OTV and ports them into a Border Gateway Protocol (BGP) based standard proposal that leverages the MPLS data plane that SPs are used to operate upon. One could think of E-VPN as OTV over a native MPLS transport.
In addition to its strength and high scalability, E-VPN improves redundancy and multicast optimization in MPLS with all-active attachment circuits for multi-homing deployment, which are usually missing in traditional VPLS-based LAN extension solutions and were introduced with MAC routing by OTV.”
Source: Distributed Virtual Data Center for Enterprise and Service Provider Cloud.
“We have previewed E-VPN and PBB-EVPN with service providers from around the world and have received overwhelmingly positive responses. Their feedback points to flexible multi-homing capabilities as most attractive feature, while others also benefit from the scale provided by PBB-EVPN. Currently, we are planning early field trials with several of them as we speak.”
Source: E-VPN and PBB-EVPN Take Data Center Interconnect to the Next Level.
“The E-VPN solution is being designed to address all of the above requirements and more by performing MAC distribution and learning, over the MPLS network, in the control plane using multiprotocol BGP. While E-VPN introduces a paradigm shift from existing VPLS and VPWS solutions, it does bring L2VPN technologies closer to L3VPN in terms of the general operational model and underlying protocol machinery. As such, it can be characterized as an evolution of MPLS-based L2VPN solutions to enable a richer set of capabilities and introduce a new set of services.”
Source: Evolving Provider L2VPN Services with E-VPN
Juniper’s E-VPN Solution
Juniper has used openness as a guiding principle and thus builds standards-based protocols into our platforms (MPLS/VPLS, and now E-VPN). This gives customers the most flexibility in their choice of equipment and architectures and ensures that they are not stuck with the choices they make as their needs change in the future. We feel that for L2 data center interconnect, standards-based E-VPN is a better choice than the proprietary OTV with its limited set of supported equipment.
For more information on Juniper's E-VPN solution, see my previous blogs:
Optimizing EVPN for Virtual Machine Mobility over the WAN
Enhancing VM Mobility with VxLAN, OVSDB and EVPN
Thursday, March 6, 2014