Network virtualization is a growing topic of interest, and for good reason: as networks scale to meet the challenges of cloud computing, they are running up against VLAN scaling limitations. Several network overlay technologies have been released that seek to address these scaling challenges and to enable workload mobility. One of these technologies is VXLAN. It has proponents who say that it can meet the requirements for network virtualization, and while it sounds good on the surface, it is worth taking a closer look. With VMworld happening this week in San Francisco I’m sure that network virtualization will be a hot topic, especially considering the VMware Nicira news, so I thought I’d comment on it and offer some thoughts and options.
The Origins of VXLAN
The VXLAN buzz started during a keynote at VMworld in August 2011, when VMware CTO Steve Herrod announced the Virtual eXtensible LAN protocol, which VMware positions as a technology that “enables multi-tenant networks at scale, as the first step towards logical, software-based networks that can be created on-demand, enabling enterprises to leverage capacity wherever it’s available.” Networking vendors Cisco and Arista are actively promoting VXLAN and have collaborated with VMware to develop and test the technology on their products. Cisco highlighted VXLAN again at their Cisco Live user conference in June 2012, and Arista is demoing it at VMworld; with the Nicira announcement, however, VMware seems to have taken the next step. VXLAN sounds interesting, so let’s see how good of an idea it is.
What VXLAN Is and What it Does
VXLAN is a framework for creating overlay networks that allow virtual machines (VMs) to communicate with each other, and to be moved, both within a data center and between data centers. VXLAN implements a Layer 2 network isolation technology that uses a 24-bit segment identifier to scale beyond the 4K limit of VLANs. It creates LAN segments using an overlay approach with MAC-in-IP encapsulation (a rough sketch of that encapsulation follows the list below). Vendors who promote VXLAN say that traditional data center/cloud networks fall short in two key areas, and that VXLAN will solve these issues:
1. Multi-Tenancy IaaS Scalability: Network isolation technologies such as VLAN and VRF may not provide enough network segments for large cloud deployments.
2. Virtual Machine Mobility: Layer 3 boundaries create silos that virtual machines (VMs) cannot cross, limiting the scalability of the VM resource pools that cloud deployments rely on.
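To make the MAC-in-IP encapsulation concrete, here is a minimal Python sketch, assuming the header layout described in the draft (an 8-bit flags field, a 24-bit segment identifier, and reserved bits). The outer UDP/IP/Ethernet headers that the tunnel endpoint would add, and the UDP port number it would use, are left out, and the sample frame is made up for illustration.

import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # Build the 8-byte VXLAN header: flags ('I' bit set), 24 reserved bits,
    # the 24-bit VXLAN Network Identifier (VNI), and 8 more reserved bits.
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_word = 0x08 << 24      # 'I' flag: a valid VNI is present
    vni_word = vni << 8          # VNI in the upper 24 bits of the second word
    header = struct.pack("!II", flags_word, vni_word)
    # The tunnel endpoint would then wrap header + frame in outer UDP/IP.
    return header + inner_frame

# Hypothetical inner Ethernet frame: dst MAC, src MAC, EtherType, payload
frame = bytes.fromhex("ffffffffffff" "005056000001" "0800") + b"payload"
print(vxlan_encapsulate(frame, vni=5000)[:8].hex())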
The primary goals behind this network architecture are to:
1. Increase traditional VLAN limits from 4,094 to a far larger number of virtual networks (the 24-bit segment identifier in the current draft allows roughly 16 million) in a multi-tenant (cloud IaaS) deployment.
2. Enable VM mobility across Layer 3 subnets as cloud deployments grow into multiple L3 subnets.
The proposed solution is a new protocol, VXLAN, which is captured in IETF draft version 00 (http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00). The proposal is still experimental and there is no confirmed date for ratification.
Some Issues with VXLAN
Multicast: A complicating aspect is that VXLAN expects multicast to be enabled on the physical network, and it relies on flooding to learn endpoint MAC addresses. This will impact the performance and scalability of existing physical network segments in the data center, and over the WAN, creating design, scalability and operational challenges.
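To illustrate the flood-and-learn behavior in question, here is a simplified, purely illustrative Python model of a tunnel endpoint’s forwarding logic (the class and field names are hypothetical): unknown destination MACs are flooded to the segment’s multicast group, and MAC-to-endpoint bindings are learned from traffic that arrives.

class FloodAndLearnVTEP:
    def __init__(self, vni_to_mcast_group):
        # Each segment maps to a multicast group, e.g. {5000: "239.1.1.1"}
        self.vni_to_mcast_group = vni_to_mcast_group
        self.fdb = {}  # (vni, inner MAC) -> remote endpoint IP, learned from traffic

    def outer_destination(self, vni, inner_dst_mac):
        # Known MAC: unicast to the learned endpoint. Unknown: flood via multicast.
        return self.fdb.get((vni, inner_dst_mac), self.vni_to_mcast_group[vni])

    def learn(self, vni, inner_src_mac, outer_src_ip):
        # Data-plane learning: bind the inner source MAC to the sending endpoint.
        self.fdb[(vni, inner_src_mac)] = outer_src_ip

Every endpoint serving a segment has to join that segment’s multicast group, which is why the physical network’s multicast scaling becomes a concern as the number of segments grows.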
Overlay Tunnels: Since VXLAN is an overlay tunnel, it adds a layer to the network that must be managed, which creates operational and scaling challenges. It introduces new endpoints, usually vSwitches, that take the L2 frames from the VMs, encapsulate them and attach an outer IP header. VXLAN also raises the question of which device should terminate the IP tunnel.
Lack of Control Plane: Control-plane functions such as segment ID allocation and multicast group management are not addressed by VXLAN. Solving these issues requires a control plane, but VXLAN does not define one, so the problem is pushed onto the network. The open question is whether that control plane should be an SDN controller or the router.
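For contrast, here is a hypothetical sketch of the controller-driven alternative: instead of learning bindings by flooding, an SDN controller (or a routing function) could push MAC-to-endpoint mappings down to each tunnel endpoint, removing the dependency on multicast. The names below are illustrative only and are not part of any defined VXLAN control plane.

class ControllerProgrammedVTEP:
    def __init__(self):
        self.fdb = {}  # (vni, inner MAC) -> remote endpoint IP

    def install_binding(self, vni, mac, vtep_ip):
        # Called by the controller, e.g. when a VM is created or migrated.
        self.fdb[(vni, mac)] = vtep_ip

    def outer_destination(self, vni, inner_dst_mac):
        # With a complete table from the controller there is nothing to flood.
        return self.fdb.get((vni, inner_dst_mac))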
Tunnels in Tunnels: Interoperability with the widely used VPLS/MPLS network segmentation schemes is not yet defined for VXLAN, and VXLAN tunnels cannot prevent themselves from being tunneled further. This creates complexity and reduces visibility into network traffic, which hinders application performance management and can undercut the benefits VXLAN is meant to deliver.
Security: VXLAN security is not yet addressed in the draft. Typically, security for overlay protocols is provided with IPsec tunnels, which adds overhead and makes the solution burdensome to implement and manage.
Scalability: The VXLAN overlay network originates from a VM on a server at the software level, and this could impact overall performance as administrators scale their VM deployments. In addition, many best practices and protocols developed for physical infrastructure need to be replicated for VXLAN in software, adding more performance and scalability challenges. Potentially this processing should be offloaded to the physical switch using a technology such as VEPA.
Physical Devices: A challenge with the endpoints being vSwitches is that only virtual ports can connect to VXLAN segments, so you can’t connect your physical firewall, server load balancer, or router directly. You have to use virtualized versions that run in VMs, so performance could be an issue and server load has to be managed. Deploying virtualized appliances has some advantages, but interoperability with the physical network still needs to be sorted out.
Some Considerations and Takeaways
The ability to stretch L2 adjacencies to accommodate the live migration of VMs is considered important for IaaS. Currently the viable construct for providing L2 isolation and separation is the VLAN, so a large number of VLANs is seen as desirable. Most network switching equipment only supports 4,096 VLANs; however, several of Juniper’s product lines scale beyond this limit. The MX Series routers, for example, support 256,000 VLANs. There are also other ways to overcome VLAN limitations, such as QinQ (VLAN stacking), vCDNI, and Provider Backbone Bridging.
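As a rough back-of-the-envelope comparison of the segment counts these schemes offer (treating the two reserved 802.1Q VLAN IDs as unusable):

# 802.1Q uses a 12-bit VLAN ID; IDs 0 and 4095 are reserved.
vlan_segments = 2 ** 12 - 2               # 4,094 usable VLANs
# QinQ stacks an outer and an inner 802.1Q tag.
qinq_segments = (2 ** 12 - 2) ** 2        # ~16.76 million tag combinations
# VXLAN's segment identifier is 24 bits.
vxlan_segments = 2 ** 24                  # 16,777,216 segments
print(vlan_segments, qinq_segments, vxlan_segments)

In other words, a double 802.1Q tag already reaches roughly the same order of magnitude as VXLAN’s 24-bit identifier, which is part of why QinQ and Provider Backbone Bridging are worth considering as alternatives.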
Preservation of private IP subnets while enabling VM mobility across Layer 3 boundaries is seen as desirable for large-scale cloud deployments. Juniper provides technologies that enable this kind of L2 stretch, such as Virtual Chassis on the Juniper EX systems, VPLS on the MX Series, and the QFabric System with integrated L2 and L3, which scales massively.
VXLAN does not have a control plane, and it uses multicast to flood the network for endpoint discovery, so it poses control-plane scalability and network manageability issues. This could be addressed by integrating VXLAN with an SDN controller, or by deploying another overlay tunneling protocol that is managed from an SDN controller instead. There are a number of such solutions on the market, and Juniper is evaluating some of them for interoperability with our equipment.
Since it runs in the hypervisor, VXLAN uses shared resources and performance cannot be guaranteed. If this type of technology is going to scale, a method is needed to ensure priority allocation of compute resources in the hypervisor environment, or the tunnel processing needs to be offloaded to the physical switch, perhaps using VEPA technology. Juniper partners with IBM, which provides a VEPA-enabled soft switch, and Juniper has included VEPA support in Junos 12.1.
At Juniper we continue to evaluate overlay network technologies as they evolve, and we are working to find answers that fit the needs of our customers as we develop technologies to support network virtualization. We are taking a close look at VXLAN and the value that it can deliver for our customers’ networks.
I know that I have not covered everything, but I hope this post has provided some useful information to help you evaluate your technology choices and the value that VXLAN can bring to your network.