Friday, April 25, 2014

A Network Fabric for New Applications in the Data Center

The Network Challenge with Deploying Applications
Thanks to virtualization, organizations can bring up new applications and services quickly. Unfortunately, many data center networks don’t let you fully capitalize on the business agility that virtualization and modern application architectures provide. Traditional network architectures are too slow and too cumbersome to configure. For true agility, enterprises need a high-performance, low-latency fabric network that can be managed like a single, logical switch.

Today’s mid-sized and large-scale data centers are built with high-performance blade and rack servers, which typically run multiple VMs, which in turn run increasingly modular, web- and cloud-based applications. These modern applications are driving increased traffic levels and different traffic patterns, which place specific demands on the data center network.

Critical business applications such as enterprise resource planning (ERP) and customer relationship management (CRM) are divided into multiple modules. As a result, relatively simple operations such as entering an order can trigger 100 or more individual transactions, each imposing its own latency. Some applications, including eCommerce applications, even behave dynamically, spinning up new instances or migrating workloads in response to traffic loads.
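To see why those transaction counts matter, here is a rough back-of-the-envelope sketch of how per-transaction network latency accumulates into user-visible delay. The transaction count, hop count and per-hop latencies are illustrative assumptions, not measurements from any particular network.

    # Rough illustration of how per-transaction network latency adds up
    # for a modular application. All figures are illustrative assumptions.

    TRANSACTIONS_PER_ORDER = 100   # one order entry fanning out into ~100 calls
    HOPS_PER_TRANSACTION = 4       # access, aggregation, core and back again
    HOP_LATENCY_US = 50            # microseconds per hop, traditional multi-tier design
    FABRIC_HOP_LATENCY_US = 5      # microseconds per hop, low-latency fabric

    def order_latency_ms(per_hop_us):
        """Network latency contributed to a single order, in milliseconds."""
        return TRANSACTIONS_PER_ORDER * HOPS_PER_TRANSACTION * per_hop_us / 1000.0

    print("Traditional multi-tier network: %.1f ms per order" % order_latency_ms(HOP_LATENCY_US))
    print("Low-latency fabric:             %.1f ms per order" % order_latency_ms(FABRIC_HOP_LATENCY_US))

Even with these modest figures the difference is an order of magnitude, which is why a flat, low-latency fabric pays off for heavily modular applications.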

Wednesday, April 16, 2014

Design Your Network For Virtualized, SDN Enabled Applications in the Cloud

Applications are Driving Change
Organizations are going through a transition in how they build network infrastructure because the number and types of applications in use are growing rapidly. The way that applications are accessed is changing, and so is the way that they are designed.

Organizations are hosting applications in their own data centers. They are extending their applications across data centers. They are accessing applications from cloud hosting providers. Service providers are hosting applications on behalf of the enterprise and providing application management services.

These new applications are complex, they are virtualized, and many of the newer ones are SDN enabled. Many applications are used in multiple geographies, and workloads follow the sun. All of this is changing how applications need to be delivered and how the network needs to be built to serve them.

Challenges with Deploying Applications
There are three major challenges that you need to address to deploy new applications effectively: virtualization, SDN and cloud.

Thursday, March 6, 2014

The Critical Role of the Business Edge Network

The Service Provider’s Challenge
Businesses are increasingly using service providers to host critical applications and data in order to better control the security and availability of the data and to mitigate the expenses associated with hosting and serving data locally to the entire business user base. They access these applications and data over the service provider’s business edge network. This creates a challenge for the service provider, because their customers measure the success of the network by its ability to handle critical data and provide a superior user experience. With more and more applications and data centralized, the importance of the network, and its role in the success of the business, becomes ever more critical.

Market forces have created an environment of unpredictability for the service provider. Mobile devices, new applications, and the flood of new content place a direct strain on the business edge network, forcing the business and the service provider to address the network in creative ways. Maintaining and evolving the end-user experience is critical to the success of the network and key to enabling the service provider to meet its business goals.

The business user base places considerable expectations on the network. Users expect their business e-mail and voice over IP calls to work flawlessly, while also expecting mobile video and applications to perform seamlessly. The impact of network performance is real and must be factored into the design of future business networks. This challenges the business edge provider to optimize the design and find ways to reduce cost in the face of the increasingly complex and growing demand for a high-quality end-user experience.

A major challenge for the service provider’s business edge network services is how to deal with the future. The growth and purpose of the network of the future will largely be driven by mobility, cloud, and video. The world is becoming increasingly mobile and this trend is apparent in the enterprise. By the year 2020, it is projected that 50 billion devices will be connected to the Internet—and many of these devices are in the hands of the business user. The service provider must consider solutions today that enable future scaling to meet the demands of mobile devices on the business network.

Does E-VPN Spell the End for OTV?

If you are considering how best to do Layer 2 stretch for virtual machine mobility, then you might be considering Overlay Transport Virtualization (OTV). OTV was designed by Cisco to offer L2 stretch with what it described as an easy-to-deploy protocol. It was only available on the Nexus switching product line, which didn’t support MPLS/VPLS. Until recently MPLS/VPLS was Juniper’s recommended technology for network segmentation and Layer 2 stretch, which Cisco also offers on the ASR routers. We’ve recently announced E-VPN, which is MPLS/VPLS based and brings all of the benefits of VPLS and then some. Cisco has announced E-VPN on the ASR router as well. Now that E-VPN is available, maybe it’s time to consider your best option. Let’s take a look at why OTV isn’t the best choice for VM mobility and why E-VPN is.

Why OTV was Invented
OTV has been in Cisco’s news announcements, highlighted at Cisco Live and featured in several Cisco blogs. It’s something I’ve been meaning to cover along with my blogs on LISP and VXLAN, as these all get discussed together as parts of a complete solution for live VM migration. Cisco first announced OTV on Feb. 8, 2010. Overlay Transport Virtualization is a Cisco proprietary protocol which provides Layer 2 extension, over IP, to interconnect remote data centers. Cisco claims that OTV is a simpler technology than MPLS/VPLS, which is a standards-based and proven technology for network segmentation and Layer 2 extension. They said that OTV can be provisioned within minutes, using only four commands, and that it provides increased scalability (however, without independent studies we don’t know if this is true). It was offered only on the Nexus 7000, which didn’t offer MPLS/VPLS technology. With OTV, Cisco pushed yet another proprietary protocol that is not as well proven as standards-based MPLS/VPLS or the newer E-VPN. Cisco supports VPLS on the ASR router, so it is curious that they did OTV on the Nexus, which doesn’t sit at the right place in the network to do L2 stretch. The Cisco ASR, like the Juniper MX, is meant to do L2 stretch at the data center edge, not in the data center core where the Nexus switches sit.

Saturday, December 21, 2013

Enhancing VM Mobility with VxLAN, OVSDB and EVPN

Organizations are increasingly using virtual machine mobility to optimize server resources, ensure application performance and aid in disaster avoidance. Typically VM live migration has relied on increasing the scale of the L2 broadcast domain to ensure that the VMs can be reached after migration using their current addressing. This has resulted in the increasing use of VLANs and the need for L2 extension over the WAN. As a result, organizations are looking for ways to overcome the limitations of VLAN scale and for methods to extend the L2 domain over the WAN that ensure the best performance. VxLAN has emerged as an alternative technology to VLANs, and EVPN has emerged as a better way to transport VMs over the WAN. Together these technologies can enable VM live migration over the WAN, or long-distance vMotion in VMware parlance, but they all need to work together effectively, and this is where OVSDB, VxLAN routing and a new technology from Juniper called ORE come into play.

VxLAN Increases VLAN Scale
Organizations are increasingly looking to VxLAN as a solution. The primary goals behind this network architecture are to scale beyond the traditional VLAN limit of 4,094 and to enable VM mobility across Layer 3 subnets. VxLAN is a tunneling technology used to create an overlay network so that virtual machines can communicate with each other, and to enable the migration of VMs both within a data center and between data centers. VxLAN enables multi-tenant networks at scale, as a component of logical, software-based networks that can be created on demand. VxLAN enables enterprises to leverage capacity wherever it’s available by supporting VM live migration. VxLAN implements Layer 2 network isolation using MAC-in-IP encapsulation with a 24-bit segment identifier that scales well beyond the 4K limit of VLANs.
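The added scale comes straight from the width of the identifier. The sketch below compares the 12-bit 802.1Q VLAN ID space with the 24-bit VxLAN Network Identifier (VNI) and packs a minimal 8-byte VxLAN header as defined in RFC 7348; it is a conceptual illustration, not a production encapsulation stack.

    import struct

    VLAN_ID_BITS = 12    # 802.1Q VLAN ID field
    VXLAN_VNI_BITS = 24  # VxLAN Network Identifier (VNI)

    print("Usable VLANs: %d" % (2**VLAN_ID_BITS - 2))   # 4094 (0 and 4095 are reserved)
    print("VxLAN VNIs:   %d" % (2**VXLAN_VNI_BITS))     # 16,777,216 segments

    def vxlan_header(vni):
        """Build the 8-byte VxLAN header: flags (I bit set), reserved bits and a 24-bit VNI."""
        flags = 0x08 << 24        # the I flag indicates a valid VNI
        return struct.pack("!II", flags, vni << 8)

    print("VxLAN header for VNI 5000:", vxlan_header(5000).hex())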

Saturday, December 7, 2013

Optimizing EVPN for Virtual Machine Mobility over the WAN

Organizations need to ensure that their applications are available and performing well. Server virtualization helps by enabling virtual machine mobility. If a server is overworked or will be unavailable, vMotion can be used to migrate live workloads to another server in the current data center or in another data center. This requires that the addressing, including the MAC address, IP address and VLAN ID, remain the same so that sessions are not dropped when the VM moves. This is done by extending the L2 domain to the new location, known as Layer 2 stretch. Within a subnet this is easy to do. Across subnets in the data center it becomes more difficult. Doing live migration over the WAN introduces considerable challenges. Juniper has introduced a number of technologies to make virtual machine live migration possible.

The challenge with VM mobility is how to do the Layer 2 stretch in a way that ensures that the VM can be reached after it is moved. There are a number of issues that need to be dealt with. The MAC and IP address are no longer pinned to a site or to an interface, as they have moved with the VM. You need fast convergence of network paths as the VM moves so that traffic will reach it quickly. You need ingress and egress traffic convergence and optimization to avoid having traffic go through the former default gateway after the VM has moved. You need the network to learn about the live migration event, with control over how that information is distributed, so that the network isn’t overwhelmed by signaling traffic. You need proper L2 and L3 interaction so that everything happens in a timely manner to ensure the best experience for the users of the applications that are affected by the VM move. VPLS has been the traditional method of doing this, and now Juniper is supporting EVPN to provide enhancements to the solution.
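Conceptually, EVPN addresses the relocation problem by advertising MAC reachability in BGP with a mobility sequence number, so that every edge device converges on the latest location of a moved VM and ignores stale state. The sketch below models that idea with a simple in-memory table; it illustrates the sequence-number comparison described in RFC 7432 rather than implementing the protocol itself.

    # Conceptual model of EVPN MAC mobility: each MAC advertisement carries a
    # sequence number, and a higher sequence number supersedes the old location.

    mac_table = {}  # MAC address -> (advertising PE, mobility sequence number)

    def receive_mac_advertisement(mac, next_hop, seq):
        """Install or update a MAC route, keeping only the most recent location."""
        current = mac_table.get(mac)
        if current is None or seq > current[1]:
            mac_table[mac] = (next_hop, seq)
            print("MAC %s now reached via %s (seq %d)" % (mac, next_hop, seq))
        else:
            print("Ignoring stale advertisement for %s (seq %d <= %d)" % (mac, seq, current[1]))

    # The VM initially lives behind the data center 1 edge router.
    receive_mac_advertisement("00:50:56:aa:bb:cc", "PE-DC1", 0)

    # After a live migration, the data center 2 edge re-advertises the MAC with a
    # higher sequence number and traffic converges on the new site.
    receive_mac_advertisement("00:50:56:aa:bb:cc", "PE-DC2", 1)

    # A late, stale update from the old site is ignored.
    receive_mac_advertisement("00:50:56:aa:bb:cc", "PE-DC1", 0)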

Tuesday, October 29, 2013

Connecting Islands of Resources in an SDN Data Center

Application Agility is Critical
Organizations are rolling out new applications that they use to drive the business. These applications are virtualized. They are increasingly distributed and dynamic, and they can span locations. They connect employees, customers and the supply chain. They make employees more productive, help customers engage with the business and facilitate better inventory management. They also provide timely business intelligence. This means revenue to the organization. Time to deploy is critical. Organizations need to be agile when it comes to deploying new applications.

The problem is that the network is an obstacle. Due to the complexity of configuring the network, speed of deployment is an issue. There are so many things that need to be configured: route mapping, port mapping, VLAN mapping, QoS, NAT, ACLs, and the list goes on. The networking side hasn’t changed since it was invented decades ago. It takes weeks to configure the network connections that are needed when you deploy an application.
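That list of settings is exactly the kind of repetitive, per-device work that lends itself to automation. As a rough sketch, the snippet below expresses an application’s network requirements once, as a declarative document that an SDN controller or orchestration tool could push to every device in the path; the schema and field names are hypothetical placeholders, not any real controller’s API.

    import json

    # Everything the application needs from the network, captured once as data
    # rather than as per-device CLI: VLANs, QoS marking, NAT and ACLs. The schema
    # below is a hypothetical illustration only.
    app_network_profile = {
        "application": "order-entry",
        "vlans": [110, 111],
        "qos": {"class": "business-critical", "dscp": 26},
        "nat": {"outside_pool": "203.0.113.0/28"},
        "acls": [
            {"action": "permit", "protocol": "tcp",
             "src": "10.10.0.0/16", "dst": "10.20.0.0/24", "port": 443},
        ],
    }

    # A controller would consume this document and program every switch, router
    # and firewall in the path in one operation, instead of an engineer touching
    # each device by hand.
    print(json.dumps(app_network_profile, indent=2))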

Organizations have been using server virtualization for years to overcome the limitations of physical servers. When you had to deploy a physical server, it could take weeks from the time you first knew you needed it until it was up and running. Now provisioning virtual servers takes only minutes. With virtualized servers we gained agility, resilience and improved physical server utilization. We need the same type of benefits for the network. You can’t let the network get in the way when you need to move fast and gain the advantages of new applications. Organizations are looking for ways to provision the network just as quickly.