Wednesday, September 10, 2014

Making the Transition to an Open SDN Architecture in the Enterprise Private Cloud

Enterprise IT is moving away from acting as siloed service organizations to aligning with the business’ goals to help their organizations enhance their business agility, value and customer experience. Many of these organizations are moving to a cloud computing model to achieve these goals, but they haven’t determined the best strategy for making this transition.
Transitioning to the cloud will help these organizations accelerate innovation and business agility by emphasizing several important factors: adaptability; seamless scale, both upward and downward; and network intelligence that provides proactive resource optimization.

To make this transition, organizations need the right infrastructure, and they must be prepared to answer questions regarding SDN, orchestration, security, network protocols and many other issues.

These are addressed by two application-aware architectures for the enterprise private cloud:

1.    Juniper Networks’ open SDN architecture

2.    A proprietary programmable architecture that requires investment in a centralized controller and application-aware switch combination.


Wednesday, August 13, 2014

Announcing Juniper's Most Highly-Available Campus Distribution Switch

Recently, Juniper Networks announced the EX4600 switch, built for the distribution layer in campus networks. It is designed for the IT networking teams of enterprises and SMBs that are rolling out new applications and services and require higher performance to support them. This compact switch delivers innovations developed in the data center, including ISSU for hitless upgrades, Insight Technology for performance monitoring, and 40GbE ports for high-speed uplinks. It works as a standalone switch as well as in Virtual Chassis and MC-LAG architectures, providing simplified management and network resiliency.
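As a sketch of the simplified management mentioned above, two EX4600s can be joined into a preprovisioned Virtual Chassis so that they are managed as a single logical switch. The serial numbers, member IDs, and port choices below are placeholders, not values from the announcement:

```
# On each switch, convert a 40GbE QSFP+ port into a Virtual Chassis port (example port)
request virtual-chassis vc-port set pic-slot 0 port 24

# Preprovision the Virtual Chassis with hypothetical serial numbers
set virtual-chassis preprovisioned
set virtual-chassis member 0 serial-number XXXX0001 role routing-engine
set virtual-chassis member 1 serial-number XXXX0002 role routing-engine
```

Once formed, both members share one management address and one configuration, which is the operational simplification the post refers to.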


IT organizations are concerned about having sufficient switching performance to support growing access to cloud applications from the campus. They need to support diverse devices for the BYOD trend as well as the demands of services like VDI, the growing use of wireless access, and unified communications. As they scale their operations to accommodate increasing volumes of traffic, IT organizations must also manage secure access, connectivity, and bandwidth usage. Yet despite ever-changing network usage patterns and the demands of supporting more devices and services, IT teams are still expected to make things work with limited staffing and budgets. What these teams require is an easy-to-manage solution that delivers high-performance connectivity at an affordable price.

“More than three-quarters of the IT organizations we surveyed have either already standardized their campus networks on 10 GbE or are actively planning a move from 1 GbE to 10 GbE. Drivers include bandwidth-intensive applications, current or anticipated traffic levels and falling costs of the technology,” said Bob Laliberte, Senior Analyst, Enterprise Strategy Group.

Friday, April 25, 2014

A Network Fabric for New Applications in the Data Center

The Network Challenge with Deploying Applications
Thanks to virtualization organizations can bring up new applications and services quickly. Unfortunately, many data center networks don’t let you fully capitalize on the business agility that virtualization and modern application architectures provide. Traditional network architectures are too slow and too cumbersome to configure. For true agility, enterprises need a high-performance, low-latency fabric network that can be managed like a single, logical switch.

Today’s mid-sized and large-scale data centers are built with high-performance blade and rack servers, which typically run multiple VMs, which in turn run increasingly modular, web- and cloud-based applications. These modern applications are driving increased traffic levels and different traffic patterns, which place specific demands on the data center network.

Critical business applications such as enterprise resource planning (ERP) and customer relationship management (CRM) are divided into multiple modules. As a result, relatively simple operations such as entering an order can trigger 100 or more individual transactions, each imposing its own latency. Some applications, including eCommerce applications, even behave dynamically, spinning up new instances or migrating workloads in response to traffic loads.

Their distributed nature means modern applications are spread across racks of servers, each served by multiple switches. The applications generate a tremendous amount of server-to-server, or east-west, traffic as the various modules communicate with one another. Multi-tier network architectures aren’t well matched to modern applications. They force this east-west traffic to first travel north and south, up and down the network tree, before arriving at its ultimate destination, adding significant latency that can cause application performance to degrade under load.
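The latency effect described above can be made concrete with a toy calculation. The hop counts reflect the two topologies discussed (a three-tier tree versus a flat fabric), but the per-hop latency figure is an illustrative assumption, not a measured number:

```python
# Toy model: network latency added when one operation fans out into many
# sequential east-west transactions, comparing a three-tier tree with a fabric.

PER_HOP_US = 5  # assumed per-switch latency in microseconds (illustrative)

# Three-tier tree: access -> aggregation -> core -> aggregation -> access
TREE_HOPS = 5
# Flat fabric (leaf-spine): leaf -> spine -> leaf
FABRIC_HOPS = 3

def transaction_latency(hops, transactions, per_hop_us=PER_HOP_US):
    """Total switch-induced latency for a chain of sequential transactions."""
    return hops * per_hop_us * transactions

# An order entry that triggers 100 sequential inter-module transactions:
tree_us = transaction_latency(TREE_HOPS, 100)      # 2500 us in the tree
fabric_us = transaction_latency(FABRIC_HOPS, 100)  # 1500 us in the fabric
print(f"three-tier: {tree_us} us, fabric: {fabric_us} us")
```

Even with generous assumptions, the tree adds two extra switch traversals to every east-west transaction, and that overhead multiplies with the transaction count.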

Wednesday, April 16, 2014

Design Your Network For Virtualized, SDN Enabled Applications in the Cloud

Applications are Driving Change
Organizations are going through a transition in how they build network infrastructure. This is because the number and type of applications in use are growing rapidly. The way that applications are accessed is changing, and so is the way that they are designed.

Organizations are hosting applications in their own data centers. They are extending their applications across data centers. They are accessing applications from cloud hosting providers. Service providers are hosting applications on behalf of the enterprise and providing application management services.

These new applications are complex and virtualized, and many of the newer ones are SDN enabled. Many applications are used in multiple geographies, with workloads that follow the sun. All of this is changing how applications need to be delivered and how the network needs to be built to serve them.

Challenges with Deploying Applications
There are three major challenges that you need to address to deploy new applications effectively: virtualization, SDN, and cloud.

Modern applications are virtualized and dynamic. They communicate internally to a great degree, and they move workloads around the network. This means that you need a network fabric with mesh connectivity for them to perform properly.

Many new applications are SDN enabled, with their component parts connected via a virtualized overlay network. However, these applications still need to communicate with existing infrastructure, so you need a way to incorporate SDN-enabled applications into the rest of the environment.

Thursday, March 6, 2014

The Critical Role of the Business Edge Network

The Service Provider’s Challenge
Businesses are increasingly using service providers to host critical applications and data in order to better control the security and availability of the data and to mitigate the expenses associated with hosting and serving data locally to the entire business user base. They access these applications and data over the service provider’s business edge network. This creates a challenge for the service provider, because their customers measure the success of the network by its ability to handle critical data and provide a superior user experience. With more and more applications and data centralized, the importance of the network, and its role in the success of the business, becomes ever more critical.

Market forces have created an environment of unpredictability for the service provider. Mobile devices, new applications, and the flood of new content place a direct strain on the business edge network, forcing the business and the service provider to address the network in creative ways. Maintaining and evolving the end-user experience is critical to the success of the network and key to enabling the service provider to meet its business goals.

The business user base places considerable expectations on the network. Users expect their business e-mail and voice over IP calls to work flawlessly, while also expecting mobile video and applications to perform seamlessly. The impact of network performance is real and must be factored into the design of future business networks. This challenges the business edge provider to optimize the design and find ways to reduce cost in the face of increasingly complex and growing demand for a high-quality end-user experience.

Does E-VPN Spell the End for OTV?

If you are considering how best to do Layer 2 stretch for virtual machine mobility, then you might be looking at Overlay Transport Virtualization (OTV). OTV was designed by Cisco to offer L2 stretch with what it described as an easy-to-deploy protocol. It was available only on the Nexus switching product line, which didn’t support VPLS/MPLS. Until recently, MPLS/VPLS was Juniper’s recommended technology for network segmentation and Layer 2 stretch, and Cisco also offers it on the ASR routers. We’ve recently announced E-VPN, which is MPLS/VPLS based and brings all of the benefits of VPLS and then some; Cisco has announced E-VPN on the ASR router as well. Now that E-VPN is available, it may be time to reconsider your options. Let’s take a look at why OTV isn’t the best choice for VM mobility and why E-VPN is.
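To give a feel for what MPLS-based E-VPN provisioning looks like, here is a minimal routing-instance sketch in the style of Junos on an MX router. All names, IDs, interfaces, and community values are hypothetical placeholders:

```
set routing-instances EVPN-100 instance-type evpn
set routing-instances EVPN-100 vlan-id 100
set routing-instances EVPN-100 interface ge-0/0/1.100
set routing-instances EVPN-100 route-distinguisher 192.0.2.1:100
set routing-instances EVPN-100 vrf-target target:65000:100
set routing-instances EVPN-100 protocols evpn
```

The key point is that E-VPN rides on the same standards-based MPLS infrastructure already used for other VPN services, rather than requiring a separate overlay protocol.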

Why OTV was Invented
OTV has featured in Cisco’s news announcements, been highlighted at Cisco Live, and appeared in several Cisco blogs. It’s something I’ve been meaning to cover along with my blogs on LISP and VXLAN, since these all get discussed together as parts of a complete solution for live VM migration. Cisco first announced OTV on Feb. 8, 2010. Overlay Transport Virtualization is a Cisco proprietary protocol that provides Layer 2 extension, over IP, to interconnect remote data centers. Cisco claims that OTV is simpler than MPLS/VPLS, a standards-based and proven technology for network segmentation and Layer 2 extension. It says that OTV can be provisioned within minutes, using only four commands, and that it provides increased scalability (though without independent studies, we don’t know whether this is true). OTV was offered only on the Nexus 7000, which didn’t offer MPLS/VPLS technology. With OTV, Cisco pushed yet another proprietary protocol that is not as well proven as standards-based MPLS/VPLS or the newer E-VPN. Cisco supports VPLS on the ASR router, so it is curious that it put OTV on the Nexus, which doesn’t sit at the right place in the network for L2 stretch. The Cisco ASR, like the Juniper MX, is meant to do L2 stretch at the data center edge, not in the data center core where the Nexus switches sit.
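Cisco’s “four commands” claim refers to the overlay interface configuration on the Nexus 7000. A representative NX-OS-style snippet looks roughly like the following; the interface name, VLAN ranges, and multicast groups are illustrative assumptions, not values from any specific deployment:

```
feature otv
interface Overlay1
  otv join-interface Ethernet1/1    ! uplink toward the IP core (example)
  otv control-group 239.1.1.1       ! multicast group for the control plane (example)
  otv data-group 232.1.1.0/28       ! multicast range for data traffic (example)
  otv extend-vlan 100-150           ! VLANs to stretch between sites (example)
```

The brevity is real, but it comes at the cost of a proprietary control plane and, typically, a multicast-enabled transport between sites.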