Friday, May 15, 2015

Using Application Delivery Services to Build Scalable OpenStack Clouds

As your organization seeks to increase IT agility and reduce operating costs, building an orchestration platform like OpenStack to automate the deployment of resources makes a lot of sense. As you plan your OpenStack implementation, ensuring application availability and performance is a necessary design goal. There are a number of things to consider to this end: how do you minimize downtime, and how do you support both your legacy applications and your applications built for the cloud? You might need to host multiple tenants on your cloud platform and deliver performance SLAs to them. Larger application deployments might require extending cloud platform services to multiple locations.

To ensure a successful implementation of OpenStack, you need design recommendations around best practices for multi-zone and multi-region cloud architectures. There are two major areas to look at. One is resource segregation, or 'pooling': using cloud platform constructs such as availability zones and host aggregates to group infrastructure into fault domains and high-availability domains. The other is how to use an application delivery controller (ADC) to provide highly available, highly performant application delivery and load balancing services in your distributed, multi-tenant, fault-tolerant cloud architecture.

Best Practices for Multi-Zone and Multi-Region Cloud Integration
It’s easier to build resilient and scalable OpenStack data centers if three best-practice rules are applied in planning:
• Segregate physical resources to create fault domains, and plan to mark these as OpenStack availability zones.
• Distribute OpenStack and database controllers across three or more adjacent fault domains to create a resilient cluster.
• Design the networks with both data plane and control plane criteria in mind for scale-out and high availability.
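The second rule can be made concrete with a small placement sketch. The illustrative Python below (not an OpenStack API; the zone names and counts are assumptions for the example) spreads controller nodes round-robin across fault domains and checks that a majority quorum survives the loss of any single domain, which is exactly why three or more domains are recommended.

```python
# Illustrative sketch: spread controllers across fault domains so that
# losing any single domain still leaves a quorum majority.
# Zone names and counts are hypothetical, not OpenStack API calls.

def place_controllers(fault_domains, num_controllers):
    """Assign controller nodes round-robin across fault domains."""
    placement = {fd: [] for fd in fault_domains}
    for i in range(num_controllers):
        fd = fault_domains[i % len(fault_domains)]
        placement[fd].append(f"controller-{i}")
    return placement

def survives_domain_loss(placement, num_controllers):
    """A majority quorum must remain if any one domain fails."""
    quorum = num_controllers // 2 + 1
    return all(
        num_controllers - len(nodes) >= quorum
        for nodes in placement.values()
    )

placement = place_controllers(["az1", "az2", "az3"], 3)
# Each zone holds one controller; losing any zone leaves 2 of 3,
# which still meets the quorum of 2.
```

With only two fault domains, one domain necessarily holds two of the three controllers, and losing it breaks quorum; that is the intuition behind "three or more adjacent fault domains."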

Thursday, April 2, 2015

Simplify Integration Of L4 - L7 Services With OpenStack and NetScaler

Many organizations are building private cloud platforms as a way to increase the agility of IT infrastructure and to increase the efficiency of operations to support their business critical applications. Over the past few years we have seen an increasing move towards deploying OpenStack, which is an open source cloud management platform, in production environments.
As organizations use OpenStack to automate the deployment of servers, storage and networking, they are also looking to automate the provisioning of L4 – L7 services. To do this, they need their networking equipment vendors to provide integration of their devices with OpenStack in a way that addresses deployment challenges involved in offering infrastructure-as-a-service. These challenges include scalability, elasticity, performance and flexibility/control over resource allocation.
To enable the automated deployment of application delivery services with OpenStack, Citrix has built NetScaler Control Center as a way to integrate with the LBaaS service in OpenStack. The Citrix LBaaS solution enables IT organizations to guarantee performance and availability service-level agreements (SLAs), provide redundancy and seamless elasticity, and rapidly deploy line-of-business applications in OpenStack.
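The LBaaS service hands tenants a simple object model: a pool of back-end members fronted by a virtual IP (VIP) that clients address. A minimal in-memory sketch of that model (illustrative Python; the class and field names are assumptions, not the Neutron or NetScaler API):

```python
# Minimal sketch of the LBaaS object model: a VIP dispatching to a
# pool of members. Names and addresses are illustrative only.

class Member:
    def __init__(self, address, port):
        self.address = address
        self.port = port
        self.healthy = True

class Pool:
    def __init__(self, name, protocol="HTTP"):
        self.name = name
        self.protocol = protocol
        self.members = []

    def add_member(self, address, port):
        self.members.append(Member(address, port))

class VirtualIP:
    """Tenant-facing address; sends traffic to healthy members round-robin."""
    def __init__(self, address, pool):
        self.address = address
        self.pool = pool
        self._next = 0

    def pick_member(self):
        healthy = [m for m in self.pool.members if m.healthy]
        if not healthy:
            raise RuntimeError("no healthy members in pool")
        member = healthy[self._next % len(healthy)]
        self._next += 1
        return member

pool = Pool("web-pool")
pool.add_member("10.0.0.11", 80)
pool.add_member("10.0.0.12", 80)
vip = VirtualIP("192.0.2.10", pool)
```

Because the VIP only ever selects from healthy members, removing a member from rotation (for redundancy or elasticity) is just a state change on the pool rather than a client-visible event.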

The Challenge with Resource Deployment

OpenStack has come a long way in simplifying the provisioning of compute, storage and networking resources as part of an application deployment workflow. Neutron, the networking project for OpenStack, automates the creation and management of L2/L3 networks, as well as the associated L4-L7 network services such as firewalling, load balancing and VPN services. While Neutron has made rapid advancements in enabling a self-service consumption model for networking, there are still operational gaps that must be addressed before business critical workloads can be deployed successfully. These gaps include service-aware resource allocation, resource elasticity on demand, monitoring and visibility, fault tolerance and high availability. It is important that cloud providers have complete control over the policies governing these operational characteristics, even in fully automated environments.
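One of the gaps named above, service-aware resource allocation, is at heart a scheduling problem: place a tenant's load-balancing service on the device instance that can actually honor the requested capacity. A toy version of such a policy (illustrative Python; the device names and throughput figures are assumptions, not any vendor's scheduler):

```python
# Toy service-aware scheduler: pick the device with the most remaining
# headroom that can still satisfy the request, or fail fast if none can.
# Device names and capacity figures are hypothetical.

devices = {
    "adc-1": {"capacity_mbps": 1000, "allocated_mbps": 800},
    "adc-2": {"capacity_mbps": 1000, "allocated_mbps": 300},
    "adc-3": {"capacity_mbps": 500,  "allocated_mbps": 100},
}

def schedule(devices, requested_mbps):
    """Return the device with the most headroom that fits the request."""
    candidates = [
        (name, d["capacity_mbps"] - d["allocated_mbps"])
        for name, d in devices.items()
        if d["capacity_mbps"] - d["allocated_mbps"] >= requested_mbps
    ]
    if not candidates:
        return None  # no device can honor the SLA; refuse rather than overcommit
    name, _ = max(candidates, key=lambda c: c[1])
    devices[name]["allocated_mbps"] += requested_mbps
    return name
```

The key design choice is the refusal path: a service-aware scheduler declines a placement it cannot back with real capacity, which is how providers keep SLA guarantees credible in a fully automated workflow.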

Saturday, January 31, 2015

A Look Back at 2014 and Innovation at Juniper Networks

As the year comes to an end, it's always interesting to look back at the changes in the industry and the progress that we made as a company over the last year. Many trends emerged or took further hold of the industry in 2014. Let's take a look at them and see how Juniper delivered innovation in these areas.

Cloud computing and the need for on-demand resources was a big one. The open source movement continues to grow in the cloud space, and OpenStack and CloudStack are gaining momentum. The DevOps movement and the need for automation of IT resources was another big trend in the news. We saw DevOps extend to networking equipment like the top-of-rack switch, where it had previously been mainly about server configuration. Overlay networks took hold in 2014, with the likes of Juniper's Contrail and VMware's NSX gaining momentum. New network fabric architectures were introduced, like the IP Clos designs popular with massively scalable data center (MSDC) operators and spine-and-leaf architectures that offer simplified deployment and management. The rise of the Open Compute Project and its move to include networking was a bit of a surprise for me. There is certainly something going on there.

OpenStack/CloudStack Integration
Cloud computing is transforming the way business is done today. It's not hard to see why when you consider all the benefits that the cloud promises, such as flexibility, business agility and economies of scale. Look into the underlying layers of compute, storage and network, however, and there is real complexity in managing such an infrastructure in a dynamic environment. Organizations that are building clouds need a platform to automate the deployment of infrastructure. In addition to offerings from commercial vendors, this type of software stack is being developed by open source user communities. In the interest of being open and offering our customers choices, Juniper announced support for OpenStack back in 2013. We continued this momentum by announcing support for CloudStack in 2014. For more on CloudStack, see CloudStack and Juniper's MetaFabric, Enabling Private and Public Cloud.

Automation Integration with Puppet, Chef and Ansible
Juniper has always been about being open. We serve a diverse set of customers with different use cases who like to use different tool sets. Back in 2013 we announced support for Puppet. We kept up this momentum by later announcing support for Chef, and then for Ansible in 2014. Sysadmins use Puppet and Chef to manipulate infrastructure as code. Because we are open, we've productized the capability to work with these tools in both our hardware and software solutions. Ultimately this gives our customers greater flexibility in choosing which automation tools to use, without a costly rip-and-replace of their infrastructure. Of course, Juniper has had on-box automation as part of Junos for many years. For more on automation, see Automation with Chef, Puppet and Ansible.
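Tools like Puppet, Chef and Ansible share one core idea: you declare a desired state, and the tool computes and applies only the difference from the current state, so re-running the same configuration is always safe. A stripped-down sketch of that idempotent pattern (illustrative Python; the keys and values are hypothetical and are not real Junos configuration syntax):

```python
# Sketch of the declarative "desired state" pattern behind tools like
# Puppet, Chef and Ansible: diff desired vs. current, apply only the
# changes. The config keys below are hypothetical, not Junos syntax.

def diff(current, desired):
    """Return only the settings that need to change."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_config(current, desired):
    """Apply the diff in place; a second run is a no-op (idempotence)."""
    changes = diff(current, desired)
    current.update(changes)
    return changes

device = {"hostname": "sw1", "ntp_server": "10.0.0.1"}
desired = {"hostname": "sw1", "ntp_server": "10.0.0.2", "snmp": "enabled"}

first_run = apply_config(device, desired)   # applies the two changed settings
second_run = apply_config(device, desired)  # nothing left to change
```

Idempotence is what makes "infrastructure as code" practical: the same manifest or playbook can be run on every device, every day, and only genuine drift triggers a change.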

Wednesday, September 10, 2014

Making the Transition to an Open SDN Architecture in the Enterprise Private Cloud

Enterprise IT is moving away from acting as a siloed service organization and toward alignment with business goals, helping organizations enhance their agility, value and customer experience. Many of these organizations are moving to a cloud computing model to achieve these goals, but they haven't determined the best strategy for making this transition.
Transitioning to the cloud will help these organizations accelerate innovation and business agility by emphasizing a number of important factors. Adaptability is one. Seamless scale, both upward and downward, is another. Network intelligence that provides proactive resource optimization is a third.

To make this transition, organizations need the right infrastructure, and they must be prepared to answer questions regarding SDN, orchestration, security, network protocols and many other issues.

These are addressed by two application-aware architectures for the enterprise private cloud:

1.    Juniper Networks’ open SDN architecture

2.    A proprietary programmable architecture that requires investment in a centralized controller and application-aware switch combination.


Wednesday, August 13, 2014

Announcing Juniper's Most Highly-Available Campus Distribution Switch

Recently Juniper Networks announced the EX4600, a switch built for the distribution layer in campus networks. It is designed for the IT networking teams of enterprises and SMBs that are rolling out new applications and services and require higher performance to support them. This compact switch delivers innovations developed in the data center, including ISSU for hitless upgrades, Insight Technology for performance monitoring and 40GbE ports for high-speed uplinks. It works as a standalone switch as well as in Virtual Chassis and MC-LAG architectures, providing simplified management and network resiliency.



IT organizations are concerned about having sufficient switching performance to support increasing access from the campus to applications in the cloud. They need to support diverse devices for the current BYOD trend as well as the demands of services like VDI, the growing use of wireless access and unified communications. As they continue to scale, designing their operations to accommodate increasing volumes of traffic, IT organizations are concerned about managing secure access, connectivity and bandwidth usage. Yet despite ever-changing network usage patterns and the demands of supporting more devices and services, IT teams are still expected to make things work with limited staffing and budgets. What IT teams require is an easy-to-manage solution that delivers high-performance connectivity at an affordable price.

“More than three-quarters of the IT organizations we surveyed have either already standardized their campus networks on 10 GbE or are actively planning a move from 1 GbE to 10 GbE. Drivers include bandwidth-intensive applications, current or anticipated traffic levels and falling costs of the technology,” said Bob Laliberte, Senior Analyst, Enterprise Strategy Group.

Friday, April 25, 2014

A Network Fabric for New Applications in the Data Center

The Network Challenge with Deploying Applications
Thanks to virtualization, organizations can bring up new applications and services quickly. Unfortunately, many data center networks don't let you fully capitalize on the business agility that virtualization and modern application architectures provide. Traditional network architectures are too slow and too cumbersome to configure. For true agility, enterprises need a high-performance, low-latency network fabric that can be managed like a single, logical switch.

Today's mid-sized and large-scale data centers are built with high-performance blade and rack servers, which typically run multiple VMs, which in turn run increasingly modular, web- and cloud-based applications. These modern applications are driving increased traffic levels and different traffic patterns, which place specific demands on the data center network.

Critical business applications such as enterprise resource planning (ERP) and customer relationship management (CRM) are divided into multiple modules. As a result, a relatively simple operation such as entering an order can trigger 100 or more individual transactions, each imposing its own latency. Some applications, including eCommerce applications, even behave dynamically, spinning up new instances or migrating workloads in response to traffic loads.

Their distributed nature means modern applications are spread across racks of servers, each served by multiple switches. The applications generate a tremendous amount of server-to-server, or east-west, traffic as the various modules communicate with one another. Multi-tier network architectures aren’t well matched to modern applications. They force this east-west traffic to first travel north and south, up and down the network tree, before arriving at its ultimate destination, adding significant latency that can cause application performance to degrade under load.
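The latency penalty described above can be made concrete by counting switch hops. In a classic three-tier (access/aggregation/core) design, east-west traffic between servers on different access switches must climb the tree and come back down, while in a leaf-spine fabric every pair of leaves is exactly one spine apart. A simple hop-count sketch (illustrative Python; the three-tier topology and hop counts are an assumption about a typical multi-tier design):

```python
# Compare switch hops for east-west traffic: a three-tier tree versus
# a two-tier leaf-spine fabric. Topology assumptions are illustrative.

def tree_hops(same_access, same_aggregation):
    """Switch hops in an access -> aggregation -> core tree."""
    if same_access:
        return 1              # same access switch
    if same_aggregation:
        return 3              # access -> aggregation -> access
    return 5                  # access -> agg -> core -> agg -> access

def leaf_spine_hops(same_leaf):
    """Switch hops in a leaf-spine fabric: any two leaves are one spine apart."""
    return 1 if same_leaf else 3   # leaf -> spine -> leaf
```

The worst case drops from five hops to three, and, just as importantly, the fabric's path length is uniform: an application module's latency no longer depends on which rack its peer happens to land in.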

Wednesday, April 16, 2014

Design Your Network For Virtualized, SDN Enabled Applications in the Cloud

Applications are Driving Change
Organizations are going through a transition in how they build network infrastructure. This is because the number and type of applications in use are growing rapidly. The way that applications are accessed is changing, and so is the way that they are designed.

Organizations are hosting applications in their own data centers. They are extending their applications across data centers. They are accessing applications from cloud hosting providers. Service providers are hosting applications on behalf of the enterprise and providing application management services.

These new applications are complex and virtualized, and many of the newer ones are SDN enabled. Many applications are used in multiple geographies, and workloads follow the sun. All of this is changing how applications need to be delivered and how the network needs to be built to serve them.

Challenges with Deploying Applications
There are three major challenges that you need to address to deploy new applications effectively: virtualization, SDN and cloud.

Modern applications are virtualized and dynamic. They communicate internally to a great degree, and they move workloads around the network. This means that you need a network fabric with mesh connectivity for them to perform properly.

Many new applications are SDN enabled, with their component parts connected via a virtualized overlay network. However, these applications still need to communicate with existing infrastructure, so you need a way to incorporate SDN-enabled applications into the rest of the environment.
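At its simplest, bridging an SDN overlay to existing infrastructure is a mapping problem: a gateway translates each overlay segment identifier (for example a VXLAN VNI) to a physical VLAN and forwards between the two domains. A minimal mapping sketch (illustrative Python; the VNI and VLAN numbers are assumptions, and real gateways do considerably more, such as MAC learning and BUM traffic handling):

```python
# Minimal sketch of an overlay gateway: map overlay segment IDs
# (e.g. VXLAN VNIs) to physical VLANs so SDN-attached workloads can
# reach legacy infrastructure. Numbers below are illustrative.

class OverlayGateway:
    def __init__(self):
        self.vni_to_vlan = {}

    def bind(self, vni, vlan):
        """Attach an overlay segment to a physical VLAN (one-to-one)."""
        if vlan in self.vni_to_vlan.values():
            raise ValueError(f"VLAN {vlan} already bound")
        self.vni_to_vlan[vni] = vlan

    def to_physical(self, vni):
        """Translate an overlay segment to its physical VLAN."""
        return self.vni_to_vlan[vni]

gw = OverlayGateway()
gw.bind(5001, 100)   # tenant overlay segment 5001 <-> legacy VLAN 100
gw.bind(5002, 200)
```

Keeping the binding one-to-one is the point: each tenant's overlay segment lands on exactly one legacy broadcast domain, so existing workloads see ordinary VLAN traffic and need no knowledge of the overlay at all.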