Friday, December 21, 2012

Network Functions Virtualization is Changing How Services are Delivered

As service providers face increased competition from over-the-top (OTT) providers, they are seeking new markets to enter. However, as they look to create and launch new services, they must grapple with the growing number and complexity of hardware devices in their networks. This creates challenges: certifying equipment takes time, and staffing and training skilled operators for so many devices is difficult. It also creates cost pressure, with the need for more space and power at a time when these resources are becoming ever more expensive. Making upfront capital outlays for equipment, in anticipation of revenue that ramps up over time, can stress budgets. As a result, service providers are looking to change how network services are deployed, and some are finding that Network Functions Virtualization (NFV) is the answer to their problems.

Defining Network Functions Virtualization Services
Network Functions Virtualization is transforming how network operators architect networks by enabling the consolidation of network services onto industry-standard servers, which can be located in data centers, on network nodes, or at the user premises. NFV delivers network functions as software that runs as virtualized instances and that can be deployed at locations in the network as required, without the need to install dedicated equipment for each new service. NFV is applicable to any network function in both mobile and fixed networks. It is complementary to Software Defined Networking (SDN) but not dependent on it. Virtual appliances might be configured using SDN capabilities, and they might be connected via overlay network tunnels in clusters based on an application or the needs of an organization.
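
To make the idea concrete, here is a minimal sketch, using the libvirt Python bindings, of launching a network function (a virtual firewall in this case) as a software instance on a standard server. The image path, bridge name, and resource sizes are hypothetical placeholders, not a reference deployment.

```python
# Minimal sketch: instantiating a network function (a virtual firewall)
# as a VM on an industry-standard server via the libvirt Python API.
# The image path, bridge name, and sizes are hypothetical placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vfirewall-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/images/vfirewall.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
vm = conn.createXML(DOMAIN_XML, 0)     # boot the virtual appliance
print("Launched network function:", vm.name())
```

The same image could be booted in a data center, at a network node, or on a server at the customer premises, which is the deployment flexibility described above.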

Monday, December 10, 2012

Achieving Single Touch Provisioning of Services Across The Network

Service provider networks are continually growing and evolving in order to provide a rich end-user experience. Ethernet-based services are increasing rapidly to deliver high bandwidth and anytime access to video, voice, and broadband applications. In order to meet the needs of their customers, service providers have built complex networks consisting of thousands of high-capacity Ethernet ports on devices in many locations. Typically these devices need to be provisioned, configured, and managed separately, which increases operating costs and time to deploy new services. The challenge is how to achieve single touch provisioning of services across the network.

Most factors contributing to the complexity of service provider networks revolve around the need to provision and manage separate physical platforms. The majority of a service provider's OpEx results from the technical support infrastructure required to maintain and deploy services in a distributed network. Case studies show that eliminating the need to manage individual devices separately can reduce a service provider's OpEx by 70%.

Juniper Networks understands the challenges faced by service providers in today’s environment and has created a system called the Junos® Node Unifier (JNU) that enables centralized management, provisioning, and single touch deployment of services across thousands of Ethernet ports. The JNU solution is Juniper’s way to simplify today’s networks. JNU consists of satellite devices connected to a hub, where the satellites can be configured, provisioned, and managed from the hub device, giving a single unified node experience to network operators.
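
As an illustration of what single touch provisioning looks like in practice, here is a sketch using Juniper's PyEZ library (junos-eznc): the operator connects only to the hub, and a satellite port is configured as if it were a local interface. The hostname, credentials, VLAN, and interface names are hypothetical, and this is a sketch of the workflow rather than a JNU reference configuration.

```python
# Sketch: single-touch provisioning from the hub with Juniper's PyEZ
# library (junos-eznc). Because JNU presents satellites as ports of the
# hub, only one device is ever configured. Names here are hypothetical.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

SERVICE_CONFIG = """
set vlans customer-a vlan-id 110
set interfaces ge-100/0/1 unit 0 family ethernet-switching vlan members customer-a
"""

dev = Device(host="hub-mx.example.net", user="ops", password="secret")
dev.open()
cu = Config(dev)
cu.lock()
cu.load(SERVICE_CONFIG, format="set")  # stage the satellite port change
cu.commit(comment="single touch: customer-a on satellite port")
cu.unlock()
dev.close()
```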

Friday, December 7, 2012

Securing Virtualization in the Cloud-Ready Data Center

With the rapid growth in the adoption of server virtualization, new requirements for securing the data center have emerged. Today's data center contains a combination of physical servers and virtual servers. With the advent of distributed applications, traffic often travels between virtual servers and might never be seen by physical security devices. This means that security solutions for both environments are needed. As organizations increasingly implement cloud computing, security for the virtualized environment is as integral a component as traditional firewalls have been in physical networks.

Juniper's Integrated Portfolio Delivers a Solution
With a long history of building security products, Juniper Networks understands the security requirements of the new data center, and Juniper's solutions are designed to address these changing needs. The physical security portfolio includes the Juniper Networks SRX3000 and SRX5000 lines of services gateways and the Juniper Networks STRM Series Security Threat Response Managers. These physical devices are integrated with the Juniper Networks vGW Virtual Gateway, a software firewall that integrates with the VMware vCenter and VMware ESXi server infrastructure.

Fundamental to virtual data center and cloud security is controlling access to virtual machines (VMs) and the applications running on them, for the specific business purposes sanctioned by the organization. At its foundation, the vGW is a hypervisor-based, VMsafe-certified, stateful virtual firewall that inspects all packets to and from VMs, blocking all unapproved connections. Administrators can enforce stateful virtual firewall policies for individual VMs, logical groups of VMs, or all VMs. Global, group, and single-VM rules make it easy to create "trust zones" with strong control over high-value VMs, while enabling enterprises to take full advantage of virtualization's benefits. vGW integration with the STRM and SRX Series provides a complete solution for mixed physical and virtualized workloads.
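
To illustrate the layering of global, group, and per-VM rules described above, here is a schematic sketch in Python. It shows the concept of most-specific-first policy evaluation; it is not vGW's actual API, rule syntax, or evaluation engine, and all names are invented.

```python
# Schematic of layered firewall policy (per-VM, then group, then global),
# the model described above. Not vGW's actual API; names are invented.
GLOBAL_RULES = [("deny", "any", "any", "telnet")]            # all VMs
GROUP_RULES = {
    "trust-zone-pci": [("allow", "web-tier", "db-tier", "mysql")],
}
VM_RULES = {
    "db-01": [("allow", "backup-host", "db-01", "ssh")],
}

def effective_policy(vm, groups):
    """Most-specific rules first: per-VM, then group, then global."""
    rules = list(VM_RULES.get(vm, []))
    for group in groups:
        rules += GROUP_RULES.get(group, [])
    return rules + GLOBAL_RULES

print(effective_policy("db-01", ["trust-zone-pci"]))
```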

Thursday, November 15, 2012

How Big Data Is Changing IT and Bringing Out The Vote

One of the most interesting stories to come out of the presidential election was the use of big data analytics to help campaign workers bring out the vote. Both candidates used the technology. Project Narwhal was the Obama team's effort. It connected previously separate databases so that information on potential voters was accessible to campaign workers. Before the election they used the information to target voters with specific issues, and during the election they used it to determine who had not yet voted and do last-minute outreach.

Big Data Changes the Process
The angle that makes this fascinating is that the information was there before, but it wasn't accessible in a comprehensive way. The available data was the result of many siloed data-gathering efforts, each used for specific purposes over the years. The role that big data analytics played was in bringing this disparate information together. Campaign workers could draw from sources such as volunteer-management programs, campaign finance and budgeting tools, voter-file interfaces, and especially social media to build a bigger picture of each voter's profile and to guide the campaign. As a result, canvassers weren't dispatched to knock on the doors of people who were already Obama supporters, and a donor who had given the maximum contribution got an email asking them to volunteer instead of an email asking for money.

The way they did it was a lesson in the creative use of technology. Instead of hiring a consulting company, the Obama campaign pulled together a team of technologists who brought a range of skills and experience to get the job done. Using various programming languages and APIs, they built a set of services that acted as an interface to a single shared data store for their applications. This made it possible to quickly develop new applications and to integrate existing ones into the system. As a result they were able to create a dashboard that provided access to a set of tools spanning all of their data sets and driving each step of the campaign process. The system included an analytics program called Dreamcatcher that was developed to microtarget voters based on sentiments in social media text. Using this tool they could determine which way the vote was going and where to focus their resources. For more information on the technology see Built to Win: Deep Inside Obama's Campaign Tech.

Saturday, November 3, 2012

Architecting Your Network to Avoid a Disaster

Every company that does business over the web, over the WAN, or through a data center must have a plan to protect connectivity and assets in case any data center component fails. The key is having a good business continuity and disaster recovery solution in place. Over the last few months I've been working on a solution for architecting the network for disaster avoidance and recovery. I delivered a webinar about this just days before Hurricane Sandy landed on the East Coast. It's not often that a topic is so relevant. As I watched the news a couple of days later it was shocking to see what happened to people's neighborhoods. My first reaction was to hope that rescue efforts were underway and that people would be safe. As a few days went by I wondered how people were doing with getting their businesses up and running again. I wrote about the need for good disaster planning in a blog in early September, see link.

What We Have Learned From Our Conversations
At Juniper we have talked with many organizations about their networks and their business continuity situation, and I'd like to share some of what we learned. Many organizations tell us that they have grown organically as well as through acquisitions. As a result they are concerned about inconsistent IT management policies. They have inherited a range of applications from the organizations they acquired, and since they are often in high-growth businesses that add applications for special projects, they see server and application sprawl. Often a new CIO will initiate a review of the infrastructure in an effort to normalize policies and streamline IT management. Sometimes the news of a natural disaster prompts a review of the BC/DR plan.

Challenges Confronting the Organization
What organizations often find is that they are confronted with a number of challenges. They might build infrastructure without clearly identifying their application needs, which results in poorly defined SLAs for applications instead of strict requirements with metrics. Many times they deploy infrastructure in an ad hoc manner without consistent policies. The result is many failure points, as well as difficulty managing and provisioning the network. Poor link utilization, with links that are frequently idle, is another consequence. They often have a distributed authentication, authorization, and enforcement infrastructure. The result is complex firewall policies that prevent user-specific enforcement and that are deployed based on local data center IT policies rather than global ones. These inconsistent policies for user and application access result in security holes.

Sunday, October 28, 2012

OpenStack Summit San Diego 2012

The OpenStack Summit in San Diego was the place to be last week. I attended and wanted to share my observations. There was a lot of participation and energy. It was sold out, with over 1300 people attending and about 35 vendors displaying their products. Previous summits were developer forums. This time the format was expanded and there were hundreds of sessions in many categories, including case studies and industry analysis as well as the usual developer sessions. See http://openstacksummitfall2012.sched.org/ for a list of the sessions.

What is Openstack and What Does it Do?
OpenStack (www.openstack.org) is a foundation that manages an open source cloud computing platform. It was founded in 2010 out of a project started by NASA and Rackspace to build their cloud infrastructure. Its mission is "To produce the ubiquitous open source cloud computing platform that will meet the needs of public and private cloud providers regardless of size, by being simple to implement and massively scalable." OpenStack focuses on the core infrastructure for compute, storage, images, and networking. There is a large ecosystem of vendors providing tools to do the things that OpenStack does not do.

OpenStack consists of modules to configure cloud computing resources. The components are the Nova compute service, Swift storage service, Glance image service, and Quantum network service. They automate the functions required to set up these services. OpenStack lets organizations quickly provision and reprovision compute resources. Even with virtualization it can take days to fully set up a virtual server, networking, and storage. Organizations want this to happen in minutes, whether they are offering a commercial service or an internal IT service.
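
As a flavor of what that provisioning looks like through the Nova API, here is a short sketch using the python-novaclient library. The endpoint, credentials, image, and flavor names are placeholders; in a real deployment they come from your OpenStack environment.

```python
# Sketch: booting a compute instance through the Nova API with
# python-novaclient. Endpoint, credentials, image, and flavor names
# are placeholders for whatever your OpenStack cloud provides.
from novaclient import client

nova = client.Client("2", "demo", "secret", "demo-project",
                     "http://controller:5000/v2.0")

image = nova.images.find(name="ubuntu-12.04")
flavor = nova.flavors.find(name="m1.small")

server = nova.servers.create(name="web-01", image=image, flavor=flavor)
print("Requested instance:", server.id)   # ready in minutes, not days
```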

Wednesday, October 17, 2012

Have We Hit The Network Breaking Point?

We have all noticed that the business environment is changing at a rapid pace. IT departments, previously seen as cost centers, were tasked with reducing their infrastructure and even justifying their existence, but not anymore. With the rise of just-in-time manufacturing, and long tail customization, all driven by ecommerce that’s increasingly accessed from mobile devices, the old rules are being turned upside down.  The IT organization is now tasked with enabling the business to drive innovation.

The Opportunity that Comes with Change
While change comes with opportunity, we know change isn't always easy. The old ways of building infrastructure and networks are no longer working. Organizations need to adapt to a world where customers use social media and create huge amounts of data that contain business intelligence. All of us have questions about what is happening in the IT space, what the challenges are, and what to do about it. That is why Juniper got together with Forrester and did some research to help sort this all out.

It started in July 2012, when Juniper Networks commissioned Forrester Consulting to evaluate what enterprises need from a network in order to scale for business and meet future needs. Forrester Consulting surveyed 150 IT business decision-makers from enterprises across the United States, and yesterday the results were shared in a webinar. It was a conversation between Forrester Analyst Andre Kindness, Juniper’s CIO Bask Iyer and Mark Settle from BMC Software. I listened in and there was a lot to learn. 

Thursday, October 11, 2012

Monetizing the Business Edge with Hosted Private Cloud

The last few years have seen major changes in enterprise networking, with the growth of MPLS virtual private networks (VPNs), and the rise in adoption of cloud services. A growing trend, which blends the two, is Hosted Cloud Services, delivered over MPLS VPNs from the service provider’s data center to the customer’s branch office locations. These services are facilitated and enhanced by Juniper’s combined routing and switching solutions as I discussed in my blog about the Junos Node Unifier.

Network Service Providers’ Opportunity with Cloud Services
Network service providers (NSPs) are looking at new areas to boost revenue growth, and cloud computing is one of the key areas. Based on research from STL Partners' Telco 2.0 report, the Hosted Private Cloud market is expected to grow 34% annually to $6.5 billion by 2014. Since they already own the network, as well as the infrastructure around it, including the billing and management systems, NSPs are in a good position to monetize the cloud opportunity. While the cloud services market is highly competitive, there is an opportunity for NSPs to differentiate their service offerings by leveraging their enterprise-grade VPN infrastructure to provide Hosted Private Cloud services with enhanced end-to-end security and service-level guarantees.

Public cloud services lack the SLA and QoS guarantee levels that enterprises have grown accustomed to with their VPN networks. Recent power outages associated with major public cloud providers have impacted many popular sites and highlighted the risks of relying on public cloud services. As a result, hosted private cloud services are emerging as a cost-effective and robust alternative that offers quality and reliability for enterprise applications. NSPs are embracing these services to grow new revenue streams and increase customer retention.

Sunday, October 7, 2012

Innovating at the Edge in the Age of the Cloud

With the rise of software as a service and social media, network service providers are witnessing a game-changing shift in how consumer and business services and applications are delivered. Some service providers see the opportunity to break beyond their connection-oriented business model and embrace these new cloud-based services. In order to do so they are looking for ways to adapt their networks to accommodate these new services.

Service Delivery on the Edge
In order to take advantage of the new business models brought about by this service transformation, network service providers need to consider a number of factors, including adopting progressive business and monetization strategies and considering subscribers' preferences in the service definition process. For the network service provider, the most critical change is leveraging the underlying network architecture to support the new service offerings. As a result, the traditional architecture of service provider edge networks is undergoing an aggressive period of evolution, shifting from simply a point of network connectivity to a vital point of service creation and innovation.

Subscriber Defined Services
Matching services and applications to customer expectations has always been a formidable challenge for telecom operators. It is now even more difficult given that the expectations of a telecom subscriber have changed drastically over the past few years. In the past, consumer and business subscribers were tethered to the network services provider as their sole source of services, but today these subscribers have connections to OTT providers and access a variety of personal and business applications. An important change is that these subscribers are not just service consumers; they are also shaping service innovations by leveraging more intelligent and programmable platforms and devices.

Wednesday, October 3, 2012

Solving the Network Services Provisioning Challenge

Centralization of applications at large scale in a consolidated data center, and the increasing size of software-as-a-service application deployments, create the need to deploy thousands of 1GbE/10GbE ports to connect to servers and storage devices in the data center. Service providers need a way to terminate application servers without having to build layers of separately managed networking devices. They also need to be able to configure services from a central location and automate the provisioning of their network devices. This is driving the need for a device-based solution that can control thousands of network ports from a single point and that can interface with service orchestration systems.

The Services Provisioning Challenge
As service providers seek to deploy new cloud and network services at high scale, managing and maintaining individual network devices adds layers of operational complexity. As layers of network devices are added to the environment, service providers have to work with multiple management systems to provision, troubleshoot, and operate the devices. These additional layers may translate into additional points of failure or dependency, reducing service performance. To meet this challenge Juniper Networks has released the Junos Node Unifier, a Junos OS platform clustering program that reduces complexity and increases deployment flexibility by centralizing management and automating configuration of switch ports attached to MX Series 3D Universal Edge Routers acting as hubs. You can visit the landing page for the product launch here, link.

Simplifying the Network
The Junos Node Unifier enables scaling up of applications in the data center by supporting a low-cost method to connect network devices to a central hub. It reduces equipment and cabling costs and increases deployment flexibility by centralizing management and automating device configuration, while overcoming chassis limitations to enable thousands of switch ports to be attached to the MX Series platform. The solution uses the MX Series modular chassis-based systems as the hub, with access platforms, including the Juniper Networks QFX3500 QFabric Node, EX4200 Ethernet Switch, and EX3300 Ethernet Switch, serving as satellites. Junos Node Unifier leverages the full feature set of these devices to support multiple connection types at optimal rates, with increased interface density as well as support for L2 switching and L3/MPLS routing on the access satellites.
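
The scaling argument is easiest to see with a little arithmetic. The sketch below, with hypothetical interface naming, generates the configuration for dozens of satellites in one pass at the hub, which is the alternative to logging in to each access switch separately.

```python
# Sketch: the payoff of central provisioning -- generate config for
# thousands of satellite-attached ports in one pass at the hub instead
# of touching each access device. Interface naming is hypothetical.
def satellite_port_config(satellite_id, port_count, vlan):
    return [f"set interfaces ge-{satellite_id}/0/{port} unit 0 "
            f"family ethernet-switching vlan members {vlan}"
            for port in range(port_count)]

config_lines = []
for sat in range(100, 150):          # 50 satellite switches
    config_lines += satellite_port_config(sat, 48, "servers")

print(len(config_lines), "ports provisioned from one hub")  # 2400
```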

Saturday, September 29, 2012

Getting on the Path to SDN with Juniper Networks

A number of IT trends, including the consumerization of IT, cloud computing, and social media, present significant opportunities for businesses to improve productivity. Before adopting these technologies, however, organizations try to fully understand the impact they will have on the underlying infrastructure and, more specifically, the network environment, since it is a critical enabler for all of these services. As a result organizations are looking to innovation in the network to meet their business needs.

Trends Impacting the Network
As organizations continue to innovate and expand their virtualized environments beyond the simple benefits of consolidation to a more agile infrastructure, they have begun to build out private clouds. These agile IT environments enable business managers to rapidly turn up new services to meet unexpected demand or requirements. However, this abstraction layer can create blind spots in the infrastructure and make meeting compliance requirements difficult.

Social media applications present another challenge, as the explosion in the number of network-connected devices opens up avenues to new applications and collaboration tools. Well-known applications such as Facebook, YouTube, and Twitter often blur the lines between business and personal usage, and video creates new demands on the network. The question is how to ensure performance while protecting data and ensuring privacy.

Tuesday, September 11, 2012

Juniper’s Internet Edge Implementation Guide is Here

Juniper has created an implementation guide that will help network designers create a simplified Internet edge solution using Juniper Networks MX Series 3D Universal Edge Routers, SRX Series Secure Services Gateways, and EX Series Ethernet Switches. This guide details specific design considerations, best practices, and Juniper tools that can be used to build the optimal solution. It concludes with a real-world deployment example that illustrates the solution and recommended configurations in detail.

The Role of the Internet Edge
The Internet edge acts as the enterprise’s gateway to the Internet. It provides connectivity to the Internet for data center, campus, and branch offices, and it connects remote workers, customers, and partners to enterprise resources. It can also be used to provide backup connectivity to the WAN for branch offices, in case the primary connection to the enterprise WAN fails.

Today's Internet edge must enable access to a variety of applications such as cloud computing solutions, mission-critical applications, and bandwidth-hungry applications such as video. The Internet edge must also scale seamlessly to support growing application performance and bandwidth needs, while supporting a rich set of routing and security features. This guide will help you reach this goal.

Tuesday, September 4, 2012

Is it Time to Rethink Your Business Continuity - Disaster Recovery Plan?

Have you been thinking about the need to update your business continuity and disaster recovery plan? You are not alone. According to recent research by The 451 Group, disaster recovery planning is top of mind for the enterprise, and data replication is a top-two storage initiative for IT organizations. Data replication has always been important, but it was often seen as too expensive to implement. With virtualization technology, data replication can be easier to implement than before, and BC/DR is seen as a major motivator for implementing server virtualization. With virtualization in place you can replicate data at the virtual machine level. In many cases you can even put your database in a virtual machine. This makes it much easier to control the backup process, to keep track of your applications and data, and to get them up and running at the new location.

Business Continuity is a Top Concern
It's no wonder that BC/DR planning is getting more attention. We remember the outages and financial losses that resulted from disasters ranging from floods, tornadoes, and hurricanes to the tsunami in Japan. You have probably seen the statistics warning that 75% of companies without a business continuity plan fail within three years of a disaster, and that 43% of companies with no emergency plan never reopen after one. Government regulations have also dramatically increased data replication and compliance requirements. These situations have raised awareness of the need to maintain productivity within a company, sustain value chain relationships, and deliver continued services to customers and partners, all of which can be difficult while moving applications and user connections to a new data center location.

Sunday, August 26, 2012

Is VXLAN the Answer to the Network Virtualization Question?

Network virtualization is a growing topic of interest, and for good reason: as networks scale to meet the challenges of cloud computing, they are running up against VLAN scaling limitations. Several network overlay technologies have been released that seek to address the challenges of network scaling and to enable workload mobility. One of these technologies is VXLAN. It has proponents who say that it can meet the requirements for network virtualization. While it sounds good on the surface, it is worth taking a closer look. With VMworld happening this week in San Francisco I'm sure that network virtualization will be a hot topic, especially considering the VMware Nicira news, so I thought I'd comment on it and offer some thoughts and options.

The Origins of VXLAN
The VXLAN buzz started during a keynote at VMworld in August 2011, when VMware CTO Steve Herrod announced the Virtual eXtensible LAN protocol, which VMware positions as a technology that "enables multi-tenant networks at scale, as the first step towards logical, software-based networks that can be created on-demand, enabling enterprises to leverage capacity wherever it's available." Networking vendors Cisco and Arista are actively promoting VXLAN and have collaborated with VMware to develop and test the technology on their products. Cisco highlighted VXLAN again at its Cisco Live user conference in June 2012, and Arista is demoing it at VMworld; however, with the Nicira announcement VMware seems to have taken the next step. VXLAN sounds interesting, so let's see how good an idea it is.
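
For the record, the mechanics are simple. VXLAN encapsulates Ethernet frames in UDP and prepends an 8-byte header whose 24-bit VXLAN Network Identifier (VNI) allows roughly 16 million segments, versus 4094 usable VLAN IDs. Here is a small sketch of that header, per the IETF draft:

```python
# Sketch: the 8-byte VXLAN header from the IETF draft. The 24-bit VNI
# is the scaling story: ~16 million segments versus 4094 VLAN IDs.
import struct

def vxlan_header(vni):
    flags = 0x08 << 24                 # I flag set: the VNI is valid
    return struct.pack("!II", flags, vni << 8)   # VNI in the top 24 bits

header = vxlan_header(5000)
print(header.hex())   # '0800000000138800', carried inside a UDP datagram
```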

Wednesday, August 22, 2012

Meet up with Juniper at VMWorld San Francisco

Are you and your colleagues headed to VMworld next week? Stop by and meet the Juniper team at booth #1517, where we will be talking about the security and network architectures needed to move to an agile virtualized data center. It's going to be interesting with all of the changes that are happening in network virtualization. I'm looking forward to some interesting keynotes and sessions as well as catching up with friends in the industry.

Juniper has a lot planned for the show, including speaking sessions and a slew of in-booth demos, plus we are handing out USB car chargers. I'll be there on Sunday afternoon, so stop by and say hello.

You should stop by our booth if you:
- Are virtualizing important apps or are from regulated industries
- Have service oriented application architectures
- Want to use the vMotion technology to move workloads around a datacenter
- Have an old network that you need to upgrade to enable greater performance and workload mobility
- Need to secure mobile devices for VDI or BYOD
- Want to learn how UAC or Secure Access can be run in a VMware environment

Thursday, August 9, 2012

Making the Case for Long Distance Virtual Machine Mobility

With VMworld coming up I'm reminded of a top-of-mind subject: virtual machine mobility. The reason for moving virtual machines is to better allocate server resources and maintain application performance. It's a useful technology that works great in the data center. We also hear a lot about the need to move virtual machines across the WAN, live, without losing sessions. This is known as long distance vMotion or, generically, as long distance live migration. It might sound like a good idea, but it gets complicated when you think outside the data center walls and across the WAN. It creates complexity in the network, as maintaining sessions requires keeping the same IP address and MAC address after the move. There are many proposed use cases for it, but is it such a good idea?

Limitations of Live Migration
Long distance live migration over the WAN has limitations due to latency and bandwidth requirements and the complexity of extending Layer 2, which is required to maintain the same MAC address. Issues include the potential for traffic to arrive first at the original data center, where the gateway is, and then loop to the new data center where the VM has moved. Traffic can also loop back over the WAN to reach storage that stayed behind. There are also bandwidth requirements to handle the large-scale movement, issues with storage pooling and replication, and the complexity of implementing the L2 bridging architecture. If we are going to deal with all of this complexity, there had better be a good reason to do it. But is there? Here is a look at the various use cases for long distance live migration that I have found.
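
The bandwidth constraint is worth a back-of-envelope check. Pre-copy live migration only converges if the link drains memory faster than the VM dirties it; the numbers below are illustrative, not measurements.

```python
# Back-of-envelope sketch: pre-copy live migration converges only if
# the WAN drains VM memory faster than the VM re-dirties it.
# All numbers are illustrative.
ram_gb = 16              # VM memory to transfer
dirty_rate_mbps = 400    # rate at which the VM re-dirties its pages
wan_mbps = 1000          # WAN bandwidth dedicated to the migration

drain_mbps = wan_mbps - dirty_rate_mbps       # net progress per second
seconds = ram_gb * 8 * 1024 / drain_mbps
print(f"~{seconds:.0f} s to converge at {wan_mbps} Mb/s")   # ~218 s

# On a 100 Mb/s WAN the drain rate is negative: the copy never
# converges and the live migration fails outright.
```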

Wednesday, August 1, 2012

The Myth of the Broadcast Domain Apocalypse

There is a train of thought, popular with some network vendors and long-time network engineers, that there is a compelling need to "orchestrate" the physical network and the virtualized-networking world. This is expressed as a desire to ensure that VLANs are synchronized between the physical network switches and the virtual switches on the virtualization hosts.

The Orchestrated VLAN Model
VMware has a concept of a "backing VLAN". This, simply put, means that traffic belonging to a portgroup uses a configured transport VLAN when it traverses the physical network. For example, if a group of VMs belongs to a portgroup "backed" by a VLAN, that VLAN must be "allowed" on the trunk ports connecting all of the physical ESX hosts that are hosting any of those VMs.

In addition to this, it is argued that the trunk ports to ESX hosts that do not host a VM belonging to that portgroup should not “allow” the presence of the VLAN that backs that portgroup.  It is suggested that the reason for this requirement is that the unbridled propagation of VLANs will cause the ESX host to process broadcast packets it does not need to, with potentially dire consequences.
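
Here is a sketch of the bookkeeping that this orchestrated model demands. Host, portgroup, and VLAN values are hypothetical; the point is that every VM move forces the allowed-VLAN list on the trunks to be recomputed.

```python
# Sketch of the orchestrated-VLAN bookkeeping: recompute which backing
# VLANs each host's trunk must "allow" from current VM placement.
# Hosts, portgroups, and VLAN IDs are hypothetical.
vm_placement = {"esx-01": ["web-1", "web-2"], "esx-02": ["db-1"]}
vm_portgroup = {"web-1": "pg-web", "web-2": "pg-web", "db-1": "pg-db"}
portgroup_vlan = {"pg-web": 110, "pg-db": 120}

def allowed_vlans(host):
    return sorted({portgroup_vlan[vm_portgroup[vm]]
                   for vm in vm_placement.get(host, [])})

for host in vm_placement:
    print(host, "trunk must allow VLANs", allowed_vlans(host))
# esx-01 trunk must allow VLANs [110]
# esx-02 trunk must allow VLANs [120]
# Every vMotion changes this answer, which is the churn the
# orchestration argument asks the physical network to absorb.
```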

Tuesday, July 24, 2012

Why Network Latency Matters for Virtualized Applications

My colleague Russell Skingsley has an interesting take on the effects of latency on virtualized application performance. The purpose of virtualization is to optimize resource utilization, and as Russell pointed out, this isn't just an academic conversation. For the cloud hosting provider it's about revenue maximization. Network latency has a direct impact on virtualized application performance, and therefore on revenue for the service provider. Your choice of network infrastructure will impact your business, but it can be difficult to see how investing in high-performance networking translates into business results. I will try to connect the dots.
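
One standard way to see the connection is the textbook bound that a TCP flow's throughput cannot exceed its window size divided by the round-trip time. The window size below is illustrative, but the shape of the result is general: every added millisecond of latency caps what each flow, and therefore each hosted workload, can deliver.

```python
# Sketch: the textbook bound throughput <= window / RTT, showing how
# latency caps per-flow performance. Window size is illustrative.
window_bits = 64 * 1024 * 8          # 64 KB receive window

for rtt_ms in (0.1, 1, 10, 50):
    mbps = window_bits / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:>5} ms -> at most ~{mbps:7.0f} Mb/s per flow")
# 0.1 ms (a good data center fabric) allows ~5243 Mb/s per flow;
# 50 ms (a WAN path) caps the same flow at ~10 Mb/s.
```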

Don't Sell it Just Once
It's a service provider axiom that you don't want to waste your valuable assets by selling them to a single customer, even at a premium. Take dark fiber, for instance: every SP has come to learn that selling a fiber to a single customer will never bring scalable revenues. Adding DWDM to the service mix allows you to scale up the number of customers and is an improvement over selling dark fiber, but it is limited by the number of lambdas that fit in the spectrum of a single fiber. The scaling is linear. A similar situation exists for shared cloud computing resources.

Sunday, July 22, 2012

Is There a Real Use Case for LISP, The Locator/ID Separation Protocol?

LISP is a protocol that pops back into the news just when you might have forgotten it existed. It happened again in June, when Cisco re-launched LISP for fast mobility at its annual user conference, complete with an on-stage demo and much fanfare. While the demos are impressive, the history of LISP makes me wonder what is going on behind the curtain. This isn't the first time that Cisco has proposed LISP for a new use case. In 2011, Cisco positioned LISP as a solution for IPv6 transition and virtual machine mobility, alongside VXLAN and OTV, creating a triumvirate of proprietary protocols in support of pioneering use cases. But is there real value in LISP?

What is LISP? 
LISP is a protocol and an addressing architecture originally discussed at the IETF in 2006 to help contain the growth of the route tables in core routers. The LISP proposal, which was submitted in 2010, is still under development as an experimental draft in the IETF, see link. As far as I have seen there is not much consensus regarding the usefulness of LISP, and it has several open issues in the areas of security, service migration, and deployability. Because the cost and risk associated with LISP are significant, network operators have scaled their routing systems using other techniques, such as deploying routers with sufficient FIB capacity and deploying NAT.
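
For readers who haven't followed it, the core idea is a level of indirection: LISP splits an address into an endpoint identifier (EID) and a routing locator (RLOC), and an ingress tunnel router consults a mapping system before encapsulating traffic toward the destination site. The sketch below is schematic, with illustrative addresses, not the protocol's actual message formats.

```python
# Schematic of LISP's indirection: EIDs (who you are) map to RLOCs
# (where you attach). An ingress tunnel router queries the mapping
# system, then encapsulates toward the RLOC. Addresses illustrative.
import ipaddress

mapping_system = {
    ipaddress.ip_network("10.1.0.0/16"): "203.0.113.1",   # site A RLOC
    ipaddress.ip_network("10.2.0.0/16"): "198.51.100.7",  # site B RLOC
}

def rloc_for(eid):
    addr = ipaddress.ip_address(eid)
    for prefix, rloc in mapping_system.items():
        if addr in prefix:
            return rloc
    return None   # no mapping: drop or forward natively

print(rloc_for("10.2.33.4"))   # encapsulate toward 198.51.100.7
# If a host or VM moves, only its mapping entry changes; the EID is
# stable. The debate is whether that's worth a global mapping system.
```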

Wednesday, June 13, 2012

Juniper’s Vision for SDN Enables Network Innovation

The importance of the network continues to grow, and innovation in the network needs to support new business models and drive social and economic change. SDN will help address challenges currently faced by the network as it evolves to support business applications. It is worthwhile to step back and determine the big-picture problem to be solved. SDN is about adding value and solving a problem that has not been solved. Helping network operators combine best-of-breed networking equipment with SDN control, to facilitate cost-effective networks that open new business opportunities, is key to Juniper's strategy. I'd like to elaborate on a few key points.

Juniper’s Vision For The New Network And Software-Defined Networking Are Aligned
I’ve talked in previous blogs about Juniper’s New Network Platform Architecture. Juniper is delivering the New Network to increase the rate of innovation, streamline network operating costs through automation, and reduce overall capital expenses. Legacy networks have grown large and complex. This has stifled innovation and made networks costly to build and maintain. Both the New Network and Software-Defined Networking (SDN) are about removing complexity, treating the network itself as a platform, and shifting the emphasis from maintenance to innovation.

SDN provides an abstracted, logical view of the network, with externalized software-based control and fewer control points, for better network control and simplified network operations. Juniper's vision for SDN includes bi-directional interaction between the network and applications and a real-time feedback loop to ensure an optimal outcome for all elements and a predictable experience for users. This capability is transparent, allowing customers to augment their existing network infrastructures to be SDN-enabled.

How the QFabric System Enables a High-Performance, Scalable Big Data Infrastructure

Big Data Analytics is a trend that is changing the way businesses gather intelligence and evaluate their operations. Driven by a combination of technology innovation, maturing open source software, commodity hardware, ubiquitous social networking and pervasive mobile devices, the rise of big data has created an inflection point across all verticals, such as financial services, public sector and health care, that must be addressed in order for organizations to do business effectively and economically.

Analytics Drive Business Decisions
Big data has recently become a top area of interest for IT organizations due to the dramatic increase in the volume of data being created and due to innovations in data gathering techniques that enable the synthesis and analysis of the data to provide powerful business intelligence, which can often be acted upon in real time. For example, retailers can increase operational margins by responding to customers' buying patterns, and in the health industry big data can enhance outcomes in diagnosis and treatment.

The big data phenomenon brings a challenging question to CIOs and CTOs: what is the big data infrastructure strategy? A unique characteristic of big data is that it does not work well in traditional Online Transaction Processing (OLTP) data stores or with structured query language (SQL) analysis tools. Big data requires a flat, horizontally scalable database, accessed with query tools that work in real time. As a result, IT must invest in new technologies and architectures to harness the power of real-time data streams.
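
The "flat, horizontally scalable" property is easier to picture with a sketch. The toy example below hash-partitions keys across nodes so that capacity grows by adding servers; it is purely illustrative, and real systems use consistent hashing so that adding a node moves only a fraction of the keys.

```python
# Sketch of horizontal scaling: hash-partition keys across nodes so
# capacity grows by adding servers, with no central SQL schema in the
# write path. Purely illustrative.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def node_for(key):
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

for key in ("user:1001", "event:2012-11-06:click", "sensor:42"):
    print(key, "->", node_for(key))
# Scaling out means appending to NODES; queries fan out to the nodes
# in parallel, which is what flat, real-time analysis relies on.
```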

Tuesday, June 5, 2012

Juniper's Campus Validated Design Guide

In my previous blog (link) I wrote about the New Network Platform Architecture and how Juniper is delivering network designs that will enable our customers to optimize their network investments. With these designs Juniper’s goal is to help customers overcome technology limitations so that they can deliver greater efficiency, and increased business value, by leveraging their networks more effectively. To this end Juniper has released the Juniper Networks Horizontal Campus Validated Design Guide.

A Step-by-Step Process
This guide provides a simple, step-by-step process that businesses can use to rapidly deploy a small campus network solution. The deployment in this guide is based on a tested reference topology designed by Juniper that can easily be scaled and adapted to specific customer requirements. The guide is for network administrators who are tasked with designing and deploying a small campus network for a small enterprise and who want to complement their understanding of networks with specific guidelines and configurations from Juniper.

Juniper Networks offers this validated design guide for the campus and branch domain to help customers start building and configuring their own networks. A validated design represents a specific configuration of Juniper Networks hardware and software platforms that has been tested together and represents a reliable foundation on which to base a customized network for a business.

Juniper’s New Network Platform Architecture

If you talked with me at Interop Las Vegas last week, you heard me speak about Juniper's New Network Platform Architecture. The New Network Platform Architecture is an initiative that brings together Juniper's innovations in silicon, software, and systems to deliver best-in-class network designs that enable business advantages for our customers. This is a milestone for Juniper, and it shows our commitment to delivering value to our customers to help them compete in today's marketplace.

Juniper’s Focus on the Customer
Juniper's approach to designing network architectures is to solve today's business and technology limitations with designs that deliver greater efficiency, increased business value, better performance, and lasting opportunity. With a focus on our customers' business objectives, the demands of their applications, and their workflow needs, we design architectures that lift legacy limitations and transform our customers' expectations for the network. Our objective is to drive business value by optimizing our customers' network investments: we simplify architectures, operating models, and workflows, and our domain designs help customers increase productivity, generate revenue, and enhance the quality of the user experience.

The Changing Environment
With soaring growth in bandwidth demand, mobile consumer devices, cloud, and M2M, network needs and the pace of innovation have changed dramatically, and Juniper is enabling our customers to grow and capture the opportunity. Our differentiation is our ability to take complex architectures and legacy operating systems and simplify, modernize, and scale them for our customers. From core routing to data centers, Juniper has consistently delivered breakthrough innovations, leveraging our expertise in silicon, systems, and software to help our customers increase efficiency, drive business value, and accelerate service delivery.

Tuesday, January 3, 2012

My 2011 Update and Thoughts on What’s Coming in 2012

An Eventful Year with Many Changes
It’s been an eventful year for me and I thought an update was in order. On the work front there have been a lot of changes. I joined Juniper Networks in November. I’m focused on technical marketing for data center and cloud. This is a return for me as I was at Juniper before moving to Cisco in 2007. It’s interesting how the world turns, and I’m excited to be back.

When I was interviewing at Juniper I was asked what attracted me to the jobs that I have had. Of course there are many things, but being a techie, what stands out for me is working on market-leading products, and I've been fortunate to have worked on some firsts. At Lucent it was the PortMaster (formerly Livingston), the first digital remote access server. At Redback it was the SMS (Subscriber Management System), the first broadband aggregation platform. At Entone it was a suite of products for IPTV that preceded FiOS and U-verse. When I first went to Juniper in 2005 it was the Peribit WX, the first WAN optimization device. At Cisco I got into cloud computing and marketed data center solutions, including virtualized network services that ran on the UCS compute platform. At VCE I marketed the first pre-engineered converged infrastructure platform for private cloud computing.

So the question is what brought me to Juniper? Working at a company that continues to innovate in the networking space was a big draw. The opportunity to be on a newly formed team that is focused on critical areas of the network and to be the lead for data center and cloud computing was important. My role in technical marketing is to bridge the gap between the capabilities of the networking equipment and the value delivered to the business in terms of application performance and availability. I believe that the network design is critical to delivering a superior user experience and enabling applications architected for the cloud. As I looked at how applications are becoming more complex and more highly distributed I saw how Juniper’s innovative QFabric switching platform was ideally suited to meet the requirements of this trend in application architecture. So the short answer is the opportunity to market data center solutions based on QFabric.

Here are a few areas of interest that I expect to be writing about in the coming months.