Wednesday, June 13, 2012

How the QFabric System Enables a High-Performance, Scalable Big Data Infrastructure

Big Data Analytics is a trend that is changing the way businesses gather intelligence and evaluate their operations. Driven by a combination of technology innovation, maturing open source software, commodity hardware, ubiquitous social networking, and pervasive mobile devices, the rise of big data has created an inflection point across all verticals, including financial services, the public sector, and health care, that organizations must address in order to do business effectively and economically.

Analytics Drive Business Decisions
Big data has recently become a top area of interest for IT organizations due to the dramatic increase in the volume of data being created and due to innovations in data gathering techniques that enable the synthesis and analysis of that data to provide powerful business intelligence, which can often be acted upon in real time. For example, retailers can increase operating margins by responding to customers' buying patterns, and in the health industry, big data can improve outcomes in diagnosis and treatment.

The big data phenomenon brings up a challenging question for CIOs and CTOs: What is the big data infrastructure strategy? A unique characteristic of big data is that it does not work well in traditional Online Transaction Processing (OLTP) data stores or with structured query language (SQL) analysis tools. Big data requires a flat, horizontally scalable database, accessed with unique query tools that work in real time. As a result, IT must invest in new technologies and architectures to harness the power of real-time data streams.
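To illustrate the contrast with a single OLTP database, here is a minimal, hypothetical sketch of horizontal scaling: keys are hashed across shards, so capacity grows by adding nodes rather than by scaling up one server. The class and key names are illustrative only, not part of any particular product.

```python
class ShardedStore:
    """Toy horizontally scalable key-value store.

    Each shard would live on its own server in a real deployment;
    here they are plain dictionaries in one process.
    """

    def __init__(self, num_shards):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key):
        # Hash partitioning spreads keys evenly across shards.
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)


store = ShardedStore(num_shards=8)
store.put("user:42", {"clicks": 17})
print(store.get("user:42"))
```

Because any node can be reached for any key, reads and writes stay fast as the data set grows, which is the property the flat, horizontally scalable model relies on.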

Big Data Needs the Network
For example, consider Apache Hadoop, a de facto big data analytics platform. To manage and process data across a server cluster that may scale to thousands of nodes, the performance of the network is critical. In fact, most data center infrastructures, especially those based on multi-tier networking, face operating and performance challenges when it comes to moving and analyzing big data in a Hadoop cluster, which can interconnect thousands of servers.
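A single-process sketch of the MapReduce pattern that Hadoop implements makes the network dependency concrete: the shuffle step between map and reduce is, in a real cluster, an all-to-all transfer of intermediate key/value pairs between nodes, and that traffic pattern is what stresses the fabric. The word-count job below is illustrative only.

```python
from collections import defaultdict


def map_phase(documents):
    # Map: each node emits (word, 1) pairs for its local slice of the data.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)


def shuffle_phase(pairs, num_reducers):
    # Shuffle: pairs are partitioned by key for delivery to reducer nodes.
    # In a real Hadoop cluster this step crosses the network between
    # every mapper and every reducer.
    partitions = defaultdict(list)
    for key, value in pairs:
        partitions[hash(key) % num_reducers].append((key, value))
    return partitions


def reduce_phase(partitions):
    # Reduce: each reducer sums the counts for the keys it received.
    counts = defaultdict(int)
    for pairs in partitions.values():
        for key, value in pairs:
            counts[key] += value
    return dict(counts)


docs = ["big data needs the network", "the network needs scale"]
result = reduce_phase(shuffle_phase(map_phase(docs), num_reducers=4))
print(result)
```

Hash partitioning guarantees that all pairs for a given word land on the same reducer, so each reducer can total its keys independently; the cost is that shuffle traffic grows with the data set, which is why the underlying network must scale with the cluster.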

Legacy network architectures are not designed to handle highly distributed application architectures, nor can they deliver the reliability and performance at scale demanded by big data. Just as big data applications represent new ways of collecting, making sense of, and taking action on business data, the underlying network foundation of big data projects should be considered in a new light.

Finding the Right Architecture
Organizations need a network solution that overcomes these issues and lets them realize the business intelligence benefits of big data analytics. Network architectures can either enhance or inhibit the ability to easily initiate, grow, and integrate big data initiatives from pilot to large-scale production. Fortunately, as big data pilots launch and business cases develop, there is innovation occurring in network architecture that can enhance big data processing. The Juniper Networks QFabric System can serve as the network foundation of big data projects, providing the performance and scale required to process large data sets in real time.

To learn more, read how organizations can achieve simplified management, improved performance, and optimized data reliability by using Juniper Networks' QFabric System solution for big data: Understanding Big Data and the QFabric System.

This blog first appeared on the Juniper Networks blog site, see link.
