Wednesday, July 10, 2013

Making the Transition to Converged Storage and Data Networks

Storage and data network convergence holds the promise to transform the data center and make it a more cost-effective operation for the enterprise. There are considerable potential savings from reducing the number of network interface cards per server, reducing cabling, and lowering the power and cooling draw, as well as from having one less physical network to manage. The change is made possible by the ability to transport Fibre Channel frames over 10 Gigabit Ethernet using Fibre Channel over Ethernet (FCoE) technology. Making the transition isn’t an easy task, though. Let’s take a look at some of the considerations and how the transition can be made more easily.

The value of converging networks using FCoE is compelling, and many organizations are considering making the move, but the question is how to do it without disrupting operations. For organizations that are building new data centers and consolidating older ones, the answer is easy: build an FCoE-capable network in the new data center and migrate applications and storage over to the new infrastructure. The more difficult situation is when you need to convert an existing production network and cut over to FCoE live. This is where it gets interesting.

This game-changing FCoE technology will not just roll itself into the data center, and implementing it isn’t a typical transformation project. There are important deployment and cut-over considerations, and planning is required to get FCoE up and running smoothly. Some of the questions to address before deploying or cutting over to FCoE: When should the organization start deploying FCoE? How should it deploy FCoE? And what tasks does the deployment entail?

To ensure success, server and storage administrators should be aware of and prepared for the effort and challenges that come with rolling an existing network over to FCoE. The discussion here addresses a cut-over to FCoE and the considerations from a server and storage perspective. An FCoE cut-over in an existing enterprise is only achievable with careful planning and attention to storage and server dependencies. Given that the rationale for FCoE in the data center is well established, the success of the transformation comes down to how well prepared the organization is for the exercise.

When and How to Deploy FCoE
The goal is a “Converged Network” rather than separate Fibre Channel and IP networks, but to get there we need to get IP and Fibre Channel traffic routed through the same switches and over the same cables. Though some storage vendors have announced products with an FCoE interface, storage arrays with native FCoE interfaces are still in the early stages of adoption. To make the conversion easier, you can break it into two manageable parts.

The first step in converting to FCoE is to start on the server side. The time for FCoE is right up front, for new servers or racks of servers being deployed in the data center. New servers should be configured with a Converged Network Adapter (CNA) that replaces both the Fibre Channel HBA and the Ethernet NIC. With standards now complete for both the IEEE Data Center Bridging enhancements to Ethernet and the INCITS T11 FC-BB-5 standard for Fibre Channel over Ethernet (FCoE), the specifications for this first part of the process are laid out. FC-BB-5 provides a standard for running Fibre Channel over Ethernet to an FC gateway that connects to the existing FC network.
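
To make this concrete, here is a minimal sketch, assuming a Linux server whose FCoE stack (such as open-fcoe/libfc) registers the CNA under the standard fc_host sysfs class, that confirms the new adapter is presenting a Fibre Channel initiator alongside its Ethernet personality:

import glob
import os

def read_attr(path):
    # Return the stripped contents of a sysfs attribute, or None if unreadable.
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return None

# Each FC (or FCoE) initiator port shows up as /sys/class/fc_host/hostN.
hosts = sorted(glob.glob('/sys/class/fc_host/host*'))
if not hosts:
    print('No FC/FCoE initiator found - check the CNA driver and FCoE configuration.')
for host in hosts:
    wwpn = read_attr(os.path.join(host, 'port_name'))    # e.g. 0x2000...
    state = read_attr(os.path.join(host, 'port_state'))  # e.g. Online
    speed = read_attr(os.path.join(host, 'speed'))       # e.g. 10 Gbit
    print('%s: WWPN=%s state=%s speed=%s' % (os.path.basename(host), wwpn, state, speed))

Run it after the CNA drivers are installed; an Online port with the expected WWPN means the server side of the converged network is ready for storage connectivity testing.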

What is common is that the storage arrays stay connected, via a legacy Fibre Channel switch, to a high-performance, intelligent FCoE-capable switch that encapsulates and de-encapsulates FC to FCoE and acts as the gateway between the two networks. This gateway makes a two-part transition to FCoE possible. Many of our Juniper customers are taking this step and using the Juniper QFX3500 switch as the gateway device. With this phase you eliminate the FC network on the server side, taking out the Host Bus Adapters on the servers and the cabling for the FC network. For the easiest transition, FCoE-capable switching fabrics should be implemented and run in parallel with the existing Ethernet and Fibre Channel networks.
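
To see why every link and switch in the FCoE path must support larger-than-standard Ethernet frames, it helps to add up the encapsulation overhead. The sizes below are the commonly cited FC-BB-5 figures; treat this as a back-of-the-envelope sketch rather than a definitive wire-format reference:

# Back-of-the-envelope arithmetic for FCoE encapsulation overhead, showing
# why a converged link needs more than the standard 1500-byte Ethernet MTU.

FC_HEADER      = 24    # Fibre Channel frame header
FC_MAX_PAYLOAD = 2112  # maximum FC data field
FC_CRC         = 4     # FC frame CRC
FCOE_HEADER    = 14    # FCoE version/reserved bits plus the encoded SOF
FCOE_TRAILER   = 4     # encoded EOF plus reserved bytes

fcoe_payload = FCOE_HEADER + FC_HEADER + FC_MAX_PAYLOAD + FC_CRC + FCOE_TRAILER
print('Largest FCoE payload carried in one Ethernet frame: %d bytes' % fcoe_payload)
print('Fits in a standard 1500-byte MTU: %s' % (fcoe_payload <= 1500))
# Result: 2158 bytes - which is why FCoE deployments typically configure a
# "baby jumbo" MTU (around 2.5 KB) on every link in the FCoE path.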

Existing servers will need to be taken down as you replace the NIC and HBA adapters with CNA cards, so consider the business impact of shutting down hundreds or thousands of servers that run high-transaction applications and databases (OLTP, EBPP, and the like). If you deploy many servers, the rational approach is to build each new server with a 10 Gigabit CNA so that it can connect to the FCoE network running in parallel with the existing network. This reduces the future need to convert servers. If you have implemented server virtualization, you can move the application VMs to the new servers and then take the older servers down a few at a time, or even decommission them if they have reached end of life. New applications and databases benefit from the speed and ease of management of the converged network right away, and the need for a cut-over is completely avoided. Consider the priority of each application to the business when planning the changeover, as in the sketch below. This can be done using a process similar to the one I outlined in the business continuity considerations; see Is it Time to Rethink Your Business Continuity - Disaster Recovery Plan?
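
As a purely illustrative sketch of that prioritization (the server inventory and priority tiers below are hypothetical), the idea is to cut over the lowest-impact servers first and the mission-critical ones last, once the parallel FCoE fabric has proven itself:

from collections import defaultdict

# Hypothetical inventory: priority 1 = mission critical, 3 = lowest impact.
servers = [
    {'name': 'dev-app-01',  'priority': 3},
    {'name': 'batch-db-02', 'priority': 2},
    {'name': 'oltp-db-01',  'priority': 1},
]

waves = defaultdict(list)
for s in servers:
    waves[s['priority']].append(s['name'])

# Schedule the low-impact servers in the early waves, the critical ones last.
for wave, prio in enumerate(sorted(waves, reverse=True), start=1):
    print('Wave %d (priority %d): %s' % (wave, prio, ', '.join(waves[prio])))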

The next step is to take FCoE all the way to the storage arrays, using the VN2VN mode defined in the INCITS T11 FC-BB-6 standard. Juniper did the first public demonstration of end-to-end FCoE based on FC-BB-6 VN2VN at the Intel Developer Forum. For information on this technology and a video of the demo, see this blog: FCoE (FCF/FC-Switch Not Included). I’ll talk more about this process in a future blog.

Operating System Considerations for FCoE
Before we can do the cut-over, since we are addressing FCoE from the server and storage perspective, some important exercises are required in converting from the current architecture to the FCoE architecture. Though the network will be a converged IP and Fibre Channel network, the fundamental requirement of making sure that servers maintain their storage volume (LUN) associations after the cut-over does not change:

Persistent Binding - Servers still need to see their existing disks. Some operating systems must be associated with their LUNs via specific targets and controllers. The Solaris boot-device, for example, must be configured to boot from the disk address that is the "first slice on the first disk on the first target on the first controller". If migration to FCoE is done in a Solaris environment without thorough planning and understanding of the server dependencies, you risk not being able to boot the operating system.
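
One way to reduce that risk is to snapshot the server's LUN identities before the cut-over and verify them afterwards. This sketch is a Linux example (Solaris uses its own controller/target device paths), and it assumes udev's stable WWN-based symlinks under /dev/disk/by-id:

import glob
import json
import os
import sys

def lun_map():
    # Map each WWN-based identifier to the kernel device it points at.
    luns = {}
    for link in glob.glob('/dev/disk/by-id/wwn-*'):
        luns[os.path.basename(link)] = os.path.basename(os.path.realpath(link))
    return luns

if sys.argv[1:] == ['snapshot']:       # run before the cut-over
    with open('luns-before.json', 'w') as f:
        json.dump(lun_map(), f, indent=2)
elif sys.argv[1:] == ['verify']:       # run after the cut-over
    with open('luns-before.json') as f:
        before = json.load(f)
    missing = sorted(set(before) - set(lun_map()))
    print('LUNs missing after cut-over: %s' % (', '.join(missing) or 'none'))

Run it with 'snapshot' before the change window and 'verify' after; any LUN reported missing needs attention before the server goes back into service.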

Oracle RAC (formerly Oracle Parallel Server) will not work properly if its disk devices are named arbitrarily. The disk naming conventions prescribed by Oracle must be followed; otherwise serious issues will arise in the database. A cut-over from functional Fibre Channel attached storage to FCoE therefore requires planning with Oracle RAC's device naming conventions in mind.
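
On Linux, one common way to give RAC stable device names regardless of the adapter underneath is a set of udev rules keyed on each LUN's WWID. A sketch, with hypothetical WWIDs and ASM disk names:

# Generate udev rules that give Oracle RAC shared disks stable, WWID-keyed
# names. The WWIDs and ASM disk names below are hypothetical placeholders.
RULE = ('KERNEL=="sd?", SUBSYSTEM=="block", '
        'ENV{ID_WWN_WITH_EXTENSION}=="%(wwid)s", '
        'SYMLINK+="oracleasm/%(name)s", OWNER="grid", GROUP="asmadmin", MODE="0660"')

disks = {
    '0x60000000000000000000000000000001': 'data01',
    '0x60000000000000000000000000000002': 'fra01',
}

with open('99-oracle-asm.rules', 'w') as f:
    for wwid, name in sorted(disks.items()):
        f.write(RULE % {'wwid': wwid, 'name': name} + '\n')
print('Wrote 99-oracle-asm.rules - install under /etc/udev/rules.d/ and reload udev.')

Because the names are keyed on the LUN's own WWID rather than on the HBA path, they survive the HBA-to-CNA swap.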

LUN Masking (Disk Security) - This will need to be maintained on the back-end storage array for servers and their corresponding disks. Because each HBA's identity vanishes when it is replaced with a CNA, storage administrators must have a LUN masking plan in place, or they risk having entire back-end storage volumes become invisible to the operating systems.
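
A simple safeguard is a worksheet that maps each server's retiring HBA WWPNs to the new CNA WWPNs, so the storage team can update the masking before the change window. The WWPN values in this sketch are hypothetical placeholders (on Linux they can be collected from /sys/class/fc_host/host*/port_name):

import csv

# (server, old HBA WWPN, new CNA WWPN) - hypothetical examples only.
wwpn_changes = [
    ('oltp-db-01', '0x10000000c9a1b2c3', '0x20008c7cff1a2b01'),
    ('oltp-db-01', '0x10000000c9a1b2c4', '0x20008c7cff1a2b02'),
]

with open('lun-masking-worksheet.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['server', 'old_hba_wwpn', 'new_cna_wwpn'])
    writer.writerows(wwpn_changes)
print('Hand lun-masking-worksheet.csv to the storage team before the cut-over.')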

Switch zoning - The HBAs' identification will change in a cut-over scenario, but the reference points to the Fibre Adapter or Disk Director ports on the storage array will continue to be maintained, coordinated, and managed in the fabric zones. Since there is no "WWN spoofing" in FCoE technology to let the new adapters assume the old identities, every zone in the Fibre Channel switch zone configuration, even if there are ten thousand of them, has to be manually edited as an integral step in the cut-over from Fibre Channel to FCoE. Storage administrators have no choice but to make this happen.
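
Since zone configuration formats are vendor-specific, the sketch below is only an illustration of the mechanical part of that edit: it rewrites a plain-text export of the zone configuration, substituting each retired HBA WWPN with its replacement CNA WWPN (the WWPNs shown are hypothetical):

# Rewrite a saved plain-text export of the fabric zone configuration,
# replacing old HBA WWPNs with the new CNA WWPNs. Hypothetical values.
replacements = {
    '10:00:00:00:c9:a1:b2:c3': '20:00:8c:7c:ff:1a:2b:01',
    '10:00:00:00:c9:a1:b2:c4': '20:00:8c:7c:ff:1a:2b:02',
}

with open('zoneset-export.txt') as f:
    config = f.read()

for old, new in replacements.items():
    config = config.replace(old, new)

with open('zoneset-updated.txt', 'w') as f:
    f.write(config)
print('Review zoneset-updated.txt, then apply it through the switch CLI.')

The updated file still has to be reviewed and applied through the switch's own management interface.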

Legacy Fibre Channel switches - For some new high-performance switches that must connect with legacy Fibre Channel switches, the zoning exercise remains a task that can only be accomplished on the legacy Fibre Channel switches. Some of these high-performance switches may already offer zoning, or it may come as the technology advances.

To Sum It All Up
In summary, if you are building a new data center, moving to FCoE is easy: you can build an end-to-end FCoE network. For an existing data center you can convert in two steps, first on the server side and then on the storage side. As you build out new servers, equip them with CNAs and migrate your applications to them. Run your new FCoE-capable switches in parallel with the existing networks and attach the new servers to them. Be sure to take the operating system dependencies into account, and consider your application priorities and dependencies. While this isn't a comprehensive look at the considerations, I hope the information is useful.

For More Information
If you want to learn more about Juniper's solution for FCoE, see the recording of our webinar, Storage Silos in the Data Center? Time to Simplify the Network.