Sunday, August 26, 2012

Is VXLAN the Answer to the Network Virtualization Question?

Network virtualization is a growing topic of interest, and for good reason: as networks scale to meet the challenges of cloud computing, they are running up against VLAN scaling limitations. Several network overlay technologies have been released that seek to address these scaling challenges and enable workload mobility. One of these technologies is VXLAN. It has proponents who say it can meet the requirements for network virtualization. While it sounds good on the surface, it is worth taking a closer look. With VMworld happening this week in San Francisco I’m sure network virtualization will be a hot topic, especially considering the VMware Nicira news, so I thought I’d comment on it and offer some thoughts and options.
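To put a number on the scaling limitation, here is a quick sketch (my own illustration, not from any vendor material): an 802.1Q VLAN tag carries a 12-bit VLAN ID, while a VXLAN header carries a 24-bit VNI (Virtual Network Identifier), so the segment ID space grows by a factor of 4096.

```python
# Illustrative comparison of segment ID space:
# 802.1Q VLAN tag = 12-bit ID, VXLAN header = 24-bit VNI.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS      # 4096 (a few IDs are reserved in practice)
vxlan_segments = 2 ** VXLAN_VNI_BITS   # 16,777,216

print(f"802.1Q VLANs: {vlan_segments}")
print(f"VXLAN VNIs:   {vxlan_segments}")
```

Roughly 4K segments is plenty for a single enterprise but quickly exhausted in a multi-tenant cloud, which is the scaling pressure the overlay technologies are responding to.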

The Origins of VXLAN
The VXLAN buzz started during a keynote at VMworld in August 2011, when VMware CTO Steve Herrod announced the Virtual eXtensible LAN protocol, which VMware positions as a technology that “enables multi-tenant networks at scale, as the first step towards logical, software-based networks that can be created on-demand, enabling enterprises to leverage capacity wherever it’s available.” Networking vendors Cisco and Arista are actively promoting VXLAN and have collaborated with VMware to develop and test the technology on their products. Cisco highlighted VXLAN again at its Cisco Live user conference in June 2012, and Arista is demoing it at VMworld; with the Nicira announcement, however, VMware seems to have taken the next step. VXLAN sounds interesting, so let's see how good an idea it is.

Wednesday, August 22, 2012

Meet up with Juniper at VMworld San Francisco

Are you and your colleagues headed to VMworld next week? Stop by and meet the Juniper team at booth #1517, where we'll be talking about the security and network architectures needed to move to an agile, virtualized datacenter. It's going to be interesting with all of the changes that are happening in network virtualization. I'm looking forward to some interesting keynotes and sessions, as well as catching up with friends in the industry.

Juniper has a lot planned for the show, including speaking sessions and a slew of in-booth demos, plus we are handing out USB car chargers. I'll be there on Sunday afternoon, so stop by and say hello.

You should stop by our booth if you:
- Are virtualizing important apps or are from regulated industries
- Have service oriented application architectures
- Want to use vMotion to move workloads around a datacenter
- Have an old network that you need to upgrade to enable greater performance and workload mobility
- Need to secure mobile devices for VDI or BYOD
- Want to learn how UAC or Secure Access can be run in a VMware environment

Thursday, August 9, 2012

Making the Case for Long Distance Virtual Machine Mobility

With VMworld coming up I’m reminded of a top-of-mind subject: virtual machine mobility. The reason for moving virtual machines is to better allocate server resources and maintain application performance. It’s a useful technology that works great in the data center. We also hear a lot about the need to move virtual machines across the WAN, live, without losing sessions. This is known as Long Distance vMotion or, generically, as long distance live migration. This might sound like a good idea, but it gets a bit complicated when you think outside the data center walls and across the WAN. It creates complexity in the network, as maintaining sessions requires keeping the same IP address and MAC address after the move. There are many proposed use cases for it, but is it such a good idea?

Limitations of Live Migration
Long distance live migration over the WAN has limitations due to latency and bandwidth requirements, and due to the complexity of extending layer 2, which is required to keep the same MAC address. Issues include the potential for traffic to arrive first at the original data center, where the gateway is, and then loop to the new data center where the VM has moved. Traffic can also loop back over the WAN to reach storage that stayed behind. There are bandwidth requirements to handle large-scale movement, issues with storage pooling and storage replication, and the complexity of implementing the L2 bridging architecture. If we are going to deal with all of this complexity of moving virtual machines over the WAN, there had better be a good reason to do it. But is there? Here is a look at the various use cases for long distance live migration that I have found.
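To make the bandwidth point concrete, here is a back-of-the-envelope sketch with made-up numbers (the function names and figures are mine, purely for illustration): pre-copy live migration repeatedly copies memory pages, so it can only converge if the link is faster than the rate at which the VM dirties its memory, and even the first full-memory pass takes a while over a typical WAN link.

```python
# Rough feasibility math for pre-copy live migration over a WAN link.
# Numbers are illustrative, not measurements.

def first_pass_seconds(mem_gb, link_mbps):
    """Time for the initial full copy of the VM's memory over the link."""
    mem_megabits = mem_gb * 1024 * 8  # GB -> megabits
    return mem_megabits / link_mbps

def migration_converges(dirty_rate_mbps, link_mbps):
    """Pre-copy only finishes if memory is copied faster than it is dirtied."""
    return dirty_rate_mbps < link_mbps

# A 16 GB VM over a dedicated 1 Gbps WAN link:
print(first_pass_seconds(16, 1000))            # ~131 seconds for the first pass alone
print(migration_converges(1500, 1000))         # a busy VM dirtying 1.5 Gbps never converges
```

And this ignores WAN latency, which vMotion is also sensitive to; the sketch is only about raw bandwidth, which is already a meaningful hurdle.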

Wednesday, August 1, 2012

The Myth of the Broadcast Domain Apocalypse

There is a train of thought, popular with some network vendors and long-time network engineers, that there is a compelling need to “orchestrate” the physical network and the virtualized-networking world. This is expressed as a desire to ensure that VLANs are synchronized between the physical network switches and the virtual switches on the virtualization hosts.

The Orchestrated VLAN Model
VMware has a concept of a “backing VLAN”. Simply put, this means that traffic belonging to a portgroup uses a configured transport VLAN when it traverses the physical network. For example, if a group of VMs belongs to a portgroup “backed” by a VLAN, that VLAN must be “allowed” on the trunk ports connecting all of the physical ESX hosts that are hosting any of those VMs.

In addition to this, it is argued that the trunk ports to ESX hosts that do not host a VM belonging to that portgroup should not “allow” the VLAN that backs that portgroup. The suggested reason for this requirement is that the unbridled propagation of VLANs will cause the ESX host to process broadcast packets it does not need to, with potentially dire consequences.
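The bookkeeping this model demands can be sketched in a few lines (a hypothetical illustration; the host and portgroup names are invented): for each trunk port, the allowed-VLAN list is derived from which portgroups actually have VMs on that host, and must be recomputed every time a VM moves.

```python
# Hypothetical sketch of the "orchestrated VLAN" bookkeeping described above:
# prune each trunk so it only allows the backing VLANs of portgroups whose
# VMs actually run on that host.

def allowed_vlans(vm_placement, portgroup_vlan):
    """Map each host to the set of VLANs its trunk port must allow."""
    trunks = {}
    for vm, (host, portgroup) in vm_placement.items():
        trunks.setdefault(host, set()).add(portgroup_vlan[portgroup])
    return trunks

portgroup_vlan = {"web": 10, "db": 20}   # portgroup -> backing VLAN
vm_placement = {                          # vm -> (host, portgroup)
    "vm1": ("esx-01", "web"),
    "vm2": ("esx-01", "db"),
    "vm3": ("esx-02", "web"),
}

print(allowed_vlans(vm_placement, portgroup_vlan))
# esx-01's trunk must allow VLANs 10 and 20; esx-02's only VLAN 10
```

The catch is the last step: every vMotion event changes `vm_placement`, so keeping the physical trunk configurations in lockstep with VM placement is exactly the orchestration burden this model imposes.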