VMware Storage Part 2: SAN

Continuing with the theme from last week. Again, this is just to get things started; I do intend to dig into some of these areas much deeper and discuss specific products, but assuming that everyone knows the basics would run counter to my goal of making technology, specifically virtualization, something we can all understand and work with.

I was first introduced to the concept of a SAN in 2004, with the Apple Xserve RAID, by a software developer I worked with. We were at a small software startup in Sacramento, CA, and the concept seemed outrageous to me. Shortly after that I moved to a casino, where I was handed a SAN and the VMware ISO images. I quickly learned the value of shared storage.

The concept behind a SAN is that, rather than the islands of storage we talked about in Part 1, we can logically divide a large pool of storage between multiple servers. This is important in a VMware environment because it enables features such as High Availability (HA) and load balancing (DRS) between hosts. Since all hosts have access to the shared storage, a virtual machine may reside on, or move to, any host.
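A quick way to see this in practice is to check which datastores each host can see. As a sketch, on each ESXi host in the cluster you could run the command below; the datastore name shown is purely illustrative, but a shared SAN-backed VMFS volume should appear in the output on every host:

```
# List the filesystems/datastores mounted on this host.
# A shared SAN datastore (e.g. a VMFS volume named "SAN-Datastore-01",
# name is illustrative) should show up on every host in the cluster.
esxcli storage filesystem list
```

If a volume only appears on one host, HA and vMotion cannot place virtual machines from it onto the other hosts.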

A critical design point when using a SAN in any environment, but especially in VMware, is multipathing: simply having more than one connection from the host server to the shared storage. Remember, we are dealing with consolidation, so I may be moving 5, 10, or more workloads to a single VMware host. Not only does this increase risk, it also increases the load carried by the storage connections. This is where your SAN vendor's controller design can help or hurt you, but that is a topic for another day.
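On an ESXi host you can inspect and adjust multipathing from the command line. A minimal sketch, assuming an ESXi 5.x-style `esxcli` and a placeholder device identifier (always check your array vendor's recommended path selection policy before changing it):

```
# List every storage path this host can see; each SAN LUN should
# show at least two paths (one per fabric or controller).
esxcli storage core path list

# Show the native multipathing (NMP) configuration per device,
# including the current path selection policy (PSP).
esxcli storage nmp device list

# Example: switch one device to round-robin path selection so I/O
# is spread across all active paths. The naa identifier below is
# a placeholder -- substitute your actual device ID.
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR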

SANs come in many flavors, but the connectivity methods are generally iSCSI, Fibre Channel, and Fibre Channel over Ethernet (FCoE). Each of these has its own advantages and disadvantages, and in my experience the right choice is largely dependent on the environment. For smaller customers, iSCSI is often perfectly acceptable and provides a familiar medium for the networking team. In larger environments, Fibre Channel is often preferable, since it offers a simple, low-latency network designed to do one thing and one thing only: move storage traffic.
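Part of iSCSI's appeal for smaller shops is how little it takes to get going. As a rough sketch on an ESXi host (the adapter name and target address below are placeholders for illustration):

```
# Enable the software iSCSI initiator on the host.
esxcli iscsi software set --enabled=true

# Confirm the software iSCSI adapter is present (often vmhba3x).
esxcli iscsi adapter list

# Point dynamic (SendTargets) discovery at the array's iSCSI portal.
# Adapter name and IP:port are placeholders -- use your own values.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
```

Fibre Channel, by contrast, trades this simplicity for dedicated HBAs and fabric zoning, which is part of why it tends to live in larger environments with storage-focused staff.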

One closing thought on storage networking: it is important to consider line speed. With the release of 16Gb Fibre Channel, and with 10GbE becoming more affordable, it is often wise to step up and pay for the fastest storage network you can afford. As solid state drives continue to gain market share, we are seeing more and more storage networks become saturated, and many storage array vendors are dropping support for slower speeds, 1GbE iSCSI in particular. It is always wise to prevent bottlenecks wherever possible, even if it costs a little more up front.
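Before buying faster gear, it is worth confirming what your hosts are actually linked at today. A quick sketch on an ESXi host:

```
# List host NICs with their negotiated link speed -- relevant for
# iSCSI/FCoE traffic (look for 10000 vs 1000 Mbps).
esxcli network nic list

# For Fibre Channel HBAs, list the adapters and their port speed.
esxcli storage san fc list
```

A 10GbE NIC negotiated down to 1GbE by an older switch is a surprisingly common, and cheap to fix, bottleneck.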
