As I am on my flight to EMC World in Las Vegas, I can think of no more appropriate time to write about storage in a virtual environment.
Starting with the basics: since we are dealing with VMware, we have several storage options available to us. To set the proper foundation, I would like to give a little thought to each one.
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
iSCSI
NFS
First of all, we need to group our protocols. The first three are block protocols. This means nothing more than that the storage is presented at the block level, which is to say it appears to be local to the host. Think of it very much like the hard drive in your laptop or desktop, only it lives somewhere else and is presented over a network protocol.
NFS, by contrast, is a *nix protocol used for file sharing. It is similar in concept to a Windows file share: if you have ever browsed to a \\server\share path in Windows, the experience is very much like NFS.
Among the block protocols, each has its own advantages and disadvantages. First up is Fibre Channel. This is a mature protocol used in many enterprise environments, currently specified at 2, 4, and 8 Gbps. It essentially sends SCSI commands over a fiber-optic network. It is very fast, very efficient, and relatively expensive: it requires special adapters in the host server, dedicated Fibre Channel switches, and fiber-optic cabling. Performance is good since it is a transport protocol dedicated to only one task.
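To put those link speeds in perspective, here is a quick back-of-the-envelope sketch. The assumptions: each Fibre Channel generation signals at its nominal rate times 1.0625 Gbaud and uses 8b/10b encoding, which spends 2 of every 10 bits on the wire; frame headers and inter-frame gaps are ignored, so real-world figures come out a bit lower.

```python
# Rough usable throughput for 2/4/8 Gb Fibre Channel links.
# Assumptions: line rate = nominal Gb x 1.0625 Gbaud, 8b/10b encoding
# (80% of raw bits carry data); framing overhead is ignored.

def usable_mb_per_s(nominal_gb: float) -> float:
    """Approximate one-way payload bandwidth in MB/s."""
    line_rate_bits = nominal_gb * 1.0625e9   # raw signaling rate
    data_bits = line_rate_bits * 8 / 10      # 8b/10b leaves 80% for data
    return data_bits / 8 / 1e6               # bits -> bytes -> MB/s

for gen in (2, 4, 8):
    print(f"{gen} Gb FC: ~{usable_mb_per_s(gen):.0f} MB/s usable")
```

The results land slightly above the roughly 200/400/800 MB/s figures commonly quoted for these links, since framing overhead is ignored here.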
FCoE is a newer protocol designed to let us move to converged networks, typically running over one or more 10 Gbps connections. Modern implementations run over a converged network using QoS along with Storage and Network I/O Control to provide maximum performance. The concept is similar to Fibre Channel, but it does not require dedicated infrastructure, and it allows for backwards compatibility with legacy Fibre Channel networks.
iSCSI is not new, but it is very enticing as a block protocol in VMware environments. Because iSCSI runs over TCP/IP (Layer 3 of the OSI model) rather than directly over Ethernet at Layer 2 like FCoE, it carries a bit more overhead, but not enough to rule it out in the majority of environments. With the advent of 10GbE networks, we can now use iSCSI in a converged network similar to FCoE.
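The extra overhead is easy to see if you count the headers each protocol wraps around a frame of SCSI data. A rough sketch follows; the header sizes are the textbook values from the specs, and real overhead also depends on TCP options, jumbo frames, and offload engines, so treat the percentages as illustrative only.

```python
# Per-frame encapsulation overhead: iSCSI vs. FCoE (illustrative).
# iSCSI wraps data in Ethernet + IP + TCP + iSCSI headers;
# FCoE wraps data in Ethernet + FCoE + Fibre Channel headers.

ETH   = 14 + 4   # Ethernet II header + frame check sequence
IP    = 20       # IPv4 header, no options
TCP   = 20       # TCP header, no options
ISCSI = 48       # iSCSI Basic Header Segment (RFC 3720)
FCOE  = 14       # FCoE encapsulation header (incl. SOF/EOF)
FC    = 24       # native Fibre Channel frame header

def overhead_pct(header_bytes: int, payload_bytes: int) -> float:
    """Share of each frame spent on headers instead of data."""
    return 100.0 * header_bytes / (header_bytes + payload_bytes)

payload = 2048   # bytes of SCSI data per frame (illustrative)
iscsi = overhead_pct(ETH + IP + TCP + ISCSI, payload)
fcoe  = overhead_pct(ETH + FCOE + FC, payload)
print(f"iSCSI: {iscsi:.1f}% overhead vs. FCoE: {fcoe:.1f}%")
```

iSCSI comes out at roughly twice the per-frame overhead of FCoE here, but both are only a few percent of the frame, which is why the difference rarely rules iSCSI out.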
NFS stands out as very different. With the previous protocols, we use a dedicated network, or a subset of a converged network, to present the storage at the block level and then let VMware take over. With NFS, we present a file share, which by definition already has a file system on it, managed by the storage system serving it. VMware can place the virtual machine files on this share and run them as though they lived on a native, local file system.
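To make that concrete, a VM on an NFS datastore is nothing more than a directory of ordinary files on the share. A small sketch of what that directory holds; the VM name "myvm" and the descriptions are hypothetical, but the file types are the standard VMware ones.

```python
# The files that make up a single VM on an NFS datastore.
# "myvm" is a hypothetical VM name; the extensions are standard.
vm_files = {
    "myvm.vmx":       "VM configuration",
    "myvm.vmdk":      "virtual disk descriptor",
    "myvm-flat.vmdk": "virtual disk data",
    "myvm.nvram":     "BIOS/NVRAM state",
    "vmware.log":     "VM runtime log",
}
for name, role in vm_files.items():
    print(f"{name:16} {role}")
```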
Logically, a block protocol should be faster and more efficient, since the OS gets to manage the storage and handle everything internally. This is not always the case, though. FCoE is the most efficient block protocol since we run it over a 10GbE network, and the protocol gives us more freedom than a native Fibre Channel network. NFS, on the other hand, is simply one large open file: rather than the write-a-block, write-a-block pattern we find in the block-based protocols, the NFS datastore is written as though it were a single large open file.
So which protocol is best? The answer is, as always: it depends. For Microsoft Exchange and a few other applications, we have to use a block protocol for support reasons, though there is no performance penalty and we expect this to change soon. For the majority of deployments, I recommend NFS, with iSCSI for block requirements. If there is a specific need, we can fall back to Fibre Channel or FCoE, but those are very specific use cases in very large environments. The additional advantages of NFS are simplicity of expansion and its ease of deployment.
At the end of the day there is no right or wrong answer, but generally speaking multiple protocols are always nice to have.