To home lab or not to home lab

As I often do, I am again debating my need for a home lab.  My job is highly technical: taking technology architecture and tying it together with the strategic goals of my customers.  Keeping my technical skills up to date is a full-time job in and of itself, which raises the question: should I build out a home lab, or are my cloud-based labs sufficient?

One of the perks of working at a large company is the ability to use our internal lab systems.  I can also run VMware Workstation or Fusion on my laptop, which affords some limited testing capability, constrained mostly by memory.  Most of the places I have worked have had great internal labs, demo gear, and so on, which has been nice.  I have often maintained my own equipment as well, but to what end?  Keeping the equipment up to date becomes a full-time job of its own, and it adds little value to my daily work.

With the competition among cloud providers, many now offer low- or no-cost environments for testing.  While this is not always ideal, for the most part we can run nested virtual systems and test various hypervisors and other solutions.  Many companies now ship virtual-appliance-based products, which makes it easier to stay fairly up to date.

Of course, one of my favorites is VMware’s Hands-on Labs.  In fairness, I am a bit biased: I work at VMware, and I work with the Hands-on Labs team as often as I can.  Since the large majority of what I do centers on VMware technology, I will often run through the labs myself to stay sharp.

The home lab will always have a special place in my heart, and I am growing a rather large collection of Raspberry Pi devices, but for the moment I think my home lab will be limited to smaller, lower-power devices for IoT testing.  That is always subject to change, but it is tough to justify the capital expenditure when there are so many good alternatives.


He who controls the management software controls the universe.

No one ever got fired for buying IBM.  Well…how did that work out?

When I started working in storage, it was a major portion of our capital budget.  Once we had committed to a storage platform, changing to another brand meant writing a proposal for the CIO, and we had better be sure we wouldn’t have issues on the new platform.  We didn’t buy on price; we bought on brand, period.

I was speaking with a customer recently who was moving to a storage startup that had recently gone through an IPO.  I asked how happy they were about it, and the response was something to the effect of: it is great, but we will likely make a change in a few years when someone comes out with something new and cool.  This wasn’t an SMB account or a startup; this was a major healthcare account.  They were moving away from a major enterprise storage vendor, and they were not the first one I had spoken to going down this path.

I remember when virtualization really started to take off.  The concept was amazing; we thought we were going to see a massive reduction in datacenters and physical servers.  Please raise your hand if you have fewer physical servers than you did ten years ago.  Maybe you do, but for the most part I rarely see that anyone has significantly reduced the number of workloads they run.  Does that mean virtualization failed and was a bad idea, and it is time to move on to something else?  Of course not.  We got more efficient and started to run more workloads on the same number of systems, we prevented server sprawl, and we realized cost savings through cost avoidance.  What has changed, though, is that moving from one server vendor to another is now pretty simple.

If I were still in the business of running datacenters, I would probably spread across two or more server vendors with some standard builds to keep costs down and provide better availability.  From a storage perspective, I wouldn’t really care who my storage vendors were, provided they could meet my requirements.  Honestly, I would probably build a patchwork datacenter.  Sure, it would be a bit more work with patching and such, but if there are APIs and we can use centralized management to deploy firmware to each system, why not?  Why be loyal?  For that matter, why have a single switch vendor?

See what I did there?  It is all about the software.  Whether you believe VMware, Microsoft, Red Hat, or someone else will win, the reality is that it is a software world.  If your hardware will play nicely with my hypervisor and my management tool, why should I use only one vendor?  If it won’t, why should I use it at all?  It is all about applications and portability.  Hardware isn’t going away, but it is certainly getting dumber, as it should, and we are pushing more value into software.  He who controls the management software controls the universe.

 


There can be only one…or at least fewer than there are now.

Since the recent announcement of Dell acquiring EMC, there has been great speculation about the future of the storage industry.  In previous articles I have observed that small storage startups are eating the world of big storage.  I suspect that this trend had something to do with the position EMC recently found itself in.

Watching Nimble, Pure, and a few others IPO recently, one cannot help but notice just how many storage vendors are still standing, with new ones appearing regularly; the storage market has not consolidated the way we thought it would.  During recent conversations with the sales teams of a couple of storage startups, we discussed what their act two was going to be.  I was surprised to learn that for a number of them it is simply more of the same: perhaps a less expensive solution to sell down-market, perhaps some new features, but nothing really new.

Looking at the landscape, there has to be a “quickening” eventually.  With EMC being acquired, HP not doing a stellar job of marketing the 3Par product it acquired, NetApp floundering, and Cisco killing its Whiptail acquisition, we are in a sea of storage vendors with no end in sight.  HP splitting into two companies bodes well for its storage division, but the biggest challenge for most of these vendors is that they are focused on hardware.

For most of these storage vendors, a lack of customers will likely drive them out of business when they finally run out of funding.  Some will survive, get acquired, or merge to create a larger storage company, and probably go away eventually anyway.  A few will continue to operate in their niche.  But the ones that intend to have long-term viability are going to need a better act two: something akin to hyperconverged infrastructure, or, more likely, a move to a software approach.  Neither is a guarantee, but both carry higher margins and are more in line with where the industry is moving.

We are clearly at a point where hardware is becoming commoditized.  If your storage array can’t deliver performance and most of the features we now assume to be standard, then you shouldn’t even bother coming to the table.  The differentiation has to be something else, something outside the norm: provide additional value with the data, turn the product into software, integrate it with other software, make it standards-based.  Having the best technology, the cheapest price, or simply the biggest company doesn’t matter any more.  Storage startups, watch out: your 800-pound gorilla of a nemesis being acquired might make you even bigger targets.  You had better come up with something now, or your days are numbered.


What is Dell really buying?

Standard disclaimer: these are my personal opinions; they do not reflect those of my employer, nor are they based on any insider knowledge.  Take it for what it is worth.

When I heard rumors of the Dell-EMC deal, I was pretty skeptical.  I am a numbers guy, and the amount of debt that would be required is a bit staggering.  Why would a company like Dell even want to acquire a company like EMC, especially after we all watched the pain Dell went through to go private?  Why would EMC want to go through the pain of being taken private, by a former competitor no less?  With the HP breakup, and IBM selling off a number of product lines over the past decade or so, this almost seems counterintuitive: an attempt to recreate the big tech companies of the ’90s and 2000s, which are all but gone.

Sales and Engineering Talent

I have many friends at Dell, and I was even a customer when I worked for some small startups many years ago.  In my experience, Dell is really good at putting together commodity products and pricing them to move.  Their sales teams are good, but the compensation model makes them tough to partner with.

EMC has a world-class sales and marketing organization.  EMC enterprise sales reps are all about the customer experience; they are machines with amazing relationship skills, and they are well taken care of.  Engineering is a huge priority at EMC as well.  EMC’s higher-end support offerings, while costly, are worth every penny; I have seen them fly engineers in to fix problems for larger customers.  Even though I have not been a fan of their hardware lately, they have done amazing work making the customer experience second to none.

An Enterprise Storage &amp; Software Product

Let’s be honest: Dell has not been a truly enterprise player in the storage and software arena.  If we look at the products it has acquired, the majority are mid-market plays.  Compellent was supposed to be its big enterprise storage play, but that is mid-market at best.  From a software perspective, most of the products are low-end, and Dell doesn’t tend to develop them further.

EMC, on the other hand, has enterprise-class storage.  Say what you want about the complexity of the VMAX line; it is pretty solid.  It may be a pain to manage sometimes, but it sets the standard in enterprise storage.  EMC has also done amazing things with software.  ViPR Controller and ViPR SRM are impressive technologies when implemented appropriately, and EMC has done quite well with some of its other software products.  More importantly, EMC treats software as a critical part of the stack.

VMware

Enough said: the real value for Dell is getting a significant stake in VMware.  Like it or not, VMware is the market leader in hypervisors, cloud management, and software-defined networking, and it is making incredible strides in automation and software-defined storage.  The best thing EMC has done is allow VMware to remain independent.  If Dell can stick to that plan, the rewards could be incredible.

The reality is that this deal won’t change much in the short term from an IT industry perspective.  Large storage companies such as EMC and HP Storage are getting their lunch eaten by smaller, more agile storage startups.  Servers are becoming more of a commodity, and software continues to be the path forward for many enterprises.  This is a good deal for both Dell and EMC; the challenge will be not to go the way of HP.  If I could give Michael Dell one piece of advice, it would be to hire smart people and listen to them.  Culture matters, and the culture is what makes EMC and VMware what they are, so don’t try to change it.  Culture is the true value of this acquisition.


Software Defined Storage Replication

In a recent conversation with a colleague, we were discussing storage replication in a VMware environment.  Basically, the customer in question had bought a competitor’s array, brand X, and wanted to see if there was a way to replicate to one of our arrays at a lower price point.

This is a fairly common question from customers, more so in the SMB space.  With the increasing popularity of software-defined storage, customers want options; they don’t want to be locked into a single-vendor solution.  In an OpenStack environment, high availability is handled at the application level, and I strongly recommend this as a policy for all new applications.  But how do we handle legacy apps in the interim?

With a traditional storage array, we typically do replication at the storage level.  VMware Site Recovery Manager allows us to automate the replication and recovery process by integrating with the array’s replication, and in smaller environments it can even handle replication at the vSphere host level.  Array-based replication is generally considered the most efficient and the most recoverable, but it does require similar arrays from the same vendor, along with replication licensing.  In a virtual environment, this looks something like the figure below.

[Figure: Array-based storage replication in a virtual environment]

This works well, but it can be costly and it leads to storage vendor lock-in: not a bad thing if you are a storage vendor, but not always the best outcome from a consumer perspective.  So how do we abstract the replication from the storage?  Remember, one of the purposes of virtualization and OpenStack is to abstract as much as possible from the hardware layer.  That is not to say hardware is not important, quite the contrary, but abstraction does enable us to become more flexible.

To provide this abstraction, there are a couple of options.  We can always rewrite the application, but that takes time.  We can replicate at the file system level, or similarly use third-party software to move the data.  But to really abstract the replication from the underlying hardware and software, we need to insert something in the middle.
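As a toy illustration of the file-system-level option, here is a minimal sketch that copies new or changed files from a protected directory to a DR target on a timer.  The paths and the interval are hypothetical, and a real environment would use a purpose-built replication tool; the point is simply that the data movement happens above the array, in software.

```python
import os
import shutil
import time

SOURCE = "/datastore/prod-vms"      # hypothetical protected directory
TARGET = "/mnt/dr-site/prod-vms"    # hypothetical DR target, e.g. an NFS mount

def replicate_once(source: str, target: str) -> int:
    """Copy any file that is new or has changed since the last pass."""
    copied = 0
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        dest_dir = os.path.normpath(os.path.join(target, rel))
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dest_dir, name)
            # Copy only if the destination is missing or older than the source.
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)  # copy2 preserves timestamps for the next comparison
                copied += 1
    return copied

if __name__ == "__main__":
    while True:
        changed = replicate_once(SOURCE, TARGET)
        print(f"replicated {changed} changed file(s)")
        time.sleep(300)  # roughly a five-minute replication cycle
```

Crude as it is, a loop like this runs entirely above the storage, which is exactly the kind of abstraction we are after.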

In the conversation I mentioned at the beginning, the goal was to replicate from a production datacenter running brand X storage to a remote location using an HP storage product.  To accomplish this, we discussed vSphere Replication (something I will cover in a future post) and host-based replication, which is not as seamless, and what we settled on is shown below.  It is not the most elegant solution, but it helps us abstract the replication layer.  Essentially, we use the HP StoreVirtual VSA, since it has replication built in, and put it in front of the brand X storage; on the other side we put another VSA on a server with some large hard drives, and voila, replication and DR storage handled.

[Figure: Replication abstracted with a virtual storage appliance (VSA) at each site]

Again, not the most elegant solution, but it is a way to abstract the replication from the storage, and to do so at a reasonable cost.  The added advantage is that we have also given ourselves DR storage.  Next I will explore vSphere Replication, but as always I want to point out that this solution minimizes vendor lock-in at both the hypervisor and storage levels.
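To spell out the trade-off, here is a small, purely hypothetical sketch of the decision we walked through: native array replication when both sites run like-for-like, licensed arrays, otherwise a VSA layered in front of the storage at each site.  None of this is a vendor API; it just encodes the reasoning.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    array_vendor: str            # e.g. "Brand X"
    replication_licensed: bool   # is native array replication licensed?

def replication_strategy(prod: Site, dr: Site) -> str:
    """Pick a replication approach for a protected site and its DR site.

    Mirrors the reasoning above: array-based replication is the most efficient,
    but needs similar arrays from the same vendor plus licensing; otherwise a
    virtual storage appliance (VSA) in front of each site's storage abstracts
    the replication from the underlying arrays.
    """
    if (prod.array_vendor == dr.array_vendor
            and prod.replication_licensed
            and dr.replication_licensed):
        return "native array-based replication (efficient, but vendor lock-in)"
    return "VSA in front of the storage at each site, replicating VSA-to-VSA"

# The scenario from this post: brand X in production, commodity disks at the DR site.
prod = Site("production", "Brand X", replication_licensed=False)
dr = Site("DR site", "server with large local disks", replication_licensed=False)
print(replication_strategy(prod, dr))
```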


The changing role of shared storage in the Software Defined Datacenter: Part 3

As we have discussed, the role of shared storage is changing.  VMware has supported vMotion without shared storage for a while now, software-defined storage is enabling combined compute and storage virtualization, and for the past year or so we have been hearing more about the concept of vVols.  I am certainly not the first to talk about this; there are a number of blogs on the subject, my personal favorite being The future of VMware storage – vVol demo by @hpstorageguy.

As always, in the interest of full disclosure: I do work for HP, but this is my personal blog, and I write about things I think are interesting.  I am not going into great detail on how vVols work, but I do want to show a few diagrams to differentiate the current architecture from what we MAY see in the future.

Looking at the current and legacy architecture of VMware storage, we typically present storage to all hosts in the cluster in the form of a shared LUN or volume.  This is very simple: the VMware admin asks the storage admin for a number of volumes of a specific size; in our example below, let’s say they request two 2TB volumes.  The VMware administrator then creates datastores, which formats the volumes with the VMFS file system and allows virtual machines to be created within them.  Of course, this whole process can be done through the VMware GUI or the vSphere storage APIs, but the net effect is the same: we still create another layer in the storage stack, which is not the most efficient way of handling this.

[Figure: Traditional VMware storage (shared LUNs formatted as VMFS datastores)]
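To make the layering concrete, here is a minimal, hypothetical model of the example above: two 2TB LUNs become two VMFS datastores, and every virtual machine’s disks are just files carved out of whichever shared datastore they land on.  The classes are illustrative only; this is not the vSphere API.

```python
from dataclasses import dataclass, field
from typing import List

TB = 1024  # work in GB for readability

@dataclass
class Lun:
    """A volume the storage admin presents to every host in the cluster."""
    name: str
    size_gb: int

@dataclass
class VmfsDatastore:
    """The extra layer: a LUN formatted with VMFS by the VMware admin."""
    lun: Lun
    vmdks: List[str] = field(default_factory=list)  # virtual disk files in this datastore
    used_gb: int = 0

    def create_vmdk(self, vm_name: str, size_gb: int) -> str:
        if self.used_gb + size_gb > self.lun.size_gb:
            raise ValueError(f"{self.lun.name} is full")
        self.used_gb += size_gb
        path = f"[{self.lun.name}] {vm_name}/{vm_name}.vmdk"
        self.vmdks.append(path)
        return path

# The example from the post: the storage admin hands over two 2TB volumes...
datastores = [VmfsDatastore(Lun("Datastore01", 2 * TB)),
              VmfsDatastore(Lun("Datastore02", 2 * TB))]

# ...and many VMs' disks end up as files inside the same shared datastores.
for i in range(1, 7):
    ds = datastores[i % 2]
    print(ds.create_vmdk(f"vm{i:02d}", size_gb=100))
```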

 

vVols are VMware’s new way of handling storage, and they resolve this problem in a rather unique way.  Currently we can bypass the datastore concept with a raw device mapping (RDM), which presents a raw disk device to the virtual machine itself.  Unfortunately, this does not give us a measurable difference in performance, and it can become tedious to manage.  vVols, on the other hand, appear to be datastores but actually pass the individual volumes through to the individual VMs.  In the drawing below, the individual volumes appear to the VM administrator as datastores, but they are broken out on the storage array.  This removes a layer from the performance path and enables a more policy-based storage interface for the VMware administrator.  This is critical to note: policy-based storage at the VMware level.  It brings us closer to self-service in a virtualized environment.  I don’t yet have a full handle on how this will be managed, but I think it is safe to say the storage administrator will create a container giving the VMware admin a specific amount of storage with specific characteristics; in the case of our example, 2TB containers.

 

[Figure: vVols (per-VM volumes presented through a storage container)]

 

Note that in the figure above the volumes are of varying sizes; what is not shown is that each volume or LUN is an individual disk presented directly to the virtual machine itself.  This is important to remember: we are offloading the performance of each individual disk presented to the virtual machine to the storage array, yet we can still manage it as a datastore or container on the VMware side.

Coming back to the policy-based storage thought, this is not dissimilar to how HP 3Par storage operates: volumes live within common provisioning groups, which are containers.  The policy is set on the container in both cases, so it isn’t a stretch to see how the two will work well together.  Again, I don’t have any inside information, but if you look at the post referenced above, Calvin does an excellent job of showing us what is coming.  This, combined with VMware’s recent VSAN announcements, seems to show that there is going to be a role for the traditional storage array alongside software-defined storage in the software-defined datacenter, at least for now.
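To illustrate what policy set on a container might look like, here is a small hypothetical sketch: the storage administrator publishes containers with capacity and capabilities, and each VM disk becomes its own volume inside a container that satisfies the VM’s storage policy.  This is a conceptual model only, not the vVols or 3Par API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StorageContainer:
    """What the storage admin hands the VMware admin: capacity plus capabilities."""
    name: str
    capacity_gb: int
    capabilities: Dict[str, str]            # e.g. {"tier": "flash", "replication": "sync"}
    volumes: List[str] = field(default_factory=list)
    used_gb: int = 0

    def satisfies(self, policy: Dict[str, str]) -> bool:
        return all(self.capabilities.get(k) == v for k, v in policy.items())

    def create_volume(self, vm: str, disk: str, size_gb: int) -> str:
        if self.used_gb + size_gb > self.capacity_gb:
            raise ValueError(f"{self.name} is out of space")
        self.used_gb += size_gb
        vol = f"{self.name}/{vm}-{disk}"     # one array volume per VM disk
        self.volumes.append(vol)
        return vol

def place_disk(containers: List[StorageContainer], vm: str, disk: str,
               size_gb: int, policy: Dict[str, str]) -> str:
    """Pick the first container whose capabilities match the VM's storage policy."""
    for c in containers:
        if c.satisfies(policy):
            return c.create_volume(vm, disk, size_gb)
    raise LookupError("no container satisfies the requested policy")

containers = [
    StorageContainer("Gold-2TB", 2048, {"tier": "flash", "replication": "sync"}),
    StorageContainer("Bronze-2TB", 2048, {"tier": "nearline", "replication": "none"}),
]

print(place_disk(containers, "sql01", "data", 500, {"tier": "flash"}))
print(place_disk(containers, "fileserver", "archive", 800, {"tier": "nearline"}))
```

The VMware admin only ever deals with the container and the policy; the array worries about the individual volumes behind it.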


The changing role of shared storage in the Software Defined Datacenter: Part 2

Previously we discussed the shift from traditional array-based storage to a more software-defined model.  Of course, this is not a huge shift per se, but rather a change in marketing terms and perceptions.  The Software Defined Datacenter is nothing more than virtualization at all levels, simply a further step in the distributed-compute trend.  It is also important to remember that traditional array-based shared storage is not dead.  Despite the numerous storage vendors competing in the software-defined storage space, there is still a great deal of development around the traditional storage array, but that is a future topic.

When looking at traditional storage, it is necessary to understand the essence of the storage array.  You have a processor, memory, networking, and hard drives, all tied together by an operating system.  In essence, you have a very specialized, or in some cases a commodity, server.  So what differentiates all the vendors?  Generally speaking, the software and the support.  Most arrays provide similar functionality to one degree or another.  Certainly one manufacturer may do something somewhat better, and another may have some specialized hardware, but from a high-level business perspective they perform essentially similar functions.  It is left to those of us who are fascinated by the details to sort out which array is best in a specific environment and to determine who can best support it for the long term.

As we begin to shift storage onto servers, relying on industry-standard processors, memory, and network components rather than specific, dedicated parts, there is a trade-off, much like the one we saw when we began to virtualize the compute layer.  No longer does the storage vendor control the underlying hardware.  No longer is purpose-built hardware available to absorb some of the load.  This presents an interesting challenge for storage and server vendors alike.

Unfortunately, while manufacturers will set standards, write reference architectures, and create support matrices, many users will bend or simply ignore them.  When the storage vendor cannot control the hardware, it becomes much more difficult to provide performance guarantees or support.  There will always be a need for traditional array-based storage, for performance guarantees and for workloads that software-defined storage just cannot support.  As users demand more, faster, and cheaper storage for their apps, we are going to have to strike a balance between traditional arrays, software-defined storage, and the new technologies being created in labs around the world.
